Description
Search before asking
- I have searched the YOLOv5 issues and discussions and found no similar questions.
Question
We are trying to enhance the distributed yolov5s.pt model (we are using the tagged v6.0 code, #5141, https://github.com/ultralytics/yolov5/releases/tag/v6.0) with additional training on some focused images. We ran inference with the distributed yolov5s.pt and got quite a number of object detections on a set of images within our domain.
We are now executing train.py, adding ~220 additional images with labeled objects for a subset of the 80 COCO classes. We invoked train.py as follows:
python train.py --img 640 --batch 64 --data dataset.yaml --weights "weights/yolov5s.pt" --epochs 300 --device 0 --freeze 10
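For context, here is a sketch of what our dataset.yaml looks like, in the format YOLOv5 expects (the paths and directory names here are placeholders, not our actual values):

```yaml
# Hypothetical dataset.yaml sketch -- paths and names are assumptions
path: ../datasets/custom     # dataset root
train: images/train          # ~220 labeled frames
val: images/val
nc: 80                       # keep all 80 COCO classes so the pretrained head's class indices still line up
names: [person, bicycle, car]  # truncated; the full 80-entry COCO name list goes here
```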
Note that the frames have resolutions greater than 640 (1520 and 1920). We are using the distributed hyp.scratch.yaml parameters. We tried --freeze 10 to freeze the backbone, which we understand freezes the feature-extraction layers. Results:
The resulting model, when used for inference, produces considerably fewer object detections than the original yolov5s model.
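To sanity-check our understanding of --freeze 10, here is a simplified, self-contained sketch of the name-matching approach train.py uses to freeze the first N layers (the real code iterates a torch model's named parameters; the `Param` class and parameter names below are stand-ins for illustration):

```python
class Param:
    """Stand-in for a torch parameter; only tracks requires_grad."""
    def __init__(self):
        self.requires_grad = True

# Mock named parameters, shaped like yolov5s layer names: model.0 .. model.23
params = {f"model.{i}.conv.weight": Param() for i in range(24)}

def freeze_layers(named_params, n):
    """Disable gradients for the first n layers (n=10 covers the yolov5s backbone).
    The trailing dot in each prefix keeps 'model.1.' from matching 'model.10.'."""
    frozen_prefixes = [f"model.{x}." for x in range(n)]
    for name, p in named_params.items():
        p.requires_grad = not any(name.startswith(pref) for pref in frozen_prefixes)

freeze_layers(params, 10)
# model.0 .. model.9 are now frozen; model.10 onward still train
```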
Questions:
• Is there something fundamentally wrong with how we are configuring train.py?
• Does "--img 640" need to stay at 640 because yolov5s was trained and tested at 640? Or can we increase it to our minimum frame resolution?
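As far as we can tell, the --img value is not locked to 640; YOLOv5 only requires the training size to be a multiple of the model's maximum stride (32), rounding it up otherwise. A minimal sketch of that rounding, assuming stride 32 (mirroring, in simplified form, the image-size check train.py performs):

```python
import math

def check_img_size(imgsz, stride=32):
    """Round imgsz up to the nearest multiple of the model stride.
    Simplified sketch of the adjustment YOLOv5 applies to --img."""
    return math.ceil(imgsz / stride) * stride

print(check_img_size(1520))  # 1520 is not divisible by 32, so it rounds up to 1536
print(check_img_size(1920))  # already a multiple of 32, so it is unchanged
```

So for our 1520/1920 frames, passing --img 1536 (or larger) should be accepted as-is, at the cost of more GPU memory per batch.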
Additional
No response
