🐛 Bug
I prepared some annotations which were then filtered to remove annotations for the first three classes. This causes problems with plot legends.
When there are no instances of bounding boxes for certain classes listed in the dataset.yaml file, the legends of various plots can be misaligned. For example, labels.jpg correctly shows no count for instances of classes "D00", "D10" and "D20", while "D40" and "EB" do have bars. However, PR_curve.png mistakenly marks "D00" and "D10" in the legend as having data (but not the "D40" and "EB" classes).
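For what it's worth, here is a minimal, hypothetical sketch (not taken from the YOLOv5 source) of how this kind of legend shift can arise when per-class curves are labelled by their position in the full names list rather than by the class IDs that actually have data. The names, the dummy PR data and the `present` indices are illustrative only:

```python
# Hypothetical illustration of the legend mismatch, not YOLOv5 code.
import numpy as np
import matplotlib.pyplot as plt

names = ['D00', 'D10', 'D20', 'D40', 'EB']   # full class list from the yaml
present = [3, 4]                             # only D40 and EB have label instances
recall = np.linspace(0, 1, 100)
curves = [np.clip(1 - recall + 0.05 * i, 0, 1) for i in range(len(present))]  # dummy PR data

fig, ax = plt.subplots()
for i, precision in enumerate(curves):
    # Positional indexing labels the curves 'D00', 'D10' even though the
    # data belongs to 'D40' and 'EB':
    ax.plot(recall, precision, label=names[i])
    # Mapping back through the classes actually present would give the
    # expected legend:
    # ax.plot(recall, precision, label=names[present[i]])
ax.set_xlabel('Recall')
ax.set_ylabel('Precision')
ax.legend()
fig.savefig('PR_curve_demo.png')
```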
To Reproduce (REQUIRED)
Prepare annotation labels that do not have any instances of the first three classes listed in the dataset yaml file, then run:
python train.py --img 640 --batch 54 --device 0,1 --cfg models/yolov5x_road.yaml --data filtered_dataset.yaml --weights weights/IMSC/last_95.pt --hyp ./data/hyp.scratch.yaml --name TEST --epochs 1
where filtered_dataset.yaml is:
train: datasets/bbox_collation_6_D40_EB_only/train/images/
val: datasets/bbox_collation_6_split_D40_EB_only/val/images/
nc: 5
names: ['D00', 'D10', 'D20', 'D40', 'EB']
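For reference, the annotation filtering I applied is roughly equivalent to the sketch below. It assumes standard YOLO-format `.txt` label files (`class x_center y_center width height`), keeps the original class indices, and uses placeholder paths for my setup rather than anything from the repo:

```python
# Rough sketch of the filtering step: rewrite YOLO-format label files so they
# contain no boxes for class IDs 0, 1, 2 (D00, D10, D20). Paths are placeholders.
from pathlib import Path

SRC = Path('datasets/bbox_collation_6/train/labels')             # original labels (placeholder)
DST = Path('datasets/bbox_collation_6_D40_EB_only/train/labels')  # filtered labels (placeholder)
DROP = {0, 1, 2}  # class indices for D00, D10, D20

DST.mkdir(parents=True, exist_ok=True)
for label_file in SRC.glob('*.txt'):
    kept = [line for line in label_file.read_text().splitlines()
            if line.strip() and int(line.split()[0]) not in DROP]
    (DST / label_file.name).write_text('\n'.join(kept) + ('\n' if kept else ''))
```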
Expected behavior
The PR curve should have lines for classes "D40" and "EB" marked in the legend, and nothing for the first three classes "D00", "D10" and "D20".
Context
I realise that it would be unusual to plan to have no bounding box instances for the first three class names, but things can evolve this way over time. In my case, I didn't want to use all of the classes provided by an open-source dataset. I noticed that classes with low instance counts could harm the detection performance of my priority classes (even when I remove class weighting). So I filter these out when training a model for production, but I retain the data for those first three classes for the future, when I have accumulated higher instance counts. Hopefully, after increasing the instance counts of those classes, the performance of the priority classes will not be reduced.