🐛 Bug
When I run training with the --evolve parameter, I get a KeyError: 'anchors' error.
Digging into the train.py script, it looks like the meta variable defined on line 536 contains the anchors param, but that param is commented out in the default hyperparameter file located at data/hyp.scratch.yaml. Is this intentional or a bug?
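For reference, here is a minimal, self-contained sketch of the mismatch as I understand it (the values and the shape of meta below are illustrative, not a copy of train.py):

```python
# meta maps each hyperparameter to (mutation scale, lower limit, upper limit);
# in train.py it includes an 'anchors' entry (abbreviated, illustrative values)
meta = {
    'lr0': (1, 1e-5, 1e-1),
    'anchors': (2, 2.0, 10.0),
}

# data/hyp.scratch.yaml ships with the anchors line commented out, so the
# loaded hyperparameter dict has no 'anchors' key
hyp = {'lr0': 0.01}

# applying the evolve limits then fails on the missing key
for k, v in meta.items():
    hyp[k] = max(hyp[k], v[1])  # lower limit -> KeyError: 'anchors'
```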
To Reproduce (REQUIRED)
Input:
python train.py --epochs 10 --evolve
Output:
github: up to date with https://github.com/ultralytics/yolov5 ✅
YOLOv5 🚀 v4.0-138-ged2c742 torch 1.7.1 CUDA:0 (Tesla V100-SXM2-16GB, 16160.5MB)
Namespace(adam=False, batch_size=16, bucket='', cache_images=False, cfg='', data='data/coco128.yaml', device='', entity=None, epochs=10, evolve=True, exist_ok=False, global_rank=-1, hyp='data/hyp.scratch.yaml', image_weights=False, img_size=[640, 640], linear_lr=False, local_rank=-1, log_artifacts=False, log_imgs=16, multi_scale=False, name='evolve', noautoanchor=False, nosave=False, notest=False, project='runs/train', quad=False, rect=False, resume=False, save_dir='runs/train/evolve', single_cls=False, sync_bn=False, total_batch_size=16, weights='yolov5s.pt', workers=8, world_size=1)
Traceback (most recent call last):
File "train.py", line 600, in <module>
hyp[k] = max(hyp[k], v[1]) # lower limit
KeyError: 'anchors'
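As a stopgap I can either uncomment the anchors line in data/hyp.scratch.yaml so the key exists when the file is loaded, or guard the limits loop. A rough sketch of the guard, assuming meta keeps the (scale, lower, upper) layout from the sketch above (not the official fix):

```python
# seed any meta key missing from hyp before clamping, so --evolve does not
# raise KeyError on hyperparameters that are commented out in the yaml
for k, v in meta.items():
    if k not in hyp:
        hyp[k] = v[1]           # e.g. hyp['anchors'] = its lower limit
    hyp[k] = max(hyp[k], v[1])  # lower limit (line 600 in the traceback)
    hyp[k] = min(hyp[k], v[2])  # upper limit
```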
Environment
- OS: Debian 10
- GPU: Tesla V100