Closed
Labels
Stale (stale and scheduled for closing soon), bug (something isn't working)
Description
Search before asking
- I have searched the YOLOv5 issues and found no similar bug report.
YOLOv5 Component
Training, Evolution
Bug
RuntimeError: CUDA out of memory. Tried to allocate 126.00 MiB (GPU 0; 10.76 GiB total capacity; 9.45 GiB already allocated; 93.69 MiB free; 9.57 GiB reserved in total by PyTorch)
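For context, a minimal hedged sketch (not part of the original report) using standard PyTorch calls to check how much of GPU 0's memory is held before/after a run; the numbers should roughly match the "already allocated" and "reserved" values in the error above:

import torch

# Assumes GPU 0 is the RTX 2080 Ti listed in the environment below
props = torch.cuda.get_device_properties(0)
total = props.total_memory                 # total device memory in bytes
allocated = torch.cuda.memory_allocated(0) # memory currently held by tensors
reserved = torch.cuda.memory_reserved(0)   # memory held by PyTorch's caching allocator
print(f"total: {total/2**20:.0f} MiB, allocated: {allocated/2**20:.0f} MiB, reserved: {reserved/2**20:.0f} MiB")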
Environment
YOLOv5 v6.0-134-gc45f9f6 torch 1.8.1+cu102 CUDA:0 (GeForce RTX 2080 Ti, 11019MiB)
Minimal Reproducible Example
python train.py --epochs 10 --data gpr_highway.yaml --weights yolov5x6.pt --cache --evolve 10 --device 0
Fails with the CUDA OOM error shown above, while
python train.py --epochs 10 --data gpr_highway.yaml --weights yolov5x6.pt --cache --evolve 10
Runs just fine.
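A possible workaround, assuming the OOM comes from the effective batch size being too large for the 11 GiB card when --device 0 is passed (not confirmed as the root cause here), is to lower the batch size explicitly with the existing --batch-size flag:

python train.py --epochs 10 --data gpr_highway.yaml --weights yolov5x6.pt --cache --evolve 10 --device 0 --batch-size 8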
Additional
No response
Are you willing to submit a PR?
- Yes, I'd like to help by submitting a PR!