
--device argument causes OOM, runs fine without it #5988

@GGDRriedel

Description


Search before asking

  • I have searched the YOLOv5 issues and found no similar bug report.

YOLOv5 Component

Training, Evolution

Bug

RuntimeError: CUDA out of memory. Tried to allocate 126.00 MiB (GPU 0; 10.76 GiB total capacity; 9.45 GiB already allocated; 93.69 MiB free; 9.57 GiB reserved in total by PyTorch)
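
For reference, the allocated/reserved/total figures in that message can be cross-checked at runtime. A minimal sketch (plain PyTorch, not part of train.py; it assumes GPU 0 is the device in question):

import torch

# Print the same allocated / reserved / total figures that appear in the CUDA OOM message.
dev = torch.device("cuda:0")
props = torch.cuda.get_device_properties(dev)
print(f"total capacity:      {props.total_memory / 2**30:.2f} GiB")
print(f"already allocated:   {torch.cuda.memory_allocated(dev) / 2**30:.2f} GiB")
print(f"reserved by PyTorch: {torch.cuda.memory_reserved(dev) / 2**30:.2f} GiB")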

Environment

YOLOv5  v6.0-134-gc45f9f6 torch 1.8.1+cu102 CUDA:0 (GeForce RTX 2080 Ti, 11019MiB)

Minimal Reproducible Example

python train.py --epochs 10 --data gpr_highway.yaml --weights yolov5x6.pt --cache --evolve 10 --device 0

Gives the OOM error

python train.py --epochs 10 --data gpr_highway.yaml --weights yolov5x6.pt --cache --evolve 10

Runs just fine, even though the only difference between the two commands is the explicit --device 0 argument.
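
A possible workaround while this is investigated (an assumption on my part, not something confirmed in this issue): pin the GPU through CUDA_VISIBLE_DEVICES before PyTorch initializes and keep omitting --device, so the run still lands on GPU 0. A sketch in Python:

import os
import subprocess

# Hypothetical workaround sketch: expose only GPU 0 to the child process, then launch
# train.py without --device so device selection falls back to the default path.
env = dict(os.environ, CUDA_VISIBLE_DEVICES="0")
subprocess.run(
    ["python", "train.py", "--epochs", "10", "--data", "gpr_highway.yaml",
     "--weights", "yolov5x6.pt", "--cache", "--evolve", "10"],
    env=env, check=True,
)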

Additional

No response

Are you willing to submit a PR?

  • Yes, I'd like to help by submitting a PR!


    Labels

    Stale (stale and scheduled for closing soon), bug (something isn't working)
