Status: Closed
Labels: Stale (scheduled for closing soon), question (further information is requested)
Description
Search before asking
- I have searched the YOLOv5 issues and discussions and found no similar questions.
Question
Hi, thanks for providing this amazing code! I am now familiar with how to build my own dataset, train with the various `--` flags, run inference, and deploy in TensorRT. The question: no matter whether I use the n/s/m model, reduce `width_multiple` from 0.50 to 0.25 or less in `yolov5_custom.yaml`, or change `--img` from 640 to 320, once I convert these `.pt` models to TensorRT the GPU memory usage reported by `nvidia-smi` is always above 1700 MiB (even when the `.pt` file is only 1.6 MB, far smaller than the 14.4 MB `yolov5s.engine`). Could you tell me how to further decrease GPU memory usage during TensorRT inference?
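For reference, a minimal sketch of the conversion and measurement steps being described (assuming YOLOv5's `export.py` flags and a CUDA-capable device; verify the flags against your checkout):

```shell
# Convert a trained checkpoint to a TensorRT engine at the smaller input
# size, with FP16 enabled to roughly halve weight/activation memory.
# (Flags are from YOLOv5's export.py; --half requires --device with a GPU.)
python export.py --weights yolov5n.pt --include engine --imgsz 320 --half --device 0

# Check per-process GPU memory afterwards. Note that a few hundred MiB of
# the nvidia-smi figure is the CUDA/TensorRT runtime context itself, which
# shrinking the model does not reduce.
nvidia-smi --query-compute-apps=pid,used_memory --format=csv
```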
Additional
No response