Search before asking
- I have searched the YOLOv5 issues and found no similar bug report.
YOLOv5 Component
Validation, Detection, Export
Bug
Traceback (most recent call last):
File "detect.py", line 261, in <module>
main(opt)
File "detect.py", line 256, in main
run(**vars(opt))
File "/home/leonardo/miniconda3/envs/yolo_env/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "detect.py", line 92, in run
model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data)
File "/home/leonardo/Scrivania/yolov5/models/common.py", line 351, in __init__
context = model.create_execution_context()
AttributeError: 'NoneType' object has no attribute 'create_execution_context'
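From the traceback, model is None at models/common.py line 351, so deserialize_cuda_engine() apparently returned None (for example because of a TensorRT version mismatch between export and inference). Below is a minimal sketch I would use to check whether the engine file deserializes at all with the installed TensorRT runtime; the file path is a placeholder.

# Sketch: check whether the exported .engine file deserializes with the installed
# TensorRT runtime. If deserialize_cuda_engine() returns None, DetectMultiBackend
# ends up calling create_execution_context() on None, which raises the
# AttributeError shown above. The path below is a placeholder.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("path-to-engine-file", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

print("engine deserialized:", engine is not None)
if engine is not None:
    context = engine.create_execution_context()
    print("execution context created:", context is not None)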
Environment
I am working in a Conda environment on an Ubuntu 20.04 host.
Minimal Reproducible Example
I exported the ONNX and TensorRT engine models with export.py using the following command:
python export.py --weights path-to-pt-weights --imgsz 540 960 --batch-size 1 --device 0 --include engine onnx
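A note on the sizes: as far as I understand, YOLOv5 rounds each requested dimension up to a multiple of the model's maximum stride (32 for the standard models), so --imgsz 540 960 is exported as 544x960. A minimal sketch of that rounding; make_divisible here is an illustrative stand-in, not the actual utility:

# Illustrative stand-in for YOLOv5's image-size check: each dimension is rounded
# up to the nearest multiple of the maximum stride (assumed to be 32 here), which
# is why a requested height of 540 becomes 544 in the exported model.
import math

def make_divisible(x: int, divisor: int = 32) -> int:
    return math.ceil(x / divisor) * divisor

print(make_divisible(540), make_divisible(960))  # 544 960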
Then I tried to run inference with the engine file using:
python detect.py --weights path-to-engine-file --source val/images/ --device 0 --imgsz 960
I also tried to run validation with:
python val.py --weights path-to-engine-file --data path-to-yaml-file --device 0 --imgsz 960 --task val
In both cases it returns the error mentioned above.
Additional
I have some doubts about the image-size declaration when exporting the model.
The model should run inference on images with a fixed size of [540, 960], and that is the size I used in export.py.
However, during inference with detect.py or validation with val.py I can only declare one size, so I use the largest, 960.
If I try to run inference with the ONNX model using:
python detect.py --weights pesi_sar/best_model_1502.onnx --source val_new/images/ --device 0 --imgsz 960
It obviously returns:
Traceback (most recent call last):
File "detect.py", line 261, in <module>
main(opt)
File "detect.py", line 256, in main
run(**vars(opt))
File "/home/leonardo/miniconda3/envs/yolo_env/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "detect.py", line 117, in run
model.warmup(imgsz=(1 if pt else bs, 3, *imgsz), half=half) # warmup
File "/home/leonardo/Scrivania/yolov5/models/common.py", line 474, in warmup
self.forward(im) # warmup
File "/home/leonardo/Scrivania/yolov5/models/common.py", line 421, in forward
y = self.session.run([self.session.get_outputs()[0].name], {self.session.get_inputs()[0].name: im})[0]
File "/home/leonardo/miniconda3/envs/yolo_env/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 192, in run
return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: images for the following indices
index: 2 Got: 960 Expected: 544
Please fix either the inputs or the model.
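As a sanity check, the input size baked into an exported ONNX file can be read back with onnxruntime; a minimal sketch (the file path is a placeholder):

# Sketch: print the input shape the exported ONNX model expects, so the --imgsz
# passed to detect.py / val.py can be matched to it. The path is a placeholder.
import onnxruntime as ort

session = ort.InferenceSession("path-to-onnx-file", providers=["CPUExecutionProvider"])
for inp in session.get_inputs():
    print(inp.name, inp.shape)  # e.g. images [1, 3, 544, 960]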
So I tried to export the model declaring only the largest size with the following command:
python export.py --weights path-to-pt-weights --imgsz 960 --batch-size 1 --device 0 --include engine onnx
The engine model still doesn't work and throws the error mentioned at the top.
The ONNX model works, but it shows different performance compared to the PyTorch model.
That's probably because with the ONNX weights the model infers at 960x960, while with the PyTorch weights it infers at 544x960 (which is what I want!).
Can you clarify the best practice for exporting the model to ONNX or TensorRT engine format?
To summarize: I trained my model at an image size of 960, but the images I need to process are 540x960 (a quarter of the FHD format, by the way). I would like to export the model to TensorRT and keep the same performance as in PyTorch while, of course, decreasing the inference time.
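In case it helps frame the question, here is a sketch of what I would expect to work, assuming detect.py's run() entry point accepts a rectangular (height, width) imgsz matching the 544x960 the model was exported with; paths are placeholders:

# Sketch (assumption): calling YOLOv5's detection entry point with an explicit
# (height, width) tuple instead of a single square size, so the engine exported
# at 544x960 is fed inputs of the same shape. Paths are placeholders.
from detect import run

run(weights="path-to-engine-file", source="val/images/", imgsz=(544, 960), device="0")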
Cheers
Are you willing to submit a PR?
- Yes I'd like to help by submitting a PR!