Description
Search before asking
- I have searched the YOLOv5 issues and found no similar feature requests.
Description
With onnxruntime-gpu 1.10, the following error occurs:
raise ValueError("This ORT build has {} enabled. ".format(available_providers) +
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
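For context, here is a minimal sketch of the failure and the fix on an onnxruntime-gpu >= 1.9 install (yolov5s.onnx is just a placeholder model path):

    import onnxruntime

    # A GPU build of ONNX Runtime >= 1.9 exposes multiple execution providers,
    # so omitting `providers` raises the ValueError shown above.
    try:
        session = onnxruntime.InferenceSession('yolov5s.onnx')
    except ValueError as e:
        print(e)

    # Passing `providers` explicitly resolves the error.
    session = onnxruntime.InferenceSession(
        'yolov5s.onnx', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])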
Use case
onnxruntime-gpu 1.10 requires the providers argument to be set explicitly. The ONNX Runtime loading branch could be updated along these lines:
elif onnx:  # ONNX Runtime
    LOGGER.info(f'Loading {w} for ONNX Runtime inference...')
    check_requirements(('onnx', 'onnxruntime-gpu' if torch.cuda.is_available() else 'onnxruntime'))
    import onnxruntime
    if torch.cuda.is_available():
        # GPU build: select the CUDA execution provider explicitly
        session = onnxruntime.InferenceSession(w, None, providers=['CUDAExecutionProvider'])
    else:
        # CPU-only build: default provider selection still works
        session = onnxruntime.InferenceSession(w, None)
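A slightly more defensive variant (a sketch, not tested against every ORT build) would pass CPUExecutionProvider as a fallback, so the same code path works whether or not CUDA is usable at runtime:

    cuda = torch.cuda.is_available()
    # Prefer CUDA when available, but keep CPU as a fallback so the session
    # can still be created on machines where the CUDA provider fails to load.
    providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] if cuda else ['CPUExecutionProvider']
    session = onnxruntime.InferenceSession(w, providers=providers)

ONNX Runtime tries the providers in the order given, so CUDA is used when available and CPU otherwise.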
Additional
No response
Are you willing to submit a PR?
- Yes, I'd like to help by submitting a PR!