
onnxruntime-gpu 1.10 #5916

@guishilike

Search before asking

  • I have searched the YOLOv5 issues and found no similar feature requests.

Description

With onnxruntime-gpu 1.10, the following error occurs:

raise ValueError("This ORT build has {} enabled. ".format(available_providers) +
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
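For reference, here is a minimal reproduction (model.onnx is a placeholder path, not from the original report). With an onnxruntime-gpu build at 1.9 or newer, constructing a session without the providers parameter raises the ValueError above:

import onnxruntime

# Fails on GPU builds of ORT >= 1.9: providers must be passed explicitly
session = onnxruntime.InferenceSession('model.onnx')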

Use case

onnxruntime-gpu 1.10 requires the providers parameter, so the model loader could be updated along these lines:

elif onnx:  # ONNX Runtime
    LOGGER.info(f'Loading {w} for ONNX Runtime inference...')
    check_requirements(('onnx', 'onnxruntime-gpu' if torch.cuda.is_available() else 'onnxruntime'))
    import onnxruntime
    if torch.cuda.is_available():
        # GPU build: select the CUDA execution provider explicitly
        session = onnxruntime.InferenceSession(w, providers=['CUDAExecutionProvider'])
    else:
        # CPU build: pass the CPU provider explicitly as well
        session = onnxruntime.InferenceSession(w, providers=['CPUExecutionProvider'])
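An alternative sketch (my suggestion, not part of the snippet above) avoids branching on torch by asking the installed build which providers it actually supports; onnxruntime.get_available_providers() exists in both CPU and GPU builds, and w is the model path as above:

import onnxruntime

# Prefer CUDA when this build supports it, keeping CPU as a fallback
available = onnxruntime.get_available_providers()
if 'CUDAExecutionProvider' in available:
    providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
else:
    providers = ['CPUExecutionProvider']
session = onnxruntime.InferenceSession(w, providers=providers)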

Additional

No response

Are you willing to submit a PR?

  • Yes, I'd like to help by submitting a PR!

Labels: enhancement (New feature or request)
