[OpenVINO-EP] Using Core::ReadNetwork() method for creating a CNNNetwork #6374
Conversation
->Using the Core::ReadNetwork() method for reading and creating a CNNNetwork ->Since the OpenVINO™ 2020.4 version, the Inference Engine enables reading ONNX models via the Inference Engine Core API, and there is no need to use the low-level ONNX* Importer API directly anymore. To read ONNX* models, it's recommended to use the Core::ReadNetwork() method, which provides a uniform way to read models from the ONNX format. Signed-off-by: MaajidKhan <[email protected]>
->Use InferenceEngine::details::InferenceEngineException to catch the exception thrown by ReadNetwork(). Signed-off-by: MaajidKhan <[email protected]>
->The OpenVINO-EP component fails to compile with the OpenVINO 2021.1 release version due to an indentation error. The indentation is fixed with this commit. Signed-off-by: MaajidKhan <[email protected]>
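A minimal sketch of the pattern these commits describe, assuming a hypothetical model path and a standalone main() (illustrative only, not the exact EP code):

```cpp
#include <iostream>
#include <string>

#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;
    const std::string model_path = "model.onnx";  // hypothetical path

    try {
        // Since OpenVINO 2020.4, Core::ReadNetwork() reads ONNX models
        // directly, replacing the deprecated ONNX* Importer API.
        InferenceEngine::CNNNetwork network = core.ReadNetwork(model_path);
        std::cout << "Loaded network: " << network.getName() << std::endl;
    } catch (const InferenceEngine::details::InferenceEngineException& e) {
        // In this era of the API, read failures surface as
        // InferenceEngine::details::InferenceEngineException.
        std::cerr << "ReadNetwork failed: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}
```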
/azp run Linux CPU CI Pipeline,Linux CPU x64 NoContribops CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,MacOS CI Pipeline,MacOS NoContribops CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline

Azure Pipelines successfully started running 9 pipeline(s).

/azp run orttraining-linux-ci-pipeline,orttraining-mac-ci-pipeline,orttraining-linux-gpu-ci-pipeline,centos7_cpu,Linux OpenVINO CI Pipeline

Azure Pipelines successfully started running 5 pipeline(s).

/azp run Linux CPU Minimal Build E2E CI Pipeline, orttraining-distributed

Azure Pipelines successfully started running 2 pipeline(s).

@MaajidKhan, please take a look at the test failure.

Hector, I am looking into this issue. I will let you know once I fix that.

/azp run Linux OpenVINO CI Pipeline

Azure Pipelines successfully started running 1 pipeline(s).

@HectorSVC This issue occurs only when we build onnxruntime with the OpenVINO binary release package. We are able to successfully run C++ applications when onnxruntime is built with OpenVINO from source (the master or releases/2021/2 branch). So we think that when OpenVINO is built from source, protobuf gets installed on the fly, and hence we do not face any issues; the same is true for onnxruntime. In onnxruntime, the segfault happens at the line of code where it tries to shut down the protobuf library. We are talking to the OpenVINO team to check how this issue could be resolved, so I'm closing this PR for now and will create a new one once this is resolved.
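For context, a minimal sketch of the teardown call involved, assuming a generic protobuf-using process (this is an illustration of the failure mode, not the actual onnxruntime source, and the two-runtimes explanation is our working assumption):

```cpp
#include <google/protobuf/stubs/common.h>

int main() {
    // ... load and run ONNX models via protobuf-backed APIs ...

    // Frees all global state owned by the protobuf runtime. If two
    // copies of protobuf end up linked into the process (e.g. one from
    // onnxruntime and one from a prebuilt OpenVINO binary release),
    // this teardown may touch state created by the other copy and
    // segfault, which matches the symptom described above.
    google::protobuf::ShutdownProtobufLibrary();
    return 0;
}
```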
Description:
Using Core::ReadNetwork() method for reading and creating a CNNNetwork for OpenVINO versions >=2020.4
Motivation and Context
Since the OpenVINO™ 2020.4 version, the Inference Engine enables reading ONNX models via the Inference Engine Core API, so there is no longer any need to use the low-level ONNX* Importer API directly. To read ONNX* models, it is recommended to use the Core::ReadNetwork() method, which provides a uniform way to read models in the ONNX format. A sketch of both documented overloads follows the links below.
More Info here:
The ONNX* Importer API is deprecated:
https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_OnnxImporterTutorial.html
Info on ReadNetwork() method from InferenceEngine::Core Class:
https://docs.openvinotoolkit.org/latest/classInferenceEngine_1_1Core.html#a251861e52a979d6e61848babae3673ef
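For reference, a short sketch of the two ReadNetwork() overloads documented at the link above, assuming a hypothetical model.onnx on disk (for ONNX, the weights blob in the second overload is passed empty, since the weights are embedded in the model):

```cpp
#include <fstream>
#include <sstream>

#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;

    // Overload 1: read the model straight from a file path; ONNX models
    // need no separate weights file.
    InferenceEngine::CNNNetwork from_file = core.ReadNetwork("model.onnx");

    // Overload 2: read the model from an in-memory string plus a weights
    // blob; for ONNX an empty Blob::CPtr is passed.
    std::ifstream f("model.onnx", std::ios::binary);
    std::stringstream buffer;
    buffer << f.rdbuf();
    InferenceEngine::CNNNetwork from_memory =
        core.ReadNetwork(buffer.str(), InferenceEngine::Blob::CPtr());

    return 0;
}
```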