
Conversation

JTischbein
Contributor

@JTischbein JTischbein commented Feb 8, 2024

Description

Added the cuDNN Frontend and used it for NHWC convolutions, with optional activation fusion.

Backward compatible

  • Models that already contain FusedConv nodes can still run.
  • If ORT is built with cuDNN 8, the cuDNN frontend will not be built into the binary, and the old kernels (using the cuDNN backend APIs) are used.

Major Changes

  • For cuDNN 9, the cuDNN frontend is enabled to fuse convolution and bias when the provider option fuse_conv_bias=1 is set (see the sketch after this list).
  • Removed the FusedConv fusion from the graph transformer for the CUDA provider, so FusedConv nodes will no longer be added to the graph for the CUDA EP.
  • Updated the cmake files for the cuDNN settings. The build searches for the cuDNN installation in the following order:
    • the environment variable CUDNN_PATH
    • the onnxruntime_CUDNN_HOME cmake define. When building through build.py/build.sh, it can be passed via the --cudnn_home parameter, or via the environment variable CUDNN_HOME if --cudnn_home is not used.
    • the cuDNN python package installation directory, e.g. python3.xx/site-packages/nvidia/cudnn
    • the CUDA installation path
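
For reference, a minimal sketch (Python API) of how these provider options can be passed; it assumes a cuDNN 9 build of onnxruntime-gpu, and "model.onnx" is a placeholder path:

```python
# Minimal sketch: pass the CUDA EP options discussed above through the Python API.
# "model.onnx" is a placeholder; option availability depends on the build.
import onnxruntime as ort

cuda_options = {
    "prefer_nhwc": "1",     # run convolutions with the NHWC kernels
    "fuse_conv_bias": "1",  # let the cuDNN frontend fuse Conv + Bias (cuDNN 9 builds)
}
session = ort.InferenceSession(
    "model.onnx",
    providers=[("CUDAExecutionProvider", cuda_options), "CPUExecutionProvider"],
)
```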

Potential Issues

  • If ORT is built with cuDNN 8, the FusedConv fusion is no longer applied automatically, so some models might see a performance regression. Users who still want the FusedConv operator for performance reasons have several ways to work around this: use an older version of onnxruntime, or use an older version of ORT to save the optimized ONNX model and then run it with the latest ORT. We expect most users to run cuDNN 9 by the 1.20 release, since cuDNN 9 will have been available in ORT and PyTorch for about three months by then.
  • The cuDNN frontend uses TF32 by default, and users cannot disable TF32 in convolution through the use_tf32 CUDA provider option. If a user encounters accuracy issues (for example when testing), they have to set the environment variable NVIDIA_TF32_OVERRIDE=0 to disable TF32 (as sketched below), or use a larger tolerance in tests. The use_tf32 documentation needs to be updated to describe the TF32 impact on convolution.
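
A minimal sketch of the TF32 workaround described above; it assumes the environment variable is read when the CUDA libraries initialize, so it is set before the session is created, and "model.onnx" is a placeholder:

```python
# Minimal sketch: disable TF32 for accuracy testing by setting NVIDIA_TF32_OVERRIDE
# before onnxruntime initializes the CUDA EP. "model.onnx" is a placeholder.
import os
os.environ["NVIDIA_TF32_OVERRIDE"] = "0"  # ask NVIDIA libraries not to use TF32

import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])
```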

Follow ups

This is one of the PRs targeting enabling NHWC convolution in the CUDA EP by default when the device supports it. Other changes will follow to make that possible:
(1) Enable prefer_nhwc by default for device with sm >= 70.
(2) Change fuse_conv_bias=1 by default after more testing.
(3) Add other NHWC operators (like Resize or UpSample).

Motivation and Context

The new cuDNN Frontend library provides the functionality to fuse operations and new heuristics for kernel selection. Here it fuses the convolution with the pointwise bias operation. On the NVIDIA ResNet50 we get a performance improvement from 49.1144 ms to 42.4643 ms per inference on a 2560x1440 input (onnxruntime_perf_test -e cuda -I -q -r 100 -d 1 -i 'prefer_nhwc|1' resnet50.onnx).
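
For context, a rough Python sketch of an equivalent measurement (not the onnxruntime_perf_test harness itself); it assumes a local resnet50.onnx with a single float32 NCHW input of shape 1x3x1440x2560:

```python
# Rough latency sketch mirroring the perf_test command above; the model path and
# input shape are assumptions for illustration only.
import time
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "resnet50.onnx",
    providers=[("CUDAExecutionProvider", {"prefer_nhwc": "1"})],
)
input_name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 1440, 2560).astype(np.float32)

for _ in range(10):  # warm-up
    session.run(None, {input_name: x})

runs = 100
start = time.perf_counter()
for _ in range(runs):
    session.run(None, {input_name: x})
print(f"avg latency: {(time.perf_counter() - start) / runs * 1000:.2f} ms")
```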

@gedoensmax
Contributor

@hariharans29 Could you help review this? And ideally even help resolve the compile problems we are seeing with the cuDNN frontend.

@tianleiwu
Contributor

/azp run Linux CPU CI Pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux GPU CI Pipeline, Big Models


Azure Pipelines successfully started running 4 pipeline(s).

@JTischbein
Contributor Author

@microsoft-github-policy-service agree company="NVIDIA"


@tianleiwu
Contributor

tianleiwu commented Feb 8, 2024

Any idea why there is a failure in creating the graph execution plan for the stable diffusion 1.5 VAE decoder?

2024-02-08 17:11:31.766176892 [E:onnxruntime:, sequential_executor.cc:514 ExecuteKernel] Non-zero status code returned while running NhwcConv node. Name:'NhwcConv_35-/decoder/conv_out/Conv' Status Message: CUDNN_FE failure 6: GRAPH_EXECUTION_PLAN_CREATION_FAILED ; GPU=0 ; hostname=79ffae696371 ; file=/onnxruntime_src/onnxruntime/core/providers/cuda/nn/conv.cc ; line=400 ; expr=s_.cudnn_fe_graph->check_support(handle);

The node is like the following (the weight of shape 3x3x3x128 is in NHWC format):
[image: the NhwcConv node]

@tianleiwu
Contributor

/azp run Windows ARM64 QNN CI Pipeline,Windows x64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline

@tianleiwu
Contributor

/azp run Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,onnxruntime-binary-size-checks-ci-pipeline,Big Models,Android CI Pipeline

@tianleiwu
Contributor

/azp run iOS CI Pipeline,ONNX Runtime React Native CI Pipeline

Pull request contains merge conflicts.

@tianleiwu
Contributor

/azp run Windows ARM64 QNN CI Pipeline,Windows x64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline

@tianleiwu
Contributor

/azp run Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,onnxruntime-binary-size-checks-ci-pipeline,Big Models,Android CI Pipeline

@tianleiwu
Contributor

/azp run iOS CI Pipeline,ONNX Runtime React Native CI Pipeline

Azure Pipelines successfully started running 2 pipeline(s).

Azure Pipelines successfully started running 10 pipeline(s).

@tianleiwu
Contributor

/azp run Windows ARM64 QNN CI Pipeline,Windows x64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline

@tianleiwu
Contributor

/azp run Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,onnxruntime-binary-size-checks-ci-pipeline,Big Models,Android CI Pipeline

@tianleiwu
Contributor

/azp run iOS CI Pipeline,ONNX Runtime React Native CI Pipeline

Azure Pipelines successfully started running 2 pipeline(s).

Azure Pipelines successfully started running 10 pipeline(s).

@tianleiwu
Contributor

There is a build error on Windows:
D:\a_work\1\b\RelWithDebInfo_deps\cudnn_frontend-src\include\cudnn_frontend\graph_interface.h(444,19): Error C2248: 'cudnn_frontend::graph::Layernorm_attributes::forward_phase': cannot access private member declared in class 'cudnn_frontend::graph::Layernorm_attributes'

There is also a test error for NhwcConv:

Run failed but expected success: Non-zero status code returned while running NhwcConv node. Name:'node1' Status Message: CUDNN failure 3: CUDNN_STATUS_BAD_PARAM ; GPU=0 ; hostname=8dc224a231be ; file=/onnxruntime_src/onnxruntime/core/providers/cuda/nn/conv.cc ; line=548 ; expr=cudnnConvolutionForward(cudnn_handle, &alpha, s_.x_tensor, s_.x_data, s_.w_desc, s_.w_data, s_.conv_desc, s_.algo, workspace.get(), s_.workspace_bytes, &beta, s_.y_tensor, s_.y_data);
Google Test trace:
/onnxruntime_src/onnxruntime/test/providers/base_tester.cc:791: registered execution providers: CUDAExecutionProvider

@JTischbein JTischbein requested a review from a team as a code owner March 1, 2024 23:59
@JTischbein JTischbein requested a review from a team March 1, 2024 23:59
tianleiwu added a commit that referenced this pull request Aug 9, 2024
### Description
* Fix migraphx build error caused by
#21598:
Add a conditional compile on code block that depends on ROCm >= 6.2.
Note that the pipeline uses ROCm 6.0.

Unblock orttraining-linux-gpu-ci-pipeline and
orttraining-ortmodule-distributed and orttraining-amd-gpu-ci-pipeline
pipelines:
* Disable a model test in linux GPU training ci pipelines caused by
#19470:
Sometimes the cudnn frontend throws an exception that the cudnn graph does not
support a Conv node of the keras_lotus_resnet3D model on a V100 GPU.
Note that the same test does not throw an exception in other GPU pipelines. The
failure might be related to the cudnn 8.9 and V100 GPU used in the pipeline
(Ampere GPUs and cuDNN 9.x do not have the issue).
The actual fix requires fallback logic, which will take time to
implement, so we temporarily disable the test in training pipelines.
* Force install torch for cuda 11.8. (The docker image has torch 2.4.0 for
cuda 12.1 to build the torch extension, which is not compatible with cuda
11.8.) Note that this is a temporary workaround. A more elegant fix is to
make sure the right torch version is used in the docker build step, which might
need updates to install_python_deps.sh and the corresponding requirements.txt.
* Skip test_gradient_correctness_conv1d since it causes a segmentation fault.
The root cause needs more investigation (maybe due to the cudnn frontend as
well).
* Skip test_aten_attention since it causes an assert failure. The root cause
needs more investigation (maybe due to the torch version).
* Skip orttraining_ortmodule_distributed_tests.py since it has an error
that the compiler for the torch extension does not support c++17. One possible
fix is to set the following compile argument inside setup.py of the
fused_adam extension: extra_compile_args['cxx'] = ['-std=c++17'].
However, due to the urgency of unblocking the pipelines, just disable
the test for now.
* Skip test_softmax_bf16_large. For some reason,
torch.cuda.is_bf16_supported() returns True on V100 with torch 2.3.1, so
the test was run in CI, but V100 does not support bf16 natively.
* Fix typo of deterministic

prathikr pushed a commit that referenced this pull request Aug 9, 2024
sumitsays pushed a commit that referenced this pull request Aug 9, 2024
tianleiwu added a commit that referenced this pull request Aug 15, 2024
### Description
Exclude cuDNN 9 and CUDA 12 DLLs from manylinux wheel to reduce python
package size.

### Motivation and Context

The 1.20.0 ort-nightly-gpu python wheels on linux are suddenly > 800 MB
in size. The wheels built on 1.19 release branch have a size of around
220 MB.

The size change is caused by
#19470.
tianleiwu pushed a commit that referenced this pull request Sep 10, 2024
### Description
Added the cuDNN Frontend and used it for the NHWC ConvTranspose op, including
an option for bias fusion. Similar to this [Conv
PR](#19470).

### Backward compatible
If ORT is built with cuDNN 8, the cuDNN frontend will not be built into the
binary. Old kernels (using the cudnn backend APIs) are used.

### Major Changes
For cuDNN 9, we will enable the cudnn frontend to fuse the data gradient
convolution and bias when the provider option fuse_conv_bias=1 is set.

### Potential Issues
The cuDNN frontend uses TF32 by default. It can be disabled using the use_tf32
cuda provider option, but in case the cuDNN frontend encounters issues
building an operation graph, it will fall back to using TF32.

### Follow ups
This is one of the PRs that target enabling NHWC operations, here the
ConvTranspose operation, in the CUDA EP by default if the device supports it.
Other changes will follow to make it possible.
(1) Enable prefer_nhwc by default for device with sm >= 70.
(2) Change fuse_conv_bias=1 by default after more testing.
(3) Add other NHWC operators (like Resize or UpSample).

### Motivation and Context
The new CUDNN Frontend library provides the functionality to fuse
operations and provides new heuristics for kernel selection. Here it
fuses the convolution data gradient operation (ConvTranspose) with the
pointwise bias operation.

### Minor Change
There was a small bug in the CUDA convolution operation when
`GetCudnnConv1dPadToNc1d` was enabled.
chilo-ms pushed a commit that referenced this pull request Nov 15, 2024
### Description
Fixes build failure for the cuda minimal build

### Motivation and Context
[This change](#19470) in
1.20 is causing build failures for the cuda minimal build.
Essentially, some cudnn logic was not guarded by the `USE_CUDA_MINIMAL` flag.
Also, the build is looking for cudnn while in the cuda minimal build it
shouldn't depend on it, resulting in a linking error.


cc @gedoensmax @chilo-ms
guschmue pushed a commit that referenced this pull request Dec 2, 2024
ankitm3k pushed a commit to intel/onnxruntime that referenced this pull request Dec 11, 2024
ankitm3k pushed a commit to intel/onnxruntime that referenced this pull request Dec 11, 2024
tianleiwu added a commit that referenced this pull request Feb 12, 2025
### Description

Do not link cuDNN sub libs.

Before:
```
 objdump -p libonnxruntime_providers_cuda.so | grep NEEDED
  NEEDED               libcublasLt.so.12
  NEEDED               libcublas.so.12
  NEEDED               libcurand.so.10
  NEEDED               libcufft.so.11
  NEEDED               libcudart.so.12
  NEEDED               libcudnn.so.9
  NEEDED               libcudnn_adv.so.9
  NEEDED               libcudnn_ops.so.9
  NEEDED               libcudnn_cnn.so.9
  NEEDED               libcudnn_graph.so.9
  NEEDED               libcudnn_engines_runtime_compiled.so.9
  NEEDED               libcudnn_engines_precompiled.so.9
  NEEDED               libcudnn_heuristic.so.9
  NEEDED               libdl.so.2
  NEEDED               librt.so.1
  NEEDED               libnvrtc.so.12
  NEEDED               libpthread.so.0
  NEEDED               libstdc++.so.6
  NEEDED               libm.so.6
  NEEDED               libgcc_s.so.1
  NEEDED               libc.so.6
  NEEDED               ld-linux-x86-64.so.2
```

After:
```
$ objdump -p libonnxruntime_providers_cuda.so | grep NEEDED
  NEEDED               libcublasLt.so.12
  NEEDED               libcublas.so.12
  NEEDED               libcurand.so.10
  NEEDED               libcufft.so.11
  NEEDED               libcudart.so.12
  NEEDED               libcudnn.so.9
  NEEDED               libnvrtc.so.12
  NEEDED               libstdc++.so.6
  NEEDED               libm.so.6
  NEEDED               libgcc_s.so.1
  NEEDED               libc.so.6
  NEEDED               ld-linux-x86-64.so.2
```
### Motivation and Context

Avoid direct dependency on cuDNN sub libraries, which were introduced in
#19470.

#23643
gedoensmax pushed a commit to gedoensmax/onnxruntime that referenced this pull request Feb 27, 2025
(cherry picked from commit bbe7c87)
guschmue pushed a commit that referenced this pull request Mar 6, 2025
ashrit-ms pushed a commit that referenced this pull request Mar 17, 2025
edgchen1 added a commit that referenced this pull request Apr 25, 2025
### Description

An additional check for non-constant inputs was added to
ConvActivationFusion in #20282. This was to avoid fusing an Add in a
Conv+Add+Relu that has another non-constant input.


https://github.com/microsoft/onnxruntime/blob/6c8cb6a6d1993f84fcf4008f468a071c0b73aad3/onnxruntime/core/optimizer/conv_activation_fusion.cc#L26-L39

However, this check fails to account for implicit inputs and will read
past the end of a node's explicit input defs if any implicit inputs are
present.

Moreover, this check is no longer necessary after #19470 removed
Conv+Add+Relu fusion from ConvActivationFusion.

This change removes the check and some other unused code.

### Motivation and Context

Fix #24473.
ankitm3k pushed a commit to intel/onnxruntime that referenced this pull request May 12, 2025
alex-spacemit pushed a commit to spacemit-com/onnxruntime that referenced this pull request Jun 22, 2025
[ARM] MatMulNBits Fp16 support - API change only (microsoft#22826)

A break-down PR of microsoft#22651
Op API change only.
- add template to functions and classes that support fp32 and fp16
- rename functions, classes and files that support fp32 and fp16 from
SQNBxxx to QNBxxx


Change-Id: Ib489e7858d42abcbe0514ac44e4d2172e32384a3

Re-enable test symbolic shape infer (microsoft#22737)

It seems that after CI was updated to py310, numpy got updated to 2.0 and sympy
1.2 failed to cast a float numpy array.
Point sympy to 1.13 when py>=3.9 and re-enable the unit test.

Error: Linux CPU
CI

[Quant tool] Handle input models with pre-quantized weights (microsoft#22633)

Allows the QDQ quantizer to handle input models that already have some
pre-quantized weights. In this case, the qdq quantizer will properly
skip/handle the pre-quantized weights.

Also handles an operator (e.g., Conv) with a pre-quantized weight and a
float bias. The tool will read the pre-quantized weight's quantization
scale to compute the bias's scale (`bias_scale = input_scale *
weight_scale`).
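
As an aside, an illustrative sketch (not the tool's actual code) of the bias-scale relationship described above, following the common QDQ convention of an int32 bias with zero point 0:

```python
# Illustrative sketch: quantize a float bias using scale = input_scale * weight_scale.
import numpy as np

def quantize_bias(bias_fp32: np.ndarray, input_scale: float, weight_scale: np.ndarray) -> np.ndarray:
    bias_scale = input_scale * weight_scale  # per-channel if weight_scale is per-channel
    q = np.round(bias_fp32 / bias_scale)
    return np.clip(q, np.iinfo(np.int32).min, np.iinfo(np.int32).max).astype(np.int32)
```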

Input model (pre-quantized Conv weight):

![image](https://github.com/user-attachments/assets/7d2626e4-49ad-47ae-bd0e-6339ac590435)

Output QDQ model (everything is quantized):

![image](https://github.com/user-attachments/assets/393804d3-f042-47bd-895f-3d667fb2ae94)

Customers may use external tools to quantize some weights (e.g., int4
for Conv/MatMul). The qdq quantizer should still be able to quantize the
rest of the model (float weights and activations) in this case.

Update Gradle version 8.7 and java version 17 within onnxruntime/java (microsoft#22771)

This change updates the Gradle version within the java project to 8.7 and
also upgrades Java to 17. The Gradle version for react-native was
also updated to 7.5 to make it compatible with changes from the Java
directory. However, the target java version remains the same. The Java
version for these will be upgraded in a separate PR.

This is split from microsoft#22206.

This is the first step to upgrade the react native version.

Ovep develop 1.21 (microsoft#22824)

OVEP development changes for ORT 1.21 Release

Has critical bug fixes
Support for concurrent execution of models is enabled
Support for OV 2024.5
Memory optimizations for NPU platform

---------

Co-authored-by: jatinwadhwa921 <[email protected]>
Co-authored-by: Ankit Maheshkar <[email protected]>
Co-authored-by: sfatimar <[email protected]>
Co-authored-by: saurabhkale17 <[email protected]>
Co-authored-by: TejalKhade28 <[email protected]>
Co-authored-by: Javier E. Martinez <[email protected]>

Fix 1.20 cuda minimal build failure (microsoft#22751)

Fixes build failure for the cuda minimal build

[This change](microsoft#19470) in
1.20 is causing build failures for the cuda minimal build.
Essentially, some cudnn logic was not guarded by the `USE_CUDA_MINIMAL`.
Also the build is looking for cudnn while in the cuda minimal build it
shouldn't depend on it, resulting in linking error.

cc @gedoensmax @chilo-ms

[ARM] MatMulNBits fp16 support - connect kernels (microsoft#22856)

A breakdown PR of microsoft#22651


Change-Id: I3014c1002ff375a507bc04de7756baacf9a2b77a

[WebNN EP] Support Einsum op (microsoft#19558)

Adds support for einsum via WebNN matmul, transpose, reshape, reducesum,
identity and element-wise binary ops.

Refactor SkipLayerNorm and handle beta properly (microsoft#22862)

Signed-off-by: Liqun Fu <[email protected]>
Signed-off-by: Liqun Fu <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Change-Id: Ic5b8a6eb775542a57f07f5e593cc399dd7eeaa8f

Fix CUDA/DML package exception caused by ENABLE_CUDA_NHWC_OPS (microsoft#22851)

Now,  ENABLE_CUDA_NHWC_OPS is enabled by default.
It adds a new possibility of creating the cuda provider while both cuda/dml are
enabled.


Optimize Transpose around QLinearSoftmax (microsoft#22849)

<!-- Describe your changes. -->

- Improved Transpose around QLinearSoftmax in Level 3 NHWC Transformer.
- Removed redundant code HandleQLinearConcat, HandleQLinearBinaryOp.


By merging and eliminating redundant transposes, the Image Segmentation
i8 model (MobileNetv2 + DeepLabv3) achieves a 2.34X speedup.

Replace INFINITY by std::numeric_limits<float>::infinity() (microsoft#22868)

Replace INFINITY by `std::numeric_limits<float>::infinity()` to avoid
build errors with Visual Studio 2022 v17.12 Preview 5

microsoft#22728

[js/webgpu] Optimize transpose as reshape when suitable (microsoft#22870)

BUG microsoft#22031

Change-Id: I6c70d84228f1563792218c6c3c18b023852d4147

clang format code

Change-Id: I422a9474da9e9cfc9ac8819569a13520c5d2641f
rohan11235813 pushed a commit to quadric-io/onnxruntime that referenced this pull request Aug 19, 2025
rohan11235813 pushed a commit to quadric-io/onnxruntime that referenced this pull request Aug 19, 2025
rohan11235813 pushed a commit to quadric-io/onnxruntime that referenced this pull request Aug 19, 2025
rohan11235813 pushed a commit to quadric-io/onnxruntime that referenced this pull request Sep 15, 2025
rohan11235813 pushed a commit to quadric-io/onnxruntime that referenced this pull request Sep 15, 2025
rohan11235813 pushed a commit to quadric-io/onnxruntime that referenced this pull request Sep 15, 2025