Conversation

jchen351
Contributor

@jchen351 jchen351 commented Nov 11, 2024

Description

Make `::p` optional in the Linux Python CUDA package pipeline.

Motivation and Context

The Linux stage of the Python-CUDA-Packaging-Pipeline has been failing since the merge of #22773.

@jchen351 jchen351 requested review from mszhanyi and snnn November 11, 2024 22:08
@jchen351 jchen351 changed the title Fix python gpu package pipeline Fix Linux python CUDA package pipeline Nov 11, 2024
@snnn
Member

snnn commented Nov 11, 2024

Is it duplicated with #22801 ?

@jchen351
Contributor Author

jchen351 commented Nov 11, 2024

> Is it duplicated with #22801 ?

Yes, more or less, but they approach the same issue from two different shell scripts, so I think we can keep both as a safety mechanism.

@jchen351 jchen351 requested a review from snnn November 13, 2024 19:23
@jchen351 jchen351 merged commit f423b73 into main Nov 13, 2024
91 of 93 checks passed
@jchen351 jchen351 deleted the Cjian/fix_py_gpu_package branch November 13, 2024 22:20
guschmue pushed a commit that referenced this pull request Dec 2, 2024
### Description
Make `::p` optional in the Linux Python CUDA package pipeline.

### Motivation and Context
The Linux stage of the Python-CUDA-Packaging-Pipeline has been failing since the merge of #22773.
ankitm3k pushed a commit to intel/onnxruntime that referenced this pull request Dec 11, 2024
### Description
Make `::p` optional in the Linux Python CUDA package pipeline.

### Motivation and Context
The Linux stage of the Python-CUDA-Packaging-Pipeline has been failing since the merge of microsoft#22773.
alex-spacemit pushed a commit to spacemit-com/onnxruntime that referenced this pull request Jun 22, 2025
[ARM] MatMulNBits FP16 support - kernels only (microsoft#22806)

A breakdown PR of microsoft#22651; adds the FP16 kernels.

Revert Implement DML copy for Lora Adapters (microsoft#22814)

Revert microsoft#22396

Fix issue microsoft#22796 - a typo: (__GNUC__ > 9) -> (__GNUC__ > 10) (microsoft#22807)

fix microsoft#22796
Signed-off-by: liqunfu <[email protected]>

[js/webgpu] Add scatterND (microsoft#22755)


[WebNN] Remove validation for coordinate_transformation_mode (microsoft#22811)

The performance cost of falling back to the CPU EP is high for several
resampling nodes and causes multiple partitions in SD Turbo and VAE
decoder. Since the asymmetric mode with nearest to floor and integer
scales is identical to half_pixel anyway, stick with the WebNN EP.

[TensorRT EP] Add new provider option to exclude nodes from running on TRT (microsoft#22681)

Add new provider option `trt_op_types_to_exclude`:
- User can provide op type list to be excluded from running on TRT
- e.g. `trt_op_types_to_exclude="MaxPool"`

There is a known performance issue with the DDS ops (NonMaxSuppression,
NonZero and RoiAlign) in TRT versions 10.0 to 10.7. The TRT EP excludes
DDS ops from running on TRT by default; users can override the default
with an empty string to include all ops.

Keep the model metadata on the generated EP context model (microsoft#22825)


[WebNN EP] Fix issues of GRU operator (microsoft#22123)

This PR fixes the spelling of the key value of the GRU operator in the
map in the `GetSupportedNodes` function (Gru -> GRU) and removes the
data type check for the fifth input (sequence_lens) of the GRU operator.

PTAL, thanks!

Auto-generated baselines by 1ES Pipeline Templates (microsoft#22817)

Fix Linux python CUDA package pipeline (microsoft#22803)

Make `::p` optional in the Linux Python CUDA package pipeline.

The Linux stage of the Python-CUDA-Packaging-Pipeline has been failing
since the merge of microsoft#22773.

[WebNN] Fix MLTensorUsage is undefined issue (microsoft#22831)

`MLTensorUsage` has been removed from Chromium:
https://chromium-review.googlesource.com/c/chromium/src/+/6015318, but
we still need to stay compatible with older Chrome versions, so just
make it `undefined` for the latest Chrome versions.

Enable ConvReplaceWithQLinear when using ACL (microsoft#22823)

Enable the ConvReplaceWithQLinear graph optimization when using the ACL
execution provider.

Fixes an issue where quantized Conv nodes followed by ReLU don't get
converted to QLinearConv, so ACL sees the weights as mutable and
therefore cannot run the Conv node.

Signed-off-by: Michael Tyler <[email protected]>

[CUDA] stable diffusion benchmark allows IO binding for optimum (microsoft#22834)

Update stable diffusion benchmark:
(1) allow IO binding for optimum.
(2) do not use num_images_per_prompt across all engines for fair
comparison.

Example to run benchmark of optimum on stable diffusion 1.5:
```
git clone https://github.com/tianleiwu/optimum
cd optimum
git checkout tlwu/diffusers-io-binding
pip install -e .

pip install -U onnxruntime-gpu
git clone https://github.com/microsoft/onnxruntime
cd onnxruntime/onnxruntime/python/tools/transformers/models/stable_diffusion
git checkout tlwu/benchmark_sd_optimum_io_binding
pip install -r requirements/cuda12/requirements.txt

optimum-cli export onnx --model runwayml/stable-diffusion-v1-5  --task text-to-image ./sd_onnx_fp32

python optimize_pipeline.py -i ./sd_onnx_fp32 -o ./sd_onnx_fp16 --float16
python benchmark.py -e optimum -r cuda -v 1.5 -p ./sd_onnx_fp16
python benchmark.py -e optimum -r cuda -v 1.5 -p ./sd_onnx_fp16 --use_io_binding
```

Example output on H100_80GB_HBM3: 572 ms with I/O binding, 588 ms
without; I/O binding saves 16 ms, or 2.7%.

Optimum is working on enabling I/O binding:
huggingface/optimum#2056. This could help test the impact of I/O
binding on the performance of stable diffusion.
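The quoted saving checks out as quick arithmetic (numbers taken from the benchmark figures above):

```python
# Sanity-check the reported I/O binding gain on H100_80GB_HBM3.
with_binding_ms = 572
without_binding_ms = 588
gain_ms = without_binding_ms - with_binding_ms
gain_pct = 100 * gain_ms / without_binding_ms
print(gain_ms, round(gain_pct, 1))  # prints: 16 2.7
```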

Fix Linux CI pipeline where ep was not provided for py-packaging-linux-test-cpu.yml (microsoft#22828)

The current Linux CI pipeline was broken due to missing parameters in
the `py-packaging-linux-test-cpu.yml` template.

Register groupnorm for opset 21 (microsoft#22830)

This PR registers GroupNormalization for opset 21


Fix spellchecks from Optional Lint (microsoft#22802)


Change-Id: I561dfcdadcc6fa4cda899ef3bb181f0713fadebb