
Conversation

@IlyasMoutawwakil (Member) commented Oct 13, 2024

What does this PR do?

This is also my attempt to create a generalizable IO binding framework. The idea is to always have output_shapes = fn(input_shapes, known_shapes), where known_shapes is mostly information we find in the config. We then use this information at runtime with a simple symbolic resolver, keeping shape-inference time minimal, to create the output tensors directly in torch and thus accelerate inference without needing to go through ORT values / CuPy / NumPy.
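
To make this concrete, here is a minimal sketch of such a symbolic resolver; the function name resolve_output_shapes and the shape-expression format are hypothetical, for illustration only, and not this PR's actual API.

```python
# Hypothetical sketch of a symbolic shape resolver (not the PR's actual code).
from typing import Dict, Tuple, Union

import torch

# A shape expression mixes symbolic dimension names and concrete integers,
# e.g. ("batch_size", "sequence_length", 768).
ShapeExpr = Tuple[Union[int, str], ...]


def resolve_output_shapes(
    input_symbolic_shapes: Dict[str, ShapeExpr],   # e.g. {"input_ids": ("batch_size", "sequence_length")}
    output_symbolic_shapes: Dict[str, ShapeExpr],  # e.g. {"logits": ("batch_size", "sequence_length", "vocab_size")}
    model_inputs: Dict[str, torch.Tensor],
    known_shapes: Dict[str, int],                  # values found in the config, e.g. {"vocab_size": 32000}
) -> Dict[str, Tuple[int, ...]]:
    # Bind symbolic input dimensions to the concrete runtime shapes.
    symbols = dict(known_shapes)
    for name, dims in input_symbolic_shapes.items():
        for symbol, concrete in zip(dims, model_inputs[name].shape):
            if isinstance(symbol, str):
                symbols[symbol] = concrete
    # Substitute into each output's symbolic shape to get concrete output shapes,
    # which can then back torch output buffers for IO binding.
    return {
        name: tuple(symbols[d] if isinstance(d, str) else d for d in dims)
        for name, dims in output_symbolic_shapes.items()
    }
```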

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

Who can review?

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Comment on lines +330 to 331
if self.use_io_binding is False and provider == "CUDAExecutionProvider":
self.use_io_binding = True


This overrides the user's use_io_binding choice. What if the user wants to run a performance test with IO binding disabled?

I suggest:
  • if use_io_binding is None, change it to True;
  • if use_io_binding is False and the provider is CUDA, log a warning.
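
For illustration, a minimal sketch of this suggestion; resolve_use_io_binding is a hypothetical helper, not code from this PR.

```python
# Hypothetical sketch of the suggested behavior (not the PR's code).
import logging

logger = logging.getLogger(__name__)


def resolve_use_io_binding(use_io_binding, provider: str) -> bool:
    if use_io_binding is None:
        # The user expressed no preference: default to IO binding on CUDA.
        return provider == "CUDAExecutionProvider"
    if not use_io_binding and provider == "CUDAExecutionProvider":
        logger.warning(
            "IO binding is disabled while using CUDAExecutionProvider; "
            "inference may be slower due to extra host/device copies."
        )
    return use_io_binding
```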

@IlyasMoutawwakil (Member Author) Oct 25, 2024


This is already the default behavior in ORTModels; I kept it for consistency (I'm not a fan of it, tbh) so as not to break things for existing users.

Comment on lines +211 to +224
def providers(self) -> Tuple[str]:
return self._validate_same_attribute_value_across_components("providers")

@property
def provider(self) -> str:
return self._validate_same_attribute_value_across_components("provider")

@property
def providers_options(self) -> Dict[str, Dict[str, Any]]:
return self._validate_same_attribute_value_across_components("providers_options")

@property
def provider_options(self) -> Dict[str, Any]:
return self._validate_same_attribute_value_across_components("provider_options")


It is not necessary to validate that the value is the same across components.

I think it is feasible to use a different provider and different provider options per component. For example, we could run text_encoder with the CPU provider and unet with the CUDA provider, or enable CUDA graphs in one component but not another via provider options.

Maybe add some comments and loosen the constraint later.

@IlyasMoutawwakil (Member Author) Oct 25, 2024


There's a comment in the _validate_same_attribute_value_across_components definition explaining the reasoning behind these checks, which is exactly what you said. Pipeline attributes can be accessed, but they only make sense when they are consistent across components. For now this is my proposition for multi-model-part pipelines; an alternative would be to return the value of the main component (unet/transformer), or to not support these attributes at all on the main pipeline (replacing them with a provider_map, for example, like device vs device_map).
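
For readers of this thread, a rough sketch of what such a cross-component check could look like; this is an assumption for illustration, not the actual _validate_same_attribute_value_across_components from the PR, and self.components is a hypothetical mapping of component names to model parts.

```python
# Hypothetical sketch of a cross-component attribute check (not the PR's actual code).
def _validate_same_attribute_value_across_components(self, attribute: str):
    # self.components is assumed to map component names (e.g. "unet", "text_encoder")
    # to their ORT model parts.
    values = {name: getattr(component, attribute) for name, component in self.components.items()}
    # repr() is used so unhashable values such as provider_options dicts can be compared.
    if len({repr(value) for value in values.values()}) > 1:
        raise ValueError(
            f"Attribute '{attribute}' differs across pipeline components ({values}); "
            "access it on the individual components instead."
        )
    return next(iter(values.values()))
```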


return resolved_output_shapes

def _prepare_io_binding(self, model_inputs: torch.Tensor) -> Tuple[ort.IOBinding, Dict[str, torch.Tensor]]:


The model_inputs data type is Dict[str, torch.Tensor], not torch.Tensor.

shape=tuple(self._output_buffers[output_name].size()),
)

return io_binding, model_inputs, self._output_buffers


model_inputs is not used by the caller, so there is no need to return it here.

io_binding.bind_input(
name=input_name,
device_type=self.device.type,
device_id=self.device.index if self.device.index is not None else -1,


I suggest asserting that self.device.index is not None; ORT does not handle device id -1.
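
A minimal sketch of that suggestion, assuming the device is a torch.device; _resolve_device_id is a hypothetical helper, not code from this PR.

```python
# Hypothetical sketch: resolve a valid device id instead of passing -1 to ORT.
import torch


def _resolve_device_id(device: torch.device) -> int:
    if device.type == "cpu":
        return 0  # ORT uses device id 0 for CPU
    assert device.index is not None, (
        f"IO binding on '{device}' requires an explicit device index (e.g. 'cuda:0'); "
        "ONNX Runtime does not accept device_id=-1."
    )
    return device.index
```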


return self

def _get_output_shapes(self, **model_inputs: torch.Tensor) -> Dict[str, int]:

@tianleiwu Nov 13, 2024


This function is very slow.

An example improvement can be found (might be a little hacky): tianleiwu@dde8a73

The performance impact for image size 512x512 and 50 steps on H100_80GB_HBM3:

  • 588 ms without IO Binding.
  • 649 ms with IO Binding and current implementation of _get_output_shapes.
  • 572 ms with IO Binding with updated output shape logic.

BTW, the return data type for each shape is Sequence[int], not int.
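
One possible way to reduce that overhead, sketched under assumptions (this is not the linked change): cache the resolved output shapes keyed by the concrete input shapes, since they rarely change between diffusion steps. OutputShapeCache is a hypothetical name.

```python
# Hypothetical sketch: memoize resolved output shapes per input-shape signature.
from typing import Dict, Tuple

import torch


class OutputShapeCache:
    def __init__(self):
        self._cache: Dict[Tuple, Dict[str, Tuple[int, ...]]] = {}

    def get_output_shapes(self, model, **model_inputs: torch.Tensor) -> Dict[str, Tuple[int, ...]]:
        # Input shapes rarely change between diffusion steps, so this key usually hits the cache
        # and the shape resolution only runs when an input shape actually changes.
        key = tuple((name, tuple(tensor.shape)) for name, tensor in sorted(model_inputs.items()))
        if key not in self._cache:
            self._cache[key] = model._get_output_shapes(**model_inputs)
        return self._cache[key]
```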

name=input_name,
device_type=self.device.type,
device_id=self.device.index if self.device.index is not None else -1,
element_type=TypeHelper.ort_type_to_numpy_type(self.input_dtypes[input_name]),


For onnxruntime 1.20 or later, I recommend using the ONNX type instead of the numpy type here, because numpy does not support bfloat16 or float8, while ONNX types do.

The mapping from ORT type to ONNX type looks like:
{
    "tensor(float)": onnx.TensorProto.FLOAT,
    "tensor(float16)": onnx.TensorProto.FLOAT16,
    ...
}
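
For illustration, a fuller version of such a mapping; the exact entries should be checked against the installed onnx and onnxruntime versions (float8 members such as FLOAT8E4M3FN require a recent onnx release).

```python
# Sketch of an ORT type string -> ONNX TensorProto element type mapping,
# following the suggestion above; verify coverage against your onnx/onnxruntime versions.
import onnx

ORT_TYPE_TO_ONNX_TYPE = {
    "tensor(bool)": onnx.TensorProto.BOOL,
    "tensor(int8)": onnx.TensorProto.INT8,
    "tensor(uint8)": onnx.TensorProto.UINT8,
    "tensor(int32)": onnx.TensorProto.INT32,
    "tensor(int64)": onnx.TensorProto.INT64,
    "tensor(float16)": onnx.TensorProto.FLOAT16,
    "tensor(bfloat16)": onnx.TensorProto.BFLOAT16,
    "tensor(float)": onnx.TensorProto.FLOAT,
    "tensor(double)": onnx.TensorProto.DOUBLE,
    # Float8 entries (e.g. "tensor(float8e4m3fn)" -> onnx.TensorProto.FLOAT8E4M3FN)
    # can be added once the installed onnx version exposes them.
}
```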

tianleiwu added a commit to microsoft/onnxruntime that referenced this pull request Nov 14, 2024
### Description

Update stable diffusion benchmark:
(1) allow IO binding for optimum.
(2) do not use num_images_per_prompt across all engines for fair
comparison.

Example to run benchmark of optimum on stable diffusion 1.5:
```
git clone https://github.com/tianleiwu/optimum
cd optimum
git checkout tlwu/diffusers-io-binding
pip install -e .

pip install -U onnxruntime-gpu
git clone https://github.com/microsoft/onnxruntime
cd onnxruntime/onnxruntime/python/tools/transformers/models/stable_diffusion
git checkout tlwu/benchmark_sd_optimum_io_binding
pip install -r requirements/cuda12/requirements.txt

optimum-cli export onnx --model runwayml/stable-diffusion-v1-5  --task text-to-image ./sd_onnx_fp32

python optimize_pipeline.py -i ./sd_onnx_fp32 -o ./sd_onnx_fp16 --float16
python benchmark.py -e optimum -r cuda -v 1.5 -p ./sd_onnx_fp16
python benchmark.py -e optimum -r cuda -v 1.5 -p ./sd_onnx_fp16 --use_io_binding
```

Example output on H100_80GB_HBM3: 572 ms with IO binding; 588 ms without IO binding. IO binding gains 16 ms, or 2.7%.

### Motivation and Context

Optimum is working on enabling I/O binding:
huggingface/optimum#2056. This could help
testing the impact of I/O binding on the performance of the stable
diffusion.
guschmue pushed a commit to microsoft/onnxruntime that referenced this pull request Dec 2, 2024
ankitm3k pushed a commit to intel/onnxruntime that referenced this pull request Dec 11, 2024
ankitm3k pushed a commit to intel/onnxruntime that referenced this pull request Dec 11, 2024

This PR has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.

@github-actions bot added the Stale label Feb 12, 2025
@github-actions bot closed this Mar 14, 2025
alex-spacemit pushed a commit to spacemit-com/onnxruntime that referenced this pull request Jun 22, 2025
[ARM] MatMulNBits FP16 support - kernels only (microsoft#22806)

A breakdown PR of microsoft#22651. Adds fp16 kernels.


Revert Implement DML copy for Lora Adapters (microsoft#22814)

Revert microsoft#22396

Fix issue microsoft#22796 - a typo: (__GNUC__ > 9) -> (__GNUC__ > 10) (microsoft#22807)

fix microsoft#22796
Signed-off-by: liqunfu <[email protected]>

[js/webgpu] Add scatterND (microsoft#22755)


[WebNN] Remove validation for coordinate_transformation_mode (microsoft#22811)

The performance cost of falling back to the CPU EP is high for several
resampling nodes and causes multiple partitions in SD Turbo and VAE
decoder. Since the asymmetric mode with nearest to floor and integer
scales is identical to half_pixel anyway, stick with the WebNN EP.

[TensorRT EP] Add new provider option to exclude nodes from running on TRT (microsoft#22681)

Add new provider option `trt_op_types_to_exclude`:
- User can provide op type list to be excluded from running on TRT
- e.g. `trt_op_types_to_exclude="MaxPool"`

There is a known performance issue with the DDS ops (NonMaxSuppression,
NonZero and RoiAlign) from TRT versions 10.0 to 10.7. TRT EP excludes
DDS ops from running on TRT by default, user can override default value
with empty string to include all ops.

Keep the model metadata on the generated EP context model (microsoft#22825)

Keep the model metadata on the generated EP context model

[WebNN EP] Fix issues of GRU operator (microsoft#22123)

This PR fixes the spelling of the key value of the GRU operator in the
map in the `GetSupportedNodes` function (Gru -> GRU) and removes the
data type check for the fifth input (sequence_lens) of the GRU operator.

PTAL, thanks!

Auto-generated baselines by 1ES Pipeline Templates (microsoft#22817)

Fix Linux python CUDA package pipeline (microsoft#22803)

Making ::p optional in the Linux python CUDA package pipeline

Linux stage from Python-CUDA-Packaging-Pipeline has failed since merge
of microsoft#22773

[WebNN] Fix MLTensorUsage is undefined issue (microsoft#22831)

`MLTensorUsage` has been removed from Chromium:
https://chromium-review.googlesource.com/c/chromium/src/+/6015318, but
we still need to make it compatible with old Chrome versions, so just
make it `undefined` for latest Chrome version.

Enable ConvReplaceWithQLinear when using ACL (microsoft#22823)

Enable the ConvReplaceWithQLinear graph optimization when using the ACL
execution provider.

Fixes an issue where quantized Conv nodes followed by ReLU don't get
converted to QLinearConv, so ACL sees the weights as mutable and
therefore cannot run the Conv node.

Signed-off-by: Michael Tyler <[email protected]>

[CUDA] stable diffusion benchmark allows IO binding for optimum (microsoft#22834)


Fix Linux CI pipeline where ep was not provided for py-packaging-linux-test-cpu.yml (microsoft#22828)

Current linux-ci-pipeline was broken due to missing parameters from
`py-packaging-linux-test-cpu.yml` template

Fix Linux CI pipeline

Register groupnorm for opset 21 (microsoft#22830)

This PR registers GroupNormalization for opset 21


Fix spellchecks from Optional Lint (microsoft#22802)


Change-Id: I561dfcdadcc6fa4cda899ef3bb181f0713fadebb