
Commit 29534cf (1 parent: baece56)

moving AutoDeploy README to doc

- move autodeploy doc into torch, update links
- update contents
- replace hyperlink with modular path
- minor

Signed-off-by: Frida Hou <[email protected]>

File tree: 7 files changed, +431 −0 lines

docs/source/torch.md

Lines changed: 4 additions & 0 deletions
## Known Issues

- The PyTorch backend on SBSA is incompatible with bare metal environments like Ubuntu 24.04. Please use the [PyTorch NGC Container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) for optimal support on SBSA platforms.

## Prototype Feature

- [AutoDeploy: Seamless Model Deployment from PyTorch to TRT-LLM](./torch/auto_deploy/auto-deploy.md)
Lines changed: 49 additions & 0 deletions
# Example Run Script

To build and run the AutoDeploy example, use the `examples/auto_deploy/build_and_run_ad.py` script:

```bash
cd examples/auto_deploy
python build_and_run_ad.py --model "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
```

You can arbitrarily configure your experiment. Use the `-h/--help` flag to see available options:

```bash
python build_and_run_ad.py --help
```

Below is a non-exhaustive list of common config options:

| Configuration Key | Description |
|-------------------|-------------|
| `--model` | The HF model card or path to a HF checkpoint folder |
| `--args.model-factory` | Choose the model factory implementation (`"AutoModelForCausalLM"`, ...) |
| `--args.skip-loading-weights` | Only load the architecture, not the weights |
| `--args.model-kwargs` | Extra kwargs that are passed to the model initializer in the model factory |
| `--args.tokenizer-kwargs` | Extra kwargs that are passed to the tokenizer initializer in the model factory |
| `--args.world-size` | The number of GPUs used for auto-sharding the model |
| `--args.runtime` | Specifies the type of engine to use at runtime (`"demollm"` or `"trtllm"`) |
| `--args.compile-backend` | Specifies how to compile the graph at the end |
| `--args.attn-backend` | Specifies the kernel implementation for attention |
| `--args.mla-backend` | Specifies the implementation for multi-head latent attention |
| `--args.max-seq-len` | Maximum sequence length for inference/cache |
| `--args.max-batch-size` | Maximum dimension for the statically allocated KV cache |
| `--args.attn-page-size` | Page size for attention |
| `--prompt.batch-size` | Number of queries to generate |
| `--benchmark.enabled` | Whether to run the built-in benchmark (true/false) |

For default values and additional configuration options, refer to the `ExperimentConfig` class in the `examples/auto_deploy/build_and_run_ad.py` file.
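For orientation, the overall shape of that config can be sketched with plain dataclasses (an illustrative simplification: the real classes are Pydantic Settings models, and the defaults shown here are assumptions rather than the actual values):

```python
from dataclasses import dataclass, field


@dataclass
class AutoDeployArgs:
    model_factory: str = "AutoModelForCausalLM"
    world_size: int = 1
    runtime: str = "demollm"            # or "trtllm"
    compile_backend: str = "torch-compile"
    attn_backend: str = "flashinfer"
    max_seq_len: int = 512              # assumed default
    max_batch_size: int = 8             # assumed default


@dataclass
class PromptConfig:
    batch_size: int = 2                 # number of queries to generate (assumed default)


@dataclass
class BenchmarkConfig:
    enabled: bool = False


@dataclass
class ExperimentConfig:
    model: str = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
    args: AutoDeployArgs = field(default_factory=AutoDeployArgs)
    prompt: PromptConfig = field(default_factory=PromptConfig)
    benchmark: BenchmarkConfig = field(default_factory=BenchmarkConfig)


cfg = ExperimentConfig()
print(cfg.args.compile_backend)  # torch-compile
```

A CLI flag such as `--args.world-size 2` then simply sets `cfg.args.world_size` on the nested object.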
Here is a more complete example of using the script:

```bash
cd examples/auto_deploy
python build_and_run_ad.py \
  --model "TinyLlama/TinyLlama-1.1B-Chat-v1.0" \
  --args.world-size 2 \
  --args.runtime "demollm" \
  --args.compile-backend "torch-compile" \
  --args.attn-backend "flashinfer" \
  --benchmark.enabled True
```
Lines changed: 181 additions & 0 deletions
# Expert Configuration of LLM API

For expert TensorRT-LLM users, we also expose the full set of `tensorrt_llm._torch.auto_deploy.llm_args.LlmArgs` *at your own risk* (the argument list diverges from TRT-LLM's argument list):

- All config fields used by the AutoDeploy core pipeline (i.e., the `InferenceOptimizer`) are _exclusively_ exposed in the `AutoDeployConfig` class in `tensorrt_llm._torch.auto_deploy.llm_args`. Please refer to those first.
- For expert users, we expose the full set of `LlmArgs` in `tensorrt_llm._torch.auto_deploy.llm_args`, which can be used to configure the AutoDeploy `LLM` API, including runtime options.
- Note that some fields in the full `LlmArgs` object are overlapping, duplicated, and/or _ignored_ in AutoDeploy, particularly arguments pertaining to configuring the model itself, since AutoDeploy's model ingestion and optimization pipeline differs significantly from the default manual workflow in TensorRT-LLM.
- However, with proper care, the full `LlmArgs` object can be used to configure advanced runtime options in TensorRT-LLM.
- Any valid field can simply be provided as a keyword argument (`**kwargs`) to the AutoDeploy `LLM` API.
# Expert Configuration of `build_and_run_ad.py`

For expert users, `build_and_run_ad.py` provides advanced configuration capabilities through a flexible argument parser powered by Pydantic Settings and OmegaConf. You can use dot notation for CLI arguments, provide multiple YAML configuration files, and leverage sophisticated configuration precedence rules to create complex deployment configurations.

## CLI Arguments with Dot Notation

The script supports flexible CLI argument parsing using dot notation to modify nested configurations dynamically. You can target any field in both the `ExperimentConfig` in `examples/auto_deploy/build_and_run_ad.py` and the nested `AutoDeployConfig`/`LlmArgs` objects in `tensorrt_llm._torch.auto_deploy.llm_args`:

```bash
# Configure model parameters
# NOTE: config values like num_hidden_layers are automatically resolved into the appropriate
# nested dict value ``{"args": {"model_kwargs": {"num_hidden_layers": 10}}}`` even though the
# nesting is not explicitly spelled out in the CLI arg
python build_and_run_ad.py \
  --model "meta-llama/Meta-Llama-3.1-8B-Instruct" \
  --args.model-kwargs.num-hidden-layers=10 \
  --args.model-kwargs.hidden-size=2048 \
  --args.tokenizer-kwargs.padding-side=left

# Configure runtime and backend settings
python build_and_run_ad.py \
  --model "TinyLlama/TinyLlama-1.1B-Chat-v1.0" \
  --args.world-size=2 \
  --args.compile-backend=torch-opt \
  --args.attn-backend=flashinfer

# Configure prompting and benchmarking
python build_and_run_ad.py \
  --model "microsoft/phi-4" \
  --prompt.batch-size=4 \
  --prompt.sp-kwargs.max-tokens=200 \
  --prompt.sp-kwargs.temperature=0.7 \
  --benchmark.enabled=true \
  --benchmark.bs=8 \
  --benchmark.isl=1024
```
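The dot-notation resolution described above can be illustrated with a small helper (a deliberately naive sketch of what Pydantic Settings and OmegaConf do under the hood, not the actual parser; the real implementation also handles space-separated values and uses field type information for conversion):

```python
def parse_dot_args(argv):
    """Turn ``--a.b-c.d=1``-style args into a nested dict (simplified sketch)."""
    cfg = {}
    for arg in argv:
        key, _, value = arg.lstrip("-").partition("=")
        # dashes in key segments become underscores in config field names
        keys = [k.replace("-", "_") for k in key.split(".")]
        node = cfg
        for k in keys[:-1]:
            node = node.setdefault(k, {})
        # naive literal conversion; the real parser uses the target field's type
        node[keys[-1]] = int(value) if value.isdigit() else value
    return cfg


print(parse_dot_args(["--args.model-kwargs.num-hidden-layers=10"]))
# → {'args': {'model_kwargs': {'num_hidden_layers': 10}}}
```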
## YAML Configuration Files

Both `ExperimentConfig` and `AutoDeployConfig`/`LlmArgs` inherit from `DynamicYamlMixInForSettings`, enabling you to provide multiple YAML configuration files that are automatically deep-merged at runtime.

Create a YAML configuration file (e.g., `my_config.yaml`):

```yaml
# my_config.yaml
args:
  model_kwargs:
    num_hidden_layers: 12
    hidden_size: 1024
  world_size: 4
  compile_backend: torch-compile
  attn_backend: triton
  max_seq_len: 2048
  max_batch_size: 16
  transforms:
    sharding:
      strategy: auto
    quantization:
      enabled: false

prompt:
  batch_size: 8
  sp_kwargs:
    max_tokens: 150
    temperature: 0.8
    top_k: 50

benchmark:
  enabled: true
  num: 20
  bs: 4
  isl: 1024
  osl: 256
```
Create an additional override file (e.g., `production.yaml`):

```yaml
# production.yaml
args:
  world_size: 8
  compile_backend: torch-opt
  max_batch_size: 32

benchmark:
  enabled: false
```

Then use these configurations:

```bash
# Using a single YAML config
python build_and_run_ad.py \
  --model "meta-llama/Meta-Llama-3.1-8B-Instruct" \
  --yaml-configs my_config.yaml

# Using multiple YAML configs (deep-merged in order; later files have higher priority)
python build_and_run_ad.py \
  --model "meta-llama/Meta-Llama-3.1-8B-Instruct" \
  --yaml-configs my_config.yaml production.yaml

# Targeting the nested AutoDeployConfig with a separate YAML
python build_and_run_ad.py \
  --model "meta-llama/Meta-Llama-3.1-8B-Instruct" \
  --yaml-configs my_config.yaml \
  --args.yaml-configs autodeploy_overrides.yaml
```
## Configuration Precedence and Deep Merging

The configuration system follows a strict precedence order in which higher-priority sources override lower-priority ones:

1. **CLI Arguments** (highest priority) - Direct command line arguments
2. **YAML Configs** - Files specified via `--yaml-configs` and `--args.yaml-configs`
3. **Default Settings** (lowest priority) - Built-in defaults from the config classes

**Deep Merging**: Unlike simple overwriting, deep merging recursively combines nested dictionaries. For example:

```yaml
# Base config
args:
  model_kwargs:
    num_hidden_layers: 10
    hidden_size: 1024
  max_seq_len: 2048
```

```yaml
# Override config
args:
  model_kwargs:
    hidden_size: 2048  # This will override
    # num_hidden_layers: 10 remains unchanged
  world_size: 4  # This gets added
```
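The merge behavior shown above can be sketched in a few lines of Python (an illustration of the semantics, not the library's actual implementation):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge ``override`` into ``base``; ``override`` wins on conflicts."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)  # recurse into nested dicts
        else:
            merged[key] = value  # scalar or new key: override/add
    return merged


base = {"args": {"model_kwargs": {"num_hidden_layers": 10, "hidden_size": 1024},
                 "max_seq_len": 2048}}
override = {"args": {"model_kwargs": {"hidden_size": 2048}, "world_size": 4}}

merged = deep_merge(base, override)
# hidden_size is overridden, num_hidden_layers and max_seq_len survive, world_size is added
```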
**Nested Config Behavior**: When using nested configurations, outer YAML configs become init settings for inner objects, giving them higher precedence:

```bash
# The outer yaml-configs affects the entire ExperimentConfig
# The inner args.yaml-configs affects only the AutoDeployConfig
python build_and_run_ad.py \
  --model "meta-llama/Meta-Llama-3.1-8B-Instruct" \
  --yaml-configs experiment_config.yaml \
  --args.yaml-configs autodeploy_config.yaml \
  --args.world-size=8  # CLI override beats both YAML configs
```
## Built-in Default Configuration

Both the `AutoDeployConfig` and `LlmArgs` classes automatically load a built-in `default.yaml` configuration file that provides sensible defaults for the AutoDeploy inference optimizer pipeline. This file is specified by the `_get_config_dict()` function in `tensorrt_llm._torch.auto_deploy.llm_args` and defines default transform configurations for the graph optimization stages.

The built-in defaults are automatically merged with your configurations at the lowest priority level, ensuring that your custom settings always override the defaults. You can inspect the current default configuration to understand the baseline transform pipeline:

```bash
# View the default configuration
cat tensorrt_llm/_torch/auto_deploy/config/default.yaml

# Override specific transform settings
python build_and_run_ad.py \
  --model "TinyLlama/TinyLlama-1.1B-Chat-v1.0" \
  --args.transforms.export-to-gm.strict=true
```
Lines changed: 14 additions & 0 deletions
# Logging Level

Use the following environment variable to specify the logging level of our built-in logger, ordered by decreasing verbosity:

```bash
AUTO_DEPLOY_LOG_LEVEL=DEBUG
AUTO_DEPLOY_LOG_LEVEL=INFO
AUTO_DEPLOY_LOG_LEVEL=WARNING
AUTO_DEPLOY_LOG_LEVEL=ERROR
AUTO_DEPLOY_LOG_LEVEL=INTERNAL_ERROR
```

The default level is `INFO`.
Lines changed: 32 additions & 0 deletions
### Incorporating `auto_deploy` into your own workflow

AutoDeploy can be seamlessly integrated into your existing workflows using TRT-LLM's LLM high-level API. This section provides a blueprint for configuring and invoking AutoDeploy within your custom applications.

Here is an example of how you can build an LLM object with AutoDeploy integration:

```python
from tensorrt_llm._torch.auto_deploy import LLM

# Construct the LLM high-level interface object with AutoDeploy as the backend
llm = LLM(
    model=<HF_MODEL_CARD_OR_DIR>,
    world_size=<DESIRED_WORLD_SIZE>,
    compile_backend="torch-compile",
    model_kwargs={"num_hidden_layers": 2},  # test with a smaller model configuration
    attn_backend="flashinfer",  # choose between "triton" and "flashinfer"
    attn_page_size=64,  # page size for attention (tokens_per_block; should be == max_seq_len for triton)
    skip_loading_weights=False,
    model_factory="AutoModelForCausalLM",  # choose the appropriate model factory
    mla_backend="MultiHeadLatentAttention",  # for models that support MLA
    free_mem_ratio=0.8,  # fraction of available memory for cache
    simple_shard_only=False,  # tensor parallelism sharding strategy
    max_seq_len=<MAX_SEQ_LEN>,
    max_batch_size=<MAX_BATCH_SIZE>,
)
```

Please consult the AutoDeploy `LLM` API in `tensorrt_llm._torch.auto_deploy.llm` and the `AutoDeployConfig` class in `tensorrt_llm._torch.auto_deploy.llm_args` for more detail on how AutoDeploy is configured via the `**kwargs` of the `LLM` API.
Lines changed: 81 additions & 0 deletions
# AutoDeploy

```{note}
This project is in active development and is currently in an early (beta) stage. The code is experimental, subject to change, and may include backward-incompatible updates. While we strive for correctness, we provide no guarantees regarding functionality, stability, or reliability.
```

<h4>Seamless Model Deployment from PyTorch to TRT-LLM</h4>

AutoDeploy is an experimental feature in beta stage designed to simplify and accelerate the deployment of PyTorch models, including off-the-shelf models like those from Hugging Face, to TensorRT-LLM. It automates graph transformations to integrate inference optimizations such as tensor parallelism, KV-caching, and quantization. AutoDeploy supports optimized in-framework deployment, minimizing the amount of manual modification needed.
## Motivation & Approach

Deploying large language models (LLMs) can be challenging, especially when balancing ease of use with high performance. Teams need simple, intuitive deployment solutions that reduce engineering effort, speed up the integration of new models, and support rapid experimentation without compromising performance.

AutoDeploy addresses these challenges with a streamlined, (semi-)automated pipeline that transforms in-framework PyTorch models, including Hugging Face models, into optimized inference-ready models for TRT-LLM. It simplifies deployment, optimizes models for efficient inference, and bridges the gap between simplicity and performance.

### Key Features

- **Seamless Model Transition:** Automatically converts PyTorch/Hugging Face models to TRT-LLM without manual rewrites.
- **Unified Model Definition:** Maintain a single source of truth with your original PyTorch/Hugging Face model.
- **Optimized Inference:** Built-in transformations for sharding, quantization, KV-cache integration, MHA fusion, and CudaGraph optimization.
- **Immediate Deployment:** Day-0 support for models with continuous performance enhancements.
- **Quick Setup & Prototyping:** Lightweight pip package for easy installation with a demo environment for fast testing.

## Get Started

1. **Install AutoDeploy:**

   AutoDeploy is accessible through the TRT-LLM installation.

   ```bash
   sudo apt-get -y install libopenmpi-dev && pip3 install --upgrade pip setuptools && pip3 install tensorrt_llm
   ```

   Refer to the [TRT-LLM installation guide](../../installation/linux.md) for more information.

2. **Run the Llama Example:**

   You are now ready to run an in-framework Llama demo.

   The general entrypoint for the AutoDeploy demo is the `build_and_run_ad.py` script. Checkpoints are loaded directly from Hugging Face (HF) or a local HF-like directory:

   ```bash
   cd examples/auto_deploy
   python build_and_run_ad.py --model "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
   ```
## Support Matrix

AutoDeploy streamlines the model deployment process through an automated workflow designed for efficiency and performance. The workflow begins with a PyTorch model, which is exported using `torch.export` to generate a standard Torch graph. This graph contains core PyTorch ATen operations alongside custom attention operations, determined by the attention backend specified in the configuration.

The exported graph then undergoes a series of automated transformations, including graph sharding, KV-cache insertion, and GEMM fusion, to optimize model performance. After these transformations, the graph is compiled using one of the supported compile backends (such as `torch-opt`) and deployed via the TRT-LLM runtime.

- [Support Matrix](support_matrix.md)
## Advanced Usage

- [Example Run Script](./advanced/example_run.md)
- [Logging Level](./advanced/logging.md)
- [Incorporating AutoDeploy into Your Own Workflow](./advanced/workflow.md)
- [Expert Configurations](./advanced/expert_configurations.md)

## Roadmap

We are actively expanding AutoDeploy to support a broader range of model architectures and inference features.

**Upcoming Model Support:**

- Vision-Language Models (VLMs)
- Structured State Space Models (SSMs) and Linear Attention architectures

**Planned Features:**

- Low-Rank Adaptation (LoRA)
- Speculative Decoding for accelerated generation

To track development progress and contribute, visit our [GitHub Project Board](https://github.com/orgs/NVIDIA/projects/83/views/13). We welcome community contributions; see `examples/auto_deploy/CONTRIBUTING.md` for guidelines.
