# Expert Configuration of LLM API

For expert TensorRT-LLM users, we also expose the full set of `tensorrt_llm._torch.auto_deploy.llm_args.LlmArgs`
*at your own risk* (the argument list diverges from TRT-LLM's argument list):

- All config fields used by the AutoDeploy core pipeline (i.e., the `InferenceOptimizer`) are
  _exclusively_ exposed in the `AutoDeployConfig` class in `tensorrt_llm._torch.auto_deploy.llm_args`.
  Please refer to those fields first.
- For expert users, we expose the full set of `LlmArgs` in `tensorrt_llm._torch.auto_deploy.llm_args`,
  which can be used to configure the AutoDeploy `LLM` API, including runtime options.
- Note that some fields in the full `LlmArgs` object are overlapping, duplicated, and/or _ignored_
  in AutoDeploy, particularly arguments pertaining to configuring the model itself, since
  AutoDeploy's model ingestion and optimization pipeline differs significantly from the default
  manual workflow in TensorRT-LLM.
- However, with proper care, the full `LlmArgs` object can be used to configure advanced runtime
  options in TensorRT-LLM.
- Any valid field can simply be provided as a keyword argument (`**kwargs`) to the AutoDeploy
  `LLM` API, as shown in the sketch below.
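
A minimal sketch of the kwargs pattern; the exact import path of the AutoDeploy `LLM` class is an
assumption here, so verify it against your TensorRT-LLM version:

```python
# A minimal sketch, assuming the AutoDeploy LLM class is exposed from
# tensorrt_llm._torch.auto_deploy (verify the import path for your version).
from tensorrt_llm._torch.auto_deploy import LLM

# Any valid AutoDeployConfig/LlmArgs field can be passed as a keyword argument.
llm = LLM(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    world_size=2,                 # runtime option from LlmArgs
    compile_backend="torch-opt",  # AutoDeploy core pipeline option
)
```
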
# Expert Configuration of `build_and_run_ad.py`

For expert users, `build_and_run_ad.py` provides advanced configuration capabilities through a flexible argument parser powered by Pydantic Settings and OmegaConf. You can use dot notation for CLI arguments, provide multiple YAML configuration files, and rely on well-defined configuration precedence rules to compose complex deployment configurations.

## CLI Arguments with Dot Notation

The script supports flexible CLI argument parsing using dot notation to modify nested configurations dynamically. You can target any field in both the `ExperimentConfig` in `examples/auto_deploy/build_and_run_ad.py` and the nested `AutoDeployConfig`/`LlmArgs` objects in `tensorrt_llm._torch.auto_deploy.llm_args`:

```bash
# Configure model parameters
# NOTE: config values like num_hidden_layers are automatically resolved into the appropriate
# nested dict value ``{"args": {"model_kwargs": {"num_hidden_layers": 10}}}`` even though the
# full nested path is not explicitly spelled out as a CLI argument
python build_and_run_ad.py \
  --model "meta-llama/Meta-Llama-3.1-8B-Instruct" \
  --args.model-kwargs.num-hidden-layers=10 \
  --args.model-kwargs.hidden-size=2048 \
  --args.tokenizer-kwargs.padding-side=left

# Configure runtime and backend settings
python build_and_run_ad.py \
  --model "TinyLlama/TinyLlama-1.1B-Chat-v1.0" \
  --args.world-size=2 \
  --args.compile-backend=torch-opt \
  --args.attn-backend=flashinfer

# Configure prompting and benchmarking
python build_and_run_ad.py \
  --model "microsoft/phi-4" \
  --prompt.batch-size=4 \
  --prompt.sp-kwargs.max-tokens=200 \
  --prompt.sp-kwargs.temperature=0.7 \
  --benchmark.enabled=true \
  --benchmark.bs=8 \
  --benchmark.isl=1024
```
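
Under the hood, a dot-notation key expands into a nested dictionary path (the CLI accepts hyphens, while the underlying config keys use underscores, per the examples in this document). A minimal illustration of the idea using OmegaConf, which the parser builds on; this is illustrative, not the script's actual parsing code:

```python
from omegaconf import OmegaConf

# Each dot-separated key becomes a nested config section.
cfg = OmegaConf.from_dotlist([
    "args.model_kwargs.num_hidden_layers=10",
    "args.model_kwargs.hidden_size=2048",
])
print(OmegaConf.to_yaml(cfg))
# args:
#   model_kwargs:
#     num_hidden_layers: 10
#     hidden_size: 2048
```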

## YAML Configuration Files

Both `ExperimentConfig` and `AutoDeployConfig`/`LlmArgs` inherit from `DynamicYamlMixInForSettings`, enabling you to provide multiple YAML configuration files that are automatically deep-merged at runtime.

Create a YAML configuration file (e.g., `my_config.yaml`):

```yaml
# my_config.yaml
args:
  model_kwargs:
    num_hidden_layers: 12
    hidden_size: 1024
  world_size: 4
  compile_backend: torch-compile
  attn_backend: triton
  max_seq_len: 2048
  max_batch_size: 16
  transforms:
    sharding:
      strategy: auto
    quantization:
      enabled: false

prompt:
  batch_size: 8
  sp_kwargs:
    max_tokens: 150
    temperature: 0.8
    top_k: 50

benchmark:
  enabled: true
  num: 20
  bs: 4
  isl: 1024
  osl: 256
```

Create an additional override file (e.g., `production.yaml`):

```yaml
# production.yaml
args:
  world_size: 8
  compile_backend: torch-opt
  max_batch_size: 32

benchmark:
  enabled: false
```

Then use these configurations:

```bash
# Using a single YAML config
python build_and_run_ad.py \
  --model "meta-llama/Meta-Llama-3.1-8B-Instruct" \
  --yaml-configs my_config.yaml

# Using multiple YAML configs (deep-merged in order; later files have higher priority)
python build_and_run_ad.py \
  --model "meta-llama/Meta-Llama-3.1-8B-Instruct" \
  --yaml-configs my_config.yaml production.yaml

# Targeting the nested AutoDeployConfig with a separate YAML file
python build_and_run_ad.py \
  --model "meta-llama/Meta-Llama-3.1-8B-Instruct" \
  --yaml-configs my_config.yaml \
  --args.yaml-configs autodeploy_overrides.yaml
```

## Configuration Precedence and Deep Merging

The configuration system follows a strict precedence order in which higher-priority sources override lower-priority ones:

1. **CLI Arguments** (highest priority) - Direct command line arguments
1. **YAML Configs** - Files specified via `--yaml-configs` and `--args.yaml-configs`
1. **Default Settings** (lowest priority) - Built-in defaults from the config classes

**Deep Merging**: Unlike simple overwriting, deep merging recursively combines nested dictionaries, so sibling keys survive an override. For example:

```yaml
# Base config
args:
  model_kwargs:
    num_hidden_layers: 10
    hidden_size: 1024
  max_seq_len: 2048
```

```yaml
# Override config
args:
  model_kwargs:
    hidden_size: 2048 # This will override
    # num_hidden_layers: 10 remains unchanged
  world_size: 4 # This gets added
```
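
The same recursive-merge semantics can be reproduced with OmegaConf, which the parser builds on. A standalone sketch (not the script's internal code) verifying the behavior described above:

```python
from omegaconf import OmegaConf

base = OmegaConf.create({
    "args": {
        "model_kwargs": {"num_hidden_layers": 10, "hidden_size": 1024},
        "max_seq_len": 2048,
    }
})
override = OmegaConf.create({
    "args": {"model_kwargs": {"hidden_size": 2048}, "world_size": 4}
})

# Later arguments win on conflicts; nested dicts are merged, not replaced.
merged = OmegaConf.merge(base, override)
assert merged.args.model_kwargs.num_hidden_layers == 10  # preserved from base
assert merged.args.model_kwargs.hidden_size == 2048      # overridden
assert merged.args.world_size == 4                       # added
```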

**Nested Config Behavior**: When using nested configurations, values from outer YAML configs are passed as init settings to the inner config objects, so they take precedence over the inner objects' own YAML configs:

```bash
# The outer yaml-configs affects the entire ExperimentConfig
# The inner args.yaml-configs affects only the AutoDeployConfig
python build_and_run_ad.py \
  --model "meta-llama/Meta-Llama-3.1-8B-Instruct" \
  --yaml-configs experiment_config.yaml \
  --args.yaml-configs autodeploy_config.yaml \
  --args.world-size=8 # CLI override beats both YAML configs
```

## Built-in Default Configuration

Both the `AutoDeployConfig` and `LlmArgs` classes automatically load a built-in `default.yaml` configuration file that provides sensible defaults for the AutoDeploy inference optimizer pipeline. This file is specified in the `_get_config_dict()` function in `tensorrt_llm._torch.auto_deploy.llm_args` and defines the default transform configurations for the graph optimization stages.

The built-in defaults are merged with your configurations at the lowest priority level, so your custom settings always override them. You can inspect the default configuration to understand the baseline transform pipeline:

```bash
# View the default configuration
cat tensorrt_llm/_torch/auto_deploy/config/default.yaml

# Override specific transform settings
python build_and_run_ad.py \
  --model "TinyLlama/TinyLlama-1.1B-Chat-v1.0" \
  --args.transforms.export-to-gm.strict=true
```
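
The defaults can also be inspected programmatically. A hypothetical sketch, assuming `AutoDeployConfig` can be constructed with just a `model` field and exposes the loaded transform pipeline via a `transforms` attribute (as the CLI examples above suggest):

```python
from tensorrt_llm._torch.auto_deploy.llm_args import AutoDeployConfig

# Constructing the config loads the built-in default.yaml at lowest priority.
config = AutoDeployConfig(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Print the default transform stages and their settings.
for name, transform_cfg in config.transforms.items():
    print(name, transform_cfg)
```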