The configuration supplied to LMI provides information that LMI will use to load and serve your model.
LMI containers accept configurations provided in two formats.

* `serving.properties` Configuration File (per model configurations)
* Environment Variables (global configurations)

For most use-cases, using environment variables is sufficient.
If you are deploying LMI to serve multiple models within the same container (SageMaker Multi-Model Endpoint use-case), you should use per model `serving.properties` configuration files.
Environment Variables are global settings and will apply to all models being served within a single instance of LMI.

While you can mix configurations between `serving.properties` and environment variables, we recommend you choose one and specify all configuration in that format.
Configurations specified in the `serving.properties` files will override configurations specified in environment variables.
Both configuration mechanisms offer access to the same set of configurations.
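For example, if the same key is set in both places, the value from `serving.properties` wins. A minimal sketch of this precedence (the degree values here are arbitrary placeholders):

```
# serving.properties packaged with the model
option.tensor_parallel_degree=4

# environment variable set on the container
OPTION_TENSOR_PARALLEL_DEGREE=2

# effective value: 4, because serving.properties overrides environment variables
```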
If you know which backend you are going to use, you can find a set of starter configurations in the corresponding [user guide](../user_guides/README.md).
We recommend using the quick start configurations as a starting point if you have decided on a particular backend.

We will now cover the two types of configuration formats.

## serving.properties

### Model Artifact Configuration (required)
If you are deploying a model hosted on the HuggingFace Hub, you must specify the `option.model_id=<hf_hub_model_id>` configuration.
When using a model directly from the hub, we recommend you also specify the model revision (commit hash or branch) via `option.revision=<commit hash/branch>`.
Since model artifacts are downloaded at runtime from the Hub, using a specific revision ensures you are using a model compatible with package versions in the runtime environment.
Open Source model artifacts on the hub are subject to change at any time.
These changes may cause issues when instantiating the model (updated model artifacts may require a newer version of a dependency than what is bundled in the container).
If a model provides custom model (*modeling.py) and/or custom tokenizer (*tokenizer.py) files, you need to specify `option.trust_remote_code=true` to load and use the model.

If you are deploying a model hosted in S3, `option.model_id=<s3 uri>` should be the s3 object prefix of the model artifacts.
Alternatively, you can upload the `serving.properties` file to S3 alongside your model artifacts (under the same prefix) and omit the `option.model_id` config from your `serving.properties` file.
Example code for leveraging uncompressed artifacts in S3 is provided in the [deploying your endpoint](deploying-your-endpoint.md#configuration---servingproperties) section.
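To make this concrete, the model artifact portion of a `serving.properties` file might look like the sketch below; the model id, revision, and trust setting are placeholders rather than recommendations for a specific model:

```
# model artifact configuration (placeholder values)
option.model_id=<hf_hub_model_id>
# pin a specific commit hash or branch for reproducible downloads from the Hub
option.revision=<commit hash/branch>
# only required if the model ships custom *modeling.py / *tokenizer.py files
option.trust_remote_code=true
```

For artifacts stored in S3, `option.model_id` would instead point to the s3 object prefix, or be omitted entirely when the `serving.properties` file is uploaded alongside the artifacts.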
### Inference Library Configuration (optional)

Inference library configurations are optional, but allow you to override the default backend for your model.
To override or explicitly set the inference backend, you should set `option.rolling_batch`.
This represents the inference library to use.
The available options depend on the container.

In the LMI Container:

* to use vLLM, use `option.rolling_batch=vllm`
* to use lmi-dist, use `option.rolling_batch=lmi-dist`
* to use huggingface accelerate, use `option.rolling_batch=auto` for text generation models, or `option.rolling_batch=disable` for non-text generation models.

In the TensorRT-LLM Container:

* use `option.rolling_batch=trtllm` to use TensorRT-LLM (this is the default)

In the Transformers NeuronX Container:

* use `option.rolling_batch=auto` to use Transformers NeuronX (this is the default)
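As a short example, explicitly selecting the lmi-dist backend in the LMI container only requires the single setting below, in either configuration format:

```
# serving.properties
option.rolling_batch=lmi-dist

# or, equivalently, as an environment variable
OPTION_ROLLING_BATCH=lmi-dist
```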
### Tensor Parallelism Configuration

The `option.tensor_parallel_degree` configuration is used to specify how many GPUs to shard the model across using tensor parallelism.
This value should be between 1 and the maximum number of GPUs available on an instance.
Alternatively, if this value is specified as a number, LMI will attempt to maximize the number of model copies (workers) that can be hosted based on the number of GPUs available on the instance.

For example, using an instance with 4 gpus and a tensor parallel degree of 2 will result in 2 model copies, each using 2 gpus.
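The sketch below shows the two common ways of setting this value; the specific degree is an example and should be chosen based on your instance type:

```
# shard the model across all GPUs visible to the container
option.tensor_parallel_degree=max

# or shard across a fixed number of GPUs, e.g. 2 of the 4 GPUs on the instance,
# allowing LMI to host 2 copies of the model
option.tensor_parallel_degree=2
```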
### LMI Common Configurations
There are two classes of configurations provided by LMI:

* Model Server level configurations. These configurations do not have a prefix (e.g. `job_queue_size`)
* Engine/Backend level configurations. These configurations have an `option.` prefix (e.g. `option.dtype`)
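A small illustration of the two classes side by side (the values shown are arbitrary placeholders):

```
# model server level configuration (no prefix)
job_queue_size=100

# engine/backend level configuration (option. prefix)
option.dtype=fp16
```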
Since LMI is built using the DJLServing model server, all DJLServing configurations are available in LMI.
You can find a list of these configurations [here](../../configurations_model.md#python-model-configuration).
## Environment Variable Configurations
The core configurations available via environment variables are documented in our [starting guide](../user_guides/starting-guide.md#available-environment-variable-configurations).

For other configurations, the `serving.properties` configuration can be translated into an equivalent environment variable configuration.

Keys that start with `option.` can be specified as environment variables using the `OPTION_` prefix.
The configuration `option.<property>` is translated to environment variable `OPTION_<PROPERTY>`. For example:

* `option.rolling_batch` is translated to environment variable `OPTION_ROLLING_BATCH`

Configuration keys that do not start with `option.` can be specified as environment variables using the `SERVING_` prefix.
The configuration `<property>` is translated to environment variable `SERVING_<PROPERTY>`. For example:

* `job_queue_size` is translated to environment variable `SERVING_JOB_QUEUE_SIZE`
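Putting the translation rules together, the following sketch of a `serving.properties` file (placeholder values):

```
option.model_id=<hf_hub_model_id>
option.rolling_batch=vllm
option.tensor_parallel_degree=max
job_queue_size=100
```

would be expressed with environment variables as:

```
OPTION_MODEL_ID=<hf_hub_model_id>
OPTION_ROLLING_BATCH=vllm
OPTION_TENSOR_PARALLEL_DEGREE=max
SERVING_JOB_QUEUE_SIZE=100
```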
Next: [Deploying your endpoint](deploying-your-endpoint.md)
Changes to `serving/docs/lmi/deployment_guide/deploying-your-endpoint.md` (2 additions, 1 deletion):

The following options may be added to the `ModelDataSource` field to support uncompressed model artifacts in S3.
This mechanism is useful when deploying SageMaker endpoints with network isolation.
Model artifacts will be downloaded by SageMaker and mounted to the container rather than being downloaded by the container at runtime.

If you use this mechanism to deploy the container, you do not need to specify the `option.model_id` or `HF_MODEL_ID` config.
LMI will load the model artifacts from the model directory by default, which is where SageMaker downloads and mounts the model artifacts from S3.

Follow this link for a detailed overview of this option: https://docs.aws.amazon.com/sagemaker/latest/dg/large-model-inference-uncompressed.html
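For reference, a `ModelDataSource` entry for uncompressed artifacts generally follows the shape sketched below; the bucket and prefix are placeholders, and the linked AWS documentation is the authoritative source for the exact fields:

```
"ModelDataSource": {
    "S3DataSource": {
        "S3Uri": "s3://<your-bucket>/<model-prefix>/",
        "S3DataType": "S3Prefix",
        "CompressionType": "None"
    }
}
```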
The below configurations help you configure the inference optimization parameters. You can check all the configurations of the TensorRT-LLM LMI handler [in our docs](../user_guides/trt_llm_user_guide.md#advanced-tensorrt-llm-configurations).
```
HF_MODEL_ID={{s3url}}
OPTION_TENSOR_PARALLEL_DEGREE=8
OPTION_MAX_ROLLING_BATCH_SIZE=128
OPTION_DTYPE=fp16
```
**Note:** After uploading model artifacts to s3, you can just update the model_id (env var or in `serving.properties`) to the newly created s3 url with the compiled model artifacts, and keep the rest of the environment variables or `serving.properties` the same when deploying on SageMaker. Here, you can check the [tutorial](https://github.com/deepjavalibrary/djl-demo/blob/master/aws/sagemaker/large-model-inference/sample-llm/trtllm_rollingbatch_deploy_llama_13b.ipynb) on how to run inference using the TensorRT-LLM DLC. The snippet below shows an example of the updated model_id.