[Frontend] [Core] Integrate Tensorizer into extant S3 loading machinery, allow passing arbitrary arguments during save/load #19616
Conversation
Commits in this PR:

- chore: Push upstream changes
- …` and add to test
- …serializer`
- …el files
- Make convenience parameters on `TensorizerConfig` private. Some changes to `TensorizerConfig` added a few parameters that are used internally for convenience but were exposed as public. This unnecessarily complicates `TensorizerConfig`, making it seem as though these are important parameters users must understand and contend with to use `TensorizerConfig` with the public-facing API. They have been made private, so users can disregard them and have fewer parameters to wrap their heads around.
- Adjust the regex string in `arg_utils.parse_type` to allow newlines within the JSON string.
- Simply call `snapshot_download` to a tempdir and serialize that to S3 for model artifacts, completely decoupling Tensorizer from the original machinery needed to load specific files.
- Since `model_loader_extra_config` can be a `TensorizerConfig` instance as well as a dict, add a `__getitem__` method to `TensorizerConfig`, and fix the checker function to work without importing `TensorizerConfig` (which would have caused a circular import). A minimal sketch of this pattern follows the list.
- Apply the batch of commits suggested in this review. (Co-authored-by: Eta <[email protected]>)
- Fix the logic for parsing the different permutations of using the example script, based on whether args are passed directly as CLI args or packaged in `--model-loader-extra-config`.
- …end-and-docs feat: Allow passing arbitrary Tensorizer serialization and deserialization kwargs; update docs
- …make-cli-streamlined feat: Allow serializing and deserializing with Tensorizer without passing `--model-loader-extra-config`
- …-aws-update-and-any-kwargs (merge; conflicts in: setup.py, tests/tensorizer_loader/conftest.py, tests/tensorizer_loader/test_tensorizer.py, vllm/model_executor/model_loader/tensorizer.py, vllm/model_executor/model_loader/tensorizer_loader.py)
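The `__getitem__` commit lets a `TensorizerConfig` instance be indexed like the dict form of `model_loader_extra_config`. A minimal sketch of the idea, with hypothetical field defaults rather than the PR's actual class:

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class TensorizerConfig:
    tensorizer_uri: str = ""
    tensorizer_dir: str = ""

    def __getitem__(self, key: str) -> Any:
        # Lets callers that expect a dict-style extra config
        # (config["tensorizer_dir"]) work unchanged on a config object.
        return getattr(self, key)


config = TensorizerConfig(tensorizer_dir="s3://bucket/model-dir")
assert config["tensorizer_dir"] == "s3://bucket/model-dir"
```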
Summary of Changes
Hello @sangstar, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly enhances the integration of Tensorizer with vLLM, particularly for S3-based workflows. It simplifies the loading process by allowing direct specification of the serialized model directory, ensures that serialization captures all necessary model files, and provides greater flexibility by enabling users to pass specific arguments to Tensorizer's core components during both serialization and deserialization.
Highlights
- Simplified Tensorizer Loading: Tensorizer models can now be loaded directly by providing the S3 or local directory path containing the `model.tensors` file and other artifacts in the standard `--model` argument, eliminating the need for the `--model-loader-extra-config` JSON string in many cases.
- Comprehensive Serialization: Tensorizer serialization now automatically includes non-tensor model artifacts (like config and tokenizer files) by downloading them from Hugging Face Hub, ensuring a complete model package is saved.
- Arbitrary Tensorizer Arguments: Added support for passing arbitrary keyword arguments to `TensorSerializer`, `TensorDeserializer`, and `open_stream` via the `serialization_kwargs`, `deserialization_kwargs`, and `stream_kwargs` fields within the `model_loader_extra_config` JSON. (See the sketch after this list.)
- Tensorizer Version Update: Updated the required Tensorizer version to 2.10.0, which includes enhanced S3 credential handling.
- Improved Config Parsing: Fixed the regex used to parse the `--model-loader-extra-config` JSON string to correctly handle whitespace and newlines.
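To make the kwargs-passing concrete, here is a hedged sketch of what such a configuration could look like through vLLM's Python entrypoint. The S3 path is hypothetical, and `num_readers` and `s3_endpoint` are illustrative Tensorizer options rather than values this PR prescribes:

```python
from vllm import LLM

# Sketch only: assumes a model already serialized with Tensorizer at this
# (hypothetical) S3 prefix, and that the environment carries valid AWS
# credentials via one of the supported mechanisms.
llm = LLM(
    model="s3://my-bucket/my-model",
    load_format="tensorizer",
    model_loader_extra_config={
        # Forwarded to TensorDeserializer at load time (illustrative kwarg).
        "deserialization_kwargs": {"num_readers": 4},
        # Forwarded to open_stream when reading the tensors (illustrative kwarg).
        "stream_kwargs": {"s3_endpoint": "s3.amazonaws.com"},
    },
)
```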
Code Review
This pull request enhances Tensorizer integration within vLLM with a focus on seamless S3 loading, version updates, and flexible argument passing. Key improvements include simplified S3 artifact resolution, comprehensive artifact handling, and the ability to pass arbitrary keyword arguments to Tensorizer's core components. The refactoring of `TensorizerConfig` and related classes improves clarity and robustness. The changes are well-tested and improve the user experience. Two minor suggestions relate to potential log verbosity and adding code comments.
```python
logger.info(
    "Updating TensorizerConfig with %s from "
    "--model-loader-extra-config provided", k
)
```
```python
logger.info(
    "Using Tensorizer args from --model-loader-extra-config. "
    "Note that you can now simply pass the S3 directory in the "
    "model tag instead of providing the JSON string.")
```
```diff
@@ -56,7 +56,8 @@
 def _parse_type(val: str) -> T:
     try:
-        if return_type is json.loads and not re.match("^{.*}$", val):
+        if return_type is json.loads and not re.match(
+                r"(?s)^\s*{.*}\s*$", val):
```
```python
if not isinstance(loaded, dict):
    raise argparse.ArgumentTypeError(
        f"Not deserializable to dict: {value}. serialization_kwargs and "
        f"deserialization_kwargs must be "
        f"deserializable from a JSON string to a dictionary. ")
```
```diff
 tensorizer_args = tensorizer_config._construct_tensorizer_args()
 before_mem = get_mem_usage()
 start = time.perf_counter()
-with _read_stream(
+if not is_valid_deserialization_uri(
+        tensorizer_config.tensorizer_uri):
+    raise ValueError(
+        f"{tensorizer_config.tensorizer_uri} is not a valid "
+        f"tensorizer URI. Please check that the URI is correct. "
+        f"It must either point to a local existing file, or have a "
+        f"S3, HTTP or HTTPS scheme.")
```
```python
dtype=tensorizer_config._model_cls_dtype,
**tensorizer_args.deserialization_kwargs)
```
```python
        self.no_valid_tensorizer_args_in_model_loader_extra_config()):
    logger.info("Inferring Tensorizer args from %s", self.model)
    self.model_loader_extra_config = {"tensorizer_dir": self.model}
```
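This excerpt is the fallback that makes the streamlined CLI work: when no usable Tensorizer args arrive via `--model-loader-extra-config`, the model tag itself is treated as the tensorizer directory. A conceptual before/after of that step, with a hypothetical bucket name and plain variables standing in for vLLM internals:

```python
model = "s3://my-bucket/serialized-model"  # passed as the --model tag
model_loader_extra_config = None           # user supplied no JSON config

# ...after the fallback above runs:
model_loader_extra_config = {"tensorizer_dir": model}
assert model_loader_extra_config["tensorizer_dir"] == model
```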
Tensorizer and `S3Model` loading integrated, updated `tensorizer==2.10.0`, support passing all `TensorSerializer` and `TensorDeserializer` params

This PR does the following:

- Integrates `Tensorizer` loading into the `S3Model` machinery. It can now be used seamlessly with it to load all non-tensor model artifacts.
- `Tensorizer`, when serializing, now serializes not only the model tensors but all model artifacts needed to run a model on vLLM, relying on `huggingface_hub`'s `snapshot_download`.
- Allows loading with Tensorizer without `--model-loader-extra-config`. Providing an S3 directory in the model tag allows Tensorizer to resolve everything, as long as all model artifacts for `served_model_name` are in the aforementioned directory and Tensorizer can authenticate to S3 (which it does with the usual boto3-style AWS environment variables, the `s3cmd`-style environment variables, an `~/.s3cfg` file, or the `~/.aws/` config and credential files in one's home directory). For example, after serializing a model with Tensorizer, this now works (see the sketch after this list).
- `--model-loader-extra-config` is still supported, and can accept additional nested `serialization_kwargs` and `deserialization_kwargs` JSONs, which allow configuring `TensorSerializer` and `TensorDeserializer` with arbitrary parameters (as long as they do not conflict with vLLM).
- Updates the `tensorizer` version to `==2.10.0`. This version comes with the boto3-style credential support.
- Fixes the regex for `--model-loader-extra-config` to respect newlines.
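The exact command from the PR description was not preserved above, so here is a hedged sketch of the streamlined flow it describes, using vLLM's Python entrypoint and a hypothetical S3 prefix:

```python
from vllm import LLM

# Assumes the directory below holds the Tensorizer-serialized tensors plus
# the non-tensor artifacts (config, tokenizer) this PR now saves alongside
# them, and that AWS credentials are available via any supported mechanism
# (boto3/s3cmd environment variables, ~/.s3cfg, or ~/.aws/ files).
llm = LLM(
    model="s3://my-bucket/my-served-model",  # hypothetical serialized-model dir
    load_format="tensorizer",
)
print(llm.generate("Hello, world!"))
```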