[Frontend] [Core] Integrate Tensorizer into extant S3 loading machinery, allow passing arbitrary arguments during save/load #19616


Conversation


@sangstar sangstar commented Jun 13, 2025

Tensorizer and S3Model loading integrated, updated tensorizer==2.10.0, support passing all TensorSerializer and TensorDeserializer params

This PR does the following:

  • Integrates Tensorizer loading into the S3Model machinery, which is now used seamlessly to load all non-tensor model artifacts.
  • Consequently, when serializing, Tensorizer now saves not only the model tensors but all model artifacts needed to run a model on vLLM, relying on huggingface_hub's snapshot_download.
  • This means that when loading with Tensorizer, it is no longer necessary to provide Tensorizer args with --model-loader-extra-config. Providing an S3 directory in the model tag lets Tensorizer resolve everything on its own, as long as all model artifacts for served_model_name live in that directory and Tensorizer can authenticate to S3 (which it does with the usual boto3-style AWS environment variables, the s3cmd-style environment variables, an ~/.s3cfg file, or the ~/.aws/ config and credentials files in one's home directory). For example, after serializing a model with Tensorizer, this now works:
vllm serve s3://my-bucket/vllm/facebook/opt-125m/v1 --load-format=tensorizer
  • The original workflow of specifying --model-loader-extra-config is still supported, and it now accepts additional nested serialization_kwargs and deserialization_kwargs JSONs, which allow configuring TensorSerializer and TensorDeserializer with arbitrary parameters (as long as they do not conflict with vLLM); see the sketch after this list.
  • Updated vLLM's tensorizer version to ==2.10.0. This version comes with the boto3-style credential support.
  • Additionally fixes the regex when parsing --model-loader-extra-config to respect newlines.
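
For illustration, here is a minimal sketch of the still-supported explicit form with a nested kwargs JSON. The bucket path matches the example above, tensorizer_uri and deserialization_kwargs are the config fields described in this PR, and the num_readers value is purely illustrative of a TensorDeserializer keyword argument:

vllm serve facebook/opt-125m \
    --load-format tensorizer \
    --model-loader-extra-config '{
        "tensorizer_uri": "s3://my-bucket/vllm/facebook/opt-125m/v1/model.tensors",
        "deserialization_kwargs": {"num_readers": 4}
    }'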

wbrown and others added 30 commits May 8, 2025 14:14
chore: Push upstream changes
Signed-off-by: Sanger Steel <[email protected]>
Signed-off-by: Sanger Steel <[email protected]>
Some changes to `TensorizerConfig` have added a few
parameters that are used for convenience internally,
but are exposed as public parameters. This unnecessarily
complicates `TensorizerConfig` as it makes it seem like
these are important parameters users need to understand
and contend with to use `TensorizerConfig` with
the public-facing API. They have been made private,
so users can disregard them and have fewer parameters to
wrap their heads around.
Signed-off-by: Sanger Steel <[email protected]>
Adjusts the regex string in `arg_utils.parse_type` to allow
for newlines within the JSON string

Signed-off-by: Sanger Steel <[email protected]>
Simply call `snapshot_download` to a tempdir and serialize that to
S3 for model artifacts, completely decoupling Tensorizer from the
original machinery needed to load specific files.

Signed-off-by: Sanger Steel <[email protected]>
Signed-off-by: Sanger Steel <[email protected]>
Since `model_loader_extra_config` can be a `TensorizerConfig` instance
as well as a dict, add a `__getitem__` method to `TensorizerConfig` and
fix checker function to work without importing `TensorizerConfig` (that
would've caused a circular import)

Signed-off-by: Sanger Steel <[email protected]>
Apply the batch of commits suggested in this review.

Co-authored-by: Eta <[email protected]>
Also fixes the logic for parsing the different permutations of
using the example script, based on whether args are passed
directly as CLI args or packaged in
--model-loader-extra-config

Signed-off-by: Sanger Steel <[email protected]>
sangstar added 9 commits June 12, 2025 15:53
…end-and-docs

feat: Allow passing arbitrary Tensorizer serialization and deserialization kwargs; update docs
…make-cli-streamlined

feat: Allow serializing and deserializing with Tensorizer without passing `--model-loader-extra-config`
Signed-off-by: Sanger Steel <[email protected]>
…-aws-update-and-any-kwargs

Signed-off-by: Sanger Steel <[email protected]>

# Conflicts:
#	setup.py
#	tests/tensorizer_loader/conftest.py
#	tests/tensorizer_loader/test_tensorizer.py
#	vllm/model_executor/model_loader/tensorizer.py
#	vllm/model_executor/model_loader/tensorizer_loader.py

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, which covers a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @sangstar, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the integration of Tensorizer with vLLM, particularly for S3-based workflows. It simplifies the loading process by allowing direct specification of the serialized model directory, ensures that serialization captures all necessary model files, and provides greater flexibility by enabling users to pass specific arguments to Tensorizer's core components during both serialization and deserialization.

Highlights

  • Simplified Tensorizer Loading: Tensorizer models can now be loaded directly by providing the S3 or local directory path containing the model.tensors file and other artifacts in the standard --model argument, eliminating the need for the --model-loader-extra-config JSON string in many cases.
  • Comprehensive Serialization: Tensorizer serialization now automatically includes non-tensor model artifacts (like config and tokenizer files) by downloading them from Hugging Face Hub, ensuring a complete model package is saved.
  • Arbitrary Tensorizer Arguments: Added support for passing arbitrary keyword arguments to TensorSerializer, TensorDeserializer, and open_stream via the serialization_kwargs, deserialization_kwargs, and stream_kwargs fields within the model_loader_extra_config JSON (a serialization-side sketch follows this list).
  • Tensorizer Version Update: Updated the required Tensorizer version to 2.10.0, which includes enhanced S3 credential handling.
  • Improved Config Parsing: Fixed the regex used to parse the --model-loader-extra-config JSON string to correctly handle whitespace and newlines.
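
For the serialization side, here is a minimal hedged sketch in Python. It assumes vLLM's tensorize_vllm_model helper together with the new serialization_kwargs field described above; the limit_cpu_concurrency keyword is illustrative only and stands in for whatever TensorSerializer parameters are actually needed:

# Hedged sketch: serialize a model (tensors plus all other artifacts) to S3
# while forwarding an arbitrary kwarg to TensorSerializer.
from vllm.engine.arg_utils import EngineArgs
from vllm.model_executor.model_loader.tensorizer import (
    TensorizerConfig, tensorize_vllm_model)

engine_args = EngineArgs(model="facebook/opt-125m")
config = TensorizerConfig(
    tensorizer_uri="s3://my-bucket/vllm/facebook/opt-125m/v1/model.tensors",
    serialization_kwargs={"limit_cpu_concurrency": 4},  # illustrative kwarg
)
tensorize_vllm_model(engine_args, config)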

@mergify mergify bot added the documentation, ci/build, llama, and v1 labels Jun 13, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request enhances Tensorizer integration within vLLM with a focus on seamless S3 loading, version updates, and flexible argument passing. Key improvements include simplified S3 artifact resolution, comprehensive artifact handling, and the ability to pass arbitrary keyword arguments to Tensorizer's core components. The refactoring of TensorizerConfig and related classes improves clarity and robustness. The changes are well-tested and improve the user experience. Two minor suggestions relate to potential log verbosity and adding code comments.

Comment on lines +237 to +240
logger.info(
"Updating TensorizerConfig with %s from "
"--model-loader-extra-config provided", k
)

medium

The logging statement is inside a loop. If extra_cfg contains many keys, this could lead to a large number of log messages. Consider if a summary log message before or after the loop, or using logger.debug for per-key updates, might be more appropriate.
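
A hedged sketch of the suggested pattern, assuming extra_cfg holds the user-provided keys and that the per-key update is a simple attribute assignment (the real update may differ):

for k, v in extra_cfg.items():
    # Per-key details at debug level only.
    logger.debug(
        "Updating TensorizerConfig with %s from "
        "--model-loader-extra-config provided", k)
    setattr(tensorizer_config, k, v)  # illustrative update
# One summary line at info level after the loop.
logger.info(
    "Updated TensorizerConfig with %d keys from --model-loader-extra-config",
    len(extra_cfg))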

Comment on lines +978 to +981
logger.info(
"Using Tensorizer args from --model-loader-extra-config. "
"Note that you can now simply pass the S3 directory in the "
"model tag instead of providing the JSON string.")

medium

The log message within the no_valid_tensorizer_args_in_model_loader_extra_config method could be quite verbose if self.model_loader_extra_config is large or complex. Consider changing this to logger.debug if the detailed content of model_loader_extra_config is primarily for debugging purposes.

@@ -56,7 +56,8 @@

def _parse_type(val: str) -> T:
try:
if return_type is json.loads and not re.match("^{.*}$", val):
if return_type is json.loads and not re.match(
r"(?s)^\s*{.*}\s*$", val):

medium

The regex r"(?s)^\s*{.*}\s*$" is used to check if a string is a JSON object. The (?s) flag enables dotall mode, allowing the dot (.) to match newline characters. This is important for parsing multi-line JSON strings.
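
A quick illustration of the difference:

import re

multi_line_json = '{\n  "deserialization_kwargs": {"num_readers": 4}\n}\n'
# The old pattern rejects JSON spanning multiple lines, since "." stops at "\n".
assert re.match("^{.*}$", multi_line_json) is None
# With (?s) (dotall) and whitespace-tolerant anchors, the same string matches.
assert re.match(r"(?s)^\s*{.*}\s*$", multi_line_json) is not None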

Comment on lines +67 to +71
if not isinstance(loaded, dict):
raise argparse.ArgumentTypeError(
f"Not deserializable to dict: {value}. serialization_kwargs and "
f"deserialization_kwargs must be "
f"deserializable from a JSON string to a dictionary. ")

medium

The tensorizer_kwargs_arg function raises an argparse.ArgumentTypeError if the input is not deserializable to a dictionary. This is a good check to ensure that the user provides valid JSON for serialization and deserialization kwargs.
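
A hedged sketch of how such an argparse type-validator typically fits together; everything beyond the snippet above is an assumption:

import argparse
import json

def tensorizer_kwargs_arg(value: str) -> dict:
    # Parse the CLI string as JSON and insist on a dict result.
    try:
        loaded = json.loads(value)
    except json.JSONDecodeError as e:
        raise argparse.ArgumentTypeError(f"Invalid JSON: {value}") from e
    if not isinstance(loaded, dict):
        raise argparse.ArgumentTypeError(
            f"Not deserializable to dict: {value}. serialization_kwargs and "
            f"deserialization_kwargs must be "
            f"deserializable from a JSON string to a dictionary. ")
    return loaded

# Illustrative usage as an argparse type:
# parser.add_argument("--serialization-kwargs", type=tensorizer_kwargs_arg)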

Comment on lines 479 to 486
tensorizer_args = tensorizer_config._construct_tensorizer_args()
before_mem = get_mem_usage()
start = time.perf_counter()
with _read_stream(
if not is_valid_deserialization_uri(
tensorizer_config.tensorizer_uri):
raise ValueError(
f"{tensorizer_config.tensorizer_uri} is not a valid "
f"tensorizer URI. Please check that the URI is correct. "
f"It must either point to a local existing file, or have a "
f"S3, HTTP or HTTPS scheme.")

medium

The check if not is_valid_deserialization_uri(tensorizer_config.tensorizer_uri) ensures that the provided URI is valid (either a local file or an S3/HTTP(S) URI). This helps prevent errors due to invalid paths.
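
A hedged sketch of what such a URI validator could look like (the actual helper in the PR may differ):

import os
from urllib.parse import urlparse

def is_valid_deserialization_uri(uri: str) -> bool:
    # Accept s3/http/https schemes, or an existing local file.
    scheme = urlparse(uri).scheme
    return scheme in ("s3", "http", "https") or os.path.isfile(uri)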

Comment on lines +250 to +251
dtype=tensorizer_config._model_cls_dtype,
**tensorizer_args.deserialization_kwargs)

medium

Using dtype=tensorizer_config._model_cls_dtype ensures that the tensors are deserialized with the correct data type, as inferred from the model class. This is important for maintaining consistency and avoiding type-related errors.

Comment on lines +974 to +976
self.no_valid_tensorizer_args_in_model_loader_extra_config()):
logger.info("Inferring Tensorizer args from %s", self.model)
self.model_loader_extra_config = {"tensorizer_dir": self.model}

medium

The logic to infer Tensorizer arguments from the model name when --model-loader-extra-config is not provided simplifies the user experience. It allows users to simply specify the S3 directory in the model tag instead of providing a JSON string.

Labels
ci/build, documentation, llama, v1