[Model] Support HF format of minimax #20211


Merged: 4 commits from support-minimax into vllm-project:main on Jul 11, 2025

Conversation

@mgoin (Member) commented Jun 28, 2025

Purpose

#20199 set us up to test the older format of MiniMaxAI/MiniMax-Text-01 that we initially implemented. This PR makes the small changes needed to support the new HF format within the existing model definition, as well as the new explicit format at https://huggingface.co/MiniMaxAI/MiniMax-Text-01-hf. It is based on the changes to the model config, which essentially amount to renames of existing config keys: https://huggingface.co/MiniMaxAI/MiniMax-Text-01/commit/130e821701f3244ab18bab619dbd6fa9b5a92d28
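
For illustration, renames like these can be absorbed with a simple fallback when reading the config. A minimal sketch, assuming hypothetical key names modeled on the linked config diff; the actual code in vLLM's minimax_text_01 implementation may differ:

```python
from types import SimpleNamespace

def get_with_fallback(hf_config, new_name, old_name, default=None):
    """Prefer the new HF-format config key, falling back to the legacy key."""
    if hasattr(hf_config, new_name):
        return getattr(hf_config, new_name)
    return getattr(hf_config, old_name, default)

# Legacy-style config stub; both key names below are assumptions modeled
# on the linked config diff, shown only to illustrate the fallback.
cfg = SimpleNamespace(layernorm_full_attention_alpha=1.0)
alpha = get_with_fallback(cfg, "full_attn_alpha_factor",
                          "layernorm_full_attention_alpha", 1.0)
```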

Test Plan

Test using the model registry. It would be nice to run an evaluation, but I don't have a machine that can load the ~460B model at the moment.
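
A minimal registry smoke test along these lines, using vLLM's public ModelRegistry API (the legacy architecture name MiniMaxText01ForCausalLM is an assumption; MiniMaxForCausalLM is the new name this PR registers):

```python
from vllm import ModelRegistry

supported = ModelRegistry.get_supported_archs()
# After this PR, both the legacy architecture name and the new HF-format
# name should resolve to the same minimax_text_01 implementation.
assert "MiniMaxText01ForCausalLM" in supported
assert "MiniMaxForCausalLM" in supported
```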

Test Result

mgoin added the new-model label on Jun 28, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, which executes a small, essential subset of CI tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@gemini-code-assist (bot) left a comment


Summary of Changes

Hello @mgoin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enables vLLM to support the latest Hugging Face configuration format for the MiniMaxAI/MiniMax-Text-01 model. It involves adapting the model's parameter parsing logic to accommodate new naming conventions and structures for attention and normalization layers, ensuring that models saved in this new format can be loaded and run correctly.

Highlights

  • New Model Format Support: Added support for the new Hugging Face configuration format for the MiniMaxAI/MiniMax-Text-01 model, allowing vLLM to correctly interpret its updated structure.
  • Configuration Parsing Updates: Modified the minimax_text_01 model's initialization to dynamically retrieve layernorm alpha/beta factors and attention layer types (linear_attention vs. full_attention) from the new HF config's naming conventions, including a fallback mechanism for parameters and a mapping for layer types (see the sketch after this list).
  • Internal Consistency: Updated internal references within the minimax_text_01 model from self.config.attn_type_list to self.model.decoder_attention_types to align with the newly parsed attention type information, ensuring correct calculation of flash layer counts and weight loading.
  • Registry Updates: Registered the new MiniMaxForCausalLM model name in both the test and main model registries, linking it to the existing minimax_text_01 implementation.
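
A sketch of the layer-type handling described in the "Configuration Parsing Updates" item above; the layer_types attribute name and the int-to-string mapping are assumptions based on this summary, not the exact vLLM code:

```python
from types import SimpleNamespace

# Legacy format: attn_type_list holds ints (assumed 0 = linear attention,
# 1 = full attention). New HF format: explicit per-layer string types.
LAYER_TYPE_MAP = {0: "linear_attention", 1: "full_attention"}

def decoder_attention_types(hf_config):
    if hasattr(hf_config, "layer_types"):  # new HF format
        return list(hf_config.layer_types)
    return [LAYER_TYPE_MAP[t] for t in hf_config.attn_type_list]  # legacy

# Works for either format; the model can then count its full-attention
# (flash) layers from this list instead of reading attn_type_list directly.
legacy_cfg = SimpleNamespace(attn_type_list=[0, 0, 0, 1])
types = decoder_attention_types(legacy_cfg)
num_flash_layers = sum(t == "full_attention" for t in types)  # -> 1
```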

@gemini-code-assist (bot) left a comment


Code Review

This pull request adds support for the new Hugging Face format of MiniMaxAI/MiniMax-Text-01 while maintaining backward compatibility with the older format. The changes are logical and correctly implemented. I've provided a couple of suggestions to improve code readability and maintainability by using more idiomatic Python constructs.

@ywang96 (Member) commented Jun 28, 2025

FYI @qscqesze

mgoin added the ready label on Jun 28, 2025
@qscqesze (Contributor) commented Jul 9, 2025

Sorry for the late response — I just saw this PR. It's great, LGTM!

@qscqesze (Contributor) commented Jul 9, 2025

The -hf repository isn't our main focus; it's a special version tailored for the transformers library, created because some config parameters conflicted with the original framework. In vLLM it's optional, since it's unlikely anyone would use vLLM to deploy a version meant specifically for transformers.

@mgoin (Member, Author) commented Jul 9, 2025

Okay, thanks a lot for the context @qscqesze. I only made this because the original model checkpoint was changed in this commit, but I see now that it has been changed back.

I'll still work on adding the HF support since new quantized versions could use it.

mgoin enabled auto-merge (squash) on July 10, 2025
mgoin merged commit 922f316 into vllm-project:main on Jul 11, 2025
69 checks passed
mgoin deleted the support-minimax branch on July 11, 2025
Chen-zexi pushed a commit to Chen-zexi/vllm that referenced this pull request Jul 13, 2025
patrickvonplaten pushed a commit to patrickvonplaten/vllm that referenced this pull request Jul 15, 2025
LyrisZhong pushed a commit to LyrisZhong/vllm that referenced this pull request Jul 23, 2025
avigny pushed a commit to avigny/vllm that referenced this pull request Jul 31, 2025
Pradyun92 pushed a commit to Pradyun92/vllm that referenced this pull request Aug 6, 2025
npanpaliya pushed a commit to odh-on-pz/vllm-upstream that referenced this pull request Aug 6, 2025
Labels: new-model, ready
Projects: none yet
3 participants