
[Example] Fix Qwen VL ignore list #1545


Merged
brian-dellabetta merged 2 commits into vllm-project:main on Jun 13, 2025

Conversation

arunmadhusud (Contributor)

SUMMARY:

  • Transformers v4.52 maps the weight names for Qwen2-VL/Qwen2.5-VL (https://github.com/huggingface/transformers/blob/de4cf5a38e9678b9e465867a8a6b88ea727bea52/src/transformers/models/qwen2_vl/modeling_qwen2_vl.py#L1359-L1362), so checkpoints saved with transformers v4.52 or later will have different weight names.
  • Update the ignore list to prevent the vision model from being quantized.

CHANGES

  • Add "model.visual." to the ignore list (see the sketch below)

TEST PLAN:
Run qwen2_vl_example.py
Run qwen_2_5_vl_example.py
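
For reference, the change itself is a one-line addition to the quantization recipe in each example script. Below is a minimal sketch of the updated recipe, assuming the examples build it with GPTQModifier and a W4A16 scheme; the surrounding argument values are illustrative and may differ from the shipped scripts.

```python
# Minimal sketch; only the `ignore` entries reflect this PR, the other
# arguments are illustrative assumptions.
from llmcompressor.modifiers.quantization import GPTQModifier

recipe = GPTQModifier(
    targets="Linear",
    scheme="W4A16",
    ignore=[
        "lm_head",
        "re:visual.*",        # weight names in checkpoints saved before transformers v4.52
        "re:model.visual.*",  # weight names in checkpoints saved with v4.52+
    ],
)
```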


👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite; please add the label only once the PR is code complete and local testing has been performed.

gemini-code-assist bot left a comment

Summary of Changes

Hello @arunmadhusud, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a change in the Hugging Face Transformers library (v4.52+) regarding the naming of weights for Qwen2-VL and Qwen2.5-VL models. To maintain compatibility and correctly prevent the vision model weights from being quantized, I've updated the quantization configuration in the relevant example scripts.

Highlights

  • Quantization Ignore List: Updated the quantization ignore list for Qwen2-VL and Qwen2.5-VL examples to include the new weight naming convention introduced in Transformers v4.52.
  • Vision Model Exclusion: Ensured that the vision components of the Qwen VL models are explicitly excluded from quantization by adding the 're:model.visual.*' pattern to the ignore list (illustrated just below).
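
To make the new pattern concrete, here is a small standalone illustration. The module names are invented for the example, and it assumes compressed-tensors matches "re:"-prefixed ignore entries against module names with re.match-style (start-anchored) semantics:

```python
import re

# Invented module names illustrating the two checkpoint layouts.
names = [
    "visual.blocks.0.attn.qkv",         # pre-v4.52 naming
    "model.visual.blocks.0.attn.qkv",   # v4.52+ naming
    "model.layers.0.self_attn.q_proj",  # language model, should stay quantized
]

ignore = ["re:visual.*", "re:model.visual.*"]
patterns = [p[len("re:"):] for p in ignore]

for name in names:
    skipped = any(re.match(p, name) for p in patterns)
    print(f"{name}: {'ignored' if skipped else 'quantized'}")

# Without "re:model.visual.*", the second name would not match, and the
# vision tower of a v4.52+ checkpoint would be quantized by mistake.
```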

gemini-code-assist bot left a comment

Code Review

This pull request updates the ignore list for quantization in the Qwen2-VL and Qwen2.5-VL example scripts. The change adds the pattern "re:model.visual.*" to the list. This addition is necessary to correctly ignore vision model weights due to a naming convention change in transformers library version 4.52 and later, as detailed in the PR description.

The modification is applied consistently in both affected example files (qwen2_vl_example.py and qwen_2_5_vl_example.py). The change appears correct and directly addresses the stated problem, ensuring that the vision components of these models are not inadvertently quantized when using newer versions of the transformers library. The test plan of running the example scripts is appropriate for verifying this type of change. No issues of medium or higher severity were identified.
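
For context on the naming change the review refers to: transformers v4.52 added a `_checkpoint_conversion_mapping` to these model classes that rewrites old checkpoint prefixes to the new nested layout at load time. Paraphrased rather than quoted from the linked source, the Qwen2-VL mapping is roughly of this shape:

```python
# Paraphrased sketch, not verbatim from modeling_qwen2_vl.py: regex keys
# on old prefixes map to the new nested module prefixes applied on load.
_checkpoint_conversion_mapping = {
    "^visual": "model.visual",  # visual.* -> model.visual.*
    r"^model(?!\.(language_model|visual))": "model.language_model",
}
```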

brian-dellabetta (Collaborator)
Thank you @arunmadhusud for the contribution! We were running into this issue and needed to debug this. I will validate locally and try to get this in if it's all good

brian-dellabetta (Collaborator)
Resolves neuralmagic/compressed-tensors#353

brian-dellabetta (Collaborator) left a comment

Tested locally. I was able to reproduce the error on latest transformers, and confirm that the example file runs successfully on this branch.
This resolves neuralmagic/compressed-tensors#353

shanjiaz (Collaborator) left a comment

LGTM!

brian-dellabetta merged commit 60511ab into vllm-project:main on Jun 13, 2025
8 checks passed
dsikka added a commit that referenced this pull request Jun 14, 2025
SUMMARY:

- Update the ignore list as per #1545
- Note: We don't use an ignore list for the Qwen2.5-VL FP8_Dynamic test?
aireilly pushed a commit to aireilly/llm-compressor that referenced this pull request Jul 30, 2025
**SUMMARY:**

- Transformers v4.52 maps the weight names for Qwen2-VL/Qwen2.5-VL:
https://github.com/huggingface/transformers/blob/de4cf5a38e9678b9e465867a8a6b88ea727bea52/src/transformers/models/qwen2_vl/modeling_qwen2_vl.py#L1359-L1362
- So Qwen2-VL/Qwen2.5-VL checkpoints saved with transformers v4.52 or later will have different weight names.
- Update the ignore list to prevent the vision model from being quantized.

**CHANGES**

- Add "model.visual." to the ignore list


**TEST PLAN:**
Run
[qwen2_vl_example.py](https://github.com/vllm-project/llm-compressor/blob/main/examples/multimodal_vision/qwen2_vl_example.py)
Run
[qwen_2_5_vl_example.py](https://github.com/vllm-project/llm-compressor/blob/main/examples/multimodal_vision/qwen_2_5_vl_example.py)
aireilly pushed a commit to aireilly/llm-compressor that referenced this pull request Jul 30, 2025
SUMMARY:

- Update the ignore list as per vllm-project#1545
- Note: We don't use an ignore list for the Qwen2.5-VL FP8_Dynamic test?
Labels: ready (When a PR is ready for review)