[Multimodal] Use fast processor for Qwen2/2.5-VL #19789
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
Summary of Changes
Hello @WoosukKwon, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request aims to optimize the performance of the Qwen2 and Qwen2.5-VL models by ensuring that their respective processors, including the image processor for Qwen2-VL, consistently use the faster implementation by default. This change addresses a scenario where the system could inadvertently fall back to a slower processor due to the absence of an explicit `use_fast` flag in the initialization parameters.
Highlights
- Processor Initialization: I've updated the `get_hf_processor` functions across `qwen2_5_omni_thinker.py`, `qwen2_5_vl.py`, and `qwen2_vl.py` to ensure that the `use_fast` parameter defaults to `True` when initializing the Qwen processor. Previously, if `use_fast` was not explicitly provided in `kwargs`, it would default to `False`, potentially leading to the use of a slower processor.
- Image Processor Default: In `qwen2_vl.py`, I've also added an explicit line within the `get_image_processor` function to set `kwargs["use_fast"] = kwargs.get("use_fast", True)`. This ensures consistency and guarantees the fast image processor is used by default for Qwen2-VL models (a short sketch of the pattern follows below).
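As a rough illustration of the pattern the PR applies (a minimal sketch, not the actual vLLM code; the standalone function shape and the use of `AutoProcessor` here are my own assumptions):

```python
from transformers import AutoProcessor

def get_hf_processor(model_name: str, **kwargs):
    # If the caller did not pass use_fast, kwargs.get("use_fast") would return
    # None, which ends up selecting the slow processor; defaulting to True keeps
    # the fast implementation unless the caller explicitly opts out.
    kwargs["use_fast"] = kwargs.get("use_fast", True)
    return AutoProcessor.from_pretrained(model_name, **kwargs)

processor = get_hf_processor("Qwen/Qwen2-VL-2B-Instruct")
```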
Code Review
This pull request aims to ensure that the fast processor is used by default for Qwen2/2.5-VL models by changing `kwargs.get("use_fast")` to `kwargs.get("use_fast", True)`. This is a sensible change, as it makes the behavior explicit and aligns with the expectation of using the more performant fast processor unless otherwise specified.
The changes are applied consistently across `qwen2_5_omni_thinker.py`, `qwen2_5_vl.py`, and `qwen2_vl.py`.
Specifically:
- In the `get_hf_processor` methods, the `use_fast` parameter for the main processor and the image processor (when called from `get_hf_processor`) now defaults to `True`.
- In `vllm/model_executor/models/qwen2_vl.py`, the `get_image_processor` method also ensures `use_fast` defaults to `True`. This is important for calls to `get_image_processor` from contexts other than `get_hf_processor` (e.g., from utility methods like `_get_vision_info`), ensuring consistent behavior (the snippet below spells out the `dict.get` subtlety).
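For context, the behavioral difference boils down to `dict.get`'s default value (plain Python, just to spell out the subtlety the review describes):

```python
kwargs = {}  # caller did not supply use_fast

# Old pattern: returns None when the key is absent, which downstream code
# treats as "fast processor not requested".
print(kwargs.get("use_fast"))        # None

# New pattern: returns True unless the caller explicitly passed a value.
print(kwargs.get("use_fast", True))  # True

# An explicit opt-out still works with the new pattern.
print({"use_fast": False}.get("use_fast", True))  # False
```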
The code modifications are clear, concise, and directly address the described problem. I found no issues of `medium`, `high`, or `critical` severity in the provided diffs.
One minor point regarding the PR process: the PR description checklist (Purpose, Test Plan, Test Result) is not fully completed. While the purpose is clear from the description body, including a brief test plan and results (even if only manual verification) would increase confidence in the change.
I vaguely remember there was some discussion around correctness issues with the fast image processor for Qwen2-VL, but I'm not sure if it's been resolved. cc @Isotr0py to confirm
At least in Transformers v4.52.4, the fast processor still has numeric divergence from the slow one. But in my test, the difference is quite minor now (only a 0.3% difference for a 2048x1365 image), which is unlikely to affect the generation results very much. So it should be fine to use the fast processor IMO:

import requests
import torch
from io import BytesIO
from PIL import Image
from transformers import Qwen2VLImageProcessorFast, Qwen2VLImageProcessor

slow_processor = Qwen2VLImageProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")
fast_processor = Qwen2VLImageProcessorFast.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")

image = Image.open(BytesIO(requests.get("https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg").content))

fast_output = fast_processor(image, return_tensors="pt")["pixel_values"]
slow_output = slow_processor(image, return_tensors="pt")["pixel_values"]

torch.testing.assert_close(fast_output, slow_output)
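For reference, one way to reproduce the relative-difference figure quoted above (a rough sketch; the choice of metric is mine, not from the original comment):

```python
import requests
import torch
from io import BytesIO
from PIL import Image
from transformers import Qwen2VLImageProcessorFast, Qwen2VLImageProcessor

slow_processor = Qwen2VLImageProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")
fast_processor = Qwen2VLImageProcessorFast.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")

url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"
image = Image.open(BytesIO(requests.get(url).content))

fast_output = fast_processor(image, return_tensors="pt")["pixel_values"]
slow_output = slow_processor(image, return_tensors="pt")["pixel_values"]

# Relative L2 difference between the fast and slow pixel-value tensors.
rel_diff = (fast_output - slow_output).norm() / slow_output.norm()
print(f"relative difference: {rel_diff.item():.3%}")
```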
Currently when I load the
So I guess transformers will eventually use the fast processor by default anyway. The warning message is a bit misleading though. cc @zucchini-nlp
Qwen2-VL tests can still pass with the fast processor locally, so LGTM!
Yes, we will use it by default after a few releases; the message was indeed not updated accordingly.
Signed-off-by: minpeter <[email protected]>
Signed-off-by: Yang Wang <[email protected]>
Signed-off-by: Will Eaton <[email protected]>
Signed-off-by: avigny <[email protected]>
Essential Elements of an Effective PR Description Checklist
- supported_models.md and examples for a new model.
Purpose
We use `use_fast=kwargs.get("use_fast")` when initializing the Qwen processor. Because `kwargs` typically doesn't contain `use_fast`, this makes us fall back to the slow processor (even though the fast processor is the default after Transformers v4.52). This PR fixes this by using `kwargs.get("use_fast", True)`.
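As a quick sanity check of the intended behavior (illustrative only; this snippet is my own, not part of the PR's test plan, and it assumes the `use_fast` kwarg is forwarded to the underlying HF image processor):

```python
from transformers import AutoProcessor

# With use_fast defaulting to True, the Qwen2-VL processor should come back
# with the fast image-processor class.
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct", use_fast=True)
print(type(processor.image_processor).__name__)  # expected: Qwen2VLImageProcessorFast
```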
Test Plan
Test Result
(Optional) Documentation Update