
[Multimodal] Use fast processor for Qwen2/2.5-VL #19789


Merged 1 commit into main on Jun 18, 2025

Conversation

@WoosukKwon (Collaborator) commented Jun 18, 2025

Essential Elements of an Effective PR Description Checklist

  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing the test command.
  • The test results, such as pasting a before/after results comparison or e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.

Purpose

We use use_fast=kwargs.get("use_fast") when initializing the Qwen processor. Because kwargs typically doesn't contain use_fast, this resolves to None and we fall back to the slow processor (even though the fast processor is the default after transformers v4.52). This PR fixes this by using kwargs.get("use_fast", True), as sketched below.
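
For illustration, a minimal sketch of the change; the surrounding code is paraphrased rather than copied verbatim from the modified get_hf_processor functions:

# Before: when the caller does not pass use_fast, kwargs.get("use_fast")
# returns None, and transformers falls back to the slow image processor
# for checkpoints that were saved with a slow processor.
use_fast = kwargs.get("use_fast")

# After: default to the fast processor unless the caller explicitly opts out.
use_fast = kwargs.get("use_fast", True)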

Test Plan

Test Result

(Optional) Documentation Update

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which runs a small, essential subset of tests to catch errors quickly. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@gemini-code-assist bot (Contributor) left a comment

Summary of Changes

Hello @WoosukKwon, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request aims to optimize the performance of Qwen2 and Qwen2.5-VL models by ensuring that their respective processors, including the image processor for Qwen2-VL, consistently utilize the faster implementation by default. This change addresses a scenario where the system might inadvertently fall back to a slower processor due to the absence of an explicit use_fast flag in the initialization parameters.

Highlights

  • Processor Initialization: I've updated the get_hf_processor functions across qwen2_5_omni_thinker.py, qwen2_5_vl.py, and qwen2_vl.py to ensure that the use_fast parameter defaults to True when initializing the Qwen processor. Previously, if use_fast was not explicitly provided in kwargs, it would resolve to None, causing a fall back to the slower processor.
  • Image Processor Default: In qwen2_vl.py, I've also added an explicit line within the get_image_processor function to set kwargs["use_fast"] = kwargs.get("use_fast", True). This ensures consistency and guarantees the fast image processor is used by default for Qwen2-VL models (a minimal sketch of this pattern follows below).
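
For reference, a minimal sketch of that pattern; the rest of the function body is elided here rather than reproduced from the vLLM source:

def get_image_processor(self, **kwargs):
    # Default to the fast image processor unless the caller explicitly
    # passes use_fast=False; an explicitly provided value is preserved.
    kwargs["use_fast"] = kwargs.get("use_fast", True)
    ...  # remaining processor construction is unchanged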

@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request aims to ensure that the fast processor is used by default for Qwen2/2.5-VL models by changing kwargs.get("use_fast") to kwargs.get("use_fast", True). This is a sensible change as it makes the behavior explicit and aligns with the expectation of using the more performant fast processor unless otherwise specified.

The changes are applied consistently across qwen2_5_omni_thinker.py, qwen2_5_vl.py, and qwen2_vl.py.

Specifically:

  • In get_hf_processor methods, the use_fast parameter for the main processor and the image processor (when called from get_hf_processor) now defaults to True.
  • In vllm/model_executor/models/qwen2_vl.py, the get_image_processor method also ensures use_fast defaults to True. This is important for calls to get_image_processor from contexts other than get_hf_processor (e.g., from utility methods like _get_vision_info), ensuring consistent behavior.

The code modifications are clear, concise, and directly address the described problem. I found no issues of medium, high, or critical severity in the provided diffs.

One minor point regarding the PR process: the PR description checklist (Purpose, Test Plan, Test Result) is not fully completed. While the purpose is clear from the description body, including a brief test plan and results (even if manual verification) would enhance confidence in the change.

@ywang96 (Member) left a comment

I vaguely remember there was some discussion around a correctness issue with the fast image processor for Qwen2-VL, but I'm not sure whether it's been resolved. cc @Isotr0py to confirm

@Isotr0py (Member) commented Jun 18, 2025

I vaguely remember there was some discussion around a correctness issue with the fast image processor for Qwen2-VL, but I'm not sure whether it's been resolved.

At least in Transformers v4.52.4, the fast processor still has some numeric divergence from the slow one. But in my test the difference is quite minor now (only 0.3% of elements differ for a 2048x1365 image), which is unlikely to affect generation results much. So it should be fine to use the fast processor IMO:

import requests
import torch

from io import BytesIO
from PIL import Image
from transformers import Qwen2VLImageProcessorFast, Qwen2VLImageProcessor

# Load both the slow and fast image processors for the same checkpoint.
slow_processor = Qwen2VLImageProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")
fast_processor = Qwen2VLImageProcessorFast.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")

# Download the Qwen-VL demo image (2048x1365).
image = Image.open(BytesIO(requests.get("https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg").content))

# Compare the pixel values produced by the two processors.
fast_output = fast_processor(image, return_tensors="pt")["pixel_values"]
slow_output = slow_processor(image, return_tensors="pt")["pixel_values"]
torch.testing.assert_close(fast_output, slow_output)
AssertionError: Tensor-likes are not close!

Mismatched elements: 46330 / 16826208 (0.3%)
Greatest absolute difference: 0.03001582622528076 at index (7263, 474) (up to 1e-05 allowed)
Greatest relative difference: 10.668573379516602 at index (7788, 897) (up to 1.3e-06 allowed)
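
For context, torch.testing.assert_close also accepts explicit tolerances; with rtol=0 and an atol chosen just above the ~0.03 maximum absolute difference reported above (the 0.05 threshold here is an arbitrary illustrative choice), the same comparison would pass:

# Looser than the default 1e-5 tolerance; this would pass given the reported
# 0.03 maximum absolute difference, illustrating how small the gap is.
torch.testing.assert_close(fast_output, slow_output, rtol=0, atol=0.05)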

@DarkLight1337 (Member) commented Jun 18, 2025

Currently when I load the AutoProcessor in v4.52.4, I get this message:

Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.52, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.

So I guess transformers will eventually use the fast processor by default anyway. The warning message is a bit misleading, though. cc @zucchini-nlp
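
As an aside, the warning can be avoided by opting into the fast path explicitly via use_fast=True when loading the processor; the checkpoint name below is just the one used in the earlier snippet:

from transformers import AutoProcessor

# Explicitly request the fast image processor so the slow-processor fallback
# (and the warning quoted above) no longer applies.
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct", use_fast=True)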

@Isotr0py (Member) left a comment

Qwen2-VL tests can still pass with fast processor locally, so LGTM!

@zucchini-nlp (Contributor) commented

Yes, we will use it by default after a few releases; the message was indeed not updated accordingly.

@mergify bot added the qwen (Related to Qwen models) label on Jun 18, 2025
@WoosukKwon added the ready (ONLY add when PR is ready to merge/full CI is needed) label on Jun 18, 2025
@WoosukKwon merged commit d49adea into main on Jun 18, 2025
79 of 83 checks passed
@WoosukKwon deleted the qwen-fast-processor branch on June 18, 2025 at 22:49
@ywang96 mentioned this pull request on Jun 19, 2025