[Core] Optimize update checks in LogitsProcessor #21245

Merged
2 commits merged into vllm-project:main on Jul 22, 2025

Conversation

@Jialin Jialin (Contributor) commented Jul 20, 2025

Essential Elements of an Effective PR Description Checklist

  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.

Purpose

Fix update checks in MinTokensLogitsProcessor and LogitBiasLogitsProcessor. In a benchmark run that does not override min tokens or logit bias, we still see noticeable cost coming from MinTokensLogitsProcessor and LogitBiasLogitsProcessor.

[Screenshot 2025-07-19 at 11:05 PM: profiler trace showing time spent in MinTokensLogitsProcessor and LogitBiasLogitsProcessor]

We found that this is due to inefficient needs_update tagging: the flag was being set to True whenever any new request was added to the batch. With this change (sketched below), needs_update is set to True only if

  • a newly added request has a customized min_tokens config, or
  • a request with a min_tokens config gets popped from the batch.
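
A minimal sketch of the gating idea, assuming simplified names (the class, add_request/remove_request methods, and min_toks attribute below are illustrative and need not match the actual vLLM LogitsProcessor interface):

# Illustrative sketch only; names are simplified and may differ from the
# real vLLM MinTokensLogitsProcessor.
class MinTokensStateSketch:
    def __init__(self) -> None:
        # Track only requests that actually set a min_tokens constraint:
        # request_id -> (min_tokens, stop_token_ids)
        self.min_toks: dict[str, tuple[int, set[int]]] = {}
        self.needs_update = False

    def add_request(self, request_id: str, min_tokens: int,
                    stop_token_ids: set[int]) -> None:
        # Before this change: needs_update was set for every added request.
        # After: only a request with a non-trivial min_tokens constraint
        # forces the processor state to be rebuilt.
        if min_tokens > 0:
            self.min_toks[request_id] = (min_tokens, stop_token_ids)
            self.needs_update = True

    def remove_request(self, request_id: str) -> None:
        # Popping a request only matters if it had a min_tokens constraint.
        popped = self.min_toks.pop(request_id, None)
        if popped is not None:
            self.needs_update = True

LogitBiasLogitsProcessor presumably follows the same pattern, keyed on a per-request logit_bias mapping instead of min_tokens.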

Test Plan

Rerun the benchmark.

# vLLM Serving
export VLLM_USE_MODELSCOPE=False;
export VLLM_TORCH_PROFILER_DIR=~/vllm_profile; # for profiling
vllm serve facebook/opt-125m \
    --swap-space 16 \
    --disable-log-requests \
    --host :: \
    --dtype float16

# Capture traces
vllm bench serve \
    --dataset-name random \
    --model facebook/opt-125m \
    --served-model-name facebook/opt-125m \
    --random-input-len 700 \
    --random-output-len 1 \
    --endpoint /v1/completions \
    --ignore-eos \
    --host localhost \
    --port 8000 \
    --request-rate 200 \
    --num-prompts 100

Test Result

Confirmed the cost from MinTokensLogitsProcessor and LogitBiasLogitsProcessor is mostly gone.

After
[Screenshot 2025-07-20 at 2:56 AM: profiler trace after the change]
Before
[Screenshot 2025-07-20 at 2:58 AM: profiler trace before the change]

(Optional) Documentation Update


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of CI tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the v1 label Jul 20, 2025

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request optimizes update checks in MinTokensLogitsProcessor. I've added a suggestion to improve the maintainability of the new logic by making it more explicit and avoiding a side effect in a conditional statement.
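
As a hypothetical illustration of that suggestion (reusing the simplified names from the sketch above, not the actual diff), the two styles behave the same but differ in readability:

# Variant with the dict mutation (side effect) inside the `if` condition:
def needs_update_after_remove_terse(min_toks: dict[str, int], request_id: str) -> bool:
    if min_toks.pop(request_id, None) is not None:
        return True
    return False

# More explicit variant: mutate first, then do a side-effect-free check.
def needs_update_after_remove_explicit(min_toks: dict[str, int], request_id: str) -> bool:
    popped = min_toks.pop(request_id, None)
    return popped is not None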

@Jialin Jialin changed the title [Core] Optimize update checks in MinTokensLogitsProcessor [Core] Optimize update checks in LogitsProcessor Jul 20, 2025
@njhill njhill (Member) commented Jul 20, 2025

Thanks @Jialin. I think I had similar logic in my original impl of these LPs here https://github.com/vllm-project/vllm/pull/13360/files#diff-d01f143e1af472f24af24842cb879907ce624e6e5c977935e944545240723529R51 and hadn't realized that had been changed. cc @afeldman-nm

@houseroad houseroad (Collaborator) left a comment

Looks good to me.

@houseroad houseroad added the ready ONLY add when PR is ready to merge/full CI is needed label Jul 21, 2025
@houseroad houseroad added the performance Performance-related issues label Jul 21, 2025
@houseroad houseroad enabled auto-merge (squash) July 21, 2025 22:08
auto-merge was automatically disabled July 22, 2025 05:25

Head branch was pushed to by a user without write access

@vllm-bot vllm-bot merged commit a322376 into vllm-project:main Jul 22, 2025
63 of 65 checks passed
@afeldman-nm afeldman-nm (Contributor) commented

Thanks @Jialin! I think this was probably my bad, so thanks for the fix.

@Jialin Jialin (Contributor, Author) commented Jul 22, 2025

Thanks @Jialin! I think this was probably my bad, so thanks for the fix.

No worries :)

yeqcharlotte pushed a commit to yeqcharlotte/vllm that referenced this pull request Jul 23, 2025
zixi-qi pushed a commit to zixi-qi/vllm that referenced this pull request Jul 23, 2025
LyrisZhong pushed a commit to LyrisZhong/vllm that referenced this pull request Jul 23, 2025
avigny pushed a commit to avigny/vllm that referenced this pull request Jul 31, 2025
wenscarl pushed a commit to wenscarl/vllm that referenced this pull request Aug 4, 2025
x22x22 pushed a commit to x22x22/vllm that referenced this pull request Aug 5, 2025
Pradyun92 pushed a commit to Pradyun92/vllm that referenced this pull request Aug 6, 2025
npanpaliya pushed a commit to odh-on-pz/vllm-upstream that referenced this pull request Aug 6, 2025
Labels
performance (Performance-related issues)
ready (ONLY add when PR is ready to merge/full CI is needed)
v1