[bugfix] fix vllm sleep&wake_up produces meaningless output #5143


Merged

hjh0119 merged 2 commits into modelscope:main from fix-reset-prefix on Jul 29, 2025

Conversation

hjh0119 (Collaborator) commented on Jul 29, 2025

PR type

  • Bug Fix

PR information

vllm-project/vllm#17103

Fix a bug: when the vLLM engine executes sleep + wake_up without first calling reset_prefix_cache, it may produce incorrect outputs.

Impact: in GRPO colocate mode with validation enabled, all outputs from the first validation step until the next training step are abnormal.

Solution: call reset_prefix_cache before each sleep operation.

A simple reproduction:

  from swift.llm import InferRequest, RequestConfig, VllmEngine

  model = 'Qwen/Qwen2.5-0.5B-Instruct'
  engine = VllmEngine(model, gpu_memory_utilization=0.6, enable_prefix_caching=True,
                      max_model_len=8192, enable_sleep_mode=True)
  req = InferRequest(messages=[{'role': 'user', 'content': '你是谁?'}])  # "Who are you?"

  output = engine.infer([req] * 10, RequestConfig(n=1))   # normal
  engine.engine.reset_prefix_cache()  # the fix: comment this out to reproduce the bug
  engine.engine.sleep(1)
  engine.engine.wake_up()
  output2 = engine.infer([req] * 10, RequestConfig(n=1))  # garbled if the cache was not reset before sleep
  engine.engine.reset_prefix_cache()
  output3 = engine.infer([req] * 10, RequestConfig(n=1))  # normal again
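For completeness, one way to eyeball the difference between the three batches is below. This assumes the response objects returned by engine.infer expose OpenAI-style fields such as choices[0].message.content; adjust if the actual response type differs.

  # Hypothetical check, not part of the PR: print one sample from each batch.
  texts = [r.choices[0].message.content for r in output]
  texts2 = [r.choices[0].message.content for r in output2]
  texts3 = [r.choices[0].message.content for r in output3]
  print('before sleep      :', texts[0])
  print('after sleep/wake  :', texts2[0])  # garbled when the pre-sleep reset is skipped
  print('after cache reset :', texts3[0])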


@gemini-code-assist (bot) left a comment

Summary of Changes

Hello @hjh0119, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical bug in the vLLM integration, specifically when operating in GRPO colocate mode with sleep functionality enabled. The bug caused abnormal outputs after sleep and wake_up cycles due to an un-reset prefix cache. The fix ensures data integrity by explicitly clearing the prefix cache before the engine goes to sleep, preventing erroneous generations in subsequent inference steps.

Highlights

  • Bug Fix: Resolves an issue where the vLLM engine could produce incorrect outputs after sleep and wake_up operations if reset_prefix_cache was not explicitly called. This specifically impacted GRPO colocate mode, leading to abnormal outputs from validation steps until the next training step.
  • Implementation: Introduces a call to self.engine.engine.reset_prefix_cache() immediately before the sleep() operation within the _fast_infer method of grpo_trainer.py. This ensures the prefix cache is cleared before the engine goes to sleep, preventing data corruption upon waking (see the sketch below).
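For reviewers who want the shape of the change without opening the diff, a minimal sketch follows. Everything except the reset_prefix_cache() and sleep() calls is illustrative scaffolding, not the actual grpo_trainer.py code.

  # Illustrative sketch of the fix; only the reset_prefix_cache() call
  # before sleep() is what this PR adds.
  def _fast_infer(self, requests, request_config):
      self.engine.engine.wake_up()
      outputs = self.engine.infer(requests, request_config)
      # New: clear the prefix cache before sleeping, otherwise stale cached
      # blocks can be reused after wake_up() and corrupt later generations.
      self.engine.engine.reset_prefix_cache()
      self.engine.engine.sleep(1)
      return outputs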

@gemini-code-assist (bot) left a comment

Code Review

This pull request addresses a bug where the vLLM engine could produce incorrect outputs after a sleep/wake-up cycle. The fix, which involves calling reset_prefix_cache before the engine sleeps, is correct and well-targeted. My review includes a suggestion to add a comment to the new code to improve future maintainability by clarifying its interaction with other parts of the code. Overall, this is a good fix.

@hjh0119 hjh0119 merged commit 4cac1a9 into modelscope:main Jul 29, 2025
1 of 2 checks passed
@hjh0119 hjh0119 deleted the fix-reset-prefix branch July 29, 2025 03:48
Jintao-Huang pushed a commit that referenced this pull request Jul 29, 2025
@hjh0119 hjh0119 mentioned this pull request Jul 29, 2025
meichangsu1 pushed a commit to tpx818/ms-swift that referenced this pull request Aug 4, 2025