[CUDA graphs] Enable full cuda graphs with FA3 AoT scheduling #20301


Merged: WoosukKwon merged 5 commits into main from woosuk/full-graph-aot on Jul 1, 2025

Conversation

@WoosukKwon (Collaborator) commented on Jul 1, 2025

This PR enables full CUDA graphs with FA3 AoT scheduling.
Previously, AoT scheduling caused an illegal memory access when the run-time split factor was larger than the num_splits chosen by internal heuristics at capture time.
This case is prevented by explicitly setting num_splits (as an upper bound) at both capture and run time.
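
For intuition only, here is a toy NumPy sketch (not vLLM or FA3 code) of the invariant described above: the buffers baked into a captured graph are sized for a fixed number of splits, so the run-time split count must be clamped to that same explicit upper bound.

```python
import numpy as np


def capture(num_splits: int, seq_len: int) -> np.ndarray:
    # At capture time, partial-result buffers are sized for `num_splits`
    # splits; with CUDA graphs this allocation is frozen into the graph.
    return np.zeros((num_splits, seq_len))


def replay(buffers: np.ndarray, runtime_num_splits: int) -> None:
    # At replay, the kernel writes one partial result per split. If the
    # run-time split factor exceeded the capture-time allocation, the writes
    # would go out of bounds (an illegal memory access on the GPU).
    for split in range(runtime_num_splits):
        buffers[split, :] = split


MAX_NUM_SPLITS = 8  # explicit upper bound used at BOTH capture and run time

bufs = capture(num_splits=MAX_NUM_SPLITS, seq_len=16)
replay(bufs, runtime_num_splits=min(13, MAX_NUM_SPLITS))  # clamped, so safe
```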

github-actions bot commented Jul 1, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run fastcheck CI, which covers a small, essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@gemini-code-assist (bot) left a comment

Summary of Changes

Hello @WoosukKwon, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the FlashAttention backend by enabling full CUDA graph support when using FlashAttention v3's Ahead-of-Time (AoT) scheduling. The core problem addressed was an illegal memory access occurring when the runtime split factor exceeded the capture-time num_splits heuristic. This is resolved by explicitly managing and passing a max_num_splits parameter, ensuring consistent memory allocation and preventing runtime issues, thereby allowing for more efficient execution with CUDA graphs.

Highlights

  • CUDA Graph Compatibility: Enabled full CUDA graph support for FlashAttention v3 (FA3) with Ahead-of-Time (AoT) scheduling, which was previously unsupported due to memory access issues.
  • Memory Access Fix: Resolved an illegal memory access issue that occurred when the runtime split factor exceeded the num_splits set at capture-time. This is addressed by explicitly setting and consistently using max_num_splits as an upper bound.
  • Dependency Update: Updated the flash-attention external project dependency to a newer commit (1c2624e) to incorporate necessary changes for FA3 and CUDA graph compatibility.
  • max_num_splits Management: Introduced max_num_splits in FlashAttentionMetadata and FlashAttentionBackend to explicitly control the number of splits for intermediate buffer pre-allocation when using CUDA graphs, ensuring proper memory handling (a rough sketch of this plumbing follows this list).
  • CUDA Graph Constraints: Added validation checks for CUDA graph capture sizes, enforcing that AoT scheduling is required and that capture sizes do not exceed 992 tokens, aligning with current FA3 internal heuristics.
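
Purely as an illustration of the two highlights above, the sketch below shows how such a bound might be carried on a metadata object and forwarded as a num_splits keyword. Apart from max_num_splits and num_splits, every name here is a hypothetical stand-in, not vLLM's actual API.

```python
from dataclasses import dataclass


@dataclass
class AttnMetadataSketch:
    """Hypothetical stand-in for a FlashAttention metadata object."""
    num_actual_tokens: int
    # 0 lets FA3's heuristics pick the split count; a positive value is an
    # explicit upper bound, which full CUDA graph capture needs so that the
    # intermediate buffers are sized once and never overrun at replay.
    max_num_splits: int = 0


def attention_kwargs(metadata: AttnMetadataSketch,
                     use_full_cuda_graph: bool) -> dict:
    # With full CUDA graphs, pass the same explicit bound at capture and
    # replay; otherwise fall back to the kernel's own heuristics (0).
    num_splits = metadata.max_num_splits if use_full_cuda_graph else 0
    return {"num_splits": num_splits}


meta = AttnMetadataSketch(num_actual_tokens=256, max_num_splits=8)
print(attention_kwargs(meta, use_full_cuda_graph=True))  # {'num_splits': 8}
```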

@WoosukKwon requested a review from LucasWilkinson on Jul 1, 2025, 05:42
@WoosukKwon added the ready label (ONLY add when PR is ready to merge / full CI is needed) on Jul 1, 2025
@gemini-code-assist (bot) left a comment

Code Review

This pull request enables full CUDA graph support with FlashAttention-3 AoT scheduling by explicitly managing the num_splits parameter. The changes look solid and correctly plumb the max_num_splits value from initialization down to the attention kernel. My main feedback is to replace a magic number with a named constant for better code clarity and maintainability.
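
As a rough sketch of that suggestion (the constant name and the helper function are hypothetical, not the PR's actual code), the check could look something like this:

```python
# Hypothetical named constant replacing the magic number 992, which comes
# from FA3's current internal AoT-scheduling heuristics.
_FA3_MAX_CAPTURE_SIZE = 992


def validate_full_cuda_graph(capture_sizes: list[int] | None,
                             aot_schedule: bool) -> int:
    """Return the max capture size if full CUDA graphs can be used."""
    if not aot_schedule:
        raise ValueError("Full CUDA graph with FA3 requires AoT scheduling.")
    if not capture_sizes:
        raise ValueError("cudagraph_capture_sizes should not be None when "
                         "full_cuda_graph is True.")
    max_size = max(capture_sizes)
    if max_size > _FA3_MAX_CAPTURE_SIZE:
        raise ValueError(f"Capture size {max_size} exceeds the supported "
                         f"maximum of {_FA3_MAX_CAPTURE_SIZE} tokens.")
    return max_size
```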

"cudagraph_capture_sizes should not be None when "
"full_cuda_graph is True.")
self.max_cudagraph_size = max(capture_sizes)
if self.max_cudagraph_size > 992:
@LucasWilkinson (Collaborator) commented on Jul 1, 2025

Were you hitting an assert/IMA here? If the batch is >992, FA3 "should" silently fall back to no dynamic split (i.e., no AoT scheduling). Granted, if we do fall back to no dynamic split, a manually set split count may be too high, since FA3 will try to split to exactly that value instead of using it as an upper bound, so perf may be bad. I therefore think it still makes sense to limit this.

@WoosukKwon (Collaborator, Author) commented

@LucasWilkinson I didn't test bs > 992. I just wanted to add a safety check since I'm not sure what would happen in that case.

@LucasWilkinson (Collaborator) replied

Makes sense; I agree with this approach. We can try to add support in a later PR.

@WoosukKwon requested a review from LucasWilkinson on Jul 1, 2025, 15:57
@LucasWilkinson (Collaborator) left a comment

LGTM!

@WoosukKwon merged commit 8acb4ba into main on Jul 1, 2025
75 checks passed
@WoosukKwon deleted the woosuk/full-graph-aot branch on Jul 1, 2025, 16:07
CSWYF3634076 pushed a commit to CSWYF3634076/vllm that referenced this pull request Jul 2, 2025
@lengrongfu (Contributor) commented

On the main branch, I ran python -m vllm.entrypoints.cli.main serve meta-llama/Llama-3.2-3B-Instruct --trust-remote-code --gpu-memory-utilization 0.95, but got this error: TypeError: flash_attn_varlen_func() got an unexpected keyword argument 'num_splits'

@WoosukKwon How should I solve it?

@WoosukKwon (Collaborator, Author) commented

@lengrongfu Could you please re-install vllm? This happens because we updated the vllm_flash_attn version in this PR. Please do VLLM_USE_PRECOMPILED=1 pip install -e .

@lengrongfu (Contributor) replied

> @lengrongfu Could you please re-install vllm? This happens because we updated the vllm_flash_attn version in this PR. Please do VLLM_USE_PRECOMPILED=1 pip install -e .

Thanks, the VLLM_USE_PRECOMPILED=1 pip install -e . command works.

avigny pushed a commit to avigny/vllm that referenced this pull request Jul 31, 2025
Labels: ci/build, ready, v1