[Fix] Fix hicache backend #8991
Conversation
Summary of Changes
Hello @DarkSharpness, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
I've implemented a fix to resolve `illegal memory access` errors that arise when FlashAttention3 (FA3) decode kernels conflict with hicache kernels writing data from GPU to CPU. My approach enables a hybrid attention backend, specifically utilizing `flashinfer` for decode operations when FA3 is the chosen prefill backend. This allows us to leverage the performance benefits of FA3 prefill while ensuring stability during decode. I've also included a fallback to `cudaMemcpyAsync` for hicache I/O if `flashinfer` isn't available or if FA3 is explicitly set for decode, ensuring the system remains functional even if not optimally performant in all scenarios.
Highlights
- Problem Resolution: This pull request directly addresses and resolves the `illegal memory access` errors that occur when FlashAttention3 (FA3) decode kernels run concurrently with hicache kernels performing GPU-to-CPU writes.
- Hybrid Backend Implementation: The core solution enables a hybrid attention backend. When FA3 is used for prefill operations, `flashinfer` is now leveraged for decode operations to maintain performance while circumventing the memory access issues.
- Intelligent Fallback Mechanism: If `flashinfer` is not available, or if `decode_attention_backend` is explicitly set to FA3, the `hicache_io_backend` will automatically revert to "direct" (using `cudaMemcpyAsync`). This ensures stability, albeit with potentially suboptimal performance compared to `flashinfer`.
- Performance Optimization: The changes improve overall performance by allowing continued use of the high-performance FA3 prefill kernels, avoiding the previous workarounds that either bypassed FA3 entirely or relied solely on `cudaMemcpyAsync`.
Warning: Gemini encountered an error creating the review.
Motivation
It is a known issue that FA3 kernels do not work well with hicache kernels. When FA3 decode kernels run while hicache kernels are writing from GPU to CPU, an `illegal memory access` error may occur.

Previously, we worked around this by either:
- avoiding the FA3 backend entirely, or
- using `cudaMemcpyAsync` instead of the hicache kernels.

However, both workarounds can lead to suboptimal performance.
This PR addresses the issue by enabling the hybrid attention backend and using `flashinfer` for decode. This way, we can still leverage the high performance of the FA3 prefill kernels while avoiding `illegal memory access` during decode.

Modifications
- When FA3 is the prefill attention backend and `flashinfer` is available, use `flashinfer` as the decode attention backend.
- If `flashinfer` is not available, or if `decode_attention_backend` is explicitly set to FA3, fall back to direct `cudaMemcpyAsync` in hicache (same behavior as before this PR).

Accuracy Test
Benchmark & Profiling
Checklist