Conversation

DarkSharpness (Collaborator)
Motivation

It is a known issue that FA3 kernels do not interact well with hicache kernels: when FA3 decode kernels run while hicache kernels are writing from GPU to CPU, an illegal memory access error may occur.

Previously, we worked around this by either:

  • Avoiding the FA3 backend entirely, or
  • Using direct cudaMemcpyAsync instead of hicache kernels

However, these workarounds can lead to suboptimal performance.

This PR addresses the issue by enabling the hybrid attention backend and using flashinfer for decode.
This way, we can still leverage the high performance of FA3 prefill kernels while avoiding illegal memory access during decode.

Modifications

  • When the default attention backend is FA3 and hicache kernels are enabled, use flashinfer for decode.
  • If override is not possible, fall back to using direct cudaMemcpyAsync in hicache (same behavior as before this PR).

Accuracy Test

Benchmark & Profiling

Checklist

@gemini-code-assist bot (Contributor) left a comment

Summary of Changes

Hello @DarkSharpness, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

I've implemented a fix to resolve illegal memory access errors that arise when FlashAttention3 (FA3) decode kernels conflict with hicache kernels writing data from GPU to CPU. My approach enables a hybrid attention backend, specifically utilizing flashinfer for decode operations when FA3 is the chosen prefill backend. This allows us to leverage the performance benefits of FA3 prefill while ensuring stability during decode. I've also included a fallback to cudaMemcpyAsync for hicache I/O if flashinfer isn't available or if FA3 is explicitly set for decode, ensuring the system remains functional even if not optimally performant in all scenarios.

Highlights

  • Problem Resolution: This pull request directly addresses and resolves the illegal memory access errors that occur when FlashAttention3 (FA3) decode kernels are run concurrently with hicache kernels performing GPU-to-CPU writes.
  • Hybrid Backend Implementation: The core solution involves enabling a hybrid attention backend. When FA3 is used for prefill operations, flashinfer is now leveraged for decode operations to maintain performance while circumventing the memory access issues.
  • Intelligent Fallback Mechanism: A robust fallback mechanism has been implemented: if flashinfer is not available or if decode_attention_backend is explicitly set to FA3, the hicache_io_backend will automatically revert to "direct" (using cudaMemcpyAsync). This ensures stability, albeit with potentially suboptimal performance compared to flashinfer.
  • Performance Optimization: The changes aim to improve overall performance by allowing the continued use of high-performance FA3 prefill kernels, avoiding the previous workarounds that either bypassed FA3 entirely or relied solely on cudaMemcpyAsync, which could lead to suboptimal performance.

gemini-code-assist bot (Contributor)

Warning

Gemini encountered an error creating the review. You can try again by commenting /gemini review.

@xiezhq-hermann xiezhq-hermann self-assigned this Aug 9, 2025
@xiezhq-hermann xiezhq-hermann added the ready-to-merge The PR is ready to merge after the CI is green. label Aug 9, 2025
@merrymercy merrymercy merged commit 19bc77f into sgl-project:main Aug 10, 2025
83 of 92 checks passed
@DarkSharpness DarkSharpness deleted the fix_hicache_backend branch August 10, 2025 22:38
narutolhy pushed a commit to narutolhy/sglang that referenced this pull request Aug 17, 2025
narutolhy pushed a commit to narutolhy/sglang that referenced this pull request Aug 18, 2025
MahmoudAshraf97 pushed a commit to MahmoudAshraf97/sglang that referenced this pull request Sep 8, 2025