[sgl-kernel] 1/N Refactor sglang cutlass 3x - gemm fp8 blockwise sm90 #8913
Conversation
Summary of Changes
Hello @yuan-luo, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
I am refactoring the SGLang kernel's CUTLASS 3.x GEMM implementation, focusing on FP8 blockwise operations for NVIDIA's SM90 architecture. This initial phase involves adapting code from vLLM, introducing new template functions to generalize the GEMM calling process, and setting up the foundation for future refactors targeting other architectures. The goal is to enhance the efficiency and maintainability of the matrix multiplication routines within the SGLang kernel.
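To make the "FP8 blockwise" term concrete, here is a tiny illustrative sketch of the scaling granularity commonly used for these kernels (an assumption about the scheme, not code from this PR): activations carry one scale per 1x128 row segment along K, weights one scale per 128x128 tile, and the kernel rescales products accordingly. The function name and int8_t stand-in for the FP8 storage type are hypothetical.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Block size assumed for both the K-segment and weight-tile granularity.
constexpr int kBlk = 128;

// Dequantize one product a[row][kk] * b[kk][col] using blockwise scales.
// a_scale is laid out [rows][k/kBlk]; b_scale is [k/kBlk][n/kBlk].
inline float dequant_product(int8_t a_q, int8_t b_q, int row, int col, int kk,
                             std::vector<float> const& a_scale,
                             std::vector<float> const& b_scale,
                             int k_blocks, int n_blocks) {
  float sa = a_scale[row * k_blocks + kk / kBlk];            // per 1x128 segment
  float sb = b_scale[(kk / kBlk) * n_blocks + col / kBlk];   // per 128x128 tile
  return float(a_q) * sa * float(b_q) * sb;
}
```

In the real kernel this rescaling happens on accumulator tiles inside the CUTLASS epilogue/mainloop rather than per scalar, but the scale indexing follows the same block structure.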
Highlights
- New CUTLASS 3.x Components for FP8 GEMM on SM90: I've introduced new CUTLASS 3.x components specifically for FP8 blockwise GEMM operations on SM90 architectures. This includes defining new dispatch policies and kernel structures to leverage Hopper's capabilities for efficient low-precision matrix multiplication.
- Generalized CUTLASS GEMM Caller: I've created a generic cutlass_gemm_caller template function and a specialized cutlass_gemm_caller_blockwise to abstract and streamline the launching of CUTLASS GEMM kernels. This improves code reusability and simplifies future integrations.
- Dynamic Scheduler Dispatch for FP8 GEMM: I've implemented a dispatch mechanism that dynamically selects the appropriate CUTLASS scheduler (StreamKScheduler or PersistentScheduler) for FP8 blockwise GEMM based on the dimensions of the input matrices. This ensures optimal performance for varying workloads.
- Integration into Existing GEMM Kernel: I've integrated the newly developed CUTLASS extensions into the existing fp8_blockwise_gemm_kernel.cu file, replacing older dispatch calls with the new, refactored functions. This marks the first step in a broader refactoring effort.
Code Review
This pull request refactors the FP8 blockwise GEMM for SM90 architectures by introducing a more modular structure with new CUTLASS extension files. The changes are well-structured and abstract away the kernel launching logic. My review focuses on improving code quality and fixing a potential compilation error. I've pointed out some unused headers and a variable that can be removed for better code hygiene. More critically, there's a reference to a type that is not defined within the PR, which will likely cause a compilation failure. I've suggested a fix for this.
LGTM.
Great job. Could you please respond to the comments I made above?
LGTM
…sgl-project#8913) Co-authored-by: luoyuan.luo <[email protected]>
Motivation
This PR refactors sglang's CUTLASS 3.x FP8 blockwise GEMM for SM90. The goal is to make the CUTLASS library wrapper in sglang well-structured and easy to extend to various GPU architectures.
Most of the code is adapted from vLLM, with some sglang-specific changes; since vLLM's CUTLASS version has been revised, not everything could be reused directly. Thanks to the vLLM developers.
The main idea is to introduce the template functions cutlass_gemm_caller_blockwise and cutlass_3x_gemm_fp8_blockwise. In the next step, we will refactor blockwise FP8 GEMM for SM89 and SM100.
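The generic caller pattern can be sketched as follows. This is a hedged illustration of the lifecycle that cutlass_gemm_caller-style wrappers typically follow around CUTLASS 3.x's GemmUniversalAdapter (can_implement, get_workspace_size, initialize, run); the function name, return convention, and MockGemm type here are hypothetical, not the PR's actual interface.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Generic caller sketch: `Gemm` is any type exposing the four-step
// CUTLASS-style interface. Real code would allocate the workspace on
// the device and pass a CUDA stream to run().
template <typename Gemm>
int cutlass_gemm_caller_sketch(typename Gemm::Arguments const& args) {
  Gemm op;
  if (!op.can_implement(args)) return -1;  // reject unsupported problems
  std::vector<uint8_t> workspace(op.get_workspace_size(args));
  op.initialize(args, workspace.data());
  return op.run();  // in CUTLASS this launches the kernel
}

// Minimal stand-in implementing the interface, for illustration only.
struct MockGemm {
  struct Arguments { int64_t m, n, k; };
  bool can_implement(Arguments const& a) { return a.m > 0 && a.n > 0 && a.k > 0; }
  size_t get_workspace_size(Arguments const&) { return 64; }
  void initialize(Arguments const&, uint8_t*) {}
  int run() { return 0; }
};
```

Centralizing this launch sequence in one template means each new architecture or epilogue only has to define its Gemm configuration type, not repeat the launch boilerplate.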
Modifications
Accuracy Test
More tests are ongoing.
Benchmark & Profiling
Checklist