
Conversation

@yuan-luo (Collaborator) commented Aug 7, 2025

Motivation

This PR refactors the sglang CUTLASS 3.x FP8 blockwise GEMM for SM90. The goal is to make the CUTLASS library wrapper in sglang well-structured and easy to extend to other GPU architectures.
Most of the code is adapted from vLLM, with some sglang-specific changes; since the CUTLASS version used by vLLM differs, not everything can be copied as-is. Thanks to the vLLM developers.

The main idea is to introduce the template functions cutlass_gemm_caller_blockwise and cutlass_3x_gemm_fp8_blockwise. As a next step, we will refactor the blockwise FP8 GEMM for SM89 and SM100.
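
For reviewers unfamiliar with the vLLM-style structure being adapted, here is a minimal sketch of the generic launcher layer, under the assumption that it follows the usual CUTLASS 3.x GemmUniversalAdapter flow. It is illustrative only, not the literal code in this PR, and CUTLASS_CHECK is assumed to be a small helper macro that asserts cutlass::Status::kSuccess:

```cpp
#include <torch/all.h>
#include <ATen/cuda/CUDAContext.h>

#include "cutlass/cutlass.h"
#include "cutlass/gemm/gemm.h"
#include "cutlass/gemm/device/gemm_universal_adapter.h"

// Generic launcher: assemble kernel arguments, verify the kernel can run on this
// problem, allocate workspace through torch, and launch on the current stream.
template <typename GemmKernel>
void cutlass_gemm_caller(torch::Device device,
                         typename GemmKernel::ProblemShape prob_shape,
                         typename GemmKernel::MainloopArguments mainloop_args,
                         typename GemmKernel::EpilogueArguments epilogue_args) {
  typename GemmKernel::Arguments args{cutlass::gemm::GemmUniversalMode::kGemm,
                                      prob_shape, mainloop_args, epilogue_args};

  using GemmOp = cutlass::gemm::device::GemmUniversalAdapter<GemmKernel>;
  GemmOp gemm_op;
  CUTLASS_CHECK(gemm_op.can_implement(args));

  // Workspace is allocated through torch so it lands on the right device.
  size_t workspace_size = gemm_op.get_workspace_size(args);
  auto workspace = torch::empty(
      {static_cast<int64_t>(workspace_size)},
      torch::TensorOptions().dtype(torch::kUInt8).device(device));

  auto stream = at::cuda::getCurrentCUDAStream(device.index());
  CUTLASS_CHECK(gemm_op.run(args, workspace.data_ptr(), stream));
}
```

In this picture, cutlass_gemm_caller_blockwise would be a thin wrapper that derives the problem shape, strides, and per-block scale pointers from the input torch tensors before calling the launcher, while cutlass_3x_gemm_fp8_blockwise defines the SM90 collective mainloop/epilogue types that get plugged in as GemmKernel.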

Modifications

Accuracy Test

➜  sglang_dev git:(refactor_cutlass_3x_fp8_blockwise_gemm_sm90) ✗ python ./sgl-kernel/tests/test_fp8_blockwise_gemm.py
=============================================================================================================== test session starts ===============================================================================================================
platform linux -- Python 3.10.12, pytest-8.4.1, pluggy-1.6.0
rootdir: /sgl-workspace/sglang_dev/sgl-kernel
configfile: pyproject.toml
plugins: typeguard-4.4.4, anyio-4.9.0
collected 576 items

sgl-kernel/tests/test_fp8_blockwise_gemm.py ............................................................................................................................................................................................... [ 33%]
........................................................................................................................................................................................................................................... [ 73%]
......................................................................................................................................................                                                                                      [100%]

=============================================================================================================== 576 passed in 6.39s ===============================================================================================================

More tests are in progress.

Benchmark & Profiling

Checklist

@gemini-code-assist (bot, Contributor) left a comment

Summary of Changes

Hello @yuan-luo, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

I am refactoring the SGLang kernel's CUTLASS 3.x GEMM implementation, focusing on FP8 blockwise operations for NVIDIA's SM90 architecture. This initial phase involves adapting code from vLLM, introducing new template functions to generalize the GEMM calling process, and setting up the foundation for future refactors targeting other architectures. The goal is to enhance the efficiency and maintainability of the matrix multiplication routines within the SGLang kernel.

Highlights

  • New CUTLASS 3.x Components for FP8 GEMM on SM90: I've introduced new CUTLASS 3.x components specifically for FP8 blockwise GEMM operations on SM90 architectures. This includes defining new dispatch policies and kernel structures to leverage Hopper's capabilities for efficient low-precision matrix multiplication.
  • Generalized CUTLASS GEMM Caller: I've created a generic cutlass_gemm_caller template function and a specialized cutlass_gemm_caller_blockwise to abstract and streamline the launching of CUTLASS GEMM kernels. This improves code reusability and simplifies future integrations.
  • Dynamic Scheduler Dispatch for FP8 GEMM: I've implemented a dispatch mechanism that dynamically selects the appropriate CUTLASS scheduler (StreamKScheduler or PersistentScheduler) for FP8 blockwise GEMM based on the dimensions of the input matrices, as sketched after this list. This ensures optimal performance for varying workloads.
  • Integration into Existing GEMM Kernel: I've integrated the newly developed CUTLASS extensions into the existing fp8_blockwise_gemm_kernel.cu file, replacing older dispatch calls with the new, refactored functions. This marks the first step in a broader refactoring effort.
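
A hedged sketch of that shape-based scheduler dispatch is shown below. The sm90_fp8_blockwise_config trait name, the launcher signature, and the heuristic threshold are assumptions made for illustration; only the StreamKScheduler/PersistentScheduler selection itself comes from the PR:

```cpp
#include <torch/all.h>

#include "cutlass/gemm/kernel/tile_scheduler.hpp"

// Assumed config trait: defines ::Cutlass3xGemm for a given output dtype and
// tile scheduler (collective builders, tile/cluster shapes elided in this sketch).
template <typename OutType, typename Scheduler>
struct sm90_fp8_blockwise_config;

// Assumed signature of the blockwise launcher introduced by the refactor.
template <typename Gemm>
void cutlass_gemm_caller_blockwise(torch::Tensor& out, const torch::Tensor& a,
                                   const torch::Tensor& b,
                                   const torch::Tensor& a_scales,
                                   const torch::Tensor& b_scales);

template <typename OutType>
void cutlass_gemm_blockwise_sm90_fp8_dispatch(torch::Tensor& out, const torch::Tensor& a,
                                              const torch::Tensor& b,
                                              const torch::Tensor& a_scales,
                                              const torch::Tensor& b_scales) {
  int64_t m = a.size(0);
  int64_t n = b.size(1);
  int64_t k = a.size(1);

  // Assumed heuristic: stream-K helps when there are few output tiles to spread
  // across SMs (small M/N) but plenty of work along K; otherwise the persistent
  // scheduler keeps SMs busy with whole output tiles. The PR's actual condition
  // and thresholds may differ.
  bool use_stream_k = (m <= 128 || n <= 128) && k >= 4096;

  if (use_stream_k) {
    cutlass_gemm_caller_blockwise<typename sm90_fp8_blockwise_config<
        OutType, cutlass::gemm::StreamKScheduler>::Cutlass3xGemm>(out, a, b, a_scales,
                                                                  b_scales);
  } else {
    cutlass_gemm_caller_blockwise<typename sm90_fp8_blockwise_config<
        OutType, cutlass::gemm::PersistentScheduler>::Cutlass3xGemm>(out, a, b, a_scales,
                                                                     b_scales);
  }
}
```

The point of this structure is that only the Scheduler template parameter changes between the two paths; everything else (tile shapes, epilogue, scale handling) stays in one place.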

@gemini-code-assist (bot, Contributor) left a comment

Code Review

This pull request refactors the FP8 blockwise GEMM for SM90 architectures by introducing a more modular structure with new CUTLASS extension files. The changes are well-structured and abstract away the kernel launching logic. My review focuses on improving code quality and fixing a potential compilation error. I've pointed out some unused headers and a variable that can be removed for better code hygiene. More critically, there's a reference to a type that is not defined within the PR, which will likely cause a compilation failure. I've suggested a fix for this.


@yuan-luo force-pushed the refactor_cutlass_3x_fp8_blockwise_gemm_sm90 branch from 649d72c to 6a9ef1b on August 8, 2025 02:42
@yuan-luo changed the title from "[WIP][sgl-kernel] 1/N Refactor sglang cutlass 3x - gemm fp8 blockwise sm90" to "[sgl-kernel] 1/N Refactor sglang cutlass 3x - gemm fp8 blockwise sm90" on Aug 8, 2025
@yuan-luo force-pushed the refactor_cutlass_3x_fp8_blockwise_gemm_sm90 branch from 6a9ef1b to 17f6a96 on August 8, 2025 05:47
@yuan-luo force-pushed the refactor_cutlass_3x_fp8_blockwise_gemm_sm90 branch from 1a6659d to e5bc932 on August 11, 2025 05:48
@BBuf (Collaborator) left a comment

LGTM.

@HydraQYH (Collaborator) left a comment

Great job. Could you please respond to the comments I made above?

@yuan-luo force-pushed the refactor_cutlass_3x_fp8_blockwise_gemm_sm90 branch from 30a0df7 to 96b1dc1 on August 12, 2025 02:03
@yuan-luo force-pushed the refactor_cutlass_3x_fp8_blockwise_gemm_sm90 branch from 96b1dc1 to e3fe35f on August 12, 2025 02:05
@HydraQYH self-requested a review on August 12, 2025 07:45
@BBuf enabled auto-merge (squash) on August 13, 2025 03:07
@BBuf disabled auto-merge on August 13, 2025 03:07
@HydraQYH (Collaborator) commented

LGTM

@zhyncs merged commit 432f205 into sgl-project:main on Aug 14, 2025
76 of 88 checks passed
yilian49 pushed a commit to yilian49/sglang that referenced this pull request Aug 16, 2025
narutolhy pushed a commit to narutolhy/sglang that referenced this pull request Aug 17, 2025
MahmoudAshraf97 pushed a commit to MahmoudAshraf97/sglang that referenced this pull request Sep 8, 2025