
[Feature] Integrate SM100 DeepGEMM support #20087


Merged: 43 commits into vllm-project:main on Jul 11, 2025

Conversation

@yewentao256 (Contributor) commented on Jun 25, 2025

Purpose

DeepGEMM is updating to v2.0, which includes a new implementation for SM100 that expects block FP8 scales in E8M0 format (deepseek-ai/DeepGEMM#112).

Previous context: #19820

We add a wrapper that supports both the Hopper (1.x) and Blackwell (2.x) interfaces; a minimal sketch of the dispatch idea is shown below.
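
A minimal sketch of the version dispatch, assuming the 1.x/2.x entry-point rename noted later in this thread (gemm_fp8_fp8_bf16_nt → fp8_gemm_nt); the wrapper name and argument layout are illustrative assumptions, not the exact vLLM implementation:

import torch
import deep_gemm

def _is_deep_gemm_v2() -> bool:
    # DeepGEMM 2.x renamed gemm_fp8_fp8_bf16_nt to fp8_gemm_nt, so the
    # presence of the new symbol distinguishes the two APIs.
    return hasattr(deep_gemm, "fp8_gemm_nt")

def fp8_gemm_nt_wrapper(a: torch.Tensor, a_scales: torch.Tensor,
                        b: torch.Tensor, b_scales: torch.Tensor,
                        out: torch.Tensor) -> None:
    # Run an FP8 NT GEMM through whichever DeepGEMM API is installed.
    # The (tensor, scales) tuple convention is an assumption based on
    # DeepGEMM 1.x's public signature.
    if _is_deep_gemm_v2():
        deep_gemm.fp8_gemm_nt((a, a_scales), (b, b_scales), out)
    else:
        deep_gemm.gemm_fp8_fp8_bf16_nt((a, a_scales), (b, b_scales), out)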

Test

Unit Test

[Screenshot: unit test results]

Acc Test

Qwen3

VLLM_USE_DEEP_GEMM=1 lm_eval   --model vllm   --model_args "pretrained=Qwen/Qwen3-30B-A3B-FP8,max_model_len=32768,enforce_eager=True"   --trust_remote_code   --tasks gsm8k   --num_fewshot 5   --batch_size auto
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match||0.8567|±  |0.0097|
|     |       |strict-match    |     5|exact_match||0.8893|±  |0.0086|

# No deepgemm
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match||0.8256|±  |0.0105|
|     |       |strict-match    |     5|exact_match||0.8878|±  |0.0087|

Qwen3 (DP+EP)

export VLLM_ALL2ALL_BACKEND="deepep_high_throughput"
VLLM_USE_DEEP_GEMM=1 lm_eval   --model vllm   --model_args "pretrained=Qwen/Qwen3-30B-A3B-FP8,data_parallel_size=2,max_model_len=32768,enable_expert_parallel=True,enforce_eager=True"   --trust_remote_code   --tasks gsm8k   --num_fewshot 5   --batch_size auto
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match||0.8302|±  |0.0103|
|     |       |strict-match    |     5|exact_match||0.8294|±  |0.0104|
# No deepgemm
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match||0.8150|±  |0.0107|
|     |       |strict-match    |     5|exact_match||0.8848|±  |0.0088|

R1 (TP)

VLLM_USE_DEEP_GEMM=1 lm_eval  --model vllm  --model_args "pretrained=deepseek-ai/DeepSeek-R1,tensor_parallel_size=8,max_model_len=16384,enforce_eager=True"  --tasks gsm8k  --num_fewshot 5  --batch_size auto  --trust_remote_code
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match||0.9477|±  |0.0061|
|     |       |strict-match    |     5|exact_match||0.9484|±  |0.0061|
# No deepgemm
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match||0.9500|±  |0.0060|
|     |       |strict-match    |     5|exact_match||0.9477|±  |0.0061|

R1 (DP+EP)

export VLLM_ALL2ALL_BACKEND="deepep_high_throughput"
VLLM_USE_DEEP_GEMM=1 lm_eval   --model vllm   --model_args "pretrained=deepseek-ai/DeepSeek-R1,data_parallel_size=8,gpu_memory_utilization=0.95,max_model_len=16384,enable_expert_parallel=True"   --tasks gsm8k   --batch_size auto   --num_fewshot 5
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match||0.9492|±  | 0.006|
|     |       |strict-match    |     5|exact_match||0.9492|±  | 0.006|

# No deepgemm
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match||0.9530|±  |0.0058|
|     |       |strict-match    |     5|exact_match||0.9522|±  |0.0059|

Performance

Kernel

python benchmark_moe.py --use-deep-gemm --dtype fp8_w8a8
Batch size: 1, config: {'BLOCK_SIZE_M': 16, 'BLOCK_SIZE_N': 32, 'BLOCK_SIZE_K': 64, 'GROUP_SIZE_M': 1}
Kernel time: 114.16 us
Batch size: 2, config: {'BLOCK_SIZE_M': 16, 'BLOCK_SIZE_N': 32, 'BLOCK_SIZE_K': 64, 'GROUP_SIZE_M': 1}
Kernel time: 142.56 us
Batch size: 4, config: {'BLOCK_SIZE_M': 16, 'BLOCK_SIZE_N': 32, 'BLOCK_SIZE_K': 64, 'GROUP_SIZE_M': 1}
Kernel time: 182.89 us
Batch size: 8, config: {'BLOCK_SIZE_M': 16, 'BLOCK_SIZE_N': 32, 'BLOCK_SIZE_K': 64, 'GROUP_SIZE_M': 1}
Kernel time: 223.89 us
Batch size: 16, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 702.21 us
Batch size: 24, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 706.32 us
Batch size: 32, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 712.53 us
Batch size: 48, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 715.12 us
Batch size: 64, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 719.69 us
Batch size: 96, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 727.46 us
Batch size: 128, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 540.07 us
Batch size: 256, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 584.94 us
Batch size: 512, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 680.25 us
Batch size: 1024, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 913.30 us
Batch size: 1536, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 1143.87 us
Batch size: 2048, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 1432.49 us
Batch size: 3072, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 1881.03 us
Batch size: 4096, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 2346.28 us

# No deepgemm
Batch size: 1, config: {'BLOCK_SIZE_M': 16, 'BLOCK_SIZE_N': 32, 'BLOCK_SIZE_K': 64, 'GROUP_SIZE_M': 1}
Kernel time: 106.60 us
Batch size: 2, config: {'BLOCK_SIZE_M': 16, 'BLOCK_SIZE_N': 32, 'BLOCK_SIZE_K': 64, 'GROUP_SIZE_M': 1}
Kernel time: 132.80 us
Batch size: 4, config: {'BLOCK_SIZE_M': 16, 'BLOCK_SIZE_N': 32, 'BLOCK_SIZE_K': 64, 'GROUP_SIZE_M': 1}
Kernel time: 163.22 us
Batch size: 8, config: {'BLOCK_SIZE_M': 16, 'BLOCK_SIZE_N': 32, 'BLOCK_SIZE_K': 64, 'GROUP_SIZE_M': 1}
Kernel time: 188.97 us
Batch size: 16, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 234.29 us
Batch size: 24, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 237.68 us
Batch size: 32, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 243.71 us
Batch size: 48, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 247.41 us
Batch size: 64, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 251.17 us
Batch size: 96, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 261.22 us
Batch size: 128, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 269.07 us
Batch size: 256, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 399.63 us
Batch size: 512, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 574.25 us
Batch size: 1024, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 951.36 us
Batch size: 1536, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 1334.19 us
Batch size: 2048, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 1712.87 us
Batch size: 3072, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 2492.51 us
Batch size: 4096, config: {'BLOCK_SIZE_M': 64, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}
Kernel time: 3247.75 us

Qwen3

VLLM_USE_DEEP_GEMM=1 vllm bench throughput  --model Qwen/Qwen3-30B-A3B-FP8 --load-format dummy --input-len 1000 --output-len 100 --trust_remote_code --enforce-eager --enable-expert-parallel --quantization fp8
Throughput: 23.87 requests/s, 26204.10 total tokens/s, 2387.13 output tokens/s

# No deepgemm
Throughput: 40.88 requests/s, 44870.57 total tokens/s, 4087.60 output tokens/s

R1 (TP)

VLLM_USE_DEEP_GEMM=1 vllm bench throughput  --model deepseek-ai/DeepSeek-R1 --load-format dummy --input-len 1000 --output-len 100 --trust_remote_code --enforce-eager --enable-expert-parallel  -tp 8
Throughput: 5.94 requests/s, 6523.81 total tokens/s, 593.93 output tokens/s
# No deepgemm
Throughput: 8.52 requests/s, 9361.67 total tokens/s, 852.29 output tokens/s

R1 (DP+EP)

export VLLM_USE_DEEP_GEMM=1 
export VLLM_ALL2ALL_BACKEND="deepep_high_throughput"
export VLLM_RANDOMIZE_DP_DUMMY_INPUTS=1
vllm serve deepseek-ai/DeepSeek-R1        --load-format dummy        --trust-remote-code        --enforce-eager        --enable-expert-parallel        --data-parallel-size 8

vllm bench serve        --model deepseek-ai/DeepSeek-R1        --dataset-name random        --random-input-len 256        --random-output-len 100        --num-prompts 1000        --request-rate inf
============ Serving Benchmark Result ============
Successful requests:                     1000      
Benchmark duration (s):                  39.66     
Total input tokens:                      254242    
Total generated tokens:                  100000    
Request throughput (req/s):              25.21     
Output token throughput (tok/s):         2521.12   
Total Token throughput (tok/s):          8930.87   
---------------Time to First Token----------------
Mean TTFT (ms):                          3046.61   
Median TTFT (ms):                        3329.41   
P99 TTFT (ms):                           4115.32   
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          365.50    
Median TPOT (ms):                        363.06    
P99 TPOT (ms):                           374.11    
---------------Inter-token Latency----------------
Mean ITL (ms):                           365.50    
Median ITL (ms):                         355.88    
P99 ITL (ms):                            787.11    
==================================================

# No deepgemm
============ Serving Benchmark Result ============
Successful requests:                     1000      
Benchmark duration (s):                  75.61     
Total input tokens:                      254242    
Total generated tokens:                  100000    
Request throughput (req/s):              13.23     
Output token throughput (tok/s):         1322.57   
Total Token throughput (tok/s):          4685.11   
---------------Time to First Token----------------
Mean TTFT (ms):                          7876.30   
Median TTFT (ms):                        10187.52  
P99 TTFT (ms):                           11462.69  
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          680.14    
Median TPOT (ms):                        658.94    
P99 TPOT (ms):                           739.41    
---------------Inter-token Latency----------------
Mean ITL (ms):                           680.14    
Median ITL (ms):                         646.89    
P99 ITL (ms):                            1273.78   
==================================================

Acc on H100

VLLM_USE_DEEP_GEMM=1 lm_eval   --model vllm   --model_args "pretrained=Qwen/Qwen3-30B-A3B-FP8,max_model_len=32768,enforce_eager=True"   --trust_remote_code   --tasks gsm8k   --num_fewshot 5   --batch_size auto
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match||0.8317|±  |0.0103|
|     |       |strict-match    |     5|exact_match||0.8961|±  |0.0084|
# No deepgemm
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match||0.8271|±  |0.0104|
|     |       |strict-match    |     5|exact_match||0.8999|±  |0.0083|

VLLM_USE_DEEP_GEMM=1 lm_eval   --model vllm   --model_args "pretrained=Qwen/Qwen3-30B-A3B-FP8,data_parallel_size=2,max_model_len=32768,enable_expert_parallel=True,enforce_eager=True"   --trust_remote_code   --tasks gsm8k   --num_fewshot 5   --batch_size auto
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match||0.8180|±  |0.0106|
|     |       |strict-match    |     5|exact_match||0.8946|±  |0.0085|
# No deepgemm
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match||0.8143|±  |0.0107|
|     |       |strict-match    |     5|exact_match||0.8870|±  |0.0087|

Conclusion:

  • DeepGEMM is currently beneficial only for DeepSeek models; Qwen3 throughput regresses with it enabled.
  • For DeepSeek models, the integration brings large gains (about a 1.8x end-to-end performance improvement).

Follow-up TODOs:

  • Currently every shape is routed to the DeepGEMM branch, because DeepGEMM only supports the E8M0 scale on B200 and letting smaller shapes fall back to Triton would hurt accuracy. This can be optimized once E4M3 is supported (see the sketch after this list).
  • Document the best recipe for running MoE on B200.
  • Quantization currently uses DeepGEMM's reference Torch implementation, which can be slow; it can be optimized further.
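
To make the E8M0 constraint concrete, here is a minimal sketch of rounding a per-block dequantization scale to a power of two (E8M0 stores only an exponent, so only power-of-two scales are representable). The function name and use of the E4M3 max constant are illustrative assumptions, not vLLM's actual quantization path:

import torch

FP8_E4M3_MAX = torch.finfo(torch.float8_e4m3fn).max  # 448.0

def block_dequant_scale(x_block: torch.Tensor,
                        use_e8m0: bool = True) -> torch.Tensor:
    # Scale so the block's max magnitude maps to the FP8 E4M3 max.
    amax = x_block.abs().amax().clamp(min=1e-12)  # avoid log2(0)
    scale = amax / FP8_E4M3_MAX
    if use_e8m0:
        # E8M0 has no mantissa bits: round the scale up to a power of
        # two so the block's largest value stays representable.
        scale = torch.exp2(torch.ceil(torch.log2(scale)))
    return scale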


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default; only the fastcheck CI runs, covering a small but essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the performance (Performance-related issues) label Jun 25, 2025

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @yewentao256, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request is a significant migration effort to integrate the breaking API changes introduced in DeepGEMM v2.0 into the vLLM codebase. The primary goal is to maintain compatibility with the latest version of the DeepGEMM library, which is crucial for optimized FP8 GEMM operations. The changes involve updating function calls across various modules, adjusting import paths for DeepGEMM utilities, and temporarily incorporating some DeepGEMM helper functions directly into vLLM until they are officially re-exposed in DeepGEMM's public API.

Highlights

  • DeepGEMM API Migration: Updated all usages of the DeepGEMM library to its new v2.0 API, reflecting significant interface changes. This includes renaming core GEMM functions (e.g., gemm_fp8_fp8_bf16_nt to fp8_gemm_nt) and grouped GEMM functions (e.g., m_grouped_gemm_fp8_fp8_bf16_nt_masked to fp8_m_grouped_gemm_nt_masked).
  • Utility Function Relocation: Adjusted imports for DeepGEMM utility functions (like per_token_cast_to_fp8 and per_block_cast_to_fp8) to use their new locations within the deep_gemm.utils.math module. Additionally, some previously DeepGEMM-provided utilities (ceil_div, get_tma_aligned_size, get_col_major_tma_aligned_tensor) were copied locally into vLLM's fp8_utils.py as a temporary measure, with a TODO to remove them once DeepGEMM re-exposes them publicly (a simplified sketch of two of these helpers appears after this list).
  • Benchmark and Test Updates: Modified existing benchmarks (benchmark_fp8_block_dense_gemm.py, benchmark_moe.py) and unit tests (test_deepep_deepgemm_moe.py, test_block_fp8.py) to align with the new DeepGEMM API. This ensures continued correctness and performance evaluation of FP8 operations within vLLM.
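
For context, the temporarily vendored helpers are small. A simplified sketch of what ceil_div and get_tma_aligned_size compute, mirroring DeepGEMM's originals (treat this as illustrative rather than the exact vendored code):

def ceil_div(x: int, y: int) -> int:
    # Integer ceiling division: smallest n such that n * y >= x.
    return (x + y - 1) // y

def get_tma_aligned_size(x: int, element_size: int) -> int:
    # TMA requires 16-byte-aligned strides, so round x up until
    # x * element_size is a multiple of 16 bytes.
    tma_alignment_bytes = 16
    assert tma_alignment_bytes % element_size == 0
    alignment = tma_alignment_bytes // element_size
    return ceil_div(x, alignment) * alignment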

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request integrates the new DeepGEMM library, updating the API interfaces and migrating to the new version. The changes include modifications to benchmark scripts and test files to accommodate the new DeepGEMM API. The code has been reviewed and suggestions have been provided to improve code clarity and efficiency.


mergify bot commented Jul 2, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @yewentao256.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jul 2, 2025
@mergify mergify bot removed the needs-rebase label Jul 2, 2025
Signed-off-by: yewentao256 <[email protected]>
@yewentao256 yewentao256 marked this pull request as draft July 3, 2025 21:20
Signed-off-by: yewentao256 <[email protected]>
@yewentao256 yewentao256 requested a review from aarnphm as a code owner July 10, 2025 19:28
@mergify mergify bot added the frontend label Jul 10, 2025

mergify bot commented Jul 10, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @yewentao256.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jul 10, 2025
@yewentao256 yewentao256 changed the title [Do Not Merge Now] Integrate new deepgemm [Feature] Integrate new deepgemm Jul 10, 2025
@mgoin mgoin changed the title [Feature] Integrate new deepgemm [Feature] Integrate SM100 DeepGEMM support Jul 10, 2025
@mergify mergify bot removed the needs-rebase label Jul 10, 2025

@mgoin mgoin left a comment


LGTM, thank you for iterating on this

@mgoin mgoin enabled auto-merge (squash) July 10, 2025 23:01
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Jul 10, 2025
@mgoin mgoin added quantization deepseek Related to DeepSeek models ready ONLY add when PR is ready to merge/full CI is needed and removed ready ONLY add when PR is ready to merge/full CI is needed labels Jul 10, 2025
@simon-mo simon-mo merged commit e2de455 into vllm-project:main Jul 11, 2025
63 of 71 checks passed
Chen-zexi pushed a commit to Chen-zexi/vllm that referenced this pull request Jul 13, 2025
@yewentao256 yewentao256 deleted the wye-integrate-new-deepgemm branch July 14, 2025 15:13
patrickvonplaten pushed a commit to patrickvonplaten/vllm that referenced this pull request Jul 15, 2025
LyrisZhong pushed a commit to LyrisZhong/vllm that referenced this pull request Jul 23, 2025
avigny pushed a commit to avigny/vllm that referenced this pull request Jul 31, 2025
Pradyun92 pushed a commit to Pradyun92/vllm that referenced this pull request Aug 6, 2025
npanpaliya pushed a commit to odh-on-pz/vllm-upstream that referenced this pull request Aug 6, 2025
Labels: deepseek (Related to DeepSeek models), frontend, performance (Performance-related issues), quantization, ready (ONLY add when PR is ready to merge/full CI is needed)