
[Bugfix] Fix Maverick correctness by filling zero to cache space in cutlass_moe #2


Closed
wants to merge 4 commits into from

Conversation

minosfuture
Owner

@minosfuture minosfuture commented Jun 25, 2025

Purpose

vllm-project#19667 changed the workspace creation from torch.zeros to torch.empty. This ends up causing correctness issues for models using cutlass_moe, e.g. Maverick in our test case. This PR fixes the issue by explicitly filling zeros in cutlass_moe.
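For context, torch.empty (like numpy.empty) returns an uninitialized buffer whose contents are whatever happened to be in that memory, while torch.zeros guarantees an all-zero buffer. A minimal NumPy sketch of the distinction (NumPy used here as a stand-in for the torch allocators named above):

```python
import numpy as np

# zeros() guarantees an all-zero buffer; empty() does not initialize
# memory, so any code that reads unwritten regions sees arbitrary values.
z = np.zeros((4, 4), dtype=np.float32)
e = np.empty((4, 4), dtype=np.float32)  # contents are undefined

assert np.all(z == 0)  # always holds
# No assertion on the values of `e` is safe: they are leftover memory.
```

Code that only ever reads workspace regions it has written is safe with empty(); code that reduces over the whole buffer (as the scale computation below does) is not.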

Test Plan

lm_eval, unit tests

Test Result

lm_eval results:

local-chat-completions (model=meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8,base_url=http://127.0.0.1:8081/v1/chat/completions,num_concurrent=32), gen_kwargs: (None), limit: 200.0, num_fewshot: 5, batch_size: 1

Tasks   Version  Filter            n-shot  Metric       Value  Stderr
gsm8k   3        flexible-extract  5       exact_match   0.935  ± 0.0175
                 strict-match      5       exact_match   0.920  ± 0.0192

unit test stability verified:

  • without c1.fill_(0), the following one-liner verifies stable failure:
for i in {1..10}; do echo $i; pytest -s tests/kernels/moe/test_cutlass_moe.py -k "test_run_cutlass_moe_fp8 or test_cutlass_moe_8_bit_EP_large" -v 2>&1 > /dev/null && { echo "shouldn't succeed"; exit 1; } done
  • with c1.fill_(0), the following verifies stable success:
for i in {1..10}; do echo $i; pytest -s tests/kernels/moe/test_cutlass_moe.py -k "test_run_cutlass_moe_fp8 or test_cutlass_moe_8_bit_EP_large" -v 2>&1 > /dev/null || { echo "should succeed"; exit 1; } done

(Optional) Documentation Update


@@ -176,6 +176,7 @@ def run_cutlass_moe_fp8(
c1 = _resize_cache(workspace13, (M * topk, N * 2))
c2 = _resize_cache(workspace2, (M * topk, N))
c3 = _resize_cache(workspace13, (M * topk, K))
c1.fill_(0)


great! any way to capture this in test_cutlass_moe?

Owner Author


yep, added a couple unit tests

@minosfuture minosfuture force-pushed the fix_maverick_correctness branch from 52be3eb to 66c457b Compare June 27, 2025 05:07
@minosfuture minosfuture force-pushed the fix_maverick_correctness branch from 66c457b to 25d3af8 Compare June 27, 2025 06:07
Comment on lines +180 to +181
if expert_map is not None:
c1.fill_(0)

@ElizaWszola ElizaWszola Jul 1, 2025


One more tiny thing: can you check if we need to do this if per_act_token is true?

Owner Author


no we don't. I figured out that the root cause is the random data in the unused space of c1: it caused the scale (computed over the whole of c1) to be larger, resulting in precision loss for the actual data. So if we use per_act_token==True, the scales won't be impacted. Let me update the PR in vllm-project.
I'll close this PR to avoid confusion -- it was an experimental PR for early review.
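The precision-loss mechanism described above can be sketched numerically. The following is an illustrative NumPy model (not vLLM's actual kernel code; the quantization helper, sizes, and the 1e4 "garbage" value are assumptions for demonstration): garbage in the unused tail of a workspace inflates a per-tensor max-abs scale and degrades the real data, while per-row ("per-token") scales are untouched.

```python
import numpy as np

def quantize_dequantize(x, scale):
    # Simulated symmetric int8 quantize/dequantize round trip.
    q = np.clip(np.round(x / scale), -127, 127)
    return q * scale

rng = np.random.default_rng(0)
M, N = 4, 8
data = rng.normal(scale=1.0, size=(M, N)).astype(np.float32)

# Workspace larger than the rows actually used; the tail is never written,
# so under torch.empty-style allocation it holds arbitrary leftover values.
workspace = np.empty((2 * M, N), dtype=np.float32)
workspace[:M] = data
workspace[M:] = 1e4  # stand-in for large leftover garbage

# Per-tensor scale over the WHOLE workspace is inflated by the garbage,
# so the real data quantizes to almost nothing.
scale_tensor = np.abs(workspace).max() / 127.0
err_tensor = np.abs(quantize_dequantize(data, scale_tensor) - data).max()

# Per-row scales computed only over the real rows are unaffected.
scale_token = np.abs(data).max(axis=1, keepdims=True) / 127.0
err_token = np.abs(quantize_dequantize(data, scale_token) - data).max()

assert err_tensor > err_token  # garbage-inflated scale loses precision
```

This matches the observation in the thread: zero-filling (or restricting the reduction to written rows) only matters when the scale is computed over the whole buffer.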

@minosfuture
Owner Author

move to vllm-project#20167. closing.

@minosfuture minosfuture closed this Jul 1, 2025