
Conversation

@kaix-nv (Contributor) commented Nov 11, 2025

What does this PR do?

Type of change: new feature

Overview:

  • Adds the sparse attention calibration algorithm (an illustrative sketch of what such a calibration loop can look like follows this list).
  • Adds sparse attention integration to the llm_eval evaluation examples, including lm_eval and MMLU.
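
The PR description does not spell out the calibration algorithm itself, so the sketch below is only a generic, hypothetical calibration loop: it bisects over a scale λ until a target fraction of attention weights on calibration data falls below a length-dependent threshold λ / k_len. The function name, the λ-to-threshold mapping, and the target-sparsity criterion are assumptions made for illustration, not the algorithm implemented in this PR.

import torch

def calibrate_lambda(score_batches, target_sparsity=0.5, iters=30):
    """Illustrative only: pick lam so that roughly `target_sparsity` of the
    attention weights on calibration data fall below lam / k_len.
    The real calibration in this PR may use a different criterion."""
    def measured_sparsity(lam):
        skipped, total = 0, 0
        for scores in score_batches:  # raw attention logits, shape (..., q_len, k_len)
            k_len = scores.shape[-1]
            probs = torch.softmax(scores, dim=-1)
            skipped += (probs < lam / k_len).sum().item()
            total += probs.numel()
        return skipped / total

    lo, hi = 1e-3, 1e6  # search range for lam, explored by geometric bisection
    for _ in range(iters):
        mid = (lo * hi) ** 0.5
        if measured_sparsity(mid) < target_sparsity:
            lo = mid  # threshold too small, raise lam
        else:
            hi = mid
    return (lo * hi) ** 0.5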

Usage

import modelopt.torch.sparsity.attention_sparsity as mtsa

# Apply sparse attention with calibration
# (SKIP_SOFTMAX_CALIB is a predefined calibration config; see the
#  attention_sparsity config module)
model = mtsa.sparsify(model, config=SKIP_SOFTMAX_CALIB)

# Print summary - after calibration this shows the actual (dynamic) thresholds
mtsa.print_sparse_attention_summary(model)
# Output:
# Method: flash_skip_softmax, Threshold: Dynamic (λ=437.395926)
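
For intuition, one way a "dynamic" threshold of this kind can work is to skip attention entries whose softmax weight is provably below λ scaled by the key length. The sketch below illustrates that idea only; how λ actually maps to a threshold in flash_skip_softmax is not specified here, so treat the λ / k_len form and the function name as assumptions.

import torch
import torch.nn.functional as F

def skip_softmax(scores: torch.Tensor, lam: float) -> torch.Tensor:
    """Illustrative 'skip softmax': drop attention entries whose softmax
    weight is guaranteed to be below a length-dependent threshold lam / k_len.
    Not the modelopt implementation."""
    k_len = scores.shape[-1]
    # Dynamic threshold: shrinks as the context grows (clamped so the row max survives).
    threshold = scores.new_tensor(min(lam / k_len, 1.0))
    row_max = scores.amax(dim=-1, keepdim=True)  # flash-attention style running max
    # exp(score - row_max) upper-bounds each entry's softmax weight (the
    # denominator is at least 1), so entries below the log-threshold can be
    # dropped without exceeding the allowed per-entry error.
    keep = (scores - row_max) >= torch.log(threshold)
    return F.softmax(scores.masked_fill(~keep, float("-inf")), dim=-1)

With λ fixed by calibration, longer contexts get a proportionally smaller per-entry threshold, which is one plausible reading of the "Dynamic (λ=...)" summary line above.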

# Or run the bundled example scripts
# HuggingFace sparse attention example
python examples/llm_sparsity/attention_sparsity/hf_sa.py \
    --pyt_ckpt_path Qwen/Qwen3-4B \
    --sparse_attn skip_softmax_calib \
    --verify_output

# LM Eval with sparse attention
python examples/llm_eval/lm_eval_hf.py \
    --model hf \
    --model_args pretrained=Qwen/Qwen3-4B \
    --tasks hellaswag \
    --sparse_cfg SKIP_SOFTMAX_CALIB

# MMLU with sparse attention (NEW)
python examples/llm_eval/mmlu.py \
    --model_name causal \
    --model_path Qwen/Qwen3-4B \
    --sparse_cfg SKIP_SOFTMAX_CALIB
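
For reference, here is a minimal sketch of how an evaluation script can wire up a --sparse_cfg flag like the ones above: load the model, resolve the named config, sparsify, then evaluate as usual. The getattr-based lookup and the helper name load_sparse_model are assumptions; the actual lm_eval_hf.py / mmlu.py integration may resolve the config differently.

import argparse

from transformers import AutoModelForCausalLM

import modelopt.torch.sparsity.attention_sparsity as mtsa

def load_sparse_model(model_path: str, sparse_cfg: str | None = None):
    """Load an HF causal LM and optionally apply a named sparse-attention config."""
    model = AutoModelForCausalLM.from_pretrained(
        model_path, torch_dtype="auto", device_map="auto"
    )
    if sparse_cfg:
        # Assumes named configs (e.g. SKIP_SOFTMAX_CALIB) are exposed as package
        # attributes; the real example scripts may map the name differently.
        config = getattr(mtsa, sparse_cfg)
        model = mtsa.sparsify(model, config=config)
        mtsa.print_sparse_attention_summary(model)
    return model

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--model_path", required=True)
    parser.add_argument("--sparse_cfg", default=None)
    args = parser.parse_args()
    model = load_sparse_model(args.model_path, args.sparse_cfg)
    # ... hand `model` to lm_eval / MMLU evaluation as usual ...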

Testing

Before your PR is "Ready for review"

  • Make sure you read and follow Contributor guidelines and your commits are signed.
  • Is this change backward compatible?: Yes/No
  • Did you write any new necessary tests?: Yes/No
  • Did you add or update any necessary documentation?: Yes/No
  • Did you update Changelog?: Yes/No

Additional Information

@kaix-nv kaix-nv requested review from a team as code owners November 11, 2025 22:38
@kaix-nv kaix-nv requested review from RalphMao and removed request for RalphMao November 11, 2025 22:38
@codecov codecov bot commented Nov 11, 2025

Codecov Report

❌ Patch coverage is 81.33484% with 165 lines in your changes missing coverage. Please review.
✅ Project coverage is 74.69%. Comparing base (8188a01) to head (c9d7008).

Files with missing lines | Patch % | Lines
...rsity/attention_sparsity/calibration/calibrator.py | 32.45% | 77 Missing ⚠️
...arsity/attention_sparsity/calibration/calibrate.py | 51.72% | 28 Missing ⚠️
...ch/sparsity/attention_sparsity/sparse_attention.py | 75.36% | 17 Missing ⚠️
...pt/torch/sparsity/attention_sparsity/conversion.py | 91.85% | 11 Missing ⚠️
...y/attention_sparsity/methods/flash_skip_softmax.py | 91.00% | 9 Missing ⚠️
...orch/sparsity/attention_sparsity/model_sparsify.py | 76.00% | 6 Missing ⚠️
...sparsity/attention_sparsity/calibration/dataset.py | 97.43% | 5 Missing ⚠️
...ch/sparsity/attention_sparsity/methods/registry.py | 80.76% | 5 Missing ⚠️
...delopt/torch/sparsity/attention_sparsity/config.py | 95.40% | 4 Missing ⚠️
modelopt/torch/sparsity/attention_sparsity/mode.py | 90.32% | 3 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #538      +/-   ##
==========================================
+ Coverage   74.37%   74.69%   +0.32%     
==========================================
  Files         182      196      +14     
  Lines       18219    19103     +884     
==========================================
+ Hits        13550    14269     +719     
- Misses       4669     4834     +165     

☔ View full report in Codecov by Sentry.
@kaix-nv kaix-nv force-pushed the kaix/sparse_attention_calibration branch from 8c7ee86 to da6f627 on November 12, 2025 00:17
@kaix-nv kaix-nv changed the title from "[3/n] Adds sparse attention integration to the llm_eval examples" to "[OMNIML-2850] [3/n] Adds sparse attention integration to the llm_eval examples" on Nov 12, 2025
@kaix-nv kaix-nv changed the title from "[OMNIML-2850] [3/n] Adds sparse attention integration to the llm_eval examples" to "[OMNIML-2850][3/n] Adds sparse attention integration to the llm_eval examples" on Nov 12, 2025
@kaix-nv kaix-nv changed the title from "[OMNIML-2850][3/n] Adds sparse attention integration to the llm_eval examples" to "[OMNIML-2850] [3/n] Adds sparse attention integration to the llm_eval examples" on Nov 12, 2025
@kaix-nv kaix-nv force-pushed the kaix/sparse_attention_calibration branch from da6f627 to 9a0bb6e on November 12, 2025 21:17
@kaix-nv kaix-nv force-pushed the kaix/sparse_attention_calibration branch 2 times, most recently from a18d230 to 525a119, on November 13, 2025 07:10
@kaix-nv kaix-nv force-pushed the kaix/sparse_attention_calibration branch from 525a119 to c9d7008 on November 13, 2025 07:40
@kaix-nv kaix-nv changed the title from "[OMNIML-2850] [3/n] Adds sparse attention integration to the llm_eval examples" to "[OMNIML-2850] [3/n] Adds sparse attention calibration; Adds llm_eval support" on Nov 14, 2025
@kaix-nv kaix-nv changed the title from "[OMNIML-2850] [3/n] Adds sparse attention calibration; Adds llm_eval support" to "[OMNIML-2850] [3/n] Adds sparse attention calibration" on Nov 14, 2025