
Conversation

nv-yilinf
Collaborator

@nv-yilinf nv-yilinf commented Jul 23, 2025

Summary by CodeRabbit

  • Bug Fixes
    • Improved handling of attention settings for certain backends to ensure stability and compatibility when using chunked attention.

Description

XQA does not support chunked attention at the moment, while MMHA supports chunked attention but is slower. In this PR we trick TRT-LLM into selecting the XQA kernels when we are sure that chunked attention will not be needed, i.e., when max_seqlen is smaller than the attention chunk size.
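
As a rough illustration of the selection rule, here is a minimal sketch (the function name and signature are hypothetical; the actual change lives in the Llama4Attention constructor and guards on the model config's maximum token count, as quoted in the review comment further down):

    # Minimal, hypothetical sketch of the guard; not the PR's exact code.
    from typing import Optional

    def resolve_attention_chunk_size(backend: str,
                                     max_seq_len: int,
                                     attention_chunk_size: Optional[int]) -> Optional[int]:
        """Return the chunk size to use, or None to disable chunked attention."""
        # If every sequence fits inside a single chunk, chunking never actually
        # triggers, so disabling it lets the TRTLLM backend select the faster
        # XQA kernels instead of falling back to the slower MMHA path.
        if (backend == "TRTLLM" and attention_chunk_size is not None
                and max_seq_len < attention_chunk_size):
            return None
        return attention_chunk_size

    # Example: a deployment whose longest sequence is shorter than the chunk
    # size runs without chunked attention; longer deployments keep it enabled.
    assert resolve_attention_chunk_size("TRTLLM", 4096, 8192) is None
    assert resolve_attention_chunk_size("TRTLLM", 16384, 8192) == 8192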

This PR has to land after #6282

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
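
For example, a single pre-merge stage could be run with fail-fast disabled via an invocation like the following (the stage name is taken from the example strings above and may not correspond to a real stage):

    /bot run --stage-list "A10-PyTorch-1" --disable-fail-fast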

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can cause the top of tree to break.

@nv-yilinf nv-yilinf requested a review from a team as a code owner July 23, 2025 18:45
Contributor

coderabbitai bot commented Jul 23, 2025

📝 Walkthrough


A conditional check was added to the Llama4Attention constructor to handle the "TRTLLM" backend. If chunked attention is enabled but the model's max_seq_len is smaller than the chunk size, chunked attention is disabled by setting attention_chunk_size to None. No public API signatures were changed.

Changes

Llama model attention adjustment
  • File(s): tensorrt_llm/_torch/models/modeling_llama.py
  • Change Summary: Added conditional to disable chunked attention for "TRTLLM" backend if max_seq_len is smaller than attention_chunk_size.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

Suggested reviewers

  • yilin-void
  • chzblych
  • HuiGao-NV
  • yuxianq
  • pcastonguay

Poem

In the world of Llama, a tweak so slight,
Chunked attention now checks if it’s right.
If tokens are few and the backend is keen,
It quietly disables, keeps the logic clean.
A hop and a skip, the rabbit approves—
Careful attention as the model improves! 🐇✨



📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b0ea6c5 and a9aef6e.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/models/modeling_llama.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tensorrt_llm/_torch/models/modeling_llama.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check


@nv-yilinf nv-yilinf requested a review from schetlur-nv July 23, 2025 18:45
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
tensorrt_llm/_torch/models/modeling_llama.py (1)

72-76: LGTM! Logic correctly disables chunked attention for small sequences.

The conditional check properly identifies when chunked attention provides no benefit (when max_num_tokens < attention_chunk_size) and disables it to allow faster XQA kernel selection. This aligns well with the PR objective to improve performance for small sequence lengths.

Consider adding a more descriptive comment explaining the performance rationale:

        else:
-            # Disable chunked attention when max_num_tokens is smaller than attention_chunk_size
-            # TODO: Remove this after all attention kernels in TRTLLM backend support chunked attention
+            # Disable chunked attention when max_num_tokens < chunk_size to enable faster XQA kernels.
+            # XQA kernels don't support chunked attention but are faster for small sequences.
+            # TODO: Remove this after all attention kernels in TRTLLM backend support chunked attention
            if attention_chunk_size and model_config.max_num_tokens < attention_chunk_size:
                attention_chunk_size = None
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cf4f4e8 and 8358e01.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/models/modeling_llama.py (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
tensorrt_llm/_torch/models/modeling_llama.py (1)

Learnt from: yechank-nvidia
PR: #6254
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:1201-1204
Timestamp: 2025-07-22T09:22:14.726Z
Learning: In TensorRT-LLM's multimodal processing pipeline, shared tensor recovery using from_shared_tensor() is only needed during the context phase. Generation requests reuse the already-recovered tensor data and only need to call strip_for_generation() to remove unnecessary multimodal data while preserving the recovered tensors. This avoids redundant tensor recovery operations during generation.

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

Collaborator

@schetlur-nv schetlur-nv left a comment


LGTM

@nv-yilinf
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #13824 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #13824 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10394 completed with status: 'FAILURE'

@nv-yilinf
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #13830 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #13830 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10400 completed with status: 'FAILURE'

@nv-yilinf
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #14017 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #14017 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10567 completed with status: 'FAILURE'

@nv-yilinf
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #14038 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #14038 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10587 completed with status: 'SUCCESS'

@nv-yilinf
Collaborator Author

/bot run

@nv-yilinf nv-yilinf enabled auto-merge (squash) August 5, 2025 15:27
@tensorrt-cicd
Collaborator

PR_Github #14165 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #14165 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #10691 completed with status: 'FAILURE'

@nv-yilinf nv-yilinf changed the title from "[Perf] Improve Llama4 performance for small max_seqlen cases" to "[https://nvbugspro.nvidia.com/bug/5398180] Improve Llama4 performance for small max_seqlen cases" on Aug 5, 2025
Signed-off-by: Yilin Fan <[email protected]>
@nv-yilinf nv-yilinf changed the title from "[https://nvbugspro.nvidia.com/bug/5398180] Improve Llama4 performance for small max_seqlen cases" to "[https://nvbugspro.nvidia.com/bug/5398180][feat] Improve Llama4 performance for small max_seqlen cases" on Aug 5, 2025
@nv-yilinf
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #14625 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #14625 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11049 completed with status: 'FAILURE'

@nv-yilinf nv-yilinf requested a review from a team as a code owner August 8, 2025 19:17
@nv-yilinf nv-yilinf requested a review from achartier August 8, 2025 19:17
@nv-yilinf
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #14639 [ run ] triggered by Bot

@nv-yilinf
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #14642 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #14639 [ run ] completed with state ABORTED

@nv-yilinf nv-yilinf requested a review from a team as a code owner August 8, 2025 20:57
@nv-yilinf nv-yilinf requested a review from kris1025 August 8, 2025 20:57
@nv-yilinf
Collaborator Author

/bot run

1 similar comment
@nv-yilinf
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #14644 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #14642 [ run ] completed with state ABORTED

@tensorrt-cicd
Collaborator

PR_Github #14645 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #14644 [ run ] completed with state ABORTED

@nv-yilinf nv-yilinf force-pushed the optimize-chunked-attention-for-small-max-seqlen branch from a9d76c8 to d3ef4f7 on August 8, 2025 21:51
Signed-off-by: Yilin Fan <[email protected]>
@nv-yilinf nv-yilinf force-pushed the optimize-chunked-attention-for-small-max-seqlen branch from d3ef4f7 to d41e955 on August 8, 2025 21:57
@nv-yilinf
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #14652 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #14645 [ run ] completed with state ABORTED

@tensorrt-cicd
Collaborator

PR_Github #14652 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11061 completed with status: 'SUCCESS'

@nv-yilinf nv-yilinf merged commit d643aef into NVIDIA:main Aug 9, 2025
4 checks passed
@nv-yilinf nv-yilinf deleted the optimize-chunked-attention-for-small-max-seqlen branch September 4, 2025 16:38
nv-yilinf added a commit to nv-yilinf/TensorRT-LLM that referenced this pull request Sep 10, 2025