
Conversation

@brb-nv (Collaborator) commented Jul 18, 2025

Description

Currently, Gemma3 VLM is limited to a single image per sample. This PR lifts that restriction.

Test images come from here: http://vision.stanford.edu/aditya86/ImageNetDogs/menu_frame.html

$ python3 examples/llm-api/quickstart_multimodal.py --model_dir ../random/hf_models/gemma-3-4b-it/ --modality image --prompt "Compare the three images and explain differences." --media "http://vision.stanford.edu/aditya86/ImageNetDogs/images/n02091032-Italian_greyhound/n02091032_8855.jpg" "http://vision.stanford.edu/aditya86/ImageNetDogs/images/n02085620-Chihuahua/n02085620_10131.jpg" "http://vision.stanford.edu/aditya86/ImageNetDogs/images/n02109047-Great_Dane/n02109047_10414.jpg" --image_format pil --attention_backend FLASHINFER --disable_kv_cache_reuse --max_tokens 8192

Formatted output looks like this:
[screenshot of the formatted output]

Also, attaching stdout from terminal for reference.
multi_image_per_sample.txt
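
For reference, the shape of a multi-image sample after this change: one prompt paired with a list of images. Below is a minimal sketch, assuming the `multi_modal_data["image"]` list layout that the review comments below also reference; it is not the actual quickstart_multimodal.py code.

```python
from io import BytesIO

import requests
from PIL import Image

urls = [
    "http://vision.stanford.edu/aditya86/ImageNetDogs/images/n02091032-Italian_greyhound/n02091032_8855.jpg",
    "http://vision.stanford.edu/aditya86/ImageNetDogs/images/n02085620-Chihuahua/n02085620_10131.jpg",
    "http://vision.stanford.edu/aditya86/ImageNetDogs/images/n02109047-Great_Dane/n02109047_10414.jpg",
]

# Load every URL into a PIL image (matches --image_format pil in the command above).
images = [Image.open(BytesIO(requests.get(u, timeout=30).content)).convert("RGB") for u in urls]

# One sample now carries a list of images instead of exactly one.
sample = {
    "prompt": "Compare the three images and explain differences.",
    "multi_modal_data": {"image": images},
}
```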

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
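
For example, a few invocations composed from the flags documented above (illustrative only):

/bot run
/bot run --disable-fail-fast --gpu-type "A30, H100_PCIe"
/bot run --stage-list "A10-PyTorch-1" --debug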

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping tests without due care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing results without due care and validation can break the top of tree.

@brb-nv brb-nv requested review from a team as code owners July 18, 2025 23:32
@brb-nv brb-nv requested review from FrankD412, hyukn and nv-yilinf July 18, 2025 23:32
coderabbitai bot (Contributor) commented Jul 18, 2025

Warning

Rate limit exceeded

@brb-nv has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 4 minutes and 30 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between b3eec8b and c34706d.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/models/modeling_gemma3vl.py (1 hunks)

Walkthrough

Audio modality support is added to the multimodal input utilities, including audio loading functions and placeholder handling for the new "phi4mm" model group. The Gemma3 vision-language model's preprocessing is updated to allow multiple images and add debug prints. Test code for Gemma3 is updated to use a weight mapper during model loading.

Changes

File(s) Change Summary
tensorrt_llm/_torch/models/modeling_gemma3vl.py Removed single-image restriction; added debug prints for image and tensor shapes; adjusted image processing.
tensorrt_llm/inputs/utils.py Added audio loading (sync/async); integrated "phi4mm" model group; extended placeholder and input handling.
tests/unittest/_torch/modeling/test_modeling_gemma3.py Updated test to use Gemma3HfWeightMapper during model weight loading.
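
As a side note on the audio change summarized above, here is a minimal sketch of sync/async audio loading; the librosa-based implementation and the function names are assumptions for illustration, not the actual tensorrt_llm/inputs/utils.py code.

```python
import asyncio

import librosa


def load_audio(path: str, sampling_rate: int = 16000):
    # Decode the audio file and resample to the requested rate.
    waveform, sr = librosa.load(path, sr=sampling_rate)
    return waveform, sr


async def async_load_audio(path: str, sampling_rate: int = 16000):
    # Run the blocking decode in a worker thread so the event loop stays responsive.
    return await asyncio.to_thread(load_audio, path, sampling_rate)
```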

Poem

In the warren of code, a new sound appears,
Audio hops in, joining images and peers.
Placeholders multiply, "phi4mm" leads the way,
Debug prints sparkle like dew in the hay.
With tests remapped and inputs anew,
This bunny’s excited—how about you?
🐇🎶🖼️

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

coderabbitai bot (Contributor) left a comment


Actionable comments posted: 0

🧹 Nitpick comments (6)
tensorrt_llm/_torch/models/modeling_gemma3vl.py (3)

48-51: Remove commented-out validation code.

Since the single-image restriction is being lifted, this commented-out validation code should be removed rather than left in place.

Apply this diff to remove the commented code:

-        # if images and len(images) != 1:
-        #     print(f"RECEIVED MORE THAN ONE IMAGE FOR PROCESSING. len(images): {len(images)}.")
-        # for img_idx, img in enumerate(images):
-        #     print(f"[Gemma3InputProcessor::_preprocess] img_idx: {img_idx}, img.shape: {img.shape}")

65-69: Consider removing commented-out debug print statements from production code.

While helpful for development, these commented-out debug print statements should be removed before merging to maintain clean code.

Apply this diff to remove the commented debug prints:

-        # for img_idx, pixel_value in enumerate(pixel_values):
-        #     print(f"[Gemma3InputProcessor::_preprocess] pixel_idx: {img_idx}, pixel_value.shape: {pixel_value.shape}")
-
-        # print(f"[Gemma3InputProcessor::_preprocess] input_ids: {input_ids}, pixel_values: {pixel_values}")
-

202-205: Remove debug print statements from production code.

These debug print statements should be removed before merging to avoid cluttering production logs and maintain code cleanliness.

Apply this diff to remove the debug prints:

-            print(f"[Gemma3VLM::forward] pixel_values concat shape: {torch.cat(pixel_values).shape}")
             image_features = self._get_image_features(
                 pixel_values=torch.cat(pixel_values))
-            print(f"[Gemma3VLM::forward] image_features shape: {image_features.shape}")
tensorrt_llm/inputs/utils.py (3)

441-441: Remove debug print statement from production code.

This debug print should be removed before merging to avoid cluttering production logs.

Apply this diff to remove the debug print:

-    print(f"[default_multimodal_input_loader::convert_to_conversation_message] prompts: {prompts}, media: {media}, modality: {modality}")

483-487: Remove debug instrumentation from production code.

These debug print statements should be removed before merging for cleaner production code.

Apply this diff to remove the debug prints:

-    idx = 0
     for prompt, media in zip(prompts, media):
-        print(f"[default_multimodal_input_loader::apply_chat_template] idx: {idx}, prompt: {prompt}, media: {media}")
         conv = convert_to_conversation_message(prompt, media, modality)
-        print(f"[default_multimodal_input_loader::apply_chat_template] conv: {conv}")

507-511: Remove debug prints and unnecessary counter variable.

The debug prints and the idx counter variable should be removed for production code.

Apply this diff to remove the debug instrumentation:

-        idx += 1
-
-    print(f"[default_multimodal_input_loader::inputs] inputs[-1]: {inputs[-1]}")
-    # for img_idx, img in enumerate(inputs[-1]["multi_modal_data"]["image"]):
-    #     print(f"[default_multimodal_input_loader::inputs] img_idx: {img_idx}, img.shape: {img.shape}")
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 152e2df and 90a893a.

📒 Files selected for processing (5)
  • tensorrt_llm/_torch/attention_backend/flashinfer.py (2 hunks)
  • tensorrt_llm/_torch/models/modeling_gemma3vl.py (2 hunks)
  • tensorrt_llm/bench/benchmark/utils/asynchronous.py (1 hunks)
  • tensorrt_llm/inputs/utils.py (3 hunks)
  • tests/unittest/_torch/modeling/test_modeling_gemma3.py (3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (7)
tensorrt_llm/bench/benchmark/utils/asynchronous.py (1)

50-52: LGTM! Enhanced error reporting improves debugging.

The change from a generic error message to including the actual exception details will significantly improve debugging capabilities when tasks fail during inference.
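
A hedged illustration of the kind of change described (the function and logger names are assumptions, not the actual asynchronous.py diff):

```python
import logging

logger = logging.getLogger(__name__)


async def drain(task):
    try:
        return await task
    except Exception as exc:
        # Before: a generic "task failed" message. After: include the exception
        # itself so failures during inference are diagnosable from the logs.
        logger.error("Task failed during inference: %r", exc)
        raise
```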

tensorrt_llm/_torch/attention_backend/flashinfer.py (2)

300-309: Excellent cache management improvement for attention plan wrappers.

The logic correctly distinguishes between plan parameters with and without custom attention masks. Removing entries with non-trivial attention_mask_data from the cache is the right approach since these are specific to individual forward passes and shouldn't persist for subsequent calls.
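
To illustrate the idea, a sketch under assumed names (not the actual FlashInfer backend code): cached plan parameters that carry per-forward attention_mask_data are evicted so the next forward pass re-plans with its own mask.

```python
def flush_masked_plan_params(plan_cache: dict) -> None:
    # Plans created with a custom attention mask are specific to one forward
    # pass and must not be reused on subsequent calls.
    stale = [key for key, params in plan_cache.items()
             if params.get("attention_mask_data") is not None]
    for key in stale:
        del plan_cache[key]
```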


435-435: Minor comment punctuation fix.

Good catch on the comment formatting.

tensorrt_llm/_torch/models/modeling_gemma3vl.py (1)

53-57: Excellent update to support multiple images per sample.

The changes correctly remove the single image restriction and pass the full list of images to the processor, aligning perfectly with the PR objective.

tests/unittest/_torch/modeling/test_modeling_gemma3.py (3)

13-14: Good addition of AttentionMetadata import for type checking.

The import addition supports the new test helper method's type annotations.


218-230: Excellent test coverage for attention plan cache management.

This helper method provides thorough validation of the new cache-flushing behavior:

  1. Verifies that plan parameters with custom masks exist after forward pass
  2. Confirms they are properly flushed after calling prepare()

The test correctly validates the cache management improvement introduced in the FlashInfer backend.
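
A hedged sketch of what such a helper can look like (attribute and method names are assumptions, not the real test code):

```python
def assert_masked_plans_flushed(attn_metadata) -> None:
    # After forward(): at least one cached plan should carry a custom mask.
    assert any(p.attention_mask_data is not None
               for p in attn_metadata.plan_params_cache)
    # prepare() for the next step should evict those mask-specific plans.
    attn_metadata.prepare()
    assert all(p.attention_mask_data is None
               for p in attn_metadata.plan_params_cache)
```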


346-346: Good integration of cache verification test.

The test helper is appropriately called after the forward pass to verify the cache management behavior.

@brb-nv brb-nv force-pushed the user/brb/multiple-images-per-sample branch 2 times, most recently from 61b0837 to 2cfb277, on July 18, 2025 23:39
@brb-nv brb-nv requested review from tijyojwad and 2ez4bz July 18, 2025 23:47
@brb-nv brb-nv force-pushed the user/brb/multiple-images-per-sample branch from 2cfb277 to c34706d on July 18, 2025 23:48
@brb-nv (Collaborator, Author) commented Jul 18, 2025

/bot run

@tensorrt-cicd (Collaborator) commented:

PR_Github #12355 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator) commented:

PR_Github #12355 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #9177 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

@QiJune QiJune merged commit a433eba into NVIDIA:main Jul 21, 2025
3 checks passed
reasonsolo pushed a commit to reasonsolo/TensorRT-LLM that referenced this pull request Jul 21, 2025
timlee0212 pushed a commit to timlee0212/TensorRT-LLM that referenced this pull request Jul 21, 2025
NVShreyas pushed a commit to NVShreyas/TensorRT-LLM that referenced this pull request Jul 28, 2025
Ransiki pushed a commit to Ransiki/TensorRT-LLM that referenced this pull request Jul 29, 2025