
Conversation

@dcampora (Collaborator) commented May 13, 2025

Mass integration 0.19

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--disable-fail-fast --skip-test --stage-list "A10-1, xxx" --gpu-type "A30, H100_PCIe" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]

Launch build/test pipelines. All previously running jobs will be killed.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run the build, package, and sanity-check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests. Will also run the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-[Post-Merge]-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-[Post-Merge]-1, xxx".
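Putting the flags together, typical invocations look like the following. These are illustrative combinations of the flags documented above; only the plain /bot run form appears verbatim in this thread.

/bot run
/bot run --disable-fail-fast
/bot run --stage-list "A10-1"
/bot run --gpu-type "A30, H100_PCIe"
/bot run --post-merge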

kill

Kill all running builds associated with the pull request.

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can break the top of tree.
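For example, in the exact form used later in this thread:

/bot skip --comment "Tests passed."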

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can break the top of tree.

@nv-guomingz (Collaborator) left a comment:

LGTM

@dcampora (Collaborator, Author):

/bot run

@dcampora dcampora force-pushed the user/dcampora/mi_0_19 branch from 2cfd942 to c8a4c33 on May 14, 2025 at 07:36
@dcampora (Collaborator, Author):

/bot run

@tensorrt-cicd (Collaborator):

PR_Github #5140 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #5140 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #3747 completed with status: 'FAILURE'

@dcampora dcampora force-pushed the user/dcampora/mi_0_19 branch from c8a4c33 to 908fbab on May 14, 2025 at 12:14
@dcampora (Collaborator, Author):

/bot run

@tensorrt-cicd (Collaborator):

PR_Github #5172 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #5172 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #3773 completed with status: 'FAILURE'

nv-guomingz and others added 16 commits May 16, 2025 07:57
* fix remote mpi session

Signed-off-by: Superjomn <[email protected]>

* fix

Signed-off-by: Superjomn <[email protected]>

---------

Signed-off-by: Superjomn <[email protected]>
…rlap scheduler (NVIDIA#3975)

* fix

Signed-off-by: Enwei Zhu <[email protected]>

* update multigpu list

Signed-off-by: Enwei Zhu <[email protected]>

* fix namings

Signed-off-by: Enwei Zhu <[email protected]>

---------

Signed-off-by: Enwei Zhu <[email protected]>
* fix doc

Signed-off-by: jiahanc <[email protected]>

* update perf number

Signed-off-by: jiahanc <[email protected]>

---------

Signed-off-by: jiahanc <[email protected]>
…4042)

Force tuning up to 8192 sequence length for the NVFP4 linear op. Also, make this runtime-selectable with UB enabled.

Signed-off-by: Yukun He <[email protected]>
NVIDIA#4060)

The NVFP4 Linear op is very sensitive to host overhead.
This PR introduces customizable `find_nearest_profile` and `get_cache_key_specifc` methods, which allow users to override the default way of generating the cache key (a hedged sketch of this override pattern follows the commit list below).

Signed-off-by: Yukun He <[email protected]>
Signed-off-by: Daniel Campora <[email protected]>
Co-authored-by: Enwei Zhu <[email protected]>
Signed-off-by: Daniel Cámpora <[email protected]>
Signed-off-by: Daniel Campora <[email protected]>
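As a rough illustration of the override pattern mentioned in the NVIDIA#4060 commit message above: the method names `find_nearest_profile` and `get_cache_key_specifc` come from that message, but the `ProfileCache` class, the signatures, and the bucketing policy below are assumptions for illustration only, not the actual TensorRT-LLM implementation.

```python
# Hypothetical sketch of an overridable tuning-cache key. Only the two method
# names come from the commit message; everything else is invented.
from dataclasses import dataclass


@dataclass(frozen=True)
class ShapeInput:
    seq_len: int
    hidden_size: int


class ProfileCache:
    """Caches tuning profiles keyed by input shape."""

    # Buckets used when rounding a runtime sequence length to a tuned profile.
    BUCKETS = (128, 512, 2048, 8192)

    def __init__(self) -> None:
        self._cache: dict = {}

    def find_nearest_profile(self, inp: ShapeInput) -> int:
        # Default policy: round the sequence length up to the nearest bucket.
        for bucket in self.BUCKETS:
            if inp.seq_len <= bucket:
                return bucket
        return self.BUCKETS[-1]

    def get_cache_key_specifc(self, inp: ShapeInput) -> tuple:
        # Default cache key: bucketed sequence length plus hidden size.
        return (self.find_nearest_profile(inp), inp.hidden_size)

    def lookup(self, inp: ShapeInput) -> str:
        # Reuse the cached profile when the key matches; the stored string is
        # a stand-in for a real tuned kernel profile.
        key = self.get_cache_key_specifc(inp)
        if key not in self._cache:
            self._cache[key] = f"profile-for-{key}"
        return self._cache[key]


class CoarseKeyCache(ProfileCache):
    # A user override: drop the sequence-length component so all lengths with
    # the same hidden size share one entry, reducing host-side key churn.
    def get_cache_key_specifc(self, inp: ShapeInput) -> tuple:
        return (inp.hidden_size,)


if __name__ == "__main__":
    cache = CoarseKeyCache()
    # Both lookups hit the same entry under the overridden, coarser key.
    print(cache.lookup(ShapeInput(seq_len=300, hidden_size=4096)))
    print(cache.lookup(ShapeInput(seq_len=7000, hidden_size=4096)))
```

The point of such a hook, per the commit's stated motivation, is that a caller can trade cache-key granularity against host-side overhead without touching the default tuning logic.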
@tensorrt-cicd (Collaborator):

PR_Github #5458 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #3982 (Partly Tested) completed with status: 'SUCCESS'

@dcampora (Collaborator, Author):

/bot run

@tensorrt-cicd (Collaborator):

PR_Github #5482 [ run ] triggered by Bot

@dcampora (Collaborator, Author):

/bot skip --comment "Tests passed."

@dcampora dcampora force-pushed the user/dcampora/mi_0_19 branch from cd82d7c to 5d3fa62 on May 16, 2025 at 08:26
@tensorrt-cicd (Collaborator):

PR_Github #5486 [ skip ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #5482 [ run ] completed with state ABORTED

@dcampora (Collaborator, Author):

/bot skip --comment "Tests passed."

@tensorrt-cicd (Collaborator):

PR_Github #5488 [ skip ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #5486 [ skip ] completed with state ABORTED

@tensorrt-cicd (Collaborator):

PR_Github #5488 [ skip ] completed with state SUCCESS
Skipping testing for commit 5d3fa62

@dcampora dcampora merged commit df19430 into NVIDIA:main May 16, 2025
3 checks passed