
Update CMakeLists.txt extend find_library names #6609


Open · wants to merge 1 commit into base: main

Conversation


@mc-nv mc-nv commented Aug 4, 2025

Update lookup names in CMake config

Summary by CodeRabbit

  • Chores
    • Improved detection of the MLX5 library during setup to ensure required components are found more reliably.

Description

Fix CMake configuration.
CMake relies on the .so file suffix when searching for libraries. Unfortunately, the ibverbs-providers package installs libmlx5.so.1 instead of libmlx5.so, so the lookup fails.
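The failure mode can be sketched with a toy model of the name matching (an illustration only, not CMake's actual search algorithm; the installed filenames come from the dpkg listing below):

```python
# Toy model of find_library name matching (a sketch, not CMake's real
# algorithm). The installed files are what ibverbs-providers ships.
installed = {"libmlx5.so.1", "libmlx5.so.1.24.50.0"}

def candidates(name):
    # A bare stem such as "mlx5" gets the platform prefix and suffix
    # ("lib" + name + ".so"); an explicit filename is tried as given.
    return [name] if name.startswith("lib") else [f"lib{name}.so"]

def found(names):
    return any(c in installed for n in names for c in candidates(n))

print(found(["mlx5"]))                  # False: no plain libmlx5.so exists
print(found(["mlx5", "libmlx5.so.1"]))  # True: the explicit name matches
```

In this model, adding the explicit versioned name to NAMES is what makes the lookup succeed on images that omit the unversioned symlink.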

$ docker run  -it --rm nvcr.io/nvidia/pytorch:25.05-py3 dpkg -L ibverbs-providers 

=============
== PyTorch ==
=============

NVIDIA Release 25.05 (build 170559088)
PyTorch Version 2.8.0a0+5228986
Container image Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Copyright (c) 2014-2024 Facebook Inc.
Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert)
Copyright (c) 2012-2014 Deepmind Technologies    (Koray Kavukcuoglu)
Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu)
Copyright (c) 2011-2013 NYU                      (Clement Farabet)
Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston)
Copyright (c) 2006      Idiap Research Institute (Samy Bengio)
Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz)
Copyright (c) 2015      Google Inc.
Copyright (c) 2015      Yangqing Jia
Copyright (c) 2013-2016 The Caffe contributors
All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

GOVERNING TERMS: The software and materials are governed by the NVIDIA Software License Agreement
(found at https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-software-license-agreement/)
and the Product-Specific Terms for NVIDIA AI Products
(found at https://www.nvidia.com/en-us/agreements/enterprise-software/product-specific-terms-for-ai-products/).

WARNING: The NVIDIA Driver was not detected.  GPU functionality will not be available.
   Use the NVIDIA Container Toolkit to start this container with GPU support; see
   https://docs.nvidia.com/datacenter/cloud-native/ .

NOTE: The SHMEM allocation limit is set to the default of 64MB.  This may be
   insufficient for PyTorch.  NVIDIA recommends the use of the following flags:
   docker run --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 ...

/.
/etc
/etc/libibverbs.d
/etc/libibverbs.d/bnxt_re.driver
/etc/libibverbs.d/cxgb4.driver
/etc/libibverbs.d/efa.driver
/etc/libibverbs.d/erdma.driver
/etc/libibverbs.d/hfi1verbs.driver
/etc/libibverbs.d/hns.driver
/etc/libibverbs.d/ipathverbs.driver
/etc/libibverbs.d/irdma.driver
/etc/libibverbs.d/mana.driver
/etc/libibverbs.d/mlx4.driver
/etc/libibverbs.d/mlx5.driver
/etc/libibverbs.d/mthca.driver
/etc/libibverbs.d/ocrdma.driver
/etc/libibverbs.d/qedr.driver
/etc/libibverbs.d/rxe.driver
/etc/libibverbs.d/siw.driver
/etc/libibverbs.d/vmw_pvrdma.driver
/usr
/usr/lib
/usr/lib/x86_64-linux-gnu
/usr/lib/x86_64-linux-gnu/libefa.so.1.3.50.0
/usr/lib/x86_64-linux-gnu/libibverbs
/usr/lib/x86_64-linux-gnu/libibverbs/libbnxt_re-rdmav34.so
/usr/lib/x86_64-linux-gnu/libibverbs/libcxgb4-rdmav34.so
/usr/lib/x86_64-linux-gnu/libibverbs/liberdma-rdmav34.so
/usr/lib/x86_64-linux-gnu/libibverbs/libhfi1verbs-rdmav34.so
/usr/lib/x86_64-linux-gnu/libibverbs/libhns-rdmav34.so
/usr/lib/x86_64-linux-gnu/libibverbs/libipathverbs-rdmav34.so
/usr/lib/x86_64-linux-gnu/libibverbs/libirdma-rdmav34.so
/usr/lib/x86_64-linux-gnu/libibverbs/libmthca-rdmav34.so
/usr/lib/x86_64-linux-gnu/libibverbs/libocrdma-rdmav34.so
/usr/lib/x86_64-linux-gnu/libibverbs/libqedr-rdmav34.so
/usr/lib/x86_64-linux-gnu/libibverbs/librxe-rdmav34.so
/usr/lib/x86_64-linux-gnu/libibverbs/libsiw-rdmav34.so
/usr/lib/x86_64-linux-gnu/libibverbs/libvmw_pvrdma-rdmav34.so
/usr/lib/x86_64-linux-gnu/libmana.so.1.0.50.0
/usr/lib/x86_64-linux-gnu/libmlx4.so.1.0.50.0
/usr/lib/x86_64-linux-gnu/libmlx5.so.1.24.50.0
/usr/share
/usr/share/doc
/usr/share/doc/ibverbs-providers
/usr/share/doc/ibverbs-providers/copyright
/usr/share/lintian
/usr/share/lintian/overrides
/usr/share/lintian/overrides/ibverbs-providers
/usr/lib/x86_64-linux-gnu/libefa.so.1
/usr/lib/x86_64-linux-gnu/libibverbs/libefa-rdmav34.so
/usr/lib/x86_64-linux-gnu/libibverbs/libmana-rdmav34.so
/usr/lib/x86_64-linux-gnu/libibverbs/libmlx4-rdmav34.so
/usr/lib/x86_64-linux-gnu/libibverbs/libmlx5-rdmav34.so
/usr/lib/x86_64-linux-gnu/libmana.so.1
/usr/lib/x86_64-linux-gnu/libmlx4.so.1
/usr/lib/x86_64-linux-gnu/libmlx5.so.1
/usr/share/doc/ibverbs-providers/changelog.Debian.gz

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

Update lookup names in CMake config

Signed-off-by: Misha Chornyi <[email protected]>
Contributor

coderabbitai bot commented Aug 4, 2025

📝 Walkthrough

Walkthrough

The change updates the CMake configuration in the TensorRT-LLM project to modify how the MLX5 library is located. The find_library command now searches for both "mlx5" and "libmlx5.so.1" and marks the library as required, expanding the search criteria for the MLX5 dependency.

Changes

Cohort / File(s) Change Summary
CMake MLX5 Library Search Update
cpp/tensorrt_llm/deep_ep/CMakeLists.txt
Modified the find_library command to search for "mlx5" and "libmlx5.so.1", and set as required.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes


@mc-nv mc-nv marked this pull request as draft August 4, 2025 18:00
@mc-nv mc-nv marked this pull request as ready for review August 4, 2025 18:00
@mc-nv
Author

mc-nv commented Aug 4, 2025

/bot run

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6135f75 and d86bd2e.

📒 Files selected for processing (1)
  • cpp/tensorrt_llm/deep_ep/CMakeLists.txt (1 hunks)
🔇 Additional comments (1)
cpp/tensorrt_llm/deep_ep/CMakeLists.txt (1)

40-40: MLX5_lib is found but never linked – confirm this is intentional

The variable created by find_library is not referenced later (e.g. in target_link_libraries).
If the goal is merely an early availability check, please add an inline comment clarifying that intent; otherwise, remember to add it to the deep_ep_cpp_tllm link line to avoid unresolved symbols at link time or at runtime via dlopen.

# Example if linking is required
target_link_libraries(
  deep_ep_cpp_tllm
  PRIVATE ${MLX5_lib}            # add before or after other deps
          nvshmem_project::nvshmem
          ${TORCH_LIBRARIES}
          ${TORCH_PYTHON_LIB})

@@ -37,7 +37,7 @@ if(NOT DEEP_EP_CUDA_ARCHITECTURES)
endif()

# Ensure that dependent libraries are installed
-find_library(MLX5_lib NAMES mlx5 REQUIRED)
+find_library(MLX5_lib NAMES mlx5 libmlx5.so.1 REQUIRED)

🛠️ Refactor suggestion

Broaden the pattern to avoid hard-coding the exact .so.1 suffix

find_library already appends the platform-specific prefixes/suffixes (lib, .so) and also matches versioned filenames such as .so.* when only the bare stem (mlx5) is supplied. By explicitly listing libmlx5.so.1 you:

  1. Lose the benefit of that automatic fallback – a system that only provides libmlx5.so.2 or libmlx5.so.1.8.0 will now fail.
  2. Couple the build to a specific provider version.

A more future-proof approach is to keep the unqualified stem(s) and, if desired, add a generic mlx5.so variant, e.g.

# Accept all versioned variants that CMake already recognises
find_library(MLX5_lib NAMES mlx5 mlx5.so REQUIRED)

This continues to handle distributions that omit the plain libmlx5.so symlink, without locking you to a single minor version.


@svc-trtllm-gh-bot svc-trtllm-gh-bot added the Community want to contribute PRs initiated from Community label Aug 4, 2025
mc-nv added a commit to triton-inference-server/tensorrtllm_backend that referenced this pull request Aug 4, 2025
@tburt-nv tburt-nv requested a review from yuantailing August 5, 2025 16:12
@tburt-nv tburt-nv removed the Community want to contribute PRs initiated from Community label Aug 5, 2025
@svc-trtllm-gh-bot svc-trtllm-gh-bot added the Community want to contribute PRs initiated from Community label Aug 5, 2025
-find_library(MLX5_lib NAMES mlx5 REQUIRED)
+find_library(MLX5_lib NAMES mlx5 libmlx5.so.1 REQUIRED)
Collaborator

  1. What you need may be the package libibverbs-dev. The package ibverbs-providers only provides the runtime files for mlx5, not the development environment.
  2. This change doesn't work because the find_library statement in this file is only a sanity check; nvshmem's src/CMakeLists.txt finds mlx5 again. If you have libmlx5.so.1 but not the plain mlx5 stem, you will pass this check but still get an error in the nvshmem build.
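The dev-package route from point 1 might look like this in the image build (a sketch assuming a Debian/Ubuntu base image; per the comment above, libibverbs-dev is expected to ship the unversioned libmlx5.so symlink that the bare-stem lookup needs):

```dockerfile
# Install the development package so the plain libmlx5.so symlink exists
# and find_library(MLX5_lib NAMES mlx5 REQUIRED) can succeed as written.
RUN apt-get update \
    && apt-get install -y --no-install-recommends libibverbs-dev \
    && rm -rf /var/lib/apt/lists/*
```

This would also satisfy nvshmem's own find_library(mlx5) call, which the sanity check alone cannot.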

Author

@mc-nv mc-nv Aug 5, 2025

  1. TensorRT-LLM uses the NVIDIA PyTorch container nvcr.io/nvidia/pytorch:25.06-py3
    as its base image: https://github.com/NVIDIA/TensorRT-LLM/blob/main/docker/Dockerfile.multi#L5
  2. The library libmlx5.so.1 is installed by default, but it can't be found by CMake:

$ docker run --rm -it nvcr.io/nvidia/pytorch:25.06-py3 bash -c "\
    echo 'project(mlx5repro)' > /tmp/CMakeLists.txt ; \
    echo 'cmake_minimum_required(VERSION 3.31.6)' >> /tmp/CMakeLists.txt ; \
    echo 'find_library(MLX5_lib NAMES mlx5 REQUIRED)' >> /tmp/CMakeLists.txt ; \
    cmake -S /tmp/ -B /tmp/build --debug-find"

=============
== PyTorch ==
=============

NVIDIA Release 25.06 (build 177567386)
PyTorch Version 2.8.0a0+5228986
Container image Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Copyright (c) 2014-2024 Facebook Inc.
Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert)
Copyright (c) 2012-2014 Deepmind Technologies    (Koray Kavukcuoglu)
Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu)
Copyright (c) 2011-2013 NYU                      (Clement Farabet)
Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston)
Copyright (c) 2006      Idiap Research Institute (Samy Bengio)
Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz)
Copyright (c) 2015      Google Inc.
Copyright (c) 2015      Yangqing Jia
Copyright (c) 2013-2016 The Caffe contributors
All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

GOVERNING TERMS: The software and materials are governed by the NVIDIA Software License Agreement
(found at https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-software-license-agreement/)
and the Product-Specific Terms for NVIDIA AI Products
(found at https://www.nvidia.com/en-us/agreements/enterprise-software/product-specific-terms-for-ai-products/).

WARNING: The NVIDIA Driver was not detected.  GPU functionality will not be available.
   Use the NVIDIA Container Toolkit to start this container with GPU support; see
   https://docs.nvidia.com/datacenter/cloud-native/ .

NOTE: The SHMEM allocation limit is set to the default of 64MB.  This may be
   insufficient for PyTorch.  NVIDIA recommends the use of the following flags:
   docker run --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 ...

Running with debug output on for the `find` commands.
CMake Warning (dev) at CMakeLists.txt:1 (project):
  cmake_minimum_required() should be called prior to this top-level project()
  call.  Please see the cmake-commands(7) manual for usage documentation of
  both commands.
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Debug Log at /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineSystem.cmake:12 (find_program):
  find_program called with the following settings:

    VAR: CMAKE_UNAME
    NAMES: "uname"
    Documentation: Path to a program.
    Framework
      Only Search Frameworks: 0
      Search Frameworks Last: 0
      Search Frameworks First: 0
    AppBundle
      Only Search AppBundle: 0
      Search AppBundle Last: 0
      Search AppBundle First: 0
    CMAKE_FIND_USE_CMAKE_PATH: 1
    CMAKE_FIND_USE_CMAKE_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_SYSTEM_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_CMAKE_SYSTEM_PATH: 1
    CMAKE_FIND_USE_INSTALL_PREFIX: 1

  find_program considered the following locations:

    /usr/local/lib/python3.12/dist-packages/torch_tensorrt/bin/uname
    /usr/local/nvidia/bin/uname
    /usr/local/cuda/bin/uname
    /usr/local/mpi/bin/uname
    /usr/local/sbin/uname
    /usr/local/bin/uname
    /usr/sbin/uname

  The item was found at

    /usr/bin/uname

Call Stack (most recent call first):
  CMakeLists.txt:1 (project)


CMake Debug Log at /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeUnixFindMake.cmake:5 (find_program):
  find_program called with the following settings:

    VAR: CMAKE_MAKE_PROGRAM
    NAMES: "gmake"
           "make"
           "smake"
    Documentation: Path to a program.
    Framework
      Only Search Frameworks: 0
      Search Frameworks Last: 0
      Search Frameworks First: 0
    AppBundle
      Only Search AppBundle: 0
      Search AppBundle Last: 0
      Search AppBundle First: 0
    CMAKE_FIND_USE_CMAKE_PATH: 1
    CMAKE_FIND_USE_CMAKE_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_SYSTEM_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_CMAKE_SYSTEM_PATH: 1
    CMAKE_FIND_USE_INSTALL_PREFIX: 1

  find_program considered the following locations:

    /usr/local/lib/python3.12/dist-packages/torch_tensorrt/bin/gmake
    /usr/local/nvidia/bin/gmake
    /usr/local/cuda/bin/gmake
    /usr/local/mpi/bin/gmake
    /usr/local/sbin/gmake
    /usr/local/bin/gmake
    /usr/sbin/gmake

  The item was found at

    /usr/bin/gmake

Call Stack (most recent call first):
  CMakeLists.txt:1 (project)


CMake Debug Log at /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCompiler.cmake:69 (find_program):
  find_program called with the following settings:

    VAR: CMAKE_C_COMPILER
    NAMES: "cc"
           "gcc"
           "cl"
           "bcc"
           "xlc"
           "icx"
           "clang"
    Documentation: C compiler
    Framework
      Only Search Frameworks: 0
      Search Frameworks Last: 0
      Search Frameworks First: 0
    AppBundle
      Only Search AppBundle: 0
      Search AppBundle Last: 0
      Search AppBundle First: 0
    CMAKE_FIND_USE_CMAKE_PATH: 1
    CMAKE_FIND_USE_CMAKE_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_SYSTEM_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_CMAKE_SYSTEM_PATH: 1
    CMAKE_FIND_USE_INSTALL_PREFIX: 1

  find_program considered the following locations:

    /usr/local/lib/python3.12/dist-packages/torch_tensorrt/bin/cc
    /usr/local/nvidia/bin/cc
    /usr/local/cuda/bin/cc
    /usr/local/mpi/bin/cc
    /usr/local/sbin/cc
    /usr/local/bin/cc
    /usr/sbin/cc

  The item was found at

    /usr/bin/cc

Call Stack (most recent call first):
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCCompiler.cmake:65 (_cmake_find_compiler)
  CMakeLists.txt:1 (project)


CMake Debug Log at /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCompilerId.cmake:400 (find_file):
  find_file called with the following settings:

    VAR: src_in
    NAMES: "CMakeCCompilerId.c.in"
    Documentation: Path to a file.
    Framework
      Only Search Frameworks: 0
      Search Frameworks Last: 0
      Search Frameworks First: 0
    AppBundle
      Only Search AppBundle: 0
      Search AppBundle Last: 0
      Search AppBundle First: 0
    NO_DEFAULT_PATH Enabled

  find_file considered the following locations:

  The item was found at

    /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeCCompilerId.c.in

Call Stack (most recent call first):
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCompilerId.cmake:438 (CMAKE_DETERMINE_COMPILER_ID_WRITE)
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCompilerId.cmake:8 (CMAKE_DETERMINE_COMPILER_ID_BUILD)
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCompilerId.cmake:64 (__determine_compiler_id_test)
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCCompiler.cmake:123 (CMAKE_DETERMINE_COMPILER_ID)
  CMakeLists.txt:1 (project)


-- The C compiler identification is GNU 13.3.0
CMake Debug Log at /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeFindBinUtils.cmake:225 (find_program):
  find_program called with the following settings:

    VAR: CMAKE_AR
    NAMES: "ar"
    Documentation: Path to a program.
    Framework
      Only Search Frameworks: 0
      Search Frameworks Last: 0
      Search Frameworks First: 0
    AppBundle
      Only Search AppBundle: 0
      Search AppBundle Last: 0
      Search AppBundle First: 0
    CMAKE_FIND_USE_CMAKE_PATH: 0
    CMAKE_FIND_USE_CMAKE_ENVIRONMENT_PATH: 0
    CMAKE_FIND_USE_SYSTEM_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_CMAKE_SYSTEM_PATH: 1
    CMAKE_FIND_USE_INSTALL_PREFIX: 1

  find_program considered the following locations:

  The item was found at

    /usr/bin/ar

Call Stack (most recent call first):
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCCompiler.cmake:201 (include)
  CMakeLists.txt:1 (project)


CMake Debug Log at /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeFindBinUtils.cmake:225 (find_program):
  find_program called with the following settings:

    VAR: CMAKE_RANLIB
    NAMES: "ranlib"
    Documentation: Path to a program.
    Framework
      Only Search Frameworks: 0
      Search Frameworks Last: 0
      Search Frameworks First: 0
    AppBundle
      Only Search AppBundle: 0
      Search AppBundle Last: 0
      Search AppBundle First: 0
    CMAKE_FIND_USE_CMAKE_PATH: 0
    CMAKE_FIND_USE_CMAKE_ENVIRONMENT_PATH: 0
    CMAKE_FIND_USE_SYSTEM_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_CMAKE_SYSTEM_PATH: 1
    CMAKE_FIND_USE_INSTALL_PREFIX: 1

  find_program considered the following locations:

  The item was found at

    /usr/bin/ranlib

Call Stack (most recent call first):
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCCompiler.cmake:201 (include)
  CMakeLists.txt:1 (project)


CMake Debug Log at /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeFindBinUtils.cmake:225 (find_program):
  find_program called with the following settings:

    VAR: CMAKE_STRIP
    NAMES: "strip"
    Documentation: Path to a program.
    Framework
      Only Search Frameworks: 0
      Search Frameworks Last: 0
      Search Frameworks First: 0
    AppBundle
      Only Search AppBundle: 0
      Search AppBundle Last: 0
      Search AppBundle First: 0
    CMAKE_FIND_USE_CMAKE_PATH: 0
    CMAKE_FIND_USE_CMAKE_ENVIRONMENT_PATH: 0
    CMAKE_FIND_USE_SYSTEM_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_CMAKE_SYSTEM_PATH: 1
    CMAKE_FIND_USE_INSTALL_PREFIX: 1

  find_program considered the following locations:

  The item was found at

    /usr/bin/strip

Call Stack (most recent call first):
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCCompiler.cmake:201 (include)
  CMakeLists.txt:1 (project)


CMake Debug Log at /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeFindBinUtils.cmake:225 (find_program):
  find_program called with the following settings:

    VAR: CMAKE_LINKER
    NAMES: "ld"
    Documentation: Path to a program.
    Framework
      Only Search Frameworks: 0
      Search Frameworks Last: 0
      Search Frameworks First: 0
    AppBundle
      Only Search AppBundle: 0
      Search AppBundle Last: 0
      Search AppBundle First: 0
    CMAKE_FIND_USE_CMAKE_PATH: 0
    CMAKE_FIND_USE_CMAKE_ENVIRONMENT_PATH: 0
    CMAKE_FIND_USE_SYSTEM_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_CMAKE_SYSTEM_PATH: 1
    CMAKE_FIND_USE_INSTALL_PREFIX: 1

  find_program considered the following locations:

  The item was found at

    /usr/bin/ld

Call Stack (most recent call first):
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCCompiler.cmake:201 (include)
  CMakeLists.txt:1 (project)


CMake Debug Log at /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeFindBinUtils.cmake:225 (find_program):
  find_program called with the following settings:

    VAR: CMAKE_NM
    NAMES: "nm"
    Documentation: Path to a program.
    Framework
      Only Search Frameworks: 0
      Search Frameworks Last: 0
      Search Frameworks First: 0
    AppBundle
      Only Search AppBundle: 0
      Search AppBundle Last: 0
      Search AppBundle First: 0
    CMAKE_FIND_USE_CMAKE_PATH: 0
    CMAKE_FIND_USE_CMAKE_ENVIRONMENT_PATH: 0
    CMAKE_FIND_USE_SYSTEM_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_CMAKE_SYSTEM_PATH: 1
    CMAKE_FIND_USE_INSTALL_PREFIX: 1

  find_program considered the following locations:

  The item was found at

    /usr/bin/nm

Call Stack (most recent call first):
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCCompiler.cmake:201 (include)
  CMakeLists.txt:1 (project)


CMake Debug Log at /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeFindBinUtils.cmake:225 (find_program):
  find_program called with the following settings:

    VAR: CMAKE_OBJDUMP
    NAMES: "objdump"
    Documentation: Path to a program.
    Framework
      Only Search Frameworks: 0
      Search Frameworks Last: 0
      Search Frameworks First: 0
    AppBundle
      Only Search AppBundle: 0
      Search AppBundle Last: 0
      Search AppBundle First: 0
    CMAKE_FIND_USE_CMAKE_PATH: 0
    CMAKE_FIND_USE_CMAKE_ENVIRONMENT_PATH: 0
    CMAKE_FIND_USE_SYSTEM_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_CMAKE_SYSTEM_PATH: 1
    CMAKE_FIND_USE_INSTALL_PREFIX: 1

  find_program considered the following locations:

  The item was found at

    /usr/bin/objdump

Call Stack (most recent call first):
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCCompiler.cmake:201 (include)
  CMakeLists.txt:1 (project)


CMake Debug Log at /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeFindBinUtils.cmake:225 (find_program):
  find_program called with the following settings:

    VAR: CMAKE_OBJCOPY
    NAMES: "objcopy"
    Documentation: Path to a program.
    Framework
      Only Search Frameworks: 0
      Search Frameworks Last: 0
      Search Frameworks First: 0
    AppBundle
      Only Search AppBundle: 0
      Search AppBundle Last: 0
      Search AppBundle First: 0
    CMAKE_FIND_USE_CMAKE_PATH: 0
    CMAKE_FIND_USE_CMAKE_ENVIRONMENT_PATH: 0
    CMAKE_FIND_USE_SYSTEM_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_CMAKE_SYSTEM_PATH: 1
    CMAKE_FIND_USE_INSTALL_PREFIX: 1

  find_program considered the following locations:

  The item was found at

    /usr/bin/objcopy

Call Stack (most recent call first):
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCCompiler.cmake:201 (include)
  CMakeLists.txt:1 (project)


CMake Debug Log at /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeFindBinUtils.cmake:225 (find_program):
  find_program called with the following settings:

    VAR: CMAKE_READELF
    NAMES: "readelf"
    Documentation: Path to a program.
    Framework
      Only Search Frameworks: 0
      Search Frameworks Last: 0
      Search Frameworks First: 0
    AppBundle
      Only Search AppBundle: 0
      Search AppBundle Last: 0
      Search AppBundle First: 0
    CMAKE_FIND_USE_CMAKE_PATH: 0
    CMAKE_FIND_USE_CMAKE_ENVIRONMENT_PATH: 0
    CMAKE_FIND_USE_SYSTEM_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_CMAKE_SYSTEM_PATH: 1
    CMAKE_FIND_USE_INSTALL_PREFIX: 1

  find_program considered the following locations:

  The item was found at

    /usr/bin/readelf

Call Stack (most recent call first):
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCCompiler.cmake:201 (include)
  CMakeLists.txt:1 (project)


CMake Debug Log at /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeFindBinUtils.cmake:225 (find_program):
  find_program called with the following settings:

    VAR: CMAKE_DLLTOOL
    NAMES: "dlltool"
    Documentation: Path to a program.
    Framework
      Only Search Frameworks: 0
      Search Frameworks Last: 0
      Search Frameworks First: 0
    AppBundle
      Only Search AppBundle: 0
      Search AppBundle Last: 0
      Search AppBundle First: 0
    CMAKE_FIND_USE_CMAKE_PATH: 0
    CMAKE_FIND_USE_CMAKE_ENVIRONMENT_PATH: 0
    CMAKE_FIND_USE_SYSTEM_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_CMAKE_SYSTEM_PATH: 1
    CMAKE_FIND_USE_INSTALL_PREFIX: 1

  find_program considered the following locations:

    /usr/bin/dlltool
    /usr/local/lib/python3.12/dist-packages/torch_tensorrt/bin/dlltool
    /usr/local/nvidia/bin/dlltool
    /usr/local/cuda/bin/dlltool
    /usr/local/mpi/bin/dlltool
    /usr/local/sbin/dlltool
    /usr/local/bin/dlltool
    /usr/sbin/dlltool
    /sbin/dlltool
    /bin/dlltool
    /usr/local/ucx/bin/dlltool
    /opt/amazon/efa/bin/dlltool
    /opt/tensorrt/bin/dlltool

  The item was not found.

Call Stack (most recent call first):
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCCompiler.cmake:201 (include)
  CMakeLists.txt:1 (project)


CMake Debug Log at /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeFindBinUtils.cmake:225 (find_program):
  find_program called with the following settings:

    VAR: CMAKE_ADDR2LINE
    NAMES: "addr2line"
    Documentation: Path to a program.
    Framework
      Only Search Frameworks: 0
      Search Frameworks Last: 0
      Search Frameworks First: 0
    AppBundle
      Only Search AppBundle: 0
      Search AppBundle Last: 0
      Search AppBundle First: 0
    CMAKE_FIND_USE_CMAKE_PATH: 0
    CMAKE_FIND_USE_CMAKE_ENVIRONMENT_PATH: 0
    CMAKE_FIND_USE_SYSTEM_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_CMAKE_SYSTEM_PATH: 1
    CMAKE_FIND_USE_INSTALL_PREFIX: 1

  find_program considered the following locations:

  The item was found at

    /usr/bin/addr2line

Call Stack (most recent call first):
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCCompiler.cmake:201 (include)
  CMakeLists.txt:1 (project)


CMake Debug Log at /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeFindBinUtils.cmake:225 (find_program):
  find_program called with the following settings:

    VAR: CMAKE_TAPI
    NAMES: "tapi"
    Documentation: Path to a program.
    Framework
      Only Search Frameworks: 0
      Search Frameworks Last: 0
      Search Frameworks First: 0
    AppBundle
      Only Search AppBundle: 0
      Search AppBundle Last: 0
      Search AppBundle First: 0
    CMAKE_FIND_USE_CMAKE_PATH: 0
    CMAKE_FIND_USE_CMAKE_ENVIRONMENT_PATH: 0
    CMAKE_FIND_USE_SYSTEM_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_CMAKE_SYSTEM_PATH: 1
    CMAKE_FIND_USE_INSTALL_PREFIX: 1

  find_program considered the following locations:

    /usr/bin/tapi
    /usr/local/lib/python3.12/dist-packages/torch_tensorrt/bin/tapi
    /usr/local/nvidia/bin/tapi
    /usr/local/cuda/bin/tapi
    /usr/local/mpi/bin/tapi
    /usr/local/sbin/tapi
    /usr/local/bin/tapi
    /usr/sbin/tapi
    /sbin/tapi
    /bin/tapi
    /usr/local/ucx/bin/tapi
    /opt/amazon/efa/bin/tapi
    /opt/tensorrt/bin/tapi

  The item was not found.

Call Stack (most recent call first):
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCCompiler.cmake:201 (include)
  CMakeLists.txt:1 (project)


CMake Debug Log at /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/Compiler/GNU-FindBinUtils.cmake:18 (find_program):
  find_program called with the following settings:

    VAR: CMAKE_C_COMPILER_AR
    NAMES: "gcc-ar-13.3"
           "gcc-ar-13"
           "gcc-ar13"
           "gcc-ar"
    Documentation: A wrapper around 'ar' adding the appropriate '--plugin' option for the GCC compiler
    Framework
      Only Search Frameworks: 0
      Search Frameworks Last: 0
      Search Frameworks First: 0
    AppBundle
      Only Search AppBundle: 0
      Search AppBundle Last: 0
      Search AppBundle First: 0
    CMAKE_FIND_USE_CMAKE_PATH: 0
    CMAKE_FIND_USE_CMAKE_ENVIRONMENT_PATH: 0
    CMAKE_FIND_USE_SYSTEM_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_CMAKE_SYSTEM_PATH: 1
    CMAKE_FIND_USE_INSTALL_PREFIX: 1

  find_program considered the following locations:

    /usr/bin/gcc-ar-13.3
    /usr/local/lib/python3.12/dist-packages/torch_tensorrt/bin/gcc-ar-13.3
    /usr/local/nvidia/bin/gcc-ar-13.3
    /usr/local/cuda/bin/gcc-ar-13.3
    /usr/local/mpi/bin/gcc-ar-13.3
    /usr/local/sbin/gcc-ar-13.3
    /usr/local/bin/gcc-ar-13.3
    /usr/sbin/gcc-ar-13.3
    /sbin/gcc-ar-13.3
    /bin/gcc-ar-13.3
    /usr/local/ucx/bin/gcc-ar-13.3
    /opt/amazon/efa/bin/gcc-ar-13.3
    /opt/tensorrt/bin/gcc-ar-13.3

  The item was found at

    /usr/bin/gcc-ar-13

Call Stack (most recent call first):
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCCompiler.cmake:202 (include)
  CMakeLists.txt:1 (project)


CMake Debug Log at /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/Compiler/GNU-FindBinUtils.cmake:30 (find_program):
  find_program called with the following settings:

    VAR: CMAKE_C_COMPILER_RANLIB
    NAMES: "gcc-ranlib-13.3"
           "gcc-ranlib-13"
           "gcc-ranlib13"
           "gcc-ranlib"
    Documentation: A wrapper around 'ranlib' adding the appropriate '--plugin' option for the GCC compiler
    Framework
      Only Search Frameworks: 0
      Search Frameworks Last: 0
      Search Frameworks First: 0
    AppBundle
      Only Search AppBundle: 0
      Search AppBundle Last: 0
      Search AppBundle First: 0
    CMAKE_FIND_USE_CMAKE_PATH: 0
    CMAKE_FIND_USE_CMAKE_ENVIRONMENT_PATH: 0
    CMAKE_FIND_USE_SYSTEM_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_CMAKE_SYSTEM_PATH: 1
    CMAKE_FIND_USE_INSTALL_PREFIX: 1

  find_program considered the following locations:

    /usr/bin/gcc-ranlib-13.3
    /usr/local/lib/python3.12/dist-packages/torch_tensorrt/bin/gcc-ranlib-13.3
    /usr/local/nvidia/bin/gcc-ranlib-13.3
    /usr/local/cuda/bin/gcc-ranlib-13.3
    /usr/local/mpi/bin/gcc-ranlib-13.3
    /usr/local/sbin/gcc-ranlib-13.3
    /usr/local/bin/gcc-ranlib-13.3
    /usr/sbin/gcc-ranlib-13.3
    /sbin/gcc-ranlib-13.3
    /bin/gcc-ranlib-13.3
    /usr/local/ucx/bin/gcc-ranlib-13.3
    /opt/amazon/efa/bin/gcc-ranlib-13.3
    /opt/tensorrt/bin/gcc-ranlib-13.3

  The item was found at

    /usr/bin/gcc-ranlib-13

Call Stack (most recent call first):
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCCompiler.cmake:202 (include)
  CMakeLists.txt:1 (project)


CMake Debug Log at /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCompiler.cmake:50 (find_program):
  find_program called with the following settings:

    VAR: CMAKE_CXX_COMPILER
    NAMES: "c++"
           "CC"
           "g++"
           "aCC"
           "cl"
           "bcc"
           "xlC"
           "icpx"
           "icx"
           "clang++"
    Documentation: CXX compiler
    Framework
      Only Search Frameworks: 0
      Search Frameworks Last: 0
      Search Frameworks First: 0
    AppBundle
      Only Search AppBundle: 0
      Search AppBundle Last: 0
      Search AppBundle First: 0
    NO_DEFAULT_PATH Enabled

  find_program considered the following locations:

  The item was found at

    /usr/bin/c++

Call Stack (most recent call first):
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCXXCompiler.cmake:70 (_cmake_find_compiler)
  CMakeLists.txt:1 (project)


CMake Debug Log at /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCompilerId.cmake:400 (find_file):
  find_file called with the following settings:

    VAR: src_in
    NAMES: "CMakeCXXCompilerId.cpp.in"
    Documentation: Path to a file.
    Framework
      Only Search Frameworks: 0
      Search Frameworks Last: 0
      Search Frameworks First: 0
    AppBundle
      Only Search AppBundle: 0
      Search AppBundle Last: 0
      Search AppBundle First: 0
    NO_DEFAULT_PATH Enabled

  find_file considered the following locations:

  The item was found at

    /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeCXXCompilerId.cpp.in

Call Stack (most recent call first):
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCompilerId.cmake:438 (CMAKE_DETERMINE_COMPILER_ID_WRITE)
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCompilerId.cmake:8 (CMAKE_DETERMINE_COMPILER_ID_BUILD)
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCompilerId.cmake:64 (__determine_compiler_id_test)
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCXXCompiler.cmake:126 (CMAKE_DETERMINE_COMPILER_ID)
  CMakeLists.txt:1 (project)


-- The CXX compiler identification is GNU 13.3.0
CMake Debug Log at /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeFindBinUtils.cmake:225 (find_program):
  find_program called with the following settings:

    VAR: CMAKE_DLLTOOL
    NAMES: "dlltool"
    Documentation: Path to a program.
    Framework
      Only Search Frameworks: 0
      Search Frameworks Last: 0
      Search Frameworks First: 0
    AppBundle
      Only Search AppBundle: 0
      Search AppBundle Last: 0
      Search AppBundle First: 0
    CMAKE_FIND_USE_CMAKE_PATH: 0
    CMAKE_FIND_USE_CMAKE_ENVIRONMENT_PATH: 0
    CMAKE_FIND_USE_SYSTEM_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_CMAKE_SYSTEM_PATH: 1
    CMAKE_FIND_USE_INSTALL_PREFIX: 1

  find_program considered the following locations:

    /usr/bin/dlltool
    /usr/local/lib/python3.12/dist-packages/torch_tensorrt/bin/dlltool
    /usr/local/nvidia/bin/dlltool
    /usr/local/cuda/bin/dlltool
    /usr/local/mpi/bin/dlltool
    /usr/local/sbin/dlltool
    /usr/local/bin/dlltool
    /usr/sbin/dlltool
    /sbin/dlltool
    /bin/dlltool
    /usr/local/ucx/bin/dlltool
    /opt/amazon/efa/bin/dlltool
    /opt/tensorrt/bin/dlltool

  The item was not found.

Call Stack (most recent call first):
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCXXCompiler.cmake:207 (include)
  CMakeLists.txt:1 (project)


CMake Debug Log at /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeFindBinUtils.cmake:225 (find_program):
  find_program called with the following settings:

    VAR: CMAKE_TAPI
    NAMES: "tapi"
    Documentation: Path to a program.
    Framework
      Only Search Frameworks: 0
      Search Frameworks Last: 0
      Search Frameworks First: 0
    AppBundle
      Only Search AppBundle: 0
      Search AppBundle Last: 0
      Search AppBundle First: 0
    CMAKE_FIND_USE_CMAKE_PATH: 0
    CMAKE_FIND_USE_CMAKE_ENVIRONMENT_PATH: 0
    CMAKE_FIND_USE_SYSTEM_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_CMAKE_SYSTEM_PATH: 1
    CMAKE_FIND_USE_INSTALL_PREFIX: 1

  find_program considered the following locations:

    /usr/bin/tapi
    /usr/local/lib/python3.12/dist-packages/torch_tensorrt/bin/tapi
    /usr/local/nvidia/bin/tapi
    /usr/local/cuda/bin/tapi
    /usr/local/mpi/bin/tapi
    /usr/local/sbin/tapi
    /usr/local/bin/tapi
    /usr/sbin/tapi
    /sbin/tapi
    /bin/tapi
    /usr/local/ucx/bin/tapi
    /opt/amazon/efa/bin/tapi
    /opt/tensorrt/bin/tapi

  The item was not found.

Call Stack (most recent call first):
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCXXCompiler.cmake:207 (include)
  CMakeLists.txt:1 (project)


CMake Debug Log at /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/Compiler/GNU-FindBinUtils.cmake:18 (find_program):
  find_program called with the following settings:

    VAR: CMAKE_CXX_COMPILER_AR
    NAMES: "gcc-ar-13.3"
           "gcc-ar-13"
           "gcc-ar13"
           "gcc-ar"
    Documentation: A wrapper around 'ar' adding the appropriate '--plugin' option for the GCC compiler
    Framework
      Only Search Frameworks: 0
      Search Frameworks Last: 0
      Search Frameworks First: 0
    AppBundle
      Only Search AppBundle: 0
      Search AppBundle Last: 0
      Search AppBundle First: 0
    CMAKE_FIND_USE_CMAKE_PATH: 0
    CMAKE_FIND_USE_CMAKE_ENVIRONMENT_PATH: 0
    CMAKE_FIND_USE_SYSTEM_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_CMAKE_SYSTEM_PATH: 1
    CMAKE_FIND_USE_INSTALL_PREFIX: 1

  find_program considered the following locations:

    /usr/bin/gcc-ar-13.3
    /usr/local/lib/python3.12/dist-packages/torch_tensorrt/bin/gcc-ar-13.3
    /usr/local/nvidia/bin/gcc-ar-13.3
    /usr/local/cuda/bin/gcc-ar-13.3
    /usr/local/mpi/bin/gcc-ar-13.3
    /usr/local/sbin/gcc-ar-13.3
    /usr/local/bin/gcc-ar-13.3
    /usr/sbin/gcc-ar-13.3
    /sbin/gcc-ar-13.3
    /bin/gcc-ar-13.3
    /usr/local/ucx/bin/gcc-ar-13.3
    /opt/amazon/efa/bin/gcc-ar-13.3
    /opt/tensorrt/bin/gcc-ar-13.3

  The item was found at

    /usr/bin/gcc-ar-13

Call Stack (most recent call first):
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCXXCompiler.cmake:208 (include)
  CMakeLists.txt:1 (project)


CMake Debug Log at /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/Compiler/GNU-FindBinUtils.cmake:30 (find_program):
  find_program called with the following settings:

    VAR: CMAKE_CXX_COMPILER_RANLIB
    NAMES: "gcc-ranlib-13.3"
           "gcc-ranlib-13"
           "gcc-ranlib13"
           "gcc-ranlib"
    Documentation: A wrapper around 'ranlib' adding the appropriate '--plugin' option for the GCC compiler
    Framework
      Only Search Frameworks: 0
      Search Frameworks Last: 0
      Search Frameworks First: 0
    AppBundle
      Only Search AppBundle: 0
      Search AppBundle Last: 0
      Search AppBundle First: 0
    CMAKE_FIND_USE_CMAKE_PATH: 0
    CMAKE_FIND_USE_CMAKE_ENVIRONMENT_PATH: 0
    CMAKE_FIND_USE_SYSTEM_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_CMAKE_SYSTEM_PATH: 1
    CMAKE_FIND_USE_INSTALL_PREFIX: 1

  find_program considered the following locations:

    /usr/bin/gcc-ranlib-13.3
    /usr/local/lib/python3.12/dist-packages/torch_tensorrt/bin/gcc-ranlib-13.3
    /usr/local/nvidia/bin/gcc-ranlib-13.3
    /usr/local/cuda/bin/gcc-ranlib-13.3
    /usr/local/mpi/bin/gcc-ranlib-13.3
    /usr/local/sbin/gcc-ranlib-13.3
    /usr/local/bin/gcc-ranlib-13.3
    /usr/sbin/gcc-ranlib-13.3
    /sbin/gcc-ranlib-13.3
    /bin/gcc-ranlib-13.3
    /usr/local/ucx/bin/gcc-ranlib-13.3
    /opt/amazon/efa/bin/gcc-ranlib-13.3
    /opt/tensorrt/bin/gcc-ranlib-13.3

  The item was found at

    /usr/bin/gcc-ranlib-13

Call Stack (most recent call first):
  /usr/local/lib/python3.12/dist-packages/cmake/data/share/cmake-3.31/Modules/CMakeDetermineCXXCompiler.cmake:208 (include)
  CMakeLists.txt:1 (project)


-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Debug Log at CMakeLists.txt:3 (find_library):
  find_library called with the following settings:

    VAR: MLX5_lib
    NAMES: "mlx5"
    Documentation: Path to a library.
    Framework
      Only Search Frameworks: 0
      Search Frameworks Last: 0
      Search Frameworks First: 0
    AppBundle
      Only Search AppBundle: 0
      Search AppBundle Last: 0
      Search AppBundle First: 0
    CMAKE_FIND_USE_CMAKE_PATH: 1
    CMAKE_FIND_USE_CMAKE_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_SYSTEM_ENVIRONMENT_PATH: 1
    CMAKE_FIND_USE_CMAKE_SYSTEM_PATH: 1
    CMAKE_FIND_USE_INSTALL_PREFIX: 1

  find_library considered the following locations:

    /usr/local/lib/python3.12/dist-packages/torch_tensorrt/bin/libmlx5(\.so|\.a)
    /usr/local/nvidia/bin/libmlx5(\.so|\.a)
    /usr/local/cuda/bin/libmlx5(\.so|\.a)
    /usr/local/mpi/bin/libmlx5(\.so|\.a)
    /usr/local/sbin/libmlx5(\.so|\.a)
    /usr/local/bin/libmlx5(\.so|\.a)
    /usr/sbin/libmlx5(\.so|\.a)
    /usr/bin/libmlx5(\.so|\.a)
    /sbin/libmlx5(\.so|\.a)
    /bin/libmlx5(\.so|\.a)
    /usr/local/ucx/bin/libmlx5(\.so|\.a)
    /opt/amazon/efa/bin/libmlx5(\.so|\.a)
    /opt/tensorrt/bin/libmlx5(\.so|\.a)
    /usr/local/lib/x86_64-linux-gnu/libmlx5(\.so|\.a)
    /usr/local/lib/libmlx5(\.so|\.a)
    /usr/local/libmlx5(\.so|\.a)
    /usr/lib/x86_64-linux-gnu/libmlx5(\.so|\.a)
    /usr/lib/libmlx5(\.so|\.a)
    /usr/libmlx5(\.so|\.a)
    /lib/x86_64-linux-gnu/libmlx5(\.so|\.a)
    /lib/libmlx5(\.so|\.a)
    /usr/local/lib/python3.12/dist-packages/cmake/data/lib/x86_64-linux-gnu/libmlx5(\.so|\.a)
    /usr/local/lib/python3.12/dist-packages/cmake/data/lib/libmlx5(\.so|\.a)
    /usr/local/lib/python3.12/dist-packages/cmake/data/libmlx5(\.so|\.a)
    /usr/X11R6/lib/x86_64-linux-gnu/libmlx5(\.so|\.a)
    /usr/X11R6/lib/libmlx5(\.so|\.a)
    /usr/X11R6/libmlx5(\.so|\.a)
    /usr/pkg/lib/x86_64-linux-gnu/libmlx5(\.so|\.a)
    /usr/pkg/lib/libmlx5(\.so|\.a)
    /usr/pkg/libmlx5(\.so|\.a)
    /opt/lib/x86_64-linux-gnu/libmlx5(\.so|\.a)
    /opt/lib/libmlx5(\.so|\.a)
    /opt/libmlx5(\.so|\.a)
    /usr/lib/X11/libmlx5(\.so|\.a)

  The item was not found.



CMake Error at CMakeLists.txt:3 (find_library):
  Could not find MLX5_lib using the following names: mlx5


-- Configuring incomplete, errors occurred!
  1. The assumed solution is to extend the search names, or to remove this check from CMake:
docker run --rm -it nvcr.io/nvidia/pytorch:25.06-py3 bash -c "echo 'project(mlx5repro)' > /tmp/CMakeLists.txt ; echo 'cmake_minimum_required(VERSION 3.31.6)' >> /tmp/CMakeLists.txt ; echo 'find_library(MLX5_lib NAMES mlx5 libmlx5.so.1 REQUIRED)' >> /tmp/CMakeLists.txt ; cmake -S /tmp/ -B /tmp/build"

=============
== PyTorch ==
=============

NVIDIA Release 25.06 (build 177567386)
PyTorch Version 2.8.0a0+5228986
Container image Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Copyright (c) 2014-2024 Facebook Inc.
Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert)
Copyright (c) 2012-2014 Deepmind Technologies    (Koray Kavukcuoglu)
Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu)
Copyright (c) 2011-2013 NYU                      (Clement Farabet)
Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston)
Copyright (c) 2006      Idiap Research Institute (Samy Bengio)
Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz)
Copyright (c) 2015      Google Inc.
Copyright (c) 2015      Yangqing Jia
Copyright (c) 2013-2016 The Caffe contributors
All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

GOVERNING TERMS: The software and materials are governed by the NVIDIA Software License Agreement
(found at https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-software-license-agreement/)
and the Product-Specific Terms for NVIDIA AI Products
(found at https://www.nvidia.com/en-us/agreements/enterprise-software/product-specific-terms-for-ai-products/).

WARNING: The NVIDIA Driver was not detected.  GPU functionality will not be available.
   Use the NVIDIA Container Toolkit to start this container with GPU support; see
   https://docs.nvidia.com/datacenter/cloud-native/ .

NOTE: The SHMEM allocation limit is set to the default of 64MB.  This may be
   insufficient for PyTorch.  NVIDIA recommends the use of the following flags:
   docker run --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 ...

CMake Warning (dev) at CMakeLists.txt:1 (project):
  cmake_minimum_required() should be called prior to this top-level project()
  call.  Please see the cmake-commands(7) manual for usage documentation of
  both commands.
This warning is for project developers.  Use -Wno-dev to suppress it.

-- The C compiler identification is GNU 13.3.0
-- The CXX compiler identification is GNU 13.3.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Configuring done (0.9s)
-- Generating done (0.0s)
-- Build files have been written to: /tmp/build
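The failure above comes down to the candidate file names `find_library` tries: by default only `lib<name>.so` and `lib<name>.a`, so a versioned `libmlx5.so.1` is invisible unless it is listed in `NAMES` explicitly. A minimal sandbox sketch of that behavior (illustrative paths, not a real ibverbs install):

```shell
# Sandbox reproduction of find_library's name matching (paths are
# illustrative, not a real ibverbs install).
demo=$(mktemp -d)
touch "$demo/libmlx5.so.1"                    # what ibverbs-providers ships

found=""
for cand in libmlx5.so libmlx5.a; do          # CMake's default candidates
  [ -f "$demo/$cand" ] && found="$cand"
done
echo "default search: ${found:-not found}"    # prints: default search: not found

for cand in libmlx5.so libmlx5.a libmlx5.so.1; do  # NAMES extended as in this PR
  [ -f "$demo/$cand" ] && found="$cand"
done
echo "extended search: ${found:-not found}"   # prints: extended search: libmlx5.so.1
```

This mirrors the repro command above, where adding `libmlx5.so.1` to `NAMES` lets configuration succeed.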

Note

My PR is a suggestion because the current check breaks the Triton build; I don't have the details of why it was put in place.

Collaborator

@yuantailing yuantailing Aug 5, 2025

Hi @mc-nv,
If you're using 25.05, the solution is apt-get install libibverbs-dev. It includes libmlx5.so.
If you're using 25.06, the solution is apt-get --reinstall install libibverbs-dev. The reason we need --reinstall is that in the base image 25.06, libmlx5.so was manually deleted after libibverbs-dev was installed, so the package libibverbs-dev is broken and needs to be reinstalled.
If you do not want to install libmlx5.so, then besides the change you mentioned, you also need to change the PATCH_COMMAND of NVSHMEM:

ExternalProject_Add(
  nvshmem_project
  URL file://${CMAKE_CURRENT_SOURCE_DIR}/nvshmem_src_3.2.5-1.txz
  URL_HASH ${NVSHMEM_URL_HASH}
  PATCH_COMMAND patch -p1 --forward --batch -i
                ${DEEP_EP_SOURCE_DIR}/third-party/nvshmem.patch
  COMMAND sed "s/TRANSPORT_VERSION_MAJOR 3/TRANSPORT_VERSION_MAJOR 103/" -i
          src/CMakeLists.txt
  COMMAND patch -p1 --forward --batch -i
          ${CMAKE_CURRENT_SOURCE_DIR}/nvshmem_fast_build.patch
The reason is that there is also a find_library(MLX5_lib NAMES mlx5 REQUIRED) statement in NVSHMEM's src/CMakeLists.txt. But I don't think this workaround makes sense: in the context of Linux package management, the versioned libmlx5.so.1 is used only at runtime, and the unversioned libmlx5.so is used for building/linking.

Library libmlx5.so.1 is installed by default, but can't be found by CMake

What we need in the development environment (used to build TensorRT-LLM) is libmlx5.so instead of libmlx5.so.1 (only used for runtime), so it's expected behavior that CMake does not find libmlx5.so.1.

Assume solution is to extend searching names, or remove this check from CMake: ...

You will eventually get an error when compiling the ExternalProject NVSHMEM.

Author

@mc-nv mc-nv Aug 5, 2025

  1. To me it doesn't look like the patch you are referring to deals with libmlx5.so.1 at all.

  2. Looking closer, it appears the file's SONAME is different from the file name, and someone removed the symlink that used to resolve the library location in CMake.
     Anyway, CMake will link against libmlx5.so.1 via the SONAME, which is why installing the package libibverbs-dev could be redundant.

root@bc0bddf58dc0:/workspace# readelf -a /usr/lib/x86_64-linux-gnu/libmlx5.so | grep SONAME
 0x000000000000000e (SONAME)             Library soname: [libmlx5.so.1]
root@bc0bddf58dc0:/workspace# readelf -a /usr/lib/x86_64-linux-gnu/libmlx5.so.1 | grep SONAME
 0x000000000000000e (SONAME)             Library soname: [libmlx5.so.1]
 root@bc0bddf58dc0:/workspace# ll /usr/lib/x86_64-linux-gnu/libmlx5.so
lrwxrwxrwx 1 root root 12 Apr 14 10:00 /usr/lib/x86_64-linux-gnu/libmlx5.so -> libmlx5.so.1
  3. I checked the hash sums and the files are identical, so this installation may not be necessary.

cc: @MartinMarciniszyn (as an owner of the original change)

Collaborator

@yuantailing yuantailing Aug 5, 2025

To me it doesn't look like the patch you are referring to has any deal with libmlx5.so.1

Right, the patch does not deal with libmlx5.so.1; that is the point: NVSHMEM still looks for libmlx5.so and will get an error.

Anyway CMake will link against libmlx5.so.1 by using SONAME there

Yes, but I think we should not do that. For example, if we want to link to zlib, we should install zlib1g-dev instead of zlib1g, even if we know libz.so is libz.so.1; if we have a program that is already linked to libz.so.1, we need to install zlib1g.
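The Debian packaging split being described can be sketched in a sandbox (hypothetical paths; the real files come from the runtime and -dev packages):

```shell
# Sketch of the Debian convention: the runtime package ships the versioned
# object, the -dev package ships the unversioned symlink the linker uses.
pkgroot=$(mktemp -d)
printf 'elf-bytes' > "$pkgroot/libz.so.1"   # runtime package (e.g. zlib1g)
ln -s libz.so.1 "$pkgroot/libz.so"          # -dev package (e.g. zlib1g-dev)

readlink "$pkgroot/libz.so"                 # prints: libz.so.1

# Delete the -dev symlink and build-time lookups for libz.so break,
# exactly like libmlx5.so in the 25.06 image:
rm "$pkgroot/libz.so"
[ -e "$pkgroot/libz.so" ] || echo "libz.so gone, libz.so.1 still present"
```

This is why reinstalling the -dev package (restoring the symlink) fixes the build without touching the runtime library.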

Collaborator

@mc-nv, I added this line only to throw an error early when libibverbs-dev is not properly installed. Maybe we could make this more explicit in the error message. This is to prevent more obscure errors when installing DeepEP. The best way to mitigate this is reinstalling libibverbs-dev, as suggested by @yuantailing. We do this in our docker build here.

Author

The choice of dependency provider for C++ can always raise questions.
In this particular change I see that the library we are linking against is already preinstalled in the DLFW container.
Knowing that NVIDIA Optimized Frameworks may do some in-place compilation of third-party libraries to improve performance, I suggest reusing the existing one.

Ideally, the project shouldn't really care where a dependency comes from, as long as it provides the things it expects (often just some imported targets). The project says what it needs and may also specify where to get it from, in the absence of any other details, so that it can still be built out-of-the-box.

As I mentioned above, I don't know the reasons why we are seeking that library, how it is linked, and so on.
But it's important for Triton to fix the build and keep the container size as small as possible, because we ship the container.
My observations are:

  1. I don't see any mlx-related headers used by the TensorRT-LLM code:
     https://github.com/search?q=repo%3ANVIDIA%2FTensorRT-LLM%20mlx&type=code
  2. At the same time, I see that the mlx headers are needed by the DeepEP code as infiniband/mlx5dv.h:
     https://github.com/search?q=repo%3Adeepseek-ai%2FDeepEP%20mlx&type=code
  3. And all those files are inside the base container.
$ docker run --rm -it  nvcr.io/nvidia/pytorch:25.06-py3  find /usr -name mlx5*.h -o -name *mlx*so*
...
/usr/lib/x86_64-linux-gnu/libmlx5.so.1.24.50.0
/usr/lib/x86_64-linux-gnu/libmlx4.so.1.0.50.0
/usr/lib/x86_64-linux-gnu/libmlx5.so.1
/usr/lib/x86_64-linux-gnu/libmlx4.so.1
/usr/lib/x86_64-linux-gnu/libibverbs/libmlx5-rdmav34.so
/usr/lib/x86_64-linux-gnu/libibverbs/libmlx4-rdmav34.so
/usr/include/rdma/mlx5_user_ioctl_cmds.h
/usr/include/rdma/mlx5_user_ioctl_verbs.h
/usr/include/rdma/mlx5-abi.h
/usr/include/infiniband/mlx5_user_ioctl_verbs.h
/usr/include/infiniband/mlx5dv.h
/usr/include/infiniband/mlx5_api.h

I'm just trying to figure out what needs to be done to build the Triton TensorRT-LLM container image.
