
Conversation


@johnnynunez johnnynunez commented Oct 7, 2025

Motivation

Make it possible to build with CUDA 13.

  • Upstream Flash-Attention and SGL-Kernel
  • Reduce binary sizes for x86 builds
  • Fix installation on aarch64: decord is not available there, so bump to decord2, which I modified (https://pypi.org/project/decord2/)
  • Added CI for CUDA 13 using the PyTorch test URL (this is temporary until the 15/10/25 release)
  • Added Jetson Orin support (2.2 million Jetson Orin units have been sold)
  • Added support for the new CCCL format

Modifications

  • Merged SGL-Kernel with my fixes for CUDA 13 (upstream Flash-Attention and SGL-Kernel)
  • Added CI for CUDA 13 on x86 and aarch64
  • Modified CMakeLists.txt to check CMAKE_SYSTEM_PROCESSOR, so that aarch64 targets are not built on x86 hosts
  • Modified common.py to fix memory reporting issues on Unified Memory platforms
  • Modified build.sh to add CUDA 13 support and the new CCCL (https://nvidia.github.io/cccl/cccl/3.0_migration_guide.html)
On platforms that use a Unified Memory Architecture (UMA), such as Orin, Thor, and Spark, the CPU and GPU share the same system memory.
Because of this, cudaMemGetInfo does not report the actual available GPU memory; it reports the amount of free system memory at that moment.

When you call torch.cuda.mem_get_info(), it returns only the "free" portion of memory, which can appear smaller than the truly available amount because cached (reclaimable) memory is not counted in that figure.

For a detailed explanation of how to accurately estimate the total allocatable device memory on an integrated GPU, see NVIDIA's documentation:
https://docs.nvidia.com/cuda/cuda-for-tegra-appnote/#estimating-total-allocatable-device-memory-on-an-integrated-gpu-device
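
To make that concrete, here is a minimal Python sketch along the lines of the app note above. The function name and the choice of /proc/meminfo fields are assumptions for illustration; this is not the literal logic in common.py:

```python
import re

import torch


def estimate_uma_free_bytes() -> int:
    """Rough estimate of allocatable memory on a UMA device (e.g. Jetson Orin).

    torch.cuda.mem_get_info() only reports MemFree, which excludes
    reclaimable page-cache memory, so on integrated GPUs it understates
    what can actually be allocated. Per NVIDIA's Tegra app note, adding
    the Cached figure from /proc/meminfo gives a closer estimate.
    """
    meminfo = {}
    with open("/proc/meminfo") as f:
        for line in f:
            match = re.match(r"(\w+):\s+(\d+) kB", line)
            if match:
                meminfo[match.group(1)] = int(match.group(2)) * 1024  # kB -> bytes

    free_reported, _total = torch.cuda.mem_get_info()
    estimate = meminfo.get("MemFree", free_reported) + meminfo.get("Cached", 0)
    return max(free_reported, estimate)
```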


Summary of Changes

Hello @johnnynunez, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the project's compatibility and performance by introducing full support for CUDA 13. It integrates critical upstream updates for core kernels, optimizes the build process for various architectures, and addresses specific memory reporting challenges on NVIDIA's Unified Memory Architecture devices. These changes collectively aim to broaden hardware support, streamline development, and improve resource utilization.

Highlights

  • CUDA 13 Support: Enabled the project to build successfully with CUDA 13, incorporating necessary updates for compatibility and performance across different architectures.
  • Upstream Kernel Updates: Integrated updated versions of Flash-Attention and SGL-Kernel, ensuring the use of the latest features and fixes from upstream sources for improved efficiency.
  • Jetson Orin and AArch64 Support: Introduced specific support for Jetson Orin devices and improved the installation process for AArch64 architectures, including a fix for the 'decord' dependency by switching to 'decord2'.
  • Memory Management for Unified Memory Architectures: Implemented a fix for accurate GPU memory reporting on Unified Memory Architecture (UMA) platforms like Orin, Thor, and Spark, preventing misinterpretation of available memory.
  • Optimized Build Process: Refined the build system to reduce binary sizes for x86 builds by conditionally adding AArch64-specific gencode flags and preventing accidental AArch64 compilation on x86 systems.
Ignored Files
  • Ignored by pattern: .github/workflows/** (1)
    • .github/workflows/release-whl-kernel.yml


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for building with CUDA 13, adds specific support for Jetson Orin devices, and optimizes the build process by reducing binary sizes for x86 platforms. The changes look good overall, but I've identified a couple of areas for improvement. There's a redundant code path in python/sglang/srt/utils/common.py that can be refactored for clarity and efficiency. More critically, there's a bug in sgl-kernel/build.sh where a conditional block is unreachable due to a duplicated condition, which could affect builds for different CUDA versions. I've provided suggestions to address these points.
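
For context, the bug class being flagged looks like the following. This is a hypothetical Python sketch (the actual file, sgl-kernel/build.sh, is a shell script, and the version strings and variable names here are made up):

```python
cuda_version = "13.0"

if cuda_version.startswith("13"):
    gencode_flags = ["sm_90", "sm_100"]
elif cuda_version.startswith("13"):  # duplicated condition: this branch can never run
    gencode_flags = ["sm_90"]
else:
    gencode_flags = ["sm_80", "sm_90"]
```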

@mickqian mickqian added the run-ci label Oct 7, 2025