The docs on build isolation explain how to set up a two-stage build/compile `uv sync` flow for packages that depend on torch at build time. I'm glad this is documented and that it's possible to use uv for these packages, but it's quite an annoying manual step. Will it be possible to make this automatic in a single `uv sync` in the future?
In the torch ecosystem, it is very common to encounter CUDA kernels and other CUDA/CUTLASS libraries that have to be built against the version of PyTorch being used. It is also common to need nightly builds for specific bug fixes.
I'm providing one example from my project that illustrates some of the challenges, i.e. the custom env vars and CLI flags currently required. I would love to be able to move as much of this configuration as possible directly into `pyproject.toml`, so that all a user needs is `uv sync`.
```toml
[project]
name = "torchnightly"
requires-python = ">=3.11.9"
dependencies = [
    "torch",
    "torchao",
    "torchaudio",
    "torchvision",
    "setuptools",             # for grouped_gemm
    "nvidia-cusparselt-cu12", # workaround #10693
]
dynamic = ["version"]

[project.optional-dependencies]
compile = [
    "grouped_gemm @ git+https://github.com/tgale96/grouped_gemm",
]

[tool.uv]
no-build-isolation-package = ['grouped_gemm']

[tool.uv.sources]
torch = [{ index = "pytorch-nightly-cu126" }]
torchvision = [{ index = "pytorch-nightly-cu126" }]
torchaudio = [{ index = "pytorch-nightly-cu126" }]
torchao = [{ index = "pytorch-nightly-cu126" }]

[[tool.uv.index]]
name = "pytorch-nightly-cu126"
url = "https://download.pytorch.org/whl/nightly/cu126"
explicit = true
```
First stage:

```console
$ uv sync
Using CPython 3.11.9
Creating virtual environment at: .venv
⠦ Resolving dependencies...
warning: Missing version constraint (e.g., a lower bound) for `torch`
warning: Missing version constraint (e.g., a lower bound) for `torchao`
warning: Missing version constraint (e.g., a lower bound) for `torchaudio`
warning: Missing version constraint (e.g., a lower bound) for `torchvision`
Updated https://github.com/tgale96/grouped_gemm (ebeae0b)
Resolved 20 packages in 7.79s
Installed 14 packages in 39.34s
 + filelock==3.16.1
 + fsspec==2024.12.0
 + jinja2==3.1.5
 + markupsafe==3.0.2
 + mpmath==1.3.0
 + networkx==3.4.2
 + numpy==2.2.1
 + pillow==11.1.0
 + sympy==1.13.1
 + torch==2.7.0.dev20250116+cu126
 + torchao==0.8.0.dev20250116+cu126
 + torchaudio==2.6.0.dev20250116+cu126
 + torchvision==0.22.0.dev20250116+cu126
 + typing-extensions==4.12.2
```
Then the second stage:
```console
$ TORCH_CUDA_ARCH_LIST=9.0 GROUPED_GEMM_CUTLASS=1 uv sync --extra compile --no-binary-package grouped_gemm
Resolved 21 packages in 5ms
   Built grouped-gemm @ git+https://github.com/tgale96/grouped_gemm@ebeae0bb3ded459886309b2a30410deb16937af4
Prepared 1 package in 13.65s
Installed 1 package in 60ms
 + grouped-gemm==0.1.6 (from git+https://github.com/tgale96/grouped_gemm@ebeae0bb3ded459886309b2a30410deb16937af4)
$ uv run python -c 'import grouped_gemm' ; echo $?
0
```
So, to recap, I'm looking for a way to specify the following in `pyproject.toml`:

- `--no-binary-package grouped_gemm`
- env vars that should be set during the `grouped_gemm` build
- an explicit build-time dependency between `grouped_gemm` and torch, so the build waits for and uses the torch nightly package automatically
- not having to pollute my deps with `setuptools` just because `grouped_gemm` needs it
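To make the request concrete, here is a purely hypothetical sketch of what such configuration could look like. None of these table names (`build-env`, `build-dependencies`) or the `no-binary-package` key placement are real uv options today; they are invented to illustrate the shape of the feature:

```toml
# Hypothetical syntax, for illustration only.
[tool.uv]
no-build-isolation-package = ["grouped_gemm"]
# replaces the --no-binary-package grouped_gemm CLI flag
no-binary-package = ["grouped_gemm"]

# env vars applied only while building grouped_gemm
[tool.uv.build-env.grouped_gemm]
TORCH_CUDA_ARCH_LIST = "9.0"
GROUPED_GEMM_CUTLASS = "1"

# build-time deps resolved from the project environment,
# so the grouped_gemm build sees the installed torch nightly
# and setuptools without them polluting [project.dependencies]
[tool.uv.build-dependencies]
grouped_gemm = ["torch", "setuptools"]
```

With something like this, the two-stage workflow above would collapse into a plain `uv sync --extra compile`, with uv ordering the torch install before the `grouped_gemm` source build.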