build: bump torch to 2.7.1 and CUDA 12.8 support #1182
Conversation
- Add a .python-version file to pin the project to Python 3.11
- Update the README setup to require CUDA toolkit 12.8 instead of 12.4 (Linux and Windows)
- Raise the project's Python requirement to >=3.10,<3.13
- Bump the torch dependency from 2.6.0 to 2.7.1
- Switch the PyTorch CUDA wheel index from cu124 to cu128

Signed-off-by: CHEN, CHUN <[email protected]>
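For context, a minimal sketch of what these changes might look like in pyproject.toml, assuming the project resolves dependencies with uv (as the lockfile commits below suggest). The table layout and the `pytorch-cu128` index name are assumptions for illustration, not the PR's literal diff:

```toml
[project]
# Raised floor and ceiling from the commit above
requires-python = ">=3.10,<3.13"
dependencies = [
    "torch>=2.7.1",
]

# uv-specific routing: pull torch from the cu128 wheel index instead of cu124
[tool.uv.sources]
torch = { index = "pytorch-cu128" }

[[tool.uv.index]]
name = "pytorch-cu128"
url = "https://download.pytorch.org/whl/cu128"
explicit = true  # only used by packages that opt in via tool.uv.sources
```

With `explicit = true`, PyPI stays the default index for every other package, so only torch is fetched from the PyTorch server.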
…n README"

This reverts commit 6fe0a87. The issue of relying on two different versions of cuDNN in this project has been resolved.

Signed-off-by: CHEN, CHUN <[email protected]>
- Only download torch from PyTorch; obtain all other packages from PyPI. There is a chance it can run on Python 3.9.
- Restrict numpy, onnxruntime, and pandas to versions compatible with Python 3.9.

Signed-off-by: CHEN, CHUN <[email protected]>
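Restrictions like these are usually expressed with PEP 508 environment markers. A hedged sketch follows; the version caps below are placeholders standing in for "last release line with Python 3.9 wheels", not the values from this PR:

```toml
[project]
dependencies = [
    # Placeholder caps for illustration; check the PR diff for the real bounds
    "numpy<2.1; python_version < '3.10'",
    "numpy>=2.1; python_version >= '3.10'",
    "onnxruntime<1.20; python_version < '3.10'",
    "pandas<2.3; python_version < '3.10'",
]
```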
- Add triton version 3.3.0 or newer to the dependencies to support the arm64 architecture.

Signed-off-by: CHEN, CHUN <[email protected]>
My understanding is that PyTorch comes pre-built with the required CUDA runtime libraries included. I'm a bit confused, then: who are the official CUDA installation instructions meant for in that case? Thanks!
Doc from PyTorch: https://pytorch.org/get-started/locally/

Docs from NVIDIA:
- Windows: https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html
- Windows WSL: https://docs.nvidia.com/cuda/wsl-user-guide/index.html#nvidia-compute-software-support-on-wsl-2
- Linux, OSX: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html

Then on a native Linux workstation with Podman, you probably work with CDI: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html

That's a lot. I always have Windows users install NVIDIA's CUDA Toolkit exe, and let Linux users read that document. (...Linux users probably know what they should do.)
Thanks
Really looking forward to this being merged so I can upgrade my infrastructure!
- Add a platform marker to the triton dependency to skip it on Windows, as triton does not support Windows. Signed-off-by: Jim Chen <[email protected]>
Signed-off-by: Jim Chen <[email protected]>
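A sketch of the marker described in the commit above (the exact spelling in the PR may differ, and a later commit narrows triton to Linux only):

```toml
[project]
dependencies = [
    # Skip triton on Windows, which has no official wheels;
    # 3.3.0+ is needed for arm64 support per the earlier commit
    "triton>=3.3.0; sys_platform != 'win32'",
]
```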
Thanks for the patch! It does not install on macOS via git install.
- macOS uses CPU-only PyTorch from the pytorch-cpu index
- Linux and Windows use CUDA 12.8 PyTorch from the pytorch index
- triton only installs on Linux with CUDA 12.8 support
- Update the lockfile to support multi-platform builds
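Assuming uv again, per-platform index routing like this can be expressed with markers on the source entries. A sketch under that assumption (index names are illustrative):

```toml
[tool.uv.sources]
torch = [
    { index = "pytorch-cpu", marker = "sys_platform == 'darwin'" },
    { index = "pytorch-cu128", marker = "sys_platform != 'darwin'" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu128"
url = "https://download.pytorch.org/whl/cu128"
explicit = true
```

At resolution time, uv evaluates the markers per platform and records both variants in the lockfile, which is what makes the single lockfile work across macOS, Linux, and Windows.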
@kalvin807 Please try using 0e7153b to see if it works on Mac.
It installed successfully, thanks!
In addition, upgrading to cu128 means we theoretically support Blackwell (RTX 50XX), but I don't have the hardware to test it.
This is a reissue of #1098, and I mentioned it in the closing comment of PR #1133.
This version of the code is currently released through the https://github.com/jim60105/docker-whisperX project.