Summary
Tested the following command on a system with a GTX 1080 Ti and the CUDA 12.8 driver installed (as reported by `nvidia-smi`):
uv pip install torch --torch-backend=auto --preview
Then ran this test script:
import torch
tensor = torch.randn(3, 4, device='cuda')
print(tensor)
And got this:
cpu = _conversion_method_template(device=torch.device("cpu"))
/root/env/lib/python3.11/site-packages/torch/cuda/__init__.py:262: UserWarning:
Found GPU0 NVIDIA GeForce GTX 1080 Ti which is of cuda capability 6.1.
PyTorch no longer supports this GPU because it is too old.
The minimum cuda capability supported by this library is 7.5.
warnings.warn(
/root/env/lib/python3.11/site-packages/torch/cuda/__init__.py:287: UserWarning:
NVIDIA GeForce GTX 1080 Ti with CUDA capability sm_61 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_75 sm_80 sm_86 sm_90 sm_100 sm_120 compute_120.
If you want to use the NVIDIA GeForce GTX 1080 Ti GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(
Traceback (most recent call last):
File "/root/torch_test.py", line 4, in <module>
tensor = torch.randn(3, 4, device='cuda')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
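The warning above already contains enough information to predict this failure: the device's capability (sm_61) is absent from the wheel's compiled arch list. A minimal sketch of that check in pure Python (in a real environment the arch list would come from `torch.cuda.get_arch_list()` and the capability from `torch.cuda.get_device_capability()`; the list below is copied from the warning):

```python
def has_kernel_image(capability, arch_list):
    """Return True if a wheel compiled for `arch_list` ships a kernel
    image for a device of the given (major, minor) compute capability.

    Simplification: this ignores PTX forward compatibility (the
    `compute_XX` entries), which lets *newer* devices JIT-compile older
    PTX but never helps an older device like sm_61.
    """
    major, minor = capability
    return f"sm_{major}{minor}" in arch_list

# Arch list reported by the cu128 wheel in the warning above:
cu128_archs = ["sm_75", "sm_80", "sm_86", "sm_90", "sm_100", "sm_120"]

print(has_kernel_image((6, 1), cu128_archs))  # GTX 1080 Ti -> False
print(has_kernel_image((7, 5), cu128_archs))  # -> True
```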
I think looking only at the installed CUDA version/driver might not be enough; the CUDA compute capability supported by each torch wheel is relevant as well. In this case, we have a GTX 1080 Ti with the CUDA 12.8 driver installed, but we can't actually use the `cu128` torch wheel that `--torch-backend=auto` installs, as it doesn't support GPUs with CUDA compute capability < 7.5.
Some relevant discussion I found:
https://discuss.pytorch.org/t/gpu-compute-capability-support-for-each-pytorch-version/62434/4
https://github.com/moi90/pytorch_compute_capabilities
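To illustrate the suggestion, here is a rough sketch of what capability-aware backend selection could look like. The capability floors are taken from this report (the cu128 warning states a minimum of 7.5, and cu126 still works on 6.1); the function and table names are hypothetical, not uv's actual implementation:

```python
# Illustrative minimum compute capability per wheel variant (assumed
# values: cu128's floor comes from the UserWarning above, cu126's from
# the observation that it still runs on the GTX 1080 Ti).
MIN_CAPABILITY = {
    "cu128": (7, 5),
    "cu126": (6, 1),
}

def pick_backend(device_capability, candidates=("cu128", "cu126")):
    """Return the newest candidate backend whose minimum supported
    compute capability is <= the device's capability, or None if no
    candidate supports the device."""
    for backend in candidates:
        if device_capability >= MIN_CAPABILITY[backend]:
            return backend
    return None

print(pick_backend((6, 1)))  # GTX 1080 Ti -> cu126
print(pick_backend((8, 6)))  # e.g. an sm_86 device -> cu128
```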
I then also tested without `--torch-backend=auto`, so simply:
uv pip install torch
This seems to install the `cu126` wheel (going by the PyTorch website), and with it the test script worked just fine, meaning the `cu126` wheel has not yet dropped support for CUDA compute capability 6.1 devices.
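One way to confirm which variant actually got installed is to look at `torch.__version__`: PyTorch CUDA wheels encode the variant as a local version suffix (e.g. `+cu126`). A small helper to extract it (the version strings below are illustrative, not the exact versions from this report):

```python
def cuda_variant(version):
    """Extract the CUDA build tag from a torch version string,
    e.g. '2.7.1+cu126' -> 'cu126'. Returns None for CPU-only or
    suffix-less builds."""
    _, sep, local = version.partition("+")
    return local if sep and local.startswith("cu") else None

print(cuda_variant("2.7.1+cu126"))  # -> cu126
print(cuda_variant("2.7.1+cpu"))    # -> None
print(cuda_variant("2.7.1"))        # -> None
```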
Platform
Linux 5.15.0-142-generic x86_64 GNU/Linux
Version
0.8.0
Python version
Python 3.11.13