Supported pytorch added for CUDA 11.3 and CUDA 11.7, dynamically. (Windows only) #17163
Conversation
Also called the early_access_blackwell_wheels under def prepare_environment()
Sorry, no offense, but your actions are very suspicious.
Assuming that you are responding to #17158, this means that you deleted your old account (which itself is a new account with no other activity).
Calling early_access_blackwell_wheels makes zero sense, and the PR description feels LLM-generated. I think you are a bot, or at least someone who may not understand what they are doing.
I am sorry, and I understand why you feel my actions might be suspicious. The reason I called early_access_blackwell was to make it work under the torch command, as that was the logic of my code. I wanted to make sure both work. I can provide screenshots or proof of it working on a 970. The reason I deleted my old account was that the entire branch there was a mess. If you still don't want to merge the PR, I understand. Thanks for your time. And I am still a beginner, so I'd appreciate guidance.
Here are the screenshots.
Lastly, to clarify my logic: the torch_command variable didn't call the defined variables that were responsible for the nvidia-smi check and for installing separate PyTorch versions. So I created a condition that calls those variables, and if the result was None, it falls back to the original behaviour.
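For reference, here is a minimal sketch of the detection step described above; the helper name and the parsing are assumptions for illustration, not code taken from this PR, and the compute_cap query field needs a reasonably recent nvidia-smi.

```python
import subprocess

def get_compute_capability():
    # Ask nvidia-smi for the GPU's compute capability, e.g. "5.2" on a GTX 970.
    # Return None if nvidia-smi is missing or the query fails, so the caller can
    # fall back to the default torch_command.
    try:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=compute_cap", "--format=csv,noheader"],
            text=True,
        )
        lines = out.strip().splitlines()
        return lines[0] if lines else None
    except (OSError, subprocess.CalledProcessError):
        return None
```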
Have you read early_access_blackwell? stable-diffusion-webui/modules/launch_utils.py, lines 331 to 334 in 6685e53.
When something is "deprecated", it means it is something that was used before but is not to be used anymore.
Oh, I see. I didn't know that. Thanks for correcting me.
The reason I called my variable was that the code didn't read the logic and proceeded with the usual installation process. I thought calling the earlier variable might be a good idea too. This is my first PR, so I'm sorry for the mistake.
And why did you delete your previous account, create a new account, and make a new PR?
I made that account when I was 13, and it has nothing valuable, just a mess of code. So I made a new one. I'm not aware of LLM farming.
If my memory serves me, I'm pretty sure that previous account was created less than two weeks ago.
So, would creating a new PR without calling the early_access_blackwell variable be worthwhile?
N |
From what I can find, Maxwell cards should work with cu126. Try something for me: use the dev branch (without your changes from this PR), set TORCH_INDEX_URL=https://download.pytorch.org/whl/cu126, or directly modify the code (temporarily):
- 'TORCH_INDEX_URL', "https://download.pytorch.org/whl/cu128"
+ 'TORCH_INDEX_URL', "https://download.pytorch.org/whl/cu126"
Then do a clean install and see if it works.
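For anyone following along, this is roughly how that override behaves in launch_utils.py: the environment variable wins, and the string in the code is only the fallback. A hedged sketch (the default URL and the unpinned packages are illustrative, not an exact copy of the repo):

```python
import os

# Setting TORCH_INDEX_URL in the environment overrides the default without
# editing the code; the diff above changes only the fallback value.
torch_index_url = os.environ.get('TORCH_INDEX_URL', "https://download.pytorch.org/whl/cu128")
torch_command = os.environ.get(
    'TORCH_COMMAND',
    f"pip install torch torchvision --extra-index-url {torch_index_url}",
)
```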
And if it doesn't work with PyTorch 2.7, then it seems to work with 2.6.
I've tried the cu12x versions; even cu118 doesn't work. PyTorch dropped full library support for CC 5.2 after cu113 (1.12.1). But if you say so, I'll try again with cu126.
You are on a 970, right?
Weird, then. When I try installing either of those, it says 'MemoryError'. I can say cu118 works on oobabooga, with full GPU acceleration, but it doesn't on automatic1111. I tried compiling PyTorch from source; that failed too. But I can confirm cu126 or the latest one mentioned doesn't work. And besides, there is little to no performance improvement from 11.x to 12.x for such old GPUs. cu113 is the safest, I believe.
Update: --no-cache-dir does make all PyTorch versions up to cu118 work. cu121 is targeted at Pascal and still doesn't install. Edit: it might still work with memory optimisations, but it's too much work. Waiting for your call.
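For context on the workaround: pip normally keeps the multi-gigabyte CUDA wheel in its on-disk cache, which is a plausible source of the MemoryError on this machine, and --no-cache-dir skips that step. A sketch of the kind of install meant here (the cu118 index and the unpinned packages are examples only):

```python
import subprocess
import sys

# Install a Maxwell-compatible build without pip's wheel cache.
subprocess.check_call([
    sys.executable, "-m", "pip", "install", "--no-cache-dir",
    "torch", "torchvision",
    "--extra-index-url", "https://download.pytorch.org/whl/cu118",
])
```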
Hey, I know it has been a while, but I have some important updates. I recently did a fresh install of the latest Nvidia drivers and tried reinstalling the CUDA 12.8 (cu128) PyTorch with --no-cache-dir. Now every PyTorch version installs successfully, but cu128 PyTorch is the only one that doesn't officially support Maxwell. This is the install log: And all CUDA versions up to cu126 work fine now. Only cu128 gives an error at the start:
[Screenshots of the install logs attached]
Description
This change adds dynamic detection using nvidia-smi to determine the Compute Capability of Nvidia GPUs with CC < 7 on Windows. It then installs the latest supported PyTorch for that GPU. In all other cases, it falls back to the default logic or to what is mentioned in early_access_blackwell_wheels. This has been tested on a GTX 970 with no issues.
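A minimal sketch of the behaviour this description outlines, assuming a detection helper like the get_compute_capability() sketched earlier in the thread; the thresholds, wheel pins, and index URLs below are illustrative rather than copied from the PR diff:

```python
def choose_torch_command(default_command):
    # Select an install command from the detected compute capability; anything
    # undetected (None) or CC >= 7 keeps the default behaviour unchanged.
    cc = get_compute_capability()
    if cc is None:
        return default_command
    major = int(cc.split(".")[0])
    if major < 6:
        # Maxwell (e.g. GTX 970, CC 5.2): last wheels with full support are cu113.
        return ("pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 "
                "--extra-index-url https://download.pytorch.org/whl/cu113")
    if major < 7:
        # Pascal: cu117 wheels.
        return ("pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 "
                "--extra-index-url https://download.pytorch.org/whl/cu117")
    return default_command
```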
I also apologize for the confusion in my previous PR, as the changes were clearly bogus there.
Checklist: