
Conversation

@kavyamali

@kavyamali kavyamali commented Oct 27, 2025

Also called the early_access_blackwell_wheels under def prepare_environment():

Description

This change adds dynamic detection via nvidia-smi to determine the Compute Capability of Nvidia GPUs on Windows. For GPUs with CC < 7, it then installs the latest PyTorch version that still supports them. In all other cases it falls back to the default logic, or to that in early_access_blackwell_wheels. This has been tested on a GTX 970 with no issues.
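The detection described above could look roughly like this (a hypothetical sketch, not the PR's actual code; `compute_cap` is a `--query-gpu` field supported by recent nvidia-smi versions):

```python
import subprocess

def get_compute_capability():
    """Query nvidia-smi for the first GPU's CUDA Compute Capability.

    Returns a float such as 5.2, or None if nvidia-smi is missing,
    fails, or returns unparseable output."""
    try:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=compute_cap", "--format=csv,noheader"],
            text=True, stderr=subprocess.DEVNULL, timeout=10,
        )
        # one line per GPU; use the first one
        return float(out.strip().splitlines()[0])
    except (OSError, subprocess.SubprocessError, ValueError, IndexError):
        return None
```

Returning None on any failure lets the caller fall back to the default install logic, as the description says.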

I also apologize for the confusion in my previous PR, as the changes were clearly bogus there.

Checklist:

Also called the early_access_blackwell_wheels under def prepare_environment()
@w-e-w
Collaborator

w-e-w commented Oct 31, 2025

sorry, no offense, but your actions are very suspicious

> I also apologize for the confusion in my previous PR, as the changes were clearly bogus there.

assuming that you are responding to #17158, this means that you deleted your old account (which is itself a new account that has no other activity)
that is not normal behavior

> Also called the early_access_blackwell_wheels under def prepare_environment():

calling early_access_blackwell_wheels makes zero sense

the PR description feels LLM-generated
I can't tell if the code is generated and the claim that it works on a GTX 970 is just an LLM hallucination

I think you are a bot, or at least someone who may not understand what they are doing
I can't trust your code and I can't trust your claim that it works on older GPUs
so sorry, I'm not going to merge this PR

@kavyamali
Author

kavyamali commented Oct 31, 2025

> sorry no offense your actions are very suspicious […]

I am sorry; I understand why you feel my actions might be suspicious. The reason I called early_access_blackwell was to make it work under the torch command, as that was the logic of my code. I wanted to make sure both work. I can provide screenshots or proof of it working on a 970. The reason I deleted my old account was that the entire branch there was a mess.

If you still don't want to merge the PR, I understand. Thanks for your time. And I am still a beginner, so I'd appreciate guidance.

@kavyamali
Author

> sorry no offense your actions are very suspicious […]

Here are the screenshots.

@kavyamali
Author

> sorry no offense your actions are very suspicious […]

Lastly, to clarify my logic: the torch_command variable didn't call the variables I defined, which were responsible for the nvidia-smi check and for installing a separate PyTorch version. So I created a condition calling those variables, and if the result was None, it falls back to the original command.
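The fallback described above is roughly this pattern (illustrative names only; `choose_torch_command` and `legacy_command` are not the PR's actual identifiers):

```python
def choose_torch_command(compute_cap, default_command):
    """Pick a legacy torch install command only when an old GPU was
    actually detected; otherwise keep the default launcher logic.

    compute_cap is None when nvidia-smi detection failed."""
    # PyTorch 1.12.1+cu113 is the last release with full CC 5.2 support,
    # per the claim made later in this thread
    legacy_command = (
        "pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 "
        "--extra-index-url https://download.pytorch.org/whl/cu113"
    )
    if compute_cap is not None and compute_cap < 7.0:
        return legacy_command
    # detection failed or modern GPU: fall back to the original command
    return default_command
```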

@w-e-w
Collaborator

w-e-w commented Oct 31, 2025

have you read early_access_blackwell?

def early_access_blackwell_wheels():
    """For Blackwell GPUs, use Early Access PyTorch Wheels provided by Nvidia"""
    print('deprecated early_access_blackwell_wheels')
    if all([

when something is "deprecated" it means it was used before but is not to be used anymore
so by using that function, you show that you either don't know what the term "deprecated" means or you never read the function

@kavyamali
Author

> have you read early_access_blackwell […]

Oh, I see. I didn't know that. Thanks for correcting me.

@kavyamali
Author

> have you read early_access_blackwell […]

The reason I added my variable was that the launcher didn't read my logic and proceeded with the usual installation process. I thought calling the earlier variable might be a good idea too. This is my first PR, so I'm sorry for the mistake.

@w-e-w
Collaborator

w-e-w commented Oct 31, 2025

and why did you delete your previous account, create a new account, and open a new PR?
that feels like something an LLM-code PR farm would do

@kavyamali
Author

> and why did you delete your previous account and create a new account […]

I made that account when I was 13, and it had nothing valuable, just a mess of code. So I made a new one. I'm not aware of LLM farming.

@w-e-w
Collaborator

w-e-w commented Oct 31, 2025

> I made that account when I was 13 […]

if my memory serves me I'm pretty sure that previous account was created less than two weeks ago

@kavyamali
Author

> have you read early_access_blackwell […]

So, would creating a new PR without the early_access_blackwell call be worthwhile?

@kavyamali
Author

> if my memory serves me I'm pretty sure that previous account was created less than two weeks ago
It was made around 2022 and had three repositories.

@kavyamali
Author

> if my memory serves me I'm pretty sure that previous account was created less than two weeks ago
Here is a mail for that account from 2022.

@w-e-w
Collaborator

w-e-w commented Oct 31, 2025

from what I can find, Maxwell cards should work with cu126

try something for me: install with cu126 and see if it works

use the dev branch (without your changes from this PR)
either set TORCH_INDEX_URL in webui-user.bat
like so

set TORCH_INDEX_URL=https://download.pytorch.org/whl/cu126

or directly modify the code (temporarily)

torch_index_url = os.environ.get('TORCH_INDEX_URL', "https://download.pytorch.org/whl/cu128")

- 'TORCH_INDEX_URL', "https://download.pytorch.org/whl/cu128"
+ 'TORCH_INDEX_URL', "https://download.pytorch.org/whl/cu126"

then do a clean install and see if it works

@w-e-w
Collaborator

w-e-w commented Oct 31, 2025

and even if it doesn't work with PyTorch 2.7, it seems to work with 2.6
so there might be no need to drop down to 1.12.1+cu113

@kavyamali
Author

kavyamali commented Oct 31, 2025

> and even if it doesn't work with PyTorch 2.7, it seems to work with 2.6 […]

I've tried the cu12x versions; even cu118 doesn't work. PyTorch dropped full library support for CC 5.2 after cu113 / 1.12.1. But if you say so, I'll try again with cu126.

@w-e-w
Collaborator

w-e-w commented Oct 31, 2025

you are on a 970, right?
based on the data here, it seems to work with
2.1.2+cu121 (this is the pytorch version that is used on the master branch)
https://vladmandic.github.io/sd-extension-system-info/pages/benchmark.html

@kavyamali
Author

> you are on a 970, right? based on the data here, it seems to work with 2.1.2+cu121 […]

Weird, then. When I try installing either of those, it says 'MemoryError'. I can say cu118 works on oobabooga, with full GPU acceleration, but it doesn't on automatic1111. I tried compiling PyTorch from source, and that failed too. But I can confirm that cu126, and the latest one mentioned, doesn't work. And besides, there is little to no performance improvement from 11.x to 12.x for such old GPUs. cu113 is the safest, I believe.

@kavyamali
Author

kavyamali commented Oct 31, 2025

> you are on a 970, right? based on the data here, it seems to work with 2.1.2+cu121 […]

Update: --no-cache-dir does make all PyTorch versions up to cu118 work. cu121 is targeted for Pascal and still doesn't install.

Edit: It might still work with memory optimisations, but it's too much work. Waiting for your call.
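The --no-cache-dir workaround reported above (pip can hit a MemoryError while caching multi-gigabyte torch CUDA wheels) can be sketched as a small helper; `build_pip_command` is a hypothetical name, not part of the webui's launcher:

```python
import sys

def build_pip_command(package: str, index_url: str) -> list[str]:
    """Build a pip install command with wheel caching disabled.

    --no-cache-dir skips pip's cache, avoiding the MemoryError that can
    occur while pip caches very large wheels such as torch CUDA builds."""
    return [
        sys.executable, "-m", "pip", "install",
        "--no-cache-dir",            # disable the wheel cache entirely
        package,
        "--extra-index-url", index_url,
    ]
```

Running the returned command via subprocess would mirror what the launcher does for its own pip installs.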

@kavyamali
Author

> you are on a 970, right? based on the data here, it seems to work with 2.1.2+cu121 […]

Hey, I know it has been a while, but I have some important updates. I recently did a fresh install of the latest Nvidia drivers and tried reinstalling CUDA 12.8 (cu128) PyTorch with the --no-cache-dir flag. Now every PyTorch version installs successfully, but the cu128 build is the only one that doesn't officially support Maxwell. This is the install log:
log.txt

All CUDA versions up to cu126 work fine now. Only cu128 gives an error at the start:

log1.txt

@kavyamali kavyamali deleted the branch AUTOMATIC1111:dev November 7, 2025 09:12
@kavyamali kavyamali closed this Nov 7, 2025