
backend: build with CUDA compute 5.0 support by default #3499


Merged: 3 commits into main on Feb 19, 2025

Conversation

cebtenzzre (Member)

It is possible that our llama.cpp backend is compatible with CUDA compute 5.0 GPUs such as the GTX 750 from 2014. We previously believed it was not, since upstream llama.cpp has never officially stated otherwise as far as I know, but ollama apparently includes compute 5.0 support in its CUDA 11 backend. We also build against CUDA 11, so it is worth a shot.

Fixes #3481
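
For context, the change presumably amounts to adding compute 5.0 to the set of architectures the CUDA kernels are compiled for (e.g. via CMake's `CMAKE_CUDA_ARCHITECTURES`); the PR diff has the authoritative details. Whether a given card clears that floor can be checked at runtime. Below is a minimal CUDA sketch, not part of this PR, that queries each device's compute capability:

```cuda
// check_cc.cu: list CUDA devices and flag any below compute 5.0.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("no CUDA devices found\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop{};
        if (cudaGetDeviceProperties(&prop, dev) != cudaSuccess)
            continue;
        // prop.major/prop.minor give the compute capability,
        // e.g. 5.0 for a GTX 750 (Maxwell).
        bool ok = prop.major >= 5;
        std::printf("device %d: %s, compute %d.%d (%s)\n", dev, prop.name,
                    prop.major, prop.minor,
                    ok ? "meets the 5.0 floor" : "below compute 5.0");
    }
    return 0;
}
```

Built with `nvcc check_cc.cu -o check_cc`, this would report compute 5.0 for cards like the GTX 750 or the GTX 950M mentioned below.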

cebtenzzre marked this pull request as ready for review, February 14, 2025 20:12
cebtenzzre (Member, Author)

CUDA still works on Linux with this change, and the installer only seems to be about 1 MiB larger. However, I don't have a compute 5.0 GPU to test on.

cebtenzzre requested a review from manyoso, February 18, 2025 16:44
cebtenzzre marked this pull request as draft, February 18, 2025 16:49
cebtenzzre marked this pull request as ready for review, February 18, 2025 16:51
cebtenzzre (Member, Author)

A user with a GTX 950M confirmed in the linked issue that this patch works.

cebtenzzre merged commit 96aeb44 into main on Feb 19, 2025
4 of 18 checks passed
Development

Successfully merging this pull request may close these issues.

Add support for GPUs with CUDA compute capability 5.0 (#3481)