
Conversation

@IlyasMoutawwakil (Contributor) commented Jan 22, 2024

  • Adapt setup.py (see the ROCm-detection sketch after this list)
  • Adapt build.yaml workflow
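
A minimal sketch of what the ROCm detection in such a setup.py can look like, assuming PyTorch is already installed at build time; the IS_ROCM and ROCM_VERSION names are illustrative, not taken from this PR:

```python
# Hedged sketch: detect a ROCm build of PyTorch at build time.
# torch.version.hip is a version string (e.g. "5.6.31061-...") on ROCm
# wheels and None on CUDA/CPU-only builds.
import torch

IS_ROCM = getattr(torch.version, "hip", None) is not None

if IS_ROCM:
    # e.g. "5.6" -- can be used to pick compatible kernel sources and flags
    ROCM_VERSION = ".".join(torch.version.hip.split(".")[:2])
    print(f"Building AWQ ExLlama kernels for ROCm {ROCM_VERSION}")
else:
    print("Building against CUDA (or CPU-only) PyTorch")
```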

@IlyasMoutawwakil changed the title from "ROCm support" to "AMD ROCm support" on Jan 22, 2024
@casper-hansen (Owner)

I have no AMD ROCm GPUs and no way of testing this. Am I right in understanding that it is exclusively the ExLlama kernels that will work with AMD GPUs on ROCm 5 (but not 6)?

@IlyasMoutawwakil (Contributor, Author) commented Jan 22, 2024

I tested on ROCm 5.6.1.
@seungrokj, do these changes build on both ROCm 5 and 6 without impacting performance?
If so, I can add them to this PR.

@seungrokj

@IlyasMoutawwakil I just checked AutoGPTQ/AutoGPTQ#515 in a ROCm 5.7 environment. It compiles, and there should be no performance impact because the syntaxes are basically the same. (Somehow the old syntax is not compatible with ROCm 6.0, though.)

@IlyasMoutawwakil (Contributor, Author)

@casper-hansen for the other part of your question: yes, for now ExLlama is going to be the only way to run AWQ models on AMD GPUs, until we get a hipifiable, ROCm-native, or performant Triton GEMM.
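
A usage sketch of that path, assuming the ExLlama kernel flag AutoAWQ's from_quantized exposed around this time (the exact keyword name, use_exllama here, is an assumption and may differ between versions):

```python
# Hedged sketch: load an AWQ checkpoint with the ExLlama kernels on a
# ROCm build of PyTorch. The use_exllama keyword is an assumption about
# the AutoAWQ API of this period and may not match exactly.
import torch
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "TheBloke/Mistral-7B-Instruct-v0.2-AWQ"  # example checkpoint

assert torch.version.hip is not None, "expected a ROCm build of PyTorch"

model = AutoAWQForCausalLM.from_quantized(
    model_path,
    fuse_layers=False,   # keep the sketch simple; no fused modules
    use_exllama=True,    # assumed flag: route matmuls through ExLlama
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
```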

@casper-hansen merged commit 3aed4bf into casper-hansen:main on Jan 26, 2024