Merged
18 changes: 9 additions & 9 deletions README.md
@@ -25,15 +25,15 @@ RamaLama eliminates the need to configure the host system by instead pulling a c

## Accelerated images

-| Accelerator | Image |
-| :-----------------------| :------------------------- |
-| CPU, Vulkan, Apple | quay.io/ramalama/ramalama |
-| HIP_VISIBLE_DEVICES | quay.io/ramalama/rocm |
-| CUDA_VISIBLE_DEVICES | quay.io/ramalama/cuda |
-| ASAHI_VISIBLE_DEVICES | quay.io/ramalama/asahi |
-| INTEL_VISIBLE_DEVICES | quay.io/ramalama/intel-gpu |
-| ASCEND_VISIBLE_DEVICES | quay.io/ramalama/cann |
-| MUSA_VISIBLE_DEVICES | quay.io/ramalama/musa |
+| Accelerator | Image |
+| :---------------------------------| :------------------------- |
+| GGML_VK_VISIBLE_DEVICES (or CPU) | quay.io/ramalama/ramalama |
> **Member:** Would it be simpler to just name this `VULKAN_VISIBLE_DEVICES`?
>
> **Member (Author):** It wouldn't, because these env var names are all from llama.cpp.

+| HIP_VISIBLE_DEVICES | quay.io/ramalama/rocm |
+| CUDA_VISIBLE_DEVICES | quay.io/ramalama/cuda |
+| ASAHI_VISIBLE_DEVICES | quay.io/ramalama/asahi |
+| INTEL_VISIBLE_DEVICES | quay.io/ramalama/intel-gpu |
+| ASCEND_VISIBLE_DEVICES | quay.io/ramalama/cann |
+| MUSA_VISIBLE_DEVICES | quay.io/ramalama/musa |

### GPU support inspection
On first run, RamaLama inspects your system for GPU support, falling back to CPU if none are present. RamaLama uses container engines like Podman or Docker to pull the appropriate OCI image with all necessary software to run an AI Model for your system setup.
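The selection flow described above reduces to a lookup: if one of the accelerator environment variables from the table is set, the matching image is pulled; otherwise the default CPU/Vulkan image is used. The following is an illustrative sketch of that logic, not the actual RamaLama implementation (the real mapping lives in `ramalama/config.py`):

```python
import os

# Mirrors the accelerator table above (illustrative copy).
ACCEL_IMAGES = {
    "ASAHI_VISIBLE_DEVICES": "quay.io/ramalama/asahi",
    "ASCEND_VISIBLE_DEVICES": "quay.io/ramalama/cann",
    "CUDA_VISIBLE_DEVICES": "quay.io/ramalama/cuda",
    "GGML_VK_VISIBLE_DEVICES": "quay.io/ramalama/ramalama",
    "HIP_VISIBLE_DEVICES": "quay.io/ramalama/rocm",
    "INTEL_VISIBLE_DEVICES": "quay.io/ramalama/intel-gpu",
    "MUSA_VISIBLE_DEVICES": "quay.io/ramalama/musa",
}

# CPU / Vulkan / Apple fallback when no accelerator var is set.
DEFAULT_IMAGE = "quay.io/ramalama/ramalama"


def select_image(env=None):
    """Return the image for the first accelerator env var that is set,
    falling back to the default CPU image."""
    if env is None:
        env = os.environ
    for var, image in ACCEL_IMAGES.items():
        if env.get(var):
            return image
    return DEFAULT_IMAGE
```

For example, a host with `CUDA_VISIBLE_DEVICES=0` exported would resolve to `quay.io/ramalama/cuda`, while a bare host resolves to the default image.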
1 change: 1 addition & 0 deletions ramalama/common.py
@@ -507,6 +507,7 @@ def set_gpu_type_env_vars():
"ASAHI_VISIBLE_DEVICES",
"ASCEND_VISIBLE_DEVICES",
"CUDA_VISIBLE_DEVICES",
+"GGML_VK_VISIBLE_DEVICES",
"HIP_VISIBLE_DEVICES",
"INTEL_VISIBLE_DEVICES",
"MUSA_VISIBLE_DEVICES",
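The hunk above extends the tuple of accelerator variables that `set_gpu_type_env_vars()` scans. A hedged sketch of how such a scan might forward whichever variables are set into a container invocation (hypothetical `collect_gpu_env_args` helper, not the actual RamaLama code):

```python
import os

# Same variable list the common.py hunk above enumerates.
GPU_ENV_VARS = (
    "ASAHI_VISIBLE_DEVICES",
    "ASCEND_VISIBLE_DEVICES",
    "CUDA_VISIBLE_DEVICES",
    "GGML_VK_VISIBLE_DEVICES",
    "HIP_VISIBLE_DEVICES",
    "INTEL_VISIBLE_DEVICES",
    "MUSA_VISIBLE_DEVICES",
)


def collect_gpu_env_args(env=None):
    """Build `-e VAR=value` argument pairs (podman/docker style) for every
    GPU env var that is present in the host environment."""
    if env is None:
        env = os.environ
    args = []
    for var in GPU_ENV_VARS:
        if var in env:
            args += ["-e", f"{var}={env[var]}"]
    return args
```

So a host exporting `HIP_VISIBLE_DEVICES=0` would contribute `-e HIP_VISIBLE_DEVICES=0` to the container engine command line.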
1 change: 1 addition & 0 deletions ramalama/config.py
@@ -46,6 +46,7 @@ class BaseConfig:
"ASAHI_VISIBLE_DEVICES": "quay.io/ramalama/asahi",
"ASCEND_VISIBLE_DEVICES": "quay.io/ramalama/cann",
"CUDA_VISIBLE_DEVICES": "quay.io/ramalama/cuda",
+"GGML_VK_VISIBLE_DEVICES": "quay.io/ramalama/ramalama",
"HIP_VISIBLE_DEVICES": "quay.io/ramalama/rocm",
"INTEL_VISIBLE_DEVICES": "quay.io/ramalama/intel-gpu",
"MUSA_VISIBLE_DEVICES": "quay.io/ramalama/musa",