- Ubuntu 22.04 LTS
- CUDA 12.8 / 12.4 (12.8 is default)
- Python 3.12.9 / 3.11.12 (3.12.9 is default)
- Torch 2.8.0 / 2.6.0 (2.8.0 is default)
- xformers 0.0.32.post1 / 0.0.29.post3 (0.0.32.post1 is default)
- Jupyter Lab
- code-server
- ComfyUI v0.3.51
- runpodctl
- OhMyRunPod
- RunPod File Uploader
- croc
- rclone
- Application Manager
- CivitAI Downloader
This image is designed to work on RunPod. There are options for both CUDA 12.4 (for non-5090 GPU types) and CUDA 12.8 (for the RTX 5090 GPU type). There are also Python 3.11 and Python 3.12 options for each, since some ComfyUI custom nodes require Python 3.11 and do not work correctly with Python 3.12.

Click the appropriate link below to deploy the template of your choice on RunPod.
| RunPod Template Version | RunPod Template Description |
|---|---|
| CUDA 12.4 + Python 3.11 | Template with CUDA 12.4 and Python 3.11 for non-RTX 5090 GPU types |
| CUDA 12.4 + Python 3.12 | Template with CUDA 12.4 and Python 3.12 for non-RTX 5090 GPU types |
| CUDA 12.8 + Python 3.11 | Template with CUDA 12.8 and Python 3.11 for the RTX 5090 GPU type |
| CUDA 12.8 + Python 3.12 | Template with CUDA 12.8 and Python 3.12 for the RTX 5090 GPU type |
**Note**

You will need to edit the `docker-bake.hcl` file and update `REGISTRY_USER` and `RELEASE`. You can edit the other values too, but these are the most important ones.
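As a rough sketch, the variables in `docker-bake.hcl` typically look something like the following. The exact names other than `REGISTRY_USER` and `RELEASE`, and all the default values, are illustrative assumptions rather than copied from the actual file:

```hcl
variable "REGISTRY" {
  # Assumed default; the real file may point elsewhere
  default = "docker.io"
}

variable "REGISTRY_USER" {
  # Replace with your own Docker Hub (or other registry) username
  default = "myuser"
}

variable "RELEASE" {
  # Replace with the tag you want to publish, e.g. a version number
  default = "1.0.0"
}
```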
**Important**

In order to cache the models, you will need at least 32GB of CPU/system memory (not VRAM) due to the large size of the models. If you have less than 32GB of system memory, you can comment out or remove the code in the `Dockerfile` that caches the models.
```bash
# Clone the repo and change into it so the bake file is in the working directory
git clone https://github.com/ashleykleynhans/comfyui-docker.git
cd comfyui-docker

# Log in to Docker Hub
docker login

# Build the default image (CUDA 12.8 and Python 3.12), tag it, and push it to Docker Hub
docker buildx bake -f docker-bake.hcl --push

# OR build a different image (e.g. CUDA 12.4 and Python 3.11), tag it, and push it to Docker Hub
docker buildx bake -f docker-bake.hcl cu124-py311 --push

# OR build ALL images, tag them, and push them to Docker Hub
docker buildx bake -f docker-bake.hcl all --push

# Same as above, but customise the registry, user, and release
REGISTRY=ghcr.io REGISTRY_USER=myuser RELEASE=my-release docker buildx \
    bake -f docker-bake.hcl --push
```
```bash
docker run -d \
  --gpus all \
  -v /workspace \
  -p 2999:2999 \
  -p 3000:3001 \
  -p 7777:7777 \
  -p 8000:8000 \
  -p 8888:8888 \
  -e JUPYTER_PASSWORD=Jup1t3R! \
  -e EXTRA_ARGS="--lowvram --disable-xformers" \
  ashleykza/comfyui:latest
```
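Note that a multi-flag `EXTRA_ARGS` value needs to be quoted so the shell passes both flags to `docker run` as a single environment variable. A minimal shell sketch of the difference:

```shell
# Unquoted, the shell would split the value at the space, so docker would
# see --disable-xformers as a separate argument rather than part of
# EXTRA_ARGS. Quoting keeps both flags in one value.
EXTRA_ARGS="--lowvram --disable-xformers"

# Quoted expansion preserves the value as-is
echo "$EXTRA_ARGS"
```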
You can substitute the image name and tag with your own.
| Connect Port | Internal Port | Description |
|---|---|---|
| 3000 | 3001 | ComfyUI |
| 7777 | 7777 | Code Server |
| 8000 | 8000 | Application Manager |
| 8888 | 8888 | Jupyter Lab |
| 2999 | 2999 | RunPod File Uploader |
| Variable | Description | Default |
|---|---|---|
| `JUPYTER_LAB_PASSWORD` | Set a password for Jupyter Lab | (not set - no password) |
| `DISABLE_AUTOLAUNCH` | Disable applications from launching automatically | (not set) |
| `DISABLE_SYNC` | Disable syncing if using a RunPod network volume | (not set) |
| `EXTRA_ARGS` | Specify extra command line arguments for ComfyUI, e.g. `--lowvram`, `--disable-xformers`, etc. | (not set) |
ComfyUI writes a log file, so you can view the logs by tailing the file instead of killing the service.
| Application | Log file |
|---|---|
| ComfyUI | `/workspace/logs/comfyui.log` |
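For example, to print the most recent log lines and then follow new output as ComfyUI runs (press Ctrl+C to stop tailing; the service keeps running):

```shell
# Show the last 50 lines of the ComfyUI log, then keep following the
# file as new lines are appended
tail -n 50 -f /workspace/logs/comfyui.log
```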
Pull requests and issues on GitHub are welcome. Bug fixes and new features are encouraged.