
Commit afb00c8

hongxiayang authored and garg-amit committed

[CI/Build][Bugfix][Doc][ROCm] CI fix and doc update after ROCm 6.2 upgrade (vllm-project#8777)

Signed-off-by: Amit Garg <[email protected]>

1 parent 5100dc5 commit afb00c8

File tree

3 files changed: +16 −3 lines changed

.buildkite/test-pipeline.yaml (4 additions & 1 deletion)

@@ -90,8 +90,11 @@ steps:
   commands:
   - pip install -e ./plugins/vllm_add_dummy_model
   - pip install git+https://github.com/EleutherAI/lm-evaluation-harness.git@a4987bba6e9e9b3f22bd3a6c1ecf0abd04fd5622#egg=lm_eval[api]
-  - pytest -v -s entrypoints/llm --ignore=entrypoints/llm/test_lazy_outlines.py
+  - pytest -v -s entrypoints/llm --ignore=entrypoints/llm/test_lazy_outlines.py --ignore=entrypoints/llm/test_generate.py --ignore=entrypoints/llm/test_generate_multiple_loras.py --ignore=entrypoints/llm/test_guided_generate.py
   - pytest -v -s entrypoints/llm/test_lazy_outlines.py # it needs a clean process
+  - pytest -v -s entrypoints/llm/test_generate.py # it needs a clean process
+  - pytest -v -s entrypoints/llm/test_generate_multiple_loras.py # it needs a clean process
+  - pytest -v -s entrypoints/llm/test_guided_generate.py # it needs a clean process
   - pytest -v -s entrypoints/openai
   - pytest -v -s entrypoints/test_chat_utils.py
   - pytest -v -s entrypoints/offline_mode # Needs to avoid interference with other tests
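The pipeline step above gives each test that needs a fresh interpreter its own `pytest` invocation and excludes those files from the main run via repeated `--ignore` flags. A minimal sketch of how such a command line can be assembled (`pytest_cmd` is a hypothetical helper for illustration, not part of vLLM or its CI):

```python
def pytest_cmd(target, isolated):
    """Build a pytest command that runs `target` while skipping every
    file in `isolated` (each of those gets its own later invocation)."""
    cmd = ["pytest", "-v", "-s", target]
    cmd += [f"--ignore={path}" for path in isolated]
    return cmd

# File names taken from the pipeline step above.
isolated = [
    "entrypoints/llm/test_lazy_outlines.py",
    "entrypoints/llm/test_generate.py",
    "entrypoints/llm/test_generate_multiple_loras.py",
    "entrypoints/llm/test_guided_generate.py",
]
print(" ".join(pytest_cmd("entrypoints/llm", isolated)))
```

Keeping the exclusion list and the follow-up invocations in sync by hand, as the YAML above does, is the usual trade-off for tests that cannot share a process with the rest of the suite.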

Dockerfile.rocm (1 addition & 1 deletion)

@@ -120,7 +120,7 @@ COPY . .

 # Package upgrades for useful functionality or to avoid dependency issues
 RUN --mount=type=cache,target=/root/.cache/pip \
-    python3 -m pip install --upgrade numba scipy huggingface-hub[cli]
+    python3 -m pip install --upgrade numba scipy huggingface-hub[cli] pytest-shard

 # Workaround for ray >= 2.10.0
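The newly installed `pytest-shard` plugin lets CI split one test suite across several workers, each invoked with `--shard-id=N --num-shards=M`. A rough Python sketch of the round-robin partitioning such a plugin performs on the collected tests (`shard` is an illustrative stand-in, not pytest-shard's actual implementation):

```python
def shard(tests, shard_id, num_shards):
    # Keep every test whose collection index maps to this shard
    # (simple round-robin split across workers).
    return [t for i, t in enumerate(tests) if i % num_shards == shard_id]

tests = [f"test_{i}" for i in range(10)]
for s in range(4):
    print(s, shard(tests, s, 4))
```

Each test lands in exactly one shard, so the M workers together still cover the whole suite while each runs only about 1/M of it.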

docs/source/getting_started/amd-installation.rst (11 additions & 1 deletion)

@@ -28,6 +28,16 @@ Option 1: Build from source with docker (recommended)
 You can build and install vLLM from source.

 First, build a docker image from `Dockerfile.rocm <https://github.com/vllm-project/vllm/blob/main/Dockerfile.rocm>`_ and launch a docker container from the image.
+It is important to kick off the docker build with BuildKit enabled. Either set DOCKER_BUILDKIT=1 as an environment variable when calling the docker build command, or enable BuildKit in the docker daemon configuration /etc/docker/daemon.json as follows and restart the daemon:
+
+.. code-block:: console
+
+    {
+        "features": {
+            "buildkit": true
+        }
+    }
+

 `Dockerfile.rocm <https://github.com/vllm-project/vllm/blob/main/Dockerfile.rocm>`_ uses ROCm 6.2 by default, but also supports ROCm 5.7, 6.0 and 6.1 in older vLLM branches.
 It provides flexibility to customize the build of docker image using the following arguments:

@@ -152,7 +162,7 @@ Note to get your gfx architecture, run `rocminfo |grep gfx`.
 $ python3 setup.py develop

-This may take 5-10 minutes. Currently, `pip install .`` does not work for ROCm installation
+This may take 5-10 minutes. Currently, :code:`pip install .` does not work for ROCm installation.

 .. tip::
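The daemon.json fragment added in the doc change above must be valid JSON or the docker daemon will refuse to start after the restart. A minimal sketch of sanity-checking such an edit before restarting (using only the standard library; the fragment is the one from the diff):

```python
import json

# The exact fragment from the doc change above.
snippet = '{"features": {"buildkit": true}}'

# json.loads raises json.JSONDecodeError on malformed input,
# so a clean return means the daemon can parse the file.
cfg = json.loads(snippet)
print(cfg["features"]["buildkit"])  # → True
```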
