
Commit 46c8401

annapendleton authored and jimpang committed
[Doc] Add troubleshooting section to k8s deployment (vllm-project#19377)
Signed-off-by: Anna Pendleton <[email protected]>
1 parent 34eab65 commit 46c8401

File tree: 1 file changed (+24, −10 lines)


docs/deployment/k8s.md

Lines changed: 24 additions & 10 deletions
@@ -5,19 +5,22 @@ title: Using Kubernetes
 Deploying vLLM on Kubernetes is a scalable and efficient way to serve machine learning models. This guide walks you through deploying vLLM using native Kubernetes.

-* [Deployment with CPUs](#deployment-with-cpus)
-* [Deployment with GPUs](#deployment-with-gpus)
+- [Deployment with CPUs](#deployment-with-cpus)
+- [Deployment with GPUs](#deployment-with-gpus)
+- [Troubleshooting](#troubleshooting)
+    - [Startup Probe or Readiness Probe Failure, container log contains "KeyboardInterrupt: terminated"](#startup-probe-or-readiness-probe-failure-container-log-contains-keyboardinterrupt-terminated)
+- [Conclusion](#conclusion)

 Alternatively, you can deploy vLLM to Kubernetes using any of the following:

-* [Helm](frameworks/helm.md)
-* [InftyAI/llmaz](integrations/llmaz.md)
-* [KServe](integrations/kserve.md)
-* [kubernetes-sigs/lws](frameworks/lws.md)
-* [meta-llama/llama-stack](integrations/llamastack.md)
-* [substratusai/kubeai](integrations/kubeai.md)
-* [vllm-project/aibrix](https://github.com/vllm-project/aibrix)
-* [vllm-project/production-stack](integrations/production-stack.md)
+- [Helm](frameworks/helm.md)
+- [InftyAI/llmaz](integrations/llmaz.md)
+- [KServe](integrations/kserve.md)
+- [kubernetes-sigs/lws](frameworks/lws.md)
+- [meta-llama/llama-stack](integrations/llamastack.md)
+- [substratusai/kubeai](integrations/kubeai.md)
+- [vllm-project/aibrix](https://github.com/vllm-project/aibrix)
+- [vllm-project/production-stack](integrations/production-stack.md)

 ## Deployment with CPUs

@@ -351,6 +354,17 @@ INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

 If the service is correctly deployed, you should receive a response from the vLLM model.

+## Troubleshooting
+
+### Startup Probe or Readiness Probe Failure, container log contains "KeyboardInterrupt: terminated"
+
+If the startup or readiness probe `failureThreshold` is too low for the time the server needs to start up, the kubelet will kill the container. Two indications that this has happened:
+
+1. The container log contains "KeyboardInterrupt: terminated".
+2. `kubectl get events` shows the message `Container $NAME failed startup probe, will be restarted`.
+
+To mitigate, increase the `failureThreshold` to allow more time for the model server to start serving. You can identify a suitable `failureThreshold` by removing the probes from the manifest and measuring how long the model server takes to report that it is ready to serve.
+
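To illustrate the mitigation above, a startup probe with a raised `failureThreshold` might look like the following sketch. The endpoint path, port, and timing values are illustrative assumptions, not taken from this commit:

```yaml
# Hypothetical container spec fragment for a vLLM server pod.
# The probe budget is failureThreshold * periodSeconds = 600 s,
# which must exceed the measured model-server startup time.
startupProbe:
  httpGet:
    path: /health   # assumed health endpoint
    port: 8000      # assumed server port
  periodSeconds: 10
  failureThreshold: 60
```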
 ## Conclusion

 Deploying vLLM with Kubernetes allows for efficient scaling and management of ML models leveraging GPU resources. By following the steps outlined above, you should be able to set up and test a vLLM deployment within your Kubernetes cluster. If you encounter any issues or have suggestions, please feel free to contribute to the documentation.
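The sizing guidance in the new troubleshooting section (the probe budget `failureThreshold × periodSeconds` should exceed the measured startup time) can be sketched as a small helper. The function name, default period, and safety margin below are assumptions for illustration, not part of the commit:

```python
import math

def startup_probe_budget(measured_startup_s: float,
                         period_s: int = 10,
                         margin: float = 1.5) -> int:
    """Pick a failureThreshold so that the total probe budget
    (failureThreshold * period_s) comfortably exceeds the
    measured model-server startup time."""
    return math.ceil(measured_startup_s * margin / period_s)

# For a server measured to take 180 s to become ready, a 1.5x
# margin with a 10 s period gives a 270 s budget:
print(startup_probe_budget(180))  # -> 27
```

For example, if removing the probes shows the server takes three minutes to report ready, this suggests setting `failureThreshold: 27` with `periodSeconds: 10`.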
