[Frontend] Add readiness and liveness endpoints to OpenAI API server #7078
Status: Closed
mfournioux wants to merge 18 commits into vllm-project:main from mfournioux:add_readiness_liveness_k8s_probes
Commits (18):
All 18 commits are by mfournioux:

bac2c90  add readiness and liveness k8s probes for openai api_server
d65bf58  update naming for pydantic classes from openai protocol
0c7945d  update naming for pydantic classes from openai protocol and remove aw…
fa1c549  add tests for readiness and liveness endpoints
2fbaa2f  correct syntax pydantic class in protocol
27ef5ac  correct ruff errors
7fa6a37  correct ruff errors
18a9f2c  fixing isort issues
32e030b  update some typo
5127e91  correct some yapf errors
c698d76  correct readiness probe regarding its http status
ea8be80  replace liveness endpoint by health endpoint and renaming readiness e…
c0baaea  clean some imports and configure error response for readiness endpoint
3a8b227  correct model response in readiness endpoint
ac095c1  add return response 500 for readiness if model weights not loaded
14b2b91  Update test_basic.py
b06d686  update the readiness endpoint with a try clause
e950836  add check if KV cache has been set up in readiness endpoint
Diff (the hunk shows the file's last line, `prompt: str`, removed and re-added, apparently changing only the trailing newline):

```diff
@@ -719,4 +719,4 @@ class DetokenizeRequest(OpenAIBaseModel):


 class DetokenizeResponse(OpenAIBaseModel):
-    prompt: str
+    prompt: str
```

Reviewer comment on this hunk: Please avoid deleting the last line here. (Since otherwise, the file remains unchanged.)
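For context on the behavior the commit messages above describe: a readiness endpoint that returns HTTP 500 until the model weights are loaded and the KV cache is set up. Below is a minimal sketch of that idea, not the PR's actual code; the `model_loaded` and `kv_cache_ready` flags are hypothetical stand-ins for real engine state.

```python
# Hypothetical sketch of a readiness endpoint, not the PR's actual code.
from fastapi import FastAPI
from fastapi.responses import JSONResponse, Response

app = FastAPI()

# Assumed state flags; in the real server these would be derived from
# the engine (model weights loaded, KV cache initialized).
model_loaded = False
kv_cache_ready = False


@app.get("/ready")
async def readiness() -> Response:
    """Return 200 only once the engine can actually serve requests."""
    if model_loaded and kv_cache_ready:
        return Response(status_code=200)
    return JSONResponse(
        status_code=500,
        content={"error": "model weights or KV cache not yet initialized"},
    )
```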
Reviewer: I guess you're going to update this test to check when the server is not ready?
mfournioux: Yes, exactly. I am working on it.
mfournioux: Regarding readiness, I have worked on creating a proper unit test for when the server is not ready. I was thinking of testing that the readiness endpoint returns an error when the model weights are not loaded or the KV cache is not set up.
However, when I checked how the vLLM server is launched, I realized that the endpoints are not callable until the server is fully deployed, the model is loaded, and the KV cache is set up. So I don't see how I can test the case where the model weights are not loaded or the KV cache is not set up, because under those conditions the readiness endpoint is not callable at all.
So, do you have any other idea how to do this test?
Is it compulsory to add this test for the PR to be merged?
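One way around this constraint (a sketch, not something the PR does) is to exercise the route handler with stubbed engine state instead of booting a real vLLM server. All names below are illustrative:

```python
# Hypothetical, self-contained test sketch: exercise the readiness route
# with a stubbed "engine not ready" state instead of a real vLLM server.
from fastapi import FastAPI, Response
from fastapi.testclient import TestClient

app = FastAPI()
engine_state = {"model_loaded": False, "kv_cache_ready": False}


@app.get("/ready")
async def ready() -> Response:
    ok = engine_state["model_loaded"] and engine_state["kv_cache_ready"]
    return Response(status_code=200 if ok else 500)


def test_readiness_when_engine_not_loaded():
    # No real engine involved: the stubbed state simulates a server whose
    # weights are not loaded and whose KV cache is not set up.
    client = TestClient(app)
    assert client.get("/ready").status_code == 500


def test_readiness_when_engine_ready():
    engine_state.update(model_loaded=True, kv_cache_ready=True)
    client = TestClient(app)
    assert client.get("/ready").status_code == 200
```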
Reviewer: Hmm, this would basically defeat the purpose of this PR in terms of fulfilling #6073. If the server cannot accept any requests until everything has been fully loaded, then there is essentially no difference between `/ready` and `/health`. Instead, we should enable the `/health` endpoint to respond before the vLLM engine has finished starting up.
mfournioux: I understand your point. Furthermore, after checking the new 0.5.4 release, I noticed that several updates have been added to the RPC server to check whether it is ready. So I don't think the readiness endpoint implemented in this PR is still useful.
These are the next possible actions I propose:
frittentheke: @mfournioux thanks for all your work on this feature.
I do have a deployment using a startup probe and liveness checks afterwards.
The main issue I have with the startup probe is that it is a workaround for applications with a potentially long startup that are unable to communicate their readiness. One never knows how much time a pod needs (downloading the model, weights, ...) and in the meantime liveness cannot be checked.
Switching to a liveness check that is available very early during startup would be nice, but that then requires a readiness indicator so that no traffic is sent until vLLM is ready.
I have not looked at the recent changes yet, but there really should be a way to bring up the web server (endpoint) early and also to indicate when it's ready.
Additionally, I would love for some metrics to be returned during the initialization phase as well, allowing that phase to be observed.
As for the Helm chart idea, I am thrilled about an official chart, so people don't have to individually write their deployments and figure out how to best configure vLLM and its checks, storage, or caching. Good liveness and readiness checks are also something that could come with it. I still stand behind the proposal that vLLM should be a better K8s citizen and provide these endpoints as well as possible (see the sketch below for the metrics point).
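Sketching the metrics point: if the web server comes up before the engine, it could expose a `/metrics` endpoint that reports the initialization phase. This is an illustration under assumed names, not vLLM's actual metrics implementation (vLLM's real `/metrics` endpoint exposes many more engine metrics):

```python
# Hypothetical sketch: expose an initialization-phase gauge while the
# engine is still loading. Uses prometheus_client; the metric name and
# wiring are illustrative rather than the project's actual code.
from fastapi import FastAPI, Response
from prometheus_client import CONTENT_TYPE_LATEST, Gauge, generate_latest

app = FastAPI()

# 0 = starting, 1 = loading weights, 2 = initializing KV cache, 3 = ready
init_phase = Gauge("server_init_phase", "Coarse startup phase of the server")
init_phase.set(0)


@app.get("/metrics")
async def metrics() -> Response:
    # Available as soon as the web server is up, so the startup phase
    # itself can be observed from Prometheus during initialization.
    return Response(content=generate_latest(), media_type=CONTENT_TYPE_LATEST)
```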
mfournioux: @frittentheke I have opened PR #9199 to share a Helm chart as an example of how to deploy vLLM on K8s, including probe configuration.