[Feature] Add /tokenize and /detokenize OpenAI compatible endpoints #9545
The diff adds Pydantic schemas for the two new endpoints. First, the `/tokenize` request and response models:

```diff
@@ -629,12 +629,50 @@ class RerankResponse(BaseModel):
     meta_info: Optional[dict] = None
+
+
+class TokenizeRequest(BaseModel):
+    """Request schema for the /tokenize endpoint."""
+
+    model: str
+    prompt: Union[str, List[str]]
+    add_special_tokens: bool = Field(
+        default=True,
+        description="whether to add model-specific special tokens (e.g. BOS/EOS) during encoding.",
+    )
+
+
+class TokenizeResponse(BaseModel):
+    """Response schema for the /tokenize endpoint."""
+
+    tokens: Union[List[int], List[List[int]]]
+    count: Union[int, List[int]]
+    max_model_len: int
```

Review thread on `prompt: Union[str, List[str]]`:

> Shall we keep the batched option? cc @slin1237 @CatherineSue

> I think we can keep it, as this is not an official OpenAI endpoint and it directly uses the tokenizer, so there are no performance or compatibility concerns.
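Given these schemas, a `/tokenize` call from a client might look like the sketch below. The base URL, port, and model name are assumptions rather than values from the PR, and `requests` is just one possible HTTP client; the batched form discussed above simply passes a list of strings as `prompt`.

```python
import requests

BASE_URL = "http://localhost:30000"  # assumed server address; adjust to your deployment

# Single prompt; per the review thread, a batch is a list of strings instead.
resp = requests.post(
    f"{BASE_URL}/tokenize",
    json={
        "model": "my-model",          # hypothetical model name
        "prompt": "Hello, world!",
        "add_special_tokens": True,   # schema default: add BOS/EOS-style tokens
    },
)
resp.raise_for_status()
body = resp.json()
print(body["tokens"])         # List[int] for one prompt, List[List[int]] for a batch
print(body["count"])          # token count(s), mirroring the shape of `prompt`
print(body["max_model_len"])  # the model's maximum context length
```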
The `/detokenize` schemas follow the same pattern, and both new request types are added to the `OpenAIServingRequest` union:

```diff
+class DetokenizeRequest(BaseModel):
+    """Request schema for the /detokenize endpoint."""
+
+    model: str
+    tokens: Union[List[int], List[List[int]]]
+    skip_special_tokens: bool = Field(
+        default=True,
+        description="whether to exclude special tokens (e.g. padding or EOS) during decoding.",
+    )
+
+
+class DetokenizeResponse(BaseModel):
+    """Response schema for the /detokenize endpoint."""
+
+    text: Union[str, List[str]]
+
+
+OpenAIServingRequest = Union[
+    ChatCompletionRequest,
+    CompletionRequest,
+    EmbeddingRequest,
+    ScoringRequest,
+    V1RerankReqInput,
+    TokenizeRequest,
+    DetokenizeRequest,
+]
```
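A round trip through both endpoints, under the same assumptions (hypothetical base URL and model name), would then look like:

```python
import requests

BASE_URL = "http://localhost:30000"  # assumed server address
MODEL = "my-model"                   # hypothetical model name

# Tokenize, then feed the resulting IDs back through /detokenize.
tokens = requests.post(
    f"{BASE_URL}/tokenize",
    json={"model": MODEL, "prompt": "Hello, world!"},
).json()["tokens"]

text = requests.post(
    f"{BASE_URL}/detokenize",
    json={
        "model": MODEL,
        "tokens": tokens,
        "skip_special_tokens": True,  # schema default: drop special tokens when decoding
    },
).json()["text"]

print(text)  # should recover (approximately) the original prompt
```

With `add_special_tokens=True` on encode and `skip_special_tokens=True` on decode, the round trip should strip the added special tokens and recover the original text.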