
Commit dcf5217

docs: improve Getting Started page and SDK documentation structure (#17614)
* docs: update Getting Started page with accurate endpoints and fix exception handling
  - Update endpoints list to include /responses, /audio, /batches
  - Change "Consistent output" to be endpoint-agnostic
  - Clarify Response Format title as "OpenAI Chat Completions Format"
  - Fix exception handling example: use litellm exceptions instead of deprecated openai.error
  - Add model prefix (anthropic/) to example

* docs: reorganize sidebar and improve SDK documentation structure

  Sidebar changes:
  - Reorder: Python SDK first, then AI Gateway (Proxy)
  - Rename "LiteLLM - Getting Started" to "Getting Started"
  - Restructure SDK section with Core Functions, Configuration subsections
  - Move budget_manager to Guides
  - Move sdk_custom_pricing and migration to Extras
  - Remove duplicate embedding/async_embedding and embedding/moderation

  Content changes:
  - Add Response Format section to response_api.md
  - Add async aembedding() section to supported_embedding.md

* docs: add deprecation notice for OpenAI Assistants API

  OpenAI has deprecated the Assistants API, shutting down on August 26, 2026. Added warning banner directing users to the Responses API.

* docs: expand Core Functions in SDK sidebar

  Add more SDK functions to Core Functions category:
  - text_completion()
  - image_generation()
  - transcription()
  - speech()
  - Link to "All Supported Endpoints" for complete list

* Rename Sidebar Item

* docs: revert Getting Started label to original

* Rename sidebar label from 'LiteLLM - Getting Started' to 'Getting Started'
1 parent 7b47c0f commit dcf5217

File tree: 5 files changed, +152 / −31 lines changed

docs/my-website/docs/assistants.md

Lines changed: 8 additions & 0 deletions

@@ -3,6 +3,14 @@ import TabItem from '@theme/TabItem';
 
 # /assistants
 
+:::warning Deprecation Notice
+
+OpenAI has deprecated the Assistants API. It will shut down on **August 26, 2026**.
+
+Consider migrating to the [Responses API](/docs/response_api) instead. See [OpenAI's migration guide](https://platform.openai.com/docs/guides/responses-vs-assistants) for details.
+
+:::
+
 Covers Threads, Messages, Assistants.
 
 LiteLLM currently covers:
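
The Responses API that the notice points at is litellm's `responses()` call, documented in response_api.md (also touched in this commit). A minimal sketch of the migration target, assuming an OpenAI model behind the `openai/` prefix (model name illustrative, not from the diff):

```python
# Sketch only, not part of the commit: the suggested migration target
# is a plain litellm.responses() call (model name illustrative).
import litellm

response = litellm.responses(
    model="openai/gpt-4o",
    input="Write a bedtime story about a unicorn.",
)
print(response)
```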

docs/my-website/docs/embedding/supported_embedding.md

Lines changed: 20 additions & 0 deletions

@@ -10,6 +10,26 @@ import os
 os.environ['OPENAI_API_KEY'] = ""
 response = embedding(model='text-embedding-ada-002', input=["good morning from litellm"])
 ```
+
+## Async Usage - `aembedding()`
+
+LiteLLM provides an asynchronous version of the `embedding` function called `aembedding`:
+
+```python
+from litellm import aembedding
+import asyncio
+
+async def get_embedding():
+    response = await aembedding(
+        model='text-embedding-ada-002',
+        input=["good morning from litellm"]
+    )
+    return response
+
+response = asyncio.run(get_embedding())
+print(response)
+```
+
 ## Proxy Usage
 
 **NOTE**
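
The new section shows a single awaited call; a natural follow-on (not in this commit) is that `aembedding()` lets you fan several embedding requests out concurrently:

```python
# Sketch, not part of the commit: concurrent embedding calls via
# asyncio.gather, which is where the async variant pays off.
# Assumes OPENAI_API_KEY is set, as in the snippet above.
import asyncio
from litellm import aembedding

async def embed_many(texts):
    tasks = [
        aembedding(model='text-embedding-ada-002', input=[t])
        for t in texts
    ]
    return await asyncio.gather(*tasks)

responses = asyncio.run(embed_many(["good morning", "good night"]))
print(len(responses))  # one response per input text
```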

docs/my-website/docs/index.md

Lines changed: 15 additions & 8 deletions

@@ -7,8 +7,8 @@ https://github.com/BerriAI/litellm
 
 ## **Call 100+ LLMs using the OpenAI Input/Output Format**
 
-- Translate inputs to provider's `completion`, `embedding`, and `image_generation` endpoints
-- [Consistent output](https://docs.litellm.ai/docs/completion/output), text responses will always be available at `['choices'][0]['message']['content']`
+- Translate inputs to provider's endpoints (`/chat/completions`, `/responses`, `/embeddings`, `/images`, `/audio`, `/batches`, and more)
+- [Consistent output](https://docs.litellm.ai/docs/supported_endpoints) - same response format regardless of which provider you use
 - Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) - [Router](https://docs.litellm.ai/docs/routing)
 - Track spend & set budgets per project [LiteLLM Proxy Server](https://docs.litellm.ai/docs/simple_proxy)
 

@@ -245,7 +245,7 @@ response = completion(
 
 </Tabs>
 
-### Response Format (OpenAI Format)
+### Response Format (OpenAI Chat Completions Format)
 
 ```json
 {

@@ -514,15 +514,22 @@ response = completion(
 LiteLLM maps exceptions across all supported providers to the OpenAI exceptions. All our exceptions inherit from OpenAI's exception types, so any error-handling you have for that, should work out of the box with LiteLLM.
 
 ```python
-from openai.error import OpenAIError
+import litellm
 from litellm import completion
+import os
 
 os.environ["ANTHROPIC_API_KEY"] = "bad-key"
 try:
-    # some code
-    completion(model="claude-instant-1", messages=[{"role": "user", "content": "Hey, how's it going?"}])
-except OpenAIError as e:
-    print(e)
+    completion(model="anthropic/claude-instant-1", messages=[{"role": "user", "content": "Hey, how's it going?"}])
+except litellm.AuthenticationError as e:
+    # Thrown when the API key is invalid
+    print(f"Authentication failed: {e}")
+except litellm.RateLimitError as e:
+    # Thrown when you've exceeded your rate limit
+    print(f"Rate limited: {e}")
+except litellm.APIError as e:
+    # Thrown for general API errors
+    print(f"API error: {e}")
 ```
 ### See How LiteLLM Transforms Your Requests
 
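The rewritten example demonstrates litellm's own exception classes; the inheritance claim in the prose ("all our exceptions inherit from OpenAI's exception types") means pre-existing handlers written against the `openai` package keep working. A sketch of that, assuming openai>=1.x where `openai.AuthenticationError` exists:

```python
# Sketch, not part of the commit: a handler written against the openai
# package still catches litellm's mapped exceptions, since
# litellm.AuthenticationError inherits from openai.AuthenticationError.
import os
import openai
from litellm import completion

os.environ["ANTHROPIC_API_KEY"] = "bad-key"
try:
    completion(
        model="anthropic/claude-instant-1",
        messages=[{"role": "user", "content": "Hey, how's it going?"}],
    )
except openai.AuthenticationError as e:
    print(f"Caught via the OpenAI exception type: {e}")
```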
docs/my-website/docs/response_api.md

Lines changed: 32 additions & 0 deletions

@@ -43,6 +43,38 @@ response = litellm.responses(
 print(response)
 ```
 
+#### Response Format (OpenAI Responses API Format)
+
+```json
+{
+  "id": "resp_abc123",
+  "object": "response",
+  "created_at": 1734366691,
+  "status": "completed",
+  "model": "o1-pro-2025-01-30",
+  "output": [
+    {
+      "type": "message",
+      "id": "msg_abc123",
+      "status": "completed",
+      "role": "assistant",
+      "content": [
+        {
+          "type": "output_text",
+          "text": "Once upon a time, a little unicorn named Stardust lived in a magical meadow where flowers sang lullabies. One night, she discovered that her horn could paint dreams across the sky, and she spent the evening creating the most beautiful aurora for all the forest creatures to enjoy. As the animals drifted off to sleep beneath her shimmering lights, Stardust curled up on a cloud of moonbeams, happy to have shared her magic with her friends.",
+          "annotations": []
+        }
+      ]
+    }
+  ],
+  "usage": {
+    "input_tokens": 18,
+    "output_tokens": 98,
+    "total_tokens": 116
+  }
+}
+```
+
 #### Streaming
 ```python showLineNumbers title="OpenAI Streaming Response"
 import litellm
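Given the response shape documented above, pulling the generated text back out looks roughly like this (assuming the SDK returns an object mirroring the JSON; with a plain dict, swap attribute access for key lookups):

```python
# Sketch, not part of the commit: field names taken from the
# JSON example above.
text = response.output[0].content[0].text
print(text)
```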
docs/my-website/sidebars.js

Lines changed: 77 additions & 23 deletions

@@ -118,11 +118,83 @@ const sidebars = {
   ],
   // But you can create a sidebar manually
   tutorialSidebar: [
-    { type: "doc", id: "index" }, // NEW
+    { type: "doc", id: "index", label: "Getting Started" },
 
     {
       type: "category",
-      label: "LiteLLM AI Gateway",
+      label: "LiteLLM Python SDK",
+      items: [
+        {
+          type: "link",
+          label: "Quick Start",
+          href: "/docs/#litellm-python-sdk",
+        },
+        {
+          type: "category",
+          label: "SDK Functions",
+          items: [
+            {
+              type: "doc",
+              id: "completion/input",
+              label: "completion()",
+            },
+            {
+              type: "doc",
+              id: "embedding/supported_embedding",
+              label: "embedding()",
+            },
+            {
+              type: "doc",
+              id: "response_api",
+              label: "responses()",
+            },
+            {
+              type: "doc",
+              id: "text_completion",
+              label: "text_completion()",
+            },
+            {
+              type: "doc",
+              id: "image_generation",
+              label: "image_generation()",
+            },
+            {
+              type: "doc",
+              id: "audio_transcription",
+              label: "transcription()",
+            },
+            {
+              type: "doc",
+              id: "text_to_speech",
+              label: "speech()",
+            },
+            {
+              type: "link",
+              label: "All Supported Endpoints →",
+              href: "/docs/supported_endpoints",
+            },
+          ],
+        },
+        {
+          type: "category",
+          label: "Configuration",
+          items: [
+            "set_keys",
+            "caching/all_caches",
+          ],
+        },
+        "completion/token_usage",
+        "exception_mapping",
+        {
+          type: "category",
+          label: "LangChain, LlamaIndex, Instructor",
+          items: ["langchain/langchain", "tutorials/instructor"],
+        }
+      ],
+    },
+    {
+      type: "category",
+      label: "LiteLLM AI Gateway (Proxy)",
       link: {
         type: "generated-index",
         title: "LiteLLM AI Gateway (LLM Proxy)",

@@ -696,6 +768,7 @@ const sidebars = {
       type: "category",
       label: "Guides",
       items: [
+        "budget_manager",
        "completion/computer_use",
        "completion/web_search",
        "completion/web_fetch",

@@ -748,27 +821,6 @@ const sidebars = {
        "wildcard_routing"
       ],
     },
-    {
-      type: "category",
-      label: "LiteLLM Python SDK",
-      items: [
-        "set_keys",
-        "budget_manager",
-        "caching/all_caches",
-        "completion/token_usage",
-        "sdk_custom_pricing",
-        "embedding/async_embedding",
-        "embedding/moderation",
-        "migration",
-        "sdk_custom_pricing",
-        {
-          type: "category",
-          label: "LangChain, LlamaIndex, Instructor Integration",
-          items: ["langchain/langchain", "tutorials/instructor"],
-        }
-      ],
-    },
-
     {
       type: "category",
       label: "Load Testing",

@@ -838,6 +890,8 @@ const sidebars = {
       type: "category",
       label: "Extras",
       items: [
+        "sdk_custom_pricing",
+        "migration",
        "data_security",
        "data_retention",
        "proxy/security_encryption_faq",
