docs: update explanation for separate config use in litellm #958
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Merged
Commits (4):

- `8cfdf3f` docs: update explanation for separate config use in litellm (Hoder-zyf)
- `43120b6` docs: update default backend to `rdagent.oai.backend.LiteLLMAPIBackend` (Hoder-zyf)
- `57c187f` docs: update .rst format (Hoder-zyf)
- `7be9bf5` Update installation_and_configuration.rst (Hoder-zyf)
File 1 (Markdown documentation), hunk `@@ -153,16 +153,40 @@ Ensure the current user can run Docker commands **without using sudo**. You can`

You can set your Chat Model and Embedding Model in the following ways:

- **Using LiteLLM (Recommended)**: We now support LiteLLM as a backend for integration with multiple LLM providers. You can configure it in two ways:

Review comment: "I think putting Unified API before Separate API would be more user-friendly."

**Option 1: Separate API bases for Chat and Embedding models**

```bash
cat << EOF > .env
BACKEND=rdagent.oai.backend.LiteLLMAPIBackend
# Set to any model supported by LiteLLM.
# Configure separate API bases for chat and embedding.

# CHAT MODEL:
CHAT_MODEL=gpt-4o
OPENAI_API_BASE=<your_chat_api_base>
OPENAI_API_KEY=<replace_with_your_openai_api_key>

# EMBEDDING MODEL:
# Take siliconflow as an example; you can use other providers.
# Note: the embedding model requires the litellm_proxy prefix.
EMBEDDING_MODEL=litellm_proxy/BAAI/bge-large-en-v1.5
LITELLM_PROXY_API_KEY=<replace_with_your_siliconflow_api_key>
LITELLM_PROXY_API_BASE=https://api.siliconflow.cn/v1
EOF
```

**Option 2: Unified API base for both models**

```bash
cat << EOF > .env
BACKEND=rdagent.oai.backend.LiteLLMAPIBackend
# Set to any model supported by LiteLLM.
CHAT_MODEL=gpt-4o
EMBEDDING_MODEL=text-embedding-3-small
# Configure unified API base.
OPENAI_API_BASE=<your_unified_api_base>
OPENAI_API_KEY=<replace_with_your_openai_api_key>
EOF
```

Notice: If you are using reasoning models that include thought processes in their responses (such as `<think>` tags), you need to set the following environment variable:

```bash
REASONING_THINK_RM=True
```
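The `REASONING_THINK_RM` notice above says thought processes are removed from reasoning-model responses. As an illustration only, here is a minimal sketch of what stripping `<think>` blocks could look like; the helper name is hypothetical and this is not RD-Agent's actual implementation:

```python
import re

def remove_think_blocks(response: str) -> str:
    """Drop <think>...</think> reasoning traces from a model response.

    Hypothetical helper sketching what REASONING_THINK_RM implies;
    not RD-Agent's actual code.
    """
    # DOTALL lets the pattern span multi-line reasoning traces.
    return re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL).strip()

print(remove_think_blocks("<think>step 1... step 2...</think>The answer is 42."))
# → The answer is 42.
```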
File 2 (`installation_and_configuration.rst`), hunk `@@ -18,15 +18,40 @@ LiteLLM Backend Configuration`

Please create a `.env` file in the root directory of the project and add environment variables.

We now support LiteLLM as a backend for integration with multiple LLM providers. You can configure it in two ways:

Option 1: Separate API bases for Chat and Embedding models
----------------------------------------------------------

.. code-block:: Properties

    BACKEND=rdagent.oai.backend.LiteLLMAPIBackend
    # Set to any model supported by LiteLLM.

    # CHAT MODEL:
    CHAT_MODEL=gpt-4o
    OPENAI_API_BASE=<your_chat_api_base>
    OPENAI_API_KEY=<replace_with_your_openai_api_key>

    # EMBEDDING MODEL:
    # Take siliconflow as an example; you can use other providers.
    # Note: the embedding model requires the litellm_proxy prefix.
    EMBEDDING_MODEL=litellm_proxy/BAAI/bge-large-en-v1.5
    LITELLM_PROXY_API_KEY=<replace_with_your_siliconflow_api_key>
    LITELLM_PROXY_API_BASE=https://api.siliconflow.cn/v1

Option 2: Unified API base for both models
------------------------------------------

Review comment: "Switch"

.. code-block:: Properties

    BACKEND=rdagent.oai.backend.LiteLLMAPIBackend
    # Set to any model supported by LiteLLM.
    CHAT_MODEL=gpt-4o
    EMBEDDING_MODEL=text-embedding-3-small
    # Configure unified API base.
    OPENAI_API_BASE=<your_unified_api_base>
    OPENAI_API_KEY=<replace_with_your_openai_api_key>

Hunk `@@ -37,6 +62,14 @@ Necessary parameters include:`

- `EMBEDDING_MODEL`: The model name of the embedding model.

- `OPENAI_API_BASE`: The base URL of the API. It is used for both the chat and embedding models if `EMBEDDING_MODEL` does not start with `litellm_proxy/`; otherwise it is used for `CHAT_MODEL` only.

- `LITELLM_PROXY_API_KEY`: The API key used for the embedding model when `EMBEDDING_MODEL` starts with `litellm_proxy/`.

- `LITELLM_PROXY_API_BASE`: The base URL used for the embedding model when `EMBEDDING_MODEL` starts with `litellm_proxy/`.

The `CHAT_MODEL` and `EMBEDDING_MODEL` parameters will be passed into LiteLLM's completion function. Therefore, when using models from different providers, first review LiteLLM's interface configuration: the model names must match those that LiteLLM accepts.
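The `litellm_proxy/` prefix rule in the parameter list above can be summarized in code. This is a hedged sketch of the routing convention as documented; the function name and dict-based input are illustrative assumptions, not RD-Agent's actual API:

```python
def resolve_embedding_endpoint(env: dict) -> tuple[str, str]:
    """Return (api_base, api_key) to use for the embedding model.

    Illustrative sketch of the litellm_proxy/ prefix convention
    described in the docs; not RD-Agent's actual implementation.
    """
    if env["EMBEDDING_MODEL"].startswith("litellm_proxy/"):
        # Option 1: a dedicated proxy endpoint serves the embedding model.
        return env["LITELLM_PROXY_API_BASE"], env["LITELLM_PROXY_API_KEY"]
    # Option 2: the unified OpenAI-style base serves both models.
    return env["OPENAI_API_BASE"], env["OPENAI_API_KEY"]
```

For example, with `EMBEDDING_MODEL=litellm_proxy/BAAI/bge-large-en-v1.5` the proxy base and key are selected, while `EMBEDDING_MODEL=text-embedding-3-small` falls through to `OPENAI_API_BASE`.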
Review comment: "Remove these options"