You can set your Chat Model and Embedding Model in the following ways:
- **Using LiteLLM (Recommended)**: We now support LiteLLM as a backend for integrating with multiple LLM providers. You can configure it as follows:

```bash
cat <<EOF > .env
BACKEND=rdagent.oai.backend.LiteLLMAPIBackend
# Set CHAT_MODEL to any model supported by LiteLLM.
CHAT_MODEL=gpt-4o
EMBEDDING_MODEL=text-embedding-3-small
# Then set the environment variables required by your chosen model, following the LiteLLM conventions.
OPENAI_API_KEY=<replace_with_your_openai_api_key>
EOF
```
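Because LiteLLM routes requests by the `provider/model` prefix of the model name, switching providers is usually just a matter of changing `CHAT_MODEL` and supplying the matching API key. The following is a hypothetical sketch (the DeepSeek model name and `DEEPSEEK_API_KEY` variable follow LiteLLM's conventions, and are not taken from this README):

```shell
# Hypothetical example: DeepSeek for chat, OpenAI for embeddings.
cat <<EOF > .env
BACKEND=rdagent.oai.backend.LiteLLMAPIBackend
CHAT_MODEL=deepseek/deepseek-chat
DEEPSEEK_API_KEY=<replace_with_your_deepseek_api_key>
EMBEDDING_MODEL=text-embedding-3-small
OPENAI_API_KEY=<replace_with_your_openai_api_key>
EOF
```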
Notice: If you are using reasoning models that include thought processes in their responses (such as \<think> tags), you need to set the following environment variable:

```bash
REASONING_THINK_RM=True
```
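Conceptually, this flag strips the model's thought process from the response before it is parsed. A minimal sketch of the idea (an illustration only, not RD-Agent's actual implementation):

```python
import re

def strip_think(response: str) -> str:
    """Remove <think>...</think> blocks that reasoning models prepend to answers."""
    return re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL).strip()

raw = "<think>First, recall the configured model...</think>The answer is 42."
print(strip_think(raw))  # -> The answer is 42.
```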
- You can also use a deprecated backend if you only use `OpenAI API` or `Azure OpenAI` directly. For this deprecated setting and more configuration information, please refer to the [documentation](https://rdagent.readthedocs.io/en/latest/installation_and_configuration.html).
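For reference, the deprecated direct `OpenAI API` setup looked like this (taken from an earlier version of this section; prefer the LiteLLM backend above):

```shell
cat <<EOF > .env
OPENAI_API_KEY=<replace_with_your_openai_api_key>
# EMBEDDING_MODEL=text-embedding-3-small
CHAT_MODEL=gpt-4-turbo
EOF
```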
For more details on LiteLLM requirements, refer to the [official LiteLLM documentation](https://docs.litellm.ai/docs).