
docs: update llm setting guidance and "REASONING_THINK_RM" description #943


Merged · 2 commits · Jun 11, 2025
31 changes: 19 additions & 12 deletions README.md
@@ -151,15 +151,32 @@ Ensure the current user can run Docker commands **without using sudo**. You can
- json_mode
- embedding query

- For example: If you are using the `OpenAI API`, you have to configure your GPT model in the `.env` file like this.
You can set your Chat Model and Embedding Model in the following ways:

- **Using LiteLLM (Recommended)**: We now support LiteLLM as a backend for integration with multiple LLM providers. You can configure as follows:
```bash
cat << EOF > .env
BACKEND=rdagent.oai.backend.LiteLLMAPIBackend
# Set to any model supported by LiteLLM.
CHAT_MODEL=gpt-4o
EMBEDDING_MODEL=text-embedding-3-small
# Then set the environment variables required by your chosen model, following LiteLLM's conventions.
OPENAI_API_KEY=<replace_with_your_openai_api_key>
EOF
```
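With LiteLLM, the `CHAT_MODEL` value typically encodes the provider as a `provider/model` prefix (e.g. `deepseek/deepseek-chat`), while a bare name such as `gpt-4o` defaults to OpenAI. The sketch below illustrates that naming convention; it is a simplified illustration, not LiteLLM's actual routing code.

```python
# Illustrative sketch (NOT LiteLLM's internals): how a "provider/model"
# name like the CHAT_MODEL values above can be split for routing.
def split_model_name(name: str) -> tuple[str, str]:
    """Split 'deepseek/deepseek-chat' into ('deepseek', 'deepseek-chat').

    Names without a provider prefix (e.g. 'gpt-4o') default to 'openai'.
    """
    provider, sep, model = name.partition("/")
    if not sep:  # no '/' found: bare model name
        return "openai", name
    return provider, model

print(split_model_name("gpt-4o"))                  # ('openai', 'gpt-4o')
print(split_model_name("deepseek/deepseek-chat"))  # ('deepseek', 'deepseek-chat')
```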
Notice: If you are using reasoning models that include thought processes in their responses (such as \<think> tags), you need to set the following environment variable:
```bash
REASONING_THINK_RM=True
```
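Conceptually, `REASONING_THINK_RM` removes the model's thought-process block so only the final answer is passed downstream. A minimal sketch of that behavior, assuming the thought process is wrapped in `<think>` tags (the actual RD-Agent implementation may differ):

```python
import re

def strip_think(text: str) -> str:
    """Remove <think>...</think> blocks from a model response.

    Rough sketch of what REASONING_THINK_RM=True enables; illustrative only.
    """
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

raw = "<think>Let me reason step by step...</think>The answer is 42."
print(strip_think(raw))  # The answer is 42.
```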

- **Using OpenAI API Directly**: If you are using the `OpenAI API` without the LiteLLM backend, you can configure your GPT model in the `.env` file like this.
```bash
cat << EOF > .env
OPENAI_API_KEY=<replace_with_your_openai_api_key>
# EMBEDDING_MODEL=text-embedding-3-small
CHAT_MODEL=gpt-4-turbo
EOF
```
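The `.env` file written above is a plain `KEY=VALUE` file, with `#` marking commented-out lines (like the `EMBEDDING_MODEL` line). A minimal sketch of how such a file is parsed (libraries like python-dotenv do this more robustly):

```python
def parse_env(text: str) -> dict[str, str]:
    """Minimal .env parser: KEY=VALUE lines; '#' lines and blanks are skipped."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = (
    "OPENAI_API_KEY=<replace_with_your_openai_api_key>\n"
    "# EMBEDDING_MODEL=text-embedding-3-small\n"
    "CHAT_MODEL=gpt-4-turbo\n"
)
print(parse_env(sample))
# {'OPENAI_API_KEY': '<replace_with_your_openai_api_key>', 'CHAT_MODEL': 'gpt-4-turbo'}
```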
- However, not every API service supports these features by default. For example, with `Azure OpenAI` you have to configure your GPT model in the `.env` file like this.
- **Using Azure OpenAI Directly**: You can configure your Azure GPT model in the `.env` file like this.
> **Contributor:** I think we can remove these from the home page and only leave a link to the docs. And tell user it is the deprecated API backend

> **Contributor Author:** Removed
```bash
cat << EOF > .env
USE_AZURE=True
@@ -174,16 +191,6 @@ Ensure the current user can run Docker commands **without using sudo**. You can
EOF
```

- We now support LiteLLM as a backend for integration with multiple LLM providers. If you use LiteLLM Backend to use models, you can configure as follows:
```bash
cat << EOF > .env
BACKEND=rdagent.oai.backend.LiteLLMAPIBackend
# It can be modified to any model supported by LiteLLM.
CHAT_MODEL=gpt-4o
EMBEDDING_MODEL=text-embedding-3-small
# The backend api_key fully follows the convention of litellm.
OPENAI_API_KEY=<replace_with_your_openai_api_key>
EOF
```

- For more configuration information, please refer to the [documentation](https://rdagent.readthedocs.io/en/latest/installation_and_configuration.html).

6 changes: 6 additions & 0 deletions docs/installation_and_configuration.rst
@@ -51,6 +51,12 @@ For example, if you are using a DeepSeek model, you need to set as follows:
CHAT_MODEL=deepseek/deepseek-chat
DEEPSEEK_API_KEY=<replace_with_your_deepseek_api_key>

In addition, when you are using reasoning models, the response might include the thought process. In this case, you need to set the following environment variable:

.. code-block:: Properties

REASONING_THINK_RM=True

For more details on LiteLLM requirements, refer to the `official LiteLLM documentation <https://docs.litellm.ai/docs>`_.

