Problem (one or two sentences)
Right now, the Anthropic provider in Roo Code supports:
- Anthropic API key
- Optional “Use custom base URL”
Docs: https://docs.roocode.com/providers/anthropic
With Azure AI Foundry / Microsoft Foundry, Anthropic’s Claude models (Sonnet 4.5, Opus 4.5, Haiku 4.5, etc.) can be deployed behind an Anthropic-compatible endpoint, for example:
- Base URL: `https://<resource-name>.services.ai.azure.com/anthropic`
- Messages endpoint: `https://<resource-name>.services.ai.azure.com/anthropic/v1/messages`
- Auth via `x-api-key` or Microsoft Entra ID

(See Microsoft docs for “Deploy and use Claude models in Microsoft Foundry”.)
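For concreteness, here is a minimal sketch of a direct request to such an endpoint (the resource and deployment names are placeholders; the headers follow the standard Anthropic Messages API conventions):

```typescript
// Sketch: direct call to the Anthropic-compatible Foundry endpoint.
// <resource-name> and the model value are placeholders for user-specific names.
const baseUrl = "https://<resource-name>.services.ai.azure.com/anthropic";

const response = await fetch(`${baseUrl}/v1/messages`, {
  method: "POST",
  headers: {
    "x-api-key": process.env.AZURE_FOUNDRY_API_KEY ?? "",
    "anthropic-version": "2023-06-01",
    "content-type": "application/json",
  },
  body: JSON.stringify({
    model: "claude-sonnet-4-5", // the Foundry deployment name
    max_tokens: 1024,
    messages: [{ role: "user", content: "Hello" }],
  }),
});

console.log(await response.json());
```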
When I configure Roo Code like this:
- Provider: `Anthropic`
- Check “Use custom base URL”
- Base URL: `https://<resource-name>.services.ai.azure.com/anthropic`
- API key: my Azure Foundry API key
- Model: select Claude Sonnet 4.5 in the model dropdown

…I can get something working with a Claude 4.5 Sonnet deployment.
However, I cannot get a Claude 4.5 Opus deployment to work, because of the model name:
- In Azure Foundry, the `model` field must match my deployment name, e.g. `claude-opus-4-5`.
- In Roo Code, the Anthropic model list uses a different id (a hardcoded Anthropic model id with a date suffix).
- There is no way to override the model name when using the Anthropic provider, only the base URL.

So I’m blocked from using Claude 4.5 Opus hosted on Azure AI Foundry via the Anthropic provider, even though the endpoint is Anthropic-compatible.
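To make the mismatch concrete (the dated id below only illustrates the hardcoded form; the exact string depends on Roo Code’s model list):

```typescript
// What Roo Code sends today: a hardcoded Anthropic model id with a date suffix
// (illustrative value, not necessarily the exact id in Roo Code's list).
const rooCodeModelId = "claude-opus-4-5-20251101";

// What the Foundry endpoint expects: the user-defined deployment name.
const foundryDeploymentName = "claude-opus-4-5";

// Foundry resolves `model` against deployment names, so the dated id fails:
// there is no deployment called "claude-opus-4-5-20251101" on the resource.
```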
There is a related bug when trying to hit Azure Anthropic via the OpenAI Compatible provider: Roo Code detects the endpoint as Azure AI Inference, appends `models/chat/completions` to the path, and uses `Authorization: Bearer` instead of `x-api-key`, which yields 401 errors (see issue #9467).
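Schematically, per the report in #9467:

```typescript
// What the OpenAI Compatible provider currently produces for this endpoint:
//   POST https://<resource-name>.services.ai.azure.com/anthropic/models/chat/completions
//   Authorization: Bearer <key>      -> 401 from the Azure Anthropic endpoint
//
// What the endpoint actually expects:
//   POST https://<resource-name>.services.ai.azure.com/anthropic/v1/messages
//   x-api-key: <key>
```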
Desired behavior (conceptual, not technical)
In the Anthropic provider settings, add dedicated support for Azure AI Foundry / Microsoft Foundry, something like:

- Use Azure AI Foundry (`AnthropicFoundry`)

When this checkbox is enabled:

- **Connection fields**
  - Base URL, e.g. `https://<resource-name>.services.ai.azure.com/anthropic`
  - Auth:
    - API key (sent as `x-api-key`), and/or
    - Microsoft Entra ID (sent as `Authorization: Bearer <token>`)
- **Model list sourced from Foundry** (a query sketch follows this list)
  - Instead of the static Anthropic model id list, Roo Code would query the Foundry resource for available Claude deployments and show those deployment names in the Model dropdown. Conceptually, this is the list of deployments the user created for:
    - `claude-sonnet-4-5`
    - `claude-opus-4-5`
    - `claude-haiku-4-5`
    - `claude-opus-4-1`
    - etc.
  - The selected item would then be used as the `model` field in the Anthropic Messages API request.
- **Request wiring** (a request sketch follows this list)
  - Use path `/anthropic/v1/messages` (no `models/chat/completions` suffix).
  - Use `x-api-key` for key-based auth when targeting Azure Anthropic.
  - Keep the Anthropic-compatible headers (`anthropic-version`, etc.) as in the Microsoft Learn examples.
  - Do not apply the `_isAzureAiInference` OpenAI-style heuristic to these endpoints (see [BUG] Unable to use Anthropic via Foundry #9467).
- **Model name behavior**
  - The `model` field should be exactly the Foundry deployment name (user-defined), not a hardcoded Anthropic model id with a date suffix.
  - This avoids brittle coupling between Roo Code’s internal model list and the names configured in Foundry.
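A minimal sketch of the intended request wiring, assuming a `fetch`-capable runtime; `@azure/identity` is used for the Entra ID path, and the token scope plus all names here (`foundryBaseUrl`, `deploymentName`, the env var) are assumptions, not Roo Code internals:

```typescript
import { DefaultAzureCredential } from "@azure/identity";

// Hypothetical provider settings.
const foundryBaseUrl = "https://<resource-name>.services.ai.azure.com/anthropic";
const deploymentName = "claude-opus-4-5"; // user-defined Foundry deployment name
const apiKey = process.env.AZURE_FOUNDRY_API_KEY;

async function buildHeaders(): Promise<Record<string, string>> {
  const headers: Record<string, string> = {
    "anthropic-version": "2023-06-01",
    "content-type": "application/json",
  };
  if (apiKey) {
    // Key-based auth: Azure Anthropic expects x-api-key, not Authorization: Bearer.
    headers["x-api-key"] = apiKey;
  } else {
    // Entra ID auth. The scope is an assumption based on other Azure AI services.
    const credential = new DefaultAzureCredential();
    const token = await credential.getToken("https://cognitiveservices.azure.com/.default");
    headers["authorization"] = `Bearer ${token.token}`;
  }
  return headers;
}

async function sendMessage(userText: string) {
  // Exactly /v1/messages under the /anthropic base URL: no
  // models/chat/completions suffix, no _isAzureAiInference heuristic.
  const response = await fetch(`${foundryBaseUrl}/v1/messages`, {
    method: "POST",
    headers: await buildHeaders(),
    body: JSON.stringify({
      model: deploymentName, // the deployment name, verbatim
      max_tokens: 1024,
      messages: [{ role: "user", content: userText }],
    }),
  });
  if (!response.ok) {
    throw new Error(`Foundry Anthropic request failed: ${response.status}`);
  }
  return response.json();
}
```

For the deployment-sourced model dropdown, a sketch against the Azure management plane; the route, `api-version`, and response shape are assumptions to verify against the Azure REST reference for `Microsoft.CognitiveServices` deployments:

```typescript
import { DefaultAzureCredential } from "@azure/identity";

// Hypothetical identifiers for the Foundry resource.
const subscriptionId = "<subscription-id>";
const resourceGroup = "<resource-group>";
const accountName = "<resource-name>";

async function listClaudeDeployments(): Promise<string[]> {
  const credential = new DefaultAzureCredential();
  const token = await credential.getToken("https://management.azure.com/.default");
  const url =
    `https://management.azure.com/subscriptions/${subscriptionId}` +
    `/resourceGroups/${resourceGroup}/providers/Microsoft.CognitiveServices` +
    `/accounts/${accountName}/deployments?api-version=2023-05-01`;
  const response = await fetch(url, {
    headers: { authorization: `Bearer ${token.token}` },
  });
  const body = await response.json();
  // Keep Claude deployments only; surface the deployment *names* in the dropdown.
  return body.value
    .filter((d: any) => d.properties?.model?.name?.toLowerCase().startsWith("claude"))
    .map((d: any) => d.name);
}
```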
Context (who is affected and when)
- Azure AI Foundry / Microsoft Foundry is now one of the main ways to access Anthropic’s Claude models (including Claude Opus 4.5 and Sonnet 4.5) in enterprise environments, with quotas, governance, etc. managed in Azure.
- Many enterprise users will have Claude on Foundry as their only allowed path to Anthropic models.
- Having first-class Azure AI Foundry support in the Anthropic provider allows:
  - Provider: `Anthropic`
  - Check: “Use Azure AI Foundry”
  - Paste endpoint + key
  - Pick any deployed Claude model from a dropdown
  - Start using Roo Code immediately

Without this, we have to rely on workarounds (OpenAI Compatible provider with special-cased logic, or trying to guess model ids), which breaks easily when model names or endpoints differ.
Request checklist
- I've searched existing Issues and Discussions for duplicates
- This describes a specific problem with clear context and impact
Roo Code Task Links (optional)
No response
Acceptance criteria (optional)
No response
Proposed approach (optional)
No response
Trade-offs / risks (optional)
No response