
[ENHANCEMENT] First-class Azure AI Foundry support in Anthropic provider #9940

@ClemCreator

Description


Problem (one or two sentences)

Right now, the Anthropic provider in Roo Code supports:

  • Anthropic API key
  • Optional “Use custom base URL”

Docs: https://docs.roocode.com/providers/anthropic

With Azure AI Foundry / Microsoft Foundry, Anthropic’s Claude models (Sonnet 4.5, Opus 4.5, Haiku 4.5, etc.) can be deployed behind an Anthropic-compatible endpoint, for example:

  • Base URL: https://<resource-name>.services.ai.azure.com/anthropic
  • Messages endpoint: https://<resource-name>.services.ai.azure.com/anthropic/v1/messages
  • Auth via x-api-key or Microsoft Entra ID
    (See Microsoft docs for “Deploy and use Claude models in Microsoft Foundry”.)
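
As an illustration of the endpoint shape above (a minimal sketch, not Roo Code internals — the helper names and the resource name are placeholders), building the Messages URL and headers for an Azure-hosted Anthropic endpoint could look like:

```typescript
// Sketch: constructing an Anthropic Messages request against an
// Azure AI Foundry resource. Names here are illustrative.

type FoundryAuth =
  | { kind: "api-key"; key: string }   // sent as x-api-key
  | { kind: "entra"; token: string };  // sent as Authorization: Bearer <token>

function foundryMessagesUrl(baseUrl: string): string {
  // Base URL is e.g. https://<resource-name>.services.ai.azure.com/anthropic;
  // the Messages endpoint is <base>/v1/messages.
  return `${baseUrl.replace(/\/+$/, "")}/v1/messages`;
}

function foundryHeaders(auth: FoundryAuth): Record<string, string> {
  const headers: Record<string, string> = {
    "content-type": "application/json",
    // Same version header the Anthropic API itself expects.
    "anthropic-version": "2023-06-01",
  };
  if (auth.kind === "api-key") {
    headers["x-api-key"] = auth.key;
  } else {
    headers["authorization"] = `Bearer ${auth.token}`;
  }
  return headers;
}
```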

When I configure Roo Code like this:

  1. Provider: Anthropic
  2. Check “Use custom base URL”
  3. Base URL: https://<resource-name>.services.ai.azure.com/anthropic
  4. API key: my Azure Foundry API key
  5. Model: select Claude Sonnet 4.5 in the model dropdown

…I can get something working with a Claude Sonnet 4.5 deployment.

However, I cannot get a Claude Opus 4.5 deployment to work, because of the model name:

  • In Azure Foundry, the model field must match my deployment name, e.g. claude-opus-4-5.
  • In Roo Code, the Anthropic model list uses a different id (including a date suffix / hardcoded Anthropic model id).
  • There is no way to override the model name when using the Anthropic provider, only the base URL.

So I’m blocked from using Claude Opus 4.5 hosted on Azure AI Foundry via the Anthropic provider, even though the endpoint is Anthropic-compatible.

There is a related bug when trying to hit Azure Anthropic via the OpenAI Compatible provider: Roo Code detects the endpoint as Azure AI Inference and appends models/chat/completions, and also uses Authorization: Bearer instead of x-api-key, which yields 401 errors (see issue #9467).
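
The misdetection described in #9467 could be avoided by also checking the path, not just the hostname. This is a hypothetical illustration (the function name and heuristic are made up, not Roo Code's actual code):

```typescript
// Hypothetical fix for the heuristic from #9467: a check that keys only
// on the Azure hostname treats an Anthropic-compatible endpoint as
// Azure AI Inference. Also checking for the /anthropic path prefix
// keeps the models/chat/completions suffix and Bearer auth away from
// Anthropic-compatible endpoints.

function looksLikeAzureAiInference(baseUrl: string): boolean {
  const url = new URL(baseUrl);
  const isAzureHost = url.hostname.endsWith(".services.ai.azure.com");
  const isAnthropicPath = url.pathname
    .replace(/\/+$/, "")
    .endsWith("/anthropic");
  // Only OpenAI-style Azure AI Inference endpoints should get the
  // OpenAI-compatible treatment, never /anthropic endpoints.
  return isAzureHost && !isAnthropicPath;
}
```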

Context (who is affected and when)

In the Anthropic provider settings, add dedicated support for Azure AI Foundry / Microsoft Foundry, something like:

  • Use Azure AI Foundry (AnthropicFoundry)

When this checkbox is enabled:

  1. Connection fields

    • Base URL, e.g.
      https://<resource-name>.services.ai.azure.com/anthropic
    • Auth:
      • API key (sent as x-api-key), and/or
      • Microsoft Entra ID (sent as Authorization: Bearer <token>)
  2. Model list sourced from Foundry

    Instead of the static Anthropic model id list, Roo Code would query the Foundry resource for available Claude deployments and show those deployment names in the Model dropdown. Conceptually, this is the list of deployments the user created for:

    • claude-sonnet-4-5
    • claude-opus-4-5
    • claude-haiku-4-5
    • claude-opus-4-1
    • etc.

    The selected item would then be used as the model field in the Anthropic Messages API request.

  3. Request wiring

    • Use path /anthropic/v1/messages (no models/chat/completions suffix).
    • Use x-api-key for key-based auth when targeting Azure Anthropic.
    • Keep the Anthropic-compatible headers (anthropic-version, etc.) as in the Microsoft Learn examples.
    • Do not apply the _isAzureAiInference OpenAI-style heuristic to these endpoints (see [BUG] Unable to use Anthropic via Foundry #9467).
  4. Model name behavior

    • The model field should be exactly the Foundry deployment name (user-defined), not a hardcoded Anthropic model id with a date suffix.
    • This avoids brittle coupling between Roo Code’s internal model list and the names configured in Foundry.
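
The model-name behavior above can be sketched as follows (assumed helper and type names; this is illustrative, not a proposed implementation):

```typescript
// Sketch of the desired model-name pass-through: the Foundry
// deployment name goes on the wire verbatim as `model`, with no
// remapping to a dated Anthropic model id.

interface FoundryMessagesBody {
  model: string; // user-defined deployment name, e.g. "claude-opus-4-5"
  max_tokens: number;
  messages: { role: "user" | "assistant"; content: string }[];
}

function buildFoundryRequestBody(
  deploymentName: string,
  prompt: string,
): FoundryMessagesBody {
  return {
    // Whatever the user deployed in Foundry is what gets sent,
    // avoiding coupling to Roo Code's internal model list.
    model: deploymentName,
    max_tokens: 1024,
    messages: [{ role: "user", content: prompt }],
  };
}
```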

Desired behavior (conceptual, not technical)

No response

Constraints / preferences (optional)

  • Azure AI Foundry / Microsoft Foundry is now one of the main ways to access Anthropic’s Claude models (including Claude Opus 4.5 and Sonnet 4.5) in enterprise environments, with quotas, governance, etc. managed in Azure.

  • Many enterprise users will have Claude on Foundry as their only allowed path to Anthropic models.

  • Having first-class Azure AI Foundry support in the Anthropic provider allows:

    • Provider: Anthropic
    • Check: “Use Azure AI Foundry”
    • Paste endpoint + key
    • Pick any deployed Claude model from a dropdown
    • Start using Roo Code immediately

Without this, we have to rely on workarounds (the OpenAI Compatible provider with special-cased logic, or guessing model ids), which break easily when model names or endpoints differ.

Request checklist

  • I've searched existing Issues and Discussions for duplicates
  • This describes a specific problem with clear context and impact

Roo Code Task Links (optional)

No response

Acceptance criteria (optional)

No response

Proposed approach (optional)

No response

Trade-offs / risks (optional)

No response

Metadata


Assignees

No one assigned

Labels

Enhancement (New feature or request), Issue/PR - Triage (New issue. Needs quick review to confirm validity and assign labels.)

Type

No type

Projects

Status

Triage

Milestone

No milestone

Relationships

None yet

Development

No branches or pull requests
