
VS Code extension does not work with custom model provider #4558

@nmartorell

Description

What version of the VS Code extension are you using?

0.4.15

Which IDE are you using?

VS Code Server

What platform is your computer?

Linux 6.1.141-165.249.amzn2023.x86_64 x86_64 x86_64

What steps can reproduce the bug?

I set up Codex to work with a custom model provider. My config.toml is essentially:

```toml
profile = "o4-custom"
sandbox_mode = "danger-full-access"

[model_providers.custom-provider]
name = "LLM Gateway"
base_url = "<openAI-compliant gateway URL>"
wire_api = "chat"

[profiles."o4-custom"]
model = "azureopenai:azure-openai:o4-mini"
model_provider = "custom-provider"
model_context_window = 20000
model_max_output_tokens = 5000
```

When I run codex from the CLI, this works as expected; however, when I use the VS Code extension, it only works if I continue a conversation that I started from the CLI.
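
For reference, the working CLI run is roughly the following; the top-level `profile` key in ~/.codex/config.toml already selects the profile, and the explicit `--profile` flag is shown only to make the selection obvious (assuming I have the flag name right):

```shell
# Launch Codex with the custom profile; relying on the top-level
# `profile` key in ~/.codex/config.toml behaves the same way.
codex --profile o4-custom
```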

If I start a new conversation from the VS Code extension, the extension appears to send the LLM API calls through "custom-provider", but it ignores the configured "model" name and instead defaults to GPT-5. The error message shown in the VS Code extension UI is:

```
unexpected status 400 Bad Request: data: {"object":"chat.completion","error":{"message":"Invalid LLM id: gpt-5-codex"}}
```
