
[Bug]: Models are exposed on the proxy API not just by their configured name, but also by their underlying model name #14257

@Dobiasd

Description


What happened?

Here is a minimal example showing it:

Run the proxy with Docker:

litellm_config.yaml

model_list:
  - model_name: foo
    litellm_params:
      model: gpt-4.1
      api_key: ...
general_settings:
  master_key: sk-1234
Start the container:

docker run \
    -v $(pwd)/litellm_config.yaml:/app/config.yaml \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:litellm_stable_release_branch-v1.75.8-stable \
    --config /app/config.yaml

Check which models are available:

curl 'http://0.0.0.0:4000/v1/models' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer sk-1234' | jq .
{
  "data": [
    {
      "id": "foo",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    }
  ],
  "object": "list"
}
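For scripting, the set of exposed model ids can be read from the `/v1/models` response. A minimal sketch using only the standard library; the payload below mirrors the response above:

```python
import json

# Sample payload mirroring the /v1/models response above.
models_response = json.loads("""
{
  "data": [
    {"id": "foo", "object": "model", "created": 1677610602, "owned_by": "openai"}
  ],
  "object": "list"
}
""")

# Collect the model ids the proxy claims to expose.
exposed_models = [m["id"] for m in models_response["data"]]
print(exposed_models)  # ['foo'] -- only the alias is listed, not gpt-4.1
```

Note that `gpt-4.1` does not appear in this list, which is why the request below is expected to fail.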

This request is expected to succeed:

curl -X POST 'http://0.0.0.0:4000/chat/completions' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer sk-1234' \
-d '{
    "model": "foo",
    "messages": [{"role": "user", "content": "Hi."}]
}'

The following request should not succeed, because no model named gpt-4.1 is exposed in the config.

curl -X POST 'http://0.0.0.0:4000/chat/completions' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer sk-1234' \
-d '{
    "model": "gpt-4.1",
    "messages": [{"role": "user", "content": "Hi."}]
}'

However, it succeeds, instead of failing with Invalid model name passed in or similar.
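The expected behavior can be expressed as a guard that compares the requested model against the ids returned by `/v1/models`. This is a hypothetical sketch of the check the report expects, not an existing LiteLLM feature:

```python
# Hypothetical validation (not LiteLLM code): only models whose id appears
# in /v1/models should be accepted by /chat/completions.
def validate_model(requested: str, exposed: set[str]) -> None:
    if requested not in exposed:
        raise ValueError(f"Invalid model name passed in: {requested!r}")

exposed = {"foo"}  # ids returned by /v1/models in the example above

validate_model("foo", exposed)  # accepted, as observed

try:
    # This is the rejection the report expects for the underlying model name.
    validate_model("gpt-4.1", exposed)
except ValueError as e:
    print(e)  # Invalid model name passed in: 'gpt-4.1'
```

In the actual proxy, both calls succeed, which is the behavior this issue questions.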

Is this a bug or a feature?

Relevant log output

Are you a ML Ops Team?

Yes

What LiteLLM version are you on?

v1.75.8

Twitter / LinkedIn details

https://www.linkedin.com/in/t-hermann/
