While working on mozilla-ai/any-agent#828 (comment) I ran into an issue where xAI models fail when structured output (response_format) is specified in the OpenAI json_schema format, for example:
from pydantic import BaseModel

# StructuredOutput is the caller's pydantic model; a minimal stand-in for illustration:
class StructuredOutput(BaseModel):
    answer: str

# response_format expressed in the OpenAI json_schema format, which fails for xAI models:
openai_json_schema = {
    "type": "json_schema",
    "json_schema": {
        "name": "StructuredOutput",
        "schema": {**StructuredOutput.model_json_schema(), "additionalProperties": False},
        "strict": True,
    },
}
Note that passing the pydantic model StructuredOutput directly as response_format works fine.
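For reference, this is roughly what the two call shapes look like, reusing the names from the snippet above. The model id is illustrative and the completion entry point is assumed from any-llm's provider/model string convention, so treat this as a sketch rather than the exact reproduction:

from any_llm import completion

messages = [{"role": "user", "content": "Give me a structured answer."}]

# Works: response_format given as the pydantic model class itself.
completion(model="xai/grok-3", messages=messages, response_format=StructuredOutput)

# Fails today: response_format given as the OpenAI json_schema dict from above.
completion(model="xai/grok-3", messages=messages, response_format=openai_json_schema)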
@daavoo's suggestion: any-llm should accept as response_format any of the argument types that OpenAI supports, and take care of converting them to the appropriate format for each provider under the hood.
Specifically for xAI, the target format is described in the structured outputs guide:
https://docs.x.ai/docs/guides/structured-outputs
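One possible shape for that conversion layer, sketched below: a helper that normalizes whatever the caller passed as response_format (a pydantic model class or an already-built OpenAI-style json_schema dict) into a single canonical dict, which each provider adapter, xAI included, could then translate into whatever its API expects per the guide above. The name and placement are illustrative, not the actual any-llm implementation:

from typing import Any, Union

from pydantic import BaseModel


def normalize_response_format(response_format: Union[type[BaseModel], dict[str, Any]]) -> dict[str, Any]:
    """Normalize an OpenAI-style response_format into a canonical json_schema dict.

    Accepts either a pydantic model class or a {"type": "json_schema", ...} dict,
    so provider adapters only ever have to handle one shape. Illustrative sketch.
    """
    if isinstance(response_format, type) and issubclass(response_format, BaseModel):
        # Build the OpenAI json_schema dict from the pydantic model.
        return {
            "type": "json_schema",
            "json_schema": {
                "name": response_format.__name__,
                "schema": {**response_format.model_json_schema(), "additionalProperties": False},
                "strict": True,
            },
        }
    if isinstance(response_format, dict) and response_format.get("type") == "json_schema":
        # Already in the OpenAI dict format; pass through unchanged.
        return response_format
    raise ValueError(f"Unsupported response_format: {response_format!r}")

With something like this in place, the provider adapter for xAI would take the normalized dict and reshape it into whatever the xAI endpoint actually accepts (per the linked guide), so callers never need to know which form a given provider wants.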