This repository demonstrates a current bug in LlamaIndex's `OpenAIResponses` implementation.
When the following conditions are met:

- a `FunctionAgent` with tools is used
- `OpenAIResponses` is the `llm`
- the response is NOT streamed

LlamaIndex will return an empty response immediately after the tool call instead of fetching the LLM's final response.
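The setup that triggers the bug can be sketched as follows. This is a minimal illustration, not the repository's actual `main.py`: the import paths, the `multiply_then_add` tool, and the model name are assumptions and may differ from the real script or your installed LlamaIndex version.

```python
import asyncio
import os

def multiply_then_add(a: float, b: float, c: float) -> float:
    """Tool exposed to the agent: computes a * b + c."""
    return a * b + c

async def main() -> None:
    # Imports deferred so the sketch reads even without the packages installed.
    # Paths assumed from recent llama-index releases; they may vary by version.
    from llama_index.core.agent.workflow import FunctionAgent
    from llama_index.core.tools import FunctionTool
    from llama_index.llms.openai import OpenAIResponses

    agent = FunctionAgent(
        tools=[FunctionTool.from_defaults(fn=multiply_then_add)],
        llm=OpenAIResponses(model="gpt-4.1-mini"),  # assumed model name
    )
    # Non-streamed run: after the tool call, the final response comes back empty.
    response = await agent.run("What is 2 * 3 + 4?")
    print(f"OpenAIResponses response: {response}")

if __name__ == "__main__" and os.getenv("OPENAI_API_KEY"):
    asyncio.run(main())
```

Swapping the `llm` for the plain `OpenAI` class, or streaming the response, avoids the empty result.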
To run this example:

- Copy `.env.example` to `.env`
- Add your OpenAI key to `.env`
- Run the script:

```shell
$ uv run python main.py
```
This will result in the following output:
```
OpenAI response: The result of \(2 \times 3 + 4\) is 10.
OpenAIResponses response:
```