Question Validation
- I have searched both the documentation and discord for an answer.
Question
Hi,
I'm using LlamaIndex's FunctionAgent with both an Ollama LLM and an Anthropic LLM. With Ollama, the agent calls the tools and then generates a response to the user as expected. With Anthropic, the tool calls are made successfully and the tool results are returned, but the agent never generates a follow-up response. There are no errors or warnings; the process simply stops after the tool results. (A minimal sketch of my setup is included after the checklist below.)
What I’ve checked:
- The tool call and tool result are handled correctly.
- The same workflow works with Ollama LLM.
- No exceptions or errors are raised.
- Using llama-index-llms-anthropic v0.9.5.
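For reference, here is a minimal sketch of my setup. The tool bodies, model name, and system prompt are simplified stand-ins for my real ones, but the structure is the same:

```python
import asyncio

from llama_index.core.agent.workflow import FunctionAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.anthropic import Anthropic


def start_cooking(cooking: dict) -> dict:
    """Start cooking the requested item (simplified stand-in for my real tool)."""
    return {"cooking": {"to_be_cooked": cooking.get("to_be_cooked")}}


def get_current_cooking_status() -> dict:
    """Return the current cooking state (simplified stand-in for my real tool)."""
    return {"cooking": {"to_be_cooked": "eggs"}}


agent = FunctionAgent(
    tools=[
        FunctionTool.from_defaults(fn=start_cooking),
        FunctionTool.from_defaults(fn=get_current_cooking_status),
    ],
    # Placeholder model name; swapping this for Ollama(model=...) produces the expected final answer.
    llm=Anthropic(model="claude-3-5-sonnet-latest"),
    system_prompt="You are AI Assistant, a Digital Cooking Agent.",
)


async def main() -> None:
    response = await agent.run(user_msg="I want to cook eggs today. how can i do that?")
    # With Ollama this prints a final answer; with Anthropic nothing arrives after the tool results.
    print(response)


asyncio.run(main())
```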
Messages sent to Anthropic (printed just before the response object):
anthropic_messages, system_prompt = messages_to_anthropic_messages(messages, self.cache_idx)
# Printed anthropic_messages here

Example (anthropic_messages):
user- 'content': [
{'type': 'text', 'text': 'Hello'}
]
assistant- 'content': [
{'type': 'text', 'text': "Hello! I'm **AI Assistant**, your Digital Cooking Agent. \n\nHow can I help you today?"}
]
user- 'content': [
{'type': 'text', 'text': 'I want to cook eggs today. how can i do that?'}
]
assistant- 'content': [
{'id': 'toolu_01XpPXWym1Nu8W3XbrJEvBQf', 'input': {'cooking': {'to_be_cooked': 'eggs'}}, 'name': 'start_cooking', 'type': 'tool_use'},
{'id': 'toolu_01TqML4SacyqDNzmh9M1s21H', 'input': {}, 'name': 'get_current_cooking_status', 'type': 'tool_use'}
]
user- 'content': [
{'tool_use_id': 'toolu_01XpPXWym1Nu8W3XbrJEvBQf', 'type': 'tool_result', 'content': [{'type': 'text', 'text': "{'cooking': {'to_be_cooked': 'eggs', 'chilly': None, 'pepper': None, 'salt': None, 'oil': None}}"}]},
{'tool_use_id': 'toolu_01TqML4SacyqDNzmh9M1s21H', 'type': 'tool_result', 'content': [{'type': 'text', 'text': "{'cooking': {'to_be_cooked': 'eggs', 'chilly': None, 'pepper': None, 'salt': None, 'oil': None}}"}]}
]

After this, the response object is not generated and nothing happens.
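To check whether the message format itself is the problem, my plan is to replay the captured anthropic_messages and system_prompt directly against the raw Anthropic SDK, outside of LlamaIndex. The tool schemas and model name below are simplified placeholders, not my real definitions:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Simplified placeholder schemas; my real tools declare richer input_schema definitions.
tools = [
    {
        "name": "start_cooking",
        "description": "Start cooking the requested item.",
        "input_schema": {"type": "object", "properties": {"cooking": {"type": "object"}}},
    },
    {
        "name": "get_current_cooking_status",
        "description": "Return the current cooking state.",
        "input_schema": {"type": "object", "properties": {}},
    },
]

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder; whichever model the agent is configured with
    max_tokens=1024,
    system=system_prompt,          # the system prompt captured above
    tools=tools,
    messages=anthropic_messages,   # the exact list printed above, ending with the tool_result blocks
)
# If a text block comes back here, the message format is fine and the problem is in the agent loop.
print(response.content)
```

If that raw call does return a text block, that would suggest the stall is in the agent loop rather than in how the tool results are formatted.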
Is there something different about how Anthropic expects tool results to be formatted or streamed back? Any guidance would be appreciated!