openai[minor]: upgrade sdk and support new built-in tools #8324
Conversation
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"<details>\n", |
This doesn't seem to render correctly in the preview.
const response2 = await model.invoke(
  [new HumanMessage({ content: approvals })],
  {
    previous_response_id: response.response_metadata.id,
I'm curious if passing the full message history works here instead of using `previous_response_id`.
From what I see, using the MCP tool generates multiple reasoning items. In Python we're only storing one of them in `additional_kwargs`, so when we attempt to pass the message history back we get a `BadRequestError` because the other reasoning items are missing. I'm working on a fix, which will likely involve a breaking change to langchain-openai.
I see what you mean about MCP emitting multiple reasoning blocks. It looks like we're doing the same thing here, keeping only the latest block while iterating over the output items. That said, I can pass the output back without pinning it to the response ID and don't get any explicit errors (maybe a server-side change from OpenAI fixed your issue?).
In any case, this means we're losing reasoning context when looping over MCP calls, so I'm interested to hear what your fix entails.
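For reference, here is a minimal sketch of the two continuation styles being compared, assuming a `ChatOpenAI` instance with `useResponsesApi: true` and a bound remote MCP tool passed through to the Responses API (the server URL and the shape of the approval payload are illustrative, not the PR's exact fixtures):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({ model: "gpt-4.1", useResponsesApi: true }).bindTools([
  // Remote MCP tool definition forwarded to the Responses API (illustrative values).
  {
    type: "mcp",
    server_label: "deepwiki",
    server_url: "https://mcp.deepwiki.com/mcp",
    require_approval: "always",
  },
]);

const response = await model.invoke(
  "What transport protocols does the MCP spec support?"
);

// Hypothetical approval blocks built from the mcp_approval_request items in `response`.
const approvals: any[] = [];

// Option 1 (as in this PR's notebook): pin the follow-up call to the previous
// response so the server restores reasoning items and tool state itself.
const viaResponseId = await model.invoke(
  [new HumanMessage({ content: approvals })],
  { previous_response_id: response.response_metadata.id }
);

// Option 2: replay the full message history client-side. This only round-trips
// cleanly if every reasoning item is preserved on the AIMessage; keeping just
// the latest one is what triggers the BadRequestError mentioned above.
const viaHistory = await model.invoke([
  response,
  new HumanMessage({ content: approvals }),
]);
```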
Force-pushed from 8af3d18 to 7383999
Adds support for OpenAI's remote MCP, code interpreter, and image generation built-in tools in the Responses API. Also bumps the OpenAI SDK so the new types surface through the library.
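A rough sketch of how the other two built-in tools might be bound, assuming `bindTools` accepts raw Responses API tool definitions as this PR describes (the model name and prompt are illustrative; the MCP flow is sketched in the thread above):

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4.1", useResponsesApi: true });

// Code interpreter: the model writes and runs Python in a sandboxed container.
const withCodeInterpreter = model.bindTools([
  { type: "code_interpreter", container: { type: "auto" } },
]);

// Image generation: results come back as generated-image output items.
const withImageGen = model.bindTools([{ type: "image_generation" }]);

const result = await withCodeInterpreter.invoke(
  "Use the sandbox to compute the 40th Fibonacci number."
);
console.log(result.content);
```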