
Conversation

@hntrl hntrl (Member) commented Jun 6, 2025

Adds support for OpenAI's remote MCP, code interpreter, and image generation built-in tools exposed by the Responses API. Also bumps the openai SDK so these new types surface back through the library.
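
For illustration, a minimal sketch of how these built-in tools might be bound through ChatOpenAI against the Responses API. The useResponsesApi flag, the model name, and the placeholder MCP server values are assumptions for the example; the tool shapes follow OpenAI's Responses API definitions.

import { ChatOpenAI } from "@langchain/openai";

// Target the Responses API so built-in tool definitions pass through as-is
// instead of being converted to function-tool schemas.
const model = new ChatOpenAI({ model: "gpt-4.1", useResponsesApi: true });

const modelWithTools = model.bindTools([
  { type: "code_interpreter", container: { type: "auto" } },
  { type: "image_generation" },
  {
    type: "mcp",
    server_label: "example",                // placeholder label
    server_url: "https://example.com/mcp",  // placeholder MCP server
    require_approval: "always",
  },
]);

const response = await modelWithTools.invoke(
  "Plot y = x^2 with the code interpreter and describe the result."
);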

vercel bot commented Jun 6, 2025

The latest updates on your projects. Learn more about Vercel for Git ↗︎

Name                  Status      Updated (UTC)
langchainjs-docs      ✅ Ready    Jun 17, 2025 6:10pm

1 Skipped Deployment

Name                  Status      Updated (UTC)
langchainjs-api-refs  ⬜️ Ignored  Jun 17, 2025 6:10pm

@dosubot dosubot bot added the size:XL (This PR changes 500-999 lines, ignoring generated files) and auto:improvement (Medium size change to existing code to handle new use-cases) labels Jun 6, 2025
"cell_type": "markdown",
"metadata": {},
"source": [
"<details>\n",
Contributor

This doesn't seem to render correctly in the preview.

const response2 = await model.invoke(
  [new HumanMessage({ content: approvals })],
  {
    previous_response_id: response.response_metadata.id,
  }
);
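
For context, approvals above would be built from the MCP approval requests returned by the first call. A rough sketch, assuming those requests surface under additional_kwargs.tool_outputs (the field names here are assumptions for illustration, not necessarily what this PR ships):

// Answer every mcp_approval_request from the first response with an approval.
// (The tool_outputs location and item shape are assumptions for this sketch.)
const toolOutputs = (response.additional_kwargs.tool_outputs ?? []) as any[];
const approvals = toolOutputs
  .filter((output) => output.type === "mcp_approval_request")
  .map((output) => ({
    type: "mcp_approval_response",
    approval_request_id: output.id,
    approve: true,
  }));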
Contributor

I'm curious if passing the full message history works here instead of using previous_response_id.

Using the MCP tool, from what I see, generates multiple reasoning items. In Python we're only storing one of them in additional_kwargs, so when we attempt to pass back message history we get a BadRequestError because we're missing reasoning items. I'm working on a fix, which will likely involve a breaking change to langchain-openai.

Member Author

I see what you mean about MCP emitting multiple reasoning blocks. It looks like we're doing the same thing here, only keeping the latest block as we iterate over output items. That said, I can pass the output back without pinning it to the response ID and don't get any explicit errors (maybe a server-side change from OpenAI fixed your issue?).

In any case this means we're losing reasoning context when looping with MCP, so I'm interested to hear what your fix entails.
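
As a concrete illustration of the "pass the output back" flow being discussed, a hypothetical sketch that replays the full message history instead of pinning the follow-up call to previous_response_id (userPrompt and approvals are placeholders carried over from the earlier snippet):

import { HumanMessage } from "@langchain/core/messages";

// Replay the whole exchange, including the AIMessage that carries the MCP
// approval request, rather than referencing the previous response by ID.
const history = [
  new HumanMessage(userPrompt),              // original user turn (placeholder)
  response,                                  // AIMessage from the first invoke
  new HumanMessage({ content: approvals }),  // approval responses
];
const response2 = await model.invoke(history);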

@dosubot dosubot bot added the lgtm (PRs that are ready to be merged as-is) label Jun 9, 2025
@hntrl hntrl merged commit 130aa96 into main Jun 17, 2025
36 checks passed
@hntrl hntrl deleted the hunter/openai-built-ins branch June 17, 2025 23:39