Commit f36e014

Merge branch 'cc/1.0/standard_content' into mdrxy/openai-rfc-1-0
2 parents 4a26b37 + 313ed7b commit f36e014

51 files changed: +756 −246 lines changed


docs/docs/concepts/messages.mdx

Lines changed: 35 additions & 1 deletion
@@ -147,7 +147,7 @@ An `AIMessage` has the following attributes. The attributes which are **standard
 | `tool_calls` | Standardized | Tool calls associated with the message. See [tool calling](/docs/concepts/tool_calling) for details. |
 | `invalid_tool_calls` | Standardized | Tool calls with parsing errors associated with the message. See [tool calling](/docs/concepts/tool_calling) for details. |
 | `usage_metadata` | Standardized | Usage metadata for a message, such as [token counts](/docs/concepts/tokens). See [Usage Metadata API Reference](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.UsageMetadata.html). |
-| `id` | Standardized | An optional unique identifier for the message, ideally provided by the provider/model that created the message. |
+| `id` | Standardized | An optional unique identifier for the message, ideally provided by the provider/model that created the message. See [Message IDs](#message-ids) for details. |
 | `response_metadata` | Raw | Response metadata, e.g., response headers, logprobs, token counts. |

 #### content
@@ -243,3 +243,37 @@ At the moment, the output of the model will be in terms of LangChain messages, s
 need OpenAI format for the output as well.

 The [convert_to_openai_messages](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.utils.convert_to_openai_messages.html) utility function can be used to convert from LangChain messages to OpenAI format.
+
+## Message IDs
+
+LangChain messages include an optional `id` field that serves as a unique identifier. Understanding when and how these IDs are assigned can be helpful for debugging, tracing, and working with message history.
+
+### When Messages Get IDs
+
+Messages receive IDs in the following scenarios:
+
+**Automatically assigned by LangChain:**
+- When generated through chat model invocation (`.invoke()`, `.stream()`, `.astream()`) with an active run manager/tracing context
+- IDs follow the format:
+  - `run-$RUN_ID` (e.g., `run-ba48f958-6402-41a5-b461-5e250a4ebd36-0`)
+  - `run-$RUN_ID-$IDX` (e.g., `run-ba48f958-6402-41a5-b461-5e250a4ebd36-1`) when there are multiple generations from a single chat model invocation.
+
+**Provider-assigned IDs (highest priority):**
+- When the model provider assigns its own ID to the message
+- These take precedence over LangChain-generated run IDs
+- Format varies by provider
+
+### When Messages Don't Get IDs
+
+Messages will **not** receive IDs in these situations:
+
+- **Manual message creation**: Messages created directly (e.g., `AIMessage(content="hello")`) without going through chat models
+- **No run manager context**: When there's no active callback/tracing infrastructure
+
+### ID Priority System
+
+LangChain follows a clear precedence system for message IDs:
+
+1. **Provider-assigned IDs** (highest priority): IDs from the model provider
+2. **LangChain run IDs** (medium priority): IDs starting with `run-`
+3. **Manual IDs** (lowest priority): IDs explicitly set by users
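
The new "Message IDs" section above is easy to sanity-check in code. The following is a minimal sketch of the behavior it describes, assuming a recent `langchain-core` (the `convert_to_openai_messages` path comes from the API reference linked in the docs); the live model call is left commented out because it assumes Anthropic credentials and the `init_chat_model` helper.

```python
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.messages.utils import convert_to_openai_messages

# Manually constructed messages carry no ID unless you set one explicitly.
manual = AIMessage(content="hello")
print(manual.id)  # None

explicit = AIMessage(content="hello", id="my-custom-id")
print(explicit.id)  # my-custom-id

# Messages returned by a chat model invocation carry either a provider-assigned
# ID or a LangChain run ID such as "run-<uuid>-0". Commented out because it
# assumes Anthropic credentials:
#
# from langchain.chat_models import init_chat_model
# llm = init_chat_model("claude-3-5-sonnet-latest", model_provider="anthropic")
# print(llm.invoke([HumanMessage(content="hi")]).id)

# The conversion utility mentioned above maps LangChain messages to OpenAI format.
print(convert_to_openai_messages([HumanMessage(content="hi"), manual]))
```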

docs/docs/how_to/chat_models_universal_init.ipynb

Lines changed: 8 additions & 8 deletions
@@ -159,7 +159,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 8,
+"execution_count": null,
 "id": "321e3036-abd2-4e1f-bcc6-606efd036954",
 "metadata": {
 "execution": {
@@ -183,7 +183,7 @@
 ],
 "source": [
 "configurable_model.invoke(\n",
-" \"what's your name\", config={\"configurable\": {\"model\": \"claude-3-5-sonnet-20240620\"}}\n",
+" \"what's your name\", config={\"configurable\": {\"model\": \"claude-3-5-sonnet-latest\"}}\n",
 ")"
 ]
 },
@@ -234,7 +234,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 7,
+"execution_count": null,
 "id": "6c8755ba-c001-4f5a-a497-be3f1db83244",
 "metadata": {
 "execution": {
@@ -261,7 +261,7 @@
 " \"what's your name\",\n",
 " config={\n",
 " \"configurable\": {\n",
-" \"first_model\": \"claude-3-5-sonnet-20240620\",\n",
+" \"first_model\": \"claude-3-5-sonnet-latest\",\n",
 " \"first_temperature\": 0.5,\n",
 " \"first_max_tokens\": 100,\n",
 " }\n",
@@ -336,7 +336,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 9,
+"execution_count": null,
 "id": "e57dfe9f-cd24-4e37-9ce9-ccf8daf78f89",
 "metadata": {
 "execution": {
@@ -368,14 +368,14 @@
 "source": [
 "llm_with_tools.invoke(\n",
 " \"what's bigger in 2024 LA or NYC\",\n",
-" config={\"configurable\": {\"model\": \"claude-3-5-sonnet-20240620\"}},\n",
+" config={\"configurable\": {\"model\": \"claude-3-5-sonnet-latest\"}},\n",
 ").tool_calls"
 ]
 }
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "langchain",
+"display_name": "langchain-monorepo",
 "language": "python",
 "name": "python3"
 },
@@ -389,7 +389,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.16"
+"version": "3.12.11"
 }
 },
 "nbformat": 4,

docs/docs/how_to/custom_tools.ipynb

Lines changed: 8 additions & 8 deletions
@@ -741,13 +741,13 @@
 "\n",
 "If you're using tools with agents, you will likely need an error handling strategy, so the agent can recover from the error and continue execution.\n",
 "\n",
-"A simple strategy is to throw a `ToolException` from inside the tool and specify an error handler using `handle_tool_error`. \n",
+"A simple strategy is to throw a `ToolException` from inside the tool and specify an error handler using `handle_tool_errors`. \n",
 "\n",
 "When the error handler is specified, the exception will be caught and the error handler will decide which output to return from the tool.\n",
 "\n",
-"You can set `handle_tool_error` to `True`, a string value, or a function. If it's a function, the function should take a `ToolException` as a parameter and return a value.\n",
+"You can set `handle_tool_errors` to `True`, a string value, or a function. If it's a function, the function should take a `ToolException` as a parameter and return a value.\n",
 "\n",
-"Please note that only raising a `ToolException` won't be effective. You need to first set the `handle_tool_error` of the tool because its default value is `False`."
+"Please note that only raising a `ToolException` won't be effective. You need to first set the `handle_tool_errors` of the tool because its default value is `False`."
 ]
 },
 {
@@ -777,7 +777,7 @@
 "id": "9d93b217-1d44-4d31-8956-db9ea680ff4f",
 "metadata": {},
 "source": [
-"Here's an example with the default `handle_tool_error=True` behavior."
+"Here's an example with the default `handle_tool_errors=True` behavior."
 ]
 },
 {
@@ -807,7 +807,7 @@
 "source": [
 "get_weather_tool = StructuredTool.from_function(\n",
 " func=get_weather,\n",
-" handle_tool_error=True,\n",
+" handle_tool_errors=True,\n",
 ")\n",
 "\n",
 "get_weather_tool.invoke({\"city\": \"foobar\"})"
@@ -818,7 +818,7 @@
 "id": "f91d6dc0-3271-4adc-a155-21f2e62ffa56",
 "metadata": {},
 "source": [
-"We can set `handle_tool_error` to a string that will always be returned."
+"We can set `handle_tool_errors` to a string that will always be returned."
 ]
 },
 {
@@ -848,7 +848,7 @@
 "source": [
 "get_weather_tool = StructuredTool.from_function(\n",
 " func=get_weather,\n",
-" handle_tool_error=\"There is no such city, but it's probably above 0K there!\",\n",
+" handle_tool_errors=\"There is no such city, but it's probably above 0K there!\",\n",
 ")\n",
 "\n",
 "get_weather_tool.invoke({\"city\": \"foobar\"})"
@@ -893,7 +893,7 @@
 "\n",
 "get_weather_tool = StructuredTool.from_function(\n",
 " func=get_weather,\n",
-" handle_tool_error=_handle_error,\n",
+" handle_tool_errors=_handle_error,\n",
 ")\n",
 "\n",
 "get_weather_tool.invoke({\"city\": \"foobar\"})"

docs/docs/how_to/index.mdx

Lines changed: 1 addition & 1 deletion
@@ -345,7 +345,7 @@ LangGraph is an extension of LangChain aimed at
 building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.

 LangGraph documentation is currently hosted on a separate site.
-You can peruse [LangGraph how-to guides here](https://langchain-ai.github.io/langgraph/how-tos/).
+You can find the [LangGraph guides here](https://langchain-ai.github.io/langgraph/guides/).

 ## [LangSmith](https://docs.smith.langchain.com/)

docs/docs/how_to/tool_artifacts.ipynb

Lines changed: 2 additions & 2 deletions
@@ -147,7 +147,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 5,
+"execution_count": null,
 "id": "74de0286-b003-4b48-9cdd-ecab435515ca",
 "metadata": {},
 "outputs": [],
@@ -157,7 +157,7 @@
 "\n",
 "from langchain_anthropic import ChatAnthropic\n",
 "\n",
-"llm = ChatAnthropic(model=\"claude-3-5-sonnet-20240620\", temperature=0)"
+"llm = ChatAnthropic(model=\"claude-3-5-sonnet-latest\", temperature=0)"
 ]
 },
 {

docs/docs/how_to/tool_stream_events.ipynb

Lines changed: 2 additions & 2 deletions
@@ -38,7 +38,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 1,
+"execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -53,7 +53,7 @@
 "if \"ANTHROPIC_API_KEY\" not in os.environ:\n",
 " os.environ[\"ANTHROPIC_API_KEY\"] = getpass()\n",
 "\n",
-"model = ChatAnthropic(model=\"claude-3-5-sonnet-20240620\", temperature=0)"
+"model = ChatAnthropic(model=\"claude-3-5-sonnet-latest\", temperature=0)"
 ]
 },
 {

docs/docs/integrations/chat/anthropic.ipynb

Lines changed: 2 additions & 2 deletions
@@ -124,15 +124,15 @@
 },
 {
 "cell_type": "code",
-"execution_count": 4,
+"execution_count": null,
 "id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
 "metadata": {},
 "outputs": [],
 "source": [
 "from langchain_anthropic import ChatAnthropic\n",
 "\n",
 "llm = ChatAnthropic(\n",
-" model=\"claude-3-5-sonnet-20240620\",\n",
+" model=\"claude-3-5-sonnet-latest\",\n",
 " temperature=0,\n",
 " max_tokens=1024,\n",
 " timeout=None,\n",

docs/docs/integrations/chat/bedrock.ipynb

Lines changed: 2 additions & 2 deletions
@@ -129,15 +129,15 @@
 },
 {
 "cell_type": "code",
-"execution_count": 1,
+"execution_count": null,
 "id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
 "metadata": {},
 "outputs": [],
 "source": [
 "from langchain_aws import ChatBedrockConverse\n",
 "\n",
 "llm = ChatBedrockConverse(\n",
-" model_id=\"anthropic.claude-3-5-sonnet-20240620-v1:0\",\n",
+" model_id=\"anthropic.claude-3-5-sonnet-latest-v1:0\",\n",
 " # region_name=...,\n",
 " # aws_access_key_id=...,\n",
 " # aws_secret_access_key=...,\n",

docs/docs/integrations/tools/stripe.ipynb

Lines changed: 1 addition & 1 deletion
@@ -153,7 +153,7 @@
 "from langgraph.prebuilt import create_react_agent\n",
 "\n",
 "llm = ChatAnthropic(\n",
-" model=\"claude-3-5-sonnet-20240620\",\n",
+" model=\"claude-3-5-sonnet-latest\",\n",
 ")\n",
 "\n",
 "langgraph_agent_executor = create_react_agent(llm, stripe_agent_toolkit.get_tools())\n",

docs/docs/troubleshooting/errors/MESSAGE_COERCION_FAILURE.ipynb

Lines changed: 2 additions & 2 deletions
@@ -45,7 +45,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 5,
+"execution_count": null,
 "metadata": {},
 "outputs": [
 {
@@ -74,7 +74,7 @@
 "\n",
 "uncoercible_message = {\"role\": \"HumanMessage\", \"random_field\": \"random value\"}\n",
 "\n",
-"model = ChatAnthropic(model=\"claude-3-5-sonnet-20240620\")\n",
+"model = ChatAnthropic(model=\"claude-3-5-sonnet-latest\")\n",
 "\n",
 "model.invoke([uncoercible_message])"
 ]
