I’m experimenting with langgraph and tracing via Opik. Here’s the behavior I’m seeing:
With langgraph + langchain:
When I trace execution using Opik, all LLM-related information (inputs, outputs, tokens, pricing, timing) is captured in a single trace, distributed across the nodes of the graph.
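For context, here's a minimal sketch of how the working setup is wired (the node name, State shape, and model are placeholders, and I'm assuming Opik's `OpikTracer` LangChain callback is the relevant piece):

```python
from typing_extensions import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, StateGraph
from opik.integrations.langchain import OpikTracer


class State(TypedDict):
    question: str
    answer: str


llm = ChatOpenAI(model="gpt-4o-mini")


def answer_node(state: State) -> dict:
    # Single LLM call per node; Opik records it as a span inside the graph trace.
    response = llm.invoke(state["question"])
    return {"answer": response.content}


builder = StateGraph(State)
builder.add_node("answer", answer_node)
builder.add_edge(START, "answer")
builder.add_edge("answer", END)
graph = builder.compile()

# One invocation -> one Opik trace, with the node and its LLM call nested inside it.
result = graph.invoke(
    {"question": "Hello"},
    config={"callbacks": [OpikTracer()]},
)
```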
With langgraph + pydantic-ai (tracing enabled via Logfire with capture_all=True):
Instead of one unified trace, I get multiple separate traces, which makes it harder to visualize the entire flow in a single execution graph.
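Here's a minimal sketch of the pydantic-ai variant, again with placeholder names; I'm showing `logfire.instrument_pydantic_ai()` as the instrumentation entry point and leaving out the exact `capture_all` wiring:

```python
from typing_extensions import TypedDict

import logfire
from langgraph.graph import END, START, StateGraph
from pydantic_ai import Agent

logfire.configure()
logfire.instrument_pydantic_ai()  # instrumentation entry point as I understand it


class State(TypedDict):
    question: str
    answer: str


agent = Agent("openai:gpt-4o-mini")


def answer_node(state: State) -> dict:
    # The agent run is instrumented, but it does not seem to attach to any
    # graph-level parent span.
    result = agent.run_sync(state["question"])
    return {"answer": result.output}


builder = StateGraph(State)
builder.add_node("answer", answer_node)
builder.add_edge(START, "answer")
builder.add_edge("answer", END)
graph = builder.compile()

# A single invocation here shows up as multiple top-level traces on the backend
# rather than one trace with nested spans.
result = graph.invoke({"question": "Hello"})
```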
Question:
How can I configure pydantic-ai so that all LLM-related spans end up in a single trace, distributed across the nodes of the graph, similar to how langgraph behaves with langchain?
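For illustration only, this is the rough shape I was hoping for, wrapping the whole graph run in one parent span so the agent spans nest under it; I don't know whether this is the intended mechanism:

```python
import logfire

# Speculative: wrap the whole graph invocation in a single parent span so that
# the pydantic-ai agent spans nest under one trace.
with logfire.span("graph run"):
    result = graph.invoke({"question": "Hello"})
```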
I’m exploring pydantic-ai because langchain feels too complex for my use case.