docs: Add Humanloop as an observability provider #1
Conversation
Humanloop (https://humanloop.com) now supports ingesting OTEL traces from the Vercel AI SDK. This doc details the steps to set up this integration, as well as configuration options.
> # Humanloop Observability
>
> [Humanloop](https://humanloop.com/) is the LLM evals platform for enterprises, giving you the tools that top teams use to ship and scale AI with confidence. Humanloop integrates with the AI SDK to provide:
critique this final version
Humanloop is an enterprise LLMOps platform that helps you confidently evaluate, deploy, and scale AI features.
Our AI SDK integration allows you to seamlessly import telemetry data into Humanloop via the OpenTelemetry protocol.
You can visualize app traces and metrics for latency, cost, and errors. You can then set up automatic monitoring using code, human, and LLM evaluators.
> The AI SDK can log to [Humanloop](https://humanloop.com/) via OpenTelemetry. This integration enables trace visualization, cost/latency/error monitoring, and evaluation by code, LLM, or human judges.
Remove this paragraph; a rephrased version is in the previous comment.
> | Parameter | Required | Description |
> | --------------------- | -------- | ------------------------------------------------------------------------------ |
> | `humanloopPromptPath` | Yes | Path to the prompt on Humanloop. Generation spans create Logs for this Prompt. |
Reminder that this is a first point of contact for new users: they're not yet in on the HL lingo
Alternative descriptions:
- Prompt path in the Humanloop workspace. A Prompt file logs requests made to the LLM provider
- Flow path in the Humanloop workspace. A Flow file holds traces of the full user-LLM interaction
- Unique trace ID for the current user session
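For reference, here is a minimal sketch of where such a parameter ends up in practice, assuming it is passed through `experimental_telemetry.metadata` as in the doc's later examples; the path value `My Project/Support Agent` is just a placeholder:

```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

// Sketch: attaching the Humanloop Prompt path to an AI SDK call via telemetry
// metadata. Only `humanloopPromptPath` itself is taken from the parameter table
// above; the path value is a placeholder.
const { text } = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Summarize the latest support ticket.',
  experimental_telemetry: {
    isEnabled: true,
    metadata: {
      humanloopPromptPath: 'My Project/Support Agent',
    },
  },
});
```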
> ```ts
> experimental_telemetry: {
>   isEnabled: true,
>   functionId: 'unique-function-id', // Optional identifier for the function
I don't understand this comment; can you try describing why I would add a functionId / what it does?
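For what it's worth, a minimal sketch of how `functionId` is typically used, as I read the AI SDK telemetry docs: it is a stable label for the call site that gets attached to the emitted spans, so telemetry from different features can be grouped and filtered in the backend. The names below are illustrative only:

```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

// Sketch: two call sites with different functionIds. Each id is attached to the
// spans emitted by its call, so the two features can be filtered and compared
// separately when browsing traces.
const draft = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Draft a reply to the customer.',
  experimental_telemetry: { isEnabled: true, functionId: 'support-draft-reply' },
});

const summary = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Summarize the conversation so far.',
  experimental_telemetry: { isEnabled: true, functionId: 'support-summarize' },
});
```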
> ```bash
> OTEL_EXPORTER_OTLP_ENDPOINT=https://api.humanloop.com/v5/import/otel
> OTEL_EXPORTER_OTLP_PROTOCOL=http/json
> OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=xxxxxx" # Humanloop API key
Actually, I like the xxxxx pattern here; it's a good way to signal that you need to add the X-API-KEY= prefix.
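As a cross-check of what those variables configure, here is a rough programmatic equivalent using the OTLP HTTP exporter; the `/v1/traces` suffix follows the OTLP convention of appending the signal path to the base endpoint, and the exact Humanloop URL should be verified against the doc above:

```ts
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

// Sketch: programmatic equivalent of the env-var configuration above.
// The base endpoint comes from the doc; /v1/traces is the standard OTLP
// signal path that the env-var-based setup appends automatically.
const humanloopExporter = new OTLPTraceExporter({
  url: 'https://api.humanloop.com/v5/import/otel/v1/traces',
  headers: {
    'X-API-KEY': process.env.HUMANLOOP_API_KEY ?? '', // Humanloop API key
  },
});
```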
> <Tabs items={['Next.js', 'Node.js']}>
>   <Tab>
>     Next.js has support for OpenTelemetry instrumentation on the framework level. Learn more about it in the [Next.js OpenTelemetry guide](https://nextjs.org/docs/app/building-your-application/optimizing/open-telemetry).
"Next.js has framework level support for OpenTelemetry instrumentation"
> Your calls to the AI SDK should now be logged to Humanloop.
"Your AI SDK project will now log to Humanloop"
// More confidence, more salesmanship - you are writing from a position of knowledge
> ### Node.js Implementation
>
> OpenTelemetry has a package to auto-instrument Node.js applications. Learn more about it in the [OpenTelemetry Node.js guide](https://opentelemetry.io/docs/languages/js/getting-started/nodejs/).
"Add OpenTelemetry to your Node.js project. To learn more it, check out the [....."
> OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=xxxxxx" # Humanloop API key
>
> ## Framework Implementation
OpenTelemetry Setup?
> OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=xxxxxx" # Humanloop API key
>
> Register the OpenTelemetry SDK and add Humanloop metadata to the spans. The `humanloopPromptPath` specifies the (Prompt File)[http://localhost:3001/docs/v5/explanation/prompts] in Humanloop to which the spans will be logged.
Oops, localhost link
> const result = await generateText({
>   model: openai('gpt-4o'),
>   prompt: 'Write a short story about a cat.',
>   experimental_telemetry: { isEnabled: true },
want highlight on this line
> ## Debugging
>
> If you aren't using Next.js 15+, you will also need to enable the experimental instrumentation hook (available in 13.4+).
Need to elaborate on this further: it's Next.js-only; mention the hook's benefits first, THEN mention that you don't need extra configuration for >= 15.
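Could also include the config snippet for the older versions. A sketch of what that would look like on Next.js 13.4-14.x (15+ needs nothing extra since the hook is enabled by default):

```js
// next.config.js - only needed on Next.js 13.4-14.x; on 15+ the
// instrumentation hook is enabled by default and this flag is not required.
/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: {
    instrumentationHook: true,
  },
};

module.exports = nextConfig;
```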
> After instrumenting your AI SDK application with Humanloop, you can then:
>
> - Experiment with different [versions of Prompts](https://humanloop.com/docs/v5/guides/evals/comparing-prompts) and try them out in the Editor
"Try tweaking your Prompt in the workspace editor to improve its performance"
I see your point about the provider: they can tweak the Prompt but not call it after optimisation. Let's mention that briefly in the same paragraph: "Our AI SDK Provider implementation is coming soon, allowing you to switch between Prompt versions as you make tweaks"
> - Create [custom Evaluators](https://humanloop.com/docs/v5/explanation/evaluators) -- Human, Code, or LLM -- to monitor and benchmark your AI application
I would rather only point to monitoring - they're still dipping their toes and this is the next quantum of utility: BAM you have your AI Vercel project, now you also have monitoring in HL. That's a workable setup already and makes them come back for more
> Several LLM observability providers offer integrations with the AI SDK telemetry data:
>
> - [Braintrust](/providers/observability/braintrust)
> - [Humanloop](/providers/observability/humanloop)
Petition to put ourselves first /s
> ## Trace Grouping
>
> To group multiple AI SDK calls into a single Flow Log, create and pass a Flow Log ID to the telemetry metadata of each AI SDK call.
"To trace a user-LLM session end to end, we will use Humanloop's tracing feature: Flows (add linked here). Pass A Flow Log ID to the telemetra metadata on all AI SDK calls.
> 2. Pass the Flow Log ID to each AI SDK call
> 3. Update the Flow Log when all executions are complete
>
> The Flow Log serves as a parent container for all related Prompt Logs in Humanloop.
drop this paragraph
> 1. Create a Flow Log in Humanloop
> 2. Pass the Flow Log ID to each AI SDK call
> 3. Update the Flow Log when all executions are complete
Confused; I expected the flow trace to be completed automatically.
andreibratu left a comment:
did a first pass, ping me IRL