Conversation

@jamesbaskerville

Humanloop (https://humanloop.com) now supports ingesting OTEL traces from the Vercel AI SDK. This doc details the steps to set up this integration as well as configuration options.

@jamesbaskerville force-pushed the humanloop/add-observability-provider branch 2 times, most recently from 42abcba to ece140f on March 12, 2025
@jamesbaskerville force-pushed the humanloop/add-observability-provider branch from ece140f to acf9026 on March 12, 2025

# Humanloop Observability

[Humanloop](https://humanloop.com/) is the LLM evals platform for enterprises, giving you the tools that top teams use to ship and scale AI with confidence. Humanloop integrates with the AI SDK to provide:
@andreibratu Mar 12, 2025

critique this final version

Humanloop is an enterprise LLMOps platform that helps you confidently evaluate, deploy, and scale AI features.

Our AI SDK integration allows you to seamlessly import telemetry data into Humanloop via the OpenTelemetry protocol.

You can visualize app traces and metrics for latency, cost, and errors. You can then set up automatic monitoring using code, human, and LLM evaluators.


[Humanloop](https://humanloop.com/) is the LLM evals platform for enterprises, giving you the tools that top teams use to ship and scale AI with confidence. Humanloop integrates with the AI SDK to provide:

The AI SDK can log to [Humanloop](https://humanloop.com/) via OpenTelemetry. This integration enables trace visualization, cost/latency/error monitoring, and evaluation by code, LLM, or human judges.


remove this paragraph, rephrased version in previous comment


| Parameter | Required | Description |
| --------------------- | -------- | ------------------------------------------------------------------------------ |
| `humanloopPromptPath` | Yes | Path to the prompt on Humanloop. Generation spans create Logs for this Prompt. |
@andreibratu Mar 12, 2025

Reminder that this is a first point of contact for new users: they're not yet in on the HL lingo

Alternative descriptions (see the sketch after this list):

  • Prompt path in the Humanloop workspace. A Prompt file logs requests made to the LLM provider
  • Flow path in the Humanloop workspace. A Flow file holds traces of the full user-LLM interaction
  • Unique trace ID for the current user session
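
A rough sketch of how these could be passed via `experimental_telemetry` metadata. The `humanloopFlowPath` and `humanloopFlowLogId` key names are assumptions based on the descriptions above, not confirmed API:

```ts
import { randomUUID } from 'node:crypto';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const sessionTraceId = randomUUID(); // stands in for the current user session's trace ID

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Summarize the support ticket.',
  experimental_telemetry: {
    isEnabled: true,
    metadata: {
      humanloopPromptPath: 'support/summarize-ticket', // Prompt file in the Humanloop workspace
      humanloopFlowPath: 'support/ticket-session', // assumed key: Flow file holding the full trace
      humanloopFlowLogId: sessionTraceId, // assumed key: unique trace ID for the session
    },
  },
});
```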

```ts
experimental_telemetry: {
  isEnabled: true,
  functionId: 'unique-function-id', // Optional identifier for the function
},
```


Don't understand this comment; can you try describing why I would add a functionId / what it does?

```bash
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.humanloop.com/v5/import/otel
OTEL_EXPORTER_OTLP_PROTOCOL=http/json
OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=xxxxxx" # Humanloop API key
```


Actually like the xxxxx pattern here; it's a good way to signal that you need to add the X-API-KEY= prefix


<Tabs items={['Next.js', 'Node.js']}>
<Tab>
Next.js has support for OpenTelemetry instrumentation on the framework level. Learn more about it in the [Next.js OpenTelemetry guide](https://nextjs.org/docs/app/building-your-application/optimizing/open-telemetry).


"Next.js has framework level support for OpenTelemetry instrumentation"

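A minimal `instrumentation.ts` sketch for the registration step, assuming `@vercel/otel`'s `registerOTel` helper (the `serviceName` value is a placeholder):

```ts
// instrumentation.ts at the project root (or in src/)
import { registerOTel } from '@vercel/otel';

export function register() {
  registerOTel({ serviceName: 'my-ai-app' }); // placeholder service name
}
```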

Your calls to the AI SDK should now be logged to Humanloop.


"Your AI SDK project will now log to Humanloop"

// More confidence, more salesmanship - you are writing from a position of knowledge


### Node.js Implementation

OpenTelemetry has a package to auto-instrument Node.js applications. Learn more about it in the [OpenTelemetry Node.js guide](https://opentelemetry.io/docs/languages/js/getting-started/nodejs/).


"Add OpenTelemetry to your Node.js project. To learn more it, check out the [....."

```bash
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.humanloop.com/v5/import/otel
OTEL_EXPORTER_OTLP_PROTOCOL=http/json
OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=xxxxxx" # Humanloop API key
```

## Framework Implementation


OpenTelemetry Setup?

```bash
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.humanloop.com/v5/import/otel
OTEL_EXPORTER_OTLP_PROTOCOL=http/json
OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=xxxxxx" # Humanloop API key
```

Register the OpenTelemetry SDK and add Humanloop metadata to the spans. The `humanloopPromptPath` specifies the [Prompt File](http://localhost:3001/docs/v5/explanation/prompts) in Humanloop to which the spans will be logged.


Oops, localhost link
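
A minimal sketch of that registration step for Node.js, assuming the standard `@opentelemetry/sdk-node` setup with the OTLP HTTP exporter (which reads the `OTEL_EXPORTER_OTLP_*` variables shown above):

```ts
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

// The exporter picks up OTEL_EXPORTER_OTLP_ENDPOINT, _PROTOCOL,
// and _HEADERS from the environment.
const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter(),
});

sdk.start();
```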

```ts
const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Write a short story about a cat.',
  experimental_telemetry: { isEnabled: true },
});
```


want highlight on this line


## Debugging

If you aren't using Next.js 15+, you will also need to enable the experimental instrumentation hook (available in 13.4+).


Need to elaborate this further: it's Next.js only; should mention the hook's benefits, THEN mention that you don't need extra configuration for >= 15
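
For reference, a sketch of the flag in question, assuming the standard Next.js `experimental.instrumentationHook` config option (only needed on 13.4 through 14.x):

```js
// next.config.js
module.exports = {
  experimental: {
    instrumentationHook: true, // not needed on Next.js 15+, where instrumentation is enabled by default
  },
};
```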


After instrumenting your AI SDK application with Humanloop, you can then:

- Experiment with different [versions of Prompts](https://humanloop.com/docs/v5/guides/evals/comparing-prompts) and try them out in the Editor


"Try tweaking your Prompt in the workspace editor to improve its performance"


I see your point about provider: they can tweak the Prompt but not call it after optimisation. Let's mention that briefly in the same paragraph: "Our AI SDK Provider implementation is coming soon, allowing you to switch between Prompt versions as you make tweaks"

After instrumenting your AI SDK application with Humanloop, you can then:

- Experiment with different [versions of Prompts](https://humanloop.com/docs/v5/guides/evals/comparing-prompts) and try them out in the Editor
- Create [custom Evaluators](https://humanloop.com/docs/v5/explanation/evaluators) -- Human, Code, or LLM -- to monitor and benchmark your AI application


I would rather only point to monitoring - they're still dipping their toes and this is the next quantum of utility: BAM you have your AI Vercel project, now you also have monitoring in HL. That's a workable setup already and makes them come back for more

Several LLM observability providers offer integrations for the AI SDK's telemetry data:

- [Braintrust](/providers/observability/braintrust)
- [Humanloop](/providers/observability/humanloop)


Petition to put ourselves first /s


## Trace Grouping

To group multiple AI SDK calls into a single Flow Log, create and pass a Flow Log ID to the telemetry metadata of each AI SDK call.


"To trace a user-LLM session end to end, we will use Humanloop's tracing feature: Flows (add linked here). Pass A Flow Log ID to the telemetra metadata on all AI SDK calls.

2. Pass the Flow Log ID to each AI SDK call
3. Update the Flow Log when all executions are complete

The Flow Log serves as a parent container for all related Prompt Logs in Humanloop.


drop this paragraph


1. Create a Flow Log in Humanloop
2. Pass the Flow Log ID to each AI SDK call
3. Update the Flow Log when all executions are complete


Confused, expected the flow trace to be completed automatically
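
For illustration, a rough sketch of the grouping pattern. The `humanloopFlowLogId` metadata key and the way the Flow Log ID is obtained are assumptions, not confirmed API:

```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Step 1 (assumed): a Flow Log created in Humanloop beforehand.
const flowLogId = 'flowlog_xxxxxx'; // placeholder ID

// Step 2: pass the same ID in the telemetry metadata of every call,
// so the resulting Prompt Logs are grouped under one Flow Log.
const outline = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Outline a short story about a cat.',
  experimental_telemetry: {
    isEnabled: true,
    metadata: { humanloopFlowLogId: flowLogId }, // assumed key name
  },
});

const story = await generateText({
  model: openai('gpt-4o'),
  prompt: `Write the story from this outline:\n${outline.text}`,
  experimental_telemetry: {
    isEnabled: true,
    metadata: { humanloopFlowLogId: flowLogId }, // same ID, same trace
  },
});
```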

@andreibratu left a comment

did a first pass, ping me IRL
