
Copilot AI commented Jan 13, 2026

Implements a toggleable per-request span export mode that uses OpenTelemetry Context (AsyncLocalStorage) to carry authentication tokens, eliminating the need for token registries or caches.

Changes

  • Agent365ExporterOptions: Added enablePerRequestContextToken toggle and configuration for flush grace period (perRequestFlushGraceMs) and max trace age (perRequestMaxTraceAgeMs)

  • token-context.ts: Context utilities runWithExportToken() and getExportToken() for managing the export token via the OTel Context API (a sketch follows this list)

  • PerRequestSpanProcessor: Custom span processor that buffers spans per trace, flushes on root span completion, and exports under the original request Context so tokenResolver reads from context.active()

  • withExportToken: Middleware helper to wrap requests with token-carrying Context

  • setup-per-request-export.ts: Initialization helper that registers PerRequestSpanProcessor when enabled, otherwise uses BatchSpanProcessor
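
The token-context.ts item above references the helpers by name; as a rough sketch of what they could look like on top of the public @opentelemetry/api Context API (the key name and exact signatures are assumptions, not the shipped code):

// token-context.ts: minimal sketch, assuming only the public OTel Context API
import { Context, context, createContextKey } from '@opentelemetry/api';

// Context key under which the per-request export token is carried (key name is illustrative).
const EXPORT_TOKEN_KEY = createContextKey('a365.export.token');

/** Run fn inside a Context that carries the export token (backed by AsyncLocalStorage in Node). */
export function runWithExportToken<T>(token: string, fn: () => T): T {
  return context.with(context.active().setValue(EXPORT_TOKEN_KEY, token), fn);
}

/** Read the export token from the given Context (defaults to the active one); undefined if absent. */
export function getExportToken(ctx: Context = context.active()): string | undefined {
  return ctx.getValue(EXPORT_TOKEN_KEY) as string | undefined;
}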

Usage

import { setupTracing, withExportToken } from '@microsoft/agents-a365-observability';

// Enable per-request mode - token read from OTel Context at export time
const provider = setupTracing((opts) => {
  opts.enablePerRequestContextToken = true;
  opts.perRequestFlushGraceMs = 250;  // Optional: wait for late child spans
});

// Inject token into request context (mount early in middleware stack)
app.use(withExportToken((req) => req.headers['authorization']?.split(' ')[1]));

// Traditional batch mode (when disabled)
setupTracing((opts) => {
  opts.enablePerRequestContextToken = false;
  opts.tokenResolver = async (agentId, tenantId) => getToken(agentId, tenantId);
});

No changes to Agent365Exporter: it uses the provided tokenResolver regardless of mode. When per-request mode is enabled, the processor stores only spans and a Context reference per trace; tokens exist solely in AsyncLocalStorage.
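
For orientation, here is a simplified sketch of the PerRequestSpanProcessor idea. It is not the shipped implementation: it omits the grace-period and max-trace-age timers, assumes @opentelemetry/sdk-trace-base 1.x types, and uses one possible way of detecting the local root span via the parent Context.

import { Context, context, trace } from '@opentelemetry/api';
import { ReadableSpan, Span, SpanExporter, SpanProcessor } from '@opentelemetry/sdk-trace-base';

interface TraceBuffer {
  spans: ReadableSpan[];
  requestContext: Context; // Context active when the first span of the trace started
  rootSpanId?: string;
}

export class PerRequestSpanProcessor implements SpanProcessor {
  private readonly buffers = new Map<string, TraceBuffer>();

  constructor(private readonly exporter: SpanExporter) {}

  onStart(span: Span, parentContext: Context): void {
    const { traceId, spanId } = span.spanContext();
    let buffer = this.buffers.get(traceId);
    if (!buffer) {
      // First span of the trace: keep a reference to the request Context so export can run under it.
      buffer = { spans: [], requestContext: parentContext };
      this.buffers.set(traceId, buffer);
    }
    // Treat a span with no in-process parent (or only a remote parent) as the trace root.
    const parent = trace.getSpanContext(parentContext);
    if (!parent || parent.isRemote) {
      buffer.rootSpanId = spanId;
    }
  }

  onEnd(span: ReadableSpan): void {
    const { traceId, spanId } = span.spanContext();
    const buffer = this.buffers.get(traceId);
    if (!buffer) return;
    buffer.spans.push(span);
    if (spanId !== buffer.rootSpanId) return;
    // Root span ended: export the buffered trace under the original request Context so a
    // Context-reading tokenResolver sees the token via context.active().
    this.buffers.delete(traceId);
    context.with(buffer.requestContext, () => this.exporter.export(buffer.spans, () => {}));
  }

  forceFlush(): Promise<void> {
    return Promise.resolve();
  }

  shutdown(): Promise<void> {
    this.buffers.clear();
    return this.exporter.shutdown();
  }
}

The real processor presumably also applies perRequestFlushGraceMs and perRequestMaxTraceAgeMs so that late child spans and abandoned traces are still exported rather than buffered indefinitely.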

Original prompt

Add a toggleable, zero-storage per-request export feature using OpenTelemetry Context (AsyncLocalStorage) to carry the token. When enabled, spans are exported per trace/request via a custom PerRequestSpanProcessor, and Agent365Exporter reads the token from the active OTel Context at export time (no registry/cache). When disabled, retain existing behavior using BatchSpanProcessor with the provided tokenResolver.

Changes to implement:

  • Add a new option enablePerRequestContextToken: boolean to Agent365ExporterOptions to toggle the feature.
  • Introduce token-context.ts utilities to set/get a per-request export token via OTel Context.
  • Implement PerRequestSpanProcessor that buffers spans per trace and flushes when the root span ends (with a grace period); on flush, export is performed under the saved request Context so the exporter’s tokenResolver can read the token from context.active().
  • Add withExportToken middleware helper to run each request within a Context that carries the token. No explicit unset required; Context ends with request.
  • Provide setup-per-request-export.ts helper to initialize tracing: when the new option is true, register PerRequestSpanProcessor and set Agent365ExporterOptions.tokenResolver to read from OTel Context (a sketch of that Context-reading resolver follows this list); otherwise register BatchSpanProcessor using existing batching options.
  • Do NOT modify Agent365Exporter logic; it will use the provided tokenResolver at export time.
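
As referenced above, the per-request branch of setup-per-request-export.ts can come down to a tokenResolver that ignores its arguments and reads from the active Context. A hedged sketch (import paths and the exact setup shape are assumptions):

import { Agent365ExporterOptions } from '@microsoft/agents-a365-observability';
import { getExportToken } from './token-context'; // assumed local path to the sketch shown earlier

// Per-request mode: agentId/tenantId are unused because the token is scoped to the request Context
// that PerRequestSpanProcessor restores around each export call.
export function contextTokenResolver(_agentId: string, _tenantId: string): string | null {
  return getExportToken() ?? null;
}

const options = new Agent365ExporterOptions();
options.enablePerRequestContextToken = true;
options.tokenResolver = contextTokenResolver;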

Notes:

  • No token is stored outside OTel Context. The processor stores only spans per trace and a reference to the request Context.
  • Avoid registering BatchSpanProcessor alongside the PerRequestSpanProcessor to prevent cross-request batching.
  • Mount withExportToken early in the request pipeline so the server span is created inside the token-carrying Context (see the middleware sketch after these notes).
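
The middleware sketch referenced in the last note, in an Express-flavored form built on runWithExportToken (the Express dependency and the exact signature are assumptions; the shipped helper may differ):

import type { NextFunction, Request, RequestHandler, Response } from 'express';
import { runWithExportToken } from './token-context'; // assumed local path to the sketch shown earlier

export function withExportToken(getToken: (req: Request) => string | undefined): RequestHandler {
  return (req: Request, _res: Response, next: NextFunction) => {
    const token = getToken(req);
    if (!token) {
      // No token: continue outside the token-carrying Context; the exporter logs and proceeds without auth.
      next();
      return;
    }
    // Run the rest of the pipeline inside a Context that carries the token, so the server span and all
    // child spans are created under it. No explicit cleanup: the Context ends with the request.
    runWithExportToken(token, () => next());
  };
}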

Proposed files and modifications:

// ------------------------------------------------------------------------------
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT License.
// ------------------------------------------------------------------------------

import { ClusterCategory } from '@microsoft/agents-a365-runtime';
/**
 * A function that resolves and returns an authentication token for the given agent and tenant.
 * Implementations may perform synchronous lookup (e.g., in-memory cache) or asynchronous network calls.
 * Return null if a token cannot be provided; exporter will log and proceed without an authorization header.
 */
export type TokenResolver = (agentId: string, tenantId: string) => string | null | Promise<string | null>;

/**
 * Options controlling the behavior of the Agent365 OpenTelemetry span exporter.
 *
 * These values tune batching, timeouts, token acquisition and endpoint shape. All properties have sensible
 * defaults so callers can usually construct without arguments and override selectively.
 *
 * @property {ClusterCategory | string} clusterCategory Environment / cluster category (e.g. "preprod", "prod"; defaults to "prod").
 * @property {TokenResolver} [tokenResolver] Optional delegate to obtain an auth token. If omitted the exporter will
 *           fall back to reading the cached token (AgenticTokenCacheInstance.getObservabilityToken).
 * @property {boolean} [useS2SEndpoint] When true, exporter will POST to the S2S path (/maven/agent365/service/agents/{agentId}/traces).
 * @property {number} maxQueueSize Maximum span queue size before drops occur (passed to BatchSpanProcessor).
 * @property {number} scheduledDelayMilliseconds Delay between automatic batch flush attempts.
 * @property {number} exporterTimeoutMilliseconds Per-export timeout (abort if exceeded).
 * @property {number} maxExportBatchSize Maximum number of spans per export batch.
 * @property {boolean} enablePerRequestContextToken When true, use per-request (per trace) export and read the token from OTel Context at export time.
 * @property {number} [perRequestFlushGraceMs] Grace period (ms) to wait for late child spans before a buffered trace is flushed in per-request mode.
 * @property {number} [perRequestMaxTraceAgeMs] Maximum age (ms) a buffered trace may reach before it is force-flushed in per-request mode.
 */
export class Agent365ExporterOptions {
  /** Environment / cluster category (e.g. "preprod", "prod"). */
  public clusterCategory: ClusterCategory | string = 'prod';

  /** Optional delegate to resolve auth token used by exporter */
  public tokenResolver?: TokenResolver; // Optional if ENABLE_A365_OBSERVABILITY_EXPORTER is false

  /** When true, use S2S endpoint path for export. */
  public useS2SEndpoint: boolean = false;

  /** Maximum span queue size before new spans are dropped. */
  public maxQueueSize: number = 2048;

  /** Delay (ms) between automatic batch flush attempts. */
  public scheduledDelayMilliseconds: number = 5000;

  /** Per-export timeout in milliseconds. */
  public exporterTimeoutMilliseconds: number = 30000;

  /** Maximum number of spans per export batch. */
  public maxExportBatchSize: number = 512;

  /** When true, use per-request (per trace) export and read the token from OTel Context at export time. */
  public enablePerRequestContextToken: boolean = false;

  /** Grace period (ms) to wait for late child spans before a buffered trace is flushed in per-request mode. */
  public perRequestFlushGraceMs?: number;

  /** Maximum age (ms) a buffered trace may reach before it is force-flushed in per-request mode. */
  public perRequestMaxTraceAgeMs?: number;
}



*This pull request was created from Copilot chat.*

Copilot AI and others added 2 commits January 13, 2026 00:09
- Added enablePerRequestContextToken option to Agent365ExporterOptions
- Created token-context.ts utilities for managing tokens in OTel Context
- Implemented PerRequestSpanProcessor for per-trace span buffering
- Added withExportToken middleware helper
- Created setup-per-request-export.ts for tracing initialization
- Updated index.ts to export new modules

Co-authored-by: fpfp100 <[email protected]>
- Extract magic string ZERO_SPAN_ID as named constant
- Fix optional chaining syntax in shutdown method
- Make flush grace and max trace age configurable via Agent365ExporterOptions
- Update JSDoc to document new configuration properties

Co-authored-by: fpfp100 <[email protected]>
Copilot AI changed the title from "[WIP] Add toggleable zero-storage export feature using OpenTelemetry Context" to "Add per-request export with zero-storage token management via OpenTelemetry Context" on Jan 13, 2026
Copilot AI requested a review from fpfp100 January 13, 2026 00:16