@juspay/neurolink 8.43.0 → 9.0.0

package/CHANGELOG.md CHANGED
@@ -1,3 +1,15 @@
+ ## [9.0.0](https://github.com/juspay/neurolink/compare/v8.43.0...v9.0.0) (2026-02-03)
+
+ ### ⚠ BREAKING CHANGES
+
+ - **(observability):** @opentelemetry/api, @opentelemetry/sdk-trace-node, and
+ @opentelemetry/sdk-trace-base are now peerDependencies. Host applications
+ must install these packages directly.
+
+ ### Features
+
+ - **(observability):** add external TracerProvider support with operation name auto-detection ([25e3230](https://github.com/juspay/neurolink/commit/25e32301269b45b493df17f94b7e38af2bd7ef36))
+
  ## [8.43.0](https://github.com/juspay/neurolink/compare/v8.42.0...v8.43.0) (2026-02-02)

  ### Features
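Per the breaking change above, applications upgrading to 9.0.0 must install the three OpenTelemetry packages themselves, since they are now peerDependencies rather than bundled dependencies. A typical install (package names are taken from the changelog entry; version pinning is left to the host application):

```shell
# Install the packages promoted to peerDependencies in 9.0.0
npm install @opentelemetry/api @opentelemetry/sdk-trace-node @opentelemetry/sdk-trace-base
```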
package/README.md CHANGED
@@ -35,30 +35,32 @@ Extracted from production systems at Juspay and battle-tested at enterprise scal

  ## What's New (Q1 2026)

- | Feature | Version | Description | Guide |
- | ---------------------------------- | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------- |
- | **Server Adapters** | v8.41.0 | Multi-framework HTTP server with Hono, Express, Fastify, Koa support. Full CLI for server management with foreground/background modes. | [Server Adapters Guide](docs/guides/server-adapters/index.md) |
- | **Title Generation Events** | v8.38.0 | Emit `conversation:titleGenerated` event when conversation title is generated. Supports custom title prompts via `NEUROLINK_TITLE_PROMPT`. | [Conversation Memory Guide](docs/conversation-memory.md) |
- | **Video Generation with Veo** | v8.32.0 | Video generation using Veo 3.1 (`veo-3.1`). Realistic video generation with many parameter options | [Video Generation Guide](docs/features/video-generation.md) |
- | **Image Generation with Gemini** | v8.31.0 | Native image generation using Gemini 2.0 Flash Experimental (`imagen-3.0-generate-002`). High-quality image synthesis directly from Google AI. | [Image Generation Guide](docs/image-generation-streaming.md) |
- | **HTTP/Streamable HTTP Transport** | v8.29.0 | Connect to remote MCP servers via HTTP with authentication headers, automatic retry with exponential backoff, and configurable rate limiting. | [HTTP Transport Guide](docs/mcp-http-transport.md) |
-
+ | Feature | Version | Description | Guide |
+ | ----------------------------------- | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------- |
+ | **External TracerProvider Support** | v8.43.0 | Integrate NeuroLink with existing OpenTelemetry instrumentation. Prevents duplicate registration conflicts. | [Observability Guide](docs/features/observability.md) |
+ | **Server Adapters** | v8.43.0 | Multi-framework HTTP server with Hono, Express, Fastify, Koa support. Full CLI for server management with foreground/background modes. | [Server Adapters Guide](docs/guides/server-adapters/index.md) |
+ | **Title Generation Events** | v8.38.0 | Emit `conversation:titleGenerated` event when conversation title is generated. Supports custom title prompts via `NEUROLINK_TITLE_PROMPT`. | [Conversation Memory Guide](docs/conversation-memory.md) |
+ | **Video Generation with Veo** | v8.32.0 | Video generation using Veo 3.1 (`veo-3.1`). Realistic video generation with many parameter options | [Video Generation Guide](docs/features/video-generation.md) |
+ | **Image Generation with Gemini** | v8.31.0 | Native image generation using Gemini 2.0 Flash Experimental (`imagen-3.0-generate-002`). High-quality image synthesis directly from Google AI. | [Image Generation Guide](docs/image-generation-streaming.md) |
+ | **HTTP/Streamable HTTP Transport** | v8.29.0 | Connect to remote MCP servers via HTTP with authentication headers, automatic retry with exponential backoff, and configurable rate limiting. | [HTTP Transport Guide](docs/mcp-http-transport.md) |
+
+ - **External TracerProvider Support** – Integrate NeuroLink with applications that already have OpenTelemetry instrumentation. Supports auto-detection and manual configuration. → [Observability Guide](docs/features/observability.md)
  - **Server Adapters** – Deploy NeuroLink as an HTTP API server with your framework of choice (Hono, Express, Fastify, Koa). Full CLI support with `serve` and `server` commands for foreground/background modes, route management, and OpenAPI generation. → [Server Adapters Guide](docs/guides/server-adapters/index.md)
  - **Title Generation Events** – Emit real-time events when conversation titles are auto-generated. Listen to `conversation:titleGenerated` for session tracking. → [Conversation Memory Guide](docs/conversation-memory.md#title-generation-events)
  - **Custom Title Prompts** – Customize conversation title generation with `NEUROLINK_TITLE_PROMPT` environment variable. Use `${userMessage}` placeholder for dynamic prompts. → [Conversation Memory Guide](docs/conversation-memory.md#customizing-the-title-prompt)
  - **Video Generation** – Transform images into 8-second videos with synchronized audio using Google Veo 3.1 via Vertex AI. Supports 720p/1080p resolutions, portrait/landscape aspect ratios. → [Video Generation Guide](docs/features/video-generation.md)
- - **Image Generation** – Generate images from text prompts using Gemini models via Vertex AI or Google AI Studio. Supports streaming mode with automatic file saving. → [Image Generation Guide](docs/IMAGE-GENERATION-STREAMING.md)
- - **HTTP/Streamable HTTP Transport for MCP** – Connect to remote MCP servers via HTTP with authentication headers, retry logic, and rate limiting. → [HTTP Transport Guide](docs/MCP-HTTP-TRANSPORT.md)
+ - **Image Generation** – Generate images from text prompts using Gemini models via Vertex AI or Google AI Studio. Supports streaming mode with automatic file saving. → [Image Generation Guide](docs/image-generation-streaming.md)
+ - **HTTP/Streamable HTTP Transport for MCP** – Connect to remote MCP servers via HTTP with authentication headers, retry logic, and rate limiting. → [HTTP Transport Guide](docs/mcp-http-transport.md)
  - 🧠 **Gemini 3 Preview Support** - Full support for gemini-3-flash-preview and gemini-3-pro-preview with extended thinking capabilities
  - **Structured Output with Zod Schemas** – Type-safe JSON generation with automatic validation using `schema` + `output.format: "json"` in `generate()`. → [Structured Output Guide](docs/features/structured-output.md)
  - **CSV File Support** – Attach CSV files to prompts for AI-powered data analysis with auto-detection. → [CSV Guide](docs/features/multimodal-chat.md#csv-file-support)
  - **PDF File Support** – Process PDF documents with native visual analysis for Vertex AI, Anthropic, Bedrock, AI Studio. → [PDF Guide](docs/features/pdf-support.md)
- - **LiteLLM Integration** – Access 100+ AI models from all major providers through unified interface. → [Setup Guide](docs/LITELLM-INTEGRATION.md)
- - **SageMaker Integration** – Deploy and use custom trained models on AWS infrastructure. → [Setup Guide](docs/SAGEMAKER-INTEGRATION.md)
+ - **LiteLLM Integration** – Access 100+ AI models from all major providers through unified interface. → [Setup Guide](docs/litellm-integration.md)
+ - **SageMaker Integration** – Deploy and use custom trained models on AWS infrastructure. → [Setup Guide](docs/sagemaker-integration.md)
  - **OpenRouter Integration** – Access 300+ models from OpenAI, Anthropic, Google, Meta, and more through a single unified API. → [Setup Guide](docs/getting-started/providers/openrouter.md)
  - **Human-in-the-loop workflows** – Pause generation for user approval/input before tool execution. → [HITL Guide](docs/features/hitl.md)
  - **Guardrails middleware** – Block PII, profanity, and unsafe content with built-in filtering. → [Guardrails Guide](docs/features/guardrails.md)
- - **Context summarization** – Automatic conversation compression for long-running sessions. → [Summarization Guide](docs/CONTEXT-SUMMARIZATION.md)
+ - **Context summarization** – Automatic conversation compression for long-running sessions. → [Summarization Guide](docs/context-summarization.md)
  - **Redis conversation export** – Export full session history as JSON for analytics and debugging. → [History Guide](docs/features/conversation-history.md)

  ```typescript
@@ -57,6 +57,8 @@ export declare class TelemetryHandler {
  isEnabled: boolean;
  functionId?: string;
  metadata?: Record<string, string | number | boolean>;
+ recordInputs?: boolean;
+ recordOutputs?: boolean;
  } | undefined;
  /**
  * Handle tool execution storage if available
@@ -147,6 +147,8 @@ export class TelemetryHandler {
  isEnabled: true,
  functionId,
  metadata,
+ recordInputs: true,
+ recordOutputs: true,
  };
  }
  /**
package/dist/index.d.ts CHANGED
@@ -42,10 +42,10 @@ export type { DynamicModelConfig, ModelRegistry } from "./types/modelTypes.js";
  import { NeuroLink } from "./neurolink.js";
  export { NeuroLink };
  export type { MCPServerInfo } from "./types/mcpTypes.js";
- export type { ObservabilityConfig, LangfuseConfig, OpenTelemetryConfig, } from "./types/observability.js";
+ export type { ObservabilityConfig, LangfuseConfig, OpenTelemetryConfig, LangfuseSpanAttributes, TraceNameFormat, } from "./types/observability.js";
  export { buildObservabilityConfigFromEnv } from "./utils/observabilityHelpers.js";
- import { initializeOpenTelemetry, shutdownOpenTelemetry, flushOpenTelemetry, getLangfuseHealthStatus, setLangfuseContext } from "./services/server/ai/observability/instrumentation.js";
- export { initializeOpenTelemetry, shutdownOpenTelemetry, flushOpenTelemetry, getLangfuseHealthStatus, setLangfuseContext, };
+ import { initializeOpenTelemetry, shutdownOpenTelemetry, flushOpenTelemetry, getLangfuseHealthStatus, setLangfuseContext, getLangfuseSpanProcessor, getTracerProvider, isOpenTelemetryInitialized, getSpanProcessors, createContextEnricher, isUsingExternalTracerProvider, getLangfuseContext, getTracer } from "./services/server/ai/observability/instrumentation.js";
+ export { initializeOpenTelemetry, shutdownOpenTelemetry, flushOpenTelemetry, getLangfuseHealthStatus, setLangfuseContext, getLangfuseSpanProcessor, getTracerProvider, isOpenTelemetryInitialized, getSpanProcessors, createContextEnricher, isUsingExternalTracerProvider, getLangfuseContext, getTracer, };
  export type { NeuroLinkMiddleware, MiddlewareContext, MiddlewareFactoryOptions, MiddlewarePreset, MiddlewareConfig, } from "./types/middlewareTypes.js";
  export { MiddlewareFactory } from "./middleware/factory.js";
  export declare const VERSION = "1.0.0";
package/dist/index.js CHANGED
@@ -47,9 +47,17 @@ export { dynamicModelProvider } from "./core/dynamicModels.js";
  import { NeuroLink } from "./neurolink.js";
  export { NeuroLink };
  export { buildObservabilityConfigFromEnv } from "./utils/observabilityHelpers.js";
- import { initializeOpenTelemetry, shutdownOpenTelemetry, flushOpenTelemetry, getLangfuseHealthStatus, setLangfuseContext, } from "./services/server/ai/observability/instrumentation.js";
+ import { initializeOpenTelemetry, shutdownOpenTelemetry, flushOpenTelemetry, getLangfuseHealthStatus, setLangfuseContext, getLangfuseSpanProcessor, getTracerProvider, isOpenTelemetryInitialized,
+ // NEW EXPORTS - External TracerProvider Support
+ getSpanProcessors, createContextEnricher, isUsingExternalTracerProvider,
+ // Enhanced context and tracing
+ getLangfuseContext, getTracer, } from "./services/server/ai/observability/instrumentation.js";
  import { initializeTelemetry as init, getTelemetryStatus as getStatus, } from "./telemetry/index.js";
- export { initializeOpenTelemetry, shutdownOpenTelemetry, flushOpenTelemetry, getLangfuseHealthStatus, setLangfuseContext, };
+ export { initializeOpenTelemetry, shutdownOpenTelemetry, flushOpenTelemetry, getLangfuseHealthStatus, setLangfuseContext, getLangfuseSpanProcessor, getTracerProvider, isOpenTelemetryInitialized,
+ // NEW EXPORTS - External TracerProvider Support
+ getSpanProcessors, createContextEnricher, isUsingExternalTracerProvider,
+ // Enhanced context and tracing
+ getLangfuseContext, getTracer, };
  export { MiddlewareFactory } from "./middleware/factory.js";
  // Version
  export const VERSION = "1.0.0";
@@ -57,6 +57,8 @@ export declare class TelemetryHandler {
  isEnabled: boolean;
  functionId?: string;
  metadata?: Record<string, string | number | boolean>;
+ recordInputs?: boolean;
+ recordOutputs?: boolean;
  } | undefined;
  /**
  * Handle tool execution storage if available
@@ -147,6 +147,8 @@ export class TelemetryHandler {
  isEnabled: true,
  functionId,
  metadata,
+ recordInputs: true,
+ recordOutputs: true,
  };
  }
  /**
@@ -42,10 +42,10 @@ export type { DynamicModelConfig, ModelRegistry } from "./types/modelTypes.js";
  import { NeuroLink } from "./neurolink.js";
  export { NeuroLink };
  export type { MCPServerInfo } from "./types/mcpTypes.js";
- export type { ObservabilityConfig, LangfuseConfig, OpenTelemetryConfig, } from "./types/observability.js";
+ export type { ObservabilityConfig, LangfuseConfig, OpenTelemetryConfig, LangfuseSpanAttributes, TraceNameFormat, } from "./types/observability.js";
  export { buildObservabilityConfigFromEnv } from "./utils/observabilityHelpers.js";
- import { initializeOpenTelemetry, shutdownOpenTelemetry, flushOpenTelemetry, getLangfuseHealthStatus, setLangfuseContext } from "./services/server/ai/observability/instrumentation.js";
- export { initializeOpenTelemetry, shutdownOpenTelemetry, flushOpenTelemetry, getLangfuseHealthStatus, setLangfuseContext, };
+ import { initializeOpenTelemetry, shutdownOpenTelemetry, flushOpenTelemetry, getLangfuseHealthStatus, setLangfuseContext, getLangfuseSpanProcessor, getTracerProvider, isOpenTelemetryInitialized, getSpanProcessors, createContextEnricher, isUsingExternalTracerProvider, getLangfuseContext, getTracer } from "./services/server/ai/observability/instrumentation.js";
+ export { initializeOpenTelemetry, shutdownOpenTelemetry, flushOpenTelemetry, getLangfuseHealthStatus, setLangfuseContext, getLangfuseSpanProcessor, getTracerProvider, isOpenTelemetryInitialized, getSpanProcessors, createContextEnricher, isUsingExternalTracerProvider, getLangfuseContext, getTracer, };
  export type { NeuroLinkMiddleware, MiddlewareContext, MiddlewareFactoryOptions, MiddlewarePreset, MiddlewareConfig, } from "./types/middlewareTypes.js";
  export { MiddlewareFactory } from "./middleware/factory.js";
  export declare const VERSION = "1.0.0";
package/dist/lib/index.js CHANGED
@@ -47,9 +47,17 @@ export { dynamicModelProvider } from "./core/dynamicModels.js";
  import { NeuroLink } from "./neurolink.js";
  export { NeuroLink };
  export { buildObservabilityConfigFromEnv } from "./utils/observabilityHelpers.js";
- import { initializeOpenTelemetry, shutdownOpenTelemetry, flushOpenTelemetry, getLangfuseHealthStatus, setLangfuseContext, } from "./services/server/ai/observability/instrumentation.js";
+ import { initializeOpenTelemetry, shutdownOpenTelemetry, flushOpenTelemetry, getLangfuseHealthStatus, setLangfuseContext, getLangfuseSpanProcessor, getTracerProvider, isOpenTelemetryInitialized,
+ // NEW EXPORTS - External TracerProvider Support
+ getSpanProcessors, createContextEnricher, isUsingExternalTracerProvider,
+ // Enhanced context and tracing
+ getLangfuseContext, getTracer, } from "./services/server/ai/observability/instrumentation.js";
  import { initializeTelemetry as init, getTelemetryStatus as getStatus, } from "./telemetry/index.js";
- export { initializeOpenTelemetry, shutdownOpenTelemetry, flushOpenTelemetry, getLangfuseHealthStatus, setLangfuseContext, };
+ export { initializeOpenTelemetry, shutdownOpenTelemetry, flushOpenTelemetry, getLangfuseHealthStatus, setLangfuseContext, getLangfuseSpanProcessor, getTracerProvider, isOpenTelemetryInitialized,
+ // NEW EXPORTS - External TracerProvider Support
+ getSpanProcessors, createContextEnricher, isUsingExternalTracerProvider,
+ // Enhanced context and tracing
+ getLangfuseContext, getTracer, };
  export { MiddlewareFactory } from "./middleware/factory.js";
  // Version
  export const VERSION = "1.0.0";
@@ -44,7 +44,7 @@ import { directToolsServer } from "./mcp/servers/agent/directToolsServer.js";
  // Import orchestration components
  import { ModelRouter } from "./utils/modelRouter.js";
  import { BinaryTaskClassifier } from "./utils/taskClassifier.js";
- import { initializeOpenTelemetry, shutdownOpenTelemetry, flushOpenTelemetry, getLangfuseHealthStatus, setLangfuseContext, } from "./services/server/ai/observability/instrumentation.js";
+ import { initializeOpenTelemetry, shutdownOpenTelemetry, flushOpenTelemetry, getLangfuseHealthStatus, setLangfuseContext, isOpenTelemetryInitialized, } from "./services/server/ai/observability/instrumentation.js";
  import { initializeMem0 } from "./memory/mem0Initializer.js";
  /**
  * NeuroLink - Universal AI Development Platform
@@ -635,7 +635,10 @@ Current user's request: ${currentInput}`;
  const langfuseInitStartTime = process.hrtime.bigint();
  try {
  const langfuseConfig = this.observabilityConfig?.langfuse;
- if (langfuseConfig?.enabled) {
+ // Check if we should use external provider mode - bypass enabled check
+ const useExternalProvider = langfuseConfig?.autoDetectExternalProvider === true ||
+ langfuseConfig?.useExternalTracerProvider === true;
+ if (langfuseConfig?.enabled || useExternalProvider) {
  logger.debug(`[NeuroLink] 📊 LOG_POINT_C019_LANGFUSE_INIT_START`, {
  logPoint: "C019_LANGFUSE_INIT_START",
  constructorId,
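The constructor-side check in this hunk means either flag can trigger observability initialization even when `enabled` is false. A hypothetical host-app configuration sketch (only `enabled`, `autoDetectExternalProvider`, and `useExternalTracerProvider` appear in the diff; the surrounding `observability.langfuse` option shape is an assumption inferred from `this.observabilityConfig?.langfuse`):

```typescript
// Hypothetical: enable external-provider auto-detection without
// flipping `enabled`. Field names come from the hunk above; the
// wrapper object shape is assumed, not confirmed by this diff.
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
  observability: {
    langfuse: {
      enabled: false,
      autoDetectExternalProvider: true, // bypasses the enabled check
    },
  },
});
```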
@@ -1212,7 +1215,12 @@ Current user's request: ${currentInput}`;
  * Centralized utility to avoid duplication across providers
  */
  isTelemetryEnabled() {
- return this.observabilityConfig?.langfuse?.enabled || false;
+ // Check if observability config enables telemetry
+ if (this.observabilityConfig?.langfuse?.enabled) {
+ return true;
+ }
+ // Check if OpenTelemetry was initialized (by this or external app)
+ return isOpenTelemetryInitialized();
  }
  /**
  * Public method to initialize Langfuse observability
@@ -37,9 +37,9 @@ export declare const AgentExecuteRequestSchema: z.ZodObject<{
  };
  provider?: string | undefined;
  model?: string | undefined;
+ userId?: string | undefined;
  tools?: string[] | undefined;
  sessionId?: string | undefined;
- userId?: string | undefined;
  temperature?: number | undefined;
  maxTokens?: number | undefined;
  stream?: boolean | undefined;
@@ -52,9 +52,9 @@ export declare const AgentExecuteRequestSchema: z.ZodObject<{
  };
  provider?: string | undefined;
  model?: string | undefined;
+ userId?: string | undefined;
  tools?: string[] | undefined;
  sessionId?: string | undefined;
- userId?: string | undefined;
  temperature?: number | undefined;
  maxTokens?: number | undefined;
  stream?: boolean | undefined;
@@ -71,12 +71,12 @@ export declare const ToolExecuteRequestSchema: z.ZodObject<{
  }, "strip", z.ZodTypeAny, {
  name: string;
  arguments: Record<string, unknown>;
- sessionId?: string | undefined;
  userId?: string | undefined;
+ sessionId?: string | undefined;
  }, {
  name: string;
- sessionId?: string | undefined;
  userId?: string | undefined;
+ sessionId?: string | undefined;
  arguments?: Record<string, unknown> | undefined;
  }>;
  /**
@@ -8,7 +8,50 @@
  */
  import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
  import { LangfuseSpanProcessor } from "@langfuse/otel";
+ import type { SpanProcessor } from "@opentelemetry/sdk-trace-base";
+ import { trace } from "@opentelemetry/api";
  import type { LangfuseConfig } from "../../../../types/observability.js";
+ /**
+ * Extended context for Langfuse spans
+ * Supports all Langfuse trace attributes for rich observability
+ */
+ type LangfuseContext = {
+ userId?: string | null;
+ sessionId?: string | null;
+ /** Conversation/thread identifier for grouping related traces */
+ conversationId?: string | null;
+ /** Request identifier for correlating with application logs */
+ requestId?: string | null;
+ /** Custom trace name for better organization in Langfuse UI */
+ traceName?: string | null;
+ /** Custom metadata to attach to spans */
+ metadata?: Record<string, unknown> | null;
+ /**
+ * Explicit operation name (e.g., "ai.streamText", "chat", "embeddings")
+ *
+ * If set, this overrides auto-detection from the span name.
+ * Use this when you want a custom operation name that doesn't match
+ * the auto-detected Vercel AI SDK operation.
+ *
+ * @example
+ * await setLangfuseContext({
+ * userId: "user@email.com",
+ * operationName: "customer-support-chat"
+ * }, async () => {
+ * // Trace name: "user@email.com:customer-support-chat"
+ * });
+ */
+ operationName?: string | null;
+ /**
+ * Override global autoDetectOperationName setting for this context.
+ *
+ * When undefined, uses the global setting from LangfuseConfig.
+ * Set to false to disable auto-detection for this specific context.
+ *
+ * @default undefined (uses global setting, which defaults to true)
+ */
+ autoDetectOperationName?: boolean;
+ };
  /**
  * Initialize OpenTelemetry with Langfuse span processor
  *
@@ -17,6 +60,10 @@ import type { LangfuseConfig } from "../../../../types/observability.js";
  * 2. Creating a NodeTracerProvider with service metadata and span processor
  * 3. Registering the provider globally for AI SDK to use
  *
+ * NEW: If useExternalTracerProvider is true or autoDetectExternalProvider detects
+ * an existing provider, steps 2 and 3 are skipped. The span processors are still
+ * created and can be retrieved via getSpanProcessors().
+ *
  * @param config - Langfuse configuration passed from parent application
  */
  export declare function initializeOpenTelemetry(config: LangfuseConfig): void;
@@ -42,33 +89,109 @@ export declare function getTracerProvider(): NodeTracerProvider | null;
  export declare function isOpenTelemetryInitialized(): boolean;
  /**
  * Get health status for Langfuse observability
+ *
+ * @returns Health status object with initialization and configuration details
  */
  export declare function getLangfuseHealthStatus(): {
- isHealthy: boolean | undefined;
+ isHealthy: boolean;
  initialized: boolean;
  credentialsValid: boolean;
  enabled: boolean;
  hasProcessor: boolean;
- config: {
+ usingExternalProvider: boolean;
+ config?: {
  baseUrl: string;
  environment: string;
  release: string;
- } | undefined;
+ };
  };
  /**
  * Set user and session context for Langfuse spans in the current async context
  *
  * Merges the provided context with existing AsyncLocalStorage context. If a callback is provided,
- * the context is scoped to that callback execution. Without a callback, the context applies to
- * the current execution context and its children.
+ * the context is scoped to that callback execution and returns the callback's result.
+ * Without a callback, the context applies to the current execution context and its children.
  *
  * Uses AsyncLocalStorage to properly scope context per request, avoiding race conditions
  * in concurrent scenarios.
  *
- * @param context - Object containing optional userId and/or sessionId to merge with existing context
+ * @param context - Object containing context fields to merge with existing context
  * @param callback - Optional callback to run within the context scope. If omitted, context applies to current execution
+ * @returns The callback's return value if provided, otherwise void
+ *
+ * @example
+ * // With callback - returns the result
+ * const result = await setLangfuseContext({ userId: "user123" }, async () => {
+ * return await generateText({ model: "gpt-4", prompt: "Hello" });
+ * });
+ *
+ * @example
+ * // Without callback - sets context for current execution
+ * await setLangfuseContext({ sessionId: "session456", traceName: "chat-completion" });
  */
- export declare function setLangfuseContext(context: {
+ export declare function setLangfuseContext<T = void>(context: {
  userId?: string | null;
  sessionId?: string | null;
- }, callback?: () => void | Promise<void>): Promise<void>;
+ conversationId?: string | null;
+ requestId?: string | null;
+ traceName?: string | null;
+ metadata?: Record<string, unknown> | null;
+ /** Explicit operation name (overrides auto-detection) */
+ operationName?: string | null;
+ /** Override global autoDetectOperationName for this context */
+ autoDetectOperationName?: boolean;
+ }, callback?: () => T | Promise<T>): Promise<T | void>;
+ /**
+ * Get the current Langfuse context from AsyncLocalStorage
+ *
+ * Returns the current context including userId, sessionId, conversationId,
+ * requestId, traceName, and metadata. Returns undefined if no context is set.
+ *
+ * @returns The current LangfuseContext or undefined
+ *
+ * @example
+ * const context = getLangfuseContext();
+ * console.log(context?.userId, context?.sessionId);
+ */
+ export declare function getLangfuseContext(): LangfuseContext | undefined;
+ /**
+ * Get an OpenTelemetry Tracer for creating custom spans
+ *
+ * This allows applications to create their own spans that will be
+ * processed by the same span processors (ContextEnricher + LangfuseSpanProcessor).
+ *
+ * @param name - Tracer name, defaults to "neurolink"
+ * @param version - Tracer version, optional
+ * @returns OpenTelemetry Tracer instance
+ *
+ * @example
+ * const tracer = getTracer("my-app");
+ * const span = tracer.startSpan("custom-operation");
+ * try {
+ * // ... do work
+ * } finally {
+ * span.end();
+ * }
+ */
+ export declare function getTracer(name?: string, version?: string): ReturnType<typeof trace.getTracer>;
+ /**
+ * Create a new ContextEnricher span processor
+ * Use this when useExternalTracerProvider is true to add to your own TracerProvider
+ *
+ * @returns A new ContextEnricher instance
+ */
+ export declare function createContextEnricher(): SpanProcessor;
+ /**
+ * Get all span processors that NeuroLink would use
+ * Convenience function that returns [ContextEnricher, LangfuseSpanProcessor]
+ *
+ * @returns Array of span processors, or empty array if not initialized
+ */
+ export declare function getSpanProcessors(): SpanProcessor[];
+ /**
+ * Check if using external TracerProvider mode
+ *
+ * @returns true if operating in external TracerProvider mode
+ */
+ export declare function isUsingExternalTracerProvider(): boolean;
+ export {};
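Taken together, the external-provider flags plus `getSpanProcessors()` let a host application keep ownership of the global TracerProvider while NeuroLink only contributes span processors. A minimal sketch under stated assumptions: the three peer dependencies are installed, the `as LangfuseConfig` cast stands in for config fields not shown in this diff, and the `spanProcessors` constructor option requires a recent @opentelemetry/sdk-trace-node (older SDK 1.x versions used `addSpanProcessor` instead):

```typescript
// Sketch: host app owns the TracerProvider; NeuroLink contributes
// [ContextEnricher, LangfuseSpanProcessor] via getSpanProcessors().
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import type { LangfuseConfig } from "@juspay/neurolink";
import {
  initializeOpenTelemetry,
  getSpanProcessors,
  isUsingExternalTracerProvider,
} from "@juspay/neurolink";

// External-provider mode: NeuroLink skips creating/registering its own
// provider but still builds its span processors.
initializeOpenTelemetry({
  enabled: true,
  useExternalTracerProvider: true,
} as LangfuseConfig); // cast: full LangfuseConfig shape is not shown in this diff

// Attach NeuroLink's processors to the app-owned provider, then register
// it globally so the AI SDK (and getTracer) pick it up.
const provider = new NodeTracerProvider({
  spanProcessors: getSpanProcessors(),
});
provider.register();

console.log("external provider mode:", isUsingExternalTracerProvider());
```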