langfuse-ruby 0.1.2 → 0.1.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 0bb8cc1f797c0b240830c5f909665dde61bb8a5540f1db8a04393ae0d741ee44
- data.tar.gz: bc5b9e1dc24e2861ac66955bec4dccd25bbadaf1a3c5f59ddbacd784a50b2180
+ metadata.gz: 0bb59924d3a9dcb1cb99841c6d07dd1277838ffee389d13ada1c8b03cafeb524
+ data.tar.gz: c5ba1846cd51fc3dc82fb43dd335d88394549652531a5243101ccd43ef71b3fa
  SHA512:
- metadata.gz: d5486239447652d3b97eb6200bc2f24ee829bef24da86409747f6a250fcc1126ce68841673b0e67da56a75e08ff4ec1bc9edf4b241c3431b855abdf2befdd7b9
- data.tar.gz: 88b61075d0a394629dec59bbec509a7f4c80b0269a887dcef866b3a8a250fd75b362fe84fc6247e72b5305935702a7175c1db825461bb05270145912d6e8f79e
+ metadata.gz: 9e448848c65cb8cd6f7ab7e4e566787bcab4aa8c43f20da7cc7a7a39e24e888c734c23644682f080635a0288b40fc4e3e169d8576445fdd07dde8d676faaccca
+ data.tar.gz: 9d9d1e88bf2e1161bd1d7f316e17600904169344b679839992a227eb35ac48e7edbe188ce782e3757f9f03a8a39104eac8c171a23f7bdc0c54b707754dfd87e8
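The digests above can be reproduced locally from the published gem. A minimal sketch using only the Ruby standard library (the `gem fetch` step and file name assume version 0.1.4):

```ruby
# Fetch the released gem first, e.g. `gem fetch langfuse-ruby -v 0.1.4`.
# A .gem file is a plain tar archive; recompute the SHA256 digests that
# checksums.yaml records for the two archives bundled inside it.
require 'rubygems/package'
require 'digest'

File.open('langfuse-ruby-0.1.4.gem', 'rb') do |gem_file|
  reader = Gem::Package::TarReader.new(gem_file)
  reader.each do |entry|
    next unless %w[metadata.gz data.tar.gz].include?(entry.full_name)

    puts "#{entry.full_name}: #{Digest::SHA256.hexdigest(entry.read)}"
  end
end
```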
@@ -2,17 +2,18 @@ name: CI
 
  on:
  push:
- branches: [ main ]
+ branches: [ main, master ]
  pull_request:
- branches: [ main ]
+ branches: [ main, master ]
 
  jobs:
  test:
  runs-on: ubuntu-latest
+
  strategy:
  matrix:
- ruby-version: ['2.7', '3.0', '3.1', '3.2']
-
+ ruby-version: ['2.7', '3.0', '3.1', '3.2', '3.3']
+
  steps:
  - uses: actions/checkout@v4
 
@@ -28,11 +29,14 @@ jobs:
  - name: Run offline tests
  run: ruby test_offline.rb
 
- - name: Check gem build
- run: gem build langfuse.gemspec
-
- lint:
+ - name: Run RuboCop
+ run: bundle exec rubocop
+ continue-on-error: true
+
+ build:
  runs-on: ubuntu-latest
+ needs: test
+
  steps:
  - uses: actions/checkout@v4
 
@@ -42,6 +46,8 @@ jobs:
  ruby-version: '3.2'
  bundler-cache: true
 
- - name: Run RuboCop
- run: bundle exec rubocop
- continue-on-error: true
+ - name: Build gem
+ run: gem build langfuse-ruby.gemspec
+
+ - name: Verify gem can be installed
+ run: gem install langfuse-ruby-*.gem
data/CHANGELOG.md CHANGED
@@ -5,7 +5,42 @@ All notable changes to this project will be documented in this file.
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
- ## [0.1.1] - 2025-01-12
+ ## [Unreleased]
+
+ ## [0.1.4] - 2025-07-29
+
+ ### Added
+ - Added support for `trace-update` event type in Langfuse ingestion API
+ - Added support for `event-create` event type in Langfuse ingestion API
+ - New `Event` class for creating generic events within traces, spans, and generations
+ - Added `event()` method to `Client`, `Trace`, `Span`, and `Generation` classes
+ - Enhanced event validation to include all supported Langfuse event types
+ - New example file `examples/event_usage.rb` demonstrating event functionality
+
+ ### Fixed
+ - Improved offline test error handling and authentication validation
+ - Enhanced error handling tests with proper configuration management
+ - Fixed prompt template validation tests in offline mode
+ - Better error message handling for authentication failures
+
+ ### Improved
+ - More comprehensive error handling test coverage
+ - Better test isolation and cleanup procedures
+ - Enhanced debugging capabilities for offline testing
+
+ ## [0.1.3] - 2025-07-13
+
+ ### Fixed
+ - Enhanced event data validation and debugging capabilities
+ - More detailed error messages for event structure validation failures
+
+ ## [0.1.2] - 2025-07-12
+
+ ### Fixed
+ - Enhanced event data validation and debugging capabilities
+ - More detailed error messages for event structure validation failures
+
+ ## [0.1.1] - 2025-07-12
 
  ### Fixed
  - Improved error handling for `get_prompt` method when prompt doesn't exist
@@ -21,7 +56,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
  - Updated gemspec metadata to avoid RubyGems warnings
  - Improved documentation with clearer error handling examples
 
- ## [0.1.0] - 2025-01-12
+ ## [0.1.0] - 2025-07-12
 
  ### Added
  - Initial release of Langfuse Ruby SDK
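The `Event` class and `event()` methods that 0.1.4 adds (listed in the CHANGELOG above) appear in the README changes further down in this diff; a minimal end-to-end sketch based on those examples (the `require` path is an assumption):

```ruby
require 'langfuse' # require path assumed from the gem's module name

client = Langfuse.new(
  public_key: ENV['LANGFUSE_PUBLIC_KEY'],
  secret_key: ENV['LANGFUSE_SECRET_KEY']
)

trace = client.trace(name: 'user-session', user_id: 'user123')

# Generic event attached to the trace (new in 0.1.4)
trace.event(
  name: 'user_action',
  input: { action: 'login', user_id: '123' },
  output: { success: true }
)

client.flush
```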
data/Gemfile.lock CHANGED
@@ -1,10 +1,10 @@
  PATH
  remote: .
  specs:
- langfuse-ruby (0.1.2)
+ langfuse-ruby (0.1.4)
  concurrent-ruby (~> 1.0)
- faraday (~> 2.0)
- faraday-net_http (~> 3.0)
+ faraday (>= 1.8, < 3.0)
+ faraday-net_http (>= 1.0, < 4.0)
  json (~> 2.0)
 
  GEM
data/README.md CHANGED
@@ -1,7 +1,9 @@
  # Langfuse Ruby SDK
 
  [![Gem Version](https://badge.fury.io/rb/langfuse-ruby.svg)](https://badge.fury.io/rb/langfuse-ruby)
- [![Build Status](https://github.com/ai-firstly/langfuse-ruby/workflows/CI/badge.svg)](https://github.com/ai-firstly/langfuse-ruby/actions)
+ [![CI](https://github.com/ai-firstly/langfuse-ruby/workflows/CI/badge.svg)](https://github.com/ai-firstly/langfuse-ruby/actions/workflows/ci.yml)
+ [![Ruby](https://img.shields.io/badge/ruby-%3E%3D%202.7.0-red.svg)](https://www.ruby-lang.org/)
+ [![License](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
 
  Ruby SDK for [Langfuse](https://langfuse.com) - the open-source LLM engineering platform. This SDK provides comprehensive tracing, prompt management, and evaluation capabilities for LLM applications.
 
@@ -10,6 +12,7 @@ Ruby SDK for [Langfuse](https://langfuse.com) - the open-source LLM engineering
  - 🔍 **Tracing**: Complete observability for LLM applications with traces, spans, and generations
  - 📝 **Prompt Management**: Version control and deployment of prompts with caching
  - 📊 **Evaluation**: Built-in evaluators and custom scoring capabilities
+ - 🎯 **Events**: Generic event tracking for custom application events and logging
  - 🚀 **Async Processing**: Background event processing with automatic batching
  - 🔒 **Type Safety**: Comprehensive error handling and validation
  - 🎯 **Framework Integration**: Easy integration with popular Ruby frameworks
@@ -66,7 +69,6 @@ trace = client.trace(
  name: "chat-completion",
  user_id: "user123",
  session_id: "session456",
- input: { message: "Hello, world!" },
  metadata: { environment: "production" }
  )
 
@@ -75,15 +77,12 @@ generation = trace.generation(
  name: "openai-completion",
  model: "gpt-3.5-turbo",
  input: [{ role: "user", content: "Hello, world!" }],
- output: { content: "Hello! How can I help you today?" },
- usage: { prompt_tokens: 10, completion_tokens: 15, total_tokens: 25 },
  model_parameters: { temperature: 0.7, max_tokens: 100 }
  )
 
- # Update trace with final output
- trace.update(
- output: { response: "Hello! How can I help you today?" }
- )
+ generation.end(output: 'Hello! How can I help you today?', usage: { prompt_tokens: 10, completion_tokens: 15, total_tokens: 25 })
+
+ trace.update(output: 'Hello! How can I help you today?')
 
  # Flush events (optional - happens automatically)
  client.flush
@@ -130,12 +129,40 @@ llm_gen = answer_span.generation(
  input: [
  { role: "system", content: "Answer based on context" },
  { role: "user", content: "What is machine learning?" }
- ],
- output: { content: "Machine learning is a subset of AI..." },
- usage: { prompt_tokens: 50, completion_tokens: 30, total_tokens: 80 }
+ ]
  )
 
- answer_span.end(output: { answer: "Machine learning is a subset of AI..." })
+ answer_span.end(output: { answer: "Machine learning is a subset of AI..." }, usage: { prompt_tokens: 50, completion_tokens: 30, total_tokens: 80 })
+ ```
+
+ ## Events
+
+ Create generic events for custom application events and logging:
+
+ ```ruby
+ # Create events from trace
+ event = trace.event(
+ name: "user_action",
+ input: { action: "login", user_id: "123" },
+ output: { success: true },
+ metadata: { ip: "192.168.1.1" }
+ )
+
+ # Create events from spans or generations
+ validation_event = span.event(
+ name: "validation_check",
+ input: { rules: ["required", "format"] },
+ output: { valid: true, warnings: [] }
+ )
+
+ # Direct event creation
+ event = client.event(
+ trace_id: trace.id,
+ name: "audit_log",
+ input: { operation: "data_export" },
+ output: { status: "completed" },
+ level: "INFO"
+ )
  ```
 
  ## Prompt Management
@@ -304,7 +331,9 @@ client = Langfuse.new(
  host: "https://your-instance.langfuse.com",
  debug: true, # Enable debug logging
  timeout: 30, # Request timeout in seconds
- retries: 3 # Number of retry attempts
+ retries: 3, # Number of retry attempts
+ flush_interval: 30, # Event flush interval in seconds (default: 5)
+ auto_flush: true # Enable automatic flushing (default: true)
  )
  ```
 
@@ -316,6 +345,76 @@ You can also configure the client using environment variables:
  export LANGFUSE_PUBLIC_KEY="pk-lf-..."
  export LANGFUSE_SECRET_KEY="sk-lf-..."
  export LANGFUSE_HOST="https://cloud.langfuse.com"
+ export LANGFUSE_FLUSH_INTERVAL=5
+ export LANGFUSE_AUTO_FLUSH=true
+ ```
+
+ ### Automatic Flush Control
+
+ By default, the Langfuse client automatically flushes events to the server at regular intervals using a background thread. You can control this behavior:
+
+ #### Enable/Disable Auto Flush
+
+ ```ruby
+ # Enable automatic flushing (default)
+ client = Langfuse.new(
+ public_key: "pk-lf-...",
+ secret_key: "sk-lf-...",
+ auto_flush: true,
+ flush_interval: 5 # Flush every 5 seconds
+ )
+
+ # Disable automatic flushing for manual control
+ client = Langfuse.new(
+ public_key: "pk-lf-...",
+ secret_key: "sk-lf-...",
+ auto_flush: false
+ )
+
+ # Manual flush when auto_flush is disabled
+ client.flush
+ ```
+
+ #### Global Configuration
+
+ ```ruby
+ Langfuse.configure do |config|
+ config.auto_flush = false # Disable auto flush globally
+ config.flush_interval = 10
+ end
+ ```
+
+ #### Environment Variable
+
+ ```bash
+ export LANGFUSE_AUTO_FLUSH=false
+ ```
+
+ #### Use Cases
+
+ **Auto Flush Enabled (Default)**
+ - Best for most applications
+ - Events are sent automatically
+ - No manual management required
+
+ **Auto Flush Disabled**
+ - Better performance for batch operations
+ - More control over when events are sent
+ - Requires manual flush calls
+ - Useful for high-frequency operations
+
+ ```ruby
+ # Example: Batch processing with manual flush
+ client = Langfuse.new(auto_flush: false)
+
+ # Process many items
+ 1000.times do |i|
+ trace = client.trace(name: "batch-item-#{i}")
+ # ... process item
+ end
+
+ # Flush all events at once
+ client.flush
  ```
 
  ### Shutdown
@@ -356,9 +455,7 @@ class ChatController < ApplicationController
 
  # Your LLM logic here
  response = generate_response(params[:message])
-
- trace.update(output: { response: response })
-
+
  render json: { response: response }
  end
  end
@@ -370,19 +467,17 @@ end
  class LLMProcessingJob < ApplicationJob
  def perform(user_id, message)
  client = Langfuse.new
-
+
  trace = client.trace(
  name: "background-llm-processing",
  user_id: user_id,
  input: { message: message },
  metadata: { job_class: self.class.name }
  )
-
+
  # Process with LLM
  result = process_with_llm(message)
-
- trace.update(output: result)
-
+
  # Ensure events are flushed
  client.flush
  end
@@ -0,0 +1,202 @@
+ # Type Validation Error Troubleshooting Guide
+
+ ## Problem Description
+
+ When you encounter the following error:
+
+ ```json
+ {
+ "id": "xxx",
+ "status": 400,
+ "message": "Invalid request data",
+ "error": [
+ {
+ "code": "invalid_union",
+ "errors": [],
+ "note": "No matching discriminator",
+ "path": ["type"],
+ "message": "Invalid input"
+ }
+ ]
+ }
+ ```
+
+ it means that Langfuse's server-side API validation found that the event's `type` field does not match the expected format.
+
+ ## Root Causes
+
+ 1. **Invalid event type**: the event type sent is not in the list supported by the server
+ 2. **Incorrect event data structure**: the event's data does not match the structure required for that type
+ 3. **Data serialization problems**: something went wrong while the event data was being serialized
+
+ ## Common Fix Example
+
+ ### 1. Invalid event type
+
+ ```ruby
+ client.trace(name: "my-trace", user_id: "user-123", input: { query: "Hello" })
+ ```
+
+ ## Solutions
+
+ ### 1. Enable debug mode
+
+ ```ruby
+ client = Langfuse.new(
+ public_key: "your-key",
+ secret_key: "your-secret",
+ debug: true # Enable debug mode
+ )
+ ```
+
+ Debug mode shows:
+ - the event types being sent
+ - the event data structure
+ - detailed error messages
+
+ ### 2. Check the supported event types
+
+ Currently supported event types:
+ - `trace-create`
+ - `generation-create`
+ - `generation-update`
+ - `span-create`
+ - `span-update`
+ - `event-create`
+ - `score-create`
+
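If you assemble ingestion payloads by hand, a small client-side guard against this list catches typos before the server rejects them. A minimal sketch (the `SUPPORTED_EVENT_TYPES` constant and `validate_event_type!` helper below are illustrative, not part of the SDK):

```ruby
# Mirrors the list above; raise early instead of waiting for a 400 "invalid_union" reply.
SUPPORTED_EVENT_TYPES = %w[
  trace-create generation-create generation-update
  span-create span-update event-create score-create
].freeze

def validate_event_type!(type)
  return type if SUPPORTED_EVENT_TYPES.include?(type)

  raise ArgumentError, "Unsupported Langfuse event type: #{type.inspect}"
end

validate_event_type!('event-create')    # => "event-create"
# validate_event_type!('trace-created') # would raise ArgumentError
```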
+ ### 3. Validate the event data
+
+ Make sure the event data contains the required fields:
+
+ #### Trace events
+ ```ruby
+ {
+ id: "uuid",
+ name: "trace-name",
+ user_id: "user-id",
+ input: { ... },
+ metadata: { ... },
+ tags: [...],
+ timestamp: "2025-01-01T00:00:00.000Z"
+ }
+ ```
+
+ #### Generation events
+ ```ruby
+ {
+ id: "uuid",
+ trace_id: "trace-uuid",
+ name: "generation-name",
+ model: "gpt-3.5-turbo",
+ input: [...],
+ output: { ... },
+ usage: { ... },
+ metadata: { ... }
+ }
+ ```
+
+ #### Span events
+ ```ruby
+ {
+ id: "uuid",
+ trace_id: "trace-uuid",
+ name: "span-name",
+ start_time: "2025-01-01T00:00:00.000Z",
+ end_time: "2025-01-01T00:00:01.000Z",
+ input: { ... },
+ output: { ... },
+ metadata: { ... }
+ }
+ ```
+
+ #### Event events
+ ```ruby
+ {
+ id: "uuid",
+ trace_id: "trace-uuid",
+ name: "event-name",
+ input: { ... },
+ output: { ... },
+ metadata: { ... }
+ }
+ ```
+
+ #### Score events
+ ```ruby
+ {
+ id: "uuid",
+ trace_id: "trace-uuid",
+ name: "score-name",
+ value: 0.9
+ }
+ ```
+
+ ### 4. Check network and authentication
+
+ Make sure that:
+ - the API keys are correct
+ - the network connection is working
+ - the server endpoint is reachable
+
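A quick reachability check can be scripted before digging deeper. A minimal sketch (the `/api/public/health` path is an assumption about the Langfuse server API; adjust it for your deployment):

```ruby
require 'net/http'
require 'uri'

# The health path below is assumed; self-hosted instances may differ.
host = ENV.fetch('LANGFUSE_HOST', 'https://cloud.langfuse.com')
response = Net::HTTP.get_response(URI("#{host}/api/public/health"))
puts "Langfuse reachable: HTTP #{response.code}"

# For credentials, wrap a trivial trace plus client.flush in the begin/rescue
# pattern shown in section 5; bad keys surface there as API errors.
```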
+ ### 5. Use error handling
+
+ ```ruby
+ begin
+ client.flush
+ rescue Langfuse::ValidationError => e
+ if e.message.include?('Event type validation failed')
+ puts "Type validation error: #{e.message}"
+ # Check the event data format
+ else
+ puts "Other validation error: #{e.message}"
+ end
+ rescue Langfuse::APIError => e
+ puts "API error: #{e.message}"
+ end
+ ```
+
+ ## Preventive Measures
+
+ 1. **Use the official SDK methods**: avoid constructing event data by hand
+ 2. **Validate data**: check data integrity before sending
+ 3. **Monitor errors**: implement appropriate error handling and monitoring
+ 4. **Use a test environment**: verify the integration in a test environment first
+ 5. **Stay up to date**: update the SDK to the latest version regularly
+
+ ## Example Code
+
+ ```ruby
+ # Correct usage
+ client = Langfuse.new(
+ public_key: "pk-lf-xxx",
+ secret_key: "sk-lf-xxx",
+ debug: true
+ )
+
+ # Create a trace
+ trace = client.trace(
+ name: "my-trace",
+ user_id: "user-123",
+ input: { query: "Hello" }
+ )
+
+ # Create a generation
+ generation = trace.generation(
+ name: "my-generation",
+ model: "gpt-3.5-turbo",
+ input: [{ role: "user", content: "Hello" }],
+ output: { content: "Hi there!" }
+ )
+
+ # Flush events safely
+ begin
+ client.flush
+ puts "Events sent successfully"
+ rescue => e
+ puts "Failed to send events: #{e.message}"
+ end
+ ```
+
+ ## Contacting Support
+
+ If the problem persists, please provide:
+ 1. the complete error message
+ 2. the debug-mode output
+ 3. the relevant code snippet
+ 4. the SDK version
+
+ ## Changelog
+
+ - v0.1.1: Improved the readability of error messages
+ - v0.1.0: Initial version