tracekit 0.2.2 → 0.2.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 7a2a8b53b2c100727c3161bd201a6e310135d603c7ac7ab1e8370417a4ff2342
-  data.tar.gz: 710eb5980307b7a4e82a698a86ff1dea131af93a24b0785f9d88c07a1cb4288a
+  metadata.gz: 9133bb6f1112ae31a4732cf52a12d209fa908e6f0acb5af92d3bf566ecc99ff5
+  data.tar.gz: dd8eeb7476eef7bc591833194b8805c9f0c74f8c1b8e389a4b34834ceea5c201
 SHA512:
-  metadata.gz: 7548453f27b8b0781cda582feb75c25479e5abb5d2cf670c969055402dd6cea6d15582df725038112e35f277045d13baa6b82645295aa8b8a348349b4a0a0d17
-  data.tar.gz: 9e103dc9c20d0a00182bc403df79cb7fe8043bc58adf68a63a76996ed0952183a5251b94f6c35fdc47957e71e1934ab4365d630a1dd2268d1c85afec71668727
+  metadata.gz: 10f4ff9f1123866d67485d7b02a1224a22abc9f3e05814064d8057103172c7a13a7a527a8626b612a622df3feed3f9ec0ddb632550d669a1a62f46a18354c86f
+  data.tar.gz: 3b9c8a7b79eb316ef4322b1b54deba4bc9e6b3538ff2c3fdefb6b4cfff5b394e6003d4dcbe225d837f2da5fa6f246044e1c6052d539b6e7a1063317c71b639a8
data/CHANGELOG.md CHANGED
@@ -5,6 +5,22 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [0.2.3] - 2026-03-21
+
+### Added
+- LLM auto-instrumentation for OpenAI and Anthropic APIs via Module#prepend
+- Streaming support for both OpenAI (SSE chunks) and Anthropic (SSE events) chat completions
+- Automatic capture of gen_ai.* semantic convention attributes (model, provider, tokens, cost, latency, finish_reason)
+- Content capture option for request/response messages (TRACEKIT_LLM_CAPTURE_CONTENT env var)
+- Tool call detection and instrumentation for function calling
+- PII scrubbing for captured LLM content
+- Provider auto-detection via LoadError handling
+- StreamWrapper for OpenAI and AnthropicStreamWrapper for Anthropic streaming responses
+- Anthropic cache token tracking (cache_creation_input_tokens, cache_read_input_tokens)
+
+### Changed
+- SDK init auto-detects and patches OpenAI::Client#chat and Anthropic::Client::Messages#create when gems are present
+
 ## [0.1.0] - 2024-02-04
 
 ### Added
@@ -136,3 +152,4 @@ This is the first production-ready release of the TraceKit Ruby SDK. It provides
 ---
 
 [0.1.0]: https://github.com/Tracekit-Dev/ruby-sdk/releases/tag/v0.1.0
+[0.2.3]: https://github.com/Tracekit-Dev/ruby-sdk/releases/tag/v0.2.3
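The changelog entry "Provider auto-detection via LoadError handling" describes a common Ruby idiom: attempt to `require` the provider gem and treat a `LoadError` as "not installed". A minimal sketch of that idiom follows; `gem_available?` is an illustrative name, not TraceKit's actual internal API:

```ruby
# Illustrative LoadError-based gem detection (not TraceKit's real internals).
# require raises LoadError when the gem cannot be found, so rescuing it
# turns "is this provider gem installed?" into a boolean check.
def gem_available?(name)
  require name
  true
rescue LoadError
  false
end

gem_available?("json")                # => true (stdlib gem, always present)
gem_available?("no_such_gem_xyz_123") # => false
```

An SDK built this way can skip patching a provider entirely when its gem is absent, instead of failing at init.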
data/README.md CHANGED
@@ -24,6 +24,7 @@ TraceKit Ruby SDK provides production-ready distributed tracing, metrics, and co
 - **Code Monitoring**: Live production debugging with non-breaking snapshots
 - **Security Scanning**: Automatic detection of sensitive data (PII, credentials)
 - **Local UI Auto-Detection**: Automatically sends traces to local TraceKit UI
+- **LLM Auto-Instrumentation**: Zero-config tracing of OpenAI and Anthropic API calls via Module#prepend
 - **Rails Auto-Configuration**: Zero-configuration setup via Railtie
 - **Rack Middleware**: Automatic request instrumentation for any Rack application
 - **Thread-Safe Metrics**: Concurrent metric collection with automatic buffering
@@ -377,6 +378,100 @@ sdk.capture_snapshot("process-data", { batch_size: 100 })
377
378
  - The SDK automatically retries after the cooldown period
378
379
  - Thread-safe via `Mutex` — safe for multi-threaded Ruby applications (Puma, Sidekiq)
379
380
 
381
+ ## LLM Instrumentation
382
+
383
+ TraceKit automatically instruments OpenAI and Anthropic API calls when the gems are present. No manual setup required — the SDK patches clients at init via `Module#prepend`.
384
+
385
+ ### Supported Gems
386
+
387
+ - **[ruby-openai](https://github.com/alexrudall/ruby-openai)** (~> 7.0) — `OpenAI::Client#chat`
388
+ - **[anthropic](https://github.com/alexrudall/anthropic)** (~> 0.3) — `Anthropic::Client#messages`
389
+
390
+ ### Usage
391
+
392
+ ```ruby
393
+ # Just use the gems normally — TraceKit instruments automatically
394
+
395
+ # OpenAI
396
+ client = OpenAI::Client.new(access_token: ENV["OPENAI_API_KEY"])
397
+ response = client.chat(parameters: {
398
+ model: "gpt-4o-mini",
399
+ messages: [{ role: "user", content: "Hello!" }],
400
+ max_tokens: 100
401
+ })
402
+
403
+ # Anthropic
404
+ client = Anthropic::Client.new(access_token: ENV["ANTHROPIC_API_KEY"])
405
+ response = client.messages(parameters: {
406
+ model: "claude-sonnet-4-20250514",
407
+ max_tokens: 100,
408
+ messages: [{ role: "user", content: "Hello!" }]
409
+ })
410
+ ```
411
+
412
+ ### Streaming
413
+
414
+ Both streaming and non-streaming calls are instrumented:
415
+
416
+ ```ruby
417
+ # OpenAI streaming
418
+ client.chat(parameters: {
419
+ model: "gpt-4o-mini",
420
+ messages: [{ role: "user", content: "Tell me a story" }],
421
+ stream: proc { |chunk, _bytesize|
422
+ print chunk.dig("choices", 0, "delta", "content")
423
+ }
424
+ })
425
+
426
+ # Anthropic streaming
427
+ client.messages(parameters: {
428
+ model: "claude-sonnet-4-20250514",
429
+ max_tokens: 200,
430
+ messages: [{ role: "user", content: "Tell me a story" }],
431
+ stream: proc { |event|
432
+ if event["type"] == "content_block_delta"
433
+ print event.dig("delta", "text")
434
+ end
435
+ }
436
+ })
437
+ ```
438
+
439
+ ### Captured Attributes
440
+
441
+ Each LLM call creates a span with [GenAI semantic convention](https://opentelemetry.io/docs/specs/semconv/gen-ai/) attributes:
442
+
443
+ | Attribute | Description |
444
+ |-----------|-------------|
445
+ | `gen_ai.system` | `openai` or `anthropic` |
446
+ | `gen_ai.request.model` | Model name (e.g., `gpt-4o-mini`) |
447
+ | `gen_ai.request.max_tokens` | Max tokens requested |
448
+ | `gen_ai.response.model` | Model used in response |
449
+ | `gen_ai.response.id` | Response ID |
450
+ | `gen_ai.response.finish_reason` | `stop`, `end_turn`, etc. |
451
+ | `gen_ai.usage.input_tokens` | Prompt tokens used |
452
+ | `gen_ai.usage.output_tokens` | Completion tokens used |
453
+
454
+ ### Content Capture
455
+
456
+ Input/output content capture is **disabled by default** for privacy. Enable it with:
457
+
458
+ ```bash
459
+ TRACEKIT_LLM_CAPTURE_CONTENT=true
460
+ ```
461
+
462
+ ### Configuration
463
+
464
+ LLM instrumentation is enabled by default when OpenAI or Anthropic gems are detected. To disable:
465
+
466
+ ```ruby
467
+ Tracekit.configure do |config|
468
+ config.llm = { enabled: false } # Disable all LLM instrumentation
469
+ config.llm = { openai: false } # Disable OpenAI only
470
+ config.llm = { anthropic: false } # Disable Anthropic only
471
+ config.llm = { capture_content: true } # Enable content capture via config
472
+ end
473
+ ```
474
+
380
475
  ## Distributed Tracing
381
476
 
382
477
  The SDK automatically:
@@ -452,6 +547,10 @@ ruby-sdk/
 │   │   ├── sdk.rb                          # Main SDK class
 │   │   ├── railtie.rb                      # Rails auto-configuration
 │   │   ├── middleware.rb                   # Rack middleware
+│   │   ├── llm/                            # LLM auto-instrumentation
+│   │   │   ├── common.rb                   # Shared helpers, PII scrubbing
+│   │   │   ├── openai_instrumentation.rb   # OpenAI Module#prepend
+│   │   │   └── anthropic_instrumentation.rb # Anthropic Module#prepend
 │   │   ├── metrics/                        # Metrics implementation
 │   │   ├── security/                       # Security scanning
 │   │   └── snapshots/                      # Code monitoring
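The PII scrubbing that `common.rb` is described as providing can be approximated with a small regex-based scrubber. The patterns and the `scrub_pii` name below are illustrative only, not the SDK's actual rules:

```ruby
# Illustrative regex-based PII scrubber (not TraceKit's real implementation):
# masks email addresses and US-style SSNs before content is attached to spans.
EMAIL_RE = /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/
SSN_RE   = /\b\d{3}-\d{2}-\d{4}\b/

def scrub_pii(text)
  text.gsub(EMAIL_RE, "[EMAIL]").gsub(SSN_RE, "[SSN]")
end

scrub_pii("Contact jane@example.com, SSN 123-45-6789")
# => "Contact [EMAIL], SSN [SSN]"
```

A real scrubber would cover more categories (phone numbers, credit cards, API keys), but the shape is the same: run each pattern over the captured content before it leaves the process.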
@@ -523,6 +622,7 @@ bundle exec rails server -p 5002
 - `GET /api/call-go` - Call Go test service
 - `GET /api/call-node` - Call Node test service
 - `GET /api/call-all` - Call all test services
+- `GET /api/llm` - LLM instrumentation test (OpenAI + Anthropic, streaming + non-streaming)
 
 See [ruby-test/README.md](ruby-test/README.md) for details.
 
@@ -597,4 +697,4 @@ Built on [OpenTelemetry](https://opentelemetry.io/) - the industry standard for
 ---
 
 **Repository**: git@github.com:Tracekit-Dev/ruby-sdk.git
-**Version**: v0.2.0
+**Version**: v0.2.3
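The `Module#prepend` patching that the README's LLM section relies on can be shown with a self-contained sketch. `FakeClient` and `ChatInstrumentation` are stand-ins for demonstration, not TraceKit's real classes, and span recording is reduced to appending a hash to an array:

```ruby
# Stand-in for OpenAI::Client — illustrative only.
class FakeClient
  def chat(parameters:)
    { "model" => parameters[:model], "choices" => [] }
  end
end

# Prepended module: because prepend puts it *before* the class in the
# ancestor chain, its #chat runs first, wraps the original via super,
# and records gen_ai.*-style attributes around the call.
module ChatInstrumentation
  SPANS = [] # simplified span sink

  def chat(parameters:)
    started  = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    response = super
    SPANS << {
      "gen_ai.system"        => "openai",
      "gen_ai.request.model" => parameters[:model],
      "duration_s"           => Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
    }
    response
  end
end

FakeClient.prepend(ChatInstrumentation)

FakeClient.new.chat(parameters: { model: "gpt-4o-mini" })
ChatInstrumentation::SPANS.length # => 1
```

Unlike alias-based monkey patching, prepend keeps the original method intact and reachable via `super`, which is why callers need no code changes.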
@@ -6,7 +6,8 @@ module Tracekit
   class Config
     attr_reader :api_key, :service_name, :endpoint, :use_ssl, :environment,
                 :service_version, :enable_code_monitoring,
-                :code_monitoring_poll_interval, :local_ui_port, :sampling_rate
+                :code_monitoring_poll_interval, :local_ui_port, :sampling_rate,
+                :llm
 
     def initialize(builder)
       @api_key = builder.api_key
@@ -19,6 +20,7 @@ module Tracekit
       @code_monitoring_poll_interval = builder.code_monitoring_poll_interval || 30
       @local_ui_port = builder.local_ui_port || 9999
       @sampling_rate = builder.sampling_rate || 1.0
+      @llm = (builder.llm || {}).freeze
 
       validate!
       freeze # Make configuration immutable
@@ -35,7 +37,8 @@ module Tracekit
   class Builder
     attr_accessor :api_key, :service_name, :endpoint, :use_ssl, :environment,
                   :service_version, :enable_code_monitoring,
-                  :code_monitoring_poll_interval, :local_ui_port, :sampling_rate
+                  :code_monitoring_poll_interval, :local_ui_port, :sampling_rate,
+                  :llm
 
     def initialize
       # Set defaults in builder
@@ -47,6 +50,7 @@ module Tracekit
       @code_monitoring_poll_interval = 30
       @local_ui_port = 9999
       @sampling_rate = 1.0
+      @llm = { enabled: true, openai: true, anthropic: true, capture_content: false }
     end
   end
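The config diff above follows a builder-then-freeze pattern: mutable defaults live in `Builder`, and `Config` copies and freezes them at construction. A reduced sketch of that pattern, keeping only the `:llm` option (`MiniBuilder`/`MiniConfig` are illustrative names, not the SDK's classes):

```ruby
# Reduced illustration of the Builder/Config pair from the diff:
# defaults are set in the builder; Config snapshots and freezes them.
class MiniBuilder
  attr_accessor :llm

  def initialize
    @llm = { enabled: true, openai: true, anthropic: true, capture_content: false }
  end
end

class MiniConfig
  attr_reader :llm

  def initialize(builder)
    @llm = (builder.llm || {}).freeze # nil-safe copy, then immutable
    freeze # the config object itself is also immutable
  end
end

builder = MiniBuilder.new
builder.llm = { enabled: false } # user override before construction
config = MiniConfig.new(builder)

config.llm[:enabled] # => false
config.llm.frozen?   # => true
```

Freezing both the hash and the config object means instrumentation code can read settings from any thread without synchronization, matching the "Make configuration immutable" comment in the diff.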