dspy 0.34.1 → 0.34.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 100e82f4aeff8020a845aa80a63ad86278ead32f34b7846b0624db99dc060325
-  data.tar.gz: 43ed18798e67e829e2decdd8ef0519751d6f4fc2cf52571850f1acfd671f2780
+  metadata.gz: 7fe4cbffffd520f31219caa7dbce4b75ef39684cdd5e422b60144aa3ae3329e9
+  data.tar.gz: '0974465586686c0b93292b464f9b49abcb10a265ca85295c65584e989a271d7f'
 SHA512:
-  metadata.gz: 3081d9fada92dcf1f7f5b003212fd6d5b94787b6982c6828bd95704e84f197be3017d08eff81c4c1271e4a1d442e6a235973f4c2ab69c47493e025b2d155b1ca
-  data.tar.gz: 32454139be26918608d4c63f58a706d762caa457ca3df9addb238eb9c9bfd5e97f749054923573838b6e983db129732de290387076423e39293bcf02e2747bc4
+  metadata.gz: 5602b8a59a8454306921a2528ccb58824cfe3878a4d31f3b8b4b93d94730c6f2e804c439a9fd1f7bb44a2afe6aff6cb3843e1d7ad6f45c51e4e01aec7d44db5c
+  data.tar.gz: 4bf2656ade52cbf55e6416a061e9107c9332a0e669b9bafe07acf045aac605eaa38410ae433268b78a7d285b47450e03e8416e9a456d978e258177fce00a90e9
data/README.md CHANGED
@@ -6,56 +6,94 @@
 [![Documentation](https://img.shields.io/badge/docs-oss.vicente.services%2Fdspy.rb-blue)](https://oss.vicente.services/dspy.rb/)
 [![Discord](https://img.shields.io/discord/1161519468141355160?label=discord&logo=discord&logoColor=white)](https://discord.gg/zWBhrMqn)
 
-> [!NOTE]
-> The core Prompt Engineering Framework is production-ready with
-> comprehensive documentation. I am focusing now on educational content on systematic Prompt Optimization and Context Engineering.
-> Your feedback is invaluable. if you encounter issues, please open an [issue](https://github.com/vicentereig/dspy.rb/issues). If you have suggestions, open a [new thread](https://github.com/vicentereig/dspy.rb/discussions).
->
-> If you want to contribute, feel free to reach out to me to coordinate efforts: hey at vicente.services
->
-
 **Build reliable LLM applications in idiomatic Ruby using composable, type-safe modules.**
 
-DSPy.rb is the Ruby-first surgical port of Stanford's [DSPy paradigm](https://github.com/stanfordnlp/dspy). It delivers structured LLM programming, prompt engineering, and context engineering in the language we love. Instead of wrestling with brittle prompt strings, you define typed signatures in idiomatic Ruby and compose workflows and agents that actually behave.
+DSPy.rb is the Ruby port of Stanford's [DSPy](https://dspy.ai). Instead of wrestling with brittle prompt strings, you define typed signatures and let the framework handle the rest. Prompts become functions. LLM calls become predictable.
 
-**Prompts are just functions.** Traditional prompting is like writing code with string concatenation: it works until it doesn't. DSPy.rb brings you the programming approach pioneered by [dspy.ai](https://dspy.ai/): define modular signatures and let the framework deal with the messy bits.
+```ruby
+require 'dspy'
 
-While we implement the same signatures, predictors, and optimization algorithms as the original library, DSPy.rb leans hard into Ruby conventions with Sorbet-based typing, ReAct loops, and production-ready integrations like non-blocking OpenTelemetry instrumentation.
+DSPy.configure do |c|
+  c.lm = DSPy::LM.new('openai/gpt-4o-mini', api_key: ENV['OPENAI_API_KEY'])
+end
 
-**What you get?** Ruby LLM applications that scale and don't break when you sneeze.
+class Summarize < DSPy::Signature
+  description "Summarize the given text in one sentence."
 
-Check the [examples](examples/) and take them for a spin!
+  input do
+    const :text, String
+  end
 
-## Your First DSPy Program
-### Installation
+  output do
+    const :summary, String
+  end
+end
 
-Add to your Gemfile:
+summarizer = DSPy::Predict.new(Summarize)
+result = summarizer.call(text: "DSPy.rb brings structured LLM programming to Ruby...")
+puts result.summary
+```
+
+That's it. No prompt templates. No JSON parsing. No prayer-based error handling.
+
+## Installation
 
 ```ruby
+# Gemfile
 gem 'dspy'
+gem 'dspy-openai'      # For OpenAI, OpenRouter, or Ollama
+# gem 'dspy-anthropic' # For Claude
+# gem 'dspy-gemini'    # For Gemini
+# gem 'dspy-ruby_llm'  # For 12+ providers via RubyLLM
 ```
 
-and
-
 ```bash
 bundle install
 ```
 
-### Your First Reliable Predictor
+## Quick Start
 
-```ruby
-require 'dspy'
+### Configure Your LLM
 
-# Configure DSPy globally to use your fave LLM (you can override per predictor).
+```ruby
+# OpenAI
 DSPy.configure do |c|
   c.lm = DSPy::LM.new('openai/gpt-4o-mini',
                       api_key: ENV['OPENAI_API_KEY'],
-                      structured_outputs: true) # Enable OpenAI's native JSON mode
+                      structured_outputs: true)
 end
 
-# Define a signature for sentiment classification - instead of writing a full prompt!
+# Anthropic Claude
+DSPy.configure do |c|
+  c.lm = DSPy::LM.new('anthropic/claude-sonnet-4-20250514',
+                      api_key: ENV['ANTHROPIC_API_KEY'])
+end
+
+# Google Gemini
+DSPy.configure do |c|
+  c.lm = DSPy::LM.new('gemini/gemini-2.5-flash',
+                      api_key: ENV['GEMINI_API_KEY'])
+end
+
+# Ollama (local, free)
+DSPy.configure do |c|
+  c.lm = DSPy::LM.new('ollama/llama3.2')
+end
+
+# OpenRouter (200+ models)
+DSPy.configure do |c|
+  c.lm = DSPy::LM.new('openrouter/deepseek/deepseek-chat-v3.1:free',
+                      api_key: ENV['OPENROUTER_API_KEY'])
+end
+```
+
+### Define a Signature
+
+Signatures are typed contracts for LLM operations. Define inputs, outputs, and let DSPy handle the prompt:
+
+```ruby
 class Classify < DSPy::Signature
-  description "Classify sentiment of a given sentence." # sets the goal of the underlying prompt
+  description "Classify sentiment of a given sentence."
 
   class Sentiment < T::Enum
     enums do
@@ -64,245 +102,130 @@ class Classify < DSPy::Signature
       Neutral = new('neutral')
     end
   end
-
-  # Structured Inputs: makes sure you are sending only valid prompt inputs to your model
+
   input do
     const :sentence, String, description: 'The sentence to analyze'
   end
 
-  # Structured Outputs: your predictor will validate the output of the model too.
   output do
-    const :sentiment, Sentiment, description: 'The sentiment of the sentence'
-    const :confidence, Float, description: 'A number between 0.0 and 1.0'
+    const :sentiment, Sentiment
+    const :confidence, Float
   end
 end
 
-# Wire it to the simplest prompting technique: a prediction loop.
-classify = DSPy::Predict.new(Classify)
-# it may raise an error if you mess the inputs or your LLM messes the outputs.
-result = classify.call(sentence: "This book was super fun to read!")
+classifier = DSPy::Predict.new(Classify)
+result = classifier.call(sentence: "This book was super fun to read!")
 
-puts result.sentiment # => #<Sentiment::Positive>
-puts result.confidence # => 0.85
+result.sentiment  # => #<Sentiment::Positive>
+result.confidence # => 0.92
 ```
 
-Save this as `examples/first_predictor.rb` and run it with:
+### Chain of Thought
 
-```bash
-bundle exec ruby examples/first_predictor.rb
-```
-
-### Sibling Gems
-
-DSPy.rb ships multiple gems from this monorepo so you can opt into features with heavier dependency trees (e.g., datasets pull in Polars/Arrow, MIPROv2 requires `numo-*` BLAS bindings) only when you need them. Add these alongside `dspy`:
+For complex reasoning, use `ChainOfThought` to get step-by-step explanations:
 
-| Gem | Description | Status |
-| --- | --- | --- |
-| `dspy-schema` | Exposes `DSPy::TypeSystem::SorbetJsonSchema` for downstream reuse. (Still required by the core `dspy` gem; extraction lets other projects depend on it directly.) | **Stable** (v1.0.0) |
-| `dspy-openai` | Packages the OpenAI/OpenRouter/Ollama adapters plus the official SDK guardrails. Install whenever you call `openai/*`, `openrouter/*`, or `ollama/*`. [Adapter README](https://github.com/vicentereig/dspy.rb/blob/main/lib/dspy/openai/README.md) | **Stable** (v1.0.0) |
-| `dspy-anthropic` | Claude adapters, streaming, and structured-output helpers behind the official `anthropic` SDK. [Adapter README](https://github.com/vicentereig/dspy.rb/blob/main/lib/dspy/anthropic/README.md) | **Stable** (v1.0.0) |
-| `dspy-gemini` | Gemini adapters with multimodal + tool-call support via `gemini-ai`. [Adapter README](https://github.com/vicentereig/dspy.rb/blob/main/lib/dspy/gemini/README.md) | **Stable** (v1.0.0) |
-| `dspy-ruby_llm` | Unified access to 12+ LLM providers (OpenAI, Anthropic, Gemini, Bedrock, Ollama, DeepSeek, etc.) via [RubyLLM](https://rubyllm.com). [Adapter README](https://github.com/vicentereig/dspy.rb/blob/main/lib/dspy/ruby_llm/README.md) | **Stable** (v0.1.0) |
-| `dspy-code_act` | Think-Code-Observe agents that synthesize and execute Ruby safely. (Add the gem or set `DSPY_WITH_CODE_ACT=1` before requiring `dspy/code_act`.) | **Stable** (v1.0.0) |
-| `dspy-datasets` | Dataset helpers plus Parquet/Polars tooling for richer evaluation corpora. (Toggle via `DSPY_WITH_DATASETS`.) | **Stable** (v1.0.0) |
-| `dspy-evals` | High-throughput evaluation harness with metrics, callbacks, and regression fixtures. (Toggle via `DSPY_WITH_EVALS`.) | **Stable** (v1.0.0) |
-| `dspy-miprov2` | Bayesian optimization + Gaussian Process backend for the MIPROv2 teleprompter. (Install or export `DSPY_WITH_MIPROV2=1` before requiring the teleprompter.) | **Stable** (v1.0.0) |
-| `dspy-gepa` | `DSPy::Teleprompt::GEPA`, reflection loops, experiment tracking, telemetry adapters. (Install or set `DSPY_WITH_GEPA=1`.) | **Stable** (v1.0.0) |
-| `gepa` | GEPA optimizer core (Pareto engine, telemetry, reflective proposer). | **Stable** (v1.0.0) |
-| `dspy-o11y` | Core observability APIs: `DSPy::Observability`, async span processor, observation types. (Install or set `DSPY_WITH_O11Y=1`.) | **Stable** (v1.0.0) |
-| `dspy-o11y-langfuse` | Auto-configures DSPy observability to stream spans to Langfuse via OTLP. (Install or set `DSPY_WITH_O11Y_LANGFUSE=1`.) | **Stable** (v1.0.0) |
-| `dspy-deep_search` | Production DeepSearch loop with Exa-backed search/read, token budgeting, and instrumentation (Issue #163). | **Stable** (v1.0.0) |
-| `dspy-deep_research` | Planner/QA orchestration atop DeepSearch plus the memory supervisor used by the CLI example. | **Stable** (v1.0.0) |
-| `sorbet-toon` | Token-Oriented Object Notation (TOON) codec, prompt formatter, and Sorbet mixins for BAML/TOON Enhanced Prompting. [Sorbet::Toon README](https://github.com/vicentereig/dspy.rb/blob/main/lib/sorbet/toon/README.md) | **Alpha** (v0.1.0) |
+```ruby
+solver = DSPy::ChainOfThought.new(MathProblem)
+result = solver.call(problem: "If a train travels 120km in 2 hours, what's its speed?")
 
-**Provider adapters:** Add `dspy-openai`, `dspy-anthropic`, and/or `dspy-gemini` next to `dspy` in your Gemfile depending on which `DSPy::LM` providers you call. Each gem already depends on the official SDK (`openai`, `anthropic`, `gemini-ai`), and DSPy auto-loads the adapters when the gem is present—no extra `require` needed.
+result.reasoning # => "Speed = Distance / Time = 120km / 2h = 60km/h"
+result.answer    # => "60 km/h"
+```
 
-Set the matching `DSPY_WITH_*` environment variables (see `Gemfile`) to include or exclude each sibling gem when running Bundler locally (for example `DSPY_WITH_GEPA=1` or `DSPY_WITH_O11Y_LANGFUSE=1`). Refer to `adr/013-dependency-tree.md` for the full dependency map and roadmap.
-### Access to 200+ Models Across 5 Providers
+### ReAct Agents
 
-DSPy.rb provides unified access to major LLM providers with provider-specific optimizations:
+Build agents that use tools to accomplish tasks:
 
 ```ruby
-# OpenAI (GPT-4, GPT-4o, GPT-4o-mini, GPT-5, etc.)
-DSPy.configure do |c|
-  c.lm = DSPy::LM.new('openai/gpt-4o-mini',
-                      api_key: ENV['OPENAI_API_KEY'],
-                      structured_outputs: true) # Native JSON mode
-end
+class SearchTool < DSPy::Tools::Tool
+  tool_name "search"
+  description "Search for information"
 
-# Google Gemini (Gemini 1.5 Pro, Flash, Gemini 2.0, etc.)
-DSPy.configure do |c|
-  c.lm = DSPy::LM.new('gemini/gemini-2.5-flash',
-                      api_key: ENV['GEMINI_API_KEY'],
-                      structured_outputs: true) # Native structured outputs
-end
+  input do
+    const :query, String
+  end
 
-# Anthropic Claude (Claude 3.5, Claude 4, etc.)
-DSPy.configure do |c|
-  c.lm = DSPy::LM.new('anthropic/claude-sonnet-4-5-20250929',
-                      api_key: ENV['ANTHROPIC_API_KEY'],
-                      structured_outputs: true) # Tool-based extraction (default)
-end
+  output do
+    const :results, T::Array[String]
+  end
 
-# Ollama - Run any local model (Llama, Mistral, Gemma, etc.)
-DSPy.configure do |c|
-  c.lm = DSPy::LM.new('ollama/llama3.2') # Free, runs locally, no API key needed
+  def call(query:)
+    # Your search implementation
+    { results: ["Result 1", "Result 2"] }
+  end
 end
 
-# OpenRouter - Access to 200+ models from multiple providers
-DSPy.configure do |c|
-  c.lm = DSPy::LM.new('openrouter/deepseek/deepseek-chat-v3.1:free',
-                      api_key: ENV['OPENROUTER_API_KEY'])
-end
+toolset = DSPy::Tools::Toolset.new(tools: [SearchTool.new])
+agent = DSPy::ReAct.new(signature: ResearchTask, tools: toolset, max_iterations: 5)
+result = agent.call(question: "What's the latest on Ruby 3.4?")
 ```
 
-## What You Get
-
-**Developer Experience:** Official clients, multimodal coverage, and observability baked in.
-<details>
-<summary>Expand for everything included</summary>
-
-- LLM provider support using official Ruby clients:
-  - [OpenAI Ruby](https://github.com/openai/openai-ruby) with vision model support
-  - [Anthropic Ruby SDK](https://github.com/anthropics/anthropic-sdk-ruby) with multimodal capabilities
-  - [Google Gemini API](https://ai.google.dev/) with native structured outputs
-  - [Ollama](https://ollama.com/) via OpenAI compatibility layer for local models
-- **Multimodal Support** - Complete image analysis with DSPy::Image, type-safe bounding boxes, vision-capable models
-- Runtime type checking with [Sorbet](https://sorbet.org/) including T::Enum and union types
-- Type-safe tool definitions for ReAct agents
-- Comprehensive instrumentation and observability
-</details>
-
-**Core Building Blocks:** Predictors, agents, and pipelines wired through type-safe signatures.
-<details>
-<summary>Expand for everything included</summary>
-
-- **Signatures** - Define input/output schemas using Sorbet types with T::Enum and union type support
-- **Predict** - LLM completion with structured data extraction and multimodal support
-- **Chain of Thought** - Step-by-step reasoning for complex problems with automatic prompt optimization
-- **ReAct** - Tool-using agents with type-safe tool definitions and error recovery
-- **Module Composition** - Combine multiple LLM calls into production-ready workflows
-</details>
-
-**Optimization & Evaluation:** Treat prompt optimization like a real ML workflow.
-<details>
-<summary>Expand for everything included</summary>
-
-- **Prompt Objects** - Manipulate prompts as first-class objects instead of strings
-- **Typed Examples** - Type-safe training data with automatic validation
-- **Evaluation Framework** - Advanced metrics beyond simple accuracy with error-resilient pipelines
-- **MIPROv2 Optimization** - Advanced Bayesian optimization with Gaussian Processes, multiple optimization strategies, auto-config presets, and storage persistence
-</details>
-
-**Production Features:** Hardened behaviors for teams shipping actual products.
-<details>
-<summary>Expand for everything included</summary>
-
-- **Reliable JSON Extraction** - Native structured outputs for OpenAI and Gemini, Anthropic tool-based extraction, and automatic strategy selection with fallback
-- **Type-Safe Configuration** - Strategy enums with automatic provider optimization (Strict/Compatible modes)
-- **Smart Retry Logic** - Progressive fallback with exponential backoff for handling transient failures
-- **Zero-Config Langfuse Integration** - Set env vars and get automatic OpenTelemetry traces in Langfuse
-- **Performance Caching** - Schema and capability caching for faster repeated operations
-- **File-based Storage** - Optimization result persistence with versioning
-- **Structured Logging** - JSON and key=value formats with span tracking
-</details>
-
-## Recent Achievements
-
-DSPy.rb has gone from experimental to production-ready in three fast releases.
-<details>
-<summary>Expand for the full changelog highlights</summary>
-
-### Foundation
-- ✅ **JSON Parsing Reliability** - Native OpenAI structured outputs with adaptive retry logic and schema-aware fallbacks
-- ✅ **Type-Safe Strategy Configuration** - Provider-optimized strategy selection and enum-backed optimizer presets
-- ✅ **Core Module System** - Predict, ChainOfThought, ReAct with type safety (add `dspy-code_act` for Think-Code-Observe agents)
-- ✅ **Production Observability** - OpenTelemetry, New Relic, and Langfuse integration
-- ✅ **Advanced Optimization** - MIPROv2 with Bayesian optimization, Gaussian Processes, and multi-mode search
-
-### Recent Advances
-- ✅ **MIPROv2 ADE Integrity (v0.29.1)** - Stratified train/val/test splits, honest precision accounting, and enum-driven `--auto` presets with integration coverage
-- ✅ **Instruction Deduplication (v0.29.1)** - Candidate generation now filters repeated programs so optimization logs highlight unique strategies
-- ✅ **GEPA Teleprompter (v0.29.0)** - Genetic-Pareto reflective prompt evolution with merge proposer scheduling, reflective mutation, and ADE demo parity
-- ✅ **Optimizer Utilities Parity (v0.29.0)** - Bootstrap strategies, dataset summaries, and Layer 3 utilities unlock multi-predictor programs on Ruby
-- ✅ **Observability Hardening (v0.29.0)** - OTLP exporter runs on a single-thread executor preventing frozen SSL contexts without blocking spans
-- ✅ **Documentation Refresh (v0.29.x)** - New GEPA guide plus ADE optimization docs covering presets, stratified splits, and error-handling defaults
-</details>
-
-**Current Focus Areas:** Closing the loop on production patterns and community adoption ahead of v1.0.
-<details>
-<summary>Expand for the roadmap</summary>
-
-### Production Readiness
-- 🚧 **Production Patterns** - Real-world usage validation and performance optimization
-- 🚧 **Ruby Ecosystem Integration** - Rails integration, Sidekiq compatibility, deployment patterns
-
-### Community & Adoption
-- 🚧 **Community Examples** - Real-world applications and case studies
-- 🚧 **Contributor Experience** - Making it easier to contribute and extend
-- 🚧 **Performance Benchmarks** - Comparative analysis vs other frameworks
-</details>
-
-**v1.0 Philosophy:** v1.0 lands after battle-testing, not checkbox bingo. The API is already stable; the milestone marks production confidence.
+## What's Included
 
+**Core Modules**: Predict, ChainOfThought, ReAct agents, and composable pipelines.
 
-## Documentation
+**Type Safety**: Sorbet-based runtime validation. Enums, unions, nested structs—all work.
+
+**Multimodal**: Image analysis with `DSPy::Image` for vision-capable models.
 
-📖 **[Complete Documentation Website](https://oss.vicente.services/dspy.rb/)**
+**Observability**: Zero-config Langfuse integration via OpenTelemetry. Non-blocking, production-ready.
 
-### LLM-Friendly Documentation
+**Optimization**: MIPROv2 (Bayesian optimization) and GEPA (genetic evolution) for prompt tuning.
 
-For LLMs and AI assistants working with DSPy.rb:
-- **[llms.txt](https://oss.vicente.services/dspy.rb/llms.txt)** - Concise reference optimized for LLMs
-- **[llms-full.txt](https://oss.vicente.services/dspy.rb/llms-full.txt)** - Comprehensive API documentation
+**Provider Support**: OpenAI, Anthropic, Gemini, Ollama, and OpenRouter via official SDKs.
+
+## Documentation
+
+**[Full Documentation](https://oss.vicente.services/dspy.rb/)** — Getting started, core concepts, advanced patterns.
+
+**[llms.txt](https://oss.vicente.services/dspy.rb/llms.txt)** — LLM-friendly reference for AI assistants.
 
 ### Claude Skill
 
-A [Claude Skill](https://github.com/vicentereig/dspy-rb-skill) is available to help you build DSPy.rb applications with Claude Code or claude.ai.
+A [Claude Skill](https://github.com/vicentereig/dspy-rb-skill) is available to help you build DSPy.rb applications:
 
-**Claude Code:**
 ```bash
+# Claude Code
 git clone https://github.com/vicentereig/dspy-rb-skill ~/.claude/skills/dspy-rb
 ```
 
-**Claude.ai (Pro/Max):** Download the [skill as a ZIP](https://github.com/vicentereig/dspy-rb-skill/archive/refs/heads/main.zip) and upload via Settings > Skills.
-
-### Getting Started
-- **[Installation & Setup](docs/src/getting-started/installation.md)** - Detailed installation and configuration
-- **[Quick Start Guide](docs/src/getting-started/quick-start.md)** - Your first DSPy programs
-- **[Core Concepts](docs/src/getting-started/core-concepts.md)** - Understanding signatures, predictors, and modules
+For Claude.ai Pro/Max, download the [skill ZIP](https://github.com/vicentereig/dspy-rb-skill/archive/refs/heads/main.zip) and upload via Settings > Skills.
 
-### Prompt Engineering
-- **[Signatures & Types](docs/src/core-concepts/signatures.md)** - Define typed interfaces for LLM operations
-- **[Predictors](docs/src/core-concepts/predictors.md)** - Predict, ChainOfThought, ReAct, and more
-- **[Modules & Pipelines](docs/src/core-concepts/modules.md)** - Compose complex multi-stage workflows
-- **[Multimodal Support](docs/src/core-concepts/multimodal.md)** - Image analysis with vision-capable models
-- **[Examples & Validation](docs/src/core-concepts/examples.md)** - Type-safe training data
-- **[Rich Types](docs/src/advanced/complex-types.md)** - Sorbet type integration with automatic coercion for structs, enums, and arrays
-- **[Composable Pipelines](docs/src/advanced/pipelines.md)** - Manual module composition patterns
+## Examples
 
-### Prompt Optimization
-- **[Evaluation Framework](docs/src/optimization/evaluation.md)** - Advanced metrics beyond simple accuracy
-- **[Prompt Optimization](docs/src/optimization/prompt-optimization.md)** - Manipulate prompts as objects
-- **[MIPROv2 Optimizer](docs/src/optimization/miprov2.md)** - Advanced Bayesian optimization with Gaussian Processes
-- **[GEPA Optimizer](docs/src/optimization/gepa.md)** *(beta)* - Reflective mutation with optional reflection LMs
+The [examples/](examples/) directory has runnable code for common patterns:
 
-### Context Engineering
-- **[Tools](docs/src/core-concepts/toolsets.md)** - Tool wieldint agents.
-- **[Agentic Memory](docs/src/core-concepts/memory.md)** - Memory Tools & Agentic Loops
-- **[RAG Patterns](docs/src/advanced/rag.md)** - Manual RAG implementation with external services
+- Sentiment classification
+- ReAct agents with tools
+- Image analysis
+- Prompt optimization
 
-### Production Features
-- **[Observability](docs/src/production/observability.md)** - Zero-config Langfuse integration with a dedicated export worker that never blocks your LLMs
-- **[Storage System](docs/src/production/storage.md)** - Persistence and optimization result storage
-- **[Custom Metrics](docs/src/advanced/custom-metrics.md)** - Proc-based evaluation logic
+```bash
+bundle exec ruby examples/first_predictor.rb
+```
 
+## Optional Gems
 
+DSPy.rb ships sibling gems for features with heavier dependencies. Add them as needed:
 
+| Gem | What it does |
+| --- | --- |
+| `dspy-datasets` | Dataset helpers, Parquet/Polars tooling |
+| `dspy-evals` | Evaluation harness with metrics and callbacks |
+| `dspy-miprov2` | Bayesian optimization for prompt tuning |
+| `dspy-gepa` | Genetic-Pareto prompt evolution |
+| `dspy-o11y-langfuse` | Auto-configure Langfuse tracing |
+| `dspy-code_act` | Think-Code-Observe agents |
+| `dspy-deep_search` | Production DeepSearch with Exa |
 
+See [the full list](https://oss.vicente.services/dspy.rb/getting-started/installation/) in the docs.
 
+## Contributing
 
+Feedback is invaluable. If you encounter issues, [open an issue](https://github.com/vicentereig/dspy.rb/issues). For suggestions, [start a discussion](https://github.com/vicentereig/dspy.rb/discussions).
 
+Want to contribute code? Reach out: hey at vicente.services
 
 ## License
-This project is licensed under the MIT License.
+
+MIT License.
data/lib/dspy/context.rb CHANGED
@@ -7,43 +7,33 @@ module DSPy
   class Context
     class << self
       def current
-        # Use Thread storage as primary source to ensure thread isolation
-        # Fiber storage is used for OpenTelemetry context propagation within the same thread
-
-        # Create a unique key for this thread to ensure isolation
-        thread_key = :"dspy_context_#{Thread.current.object_id}"
-
-        # Always check thread-local storage first for proper isolation
-        if Thread.current[thread_key]
-          # Thread has context, ensure fiber inherits it for OpenTelemetry propagation
-          Fiber[:dspy_context] = Thread.current[thread_key]
-          Thread.current[:dspy_context] = Thread.current[thread_key] # Keep for backward compatibility
-          return Thread.current[thread_key]
+        # Prefer fiber-local context for async safety; fall back to thread root context.
+        fiber_context = Fiber[:dspy_context]
+        if fiber_context && fiber_context[:thread_id] == Thread.current.object_id
+          return fiber_context if fiber_context[:fiber_id] == Fiber.current.object_id
+
+          Fiber[:dspy_context] = fork_context(fiber_context)
+          return Fiber[:dspy_context]
         end
-
-        # Check if current fiber has context that was set by this same thread
-        # This handles cases where context was set via OpenTelemetry propagation within the thread
-        if Fiber[:dspy_context] && Thread.current[:dspy_context] == Fiber[:dspy_context]
-          # This fiber context was set by this thread, safe to use
-          Thread.current[thread_key] = Fiber[:dspy_context]
+
+        thread_key = :"dspy_context_#{Thread.current.object_id}"
+        thread_context = Thread.current[thread_key]
+
+        if thread_context
+          Fiber[:dspy_context] = fork_context(thread_context)
           return Fiber[:dspy_context]
         end
-
-        # No existing context or context belongs to different thread - create new one
-        context = {
-          trace_id: SecureRandom.uuid,
-          span_stack: [],
-          otel_span_stack: [],
-          module_stack: []
-        }
-
-        # Set in both Thread and Fiber storage
+
+        context = build_context
         Thread.current[thread_key] = context
-        Thread.current[:dspy_context] = context # Keep for backward compatibility
+        Thread.current[:dspy_context] = context # Backward compatibility (thread root)
         Fiber[:dspy_context] = context
-
         context
       end
+
+      def fork_context(parent_context)
+        clone_context(parent_context)
+      end
 
       def with_span(operation:, **attributes)
         span_id = SecureRandom.uuid
@@ -219,6 +209,27 @@ module DSPy
         false
       end
 
+      def build_context
+        {
+          trace_id: SecureRandom.uuid,
+          thread_id: Thread.current.object_id,
+          fiber_id: Fiber.current.object_id,
+          span_stack: [],
+          otel_span_stack: [],
+          module_stack: []
+        }
+      end
+
+      def clone_context(context)
+        cloned = context.dup
+        cloned[:span_stack] = Array(context[:span_stack]).dup
+        cloned[:otel_span_stack] = Array(context[:otel_span_stack]).dup
+        cloned[:module_stack] = Array(context[:module_stack]).map { |entry| entry.dup }
+        cloned[:thread_id] = Thread.current.object_id
+        cloned[:fiber_id] = Fiber.current.object_id
+        cloned
+      end
+
       def sanitize_span_attributes(attributes)
         attributes.each_with_object({}) do |(key, value), acc|
          sanitized_value = sanitize_attribute_value(value)
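The context.rb change above moves from one shared mutable hash per thread to a forked, per-fiber copy. A standalone sketch of the clone semantics (a hypothetical `fork_context` helper mirroring `clone_context`, not the gem's public API): the fork keeps the parent's `trace_id` but copies the mutable stacks, so a fiber can push spans without touching its parent's state.

```ruby
require 'securerandom'

# Minimal sketch: duplicate the context hash and give the child its own
# copies of the mutable stacks, while sharing the immutable trace_id.
def fork_context(parent)
  cloned = parent.dup
  cloned[:span_stack]   = Array(parent[:span_stack]).dup
  cloned[:module_stack] = Array(parent[:module_stack]).map(&:dup)
  cloned
end

parent = { trace_id: SecureRandom.uuid, span_stack: ['root'], module_stack: [] }
child  = fork_context(parent)
child[:span_stack] << 'llm-call'

parent[:trace_id] == child[:trace_id] # => true (same trace)
parent[:span_stack]                   # => ["root"] (untouched by the child)
child[:span_stack]                    # => ["root", "llm-call"]
```

Note that `dup` alone would not be enough: it copies the hash shallowly, so both contexts would still share the same stack arrays.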
data/lib/dspy/lm.rb CHANGED
@@ -42,15 +42,11 @@ module DSPy
 
     def chat(inference_module, input_values, &block)
       # Capture the current DSPy context before entering Sync block
-      parent_context = DSPy::Context.current.dup
+      parent_context = DSPy::Context.current
 
       Sync do
-        # Properly restore the context in the new fiber created by Sync
-        # We need to set both thread and fiber storage for the new context system
-        thread_key = :"dspy_context_#{Thread.current.object_id}"
-        Thread.current[thread_key] = parent_context
-        Thread.current[:dspy_context] = parent_context # Keep for backward compatibility
-        Fiber[:dspy_context] = parent_context
+        # Isolate fiber context while preserving trace/module ancestry
+        Fiber[:dspy_context] = DSPy::Context.fork_context(parent_context)
 
         signature_class = inference_module.signature_class
 
@@ -136,29 +132,6 @@ module DSPy
       response
     end
 
-    # Determines if LM-level events should be emitted using smart consolidation
-    def should_emit_lm_events?
-      # Emit LM events only if we're not in a nested context (smart consolidation)
-      !is_nested_context?
-    end
-
-    # Determines if we're in a nested context where higher-level events are being emitted
-    def is_nested_context?
-      caller_locations = caller_locations(1, 30)
-      return false if caller_locations.nil?
-
-      # Look for higher-level DSPy modules in the call stack
-      # We consider ChainOfThought and ReAct as higher-level modules
-      higher_level_modules = caller_locations.select do |loc|
-        loc.path.include?('chain_of_thought') ||
-          loc.path.include?('re_act') ||
-          loc.path.include?('react')
-      end
-
-      # If we have higher-level modules in the call stack, we're in a nested context
-      higher_level_modules.any?
-    end
-
     def parse_model_id(model_id)
       unless model_id.include?('/')
         raise ArgumentError, "model_id must include provider (e.g., 'openai/gpt-4', 'anthropic/claude-3'). Legacy format without provider is no longer supported."
@@ -423,11 +396,21 @@ module DSPy
         if message.is_a?(Message)
           # Already validated by type system
           next
-        elsif message.is_a?(Hash) && message.key?(:role) && message.key?(:content)
-          # Legacy hash format - validate role
+        elsif message.is_a?(Hash) || message.respond_to?(:to_h)
+          data = message.is_a?(Hash) ? message : message.to_h
+          unless data.is_a?(Hash)
+            raise ArgumentError, "Message at index #{index} must be a Message object or hash with :role and :content"
+          end
+
+          normalized = data.transform_keys(&:to_sym)
+          unless normalized.key?(:role) && normalized.key?(:content)
+            raise ArgumentError, "Message at index #{index} must have :role and :content"
+          end
+
+          role = normalized[:role].to_s
           valid_roles = %w[system user assistant]
-          unless valid_roles.include?(message[:role])
-            raise ArgumentError, "Invalid role at index #{index}: #{message[:role]}. Must be one of: #{valid_roles.join(', ')}"
+          unless valid_roles.include?(role)
+            raise ArgumentError, "Invalid role at index #{index}: #{normalized[:role]}. Must be one of: #{valid_roles.join(', ')}"
           end
         else
           raise ArgumentError, "Message at index #{index} must be a Message object or hash with :role and :content"
@@ -475,23 +458,28 @@ module DSPy
       messages.each_with_index do |msg, index|
         if msg.is_a?(Message)
           normalized << msg
-        elsif msg.is_a?(Hash)
-          # Validate hash has required fields
-          unless msg.key?(:role) && msg.key?(:content)
+        elsif msg.is_a?(Hash) || msg.respond_to?(:to_h)
+          data = msg.is_a?(Hash) ? msg : msg.to_h
+          unless data.is_a?(Hash)
+            raise ArgumentError, "Message at index #{index} must be a Message object or hash with :role and :content"
+          end
+
+          normalized_hash = data.transform_keys(&:to_sym)
+          unless normalized_hash.key?(:role) && normalized_hash.key?(:content)
            raise ArgumentError, "Message at index #{index} must have :role and :content"
          end
-
-          # Validate role
+
+          role = normalized_hash[:role].to_s
          valid_roles = %w[system user assistant]
-          unless valid_roles.include?(msg[:role])
-            raise ArgumentError, "Invalid role at index #{index}: #{msg[:role]}. Must be one of: #{valid_roles.join(', ')}"
+          unless valid_roles.include?(role)
+            raise ArgumentError, "Invalid role at index #{index}: #{normalized_hash[:role]}. Must be one of: #{valid_roles.join(', ')}"
          end
-
-          # Create Message object
-          message = MessageFactory.create(msg)
+
+          message = MessageFactory.create(normalized_hash)
          if message.nil?
            raise ArgumentError, "Failed to create Message from hash at index #{index}"
          end
+
          normalized << message
        else
          raise ArgumentError, "Message at index #{index} must be a Message object or hash with :role and :content"
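The lm.rb validation changes above widen what counts as a message: besides `Message` objects and symbol-keyed hashes, string-keyed hashes and anything responding to `#to_h` are now accepted, with keys symbolized and roles coerced to strings before checking. A standalone sketch of that normalization flow (hypothetical helper names, not the gem's internals):

```ruby
# Sketch: accept a Hash or any object responding to #to_h, symbolize its
# keys, then validate that :role/:content exist and the role is permitted.
VALID_ROLES = %w[system user assistant].freeze

def normalize_message(message, index)
  data = message.is_a?(Hash) ? message : message.to_h
  normalized = data.transform_keys(&:to_sym)
  unless normalized.key?(:role) && normalized.key?(:content)
    raise ArgumentError, "Message at index #{index} must have :role and :content"
  end

  role = normalized[:role].to_s
  unless VALID_ROLES.include?(role)
    raise ArgumentError, "Invalid role at index #{index}: #{normalized[:role]}"
  end

  normalized.merge(role: role)
end

Msg = Struct.new(:role, :content) # structs respond to #to_h

normalize_message({ 'role' => 'system', 'content' => 'hi' }, 0)
# => string keys symbolized: { role: "system", content: "hi" }
normalize_message(Msg.new(:user, 'hello'), 1)[:role] # => "user"
```

Coercing the role with `to_s` before the membership check is what lets symbol roles (`:user`) pass the `%w[...]` string whitelist.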
data/lib/dspy/version.rb CHANGED
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module DSPy
-  VERSION = "0.34.1"
+  VERSION = "0.34.2"
 end
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: dspy
 version: !ruby/object:Gem::Version
-  version: 0.34.1
+  version: 0.34.2
 platform: ruby
 authors:
 - Vicente Reig Rincón de Arellano