dspy-ruby_llm 0.1.0 → 0.1.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 8442dc722fa4ef77810d65ede3d52c66910af543550138add7656c4e703ada7b
-   data.tar.gz: 0ab6ee908cc82c0b674e52f2b5eeee6038266fc41171ba270aa1b13dd641e7f7
+   metadata.gz: 5131c4c5545dc867c284d046a0f43cf8bb109f9f0417f4154cf9cfad4f73b55b
+   data.tar.gz: c10d6e0614e3bd3ae92d88ef93197e86370d25e9b49d7a4fb946928b3039f5cd
  SHA512:
-   metadata.gz: 68d12fbfe7c4732c07b7287f55a771ad8f55a4d7fc7eb66bd9f6a5f4a267b65811c9d2771604f083835d352b0c0907d9b01a1bf211cc454d8af87ad644bcabe5
-   data.tar.gz: 9c754e16b521f75373ca648a60010caa467b78c87ba1ea72c8bb63a5d6a9260261c4c42b004e42603d816f123db409090dd903de985e30444a8c18449eeef3d6
+   metadata.gz: 781466444aae497b51952ec0e2914dc551ef8f6008449bc95676e4fc84a3aec2bebf27f1b5673901c9d8f19c3f86f0f692477ddd98a2ef1bf3faaa55788a0576
+   data.tar.gz: 10b9cdb52396c6f089cdd32c40f41363f463f1e1acc19fcaa741cbf2668e79b8af81f898170afcadfdb0e05aa2dff0b601fa22e67cd2397a448ee9e0ed2e6ec6
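For context, these values are ordinary SHA-256/SHA-512 hex digests of the two archives packed inside the `.gem` file. A minimal sketch of how such a digest is computed with Ruby's standard library (the string input here is only a stand-in for an extracted `metadata.gz`):

```ruby
require 'digest'

# SHA-256 of a file, hex-encoded the way checksums.yaml stores it.
def sha256_hex(path)
  Digest::SHA256.file(path).hexdigest
end

# The same digest over an in-memory string, to show the shape of the value:
puts Digest::SHA256.hexdigest('hello')
# => 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```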
data/README.md CHANGED
@@ -3,59 +3,98 @@
  [![Gem Version](https://img.shields.io/gem/v/dspy)](https://rubygems.org/gems/dspy)
  [![Total Downloads](https://img.shields.io/gem/dt/dspy)](https://rubygems.org/gems/dspy)
  [![Build Status](https://img.shields.io/github/actions/workflow/status/vicentereig/dspy.rb/ruby.yml?branch=main&label=build)](https://github.com/vicentereig/dspy.rb/actions/workflows/ruby.yml)
- [![Documentation](https://img.shields.io/badge/docs-vicentereig.github.io%2Fdspy.rb-blue)](https://vicentereig.github.io/dspy.rb/)
+ [![Documentation](https://img.shields.io/badge/docs-oss.vicente.services%2Fdspy.rb-blue)](https://oss.vicente.services/dspy.rb/)
  [![Discord](https://img.shields.io/discord/1161519468141355160?label=discord&logo=discord&logoColor=white)](https://discord.gg/zWBhrMqn)

- > [!NOTE]
- > The core Prompt Engineering Framework is production-ready with
- > comprehensive documentation. I am focusing now on educational content on systematic Prompt Optimization and Context Engineering.
- > Your feedback is invaluable. if you encounter issues, please open an [issue](https://github.com/vicentereig/dspy.rb/issues). If you have suggestions, open a [new thread](https://github.com/vicentereig/dspy.rb/discussions).
- >
- > If you want to contribute, feel free to reach out to me to coordinate efforts: hey at vicente.services
- >
-
  **Build reliable LLM applications in idiomatic Ruby using composable, type-safe modules.**

- DSPy.rb is the Ruby-first surgical port of Stanford's [DSPy paradigm](https://github.com/stanfordnlp/dspy). It delivers structured LLM programming, prompt engineering, and context engineering in the language we love. Instead of wrestling with brittle prompt strings, you define typed signatures in idiomatic Ruby and compose workflows and agents that actually behave.
+ DSPy.rb is the Ruby port of Stanford's [DSPy](https://dspy.ai). Instead of wrestling with brittle prompt strings, you define typed signatures and let the framework handle the rest. Prompts become functions. LLM calls become predictable.
+ The `1.x` line is the stable release track for production Ruby LLM applications.

- **Prompts are just functions.** Traditional prompting is like writing code with string concatenation: it works until it doesn't. DSPy.rb brings you the programming approach pioneered by [dspy.ai](https://dspy.ai/): define modular signatures and let the framework deal with the messy bits.
+ ```ruby
+ require 'dspy'

- While we implement the same signatures, predictors, and optimization algorithms as the original library, DSPy.rb leans hard into Ruby conventions with Sorbet-based typing, ReAct loops, and production-ready integrations like non-blocking OpenTelemetry instrumentation.
+ DSPy.configure do |c|
+   c.lm = DSPy::LM.new('openai/gpt-4o-mini', api_key: ENV['OPENAI_API_KEY'])
+ end
+
+ class Summarize < DSPy::Signature
+   description "Summarize the given text in one sentence."
+
+   input do
+     const :text, String
+   end

- **What you get?** Ruby LLM applications that scale and don't break when you sneeze.
+   output do
+     const :summary, String
+   end
+ end

- Check the [examples](examples/) and take them for a spin!
+ summarizer = DSPy::Predict.new(Summarize)
+ result = summarizer.call(text: "DSPy.rb brings structured LLM programming to Ruby...")
+ puts result.summary
+ ```

- ## Your First DSPy Program
- ### Installation
+ That's it. No prompt templates. No JSON parsing. No prayer-based error handling.

- Add to your Gemfile:
+ ## Installation

  ```ruby
+ # Gemfile
  gem 'dspy'
+ gem 'dspy-openai'      # For OpenAI, OpenRouter, or Ollama
+ # gem 'dspy-anthropic' # For Claude
+ # gem 'dspy-gemini'    # For Gemini
+ # gem 'dspy-ruby_llm'  # For 12+ providers via RubyLLM
  ```

- and
-
  ```bash
  bundle install
  ```

- ### Your First Reliable Predictor
+ ## Quick Start

- ```ruby
- require 'dspy'
+ ### Configure Your LLM

- # Configure DSPy globally to use your fave LLM (you can override per predictor).
+ ```ruby
+ # OpenAI
  DSPy.configure do |c|
    c.lm = DSPy::LM.new('openai/gpt-4o-mini',
      api_key: ENV['OPENAI_API_KEY'],
-     structured_outputs: true) # Enable OpenAI's native JSON mode
+     structured_outputs: true)
  end

- # Define a signature for sentiment classification - instead of writing a full prompt!
+ # Anthropic Claude
+ DSPy.configure do |c|
+   c.lm = DSPy::LM.new('anthropic/claude-sonnet-4-20250514',
+     api_key: ENV['ANTHROPIC_API_KEY'])
+ end
+
+ # Google Gemini
+ DSPy.configure do |c|
+   c.lm = DSPy::LM.new('gemini/gemini-2.5-flash',
+     api_key: ENV['GEMINI_API_KEY'])
+ end
+
+ # Ollama (local, free)
+ DSPy.configure do |c|
+   c.lm = DSPy::LM.new('ollama/llama3.2')
+ end
+
+ # OpenRouter (200+ models)
+ DSPy.configure do |c|
+   c.lm = DSPy::LM.new('openrouter/deepseek/deepseek-chat-v3.1:free',
+     api_key: ENV['OPENROUTER_API_KEY'])
+ end
+ ```
+
+ ### Define a Signature
+
+ Signatures are typed contracts for LLM operations. Define inputs, outputs, and let DSPy handle the prompt:
+
+ ```ruby
  class Classify < DSPy::Signature
-   description "Classify sentiment of a given sentence." # sets the goal of the underlying prompt
+   description "Classify sentiment of a given sentence."

    class Sentiment < T::Enum
      enums do
@@ -64,234 +103,122 @@ class Classify < DSPy::Signature
        Neutral = new('neutral')
      end
    end
-
-   # Structured Inputs: makes sure you are sending only valid prompt inputs to your model
+
    input do
      const :sentence, String, description: 'The sentence to analyze'
    end

-   # Structured Outputs: your predictor will validate the output of the model too.
    output do
-     const :sentiment, Sentiment, description: 'The sentiment of the sentence'
-     const :confidence, Float, description: 'A number between 0.0 and 1.0'
+     const :sentiment, Sentiment
+     const :confidence, Float
    end
  end

- # Wire it to the simplest prompting technique: a prediction loop.
- classify = DSPy::Predict.new(Classify)
- # it may raise an error if you mess the inputs or your LLM messes the outputs.
- result = classify.call(sentence: "This book was super fun to read!")
+ classifier = DSPy::Predict.new(Classify)
+ result = classifier.call(sentence: "This book was super fun to read!")

- puts result.sentiment  # => #<Sentiment::Positive>
- puts result.confidence # => 0.85
+ result.sentiment  # => #<Sentiment::Positive>
+ result.confidence # => 0.92
  ```

- Save this as `examples/first_predictor.rb` and run it with:
-
- ```bash
- bundle exec ruby examples/first_predictor.rb
- ```
+ ### Chain of Thought

- ### Sibling Gems
+ For complex reasoning, use `ChainOfThought` to get step-by-step explanations:

- DSPy.rb ships multiple gems from this monorepo so you can opt into features with heavier dependency trees (e.g., datasets pull in Polars/Arrow, MIPROv2 requires `numo-*` BLAS bindings) only when you need them. Add these alongside `dspy`:
-
- | Gem | Description | Status |
- | --- | --- | --- |
- | `dspy-schema` | Exposes `DSPy::TypeSystem::SorbetJsonSchema` for downstream reuse. (Still required by the core `dspy` gem; extraction lets other projects depend on it directly.) | **Stable** (v1.0.0) |
- | `dspy-openai` | Packages the OpenAI/OpenRouter/Ollama adapters plus the official SDK guardrails. Install whenever you call `openai/*`, `openrouter/*`, or `ollama/*`. [Adapter README](https://github.com/vicentereig/dspy.rb/blob/main/lib/dspy/openai/README.md) | **Stable** (v1.0.0) |
- | `dspy-anthropic` | Claude adapters, streaming, and structured-output helpers behind the official `anthropic` SDK. [Adapter README](https://github.com/vicentereig/dspy.rb/blob/main/lib/dspy/anthropic/README.md) | **Stable** (v1.0.0) |
- | `dspy-gemini` | Gemini adapters with multimodal + tool-call support via `gemini-ai`. [Adapter README](https://github.com/vicentereig/dspy.rb/blob/main/lib/dspy/gemini/README.md) | **Stable** (v1.0.0) |
- | `dspy-ruby_llm` | Unified access to 12+ LLM providers (OpenAI, Anthropic, Gemini, Bedrock, Ollama, DeepSeek, etc.) via [RubyLLM](https://rubyllm.com). [Adapter README](https://github.com/vicentereig/dspy.rb/blob/main/lib/dspy/ruby_llm/README.md) | **Stable** (v0.1.0) |
- | `dspy-code_act` | Think-Code-Observe agents that synthesize and execute Ruby safely. (Add the gem or set `DSPY_WITH_CODE_ACT=1` before requiring `dspy/code_act`.) | **Stable** (v1.0.0) |
- | `dspy-datasets` | Dataset helpers plus Parquet/Polars tooling for richer evaluation corpora. (Toggle via `DSPY_WITH_DATASETS`.) | **Stable** (v1.0.0) |
- | `dspy-evals` | High-throughput evaluation harness with metrics, callbacks, and regression fixtures. (Toggle via `DSPY_WITH_EVALS`.) | **Stable** (v1.0.0) |
- | `dspy-miprov2` | Bayesian optimization + Gaussian Process backend for the MIPROv2 teleprompter. (Install or export `DSPY_WITH_MIPROV2=1` before requiring the teleprompter.) | **Stable** (v1.0.0) |
- | `dspy-gepa` | `DSPy::Teleprompt::GEPA`, reflection loops, experiment tracking, telemetry adapters. (Install or set `DSPY_WITH_GEPA=1`.) | **Stable** (v1.0.0) |
- | `gepa` | GEPA optimizer core (Pareto engine, telemetry, reflective proposer). | **Stable** (v1.0.0) |
- | `dspy-o11y` | Core observability APIs: `DSPy::Observability`, async span processor, observation types. (Install or set `DSPY_WITH_O11Y=1`.) | **Stable** (v1.0.0) |
- | `dspy-o11y-langfuse` | Auto-configures DSPy observability to stream spans to Langfuse via OTLP. (Install or set `DSPY_WITH_O11Y_LANGFUSE=1`.) | **Stable** (v1.0.0) |
- | `dspy-deep_search` | Production DeepSearch loop with Exa-backed search/read, token budgeting, and instrumentation (Issue #163). | **Stable** (v1.0.0) |
- | `dspy-deep_research` | Planner/QA orchestration atop DeepSearch plus the memory supervisor used by the CLI example. | **Stable** (v1.0.0) |
- | `sorbet-toon` | Token-Oriented Object Notation (TOON) codec, prompt formatter, and Sorbet mixins for BAML/TOON Enhanced Prompting. [Sorbet::Toon README](https://github.com/vicentereig/dspy.rb/blob/main/lib/sorbet/toon/README.md) | **Alpha** (v0.1.0) |
+ ```ruby
+ solver = DSPy::ChainOfThought.new(MathProblem)
+ result = solver.call(problem: "If a train travels 120km in 2 hours, what's its speed?")

- **Provider adapters:** Add `dspy-openai`, `dspy-anthropic`, and/or `dspy-gemini` next to `dspy` in your Gemfile depending on which `DSPy::LM` providers you call. Each gem already depends on the official SDK (`openai`, `anthropic`, `gemini-ai`), and DSPy auto-loads the adapters when the gem is present—no extra `require` needed.
+ result.reasoning # => "Speed = Distance / Time = 120km / 2h = 60km/h"
+ result.answer # => "60 km/h"
+ ```

- Set the matching `DSPY_WITH_*` environment variables (see `Gemfile`) to include or exclude each sibling gem when running Bundler locally (for example `DSPY_WITH_GEPA=1` or `DSPY_WITH_O11Y_LANGFUSE=1`). Refer to `adr/013-dependency-tree.md` for the full dependency map and roadmap.
- ### Access to 200+ Models Across 5 Providers
+ ### ReAct Agents

- DSPy.rb provides unified access to major LLM providers with provider-specific optimizations:
+ Build agents that use tools to accomplish tasks:

  ```ruby
- # OpenAI (GPT-4, GPT-4o, GPT-4o-mini, GPT-5, etc.)
- DSPy.configure do |c|
-   c.lm = DSPy::LM.new('openai/gpt-4o-mini',
-     api_key: ENV['OPENAI_API_KEY'],
-     structured_outputs: true) # Native JSON mode
+ class SearchTool < DSPy::Tools::Base
+   tool_name "search"
+   tool_description "Search for information"
+
+   sig { params(query: String).returns(String) }
+   def call(query:)
+     # Your search implementation
+     "Result 1, Result 2"
+   end
  end

- # Google Gemini (Gemini 1.5 Pro, Flash, Gemini 2.0, etc.)
- DSPy.configure do |c|
-   c.lm = DSPy::LM.new('gemini/gemini-2.5-flash',
-     api_key: ENV['GEMINI_API_KEY'],
-     structured_outputs: true) # Native structured outputs
- end
+ agent = DSPy::ReAct.new(ResearchTask, tools: [SearchTool.new], max_iterations: 5)
+ result = agent.call(question: "What's the latest on Ruby 3.4?")
+ ```

- # Anthropic Claude (Claude 3.5, Claude 4, etc.)
- DSPy.configure do |c|
-   c.lm = DSPy::LM.new('anthropic/claude-sonnet-4-5-20250929',
-     api_key: ENV['ANTHROPIC_API_KEY'],
-     structured_outputs: true) # Tool-based extraction (default)
- end
+ ## What's Included

- # Ollama - Run any local model (Llama, Mistral, Gemma, etc.)
- DSPy.configure do |c|
-   c.lm = DSPy::LM.new('ollama/llama3.2') # Free, runs locally, no API key needed
- end
+ **Core Modules**: Predict, ChainOfThought, ReAct agents, and composable pipelines.

- # OpenRouter - Access to 200+ models from multiple providers
- DSPy.configure do |c|
-   c.lm = DSPy::LM.new('openrouter/deepseek/deepseek-chat-v3.1:free',
-     api_key: ENV['OPENROUTER_API_KEY'])
- end
- ```
+ **Type Safety**: Sorbet-based runtime validation. Enums, unions, nested structs—all work.
+
+ **Multimodal**: Image analysis with `DSPy::Image` for vision-capable models.

- ## What You Get
-
- **Developer Experience:** Official clients, multimodal coverage, and observability baked in.
- <details>
- <summary>Expand for everything included</summary>
-
- - LLM provider support using official Ruby clients:
-   - [OpenAI Ruby](https://github.com/openai/openai-ruby) with vision model support
-   - [Anthropic Ruby SDK](https://github.com/anthropics/anthropic-sdk-ruby) with multimodal capabilities
-   - [Google Gemini API](https://ai.google.dev/) with native structured outputs
-   - [Ollama](https://ollama.com/) via OpenAI compatibility layer for local models
- - **Multimodal Support** - Complete image analysis with DSPy::Image, type-safe bounding boxes, vision-capable models
- - Runtime type checking with [Sorbet](https://sorbet.org/) including T::Enum and union types
- - Type-safe tool definitions for ReAct agents
- - Comprehensive instrumentation and observability
- </details>
-
- **Core Building Blocks:** Predictors, agents, and pipelines wired through type-safe signatures.
- <details>
- <summary>Expand for everything included</summary>
-
- - **Signatures** - Define input/output schemas using Sorbet types with T::Enum and union type support
- - **Predict** - LLM completion with structured data extraction and multimodal support
- - **Chain of Thought** - Step-by-step reasoning for complex problems with automatic prompt optimization
- - **ReAct** - Tool-using agents with type-safe tool definitions and error recovery
- - **Module Composition** - Combine multiple LLM calls into production-ready workflows
- </details>
-
- **Optimization & Evaluation:** Treat prompt optimization like a real ML workflow.
- <details>
- <summary>Expand for everything included</summary>
-
- - **Prompt Objects** - Manipulate prompts as first-class objects instead of strings
- - **Typed Examples** - Type-safe training data with automatic validation
- - **Evaluation Framework** - Advanced metrics beyond simple accuracy with error-resilient pipelines
- - **MIPROv2 Optimization** - Advanced Bayesian optimization with Gaussian Processes, multiple optimization strategies, auto-config presets, and storage persistence
- </details>
-
- **Production Features:** Hardened behaviors for teams shipping actual products.
- <details>
- <summary>Expand for everything included</summary>
-
- - **Reliable JSON Extraction** - Native structured outputs for OpenAI and Gemini, Anthropic tool-based extraction, and automatic strategy selection with fallback
- - **Type-Safe Configuration** - Strategy enums with automatic provider optimization (Strict/Compatible modes)
- - **Smart Retry Logic** - Progressive fallback with exponential backoff for handling transient failures
- - **Zero-Config Langfuse Integration** - Set env vars and get automatic OpenTelemetry traces in Langfuse
- - **Performance Caching** - Schema and capability caching for faster repeated operations
- - **File-based Storage** - Optimization result persistence with versioning
- - **Structured Logging** - JSON and key=value formats with span tracking
- </details>
-
- ## Recent Achievements
-
- DSPy.rb has gone from experimental to production-ready in three fast releases.
- <details>
- <summary>Expand for the full changelog highlights</summary>
-
- ### Foundation
- - ✅ **JSON Parsing Reliability** - Native OpenAI structured outputs with adaptive retry logic and schema-aware fallbacks
- - ✅ **Type-Safe Strategy Configuration** - Provider-optimized strategy selection and enum-backed optimizer presets
- - ✅ **Core Module System** - Predict, ChainOfThought, ReAct with type safety (add `dspy-code_act` for Think-Code-Observe agents)
- - ✅ **Production Observability** - OpenTelemetry, New Relic, and Langfuse integration
- - ✅ **Advanced Optimization** - MIPROv2 with Bayesian optimization, Gaussian Processes, and multi-mode search
-
- ### Recent Advances
- - ✅ **MIPROv2 ADE Integrity (v0.29.1)** - Stratified train/val/test splits, honest precision accounting, and enum-driven `--auto` presets with integration coverage
- - ✅ **Instruction Deduplication (v0.29.1)** - Candidate generation now filters repeated programs so optimization logs highlight unique strategies
- - ✅ **GEPA Teleprompter (v0.29.0)** - Genetic-Pareto reflective prompt evolution with merge proposer scheduling, reflective mutation, and ADE demo parity
- - ✅ **Optimizer Utilities Parity (v0.29.0)** - Bootstrap strategies, dataset summaries, and Layer 3 utilities unlock multi-predictor programs on Ruby
- - ✅ **Observability Hardening (v0.29.0)** - OTLP exporter runs on a single-thread executor preventing frozen SSL contexts without blocking spans
- - ✅ **Documentation Refresh (v0.29.x)** - New GEPA guide plus ADE optimization docs covering presets, stratified splits, and error-handling defaults
- </details>
-
- **Current Focus Areas:** Closing the loop on production patterns and community adoption ahead of v1.0.
- <details>
- <summary>Expand for the roadmap</summary>
-
- ### Production Readiness
- - 🚧 **Production Patterns** - Real-world usage validation and performance optimization
- - 🚧 **Ruby Ecosystem Integration** - Rails integration, Sidekiq compatibility, deployment patterns
-
- ### Community & Adoption
- - 🚧 **Community Examples** - Real-world applications and case studies
- - 🚧 **Contributor Experience** - Making it easier to contribute and extend
- - 🚧 **Performance Benchmarks** - Comparative analysis vs other frameworks
- </details>
-
- **v1.0 Philosophy:** v1.0 lands after battle-testing, not checkbox bingo. The API is already stable; the milestone marks production confidence.
+ **Observability**: Zero-config Langfuse integration via OpenTelemetry. Non-blocking, production-ready.

+ **Optimization**: MIPROv2 (Bayesian optimization) and GEPA (genetic evolution) for prompt tuning.
+
+ **Provider Support**: OpenAI, Anthropic, Gemini, Ollama, and OpenRouter via official SDKs.

  ## Documentation

- 📖 **[Complete Documentation Website](https://vicentereig.github.io/dspy.rb/)**
+ **[Full Documentation](https://oss.vicente.services/dspy.rb/)** — Getting started, core concepts, advanced patterns.
+
+ **[llms.txt](https://oss.vicente.services/dspy.rb/llms.txt)** — LLM-friendly reference for AI assistants.

- ### LLM-Friendly Documentation
+ ### Claude Skill

- For LLMs and AI assistants working with DSPy.rb:
- - **[llms.txt](https://vicentereig.github.io/dspy.rb/llms.txt)** - Concise reference optimized for LLMs
- - **[llms-full.txt](https://vicentereig.github.io/dspy.rb/llms-full.txt)** - Comprehensive API documentation
+ A [Claude Skill](https://github.com/vicentereig/dspy-rb-skill) is available to help you build DSPy.rb applications:
+
+ ```bash
+ # Claude Code — install from the vicentereig/engineering marketplace
+ claude install-skill vicentereig/engineering --skill dspy-rb
+ ```

- ### Getting Started
- - **[Installation & Setup](docs/src/getting-started/installation.md)** - Detailed installation and configuration
- - **[Quick Start Guide](docs/src/getting-started/quick-start.md)** - Your first DSPy programs
- - **[Core Concepts](docs/src/getting-started/core-concepts.md)** - Understanding signatures, predictors, and modules
+ For Claude.ai Pro/Max, download the [skill ZIP](https://github.com/vicentereig/dspy-rb-skill/archive/refs/heads/main.zip) and upload via Settings > Skills.

- ### Prompt Engineering
- - **[Signatures & Types](docs/src/core-concepts/signatures.md)** - Define typed interfaces for LLM operations
- - **[Predictors](docs/src/core-concepts/predictors.md)** - Predict, ChainOfThought, ReAct, and more
- - **[Modules & Pipelines](docs/src/core-concepts/modules.md)** - Compose complex multi-stage workflows
- - **[Multimodal Support](docs/src/core-concepts/multimodal.md)** - Image analysis with vision-capable models
- - **[Examples & Validation](docs/src/core-concepts/examples.md)** - Type-safe training data
- - **[Rich Types](docs/src/advanced/complex-types.md)** - Sorbet type integration with automatic coercion for structs, enums, and arrays
- - **[Composable Pipelines](docs/src/advanced/pipelines.md)** - Manual module composition patterns
+ ## Examples

- ### Prompt Optimization
- - **[Evaluation Framework](docs/src/optimization/evaluation.md)** - Advanced metrics beyond simple accuracy
- - **[Prompt Optimization](docs/src/optimization/prompt-optimization.md)** - Manipulate prompts as objects
- - **[MIPROv2 Optimizer](docs/src/optimization/miprov2.md)** - Advanced Bayesian optimization with Gaussian Processes
- - **[GEPA Optimizer](docs/src/optimization/gepa.md)** *(beta)* - Reflective mutation with optional reflection LMs
+ The [examples/](examples/) directory has runnable code for common patterns:

- ### Context Engineering
- - **[Tools](docs/src/core-concepts/toolsets.md)** - Tool wieldint agents.
- - **[Agentic Memory](docs/src/core-concepts/memory.md)** - Memory Tools & Agentic Loops
- - **[RAG Patterns](docs/src/advanced/rag.md)** - Manual RAG implementation with external services
+ - Sentiment classification
+ - ReAct agents with tools
+ - Image analysis
+ - Prompt optimization

- ### Production Features
- - **[Observability](docs/src/production/observability.md)** - Zero-config Langfuse integration with a dedicated export worker that never blocks your LLMs
- - **[Storage System](docs/src/production/storage.md)** - Persistence and optimization result storage
- - **[Custom Metrics](docs/src/advanced/custom-metrics.md)** - Proc-based evaluation logic
+ ```bash
+ bundle exec ruby examples/basic_search_agent.rb
+ ```

+ ## Optional Gems

+ DSPy.rb ships sibling gems for features with heavier dependencies. Add them as needed:

+ | Gem | What it does |
+ | --- | --- |
+ | `dspy-datasets` | Dataset helpers, Parquet/Polars tooling |
+ | `dspy-evals` | Evaluation harness with metrics and callbacks |
+ | `dspy-miprov2` | Bayesian optimization for prompt tuning |
+ | `dspy-gepa` | Genetic-Pareto prompt evolution |
+ | `dspy-o11y-langfuse` | Auto-configure Langfuse tracing |
+ | `dspy-code_act` | Think-Code-Observe agents |
+ | `dspy-deep_search` | Production DeepSearch with Exa |

+ See [the full list](https://oss.vicente.services/dspy.rb/getting-started/installation/) in the docs.

+ ## Contributing

+ Feedback is invaluable. If you encounter issues, [open an issue](https://github.com/vicentereig/dspy.rb/issues). For suggestions, [start a discussion](https://github.com/vicentereig/dspy.rb/discussions).

+ Want to contribute code? Reach out: hey at vicente.services

  ## License
- This project is licensed under the MIT License.
+
+ MIT License.
data/lib/dspy/ruby_llm/lm/adapters/ruby_llm_adapter.rb CHANGED
@@ -5,9 +5,6 @@ require 'ruby_llm'
  require 'dspy/lm/adapter'
  require 'dspy/lm/vision_models'

- require 'dspy/ruby_llm/guardrails'
- DSPy::RubyLLM::Guardrails.ensure_ruby_llm_installed!
-
  module DSPy
    module RubyLLM
      module LM
@@ -49,10 +46,12 @@ module DSPy
        def chat(messages:, signature: nil, &block)
          normalized_messages = normalize_messages(messages)

+         validate_document_support!(normalized_messages)
+
          # Validate vision support if images are present
          if contains_images?(normalized_messages)
            validate_vision_support!
-           normalized_messages = format_multimodal_messages(normalized_messages)
+           normalized_messages = format_multimodal_messages(normalized_messages, provider)
          end

          chat_instance = create_chat_instance
@@ -255,6 +254,9 @@ module DSPy
            elsif item[:image_url]
              attachments << item[:image_url][:url]
            end
+         when 'document'
+           document = item[:document]
+           attachments << document.to_ruby_llm_attachment if document
          end
        end
        content = text_parts.join("\n")
@@ -263,6 +265,14 @@ module DSPy
          [content.to_s, attachments]
        end

+       def validate_document_support!(messages)
+         return unless contains_documents?(messages)
+         return if provider == 'anthropic'
+
+         raise DSPy::LM::IncompatibleDocumentFeatureError,
+               "RubyLLM document inputs are currently supported only when the underlying provider is Anthropic."
+       end
+
        def map_response(ruby_llm_response)
          DSPy::LM::Response.new(
            content: ruby_llm_response.content.to_s,
@@ -358,32 +368,6 @@ module DSPy
          # If DSPy doesn't know about the model, let RubyLLM handle it
          # RubyLLM has its own model registry with capability detection
        end
-
-       def format_multimodal_messages(messages)
-         messages.map do |msg|
-           if msg[:content].is_a?(Array)
-             formatted_content = msg[:content].map do |item|
-               case item[:type]
-               when 'text'
-                 { type: 'text', text: item[:text] }
-               when 'image'
-                 # Validate and format image for provider
-                 image = item[:image]
-                 if image.respond_to?(:validate_for_provider!)
-                   image.validate_for_provider!(provider)
-                 end
-                 item
-               else
-                 item
-               end
-             end
-
-             { role: msg[:role], content: formatted_content }
-           else
-             msg
-           end
-         end
-       end
      end
    end
  end
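The new `validate_document_support!` guard above rejects document attachments unless the underlying provider is Anthropic. A minimal standalone sketch of that logic (the class name, message shapes, and helper are illustrative, mirroring the hunk rather than the real adapter, which wires this through its own `provider` and normalized messages):

```ruby
# Hypothetical standalone version of the adapter's document guard.
class DocumentGuard
  class IncompatibleDocumentFeatureError < StandardError; end

  def initialize(provider)
    @provider = provider
  end

  # Raises unless the provider can accept document attachments.
  def validate!(messages)
    return unless contains_documents?(messages)
    return if @provider == 'anthropic'

    raise IncompatibleDocumentFeatureError,
          'RubyLLM document inputs are currently supported only when the underlying provider is Anthropic.'
  end

  private

  # Multimodal content arrives as an array of typed parts, e.g.
  # [{ type: 'text', text: '...' }, { type: 'document', document: ... }].
  def contains_documents?(messages)
    messages.any? do |msg|
      msg[:content].is_a?(Array) &&
        msg[:content].any? { |item| item[:type] == 'document' }
    end
  end
end
```

Plain-text messages pass through untouched; only document parts routed to a non-Anthropic provider fail fast instead of producing a confusing provider error later.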
data/lib/dspy/ruby_llm/version.rb CHANGED
@@ -2,6 +2,6 @@

  module DSPy
    module RubyLLM
-     VERSION = '0.1.0'
+     VERSION = '0.1.1'
    end
  end
data/lib/dspy/ruby_llm.rb CHANGED
@@ -2,7 +2,4 @@

  require 'dspy/ruby_llm/version'

- require 'dspy/ruby_llm/guardrails'
- DSPy::RubyLLM::Guardrails.ensure_ruby_llm_installed!
-
  require 'dspy/ruby_llm/lm/adapters/ruby_llm_adapter'
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: dspy-ruby_llm
  version: !ruby/object:Gem::Version
-   version: 0.1.0
+   version: 0.1.1
  platform: ruby
  authors:
  - Vicente Reig Rincón de Arellano
@@ -16,28 +16,40 @@ dependencies:
      requirements:
      - - ">="
        - !ruby/object:Gem::Version
-         version: '0.30'
+         version: 0.30.1
+     - - "<"
+       - !ruby/object:Gem::Version
+         version: '2.0'
    type: :runtime
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
      - - ">="
        - !ruby/object:Gem::Version
-         version: '0.30'
+         version: 0.30.1
+     - - "<"
+       - !ruby/object:Gem::Version
+         version: '2.0'
  - !ruby/object:Gem::Dependency
    name: ruby_llm
    requirement: !ruby/object:Gem::Requirement
      requirements:
-     - - "~>"
+     - - ">="
        - !ruby/object:Gem::Version
-         version: '1.3'
+         version: 1.14.1
+     - - "<"
+       - !ruby/object:Gem::Version
+         version: '2.0'
    type: :runtime
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
-     - - "~>"
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 1.14.1
+     - - "<"
        - !ruby/object:Gem::Version
-         version: '1.3'
+         version: '2.0'
  description: Provides a unified adapter using RubyLLM to access OpenAI, Anthropic,
    Gemini, Bedrock, Ollama, and more through a single interface in DSPy.rb projects.
  email:
@@ -51,7 +63,6 @@ files:
  - README.md
  - lib/dspy/ruby_llm.rb
  - lib/dspy/ruby_llm/README.md
- - lib/dspy/ruby_llm/guardrails.rb
  - lib/dspy/ruby_llm/lm/adapters/ruby_llm_adapter.rb
  - lib/dspy/ruby_llm/version.rb
  homepage: https://github.com/vicentereig/dspy.rb
data/lib/dspy/ruby_llm/guardrails.rb DELETED
@@ -1,24 +0,0 @@
- # frozen_string_literal: true
-
- require 'dspy/lm/errors'
-
- module DSPy
-   module RubyLLM
-     class Guardrails
-       SUPPORTED_RUBY_LLM_VERSIONS = "~> 1.3".freeze
-
-       def self.ensure_ruby_llm_installed!
-         require 'ruby_llm'
-
-         spec = Gem.loaded_specs["ruby_llm"]
-         unless spec && Gem::Requirement.new(SUPPORTED_RUBY_LLM_VERSIONS).satisfied_by?(spec.version)
-           msg = <<~MSG
-             DSPy requires the `ruby_llm` gem #{SUPPORTED_RUBY_LLM_VERSIONS}.
-             Please install or upgrade it with `bundle add ruby_llm --version "#{SUPPORTED_RUBY_LLM_VERSIONS}"`.
-           MSG
-           raise DSPy::LM::UnsupportedVersionError, msg
-         end
-       end
-     end
-   end
- end
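Dropping this runtime guardrail is consistent with the metadata hunk earlier in the diff: the gemspec now carries the version constraint instead, raising the floor from the old `~> 1.3` to `>= 1.14.1` while still capping below 2.0. The practical difference between the two requirements can be checked with `Gem::Requirement` directly:

```ruby
# The removed runtime check enforced "~> 1.3"; the gemspec now declares
# ">= 1.14.1", "< 2.0". Comparing what each accepts:
old_req = Gem::Requirement.new('~> 1.3')
new_req = Gem::Requirement.new('>= 1.14.1', '< 2.0')

puts old_req.satisfied_by?(Gem::Version.new('1.14.1')) # true: ~> 1.3 allows any 1.x at or above 1.3
puts new_req.satisfied_by?(Gem::Version.new('1.3.0'))  # false: the floor is now 1.14.1
puts new_req.satisfied_by?(Gem::Version.new('2.0.0'))  # false: still capped below 2.0
```

Moving the check into the gemspec lets Bundler resolve a compatible `ruby_llm` at install time rather than failing at require time.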