dspy-ruby_llm 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: 8442dc722fa4ef77810d65ede3d52c66910af543550138add7656c4e703ada7b
+   data.tar.gz: 0ab6ee908cc82c0b674e52f2b5eeee6038266fc41171ba270aa1b13dd641e7f7
+ SHA512:
+   metadata.gz: 68d12fbfe7c4732c07b7287f55a771ad8f55a4d7fc7eb66bd9f6a5f4a267b65811c9d2771604f083835d352b0c0907d9b01a1bf211cc454d8af87ad644bcabe5
+   data.tar.gz: 9c754e16b521f75373ca648a60010caa467b78c87ba1ea72c8bb63a5d6a9260261c4c42b004e42603d816f123db409090dd903de985e30444a8c18449eeef3d6
data/LICENSE ADDED
@@ -0,0 +1,45 @@
+ MIT License
+
+ Copyright (c) 2025 Vicente Services SL
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
+
+ This project is a Ruby port of the original Python [DSPy library](https://github.com/stanfordnlp/dspy), which is licensed under the MIT License:
+
+ MIT License
+
+ Copyright (c) 2023 Stanford Future Data Systems
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
data/README.md ADDED
@@ -0,0 +1,297 @@
+ # DSPy.rb
+
+ [![Gem Version](https://img.shields.io/gem/v/dspy)](https://rubygems.org/gems/dspy)
+ [![Total Downloads](https://img.shields.io/gem/dt/dspy)](https://rubygems.org/gems/dspy)
+ [![Build Status](https://img.shields.io/github/actions/workflow/status/vicentereig/dspy.rb/ruby.yml?branch=main&label=build)](https://github.com/vicentereig/dspy.rb/actions/workflows/ruby.yml)
+ [![Documentation](https://img.shields.io/badge/docs-vicentereig.github.io%2Fdspy.rb-blue)](https://vicentereig.github.io/dspy.rb/)
+ [![Discord](https://img.shields.io/discord/1161519468141355160?label=discord&logo=discord&logoColor=white)](https://discord.gg/zWBhrMqn)
+
+ > [!NOTE]
+ > The core Prompt Engineering Framework is production-ready with
+ > comprehensive documentation. I am now focusing on educational content about systematic Prompt Optimization and Context Engineering.
+ > Your feedback is invaluable. If you encounter issues, please open an [issue](https://github.com/vicentereig/dspy.rb/issues). If you have suggestions, open a [new thread](https://github.com/vicentereig/dspy.rb/discussions).
+ >
+ > If you want to contribute, feel free to reach out to me to coordinate efforts: hey at vicente.services
+ >
+
+ **Build reliable LLM applications in idiomatic Ruby using composable, type-safe modules.**
+
+ DSPy.rb is the Ruby-first surgical port of Stanford's [DSPy paradigm](https://github.com/stanfordnlp/dspy). It delivers structured LLM programming, prompt engineering, and context engineering in the language we love. Instead of wrestling with brittle prompt strings, you define typed signatures in idiomatic Ruby and compose workflows and agents that actually behave.
+
+ **Prompts are just functions.** Traditional prompting is like writing code with string concatenation: it works until it doesn't. DSPy.rb brings you the programming approach pioneered by [dspy.ai](https://dspy.ai/): define modular signatures and let the framework deal with the messy bits.
+
+ While we implement the same signatures, predictors, and optimization algorithms as the original library, DSPy.rb leans hard into Ruby conventions with Sorbet-based typing, ReAct loops, and production-ready integrations like non-blocking OpenTelemetry instrumentation.
+
+ **What do you get?** Ruby LLM applications that scale and don't break when you sneeze.
+
+ Check out the [examples](examples/) and take them for a spin!
+
+ ## Your First DSPy Program
+
+ ### Installation
+
+ Add to your Gemfile:
+
+ ```ruby
+ gem 'dspy'
+ ```
+
+ then run:
+
+ ```bash
+ bundle install
+ ```
+
+ ### Your First Reliable Predictor
+
+ ```ruby
+ require 'dspy'
+
+ # Configure DSPy globally to use your fave LLM (you can override per predictor).
+ DSPy.configure do |c|
+   c.lm = DSPy::LM.new('openai/gpt-4o-mini',
+                       api_key: ENV['OPENAI_API_KEY'],
+                       structured_outputs: true) # Enable OpenAI's native JSON mode
+ end
+
+ # Define a signature for sentiment classification - instead of writing a full prompt!
+ class Classify < DSPy::Signature
+   description "Classify sentiment of a given sentence." # sets the goal of the underlying prompt
+
+   class Sentiment < T::Enum
+     enums do
+       Positive = new('positive')
+       Negative = new('negative')
+       Neutral = new('neutral')
+     end
+   end
+
+   # Structured Inputs: makes sure you are sending only valid prompt inputs to your model
+   input do
+     const :sentence, String, description: 'The sentence to analyze'
+   end
+
+   # Structured Outputs: your predictor will validate the output of the model too.
+   output do
+     const :sentiment, Sentiment, description: 'The sentiment of the sentence'
+     const :confidence, Float, description: 'A number between 0.0 and 1.0'
+   end
+ end
+
+ # Wire it to the simplest prompting technique: a prediction loop.
+ classify = DSPy::Predict.new(Classify)
+ # It may raise an error if you mess up the inputs or your LLM messes up the outputs.
+ result = classify.call(sentence: "This book was super fun to read!")
+
+ puts result.sentiment # => #<Sentiment::Positive>
+ puts result.confidence # => 0.85
+ ```
+
+ Save this as `examples/first_predictor.rb` and run it with:
+
+ ```bash
+ bundle exec ruby examples/first_predictor.rb
+ ```
+
+ ### Sibling Gems
+
+ DSPy.rb ships multiple gems from this monorepo so you can opt into features with heavier dependency trees (e.g., datasets pull in Polars/Arrow, MIPROv2 requires `numo-*` BLAS bindings) only when you need them. Add these alongside `dspy`:
+
+ | Gem | Description | Status |
+ | --- | --- | --- |
+ | `dspy-schema` | Exposes `DSPy::TypeSystem::SorbetJsonSchema` for downstream reuse. (Still required by the core `dspy` gem; extraction lets other projects depend on it directly.) | **Stable** (v1.0.0) |
+ | `dspy-openai` | Packages the OpenAI/OpenRouter/Ollama adapters plus the official SDK guardrails. Install whenever you call `openai/*`, `openrouter/*`, or `ollama/*`. [Adapter README](https://github.com/vicentereig/dspy.rb/blob/main/lib/dspy/openai/README.md) | **Stable** (v1.0.0) |
+ | `dspy-anthropic` | Claude adapters, streaming, and structured-output helpers behind the official `anthropic` SDK. [Adapter README](https://github.com/vicentereig/dspy.rb/blob/main/lib/dspy/anthropic/README.md) | **Stable** (v1.0.0) |
+ | `dspy-gemini` | Gemini adapters with multimodal + tool-call support via `gemini-ai`. [Adapter README](https://github.com/vicentereig/dspy.rb/blob/main/lib/dspy/gemini/README.md) | **Stable** (v1.0.0) |
+ | `dspy-ruby_llm` | Unified access to 12+ LLM providers (OpenAI, Anthropic, Gemini, Bedrock, Ollama, DeepSeek, etc.) via [RubyLLM](https://rubyllm.com). [Adapter README](https://github.com/vicentereig/dspy.rb/blob/main/lib/dspy/ruby_llm/README.md) | **Stable** (v0.1.0) |
+ | `dspy-code_act` | Think-Code-Observe agents that synthesize and execute Ruby safely. (Add the gem or set `DSPY_WITH_CODE_ACT=1` before requiring `dspy/code_act`.) | **Stable** (v1.0.0) |
+ | `dspy-datasets` | Dataset helpers plus Parquet/Polars tooling for richer evaluation corpora. (Toggle via `DSPY_WITH_DATASETS`.) | **Stable** (v1.0.0) |
+ | `dspy-evals` | High-throughput evaluation harness with metrics, callbacks, and regression fixtures. (Toggle via `DSPY_WITH_EVALS`.) | **Stable** (v1.0.0) |
+ | `dspy-miprov2` | Bayesian optimization + Gaussian Process backend for the MIPROv2 teleprompter. (Install or export `DSPY_WITH_MIPROV2=1` before requiring the teleprompter.) | **Stable** (v1.0.0) |
+ | `dspy-gepa` | `DSPy::Teleprompt::GEPA`, reflection loops, experiment tracking, telemetry adapters. (Install or set `DSPY_WITH_GEPA=1`.) | **Stable** (v1.0.0) |
+ | `gepa` | GEPA optimizer core (Pareto engine, telemetry, reflective proposer). | **Stable** (v1.0.0) |
+ | `dspy-o11y` | Core observability APIs: `DSPy::Observability`, async span processor, observation types. (Install or set `DSPY_WITH_O11Y=1`.) | **Stable** (v1.0.0) |
+ | `dspy-o11y-langfuse` | Auto-configures DSPy observability to stream spans to Langfuse via OTLP. (Install or set `DSPY_WITH_O11Y_LANGFUSE=1`.) | **Stable** (v1.0.0) |
+ | `dspy-deep_search` | Production DeepSearch loop with Exa-backed search/read, token budgeting, and instrumentation (Issue #163). | **Stable** (v1.0.0) |
+ | `dspy-deep_research` | Planner/QA orchestration atop DeepSearch plus the memory supervisor used by the CLI example. | **Stable** (v1.0.0) |
+ | `sorbet-toon` | Token-Oriented Object Notation (TOON) codec, prompt formatter, and Sorbet mixins for BAML/TOON Enhanced Prompting. [Sorbet::Toon README](https://github.com/vicentereig/dspy.rb/blob/main/lib/sorbet/toon/README.md) | **Alpha** (v0.1.0) |
+
+ **Provider adapters:** Add `dspy-openai`, `dspy-anthropic`, and/or `dspy-gemini` next to `dspy` in your Gemfile depending on which `DSPy::LM` providers you call. Each gem already depends on the official SDK (`openai`, `anthropic`, `gemini-ai`), and DSPy auto-loads the adapters when the gem is present—no extra `require` needed.
+
+ Set the matching `DSPY_WITH_*` environment variables (see `Gemfile`) to include or exclude each sibling gem when running Bundler locally (for example `DSPY_WITH_GEPA=1` or `DSPY_WITH_O11Y_LANGFUSE=1`). Refer to `adr/013-dependency-tree.md` for the full dependency map and roadmap.
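+
+ As a rough sketch (the gem names come from the table above; treat the exact layout as illustrative rather than the canonical `Gemfile`), an env-gated entry can look like this:
+
+ ```ruby
+ # Gemfile - hypothetical sketch of opting into sibling gems.
+ gem 'dspy'
+ gem 'dspy-openai' # provider adapter, auto-loaded when present
+
+ # Heavier extras only when the matching env var is set,
+ # e.g. `DSPY_WITH_GEPA=1 bundle install`.
+ gem 'dspy-gepa' if ENV['DSPY_WITH_GEPA'] == '1'
+ gem 'dspy-o11y-langfuse' if ENV['DSPY_WITH_O11Y_LANGFUSE'] == '1'
+ ```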
+
+ ### Access to 200+ Models Across 5 Providers
+
+ DSPy.rb provides unified access to major LLM providers with provider-specific optimizations:
+
+ ```ruby
+ # OpenAI (GPT-4, GPT-4o, GPT-4o-mini, GPT-5, etc.)
+ DSPy.configure do |c|
+   c.lm = DSPy::LM.new('openai/gpt-4o-mini',
+                       api_key: ENV['OPENAI_API_KEY'],
+                       structured_outputs: true) # Native JSON mode
+ end
+
+ # Google Gemini (Gemini 1.5 Pro, Flash, Gemini 2.0, etc.)
+ DSPy.configure do |c|
+   c.lm = DSPy::LM.new('gemini/gemini-2.5-flash',
+                       api_key: ENV['GEMINI_API_KEY'],
+                       structured_outputs: true) # Native structured outputs
+ end
+
+ # Anthropic Claude (Claude 3.5, Claude 4, etc.)
+ DSPy.configure do |c|
+   c.lm = DSPy::LM.new('anthropic/claude-sonnet-4-5-20250929',
+                       api_key: ENV['ANTHROPIC_API_KEY'],
+                       structured_outputs: true) # Tool-based extraction (default)
+ end
+
+ # Ollama - Run any local model (Llama, Mistral, Gemma, etc.)
+ DSPy.configure do |c|
+   c.lm = DSPy::LM.new('ollama/llama3.2') # Free, runs locally, no API key needed
+ end
+
+ # OpenRouter - Access to 200+ models from multiple providers
+ DSPy.configure do |c|
+   c.lm = DSPy::LM.new('openrouter/deepseek/deepseek-chat-v3.1:free',
+                       api_key: ENV['OPENROUTER_API_KEY'])
+ end
+ ```
+
+ ## What You Get
+
+ **Developer Experience:** Official clients, multimodal coverage, and observability baked in.
+ <details>
+ <summary>Expand for everything included</summary>
+
+ - LLM provider support using official Ruby clients:
+   - [OpenAI Ruby](https://github.com/openai/openai-ruby) with vision model support
+   - [Anthropic Ruby SDK](https://github.com/anthropics/anthropic-sdk-ruby) with multimodal capabilities
+   - [Google Gemini API](https://ai.google.dev/) with native structured outputs
+   - [Ollama](https://ollama.com/) via OpenAI compatibility layer for local models
+ - **Multimodal Support** - Complete image analysis with DSPy::Image, type-safe bounding boxes, vision-capable models
+ - Runtime type checking with [Sorbet](https://sorbet.org/) including T::Enum and union types
+ - Type-safe tool definitions for ReAct agents
+ - Comprehensive instrumentation and observability
+ </details>
+
+ **Core Building Blocks:** Predictors, agents, and pipelines wired through type-safe signatures.
+ <details>
+ <summary>Expand for everything included</summary>
+
+ - **Signatures** - Define input/output schemas using Sorbet types with T::Enum and union type support
+ - **Predict** - LLM completion with structured data extraction and multimodal support
+ - **Chain of Thought** - Step-by-step reasoning for complex problems with automatic prompt optimization
+ - **ReAct** - Tool-using agents with type-safe tool definitions and error recovery
+ - **Module Composition** - Combine multiple LLM calls into production-ready workflows
+ </details>
+
+ **Optimization & Evaluation:** Treat prompt optimization like a real ML workflow.
+ <details>
+ <summary>Expand for everything included</summary>
+
+ - **Prompt Objects** - Manipulate prompts as first-class objects instead of strings
+ - **Typed Examples** - Type-safe training data with automatic validation
+ - **Evaluation Framework** - Advanced metrics beyond simple accuracy with error-resilient pipelines
+ - **MIPROv2 Optimization** - Advanced Bayesian optimization with Gaussian Processes, multiple optimization strategies, auto-config presets, and storage persistence
+ </details>
+
+ **Production Features:** Hardened behaviors for teams shipping actual products.
+ <details>
+ <summary>Expand for everything included</summary>
+
+ - **Reliable JSON Extraction** - Native structured outputs for OpenAI and Gemini, Anthropic tool-based extraction, and automatic strategy selection with fallback
+ - **Type-Safe Configuration** - Strategy enums with automatic provider optimization (Strict/Compatible modes)
+ - **Smart Retry Logic** - Progressive fallback with exponential backoff for handling transient failures
+ - **Zero-Config Langfuse Integration** - Set env vars and get automatic OpenTelemetry traces in Langfuse
+ - **Performance Caching** - Schema and capability caching for faster repeated operations
+ - **File-based Storage** - Optimization result persistence with versioning
+ - **Structured Logging** - JSON and key=value formats with span tracking
+ </details>
+
+ ## Recent Achievements
+
+ DSPy.rb has gone from experimental to production-ready in three fast releases.
+ <details>
+ <summary>Expand for the full changelog highlights</summary>
+
+ ### Foundation
+ - ✅ **JSON Parsing Reliability** - Native OpenAI structured outputs with adaptive retry logic and schema-aware fallbacks
+ - ✅ **Type-Safe Strategy Configuration** - Provider-optimized strategy selection and enum-backed optimizer presets
+ - ✅ **Core Module System** - Predict, ChainOfThought, ReAct with type safety (add `dspy-code_act` for Think-Code-Observe agents)
+ - ✅ **Production Observability** - OpenTelemetry, New Relic, and Langfuse integration
+ - ✅ **Advanced Optimization** - MIPROv2 with Bayesian optimization, Gaussian Processes, and multi-mode search
+
+ ### Recent Advances
+ - ✅ **MIPROv2 ADE Integrity (v0.29.1)** - Stratified train/val/test splits, honest precision accounting, and enum-driven `--auto` presets with integration coverage
+ - ✅ **Instruction Deduplication (v0.29.1)** - Candidate generation now filters repeated programs so optimization logs highlight unique strategies
+ - ✅ **GEPA Teleprompter (v0.29.0)** - Genetic-Pareto reflective prompt evolution with merge proposer scheduling, reflective mutation, and ADE demo parity
+ - ✅ **Optimizer Utilities Parity (v0.29.0)** - Bootstrap strategies, dataset summaries, and Layer 3 utilities unlock multi-predictor programs on Ruby
+ - ✅ **Observability Hardening (v0.29.0)** - OTLP exporter runs on a single-thread executor preventing frozen SSL contexts without blocking spans
+ - ✅ **Documentation Refresh (v0.29.x)** - New GEPA guide plus ADE optimization docs covering presets, stratified splits, and error-handling defaults
+ </details>
+
+ **Current Focus Areas:** Closing the loop on production patterns and community adoption ahead of v1.0.
+ <details>
+ <summary>Expand for the roadmap</summary>
+
+ ### Production Readiness
+ - 🚧 **Production Patterns** - Real-world usage validation and performance optimization
+ - 🚧 **Ruby Ecosystem Integration** - Rails integration, Sidekiq compatibility, deployment patterns
+
+ ### Community & Adoption
+ - 🚧 **Community Examples** - Real-world applications and case studies
+ - 🚧 **Contributor Experience** - Making it easier to contribute and extend
+ - 🚧 **Performance Benchmarks** - Comparative analysis vs other frameworks
+ </details>
+
+ **v1.0 Philosophy:** v1.0 lands after battle-testing, not checkbox bingo. The API is already stable; the milestone marks production confidence.
+
+ ## Documentation
+
+ 📖 **[Complete Documentation Website](https://vicentereig.github.io/dspy.rb/)**
+
+ ### LLM-Friendly Documentation
+
+ For LLMs and AI assistants working with DSPy.rb:
+ - **[llms.txt](https://vicentereig.github.io/dspy.rb/llms.txt)** - Concise reference optimized for LLMs
+ - **[llms-full.txt](https://vicentereig.github.io/dspy.rb/llms-full.txt)** - Comprehensive API documentation
+
+ ### Getting Started
+ - **[Installation & Setup](docs/src/getting-started/installation.md)** - Detailed installation and configuration
+ - **[Quick Start Guide](docs/src/getting-started/quick-start.md)** - Your first DSPy programs
+ - **[Core Concepts](docs/src/getting-started/core-concepts.md)** - Understanding signatures, predictors, and modules
+
+ ### Prompt Engineering
+ - **[Signatures & Types](docs/src/core-concepts/signatures.md)** - Define typed interfaces for LLM operations
+ - **[Predictors](docs/src/core-concepts/predictors.md)** - Predict, ChainOfThought, ReAct, and more
+ - **[Modules & Pipelines](docs/src/core-concepts/modules.md)** - Compose complex multi-stage workflows
+ - **[Multimodal Support](docs/src/core-concepts/multimodal.md)** - Image analysis with vision-capable models
+ - **[Examples & Validation](docs/src/core-concepts/examples.md)** - Type-safe training data
+ - **[Rich Types](docs/src/advanced/complex-types.md)** - Sorbet type integration with automatic coercion for structs, enums, and arrays
+ - **[Composable Pipelines](docs/src/advanced/pipelines.md)** - Manual module composition patterns
+
+ ### Prompt Optimization
+ - **[Evaluation Framework](docs/src/optimization/evaluation.md)** - Advanced metrics beyond simple accuracy
+ - **[Prompt Optimization](docs/src/optimization/prompt-optimization.md)** - Manipulate prompts as objects
+ - **[MIPROv2 Optimizer](docs/src/optimization/miprov2.md)** - Advanced Bayesian optimization with Gaussian Processes
+ - **[GEPA Optimizer](docs/src/optimization/gepa.md)** *(beta)* - Reflective mutation with optional reflection LMs
+
+ ### Context Engineering
+ - **[Tools](docs/src/core-concepts/toolsets.md)** - Tool-wielding agents
+ - **[Agentic Memory](docs/src/core-concepts/memory.md)** - Memory Tools & Agentic Loops
+ - **[RAG Patterns](docs/src/advanced/rag.md)** - Manual RAG implementation with external services
+
+ ### Production Features
+ - **[Observability](docs/src/production/observability.md)** - Zero-config Langfuse integration with a dedicated export worker that never blocks your LLMs
+ - **[Storage System](docs/src/production/storage.md)** - Persistence and optimization result storage
+ - **[Custom Metrics](docs/src/advanced/custom-metrics.md)** - Proc-based evaluation logic
+
+ ## License
+ This project is licensed under the MIT License.
data/lib/dspy/ruby_llm/README.md ADDED
@@ -0,0 +1,174 @@
+ # DSPy RubyLLM Adapter
+
+ Unified access to 12+ LLM providers through a single adapter using [RubyLLM](https://rubyllm.com).
+
+ ## Installation
+
+ Add to your Gemfile:
+
+ ```ruby
+ gem 'dspy-ruby_llm'
+ ```
+
+ ## Usage
+
+ ### Using Existing RubyLLM Configuration (Recommended)
+
+ If you already have RubyLLM configured, DSPy will use your existing setup automatically:
+
+ ```ruby
+ # Your existing RubyLLM configuration
+ RubyLLM.configure do |config|
+   config.openai_api_key = ENV['OPENAI_API_KEY']
+   config.anthropic_api_key = ENV['ANTHROPIC_API_KEY']
+ end
+
+ # DSPy uses your existing config - no api_key needed!
+ lm = DSPy::LM.new("ruby_llm/gpt-4o")
+ lm = DSPy::LM.new("ruby_llm/claude-sonnet-4")
+ ```
+
+ ### Model ID Format
+
+ Use `ruby_llm/{model_id}` format where `model_id` is the RubyLLM model identifier:
+
+ ```ruby
+ # With explicit API key (creates scoped context)
+ lm = DSPy::LM.new("ruby_llm/gpt-4o", api_key: ENV['OPENAI_API_KEY'])
+
+ # Or use global RubyLLM config (no api_key needed)
+ lm = DSPy::LM.new("ruby_llm/gpt-4o")
+ lm = DSPy::LM.new("ruby_llm/claude-sonnet-4")
+ lm = DSPy::LM.new("ruby_llm/gemini-1.5-pro")
+
+ # For models not in RubyLLM registry, specify provider explicitly
+ lm = DSPy::LM.new("ruby_llm/llama3.2", provider: 'ollama')
+ ```
+
+ The adapter detects the provider from RubyLLM's model registry. For models not in the registry, use the `provider:` option.
+
+ ### Provider Override
+
+ For custom deployments or models not in the registry, explicitly specify the provider:
+
+ ```ruby
+ # OpenRouter
+ lm = DSPy::LM.new("ruby_llm/anthropic/claude-3-opus",
+   api_key: ENV['OPENROUTER_API_KEY'],
+   provider: 'openrouter'
+ )
+
+ # Custom model with explicit provider
+ lm = DSPy::LM.new("ruby_llm/my-custom-model",
+   api_key: ENV['OPENAI_API_KEY'],
+   provider: 'openai',
+   base_url: 'https://custom-endpoint.com/v1'
+ )
+
+ # AWS Bedrock - configure RubyLLM globally first
+ RubyLLM.configure do |c|
+   c.bedrock_api_key = ENV['AWS_ACCESS_KEY_ID']
+   c.bedrock_secret_key = ENV['AWS_SECRET_ACCESS_KEY']
+   c.bedrock_region = 'us-east-1'
+ end
+ lm = DSPy::LM.new("ruby_llm/anthropic.claude-3-5-sonnet", provider: 'bedrock')
+
+ # VertexAI - configure RubyLLM globally first
+ RubyLLM.configure do |c|
+   c.vertexai_project_id = 'your-project-id'
+   c.vertexai_location = 'us-central1'
+ end
+ lm = DSPy::LM.new("ruby_llm/gemini-pro", provider: 'vertexai')
+ ```
+
+ ### Supported Providers
+
+ | Provider | Example Model ID | Notes |
+ |----------|------------------|-------|
+ | OpenAI | `ruby_llm/gpt-4o` | In RubyLLM registry |
+ | Anthropic | `ruby_llm/claude-sonnet-4` | In RubyLLM registry |
+ | Google Gemini | `ruby_llm/gemini-1.5-pro` | In RubyLLM registry |
+ | DeepSeek | `ruby_llm/deepseek-chat` | In RubyLLM registry |
+ | Mistral | `ruby_llm/mistral-large` | In RubyLLM registry |
+ | Ollama | `ruby_llm/llama3.2` | Use `provider: 'ollama'`, no API key needed |
+ | AWS Bedrock | `ruby_llm/anthropic.claude-3-5-sonnet` | Configure RubyLLM globally |
+ | VertexAI | `ruby_llm/gemini-pro` | Configure RubyLLM globally |
+ | OpenRouter | `ruby_llm/anthropic/claude-3-opus` | Use `provider: 'openrouter'` |
+ | Perplexity | `ruby_llm/llama-3.1-sonar-large` | Use `provider: 'perplexity'` |
+ | GPUStack | `ruby_llm/model-name` | Use `provider: 'gpustack'` |
+
+ ### Configuration Options
+
+ ```ruby
+ lm = DSPy::LM.new("ruby_llm/gpt-4o",
+   api_key: ENV['OPENAI_API_KEY'],    # API key (or use global RubyLLM config)
+   base_url: 'https://custom.com/v1', # Custom endpoint
+   timeout: 120,                      # Request timeout in seconds
+   max_retries: 3,                    # Retry count
+   structured_outputs: true           # Enable JSON schema (default: true)
+ )
+ ```
+
+ For providers with non-standard auth (Bedrock, VertexAI), configure RubyLLM globally - see examples above.
+
+ ### With DSPy Signatures
+
+ ```ruby
+ class Summarize < DSPy::Signature
+   description "Summarize the given text"
+
+   input do
+     const :text, String
+   end
+
+   output do
+     const :summary, String
+   end
+ end
+
+ DSPy.configure do |config|
+   config.lm = DSPy::LM.new("ruby_llm/claude-sonnet-4")
+ end
+
+ summarizer = DSPy::Predict.new(Summarize)
+ result = summarizer.call(text: "Long article text here...")
+ puts result.summary
+ ```
+
+ ### Streaming
+
+ ```ruby
+ lm = DSPy::LM.new("ruby_llm/gpt-4o", api_key: ENV['OPENAI_API_KEY'])
+
+ response = lm.chat(messages: [{ role: 'user', content: 'Tell me a story' }]) do |chunk|
+   print chunk # Print each chunk as it arrives
+ end
+ ```
+
+ ## Dependencies
+
+ This gem depends on:
+ - `dspy` (>= 0.32)
+ - `ruby_llm` (~> 1.3)
+
+ RubyLLM itself has minimal dependencies (Faraday, Zeitwerk, Marcel).
+
+ ## Why Use This Adapter?
+
+ 1. **Unified interface** - One API for all providers
+ 2. **Lightweight** - RubyLLM has only 3 dependencies
+ 3. **Provider coverage** - Access Bedrock, VertexAI, DeepSeek without separate adapters
+ 4. **Built-in retries** - Automatic retry with exponential backoff
+ 5. **Model registry** - 500+ models with capability detection and auto provider resolution
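+
+ For example, the registry lookup the adapter performs internally can be inspected directly. A quick sketch (the model id and the printed output are illustrative and may vary by RubyLLM version):
+
+ ```ruby
+ require 'ruby_llm'
+
+ # Resolve a model the same way the adapter does when no provider: is given.
+ model = RubyLLM.models.find('gpt-4o')
+ puts model.provider # auto provider resolution, e.g. openai
+
+ # Browse the full registry, e.g. to check a model id before using it.
+ puts RubyLLM.models.all.size
+ ```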
+
+ ## Error Handling
+
+ The adapter maps RubyLLM errors to DSPy error types:
+
+ | RubyLLM Error | DSPy Error |
+ |---------------|------------|
+ | `UnauthorizedError` | `MissingAPIKeyError` |
+ | `RateLimitError` | `AdapterError` (with retry hint) |
+ | `ModelNotFoundError` | `AdapterError` |
+ | `BadRequestError` | `AdapterError` |
+ | `ConfigurationError` | `ConfigurationError` |
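+
+ In practice this means application code only needs to rescue DSPy's error types, regardless of the underlying provider. A minimal sketch (error classes as listed above; the messages are illustrative):
+
+ ```ruby
+ lm = DSPy::LM.new("ruby_llm/gpt-4o", api_key: ENV['OPENAI_API_KEY'])
+
+ begin
+   response = lm.chat(messages: [{ role: 'user', content: 'Hello!' }])
+   puts response.content
+ rescue DSPy::LM::MissingAPIKeyError => e
+   warn "Credentials problem: #{e.message}"
+ rescue DSPy::LM::AdapterError => e
+   warn "Provider error (rate limit, bad request, unknown model): #{e.message}"
+ end
+ ```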
data/lib/dspy/ruby_llm/guardrails.rb ADDED
@@ -0,0 +1,24 @@
+ # frozen_string_literal: true
+
+ require 'dspy/lm/errors'
+
+ module DSPy
+   module RubyLLM
+     class Guardrails
+       SUPPORTED_RUBY_LLM_VERSIONS = "~> 1.3".freeze
+
+       def self.ensure_ruby_llm_installed!
+         require 'ruby_llm'
+
+         spec = Gem.loaded_specs["ruby_llm"]
+         unless spec && Gem::Requirement.new(SUPPORTED_RUBY_LLM_VERSIONS).satisfied_by?(spec.version)
+           msg = <<~MSG
+             DSPy requires the `ruby_llm` gem #{SUPPORTED_RUBY_LLM_VERSIONS}.
+             Please install or upgrade it with `bundle add ruby_llm --version "#{SUPPORTED_RUBY_LLM_VERSIONS}"`.
+           MSG
+           raise DSPy::LM::UnsupportedVersionError, msg
+         end
+       end
+     end
+   end
+ end
data/lib/dspy/ruby_llm/lm/adapters/ruby_llm_adapter.rb ADDED
@@ -0,0 +1,391 @@
+ # frozen_string_literal: true
+
+ require 'uri'
+ require 'ruby_llm'
+ require 'dspy/lm/adapter'
+ require 'dspy/lm/vision_models'
+
+ require 'dspy/ruby_llm/guardrails'
+ DSPy::RubyLLM::Guardrails.ensure_ruby_llm_installed!
+
+ module DSPy
+   module RubyLLM
+     module LM
+       module Adapters
+         class RubyLLMAdapter < DSPy::LM::Adapter
+           attr_reader :provider
+
+           # Options that require a scoped context instead of global RubyLLM config
+           SCOPED_OPTIONS = %i[base_url timeout max_retries].freeze
+
+           def initialize(model:, api_key: nil, **options)
+             @api_key = api_key
+             @options = options
+             @structured_outputs_enabled = options.fetch(:structured_outputs, true)
+             @provider_override = options[:provider] # Optional provider override
+
+             # Detect provider eagerly (matches OpenAI/Anthropic/Gemini adapters)
+             @provider = detect_provider(model)
+
+             # Determine if we should use global RubyLLM config or create scoped context
+             @use_global_config = should_use_global_config?(api_key, options)
+
+             super(model: model, api_key: api_key)
+
+             # Only validate API key if not using global config
+             unless @use_global_config
+               validate_api_key_for_provider!(api_key)
+             end
+
+             # Validate base_url if provided
+             validate_base_url!(@options[:base_url])
+           end
+
+           # Returns the context - either scoped or global
+           def context
+             @context ||= @use_global_config ? nil : create_context(@api_key)
+           end
+
+           def chat(messages:, signature: nil, &block)
+             normalized_messages = normalize_messages(messages)
+
+             # Validate vision support if images are present
+             if contains_images?(normalized_messages)
+               validate_vision_support!
+               normalized_messages = format_multimodal_messages(normalized_messages)
+             end
+
+             chat_instance = create_chat_instance
+
+             if block_given?
+               stream_response(chat_instance, normalized_messages, signature, &block)
+             else
+               standard_response(chat_instance, normalized_messages, signature)
+             end
+           rescue ::RubyLLM::UnauthorizedError => e
+             raise DSPy::LM::MissingAPIKeyError.new(provider)
+           rescue ::RubyLLM::RateLimitError => e
+             raise DSPy::LM::AdapterError, "Rate limit exceeded for #{provider}: #{e.message}"
+           rescue ::RubyLLM::ModelNotFoundError => e
+             raise DSPy::LM::AdapterError, "Model not found: #{e.message}. Check available models with RubyLLM.models.all"
+           rescue ::RubyLLM::BadRequestError => e
+             raise DSPy::LM::AdapterError, "Invalid request to #{provider}: #{e.message}"
+           rescue ::RubyLLM::ConfigurationError => e
+             raise DSPy::LM::ConfigurationError, "RubyLLM configuration error: #{e.message}"
+           rescue ::RubyLLM::Error => e
+             raise DSPy::LM::AdapterError, "RubyLLM error (#{provider}): #{e.message}"
+           end
+
+           private
+
+           # Detect provider from RubyLLM's model registry or use explicit override
+           def detect_provider(model_id)
+             return @provider_override.to_s if @provider_override
+
+             model_info = ::RubyLLM.models.find(model_id)
+             model_info.provider.to_s
+           rescue ::RubyLLM::ModelNotFoundError
+             raise DSPy::LM::ConfigurationError,
+                   "Model '#{model_id}' not found in RubyLLM registry. " \
+                   "Use provider: option to specify explicitly, or run RubyLLM.models.refresh!"
+           end
+
+           # Check if we should use RubyLLM's global configuration
+           # Uses global config when no api_key and no provider-specific options provided
+           def should_use_global_config?(api_key, options)
+             api_key.nil? && (options.keys & SCOPED_OPTIONS).empty?
+           end
+
+           # Validate API key for providers that require it
+           def validate_api_key_for_provider!(api_key)
+             # Ollama and some local providers don't require API keys
+             return if provider_allows_no_api_key?
+
+             validate_api_key!(api_key, provider)
+           end
+
+           def provider_allows_no_api_key?
+             %w[ollama gpustack].include?(provider)
+           end
+
+           def validate_base_url!(url)
+             return if url.nil?
+
+             uri = URI.parse(url)
+             unless %w[http https].include?(uri.scheme)
+               raise DSPy::LM::ConfigurationError, "base_url must use http or https scheme"
+             end
+           rescue URI::InvalidURIError
+             raise DSPy::LM::ConfigurationError, "Invalid base_url format: #{url}"
+           end
+
+           def create_context(api_key)
+             ::RubyLLM.context do |config|
+               configure_provider(config, api_key)
+               configure_connection(config)
+             end
+           end
+
+           # Configure RubyLLM using convention: {provider}_api_key and {provider}_api_base
+           # For providers with non-standard auth (bedrock, vertexai), configure RubyLLM globally
+           def configure_provider(config, api_key)
+             key_method = "#{provider}_api_key="
+             config.send(key_method, api_key) if api_key && config.respond_to?(key_method)
+
+             base_method = "#{provider}_api_base="
+             config.send(base_method, @options[:base_url]) if @options[:base_url] && config.respond_to?(base_method)
+           end
+
+           def configure_connection(config)
+             config.request_timeout = @options[:timeout] if @options[:timeout]
+             config.max_retries = @options[:max_retries] if @options[:max_retries]
+           end
+
+           def create_chat_instance
+             chat_options = { model: model }
+
+             # If provider is explicitly overridden, pass it to RubyLLM
+             if @provider_override
+               chat_options[:provider] = @provider_override.to_sym
+               chat_options[:assume_model_exists] = true
+             end
+
+             # Use global RubyLLM config or scoped context
+             if @use_global_config
+               ::RubyLLM.chat(**chat_options)
+             else
+               context.chat(**chat_options)
+             end
+           end
+
+           def standard_response(chat_instance, messages, signature)
+             chat_instance = prepare_chat_instance(chat_instance, messages, signature)
+             content, attachments = prepare_message_content(messages)
+             return build_empty_response unless content
+
+             response = send_message(chat_instance, content, attachments)
+             map_response(response)
+           end
+
+           def stream_response(chat_instance, messages, signature, &block)
+             chat_instance = prepare_chat_instance(chat_instance, messages, signature)
+             content, attachments = prepare_message_content(messages)
+             return build_empty_response unless content
+
+             response = send_message(chat_instance, content, attachments, &block)
+             map_response(response)
+           end
+
+           # Common setup: apply system instructions, build conversation history, and optional schema
+           def prepare_chat_instance(chat_instance, messages, signature)
+             # First, handle system messages via with_instructions for proper system prompt handling
+             system_message = messages.find { |m| m[:role] == 'system' }
+             chat_instance = chat_instance.with_instructions(system_message[:content]) if system_message
+
+             # Build conversation history by adding all non-system messages except the last user message
+             # The last user message will be passed to ask() to get the response
+             messages_to_add = messages.reject { |m| m[:role] == 'system' }
+
+             # Find the index of the last user message
+             last_user_index = messages_to_add.rindex { |m| m[:role] == 'user' }
+
+             if last_user_index && last_user_index > 0
+               # Add all messages before the last user message to build history
+               messages_to_add[0...last_user_index].each do |msg|
+                 content, attachments = extract_content_and_attachments(msg)
+                 next unless content
+
+                 # Add message with appropriate role
+                 if attachments.any?
+                   chat_instance.add_message(role: msg[:role].to_sym, content: content, attachments: attachments)
+                 else
+                   chat_instance.add_message(role: msg[:role].to_sym, content: content)
+                 end
+               end
+             end
+
+             if signature && @structured_outputs_enabled
+               schema = build_json_schema(signature)
+               chat_instance = chat_instance.with_schema(schema) if schema
+             end
+
+             chat_instance
+           end
+
+           # Extract content from last user message
+           # RubyLLM's Chat API builds conversation history via add_message() for previous turns,
+           # and the last user message is passed to ask() to get the response.
+           def prepare_message_content(messages)
+             last_user_message = messages.reverse.find { |m| m[:role] == 'user' }
+             return [nil, []] unless last_user_message
+
+             extract_content_and_attachments(last_user_message)
+           end
+
+           # Send message with optional streaming block
+           def send_message(chat_instance, content, attachments, &block)
+             kwargs = attachments.any? ? { with: attachments } : {}
+
+             if block_given?
+               chat_instance.ask(content, **kwargs) do |chunk|
+                 block.call(chunk.content) if chunk.content
+               end
+             else
+               chat_instance.ask(content, **kwargs)
+             end
+           end
+
+           def extract_content_and_attachments(message)
+             content = message[:content]
+             attachments = []
+
+             if content.is_a?(Array)
+               text_parts = []
+               content.each do |item|
+                 case item[:type]
+                 when 'text'
+                   text_parts << item[:text]
+                 when 'image'
+                   # Extract image URL or path
+                   image = item[:image]
+                   if image.respond_to?(:url)
+                     attachments << image.url
+                   elsif image.respond_to?(:path)
+                     attachments << image.path
+                   elsif item[:image_url]
+                     attachments << item[:image_url][:url]
+                   end
+                 end
+               end
+               content = text_parts.join("\n")
+             end
+
+             [content.to_s, attachments]
+           end
+
+           def map_response(ruby_llm_response)
+             DSPy::LM::Response.new(
+               content: ruby_llm_response.content.to_s,
+               usage: build_usage(ruby_llm_response),
+               metadata: build_metadata(ruby_llm_response)
+             )
+           end
+
+           def build_usage(response)
+             input_tokens = response.input_tokens || 0
+             output_tokens = response.output_tokens || 0
+
+             DSPy::LM::Usage.new(
+               input_tokens: input_tokens,
+               output_tokens: output_tokens,
+               total_tokens: input_tokens + output_tokens
+             )
+           end
+
+           def build_metadata(response)
+             DSPy::LM::ResponseMetadataFactory.create('ruby_llm', {
+               model: response.model_id || model,
+               underlying_provider: provider
+             })
+           end
+
+           def build_empty_response
+             DSPy::LM::Response.new(
+               content: '',
+               usage: DSPy::LM::Usage.new(input_tokens: 0, output_tokens: 0, total_tokens: 0),
+               metadata: DSPy::LM::ResponseMetadataFactory.create('ruby_llm', {
+                 model: model,
+                 underlying_provider: provider
+               })
+             )
+           end
+
+           def build_json_schema(signature)
+             return nil unless signature.respond_to?(:json_schema)
+
+             schema = signature.json_schema
+             normalize_schema(schema)
+           end
+
+           def normalize_schema(schema)
+             return schema unless schema.is_a?(Hash)
+
+             @normalized_schema_cache ||= {}
+             cache_key = schema.hash
+
+             @normalized_schema_cache[cache_key] ||= begin
+               duped = deep_dup(schema)
+               add_additional_properties_false(duped)
+               duped.freeze
+             end
+           end
+
+           def add_additional_properties_false(schema)
+             return unless schema.is_a?(Hash)
+
+             if schema[:type] == 'object' || schema['type'] == 'object'
+               schema[:additionalProperties] = false
+               schema['additionalProperties'] = false
+             end
+
+             # Recursively process nested schemas
+             schema.each_value { |v| add_additional_properties_false(v) if v.is_a?(Hash) }
+
+             # Handle arrays with items
+             if schema[:items]
+               add_additional_properties_false(schema[:items])
+             elsif schema['items']
+               add_additional_properties_false(schema['items'])
+             end
+           end
+
+           def deep_dup(obj)
+             case obj
+             when Hash
+               obj.transform_values { |v| deep_dup(v) }
+             when Array
+               obj.map { |v| deep_dup(v) }
+             else
+               obj
+             end
+           end
+
+           def validate_vision_support!
+             # RubyLLM handles vision validation internally, but we can add
+             # additional DSPy-specific validation here if needed
+             DSPy::LM::VisionModels.validate_vision_support!(provider, model)
+           rescue DSPy::LM::IncompatibleImageFeatureError
+             # If DSPy doesn't know about the model, let RubyLLM handle it
+             # RubyLLM has its own model registry with capability detection
+           end
+
+           def format_multimodal_messages(messages)
+             messages.map do |msg|
+               if msg[:content].is_a?(Array)
+                 formatted_content = msg[:content].map do |item|
+                   case item[:type]
+                   when 'text'
+                     { type: 'text', text: item[:text] }
+                   when 'image'
+                     # Validate and format image for provider
+                     image = item[:image]
+                     if image.respond_to?(:validate_for_provider!)
+                       image.validate_for_provider!(provider)
+                     end
+                     item
+                   else
+                     item
+                   end
+                 end
+
+                 { role: msg[:role], content: formatted_content }
+               else
+                 msg
+               end
+             end
+           end
+         end
+       end
+     end
+   end
+ end
data/lib/dspy/ruby_llm/version.rb ADDED
@@ -0,0 +1,7 @@
+ # frozen_string_literal: true
+
+ module DSPy
+   module RubyLLM
+     VERSION = '0.1.0'
+   end
+ end
data/lib/dspy/ruby_llm.rb ADDED
@@ -0,0 +1,8 @@
+ # frozen_string_literal: true
+
+ require 'dspy/ruby_llm/version'
+
+ require 'dspy/ruby_llm/guardrails'
+ DSPy::RubyLLM::Guardrails.ensure_ruby_llm_installed!
+
+ require 'dspy/ruby_llm/lm/adapters/ruby_llm_adapter'
metadata ADDED
@@ -0,0 +1,79 @@
+ --- !ruby/object:Gem::Specification
+ name: dspy-ruby_llm
+ version: !ruby/object:Gem::Version
+   version: 0.1.0
+ platform: ruby
+ authors:
+ - Vicente Reig Rincón de Arellano
+ - Kieran Klaassen
+ bindir: bin
+ cert_chain: []
+ date: 1980-01-02 00:00:00.000000000 Z
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   name: dspy
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0.30'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0.30'
+ - !ruby/object:Gem::Dependency
+   name: ruby_llm
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1.3'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1.3'
+ description: Provides a unified adapter using RubyLLM to access OpenAI, Anthropic,
+   Gemini, Bedrock, Ollama, and more through a single interface in DSPy.rb projects.
+ email:
+ - hey@vicente.services
+ - kieranklaassen@gmail.com
+ executables: []
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - LICENSE
+ - README.md
+ - lib/dspy/ruby_llm.rb
+ - lib/dspy/ruby_llm/README.md
+ - lib/dspy/ruby_llm/guardrails.rb
+ - lib/dspy/ruby_llm/lm/adapters/ruby_llm_adapter.rb
+ - lib/dspy/ruby_llm/version.rb
+ homepage: https://github.com/vicentereig/dspy.rb
+ licenses:
+ - MIT
+ metadata:
+   github_repo: git@github.com:vicentereig/dspy.rb
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: 3.3.0
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubygems_version: 3.6.9
+ specification_version: 4
+ summary: RubyLLM adapter for DSPy.rb - unified access to 12+ LLM providers.
+ test_files: []