dspy 0.13.0 → 0.15.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: d80d5b0166fe5a101e4918ffee13a70dec6ca67b493cf2e68dff1c18b2df36c1
-  data.tar.gz: 1687fe88d41c5d4627592ff5e98f87ca9f40186870386291b1ead091f51235da
+  metadata.gz: 0df0301fbda6076f6d4ecaeb7fa907982b11fe67521d4aa689ca812cb1485a71
+  data.tar.gz: ba7d57fa4fef32861b37ccb428c12f58c4262123bff1069dddb3c53c72e9dd67
 SHA512:
-  metadata.gz: 7bedebf2e58243bedcf8003d25b4f55789a7e4a611f9f1997a322e93db553fe117fde155415c5a607a65b49f61f2a640c7899a0fd4793bcdf9e8672602f54755
-  data.tar.gz: cdf39e605a7550e94334c5bad13ccc5af2a255b7039559b56b918e1ba3706e671c52960558459a0c96edab318c68bc9828b1eb03530b47daa896766a2cb7aa7d
+  metadata.gz: 07a619845a4e65cb03d4b9e53726bb6b306ddb5d5c08a6521785c415d6f4a5e1a21949f5120c2dafc3429114c5b97c0a1f2d19d30528bfcfe311aef97bc28245
+  data.tar.gz: 90a800ee9a612da5fd735b36b3c887df8fc777ccd0ed5b4badf4049352c19bcb64a508e7b722b4eae51b08ddf5386e292ae7bbc893c1b444ee1e3b1305f46f05
data/README.md CHANGED
@@ -40,13 +40,14 @@ The result? LLM applications that actually scale and don't break when you sneeze
 - LLM provider support using official Ruby clients:
   - [OpenAI Ruby](https://github.com/openai/openai-ruby)
   - [Anthropic Ruby SDK](https://github.com/anthropics/anthropic-sdk-ruby)
+  - [Ollama](https://ollama.com/) via OpenAI compatibility layer
 - Runtime type checking with [Sorbet](https://sorbet.org/)
 - Type-safe tool definitions for ReAct agents
 - Comprehensive instrumentation and observability
 
 ## Development Status
 
-DSPy.rb is actively developed and approaching stability at **v0.10.1**. The core framework is production-ready with comprehensive documentation, but I'm battle-testing features through the 0.x series before committing to a stable v1.0 API.
+DSPy.rb is actively developed and approaching stability at **v0.13.0**. The core framework is production-ready with comprehensive documentation, but I'm battle-testing features through the 0.x series before committing to a stable v1.0 API.
 
 Real-world usage feedback is invaluable - if you encounter issues or have suggestions, please open a GitHub issue!
 
@@ -55,7 +56,7 @@ Real-world usage feedback is invaluable - if you encounter issues or have sugges
 ### Installation
 
 ```ruby
-gem 'dspy', '~> 0.9'
+gem 'dspy', '~> 0.13'
 ```
 
 Or add to your Gemfile:
@@ -138,6 +139,12 @@ puts result.confidence # => 0.85
 
 📖 **[Complete Documentation Website](https://vicentereig.github.io/dspy.rb/)**
 
+### LLM-Friendly Documentation
+
+For LLMs and AI assistants working with DSPy.rb:
+- **[llms.txt](https://vicentereig.github.io/dspy.rb/llms.txt)** - Concise reference optimized for LLMs
+- **[llms-full.txt](https://vicentereig.github.io/dspy.rb/llms-full.txt)** - Comprehensive API documentation
+
 ### Getting Started
 - **[Installation & Setup](docs/src/getting-started/installation.md)** - Detailed installation and configuration
 - **[Quick Start Guide](docs/src/getting-started/quick-start.md)** - Your first DSPy programs
@@ -177,7 +184,7 @@ DSPy.rb has rapidly evolved from experimental to production-ready:
 
 ## Roadmap - Battle-Testing Toward v1.0
 
-DSPy.rb is currently at **v0.10.1** and approaching stability. I'm focusing on real-world usage and refinement through the 0.11, 0.12+ series before committing to a stable v1.0 API.
+DSPy.rb is currently at **v0.13.0** and approaching stability. I'm focusing on real-world usage and refinement through the 0.14, 0.15+ series before committing to a stable v1.0 API.
 
 **Current Focus Areas:**
 - 🚧 **Ollama Support** - Local model integration
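Since Ollama support is the headline change in this release range, here is a minimal configuration sketch. It assumes the `DSPy.configure` block and the provider-prefixed model ids the README uses for OpenAI and Anthropic; the model name `llama3.1` is illustrative only.

```ruby
require 'dspy'

# Ollama is reached through its OpenAI-compatible endpoint, so a local
# instance needs no API key (a placeholder key is supplied internally).
DSPy.configure do |config|
  config.lm = DSPy::LM.new('ollama/llama3.1')
end
```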
@@ -7,7 +7,8 @@ module DSPy
     # Maps provider prefixes to adapter classes
     ADAPTER_MAP = {
       'openai' => 'OpenAIAdapter',
-      'anthropic' => 'AnthropicAdapter'
+      'anthropic' => 'AnthropicAdapter',
+      'ollama' => 'OllamaAdapter'
     }.freeze
 
     class << self
@@ -22,7 +23,8 @@ module DSPy
 
         # Pass provider-specific options
         adapter_options = { model: model, api_key: api_key }
-        adapter_options.merge!(options) if provider == 'openai' # Only OpenAI accepts structured_outputs for now
+        # Both OpenAI and Ollama accept additional options
+        adapter_options.merge!(options) if %w[openai ollama].include?(provider)
 
         adapter_class.new(**adapter_options)
       end
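A hedged sketch of what the widened options merge allows: forwarding Ollama-specific settings through the factory. The keyword names mirror the `OllamaAdapter#initialize` signature added later in this diff; whether `DSPy::LM.new` forwards them exactly like this is an assumption for illustration, and the remote URL is hypothetical.

```ruby
# Local Ollama instance: no API key needed; base_url defaults to
# http://localhost:11434/v1 inside the adapter.
lm = DSPy::LM.new('ollama/llama3.1')

# Remote or protected instance: options are now merged through for 'ollama'.
lm = DSPy::LM.new(
  'ollama/llama3.1',
  api_key: ENV['OLLAMA_API_KEY'],
  base_url: 'https://ollama.internal.example.com/v1'
)
```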
@@ -15,15 +15,20 @@ module DSPy
         # Anthropic requires system message to be separate from messages
         system_message, user_messages = extract_system_message(normalize_messages(messages))
 
-        # Apply JSON prefilling if needed for better Claude JSON compliance
-        user_messages = prepare_messages_for_json(user_messages, system_message)
+        # Check if this is a tool use request
+        has_tools = extra_params.key?(:tools) && !extra_params[:tools].empty?
+
+        # Apply JSON prefilling if needed for better Claude JSON compliance (but not for tool use)
+        unless has_tools
+          user_messages = prepare_messages_for_json(user_messages, system_message)
+        end
 
         request_params = {
           model: model,
           messages: user_messages,
           max_tokens: 4096, # Required for Anthropic
           temperature: 0.0 # DSPy default for deterministic responses
-        }
+        }.merge(extra_params)
 
         # Add system message if present
         request_params[:system] = system_message if system_message
@@ -60,21 +65,44 @@ module DSPy
           raise AdapterError, "Anthropic API error: #{response.error}"
         end
 
-        content = response.content.first.text if response.content.is_a?(Array) && response.content.first
+        # Handle both text content and tool use
+        content = ""
+        tool_calls = []
+
+        if response.content.is_a?(Array)
+          response.content.each do |content_block|
+            case content_block.type.to_s
+            when "text"
+              content += content_block.text
+            when "tool_use"
+              tool_calls << {
+                id: content_block.id,
+                name: content_block.name,
+                input: content_block.input
+              }
+            end
+          end
+        end
+
         usage = response.usage
 
         # Convert usage data to typed struct
         usage_struct = UsageFactory.create('anthropic', usage)
 
+        metadata = {
+          provider: 'anthropic',
+          model: model,
+          response_id: response.id,
+          role: response.role
+        }
+
+        # Add tool calls to metadata if present
+        metadata[:tool_calls] = tool_calls unless tool_calls.empty?
+
         Response.new(
           content: content,
           usage: usage_struct,
-          metadata: {
-            provider: 'anthropic',
-            model: model,
-            response_id: response.id,
-            role: response.role
-          }
+          metadata: metadata
         )
       end
     rescue => e
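To make the new return shape concrete, an illustrative look at a tool-use response after this change. The keys come from the code above; the values and the `toolu_...` id are invented for the example.

```ruby
response.content                 # => "" when Claude replies only with a tool_use block
response.metadata[:tool_calls]
# => [
#      {
#        id:    "toolu_01ABC...",   # hypothetical id
#        name:  "json_output",
#        input: { "answer" => "42", "confidence" => 0.9 }
#      }
#    ]
```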
@@ -0,0 +1,73 @@
+# frozen_string_literal: true
+
+require 'openai'
+
+module DSPy
+  class LM
+    class OllamaAdapter < OpenAIAdapter
+      DEFAULT_BASE_URL = 'http://localhost:11434/v1'
+
+      def initialize(model:, api_key: nil, base_url: nil, structured_outputs: true)
+        # Ollama doesn't require API key for local instances
+        # But may need it for remote/protected instances
+        api_key ||= 'ollama' # OpenAI client requires non-empty key
+        base_url ||= DEFAULT_BASE_URL
+
+        # Store base_url before calling super
+        @base_url = base_url
+
+        # Don't call parent's initialize, do it manually to control client creation
+        @model = model
+        @api_key = api_key
+        @structured_outputs_enabled = structured_outputs
+        validate_configuration!
+
+        # Create client with custom base URL
+        @client = OpenAI::Client.new(
+          api_key: @api_key,
+          base_url: @base_url
+        )
+      end
+
+      def chat(messages:, signature: nil, response_format: nil, &block)
+        # For Ollama, we need to be more lenient with structured outputs
+        # as it may not fully support OpenAI's response_format spec
+        begin
+          super
+        rescue => e
+          # If structured output fails, retry with enhanced prompting
+          if @structured_outputs_enabled && signature && e.message.include?('response_format')
+            DSPy.logger.debug("Ollama structured output failed, falling back to enhanced prompting")
+            @structured_outputs_enabled = false
+            retry
+          else
+            raise
+          end
+        end
+      end
+
+      private
+
+      def validate_configuration!
+        super
+        # Additional Ollama-specific validation could go here
+      end
+
+      def validate_api_key!(api_key, provider)
+        # For Ollama, API key is optional for local instances
+        # Only validate if it looks like a remote URL
+        if @base_url && !@base_url.include?('localhost') && !@base_url.include?('127.0.0.1')
+          super
+        end
+      end
+
+      # Ollama may have different model support for structured outputs
+      def supports_structured_outputs?
+        # For now, assume all Ollama models support basic JSON mode
+        # but may not support full OpenAI structured output spec
+        true
+      end
+    end
+  end
+end
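A minimal sketch of instantiating the adapter directly, grounded in the `initialize` signature above; in normal use the adapter factory builds it from an `ollama/...` model id, and the model name is illustrative.

```ruby
# Local instance: api_key defaults to the placeholder 'ollama',
# base_url to http://localhost:11434/v1.
adapter = DSPy::LM::OllamaAdapter.new(model: 'llama3.1')

# Structured outputs can also be disabled up front instead of relying on
# the response_format fallback in #chat above.
adapter = DSPy::LM::OllamaAdapter.new(model: 'llama3.1', structured_outputs: false)
```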
@@ -0,0 +1,192 @@
+# frozen_string_literal: true
+
+require "sorbet-runtime"
+
+module DSPy
+  class LM
+    module Strategies
+      # Strategy for using Anthropic's tool use feature for guaranteed JSON output
+      class AnthropicToolUseStrategy < BaseStrategy
+        extend T::Sig
+
+        sig { override.returns(T::Boolean) }
+        def available?
+          # Only available for Anthropic adapters with models that support tool use
+          adapter.is_a?(DSPy::LM::AnthropicAdapter) && supports_tool_use?
+        end
+
+        sig { override.returns(Integer) }
+        def priority
+          95 # Higher priority than extraction strategy - tool use is more reliable
+        end
+
+        sig { override.returns(String) }
+        def name
+          "anthropic_tool_use"
+        end
+
+        sig { override.params(messages: T::Array[T::Hash[Symbol, String]], request_params: T::Hash[Symbol, T.untyped]).void }
+        def prepare_request(messages, request_params)
+          # Convert signature output schema to Anthropic tool format
+          tool_schema = convert_to_tool_schema
+
+          # Add the tool definition to request params
+          request_params[:tools] = [tool_schema]
+
+          # Force the model to use our tool
+          request_params[:tool_choice] = {
+            type: "tool",
+            name: "json_output"
+          }
+
+          # Update the last user message to request tool use
+          if messages.any? && messages.last[:role] == "user"
+            messages.last[:content] += "\n\nPlease use the json_output tool to provide your response."
+          end
+        end
+
+        sig { override.params(response: DSPy::LM::Response).returns(T.nilable(String)) }
+        def extract_json(response)
+          # Extract JSON from tool use response
+          begin
+            # Check for tool calls in metadata first (this is the primary method)
+            if response.metadata && response.metadata[:tool_calls]
+              tool_calls = response.metadata[:tool_calls]
+              if tool_calls.is_a?(Array) && !tool_calls.empty?
+                first_call = tool_calls.first
+                if first_call[:name] == "json_output" && first_call[:input]
+                  json_result = JSON.generate(first_call[:input])
+                  return json_result
+                end
+              end
+            end
+
+            # Fallback: try to extract from content if it contains tool use blocks
+            content = response.content
+            if content && !content.empty? && content.include?("<tool_use>")
+              tool_content = content[/<tool_use>.*?<\/tool_use>/m]
+              if tool_content
+                json_match = tool_content[/<input>(.*?)<\/input>/m, 1]
+                return json_match.strip if json_match
+              end
+            end
+
+            nil
+          rescue => e
+            DSPy.logger.debug("Failed to extract tool use JSON: #{e.message}")
+            nil
+          end
+        end
+
+        sig { override.params(error: StandardError).returns(T::Boolean) }
+        def handle_error(error)
+          # Tool use errors should trigger fallback to extraction strategy
+          if error.message.include?("tool") || error.message.include?("invalid_request_error")
+            DSPy.logger.warn("Anthropic tool use failed: #{error.message}")
+            true # We handled it, try next strategy
+          else
+            false # Let retry handler deal with it
+          end
+        end
+
+        private
+
+        sig { returns(T::Boolean) }
+        def supports_tool_use?
+          # Check if model supports tool use
+          # Claude 3 models (Opus, Sonnet, Haiku) support tool use
+          model = adapter.model.downcase
+          model.include?("claude-3") || model.include?("claude-3.5")
+        end
+
+        sig { returns(T::Hash[Symbol, T.untyped]) }
+        def convert_to_tool_schema
+          # Get output fields from signature
+          output_fields = signature_class.output_field_descriptors
+
+          # Convert to Anthropic tool format
+          {
+            name: "json_output",
+            description: "Output the result in the required JSON format",
+            input_schema: {
+              type: "object",
+              properties: build_properties_from_fields(output_fields),
+              required: output_fields.keys.map(&:to_s)
+            }
+          }
+        end
+
+        sig { params(fields: T::Hash[Symbol, T.untyped]).returns(T::Hash[String, T.untyped]) }
+        def build_properties_from_fields(fields)
+          properties = {}
+
+          fields.each do |field_name, descriptor|
+            properties[field_name.to_s] = convert_type_to_json_schema(descriptor.type)
+          end
+
+          properties
+        end
+
+        sig { params(type: T.untyped).returns(T::Hash[String, T.untyped]) }
+        def convert_type_to_json_schema(type)
+          # Handle raw Ruby class types - use === for class comparison
+          if type == String
+            return { type: "string" }
+          elsif type == Integer
+            return { type: "integer" }
+          elsif type == Float
+            return { type: "number" }
+          elsif type == TrueClass || type == FalseClass
+            return { type: "boolean" }
+          end
+
+          # Handle Sorbet types
+          case type
+          when T::Types::Simple
+            case type.raw_type.to_s
+            when "String"
+              { type: "string" }
+            when "Integer"
+              { type: "integer" }
+            when "Float", "Numeric"
+              { type: "number" }
+            when "TrueClass", "FalseClass"
+              { type: "boolean" }
+            else
+              { type: "string" } # Default fallback
+            end
+          when T::Types::TypedArray
+            {
+              type: "array",
+              items: convert_type_to_json_schema(type.type)
+            }
+          when T::Types::TypedHash
+            {
+              type: "object",
+              additionalProperties: convert_type_to_json_schema(type.values)
+            }
+          else
+            # For complex types, try to introspect
+            if type.respond_to?(:props)
+              {
+                type: "object",
+                properties: build_properties_from_props(type.props)
+              }
+            else
+              { type: "object" } # Generic object fallback
+            end
+          end
+        end
+
+        sig { params(props: T.untyped).returns(T::Hash[String, T.untyped]) }
+        def build_properties_from_props(props)
+          result = {}
+          props.each do |prop_name, prop_info|
+            result[prop_name.to_s] = convert_type_to_json_schema(prop_info[:type])
+          end
+          result
+        end
+      end
+    end
+  end
+end
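For illustration only: given a hypothetical signature whose output fields are `answer: String` and `confidence: Float`, `convert_to_tool_schema` above would produce roughly this tool definition (the field names are invented for the example).

```ruby
{
  name: "json_output",
  description: "Output the result in the required JSON format",
  input_schema: {
    type: "object",
    properties: {
      "answer"     => { type: "string" },
      "confidence" => { type: "number" }
    },
    required: ["answer", "confidence"]
  }
}
```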
@@ -11,10 +11,15 @@ module DSPy
 
        sig { override.returns(T::Boolean) }
        def available?
-         # Check if adapter is OpenAI and supports structured outputs
-         return false unless adapter.is_a?(DSPy::LM::OpenAIAdapter)
+         # Check if adapter is OpenAI or Ollama and supports structured outputs
+         return false unless adapter.is_a?(DSPy::LM::OpenAIAdapter) || adapter.is_a?(DSPy::LM::OllamaAdapter)
          return false unless adapter.instance_variable_get(:@structured_outputs_enabled)
 
+         # For Ollama, we assume it supports basic structured outputs
+         if adapter.is_a?(DSPy::LM::OllamaAdapter)
+           return true
+         end
+
          DSPy::LM::Adapters::OpenAI::SchemaConverter.supports_structured_outputs?(adapter.model)
        end
 
@@ -3,6 +3,7 @@
 require "sorbet-runtime"
 require_relative "strategies/base_strategy"
 require_relative "strategies/openai_structured_output_strategy"
+require_relative "strategies/anthropic_tool_use_strategy"
 require_relative "strategies/anthropic_extraction_strategy"
 require_relative "strategies/enhanced_prompting_strategy"
 
@@ -15,6 +16,7 @@ module DSPy
     # Available strategies in order of registration
     STRATEGIES = [
       Strategies::OpenAIStructuredOutputStrategy,
+      Strategies::AnthropicToolUseStrategy,
       Strategies::AnthropicExtractionStrategy,
       Strategies::EnhancedPromptingStrategy
     ].freeze
@@ -99,7 +101,11 @@ module DSPy
       openai_strategy = find_strategy_by_name("openai_structured_output")
       return openai_strategy if openai_strategy&.available?
 
-      # Try Anthropic extraction
+      # Try Anthropic tool use first
+      anthropic_tool_strategy = find_strategy_by_name("anthropic_tool_use")
+      return anthropic_tool_strategy if anthropic_tool_strategy&.available?
+
+      # Fall back to Anthropic extraction
       anthropic_strategy = find_strategy_by_name("anthropic_extraction")
       return anthropic_strategy if anthropic_strategy&.available?
 
@@ -8,6 +8,7 @@ module DSPy
   class StructuredOutputStrategy < T::Enum
     enums do
       OpenAIStructuredOutput = new("openai_structured_output")
+      AnthropicToolUse = new("anthropic_tool_use")
       AnthropicExtraction = new("anthropic_extraction")
       EnhancedPrompting = new("enhanced_prompting")
     end
data/lib/dspy/lm.rb CHANGED
@@ -13,6 +13,7 @@ require_relative 'instrumentation/token_tracker'
 # Load adapters
 require_relative 'lm/adapters/openai_adapter'
 require_relative 'lm/adapters/anthropic_adapter'
+require_relative 'lm/adapters/ollama_adapter'
 
 # Load strategy system
 require_relative 'lm/strategy_selector'
@@ -118,7 +119,7 @@ module DSPy
       end
 
       # Let strategy handle JSON extraction if needed
-      if signature_class && response.content
+      if signature_class
         extracted_json = strategy.extract_json(response)
         if extracted_json && extracted_json != response.content
           # Create a new response with extracted JSON
data/lib/dspy/version.rb CHANGED
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module DSPy
-  VERSION = "0.13.0"
+  VERSION = "0.15.0"
 end
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: dspy
 version: !ruby/object:Gem::Version
-  version: 0.13.0
+  version: 0.15.0
 platform: ruby
 authors:
 - Vicente Reig Rincón de Arellano
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2025-07-25 00:00:00.000000000 Z
+date: 2025-07-28 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: dry-configurable
@@ -150,8 +150,7 @@ dependencies:
     - - "~>"
       - !ruby/object:Gem::Version
         version: '1.2'
-description: A Ruby implementation of DSPy, a framework for programming with large
-  language models
+description: The Ruby framework for programming with large language models.
 email:
 - hey@vicente.services
 executables: []
@@ -173,6 +172,7 @@ files:
 - lib/dspy/lm/adapter.rb
 - lib/dspy/lm/adapter_factory.rb
 - lib/dspy/lm/adapters/anthropic_adapter.rb
+- lib/dspy/lm/adapters/ollama_adapter.rb
 - lib/dspy/lm/adapters/openai/schema_converter.rb
 - lib/dspy/lm/adapters/openai_adapter.rb
 - lib/dspy/lm/cache_manager.rb
@@ -181,6 +181,7 @@ files:
 - lib/dspy/lm/response.rb
 - lib/dspy/lm/retry_handler.rb
 - lib/dspy/lm/strategies/anthropic_extraction_strategy.rb
+- lib/dspy/lm/strategies/anthropic_tool_use_strategy.rb
 - lib/dspy/lm/strategies/base_strategy.rb
 - lib/dspy/lm/strategies/enhanced_prompting_strategy.rb
 - lib/dspy/lm/strategies/openai_structured_output_strategy.rb
@@ -249,5 +250,5 @@ requirements: []
 rubygems_version: 3.5.22
 signing_key:
 specification_version: 4
-summary: Ruby port of DSPy 2.6
+summary: The Ruby framework for programming—rather than prompting—language models.
 test_files: []