dspy 0.26.0 → 0.27.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 573bfc89c0ca4d6c58a3d68fa96e0b0e8fc1d8bc2c38eec9c67e245a51527532
- data.tar.gz: a2b34e8127abafbdec6842c3db475e1abbfbcf06c0ca9bfcd3fa7ea8299d0d0f
+ metadata.gz: 5bb3b493e5411fd1f18028a3177c99149c2507f4d05a100746b2da734daa6a63
+ data.tar.gz: 78d2325d7b28a1b393284ec765c0f9fa3048c60afd864e3fcec0a88aac96cdc7
  SHA512:
- metadata.gz: 88d9b7c29cf89d386ada79f8696f561b94087458715813b45ce2be18947163e840deabfe681d0f7594023fd09c84c1ff3746c87d19c2ebc8043103f5f98cbdc7
- data.tar.gz: 5e5d518266d7dafe64abf3f3d6b083c6017feb2564743053c23bf9b6baa5a32be53eab299a57814b70ce05d236e3f5d659158679914ac097fd7fcc589b4ddb04
+ metadata.gz: c11ef22db12b776b0dacb648cc60312aedb398b13020991827b85eca169b319960e8c4f31cc8216c93a766f67ed779998ee22f932bc02cb4ba802ce58c0a4ff4
+ data.tar.gz: 68847a5ef35187be690b82e0bd1b30d4da7988c2bf1da9fd3820e4ae2315c96900f798b4580b1dcd236fa4c4e45a4289c57301b36b1c24564fda020ee174107b
data/README.md CHANGED
@@ -59,6 +59,25 @@ puts result.sentiment # => #<Sentiment::Positive>
  puts result.confidence # => 0.85
  ```
 
+ ### Alternative Providers
+
+ DSPy.rb supports multiple providers with native structured outputs:
+
+ ```ruby
+ # Google Gemini with native structured outputs
+ DSPy.configure do |c|
+   c.lm = DSPy::LM.new('gemini/gemini-1.5-flash',
+                       api_key: ENV['GEMINI_API_KEY'],
+                       structured_outputs: true) # Supports gemini-1.5-pro, gemini-1.5-flash, gemini-2.0-flash-exp
+ end
+
+ # Anthropic Claude with tool-based extraction
+ DSPy.configure do |c|
+   c.lm = DSPy::LM.new('anthropic/claude-3-sonnet-20241022',
+                       api_key: ENV['ANTHROPIC_API_KEY']) # Automatic strategy selection
+ end
+ ```
+
  ## What You Get
 
  **Core Building Blocks:**
@@ -77,7 +96,7 @@ puts result.confidence # => 0.85
  - **GEPA Optimization** - Genetic-Pareto optimization for multi-objective prompt improvement
 
  **Production Features:**
- - **Reliable JSON Extraction** - Native OpenAI structured outputs, Anthropic extraction patterns, and automatic strategy selection with fallback
+ - **Reliable JSON Extraction** - Native structured outputs for OpenAI and Gemini, Anthropic tool-based extraction, and automatic strategy selection with fallback
  - **Type-Safe Configuration** - Strategy enums with automatic provider optimization (Strict/Compatible modes)
  - **Smart Retry Logic** - Progressive fallback with exponential backoff for handling transient failures
  - **Zero-Config Langfuse Integration** - Set env vars and get automatic OpenTelemetry traces in Langfuse
@@ -89,6 +108,7 @@ puts result.confidence # => 0.85
  - LLM provider support using official Ruby clients:
    - [OpenAI Ruby](https://github.com/openai/openai-ruby) with vision model support
    - [Anthropic Ruby SDK](https://github.com/anthropics/anthropic-sdk-ruby) with multimodal capabilities
+   - [Google Gemini API](https://ai.google.dev/) with native structured outputs
    - [Ollama](https://ollama.com/) via OpenAI compatibility layer for local models
  - **Multimodal Support** - Complete image analysis with DSPy::Image, type-safe bounding boxes, vision-capable models
  - Runtime type checking with [Sorbet](https://sorbet.org/) including T::Enum and union types
@@ -157,29 +177,6 @@ Then run:
  bundle install
  ```
 
- #### System Dependencies for Ubuntu/Pop!_OS
-
- If you need to compile the `numo-narray` dependency from source (used for numerical computing in Bayesian optimization), install these system packages:
-
- ```bash
- # Update package list
- sudo apt-get update
-
- # Install Ruby development files (if not already installed)
- sudo apt-get install ruby-full ruby-dev
-
- # Install essential build tools
- sudo apt-get install build-essential
-
- # Install BLAS and LAPACK libraries (required for numo-narray)
- sudo apt-get install libopenblas-dev liblapack-dev
-
- # Install additional development libraries
- sudo apt-get install libffi-dev libssl-dev
- ```
-
- **Note**: The `numo-narray` gem typically compiles quickly (1-2 minutes). Pre-built binaries are available for most platforms, so compilation is only needed if a pre-built binary isn't available for your system.
-
  ## Recent Achievements
 
  DSPy.rb has rapidly evolved from experimental to production-ready:
data/lib/dspy/lm/adapter_factory.rb CHANGED
@@ -24,8 +24,8 @@ module DSPy
 
  # Pass provider-specific options
  adapter_options = { model: model, api_key: api_key }
- # Both OpenAI and Ollama accept additional options
- adapter_options.merge!(options) if %w[openai ollama].include?(provider)
+ # OpenAI, Ollama, and Gemini accept additional options
+ adapter_options.merge!(options) if %w[openai ollama gemini].include?(provider)
 
  adapter_class.new(**adapter_options)
  end
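This factory change is what forwards the new `structured_outputs:` option to the Gemini adapter (see gemini_adapter.rb below). A minimal sketch of the resulting call path, assuming the provider prefix is stripped from the model string upstream of this method:

```ruby
# Extra keyword options are forwarded only for providers in the
# %w[openai ollama gemini] allowlist shown above.
lm = DSPy::LM.new('gemini/gemini-1.5-flash',
                  api_key: ENV['GEMINI_API_KEY'],
                  structured_outputs: true)
# which builds, roughly:
#   GeminiAdapter.new(model: 'gemini-1.5-flash',
#                     api_key: ENV['GEMINI_API_KEY'],
#                     structured_outputs: true)
```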
data/lib/dspy/lm/adapters/gemini/schema_converter.rb ADDED
@@ -0,0 +1,170 @@
+ # frozen_string_literal: true
+
+ require "sorbet-runtime"
+ require_relative "../../cache_manager"
+
+ module DSPy
+   class LM
+     module Adapters
+       module Gemini
+         # Converts DSPy signatures to Gemini structured output format
+         class SchemaConverter
+           extend T::Sig
+
+           # Models that support structured outputs
+           STRUCTURED_OUTPUT_MODELS = T.let([
+             "gemini-1.5-pro",
+             "gemini-1.5-flash",
+             "gemini-2.0-flash-exp"
+           ].freeze, T::Array[String])
+
+           sig { params(signature_class: T.class_of(DSPy::Signature)).returns(T::Hash[Symbol, T.untyped]) }
+           def self.to_gemini_format(signature_class)
+             # Check cache first
+             cache_manager = DSPy::LM.cache_manager
+             cached_schema = cache_manager.get_schema(signature_class, "gemini", {})
+
+             if cached_schema
+               DSPy.logger.debug("Using cached schema for #{signature_class.name}")
+               return cached_schema
+             end
+
+             # Get the output JSON schema from the signature class
+             output_schema = signature_class.output_json_schema
+
+             # Convert to Gemini format (OpenAPI 3.0 Schema subset - not related to OpenAI)
+             gemini_schema = convert_dspy_schema_to_gemini(output_schema)
+
+             # Cache the result
+             cache_manager.cache_schema(signature_class, "gemini", gemini_schema, {})
+
+             gemini_schema
+           end
+
+           sig { params(model: String).returns(T::Boolean) }
+           def self.supports_structured_outputs?(model)
+             # Check cache first
+             cache_manager = DSPy::LM.cache_manager
+             cached_result = cache_manager.get_capability(model, "structured_outputs")
+
+             if !cached_result.nil?
+               DSPy.logger.debug("Using cached capability check for #{model}")
+               return cached_result
+             end
+
+             # Extract base model name without provider prefix
+             base_model = model.sub(/^gemini\//, "")
+
+             # Check if it's a supported model or a newer version
+             result = STRUCTURED_OUTPUT_MODELS.any? { |supported| base_model.start_with?(supported) }
+
+             # Cache the result
+             cache_manager.cache_capability(model, "structured_outputs", result)
+
+             result
+           end
+
+           sig { params(schema: T::Hash[Symbol, T.untyped]).returns(T::Array[String]) }
+           def self.validate_compatibility(schema)
+             issues = []
+
+             # Check for deeply nested objects (Gemini has depth limits)
+             depth = calculate_depth(schema)
+             if depth > 5
+               issues << "Schema depth (#{depth}) exceeds recommended limit of 5 levels"
+             end
+
+             issues
+           end
+
+           private
+
+           sig { params(dspy_schema: T::Hash[Symbol, T.untyped]).returns(T::Hash[Symbol, T.untyped]) }
+           def self.convert_dspy_schema_to_gemini(dspy_schema)
+             result = {
+               type: "object",
+               properties: {},
+               required: []
+             }
+
+             # Convert properties
+             properties = dspy_schema[:properties] || {}
+             properties.each do |prop_name, prop_schema|
+               result[:properties][prop_name] = convert_property_to_gemini(prop_schema)
+             end
+
+             # Set required fields
+             result[:required] = (dspy_schema[:required] || []).map(&:to_s)
+
+             result
+           end
+
+           sig { params(property_schema: T::Hash[Symbol, T.untyped]).returns(T::Hash[Symbol, T.untyped]) }
+           def self.convert_property_to_gemini(property_schema)
+             case property_schema[:type]
+             when "string"
+               result = { type: "string" }
+               result[:enum] = property_schema[:enum] if property_schema[:enum]
+               result
+             when "integer"
+               { type: "integer" }
+             when "number"
+               { type: "number" }
+             when "boolean"
+               { type: "boolean" }
+             when "array"
+               {
+                 type: "array",
+                 items: convert_property_to_gemini(property_schema[:items] || { type: "string" })
+               }
+             when "object"
+               result = { type: "object" }
+
+               if property_schema[:properties]
+                 result[:properties] = {}
+                 property_schema[:properties].each do |nested_prop, nested_schema|
+                   result[:properties][nested_prop] = convert_property_to_gemini(nested_schema)
+                 end
+
+                 # Set required fields for nested objects
+                 if property_schema[:required]
+                   result[:required] = property_schema[:required].map(&:to_s)
+                 end
+               end
+
+               result
+             else
+               # Default to string for unknown types
+               { type: "string" }
+             end
+           end
+
+           sig { params(schema: T::Hash[Symbol, T.untyped], current_depth: Integer).returns(Integer) }
+           def self.calculate_depth(schema, current_depth = 0)
+             return current_depth unless schema.is_a?(Hash)
+
+             max_depth = current_depth
+
+             # Check properties
+             if schema[:properties].is_a?(Hash)
+               schema[:properties].each_value do |prop|
+                 if prop.is_a?(Hash)
+                   prop_depth = calculate_depth(prop, current_depth + 1)
+                   max_depth = [max_depth, prop_depth].max
+                 end
+               end
+             end
+
+             # Check array items
+             if schema[:items].is_a?(Hash)
+               items_depth = calculate_depth(schema[:items], current_depth + 1)
+               max_depth = [max_depth, items_depth].max
+             end
+
+             max_depth
+           end
+         end
+       end
+     end
+   end
+ end
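To illustrate what the converter produces, here is a hand-traced input/output pair for `convert_dspy_schema_to_gemini`, following the `case` statement above (the field names are invented for the example):

```ruby
dspy_schema = {
  properties: {
    sentiment:  { type: "string", enum: ["positive", "negative"] },
    confidence: { type: "number" },
    tags:       { type: "array", items: { type: "string" } }
  },
  required: [:sentiment, :confidence]
}

# convert_dspy_schema_to_gemini(dspy_schema) would return:
# {
#   type: "object",
#   properties: {
#     sentiment:  { type: "string", enum: ["positive", "negative"] },
#     confidence: { type: "number" },
#     tags:       { type: "array", items: { type: "string" } }
#   },
#   required: ["sentiment", "confidence"]  # symbols mapped to strings
# }
```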
data/lib/dspy/lm/adapters/gemini_adapter.rb CHANGED
@@ -7,10 +7,12 @@ require_relative '../vision_models'
  module DSPy
  class LM
  class GeminiAdapter < Adapter
- def initialize(model:, api_key:)
- super
+ def initialize(model:, api_key:, structured_outputs: false)
+ super(model: model, api_key: api_key)
  validate_api_key!(api_key, 'gemini')
 
+ @structured_outputs_enabled = structured_outputs
+
  @client = Gemini.new(
  credentials: {
  service: 'generative-language-api',
data/lib/dspy/lm/strategies/gemini_structured_output_strategy.rb ADDED
@@ -0,0 +1,67 @@
+ # frozen_string_literal: true
+
+ require_relative "base_strategy"
+ require_relative "../adapters/gemini/schema_converter"
+
+ module DSPy
+   class LM
+     module Strategies
+       # Strategy for using Gemini's native structured output feature
+       class GeminiStructuredOutputStrategy < BaseStrategy
+         extend T::Sig
+
+         sig { override.returns(T::Boolean) }
+         def available?
+           # Check if adapter is Gemini and supports structured outputs
+           return false unless adapter.is_a?(DSPy::LM::GeminiAdapter)
+           return false unless adapter.instance_variable_get(:@structured_outputs_enabled)
+
+           DSPy::LM::Adapters::Gemini::SchemaConverter.supports_structured_outputs?(adapter.model)
+         end
+
+         sig { override.returns(Integer) }
+         def priority
+           100 # Highest priority - native structured outputs are most reliable
+         end
+
+         sig { override.returns(String) }
+         def name
+           "gemini_structured_output"
+         end
+
+         sig { override.params(messages: T::Array[T::Hash[Symbol, String]], request_params: T::Hash[Symbol, T.untyped]).void }
+         def prepare_request(messages, request_params)
+           # Convert signature to Gemini schema format
+           schema = DSPy::LM::Adapters::Gemini::SchemaConverter.to_gemini_format(signature_class)
+
+           # Add generation_config for structured output
+           request_params[:generation_config] = {
+             response_mime_type: "application/json",
+             response_schema: schema
+           }
+         end
+
+         sig { override.params(response: DSPy::LM::Response).returns(T.nilable(String)) }
+         def extract_json(response)
+           # With Gemini structured outputs, the response should already be valid JSON
+           # Just return the content as-is
+           response.content
+         end
+
+         sig { override.params(error: StandardError).returns(T::Boolean) }
+         def handle_error(error)
+           # Handle Gemini-specific structured output errors
+           error_msg = error.message.to_s.downcase
+           if error_msg.include?("schema") || error_msg.include?("generation_config") || error_msg.include?("response_schema")
+             # Log the error and return true to indicate we handled it
+             # This allows fallback to another strategy
+             DSPy.logger.warn("Gemini structured output failed: #{error.message}")
+             true
+           else
+             false
+           end
+         end
+       end
+     end
+   end
+ end
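The observable effect of `prepare_request` is one extra key in the request payload. A sketch of what gets merged in, with a placeholder schema standing in for the converter's output:

```ruby
request_params[:generation_config] = {
  response_mime_type: "application/json",
  response_schema: {  # placeholder; the real schema comes from SchemaConverter.to_gemini_format
    type: "object",
    properties: { answer: { type: "string" } },
    required: ["answer"]
  }
}
```

If the API rejects the schema, `handle_error` logs a warning and returns `true`, which lets the selector fall back to the next strategy.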
data/lib/dspy/lm/strategy_selector.rb CHANGED
@@ -5,6 +5,7 @@ require_relative "strategies/base_strategy"
  require_relative "strategies/openai_structured_output_strategy"
  require_relative "strategies/anthropic_tool_use_strategy"
  require_relative "strategies/anthropic_extraction_strategy"
+ require_relative "strategies/gemini_structured_output_strategy"
  require_relative "strategies/enhanced_prompting_strategy"
 
  module DSPy
@@ -13,11 +14,23 @@ module DSPy
  class StrategySelector
  extend T::Sig
 
+ # Strategy names enum for type safety
+ class StrategyName < T::Enum
+ enums do
+ OpenAIStructuredOutput = new('openai_structured_output')
+ AnthropicToolUse = new('anthropic_tool_use')
+ AnthropicExtraction = new('anthropic_extraction')
+ GeminiStructuredOutput = new('gemini_structured_output')
+ EnhancedPrompting = new('enhanced_prompting')
+ end
+ end
+
  # Available strategies in order of registration
  STRATEGIES = [
  Strategies::OpenAIStructuredOutputStrategy,
  Strategies::AnthropicToolUseStrategy,
  Strategies::AnthropicExtractionStrategy,
+ Strategies::GeminiStructuredOutputStrategy,
  Strategies::EnhancedPromptingStrategy
  ].freeze
 
@@ -38,7 +51,7 @@ module DSPy
 
  # If strict strategy not available, fall back to compatible for Strict preference
  if is_strict_preference?(DSPy.config.structured_outputs.strategy)
- compatible_strategy = find_strategy_by_name("enhanced_prompting")
+ compatible_strategy = find_strategy_by_name(StrategyName::EnhancedPrompting)
  return compatible_strategy if compatible_strategy&.available?
  end
 
@@ -65,7 +78,7 @@ module DSPy
  end
 
  # Check if a specific strategy is available
- sig { params(strategy_name: String).returns(T::Boolean) }
+ sig { params(strategy_name: StrategyName).returns(T::Boolean) }
  def strategy_available?(strategy_name)
  strategy = find_strategy_by_name(strategy_name)
  strategy&.available? || false
@@ -82,7 +95,7 @@ module DSPy
  select_provider_optimized_strategy
  when DSPy::Strategy::Compatible
  # Use enhanced prompting
- find_strategy_by_name("enhanced_prompting")
+ find_strategy_by_name(StrategyName::EnhancedPrompting)
  else
  nil
  end
@@ -98,15 +111,19 @@ module DSPy
  sig { returns(T.nilable(Strategies::BaseStrategy)) }
  def select_provider_optimized_strategy
  # Try OpenAI structured output first
- openai_strategy = find_strategy_by_name("openai_structured_output")
+ openai_strategy = find_strategy_by_name(StrategyName::OpenAIStructuredOutput)
  return openai_strategy if openai_strategy&.available?
 
+ # Try Gemini structured output
+ gemini_strategy = find_strategy_by_name(StrategyName::GeminiStructuredOutput)
+ return gemini_strategy if gemini_strategy&.available?
+
  # Try Anthropic tool use first
- anthropic_tool_strategy = find_strategy_by_name("anthropic_tool_use")
+ anthropic_tool_strategy = find_strategy_by_name(StrategyName::AnthropicToolUse)
  return anthropic_tool_strategy if anthropic_tool_strategy&.available?
 
  # Fall back to Anthropic extraction
- anthropic_strategy = find_strategy_by_name("anthropic_extraction")
+ anthropic_strategy = find_strategy_by_name(StrategyName::AnthropicExtraction)
  return anthropic_strategy if anthropic_strategy&.available?
 
  # No provider-specific strategy available
@@ -118,9 +135,9 @@ module DSPy
  STRATEGIES.map { |klass| klass.new(@adapter, @signature_class) }
  end
 
- sig { params(name: String).returns(T.nilable(Strategies::BaseStrategy)) }
+ sig { params(name: StrategyName).returns(T.nilable(Strategies::BaseStrategy)) }
  def find_strategy_by_name(name)
- @strategies.find { |s| s.name == name }
+ @strategies.find { |s| s.name == name.serialize }
  end
  end
  end
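Taken together, the `StrategyName` enum and the `serialize` comparison above keep the lookup type-safe while each strategy still reports a plain string from `#name`. A quick sketch of that round trip, using the standard sorbet-runtime `T::Enum` API:

```ruby
StrategySelector::StrategyName::GeminiStructuredOutput.serialize
# => "gemini_structured_output"  (matches GeminiStructuredOutputStrategy#name)

StrategySelector::StrategyName.deserialize('enhanced_prompting')
# => StrategyName::EnhancedPrompting
```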
data/lib/dspy/teleprompt/mipro_v2.rb CHANGED
@@ -32,6 +32,7 @@ module DSPy
  # instruction generation, and Bayesian optimization
  class MIPROv2 < Teleprompter
  extend T::Sig
+ include Dry::Configurable
 
  # Auto-configuration modes for different optimization needs
  module AutoMode
@@ -44,15 +45,17 @@ module DSPy
  ).returns(MIPROv2)
  end
  def self.light(metric: nil, **kwargs)
- config = MIPROv2Config.new
- config.num_trials = 6
- config.num_instruction_candidates = 3
- config.max_bootstrapped_examples = 2
- config.max_labeled_examples = 8
- config.bootstrap_sets = 3
- config.optimization_strategy = "greedy"
- config.early_stopping_patience = 2
- MIPROv2.new(metric: metric, config: config, **kwargs)
+ optimizer = MIPROv2.new(metric: metric, **kwargs)
+ optimizer.configure do |config|
+ config.num_trials = 6
+ config.num_instruction_candidates = 3
+ config.max_bootstrapped_examples = 2
+ config.max_labeled_examples = 8
+ config.bootstrap_sets = 3
+ config.optimization_strategy = :greedy
+ config.early_stopping_patience = 2
+ end
+ optimizer
  end
 
  sig do
@@ -62,15 +65,17 @@ module DSPy
  ).returns(MIPROv2)
  end
  def self.medium(metric: nil, **kwargs)
- config = MIPROv2Config.new
- config.num_trials = 12
- config.num_instruction_candidates = 5
- config.max_bootstrapped_examples = 4
- config.max_labeled_examples = 16
- config.bootstrap_sets = 5
- config.optimization_strategy = "adaptive"
- config.early_stopping_patience = 3
- MIPROv2.new(metric: metric, config: config, **kwargs)
+ optimizer = MIPROv2.new(metric: metric, **kwargs)
+ optimizer.configure do |config|
+ config.num_trials = 12
+ config.num_instruction_candidates = 5
+ config.max_bootstrapped_examples = 4
+ config.max_labeled_examples = 16
+ config.bootstrap_sets = 5
+ config.optimization_strategy = :adaptive
+ config.early_stopping_patience = 3
+ end
+ optimizer
  end
 
  sig do
@@ -80,135 +85,102 @@ module DSPy
  ).returns(MIPROv2)
  end
  def self.heavy(metric: nil, **kwargs)
- config = MIPROv2Config.new
- config.num_trials = 18
- config.num_instruction_candidates = 8
- config.max_bootstrapped_examples = 6
- config.max_labeled_examples = 24
- config.bootstrap_sets = 8
- config.optimization_strategy = "bayesian"
- config.early_stopping_patience = 5
- MIPROv2.new(metric: metric, config: config, **kwargs)
- end
- end
-
- # Configuration for MIPROv2 optimization
- class MIPROv2Config < Config
- extend T::Sig
-
- sig { returns(Integer) }
- attr_accessor :num_trials
-
- sig { returns(Integer) }
- attr_accessor :num_instruction_candidates
-
- sig { returns(Integer) }
- attr_accessor :bootstrap_sets
-
- sig { returns(String) }
- attr_accessor :optimization_strategy
-
- sig { returns(Float) }
- attr_accessor :init_temperature
-
- sig { returns(Float) }
- attr_accessor :final_temperature
-
- sig { returns(Integer) }
- attr_accessor :early_stopping_patience
-
- sig { returns(T::Boolean) }
- attr_accessor :use_bayesian_optimization
-
- sig { returns(T::Boolean) }
- attr_accessor :track_diversity
-
- sig { returns(DSPy::Propose::GroundedProposer::Config) }
- attr_accessor :proposer_config
-
- sig { void }
- def initialize
- super
- @num_trials = 12
- @num_instruction_candidates = 5
- @bootstrap_sets = 5
- @optimization_strategy = "adaptive" # greedy, adaptive, bayesian
- @init_temperature = 1.0
- @final_temperature = 0.1
- @early_stopping_patience = 3
- @use_bayesian_optimization = true
- @track_diversity = true
- @proposer_config = DSPy::Propose::GroundedProposer::Config.new
+ optimizer = MIPROv2.new(metric: metric, **kwargs)
+ optimizer.configure do |config|
+ config.num_trials = 18
+ config.num_instruction_candidates = 8
+ config.max_bootstrapped_examples = 6
+ config.max_labeled_examples = 24
+ config.bootstrap_sets = 8
+ config.optimization_strategy = :bayesian
+ config.early_stopping_patience = 5
+ end
+ optimizer
+ end
+ end
+
+ # Dry-configurable settings for MIPROv2
+ setting :num_trials, default: 12
+ setting :num_instruction_candidates, default: 5
+ setting :bootstrap_sets, default: 5
+ setting :max_bootstrapped_examples, default: 4
+ setting :max_labeled_examples, default: 16
+ setting :optimization_strategy, default: OptimizationStrategy::Adaptive, constructor: ->(value) {
+ # Coerce symbols to enum values
+ case value
+ when :greedy then OptimizationStrategy::Greedy
+ when :adaptive then OptimizationStrategy::Adaptive
+ when :bayesian then OptimizationStrategy::Bayesian
+ when OptimizationStrategy then value
+ when nil then OptimizationStrategy::Adaptive
+ else
+ raise ArgumentError, "Invalid optimization strategy: #{value}. Must be one of :greedy, :adaptive, :bayesian"
  end
+ }
+ setting :init_temperature, default: 1.0
+ setting :final_temperature, default: 0.1
+ setting :early_stopping_patience, default: 3
+ setting :use_bayesian_optimization, default: true
+ setting :track_diversity, default: true
+ setting :max_errors, default: 3
+ setting :num_threads, default: 1
 
- sig { returns(T::Hash[Symbol, T.untyped]) }
- def to_h
- super.merge({
- num_trials: @num_trials,
- num_instruction_candidates: @num_instruction_candidates,
- bootstrap_sets: @bootstrap_sets,
- optimization_strategy: @optimization_strategy,
- init_temperature: @init_temperature,
- final_temperature: @final_temperature,
- early_stopping_patience: @early_stopping_patience,
- use_bayesian_optimization: @use_bayesian_optimization,
- track_diversity: @track_diversity
- })
+ # Class-level configuration method - sets defaults for new instances
+ def self.configure(&block)
+ if block_given?
+ # Store configuration in a class variable for new instances
+ @default_config_block = block
  end
  end
 
- # Candidate configuration for optimization trials
- class CandidateConfig
- extend T::Sig
- include Dry::Configurable
-
- # Configuration settings
- setting :instruction, default: ""
- setting :few_shot_examples, default: []
- setting :type, default: CandidateType::Baseline
- setting :metadata, default: {}
+ # Get the default configuration block
+ def self.default_config_block
+ @default_config_block
+ end
 
- sig { returns(String) }
- def config_id
- @config_id ||= generate_config_id
- end
 
- sig { void }
- def finalize!
- # Freeze settings after configuration to prevent mutation
- config.instruction = config.instruction.freeze
- config.few_shot_examples = config.few_shot_examples.freeze
- config.metadata = config.metadata.freeze
+ # Simple data structure for evaluated candidate configurations (immutable)
+ EvaluatedCandidate = Data.define(
+ :instruction,
+ :few_shot_examples,
+ :type,
+ :metadata,
+ :config_id
+ ) do
+ extend T::Sig
+
+ # Generate a config ID based on content
+ sig { params(instruction: String, few_shot_examples: T::Array[T.untyped], type: CandidateType, metadata: T::Hash[Symbol, T.untyped]).returns(EvaluatedCandidate) }
+ def self.create(instruction:, few_shot_examples: [], type: CandidateType::Baseline, metadata: {})
+ content = "#{instruction}_#{few_shot_examples.size}_#{type.serialize}_#{metadata.hash}"
+ config_id = Digest::SHA256.hexdigest(content)[0, 12]
 
- # Generate ID after finalization
- @config_id = generate_config_id
+ new(
+ instruction: instruction.freeze,
+ few_shot_examples: few_shot_examples.freeze,
+ type: type,
+ metadata: metadata.freeze,
+ config_id: config_id
+ )
  end
 
  sig { returns(T::Hash[Symbol, T.untyped]) }
  def to_h
  {
- instruction: config.instruction,
- few_shot_examples: config.few_shot_examples.size,
- type: config.type.serialize,
- metadata: config.metadata,
+ instruction: instruction,
+ few_shot_examples: few_shot_examples.size,
+ type: type.serialize,
+ metadata: metadata,
  config_id: config_id
  }
  end
-
- private
-
- sig { returns(String) }
- def generate_config_id
- content = "#{config.instruction}_#{config.few_shot_examples.size}_#{config.type.serialize}_#{config.metadata.hash}"
- Digest::SHA256.hexdigest(content)[0, 12]
- end
  end
 
  # Result of MIPROv2 optimization
  class MIPROv2Result < OptimizationResult
  extend T::Sig
 
- sig { returns(T::Array[CandidateConfig]) }
+ sig { returns(T::Array[EvaluatedCandidate]) }
  attr_reader :evaluated_candidates
 
  sig { returns(T::Hash[Symbol, T.untyped]) }
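A sketch of how the new dry-configurable surface behaves, based on the settings and coercing constructor above (the `DSPy::Teleprompt::MIPROv2` path and `my_metric` are assumed for the example):

```ruby
optimizer = DSPy::Teleprompt::MIPROv2.new(metric: my_metric)
optimizer.configure do |config|
  config.num_trials = 10
  config.optimization_strategy = :bayesian  # symbol coerced to OptimizationStrategy::Bayesian
end

optimizer.config.num_trials             # => 10
optimizer.config.optimization_strategy  # => OptimizationStrategy::Bayesian
optimizer.config.init_temperature       # => 1.0 (default)
# assigning an unknown symbol raises ArgumentError via the constructor lambda
```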
@@ -228,7 +200,7 @@ module DSPy
  optimized_program: T.untyped,
  scores: T::Hash[Symbol, T.untyped],
  history: T::Hash[Symbol, T.untyped],
- evaluated_candidates: T::Array[CandidateConfig],
+ evaluated_candidates: T::Array[EvaluatedCandidate],
  optimization_trace: T::Hash[Symbol, T.untyped],
  bootstrap_statistics: T::Hash[Symbol, T.untyped],
  proposal_statistics: T::Hash[Symbol, T.untyped],
@@ -272,18 +244,25 @@ module DSPy
  sig { returns(T.nilable(DSPy::Propose::GroundedProposer)) }
  attr_reader :proposer
 
- sig do
- params(
- metric: T.nilable(T.proc.params(arg0: T.untyped, arg1: T.untyped).returns(T.untyped)),
- config: T.nilable(MIPROv2Config)
- ).void
- end
- def initialize(metric: nil, config: nil)
- @mipro_config = config || MIPROv2Config.new
- # Call parent teleprompter initializer, which handles dry-configurable internally
- super(metric: metric, config: @mipro_config)
+ # Override dry-configurable's initialize to add our parameter validation
+ def initialize(metric: nil, **kwargs)
+ # Reject old config parameter pattern
+ if kwargs.key?(:config)
+ raise ArgumentError, "config parameter is no longer supported. Use .configure blocks instead."
+ end
+
+ # Let dry-configurable handle its initialization
+ super(**kwargs)
+
+ # Apply class-level configuration if it exists
+ if self.class.default_config_block
+ configure(&self.class.default_config_block)
+ end
 
- @proposer = DSPy::Propose::GroundedProposer.new(config: @mipro_config.proposer_config)
+ @metric = metric
+
+ # Initialize proposer with a basic config for now (will be updated later)
+ @proposer = DSPy::Propose::GroundedProposer.new(config: DSPy::Propose::GroundedProposer::Config.new)
  @optimization_trace = []
  @evaluated_candidates = []
  end
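For callers, the validation above turns the 0.26.0 construction pattern into an error; a before/after sketch (class paths assumed):

```ruby
# 0.26.0 - no longer supported, now raises ArgumentError:
#   MIPROv2.new(metric: metric, config: MIPROv2Config.new)

# 0.27.0 - configure after construction, or use a preset:
optimizer = DSPy::Teleprompt::MIPROv2.new(metric: metric)
optimizer.configure { |c| c.early_stopping_patience = 4 }

preset = DSPy::Teleprompt::MIPROv2.light(metric: metric)  # presets apply the same settings internally
```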
@@ -302,8 +281,8 @@ module DSPy
  instrument_step('miprov2_compile', {
  trainset_size: trainset.size,
  valset_size: valset&.size || 0,
- num_trials: @mipro_config.num_trials,
- optimization_strategy: @mipro_config.optimization_strategy,
+ num_trials: config.num_trials,
+ optimization_strategy: config.optimization_strategy,
  mode: infer_auto_mode
  }) do
  # Convert examples to typed format
@@ -363,11 +342,11 @@ module DSPy
  sig { params(program: T.untyped, trainset: T::Array[DSPy::Example]).returns(Utils::BootstrapResult) }
  def phase_1_bootstrap(program, trainset)
  bootstrap_config = Utils::BootstrapConfig.new
- bootstrap_config.max_bootstrapped_examples = @mipro_config.max_bootstrapped_examples
- bootstrap_config.max_labeled_examples = @mipro_config.max_labeled_examples
- bootstrap_config.num_candidate_sets = @mipro_config.bootstrap_sets
- bootstrap_config.max_errors = @mipro_config.max_errors
- bootstrap_config.num_threads = @mipro_config.num_threads
+ bootstrap_config.max_bootstrapped_examples = config.max_bootstrapped_examples
+ bootstrap_config.max_labeled_examples = config.max_labeled_examples
+ bootstrap_config.num_candidate_sets = config.bootstrap_sets
+ bootstrap_config.max_errors = config.max_errors
+ bootstrap_config.num_threads = config.num_threads
 
  Utils.create_n_fewshot_demo_sets(program, trainset, config: bootstrap_config, metric: @metric)
  end
@@ -392,7 +371,7 @@ module DSPy
  raise ArgumentError, "Cannot extract signature class from program" unless signature_class
 
  # Configure proposer for this optimization run
- @mipro_config.proposer_config.num_instruction_candidates = @mipro_config.num_instruction_candidates
+ @proposer.config.num_instruction_candidates = config.num_instruction_candidates
 
  @proposer.propose_instructions(
  signature_class,
@@ -425,7 +404,7 @@ module DSPy
  best_program = nil
  best_evaluation_result = nil
 
- @mipro_config.num_trials.times do |trial_idx|
+ config.num_trials.times do |trial_idx|
  trials_completed = trial_idx + 1
 
  # Select next candidate based on optimization strategy
@@ -434,8 +413,8 @@ module DSPy
  emit_event('trial_start', {
  trial_number: trials_completed,
  candidate_id: candidate.config_id,
- instruction_preview: candidate.config.instruction[0, 50],
- num_few_shot: candidate.config.few_shot_examples.size
+ instruction_preview: candidate.instruction[0, 50],
+ num_few_shot: candidate.few_shot_examples.size
  })
 
  begin
@@ -494,37 +473,40 @@ module DSPy
  params(
  proposal_result: DSPy::Propose::GroundedProposer::ProposalResult,
  bootstrap_result: Utils::BootstrapResult
- ).returns(T::Array[CandidateConfig])
+ ).returns(T::Array[EvaluatedCandidate])
  end
  def generate_candidate_configurations(proposal_result, bootstrap_result)
  candidates = []
 
  # Base configuration (no modifications)
- candidates << create_candidate_config do |config|
- config.instruction = ""
- config.few_shot_examples = []
- config.type = CandidateType::Baseline
- config.metadata = {}
- end
+ candidates << EvaluatedCandidate.new(
+ instruction: "",
+ few_shot_examples: [],
+ type: CandidateType::Baseline,
+ metadata: {},
+ config_id: SecureRandom.hex(6)
+ )
 
  # Instruction-only candidates
  proposal_result.candidate_instructions.each_with_index do |instruction, idx|
- candidates << create_candidate_config do |config|
- config.instruction = instruction
- config.few_shot_examples = []
- config.type = CandidateType::InstructionOnly
- config.metadata = { proposal_rank: idx }
- end
+ candidates << EvaluatedCandidate.new(
+ instruction: instruction,
+ few_shot_examples: [],
+ type: CandidateType::InstructionOnly,
+ metadata: { proposal_rank: idx },
+ config_id: SecureRandom.hex(6)
+ )
  end
 
  # Few-shot only candidates
  bootstrap_result.candidate_sets.each_with_index do |candidate_set, idx|
- candidates << create_candidate_config do |config|
- config.instruction = ""
- config.few_shot_examples = candidate_set
- config.type = CandidateType::FewShotOnly
- config.metadata = { bootstrap_rank: idx }
- end
+ candidates << EvaluatedCandidate.new(
+ instruction: "",
+ few_shot_examples: candidate_set,
+ type: CandidateType::FewShotOnly,
+ metadata: { bootstrap_rank: idx },
+ config_id: SecureRandom.hex(6)
+ )
  end
 
  # Combined candidates (instruction + few-shot)
@@ -533,15 +515,16 @@ module DSPy
 
 
  top_instructions.each_with_index do |instruction, i_idx|
  top_bootstrap_sets.each_with_index do |candidate_set, b_idx|
- candidates << create_candidate_config do |config|
- config.instruction = instruction
- config.few_shot_examples = candidate_set
- config.type = CandidateType::Combined
- config.metadata = {
+ candidates << EvaluatedCandidate.new(
+ instruction: instruction,
+ few_shot_examples: candidate_set,
+ type: CandidateType::Combined,
+ metadata: {
  instruction_rank: i_idx,
  bootstrap_rank: b_idx
- }
- end
+ },
+ config_id: SecureRandom.hex(6)
+ )
  end
  end
@@ -549,13 +532,13 @@ module DSPy
  end
 
  # Initialize optimization state for candidate selection
- sig { params(candidates: T::Array[CandidateConfig]).returns(T::Hash[Symbol, T.untyped]) }
+ sig { params(candidates: T::Array[EvaluatedCandidate]).returns(T::Hash[Symbol, T.untyped]) }
  def initialize_optimization_state(candidates)
  {
  candidates: candidates,
  scores: {},
  exploration_counts: Hash.new(0),
- temperature: @mipro_config.init_temperature,
+ temperature: config.init_temperature,
  best_score_history: [],
  diversity_scores: {},
  no_improvement_count: 0
@@ -565,18 +548,18 @@ module DSPy
  # Select next candidate based on optimization strategy
  sig do
  params(
- candidates: T::Array[CandidateConfig],
+ candidates: T::Array[EvaluatedCandidate],
  state: T::Hash[Symbol, T.untyped],
  trial_idx: Integer
- ).returns(CandidateConfig)
+ ).returns(EvaluatedCandidate)
  end
  def select_next_candidate(candidates, state, trial_idx)
- case @mipro_config.optimization_strategy
- when "greedy"
+ case config.optimization_strategy
+ when OptimizationStrategy::Greedy
  select_candidate_greedy(candidates, state)
- when "adaptive"
+ when OptimizationStrategy::Adaptive
  select_candidate_adaptive(candidates, state, trial_idx)
- when "bayesian"
+ when OptimizationStrategy::Bayesian
  select_candidate_bayesian(candidates, state, trial_idx)
  else
  candidates.sample # Random fallback
@@ -584,7 +567,7 @@ module DSPy
  end
 
  # Greedy candidate selection (exploit best known configurations)
- sig { params(candidates: T::Array[CandidateConfig], state: T::Hash[Symbol, T.untyped]).returns(CandidateConfig) }
+ sig { params(candidates: T::Array[EvaluatedCandidate], state: T::Hash[Symbol, T.untyped]).returns(EvaluatedCandidate) }
  def select_candidate_greedy(candidates, state)
  # Prioritize unexplored candidates, then highest scoring
  unexplored = candidates.reject { |c| state[:scores].key?(c.config_id) }
@@ -598,15 +581,15 @@ module DSPy
  # Adaptive candidate selection (balance exploration and exploitation)
  sig do
  params(
- candidates: T::Array[CandidateConfig],
+ candidates: T::Array[EvaluatedCandidate],
  state: T::Hash[Symbol, T.untyped],
  trial_idx: Integer
- ).returns(CandidateConfig)
+ ).returns(EvaluatedCandidate)
  end
  def select_candidate_adaptive(candidates, state, trial_idx)
  # Update temperature based on progress
- progress = trial_idx.to_f / @mipro_config.num_trials
- state[:temperature] = @mipro_config.init_temperature * (1 - progress) + @mipro_config.final_temperature * progress
+ progress = trial_idx.to_f / config.num_trials
+ state[:temperature] = config.init_temperature * (1 - progress) + config.final_temperature * progress
 
  # Calculate selection scores combining exploitation and exploration
  candidate_scores = candidates.map do |candidate|
@@ -639,10 +622,10 @@ module DSPy
  # Bayesian candidate selection (use probabilistic model)
  sig do
  params(
- candidates: T::Array[CandidateConfig],
+ candidates: T::Array[EvaluatedCandidate],
  state: T::Hash[Symbol, T.untyped],
  trial_idx: Integer
- ).returns(CandidateConfig)
+ ).returns(EvaluatedCandidate)
  end
  def select_candidate_bayesian(candidates, state, trial_idx)
  # Need at least 3 observations to fit GP, otherwise fall back to adaptive
@@ -686,21 +669,9 @@ module DSPy
 
  private
 
- # Helper method to create CandidateConfig with dry-configurable syntax
- sig do
- params(
- block: T.proc.params(config: Dry::Configurable::Config).void
- ).returns(CandidateConfig)
- end
- def create_candidate_config(&block)
- candidate = CandidateConfig.new
- candidate.configure(&block)
- candidate.finalize!
- candidate
- end
 
  # Encode candidates as numerical features for Gaussian Process
- sig { params(candidates: T::Array[CandidateConfig]).returns(T::Array[T::Array[Float]]) }
+ sig { params(candidates: T::Array[EvaluatedCandidate]).returns(T::Array[T::Array[Float]]) }
  def encode_candidates_for_gp(candidates)
  # Simple encoding: use hash of config as features
  # In practice, this could be more sophisticated (e.g., instruction embeddings)
@@ -715,7 +686,7 @@ module DSPy
  features << ((config_hash / 1_000_000) % 1000).to_f / 1000.0 # Feature 3: high bits
 
  # Add instruction length if available
- instruction = candidate.config.instruction
+ instruction = candidate.instruction
  if instruction && !instruction.empty?
  features << [instruction.length.to_f / 100.0, 2.0].min # Instruction length, capped at 200 chars
  else
@@ -730,7 +701,7 @@ module DSPy
  sig do
  params(
  program: T.untyped,
- candidate: CandidateConfig,
+ candidate: EvaluatedCandidate,
  evaluation_set: T::Array[DSPy::Example]
  ).returns([Float, T.untyped, DSPy::Evaluate::BatchEvaluationResult])
  end
@@ -748,18 +719,18 @@ module DSPy
  end
 
  # Apply candidate configuration to program
- sig { params(program: T.untyped, candidate: CandidateConfig).returns(T.untyped) }
+ sig { params(program: T.untyped, candidate: EvaluatedCandidate).returns(T.untyped) }
  def apply_candidate_configuration(program, candidate)
  modified_program = program
 
  # Apply instruction if provided
- if !candidate.config.instruction.empty? && program.respond_to?(:with_instruction)
- modified_program = modified_program.with_instruction(candidate.config.instruction)
+ if !candidate.instruction.empty? && program.respond_to?(:with_instruction)
+ modified_program = modified_program.with_instruction(candidate.instruction)
  end
 
  # Apply few-shot examples if provided
- if candidate.config.few_shot_examples.any? && program.respond_to?(:with_examples)
- few_shot_examples = candidate.config.few_shot_examples.map do |example|
+ if candidate.few_shot_examples.any? && program.respond_to?(:with_examples)
+ few_shot_examples = candidate.few_shot_examples.map do |example|
  DSPy::FewShotExample.new(
  input: example.input_values,
  output: example.expected_values,
@@ -776,7 +747,7 @@ module DSPy
  sig do
  params(
  state: T::Hash[Symbol, T.untyped],
- candidate: CandidateConfig,
+ candidate: EvaluatedCandidate,
  score: Float
  ).void
  end
@@ -786,7 +757,7 @@ module DSPy
  state[:best_score_history] << score
 
  # Track diversity if enabled
- if @mipro_config.track_diversity
+ if config.track_diversity
  state[:diversity_scores][candidate.config_id] = calculate_diversity_score(candidate)
  end
 
@@ -802,18 +773,18 @@ module DSPy
  sig { params(state: T::Hash[Symbol, T.untyped], trial_idx: Integer).returns(T::Boolean) }
  def should_early_stop?(state, trial_idx)
  # Don't stop too early
- return false if trial_idx < @mipro_config.early_stopping_patience
+ return false if trial_idx < config.early_stopping_patience
 
  # Stop if no improvement for patience trials
- state[:no_improvement_count] >= @mipro_config.early_stopping_patience
+ state[:no_improvement_count] >= config.early_stopping_patience
  end
 
  # Calculate diversity score for candidate
- sig { params(candidate: CandidateConfig).returns(Float) }
+ sig { params(candidate: EvaluatedCandidate).returns(Float) }
  def calculate_diversity_score(candidate)
  # Simple diversity metric based on instruction length and few-shot count
- instruction_diversity = candidate.config.instruction.length / 200.0
- few_shot_diversity = candidate.config.few_shot_examples.size / 10.0
+ instruction_diversity = candidate.instruction.length / 200.0
+ few_shot_diversity = candidate.few_shot_examples.size / 10.0
 
  [instruction_diversity + few_shot_diversity, 1.0].min
  end
@@ -836,17 +807,17 @@ module DSPy
 
  history = {
  total_trials: optimization_result[:trials_completed],
- optimization_strategy: @mipro_config.optimization_strategy,
- early_stopped: optimization_result[:trials_completed] < @mipro_config.num_trials,
+ optimization_strategy: config.optimization_strategy,
+ early_stopped: optimization_result[:trials_completed] < config.num_trials,
  score_history: optimization_result[:optimization_state][:best_score_history]
  }
 
  metadata = {
  optimizer: "MIPROv2",
  auto_mode: infer_auto_mode,
- best_instruction: best_candidate&.config&.instruction || "",
- best_few_shot_count: best_candidate&.config&.few_shot_examples&.size || 0,
- best_candidate_type: best_candidate&.config&.type&.serialize || "unknown",
+ best_instruction: best_candidate&.instruction || "",
+ best_few_shot_count: best_candidate&.few_shot_examples&.size || 0,
+ best_candidate_type: best_candidate&.type&.serialize || "unknown",
  optimization_timestamp: Time.now.iso8601
  }
 
@@ -917,7 +888,7 @@ module DSPy
  # Infer auto mode based on configuration
  sig { returns(String) }
  def infer_auto_mode
- case @mipro_config.num_trials
+ case config.num_trials
  when 0..6 then "light"
  when 7..12 then "medium"
  else "heavy"
data/lib/dspy/version.rb CHANGED
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module DSPy
- VERSION = "0.26.0"
+ VERSION = "0.27.0"
  end
metadata CHANGED
@@ -1,13 +1,13 @@
  --- !ruby/object:Gem::Specification
  name: dspy
  version: !ruby/object:Gem::Version
- version: 0.26.0
+ version: 0.27.0
  platform: ruby
  authors:
  - Vicente Reig Rincón de Arellano
  bindir: bin
  cert_chain: []
- date: 2025-09-09 00:00:00.000000000 Z
+ date: 2025-09-13 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: dry-configurable
@@ -207,6 +207,7 @@ files:
  - lib/dspy/lm/adapter.rb
  - lib/dspy/lm/adapter_factory.rb
  - lib/dspy/lm/adapters/anthropic_adapter.rb
+ - lib/dspy/lm/adapters/gemini/schema_converter.rb
  - lib/dspy/lm/adapters/gemini_adapter.rb
  - lib/dspy/lm/adapters/ollama_adapter.rb
  - lib/dspy/lm/adapters/openai/schema_converter.rb
@@ -221,6 +222,7 @@ files:
  - lib/dspy/lm/strategies/anthropic_tool_use_strategy.rb
  - lib/dspy/lm/strategies/base_strategy.rb
  - lib/dspy/lm/strategies/enhanced_prompting_strategy.rb
+ - lib/dspy/lm/strategies/gemini_structured_output_strategy.rb
  - lib/dspy/lm/strategies/openai_structured_output_strategy.rb
  - lib/dspy/lm/strategy_selector.rb
  - lib/dspy/lm/structured_output_strategy.rb