dspy 0.31.1 → 0.33.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 584b43a98aa27dcc4bcdb83c64f48b4c3d0f4b67453abfe9a0942cbc30b14e7d
-  data.tar.gz: ab2b58be33ef10e36d1825c371e29783a7a3cea72bfb34b765b9aeab5fb4644f
+  metadata.gz: 52dc686ff0347f7844a3b6fc476b31737f3467d5d179974f34a98b8dbbd12073
+  data.tar.gz: 0e39c94a4766c481167268f49e42277d297b688ee9f960181785062e69f91572
 SHA512:
-  metadata.gz: 6671b0cca0709aa9dea2eaf46abbbc2366d0779a89139377b6840279afa767e659bf635e8314a0a3ca45bc3f476200b7d430f74e83b0af08a085ae221a97c7c5
-  data.tar.gz: 1f8aaba8e3e2f16e1bd0eb81f9694b677f20cbeb7d0dadbacc9571613076ab29bc44991c79f3c5908e821397b2e0616e04801704f4ced5e0e5a8e9eed4f84f4f
+  metadata.gz: bb4fb2ce89ed600e971a07cfabe3eb9edd344563aa77df57304dbec565121eeb9c8a53ba4cdd66f04c81cb3b1231d222a59fb4a15962680051c07de49c080dca
+  data.tar.gz: 2543dd3bc228c98a1ab82c14ce8fffbed86342fa661af674976b90b10752334a5606257401f9ff953bd82f36aa29fe816895acec9e30a5219e8c8dc2d3ea1727
data/README.md CHANGED
@@ -13,12 +13,10 @@
 >
 > If you want to contribute, feel free to reach out to me to coordinate efforts: hey at vicente.services
 >
-> And, yes, this is 100% a legit project. :)
-
 
 **Build reliable LLM applications in idiomatic Ruby using composable, type-safe modules.**
 
-DSPy.rb is the Ruby-first surgical port of Stanford's [DSPy framework](https://github.com/stanfordnlp/dspy). It delivers structured LLM programming, prompt engineering, and context engineering in the language we love. Instead of wrestling with brittle prompt strings, you define typed signatures in idiomatic Ruby and compose workflows and agents that actually behave.
+DSPy.rb is the Ruby-first surgical port of Stanford's [DSPy paradigm](https://github.com/stanfordnlp/dspy). It delivers structured LLM programming, prompt engineering, and context engineering in the language we love. Instead of wrestling with brittle prompt strings, you define typed signatures in idiomatic Ruby and compose workflows and agents that actually behave.
 
 **Prompts are just functions.** Traditional prompting is like writing code with string concatenation: it works until it doesn't. DSPy.rb brings you the programming approach pioneered by [dspy.ai](https://dspy.ai/): define modular signatures and let the framework deal with the messy bits.
 
@@ -104,6 +102,7 @@ DSPy.rb ships multiple gems from this monorepo so you can opt into features with
 | `dspy-openai` | Packages the OpenAI/OpenRouter/Ollama adapters plus the official SDK guardrails. Install whenever you call `openai/*`, `openrouter/*`, or `ollama/*`. [Adapter README](https://github.com/vicentereig/dspy.rb/blob/main/lib/dspy/openai/README.md) | **Stable** (v1.0.0) |
 | `dspy-anthropic` | Claude adapters, streaming, and structured-output helpers behind the official `anthropic` SDK. [Adapter README](https://github.com/vicentereig/dspy.rb/blob/main/lib/dspy/anthropic/README.md) | **Stable** (v1.0.0) |
 | `dspy-gemini` | Gemini adapters with multimodal + tool-call support via `gemini-ai`. [Adapter README](https://github.com/vicentereig/dspy.rb/blob/main/lib/dspy/gemini/README.md) | **Stable** (v1.0.0) |
+| `dspy-ruby_llm` | Unified access to 12+ LLM providers (OpenAI, Anthropic, Gemini, Bedrock, Ollama, DeepSeek, etc.) via [RubyLLM](https://rubyllm.com). [Adapter README](https://github.com/vicentereig/dspy.rb/blob/main/lib/dspy/ruby_llm/README.md) | **Stable** (v0.1.0) |
 | `dspy-code_act` | Think-Code-Observe agents that synthesize and execute Ruby safely. (Add the gem or set `DSPY_WITH_CODE_ACT=1` before requiring `dspy/code_act`.) | **Stable** (v1.0.0) |
 | `dspy-datasets` | Dataset helpers plus Parquet/Polars tooling for richer evaluation corpora. (Toggle via `DSPY_WITH_DATASETS`.) | **Stable** (v1.0.0) |
 | `dspy-evals` | High-throughput evaluation harness with metrics, callbacks, and regression fixtures. (Toggle via `DSPY_WITH_EVALS`.) | **Stable** (v1.0.0) |
@@ -2,6 +2,6 @@
 
 module DSPy
   class Evals
-    VERSION = '1.0.0'
+    VERSION = '1.0.1'
   end
 end
@@ -10,10 +10,11 @@ module DSPy
     'anthropic' => { class_name: 'DSPy::Anthropic::LM::Adapters::AnthropicAdapter', gem_name: 'dspy-anthropic' },
     'ollama' => { class_name: 'DSPy::OpenAI::LM::Adapters::OllamaAdapter', gem_name: 'dspy-openai' },
     'gemini' => { class_name: 'DSPy::Gemini::LM::Adapters::GeminiAdapter', gem_name: 'dspy-gemini' },
-    'openrouter' => { class_name: 'DSPy::OpenAI::LM::Adapters::OpenRouterAdapter', gem_name: 'dspy-openai' }
+    'openrouter' => { class_name: 'DSPy::OpenAI::LM::Adapters::OpenRouterAdapter', gem_name: 'dspy-openai' },
+    'ruby_llm' => { class_name: 'DSPy::RubyLLM::LM::Adapters::RubyLLMAdapter', gem_name: 'dspy-ruby_llm' }
   }.freeze
 
-  PROVIDERS_WITH_EXTRA_OPTIONS = %w[openai anthropic ollama gemini openrouter].freeze
+  PROVIDERS_WITH_EXTRA_OPTIONS = %w[openai anthropic ollama gemini openrouter ruby_llm].freeze
 
   class AdapterData < Data.define(:class_name, :gem_name)
     def self.from_prefix(provider_prefix)
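The registry pairs each provider prefix with its adapter class and the gem that ships it. A hypothetical plain-Ruby sketch of the prefix lookup (names simplified; not the actual DSPy constants):

```ruby
# Hypothetical mini-registry: maps the provider prefix of a model id
# (the part before "/") to adapter metadata, raising on unknown prefixes.
ADAPTERS = {
  'openrouter' => { class_name: 'OpenRouterAdapter', gem_name: 'dspy-openai' },
  'ruby_llm'   => { class_name: 'RubyLLMAdapter',    gem_name: 'dspy-ruby_llm' }
}.freeze

def adapter_for(model_id)
  prefix = model_id.split('/', 2).first
  ADAPTERS.fetch(prefix) { raise ArgumentError, "Unknown provider: #{prefix}" }
end

adapter_for('ruby_llm/gpt-4o')[:gem_name]  # => "dspy-ruby_llm"
```

Keeping the table frozen and failing fast on unknown prefixes is what lets DSPy tell you which gem to install instead of raising a bare `NameError` later.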
@@ -37,7 +37,7 @@ module DSPy
       when ->(type) { hash_type?(type) }
         coerce_hash_value(value, prop_type)
       when ->(type) { type == String || simple_type_match?(type, String) }
-        value.to_s
+        coerce_to_string(value)
       when ->(type) { enum_type?(type) }
         coerce_enum_value(value, prop_type)
       when ->(type) { type == Float || simple_type_match?(type, Float) }
@@ -295,6 +295,18 @@ module DSPy
       nil
     end
 
+    # Coerces a value to String with strict type checking
+    # Only allows String (passthrough) and Symbol (to_s) - rejects other types
+    sig { params(value: T.untyped).returns(String) }
+    def coerce_to_string(value)
+      case value
+      when String then value
+      when Symbol then value.to_s
+      else
+        raise TypeError, "Cannot coerce #{value.class} to String - expected String or Symbol"
+      end
+    end
+
     # Coerces a value to an enum, handling both strings and existing enum instances
     sig { params(value: T.untyped, prop_type: T.untyped).returns(T.untyped) }
     def coerce_enum_value(value, prop_type)
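The new helper replaces a permissive `value.to_s` with strict coercion. Stripped of its Sorbet `sig`, the logic stands alone:

```ruby
# Strict String coercion, mirroring the method added above: String passes
# through, Symbol converts via #to_s, and anything else raises TypeError
# instead of being silently stringified.
def coerce_to_string(value)
  case value
  when String then value
  when Symbol then value.to_s
  else
    raise TypeError, "Cannot coerce #{value.class} to String - expected String or Symbol"
  end
end

coerce_to_string(:pending)  # => "pending"
```

The practical effect: a value like `42` arriving for a String field now fails loudly rather than quietly becoming `"42"`.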
data/lib/dspy/module.rb CHANGED
@@ -179,6 +179,47 @@ module DSPy
     named_predictors.map { |(_, predictor)| predictor }
   end
 
+  # Override Dry::Configurable's configure to propagate LM to child predictors
+  # When you configure an agent's LM, it automatically propagates to all child predictors
+  # returned by named_predictors, recursively.
+  #
+  # @example Basic usage
+  #   agent.configure { |c| c.lm = DSPy::LM.new('openai/gpt-4o') }
+  #   # All internal predictors now use gpt-4o
+  #
+  # @example Fine-grained control (configure then override)
+  #   agent.configure { |c| c.lm = cheap_lm }
+  #   agent.configure_predictor('thought_generator') { |c| c.lm = expensive_lm }
+  #
+  # @return [self] for method chaining
+  sig { params(block: T.proc.params(config: T.untyped).void).returns(T.self_type) }
+  def configure(&block)
+    super(&block)
+    propagate_lm_to_children(config.lm) if config.lm
+    self
+  end
+
+  # Configure a specific child predictor by name
+  # Use this for fine-grained control when different predictors need different LMs
+  #
+  # @param predictor_name [String] The name of the predictor (e.g., 'thought_generator')
+  # @yield [config] Configuration block
+  # @return [self] for method chaining
+  # @raise [ArgumentError] if predictor_name is not found
+  #
+  # @example
+  #   agent.configure_predictor('thought_generator') { |c| c.lm = expensive_lm }
+  sig { params(predictor_name: String, block: T.proc.params(config: T.untyped).void).returns(T.self_type) }
+  def configure_predictor(predictor_name, &block)
+    _, predictor = named_predictors.find { |name, _| name == predictor_name }
+    unless predictor
+      available = named_predictors.map(&:first).join(', ')
+      raise ArgumentError, "Unknown predictor: #{predictor_name}. Available: #{available}"
+    end
+    predictor.configure(&block)
+    self
+  end
+
   def instrument_forward_call(call_args, call_kwargs)
     ensure_module_subscriptions!
 
@@ -255,6 +296,21 @@ module DSPy
 
   private
 
+  # Propagate LM configuration to child predictors recursively
+  # Skips children that already have an explicit LM configured
+  sig { params(lm: T.untyped).void }
+  def propagate_lm_to_children(lm)
+    named_predictors.each do |(name, predictor)|
+      next if predictor == self # Skip self-references (Predict returns [['self', self]])
+
+      # Only propagate if child doesn't have explicit LM configured
+      unless predictor.config.lm
+        # Recursive: configure calls propagate_lm_to_children on the child too
+        predictor.configure { |c| c.lm = lm }
+      end
+    end
+  end
+
   def ensure_module_subscriptions!
     return if @module_subscriptions_registered
 
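The propagation mechanics can be shown without DSPy or Dry::Configurable. A dependency-free sketch (hypothetical `Mod` class, illustration only): an explicitly configured child keeps its setting, and recursion reaches grandchildren because each child's `configure` re-runs the propagation.

```ruby
# Minimal sketch of recursive LM propagation: configuring a parent fills
# in the LM on every descendant that has no explicit LM of its own.
class Mod
  attr_accessor :lm
  attr_reader :children

  def initialize(children = [])
    @children = children
  end

  def configure(lm)
    @lm = lm
    propagate(lm)
    self
  end

  private

  # Children with an explicit LM keep it; others inherit and recurse further.
  def propagate(lm)
    children.each do |child|
      next if child.lm      # explicit config wins
      child.configure(lm)   # configure recurses into grandchildren
    end
  end
end

leaf   = Mod.new
pinned = Mod.new
pinned.lm = :expensive_lm
agent  = Mod.new([leaf, pinned])
agent.configure(:cheap_lm)
leaf.lm    # => :cheap_lm
pinned.lm  # => :expensive_lm
```

This is the same "configure then override" contract documented above: a blanket `configure` on the agent, then `configure_predictor` for the one child that needs a different model.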
@@ -26,29 +26,33 @@ rescue LoadError
   end
 end
 
-class ObservationType < T::Enum
-  enums do
-    Generation = new('generation')
-    Agent = new('agent')
-    Tool = new('tool')
-    Chain = new('chain')
-    Retriever = new('retriever')
-    Embedding = new('embedding')
-    Evaluator = new('evaluator')
-    Span = new('span')
-    Event = new('event')
-  end
+# Guard against double-loading with Zeitwerk/Rails autoloader
+# See: https://github.com/vicentereig/dspy.rb/issues/190
+unless defined?(DSPy::ObservationType)
+  class ObservationType < T::Enum
+    enums do
+      Generation = new('generation')
+      Agent = new('agent')
+      Tool = new('tool')
+      Chain = new('chain')
+      Retriever = new('retriever')
+      Embedding = new('embedding')
+      Evaluator = new('evaluator')
+      Span = new('span')
+      Event = new('event')
+    end
 
-  def self.for_module_class(_module_class)
-    Span
-  end
+    def self.for_module_class(_module_class)
+      Span
+    end
 
-  def langfuse_attribute
-    ['langfuse.observation.type', serialize]
-  end
+    def langfuse_attribute
+      ['langfuse.observation.type', serialize]
+    end
 
-  def langfuse_attributes
-    { 'langfuse.observation.type' => serialize }
+    def langfuse_attributes
+      { 'langfuse.observation.type' => serialize }
+    end
   end
 end
 end
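The `unless defined?` guard makes the file idempotent: `T::Enum` raises if its `enums` block runs twice, which can happen when an autoloader re-evaluates the file. The idiom itself, demonstrated with a plain class (no Sorbet):

```ruby
# Re-evaluating a file that defines a class is harmless for plain classes,
# but some DSLs (like T::Enum's `enums do ... end`) raise on a second run.
# Wrapping the definition in `unless defined?` makes re-evaluation a no-op.
2.times do
  unless defined?(ObservationKind)
    class ObservationKind
      KINDS = %w[generation agent tool span].freeze
    end
  end
end
```

On the second pass `defined?(ObservationKind)` is truthy, so the class body never runs again.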
data/lib/dspy/re_act.rb CHANGED
@@ -9,23 +9,28 @@ require 'json'
 require_relative 'mixins/struct_builder'
 
 module DSPy
+  # Type alias for tool input parameters - provides semantic meaning in schemas
+  ToolInput = T.type_alias { T.nilable(T::Hash[String, T.untyped]) }
+
   # Define a simple struct for history entries with proper type annotations
   class HistoryEntry < T::Struct
     const :step, Integer
     prop :thought, T.nilable(String)
     prop :action, T.nilable(String)
-    prop :action_input, T.nilable(T.any(String, Numeric, T::Hash[T.untyped, T.untyped], T::Array[T.untyped]))
+    prop :tool_input, ToolInput
     prop :observation, T.untyped
 
     # Custom serialization to ensure compatibility with the rest of the code
+    # Note: We don't use .compact here to ensure tool_input is always present as a key,
+    # even when nil, for consistent history entry structure
    def to_h
       {
         step: step,
         thought: thought,
         action: action,
-        action_input: action_input,
+        tool_input: tool_input,
         observation: observation
-      }.compact
+      }
     end
   end
   # Base class for ReAct thought generation - will be customized per input type
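Dropping `.compact` in `HistoryEntry#to_h` is a deliberate schema-stability choice: every serialized entry now has the same keys, with `tool_input` explicitly `nil` for finish steps. A plain-Ruby illustration of the difference (hypothetical `Entry` struct, not the Sorbet one):

```ruby
# With .compact, nil fields vanish and entries have varying key sets;
# without it, tool_input is always present (possibly nil).
Entry = Struct.new(:step, :action, :tool_input, keyword_init: true)

def to_h_compacted(e)
  { step: e.step, action: e.action, tool_input: e.tool_input }.compact
end

def to_h_stable(e)
  { step: e.step, action: e.action, tool_input: e.tool_input }
end

finish = Entry.new(step: 2, action: 'finish', tool_input: nil)
to_h_compacted(finish).key?(:tool_input)  # => false
to_h_stable(finish).key?(:tool_input)     # => true
```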
@@ -37,8 +42,10 @@ module DSPy
       description: "Reasoning about what to do next, considering the history and observations."
     const :action, String,
       description: "The action to take. MUST be one of the tool names listed in `available_tools` input, or the literal string \"finish\" to provide the final answer."
-    const :action_input, T.untyped,
-      description: "Input for the chosen action. If action is a tool name, this MUST be a JSON object matching the tool's schema. If action is \"finish\", this field MUST contain the final result based on processing the input data. This result MUST be directly taken from the relevant Observation in the history if available."
+    const :tool_input, ToolInput,
+      description: "Input for the chosen tool action. Required when action is a tool name. MUST be a JSON object matching the tool's parameter schema. Set to null when action is \"finish\"."
+    const :final_answer, T.nilable(String),
+      description: "The final answer to return. Required when action is \"finish\". Must match the expected output type. Set to null when action is a tool name."
   end
 end
 
@@ -72,10 +79,12 @@ module DSPy
 class TypeMismatchError < StandardError; end
 
 # AvailableTool struct for better type safety in ReAct agents
+# Schema is stored as a pre-serialized string (JSON or BAML) to avoid
+# T.untyped issues during schema format conversion
 class AvailableTool < T::Struct
   const :name, String
   const :description, String
-  const :schema, T::Hash[Symbol, T.untyped]
+  const :schema, String
 end
 
 FINISH_ACTION = "finish"
@@ -211,7 +220,7 @@ module DSPy
   step: entry.step,
   thought: entry.thought,
   action: entry.action,
-  action_input: serialize_for_llm(entry.action_input),
+  tool_input: serialize_for_llm(entry.tool_input),
   observation: serialize_for_llm(entry.observation)
 }.compact
 end
@@ -244,22 +253,26 @@ module DSPy
 def create_action_enum_class
   tool_names = @tools.keys
   all_actions = tool_names + [FINISH_ACTION]
-
+
   # Create a dynamic enum class using proper T::Enum pattern
   enum_class = Class.new(T::Enum)
-
+
+  # Give the anonymous class a proper name for BAML schema rendering
+  # This overrides the default behavior that returns #<Class:0x...>
+  enum_class.define_singleton_method(:name) { 'ActionEnum' }
+
   # Build the enums block code dynamically
   enum_definitions = all_actions.map do |action_name|
     const_name = action_name.upcase.gsub(/[^A-Z0-9_]/, '_')
     "#{const_name} = new(#{action_name.inspect})"
   end.join("\n    ")
-
+
   enum_class.class_eval <<~RUBY
     enums do
       #{enum_definitions}
     end
   RUBY
-
+
   enum_class
 end
 
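Why override `.name`? `Class.new` produces an anonymous class whose `name` is `nil`, so any renderer that embeds the class name prints nothing useful. Defining a singleton `name` is the trick used above; a standalone demonstration:

```ruby
# An anonymous class has no name; giving its singleton a `name` method
# provides a stable identifier for schema rendering.
enum_class = Class.new
enum_class.name                                             # => nil
enum_class.define_singleton_method(:name) { 'ActionEnum' }
enum_class.name                                             # => "ActionEnum"
```

Note this only affects `.name`; assigning the class to a constant is the usual way to also fix `to_s` and `inspect`.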
@@ -272,6 +285,11 @@ module DSPy
   else
     String
   end
+
+  # Get the output field type for the final_answer field
+  output_field_name = signature_class.output_struct_class.props.keys.first
+  output_field_type = signature_class.output_struct_class.props[output_field_name][:type_object]
+
   # Create new class that inherits from DSPy::Signature
   Class.new(DSPy::Signature) do
     # Set description
@@ -287,14 +305,16 @@ module DSPy
       description: "Array of available tools with their JSON schemas."
   end
 
-  # Define output fields (same as ThoughtBase)
+  # Define output fields with separate tool_input and final_answer
   output do
     const :thought, String,
       description: "Reasoning about what to do next, considering the history and observations."
     const :action, action_enum_class,
       description: "The action to take. MUST be one of the tool names listed in `available_tools` input, or the literal string \"finish\" to provide the final answer."
-    const :action_input, T.untyped,
-      description: "Input for the chosen action. If action is a tool name, this MUST be a JSON object matching the tool's schema. If action is \"finish\", this field MUST contain the final result based on processing the input data."
+    const :tool_input, ToolInput,
+      description: "Input for the chosen tool action. Required when action is a tool name. MUST be a JSON object matching the tool's parameter schema. Set to null when action is \"finish\"."
+    const :final_answer, T.nilable(output_field_type),
+      description: "The final answer to return. Required when action is \"finish\". Must match the expected output type. Set to null when action is a tool name."
   end
 end
 end
@@ -337,11 +357,10 @@ module DSPy
 def execute_react_reasoning_loop(input_struct)
   history = T.let([], T::Array[HistoryEntry])
   available_tools_desc = @tools.map { |name, tool|
-    schema = JSON.parse(tool.schema)
     AvailableTool.new(
       name: name,
       description: tool.description,
-      schema: schema.transform_keys(&:to_sym)
+      schema: tool.schema
     )
   }
   final_answer = T.let(nil, T.untyped)
@@ -399,7 +418,7 @@ module DSPy
   # Process thought result
   if finish_action?(thought_obj.action)
     final_answer = handle_finish_action(
-      thought_obj.action_input, last_observation, iteration,
+      thought_obj.final_answer, last_observation, iteration,
       thought_obj.thought, thought_obj.action, history
     )
     return { should_finish: true, final_answer: final_answer }
@@ -407,19 +426,19 @@ module DSPy
 
   # Execute tool action
   observation = execute_tool_with_instrumentation(
-    thought_obj.action, thought_obj.action_input, iteration
+    thought_obj.action, thought_obj.tool_input, iteration
   )
 
   # Convert action enum to string for processing and storage
   action_str = thought_obj.action.respond_to?(:serialize) ? thought_obj.action.serialize : thought_obj.action.to_s
-
+
   # Track tools used
   tools_used << action_str.downcase if valid_tool?(thought_obj.action)
 
   # Add to history
   history << create_history_entry(
     iteration, thought_obj.thought, action_str,
-    thought_obj.action_input, observation
+    thought_obj.tool_input, observation
   )
 
   # Process observation and decide next step
@@ -433,7 +452,7 @@ module DSPy
 
   emit_iteration_complete_event(
     iteration, thought_obj.thought, action_str,
-    thought_obj.action_input, observation, tools_used
+    thought_obj.tool_input, observation, tools_used
   )
 
   {
@@ -613,8 +632,8 @@ module DSPy
   !!@tools[action_str.downcase]
 end
 
-sig { params(action: T.nilable(T.any(String, T::Enum)), action_input: T.untyped, iteration: Integer).returns(T.untyped) }
-def execute_tool_with_instrumentation(action, action_input, iteration)
+sig { params(action: T.nilable(T.any(String, T::Enum)), tool_input: ToolInput, iteration: Integer).returns(T.untyped) }
+def execute_tool_with_instrumentation(action, tool_input, iteration)
   raise InvalidActionError, "No action provided" unless action
 
   action_str = action.respond_to?(:serialize) ? action.serialize : action.to_s
@@ -630,19 +649,19 @@ module DSPy
   'dspy.module' => 'ReAct',
   'react.iteration' => iteration,
   'tool.name' => action_str.downcase,
-  'tool.input' => action_input
+  'tool.input' => tool_input
 ) do
-  execute_action(action_str, action_input)
+  execute_action(action_str, tool_input)
 end
 end
 
-sig { params(step: Integer, thought: String, action: String, action_input: T.untyped, observation: T.untyped).returns(HistoryEntry) }
-def create_history_entry(step, thought, action, action_input, observation)
+sig { params(step: Integer, thought: String, action: String, tool_input: ToolInput, observation: T.untyped).returns(HistoryEntry) }
+def create_history_entry(step, thought, action, tool_input, observation)
   HistoryEntry.new(
     step: step,
     thought: thought,
     action: action,
-    action_input: action_input,
+    tool_input: tool_input,
     observation: observation
   )
 end
@@ -684,17 +703,17 @@ module DSPy
 end
 handle_finish_action(forced_answer, history.last&.observation, iteration + 1, final_thought.thought, FINISH_ACTION, history)
 else
-  handle_finish_action(final_thought.action_input, history.last&.observation, iteration + 1, final_thought.thought, final_thought.action, history)
+  handle_finish_action(final_thought.final_answer, history.last&.observation, iteration + 1, final_thought.thought, final_thought.action, history)
 end
 end
 
-sig { params(iteration: Integer, thought: String, action: String, action_input: T.untyped, observation: T.untyped, tools_used: T::Array[String]).void }
-def emit_iteration_complete_event(iteration, thought, action, action_input, observation, tools_used)
+sig { params(iteration: Integer, thought: String, action: String, tool_input: ToolInput, observation: T.untyped, tools_used: T::Array[String]).void }
+def emit_iteration_complete_event(iteration, thought, action, tool_input, observation, tools_used)
   DSPy.event('react.iteration_complete', {
     'react.iteration' => iteration,
     'react.thought' => thought,
     'react.action' => action,
-    'react.action_input' => action_input,
+    'react.tool_input' => tool_input,
     'react.observation' => observation,
     'react.tools_used' => tools_used.uniq
   })
@@ -820,8 +839,8 @@ module DSPy
 end
 
 # Tool execution method
-sig { params(action: String, action_input: T.untyped).returns(T.untyped) }
-def execute_action(action, action_input)
+sig { params(action: String, tool_input: ToolInput).returns(T.untyped) }
+def execute_action(action, tool_input)
   tool_name = action.downcase
   tool = @tools[tool_name]
 
@@ -829,10 +848,10 @@ module DSPy
   raise InvalidActionError, "Tool '#{action}' not found" unless tool
 
   # Execute tool - let errors propagate
-  if action_input.nil? || (action_input.is_a?(String) && action_input.strip.empty?)
+  if tool_input.nil? || tool_input.empty?
     tool.dynamic_call({})
   else
-    tool.dynamic_call(action_input)
+    tool.dynamic_call(tool_input)
  end
 end
 
@@ -872,7 +891,7 @@ module DSPy
   step: 1,
   thought: "I need to think about this question...",
   action: "some_tool",
-  action_input: "input for tool",
+  tool_input: { "param" => "value" },
   observation: "result from tool"
 }
 ]
@@ -881,9 +900,9 @@ module DSPy
 example
 end
 
-sig { params(action_input: T.untyped, last_observation: T.untyped, step: Integer, thought: String, action: T.any(String, T::Enum), history: T::Array[HistoryEntry]).returns(T.untyped) }
-def handle_finish_action(action_input, last_observation, step, thought, action, history)
-  final_answer = action_input
+sig { params(final_answer_value: T.untyped, last_observation: T.untyped, step: Integer, thought: String, action: T.any(String, T::Enum), history: T::Array[HistoryEntry]).returns(T.untyped) }
+def handle_finish_action(final_answer_value, last_observation, step, thought, action, history)
+  final_answer = final_answer_value
 
   # If final_answer is empty/nil but we have a last observation, use it
   if (final_answer.nil? || (final_answer.is_a?(String) && final_answer.empty?)) && last_observation
@@ -893,12 +912,12 @@ module DSPy
   # Convert action enum to string for storage in history
   action_str = action.respond_to?(:serialize) ? action.serialize : action.to_s
 
-  # Always add the finish action to history
+  # Always add the finish action to history (tool_input is nil for finish actions)
   history << HistoryEntry.new(
     step: step,
     thought: thought,
     action: action_str,
-    action_input: final_answer,
+    tool_input: nil,
     observation: nil # No observation for finish action
   )
 
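`handle_finish_action` keeps a fallback: when the model declares finish but returns an empty final answer, the last tool observation is promoted to the answer. The fallback rule in isolation (hypothetical helper name, illustration only):

```ruby
# Finish-action fallback: if the model returned a nil/empty final answer but
# a tool observation exists, use the observation as the answer instead.
def resolve_final_answer(final_answer, last_observation)
  blank = final_answer.nil? || (final_answer.is_a?(String) && final_answer.empty?)
  blank && last_observation ? last_observation : final_answer
end

resolve_final_answer('', '42 degrees')   # => "42 degrees"
resolve_final_answer('done', 'ignored')  # => "done"
```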
@@ -0,0 +1,24 @@
+# frozen_string_literal: true
+
+require 'dspy/lm/errors'
+
+module DSPy
+  module RubyLLM
+    class Guardrails
+      SUPPORTED_RUBY_LLM_VERSIONS = "~> 1.3".freeze
+
+      def self.ensure_ruby_llm_installed!
+        require 'ruby_llm'
+
+        spec = Gem.loaded_specs["ruby_llm"]
+        unless spec && Gem::Requirement.new(SUPPORTED_RUBY_LLM_VERSIONS).satisfied_by?(spec.version)
+          msg = <<~MSG
+            DSPy requires the `ruby_llm` gem #{SUPPORTED_RUBY_LLM_VERSIONS}.
+            Please install or upgrade it with `bundle add ruby_llm --version "#{SUPPORTED_RUBY_LLM_VERSIONS}"`.
+          MSG
+          raise DSPy::LM::UnsupportedVersionError, msg
+        end
+      end
+    end
+  end
+end
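The guardrail leans on RubyGems' requirement matching; `Gem::Requirement` can be exercised directly to see exactly what `"~> 1.3"` admits:

```ruby
require 'rubygems' # normally loaded by default

# "~> 1.3" (the pessimistic operator) allows any version >= 1.3 and < 2.0.
requirement = Gem::Requirement.new('~> 1.3')
requirement.satisfied_by?(Gem::Version.new('1.9.2'))  # => true
requirement.satisfied_by?(Gem::Version.new('2.0.0'))  # => false
```

`Gem.loaded_specs["ruby_llm"].version` (as used above) yields a `Gem::Version`, so the same `satisfied_by?` call covers both "not installed" (nil spec) and "wrong version".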
@@ -0,0 +1,391 @@
+# frozen_string_literal: true
+
+require 'uri'
+require 'ruby_llm'
+require 'dspy/lm/adapter'
+require 'dspy/lm/vision_models'
+
+require 'dspy/ruby_llm/guardrails'
+DSPy::RubyLLM::Guardrails.ensure_ruby_llm_installed!
+
+module DSPy
+  module RubyLLM
+    module LM
+      module Adapters
+        class RubyLLMAdapter < DSPy::LM::Adapter
+          attr_reader :provider
+
+          # Options that require a scoped context instead of global RubyLLM config
+          SCOPED_OPTIONS = %i[base_url timeout max_retries].freeze
+
+          def initialize(model:, api_key: nil, **options)
+            @api_key = api_key
+            @options = options
+            @structured_outputs_enabled = options.fetch(:structured_outputs, true)
+            @provider_override = options[:provider] # Optional provider override
+
+            # Detect provider eagerly (matches OpenAI/Anthropic/Gemini adapters)
+            @provider = detect_provider(model)
+
+            # Determine if we should use global RubyLLM config or create scoped context
+            @use_global_config = should_use_global_config?(api_key, options)
+
+            super(model: model, api_key: api_key)
+
+            # Only validate API key if not using global config
+            unless @use_global_config
+              validate_api_key_for_provider!(api_key)
+            end
+
+            # Validate base_url if provided
+            validate_base_url!(@options[:base_url])
+          end
+
+          # Returns the context - either scoped or global
+          def context
+            @context ||= @use_global_config ? nil : create_context(@api_key)
+          end
+
+          def chat(messages:, signature: nil, &block)
+            normalized_messages = normalize_messages(messages)
+
+            # Validate vision support if images are present
+            if contains_images?(normalized_messages)
+              validate_vision_support!
+              normalized_messages = format_multimodal_messages(normalized_messages)
+            end
+
+            chat_instance = create_chat_instance
+
+            if block_given?
+              stream_response(chat_instance, normalized_messages, signature, &block)
+            else
+              standard_response(chat_instance, normalized_messages, signature)
+            end
+          rescue ::RubyLLM::UnauthorizedError => e
+            raise DSPy::LM::MissingAPIKeyError.new(provider)
+          rescue ::RubyLLM::RateLimitError => e
+            raise DSPy::LM::AdapterError, "Rate limit exceeded for #{provider}: #{e.message}"
+          rescue ::RubyLLM::ModelNotFoundError => e
+            raise DSPy::LM::AdapterError, "Model not found: #{e.message}. Check available models with RubyLLM.models.all"
+          rescue ::RubyLLM::BadRequestError => e
+            raise DSPy::LM::AdapterError, "Invalid request to #{provider}: #{e.message}"
+          rescue ::RubyLLM::ConfigurationError => e
+            raise DSPy::LM::ConfigurationError, "RubyLLM configuration error: #{e.message}"
+          rescue ::RubyLLM::Error => e
+            raise DSPy::LM::AdapterError, "RubyLLM error (#{provider}): #{e.message}"
+          end
+
+          private
+
+          # Detect provider from RubyLLM's model registry or use explicit override
+          def detect_provider(model_id)
+            return @provider_override.to_s if @provider_override
+
+            model_info = ::RubyLLM.models.find(model_id)
+            model_info.provider.to_s
+          rescue ::RubyLLM::ModelNotFoundError
+            raise DSPy::LM::ConfigurationError,
+                  "Model '#{model_id}' not found in RubyLLM registry. " \
+                  "Use provider: option to specify explicitly, or run RubyLLM.models.refresh!"
+          end
+
+          # Check if we should use RubyLLM's global configuration
+          # Uses global config when no api_key and no provider-specific options provided
+          def should_use_global_config?(api_key, options)
+            api_key.nil? && (options.keys & SCOPED_OPTIONS).empty?
+          end
+
+          # Validate API key for providers that require it
+          def validate_api_key_for_provider!(api_key)
+            # Ollama and some local providers don't require API keys
+            return if provider_allows_no_api_key?
+
+            validate_api_key!(api_key, provider)
+          end
+
+          def provider_allows_no_api_key?
+            %w[ollama gpustack].include?(provider)
+          end
+
+          def validate_base_url!(url)
+            return if url.nil?
+
+            uri = URI.parse(url)
+            unless %w[http https].include?(uri.scheme)
+              raise DSPy::LM::ConfigurationError, "base_url must use http or https scheme"
+            end
+          rescue URI::InvalidURIError
+            raise DSPy::LM::ConfigurationError, "Invalid base_url format: #{url}"
+          end
+
+          def create_context(api_key)
+            ::RubyLLM.context do |config|
+              configure_provider(config, api_key)
+              configure_connection(config)
+            end
+          end
+
+          # Configure RubyLLM using convention: {provider}_api_key and {provider}_api_base
+          # For providers with non-standard auth (bedrock, vertexai), configure RubyLLM globally
+          def configure_provider(config, api_key)
+            key_method = "#{provider}_api_key="
+            config.send(key_method, api_key) if api_key && config.respond_to?(key_method)
+
+            base_method = "#{provider}_api_base="
+            config.send(base_method, @options[:base_url]) if @options[:base_url] && config.respond_to?(base_method)
+          end
+
+          def configure_connection(config)
+            config.request_timeout = @options[:timeout] if @options[:timeout]
+            config.max_retries = @options[:max_retries] if @options[:max_retries]
+          end
+
+          def create_chat_instance
+            chat_options = { model: model }
+
+            # If provider is explicitly overridden, pass it to RubyLLM
+            if @provider_override
+              chat_options[:provider] = @provider_override.to_sym
+              chat_options[:assume_model_exists] = true
+            end
+
+            # Use global RubyLLM config or scoped context
+            if @use_global_config
+              ::RubyLLM.chat(**chat_options)
+            else
+              context.chat(**chat_options)
+            end
+          end
+
+          def standard_response(chat_instance, messages, signature)
+            chat_instance = prepare_chat_instance(chat_instance, messages, signature)
+            content, attachments = prepare_message_content(messages)
+            return build_empty_response unless content
+
+            response = send_message(chat_instance, content, attachments)
+            map_response(response)
+          end
+
+          def stream_response(chat_instance, messages, signature, &block)
+            chat_instance = prepare_chat_instance(chat_instance, messages, signature)
+            content, attachments = prepare_message_content(messages)
+            return build_empty_response unless content
+
+            response = send_message(chat_instance, content, attachments, &block)
+            map_response(response)
+          end
+
+          # Common setup: apply system instructions, build conversation history, and optional schema
+          def prepare_chat_instance(chat_instance, messages, signature)
+            # First, handle system messages via with_instructions for proper system prompt handling
+            system_message = messages.find { |m| m[:role] == 'system' }
+            chat_instance = chat_instance.with_instructions(system_message[:content]) if system_message
+
+            # Build conversation history by adding all non-system messages except the last user message
+            # The last user message will be passed to ask() to get the response
+            messages_to_add = messages.reject { |m| m[:role] == 'system' }
+
+            # Find the index of the last user message
+            last_user_index = messages_to_add.rindex { |m| m[:role] == 'user' }
+
+            if last_user_index && last_user_index > 0
+              # Add all messages before the last user message to build history
+              messages_to_add[0...last_user_index].each do |msg|
+                content, attachments = extract_content_and_attachments(msg)
+                next unless content
+
+                # Add message with appropriate role
+                if attachments.any?
+                  chat_instance.add_message(role: msg[:role].to_sym, content: content, attachments: attachments)
+                else
+                  chat_instance.add_message(role: msg[:role].to_sym, content: content)
+                end
+              end
+            end
+
+            if signature && @structured_outputs_enabled
+              schema = build_json_schema(signature)
+              chat_instance = chat_instance.with_schema(schema) if schema
+            end
+
+            chat_instance
+          end
+
+          # Extract content from last user message
+          # RubyLLM's Chat API builds conversation history via add_message() for previous turns,
+          # and the last user message is passed to ask() to get the response.
+          def prepare_message_content(messages)
219
+ last_user_message = messages.reverse.find { |m| m[:role] == 'user' }
220
+ return [nil, []] unless last_user_message
221
+
222
+ extract_content_and_attachments(last_user_message)
223
+ end
224
+
225
+ # Send message with optional streaming block
226
+ def send_message(chat_instance, content, attachments, &block)
227
+ kwargs = attachments.any? ? { with: attachments } : {}
228
+
229
+ if block_given?
230
+ chat_instance.ask(content, **kwargs) do |chunk|
231
+ block.call(chunk.content) if chunk.content
232
+ end
233
+ else
234
+ chat_instance.ask(content, **kwargs)
235
+ end
236
+ end
237
+
238
+ def extract_content_and_attachments(message)
239
+ content = message[:content]
240
+ attachments = []
241
+
242
+ if content.is_a?(Array)
243
+ text_parts = []
244
+ content.each do |item|
245
+ case item[:type]
246
+ when 'text'
247
+ text_parts << item[:text]
248
+ when 'image'
249
+ # Extract image URL or path
250
+ image = item[:image]
251
+ if image.respond_to?(:url)
252
+ attachments << image.url
253
+ elsif image.respond_to?(:path)
254
+ attachments << image.path
255
+ elsif item[:image_url]
256
+ attachments << item[:image_url][:url]
257
+ end
258
+ end
259
+ end
260
+ content = text_parts.join("\n")
261
+ end
262
+
263
+ [content.to_s, attachments]
264
+ end
265
+
266
+ def map_response(ruby_llm_response)
267
+ DSPy::LM::Response.new(
268
+ content: ruby_llm_response.content.to_s,
269
+ usage: build_usage(ruby_llm_response),
270
+ metadata: build_metadata(ruby_llm_response)
271
+ )
272
+ end
273
+
274
+ def build_usage(response)
275
+ input_tokens = response.input_tokens || 0
276
+ output_tokens = response.output_tokens || 0
277
+
278
+ DSPy::LM::Usage.new(
279
+ input_tokens: input_tokens,
280
+ output_tokens: output_tokens,
281
+ total_tokens: input_tokens + output_tokens
282
+ )
283
+ end
284
+
285
+ def build_metadata(response)
286
+ DSPy::LM::ResponseMetadataFactory.create('ruby_llm', {
287
+ model: response.model_id || model,
288
+ underlying_provider: provider
289
+ })
290
+ end
291
+
292
+ def build_empty_response
293
+ DSPy::LM::Response.new(
294
+ content: '',
295
+ usage: DSPy::LM::Usage.new(input_tokens: 0, output_tokens: 0, total_tokens: 0),
296
+ metadata: DSPy::LM::ResponseMetadataFactory.create('ruby_llm', {
297
+ model: model,
298
+ underlying_provider: provider
299
+ })
300
+ )
301
+ end
302
+
303
+ def build_json_schema(signature)
304
+ return nil unless signature.respond_to?(:json_schema)
305
+
306
+ schema = signature.json_schema
307
+ normalize_schema(schema)
308
+ end
309
+
310
+ def normalize_schema(schema)
311
+ return schema unless schema.is_a?(Hash)
312
+
313
+ @normalized_schema_cache ||= {}
314
+ cache_key = schema.hash
315
+
316
+ @normalized_schema_cache[cache_key] ||= begin
317
+ duped = deep_dup(schema)
318
+ add_additional_properties_false(duped)
319
+ duped.freeze
320
+ end
321
+ end
322
+
323
+ def add_additional_properties_false(schema)
324
+ return unless schema.is_a?(Hash)
325
+
326
+ if schema[:type] == 'object' || schema['type'] == 'object'
327
+ schema[:additionalProperties] = false
328
+ schema['additionalProperties'] = false
329
+ end
330
+
331
+ # Recursively process nested schemas
332
+ schema.each_value { |v| add_additional_properties_false(v) if v.is_a?(Hash) }
333
+
334
+ # Handle arrays with items
335
+ if schema[:items]
336
+ add_additional_properties_false(schema[:items])
337
+ elsif schema['items']
338
+ add_additional_properties_false(schema['items'])
339
+ end
340
+ end
341
+
342
+ def deep_dup(obj)
343
+ case obj
344
+ when Hash
345
+ obj.transform_values { |v| deep_dup(v) }
346
+ when Array
347
+ obj.map { |v| deep_dup(v) }
348
+ else
349
+ obj
350
+ end
351
+ end
352
+
353
+ def validate_vision_support!
354
+ # RubyLLM handles vision validation internally, but we can add
355
+ # additional DSPy-specific validation here if needed
356
+ DSPy::LM::VisionModels.validate_vision_support!(provider, model)
357
+ rescue DSPy::LM::IncompatibleImageFeatureError
358
+ # If DSPy doesn't know about the model, let RubyLLM handle it
359
+ # RubyLLM has its own model registry with capability detection
360
+ end
361
+
362
+ def format_multimodal_messages(messages)
363
+ messages.map do |msg|
364
+ if msg[:content].is_a?(Array)
365
+ formatted_content = msg[:content].map do |item|
366
+ case item[:type]
367
+ when 'text'
368
+ { type: 'text', text: item[:text] }
369
+ when 'image'
370
+ # Validate and format image for provider
371
+ image = item[:image]
372
+ if image.respond_to?(:validate_for_provider!)
373
+ image.validate_for_provider!(provider)
374
+ end
375
+ item
376
+ else
377
+ item
378
+ end
379
+ end
380
+
381
+ { role: msg[:role], content: formatted_content }
382
+ else
383
+ msg
384
+ end
385
+ end
386
+ end
387
+ end
388
+ end
389
+ end
390
+ end
391
+ end
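The adapter's schema normalization above can be exercised in isolation. The sketch below is a trimmed, standalone copy of the recursion (not the gem's public API; the `items` handling, string-key variant, and caching are omitted) showing how every nested object schema ends up with `additionalProperties: false`, which OpenAI-style structured outputs require:

```ruby
# Trimmed, standalone sketch of the adapter's normalization step:
# recursively mark every object schema with additionalProperties: false.
def add_additional_properties_false(schema)
  return unless schema.is_a?(Hash)

  if schema[:type] == 'object' || schema['type'] == 'object'
    schema[:additionalProperties] = false
  end

  # Recurse into nested hashes (properties, definitions, etc.)
  schema.each_value { |v| add_additional_properties_false(v) if v.is_a?(Hash) }
end

schema = {
  type: 'object',
  properties: {
    user: { type: 'object', properties: { name: { type: 'string' } } }
  }
}
add_additional_properties_false(schema)
schema[:additionalProperties]                     # => false
schema[:properties][:user][:additionalProperties] # => false
```

Note the flag is set on a hash before that hash's own values are iterated, so the mutation-during-iteration trap is avoided, mirroring the order used in the adapter.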
@@ -0,0 +1,7 @@
+ # frozen_string_literal: true
+
+ module DSPy
+ module RubyLLM
+ VERSION = '0.1.0'
+ end
+ end
@@ -0,0 +1,8 @@
+ # frozen_string_literal: true
+
+ require 'dspy/ruby_llm/version'
+
+ require 'dspy/ruby_llm/guardrails'
+ DSPy::RubyLLM::Guardrails.ensure_ruby_llm_installed!
+
+ require 'dspy/ruby_llm/lm/adapters/ruby_llm_adapter'
@@ -113,16 +113,17 @@ module DSPy
  elsif type.is_a?(T::Types::TypedHash)
  # Handle hashes as objects with additionalProperties
  # TypedHash has keys and values methods to access its key and value types
- key_schema = self.type_to_json_schema(type.keys, visited)
+ # Note: propertyNames is NOT supported by OpenAI structured outputs, so we omit it
  value_schema = self.type_to_json_schema(type.values, visited)
-
- # Create a more descriptive schema for nested structures
+ key_type_desc = type.keys.respond_to?(:raw_type) ? type.keys.raw_type.to_s : "string"
+ value_type_desc = value_schema[:description] || value_schema[:type].to_s
+
+ # Create a schema compatible with OpenAI structured outputs
  {
  type: "object",
- propertyNames: key_schema, # Describe key constraints
  additionalProperties: value_schema,
- # Add a more explicit description of the expected structure
- description: "A mapping where keys are #{key_schema[:type]}s and values are #{value_schema[:description] || value_schema[:type]}s"
+ # Description explains the expected structure without using propertyNames
+ description: "A mapping where keys are #{key_type_desc}s and values are #{value_type_desc}s"
  }
  elsif type.is_a?(T::Types::FixedHash)
  # Handle fixed hashes (from type aliases like { "key" => Type })
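For reference, this is roughly the shape the revised TypedHash branch emits for a type like `T::Hash[String, Integer]` (an illustrative hash literal, not a call into DSPy): key constraints live only in the `description`, since OpenAI structured outputs reject the `propertyNames` keyword.

```ruby
# Illustrative result of the TypedHash branch for T::Hash[String, Integer].
value_schema = { type: 'integer' }
schema = {
  type: 'object',
  additionalProperties: value_schema,
  description: 'A mapping where keys are Strings and values are integers'
}

schema.key?(:propertyNames) # => false
```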
@@ -66,6 +66,8 @@ module DSPy
  tool :get_issue, description: "Get details of a specific GitHub issue"
  tool :get_pr, description: "Get details of a specific GitHub pull request"
  tool :api_request, description: "Make an arbitrary GitHub API request"
+ tool :traffic_views, description: "Get repository traffic views (last 14 days by default)"
+ tool :traffic_clones, description: "Get repository traffic clones (last 14 days by default)"
 
  sig { void }
  def initialize
@@ -216,6 +218,40 @@ module DSPy
  "Error making API request: #{e.message}"
  end
 
+ sig { params(repo: String, per: T.nilable(String)).returns(String) }
+ def traffic_views(repo:, per: nil)
+ endpoint = "repos/#{repo}/traffic/views"
+ cmd = build_gh_command(['api', shell_escape(endpoint)])
+ cmd << ['-f', "per=#{shell_escape(per)}"] if per
+
+ result = execute_command(cmd.flatten.join(' '))
+
+ if result[:success]
+ parse_traffic(result[:output], label: 'Views')
+ else
+ "Failed to fetch traffic views: #{result[:error]}"
+ end
+ rescue => e
+ "Error fetching traffic views: #{e.message}"
+ end
+
+ sig { params(repo: String, per: T.nilable(String)).returns(String) }
+ def traffic_clones(repo:, per: nil)
+ endpoint = "repos/#{repo}/traffic/clones"
+ cmd = build_gh_command(['api', shell_escape(endpoint)])
+ cmd << ['-f', "per=#{shell_escape(per)}"] if per
+
+ result = execute_command(cmd.flatten.join(' '))
+
+ if result[:success]
+ parse_traffic(result[:output], label: 'Clones')
+ else
+ "Failed to fetch traffic clones: #{result[:error]}"
+ end
+ rescue => e
+ "Error fetching traffic clones: #{e.message}"
+ end
+
  private
 
  sig { params(args: T::Array[String]).returns(T::Array[String]) }
@@ -225,6 +261,7 @@ module DSPy
 
  sig { params(str: String).returns(String) }
  def shell_escape(str)
+ return '""' if str.nil?
  "\"#{str.gsub(/"/, '\\"')}\""
  end
 
@@ -240,6 +277,29 @@ module DSPy
  }
  end
 
+ sig { params(json_output: String, label: String).returns(String) }
+ def parse_traffic(json_output, label:)
+ data = JSON.parse(json_output)
+
+ total = data['count'] || 0
+ uniques = data['uniques'] || 0
+ series = data[label.downcase] || data['views'] || []
+
+ lines = []
+ lines << "#{label}: #{total} total (#{uniques} unique) over the last #{series.length} data points"
+
+ series.each do |point|
+ ts = point['timestamp'] || point['timestamp'.to_sym]
+ count = point['count'] || 0
+ uniq = point['uniques'] || 0
+ lines << " #{ts}: #{count} (#{uniq} unique)"
+ end
+
+ lines.join("\n")
+ rescue JSON::ParserError => e
+ "Failed to parse traffic data: #{e.message}"
+ end
+
  sig { params(json_output: String).returns(String) }
  def parse_issue_list(json_output)
  issues = JSON.parse(json_output)
@@ -327,4 +387,4 @@ module DSPy
  end
  end
  end
- end
+ end
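The `parse_traffic` helper added above condenses the JSON returned by `gh api repos/OWNER/REPO/traffic/views`. Here is a self-contained sketch of the same summarization logic, run against invented sample data (the counts and timestamps are made up for illustration; only the payload shape matches GitHub's traffic endpoint):

```ruby
require 'json'

# Invented sample of the GitHub traffic-views payload shape.
json_output = {
  'count' => 120, 'uniques' => 30,
  'views' => [
    { 'timestamp' => '2025-01-01T00:00:00Z', 'count' => 70, 'uniques' => 18 },
    { 'timestamp' => '2025-01-02T00:00:00Z', 'count' => 50, 'uniques' => 12 }
  ]
}.to_json

data = JSON.parse(json_output)
series = data['views'] || []

# One summary line, then one line per daily data point.
lines = ["Views: #{data['count']} total (#{data['uniques']} unique) over the last #{series.length} data points"]
series.each do |point|
  lines << "  #{point['timestamp']}: #{point['count']} (#{point['uniques']} unique)"
end

puts lines.join("\n")
```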
@@ -0,0 +1,39 @@
+ # typed: strict
+ # frozen_string_literal: true
+
+ require 'sorbet-runtime'
+
+ module DSPy
+ module Tools
+ # Represents a single parameter in a tool's schema
+ # Maps to JSON Schema property definitions used by LLM tool calling
+ class ToolParameterSchema < T::Struct
+ const :type, String
+ const :description, T.nilable(String), default: nil
+ const :enum, T.nilable(T::Array[String]), default: nil
+ end
+
+ # Represents the complete schema for a tool's parameters
+ # This is the "parameters" field in LLM tool definitions
+ class ToolSchema < T::Struct
+ const :type, String, default: 'object'
+ const :properties, T::Hash[Symbol, ToolParameterSchema], default: {}
+ const :required, T::Array[String], default: []
+
+ # Convert to hash format for JSON serialization
+ sig { returns(T::Hash[Symbol, T.untyped]) }
+ def to_h
+ {
+ type: type,
+ properties: properties.transform_values do |param|
+ h = { type: param.type }
+ h[:description] = param.description if param.description
+ h[:enum] = param.enum if param.enum
+ h
+ end,
+ required: required
+ }
+ end
+ end
+ end
+ end
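These structs serialize to the standard JSON-Schema `parameters` object used in LLM tool definitions. A plain-hash sketch (no sorbet-runtime; the `location`/`unit` parameter names are hypothetical) of the shape `ToolSchema#to_h` produces:

```ruby
require 'json'

# Hypothetical tool parameters, in the shape ToolSchema#to_h emits:
# optional :description and :enum keys appear only when set.
tool_parameters = {
  type: 'object',
  properties: {
    location: { type: 'string', description: 'City name' },
    unit:     { type: 'string', enum: %w[celsius fahrenheit] }
  },
  required: ['location']
}

puts JSON.pretty_generate(tool_parameters)
```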
data/lib/dspy/version.rb CHANGED
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module DSPy
- VERSION = "0.31.1"
+ VERSION = "0.33.0"
  end
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: dspy
  version: !ruby/object:Gem::Version
- version: 0.31.1
+ version: 0.33.0
  platform: ruby
  authors:
  - Vicente Reig Rincón de Arellano
@@ -99,14 +99,14 @@ dependencies:
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: '0.1'
+ version: '0.5'
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: '0.1'
+ version: '0.5'
  - !ruby/object:Gem::Dependency
  name: sorbet-toon
  requirement: !ruby/object:Gem::Requirement
@@ -210,6 +210,10 @@ files:
  - lib/dspy/reflection_lm.rb
  - lib/dspy/registry/registry_manager.rb
  - lib/dspy/registry/signature_registry.rb
+ - lib/dspy/ruby_llm.rb
+ - lib/dspy/ruby_llm/guardrails.rb
+ - lib/dspy/ruby_llm/lm/adapters/ruby_llm_adapter.rb
+ - lib/dspy/ruby_llm/version.rb
  - lib/dspy/schema.rb
  - lib/dspy/schema/sorbet_json_schema.rb
  - lib/dspy/schema/sorbet_toon_adapter.rb
@@ -230,6 +234,7 @@ files:
  - lib/dspy/tools/base.rb
  - lib/dspy/tools/github_cli_toolset.rb
  - lib/dspy/tools/memory_toolset.rb
+ - lib/dspy/tools/schema.rb
  - lib/dspy/tools/text_processing_toolset.rb
  - lib/dspy/tools/toolset.rb
  - lib/dspy/type_serializer.rb