dspy 0.1.0 → 0.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 6b4d0c0f8eba6601ce96a8acf5a167a8a7be9fec7f20c024495eee01b702cff1
- data.tar.gz: 15f4abd449e6e74b30d0ea47231cb238e9e40b51b048c31b3a74c2c2571d022b
+ metadata.gz: e0ab7e7c2a8d741b4d0080ad96ce991752855d0bdc61393f6f329df404a5c956
+ data.tar.gz: e3f0a0c42b4b66e6a6427e7ff963524abb0059ac0ad78d18fdb12684240e9459
  SHA512:
- metadata.gz: f6c87053b33dbfc27eb2386801cfff2ce6fe67a9d8e3518624be72f914a099e7f85c18e2a90f06634b7e4e442c190ca30a8188ca75d9a40ed4ad3cb1dc79de63
- data.tar.gz: ec5ab6691f7494449ce4bf2469654eebb85d5de4cc76bf8893d895e5d00f3c64621d0602c1c800782a9917da281626d5020e192af439f683948e0d92f638c0fa
+ metadata.gz: 3530f0bb5a8cbfa5ffe99a4b0fb3e7d52f0e3c205501b62904b05b4e35f907e824a97a94f91f432e968aaf1360658757eb4f9a7a189238628f2187ad32ab4060
+ data.tar.gz: c737f35b0a17cd8ddf98a99656a1acf04b15753c5f3130a3a20627c751a814ec489fbd7ef640834c58b520ee3f7a68657e1473289b3a60bc9cd68cd673d90655
data/README.md CHANGED
@@ -1,10 +1,381 @@
  # DSPy.rb
 
- A port of the DSPy library to Ruby.
+ A Ruby port of the [DSPy library](https://dspy.ai/), enabling a composable, pipeline-oriented approach to programming with Large Language Models (LLMs) in Ruby.
+
+ ## Current State
+
+ DSPy.rb provides a foundation for composable LLM programming with the following implemented features:
+
+ - **Signatures**: Define input/output schemas for LLM interactions using JSON schemas
+ - **Predict**: Basic LLM completion with structured inputs and outputs
+ - **Chain of Thought**: Enhanced reasoning through step-by-step thinking
+ - **ReAct**: Iterative reasoning and acting with tools
+ - **RAG (Retrieval-Augmented Generation)**: Responses enriched with retrieved context
+ - **Multi-stage Pipelines**: Compose multiple LLM calls in a structured workflow
+
+ The library currently supports:
+ - OpenAI and Anthropic via [Ruby LLM](https://github.com/crmne/ruby_llm)
+ - JSON schema validation with [dry-schema](https://dry-rb.org/gems/dry-schema/)
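Either provider is selected through the model id prefix passed to `DSPy::LM`. As a minimal sketch, the global config can point at Anthropic instead of OpenAI (the specific model name below is an illustrative assumption, not a requirement; any `anthropic/...` model available through Ruby LLM should work):

```ruby
DSPy.configure do |c|
  # An 'anthropic/<model>' id routes through Ruby LLM's Anthropic client;
  # the model name here is illustrative.
  c.lm = DSPy::LM.new('anthropic/claude-3-5-haiku-latest', api_key: ENV['ANTHROPIC_API_KEY'])
end
```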
 
  ## Installation
 
- ```bash
- gem install dspy
+ This gem isn't even out of the oven yet. I recommend installing it
+ straight from this repo while I build the first release.
+
+ ```ruby
+ gem 'dspy', github: 'vicentereig/dspy.rb'
+ ```
+
+ ## Usage Examples
+
+ ### Basic Prediction
+
+ ```ruby
+ # Define a signature for sentiment classification
+ class Classify < DSPy::Signature
+   description "Classify sentiment of a given sentence."
+
+   input do
+     required(:sentence).value(:string).meta(description: 'The sentence to analyze')
+   end
+
+   output do
+     required(:sentiment).value(included_in?: %w(positive negative neutral))
+       .meta(description: 'The sentiment classification')
+     required(:confidence).value(:float).meta(description: 'Confidence score')
+   end
+ end
+
+ # The same signature, with richer field descriptions
+ class SentimentClassifierWithDescriptions < DSPy::Signature
+   description "Classify sentiment of a given sentence."
+
+   input do
+     required(:sentence)
+       .value(:string)
+       .meta(description: 'The sentence whose sentiment you are analyzing')
+   end
+
+   output do
+     required(:sentiment)
+       .value(included_in?: [:positive, :negative, :neutral])
+       .meta(description: 'The allowed values to classify sentences')
+
+     required(:confidence).value(:float)
+       .meta(description: 'The confidence score for the classification')
+   end
+ end
+
+ # Initialize the language model
+ DSPy.configure do |c|
+   c.lm = DSPy::LM.new('openai/gpt-4o-mini', api_key: ENV['OPENAI_API_KEY'])
+ end
+
+ # Create the predictor and run inference
+ classify = DSPy::Predict.new(Classify)
+ result = classify.call(sentence: "This book was super fun to read, though not the last chapter.")
+ # => {:confidence=>0.85, :sentence=>"This book was super fun to read, though not the last chapter.", :sentiment=>"positive"}
+ ```
+
+ ### Chain of Thought Reasoning
+
+ ```ruby
+ class AnswerPredictor < DSPy::Signature
+   description "Provides a concise answer to the question"
+
+   input do
+     required(:question).value(:string)
+   end
+
+   output do
+     required(:answer).value(:string)
+   end
+ end
+
+ DSPy.configure do |c|
+   c.lm = DSPy::LM.new('openai/gpt-4o-mini', api_key: ENV['OPENAI_API_KEY'])
+ end
+
+ qa_cot = DSPy::ChainOfThought.new(AnswerPredictor)
+ response = qa_cot.call(question: "Two dice are tossed. What is the probability that the sum equals two?")
+ # The result includes both the reasoning and the answer:
+ # {:question=>"...", :answer=>"1/36", :reasoning=>"There is only one way to get a sum of 2..."}
+ ```
+
+ ### RAG (Retrieval-Augmented Generation)
+
+ ```ruby
+ class ContextualQA < DSPy::Signature
+   description "Answers questions using relevant context"
+
+   input do
+     required(:context).value(Types::Array.of(:string))
+     required(:question).filled(:string)
+   end
+
+   output do
+     required(:response).filled(:string)
+   end
+ end
+
+ DSPy.configure do |c|
+   c.lm = DSPy::LM.new('openai/gpt-4o-mini', api_key: ENV['OPENAI_API_KEY'])
+ end
+
+ # Set up a retriever (example using ColBERTv2)
+ retriever = ColBERTv2.new(url: 'http://your-retriever-endpoint')
+
+ # Generate a contextual response (assumes `question` holds the user's question)
+ rag = DSPy::ChainOfThought.new(ContextualQA)
+ prediction = rag.call(question: question, context: retriever.call('your query').map(&:long_text))
  ```
 
+ ### Multi-stage Pipeline
+
+ ```ruby
+ # Create a pipeline for article drafting
+ # (assumes Outline, DraftSection, and DraftArticle are defined elsewhere)
+ class ArticleDrafter < DSPy::Module
+   def initialize
+     @build_outline = DSPy::ChainOfThought.new(Outline)
+     @draft_section = DSPy::ChainOfThought.new(DraftSection)
+   end
+
+   def forward(topic)
+     # First build the outline
+     outline = @build_outline.call(topic: topic)
+
+     # Then draft each section
+     sections = []
+     (outline[:section_subheadings] || {}).each do |heading, subheadings|
+       section = @draft_section.call(
+         topic: outline[:title],
+         section_heading: "## #{heading}",
+         section_subheadings: [subheadings].flatten.map { |sh| "### #{sh}" }
+       )
+       sections << section
+     end
+
+     DraftArticle.new(title: outline[:title], sections: sections)
+   end
+ end
+
+ DSPy.configure do |c|
+   c.lm = DSPy::LM.new('openai/gpt-4o-mini', api_key: ENV['OPENAI_API_KEY'])
+ end
+
+ # Usage
+ drafter = ArticleDrafter.new
+ article = drafter.call("World Cup 2002")
+ ```
+
+ ### ReAct: Reasoning and Acting with Tools
+
+ The `DSPy::ReAct` module implements the ReAct (Reasoning and Acting) paradigm, allowing LLMs to interleave reasoning with tool use to answer complex questions or complete tasks. The agent iteratively generates thoughts, chooses actions (either calling a tool or finishing), and observes the results to inform its next step.
+
+ **Core Components:**
+
+ * **Signature**: Defines the overall task for the ReAct agent (e.g., answering a question). ReAct augments this signature's output schema with `history` (an array of structured thought/action/observation steps) and `iterations`.
+ * **Tools**: Instances of classes inheriting from `DSPy::Tools::Tool`. Each tool has a `name`, a `description` (used by the LLM to decide when to use the tool), and a `call` method that executes the tool's logic.
+ * **LLM**: The ReAct agent internally uses an LLM (configured via `DSPy.configure`) to generate thoughts and decide on actions.
+
+ **Example 1: Simple Arithmetic with a Tool**
+
+ Let's say we want to answer "What is 5 plus 7?". We can provide the ReAct agent with a simple calculator tool.
+
+ ```ruby
+ # Define a signature for the task
+ class MathQA < DSPy::Signature
+   description "Answers mathematical questions."
+
+   input do
+     required(:question).value(:string).meta(description: 'The math question to solve.')
+   end
+
+   output do
+     required(:answer).value(:string).meta(description: 'The numerical answer.')
+   end
+ end
+
+ # Define a simple calculator tool
+ class CalculatorTool < DSPy::Tools::Tool
+   def initialize
+     super('calculator', 'Calculates the result of a simple arithmetic expression (e.g., "5 + 7"). Input must be a string representing the expression.')
+   end
+
+   def call(expression_string)
+     # In a real scenario, you might use a more robust expression parser.
+     # For this example, assume simple addition in "X + Y" format.
+     if expression_string.match(/(\d+)\s*\+\s*(\d+)/)
+       num1 = $1.to_i
+       num2 = $2.to_i
+       (num1 + num2).to_s
+     else
+       "Error: Could not parse expression. Use format 'number + number'."
+     end
+   rescue StandardError => e
+     "Error: #{e.message}"
+   end
+ end
+
+ # Configure DSPy (if not already done)
+ DSPy.configure do |c|
+   c.lm = DSPy::LM.new('openai/gpt-4o-mini', api_key: ENV['OPENAI_API_KEY'])
+ end
+
+ # Initialize the ReAct agent with the signature and tool
+ calculator = CalculatorTool.new
+ react_agent = DSPy::ReAct.new(MathQA, tools: [calculator])
+
+ # Ask the question
+ question_text = "What is 5 plus 7?"
+ result = react_agent.forward(question: question_text)
+
+ puts "Question: #{question_text}"
+ puts "Answer: #{result.answer}"
+ puts "Iterations: #{result.iterations}"
+ puts "History:"
+ result.history.each do |entry|
+   puts "  Step #{entry[:step]}:"
+   puts "    Thought: #{entry[:thought]}"
+   puts "    Action: #{entry[:action]}"
+   puts "    Action Input: #{entry[:action_input]}"
+   puts "    Observation: #{entry[:observation]}" if entry[:observation]
+ end
+
+ # Expected output (will vary based on the LLM's reasoning):
+ # Question: What is 5 plus 7?
+ # Answer: 12
+ # Iterations: 2
+ # History:
+ #   Step 1:
+ #     Thought: I need to calculate 5 plus 7. I have a calculator tool that can do this.
+ #     Action: calculator
+ #     Action Input: 5 + 7
+ #     Observation: 12
+ #   Step 2:
+ #     Thought: The calculator returned 12, which is the answer to "5 plus 7?". I can now finish.
+ #     Action: finish
+ #     Action Input: 12
+ ```
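The tool's parsing logic is plain Ruby, so it can be exercised on its own without the gem or an LLM. A hypothetical standalone sketch (the `calculate` helper name is ours, not part of DSPy.rb):

```ruby
# Standalone sketch of CalculatorTool#call's parsing logic (no DSPy required).
def calculate(expression_string)
  if expression_string.match(/(\d+)\s*\+\s*(\d+)/)
    ($1.to_i + $2.to_i).to_s
  else
    "Error: Could not parse expression. Use format 'number + number'."
  end
end

puts calculate("5 + 7")    # => "12"
puts calculate("ten + 2")  # => error message
```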
+
+ **Example 2: Web Search with Serper.dev**
+
+ For questions requiring up-to-date information or broader knowledge, the ReAct agent can use a web search tool. Here's an example using the `serper.dev` API.
+
+ *Note: You'll need a Serper API key, which you can set in the `SERPER_API_KEY` environment variable.*
+
+ ```ruby
+ require 'net/http'
+ require 'json'
+ require 'uri'
+
+ # Define a signature for web-based QA
+ class WebQuestionAnswer < DSPy::Signature
+   description "Answers questions that may require web searches."
+
+   input do
+     required(:question).value(:string).meta(description: 'The question to answer, potentially requiring a web search.')
+   end
+
+   output do
+     required(:answer).value(:string).meta(description: 'The final answer to the question.')
+   end
+ end
+
+ # Define the Serper search tool
+ class SerperSearchTool < DSPy::Tools::Tool
+   def initialize
+     super('web_search', 'Searches the web for a given query and returns the first organic result snippet. Useful for finding current information or answers to general knowledge questions.')
+   end
+
+   def call(query)
+     api_key = ENV['SERPER_API_KEY']
+     unless api_key
+       return "Error: SERPER_API_KEY environment variable not set."
+     end
+
+     uri = URI.parse("https://google.serper.dev/search")
+     request = Net::HTTP::Post.new(uri)
+     request['X-API-KEY'] = api_key
+     request['Content-Type'] = 'application/json'
+     request.body = JSON.dump({ q: query })
+
+     begin
+       response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: uri.scheme == 'https') do |http|
+         http.request(request)
+       end
+
+       if response.is_a?(Net::HTTPSuccess)
+         results = JSON.parse(response.body)
+         first_organic_result = results['organic']&.first
+         if first_organic_result && first_organic_result['snippet']
+           return "Source: #{first_organic_result['link']}\nSnippet: #{first_organic_result['snippet']}"
+         elsif first_organic_result && first_organic_result['title']
+           return "Source: #{first_organic_result['link']}\nTitle: #{first_organic_result['title']}"
+         else
+           return "No relevant snippet found in the first result."
+         end
+       else
+         return "Error: Serper API request failed with status #{response.code} - #{response.body}"
+       end
+     rescue StandardError => e
+       return "Error performing web search: #{e.message}"
+     end
+   end
+ end
+
+ # Configure DSPy (if not already done)
+ DSPy.configure do |c|
+   c.lm = DSPy::LM.new('openai/gpt-4o-mini', api_key: ENV['OPENAI_API_KEY']) # Ensure your LM is configured
+ end
+
+ # Initialize the ReAct agent with the signature and search tool
+ search_tool = SerperSearchTool.new
+ web_qa_agent = DSPy::ReAct.new(WebQuestionAnswer, tools: [search_tool])
+
+ # Ask a question requiring web search
+ question_text = "What is the latest news about the Mars rover Perseverance?"
+ result = web_qa_agent.forward(question: question_text)
+
+ puts "Question: #{question_text}"
+ puts "Answer: #{result.answer}"
+ puts "Iterations: #{result.iterations}"
+ puts "History (summary):"
+ result.history.each do |entry|
+   puts "  Step #{entry[:step]}: Action: #{entry[:action]}, Input: #{entry[:action_input]&.slice(0, 50)}..."
+   # For brevity, not printing the full thought/observation here.
+ end
+ # The answer and history will depend on the LLM's reasoning and live search results.
+ ```
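The response-handling branch of the tool above can be tried with a canned payload instead of a live HTTP call. A hypothetical standalone sketch (the `first_snippet` helper name is ours; the field names mirror Serper's `organic` results):

```ruby
require 'json'

# Standalone sketch of SerperSearchTool's response handling,
# fed a canned JSON body rather than a real API response.
def first_snippet(body)
  results = JSON.parse(body)
  first = results['organic']&.first
  if first && first['snippet']
    "Source: #{first['link']}\nSnippet: #{first['snippet']}"
  elsif first && first['title']
    "Source: #{first['link']}\nTitle: #{first['title']}"
  else
    "No relevant snippet found in the first result."
  end
end

canned = JSON.dump({ 'organic' => [{ 'link' => 'https://example.com', 'snippet' => 'Perseverance update...' }] })
puts first_snippet(canned)
# Prints:
#   Source: https://example.com
#   Snippet: Perseverance update...
```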
+
+ ## Roadmap
+
+ ### First Release
+ - [x] Signatures and Predict module
+ - [x] RAG examples
+ - [x] Multi-stage pipelines
+ - [x] Validate inputs and outputs with JSON Schema
+ - [x] Thread-safe global config
+ - [x] Convert responses from hashes to POROs (hashes currently carry tons of footguns :fire:)
+ - [ ] Cover unhappy paths: validation errors
+ - [x] Implement ReAct module for reasoning and acting
+ - [ ] Add OpenTelemetry instrumentation
+ - [ ] Improve logging
+ - [ ] Add streaming support (?)
+ - [x] Ensure thread safety
+ - [ ] Comprehensive, LLM-friendly initial documentation
+
+ #### Backburner
+
+ - [ ] Support for additional LM providers
+ - [ ] Support for reasoning providers
+ - [ ] Adaptive Graph of Thoughts with tools
+
+ ### Optimizers
+
+ - [ ] Optimizing prompts: RAG
+ - [ ] Optimizing prompts: Chain of Thought
+ - [ ] Optimizing prompts: ReAct
+ - [ ] Optimizing weights: Classification
+
+ ## Contributing
+
+ Contributions are welcome! Please feel free to submit a Pull Request.
+
+ ## License
+
+ `dspy.rb` is released under the [MIT License](LICENSE).
@@ -0,0 +1,22 @@
+ # frozen_string_literal: true
+
+ module DSPy
+   # Enhances prediction by encouraging step-by-step reasoning
+   # before providing a final answer.
+   class ChainOfThought < Predict
+     def initialize(signature_class)
+       @signature_class = signature_class
+       chain_of_thought_schema = Dry::Schema.JSON do
+         required(:reasoning)
+           .value(:string)
+           .meta(description: "Reasoning: Let's think step by step in order to #{signature_class.description}")
+       end
+       @signature_class.output_schema = Dry::Schema.JSON(
+         parent: [
+           @signature_class.output_schema,
+           chain_of_thought_schema
+         ]
+       )
+     end
+   end
+ end
@@ -0,0 +1,94 @@
+ require 'dry/schema/version'
+
+ if Gem::Version.new(Dry::Schema::VERSION) > Gem::Version.new('1.15')
+   raise 'Double-check this monkey patch before upgrading dry-schema.'
+ end
+
+ Dry::Schema.load_extensions(:json_schema)
+
+ # Monkey patch Macros::Core to add a meta method
+ module Dry
+   module Schema
+     module Macros
+       class Core
+         def meta(metadata)
+           schema_dsl.meta(name, metadata)
+           self
+         end
+       end
+     end
+   end
+ end
+
+ # Monkey patch DSL to store metadata
+ module Dry
+   module Schema
+     class DSL
+       def meta(name, metadata)
+         @metas ||= {}
+         @metas[name] = metadata
+         self
+       end
+
+       def metas
+         @metas ||= {}
+       end
+
+       # Ensure metas are included in new instances
+       alias_method :original_new, :new
+       def new(**options, &block)
+         options[:metas] = metas
+         original_new(**options, &block)
+       end
+
+       # Ensure the processor has access to metas
+       alias_method :original_call, :call
+       def call
+         processor = original_call
+         processor.instance_variable_set(:@schema_metas, metas)
+         processor
+       end
+     end
+   end
+ end
+
+ # Monkey patch Processor to expose schema_metas
+ module Dry
+   module Schema
+     class Processor
+       def schema_metas
+         @schema_metas ||= {}
+       end
+     end
+   end
+ end
+
+ # Directly monkey patch the JSON Schema generation
+ module Dry
+   module Schema
+     module JSONSchema
+       module SchemaMethods
+         # Override the original json_schema method
+         def json_schema(loose: false)
+           compiler = SchemaCompiler.new(root: true, loose: loose)
+           compiler.call(to_ast)
+           result = compiler.to_hash
+
+           # Add descriptions to properties from schema_metas
+           if respond_to?(:schema_metas) && !schema_metas.empty?
+             schema_metas.each do |key, meta|
+               if meta[:description] && result[:properties][key]
+                 result[:properties][key][:description] = meta[:description]
+               end
+             end
+           end
+
+           result
+         end
+       end
+     end
+   end
+ end
data/lib/dspy/field.rb ADDED
@@ -0,0 +1,23 @@
+ # frozen_string_literal: true
+
+ module DSPy
+   class InputField
+     attr_reader :name, :type, :desc
+
+     def initialize(name, type, desc: nil)
+       @name = name
+       @type = type
+       @desc = desc
+     end
+   end
+
+   class OutputField
+     attr_reader :name, :type, :desc
+
+     def initialize(name, type, desc: nil)
+       @name = name
+       @type = type
+       @desc = desc
+     end
+   end
+ end
data/lib/dspy/lm.rb ADDED
@@ -0,0 +1,76 @@
+ # frozen_string_literal: true
+
+ require 'ruby_llm'
+
+ module DSPy
+   class LM
+     attr_reader :model_id, :api_key, :model, :provider
+
+     def initialize(model_id, api_key: nil)
+       @model_id = model_id
+       @api_key = api_key
+
+       # Configure RubyLLM with the API key if provided
+       if model_id.start_with?('openai/')
+         RubyLLM.configure do |config|
+           config.openai_api_key = api_key
+         end
+         @provider = :openai
+         @model = model_id.split('/').last
+       elsif model_id.start_with?('anthropic/')
+         RubyLLM.configure do |config|
+           config.anthropic_api_key = api_key
+         end
+         @provider = :anthropic
+         @model = model_id.split('/').last
+       else
+         raise ArgumentError, "Unsupported model provider: #{model_id}"
+       end
+     end
+
+     def chat(inference_module, input_values, &block)
+       signature_class = inference_module.signature_class
+       chat = RubyLLM.chat(model: model)
+       system_prompt = inference_module.system_signature
+       user_prompt = inference_module.user_signature(input_values)
+       chat.add_message role: :system, content: system_prompt
+       chat.ask(user_prompt, &block)
+
+       parse_response(chat.messages.last, input_values, signature_class)
+     end
+
+     private
+
+     def parse_response(response, input_values, signature_class)
+       # Try to parse the response as JSON
+       content = response.content
+
+       # Extract JSON if it's wrapped in a code block
+       if content.include?('```json')
+         content = content.split('```json').last.split('```').first.strip
+       elsif content.include?('```')
+         # Take the text between the first pair of fences
+         content = content.split('```')[1].to_s.strip
+       end
+
+       begin
+         json_payload = JSON.parse(content)
+
+         # Handle different signature types
+         if signature_class < DSPy::SorbetSignature
+           # For Sorbet signatures, just return the parsed JSON;
+           # SorbetPredict will handle validation.
+           json_payload
+         else
+           # Original dry-schema based handling
+           output = signature_class.output_schema.call(json_payload)
+
+           result_schema = Dry::Schema.JSON(parent: [signature_class.input_schema, signature_class.output_schema])
+           result = output.to_h.merge(input_values)
+           # Validate the combined input and output values
+           poro_result = result_schema.call(result)
+
+           poro_result.to_h
+         end
+       rescue JSON::ParserError
+         raise "Failed to parse LLM response as JSON: #{content}"
+       end
+     end
+   end
+ end
@@ -0,0 +1,13 @@
+ # frozen_string_literal: true
+
+ module DSPy
+   class Module
+     def forward(...)
+       raise NotImplementedError, "Subclasses must implement the forward method"
+     end
+
+     def call(...)
+       forward(...)
+     end
+   end
+ end
@@ -0,0 +1,72 @@
+ # frozen_string_literal: true
+
+ module DSPy
+   class PredictionInvalidError < RuntimeError
+     attr_accessor :errors
+
+     def initialize(errors)
+       @errors = errors
+       super("Prediction invalid: #{errors.to_h}")
+     end
+   end
+
+   class Predict < DSPy::Module
+     attr_reader :signature_class
+
+     def initialize(signature_class)
+       @signature_class = signature_class
+     end
+
+     def system_signature
+       <<-PROMPT
+       Your input schema fields are:
+       ```json
+       #{JSON.generate(@signature_class.input_schema.json_schema)}
+       ```
+       Your output schema fields are:
+       ```json
+       #{JSON.generate(@signature_class.output_schema.json_schema)}
+       ```
+       All interactions will be structured in the following way, with the appropriate values filled in.
+
+       ## Input values
+       ```json
+       {input_values}
+       ```
+       ## Output values
+       Respond exclusively with the output schema fields in the json block below.
+       ```json
+       {output_values}
+       ```
+
+       In adhering to this structure, your objective is: #{@signature_class.description}
+       PROMPT
+     end
+
+     def user_signature(input_values)
+       <<-PROMPT
+       ## Input Values
+       ```json
+       #{JSON.generate(input_values)}
+       ```
+
+       Respond with the corresponding output schema fields wrapped in a ```json ``` block,
+       starting with the heading `## Output values`.
+       PROMPT
+     end
+
+     def lm
+       DSPy.config.lm
+     end
+
+     def forward(**input_values)
+       DSPy.logger.info(module: self.class.to_s, **input_values)
+       result = @signature_class.input_schema.call(input_values)
+       if result.success?
+         output_attributes = lm.chat(self, input_values)
+         poro_class = Data.define(*output_attributes.keys)
+         return poro_class.new(*output_attributes.values)
+       end
+
+       raise PredictionInvalidError.new(result.errors)
+     end
+   end
+ end