desiru 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (43)
  1. checksums.yaml +7 -0
  2. data/.rspec +1 -0
  3. data/.rubocop.yml +55 -0
  4. data/CLAUDE.md +22 -0
  5. data/Gemfile +36 -0
  6. data/Gemfile.lock +255 -0
  7. data/LICENSE +21 -0
  8. data/README.md +343 -0
  9. data/Rakefile +18 -0
  10. data/desiru.gemspec +44 -0
  11. data/examples/README.md +55 -0
  12. data/examples/async_processing.rb +135 -0
  13. data/examples/few_shot_learning.rb +66 -0
  14. data/examples/graphql_api.rb +190 -0
  15. data/examples/graphql_integration.rb +114 -0
  16. data/examples/rag_retrieval.rb +80 -0
  17. data/examples/simple_qa.rb +31 -0
  18. data/examples/typed_signatures.rb +45 -0
  19. data/lib/desiru/async_capable.rb +170 -0
  20. data/lib/desiru/cache.rb +116 -0
  21. data/lib/desiru/configuration.rb +40 -0
  22. data/lib/desiru/field.rb +171 -0
  23. data/lib/desiru/graphql/data_loader.rb +210 -0
  24. data/lib/desiru/graphql/executor.rb +115 -0
  25. data/lib/desiru/graphql/schema_generator.rb +301 -0
  26. data/lib/desiru/jobs/async_predict.rb +52 -0
  27. data/lib/desiru/jobs/base.rb +53 -0
  28. data/lib/desiru/jobs/batch_processor.rb +71 -0
  29. data/lib/desiru/jobs/optimizer_job.rb +45 -0
  30. data/lib/desiru/models/base.rb +112 -0
  31. data/lib/desiru/models/raix_adapter.rb +210 -0
  32. data/lib/desiru/module.rb +204 -0
  33. data/lib/desiru/modules/chain_of_thought.rb +106 -0
  34. data/lib/desiru/modules/predict.rb +142 -0
  35. data/lib/desiru/modules/retrieve.rb +199 -0
  36. data/lib/desiru/optimizers/base.rb +130 -0
  37. data/lib/desiru/optimizers/bootstrap_few_shot.rb +212 -0
  38. data/lib/desiru/program.rb +106 -0
  39. data/lib/desiru/registry.rb +74 -0
  40. data/lib/desiru/signature.rb +322 -0
  41. data/lib/desiru/version.rb +5 -0
  42. data/lib/desiru.rb +67 -0
  43. metadata +184 -0
data/README.md ADDED
@@ -0,0 +1,343 @@
+ # DeSIRu - Declarative Self-Improving Ruby
+
+ A Ruby implementation of [DSPy](https://dspy.ai/), the framework for programming—not prompting—language models. Build sophisticated AI systems with modular, composable code instead of brittle prompt strings.
+
+ ## Overview
+
+ Desiru brings the power of DSPy to the Ruby ecosystem, enabling developers to:
+ - Write declarative AI programs using Ruby's elegant syntax
+ - Automatically optimize prompts and few-shot examples
+ - Build portable AI systems that work across different language models
+ - Create maintainable, testable AI applications
+
+ Desiru leverages [Raix](https://github.com/OlympiaAI/raix) under the hood as its primary chat completion interface, providing seamless support for OpenAI and OpenRouter APIs with features like streaming, function calling, and prompt caching.
+
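+ For a concrete picture of that layer, the bundled `examples/few_shot_learning.rb` routes everything through `Desiru::Models::RaixAdapter`; the following is a minimal sketch based on that example (provider and model values are illustrative, not prescriptive):
+
+ ```ruby
+ require 'desiru'
+
+ # Configure the Raix-backed adapter as the default model for all modules.
+ # Values mirror the bundled example; swap in your own provider settings.
+ Desiru.configure do |config|
+   config.default_model = Desiru::Models::RaixAdapter.new(
+     provider: :openai,
+     model: 'gpt-3.5-turbo',
+     api_key: ENV['OPENAI_API_KEY']
+   )
+ end
+ ```
+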
+ ## Installation
+
+ Add this line to your application's Gemfile:
+
+ ```ruby
+ gem 'desiru'
+ ```
+
+ And then execute:
+
+ ```bash
+ $ bundle install
+ ```
+
+ Or install it yourself as:
+
+ ```bash
+ $ gem install desiru
+ ```
+
+ ## Quick Start
+
+ ```ruby
+ require 'desiru'
+
+ # Configure your language model
+ Desiru.configure do |config|
+   config.default_model = Desiru::Models::OpenAI.new(api_key: ENV['OPENAI_API_KEY'])
+ end
+
+ # Define a simple question-answering signature
+ math = Desiru::ChainOfThought.new("question -> answer: float")
+
+ # Use it!
+ result = math.call(question: "Two dice are tossed. What is the probability that the sum equals two?")
+ puts result.answer # => 0.0278
+ ```
+
+ ## Core Concepts
+
+ ### Signatures
+
+ Signatures define the input/output behavior of your AI components:
+
+ ```ruby
+ # Simple signature
+ qa = Desiru::Signature.new("question -> answer")
+
+ # Typed signature with descriptions
+ summarizer = Desiru::Signature.new(
+   "document: string, max_length: int -> summary: string",
+   descriptions: {
+     document: "The text to summarize",
+     max_length: "Maximum number of words in summary",
+     summary: "A concise summary of the document"
+   }
+ )
+ ```
+
+ ### Modules
+
+ Desiru provides several built-in modules for different reasoning patterns:
+
+ ```ruby
+ # Basic prediction
+ predict = Desiru::Predict.new("question -> answer")
+
+ # Chain of Thought reasoning
+ cot = Desiru::ChainOfThought.new("question -> answer")
+
+ # ReAct pattern for tool use
+ react = Desiru::ReAct.new("question -> answer", tools: [calculator, search])
+
+ # Compose modules into programs
+ class RAGPipeline < Desiru::Program
+   def initialize
+     @retrieve = Desiru::Retrieve.new(k: 3)
+     @generate = Desiru::ChainOfThought.new("context, question -> answer")
+   end
+
+   def forward(question)
+     context = @retrieve.call(question)
+     @generate.call(context: context, question: question)
+   end
+ end
+ ```
+
+ ### Optimizers
+
+ Automatically improve your AI programs:
+
+ ```ruby
+ # Create a simple training set
+ trainset = [
+   { question: "What is 2+2?", answer: "4" },
+   { question: "What is the capital of France?", answer: "Paris" }
+ ]
+
+ # Optimize with few-shot examples
+ optimizer = Desiru::BootstrapFewShot.new(metric: :exact_match)
+ optimized_program = optimizer.compile(program, trainset: trainset)
+
+ # Or use more advanced optimization
+ optimizer = Desiru::MIPROv2.new(
+   metric: :f1,
+   num_candidates: 10,
+   max_bootstrapped_demos: 3
+ )
+ ```
+
+ ## Advanced Usage
+
+ ### Custom Metrics
+
+ ```ruby
+ def relevance_metric(prediction, ground_truth)
+   # Your custom evaluation logic
+   score = calculate_similarity(prediction.answer, ground_truth.answer)
+   score > 0.8 ? 1.0 : 0.0
+ end
+
+ optimizer = Desiru::BootstrapFewShot.new(metric: method(:relevance_metric))
+ ```
+
+ ### Multi-Stage Pipelines
+
+ ```ruby
+ class AdvancedQA < Desiru::Program
+   def initialize
+     @understand = Desiru::ChainOfThought.new("question -> interpretation")
+     @decompose = Desiru::Predict.new("question -> subquestions: list[str]")
+     @answer_sub = Desiru::ChainOfThought.new("subquestion -> subanswer")
+     @synthesize = Desiru::ChainOfThought.new("subresults -> final_answer")
+   end
+
+   def forward(question)
+     interpretation = @understand.call(question: question)
+     subquestions = @decompose.call(question: question)
+
+     subresults = subquestions.subquestions.map do |subq|
+       @answer_sub.call(subquestion: subq)
+     end
+
+     @synthesize.call(subresults: subresults)
+   end
+ end
+ ```
+
+ ### Model Adapters
+
+ Desiru supports multiple language model providers:
+
+ ```ruby
+ # OpenAI
+ model = Desiru::Models::OpenAI.new(
+   api_key: ENV['OPENAI_API_KEY'],
+   model: 'gpt-4-turbo-preview'
+ )
+
+ # Anthropic
+ model = Desiru::Models::Anthropic.new(
+   api_key: ENV['ANTHROPIC_API_KEY'],
+   model: 'claude-3-opus-20240229'
+ )
+
+ # Local models via Ollama
+ model = Desiru::Models::Ollama.new(
+   model: 'llama2:70b',
+   base_url: 'http://localhost:11434'
+ )
+
+ # Use with any module
+ cot = Desiru::ChainOfThought.new("question -> answer", model: model)
+ ```
+
+ ### Background Processing
+
+ Desiru includes built-in support for asynchronous processing using Sidekiq:
+
+ ```ruby
+ # Configure Redis for background jobs
+ Desiru.configure do |config|
+   config.redis_url = 'redis://localhost:6379'
+ end
+
+ # Single async prediction
+ qa_module = Desiru::Predict.new("question -> answer")
+ result = qa_module.call_async(question: "What is 2+2?")
+
+ # Check status
+ result.ready?   # => false (still processing)
+ result.success? # => true/false (when ready)
+
+ # Wait for result
+ answer = result.wait(timeout: 30) # Blocks until ready
+ puts answer.result # => "4"
+
+ # Batch processing
+ questions = [
+   { question: "What is 2+2?" },
+   { question: "What is 3+3?" }
+ ]
+ batch_result = qa_module.call_batch_async(questions)
+
+ # Get batch statistics
+ batch_result.wait
+ stats = batch_result.stats
+ # => { total: 2, successful: 2, failed: 0, success_rate: 1.0 }
+
+ # Background optimization
+ optimizer = Desiru::BootstrapFewShot.new(metric: :f1)
+ job_id = optimizer.compile_async(program, trainset: examples)
+ ```
+
+ To use background processing:
+ 1. Add `redis` to your Gemfile (see the sketch below)
+ 2. Run Sidekiq workers: `bundle exec sidekiq`
+ 3. Use `call_async` methods on modules
+
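+ A minimal Gemfile sketch for step 1 (the `desiru` gem itself declares `sidekiq` and `redis` as runtime dependencies, so listing them explicitly is optional but makes the requirement visible):
+
+ ```ruby
+ # Gemfile
+ source 'https://rubygems.org'
+
+ gem 'desiru'
+ gem 'redis'   # stores queued jobs and their results
+ gem 'sidekiq' # worker processes, started with: bundle exec sidekiq
+ ```
+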
+ ### Background Processing: DSPy vs Desiru
+
+ While DSPy (Python) includes async support through Python's `asyncio`, Desiru takes a different approach using Sidekiq and Redis. This design choice reflects the different ecosystems and typical deployment patterns:
+
+ #### DSPy's Async Approach
+ - **In-process concurrency** using Python's `asyncio`
+ - Runs multiple LLM calls concurrently within the same process
+ - No persistence - results are lost if the process crashes
+ - Best suited for scripts, notebooks, and research
+
+ ```python
+ # DSPy async example
+ async def main():
+     output = await predict.acall(question="What is 2+2?")
+ ```
+
+ #### Desiru's Background Jobs Approach
+ - **True background processing** with separate worker processes
+ - Jobs persist in Redis and survive application restarts
+ - Built for production web applications (Rails, Sinatra, etc.)
+ - Includes job prioritization, retries, and monitoring (see the Ruby counterpart sketched below)
+
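+ The Desiru counterpart, using the `call_async` API shown above (the work itself runs in a separate Sidekiq worker process):
+
+ ```ruby
+ # Desiru async example: enqueue to Redis, then block until a worker finishes
+ result = predict.call_async(question: "What is 2+2?")
+ answer = result.wait(timeout: 30)
+ ```
+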
+ | Feature | DSPy (asyncio) | Desiru (Sidekiq/Redis) |
+ |---------|----------------|------------------------|
+ | **Architecture** | Single process | Distributed workers |
+ | **Persistence** | None | Redis with configurable TTL |
+ | **Failure handling** | Basic exceptions | Retries, dead letter queues |
+ | **Monitoring** | None | Sidekiq Web UI |
+ | **Use case** | Research, notebooks | Production web apps |
+
+ This approach makes Desiru particularly well-suited for:
+ - Web applications that need non-blocking LLM operations
+ - Batch processing of large datasets
+ - Systems requiring job persistence and reliability
+ - Deployments that need to scale horizontally
+
+ ## Examples
+
+ ### Retrieval-Augmented Generation (RAG)
+
+ ```ruby
+ class SimpleRAG < Desiru::Program
+   def initialize(vectorstore)
+     @vectorstore = vectorstore
+     @retrieve = Desiru::Retrieve.new(k: 5)
+     @generate = Desiru::ChainOfThought.new(
+       "context: list[str], question: str -> answer: str"
+     )
+   end
+
+   def forward(question)
+     docs = @retrieve.call(question, index: @vectorstore)
+     @generate.call(context: docs, question: question)
+   end
+ end
+
+ # Usage
+ rag = SimpleRAG.new(my_vectorstore)
+ result = rag.call("What are the main features of Ruby 3.0?")
+ ```
+
+ ### Classification with Reasoning
+
+ ```ruby
+ classifier = Desiru::ChainOfThought.new(
+   "text -> sentiment: Literal['positive', 'negative', 'neutral']"
+ )
+
+ # Optimize with examples
+ optimizer = Desiru::BootstrapFewShot.new(max_labeled_demos: 8)
+ classifier = optimizer.compile(classifier, trainset: sentiment_examples)
+
+ # Use it
+ result = classifier.call(text: "This framework is amazing!")
+ puts result.sentiment # => "positive"
+ puts result.reasoning # => "The text uses positive language..."
+ ```
+
+ ## Testing
+
+ Desiru programs are testable Ruby code:
+
+ ```ruby
+ RSpec.describe MyRAGPipeline do
+   let(:pipeline) { described_class.new }
+
+   it "retrieves relevant documents" do
+     result = pipeline.call("What is Ruby?")
+     expect(result.answer).to include("programming language")
+   end
+
+   it "handles complex questions" do
+     # Test with mocked models for deterministic results
+     allow(pipeline).to receive(:model).and_return(mock_model)
+     # ...
+   end
+ end
+ ```
+
+ ## Contributing
+
+ Bug reports and pull requests are welcome on GitHub at https://github.com/obie/desiru.
+
+ ## License
+
+ The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
+
+ ## Acknowledgments
+
+ Desiru is a Ruby port of [DSPy](https://github.com/stanfordnlp/dspy) by Stanford NLP. Special thanks to the DSPy team for creating this innovative approach to language model programming.
data/Rakefile ADDED
@@ -0,0 +1,18 @@
+ # frozen_string_literal: true
+
+ require 'bundler/gem_tasks'
+ require 'rspec/core/rake_task'
+
+ RSpec::Core::RakeTask.new(:spec)
+
+ require 'rubocop/rake_task'
+
+ RuboCop::RakeTask.new(:rubocop_ci)
+
+ task ci: %i[spec rubocop_ci]
+
+ RuboCop::RakeTask.new(:rubocop) do |task|
+   task.options = ['--autocorrect']
+ end
+
+ task default: %i[spec rubocop]
data/desiru.gemspec ADDED
@@ -0,0 +1,44 @@
+ # frozen_string_literal: true
+
+ require_relative 'lib/desiru/version'
+
+ Gem::Specification.new do |spec|
+   spec.name = 'desiru'
+   spec.version = Desiru::VERSION
+   spec.authors = ['Obie Fernandez']
+   spec.email = ['obiefernandez@gmail.com']
+
+   spec.summary = 'Declarative Self-Improving Ruby - A Ruby port of DSPy'
+   spec.description = "Desiru brings DSPy's declarative programming paradigm for language models to Ruby, " \
+                      'enabling reliable, maintainable, and portable AI programming.'
+   spec.homepage = 'https://github.com/obie/desiru'
+   spec.license = 'MIT'
+   spec.required_ruby_version = '>= 3.4.2'
+
+   spec.metadata['homepage_uri'] = spec.homepage
+   spec.metadata['source_code_uri'] = 'https://github.com/obie/desiru'
+   spec.metadata['changelog_uri'] = 'https://github.com/obie/desiru/blob/main/CHANGELOG.md'
+
+   # Specify which files should be added to the gem when it is released.
+   spec.files = Dir.chdir(__dir__) do
+     `git ls-files -z`.split("\x0").reject do |f|
+       (File.expand_path(f) == __FILE__) ||
+         f.match(%r{\A(?:(?:bin|test|spec|features)/|\.(?:git|circleci)|appveyor)})
+     end
+   end
+   spec.bindir = 'exe'
+   spec.executables = spec.files.grep(%r{\Aexe/}) { |f| File.basename(f) }
+   spec.require_paths = ['lib']
+
+   # Runtime dependencies
+   spec.add_dependency 'forwardable', '~> 1.3'
+   spec.add_dependency 'redis', '~> 5.0'
+   spec.add_dependency 'sidekiq', '~> 7.2'
+   spec.add_dependency 'singleton', '~> 0.1'
+
+   # Development dependencies (basic ones, others in Gemfile)
+   spec.add_development_dependency 'bundler', '~> 2.0'
+   spec.add_development_dependency 'rake', '~> 13.0'
+   spec.add_development_dependency 'rspec', '~> 3.0'
+   spec.metadata['rubygems_mfa_required'] = 'false'
+ end
data/examples/README.md ADDED
@@ -0,0 +1,55 @@
+ # Desiru Examples
+
+ This directory contains example scripts demonstrating various features of Desiru.
+
+ ## Running the Examples
+
+ Before running any examples, make sure you have:
+
+ 1. Installed the gem dependencies:
+ ```bash
+ bundle install
+ ```
+
+ 2. Set your OpenAI API key:
+ ```bash
+ export OPENAI_API_KEY="your-api-key-here"
+ ```
+
+ ## Available Examples
+
+ ### simple_qa.rb
+ Basic question-answering using the Predict and ChainOfThought modules.
+
+ ```bash
+ ruby examples/simple_qa.rb
+ ```
+
+ ### typed_signatures.rb
+ Demonstrates typed signatures with input/output validation and field descriptions.
+
+ ```bash
+ ruby examples/typed_signatures.rb
+ ```
+
+ ### few_shot_learning.rb
+ Shows how to use the BootstrapFewShot optimizer to improve module performance with training examples.
+
+ ```bash
+ ruby examples/few_shot_learning.rb
+ ```
+
+ ## Creating Your Own Examples
+
+ When creating new examples (a minimal skeleton is sketched after this list):
+
+ 1. Use `require "bundler/setup"` to ensure proper gem loading
+ 2. Configure Desiru with your preferred model
+ 3. Create modules with appropriate signatures
+ 4. Handle API keys securely (use environment variables)
+
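+ A sketch of such a skeleton, assuming the OpenAI-backed configuration used by the bundled examples (the signature and question are placeholders):
+
+ ```ruby
+ #!/usr/bin/env ruby
+ # frozen_string_literal: true
+
+ require 'bundler/setup' # step 1: proper gem loading
+ require 'desiru'
+
+ # Step 2: configure your preferred model; the API key comes from the environment (step 4)
+ Desiru.configure do |config|
+   config.default_model = Desiru::Models::RaixAdapter.new(
+     provider: :openai,
+     model: 'gpt-3.5-turbo',
+     api_key: ENV['OPENAI_API_KEY'] || raise('Please set OPENAI_API_KEY environment variable')
+   )
+ end
+
+ # Step 3: create a module with an appropriate signature
+ qa = Desiru::Predict.new('question -> answer')
+ puts qa.call(question: 'What does Desiru stand for?').answer
+ ```
+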
+ ## Notes
+
+ - These examples use OpenAI by default, but you can configure other providers (Anthropic, OpenRouter, etc.)
+ - Make sure to handle API rate limits appropriately in production code
+ - Consider caching results for expensive operations
data/examples/async_processing.rb ADDED
@@ -0,0 +1,135 @@
+ #!/usr/bin/env ruby
+ # frozen_string_literal: true
+
+ require 'bundler/setup'
+ require 'desiru'
+ require 'sidekiq'
+
+ # Example: Asynchronous Processing with Desiru
+ # This example demonstrates how to use background jobs for LLM operations
+
+ # Configure Desiru with a mock model for demonstration
+ Desiru.configure do |config|
+   # In a real application, you would configure your actual model here
+   # config.default_model = Desiru::Models::OpenAI.new(api_key: ENV['OPENAI_API_KEY'])
+
+   # Redis URL for background job storage
+   config.redis_url = ENV['REDIS_URL'] || 'redis://localhost:6379'
+ end
+
+ # Mock model for demonstration
+ class MockModel
+   def complete(_prompt, **_options)
+     # Simulate processing time
+     sleep(0.5)
+
+     # Return mock response
+     {
+       content: "answer: This is a mock response for demonstration"
+     }
+   end
+
+   def to_config
+     { type: 'mock' }
+   end
+ end
+
+ # Configure Desiru with mock model
+ Desiru.configuration.default_model = MockModel.new
+
+ puts "=== Desiru Async Processing Example ==="
+ puts
+
+ # Example 1: Single async prediction
+ puts "1. Single Async Prediction:"
+ qa_module = Desiru::Predict.new("question -> answer")
+
+ # Submit async job
+ result = qa_module.call_async(question: "What is the capital of France?")
+ puts "  Job ID: #{result.job_id}"
+ puts "  Status: Processing..."
+
+ # Check if ready (non-blocking)
+ sleep(0.1)
+ puts "  Ready? #{result.ready?}"
+
+ # Wait for result (blocking with timeout)
+ begin
+   final_result = result.wait(timeout: 5)
+   puts "  Result: #{final_result.answer}"
+ rescue Desiru::TimeoutError => e
+   puts "  Error: #{e.message}"
+ end
+
+ puts
+
+ # Example 2: Batch processing
+ puts "2. Batch Processing:"
+ questions = [
+   { question: "What is 2+2?" },
+   { question: "What is the capital of Japan?" },
+   { question: "Who wrote Romeo and Juliet?" }
+ ]
+
+ batch_result = qa_module.call_batch_async(questions)
+ puts "  Batch ID: #{batch_result.job_id}"
+ puts "  Processing #{questions.size} questions..."
+
+ # Wait for batch to complete
+ batch_result.wait(timeout: 10)
+
+ # Get results
+ results = batch_result.results
+ stats = batch_result.stats
+
+ puts "  Stats:"
+ puts "  - Total: #{stats[:total]}"
+ puts "  - Successful: #{stats[:successful]}"
+ puts "  - Failed: #{stats[:failed]}"
+ puts "  - Success Rate: #{(stats[:success_rate] * 100).round(1)}%"
+
+ puts "  Results:"
+ results.each_with_index do |result, index|
+   if result
+     puts "    [#{index}] #{result.answer}"
+   else
+     puts "    [#{index}] Failed"
+   end
+ end
+
+ puts
+
+ # Example 3: Error handling
+ puts "3. Error Handling:"
+
+ # Create a module that will fail
+ class FailingModel
+   def complete(_prompt, **_options)
+     raise StandardError, "Simulated model failure"
+   end
+
+   def to_config
+     { type: 'failing' }
+   end
+ end
+
+ failing_module = Desiru::Predict.new("question -> answer", model: FailingModel.new)
+ async_result = failing_module.call_async(question: "This will fail")
+
+ begin
+   async_result.wait(timeout: 2)
+ rescue Desiru::ModuleError => e
+   puts "  Caught error: #{e.message}"
+   error_info = async_result.error
+   puts "  Error class: #{error_info[:class]}"
+   puts "  Error message: #{error_info[:message]}"
+ end
+
+ puts
+ puts "=== Example Complete ==="
+ puts
+ puts "Note: In a production environment, you would:"
+ puts "1. Have Sidekiq workers running: bundle exec sidekiq"
+ puts "2. Use real language models instead of mocks"
+ puts "3. Implement proper error handling and monitoring"
+ puts "4. Consider using Sidekiq Pro for additional features"
data/examples/few_shot_learning.rb ADDED
@@ -0,0 +1,66 @@
+ #!/usr/bin/env ruby
+ # frozen_string_literal: true
+
+ require 'bundler/setup'
+ require 'desiru'
+
+ # Configure Desiru
+ Desiru.configure do |config|
+   config.default_model = Desiru::Models::RaixAdapter.new(
+     provider: :openai,
+     model: 'gpt-3.5-turbo',
+     api_key: ENV['OPENAI_API_KEY'] || raise('Please set OPENAI_API_KEY environment variable')
+   )
+ end
+
+ # Create a sentiment classifier
+ classifier = Desiru::Modules::ChainOfThought.new(
+   'text -> sentiment: string, confidence: float'
+ )
+
+ # Training examples
+ training_examples = [
+   {
+     text: 'This product is amazing! I love it so much.',
+     sentiment: 'positive',
+     confidence: 0.95
+   },
+   {
+     text: 'Terrible experience. Would not recommend.',
+     sentiment: 'negative',
+     confidence: 0.90
+   },
+   {
+     text: "It's okay, nothing special but does the job.",
+     sentiment: 'neutral',
+     confidence: 0.80
+   }
+ ]
+
+ # Optimize the classifier with few-shot examples
+ optimizer = Desiru::Optimizers::BootstrapFewShot.new(
+   metric: :exact_match,
+   max_bootstrapped_demos: 3
+ )
+
+ puts "Optimizing classifier with #{training_examples.size} examples..."
+ optimized_classifier = optimizer.compile(classifier, trainset: training_examples)
+
+ # Test on new examples
+ test_texts = [
+   'This framework is absolutely fantastic!',
+   "I'm disappointed with the quality.",
+   'The service is adequate for basic needs.'
+ ]
+
+ puts "\n#{'=' * 50}"
+ puts 'Sentiment Analysis Results:'
+ puts '=' * 50
+
+ test_texts.each do |text|
+   result = optimized_classifier.call(text: text)
+   puts "\nText: \"#{text}\""
+   puts "Sentiment: #{result.sentiment}"
+   puts "Confidence: #{(result.confidence * 100).round(1)}%"
+   puts "Reasoning: #{result.reasoning}" if result.respond_to?(:reasoning)
+ end