desiru 0.1.1 → 0.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 6d4dc11edfa1084914d29616c396cb2bb163c4ed74bf2e13a07f23edd4b0fefc
-   data.tar.gz: 86a62bd76bf86d7aedff994dcf93e110a64a1d135cad0249c89658f22e93bc78
+   metadata.gz: dba6d3f283255824630ae2a8fd4119dfcdb7480f1aa8d50e57aa8c8d740128ce
+   data.tar.gz: 26b4a53f8be0de54d4619c7ae7ba85fb868009c3b7407b7f5730b2cdc8b8277d
  SHA512:
-   metadata.gz: bee932d2d4adb82e40e80feee6a07156f1f8c6f753660e11c492b2ff0ccc6fa8711587bdc05c2961b9f80b32b84394c226b8408fb7467d0d5bd5d2063254c6c0
-   data.tar.gz: f59cb824b72814078b1646c997f001b6aae47fa0ffc2ad7c06070b2a2a9cb0e3fe6068574624c9f508ac2f87326c9d76c2040f28ada3fc91b5e938113ac81186
+   metadata.gz: 4be0c144f72095e40bb49060797ebf2e790d053f17bdb773c4e41bf0bd84f9706f4fff91981ea3bba89535afc2b152ca06e79ad11ac5a74d82912055b115d702
+   data.tar.gz: '08db2c001e9dda2c0f1cb934fcd2826f75290324edf7707e898f2ed18a57b675c4d2c50b7592ce3aa0b7d2506cbb8c704b3edb15accc0fbac44ab917abb76180'
@@ -0,0 +1,11 @@
+ {
+   "permissions": {
+     "allow": [
+       "WebFetch(domain:github.com)",
+       "Bash(gh repo view:*)",
+       "Bash(gh issue list:*)",
+       "Bash(claude-swarm:*)"
+     ],
+     "deny": []
+   }
+ }
data/CHANGELOG.md ADDED
@@ -0,0 +1,73 @@
+ # Changelog
+
+ All notable changes to this project will be documented in this file.
+
+ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
+ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+ ## [0.2.0] - 2025-06-23
+
+ ### Added
+ - **Core Data Containers**: New `Example` and `Prediction` classes for structured data handling in DSPy programs
+ - **Trace Collection System**: Complete tracing infrastructure with `TraceCollector` and `Traceable` for execution monitoring and debugging
+ - **Compilation Infrastructure**: New `Compiler` and `CompilerBuilder` classes enabling full DSPy program compilation pipeline
+ - **ProgramOfThought Module**: Advanced reasoning module that generates and executes code for complex problem solving
+ - **MIPROv2 Optimizer**: State-of-the-art Bayesian optimization algorithm for automatic prompt and few-shot example optimization
+ - **BestOfN Module**: Multi-sampling module with configurable selection criteria (confidence, consistency, LLM judge, custom)
+ - **Comprehensive Integration Tests**: Full test coverage for all new components ensuring reliability and correctness
+
+ ### Enhanced
+ - **Module Architecture**: Improved base module system to support advanced tracing and compilation features
+ - **Optimization Pipeline**: Complete optimization workflow from data collection through model improvement
+ - **Error Handling**: Robust error recovery and logging throughout the new components
+
+ ### Technical Improvements
+ - **Type Safety**: Enhanced type checking and validation across all new modules
+ - **Performance**: Optimized execution paths for compilation and optimization workflows
+ - **Extensibility**: Modular architecture enabling easy addition of new optimizers and reasoning modules
+
+ ## [0.1.1] - 2025-06-21
+
+ ### Added
+ - Direct API client implementations for Anthropic, OpenAI, and OpenRouter
+ - New modules: Majority, MultiChainComparison, and ProgramOfThought
+ - New optimizers: COPRO and KNNFewShot
+ - Enhanced error handling with detailed error classes
+ - Support for Ruby 3.3.6 (minimum version now 3.3.0)
+ - Interactive console with pre-loaded modules (`bin/console`)
+ - Examples runner with model selection (`bin/examples`)
+
+ ### Changed
+ - Replaced Raix dependency with direct API integrations for better control
+ - Improved console experience with better error messages and debug helpers
+ - Updated default max_tokens from 1000 to 4096 to prevent truncated responses
+ - Fixed namespace issues in console by including necessary modules
+
+ ### Fixed
+ - Redis mocking in job tests for CI compatibility
+ - Rubocop configuration to match required Ruby version
+ - Test failures in CI environment
+
+ ### Removed
+ - Raix gem dependency
+ - Support for Ruby 3.2.x (minimum version is now 3.3.0)
+
+ ## [0.1.0] - 2025-06-12
+
+ ### Added
+ - Initial release of Desiru
+ - Core DSPy functionality ported to Ruby
+ - Basic modules: Predict, ChainOfThought, ReAct
+ - Signature system with type validation
+ - Model adapters for OpenAI and Anthropic (via Raix)
+ - Optimizers: BootstrapFewShot, MIPROv2
+ - REST API integration with Grape and Sinatra
+ - GraphQL integration with automatic schema generation
+ - Background job processing with Sidekiq
+ - Database persistence layer with Sequel
+ - Comprehensive test suite
+ - Documentation and examples
+
+ [0.2.0]: https://github.com/obie/desiru/compare/v0.1.1...v0.2.0
+ [0.1.1]: https://github.com/obie/desiru/compare/v0.1.0...v0.1.1
+ [0.1.0]: https://github.com/obie/desiru/releases/tag/v0.1.0
data/CLAUDE.local.md ADDED
@@ -0,0 +1,3 @@
+ ## Development Workflow
+
+ - ALWAYS run rubocop -A before pushing a new PR update. it's super annoying to have the build break because of the linter
data/CLAUDE.md CHANGED
@@ -23,4 +23,9 @@ This project is in its initial setup phase. When implementing features:
 
  ## Workflow Guidance
 
- - For maximum efficiency, whenever you need to perform multiple independent operations, invoke all relevant tools simultaneously rather than sequentially.
+ - For maximum efficiency, whenever you need to perform multiple independent operations, invoke all relevant tools simultaneously rather than sequentially.
+ - When you're tempted to respond and return control to me with a message like "The codebase is now in excellent shape with 859 passing tests, 1 failing test, and 5 pending tests. The project is ready for the v0.2.0 release once the team decides how to handle the final test (either fix it or mark it as pending)." then instead, you should invoke the team to go ahead and decide how to handle the final test.
+
+ ## Release Guidance
+
+ - Note that releases are never ready if there are any tests failing in the test suites. Never tell me that a release is ready unless we have a clean build.
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
  PATH
    remote: .
    specs:
-     desiru (0.1.0)
+     desiru (0.2.0)
        forwardable (~> 1.3)
        redis (~> 5.0)
        sidekiq (~> 7.2)
data/README.md CHANGED
@@ -2,7 +2,7 @@
 
  A Ruby implementation of [DSPy](https://dspy.ai/), the framework for programming—not prompting—language models. Build sophisticated AI systems with modular, composable code instead of brittle prompt strings.
 
- Note: This project is in its earliest stages of development and experimental. Expect many bugs and breaking changes.
+ Note: This project is in active development. While core functionality is stable, expect continued rapid evolution and new features.
 
 
  ## Overview
@@ -88,6 +88,12 @@ cot = Desiru::ChainOfThought.new("question -> answer")
  # ReAct pattern for tool use
  react = Desiru::ReAct.new("question -> answer", tools: [calculator, search])
 
+ # Program of Thought - generates and executes code
+ pot = Desiru::ProgramOfThought.new("problem -> solution: float")
+
+ # Best of N - samples multiple outputs and selects the best
+ best_of_n = Desiru::BestOfN.new("question -> answer", n_samples: 3, selection_criterion: :consistency)
+
  # Compose modules into programs
  class RAGPipeline < Desiru::Program
    def initialize
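
As a rough, hypothetical illustration (not part of the package diff): invoking the two new modules from the README excerpt above, assuming Desiru modules expose a DSPy-style call interface and return results with accessors named after the signature outputs. The inputs and outputs below are made up.

    # Hypothetical usage sketch; the `call` interface and result accessors
    # are assumptions, not taken from the diff.
    pot = Desiru::ProgramOfThought.new("problem -> solution: float")
    result = pot.call(problem: "What is 15% of 240?")
    result.solution   # => 36.0 (illustrative)

    best_of_n = Desiru::BestOfN.new("question -> answer", n_samples: 3, selection_criterion: :consistency)
    best_of_n.call(question: "What is the capital of France?").answer
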
@@ -0,0 +1,185 @@
+ version: 1
+ swarm:
+   name: "Desiru Development Team"
+   main: project_lead
+   before:
+     - "echo '🚀 Starting Desiru development session...'"
+     - "shadowenv exec -- bundle install"
+     - "shadowenv exec -- bundle exec rspec --help > /dev/null || echo 'RSpec ready'"
+   instances:
+     project_lead:
+       description: "Project lead coordinating Desiru development, managing releases, and ensuring code quality"
+       directory: .
+       model: opus
+       connections: [core_architect, feature_implementer, test_specialist, release_manager]
+       prompt: |
+         You are the project lead for Desiru, a Ruby implementation of DSPy. Your responsibilities include:
+
+         - Coordinating development work across the team
+         - Making architectural decisions and ensuring code quality
+         - Prioritizing GitHub issues based on the roadmap
+         - Reviewing code changes before they're committed
+         - Managing the overall development process
+
+         Key project context:
+         - This is a Ruby gem implementing DSPy (Declarative Self-Improving) for programming language models
+         - Current version: 0.1.1 (check lib/desiru/version.rb)
+         - Uses RSpec for testing (NEVER use Minitest)
+         - Follows Ruby community conventions
+         - Has a comprehensive roadmap in issue #22
+
+         Always use 'shadowenv exec --' prefix for Ruby/bundler commands. Use 'be' alias for bundle exec.
+
+         For maximum efficiency, whenever you need to perform multiple independent operations, invoke all relevant tools simultaneously rather than sequentially.
+       allowed_tools:
+         - Read
+         - Edit
+         - Bash
+         - WebSearch
+         - WebFetch
+
+     core_architect:
+       description: "Senior architect implementing core DSPy infrastructure, modules, and optimizers"
+       directory: ./lib/desiru
+       model: opus
+       connections: [feature_implementer, test_specialist]
+       prompt: |
+         You are the core architect for Desiru's DSPy implementation. Your expertise includes:
+
+         - Implementing core DSPy modules (ProgramOfThought, MultiChainComparison, BestOfN)
+         - Building optimizers (MIPROv2, COPRO, BootstrapFewShotWithRandomSearch)
+         - Designing the compilation infrastructure and trace collection system
+         - Creating typed predictors and example/prediction classes
+
+         Focus on high-priority features from the roadmap (issue #22):
+         - Phase 1: Core Functionality (Example/Prediction classes, ProgramOfThought, MIPROv2, Trace collection)
+         - Phase 2: Enhanced Optimization (MultiChainComparison, BestOfN, COPRO)
+
+         Always follow Ruby conventions and ensure code is clean, well-documented, and tested.
+         Use 'shadowenv exec --' for Ruby commands and 'be' for bundle exec.
+
+         For maximum efficiency, whenever you need to perform multiple independent operations, invoke all relevant tools simultaneously rather than sequentially.
+       allowed_tools:
+         - Read
+         - Edit
+         - Write
+         - Bash
+
+     feature_implementer:
+       description: "Feature developer implementing specific DSPy components, utilities, and integrations"
+       directory: ./lib/desiru
+       model: opus
+       connections: [test_specialist]
+       prompt: |
+         You are a feature developer specializing in implementing specific DSPy components. Your focus areas:
+
+         - Implementing modules: Refine, ChainOfThoughtWithHint, streaming support
+         - Building utilities: data loaders, metrics system, serialization
+         - Creating multi-provider LLM abstractions
+         - Adding advanced features like suggestions system
+
+         Current priority features (from roadmap issue #22):
+         - Medium priority: Refine module, ChainOfThoughtWithHint, advanced metrics
+         - Utilities: Data loaders (CSV, JSON), streaming support, serialization
+
+         Ensure all implementations follow existing patterns and are thoroughly tested.
+         Use 'shadowenv exec --' for Ruby commands and 'be' for bundle exec.
+
+         For maximum efficiency, whenever you need to perform multiple independent operations, invoke all relevant tools simultaneously rather than sequentially.
+       allowed_tools:
+         - Read
+         - Edit
+         - Write
+         - Bash
+
+     test_specialist:
+       description: "Testing expert ensuring comprehensive test coverage and quality assurance"
+       directory: ./spec
+       model: sonnet
+       prompt: |
+         You are the testing specialist for Desiru. Your responsibilities include:
+
+         - Writing comprehensive RSpec tests for all new features
+         - Ensuring test coverage for core functionality
+         - Creating integration tests for DSPy workflows
+         - Maintaining test quality and performance
+         - Running test suites and fixing test failures
+
+         CRITICAL: This project uses RSpec exclusively. NEVER use Minitest or create test/ directories.
+         All tests must be in spec/ directory using RSpec format.
+
+         Key testing priorities:
+         - Round-trip serialization tests
+         - Integration tests for modules and optimizers
+         - Performance benchmarks
+         - Cross-version compatibility tests
+
+         Use 'shadowenv exec --' for Ruby commands and 'be rspec' for running tests.
+
+         For maximum efficiency, whenever you need to perform multiple independent operations, invoke all relevant tools simultaneously rather than sequentially.
+       allowed_tools:
+         - Read
+         - Edit
+         - Write
+         - Bash
+
+     release_manager:
+       description: "Release engineering specialist handling versioning, changelogs, and gem publishing"
+       directory: .
+       model: sonnet
+       prompt: |
+         You are the release manager for Desiru. Your responsibilities include:
+
+         - Managing semantic versioning in lib/desiru/version.rb
+         - Updating CHANGELOG.md with new features and fixes
+         - Preparing release documentation
+         - Coordinating gem publishing to RubyGems
+         - Ensuring release readiness (tests pass, docs updated)
+
+         Current version: 0.1.1 (check lib/desiru/version.rb)
+
+         Release process:
+         1. Update version number in lib/desiru/version.rb
+         2. Update CHANGELOG.md with version changes
+         3. Commit changes: git commit -am "Bump version to x.y.z"
+         4. Create version tag: git tag -a vx.y.z -m "Release version x.y.z"
+         5. Push changes and tag: git push && git push --tags
+
+         Use 'shadowenv exec --' for Ruby commands and follow semantic versioning.
+         Coordinate with project_lead before any releases.
+
+         For maximum efficiency, whenever you need to perform multiple independent operations, invoke all relevant tools simultaneously rather than sequentially.
+       allowed_tools:
+         - Read
+         - Edit
+         - Bash
+         - WebSearch
+
+     documentation_writer:
+       description: "Documentation specialist maintaining comprehensive docs and examples"
+       directory: ./docs
+       model: sonnet
+       connections: [project_lead]
+       prompt: |
+         You are the documentation specialist for Desiru. Your focus areas:
+
+         - Maintaining comprehensive API documentation
+         - Creating usage examples and tutorials
+         - Updating feature documentation as new capabilities are added
+         - Ensuring documentation accuracy and clarity
+         - Writing integration guides and best practices
+
+         Key documentation areas:
+         - Feature gap analysis updates
+         - Integration test strategy documentation
+         - API documentation for new modules and optimizers
+         - Usage examples for new features
+
+         Keep documentation current with development progress and ensure examples work.
+
+         For maximum efficiency, whenever you need to perform multiple independent operations, invoke all relevant tools simultaneously rather than sequentially.
+       allowed_tools:
+         - Read
+         - Edit
+         - Write
+         - WebSearch
@@ -0,0 +1,231 @@
+ # frozen_string_literal: true
+
+ module Desiru
+   module Core
+     class CompilationResult
+       attr_reader :program, :metrics, :traces, :metadata
+
+       def initialize(program:, metrics: {}, traces: [], metadata: {})
+         @program = program
+         @metrics = metrics
+         @traces = traces
+         @metadata = metadata
+       end
+
+       def success?
+         @metadata[:success] != false
+       end
+
+       def optimization_score
+         @metrics[:optimization_score] || 0.0
+       end
+
+       def to_h
+         {
+           program: @program.to_h,
+           metrics: @metrics,
+           traces_count: @traces.size,
+           metadata: @metadata
+         }
+       end
+     end
+
+     class Compiler
+       attr_reader :optimizer, :trace_collector, :config
+
+       def initialize(optimizer: nil, trace_collector: nil, config: {})
+         @optimizer = optimizer
+         @trace_collector = trace_collector || Core.trace_collector
+         @config = default_config.merge(config)
+         @compilation_stack = []
+       end
+
+       def compile(program, training_set = [])
+         start_compilation(program)
+         modules_traced = false
+
+         begin
+           # Clear previous traces if configured
+           @trace_collector.clear if @config[:clear_traces]
+
+           # Enable tracing for all modules
+           enable_module_tracing(program)
+           modules_traced = true
+
+           # Run optimizer if provided
+           if @optimizer
+             if @optimizer.respond_to?(:compile)
+               # MIPROv2 style optimizer
+               optimized_program = @optimizer.compile(program, trainset: training_set)
+             elsif @optimizer.respond_to?(:optimize)
+               # Generic optimizer
+               optimized_program = @optimizer.optimize(program, training_set)
+             else
+               raise ArgumentError, "Optimizer must implement either compile or optimize method"
+             end
+           else
+             # Basic compilation without optimization
+             optimized_program = compile_without_optimization(program, training_set)
+           end
+
+           # Collect compilation metrics
+           metrics = collect_metrics(program, optimized_program, training_set)
+
+           # Get relevant traces
+           traces = @trace_collector.traces.dup
+
+           end_compilation(
+             program: optimized_program,
+             metrics: metrics,
+             traces: traces,
+             metadata: { success: true, optimizer: @optimizer&.class&.name }
+           )
+         rescue StandardError => e
+           end_compilation(
+             program: program,
+             metrics: {},
+             traces: @trace_collector.respond_to?(:traces) ? @trace_collector.traces.dup : [],
+             metadata: { success: false, error: e.message, error_class: e.class.name }
+           )
+         ensure
+           disable_module_tracing(program) if @config[:restore_trace_state] && modules_traced
+         end
+       end
+
+       def compile_module(mod, examples = [])
+         # Compile individual module with examples
+         return mod if examples.empty?
+
+         # Extract demonstrations from successful examples
+         demos = examples.select { |ex| ex.is_a?(Example) }.take(@config[:max_demos])
+
+         # Create new module instance with demos
+         mod.with_demos(demos)
+       end
+
+       private
+
+       def default_config
+         {
+           clear_traces: true,
+           restore_trace_state: true,
+           max_demos: 5,
+           evaluate_metrics: true
+         }
+       end
+
+       def start_compilation(program)
+         @compilation_stack.push({
+           program: program,
+           start_time: Time.now
+         })
+       end
+
+       def end_compilation(program:, metrics:, traces:, metadata:)
+         compilation_data = @compilation_stack.pop
+         duration = Time.now - compilation_data[:start_time]
+
+         CompilationResult.new(
+           program: program,
+           metrics: metrics.merge(compilation_duration: duration),
+           traces: traces,
+           metadata: metadata
+         )
+       end
+
+       def enable_module_tracing(program)
+         return unless program.respond_to?(:modules)
+
+         program.modules.each do |mod|
+           mod.enable_trace! if mod.respond_to?(:enable_trace!)
+         end
+       end
+
+       def disable_module_tracing(program)
+         return unless program.respond_to?(:modules)
+
+         program.modules.each do |mod|
+           mod.disable_trace! if mod.respond_to?(:disable_trace!)
+         end
+       end
+
+       def compile_without_optimization(program, training_set)
+         # Basic compilation: collect examples as demonstrations
+         return program if training_set.empty? || !program.respond_to?(:modules)
+
+         # Create a copy of the program
+         compiled_program = program.dup
+
+         # Update modules with training examples if program supports it
+         if compiled_program.respond_to?(:modules) && compiled_program.respond_to?(:update_module)
+           compiled_program.modules.each do |mod|
+             next unless mod.respond_to?(:signature) && mod.signature.respond_to?(:input_fields)
+
+             relevant_examples = training_set.select do |ex|
+               ex.respond_to?(:keys) && ex.keys.any? { |k| mod.signature.input_fields.key?(k) }
+             end
+
+             compiled_module = compile_module(mod, relevant_examples)
+             compiled_program.update_module(mod.class, compiled_module) if compiled_module
+           end
+         end
+
+         compiled_program
+       end
+
+       def collect_metrics(original_program, optimized_program, training_set)
+         return { compilation_duration: 0 } unless @config[:evaluate_metrics]
+
+         metrics = {
+           training_set_size: training_set.size,
+           traces_collected: @trace_collector.size
+         }
+
+         # Add module counts if available
+         metrics[:original_modules_count] = original_program.modules.size if original_program.respond_to?(:modules)
+
+         metrics[:optimized_modules_count] = optimized_program.modules.size if optimized_program.respond_to?(:modules)
+
+         # Add success rate if traces available
+         if @trace_collector.size.positive?
+           success_rate = @trace_collector.successful.size.to_f / @trace_collector.size
+           metrics[:success_rate] = success_rate
+           metrics[:optimization_score] = success_rate
+         end
+
+         metrics
+       end
+     end
+
+     class CompilerBuilder
+       def initialize
+         @optimizer = nil
+         @trace_collector = nil
+         @config = {}
+       end
+
+       def with_optimizer(optimizer)
+         @optimizer = optimizer
+         self
+       end
+
+       def with_trace_collector(collector)
+         @trace_collector = collector
+         self
+       end
+
+       def with_config(config)
+         @config.merge!(config)
+         self
+       end
+
+       def build
+         Compiler.new(
+           optimizer: @optimizer,
+           trace_collector: @trace_collector,
+           config: @config
+         )
+       end
+     end
+   end
+ end
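
For orientation, a minimal sketch of how the Compiler and CompilerBuilder defined above fit together. `program`, `optimizer`, and `training_set` are placeholders; everything else uses only methods shown in this file (builder chaining, `Compiler#compile`, and the `CompilationResult` readers).

    # Minimal sketch; `optimizer`, `program`, and `training_set` are placeholders.
    compiler = Desiru::Core::CompilerBuilder.new
      .with_optimizer(optimizer)        # optional; must respond to #compile or #optimize
      .with_config(max_demos: 3)        # merged over default_config
      .build

    result = compiler.compile(program, training_set)
    if result.success?
      result.optimization_score              # success rate derived from collected traces
      result.metrics[:compilation_duration]  # added by end_compilation
    else
      result.metadata[:error]                # message captured by the rescue branch
    end
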
@@ -0,0 +1,96 @@
+ # frozen_string_literal: true
+
+ module Desiru
+   module Core
+     class Example
+       attr_reader :inputs, :labels
+
+       def initialize(**kwargs)
+         @data = kwargs
+         @inputs = {}
+         @labels = {}
+
+         kwargs.each do |key, value|
+           if key.to_s.end_with?('_input')
+             @inputs[key.to_s.sub(/_input$/, '').to_sym] = value
+           elsif key.to_s.end_with?('_output')
+             @labels[key.to_s.sub(/_output$/, '').to_sym] = value
+           else
+             @inputs[key] = value
+           end
+         end
+       end
+
+       def [](key)
+         @data[key]
+       end
+
+       def []=(key, value)
+         @data[key] = value
+         update_inputs_and_labels(key, value)
+       end
+
+       def keys
+         @data.keys
+       end
+
+       def values
+         @data.values
+       end
+
+       def to_h
+         @data.dup
+       end
+
+       def with_inputs(**new_inputs)
+         merged_data = @data.dup
+         new_inputs.each do |key, value|
+           input_key = key.to_s.end_with?('_input') ? key : :"#{key}_input"
+           merged_data[input_key] = value
+         end
+         self.class.new(**merged_data)
+       end
+
+       def method_missing(method_name, *args, &)
+         if @data.key?(method_name)
+           @data[method_name]
+         elsif method_name.to_s.end_with?('=')
+           key = method_name.to_s.chop.to_sym
+           self[key] = args.first
+         else
+           super
+         end
+       end
+
+       def respond_to_missing?(method_name, include_private = false)
+         @data.key?(method_name) || method_name.to_s.end_with?('=') || super
+       end
+
+       def ==(other)
+         return false unless other.is_a?(self.class)
+
+         @data == other.instance_variable_get(:@data)
+       end
+
+       def hash
+         @data.hash
+       end
+
+       def inspect
+         "#<#{self.class.name} inputs=#{@inputs.inspect} labels=#{@labels.inspect}>"
+       end
+
+       private
+
+       def update_inputs_and_labels(key, value)
+         if key.to_s.end_with?('_input')
+           @inputs[key.to_s.sub(/_input$/, '').to_sym] = value
+         elsif key.to_s.end_with?('_output')
+           @labels[key.to_s.sub(/_output$/, '').to_sym] = value
+         else
+           @inputs[key] = value
+         end
+       end
+     end
+   end
+ end
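
And a short sketch of the Example container above in use. The question/answer field names are illustrative; the `_input`/`_output` suffix routing, `#inputs`, `#labels`, `#with_inputs`, and the dynamic accessors all come from the class as written.

    ex = Desiru::Core::Example.new(question_input: "2 + 2?", answer_output: "4")
    ex.inputs               # => { question: "2 + 2?" }
    ex.labels               # => { answer: "4" }
    ex.question_input       # => "2 + 2?" (resolved via method_missing)
    ex[:answer_output]      # => "4"

    ex2 = ex.with_inputs(question: "3 + 3?")  # returns a new Example with question_input replaced
    ex2.inputs[:question]   # => "3 + 3?"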