dspy 0.31.1 → 0.32.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 584b43a98aa27dcc4bcdb83c64f48b4c3d0f4b67453abfe9a0942cbc30b14e7d
-  data.tar.gz: ab2b58be33ef10e36d1825c371e29783a7a3cea72bfb34b765b9aeab5fb4644f
+  metadata.gz: a3f15cdf0298a37b8ddf1ffb484e15e01e9bb8f794154d2a993dd91beaa4d22e
+  data.tar.gz: f6d0376e92a84d0086257546111e3f456043d814f9dd051a18ce84569e30ff1e
 SHA512:
-  metadata.gz: 6671b0cca0709aa9dea2eaf46abbbc2366d0779a89139377b6840279afa767e659bf635e8314a0a3ca45bc3f476200b7d430f74e83b0af08a085ae221a97c7c5
-  data.tar.gz: 1f8aaba8e3e2f16e1bd0eb81f9694b677f20cbeb7d0dadbacc9571613076ab29bc44991c79f3c5908e821397b2e0616e04801704f4ced5e0e5a8e9eed4f84f4f
+  metadata.gz: 5c26922fcf7fd867b6469bdcae24f3c2937a2f2e89b0583026a478c766c4138896709d98d598814d09220a8becd7f624bc824ad3a9f569bbac60abf7ec4e49d7
+  data.tar.gz: a251452f69f157a026e0c23bc912f9ddc2252d4e977e2ebbc3f2276ffd293fe7ff4af7aaa97c7807c21e5cdd805fd50d54046016c44664fb153b87788e5a9a16
data/README.md CHANGED
@@ -13,12 +13,10 @@
 >
 > If you want to contribute, feel free to reach out to me to coordinate efforts: hey at vicente.services
 >
-> And, yes, this is 100% a legit project. :)
-
 
 **Build reliable LLM applications in idiomatic Ruby using composable, type-safe modules.**
 
-DSPy.rb is the Ruby-first surgical port of Stanford's [DSPy framework](https://github.com/stanfordnlp/dspy). It delivers structured LLM programming, prompt engineering, and context engineering in the language we love. Instead of wrestling with brittle prompt strings, you define typed signatures in idiomatic Ruby and compose workflows and agents that actually behave.
+DSPy.rb is the Ruby-first surgical port of Stanford's [DSPy paradigm](https://github.com/stanfordnlp/dspy). It delivers structured LLM programming, prompt engineering, and context engineering in the language we love. Instead of wrestling with brittle prompt strings, you define typed signatures in idiomatic Ruby and compose workflows and agents that actually behave.
 
 **Prompts are just functions.** Traditional prompting is like writing code with string concatenation: it works until it doesn't. DSPy.rb brings you the programming approach pioneered by [dspy.ai](https://dspy.ai/): define modular signatures and let the framework deal with the messy bits.
 
@@ -104,6 +102,7 @@ DSPy.rb ships multiple gems from this monorepo so you can opt into features with
 | `dspy-openai` | Packages the OpenAI/OpenRouter/Ollama adapters plus the official SDK guardrails. Install whenever you call `openai/*`, `openrouter/*`, or `ollama/*`. [Adapter README](https://github.com/vicentereig/dspy.rb/blob/main/lib/dspy/openai/README.md) | **Stable** (v1.0.0) |
 | `dspy-anthropic` | Claude adapters, streaming, and structured-output helpers behind the official `anthropic` SDK. [Adapter README](https://github.com/vicentereig/dspy.rb/blob/main/lib/dspy/anthropic/README.md) | **Stable** (v1.0.0) |
 | `dspy-gemini` | Gemini adapters with multimodal + tool-call support via `gemini-ai`. [Adapter README](https://github.com/vicentereig/dspy.rb/blob/main/lib/dspy/gemini/README.md) | **Stable** (v1.0.0) |
+| `dspy-ruby_llm` | Unified access to 12+ LLM providers (OpenAI, Anthropic, Gemini, Bedrock, Ollama, DeepSeek, etc.) via [RubyLLM](https://rubyllm.com). [Adapter README](https://github.com/vicentereig/dspy.rb/blob/main/lib/dspy/ruby_llm/README.md) | **Stable** (v0.1.0) |
 | `dspy-code_act` | Think-Code-Observe agents that synthesize and execute Ruby safely. (Add the gem or set `DSPY_WITH_CODE_ACT=1` before requiring `dspy/code_act`.) | **Stable** (v1.0.0) |
 | `dspy-datasets` | Dataset helpers plus Parquet/Polars tooling for richer evaluation corpora. (Toggle via `DSPY_WITH_DATASETS`.) | **Stable** (v1.0.0) |
 | `dspy-evals` | High-throughput evaluation harness with metrics, callbacks, and regression fixtures. (Toggle via `DSPY_WITH_EVALS`.) | **Stable** (v1.0.0) |
@@ -2,6 +2,6 @@
 
 module DSPy
   class Evals
-    VERSION = '1.0.0'
+    VERSION = '1.0.1'
   end
 end
@@ -10,10 +10,11 @@ module DSPy
     'anthropic' => { class_name: 'DSPy::Anthropic::LM::Adapters::AnthropicAdapter', gem_name: 'dspy-anthropic' },
     'ollama' => { class_name: 'DSPy::OpenAI::LM::Adapters::OllamaAdapter', gem_name: 'dspy-openai' },
     'gemini' => { class_name: 'DSPy::Gemini::LM::Adapters::GeminiAdapter', gem_name: 'dspy-gemini' },
-    'openrouter' => { class_name: 'DSPy::OpenAI::LM::Adapters::OpenRouterAdapter', gem_name: 'dspy-openai' }
+    'openrouter' => { class_name: 'DSPy::OpenAI::LM::Adapters::OpenRouterAdapter', gem_name: 'dspy-openai' },
+    'ruby_llm' => { class_name: 'DSPy::RubyLLM::LM::Adapters::RubyLLMAdapter', gem_name: 'dspy-ruby_llm' }
   }.freeze
 
-  PROVIDERS_WITH_EXTRA_OPTIONS = %w[openai anthropic ollama gemini openrouter].freeze
+  PROVIDERS_WITH_EXTRA_OPTIONS = %w[openai anthropic ollama gemini openrouter ruby_llm].freeze
 
   class AdapterData < Data.define(:class_name, :gem_name)
     def self.from_prefix(provider_prefix)
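The registry above maps provider prefixes to an `AdapterData` value object built on Ruby 3.2's `Data.define`. As a minimal standalone sketch of that lookup shape (the registry entry is copied from the diff; the `adapter_for` helper name is invented for illustration):

```ruby
# Value object with two read-only attributes, mirroring AdapterData in the diff.
AdapterData = Data.define(:class_name, :gem_name)

# A one-entry registry, copied from the dspy diff above.
REGISTRY = {
  'ruby_llm' => { class_name: 'DSPy::RubyLLM::LM::Adapters::RubyLLMAdapter', gem_name: 'dspy-ruby_llm' }
}.freeze

# Hypothetical helper: look up a prefix and build the value object,
# raising on unknown prefixes via Hash#fetch's block form.
def adapter_for(prefix)
  attrs = REGISTRY.fetch(prefix) { raise ArgumentError, "unknown provider prefix: #{prefix}" }
  AdapterData.new(**attrs)
end

data = adapter_for('ruby_llm')
puts data.gem_name # dspy-ruby_llm
```

`Data.define` instances are frozen, so the registry hands out immutable adapter descriptors.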
@@ -26,29 +26,33 @@ rescue LoadError
   end
 end
 
-  class ObservationType < T::Enum
-    enums do
-      Generation = new('generation')
-      Agent = new('agent')
-      Tool = new('tool')
-      Chain = new('chain')
-      Retriever = new('retriever')
-      Embedding = new('embedding')
-      Evaluator = new('evaluator')
-      Span = new('span')
-      Event = new('event')
-    end
+  # Guard against double-loading with Zeitwerk/Rails autoloader
+  # See: https://github.com/vicentereig/dspy.rb/issues/190
+  unless defined?(DSPy::ObservationType)
+    class ObservationType < T::Enum
+      enums do
+        Generation = new('generation')
+        Agent = new('agent')
+        Tool = new('tool')
+        Chain = new('chain')
+        Retriever = new('retriever')
+        Embedding = new('embedding')
+        Evaluator = new('evaluator')
+        Span = new('span')
+        Event = new('event')
+      end
 
-    def self.for_module_class(_module_class)
-      Span
-    end
+      def self.for_module_class(_module_class)
+        Span
+      end
 
-    def langfuse_attribute
-      ['langfuse.observation.type', serialize]
-    end
+      def langfuse_attribute
+        ['langfuse.observation.type', serialize]
+      end
 
-    def langfuse_attributes
-      { 'langfuse.observation.type' => serialize }
+      def langfuse_attributes
+        { 'langfuse.observation.type' => serialize }
+      end
     end
   end
 end
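The `unless defined?(...)` wrapper introduced above is a plain idempotent-load guard: if a Rails/Zeitwerk reloader evaluates the file a second time, the class body is skipped rather than re-running `T::Enum`'s constant definitions. A dependency-free sketch of the same pattern (the `Demo::Status` names are invented for illustration):

```ruby
# First load: the constant is undefined, so the body runs.
module Demo
  unless defined?(Demo::Status)
    class Status
      VALUES = %w[open closed].freeze
    end
  end
end

# Simulated reload: defined?(Demo::Status) is now truthy, so the
# guarded body is skipped and the constants are not redefined.
module Demo
  unless defined?(Demo::Status)
    class Status
      VALUES = %w[reopened].freeze # never reached on reload
    end
  end
end

puts Demo::Status::VALUES.inspect # ["open", "closed"]
```

Without the guard, a reload would hit `T::Enum`'s one-time initialization again, which is the failure mode issue #190 describes.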
@@ -0,0 +1,24 @@
+# frozen_string_literal: true
+
+require 'dspy/lm/errors'
+
+module DSPy
+  module RubyLLM
+    class Guardrails
+      SUPPORTED_RUBY_LLM_VERSIONS = "~> 1.3".freeze
+
+      def self.ensure_ruby_llm_installed!
+        require 'ruby_llm'
+
+        spec = Gem.loaded_specs["ruby_llm"]
+        unless spec && Gem::Requirement.new(SUPPORTED_RUBY_LLM_VERSIONS).satisfied_by?(spec.version)
+          msg = <<~MSG
+            DSPy requires the `ruby_llm` gem #{SUPPORTED_RUBY_LLM_VERSIONS}.
+            Please install or upgrade it with `bundle add ruby_llm --version "#{SUPPORTED_RUBY_LLM_VERSIONS}"`.
+          MSG
+          raise DSPy::LM::UnsupportedVersionError, msg
+        end
+      end
+    end
+  end
+end
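The guardrail above hinges on `Gem::Requirement#satisfied_by?`, which checks a pessimistic constraint (`"~> 1.3"` means `>= 1.3, < 2.0`) against a concrete `Gem::Version`. A minimal sketch of how that check behaves, outside of DSPy:

```ruby
require 'rubygems'

# Same constraint string the Guardrails class uses.
requirement = Gem::Requirement.new('~> 1.3')

puts requirement.satisfied_by?(Gem::Version.new('1.3.0')) # true
puts requirement.satisfied_by?(Gem::Version.new('1.9.2')) # true  (any 1.x at or above 1.3)
puts requirement.satisfied_by?(Gem::Version.new('2.0.0')) # false (major bump excluded)
puts requirement.satisfied_by?(Gem::Version.new('1.2.9')) # false (below the floor)
```

`Gem.loaded_specs["ruby_llm"]` then supplies the actual loaded version, so the guard fails fast at require time instead of at the first API call.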
@@ -0,0 +1,391 @@
+# frozen_string_literal: true
+
+require 'uri'
+require 'ruby_llm'
+require 'dspy/lm/adapter'
+require 'dspy/lm/vision_models'
+
+require 'dspy/ruby_llm/guardrails'
+DSPy::RubyLLM::Guardrails.ensure_ruby_llm_installed!
+
+module DSPy
+  module RubyLLM
+    module LM
+      module Adapters
+        class RubyLLMAdapter < DSPy::LM::Adapter
+          attr_reader :provider
+
+          # Options that require a scoped context instead of global RubyLLM config
+          SCOPED_OPTIONS = %i[base_url timeout max_retries].freeze
+
+          def initialize(model:, api_key: nil, **options)
+            @api_key = api_key
+            @options = options
+            @structured_outputs_enabled = options.fetch(:structured_outputs, true)
+            @provider_override = options[:provider] # Optional provider override
+
+            # Detect provider eagerly (matches OpenAI/Anthropic/Gemini adapters)
+            @provider = detect_provider(model)
+
+            # Determine if we should use global RubyLLM config or create scoped context
+            @use_global_config = should_use_global_config?(api_key, options)
+
+            super(model: model, api_key: api_key)
+
+            # Only validate API key if not using global config
+            unless @use_global_config
+              validate_api_key_for_provider!(api_key)
+            end
+
+            # Validate base_url if provided
+            validate_base_url!(@options[:base_url])
+          end
+
+          # Returns the context - either scoped or global
+          def context
+            @context ||= @use_global_config ? nil : create_context(@api_key)
+          end
+
+          def chat(messages:, signature: nil, &block)
+            normalized_messages = normalize_messages(messages)
+
+            # Validate vision support if images are present
+            if contains_images?(normalized_messages)
+              validate_vision_support!
+              normalized_messages = format_multimodal_messages(normalized_messages)
+            end
+
+            chat_instance = create_chat_instance
+
+            if block_given?
+              stream_response(chat_instance, normalized_messages, signature, &block)
+            else
+              standard_response(chat_instance, normalized_messages, signature)
+            end
+          rescue ::RubyLLM::UnauthorizedError => e
+            raise DSPy::LM::MissingAPIKeyError.new(provider)
+          rescue ::RubyLLM::RateLimitError => e
+            raise DSPy::LM::AdapterError, "Rate limit exceeded for #{provider}: #{e.message}"
+          rescue ::RubyLLM::ModelNotFoundError => e
+            raise DSPy::LM::AdapterError, "Model not found: #{e.message}. Check available models with RubyLLM.models.all"
+          rescue ::RubyLLM::BadRequestError => e
+            raise DSPy::LM::AdapterError, "Invalid request to #{provider}: #{e.message}"
+          rescue ::RubyLLM::ConfigurationError => e
+            raise DSPy::LM::ConfigurationError, "RubyLLM configuration error: #{e.message}"
+          rescue ::RubyLLM::Error => e
+            raise DSPy::LM::AdapterError, "RubyLLM error (#{provider}): #{e.message}"
+          end
+
+          private
+
+          # Detect provider from RubyLLM's model registry or use explicit override
+          def detect_provider(model_id)
+            return @provider_override.to_s if @provider_override
+
+            model_info = ::RubyLLM.models.find(model_id)
+            model_info.provider.to_s
+          rescue ::RubyLLM::ModelNotFoundError
+            raise DSPy::LM::ConfigurationError,
+                  "Model '#{model_id}' not found in RubyLLM registry. " \
+                  "Use provider: option to specify explicitly, or run RubyLLM.models.refresh!"
+          end
+
+          # Check if we should use RubyLLM's global configuration
+          # Uses global config when no api_key and no provider-specific options provided
+          def should_use_global_config?(api_key, options)
+            api_key.nil? && (options.keys & SCOPED_OPTIONS).empty?
+          end
+
+          # Validate API key for providers that require it
+          def validate_api_key_for_provider!(api_key)
+            # Ollama and some local providers don't require API keys
+            return if provider_allows_no_api_key?
+
+            validate_api_key!(api_key, provider)
+          end
+
+          def provider_allows_no_api_key?
+            %w[ollama gpustack].include?(provider)
+          end
+
+          def validate_base_url!(url)
+            return if url.nil?
+
+            uri = URI.parse(url)
+            unless %w[http https].include?(uri.scheme)
+              raise DSPy::LM::ConfigurationError, "base_url must use http or https scheme"
+            end
+          rescue URI::InvalidURIError
+            raise DSPy::LM::ConfigurationError, "Invalid base_url format: #{url}"
+          end
+
+          def create_context(api_key)
+            ::RubyLLM.context do |config|
+              configure_provider(config, api_key)
+              configure_connection(config)
+            end
+          end
+
+          # Configure RubyLLM using convention: {provider}_api_key and {provider}_api_base
+          # For providers with non-standard auth (bedrock, vertexai), configure RubyLLM globally
+          def configure_provider(config, api_key)
+            key_method = "#{provider}_api_key="
+            config.send(key_method, api_key) if api_key && config.respond_to?(key_method)
+
+            base_method = "#{provider}_api_base="
+            config.send(base_method, @options[:base_url]) if @options[:base_url] && config.respond_to?(base_method)
+          end
+
+          def configure_connection(config)
+            config.request_timeout = @options[:timeout] if @options[:timeout]
+            config.max_retries = @options[:max_retries] if @options[:max_retries]
+          end
+
+          def create_chat_instance
+            chat_options = { model: model }
+
+            # If provider is explicitly overridden, pass it to RubyLLM
+            if @provider_override
+              chat_options[:provider] = @provider_override.to_sym
+              chat_options[:assume_model_exists] = true
+            end
+
+            # Use global RubyLLM config or scoped context
+            if @use_global_config
+              ::RubyLLM.chat(**chat_options)
+            else
+              context.chat(**chat_options)
+            end
+          end
+
+          def standard_response(chat_instance, messages, signature)
+            chat_instance = prepare_chat_instance(chat_instance, messages, signature)
+            content, attachments = prepare_message_content(messages)
+            return build_empty_response unless content
+
+            response = send_message(chat_instance, content, attachments)
+            map_response(response)
+          end
+
+          def stream_response(chat_instance, messages, signature, &block)
+            chat_instance = prepare_chat_instance(chat_instance, messages, signature)
+            content, attachments = prepare_message_content(messages)
+            return build_empty_response unless content
+
+            response = send_message(chat_instance, content, attachments, &block)
+            map_response(response)
+          end
+
+          # Common setup: apply system instructions, build conversation history, and optional schema
+          def prepare_chat_instance(chat_instance, messages, signature)
+            # First, handle system messages via with_instructions for proper system prompt handling
+            system_message = messages.find { |m| m[:role] == 'system' }
+            chat_instance = chat_instance.with_instructions(system_message[:content]) if system_message
+
+            # Build conversation history by adding all non-system messages except the last user message
+            # The last user message will be passed to ask() to get the response
+            messages_to_add = messages.reject { |m| m[:role] == 'system' }
+
+            # Find the index of the last user message
+            last_user_index = messages_to_add.rindex { |m| m[:role] == 'user' }
+
+            if last_user_index && last_user_index > 0
+              # Add all messages before the last user message to build history
+              messages_to_add[0...last_user_index].each do |msg|
+                content, attachments = extract_content_and_attachments(msg)
+                next unless content
+
+                # Add message with appropriate role
+                if attachments.any?
+                  chat_instance.add_message(role: msg[:role].to_sym, content: content, attachments: attachments)
+                else
+                  chat_instance.add_message(role: msg[:role].to_sym, content: content)
+                end
+              end
+            end
+
+            if signature && @structured_outputs_enabled
+              schema = build_json_schema(signature)
+              chat_instance = chat_instance.with_schema(schema) if schema
+            end
+
+            chat_instance
+          end
+
+          # Extract content from last user message
+          # RubyLLM's Chat API builds conversation history via add_message() for previous turns,
+          # and the last user message is passed to ask() to get the response.
+          def prepare_message_content(messages)
+            last_user_message = messages.reverse.find { |m| m[:role] == 'user' }
+            return [nil, []] unless last_user_message
+
+            extract_content_and_attachments(last_user_message)
+          end
+
+          # Send message with optional streaming block
+          def send_message(chat_instance, content, attachments, &block)
+            kwargs = attachments.any? ? { with: attachments } : {}
+
+            if block_given?
+              chat_instance.ask(content, **kwargs) do |chunk|
+                block.call(chunk.content) if chunk.content
+              end
+            else
+              chat_instance.ask(content, **kwargs)
+            end
+          end
+
+          def extract_content_and_attachments(message)
+            content = message[:content]
+            attachments = []
+
+            if content.is_a?(Array)
+              text_parts = []
+              content.each do |item|
+                case item[:type]
+                when 'text'
+                  text_parts << item[:text]
+                when 'image'
+                  # Extract image URL or path
+                  image = item[:image]
+                  if image.respond_to?(:url)
+                    attachments << image.url
+                  elsif image.respond_to?(:path)
+                    attachments << image.path
+                  elsif item[:image_url]
+                    attachments << item[:image_url][:url]
+                  end
+                end
+              end
+              content = text_parts.join("\n")
+            end
+
+            [content.to_s, attachments]
+          end
+
+          def map_response(ruby_llm_response)
+            DSPy::LM::Response.new(
+              content: ruby_llm_response.content.to_s,
+              usage: build_usage(ruby_llm_response),
+              metadata: build_metadata(ruby_llm_response)
+            )
+          end
+
+          def build_usage(response)
+            input_tokens = response.input_tokens || 0
+            output_tokens = response.output_tokens || 0
+
+            DSPy::LM::Usage.new(
+              input_tokens: input_tokens,
+              output_tokens: output_tokens,
+              total_tokens: input_tokens + output_tokens
+            )
+          end
+
+          def build_metadata(response)
+            DSPy::LM::ResponseMetadataFactory.create('ruby_llm', {
+              model: response.model_id || model,
+              underlying_provider: provider
+            })
+          end
+
+          def build_empty_response
+            DSPy::LM::Response.new(
+              content: '',
+              usage: DSPy::LM::Usage.new(input_tokens: 0, output_tokens: 0, total_tokens: 0),
+              metadata: DSPy::LM::ResponseMetadataFactory.create('ruby_llm', {
+                model: model,
+                underlying_provider: provider
+              })
+            )
+          end
+
+          def build_json_schema(signature)
+            return nil unless signature.respond_to?(:json_schema)
+
+            schema = signature.json_schema
+            normalize_schema(schema)
+          end
+
+          def normalize_schema(schema)
+            return schema unless schema.is_a?(Hash)
+
+            @normalized_schema_cache ||= {}
+            cache_key = schema.hash
+
+            @normalized_schema_cache[cache_key] ||= begin
+              duped = deep_dup(schema)
+              add_additional_properties_false(duped)
+              duped.freeze
+            end
+          end
+
+          def add_additional_properties_false(schema)
+            return unless schema.is_a?(Hash)
+
+            if schema[:type] == 'object' || schema['type'] == 'object'
+              schema[:additionalProperties] = false
+              schema['additionalProperties'] = false
+            end
+
+            # Recursively process nested schemas
+            schema.each_value { |v| add_additional_properties_false(v) if v.is_a?(Hash) }
+
+            # Handle arrays with items
+            if schema[:items]
+              add_additional_properties_false(schema[:items])
+            elsif schema['items']
+              add_additional_properties_false(schema['items'])
+            end
+          end
+
+          def deep_dup(obj)
+            case obj
+            when Hash
+              obj.transform_values { |v| deep_dup(v) }
+            when Array
+              obj.map { |v| deep_dup(v) }
+            else
+              obj
+            end
+          end
+
+          def validate_vision_support!
+            # RubyLLM handles vision validation internally, but we can add
+            # additional DSPy-specific validation here if needed
+            DSPy::LM::VisionModels.validate_vision_support!(provider, model)
+          rescue DSPy::LM::IncompatibleImageFeatureError
+            # If DSPy doesn't know about the model, let RubyLLM handle it
+            # RubyLLM has its own model registry with capability detection
+          end
+
+          def format_multimodal_messages(messages)
+            messages.map do |msg|
+              if msg[:content].is_a?(Array)
+                formatted_content = msg[:content].map do |item|
+                  case item[:type]
+                  when 'text'
+                    { type: 'text', text: item[:text] }
+                  when 'image'
+                    # Validate and format image for provider
+                    image = item[:image]
+                    if image.respond_to?(:validate_for_provider!)
+                      image.validate_for_provider!(provider)
+                    end
+                    item
+                  else
+                    item
+                  end
+                end
+
+                { role: msg[:role], content: formatted_content }
+              else
+                msg
+              end
+            end
+          end
+        end
+      end
+    end
+  end
+end
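Before handing a signature's JSON schema to RubyLLM's `with_schema`, the adapter walks the schema and stamps `additionalProperties: false` on every object node, recursing through nested values and array `items`. A simplified standalone sketch of that walk (symbol keys only, without the adapter's caching, freezing, or string-key handling):

```ruby
# Recursively mark every object node as closed (no extra properties allowed),
# mirroring add_additional_properties_false in the adapter above.
def add_additional_properties_false(schema)
  return unless schema.is_a?(Hash)

  schema[:additionalProperties] = false if schema[:type] == 'object'

  # Recurse into nested hashes (properties, item definitions, etc.)
  schema.each_value { |v| add_additional_properties_false(v) if v.is_a?(Hash) }

  # Recurse into array item schemas
  add_additional_properties_false(schema[:items]) if schema[:items]
end

schema = {
  type: 'object',
  properties: {
    name: { type: 'string' },
    tags: { type: 'array', items: { type: 'object', properties: { id: { type: 'integer' } } } }
  }
}

add_additional_properties_false(schema)
puts schema[:additionalProperties]                             # false
puts schema[:properties][:tags][:items][:additionalProperties] # false
```

Closing the schemas this way is what lets providers with strict structured-output modes reject hallucinated extra keys instead of silently accepting them.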
@@ -0,0 +1,7 @@
+# frozen_string_literal: true
+
+module DSPy
+  module RubyLLM
+    VERSION = '0.1.0'
+  end
+end
@@ -0,0 +1,8 @@
+# frozen_string_literal: true
+
+require 'dspy/ruby_llm/version'
+
+require 'dspy/ruby_llm/guardrails'
+DSPy::RubyLLM::Guardrails.ensure_ruby_llm_installed!
+
+require 'dspy/ruby_llm/lm/adapters/ruby_llm_adapter'
@@ -66,6 +66,8 @@ module DSPy
     tool :get_issue, description: "Get details of a specific GitHub issue"
     tool :get_pr, description: "Get details of a specific GitHub pull request"
     tool :api_request, description: "Make an arbitrary GitHub API request"
+    tool :traffic_views, description: "Get repository traffic views (last 14 days by default)"
+    tool :traffic_clones, description: "Get repository traffic clones (last 14 days by default)"
 
     sig { void }
     def initialize
@@ -216,6 +218,40 @@ module DSPy
       "Error making API request: #{e.message}"
     end
 
+    sig { params(repo: String, per: T.nilable(String)).returns(String) }
+    def traffic_views(repo:, per: nil)
+      endpoint = "repos/#{repo}/traffic/views"
+      cmd = build_gh_command(['api', shell_escape(endpoint)])
+      cmd << ['-f', "per=#{shell_escape(per)}"] if per
+
+      result = execute_command(cmd.flatten.join(' '))
+
+      if result[:success]
+        parse_traffic(result[:output], label: 'Views')
+      else
+        "Failed to fetch traffic views: #{result[:error]}"
+      end
+    rescue => e
+      "Error fetching traffic views: #{e.message}"
+    end
+
+    sig { params(repo: String, per: T.nilable(String)).returns(String) }
+    def traffic_clones(repo:, per: nil)
+      endpoint = "repos/#{repo}/traffic/clones"
+      cmd = build_gh_command(['api', shell_escape(endpoint)])
+      cmd << ['-f', "per=#{shell_escape(per)}"] if per
+
+      result = execute_command(cmd.flatten.join(' '))
+
+      if result[:success]
+        parse_traffic(result[:output], label: 'Clones')
+      else
+        "Failed to fetch traffic clones: #{result[:error]}"
+      end
+    rescue => e
+      "Error fetching traffic clones: #{e.message}"
+    end
+
     private
 
     sig { params(args: T::Array[String]).returns(T::Array[String]) }
@@ -225,6 +261,7 @@ module DSPy
 
     sig { params(str: String).returns(String) }
     def shell_escape(str)
+      return '""' if str.nil?
       "\"#{str.gsub(/"/, '\\"')}\""
     end
 
@@ -240,6 +277,29 @@ module DSPy
       }
     end
 
+    sig { params(json_output: String, label: String).returns(String) }
+    def parse_traffic(json_output, label:)
+      data = JSON.parse(json_output)
+
+      total = data['count'] || 0
+      uniques = data['uniques'] || 0
+      series = data[label.downcase] || data['views'] || []
+
+      lines = []
+      lines << "#{label}: #{total} total (#{uniques} unique) over the last #{series.length} data points"
+
+      series.each do |point|
+        ts = point['timestamp'] || point['timestamp'.to_sym]
+        count = point['count'] || 0
+        uniq = point['uniques'] || 0
+        lines << "  #{ts}: #{count} (#{uniq} unique)"
+      end
+
+      lines.join("\n")
+    rescue JSON::ParserError => e
+      "Failed to parse traffic data: #{e.message}"
+    end
+
     sig { params(json_output: String).returns(String) }
     def parse_issue_list(json_output)
       issues = JSON.parse(json_output)
@@ -327,4 +387,4 @@ module DSPy
       end
     end
   end
-end
+end
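The `parse_traffic` helper added above flattens GitHub's `/traffic/views` and `/traffic/clones` JSON (`count`, `uniques`, and a per-day series) into a readable summary. A rough standalone version of that helper, run against a hand-made payload shaped like GitHub's response (the sample data is invented for illustration):

```ruby
require 'json'

# Simplified counterpart of the diff's parse_traffic: summary line first,
# then one indented line per data point in the series.
def parse_traffic(json_output, label:)
  data = JSON.parse(json_output)
  total   = data['count'] || 0
  uniques = data['uniques'] || 0
  series  = data[label.downcase] || data['views'] || []

  lines = ["#{label}: #{total} total (#{uniques} unique) over the last #{series.length} data points"]
  series.each do |point|
    lines << "  #{point['timestamp']}: #{point['count'] || 0} (#{point['uniques'] || 0} unique)"
  end
  lines.join("\n")
rescue JSON::ParserError => e
  "Failed to parse traffic data: #{e.message}"
end

payload = {
  'count' => 42, 'uniques' => 10,
  'views' => [{ 'timestamp' => '2025-01-01T00:00:00Z', 'count' => 42, 'uniques' => 10 }]
}.to_json

puts parse_traffic(payload, label: 'Views')
```

Because the series key is looked up as `label.downcase` with a `'views'` fallback, the same helper serves both tools; malformed JSON degrades to an error string rather than raising into the agent loop.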
data/lib/dspy/version.rb CHANGED
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module DSPy
-  VERSION = "0.31.1"
+  VERSION = "0.32.0"
 end
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: dspy
 version: !ruby/object:Gem::Version
-  version: 0.31.1
+  version: 0.32.0
 platform: ruby
 authors:
 - Vicente Reig Rincón de Arellano
@@ -210,6 +210,10 @@ files:
 - lib/dspy/reflection_lm.rb
 - lib/dspy/registry/registry_manager.rb
 - lib/dspy/registry/signature_registry.rb
+- lib/dspy/ruby_llm.rb
+- lib/dspy/ruby_llm/guardrails.rb
+- lib/dspy/ruby_llm/lm/adapters/ruby_llm_adapter.rb
+- lib/dspy/ruby_llm/version.rb
 - lib/dspy/schema.rb
 - lib/dspy/schema/sorbet_json_schema.rb
 - lib/dspy/schema/sorbet_toon_adapter.rb