rubycanusellm 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: 7068367ffebce32580d14b9eaf62427bb7bd268fa6a0a0ad3832428e5598e351
+   data.tar.gz: ed05039faee7b98d6d3a929033484ab5b0a02c6c8fc22cf4b88ed7fe2ba79fa6
+ SHA512:
+   metadata.gz: 253341456facb5a986eede001e24b8d23cacc66de282da73d8f3101221ad497edc33523b18af07bff92c9e1dd025b81a375874fbade5f2f5e82307f87fdb4ec0
+   data.tar.gz: 2dc6f75656c1f7800f50cb7a55debbd29b8ea3230f74959a222c219d7f4757a57614af61b36447880f1b37375f02e57c225c318305eb0115a4c0f27c1363fd8c
data/.rspec ADDED
@@ -0,0 +1,3 @@
+ --format documentation
+ --color
+ --require spec_helper
data/.rubocop.yml ADDED
@@ -0,0 +1,8 @@
+ AllCops:
+   TargetRubyVersion: 3.0
+
+ Style/StringLiterals:
+   EnforcedStyle: double_quotes
+
+ Style/StringLiteralsInInterpolation:
+   EnforcedStyle: double_quotes
data/CHANGELOG.md ADDED
@@ -0,0 +1,16 @@
+ # Changelog
+
+ ## [0.1.0] - 2025-04-01
+
+ ### Added
+
+ - Unified client interface for LLM providers
+ - OpenAI provider (chat completions)
+ - Anthropic provider (chat completions)
+ - Configuration module with validation
+ - Unified Response object with token tracking
+ - Error handling: AuthenticationError, RateLimitError, TimeoutError, ProviderError
+ - CLI with generators:
+   - `generate:config` — scaffolds configuration file
+   - `generate:completion` — scaffolds completion service object
+ - Rails and plain Ruby project detection for generators
data/LICENSE.txt ADDED
@@ -0,0 +1,21 @@
+ The MIT License (MIT)
+
+ Copyright (c) 2026 Juan Manuel Guzman Nava
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in
+ all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ THE SOFTWARE.
data/README.md ADDED
@@ -0,0 +1,161 @@
+ # RubyCanUseLLM
+
+ A unified Ruby client for multiple LLM providers with generators. One interface, every LLM.
+
+ ## The Problem
+
+ Every time a Ruby developer wants to add LLMs to their app, they start from scratch: pick a provider gem, learn its API, write a service object, handle errors, parse responses. Switch providers? Rewrite everything.
+
+ ## The Solution
+
+ RubyCanUseLLM gives you two things:
+
+ 1. **Unified client** — One interface that works the same across OpenAI, Anthropic, and more. Switch providers by changing a string, not your code.
+ 2. **Generators** — Commands that scaffold ready-to-use boilerplate. You don't start from zero; you start with something that works.
+
+ ## Installation
+ ```bash
+ gem install rubycanusellm
+ ```
+
+ Or add to your Gemfile:
+ ```ruby
+ gem "rubycanusellm"
+ ```
+
+ ## Quick Start
+
+ ### 1. Generate configuration
+ ```bash
+ rubycanusellm generate:config
+ ```
+
+ This creates a config file with your provider and API key. In Rails it goes to `config/initializers/rubycanusellm.rb`, otherwise to `config/llm.rb`.
+
+ ### 2. Generate a completion service
+ ```bash
+ rubycanusellm generate:completion
+ ```
+
+ This creates a ready-to-use service object. In Rails it goes to `app/services/`, otherwise to `lib/`.
+
+ ### 3. Use it
+ ```ruby
+ RubyCanUseLLM.configure do |config|
+   config.provider = :openai
+   config.api_key = ENV["LLM_API_KEY"]
+ end
+
+ response = RubyCanUseLLM.chat([
+   { role: :user, content: "What is Ruby?" }
+ ])
+
+ puts response.content
+ puts "Tokens: #{response.total_tokens}"
+ ```
+
+ ### Switch providers in one line
+ ```ruby
+ config.provider = :anthropic
+ ```
+
+ That's it. Same code, different provider.
+
+ ## Supported Providers
+
+ | Provider | Models | Status |
+ |----------|--------|--------|
+ | OpenAI | gpt-4o-mini, gpt-4o, etc. | ✅ |
+ | Anthropic | claude-sonnet-4-20250514, etc. | ✅ |
+
+ ## API Reference
+
+ ### Configuration
+ ```ruby
+ RubyCanUseLLM.configure do |config|
+   config.provider = :openai      # :openai or :anthropic
+   config.api_key = "your-key"    # required
+   config.model = "gpt-4o-mini"   # optional, has sensible defaults
+   config.timeout = 30            # optional, default 30s
+ end
+ ```
+
+ ### Chat
+ ```ruby
+ response = RubyCanUseLLM.chat(messages, **options)
+ ```
+
+ **messages** — Array of hashes with `:role` and `:content`:
+ ```ruby
+ messages = [
+   { role: :system, content: "You are helpful." },
+   { role: :user, content: "Hello" }
+ ]
+ ```
+
+ **options** — Override config per request:
+ ```ruby
+ RubyCanUseLLM.chat(messages, model: "gpt-4o", temperature: 0.5)
+ ```
+
+ ### Response
+ ```ruby
+ response.content        # "Hello! How can I help?"
+ response.model          # "gpt-4o-mini"
+ response.input_tokens   # 10
+ response.output_tokens  # 5
+ response.total_tokens   # 15
+ response.raw            # original provider response
+ ```
+
+ ### Error Handling
+ ```ruby
+ begin
+   RubyCanUseLLM.chat(messages)
+ rescue RubyCanUseLLM::AuthenticationError
+   # invalid API key
+ rescue RubyCanUseLLM::RateLimitError
+   # too many requests
+ rescue RubyCanUseLLM::TimeoutError
+   # request timed out
+ rescue RubyCanUseLLM::ProviderError => e
+   # other provider error
+ end
+ ```
+
+ ## Generators
+
+ | Command | Description |
+ |---------|-------------|
+ | `rubycanusellm generate:config` | Configuration file with provider setup |
+ | `rubycanusellm generate:completion` | Completion service object |
+
+ ## Roadmap
+
+ - [x] Project setup
+ - [x] Configuration module
+ - [x] OpenAI provider
+ - [x] Anthropic provider
+ - [x] `generate:config` command
+ - [x] `generate:completion` command
+ - [x] v0.1.0 release
+ - [ ] Streaming support
+ - [ ] Embeddings + `generate:embedding`
+ - [ ] Mistral and Ollama providers
+ - [ ] Tool calling
+
+ ## Development
+ ```bash
+ git clone https://github.com/mgznv/rubycanusellm.git
+ cd rubycanusellm
+ bin/setup
+ bundle exec rspec
+ ```
+
+ ## Contributing
+
+ Bug reports and pull requests are welcome on GitHub at https://github.com/mgznv/rubycanusellm.
+
+ ## License
+
+ The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
data/Rakefile ADDED
@@ -0,0 +1,12 @@
+ # frozen_string_literal: true
+
+ require "bundler/gem_tasks"
+ require "rspec/core/rake_task"
+
+ RSpec::Core::RakeTask.new(:spec)
+
+ require "rubocop/rake_task"
+
+ RuboCop::RakeTask.new
+
+ task default: %i[spec rubocop]
data/exe/rubycanusellm ADDED
@@ -0,0 +1,7 @@
+ #!/usr/bin/env ruby
+ # frozen_string_literal: true
+
+ require "rubycanusellm"
+ require_relative "../lib/rubycanusellm/cli"
+
+ RubyCanUseLLM::CLI.start(ARGV)
data/lib/rubycanusellm/cli.rb ADDED
@@ -0,0 +1,69 @@
+ # frozen_string_literal: true
+ require "fileutils"
+
+ module RubyCanUseLLM
+   class CLI
+     COMMANDS = {
+       "generate:config" => :generate_config,
+       "generate:completion" => :generate_completion
+     }.freeze
+
+     def self.start(args)
+       command = args.first
+
+       if COMMANDS.key?(command)
+         new.send(COMMANDS[command])
+       else
+         puts "Usage: rubycanusellm <command>"
+         puts ""
+         puts "Commands:"
+         puts "  generate:config      Generate configuration file"
+         puts "  generate:completion  Generate completion service object"
+       end
+     end
+
+     def generate_config
+       if rails?
+         path = "config/initializers/rubycanusellm.rb"
+       else
+         FileUtils.mkdir_p("config")
+         path = "config/llm.rb"
+       end
+
+       write_template("config", path)
+     end
+
+     def generate_completion
+       if rails?
+         FileUtils.mkdir_p("app/services")
+         path = "app/services/completion_service.rb"
+       else
+         FileUtils.mkdir_p("lib")
+         path = "lib/completion_service.rb"
+       end
+
+       write_template("completion", path)
+     end
+
+     private
+
+     def rails?
+       File.exist?("config/application.rb")
+     end
+
+     def write_template(name, destination)
+       if File.exist?(destination)
+         puts "  exists  #{destination}"
+         return
+       end
+
+       template = File.read(template_path(name))
+       File.write(destination, template)
+       puts "  create  #{destination}"
+     end
+
+     def template_path(name)
+       File.join(File.dirname(__FILE__), "templates", "#{name}.rb.tt")
+     end
+   end
+ end
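The generator's skip-if-exists behavior above can be exercised standalone. A minimal sketch (the `write_template` here is a simplified copy that takes the template text directly instead of reading a `.tt` file, and runs in a throwaway temp directory):

```ruby
require "fileutils"
require "tmpdir"

# Simplified version of the CLI's write_template: create the file on the
# first run, refuse to overwrite it on subsequent runs.
def write_template(source_text, destination)
  if File.exist?(destination)
    puts "  exists  #{destination}"
    return
  end

  File.write(destination, source_text)
  puts "  create  #{destination}"
end

Dir.mktmpdir do |dir|
  path = File.join(dir, "llm.rb")
  write_template("# config", path) # first run: creates the file
  write_template("# changed", path) # second run: no-op, original content kept
  puts File.read(path) # => "# config"
end
```

This makes the generators safe to re-run: an existing file is never clobbered.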
data/lib/rubycanusellm/configuration.rb ADDED
@@ -0,0 +1,22 @@
+ # frozen_string_literal: true
+
+ module RubyCanUseLLM
+   class Configuration
+     SUPPORTED_PROVIDERS = %i[openai anthropic].freeze
+
+     attr_accessor :provider, :api_key, :model, :timeout
+
+     def initialize
+       @provider = nil
+       @api_key = nil
+       @model = nil
+       @timeout = 30
+     end
+
+     def validate!
+       raise Error, "provider is required. Use :openai or :anthropic" if provider.nil?
+       raise Error, "api_key is required" if api_key.nil? || api_key.empty?
+       raise Error, "Unknown provider: #{provider}. Supported: #{SUPPORTED_PROVIDERS.join(", ")}" unless SUPPORTED_PROVIDERS.include?(provider)
+     end
+   end
+ end
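A quick sketch of how `validate!` behaves, with the `Configuration` class and its `Error` superclass inlined so the snippet runs on its own (the trailing `Supported: …` suffix of the third message is dropped here for brevity):

```ruby
module RubyCanUseLLM
  class Error < StandardError; end

  # Inlined copy of the Configuration class above.
  class Configuration
    SUPPORTED_PROVIDERS = %i[openai anthropic].freeze

    attr_accessor :provider, :api_key, :model, :timeout

    def initialize
      @timeout = 30
    end

    def validate!
      raise Error, "provider is required. Use :openai or :anthropic" if provider.nil?
      raise Error, "api_key is required" if api_key.nil? || api_key.empty?
      raise Error, "Unknown provider: #{provider}" unless SUPPORTED_PROVIDERS.include?(provider)
    end
  end
end

config = RubyCanUseLLM::Configuration.new

begin
  config.validate! # nothing is set yet, so the first check fires
rescue RubyCanUseLLM::Error => e
  puts e.message # => "provider is required. Use :openai or :anthropic"
end

config.provider = :openai
config.api_key = "test-key"
config.validate! # all checks pass; returns nil, raises nothing
```

Note the check order: a missing provider is reported before an unknown one, so users get the most actionable message first.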
data/lib/rubycanusellm/errors.rb ADDED
@@ -0,0 +1,9 @@
+ # frozen_string_literal: true
+
+ module RubyCanUseLLM
+   class Error < StandardError; end
+   class AuthenticationError < Error; end
+   class RateLimitError < Error; end
+   class TimeoutError < Error; end
+   class ProviderError < Error; end
+ end
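Every specific error inherits from `RubyCanUseLLM::Error`, so callers who don't care which failure occurred can use a single rescue. A standalone sketch (error classes inlined from the file above):

```ruby
module RubyCanUseLLM
  class Error < StandardError; end
  class AuthenticationError < Error; end
  class RateLimitError < Error; end
  class TimeoutError < Error; end
  class ProviderError < Error; end
end

# Rescuing the base class catches any of the gem's errors.
caught = begin
  raise RubyCanUseLLM::RateLimitError, "too many requests"
rescue RubyCanUseLLM::Error => e
  e.class.name
end

puts caught # => "RubyCanUseLLM::RateLimitError"
```

Because everything descends from `StandardError` (not `Exception`), a bare `rescue` also catches these, and interpreter-level errors stay out of the hierarchy.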
data/lib/rubycanusellm/providers/anthropic.rb ADDED
@@ -0,0 +1,96 @@
+ # frozen_string_literal: true
+
+ require "net/http"
+ require "json"
+ require "uri"
+
+ module RubyCanUseLLM
+   module Providers
+     class Anthropic < Base
+       API_URL = "https://api.anthropic.com/v1/messages"
+
+       def chat(messages, **options)
+         system, user_messages = extract_system(messages)
+         body = build_body(system, user_messages, options)
+         response = request(body)
+         parse_response(response)
+       end
+
+       private
+
+       def extract_system(messages)
+         system = nil
+         user_messages = []
+
+         messages.each do |msg|
+           if msg[:role].to_s == "system"
+             system = msg[:content]
+           else
+             user_messages << msg
+           end
+         end
+
+         [system, user_messages]
+       end
+
+       def build_body(system, messages, options)
+         body = {
+           model: options[:model] || config.model || "claude-sonnet-4-20250514",
+           messages: format_messages(messages),
+           max_tokens: options[:max_tokens] || 1024
+         }
+         body[:system] = system if system
+         body
+       end
+
+       def format_messages(messages)
+         messages.map do |msg|
+           { role: msg[:role].to_s, content: msg[:content] }
+         end
+       end
+
+       def request(body)
+         uri = URI(API_URL)
+         http = Net::HTTP.new(uri.host, uri.port)
+         http.use_ssl = true
+         http.read_timeout = config.timeout
+
+         req = Net::HTTP::Post.new(uri)
+         req["x-api-key"] = config.api_key
+         req["anthropic-version"] = "2023-06-01"
+         req["Content-Type"] = "application/json"
+         req.body = body.to_json
+
+         handle_response(http.request(req))
+       rescue Net::ReadTimeout, Net::OpenTimeout
+         raise TimeoutError, "Request to Anthropic timed out after #{config.timeout}s"
+       end
+
+       def handle_response(response)
+         case response.code.to_i
+         when 200
+           JSON.parse(response.body)
+         when 401
+           raise AuthenticationError, "Invalid Anthropic API key"
+         when 429
+           raise RateLimitError, "Anthropic rate limit exceeded"
+         else
+           raise ProviderError, "Anthropic error (#{response.code}): #{response.body}"
+         end
+       end
+
+       def parse_response(data)
+         content = data.dig("content", 0, "text")
+         usage = data["usage"]
+
+         Response.new(
+           content: content,
+           model: data["model"],
+           input_tokens: usage["input_tokens"],
+           output_tokens: usage["output_tokens"],
+           raw: data
+         )
+       end
+     end
+   end
+ end
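The key Anthropic-specific wrinkle above is `extract_system`: Anthropic's Messages API takes the system prompt as a top-level `system` field, not as a message, so the provider splits it out of the unified message array. A standalone sketch of that split (method copied from above; note that if several system messages are present, the last one wins):

```ruby
# Splits the unified message array into [system_prompt, remaining_messages],
# mirroring Anthropic's top-level `system` field.
def extract_system(messages)
  system = nil
  user_messages = []

  messages.each do |msg|
    if msg[:role].to_s == "system"
      system = msg[:content]
    else
      user_messages << msg
    end
  end

  [system, user_messages]
end

system, rest = extract_system([
  { role: :system, content: "Be brief." },
  { role: :user, content: "Hi" }
])

puts system      # => "Be brief."
puts rest.length # => 1
```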
data/lib/rubycanusellm/providers/base.rb ADDED
@@ -0,0 +1,19 @@
+ # frozen_string_literal: true
+
+ module RubyCanUseLLM
+   module Providers
+     class Base
+       def initialize(config)
+         @config = config
+       end
+
+       def chat(messages, **options)
+         raise NotImplementedError, "#{self.class} must implement #chat"
+       end
+
+       private
+
+       attr_reader :config
+     end
+   end
+ end
data/lib/rubycanusellm/providers/openai.rb ADDED
@@ -0,0 +1,77 @@
+ # frozen_string_literal: true
+
+ require "net/http"
+ require "json"
+ require "uri"
+
+ module RubyCanUseLLM
+   module Providers
+     class OpenAI < Base
+       API_URL = "https://api.openai.com/v1/chat/completions"
+
+       def chat(messages, **options)
+         body = build_body(messages, options)
+         response = request(body)
+         parse_response(response)
+       end
+
+       private
+
+       def build_body(messages, options)
+         {
+           model: options[:model] || config.model || "gpt-4o-mini",
+           messages: format_messages(messages),
+           temperature: options[:temperature] || 0.7
+         }
+       end
+
+       def format_messages(messages)
+         messages.map do |msg|
+           { role: msg[:role].to_s, content: msg[:content] }
+         end
+       end
+
+       def request(body)
+         uri = URI(API_URL)
+         http = Net::HTTP.new(uri.host, uri.port)
+         http.use_ssl = true
+         http.read_timeout = config.timeout
+
+         req = Net::HTTP::Post.new(uri)
+         req["Authorization"] = "Bearer #{config.api_key}"
+         req["Content-Type"] = "application/json"
+         req.body = body.to_json
+
+         handle_response(http.request(req))
+       rescue Net::ReadTimeout, Net::OpenTimeout
+         raise TimeoutError, "Request to OpenAI timed out after #{config.timeout}s"
+       end
+
+       def handle_response(response)
+         case response.code.to_i
+         when 200
+           JSON.parse(response.body)
+         when 401
+           raise AuthenticationError, "Invalid OpenAI API key"
+         when 429
+           raise RateLimitError, "OpenAI rate limit exceeded"
+         else
+           raise ProviderError, "OpenAI error (#{response.code}): #{response.body}"
+         end
+       end
+
+       def parse_response(data)
+         choice = data.dig("choices", 0, "message")
+         usage = data["usage"]
+
+         Response.new(
+           content: choice["content"],
+           model: data["model"],
+           input_tokens: usage["prompt_tokens"],
+           output_tokens: usage["completion_tokens"],
+           raw: data
+         )
+       end
+     end
+   end
+ end
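The extraction in `parse_response` above can be sketched against a sample OpenAI-shaped payload (the values below are invented for illustration; only the key names follow the chat-completions response shape):

```ruby
# Sample payload in the shape OpenAI's chat completions endpoint returns.
data = {
  "model" => "gpt-4o-mini",
  "choices" => [
    { "message" => { "role" => "assistant", "content" => "Hi there!" } }
  ],
  "usage" => { "prompt_tokens" => 12, "completion_tokens" => 4 }
}

# Hash#dig walks nested keys and returns nil (instead of raising) if any
# level is missing, which is why parse_response uses it for the choice.
content = data.dig("choices", 0, "message", "content")
usage = data["usage"]

puts content # => "Hi there!"
puts usage["prompt_tokens"] + usage["completion_tokens"] # => 16
```

This is what lets the unified `Response` report `input_tokens`/`output_tokens` under one naming scheme even though OpenAI calls them `prompt_tokens`/`completion_tokens` and Anthropic calls them `input_tokens`/`output_tokens`.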
data/lib/rubycanusellm/response.rb ADDED
@@ -0,0 +1,19 @@
+ # frozen_string_literal: true
+
+ module RubyCanUseLLM
+   class Response
+     attr_reader :content, :model, :input_tokens, :output_tokens, :raw
+
+     def initialize(content:, model:, input_tokens:, output_tokens:, raw:)
+       @content = content
+       @model = model
+       @input_tokens = input_tokens
+       @output_tokens = output_tokens
+       @raw = raw
+     end
+
+     def total_tokens
+       input_tokens + output_tokens
+     end
+   end
+ end
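A standalone sketch of the value object above (class inlined so the snippet runs on its own; the field values are invented):

```ruby
module RubyCanUseLLM
  # Inlined copy of the Response class above: an immutable-by-convention
  # value object with a derived total_tokens.
  class Response
    attr_reader :content, :model, :input_tokens, :output_tokens, :raw

    def initialize(content:, model:, input_tokens:, output_tokens:, raw:)
      @content = content
      @model = model
      @input_tokens = input_tokens
      @output_tokens = output_tokens
      @raw = raw
    end

    def total_tokens
      input_tokens + output_tokens
    end
  end
end

response = RubyCanUseLLM::Response.new(
  content: "Hello!",
  model: "gpt-4o-mini",
  input_tokens: 10,
  output_tokens: 5,
  raw: {}
)

puts response.content      # => "Hello!"
puts response.total_tokens # => 15
```

Keeping `raw` around means callers can always drop down to the provider's original payload when the unified fields aren't enough.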
data/lib/rubycanusellm/templates/completion.rb.tt ADDED
@@ -0,0 +1,20 @@
+ # frozen_string_literal: true
+
+ class CompletionService
+   def call(prompt, **options)
+     messages = [
+       { role: :user, content: prompt }
+     ]
+
+     # Add a system message if needed
+     # messages.unshift({ role: :system, content: "You are a helpful assistant." })
+
+     RubyCanUseLLM.chat(messages, **options)
+   end
+ end
+
+ # Usage:
+ #   service = CompletionService.new
+ #   response = service.call("What is Ruby?")
+ #   puts response.content
+ #   puts "Tokens used: #{response.total_tokens}"
data/lib/rubycanusellm/templates/config.rb.tt ADDED
@@ -0,0 +1,16 @@
+ # frozen_string_literal: true
+
+ RubyCanUseLLM.configure do |config|
+   # Choose your provider: :openai or :anthropic
+   config.provider = :openai
+
+   # Your API key (use environment variables in production)
+   config.api_key = ENV["LLM_API_KEY"]
+
+   # Default model (optional, each provider has a sensible default)
+   # OpenAI: "gpt-4o-mini", Anthropic: "claude-sonnet-4-20250514"
+   # config.model = "gpt-4o-mini"
+
+   # Request timeout in seconds (default: 30)
+   # config.timeout = 30
+ end
data/lib/rubycanusellm/version.rb ADDED
@@ -0,0 +1,5 @@
+ # frozen_string_literal: true
+
+ module Rubycanusellm
+   VERSION = "0.1.0"
+ end
data/lib/rubycanusellm.rb ADDED
@@ -0,0 +1,42 @@
+ # frozen_string_literal: true
+
+ require_relative "rubycanusellm/version"
+ require_relative "rubycanusellm/configuration"
+ require_relative "rubycanusellm/errors"
+ require_relative "rubycanusellm/response"
+ require_relative "rubycanusellm/providers/base"
+ require_relative "rubycanusellm/providers/openai"
+ require_relative "rubycanusellm/providers/anthropic"
+
+ module RubyCanUseLLM
+   PROVIDERS = {
+     openai: Providers::OpenAI,
+     anthropic: Providers::Anthropic
+   }.freeze
+
+   class << self
+     def configuration
+       @configuration ||= Configuration.new
+     end
+
+     def configure
+       yield(configuration)
+     end
+
+     def reset!
+       @configuration = Configuration.new
+       @client = nil
+     end
+
+     def client
+       configuration.validate!
+       @client ||= PROVIDERS.fetch(configuration.provider) do
+         raise Error, "Unknown provider: #{configuration.provider}"
+       end.new(configuration)
+     end
+
+     def chat(messages, **options)
+       client.chat(messages, **options)
+     end
+   end
+ end
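The provider-switching promise rests on the registry lookup in `client` above: a frozen hash maps the configured symbol to a provider class, and `Hash#fetch` with a block turns an unknown key into a domain error. A minimal standalone sketch of that dispatch pattern, with a stub provider standing in for the real HTTP clients (`StubProvider` is invented for illustration and is not part of the gem):

```ruby
# Stand-in for Providers::OpenAI / Providers::Anthropic: same constructor
# and #chat signature, but no network calls.
class StubProvider
  def initialize(config)
    @config = config
  end

  def chat(messages, **_options)
    "stub: #{messages.last[:content]}"
  end
end

PROVIDERS = { stub: StubProvider }.freeze

# Hash#fetch with a block raises our own error for unknown keys instead of
# the generic KeyError.
provider = PROVIDERS.fetch(:stub) { raise "Unknown provider: stub" }.new(nil)
puts provider.chat([{ role: :user, content: "hi" }]) # => "stub: hi"
```

Because every provider exposes the same `initialize(config)` / `chat(messages, **options)` surface, adding a provider is just one more registry entry.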
data/sig/rubycanusellm.rbs ADDED
@@ -0,0 +1,4 @@
+ module Rubycanusellm
+   VERSION: String
+   # See the writing guide of rbs: https://github.com/ruby/rbs#guides
+ end
metadata ADDED
@@ -0,0 +1,68 @@
+ --- !ruby/object:Gem::Specification
+ name: rubycanusellm
+ version: !ruby/object:Gem::Version
+   version: 0.1.0
+ platform: ruby
+ authors:
+ - Juan Manuel Guzman Nava
+ autorequire:
+ bindir: exe
+ cert_chain: []
+ date: 2026-04-01 00:00:00.000000000 Z
+ dependencies: []
+ description: One interface, every LLM. Rubycanusellm provides a unified client for
+   OpenAI, Anthropic, and more, plus generators that scaffold the boilerplate so you
+   go from zero to completions in 60 seconds.
+ email:
+ - juan.guzman3@flex.com
+ executables:
+ - rubycanusellm
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - ".rspec"
+ - ".rubocop.yml"
+ - CHANGELOG.md
+ - LICENSE.txt
+ - README.md
+ - Rakefile
+ - exe/rubycanusellm
+ - lib/rubycanusellm.rb
+ - lib/rubycanusellm/cli.rb
+ - lib/rubycanusellm/configuration.rb
+ - lib/rubycanusellm/errors.rb
+ - lib/rubycanusellm/providers/anthropic.rb
+ - lib/rubycanusellm/providers/base.rb
+ - lib/rubycanusellm/providers/openai.rb
+ - lib/rubycanusellm/response.rb
+ - lib/rubycanusellm/templates/completion.rb.tt
+ - lib/rubycanusellm/templates/config.rb.tt
+ - lib/rubycanusellm/version.rb
+ - sig/rubycanusellm.rbs
+ homepage: https://github.com/mgznv/rubycanusellm
+ licenses:
+ - MIT
+ metadata:
+   homepage_uri: https://github.com/mgznv/rubycanusellm
+   source_code_uri: https://github.com/mgznv/rubycanusellm
+   changelog_uri: https://github.com/mgznv/rubycanusellm/blob/main/CHANGELOG.md
+ post_install_message:
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: 3.0.0
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubygems_version: 3.5.3
+ signing_key:
+ specification_version: 4
+ summary: A unified Ruby client for multiple LLM providers with generators
+ test_files: []