ruby_llm_community 0.0.4 → 0.0.6

This diff shows the contents of publicly released package versions as published to a supported registry, and is provided for informational purposes only.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 532c33922c77aa1376befb9d9ad5425c10b3dee2016fbb24ce3be431d09b3461
-  data.tar.gz: 8339735878bfe54c68524d42ec8d67c0e59d53174c6648cdbef272f2e13f5f54
+  metadata.gz: d3fa93c4ea3fddd52d6cf639bb620c99d10a0fdc01082ecab1c1e9ddb022763d
+  data.tar.gz: 4be92604b71b94c096162a25165178344f56e22ea332815f542a87af095350f5
 SHA512:
-  metadata.gz: 3daddc9fb0eb2f4271352cc7158e8e2e1f8c39d83ddd97484aedb1d4a9fc4ae7edc61dae6bdc2603745a3e277fc426107dbfa33c2475c0f6ae319400dbf4827b
-  data.tar.gz: 1a9bd7a2b2bb3358bcd4c83b00e9c9aacf02c4e3b2ffe37dc368929fb5ae886f311bd108abfe939f55cd8b5d6038eeefcf2f802eac15220dad726beff41ee9d8
+  metadata.gz: 376559dd9e4c9ab5dbcb4c0e4c556d548c13fdebedaecef9edaebffb608f1af5380213bd388562d48866375b4bb63729c877b1548bfab17d07e2d3a9c550d67e
+  data.tar.gz: c2f8a24697f9abb88509bdf9b793020360bff7175544bd62b31fb3644f276fc103ae5809dc8a8dd8098962f2f59b1decc0baa1f08482d82bb4ab40e29578c705
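These checksums let a consumer verify that a downloaded archive matches what the registry published. A minimal verification sketch in Ruby (the file path and expected digest below are placeholders, not values from this release):

```ruby
require 'digest'

# Returns true when the file's SHA256 digest matches the expected hex string
# from checksums.yaml. Digest::SHA256.file streams the file, so large
# archives are handled without loading them fully into memory.
def checksum_matches?(path, expected_sha256)
  Digest::SHA256.file(path).hexdigest == expected_sha256
end
```

The same pattern covers the SHA512 entries via `Digest::SHA512`.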
data/README.md CHANGED
@@ -1,99 +1,114 @@
+<div align="center">
+
 <picture>
   <source media="(prefers-color-scheme: dark)" srcset="/docs/assets/images/logotype_dark.svg">
   <img src="/docs/assets/images/logotype.svg" alt="RubyLLM" height="120" width="250">
 </picture>
 
-**One *beautiful* Ruby API for GPT, Claude, Gemini, and more.** Easily build chatbots, AI agents, RAG applications, and content generators. Features chat (text, images, audio, PDFs), image generation, embeddings, tools (function calling), structured output, Rails integration, and streaming. Works with OpenAI, Anthropic, Google Gemini, AWS Bedrock, DeepSeek, Mistral, Ollama (local models), OpenRouter, Perplexity, GPUStack, and any OpenAI-compatible API.
+<strong>One *beautiful* Ruby API for GPT, Claude, Gemini, and more.</strong>
+
+Battle tested at [<picture><source media="(prefers-color-scheme: dark)" srcset="https://chatwithwork.com/logotype-dark.svg"><img src="https://chatwithwork.com/logotype.svg" alt="Chat with Work" height="30" align="absmiddle"></picture>](https://chatwithwork.com) — *Claude Code for your documents*
+
+[![Gem Version](https://badge.fury.io/rb/ruby_llm.svg?a=5)](https://badge.fury.io/rb/ruby_llm)
+[![Ruby Style Guide](https://img.shields.io/badge/code_style-standard-brightgreen.svg)](https://github.com/testdouble/standard)
+[![Gem Downloads](https://img.shields.io/gem/dt/ruby_llm)](https://rubygems.org/gems/ruby_llm)
+[![codecov](https://codecov.io/gh/crmne/ruby_llm/branch/main/graph/badge.svg)](https://codecov.io/gh/crmne/ruby_llm)
 
-<div class="badge-container">
-  <a href="https://badge.fury.io/rb/ruby_llm"><img src="https://badge.fury.io/rb/ruby_llm.svg?a=5" alt="Gem Version" /></a>
-  <a href="https://github.com/testdouble/standard"><img src="https://img.shields.io/badge/code_style-standard-brightgreen.svg" alt="Ruby Style Guide" /></a>
-  <a href="https://rubygems.org/gems/ruby_llm"><img alt="Gem Downloads" src="https://img.shields.io/gem/dt/ruby_llm"></a>
-  <a href="https://codecov.io/gh/crmne/ruby_llm"><img src="https://codecov.io/gh/crmne/ruby_llm/branch/main/graph/badge.svg" alt="codecov" /></a>
+<a href="https://trendshift.io/repositories/13640" target="_blank"><img src="https://trendshift.io/api/badge/repositories/13640" alt="crmne%2Fruby_llm | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
 </div>
 
-Battle tested at [<picture><source media="(prefers-color-scheme: dark)" srcset="https://chatwithwork.com/logotype-dark.svg"><img src="https://chatwithwork.com/logotype.svg" alt="Chat with Work" height="30" align="absmiddle"></picture>](https://chatwithwork.com) — *Claude Code for your documents*
+---
+
+Build chatbots, AI agents, RAG applications. Works with OpenAI, Anthropic, Google, AWS, local models, and any OpenAI-compatible API.
 
-## The problem with AI libraries
+## Why RubyLLM?
 
-Every AI provider comes with its own client library, its own response format, its own conventions for streaming, and its own way of handling errors. Want to use multiple providers? Prepare to juggle incompatible APIs and bloated dependencies.
+Every AI provider ships their own bloated client. Different APIs. Different response formats. Different conventions. It's exhausting.
 
-RubyLLM fixes all that. One beautiful API for everything. One consistent format. Minimal dependencies — just Faraday, Zeitwerk, and [Marcel](https://github.com/rails/marcel). Because working with AI should be a joy, not a chore.
+RubyLLM gives you one beautiful API for all of them. Same interface whether you're using GPT, Claude, or your local Ollama. Just three dependencies: [Faraday](https://github.com/lostisland/faraday), [Zeitwerk](https://github.com/fxn/zeitwerk), and [Marcel](https://github.com/rails/marcel). That's it.
 
-## What makes it great
+## Show me the code
 
 ```ruby
 # Just ask questions
 chat = RubyLLM.chat
 chat.ask "What's the best way to learn Ruby?"
+```
 
-# Analyze images, audio, documents, and text files
+```ruby
+# Analyze any file type
 chat.ask "What's in this image?", with: "ruby_conf.jpg"
 chat.ask "Describe this meeting", with: "meeting.wav"
 chat.ask "Summarize this document", with: "contract.pdf"
 chat.ask "Explain this code", with: "app.rb"
+```
 
-# Multiple files at once - types automatically detected
+```ruby
+# Multiple files at once
 chat.ask "Analyze these files", with: ["diagram.png", "report.pdf", "notes.txt"]
+```
 
-# Stream responses in real-time
-chat.ask "Tell me a story about a Ruby programmer" do |chunk|
+```ruby
+# Stream responses
+chat.ask "Tell me a story about Ruby" do |chunk|
   print chunk.content
 end
+```
 
+```ruby
 # Generate images
 RubyLLM.paint "a sunset over mountains in watercolor style"
+```
 
-# Create vector embeddings
+```ruby
+# Create embeddings
 RubyLLM.embed "Ruby is elegant and expressive"
+```
 
+```ruby
 # Let AI use your code
 class Weather < RubyLLM::Tool
-  description "Gets current weather for a location"
-  param :latitude, desc: "Latitude (e.g., 52.5200)"
-  param :longitude, desc: "Longitude (e.g., 13.4050)"
+  description "Get current weather"
+  param :latitude
+  param :longitude
 
   def execute(latitude:, longitude:)
    url = "https://api.open-meteo.com/v1/forecast?latitude=#{latitude}&longitude=#{longitude}&current=temperature_2m,wind_speed_10m"
-
-    response = Faraday.get(url)
-    data = JSON.parse(response.body)
-  rescue => e
-    { error: e.message }
+    JSON.parse(Faraday.get(url).body)
  end
 end
 
-chat.with_tool(Weather).ask "What's the weather in Berlin? (52.5200, 13.4050)"
+chat.with_tool(Weather).ask "What's the weather in Berlin?"
+```
 
-# Get structured output with JSON schemas
+```ruby
+# Get structured output
 class ProductSchema < RubyLLM::Schema
-  string :name, description: "Product name"
-  number :price, description: "Price in USD"
-  array :features, description: "Key features" do
-    string description: "Feature description"
+  string :name
+  number :price
+  array :features do
+    string
  end
 end
 
-response = chat.with_schema(ProductSchema)
-               .ask "Analyze this product description", with: "product.txt"
-# response.content => { "name" => "...", "price" => 99.99, "features" => [...] }
+response = chat.with_schema(ProductSchema).ask "Analyze this product", with: "product.txt"
 ```
 
-## Core Capabilities
-
-* 💬 **Unified Chat:** Converse with models from OpenAI, Anthropic, Gemini, Bedrock, OpenRouter, DeepSeek, Perplexity, Mistral, Ollama, or any OpenAI-compatible API using `RubyLLM.chat`.
-* 👁️ **Vision:** Analyze images within chats.
-* 🔊 **Audio:** Transcribe and understand audio content.
-* 📄 **Document Analysis:** Extract information from PDFs, text files, CSV, JSON, XML, Markdown, and code files.
-* 🖼️ **Image Generation:** Create images with `RubyLLM.paint`.
-* 📊 **Embeddings:** Generate text embeddings for vector search with `RubyLLM.embed`.
-* 🔧 **Tools (Function Calling):** Let AI models call your Ruby code using `RubyLLM::Tool`.
-* 📋 **Structured Output:** Guarantee responses conform to JSON schemas with `RubyLLM::Schema`.
-* 🚂 **Rails Integration:** Easily persist chats, messages, and tool calls using `acts_as_chat` and `acts_as_message`.
-* 🌊 **Streaming:** Process responses in real-time with idiomatic Ruby blocks.
-* **Async Support:** Built-in fiber-based concurrency for high-performance operations.
-* 🎯 **Smart Configuration:** Global and scoped configs with automatic retries and proxy support.
-* 📚 **Model Registry:** Access 500+ models with capability detection and pricing info.
+## Features
+
+* **Chat:** Conversational AI with `RubyLLM.chat`
+* **Vision:** Analyze images and screenshots
+* **Audio:** Transcribe and understand speech
+* **Documents:** Extract from PDFs, CSVs, JSON, any file type
+* **Image generation:** Create images with `RubyLLM.paint`
+* **Embeddings:** Vector search with `RubyLLM.embed`
+* **Tools:** Let AI call your Ruby methods
+* **Structured output:** JSON schemas that just work
+* **Streaming:** Real-time responses with blocks
+* **Rails:** ActiveRecord integration with `acts_as_chat`
+* **Async:** Fiber-based concurrency
+* **Model registry:** 500+ models with capability detection and pricing
+* **Providers:** OpenAI, Anthropic, Gemini, Bedrock, DeepSeek, Mistral, Ollama, OpenRouter, Perplexity, GPUStack, and any OpenAI-compatible API
 
 ## Installation
 
@@ -103,69 +118,36 @@ gem 'ruby_llm_community'
 ```
 Then `bundle install`.
 
-Configure your API keys (using environment variables is recommended):
+Configure your API keys:
 ```ruby
-# config/initializers/ruby_llm.rb or similar
+# config/initializers/ruby_llm.rb
 RubyLLM.configure do |config|
-  config.openai_api_key = ENV.fetch('OPENAI_API_KEY', nil)
-  # Add keys ONLY for providers you intend to use
-  # config.anthropic_api_key = ENV.fetch('ANTHROPIC_API_KEY', nil)
-  # ... see Configuration guide for all options ...
+  config.openai_api_key = ENV['OPENAI_API_KEY']
 end
 ```
-See the [Installation Guide](https://rubyllm.com/installation) for full details.
 
-## Rails Integration
-
-Add persistence to your chat models effortlessly:
+## Rails
 
 ```bash
-# Generate models and migrations
 rails generate ruby_llm:install
 ```
 
 ```ruby
-# Or add to existing models
 class Chat < ApplicationRecord
-  acts_as_chat # Automatically saves messages & tool calls
-end
-
-class Message < ApplicationRecord
-  acts_as_message
+  acts_as_chat
 end
 
-class ToolCall < ApplicationRecord
-  acts_as_tool_call
-end
-
-# Now chats persist automatically
-chat = Chat.create!(model_id: "gpt-4.1-nano")
-chat.ask("What's in this file?", with: "report.pdf")
+chat = Chat.create! model_id: "claude-sonnet-4"
+chat.ask "What's in this file?", with: "report.pdf"
 ```
 
-See the [Rails Integration Guide](https://rubyllm.com/guides/rails) for details.
-
-## Learn More
-
-Dive deeper with the official documentation:
+## Documentation
 
-- [Installation](https://rubyllm.com/installation)
-- [Configuration](https://rubyllm.com/configuration)
-- **Guides:**
-  - [Getting Started](https://rubyllm.com/guides/getting-started)
-  - [Chatting with AI Models](https://rubyllm.com/guides/chat)
-  - [Using Tools](https://rubyllm.com/guides/tools)
-  - [Streaming Responses](https://rubyllm.com/guides/streaming)
-  - [Rails Integration](https://rubyllm.com/guides/rails)
-  - [Image Generation](https://rubyllm.com/guides/image-generation)
-  - [Embeddings](https://rubyllm.com/guides/embeddings)
-  - [Working with Models](https://rubyllm.com/guides/models)
-  - [Error Handling](https://rubyllm.com/guides/error-handling)
-  - [Available Models](https://rubyllm.com/guides/available-models)
+[rubyllm.com](https://rubyllm.com)
 
 ## Contributing
 
-We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for details on setup, testing, and contribution guidelines.
+See [CONTRIBUTING.md](CONTRIBUTING.md).
 
 ## License
 
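The configuration hunk above replaces `ENV.fetch('OPENAI_API_KEY', nil)` with `ENV['OPENAI_API_KEY']`. For a missing key the two are equivalent: `ENV.fetch` with an explicit `nil` default returns `nil` rather than raising, just as `ENV[]` does. A quick check (the variable name is an arbitrary placeholder chosen to be unset):

```ruby
# An environment variable that is almost certainly unset in any shell.
key = "RUBY_LLM_DIFF_DEMO_#{Process.pid}"

# Both lookups return nil for a missing key...
indexed = ENV[key]            # => nil
fetched = ENV.fetch(key, nil) # => nil

# ...while ENV.fetch without a default would raise KeyError instead.
```

So the shorter form loses nothing here; `ENV.fetch('KEY')` without a default is the variant to reach for when a missing key should fail loudly.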
@@ -148,6 +148,11 @@ module RubyLLM
       self
     end
 
+    def cache_prompts(...)
+      to_llm.cache_prompts(...)
+      self
+    end
+
     def on_new_message(&block)
       to_llm
 
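The new `cache_prompts` method follows the same shape as the surrounding methods: forward every argument to the underlying `to_llm` object with `...`, then return `self` so calls chain fluently. A self-contained sketch of that delegation pattern (`FakeLLM` and `ChatFacade` are illustrative stand-ins, not the gem's real classes):

```ruby
# Stand-in for the object returned by to_llm.
class FakeLLM
  attr_reader :received

  def cache_prompts(*args, **opts)
    @received = [args, opts]
  end
end

# Stand-in for the wrapper: delegate, then return self for chaining.
class ChatFacade
  def initialize(llm)
    @llm = llm
  end

  # `...` (Ruby 3.0+) forwards positional, keyword, and block arguments.
  def cache_prompts(...)
    to_llm.cache_prompts(...)
    self
  end

  private

  def to_llm
    @llm
  end
end
```

Returning `self` rather than the delegate's result is what allows chains like `chat.cache_prompts(...).ask(...)`.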
@@ -182,6 +182,10 @@
     "openai": "gpt-oss-120b",
     "openrouter": "openai/gpt-oss-120b"
   },
+  "gpt-oss-20b": {
+    "openai": "gpt-oss-20b",
+    "openrouter": "openai/gpt-oss-20b"
+  },
   "o1": {
     "openai": "o1",
     "openrouter": "openai/o1"