rag-ruby 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: 18916baa3b84e6a947a1fa8411658475e4608ff4a127d681ca9b29332d669e81
+   data.tar.gz: f18c5e4d77640750e570ae4762122d0b687b73b2d12731e11add7b34d00b9d8c
+ SHA512:
+   metadata.gz: f397468f410d2475d1d59557749e83092c6b65b6f844b4169fff2774dbaac436a7eca5d9e667a35f7750937722d4bf92f97ec550b1919fd0deadd1857a9b1531
+   data.tar.gz: 54d733930d3ba8c0a241240318fbe58c4e8dd1750e3e9f9dc5969c86b9280721386fbe24f9379c47f06a598871950e5d97e8e6b42a13c72d6f8d89da40d2f1d9
data/Gemfile ADDED
@@ -0,0 +1,9 @@
+ # frozen_string_literal: true
+
+ source "https://rubygems.org"
+
+ gemspec
+
+ gem "minitest", "~> 5.0"
+ gem "rake", "~> 13.0"
+ gem "webmock", "~> 3.0"
data/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2026 Johannes Dwi Cahyo
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
data/README.md ADDED
@@ -0,0 +1,288 @@
+ # rag-ruby
+
+ A batteries-included RAG (Retrieval-Augmented Generation) pipeline framework for Ruby and Rails.
+
+ Orchestrates the full workflow: **document loading → chunking → embedding → storage → retrieval → generation**. Think LangChain for Ruby — simpler, more opinionated, and Rails-native.
+
+ ## Installation
+
+ Add to your Gemfile:
+
+ ```ruby
+ gem "rag-ruby"
+ ```
+
+ Then run:
+
+ ```bash
+ bundle install
+ ```
+
+ ## Quick Start
+
+ ```ruby
+ require "rag_ruby"
+
+ pipeline = RagRuby::Pipeline.new do |config|
+   config.loader :file
+   config.chunker :recursive_character, chunk_size: 1000, chunk_overlap: 200
+   config.embedder :openai, model: "text-embedding-3-small"
+   config.store :memory, dimension: 1536
+   config.generator :openai, model: "gpt-4o"
+ end
+
+ # Ingest documents
+ pipeline.ingest("docs/manual.md")
+ pipeline.ingest_directory("docs/", glob: "**/*.{md,txt}")
+
+ # Query with RAG
+ answer = pipeline.query("How do I reset my password?")
+ answer.text        # => "To reset your password, go to Settings > Security..."
+ answer.sources     # => [#<Source chunk="..." score=0.92>, ...]
+ answer.tokens_used # => { prompt: 1200, completion: 150 }
+ ```
+
+ ## Components
+
+ Every stage of the pipeline is pluggable. Mix and match providers to fit your stack.
+
+ ### Document Loaders
+
+ | Loader | Description | Require |
+ |--------|-------------|---------|
+ | `:file` | Local files (.txt, .md) | Built-in |
+ | `:directory` | Bulk load from directory | Built-in |
+ | `:url` | Fetch from URLs | Built-in |
+ | `:active_record` | Load from ActiveRecord models | Built-in |
+
+ ```ruby
+ # Load a single file
+ pipeline.ingest("path/to/document.md")
+
+ # Load a directory
+ pipeline.ingest_directory("documents/", glob: "**/*.{md,txt}")
+
+ # Custom loader
+ class SlackLoader < RagRuby::Loaders::Base
+   def load(channel_id)
+     messages = SlackAPI.history(channel_id)
+     messages.map do |msg|
+       RagRuby::Document.new(
+         content: msg.text,
+         metadata: { author: msg.user, channel: channel_id }
+       )
+     end
+   end
+ end
+
+ pipeline.ingest(channel_id, loader: SlackLoader.new)
+ ```
+
+ ### Embedders
+
+ | Provider | Description | Require |
+ |----------|-------------|---------|
+ | `:openai` | OpenAI text-embedding-3-small/large | `OPENAI_API_KEY` env var |
+ | `:cohere` | Cohere embed-english-v3.0 | `COHERE_API_KEY` env var |
+ | `:onnx` | Local ONNX models (all-MiniLM-L6-v2) | `gem "onnx-ruby"` |
+
+ ```ruby
+ # API-based
+ config.embedder :openai, model: "text-embedding-3-small"
+ config.embedder :cohere, model: "embed-english-v3.0"
+
+ # Local (no API calls)
+ config.embedder :onnx, model: "all-MiniLM-L6-v2"
+ ```
+
+ ### Vector Stores
+
+ | Store | Description | Require |
+ |-------|-------------|---------|
+ | `:memory` | In-memory store (great for dev/test) | Built-in |
+ | `:zvec` | Persistent file-based vector store | `gem "zvec-ruby"` |
+
+ ```ruby
+ config.store :memory, dimension: 1536
+ config.store :zvec, path: "./vectors", dimension: 1536
+ ```
+
+ Custom stores are easy — implement `add`, `search`, `delete`, and `count`:
+
+ ```ruby
+ class PineconeStore < RagRuby::Stores::Base
+   def add(id, embedding:, metadata: {}, chunk: nil) = ...
+   def search(embedding, top_k:, filter: nil) = ...
+   def delete(id) = ...
+   def count = ...
+ end
+ ```
+
+ ### Generators
+
+ | Provider | Description | Require |
+ |----------|-------------|---------|
+ | `:openai` | OpenAI chat completions | `OPENAI_API_KEY` env var |
+ | `:ruby_llm` | Any model via ruby_llm | `gem "ruby_llm"` |
+
+ ```ruby
+ config.generator :openai, model: "gpt-4o"
+ config.generator :ruby_llm, model: "claude-sonnet-4-20250514"
+ ```
+
+ ## Query Options
+
+ ```ruby
+ answer = pipeline.query("What changed in v2.0?",
+   top_k: 10,                         # number of chunks to retrieve
+   filter: { category: "changelog" }, # metadata filter
+   temperature: 0.0,                  # generation temperature
+   system_prompt: "You are a technical docs assistant."
+ )
+
+ answer.text        # generated answer
+ answer.sources     # retrieved chunks with scores
+ answer.tokens_used # { prompt: ..., completion: ... }
+ answer.duration    # query time in seconds
+ answer.query       # original question
+ ```
+
+ ## Callbacks & Observability
+
+ Hook into every stage of the pipeline for logging, metrics, or debugging:
+
+ ```ruby
+ pipeline = RagRuby::Pipeline.new do |config|
+   # ... providers ...
+
+   config.on(:before_load)  { |src| puts "Loading: #{src}" }
+   config.on(:after_load)   { |docs| puts "Loaded #{docs.size} documents" }
+   config.on(:before_chunk) { |doc| puts "Chunking: #{doc.source}" }
+   config.on(:after_chunk)  { |chunks| puts "Created #{chunks.size} chunks" }
+   config.on(:before_embed) { |chunks| puts "Embedding #{chunks.size} chunks" }
+   config.on(:after_embed)  { |chunks| puts "Embedded #{chunks.size} chunks" }
+   config.on(:before_store) { |chunks| puts "Storing #{chunks.size} chunks" }
+   config.on(:after_store)  { |chunks| puts "Stored #{chunks.size} chunks" }
+   config.on(:before_query) { |q| Metrics.increment("rag.queries") }
+   config.on(:after_query)  { |q, answer| Metrics.timing("rag.latency", answer.duration) }
+ end
+ ```
+
+ ## Rails Integration
+
+ ### Setup
+
+ ```bash
+ rails generate rag:install
+ ```
+
+ This creates:
+ - `config/rag.yml` — environment-specific configuration
+ - `config/initializers/rag_ruby.rb` — optional programmatic config
+
+ ### Configuration
+
+ ```yaml
+ # config/rag.yml
+ default: &default
+   chunker:
+     strategy: recursive_character
+     chunk_size: 1000
+     chunk_overlap: 200
+   embedder:
+     provider: openai
+     model: text-embedding-3-small
+   store:
+     provider: memory
+     dimension: 1536
+   generator:
+     provider: openai
+     model: gpt-4o
+
+ development:
+   <<: *default
+
+ production:
+   <<: *default
+   store:
+     provider: zvec
+     path: db/vectors
+     dimension: 1536
+ ```
+
+ ### Auto-Index Models
+
+ ```ruby
+ class Article < ApplicationRecord
+   include RagRuby::Indexable
+
+   rag_index :content,
+     metadata: ->(article) { { category: article.category, author: article.author } },
+     on: [:create, :update]
+ end
+
+ # Articles are automatically indexed when saved
+ Article.create!(title: "Guide", content: "# Getting Started\n...")
+ ```
+
+ ### Global API
+
+ ```ruby
+ # Search for relevant chunks
+ results = RagRuby.search("How to get started?", top_k: 5)
+
+ # Full RAG: retrieve + generate
+ answer = RagRuby.ask("How to get started?")
+ ```
+
+ ### Controller Usage
+
+ ```ruby
+ class ChatController < ApplicationController
+   def ask
+     answer = RagRuby.ask(params[:question])
+     render json: {
+       answer: answer.text,
+       sources: answer.sources.map(&:to_h)
+     }
+   end
+ end
+ ```
+
+ ## Architecture
+
+ ### Ingestion Flow
+
+ ```
+ Document → Loader → [Document] → Chunker → [Chunk] → Embedder → [Chunk+Embedding] → Store
+ ```
+
+ ### Query Flow
+
+ ```
+ Question → Embedder → Vector → Store.search → [Chunk] → build_context → Generator → Answer
+ ```
+
+ Each stage is independent and swappable. The `Pipeline` class orchestrates the flow.
+
+ ## Dependencies
+
+ | Gem | Purpose | Required? |
+ |-----|---------|-----------|
+ | `chunker-ruby` | Text chunking | Yes |
+ | `zvec-ruby` | Persistent vector storage | Optional |
+ | `onnx-ruby` | Local ONNX embeddings | Optional |
+ | `ruby_llm` | Multi-provider LLM generation | Optional |
+
+ ## Development
+
+ ```bash
+ git clone https://github.com/johannesdwicahyo/rag-ruby.git
+ cd rag-ruby
+ bundle install
+ bundle exec rake test
+ ```
+
+ ## License
+
+ MIT License. See [LICENSE](LICENSE) for details.
data/Rakefile ADDED
@@ -0,0 +1,11 @@
+ # frozen_string_literal: true
+
+ require "rake/testtask"
+
+ Rake::TestTask.new(:test) do |t|
+   t.libs << "test"
+   t.libs << "lib"
+   t.test_files = FileList["test/**/test_*.rb"]
+ end
+
+ task default: :test
@@ -0,0 +1,29 @@
+ # frozen_string_literal: true
+
+ module RagRuby
+   class Answer
+     attr_reader :text, :sources, :tokens_used, :duration, :query
+
+     def initialize(text:, sources: [], tokens_used: {}, duration: nil, query: nil)
+       @text = text
+       @sources = sources
+       @tokens_used = tokens_used
+       @duration = duration
+       @query = query
+     end
+
+     def to_s
+       text
+     end
+
+     def to_h
+       {
+         text: text,
+         sources: sources.map(&:to_h),
+         tokens_used: tokens_used,
+         duration: duration,
+         query: query
+       }
+     end
+   end
+ end
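A quick standalone sketch of the `Answer` value object above (class body copied from the hunk; the sample values are invented for illustration), showing how `to_s` and `to_h` behave:

```ruby
# Minimal copy of RagRuby::Answer from the hunk above, exercised with
# made-up values to show the conversion helpers.
class Answer
  attr_reader :text, :sources, :tokens_used, :duration, :query

  def initialize(text:, sources: [], tokens_used: {}, duration: nil, query: nil)
    @text = text
    @sources = sources
    @tokens_used = tokens_used
    @duration = duration
    @query = query
  end

  # to_s delegates to the generated text, so an Answer prints naturally.
  def to_s
    text
  end

  def to_h
    { text: text, sources: sources.map(&:to_h),
      tokens_used: tokens_used, duration: duration, query: query }
  end
end

a = Answer.new(text: "42", tokens_used: { prompt: 10, completion: 2 }, query: "q")
puts a                               # prints "42"
puts a.to_h[:tokens_used][:prompt]   # prints 10
```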
@@ -0,0 +1,27 @@
+ # frozen_string_literal: true
+
+ module RagRuby
+   class Chunk
+     attr_accessor :text, :embedding, :metadata, :document_source, :index
+
+     def initialize(text:, metadata: {}, document_source: nil, index: 0)
+       @text = text
+       @metadata = metadata
+       @document_source = document_source
+       @index = index
+       @embedding = nil
+     end
+
+     def embedded?
+       !@embedding.nil?
+     end
+
+     def to_s
+       text
+     end
+
+     def bytesize
+       text.bytesize
+     end
+   end
+ end
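The `Chunk` class above starts with a nil embedding and flips `embedded?` once a vector is assigned; a minimal standalone sketch (trimmed to just the embedding bookkeeping):

```ruby
# Reduced copy of RagRuby::Chunk's embedding state from the hunk above:
# a chunk is "embedded" only after a vector has been assigned.
class Chunk
  attr_accessor :text, :embedding

  def initialize(text:)
    @text = text
    @embedding = nil
  end

  def embedded?
    !@embedding.nil?
  end
end

c = Chunk.new(text: "hello")
p c.embedded?            # => false
c.embedding = [0.1, 0.2]
p c.embedded?            # => true
```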
@@ -0,0 +1,90 @@
+ # frozen_string_literal: true
+
+ module RagRuby
+   class Configuration
+     LOADER_REGISTRY = {
+       file: -> { Loaders::File.new },
+       directory: -> { Loaders::Directory.new },
+       url: -> { Loaders::URL.new },
+       active_record: -> { Loaders::ActiveRecord.new }
+     }.freeze
+
+     EMBEDDER_REGISTRY = {
+       openai: ->(opts) { Embedders::OpenAI.new(**opts) },
+       onnx: ->(opts) { Embedders::Onnx.new(**opts) },
+       cohere: ->(opts) { Embedders::Cohere.new(**opts) }
+     }.freeze
+
+     STORE_REGISTRY = {
+       zvec: ->(opts) { Stores::Zvec.new(**opts) },
+       memory: ->(opts) { Stores::Memory.new(**opts) }
+     }.freeze
+
+     GENERATOR_REGISTRY = {
+       openai: ->(opts) { Generators::OpenAI.new(**opts) },
+       ruby_llm: ->(opts) { Generators::RubyLLM.new(**opts) }
+     }.freeze
+
+     attr_accessor :loader_instance, :embedder_instance, :store_instance, :generator_instance,
+                   :chunk_size, :chunk_overlap, :chunk_strategy
+
+     def initialize
+       @callbacks = Hash.new { |h, k| h[k] = [] }
+       @chunk_size = 1000
+       @chunk_overlap = 200
+       @chunk_strategy = :recursive_character
+     end
+
+     def loader(name, **opts)
+       @loader_instance = if LOADER_REGISTRY.key?(name)
+                            LOADER_REGISTRY[name].call
+                          else
+                            raise ArgumentError, "Unknown loader: #{name}"
+                          end
+     end
+
+     def chunker(strategy, chunk_size: 1000, chunk_overlap: 200)
+       @chunk_strategy = strategy
+       @chunk_size = chunk_size
+       @chunk_overlap = chunk_overlap
+     end
+
+     def embedder(name, **opts)
+       @embedder_instance = if EMBEDDER_REGISTRY.key?(name)
+                              EMBEDDER_REGISTRY[name].call(opts)
+                            elsif name.is_a?(Class) || name.respond_to?(:embed)
+                              name
+                            else
+                              raise ArgumentError, "Unknown embedder: #{name}"
+                            end
+     end
+
+     def store(name, **opts)
+       @store_instance = if STORE_REGISTRY.key?(name)
+                           STORE_REGISTRY[name].call(opts)
+                         elsif name.is_a?(Class) || name.respond_to?(:search)
+                           name
+                         else
+                           raise ArgumentError, "Unknown store: #{name}"
+                         end
+     end
+
+     def generator(name, **opts)
+       @generator_instance = if GENERATOR_REGISTRY.key?(name)
+                               GENERATOR_REGISTRY[name].call(opts)
+                             elsif name.is_a?(Class) || name.respond_to?(:generate)
+                               name
+                             else
+                               raise ArgumentError, "Unknown generator: #{name}"
+                             end
+     end
+
+     def on(event, &block)
+       @callbacks[event] << block
+     end
+
+     def callbacks_for(event)
+       @callbacks[event]
+     end
+   end
+ end
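The callback registry in `Configuration` above (`on` / `callbacks_for`) rests on a `Hash.new` default block that lazily creates an empty array per event name; a standalone sketch of just that pattern:

```ruby
# The same Hash-with-default-block idiom used by Configuration:
# first access to an unknown event key creates (and stores) an empty array,
# so `on` can always append and `callbacks_for` never returns nil.
callbacks = Hash.new { |h, k| h[k] = [] }

callbacks[:before_load] << ->(src) { "loading #{src}" }
callbacks[:before_load] << ->(src) { "again #{src}" }

results = callbacks[:before_load].map { |cb| cb.call("a.md") }
p results                    # => ["loading a.md", "again a.md"]
p callbacks[:never_fired]    # => [] (unknown events yield an empty list)
```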
@@ -0,0 +1,25 @@
+ # frozen_string_literal: true
+
+ module RagRuby
+   class Document
+     attr_accessor :content, :metadata, :source
+
+     def initialize(content:, metadata: {}, source: nil)
+       @content = content
+       @metadata = metadata
+       @source = source
+     end
+
+     def to_s
+       content
+     end
+
+     def bytesize
+       content.bytesize
+     end
+
+     def empty?
+       content.nil? || content.strip.empty?
+     end
+   end
+ end
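`Document#empty?` above treats both nil content and whitespace-only content as empty; a minimal sketch of that predicate (class trimmed to the relevant attribute):

```ruby
# Reduced copy of RagRuby::Document#empty? from the hunk above.
class Document
  attr_accessor :content

  def initialize(content:)
    @content = content
  end

  def empty?
    content.nil? || content.strip.empty?
  end
end

p Document.new(content: nil).empty?     # => true
p Document.new(content: "  \n").empty?  # => true
p Document.new(content: "text").empty?  # => false
```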
@@ -0,0 +1,19 @@
+ # frozen_string_literal: true
+
+ module RagRuby
+   module Embedders
+     class Base
+       def embed(text)
+         raise NotImplementedError, "#{self.class}#embed must be implemented"
+       end
+
+       def embed_batch(texts)
+         texts.map { |t| embed(t) }
+       end
+
+       def dimension
+         raise NotImplementedError, "#{self.class}#dimension must be implemented"
+       end
+     end
+   end
+ end
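`Embedders::Base` above is a template-method contract: subclasses implement `embed`, and the naive `embed_batch` (one call per text) comes for free. A standalone sketch with a toy subclass (the `ToyEmbedder` and its length/word-count "vector" are invented for illustration):

```ruby
# Same contract as Embedders::Base: #embed is abstract, #embed_batch
# falls back to mapping #embed over each text.
class BaseEmbedder
  def embed(text)
    raise NotImplementedError, "#{self.class}#embed must be implemented"
  end

  def embed_batch(texts)
    texts.map { |t| embed(t) }
  end
end

# Hypothetical embedder: "embeds" a string as [char count, word count].
class ToyEmbedder < BaseEmbedder
  def embed(text)
    [text.length, text.split.size]
  end
end

p ToyEmbedder.new.embed_batch(["a b", "abc"]) # => [[3, 2], [3, 1]]
```

Provider subclasses like the OpenAI embedder override `embed_batch` as well, to send all texts in a single API request instead of N.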
@@ -0,0 +1,50 @@
+ # frozen_string_literal: true
+
+ require "net/http"
+ require "uri"
+ require "json"
+
+ module RagRuby
+   module Embedders
+     class Cohere < Base
+       ENDPOINT = "https://api.cohere.ai/v1/embed"
+
+       def initialize(model: "embed-english-v3.0", api_key: nil)
+         @model = model
+         @api_key = api_key || ENV["COHERE_API_KEY"]
+         raise ArgumentError, "Cohere API key is required (set COHERE_API_KEY or pass api_key:)" unless @api_key
+       end
+
+       def embed(text)
+         embed_batch([text]).first
+       end
+
+       def embed_batch(texts)
+         uri = URI.parse(ENDPOINT)
+         http = Net::HTTP.new(uri.host, uri.port)
+         http.use_ssl = true
+
+         req = Net::HTTP::Post.new(uri)
+         req["Authorization"] = "Bearer #{@api_key}"
+         req["Content-Type"] = "application/json"
+         req.body = JSON.generate(
+           model: @model,
+           texts: texts,
+           input_type: "search_document"
+         )
+
+         response = http.request(req)
+
+         unless response.is_a?(Net::HTTPSuccess)
+           raise "Cohere API error (#{response.code}): #{response.body}"
+         end
+
+         JSON.parse(response.body)["embeddings"]
+       end
+
+       def dimension
+         1024
+       end
+     end
+   end
+ end
@@ -0,0 +1,42 @@
+ # frozen_string_literal: true
+
+ module RagRuby
+   module Embedders
+     class Onnx < Base
+       def initialize(model: "all-MiniLM-L6-v2", model_path: nil)
+         @model = model
+         @model_path = model_path
+
+         begin
+           require "onnx_ruby"
+         rescue LoadError
+           raise LoadError, "onnx-ruby gem is required for ONNX embeddings. Add `gem 'onnx-ruby'` to your Gemfile."
+         end
+
+         @session = create_session
+       end
+
+       def embed(text)
+         @session.embed(text)
+       end
+
+       def embed_batch(texts)
+         texts.map { |t| embed(t) }
+       end
+
+       def dimension
+         384 # all-MiniLM-L6-v2 default
+       end
+
+       private
+
+       def create_session
+         if @model_path
+           OnnxRuby::Session.new(@model_path)
+         else
+           OnnxRuby::Session.from_pretrained(@model)
+         end
+       end
+     end
+   end
+ end
@@ -0,0 +1,64 @@
+ # frozen_string_literal: true
+
+ require "net/http"
+ require "uri"
+ require "json"
+
+ module RagRuby
+   module Embedders
+     class OpenAI < Base
+       ENDPOINT = "https://api.openai.com/v1/embeddings"
+
+       DIMENSIONS = {
+         "text-embedding-3-small" => 1536,
+         "text-embedding-3-large" => 3072,
+         "text-embedding-ada-002" => 1536
+       }.freeze
+
+       def initialize(model: "text-embedding-3-small", api_key: nil)
+         @model = model
+         @api_key = api_key || ENV["OPENAI_API_KEY"]
+         raise ArgumentError, "OpenAI API key is required (set OPENAI_API_KEY or pass api_key:)" unless @api_key
+       end
+
+       def embed(text)
+         response = request([text])
+         response.dig("data", 0, "embedding")
+       end
+
+       def embed_batch(texts)
+         response = request(texts)
+         response["data"]
+           .sort_by { |d| d["index"] }
+           .map { |d| d["embedding"] }
+       end
+
+       def dimension
+         DIMENSIONS.fetch(@model) { 1536 }
+       end
+
+       private
+
+       def request(input)
+         uri = URI.parse(ENDPOINT)
+         http = Net::HTTP.new(uri.host, uri.port)
+         http.use_ssl = true
+         http.open_timeout = 30
+         http.read_timeout = 60
+
+         req = Net::HTTP::Post.new(uri)
+         req["Authorization"] = "Bearer #{@api_key}"
+         req["Content-Type"] = "application/json"
+         req.body = JSON.generate(model: @model, input: input)
+
+         response = http.request(req)
+
+         unless response.is_a?(Net::HTTPSuccess)
+           raise "OpenAI API error (#{response.code}): #{response.body}"
+         end
+
+         JSON.parse(response.body)
+       end
+     end
+   end
+ end
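The `embed_batch` method above re-orders the response items by their `"index"` field before extracting embeddings, so vectors always come back in input order even if the API returns them shuffled. A standalone sketch with fabricated response data:

```ruby
# Fabricated response "data" array, deliberately out of order, to show why
# OpenAI#embed_batch sorts by the "index" field before mapping embeddings.
data = [
  { "index" => 1, "embedding" => [0.2] },
  { "index" => 0, "embedding" => [0.1] }
]

vectors = data.sort_by { |d| d["index"] }.map { |d| d["embedding"] }
p vectors # => [[0.1], [0.2]], i.e. back in input order
```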
@@ -0,0 +1,11 @@
+ # frozen_string_literal: true
+
+ module RagRuby
+   module Generators
+     class Base
+       def generate(prompt:, system_prompt: nil, temperature: 0.7)
+         raise NotImplementedError, "#{self.class}#generate must be implemented"
+       end
+     end
+   end
+ end