local_llm 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: d8cbf63f243ec6c48c454711b97be37b1672dbe38ecd016744c6f5fc44fa5203
+   data.tar.gz: c054e2e4f177c3241936d718de8ab6b04b05e198c40ed8e97651309338b1a281
+ SHA512:
+   metadata.gz: cc1fe4e1e27d394b83596854cf795696e37f8224c34288acb754bff33c1d67771f4bee6fef59ed492a4989dd07c083fda29102f072157c3e75e8ebd5cff62882
+   data.tar.gz: a17c1605fc1283a713c1fd697004bf5f3aece8415b4a43021af3db8ac2055fdd3cd8fc26983b6b04ba95ac413011fa9b87adf240f0d06f8fb2b151b88510ab08
data/.rspec ADDED
@@ -0,0 +1,3 @@
+ --format documentation
+ --color
+ --require spec_helper
data/LICENSE.txt ADDED
@@ -0,0 +1,21 @@
+ The MIT License (MIT)
+
+ Copyright (c) 2025 MD Abdul Barek
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in
+ all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ THE SOFTWARE.
data/README.md ADDED
@@ -0,0 +1,153 @@
+ # local_llm
+
+ **`local_llm`** is a lightweight Ruby gem that lets you talk to **locally installed LLMs via Ollama** — with **zero cloud dependency**, full **developer control**, and **configurable defaults**, including **real-time streaming support**.
+
+ It supports:
+ - Any Ollama model (LLaMA, Mistral, CodeLLaMA, Qwen, Phi, Gemma, etc.)
+ - Developer-configurable default models
+ - Developer-configurable Ollama API endpoint
+ - Developer-configurable **streaming or non-streaming**
+ - One-shot Q&A and multi-turn chat
+ - Works in plain Ruby & Rails
+ - 100% local & private
+
+ ---
+
+ ## 🚀 Features
+
+ - Use **any locally installed Ollama model**
+ - Change **default models at runtime**
+ - Enable or disable **real-time streaming**
+ - Works with:
+   - `llama2`
+   - `mistral`
+   - `codellama`
+   - `qwen`
+   - `phi`
+   - Anything supported by Ollama
+ - No API keys needed
+ - No cloud calls
+ - Full privacy
+ - Works completely offline
+
+ ---
+
+ ## 📦 Installation
+
+ ### Install Ollama
+
+ Download from:
+
+ https://ollama.com
+
+ Then start it:
+
+ ```bash
+ ollama serve
+ ```
+
+ ### How to Install New LLMs
+ ```bash
+ ollama pull llama2:13b
+ ollama pull mistral:7b-instruct
+ ollama pull codellama:13b-instruct
+ ollama pull qwen2:7b
+ ```
+
+ ### Verify Installed Models
+ ```bash
+ ollama list
+ ```
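The README stops at verifying Ollama models and does not show installing the gem itself. Assuming it is consumed from RubyGems under the name `local_llm` (the name and `allowed_push_host` given in the gemspec metadata below), a minimal Gemfile entry would be:

```ruby
# Gemfile (hypothetical entry; the "~> 0.1" constraint is assumed from the 0.1.0 release)
gem "local_llm", "~> 0.1"
```

followed by `bundle install`, or `gem install local_llm` for a plain Ruby script.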
+
+ ### Configuration
+ ```ruby
+ LocalLlm.configure do |c|
+   c.base_url = "http://localhost:11434"
+   c.default_general_model = "llama2:13b"
+   c.default_fast_model = "mistral:7b-instruct"
+   c.default_code_model = "codellama:13b-instruct"
+   c.default_stream = false # true = stream by default, false = return full text
+ end
+ ```
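Because the gem targets both plain Ruby and Rails, the same block can live in a Rails initializer. A minimal sketch, assuming a hypothetical `config/initializers/local_llm.rb` path; the environment-variable override is illustrative and not part of the gem:

```ruby
# config/initializers/local_llm.rb (hypothetical path following Rails conventions)
LocalLlm.configure do |c|
  # OLLAMA_URL is an assumed environment variable, not something the gem defines.
  c.base_url = ENV.fetch("OLLAMA_URL", "http://localhost:11434")
  c.default_general_model = "llama2:13b"
  c.default_stream = false
end
```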
+
+ ### Basic Usage (Non-Streaming)
+ ```ruby
+ LocalLlm.ask("llama2:13b", "What is HIPAA?")
+ LocalLlm.ask("qwen2:7b", "Explain transformers in simple terms.")
+
+ LocalLlm.general("What is a Denial of Service attack?")
+ LocalLlm.fast("Summarize this paragraph in 3 bullet points.")
+ LocalLlm.code("Write a Ruby method that returns factorial of n.")
+ ```
+
+ ### Streaming Usage (Live Output)
+ ```ruby
+ LocalLlm.configure do |c|
+   c.default_stream = true
+ end
+
+ LocalLlm.fast("Explain HIPAA in very simple words.") do |chunk|
+   print chunk
+ end
+ ```
+
+ ### Per-Call Streaming Override
+ ```ruby
+ LocalLlm.fast("Explain LLMs in one paragraph.", stream: true) do |chunk|
+   print chunk
+ end
+
+ full_text = LocalLlm.fast("Explain DoS attacks briefly.", stream: false)
+ puts full_text
+ ```
+
+ ### Full Chat API (Multi-Turn)
+ ```ruby
+ LocalLlm.chat("llama2:13b", [
+   { "role" => "system", "content" => "You are a helpful assistant." },
+   { "role" => "user", "content" => "Explain LSTM." }
+ ])
+ ```
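A single call like this is still one-shot; to carry a conversation across turns, keep the `messages` array and append each reply before calling again. A sketch, assuming `LocalLlm.chat` returns the assistant's reply as text (the HTTP client class is not included in this release, so the return shape is an assumption):

```ruby
messages = [
  { "role" => "system", "content" => "You are a helpful assistant." },
  { "role" => "user",   "content" => "Explain LSTM." }
]

# Assumes the non-streaming call returns the assistant's reply text.
reply = LocalLlm.chat("llama2:13b", messages)

# Append the answer and a follow-up question, then ask again with full history.
messages << { "role" => "assistant", "content" => reply }
messages << { "role" => "user", "content" => "How is it different from a GRU?" }

LocalLlm.chat("llama2:13b", messages)
```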
+
+ ### List Installed Ollama Models from Ruby
+ ```ruby
+ LocalLlm.models
+ ```
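The client class is not part of this release, so the return value of `LocalLlm.models` is not shown. If it returns the parsed payload of Ollama's `/api/tags` endpoint (a hash with a `"models"` array), listing the installed model names could look like:

```ruby
# Assumes LocalLlm.models returns Ollama's /api/tags payload as a Hash.
LocalLlm.models.fetch("models", []).each { |m| puts m["name"] }
```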
+
+ ### Switching to Qwen (or Any New Model)
+ ```bash
+ ollama pull qwen2:7b
+ ```
+
+ ```ruby
+ LocalLlm.ask("qwen2:7b", "Explain HIPAA in simple terms.")
+ ```
+
+ ### Make Qwen the Default
+ ```ruby
+ LocalLlm.configure do |c|
+   c.default_general_model = "qwen2:7b"
+ end
+
+ LocalLlm.general("Explain transformers.")
+ ```
+
+ ### 🔌 Remote Ollama / Docker Support
+ ```ruby
+ LocalLlm.configure do |c|
+   c.base_url = "http://192.168.1.100:11434"
+ end
+ ```
+
+ ### Troubleshooting
+ #### Ollama Not Running
+ ```bash
+ ollama serve
+ ```
+
+ ### Privacy & Security
+ - 100% local inference
+ - No cloud calls
+ - No API keys
+ - No data leaves your machine
+ - Suited to HIPAA, SOC 2, and other regulated workflows that require data to stay on your own infrastructure
data/Rakefile ADDED
@@ -0,0 +1,8 @@
+ # frozen_string_literal: true
+
+ require "bundler/gem_tasks"
+ require "rspec/core/rake_task"
+
+ RSpec::Core::RakeTask.new(:spec)
+
+ task default: :spec
data/lib/local_llm/version.rb ADDED
@@ -0,0 +1,5 @@
+ # frozen_string_literal: true
+
+ module LocalLlm
+   VERSION = "0.1.0"
+ end
data/lib/local_llm.rb ADDED
@@ -0,0 +1,101 @@
+ # lib/local_llm.rb
+ # frozen_string_literal: true
+
+ require_relative "local_llm/version"
+ require_relative "local_llm/client"
+
+ module LocalLlm
+   # Configuration object
+   class Config
+     attr_accessor :base_url,
+                   :default_general_model,
+                   :default_fast_model,
+                   :default_code_model,
+                   :default_stream
+
+     def initialize
+       @base_url = "http://localhost:11434" # default Ollama
+       @default_general_model = "llama2:13b"
+       @default_fast_model = "mistral:7b-instruct"
+       @default_code_model = "codellama:13b-instruct"
+       @default_stream = false
+     end
+   end
+
+   # Global config access
+   def self.config
+     @config ||= Config.new
+   end
+
+   # DSL-style configuration:
+   #
+   #   LocalLlm.configure do |c|
+   #     c.base_url = "http://my-ollama-host:11434"
+   #     c.default_general_model = "phi3:3.8b"
+   #     c.default_fast_model = "mistral:7b-instruct"
+   #     c.default_code_model = "codellama:13b-instruct"
+   #   end
+   #
+   def self.configure
+     yield(config)
+   end
+
+   class << self
+     # We build a fresh client each time so changes to config.base_url
+     # are always respected.
+     def client
+       Client.new(base_url: config.base_url)
+     end
+
+     # -------- Core API (any model) --------
+
+     # One-shot Q&A with explicit model name
+     #
+     #   LocalLlm.ask("mistral:7b-instruct", "What is HIPAA?")
+     #
+     def ask(model, prompt, options = {}, &block)
+       # allow per-call stream override, otherwise use config.default_stream
+       stream = options.key?(:stream) ? options.delete(:stream) : config.default_stream
+       client.ask(model: model, prompt: prompt, stream: stream, options: options, &block)
+     end
+
+
+     # Chat API with full messages array (OpenAI-style)
+     #
+     #   LocalLlm.chat("llama2:13b", [
+     #     { "role" => "system", "content" => "You are a helpful assistant." },
+     #     { "role" => "user", "content" => "Explain LSTM." }
+     #   ])
+     #
+     def chat(model, messages, options = {}, &block)
+       stream = options.key?(:stream) ? options.delete(:stream) : config.default_stream
+       client.chat(model: model, messages: messages, stream: stream, options: options, &block)
+     end
+
+
+     # List models from Ollama (`ollama list`)
+     def models
+       client.models
+     end
+
+     # -------- Convenience helpers using defaults --------
+
+     # Use whatever the developer set as default_general_model
+     def general(prompt, options = {}, &block)
+       ask(config.default_general_model, prompt, options, &block)
+     end
+
+     # Use developer’s default_fast_model
+     def fast(prompt, options = {}, &block)
+       ask(config.default_fast_model, prompt, options, &block)
+     end
+
+     # Use developer’s default_code_model
+     def code(prompt, options = {}, &block)
+       ask(config.default_code_model, prompt, options, &block)
+     end
+   end
+ end
+
+ # Optional nicer alias if someone prefers LocalLLM
+ LocalLLM = LocalLlm unless defined?(LocalLLM)
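The file above requires `local_llm/client`, but `lib/local_llm/client.rb` does not appear in this release's file list, so the class that `ask`, `chat`, and `models` delegate to is not shown. Below is a minimal, hypothetical sketch of a client compatible with those calls, assuming Ollama's standard HTTP endpoints (`POST /api/generate`, `POST /api/chat`, `GET /api/tags`); it covers only the non-streaming path and is not the gem's actual implementation:

```ruby
# Hypothetical lib/local_llm/client.rb sketch; not part of this release.
require "net/http"
require "json"
require "uri"

module LocalLlm
  class Client
    def initialize(base_url:)
      @base_url = base_url
    end

    # Non-streaming one-shot completion via POST /api/generate.
    # (Real streaming would read the chunked NDJSON response line by line
    # and yield each "response" fragment to the caller's block.)
    def ask(model:, prompt:, stream: false, options: {}, &block)
      body = { "model" => model, "prompt" => prompt, "stream" => false }.merge(options)
      post("/api/generate", body)["response"]
    end

    # Non-streaming chat via POST /api/chat, returning the assistant's reply text.
    def chat(model:, messages:, stream: false, options: {}, &block)
      body = { "model" => model, "messages" => messages, "stream" => false }.merge(options)
      post("/api/chat", body).dig("message", "content")
    end

    # GET /api/tags lists locally installed models.
    def models
      uri = URI.join(@base_url, "/api/tags")
      JSON.parse(Net::HTTP.get(uri))
    end

    private

    def post(path, body)
      uri = URI.join(@base_url, path)
      res = Net::HTTP.post(uri, JSON.dump(body), "Content-Type" => "application/json")
      JSON.parse(res.body)
    end
  end
end
```

Streaming support, which the README advertises, would set `"stream" => true`, keep the HTTP connection open, and yield each parsed chunk's text to the block passed through from `LocalLlm.ask`/`chat`.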
data/sig/local_llm.rbs ADDED
@@ -0,0 +1,4 @@
+ module LocalLlm
+   VERSION: String
+   # See the writing guide of rbs: https://github.com/ruby/rbs#guides
+ end
metadata ADDED
@@ -0,0 +1,55 @@
+ --- !ruby/object:Gem::Specification
+ name: local_llm
+ version: !ruby/object:Gem::Version
+   version: 0.1.0
+ platform: ruby
+ authors:
+ - MD Abdul Barek
+ autorequire:
+ bindir: exe
+ cert_chain: []
+ date: 2025-12-02 00:00:00.000000000 Z
+ dependencies: []
+ description: local_llm is a lightweight Ruby gem that lets you interact with locally
+   installed Ollama LLMs such as LLaMA, Mistral, CodeLLaMA, Qwen, and more. It supports
+   configurable default models, configurable Ollama API endpoints, real-time streaming
+   or non-streaming responses, and both one-shot and multi-turn chat—while keeping
+   all inference fully local, private, and offline.
+ email:
+ - barek2k2@gmail.com
+ executables: []
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - ".rspec"
+ - LICENSE.txt
+ - README.md
+ - Rakefile
+ - lib/local_llm.rb
+ - lib/local_llm/version.rb
+ - sig/local_llm.rbs
+ homepage:
+ licenses:
+ - MIT
+ metadata:
+   allowed_push_host: https://rubygems.org
+ post_install_message:
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: 3.0.0
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubygems_version: 3.2.22
+ signing_key:
+ specification_version: 4
+ summary: Ruby client for local LLMs via Ollama with streaming support
+ test_files: []