htm 0.0.18 → 0.0.20
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG.md +59 -1
- data/README.md +12 -0
- data/db/seeds.rb +1 -1
- data/docs/api/embedding-service.md +140 -110
- data/docs/api/yard/HTM/ActiveRecordConfig.md +6 -0
- data/docs/api/yard/HTM/Config.md +173 -0
- data/docs/api/yard/HTM/ConfigSection.md +28 -0
- data/docs/api/yard/HTM/Database.md +1 -1
- data/docs/api/yard/HTM/Railtie.md +2 -2
- data/docs/api/yard/HTM.md +0 -57
- data/docs/api/yard/index.csv +76 -61
- data/docs/api/yard-reference.md +2 -1
- data/docs/architecture/adrs/003-ollama-embeddings.md +45 -36
- data/docs/architecture/adrs/004-hive-mind.md +1 -1
- data/docs/architecture/adrs/008-robot-identification.md +1 -1
- data/docs/architecture/index.md +11 -9
- data/docs/architecture/overview.md +11 -7
- data/docs/assets/images/balanced-strategy-decay.svg +41 -0
- data/docs/assets/images/class-hierarchy.svg +1 -1
- data/docs/assets/images/eviction-priority.svg +43 -0
- data/docs/assets/images/exception-hierarchy.svg +2 -2
- data/docs/assets/images/hive-mind-shared-memory.svg +52 -0
- data/docs/assets/images/htm-architecture-overview.svg +3 -3
- data/docs/assets/images/htm-core-components.svg +4 -4
- data/docs/assets/images/htm-layered-architecture.svg +1 -1
- data/docs/assets/images/htm-memory-addition-flow.svg +2 -2
- data/docs/assets/images/htm-memory-recall-flow.svg +2 -2
- data/docs/assets/images/memory-topology.svg +53 -0
- data/docs/assets/images/two-tier-memory-architecture.svg +55 -0
- data/docs/development/setup.md +76 -44
- data/docs/examples/basic-usage.md +133 -0
- data/docs/examples/config-files.md +170 -0
- data/docs/examples/file-loading.md +208 -0
- data/docs/examples/index.md +116 -0
- data/docs/examples/llm-configuration.md +168 -0
- data/docs/examples/mcp-client.md +172 -0
- data/docs/examples/rails-integration.md +173 -0
- data/docs/examples/robot-groups.md +210 -0
- data/docs/examples/sinatra-integration.md +218 -0
- data/docs/examples/standalone-app.md +216 -0
- data/docs/examples/telemetry.md +224 -0
- data/docs/examples/timeframes.md +143 -0
- data/docs/getting-started/installation.md +97 -40
- data/docs/getting-started/quick-start.md +28 -11
- data/docs/guides/configuration.md +515 -0
- data/docs/guides/file-loading.md +322 -0
- data/docs/guides/getting-started.md +40 -9
- data/docs/guides/index.md +3 -3
- data/docs/guides/mcp-server.md +30 -12
- data/docs/guides/propositions.md +264 -0
- data/docs/guides/recalling-memories.md +4 -4
- data/docs/guides/search-strategies.md +3 -3
- data/docs/guides/tags.md +318 -0
- data/docs/guides/telemetry.md +229 -0
- data/docs/index.md +8 -16
- data/docs/{architecture → robots}/hive-mind.md +8 -111
- data/docs/robots/index.md +73 -0
- data/docs/{guides → robots}/multi-robot.md +3 -3
- data/docs/{guides → robots}/robot-groups.md +8 -7
- data/docs/{architecture → robots}/two-tier-memory.md +13 -149
- data/docs/robots/why-robots.md +85 -0
- data/lib/htm/config/defaults.yml +4 -4
- data/lib/htm/config.rb +2 -2
- data/lib/htm/job_adapter.rb +75 -1
- data/lib/htm/version.rb +1 -1
- data/lib/htm/workflows/remember_workflow.rb +212 -0
- data/lib/htm.rb +1 -0
- data/mkdocs.yml +33 -8
- metadata +60 -7
- data/docs/api/yard/HTM/Configuration.md +0 -240
- data/docs/telemetry.md +0 -391
data/docs/getting-started/installation.md:

````diff
@@ -8,7 +8,7 @@ Before installing HTM, ensure you have:
 
 - **Ruby 3.0 or higher** - HTM requires modern Ruby features
 - **PostgreSQL 17+** - For the database backend
-- **
+- **LLM Provider** - For generating embeddings and tags (Ollama is the default for local development, but OpenAI, Anthropic, Gemini, and others are also supported via RubyLLM)
 
 ### Check Your Ruby Version
 
````
````diff
@@ -156,14 +156,29 @@ Expected output:
 !!! warning "Missing Extensions"
     If extensions are missing, you may need to install them. On Debian/Ubuntu: `sudo apt-get install postgresql-17-pgvector`. On macOS: `brew install pgvector`.
 
-## Step 4:
+## Step 4: Configure LLM Provider
 
-HTM uses
+HTM uses RubyLLM to generate vector embeddings and extract tags. RubyLLM supports multiple providers, allowing you to choose what works best for your use case.
 
-###
+### Supported Providers
 
-
+| Provider | Best For | API Key Required |
+|----------|----------|------------------|
+| **Ollama** (default) | Local development, privacy, no API costs | No |
+| **OpenAI** | Production, high-quality embeddings | Yes (`OPENAI_API_KEY`) |
+| **Anthropic** | Tag extraction with Claude models | Yes (`ANTHROPIC_API_KEY`) |
+| **Gemini** | Google Cloud integration | Yes (`GEMINI_API_KEY`) |
+| **Azure** | Enterprise Azure deployments | Yes (Azure credentials) |
+| **Bedrock** | AWS integration | Yes (AWS credentials) |
+| **DeepSeek** | Cost-effective alternative | Yes (`DEEPSEEK_API_KEY`) |
 
+### Option A: Ollama (Recommended for Local Development)
+
+Ollama runs locally with no API costs and keeps your data private.
+
+#### Install Ollama
+
+**macOS:**
 ```bash
 # Option 1: Direct download
 curl https://ollama.ai/install.sh | sh
````
````diff
@@ -172,17 +187,15 @@ curl https://ollama.ai/install.sh | sh
 brew install ollama
 ```
 
-
-
+**Linux:**
 ```bash
 curl https://ollama.ai/install.sh | sh
 ```
 
-
-
+**Windows:**
 Download the installer from [https://ollama.ai/download](https://ollama.ai/download)
 
-
+#### Start Ollama Service
 
 ```bash
 # Ollama typically starts automatically
````
````diff
@@ -190,43 +203,68 @@ Download the installer from [https://ollama.ai/download](https://ollama.ai/download)
 curl http://localhost:11434/api/version
 ```
 
-
+#### Pull Required Models
+
+```bash
+# Download embedding model
+ollama pull nomic-embed-text
+
+# Download tag extraction model
+ollama pull gemma3:latest
 
-
-
+# Verify models are available
+ollama list
 ```
 
-
+#### Configure Environment (Optional)
 
-
+If Ollama is running on a different host or port:
 
 ```bash
-
-ollama pull gpt-oss
-
-# Verify the model is available
-ollama list
+export OLLAMA_URL="http://custom-host:11434"
 ```
 
-
+### Option B: OpenAI (Recommended for Production)
 
-
+OpenAI provides high-quality embeddings with simple API access.
 
 ```bash
-#
-
+# Set your API key
+export OPENAI_API_KEY="sk-..."
 ```
 
-
+Configure HTM to use OpenAI:
 
-
+```ruby
+HTM.configure do |config|
+  config.embedding.provider = :openai
+  config.embedding.model = 'text-embedding-3-small'
+  config.tag.provider = :openai
+  config.tag.model = 'gpt-4o-mini'
+end
+```
+
+### Option C: Other Providers
+
+For Anthropic, Gemini, Azure, Bedrock, or DeepSeek, set the appropriate API key and configure HTM:
 
 ```bash
-
+# Example: Anthropic
+export ANTHROPIC_API_KEY="sk-ant-..."
+
+# Example: Gemini
+export GEMINI_API_KEY="..."
+```
+
+```ruby
+HTM.configure do |config|
+  config.tag.provider = :anthropic
+  config.tag.model = 'claude-3-haiku-20240307'
+end
 ```
 
-!!! tip "
-
+!!! tip "Mix and Match Providers"
+    You can use different providers for embeddings and tags. For example, use Ollama for local embedding generation and OpenAI for tag extraction.
 
 ## Step 5: Initialize HTM Database Schema
 
````
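The diff above adds a provider table in which every cloud provider requires a specific API key. As an illustration only (not part of the packaged docs), that table can be expressed as a lookup for failing fast before calling `HTM.configure`; `REQUIRED_KEYS` and `missing_key` below are hypothetical helpers, not HTM API, and Azure/Bedrock are omitted because they use credential sets rather than a single variable:

```ruby
# Hypothetical mapping of provider => required env var, mirroring the
# "Supported Providers" table added in the diff (sketch, not HTM code).
REQUIRED_KEYS = {
  ollama:    nil, # local - no API key needed
  openai:    "OPENAI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
  gemini:    "GEMINI_API_KEY",
  deepseek:  "DEEPSEEK_API_KEY"
}.freeze

# Returns the env var that still needs to be exported, or nil if the
# provider is ready to use.
def missing_key(provider, env = ENV)
  key = REQUIRED_KEYS.fetch(provider)
  return nil if key.nil? || env.key?(key)
  key
end

puts missing_key(:openai, {}).inspect
```

A check like this would surface a missing `OPENAI_API_KEY` before the first embedding request fails.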
````diff
@@ -297,10 +335,9 @@ puts "Testing HTM Installation..."
 # Initialize HTM
 htm = HTM.new(
   robot_name: "Test Robot",
-  working_memory_size: 128_000,
-  embedding_service: :ollama,
-  embedding_model: 'gpt-oss'
+  working_memory_size: 128_000
 )
+# Uses configured provider, or defaults to Ollama
 
 puts "✓ HTM initialized successfully"
 puts "  Robot ID: #{htm.robot_id}"
````
````diff
@@ -357,7 +394,10 @@ HTM uses the following environment variables:
 | `HTM_DATABASE__USER` | Database user | `postgres` | No |
 | `HTM_DATABASE__PASSWORD` | Database password | - | No |
 | `HTM_DATABASE__PORT` | Database port | `5432` | No |
-| `OLLAMA_URL` | Ollama API URL | `http://localhost:11434` | No |
+| `OLLAMA_URL` | Ollama API URL (if using Ollama) | `http://localhost:11434` | No |
+| `OPENAI_API_KEY` | OpenAI API key (if using OpenAI) | - | No |
+| `ANTHROPIC_API_KEY` | Anthropic API key (if using Anthropic) | - | No |
+| `GEMINI_API_KEY` | Gemini API key (if using Gemini) | - | No |
 
 ### Example Configuration File
 
````
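The environment-variable table above uses a double-underscore convention: `HTM_DATABASE__USER` names the `user` setting of the `database` section. A minimal sketch of that mapping, assuming only the naming convention (the `htm_env_to_config` helper is hypothetical, not HTM's actual loader):

```ruby
# Hypothetical helper: maps HTM_<SECTION>__<SETTING> variables into a
# nested hash, mirroring the double-underscore naming in the table.
def htm_env_to_config(env)
  env.each_with_object({}) do |(key, value), config|
    next unless key.start_with?("HTM_")
    section, _, setting = key.sub("HTM_", "").partition("__")
    next if setting.empty? # ignore keys without a __SETTING part
    (config[section.downcase] ||= {})[setting.downcase] = value
  end
end

puts htm_env_to_config(
  "HTM_DATABASE__USER" => "postgres",
  "HTM_DATABASE__PORT" => "5432",
  "PATH"               => "/usr/bin"
)
```

Non-`HTM_` variables such as `PATH` are ignored; everything else lands under its section key.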
````diff
@@ -366,7 +406,12 @@ Create a configuration file for easy loading:
 ```bash
 # ~/.bashrc__htm
 export HTM_DATABASE__URL="postgres://user:pass@host:port/db?sslmode=require"
+# Ollama (for local development)
 export OLLAMA_URL="http://localhost:11434"
+
+# Or use cloud providers:
+# export OPENAI_API_KEY="sk-..."
+# export ANTHROPIC_API_KEY="sk-ant-..."
 ```
 
 Load it in your shell:
````
````diff
@@ -398,11 +443,11 @@ pg_ctl status
 # Ensure URL includes: ?sslmode=require
 ```
 
-###
+### LLM Provider Connection Issues
 
-**Error**: `Connection refused
+**Error**: `Connection refused` (Ollama) or `API key invalid` (cloud providers)
 
-**Solutions**:
+**Solutions for Ollama**:
 
 ```bash
 # 1. Check if Ollama is running
````
````diff
@@ -415,8 +460,19 @@ curl http://localhost:11434/api/version
 killall ollama
 ollama serve
 
-# 4. Verify
-ollama list | grep
+# 4. Verify embedding model is installed
+ollama list | grep nomic-embed-text
+```
+
+**Solutions for Cloud Providers**:
+
+```bash
+# Verify API key is set
+echo $OPENAI_API_KEY
+echo $ANTHROPIC_API_KEY
+
+# Test API connectivity
+curl https://api.openai.com/v1/models -H "Authorization: Bearer $OPENAI_API_KEY"
 ```
 
 ### Missing Extensions
````
````diff
@@ -491,7 +547,8 @@ If you encounter issues:
 
 ## Additional Resources
 
-- **
+- **RubyLLM Documentation**: [https://rubyllm.com/](https://rubyllm.com/) - Multi-provider LLM interface
+- **Ollama Documentation**: [https://ollama.ai/](https://ollama.ai/) - Local LLM provider
+- **OpenAI API**: [https://platform.openai.com/docs/](https://platform.openai.com/docs/) - Cloud embeddings
 - **pgvector Documentation**: [https://github.com/pgvector/pgvector](https://github.com/pgvector/pgvector)
 - **PostgreSQL Documentation**: [https://www.postgresql.org/docs/](https://www.postgresql.org/docs/)
-- **RubyLLM Documentation**: [https://github.com/madbomber/ruby_llm](https://github.com/madbomber/ruby_llm)
````
data/docs/getting-started/quick-start.md:

````diff
@@ -123,10 +123,11 @@ puts "=" * 60
 Create an HTM instance for your robot:
 
 ```ruby
-# Configure HTM globally (optional -
+# Configure HTM globally (optional - defaults to Ollama for local development)
+# HTM uses RubyLLM which supports: :ollama, :openai, :anthropic, :gemini, :azure, :bedrock, :deepseek
 HTM.configure do |config|
-  config.embedding.provider = :ollama
-  config.embedding.model = 'nomic-embed-text
+  config.embedding.provider = :ollama  # or :openai, etc.
+  config.embedding.model = 'nomic-embed-text'  # provider-specific model
   config.tag.provider = :ollama
   config.tag.model = 'gemma3:latest'
 end
````
````diff
@@ -336,10 +337,11 @@ require 'htm'
 puts "My First HTM Application"
 puts "=" * 60
 
-# Step 1: Configure and initialize HTM
+# Step 1: Configure and initialize HTM (optional - uses Ollama by default)
+# Supports: :ollama, :openai, :anthropic, :gemini, :azure, :bedrock, :deepseek
 HTM.configure do |config|
   config.embedding.provider = :ollama
-  config.embedding.model = 'nomic-embed-text
+  config.embedding.model = 'nomic-embed-text'
   config.tag.provider = :ollama
   config.tag.model = 'gemma3:latest'
 end
````
````diff
@@ -520,10 +522,15 @@ htm = HTM.new(
   working_memory_size: 256_000  # 256k tokens
 )
 
-# Try different
+# Try different providers or models
 HTM.configure do |config|
-
-  config.embedding.
+  # Use OpenAI for production
+  config.embedding.provider = :openai
+  config.embedding.model = 'text-embedding-3-small'
+
+  # Or use Ollama locally with different model
+  # config.embedding.provider = :ollama
+  # config.embedding.model = 'mxbai-embed-large'
 end
 
 # Try different recall strategies
````
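The quick-start change above suggests OpenAI for production and Ollama locally. One common way to express that switch (illustration only; `pick_embedding_provider` is a hypothetical helper, not HTM API) is to prefer a cloud provider when its key is exported and fall back to the local defaults otherwise:

```ruby
# Hypothetical dev/prod switch: use OpenAI when OPENAI_API_KEY is set,
# otherwise fall back to the Ollama defaults from the quick start.
def pick_embedding_provider(env = ENV)
  key = env["OPENAI_API_KEY"]
  if key && !key.empty?
    { provider: :openai, model: "text-embedding-3-small" }
  else
    { provider: :ollama, model: "nomic-embed-text" }
  end
end

choice = pick_embedding_provider({})
puts "#{choice[:provider]} / #{choice[:model]}"
```

The resulting hash could then feed the `config.embedding.provider` / `config.embedding.model` settings shown in the diff.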
````diff
@@ -595,13 +602,21 @@
 
 ## Troubleshooting Quick Start
 
-### Issue: "Connection refused" error
+### Issue: "Connection refused" error (Ollama)
 
 **Solution**: Make sure Ollama is running:
 
 ```bash
 curl http://localhost:11434/api/version
-# If this fails, start Ollama
+# If this fails, start Ollama with: ollama serve
+```
+
+### Issue: "API key invalid" error (cloud providers)
+
+**Solution**: Verify your API key is set:
+
+```bash
+echo $OPENAI_API_KEY  # or ANTHROPIC_API_KEY, GEMINI_API_KEY, etc.
 ```
 
 ### Issue: "Database connection failed"
````
````diff
@@ -615,13 +630,15 @@ echo $HTM_DATABASE__URL
 
 ### Issue: Embeddings taking too long
 
-**Solution
+**Solution for Ollama**: Check the model is downloaded:
 
 ```bash
 ollama list | grep nomic-embed-text
 # Should show nomic-embed-text model
 ```
 
+**Solution for cloud providers**: Check your internet connection and API status.
+
 ### Issue: Memory not found during recall
 
 **Solution**: Check your timeframe. If you just added a memory, use a recent timeframe:
````
|