htm 0.0.18 → 0.0.20
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG.md +59 -1
- data/README.md +12 -0
- data/db/seeds.rb +1 -1
- data/docs/api/embedding-service.md +140 -110
- data/docs/api/yard/HTM/ActiveRecordConfig.md +6 -0
- data/docs/api/yard/HTM/Config.md +173 -0
- data/docs/api/yard/HTM/ConfigSection.md +28 -0
- data/docs/api/yard/HTM/Database.md +1 -1
- data/docs/api/yard/HTM/Railtie.md +2 -2
- data/docs/api/yard/HTM.md +0 -57
- data/docs/api/yard/index.csv +76 -61
- data/docs/api/yard-reference.md +2 -1
- data/docs/architecture/adrs/003-ollama-embeddings.md +45 -36
- data/docs/architecture/adrs/004-hive-mind.md +1 -1
- data/docs/architecture/adrs/008-robot-identification.md +1 -1
- data/docs/architecture/index.md +11 -9
- data/docs/architecture/overview.md +11 -7
- data/docs/assets/images/balanced-strategy-decay.svg +41 -0
- data/docs/assets/images/class-hierarchy.svg +1 -1
- data/docs/assets/images/eviction-priority.svg +43 -0
- data/docs/assets/images/exception-hierarchy.svg +2 -2
- data/docs/assets/images/hive-mind-shared-memory.svg +52 -0
- data/docs/assets/images/htm-architecture-overview.svg +3 -3
- data/docs/assets/images/htm-core-components.svg +4 -4
- data/docs/assets/images/htm-layered-architecture.svg +1 -1
- data/docs/assets/images/htm-memory-addition-flow.svg +2 -2
- data/docs/assets/images/htm-memory-recall-flow.svg +2 -2
- data/docs/assets/images/memory-topology.svg +53 -0
- data/docs/assets/images/two-tier-memory-architecture.svg +55 -0
- data/docs/development/setup.md +76 -44
- data/docs/examples/basic-usage.md +133 -0
- data/docs/examples/config-files.md +170 -0
- data/docs/examples/file-loading.md +208 -0
- data/docs/examples/index.md +116 -0
- data/docs/examples/llm-configuration.md +168 -0
- data/docs/examples/mcp-client.md +172 -0
- data/docs/examples/rails-integration.md +173 -0
- data/docs/examples/robot-groups.md +210 -0
- data/docs/examples/sinatra-integration.md +218 -0
- data/docs/examples/standalone-app.md +216 -0
- data/docs/examples/telemetry.md +224 -0
- data/docs/examples/timeframes.md +143 -0
- data/docs/getting-started/installation.md +97 -40
- data/docs/getting-started/quick-start.md +28 -11
- data/docs/guides/configuration.md +515 -0
- data/docs/guides/file-loading.md +322 -0
- data/docs/guides/getting-started.md +40 -9
- data/docs/guides/index.md +3 -3
- data/docs/guides/mcp-server.md +30 -12
- data/docs/guides/propositions.md +264 -0
- data/docs/guides/recalling-memories.md +4 -4
- data/docs/guides/search-strategies.md +3 -3
- data/docs/guides/tags.md +318 -0
- data/docs/guides/telemetry.md +229 -0
- data/docs/index.md +8 -16
- data/docs/{architecture → robots}/hive-mind.md +8 -111
- data/docs/robots/index.md +73 -0
- data/docs/{guides → robots}/multi-robot.md +3 -3
- data/docs/{guides → robots}/robot-groups.md +8 -7
- data/docs/{architecture → robots}/two-tier-memory.md +13 -149
- data/docs/robots/why-robots.md +85 -0
- data/lib/htm/config/defaults.yml +4 -4
- data/lib/htm/config.rb +2 -2
- data/lib/htm/job_adapter.rb +75 -1
- data/lib/htm/version.rb +1 -1
- data/lib/htm/workflows/remember_workflow.rb +212 -0
- data/lib/htm.rb +1 -0
- data/mkdocs.yml +33 -8
- metadata +60 -7
- data/docs/api/yard/HTM/Configuration.md +0 -240
- data/docs/telemetry.md +0 -391
--- /dev/null
+++ b/data/docs/examples/file-loading.md
@@ -0,0 +1,208 @@

# File Loading Example

This example demonstrates loading markdown files into HTM's long-term memory with automatic chunking, YAML frontmatter extraction, and source tracking.

**Source:** [`examples/file_loader_usage.rb`](https://github.com/madbomber/htm/blob/main/examples/file_loader_usage.rb)

## Overview

The file loading example shows:

- Loading single markdown files
- Loading directories with glob patterns
- YAML frontmatter extraction
- Querying nodes from loaded files
- Re-sync behavior for changed files
- Unloading files from memory

## Running the Example

```bash
export HTM_DATABASE__URL="postgresql://user@localhost:5432/htm_development"
ruby examples/file_loader_usage.rb
```

## Code Walkthrough

### Loading a Single File

```ruby
htm = HTM.new(robot_name: "FileLoaderDemo")

# Load a markdown file
result = htm.load_file("docs/guide.md")
# => {
#   file_source_id: 1,
#   chunks_created: 5,
#   chunks_updated: 0,
#   skipped: false
# }
```

### YAML Frontmatter

Files with frontmatter have metadata extracted automatically:

```markdown
---
title: PostgreSQL Guide
author: HTM Team
tags:
  - database
  - postgresql
---

# PostgreSQL Guide

Content starts here...
```

Access frontmatter via FileSource:

```ruby
source = HTM::Models::FileSource.find(result[:file_source_id])
source.title             # => "PostgreSQL Guide"
source.author            # => "HTM Team"
source.frontmatter_tags  # => ["database", "postgresql"]
source.frontmatter       # => { "title" => "...", ... }
```
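Frontmatter splitting of the kind shown above can be sketched with Ruby's YAML standard library. This is an illustration only, not HTM's internal parser; `parse_frontmatter` is a hypothetical helper:

```ruby
require 'yaml'

# Split a markdown document into its YAML frontmatter (if any) and body.
# Returns [metadata_hash, body_string].
def parse_frontmatter(markdown)
  if markdown =~ /\A---\s*\n(.*?)\n---\s*\n(.*)\z/m
    [YAML.safe_load($1), $2]
  else
    [{}, markdown]
  end
end

doc = <<~MD
  ---
  title: PostgreSQL Guide
  tags:
    - database
  ---
  # PostgreSQL Guide
MD

meta, body = parse_frontmatter(doc)
meta["title"]  # => "PostgreSQL Guide"
meta["tags"]   # => ["database"]
```

Documents without a leading `---` block simply come back with an empty metadata hash.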

### Loading a Directory

```ruby
# Load all markdown files
results = htm.load_directory("docs/", pattern: "**/*.md")
# => [
#   { file_path: "docs/guide.md", chunks_created: 3, ... },
#   { file_path: "docs/api.md", chunks_created: 5, ... }
# ]

# Load with a specific pattern
results = htm.load_directory("docs/guides/", pattern: "*.md")
```

### Querying Loaded Files

```ruby
# Get all nodes from a specific file
nodes = htm.nodes_from_file("docs/guide.md")

nodes.each do |node|
  puts "#{node.id}: #{node.content[0..50]}..."
end
```

### Re-Sync Behavior

HTM tracks file modification times for efficient updates:

```ruby
# First load - creates chunks
htm.load_file("docs/guide.md")
# => { skipped: false, chunks_created: 5 }

# Second load - skipped (unchanged)
htm.load_file("docs/guide.md")
# => { skipped: true }

# After editing the file - re-syncs
htm.load_file("docs/guide.md")
# => { skipped: false, chunks_updated: 2, chunks_created: 1 }

# Force reload
htm.load_file("docs/guide.md", force: true)
```
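The skip decision amounts to an mtime comparison. A minimal sketch of that idea (illustration only; HTM's real tracking lives in its FileSource records, and `last_loaded_at` is a hypothetical value here):

```ruby
# Minimal sketch of mtime-based re-sync: reload only when the file on
# disk is newer than the timestamp recorded at the previous load.
def needs_resync?(path, last_loaded_at, force: false)
  return true if force
  return true if last_loaded_at.nil?   # never loaded before
  File.mtime(path) > last_loaded_at    # changed since last load
end
```

A first load always syncs (`last_loaded_at` is nil), an unchanged file is skipped, and `force: true` bypasses the check entirely.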

### Unloading Files

```ruby
# Soft-delete all chunks from a file
count = htm.unload_file("docs/guide.md")
puts "Removed #{count} chunks"
```

## Chunking Configuration

```ruby
HTM.configure do |config|
  config.chunk_size = 1024   # Characters per chunk (default)
  config.chunk_overlap = 64  # Overlap between chunks (default)
end
```

Or via environment variables:

```bash
export HTM_CHUNK_SIZE=512
export HTM_CHUNK_OVERLAP=50
```
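The size/overlap settings imply a sliding-window split. A minimal sketch of that scheme (illustration only; HTM's actual chunker is markdown-aware, and `chunk_text` is a hypothetical helper):

```ruby
# Sliding-window character chunking: each chunk is up to `size` characters
# and starts `size - overlap` characters after the previous one, so
# adjacent chunks share `overlap` characters of context.
def chunk_text(text, size: 1024, overlap: 64)
  step = size - overlap
  chunks = []
  (0...text.length).step(step) { |i| chunks << text[i, size] }
  chunks
end

chunks = chunk_text("a" * 2500, size: 1024, overlap: 64)
chunks.length        # => 3
chunks.first.length  # => 1024
```

With the defaults, a 2,500-character document yields three chunks starting at offsets 0, 960, and 1920.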

## Expected Output

```
HTM File Loader Example
============================================================

1. Configuring HTM with Ollama provider...
   Configured with Ollama provider

2. Initializing HTM...
   Robot: FileLoaderDemo (ID: 1)

3. Creating sample markdown files...
   Created: /tmp/htm_demo/postgresql_guide.md
   Created: /tmp/htm_demo/ruby_intro.md

4. Loading single file with frontmatter...
   File: postgresql_guide.md
   Source ID: 1
   Chunks created: 3
   Frontmatter title: PostgreSQL Guide
   Frontmatter author: HTM Team
   Frontmatter tags: database, postgresql

5. Loading directory...
   Files processed: 2
   - postgresql_guide.md: skipped
   - ruby_intro.md: 2 chunks

...

============================================================
Example completed successfully!
```

## Rake Tasks

```bash
# Load a single file
rake 'htm:files:load[docs/guide.md]'

# Load a directory
rake 'htm:files:load_dir[docs/]'
rake 'htm:files:load_dir[docs/,**/*.md]'

# List loaded files
rake htm:files:list

# Show file details
rake 'htm:files:info[docs/guide.md]'

# Unload a file
rake 'htm:files:unload[docs/guide.md]'

# Sync all files
rake htm:files:sync

# Show statistics
rake htm:files:stats

# Force reload
FORCE=true rake 'htm:files:load[docs/guide.md]'
```

## See Also

- [File Loading Guide](../guides/file-loading.md)
- [Basic Usage Example](basic-usage.md)
- [Markdown Chunking](../guides/file-loading.md#chunking-strategy)
--- /dev/null
+++ b/data/docs/examples/index.md
@@ -0,0 +1,116 @@

# Examples

HTM includes working example programs that demonstrate various features and integration patterns. These examples show real-world usage and can serve as templates for your own applications.

## Running Examples

All examples require the database to be configured:

```bash
export HTM_DATABASE__URL="postgresql://user@localhost:5432/htm_development"
```

Then run any example with:

```bash
ruby examples/<example_name>.rb
```

## Available Examples

### Core Usage

| Example | Description |
|---------|-------------|
| [Basic Usage](basic-usage.md) | Core HTM operations: remember, recall, forget |
| [LLM Configuration](llm-configuration.md) | Configure providers, custom embeddings, and tag extractors |
| [File Loading](file-loading.md) | Load markdown files with frontmatter and chunking |

### Advanced Features

| Example | Description |
|---------|-------------|
| [Timeframes](timeframes.md) | Natural language temporal queries |
| [Robot Groups](robot-groups.md) | Multi-robot coordination with shared memory |
| [MCP Client](mcp-client.md) | Interactive AI chat with memory tools |
| [Telemetry](telemetry.md) | Prometheus metrics and Grafana visualization |

## Quick Reference

### Basic Operations

```ruby
# Initialize HTM
htm = HTM.new(robot_name: "My Robot")

# Remember information
node_id = htm.remember("PostgreSQL supports vector search via pgvector.")

# Recall memories
results = htm.recall("database features", strategy: :hybrid, limit: 5)

# Forget a memory
htm.forget(node_id)
```

### Configuration

```ruby
HTM.configure do |config|
  # Use any RubyLLM-supported provider
  config.embedding.provider = :openai  # or :ollama, :anthropic, etc.
  config.embedding.model = 'text-embedding-3-small'

  config.tag.provider = :openai
  config.tag.model = 'gpt-4o-mini'
end
```

### File Loading

```ruby
# Load markdown files into memory
htm.load_file("docs/guide.md")
htm.load_directory("docs/", pattern: "**/*.md")

# Query nodes from a file
nodes = htm.nodes_from_file("docs/guide.md")
```

## Prerequisites

Most examples require:

1. **PostgreSQL** with the pgvector extension
2. **LLM provider** - Ollama (default), OpenAI, Anthropic, etc.
3. **Ruby 3.2+** with the HTM gem installed

### Ollama Setup (Default Provider)

```bash
# Install Ollama
curl https://ollama.ai/install.sh | sh

# Pull required models
ollama pull nomic-embed-text  # For embeddings
ollama pull gemma3:latest     # For tag extraction
```

### Using Cloud Providers

```bash
# OpenAI
export OPENAI_API_KEY="your-key"

# Anthropic
export ANTHROPIC_API_KEY="your-key"

# Google Gemini
export GEMINI_API_KEY="your-key"
```

## See Also

- [Getting Started Guide](../getting-started/quick-start.md)
- [API Reference](../api/htm.md)
- [Architecture Overview](../architecture/overview.md)
--- /dev/null
+++ b/data/docs/examples/llm-configuration.md
@@ -0,0 +1,168 @@

# LLM Configuration Example

This example demonstrates the various ways to configure LLM providers for embeddings and tag extraction in HTM.

**Source:** [`examples/custom_llm_configuration.rb`](https://github.com/madbomber/htm/blob/main/examples/custom_llm_configuration.rb)

## Overview

HTM uses RubyLLM for multi-provider LLM support. This example shows:

- Using the default configuration (Ollama)
- Custom lambda-based embeddings and tags
- Service object integration
- Mixed configuration patterns
- Provider-specific settings

## Running the Example

```bash
ruby examples/custom_llm_configuration.rb
```

## Configuration Patterns

### Default Configuration (Ollama)

```ruby
HTM.configure  # Uses defaults from config files

htm = HTM.new(robot_name: "DefaultBot")
# Uses: ollama/nomic-embed-text for embeddings
# Uses: ollama/gemma3:latest for tag extraction
```

### Custom Lambda Functions

```ruby
HTM.configure do |config|
  # Custom embedding generator
  config.embedding_generator = lambda do |text|
    # Your custom embedding logic
    # Must return Array<Float>
    MyEmbeddingService.embed(text)
  end

  # Custom tag extractor
  config.tag_extractor = lambda do |text, existing_ontology|
    # Your custom tag extraction logic
    # Must return Array<String>
    MyTagService.extract(text, ontology: existing_ontology)
  end
end
```
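To exercise the `Array<Float>` contract in isolation, here is a toy, deterministic embedding lambda. This is an illustration only - a real generator would call an embedding model, and the 1024-dimension choice is an assumption, not an HTM requirement:

```ruby
require 'digest'

DIM = 1024  # assumed vector size for this toy example

# Toy embedding generator: hashes character trigrams into a fixed-size
# vector and L2-normalizes it. Deterministic, but carries no semantic
# meaning - useful only for testing the Array<Float> contract.
toy_embedder = lambda do |text|
  vec = Array.new(DIM, 0.0)
  text.downcase.chars.each_cons(3) do |tri|
    vec[Digest::MD5.hexdigest(tri.join).to_i(16) % DIM] += 1.0
  end
  norm = Math.sqrt(vec.sum { |v| v * v })
  norm.zero? ? vec : vec.map { |v| v / norm }
end

embedding = toy_embedder.call("PostgreSQL supports vector search")
embedding.length  # => 1024
```

Because it is deterministic, the same text always yields the same vector, which makes it handy in test suites where network calls are unwanted.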

### Service Object Pattern

```ruby
class MyAppLLMService
  def self.embed(text)
    # Integrate with LangChain, LlamaIndex, or custom infrastructure
    # Returns the embedding vector as Array<Float>
    Array.new(1024) { rand }
  end

  def self.extract_tags(text, ontology)
    # Returns an array of hierarchical tag strings
    ['app:feature:memory', 'app:component:llm']
  end
end

HTM.configure do |config|
  config.embedding_generator = ->(text) { MyAppLLMService.embed(text) }
  config.tag_extractor = ->(text, ontology) { MyAppLLMService.extract_tags(text, ontology) }
end
```

### Provider Configuration

```ruby
HTM.configure do |config|
  # Configure the embedding provider
  config.embedding.provider = :openai
  config.embedding.model = 'text-embedding-3-small'
  config.embedding.dimensions = 1536

  # Configure the tag extraction provider
  config.tag.provider = :anthropic
  config.tag.model = 'claude-3-haiku-20240307'

  # Provider-specific settings
  config.providers.ollama.url = 'http://localhost:11434'
  config.providers.openai.api_key = ENV['OPENAI_API_KEY']
end
```

### Mixed Configuration

Use a custom embedding with the default tag extraction:

```ruby
HTM.configure do |config|
  # Custom embedding
  config.embedding_generator = ->(text) {
    MyCustomEmbedder.generate(text)
  }

  # Keep the default RubyLLM-based tag extraction
  # (uses the configured tag.provider and tag.model)
end
```

## Supported Providers

HTM uses RubyLLM, which supports:

| Provider | Embedding Models | Chat Models |
|----------|-----------------|-------------|
| Ollama (default) | `nomic-embed-text`, `mxbai-embed-large` | `gemma3`, `llama3`, `mistral` |
| OpenAI | `text-embedding-3-small`, `text-embedding-3-large` | `gpt-4o-mini`, `gpt-4o` |
| Anthropic | - | `claude-3-haiku`, `claude-3-sonnet` |
| Google Gemini | `text-embedding-004` | `gemini-1.5-flash`, `gemini-1.5-pro` |
| Azure OpenAI | Same as OpenAI | Same as OpenAI |
| HuggingFace | Various | Various |
| AWS Bedrock | Titan, Cohere | Claude, Llama |
| DeepSeek | - | `deepseek-chat` |

## Environment Variables

```bash
# Ollama (default)
export OLLAMA_URL="http://localhost:11434"

# OpenAI
export OPENAI_API_KEY="sk-..."
export OPENAI_ORGANIZATION="org-..."

# Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."

# Google Gemini
export GEMINI_API_KEY="..."

# Azure OpenAI
export AZURE_OPENAI_API_KEY="..."
export AZURE_OPENAI_ENDPOINT="https://....openai.azure.com/"
```

## Integration with HTM Operations

When you call `htm.remember()`, HTM uses your configured generators:

```ruby
HTM.configure do |config|
  config.embedding_generator = ->(text) {
    puts "Embedding: #{text[0..40]}..."
    MyService.embed(text)
  }
end

# This triggers your custom embedding generator
node_id = htm.remember("PostgreSQL supports vector search")
```

## See Also

- [Basic Usage Example](basic-usage.md)
- [Configuration Guide](../getting-started/quick-start.md)
- [EmbeddingService API](../api/embedding-service.md)
--- /dev/null
+++ b/data/docs/examples/mcp-client.md
@@ -0,0 +1,172 @@

# MCP Client Example

This example demonstrates using the HTM MCP (Model Context Protocol) server with an interactive AI chat interface.

**Source:** [`examples/mcp_client.rb`](https://github.com/madbomber/htm/blob/main/examples/mcp_client.rb)

## Overview

The MCP client example shows:

- Connecting to the HTM MCP server via STDIO transport
- Interactive chat with tool calling
- Memory operations through natural language
- Session persistence and restoration
- Available tools and resources

## Prerequisites

```bash
# Install the MCP client gem
gem install ruby_llm-mcp

# Have Ollama running with a chat model
ollama pull gpt-oss  # or llama3, mistral, etc.

# Set the database connection
export HTM_DATABASE__URL="postgresql://user@localhost:5432/htm_development"
```

## Running the Example

```bash
ruby examples/mcp_client.rb
```

## Code Walkthrough

### MCP Client Setup

```ruby
# Configure RubyLLM for Ollama
RubyLLM.configure do |config|
  config.ollama_api_base = "http://localhost:11434/v1"
end

# Connect to the HTM MCP server
@mcp_client = RubyLLM::MCP.client(
  name: 'htm-memory',
  transport_type: :stdio,
  request_timeout: 60_000,
  config: {
    command: RbConfig.ruby,
    args: ['bin/htm_mcp'],
    env: {
      'HTM_DATABASE__URL' => ENV['HTM_DATABASE__URL'],
      'OLLAMA_URL' => 'http://localhost:11434'
    }
  }
)
```

### Set Robot Identity

```ruby
set_robot_tool = @tools.find { |t| t.name == 'SetRobotTool' }
result = set_robot_tool.call(name: "My Assistant")
```

### Chat with Tools

```ruby
@chat = RubyLLM.chat(
  model: 'gpt-oss:latest',
  provider: :ollama,
  assume_model_exists: true
)

# Attach MCP tools to the chat
@chat.with_tools(*@tools)

# Natural language interactions
response = @chat.ask("Remember that the API rate limit is 1000 requests per minute")
# The LLM calls RememberTool automatically

response = @chat.ask("What do you know about databases?")
# The LLM calls RecallTool and summarizes the results
```

### Session Restoration

The client can restore previous session context:

```ruby
get_wm_tool = @tools.find { |t| t.name == 'GetWorkingMemoryTool' }
result = get_wm_tool.call({})

if result['count'] > 0
  # Collect the returned memory contents before rebuilding the context
  # (assumes the tool result includes a "memories" array, as RecallTool
  # returns in the Example Interactions)
  memories = result['memories'].map { |m| m['content'] }
  @chat.add_message(
    role: :user,
    content: "Previous session context: #{memories.join("\n")}"
  )
end
```
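Restoring many memories at once can exceed the chat model's context window. One way to guard against that is to cap the restored text with a small helper (a sketch only - the 4,000-character budget is an arbitrary assumption, and `bounded_context` is a hypothetical helper, not part of HTM):

```ruby
# Join memory strings into a single context blob, stopping once a
# character budget would be exceeded. Memories beyond the budget are
# simply dropped.
def bounded_context(memories, budget: 4000)
  out = []
  used = 0
  memories.each do |m|
    break if used + m.length + 1 > budget  # +1 for the joining newline
    out << m
    used += m.length + 1
  end
  out.join("\n")
end
```

The truncated blob can then be passed as the `content:` of the restored context message instead of the unbounded join.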

## Available Commands

| Command | Description |
|---------|-------------|
| `/tools` | List available MCP tools |
| `/resources` | List available MCP resources |
| `/stats` | Show memory statistics |
| `/tags` | List all tags |
| `/clear` | Clear chat history |
| `/help` | Show help |
| `/exit` | Quit |

## Available MCP Tools

| Tool | Description |
|------|-------------|
| `SetRobotTool` | Set the current robot identity |
| `RememberTool` | Store information in memory |
| `RecallTool` | Query memories by topic |
| `ForgetTool` | Delete a memory by ID |
| `ListTagsTool` | List all hierarchical tags |
| `StatsTool` | Show memory statistics |
| `GetWorkingMemoryTool` | Get current working memory |

## Example Interactions

```
you> Remember that the PostgreSQL connection string is in the DATABASE_URL env var

[Tool Call] RememberTool
  Arguments: {content: "PostgreSQL connection string is in DATABASE_URL env var"}
[Tool Result] RememberTool
  Result: {"success": true, "node_id": 42}

Assistant> I've stored that information about the PostgreSQL connection string.

you> What do you know about databases?

[Tool Call] RecallTool
  Arguments: {topic: "databases", limit: 5}
[Tool Result] RecallTool
  Result: {"memories": [...]}

Assistant> Based on my memories, I know that:
1. PostgreSQL connection string is stored in DATABASE_URL env var
2. PostgreSQL supports vector search via pgvector
...
```

## Configuration

```bash
# Use a different Ollama model
export OLLAMA_MODEL="llama3:latest"

# Use a different Ollama URL
export OLLAMA_URL="http://192.168.1.100:11434"

# Set the robot name via the environment
export HTM_ROBOT_NAME="My Custom Bot"
```

## See Also

- [MCP Server Guide](../guides/mcp-server.md)
- [HTM API Reference](../api/htm.md)
- [RubyLLM-MCP Documentation](https://github.com/contextco/ruby_llm-mcp)