htm 0.0.14 → 0.0.15
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG.md +33 -0
- data/README.md +269 -79
- data/db/migrate/00003_create_file_sources.rb +5 -0
- data/db/migrate/00004_create_nodes.rb +17 -0
- data/db/migrate/00005_create_tags.rb +7 -0
- data/db/migrate/00006_create_node_tags.rb +2 -0
- data/db/migrate/00007_create_robot_nodes.rb +7 -0
- data/db/schema.sql +41 -29
- data/docs/api/yard/HTM/Configuration.md +54 -0
- data/docs/api/yard/HTM/Database.md +13 -10
- data/docs/api/yard/HTM/EmbeddingService.md +5 -1
- data/docs/api/yard/HTM/LongTermMemory.md +18 -277
- data/docs/api/yard/HTM/PropositionError.md +18 -0
- data/docs/api/yard/HTM/PropositionService.md +66 -0
- data/docs/api/yard/HTM/QueryCache.md +88 -0
- data/docs/api/yard/HTM/RobotGroup.md +481 -0
- data/docs/api/yard/HTM/SqlBuilder.md +108 -0
- data/docs/api/yard/HTM/TagService.md +4 -0
- data/docs/api/yard/HTM/Telemetry/NullInstrument.md +13 -0
- data/docs/api/yard/HTM/Telemetry/NullMeter.md +15 -0
- data/docs/api/yard/HTM/Telemetry.md +109 -0
- data/docs/api/yard/HTM/WorkingMemoryChannel.md +176 -0
- data/docs/api/yard/HTM.md +11 -23
- data/docs/api/yard/index.csv +102 -25
- data/docs/api/yard-reference.md +8 -0
- data/docs/assets/images/multi-provider-failover.svg +51 -0
- data/docs/assets/images/robot-group-architecture.svg +65 -0
- data/docs/database/README.md +3 -3
- data/docs/database/public.file_sources.svg +29 -21
- data/docs/database/public.node_tags.md +2 -0
- data/docs/database/public.node_tags.svg +53 -41
- data/docs/database/public.nodes.md +2 -0
- data/docs/database/public.nodes.svg +52 -40
- data/docs/database/public.robot_nodes.md +2 -0
- data/docs/database/public.robot_nodes.svg +30 -22
- data/docs/database/public.robots.svg +16 -12
- data/docs/database/public.tags.md +3 -0
- data/docs/database/public.tags.svg +41 -33
- data/docs/database/schema.json +66 -0
- data/docs/database/schema.svg +60 -48
- data/docs/development/index.md +13 -0
- data/docs/development/rake-tasks.md +1068 -0
- data/docs/getting-started/quick-start.md +144 -155
- data/docs/guides/adding-memories.md +2 -3
- data/docs/guides/context-assembly.md +185 -184
- data/docs/guides/getting-started.md +154 -148
- data/docs/guides/index.md +7 -0
- data/docs/guides/long-term-memory.md +60 -92
- data/docs/guides/mcp-server.md +617 -0
- data/docs/guides/multi-robot.md +249 -345
- data/docs/guides/recalling-memories.md +153 -163
- data/docs/guides/robot-groups.md +604 -0
- data/docs/guides/search-strategies.md +61 -58
- data/docs/guides/working-memory.md +103 -136
- data/docs/index.md +30 -26
- data/examples/robot_groups/robot_worker.rb +1 -2
- data/examples/robot_groups/same_process.rb +1 -4
- data/lib/htm/robot_group.rb +721 -0
- data/lib/htm/version.rb +1 -1
- data/lib/htm/working_memory_channel.rb +250 -0
- data/lib/htm.rb +2 -0
- data/mkdocs.yml +2 -0
- metadata +18 -9
- data/db/migrate/00009_add_working_memory_to_robot_nodes.rb +0 -12
- data/db/migrate/00010_add_soft_delete_to_associations.rb +0 -29
- data/db/migrate/00011_add_performance_indexes.rb +0 -21
- data/db/migrate/00012_add_tags_trigram_index.rb +0 -18
- data/db/migrate/00013_enable_lz4_compression.rb +0 -43
- data/examples/robot_groups/lib/robot_group.rb +0 -419
- data/examples/robot_groups/lib/working_memory_channel.rb +0 -140

@@ -0,0 +1,604 @@

# Robot Groups: Coordinated Multi-Robot Systems

Robot Groups extend HTM's [Hive Mind architecture](../architecture/hive-mind.md) by adding real-time coordination, shared working memory, and automatic failover capabilities. While the Hive Mind enables knowledge sharing through a shared long-term memory database, Robot Groups take this further by synchronizing active working memory across multiple robots in real time.

## Overview

A Robot Group is a coordinated collection of robots that:

- **Share Working Memory**: All members maintain identical active context
- **Sync in Real-Time**: PostgreSQL LISTEN/NOTIFY enables instant synchronization
- **Support Active/Passive Roles**: Active robots handle requests; passive robots stay warm for failover
- **Enable Dynamic Scaling**: Add or remove robots at runtime

![Robot Group Architecture](../assets/images/robot-group-architecture.svg)

## Relationship to Hive Mind

Robot Groups build on the Hive Mind architecture:

| Aspect | Hive Mind | Robot Groups |
|--------|-----------|--------------|
| **Long-Term Memory** | Shared globally | Shared globally |
| **Working Memory** | Per-robot (isolated) | Synchronized across group |
| **Synchronization** | Eventual (database reads) | Real-time (LISTEN/NOTIFY) |
| **Failover** | Manual | Automatic |
| **Scaling** | Independent robots | Coordinated group |

!!! info "When to Use Robot Groups"
    Use Robot Groups when you need:

    - **High availability** with instant failover
    - **Load balancing** across multiple robots
    - **Consistent context** across all team members
    - **Real-time collaboration** between robots

    For independent robots that only need shared knowledge (not synchronized context), the basic Hive Mind architecture is sufficient.

## Key Classes

### HTM::RobotGroup

Coordinates multiple robots with shared working memory and automatic failover.

```ruby
group = HTM::RobotGroup.new(
  name: 'customer-support',
  active: ['primary'],
  passive: ['standby'],
  max_tokens: 8000
)
```

**Key Methods:**

| Method | Description |
|--------|-------------|
| `remember(content, originator:)` | Add to shared working memory |
| `recall(query, **options)` | Search shared working memory |
| `add_active(robot_name)` | Add an active robot |
| `add_passive(robot_name)` | Add a passive (standby) robot |
| `promote(robot_name)` | Promote a passive robot to active |
| `demote(robot_name)` | Demote an active robot to passive |
| `failover!` | Promote the first passive robot |
| `status` | Get group status |
| `shutdown` | Stop listening and clean up |

### HTM::WorkingMemoryChannel

Low-level PostgreSQL LISTEN/NOTIFY pub/sub for real-time synchronization.

```ruby
channel = HTM::WorkingMemoryChannel.new('group-name', db_config)

channel.on_change do |event, node_id, robot_id|
  # Handle :added, :evicted, or :cleared events
end

channel.start_listening
channel.notify(:added, node_id: 123, robot_id: 1)
channel.stop_listening
```
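
PostgreSQL limits a NOTIFY payload to roughly 8 KB by default, so a channel like this typically broadcasts only the event name and small identifiers, and receivers fetch the full row themselves. A minimal sketch of such a payload format (the `encode_event`/`decode_event` helpers are hypothetical, not part of HTM's API):

```ruby
require 'json'

# Hypothetical payload helpers: broadcast only the event name and IDs,
# never full memory content, to stay well under NOTIFY's payload limit.
def encode_event(event, node_id:, robot_id:)
  JSON.generate(event: event, node_id: node_id, robot_id: robot_id)
end

def decode_event(payload)
  data = JSON.parse(payload, symbolize_names: true)
  [data[:event].to_sym, data[:node_id], data[:robot_id]]
end

payload = encode_event(:added, node_id: 123, robot_id: 1)
event, node_id, robot_id = decode_event(payload)
# event == :added, node_id == 123, robot_id == 1
```

Keeping payloads ID-only also means a dropped notification can always be repaired by re-reading the database, which stays the source of truth.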

## Basic Usage

### Creating a Robot Group

```ruby
require 'htm'

# Configure HTM
HTM.configure do |config|
  config.embedding_provider = :ollama
  config.embedding_model = 'nomic-embed-text'
  config.tag_provider = :ollama
  config.tag_model = 'llama3'
end

# Create a robot group with active and passive members
group = HTM::RobotGroup.new(
  name: 'support-team',
  active: ['agent-primary'],
  passive: ['agent-standby'],
  max_tokens: 8000
)

# Add memories to shared working memory
group.remember("Customer #123 prefers email over phone.")
group.remember("Open ticket #456 about billing issue.")

# All members (active and passive) now have these memories
# in their synchronized working memory

# Query shared context
results = group.recall('customer preferences', limit: 5, strategy: :fulltext)

# Check group status
status = group.status
puts "Active: #{status[:active].join(', ')}"
puts "Passive: #{status[:passive].join(', ')}"
puts "Working memory: #{status[:working_memory_nodes]} nodes"
puts "In sync: #{status[:in_sync]}"

# Clean up when done
group.shutdown
```

## Scaling Patterns

### Horizontal Scaling

Add more active robots to handle increased load:

```ruby
group = HTM::RobotGroup.new(
  name: 'high-traffic-service',
  active: ['worker-1'],
  max_tokens: 16000
)

# Initial setup
group.remember("Service configuration loaded.")

# Scale up as traffic increases
group.add_active('worker-2')
group.add_active('worker-3')

# New workers automatically sync existing working memory
puts group.status[:active] # => ['worker-1', 'worker-2', 'worker-3']

# All workers share the same context
# Requests can be load-balanced across any active worker

# Scale down when traffic decreases
group.remove('worker-3')
group.remove('worker-2')
```
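
Because every active worker holds identical context, request routing can be completely stateless. A minimal round-robin dispatcher sketch (plain Ruby; the class and worker names are illustrative, not an HTM API):

```ruby
# Hypothetical round-robin dispatcher over a group's active workers.
# Any worker can serve any request, so rotation alone balances load.
class RoundRobinDispatcher
  def initialize(workers)
    @workers = workers
    @index = 0
  end

  # Returns the worker that should handle the next request.
  def next_worker
    worker = @workers[@index % @workers.size]
    @index += 1
    worker
  end
end

dispatcher = RoundRobinDispatcher.new(['worker-1', 'worker-2', 'worker-3'])
dispatcher.next_worker  # => 'worker-1'
dispatcher.next_worker  # => 'worker-2'
dispatcher.next_worker  # => 'worker-3'
dispatcher.next_worker  # => 'worker-1'
```

In production you would layer health checks on top, but no session affinity is needed: shared working memory removes the usual reason for sticky routing.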

### Dynamic Scaling Example

```ruby
class AutoScalingRobotPool
  def initialize(group_name, min_workers: 1, max_workers: 10)
    @group = HTM::RobotGroup.new(
      name: group_name,
      active: ["#{group_name}-worker-1"],
      max_tokens: 8000
    )
    @min_workers = min_workers
    @max_workers = max_workers
    @worker_count = 1
  end

  def scale_up
    return if @worker_count >= @max_workers

    @worker_count += 1
    worker_name = "#{@group.name}-worker-#{@worker_count}"
    @group.add_active(worker_name)
    puts "Scaled up: added #{worker_name}"
  end

  def scale_down
    return if @worker_count <= @min_workers

    worker_name = "#{@group.name}-worker-#{@worker_count}"
    @group.remove(worker_name)
    @worker_count -= 1
    puts "Scaled down: removed #{worker_name}"
  end

  def handle_request(query)
    # All active workers share context, so any can handle the request
    @group.recall(query, strategy: :hybrid, limit: 10)
  end

  def shutdown
    @group.shutdown
  end
end

# Usage
pool = AutoScalingRobotPool.new('api-service', min_workers: 2, max_workers: 5)
pool.scale_up # Now 2 workers
pool.scale_up # Now 3 workers

results = pool.handle_request('recent user activity')
pool.shutdown
```

## High Availability & Hot Standby

### Basic Failover

```ruby
group = HTM::RobotGroup.new(
  name: 'critical-service',
  active: ['primary'],
  passive: ['standby-1', 'standby-2'],
  max_tokens: 8000
)

# Primary handles requests, standbys stay synchronized
group.remember("Critical configuration loaded.")
group.remember("User session context established.")

# When primary fails...
puts "Primary failed! Initiating failover..."

# Instant failover - standby already has full context
promoted = group.failover!
puts "#{promoted} is now active"

# Service continues with zero context loss
results = group.recall('session', strategy: :fulltext)
puts "Context preserved: #{results.length} memories available"
```
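
`failover!` performs the promotion, but something still has to decide *when* to call it. A common pattern is a watchdog that triggers failover only after several consecutive failed health checks, to avoid flapping on a single timeout. A minimal sketch (the `FailoverWatchdog` class and its injectable `health_check` callable are hypothetical, not part of HTM):

```ruby
# Hypothetical watchdog: triggers failover after `threshold` consecutive
# failed health checks. The health_check callable is supplied by the caller.
class FailoverWatchdog
  def initialize(health_check, threshold: 3)
    @health_check = health_check
    @threshold = threshold
    @failures = 0
  end

  # Run one check cycle; returns :failover once the threshold is crossed.
  def tick
    if @health_check.call
      @failures = 0
      :healthy
    else
      @failures += 1
      @failures >= @threshold ? :failover : :degraded
    end
  end
end

healthy = true
watchdog = FailoverWatchdog.new(-> { healthy }, threshold: 2)
watchdog.tick   # => :healthy
healthy = false
watchdog.tick   # => :degraded
watchdog.tick   # => :failover  (now call group.failover!)
```

Run `tick` on a timer (e.g. every few seconds) and call `group.failover!` when it returns `:failover`; because standbys are already warm, the promotion itself is instant.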

### Multi-Provider Hot Standby

**Robots in a group do NOT need to share the same LLM provider or model.** This enables powerful disaster recovery patterns where your hot standby uses a completely different provider.

!!! warning "Provider-Specific Prompts Required"
    When using different providers/models in the same group, remember that **prompts must be tailored to each provider**. Different models respond differently to the same prompt. The robots share *memory context*, not *prompt templates*.

```ruby
# Example: OpenAI primary with Anthropic standby
class MultiProviderRobotGroup
  # Expose the group so callers can work with shared memory directly
  attr_reader :group

  def initialize(group_name)
    @group_name = group_name

    # Create the robot group
    @group = HTM::RobotGroup.new(
      name: group_name,
      max_tokens: 8000
    )

    # Primary uses OpenAI
    setup_openai_robot('primary-openai')
    @group.add_active('primary-openai')

    # Hot standby uses Anthropic (different provider!)
    setup_anthropic_robot('standby-anthropic')
    @group.add_passive('standby-anthropic')

    # Optional: Third standby on Gemini
    setup_gemini_robot('standby-gemini')
    @group.add_passive('standby-gemini')
  end

  # Provider-specific prompt templates
  PROMPT_TEMPLATES = {
    openai: {
      system: "You are a helpful assistant. Be concise and direct.",
      format: :json
    },
    anthropic: {
      system: "You are Claude, an AI assistant. Think step by step.",
      format: :xml
    },
    gemini: {
      system: "Provide helpful, well-structured responses.",
      format: :markdown
    }
  }

  def generate_response(query)
    # Get current active robot's provider
    active_name = @group.active_robot_names.first
    provider = detect_provider(active_name)

    # Use provider-specific prompt template
    template = PROMPT_TEMPLATES[provider]

    # Recall shared context (same for all providers)
    context = @group.recall(query, strategy: :hybrid, limit: 10)

    # Build provider-specific prompt
    prompt = build_prompt(query, context, template)

    # Call the appropriate LLM
    call_llm(provider, prompt)
  end

  def failover!
    promoted = @group.failover!
    provider = detect_provider(promoted)
    puts "Failover complete: now using #{provider} (#{promoted})"

    # Context is preserved, but prompts will now use the new provider's template
    promoted
  end

  private

  def setup_openai_robot(name)
    # OpenAI-specific configuration
    # Note: Each robot can use different LLM settings
  end

  def setup_anthropic_robot(name)
    # Anthropic-specific configuration
  end

  def setup_gemini_robot(name)
    # Gemini-specific configuration
  end

  def detect_provider(robot_name)
    case robot_name
    when /openai/i then :openai
    when /anthropic|claude/i then :anthropic
    when /gemini|google/i then :gemini
    else :openai
    end
  end

  def build_prompt(query, context, template)
    # Build provider-specific prompt
    # This is where you handle the differences between providers
    {
      system: template[:system],
      context: context.map { |c| c['content'] }.join("\n\n"),
      query: query,
      format: template[:format]
    }
  end

  def call_llm(provider, prompt)
    # Call the appropriate LLM API
    # Implementation depends on your RubyLLM configuration
  end
end

# Usage
service = MultiProviderRobotGroup.new('multi-provider-ha')

# Both OpenAI and Anthropic robots share the same working memory
service.group.remember("User prefers technical explanations.")
service.group.remember("Previous discussion was about PostgreSQL.")

# Primary (OpenAI) handles request with OpenAI-style prompt
response = service.generate_response("Explain ACID properties")

# If OpenAI goes down, failover to Anthropic
# Context is preserved, but prompts use Anthropic's style
service.failover!
response = service.generate_response("Continue the explanation")
```

### Provider Failover Strategies

![Multi-Provider Failover](../assets/images/multi-provider-failover.svg)

## Cross-Process Synchronization

For multi-process deployments (e.g., multiple servers, containers, or worker processes), use `HTM::WorkingMemoryChannel` directly:

```ruby
#!/usr/bin/env ruby
# worker.rb - Run multiple instances of this script

require 'htm'

worker_name = ARGV[0] || "worker-#{Process.pid}"
group_name = 'distributed-service'

# Configure HTM
HTM.configure do |config|
  config.embedding_provider = :ollama
  config.embedding_model = 'nomic-embed-text'
end

# Create HTM instance for this worker
htm = HTM.new(robot_name: worker_name, working_memory_size: 8000)

# Setup channel for cross-process notifications
db_config = HTM::Database.default_config
channel = HTM::WorkingMemoryChannel.new(group_name, db_config)

# Track notifications received
channel.on_change do |event, node_id, origin_robot_id|
  next if origin_robot_id == htm.robot_id # Skip our own notifications

  case event
  when :added
    node = HTM::Models::Node.find_by(id: node_id)
    if node
      htm.working_memory.add_from_sync(
        id: node.id,
        content: node.content,
        token_count: node.token_count || 0,
        created_at: node.created_at
      )
      puts "[#{worker_name}] Synced node #{node_id} from another worker"
    end
  when :evicted
    htm.working_memory.remove_from_sync(node_id)
  when :cleared
    htm.working_memory.clear_from_sync
  end
end

channel.start_listening
puts "[#{worker_name}] Listening on channel: #{channel.channel_name}"

# Main loop - process commands
loop do
  print "#{worker_name}> "
  input = gets&.strip
  break if input.nil? || input == 'quit'

  case input
  when /^remember (.+)/
    content = $1
    node_id = htm.remember(content)
    channel.notify(:added, node_id: node_id, robot_id: htm.robot_id)
    puts "Remembered (node #{node_id}), notified other workers"

  when /^recall (.+)/
    query = $1
    results = htm.recall(query, limit: 5, strategy: :fulltext)
    puts "Found #{results.length} memories"
    results.each { |r| puts "  - #{r}" }

  when 'status'
    puts "Working memory: #{htm.working_memory.node_count} nodes"
    puts "Notifications received: #{channel.notifications_received}"

  else
    puts "Commands: remember <text>, recall <query>, status, quit"
  end
end

channel.stop_listening
puts "[#{worker_name}] Shutdown"
```

**Run multiple workers:**

```bash
# Terminal 1
ruby worker.rb worker-1

# Terminal 2
ruby worker.rb worker-2

# Terminal 3
ruby worker.rb worker-3
```

When any worker calls `remember`, all other workers instantly receive the notification and update their working memory.
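
One operational detail worth planning for: a LISTEN connection can drop (network blips, database restarts), and while it is down notifications are silently missed. Listeners typically reconnect with exponential backoff and then re-read the database to catch up. A sketch of the backoff schedule only (the `backoff_delays` helper is hypothetical, not an HTM API):

```ruby
# Hypothetical reconnect backoff for a dropped LISTEN connection:
# delays grow exponentially from base_delay and are capped at max_delay.
def backoff_delays(attempts, base_delay: 0.5, max_delay: 30.0)
  (0...attempts).map { |n| [base_delay * (2**n), max_delay].min }
end

backoff_delays(7)
# => [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
```

After each successful reconnect, compare working memory against the database (the source of truth) rather than trusting that no notifications were lost during the outage.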

## Best Practices

### 1. Provider-Agnostic Memory, Provider-Specific Prompts

```ruby
# Memory is shared (provider-agnostic)
group.remember("User is a Ruby developer with 10 years experience.")

# Prompts are provider-specific
def build_prompt(context, query, provider)
  case provider
  when :openai
    # OpenAI prefers concise JSON output
    {
      messages: [
        { role: "system", content: "You are a helpful assistant. Be concise." },
        { role: "user", content: "Context:\n#{context}\n\nQuery: #{query}" }
      ],
      response_format: { type: "json_object" }
    }
  when :anthropic
    # Claude prefers XML structure and thinking aloud
    {
      system: "Think step by step. Use XML tags for structure.",
      messages: [
        { role: "user", content: "<context>#{context}</context>\n<query>#{query}</query>" }
      ]
    }
  when :gemini
    # Gemini handles markdown well
    {
      contents: [
        { role: "user", parts: [{ text: "## Context\n#{context}\n\n## Query\n#{query}" }] }
      ]
    }
  end
end
```
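
A related budgeting concern: every group member shares the same `max_tokens` working-memory budget, so context assembled for a prompt must be trimmed to fit. One simple policy is to keep the most recent memories that fit the budget. A sketch (the `fit_to_budget` helper and its hash shape are hypothetical, not HTM's API):

```ruby
# Hypothetical context trimmer: keep the most recent memories that fit
# within the shared token budget, preserving their original order.
# Input is oldest-first; newest entries are considered first.
def fit_to_budget(memories, max_tokens)
  kept = []
  used = 0
  memories.reverse_each do |memory|
    cost = memory[:token_count]
    break if used + cost > max_tokens
    used += cost
    kept.unshift(memory)
  end
  kept
end

memories = [
  { content: "old note",    token_count: 5000 },
  { content: "recent note", token_count: 3000 },
  { content: "newest note", token_count: 4000 }
]
fit_to_budget(memories, 8000).map { |m| m[:content] }
# => ["recent note", "newest note"]
```

Greedy newest-first trimming is only one policy; relevance-weighted selection (e.g. ranking by recall score before trimming) works with the same budget check.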

### 2. Monitor Sync Status

```ruby
# Regular health checks
Thread.new do
  loop do
    sleep 60 # Check every minute

    unless group.in_sync?
      puts "WARNING: Group out of sync, initiating sync..."
      result = group.sync_all
      puts "Synced #{result[:synced_nodes]} nodes to #{result[:members_updated]} members"
    end

    stats = group.sync_stats
    puts "Sync stats: #{stats[:nodes_synced]} nodes, #{stats[:evictions_synced]} evictions"
  end
end
```

### 3. Graceful Shutdown

```ruby
# Handle shutdown signals
at_exit do
  puts "Shutting down robot group..."
  group.shutdown
end

Signal.trap("INT") do
  puts "\nReceived interrupt, shutting down..."
  exit
end
```

### 4. Use Passive Robots for Redundancy

```ruby
# Always have at least one passive robot for failover
group = HTM::RobotGroup.new(
  name: 'production-service',
  active: ['primary'],
  passive: ['standby-1', 'standby-2'], # Multiple standbys
  max_tokens: 8000
)

# Passive robots are "warm" - they have full context
# Failover is instant with no context loss
```

## Demo Applications

HTM includes working examples in `examples/robot_groups/`:

### same_process.rb

Single-process demo showing:

- Group creation with active/passive robots
- Shared memory operations
- Failover simulation
- Dynamic scaling
- Real-time sync via LISTEN/NOTIFY

```bash
HTM_DBURL="postgresql://user@localhost:5432/htm_dev" ruby examples/robot_groups/same_process.rb
```

### multi_process.rb

Multi-process demo showing:

- Spawning robot workers as separate processes
- Cross-process synchronization
- Failover when a process dies
- Dynamic scaling by spawning new processes

```bash
HTM_DBURL="postgresql://user@localhost:5432/htm_dev" ruby examples/robot_groups/multi_process.rb
```

### robot_worker.rb

Standalone worker process that:

- Receives JSON commands via stdin
- Sends JSON responses via stdout
- Participates in robot group via LISTEN/NOTIFY

## Related Documentation

- [Hive Mind Architecture](../architecture/hive-mind.md) - Foundation for shared memory
- [Multi-Robot Usage](multi-robot.md) - Basic multi-robot patterns
- [Working Memory](working-memory.md) - How working memory operates
- [API Reference: RobotGroup](../api/yard/HTM/RobotGroup.md) - Complete API documentation
- [API Reference: WorkingMemoryChannel](../api/yard/HTM/WorkingMemoryChannel.md) - Low-level pub/sub API