htm 0.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (155)
  1. checksums.yaml +7 -0
  2. data/.architecture/decisions/adrs/001-use-postgresql-timescaledb-storage.md +227 -0
  3. data/.architecture/decisions/adrs/002-two-tier-memory-architecture.md +322 -0
  4. data/.architecture/decisions/adrs/003-ollama-default-embedding-provider.md +339 -0
  5. data/.architecture/decisions/adrs/004-multi-robot-shared-memory-hive-mind.md +374 -0
  6. data/.architecture/decisions/adrs/005-rag-based-retrieval-with-hybrid-search.md +443 -0
  7. data/.architecture/decisions/adrs/006-context-assembly-strategies.md +444 -0
  8. data/.architecture/decisions/adrs/007-working-memory-eviction-strategy.md +461 -0
  9. data/.architecture/decisions/adrs/008-robot-identification-system.md +550 -0
  10. data/.architecture/decisions/adrs/009-never-forget-explicit-deletion-only.md +570 -0
  11. data/.architecture/decisions/adrs/010-redis-working-memory-rejected.md +323 -0
  12. data/.architecture/decisions/adrs/011-database-side-embedding-generation-with-pgai.md +585 -0
  13. data/.architecture/decisions/adrs/012-llm-driven-ontology-topic-extraction.md +583 -0
  14. data/.architecture/decisions/adrs/013-activerecord-orm-and-many-to-many-tagging.md +299 -0
  15. data/.architecture/decisions/adrs/014-client-side-embedding-generation-workflow.md +569 -0
  16. data/.architecture/decisions/adrs/015-hierarchical-tag-ontology-and-llm-extraction.md +701 -0
  17. data/.architecture/decisions/adrs/016-async-embedding-and-tag-generation.md +694 -0
  18. data/.architecture/members.yml +144 -0
  19. data/.architecture/reviews/2025-10-29-llm-configuration-and-async-processing-review.md +1137 -0
  20. data/.architecture/reviews/initial-system-analysis.md +330 -0
  21. data/.envrc +32 -0
  22. data/.irbrc +145 -0
  23. data/CHANGELOG.md +150 -0
  24. data/COMMITS.md +196 -0
  25. data/LICENSE +21 -0
  26. data/README.md +1347 -0
  27. data/Rakefile +51 -0
  28. data/SETUP.md +268 -0
  29. data/config/database.yml +67 -0
  30. data/db/migrate/20250101000001_enable_extensions.rb +14 -0
  31. data/db/migrate/20250101000002_create_robots.rb +14 -0
  32. data/db/migrate/20250101000003_create_nodes.rb +42 -0
  33. data/db/migrate/20250101000005_create_tags.rb +38 -0
  34. data/db/migrate/20250101000007_add_node_vector_indexes.rb +30 -0
  35. data/db/schema.sql +473 -0
  36. data/db/seed_data/README.md +100 -0
  37. data/db/seed_data/presidents.md +136 -0
  38. data/db/seed_data/states.md +151 -0
  39. data/db/seeds.rb +208 -0
  40. data/dbdoc/README.md +173 -0
  41. data/dbdoc/public.node_stats.md +48 -0
  42. data/dbdoc/public.node_stats.svg +41 -0
  43. data/dbdoc/public.node_tags.md +40 -0
  44. data/dbdoc/public.node_tags.svg +112 -0
  45. data/dbdoc/public.nodes.md +54 -0
  46. data/dbdoc/public.nodes.svg +118 -0
  47. data/dbdoc/public.nodes_tags.md +39 -0
  48. data/dbdoc/public.nodes_tags.svg +112 -0
  49. data/dbdoc/public.ontology_structure.md +48 -0
  50. data/dbdoc/public.ontology_structure.svg +38 -0
  51. data/dbdoc/public.operations_log.md +42 -0
  52. data/dbdoc/public.operations_log.svg +130 -0
  53. data/dbdoc/public.relationships.md +39 -0
  54. data/dbdoc/public.relationships.svg +41 -0
  55. data/dbdoc/public.robot_activity.md +46 -0
  56. data/dbdoc/public.robot_activity.svg +35 -0
  57. data/dbdoc/public.robots.md +35 -0
  58. data/dbdoc/public.robots.svg +90 -0
  59. data/dbdoc/public.schema_migrations.md +29 -0
  60. data/dbdoc/public.schema_migrations.svg +26 -0
  61. data/dbdoc/public.tags.md +35 -0
  62. data/dbdoc/public.tags.svg +60 -0
  63. data/dbdoc/public.topic_relationships.md +45 -0
  64. data/dbdoc/public.topic_relationships.svg +32 -0
  65. data/dbdoc/schema.json +1437 -0
  66. data/dbdoc/schema.svg +154 -0
  67. data/docs/api/database.md +806 -0
  68. data/docs/api/embedding-service.md +532 -0
  69. data/docs/api/htm.md +797 -0
  70. data/docs/api/index.md +259 -0
  71. data/docs/api/long-term-memory.md +1096 -0
  72. data/docs/api/working-memory.md +665 -0
  73. data/docs/architecture/adrs/001-postgresql-timescaledb.md +314 -0
  74. data/docs/architecture/adrs/002-two-tier-memory.md +411 -0
  75. data/docs/architecture/adrs/003-ollama-embeddings.md +421 -0
  76. data/docs/architecture/adrs/004-hive-mind.md +437 -0
  77. data/docs/architecture/adrs/005-rag-retrieval.md +531 -0
  78. data/docs/architecture/adrs/006-context-assembly.md +496 -0
  79. data/docs/architecture/adrs/007-eviction-strategy.md +645 -0
  80. data/docs/architecture/adrs/008-robot-identification.md +625 -0
  81. data/docs/architecture/adrs/009-never-forget.md +648 -0
  82. data/docs/architecture/adrs/010-redis-working-memory-rejected.md +323 -0
  83. data/docs/architecture/adrs/011-pgai-integration.md +494 -0
  84. data/docs/architecture/adrs/index.md +215 -0
  85. data/docs/architecture/hive-mind.md +736 -0
  86. data/docs/architecture/index.md +351 -0
  87. data/docs/architecture/overview.md +538 -0
  88. data/docs/architecture/two-tier-memory.md +873 -0
  89. data/docs/assets/css/custom.css +83 -0
  90. data/docs/assets/images/htm-core-components.svg +63 -0
  91. data/docs/assets/images/htm-database-schema.svg +93 -0
  92. data/docs/assets/images/htm-hive-mind-architecture.svg +125 -0
  93. data/docs/assets/images/htm-importance-scoring-framework.svg +83 -0
  94. data/docs/assets/images/htm-layered-architecture.svg +71 -0
  95. data/docs/assets/images/htm-long-term-memory-architecture.svg +115 -0
  96. data/docs/assets/images/htm-working-memory-architecture.svg +120 -0
  97. data/docs/assets/images/htm.jpg +0 -0
  98. data/docs/assets/images/htm_demo.gif +0 -0
  99. data/docs/assets/js/mathjax.js +18 -0
  100. data/docs/assets/videos/htm_video.mp4 +0 -0
  101. data/docs/database_rake_tasks.md +322 -0
  102. data/docs/development/contributing.md +787 -0
  103. data/docs/development/index.md +336 -0
  104. data/docs/development/schema.md +596 -0
  105. data/docs/development/setup.md +719 -0
  106. data/docs/development/testing.md +819 -0
  107. data/docs/guides/adding-memories.md +824 -0
  108. data/docs/guides/context-assembly.md +1009 -0
  109. data/docs/guides/getting-started.md +577 -0
  110. data/docs/guides/index.md +118 -0
  111. data/docs/guides/long-term-memory.md +941 -0
  112. data/docs/guides/multi-robot.md +866 -0
  113. data/docs/guides/recalling-memories.md +927 -0
  114. data/docs/guides/search-strategies.md +953 -0
  115. data/docs/guides/working-memory.md +717 -0
  116. data/docs/index.md +214 -0
  117. data/docs/installation.md +477 -0
  118. data/docs/multi_framework_support.md +519 -0
  119. data/docs/quick-start.md +655 -0
  120. data/docs/setup_local_database.md +302 -0
  121. data/docs/using_rake_tasks_in_your_app.md +383 -0
  122. data/examples/basic_usage.rb +93 -0
  123. data/examples/cli_app/README.md +317 -0
  124. data/examples/cli_app/htm_cli.rb +270 -0
  125. data/examples/custom_llm_configuration.rb +183 -0
  126. data/examples/example_app/Rakefile +71 -0
  127. data/examples/example_app/app.rb +206 -0
  128. data/examples/sinatra_app/Gemfile +21 -0
  129. data/examples/sinatra_app/app.rb +335 -0
  130. data/lib/htm/active_record_config.rb +113 -0
  131. data/lib/htm/configuration.rb +342 -0
  132. data/lib/htm/database.rb +594 -0
  133. data/lib/htm/embedding_service.rb +115 -0
  134. data/lib/htm/errors.rb +34 -0
  135. data/lib/htm/job_adapter.rb +154 -0
  136. data/lib/htm/jobs/generate_embedding_job.rb +65 -0
  137. data/lib/htm/jobs/generate_tags_job.rb +82 -0
  138. data/lib/htm/long_term_memory.rb +965 -0
  139. data/lib/htm/models/node.rb +109 -0
  140. data/lib/htm/models/node_tag.rb +33 -0
  141. data/lib/htm/models/robot.rb +52 -0
  142. data/lib/htm/models/tag.rb +76 -0
  143. data/lib/htm/railtie.rb +76 -0
  144. data/lib/htm/sinatra.rb +157 -0
  145. data/lib/htm/tag_service.rb +135 -0
  146. data/lib/htm/tasks.rb +38 -0
  147. data/lib/htm/version.rb +5 -0
  148. data/lib/htm/working_memory.rb +182 -0
  149. data/lib/htm.rb +400 -0
  150. data/lib/tasks/db.rake +19 -0
  151. data/lib/tasks/htm.rake +147 -0
  152. data/lib/tasks/jobs.rake +312 -0
  153. data/mkdocs.yml +190 -0
  154. data/scripts/install_local_database.sh +309 -0
  155. metadata +341 -0
@@ -0,0 +1,655 @@
# Quick Start Guide

Get started with HTM in just 5 minutes! This guide will walk you through building your first HTM-powered application.

!!! info "Prerequisites"
    Make sure you've completed the [Installation Guide](installation.md) before starting this tutorial.

<svg viewBox="0 0 900 600" xmlns="http://www.w3.org/2000/svg" style="background: transparent;">
  <!-- Title -->
  <text x="450" y="30" text-anchor="middle" fill="#E0E0E0" font-size="18" font-weight="bold">HTM Quick Start Workflow</text>

  <!-- Step 1: Initialize -->
  <rect x="50" y="70" width="180" height="100" fill="rgba(76, 175, 80, 0.2)" stroke="#4CAF50" stroke-width="3" rx="5"/>
  <text x="140" y="95" text-anchor="middle" fill="#4CAF50" font-size="16" font-weight="bold">Step 1</text>
  <text x="140" y="115" text-anchor="middle" fill="#E0E0E0" font-size="14" font-weight="bold">Initialize HTM</text>
  <text x="140" y="140" text-anchor="middle" fill="#B0B0B0" font-size="11">HTM.new()</text>
  <text x="140" y="160" text-anchor="middle" fill="#B0B0B0" font-size="10">Set robot name</text>

  <!-- Arrow 1 to 2 -->
  <line x1="230" y1="120" x2="270" y2="120" stroke="#4CAF50" stroke-width="3" marker-end="url(#arrow-green)"/>

  <!-- Step 2: Add Memories -->
  <rect x="270" y="70" width="180" height="100" fill="rgba(33, 150, 243, 0.2)" stroke="#2196F3" stroke-width="3" rx="5"/>
  <text x="360" y="95" text-anchor="middle" fill="#2196F3" font-size="16" font-weight="bold">Step 2</text>
  <text x="360" y="115" text-anchor="middle" fill="#E0E0E0" font-size="14" font-weight="bold">Add Memories</text>
  <text x="360" y="140" text-anchor="middle" fill="#B0B0B0" font-size="11">add_node()</text>
  <text x="360" y="160" text-anchor="middle" fill="#B0B0B0" font-size="10">Store knowledge</text>

  <!-- Arrow 2 to 3 -->
  <line x1="450" y1="120" x2="490" y2="120" stroke="#2196F3" stroke-width="3" marker-end="url(#arrow-blue)"/>

  <!-- Step 3: Recall -->
  <rect x="490" y="70" width="180" height="100" fill="rgba(156, 39, 176, 0.2)" stroke="#9C27B0" stroke-width="3" rx="5"/>
  <text x="580" y="95" text-anchor="middle" fill="#9C27B0" font-size="16" font-weight="bold">Step 3</text>
  <text x="580" y="115" text-anchor="middle" fill="#E0E0E0" font-size="14" font-weight="bold">Recall Memories</text>
  <text x="580" y="140" text-anchor="middle" fill="#B0B0B0" font-size="11">recall()</text>
  <text x="580" y="160" text-anchor="middle" fill="#B0B0B0" font-size="10">Search &amp; retrieve</text>

  <!-- Arrow 3 to 4 -->
  <line x1="670" y1="120" x2="710" y2="120" stroke="#9C27B0" stroke-width="3" marker-end="url(#arrow-purple)"/>

  <!-- Step 4: Use Context -->
  <rect x="710" y="70" width="180" height="100" fill="rgba(255, 152, 0, 0.2)" stroke="#FF9800" stroke-width="3" rx="5"/>
  <text x="800" y="95" text-anchor="middle" fill="#FF9800" font-size="16" font-weight="bold">Step 4</text>
  <text x="800" y="115" text-anchor="middle" fill="#E0E0E0" font-size="14" font-weight="bold">Use Context</text>
  <text x="800" y="140" text-anchor="middle" fill="#B0B0B0" font-size="11">create_context()</text>
  <text x="800" y="160" text-anchor="middle" fill="#B0B0B0" font-size="10">For LLM prompts</text>

  <!-- Memory Layers Visualization -->
  <text x="450" y="220" text-anchor="middle" fill="#E0E0E0" font-size="14" font-weight="bold">HTM Memory System</text>

  <!-- Working Memory -->
  <rect x="100" y="250" width="300" height="120" fill="rgba(33, 150, 243, 0.2)" stroke="#2196F3" stroke-width="2" rx="5"/>
  <text x="250" y="275" text-anchor="middle" fill="#E0E0E0" font-size="13" font-weight="bold">Working Memory (Fast)</text>
  <text x="120" y="300" fill="#B0B0B0" font-size="11">• Token-limited (128K)</text>
  <text x="120" y="320" fill="#B0B0B0" font-size="11">• In-memory storage</text>
  <text x="120" y="340" fill="#B0B0B0" font-size="11">• Immediate LLM access</text>
  <text x="120" y="360" fill="#4CAF50" font-size="10" font-weight="bold">O(1) lookups</text>

  <!-- Long-Term Memory -->
  <rect x="500" y="250" width="300" height="120" fill="rgba(156, 39, 176, 0.2)" stroke="#9C27B0" stroke-width="2" rx="5"/>
  <text x="650" y="275" text-anchor="middle" fill="#E0E0E0" font-size="13" font-weight="bold">Long-Term Memory (Durable)</text>
  <text x="520" y="300" fill="#B0B0B0" font-size="11">• Unlimited storage</text>
  <text x="520" y="320" fill="#B0B0B0" font-size="11">• PostgreSQL</text>
  <text x="520" y="340" fill="#B0B0B0" font-size="11">• RAG search (vector + text)</text>
  <text x="520" y="360" fill="#4CAF50" font-size="10" font-weight="bold">Permanent storage</text>

  <!-- Data Flow -->
  <path d="M 250 390 L 250 420 L 450 420 L 450 390" stroke="#4CAF50" stroke-width="2" fill="none"/>
  <text x="350" y="410" text-anchor="middle" fill="#4CAF50" font-size="10">Stored in both</text>

  <path d="M 400 440 L 650 440" stroke="#FF9800" stroke-width="2" marker-end="url(#arrow-orange)"/>
  <text x="525" y="430" text-anchor="middle" fill="#FF9800" font-size="10">Evicted when full</text>

  <path d="M 650 460 L 250 460" stroke="#9C27B0" stroke-width="2" marker-end="url(#arrow-purple2)"/>
  <text x="450" y="450" text-anchor="middle" fill="#9C27B0" font-size="10">Recalled when needed</text>

  <!-- Code Example -->
  <rect x="50" y="490" width="800" height="90" fill="rgba(76, 175, 80, 0.1)" stroke="#4CAF50" stroke-width="2" rx="5"/>
  <text x="450" y="515" text-anchor="middle" fill="#4CAF50" font-size="13" font-weight="bold">Quick Example Code:</text>
  <text x="70" y="540" fill="#B0B0B0" font-family="monospace" font-size="10">htm = HTM.new(robot_name: "My Assistant")</text>
  <text x="70" y="555" fill="#B0B0B0" font-family="monospace" font-size="10">htm.add_node("key1", "Remember this fact", type: :fact)</text>
  <text x="70" y="570" fill="#B0B0B0" font-family="monospace" font-size="10">memories = htm.recall(timeframe: "today", topic: "fact")</text>

  <!-- Arrow markers -->
  <defs>
    <marker id="arrow-green" markerWidth="10" markerHeight="10" refX="9" refY="3" orient="auto">
      <polygon points="0 0, 10 3, 0 6" fill="#4CAF50"/>
    </marker>
    <marker id="arrow-blue" markerWidth="10" markerHeight="10" refX="9" refY="3" orient="auto">
      <polygon points="0 0, 10 3, 0 6" fill="#2196F3"/>
    </marker>
    <marker id="arrow-purple" markerWidth="10" markerHeight="10" refX="9" refY="3" orient="auto">
      <polygon points="0 0, 10 3, 0 6" fill="#9C27B0"/>
    </marker>
    <marker id="arrow-purple2" markerWidth="10" markerHeight="10" refX="9" refY="3" orient="auto">
      <polygon points="0 0, 10 3, 0 6" fill="#9C27B0"/>
    </marker>
    <marker id="arrow-orange" markerWidth="10" markerHeight="10" refX="9" refY="3" orient="auto">
      <polygon points="0 0, 10 3, 0 6" fill="#FF9800"/>
    </marker>
  </defs>
</svg>

## Your First HTM Application

Let's build a simple coding assistant that remembers project decisions and preferences.

### Step 1: Create Your Project

Create a new Ruby file:

```ruby
# my_first_htm_app.rb
require 'htm'

puts "My First HTM Application"
puts "=" * 60
```

### Step 2: Initialize HTM

Create an HTM instance for your robot:

```ruby
# Initialize HTM with a robot name
htm = HTM.new(
  robot_name: "Code Helper",
  working_memory_size: 128_000,   # 128k tokens
  embedding_service: :ollama,     # Use Ollama for embeddings
  embedding_model: 'gpt-oss'      # Default embedding model
)

puts "✓ HTM initialized for '#{htm.robot_name}'"
puts "  Robot ID: #{htm.robot_id}"
puts "  Working Memory: #{htm.working_memory.max_tokens} tokens"
```

**What's happening here?**

- `robot_name`: A human-readable name for your AI robot
- `working_memory_size`: Maximum tokens for active context (128k is typical)
- `embedding_service`: Service used to generate vector embeddings (`:ollama` is the default)
- `embedding_model`: Which model to use for embeddings (`gpt-oss` is the default)

!!! tip "Robot Identity"
    Each HTM instance represents one robot. The `robot_id` is automatically generated (a UUID) and used to track which robot created each memory.

### Step 3: Add Your First Memory

Add a project decision to HTM's memory:

```ruby
puts "\n1. Adding a project decision..."

htm.add_node(
  "decision_001",                      # Unique key
  "We decided to use PostgreSQL for the database " \
  "because it provides excellent time-series optimization and " \
  "native vector search with pgvector.",
  type: :decision,                     # Memory type
  category: "architecture",            # Optional category
  importance: 9.0,                     # Importance score (0-10)
  tags: ["database", "architecture"]   # Searchable tags
)

puts "✓ Decision added to memory"
```

**Memory Components:**

- **Key**: Unique identifier (e.g., `"decision_001"`)
- **Value**: The actual content/memory text
- **Type**: Category of memory (`:decision`, `:fact`, `:code`, `:preference`, etc.)
- **Category**: Optional grouping
- **Importance**: Score from 0.0 to 10.0 (affects recall priority)
- **Tags**: Searchable keywords for organization

!!! note "Automatic Embeddings"
    HTM automatically generates vector embeddings for the memory content using Ollama. You don't need to handle embeddings yourself!

### Step 4: Add More Memories

Let's add a few more memories:

```ruby
puts "\n2. Adding user preferences..."

htm.add_node(
  "pref_001",
  "User prefers using the debug_me gem for debugging instead of puts statements.",
  type: :preference,
  category: "coding_style",
  importance: 7.0,
  tags: ["debugging", "ruby", "preferences"]
)

puts "✓ Preference added"

puts "\n3. Adding a code pattern..."

htm.add_node(
  "code_001",
  "For database queries, use connection pooling with the connection_pool gem " \
  "to handle concurrent requests efficiently.",
  type: :code,
  category: "patterns",
  importance: 8.0,
  tags: ["database", "performance", "ruby"],
  related_to: ["decision_001"]   # Link to related memory
)

puts "✓ Code pattern added (linked to decision_001)"
```

**Notice the `related_to` parameter?** This creates a relationship in the knowledge graph, linking related memories together.

### Step 5: Retrieve a Specific Memory

Retrieve a memory by its key:

```ruby
puts "\n4. Retrieving specific memory..."

memory = htm.retrieve("decision_001")

if memory
  puts "✓ Found memory:"
  puts "  Key: #{memory['key']}"
  puts "  Type: #{memory['type']}"
  puts "  Content: #{memory['value'][0..100]}..."
  puts "  Importance: #{memory['importance']}"
  puts "  Created: #{memory['created_at']}"
else
  puts "✗ Memory not found"
end
```

### Step 6: Recall Memories by Topic

Use HTM's powerful recall feature to find relevant memories:

```ruby
puts "\n5. Recalling memories about 'database'..."

memories = htm.recall(
  timeframe: "last week",   # Natural language time filter
  topic: "database",        # What to search for
  limit: 10,                # Max results
  strategy: :hybrid         # Search strategy (vector + full-text)
)

puts "✓ Found #{memories.length} relevant memories:"
memories.each_with_index do |mem, idx|
  puts "  #{idx + 1}. [#{mem['type']}] #{mem['value'][0..60]}..."
end
```

**Search Strategies:**

- **`:vector`**: Semantic similarity search using embeddings
- **`:fulltext`**: Keyword-based PostgreSQL full-text search
- **`:hybrid`**: Combines both for best results (recommended)

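One way to picture the hybrid strategy is as a fusion of two ranked result lists, one from vector search and one from full-text search. The sketch below uses reciprocal rank fusion in plain Ruby; it is illustrative only and is not HTM's internal scoring, and the memory keys are just the ones from this guide:

```ruby
# Reciprocal rank fusion: merge a vector-similarity ranking and a
# full-text ranking into a single ordered list. Items ranked well
# in either list accumulate score; items ranked well in both win.
def hybrid_rank(vector_hits, fulltext_hits, k: 60)
  scores = Hash.new(0.0)
  [vector_hits, fulltext_hits].each do |ranking|
    ranking.each_with_index do |key, rank|
      scores[key] += 1.0 / (k + rank + 1)   # standard RRF term
    end
  end
  scores.sort_by { |_, score| -score }.map(&:first)
end

vector_hits   = ["decision_001", "code_001", "pref_001"]
fulltext_hits = ["decision_001", "code_001"]
puts hybrid_rank(vector_hits, fulltext_hits).inspect
# => ["decision_001", "code_001", "pref_001"]
```

Notice that `pref_001` still appears in the merged list even though full-text search missed it; that resilience is why a hybrid strategy is usually the safest default.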
**Timeframe Options:**

- `"last week"` - Last 7 days
- `"yesterday"` - Previous day
- `"last 30 days"` - Last month
- `"this month"` - Current calendar month
- Date ranges: `(Time.now - 7.days)..Time.now`

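A natural-language timeframe ultimately resolves to a `Time` range. A minimal sketch of that mapping in plain Ruby, covering just the phrases listed above (`parse_timeframe` is a hypothetical helper, not HTM's actual parser):

```ruby
# Map a few natural-language timeframes onto Time ranges.
# Illustrative only; HTM's parser handles more phrasings.
def parse_timeframe(phrase, now: Time.now)
  case phrase
  when "yesterday"
    day   = now - 86_400
    start = Time.new(day.year, day.month, day.day)   # midnight yesterday
    start..(start + 86_400)
  when "last week"    then (now - 7 * 86_400)..now
  when "last 30 days" then (now - 30 * 86_400)..now
  when "this month"   then Time.new(now.year, now.month, 1)..now
  else
    raise ArgumentError, "unknown timeframe: #{phrase}"
  end
end

range = parse_timeframe("last week")
puts range.last - range.first   # 604800.0 seconds, i.e. 7 days
```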
### Step 7: Create Context for Your LLM

Generate a context string optimized for LLM consumption:

```ruby
puts "\n6. Creating context for LLM..."

context = htm.create_context(
  strategy: :balanced,   # Balance importance and recency
  max_tokens: 50_000     # Optional token limit
)

puts "✓ Context created: #{context.length} characters"
puts "\nContext preview:"
puts context[0..300]
puts "..."
```

**Context Strategies:**

- **`:recent`**: Most recent memories first
- **`:important`**: Highest importance scores first
- **`:balanced`**: Combines importance × recency (recommended)

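The balanced strategy trades importance off against recency. One way to sketch such a score in plain Ruby is importance multiplied by an exponential recency decay; the one-week half-life and the formula itself are assumptions for illustration, not HTM's exact weighting:

```ruby
# Score = importance x recency decay, so newer, more important
# memories float to the top. Illustrative only.
HALF_LIFE = 7 * 86_400.0   # assumed recency half-life: one week, in seconds

def balanced_score(importance, created_at, now: Time.now)
  age   = now - created_at
  decay = 0.5**(age / HALF_LIFE)
  importance * decay
end

now = Time.now
old_but_vital = balanced_score(9.0, now - 14 * 86_400, now: now)  # 9.0 * 0.25
fresh_minor   = balanced_score(3.0, now,               now: now)  # 3.0 * 1.0
puts old_but_vital > fresh_minor ? "old memory wins" : "fresh memory wins"
# prints "fresh memory wins"
```

Under this sketch, a two-week-old importance-9 memory scores 2.25, so a fresh importance-3 memory edges it out; tuning the half-life shifts that balance.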
This context can be directly injected into your LLM prompt:

```ruby
# Example: Using context with your LLM
prompt = <<~PROMPT
  You are a helpful coding assistant.

  Here's what you remember from past conversations:
  #{context}

  User: What database did we decide to use for the project?
PROMPT

# response = your_llm.chat(prompt)
```

### Step 8: Check Memory Statistics

View statistics about your memory usage:

```ruby
puts "\n7. Memory Statistics:"

stats = htm.memory_stats

puts "  Total nodes in long-term memory: #{stats[:total_nodes]}"
puts "  Active robots: #{stats[:active_robots]}"
puts "  Working memory usage: #{stats[:working_memory][:current_tokens]} / " \
     "#{stats[:working_memory][:max_tokens]} tokens " \
     "(#{stats[:working_memory][:utilization].round(2)}%)"
puts "  Database size: #{(stats[:database_size] / (1024.0 ** 2)).round(2)} MB"
```

### Complete Example

Here's the complete script:

```ruby
#!/usr/bin/env ruby
# my_first_htm_app.rb
require 'htm'

puts "My First HTM Application"
puts "=" * 60

# Step 1: Initialize HTM
htm = HTM.new(
  robot_name: "Code Helper",
  working_memory_size: 128_000,
  embedding_service: :ollama,
  embedding_model: 'gpt-oss'
)

puts "✓ HTM initialized for '#{htm.robot_name}'"

# Step 2: Add memories
htm.add_node(
  "decision_001",
  "We decided to use PostgreSQL for the database.",
  type: :decision,
  category: "architecture",
  importance: 9.0,
  tags: ["database", "architecture"]
)

htm.add_node(
  "pref_001",
  "User prefers using the debug_me gem for debugging.",
  type: :preference,
  importance: 7.0,
  tags: ["debugging", "ruby"]
)

puts "✓ Memories added"

# Step 3: Recall memories
memories = htm.recall(
  timeframe: "last week",
  topic: "database",
  strategy: :hybrid
)

puts "✓ Found #{memories.length} memories about 'database'"

# Step 4: Create context
context = htm.create_context(strategy: :balanced)
puts "✓ Context created: #{context.length} characters"

# Step 5: View statistics
stats = htm.memory_stats
puts "✓ Total nodes: #{stats[:total_nodes]}"

puts "\n" + "=" * 60
puts "Success! Your first HTM application is working."
```

Run it:

```bash
ruby my_first_htm_app.rb
```

## Multi-Robot Example

HTM's "hive mind" feature allows multiple robots to share memory. Here's how:

```ruby
require 'htm'

# Create two different robots
robot_a = HTM.new(robot_name: "Code Assistant")
robot_b = HTM.new(robot_name: "Documentation Writer")

# Robot A adds a memory
robot_a.add_node(
  "shared_001",
  "The API documentation is stored in the docs/ directory.",
  type: :fact,
  importance: 8.0
)

puts "Robot A added memory"

# Robot B can access the same memory!
memories = robot_b.recall(
  timeframe: "last week",
  topic: "documentation",
  strategy: :hybrid
)

puts "Robot B found #{memories.length} memories"
# Robot B sees Robot A's memory!

# Track which robot said what
breakdown = robot_b.which_robot_said("documentation")
puts "Who mentioned 'documentation':"
breakdown.each do |robot_id, count|
  puts "  #{robot_id}: #{count} times"
end
```

**Use cases for multi-robot memory:**

- Collaborative coding teams of AI agents
- Customer service handoffs between agents
- Research assistants building shared knowledge
- A teaching AI learning from multiple instructors

## Working with Relationships

Build a knowledge graph by linking related memories:

```ruby
# Add a parent concept
htm.add_node(
  "concept_databases",
  "Databases store and organize data persistently.",
  type: :fact,
  importance: 5.0
)

# Add a child concept with a relationship
htm.add_node(
  "concept_postgresql",
  "PostgreSQL is a powerful open-source relational database.",
  type: :fact,
  importance: 7.0,
  related_to: ["concept_databases"]   # Links to parent
)

# Add another related concept, linked to both
htm.add_node(
  "concept_pgvector",
  "The pgvector extension adds native vector search to PostgreSQL.",
  type: :fact,
  importance: 8.0,
  related_to: ["concept_postgresql", "concept_databases"]
)

# Now you have a knowledge graph:
# concept_databases
# ├── concept_postgresql
# │   └── concept_pgvector
```

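Conceptually, `related_to` links form a directed graph over node keys. The walk below is a standalone illustration in plain Ruby, using a hand-built edge map with hypothetical keys rather than HTM's actual PostgreSQL-backed storage:

```ruby
require 'set'

# Hypothetical edge map: key => keys it was declared related_to.
EDGES = {
  "concept_databases"  => [],
  "concept_postgresql" => ["concept_databases"],
  "concept_pgvector"   => ["concept_postgresql", "concept_databases"]
}.freeze

# Breadth-first walk collecting every key reachable via related_to links.
def related_closure(key, edges)
  seen  = Set.new
  queue = [key]
  until queue.empty?
    k = queue.shift
    next if seen.include?(k)
    seen << k
    queue.concat(edges.fetch(k, []))
  end
  seen.to_a
end

puts related_closure("concept_pgvector", EDGES).inspect
# => ["concept_pgvector", "concept_postgresql", "concept_databases"]
```

A traversal like this is what makes linked memories useful: recalling one concept can pull in its neighbors for richer context.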
## Forget (Explicit Deletion)

HTM follows a "never forget" philosophy, but you can explicitly delete memories:

```ruby
# Deletion requires confirmation
htm.forget("old_decision", confirm: :confirmed)

puts "✓ Memory deleted"
```

!!! warning "Deletion is Permanent"
    The `forget()` method permanently deletes data, and it is the **only** way to delete memories in HTM. Working memory evictions move data to long-term storage; they don't delete it.

## Next Steps

Congratulations! You've learned the basics of HTM. Here's what to explore next:

### Explore Advanced Features

- **[User Guide](guides/getting-started.md)**: Deep dive into all HTM features
- **[API Reference](api/htm.md)**: Complete API documentation
- **[Architecture Guide](architecture/overview.md)**: Understand HTM's internals

### Build Real Applications

Try building:

1. **Personal AI Assistant**: Remember user preferences and habits
2. **Code Review Bot**: Track coding patterns and past decisions
3. **Research Assistant**: Build a knowledge graph from documents
4. **Customer Service Bot**: Maintain conversation history

### Experiment with Different Configurations

```ruby
# Try different memory sizes
htm = HTM.new(
  robot_name: "Large Memory Bot",
  working_memory_size: 256_000   # 256k tokens
)

# Try different embedding models
htm = HTM.new(
  robot_name: "Custom Embeddings",
  embedding_service: :ollama,
  embedding_model: 'llama2'   # Use Llama2 instead of gpt-oss
)

# Try different recall strategies
memories = htm.recall(
  timeframe: "last month",
  topic: "important decisions",
  strategy: :vector   # Pure semantic search
)
```

### Performance Optimization

For production applications:

- Use connection pooling (built-in)
- Tune working memory size based on your LLM's context window
- Adjust importance scores to prioritize critical memories
- Use appropriate timeframes to limit search scope
- Monitor memory statistics regularly

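Tuning the working memory size matters because eviction kicks in at the token limit, moving entries to long-term storage. The toy model below sketches one plausible policy, keeping the most important entries that fit a token budget; it uses made-up data structures for illustration and is not HTM's internal eviction algorithm:

```ruby
# Toy eviction model: keep the most important entries that fit the
# token budget; everything else is "evicted" (in HTM, evicted items
# move to long-term storage rather than being deleted).
Entry = Struct.new(:key, :tokens, :importance)

def evict_to_fit(entries, budget)
  survivors, evicted = [], []
  used = 0
  entries.sort_by { |e| -e.importance }.each do |e|
    if used + e.tokens <= budget
      survivors << e
      used += e.tokens
    else
      evicted << e
    end
  end
  [survivors, evicted]
end

entries = [
  Entry.new("decision_001", 600, 9.0),
  Entry.new("pref_001",     400, 7.0),
  Entry.new("chit_chat",    500, 2.0)
]
survivors, evicted = evict_to_fit(entries, 1_000)
puts survivors.map(&:key).inspect   # ["decision_001", "pref_001"]
puts evicted.map(&:key).inspect     # ["chit_chat"]
```

The takeaway: with a tight budget, low-importance entries are the first to leave working memory, which is why importance scores are worth setting deliberately.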
### Join the Community

- **GitHub**: [https://github.com/madbomber/htm](https://github.com/madbomber/htm)
- **Issues**: Report bugs or request features
- **Discussions**: Share your HTM projects

## Common Patterns

### Pattern 1: Conversation Memory

```ruby
# Store user messages
htm.add_node(
  "msg_#{Time.now.to_i}",
  "User: How do I optimize database queries?",
  type: :context,
  importance: 6.0,
  tags: ["conversation", "question"]
)

# Store assistant responses
htm.add_node(
  "response_#{Time.now.to_i}",
  "Assistant: Use indexes and connection pooling.",
  type: :context,
  importance: 6.0,
  tags: ["conversation", "answer"]
)
```

### Pattern 2: Learning from Code

```ruby
require 'securerandom'

# Extract patterns from code reviews
htm.add_node(
  "pattern_#{SecureRandom.hex(4)}",
  "Always validate user input before database queries.",
  type: :code,
  importance: 9.0,
  tags: ["security", "validation", "best-practice"]
)
```

### Pattern 3: Decision Tracking

```ruby
# Document architectural decisions
htm.add_node(
  "adr_001",
  "Decision: Use microservices architecture. " \
  "Reasoning: Better scalability and independent deployment.",
  type: :decision,
  category: "architecture",
  importance: 10.0,
  tags: ["adr", "architecture", "microservices"]
)
```

## Troubleshooting Quick Start

### Issue: "Connection refused" error

**Solution**: Make sure Ollama is running:

```bash
curl http://localhost:11434/api/version
# If this fails, start Ollama
```

### Issue: "Database connection failed"

**Solution**: Verify your `HTM_DBURL` is set:

```bash
echo $HTM_DBURL
# Should show your connection string
```

### Issue: Embeddings taking too long

**Solution**: Check Ollama's status and ensure the model is downloaded:

```bash
ollama list | grep gpt-oss
# Should show the gpt-oss model
```

### Issue: Memory not found during recall

**Solution**: Check your timeframe. If you just added a memory, use a recent timeframe:

```ruby
# Instead of "last week", use:
memories = htm.recall(
  timeframe: (Time.now - 3600)..Time.now,   # Last hour
  topic: "your topic"
)
```

## Additional Resources

- **[Installation Guide](installation.md)**: Complete setup instructions
- **[User Guide](guides/getting-started.md)**: Comprehensive feature documentation
- **[API Reference](api/htm.md)**: Detailed API documentation
- **[Examples](https://github.com/madbomber/htm/tree/main/examples)**: Real-world code examples

Happy coding with HTM! 🚀