@zabaca/lattice 0.2.0 → 0.3.1

This diff compares publicly available package versions as released to a supported registry. It is provided for informational purposes only.
package/README.md CHANGED
@@ -1,88 +1,132 @@
1
1
  # @zabaca/lattice
2
2
 
3
- **Human-initiated, AI-powered knowledge graph for markdown documentation**
3
+ **Build a knowledge base with Claude Code — using your existing subscription**
4
4
 
5
5
  [![npm version](https://img.shields.io/npm/v/@zabaca/lattice.svg)](https://www.npmjs.com/package/@zabaca/lattice)
6
6
  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
7
- [![Node.js](https://img.shields.io/badge/node-%3E%3D18.0.0-brightgreen.svg)](https://nodejs.org/)
7
+
8
+ Lattice turns your markdown documentation into a searchable knowledge graph. Unlike other GraphRAG tools that require separate LLM APIs, **Lattice uses Claude Code for entity extraction** — so you're already paying for it.
9
+
10
+ ## The Workflow
11
+
12
+ ```bash
13
+ /research "knowledge graphs" # Find existing docs or create new research
14
+ /graph-sync # Extract entities & sync (automatic)
15
+ lattice search "your query" # Semantic search your knowledge base
16
+ ```
17
+
18
+ That's it. Two slash commands to build the knowledge base, one CLI command to search it.
8
19
 
9
20
  ---
10
21
 
11
- ## Features
22
+ ## Why Lattice?
12
23
 
13
- - **Knowledge Graph Sync** - Automatically sync entities and relationships from markdown frontmatter to a graph database
14
- - **Semantic Search** - AI-powered search using Voyage AI embeddings for intelligent document discovery
15
- - **Entity Extraction** - Define entities (concepts, technologies, patterns) directly in your documentation
16
- - **Relationship Mapping** - Model connections between entities with typed relationships (USES, IMPLEMENTS, DEPENDS_ON)
17
- - **FalkorDB Backend** - High-performance graph database built on Redis for fast queries
18
- - **Incremental Sync** - Smart change detection syncs only modified documents
19
- - **CLI Interface** - Simple commands for sync, search, validation, and migration
24
+ | Feature | Lattice | Other GraphRAG Tools |
25
+ |---------|---------|---------------------|
26
+ | **LLM for extraction** | Your Claude Code subscription | Separate API key + costs |
27
+ | **Setup time** | 5 minutes | 30+ minutes |
28
+ | **Containers** | 1 (FalkorDB) | 2-3 (DB + vector + graph) |
29
+ | **API keys needed** | 1 (Voyage AI for embeddings) | 2-3 (LLM + embedding + rerank) |
30
+ | **Workflow** | `/research` `/graph-sync` | Custom scripts |
20
31
 
21
32
  ---
22
33
 
23
- ## Quick Start
34
+ ## Quick Start (5 Minutes)
35
+
36
+ ### What You Need
24
37
 
25
- ### 1. Install Lattice
38
+ - **Claude Code** (you probably already have it)
39
+ - **Docker** (for FalkorDB)
40
+ - **Voyage AI API key** ([get one here](https://www.voyageai.com/) - embeddings only, ~$0.01/1M tokens)
41
+
42
+ ### 1. Install & Start
26
43
 
27
44
  ```bash
28
- npm install -g @zabaca/lattice
45
+ bun add -g @zabaca/lattice # Install CLI
46
+ docker run -d -p 6379:6379 falkordb/falkordb # Start database
47
+ export VOYAGE_API_KEY=your-key-here # Set API key
48
+ lattice init --global # Install Claude Code commands
29
49
  ```
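+
+ To confirm FalkorDB is reachable before continuing, you can ping it (FalkorDB speaks the Redis protocol; this assumes `redis-cli` is installed):
+
+ ```bash
+ redis-cli -h localhost -p 6379 ping   # should reply PONG
+ ```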
30
50
 
31
- Or with bun:
51
+ ### 2. Start Researching
32
52
 
33
53
  ```bash
34
- bun add -g @zabaca/lattice
54
+ claude # Launch Claude Code
55
+ /research "your topic" # Find or create documentation
56
+ /graph-sync # Build knowledge graph (automatic)
57
+ lattice search "your query" # Semantic search
35
58
  ```
36
59
 
37
- ### 2. Start FalkorDB
60
+ ### That's It!
38
61
 
39
- Using Docker Compose:
62
+ The `/research` command will:
63
+ - Search your existing docs for related content
64
+ - Ask if you need new research
65
+ - Create organized documentation with AI assistance
40
66
 
41
- ```bash
42
- # Create docker-compose.yaml (see Infrastructure section)
43
- docker-compose up -d
44
- ```
67
+ The `/graph-sync` command will:
68
+ - Detect all new/changed documents
69
+ - Extract entities using Claude Code (your subscription)
70
+ - Sync to FalkorDB for semantic search
71
+
72
+ ---
45
73
 
46
- Or pull and run directly:
74
+ ## Using /research
75
+
76
+ The `/research` command provides an AI-assisted research workflow.
77
+
78
+ ### Searching Existing Research
47
79
 
48
80
  ```bash
49
- docker run -d -p 6379:6379 falkordb/falkordb:latest
81
+ /research "semantic search"
50
82
  ```
51
83
 
52
- ### 3. Configure Environment
84
+ Claude will:
85
+ 1. Search your docs using semantic similarity
86
+ 2. Read and summarize relevant findings
87
+ 3. Ask if existing research answers your question
53
88
 
54
- Create a `.env` file in your project root:
89
+ ### Creating New Research
55
90
 
56
91
  ```bash
57
- # FalkorDB Connection
58
- FALKORDB_HOST=localhost
59
- FALKORDB_PORT=6379
60
- FALKORDB_GRAPH_NAME=lattice
92
+ /research "new topic to explore"
93
+ ```
61
94
 
62
- # Embedding Provider (Voyage AI)
63
- VOYAGE_API_KEY=your-voyage-api-key-here
64
- VOYAGE_MODEL=voyage-3
95
+ If no existing docs match, Claude will:
96
+ 1. Perform web research
97
+ 2. Create a new topic directory (`docs/new-topic/`)
98
+ 3. Generate README.md index and research document
99
+ 4. Remind you to run `/graph-sync`
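+
+ For example, researching a new topic might produce (filenames illustrative):
+
+ ```
+ docs/new-topic/
+ ├── README.md            # Index with a Documents table
+ └── specific-focus.md    # The research document itself
+ ```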
65
100
 
66
- # Logging
67
- LOG_LEVEL=info
68
- ```
101
+ ### Batch Syncing
69
102
 
70
- ### 4. Sync Your Documents
103
+ You don't need to run `/graph-sync` after every research session; it identifies all documents that need syncing and processes them in one batch:
71
104
 
72
105
  ```bash
73
- # Sync all markdown files with frontmatter
74
- lattice sync
106
+ # After multiple research sessions
107
+ /graph-sync
75
108
 
76
- # Sync specific paths
77
- lattice sync ./docs ./notes
78
-
79
- # Check what would change without syncing
80
- lattice sync --dry-run
109
+ # Shows: "4 documents need syncing"
110
+ # Extracts entities and syncs all at once
81
111
  ```
82
112
 
83
113
  ---
84
114
 
85
- ## CLI Commands
115
+ ## CLI Reference
116
+
117
+ The Lattice CLI runs behind the scenes. You typically won't use it directly — the Claude Code slash commands handle everything.
118
+
119
+ <details>
120
+ <summary><b>CLI Commands (Advanced)</b></summary>
121
+
122
+ ### `lattice init`
123
+
124
+ Install Claude Code slash commands for Lattice.
125
+
126
+ ```bash
127
+ lattice init # Install to .claude/commands/ (current project)
128
+ lattice init --global # Install to ~/.claude/commands/ (all projects)
129
+ ```
86
130
 
87
131
  ### `lattice sync`
88
132
 
@@ -92,18 +136,14 @@ Synchronize documents to the knowledge graph.
92
136
  lattice sync [paths...] # Sync specified paths or current directory
93
137
  lattice sync --force # Force re-sync (rebuilds entire graph)
94
138
  lattice sync --dry-run # Preview changes without applying
95
- lattice sync --verbose # Show detailed output
96
- lattice sync --watch # Watch for changes and auto-sync
97
- lattice sync --no-embeddings # Skip embedding generation
98
139
  ```
99
140
 
100
141
  ### `lattice status`
101
142
 
102
- Show the current sync status and pending changes.
143
+ Show documents that need syncing.
103
144
 
104
145
  ```bash
105
- lattice status # Show documents that need syncing
106
- lattice status --verbose # Include detailed change information
146
+ lattice status # Show new/changed documents
107
147
  ```
108
148
 
109
149
  ### `lattice search`
@@ -111,17 +151,7 @@ lattice status --verbose # Include detailed change information
111
151
  Semantic search across the knowledge graph.
112
152
 
113
153
  ```bash
114
- lattice search "query" # Search all entity types
115
- lattice search --label Technology "query" # Filter by entity label
116
- lattice search --limit 10 "query" # Limit results (default: 20)
117
- ```
118
-
119
- ### `lattice stats`
120
-
121
- Display graph statistics.
122
-
123
- ```bash
124
- lattice stats # Show node/edge counts and graph metrics
154
+ lattice search "query" # Search all entity types
125
155
  ```
126
156
 
127
157
  ### `lattice validate`
@@ -139,9 +169,10 @@ Display the derived ontology from your documents.
139
169
 
140
170
  ```bash
141
171
  lattice ontology # Show entity types and relationship types
142
- lattice ontology --format json # Output as JSON
143
172
  ```
144
173
 
174
+ </details>
175
+
145
176
  ---
146
177
 
147
178
  ## Configuration
@@ -150,75 +181,43 @@ lattice ontology --format json # Output as JSON
150
181
 
151
182
  | Variable | Description | Default |
152
183
  |----------|-------------|---------|
184
+ | `VOYAGE_API_KEY` | Voyage AI API key for embeddings | *required* |
153
185
  | `FALKORDB_HOST` | FalkorDB server hostname | `localhost` |
154
186
  | `FALKORDB_PORT` | FalkorDB server port | `6379` |
155
- | `FALKORDB_GRAPH_NAME` | Name of the graph database | `lattice` |
156
- | `VOYAGE_API_KEY` | Voyage AI API key for embeddings | *required* |
157
- | `VOYAGE_MODEL` | Voyage AI model to use | `voyage-3` |
158
- | `LOG_LEVEL` | Logging verbosity (debug, info, warn, error) | `info` |
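+
+ For a quick local setup, the following shell exports are enough (only the Voyage key is required; host and port fall back to the defaults above):
+
+ ```bash
+ export VOYAGE_API_KEY=your-voyage-api-key-here
+ # Optional overrides (defaults shown):
+ # export FALKORDB_HOST=localhost
+ # export FALKORDB_PORT=6379
+ ```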
159
187
 
160
- ### Frontmatter Schema
188
+ <details>
189
+ <summary><b>How It Works (Technical Details)</b></summary>
161
190
 
162
- Lattice extracts knowledge from YAML frontmatter in your markdown files:
191
+ ### Entity Extraction
192
+
193
+ When you run `/graph-sync`, Claude Code extracts entities from your documents and writes them to YAML frontmatter. The Lattice CLI then syncs this to FalkorDB.
163
194
 
164
195
  ```yaml
165
196
  ---
166
- title: Document Title
167
- description: Brief description of the document
168
- created: 2024-01-15
169
- updated: 2024-01-20
170
-
171
197
  entities:
172
198
  - name: React
173
199
  type: technology
174
200
  description: JavaScript library for building user interfaces
175
- - name: Component Architecture
176
- type: pattern
177
- description: Modular UI building blocks
178
201
 
179
202
  relationships:
180
203
  - source: React
181
204
  target: Component Architecture
182
- type: IMPLEMENTS
183
- - source: React
184
- target: Virtual DOM
185
- type: USES
205
+ relation: REFERENCES
186
206
  ---
187
-
188
- # Document content here...
189
207
  ```
190
208
 
191
- ### Entity Types
192
-
193
- Common entity types (you can define your own):
209
+ You don't need to write this manually — Claude Code handles it automatically.
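+
+ Once synced, you can also inspect the graph with any Redis-compatible client. A minimal sanity check (the graph name `lattice` and the query shape are assumptions based on the previous defaults):
+
+ ```bash
+ # Hypothetical query; adjust the graph name to your setup
+ redis-cli GRAPH.QUERY lattice "MATCH (a)-[:REFERENCES]->(b) RETURN a.name, b.name LIMIT 5"
+ ```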
194
210
 
195
- - `concept` - Abstract ideas and principles
196
- - `technology` - Tools, frameworks, and libraries
197
- - `pattern` - Design patterns and architectural approaches
198
- - `service` - External services and APIs
199
- - `component` - System components and modules
200
- - `person` - People and contributors
201
- - `organization` - Companies and teams
202
-
203
- ### Relationship Types
204
-
205
- Common relationship types:
206
-
207
- - `USES` - Entity A uses Entity B
208
- - `IMPLEMENTS` - Entity A implements Entity B
209
- - `DEPENDS_ON` - Entity A depends on Entity B
210
- - `EXTENDS` - Entity A extends Entity B
211
- - `CONTAINS` - Entity A contains Entity B
212
- - `RELATED_TO` - General relationship
213
- - `SUPERSEDES` - Entity A replaces Entity B
211
+ </details>
214
212
 
215
213
  ---
216
214
 
217
215
  ## Infrastructure
218
216
 
219
- ### Docker Compose
217
+ <details>
218
+ <summary><b>Docker Compose (Alternative Setup)</b></summary>
220
219
 
221
- Create `docker-compose.yaml`:
220
+ If you prefer Docker Compose over a single `docker run` command:
222
221
 
223
222
  ```yaml
224
223
  version: '3.8'
@@ -226,44 +225,31 @@ version: '3.8'
226
225
  services:
227
226
  falkordb:
228
227
  image: falkordb/falkordb:latest
229
- container_name: lattice-falkordb
230
228
  ports:
231
229
  - "6379:6379"
232
230
  volumes:
233
231
  - falkordb-data:/data
234
- environment:
235
- - FALKORDB_ARGS=--requirepass ""
236
232
  restart: unless-stopped
237
- healthcheck:
238
- test: ["CMD", "redis-cli", "ping"]
239
- interval: 10s
240
- timeout: 5s
241
- retries: 3
242
233
 
243
234
  volumes:
244
235
  falkordb-data:
245
- driver: local
246
236
  ```
247
237
 
248
- Start the database:
249
-
250
238
  ```bash
251
239
  docker-compose up -d
252
240
  ```
253
241
 
254
- ### Kubernetes (k3s)
242
+ </details>
243
+
244
+ <details>
245
+ <summary><b>Kubernetes (k3s)</b></summary>
255
246
 
256
247
  For production deployments, use the provided k3s manifests:
257
248
 
258
249
  ```bash
259
- # Create namespace
260
250
  kubectl apply -f infra/k3s/namespace.yaml
261
-
262
- # Deploy storage
263
251
  kubectl apply -f infra/k3s/pv.yaml
264
252
  kubectl apply -f infra/k3s/pvc.yaml
265
-
266
- # Deploy FalkorDB
267
253
  kubectl apply -f infra/k3s/deployment.yaml
268
254
  kubectl apply -f infra/k3s/service.yaml
269
255
 
@@ -274,9 +260,14 @@ kubectl apply -f infra/k3s/nodeport-service.yaml
274
260
  kubectl apply -f infra/k3s/ingress.yaml
275
261
  ```
276
262
 
263
+ </details>
264
+
277
265
  ---
278
266
 
279
- ## Development
267
+ ## Contributing
268
+
269
+ <details>
270
+ <summary><b>Development Setup</b></summary>
280
271
 
281
272
  ### Prerequisites
282
273
 
@@ -287,64 +278,25 @@ kubectl apply -f infra/k3s/ingress.yaml
287
278
  ### Setup
288
279
 
289
280
  ```bash
290
- # Clone the repository
291
281
  git clone https://github.com/Zabaca/lattice.git
292
282
  cd lattice
293
-
294
- # Install dependencies
295
283
  bun install
296
-
297
- # Copy environment configuration
298
284
  cp .env.example .env
299
- # Edit .env with your settings
300
-
301
- # Start FalkorDB
302
285
  docker-compose -f infra/docker-compose.yaml up -d
303
286
  ```
304
287
 
305
288
  ### Running Locally
306
289
 
307
290
  ```bash
308
- # Development mode
309
- bun run dev
310
-
311
- # Run CLI commands during development
312
- bun run lattice sync
313
- bun run lattice status
314
-
315
- # Run tests
316
- bun test
317
-
318
- # Build for production
319
- bun run build
320
- ```
321
-
322
- ### Project Structure
323
-
324
- ```
325
- lattice/
326
- ├── src/
327
- │ ├── commands/ # CLI command implementations
328
- │ ├── embedding/ # Voyage AI embedding service
329
- │ ├── graph/ # FalkorDB graph operations
330
- │ ├── query/ # Query builders and parsers
331
- │ ├── sync/ # Document sync logic
332
- │ ├── utils/ # Shared utilities
333
- │ ├── app.module.ts # NestJS application module
334
- │ ├── cli.ts # CLI entry point
335
- │ └── main.ts # Main application entry
336
- ├── infra/
337
- │ ├── docker-compose.yaml
338
- │ └── k3s/ # Kubernetes manifests
339
- ├── examples/ # Usage examples
340
- └── dist/ # Build output
291
+ bun run dev # Development mode
292
+ bun test # Run tests
293
+ bun run build # Build for production
341
294
  ```
342
295
 
343
- ---
344
-
345
- ## API Usage
296
+ </details>
346
297
 
347
- Lattice can also be used programmatically:
298
+ <details>
299
+ <summary><b>Programmatic API</b></summary>
348
300
 
349
301
  ```typescript
350
302
  import { NestFactory } from '@nestjs/core';
@@ -352,16 +304,11 @@ import { AppModule } from '@zabaca/lattice';
352
304
 
353
305
  async function main() {
354
306
  const app = await NestFactory.createApplicationContext(AppModule);
355
-
356
- // Get services
357
307
  const syncService = app.get(SyncService);
358
- const graphService = app.get(GraphService);
359
308
 
360
- // Sync documents
361
309
  const result = await syncService.sync({
362
310
  paths: ['./docs'],
363
- force: false,
364
- dryRun: false
311
+ force: false
365
312
  });
366
313
 
367
314
  console.log(`Synced ${result.added} new documents`);
@@ -370,18 +317,10 @@ async function main() {
370
317
  }
371
318
  ```
372
319
 
373
- ---
374
-
375
- ## Contributing
320
+ </details>
376
321
 
377
322
  Contributions are welcome! Please feel free to submit a Pull Request.
378
323
 
379
- 1. Fork the repository
380
- 2. Create your feature branch (`git checkout -b feature/amazing-feature`)
381
- 3. Commit your changes (`git commit -m 'Add some amazing feature'`)
382
- 4. Push to the branch (`git push origin feature/amazing-feature`)
383
- 5. Open a Pull Request
384
-
385
324
  ---
386
325
 
387
326
  ## License
@@ -390,8 +329,4 @@ MIT License - see [LICENSE](LICENSE) for details.
390
329
 
391
330
  ---
392
331
 
393
- ## Acknowledgments
394
-
395
- - [FalkorDB](https://www.falkordb.com/) - High-performance graph database
396
- - [Voyage AI](https://www.voyageai.com/) - State-of-the-art embeddings
397
- - [NestJS](https://nestjs.com/) - Progressive Node.js framework
332
+ Built with [FalkorDB](https://www.falkordb.com/), [Voyage AI](https://www.voyageai.com/), and [Claude Code](https://claude.ai/code)
@@ -0,0 +1,163 @@
1
+ ---
2
+ description: Extract entities from existing document and add to frontmatter
3
+ argument-hint: file-path
4
+ model: haiku
5
+ ---
6
+
7
+ Extract entities and relationships from the markdown file "$ARGUMENTS" and update its frontmatter.
8
+
9
+ ## IMPORTANT: Always Re-Extract
10
+
11
+ Even if the document already has frontmatter with entities:
12
+ - **RE-READ** the entire document content
13
+ - **RE-EXTRACT** entities based on CURRENT content
14
+ - **REPLACE** existing entities with fresh extraction
15
+ - **DO NOT skip** because "entities already exist"
16
+
17
+ The goal is to ensure entities reflect the document's CURRENT state, not preserve stale metadata from previous extractions.
18
+
19
+ ## Process
20
+
21
+ 1. **Verify file exists**:
22
+ - Check if "$ARGUMENTS" exists
23
+ - If not, inform user and suggest the correct path
24
+ - Verify it's a markdown file
25
+
26
+ 2. **Read and analyze the document**:
27
+ - Read the full content of the file
28
+ - Check for existing frontmatter
29
+ - Analyze document context and purpose
30
+
31
+ 3. **Extract entities** by identifying:
32
+ - **Technologies**: Languages, frameworks, databases, libraries, tools mentioned
33
+ - **Concepts**: Patterns, methodologies, theories, architectural approaches
34
+ - **Tools & Services**: Software, platforms, applications referenced
35
+ - **Processes**: Workflows, procedures, methodologies described
36
+ - **Organizations**: Companies, teams, projects mentioned
37
+
38
+ Guidelines:
39
+ - Focus on 3-10 most significant entities for the document
40
+ - Use specific names (e.g., "PostgreSQL" not "database")
41
+ - Prefer proper nouns and technical terms
42
+ - Entities should be directly relevant to the document's focus
43
+
44
+ 4. **Generate document summary**:
45
+ - Create a 2-3 sentence summary (50-100 words) that captures:
46
+ - The document's main purpose/topic
47
+ - Key technologies or concepts covered
48
+ - Primary conclusions or recommendations (if any)
49
+
50
+ Summary guidelines:
51
+ - Write in third person
52
+ - Include key terms that enable semantic search
53
+ - Focus on what the document IS ABOUT, not just what it contains
54
+ - Make it suitable for embedding generation
55
+
56
+ Example:
57
+ ```yaml
58
+ summary: >
59
+ Research on integrating multiple messaging platforms (Slack, Teams, Discord)
60
+ into a unified API. Covers platform API comparisons, recommended tech stack
61
+ (NestJS, PostgreSQL, Redis), and a phased implementation approach for
62
+ bi-directional message synchronization.
63
+ ```
64
+
65
+ 5. **Extract relationships** between entities:
66
+ - **REFERENCES**: This entity references/relates to another entity
67
+
68
+ Use `source: this` when the document itself references an entity.
69
+ Use entity names as source/target when entities reference each other.
70
+
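+ For instance (entity names are illustrative):
+
+ ```yaml
+ relationships:
+   - source: this
+     relation: REFERENCES
+     target: Knowledge Graphs
+   - source: FalkorDB
+     relation: REFERENCES
+     target: Redis
+ ```
+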
71
+ 6. **Determine entity types** (choose most appropriate):
72
+ - `Topic`: Research domains (usually auto-derived from directory)
73
+ - `Technology`: Programming languages, frameworks, databases
74
+ - `Concept`: Patterns, theories, methodologies
75
+ - `Tool`: Software, services, platforms
76
+ - `Process`: Workflows, procedures, methodologies
77
+ - `Person`: People
78
+ - `Organization`: Companies, teams, projects
79
+
80
+ 7. **Update frontmatter**:
81
+ - If frontmatter exists: **REPLACE** entities and relationships with fresh extraction
82
+ - If no frontmatter: Create new frontmatter block
83
+ - Preserve existing fields like `created`, `status`, `topic` (but update `updated` date)
84
+ - **Replace** the `summary`, `entities` and `relationships` sections entirely
85
+ - If no topic field exists, derive it from the directory name
86
+ (e.g., `docs/claude-code/file.md` -> `topic: claude-code`)
87
+
88
+ Frontmatter template:
89
+ ```yaml
90
+ ---
91
+ created: YYYY-MM-DD
92
+ updated: YYYY-MM-DD
93
+ status: complete|ongoing|draft
94
+ topic: auto-derived-from-directory
95
+ summary: >
96
+ 2-3 sentence summary capturing the document's purpose, key topics,
97
+ and conclusions. Written in third person with key terms for semantic search.
98
+ entities:
99
+ - name: EntityName
100
+ type: Topic|Technology|Concept|Tool|Process|Person|Organization
101
+ description: Brief description of entity and its role in this document
102
+ - name: AnotherEntity
103
+ type: Concept
104
+ description: Another entity description
105
+ relationships:
106
+ - source: this
107
+ relation: REFERENCES
108
+ target: MainTopic
109
+ - source: EntityA
110
+ relation: REFERENCES
111
+ target: EntityB
112
+ graph:
113
+ domain: detected-domain
114
+ ---
115
+ ```
116
+
117
+ 8. **Entity naming consistency**:
118
+ - Check if similar entities exist in other documents
119
+ - Use exact same names when referring to same entities
120
+ - Be specific: "React" not "React library"
121
+ - Use canonical names (e.g., "TypeScript" not "TS")
122
+
123
+ 9. **Relationship guidelines**:
124
+ - Start with "source: this" for primary entity the document covers
125
+ - Include 3-7 key relationships
126
+ - Relationships should help build knowledge graph connections
127
+ - Avoid redundant relationships
128
+
129
+ 10. **Validate and auto-fix** (retry loop):
130
+ After saving, run validation:
131
+
132
+ ```bash
133
+ lattice validate 2>&1 | grep -A10 "$ARGUMENTS"
134
+ ```
135
+
136
+ **If validation reports errors for this file:**
137
+ 1. Parse the error message to identify the issue
138
+ 2. Fix the frontmatter:
139
+ - **Invalid entity type** (e.g., "Platform", "Feature"): Change to valid type
140
+ - **Invalid relation** (e.g., "AFFECTS", "ENABLES"): Change to valid relation
141
+ - **String instead of object**: Reformat to proper object structure
142
+ 3. Save the fixed frontmatter
143
+ 4. Re-run validation
144
+ 5. Repeat until validation passes (max 3 attempts)
145
+
146
+ **Valid entity types:** `Topic`, `Technology`, `Concept`, `Tool`, `Process`, `Person`, `Organization`, `Document`
147
+
148
+ **Valid relations:** `REFERENCES`
149
+
150
+ 11. **Confirmation**:
151
+ - Show the file path
152
+ - Show the generated summary
153
+ - List extracted entities with types
154
+ - List extracted relationships
155
+ - Confirm validation passed (or show fixes made)
156
+
157
+ ## Important Notes
158
+
159
+ - **Preserve existing content**: Do not modify the markdown content itself, only the frontmatter
160
+ - **YAML validity**: Ensure all YAML is properly formatted
161
+ - **Replace strategy**: Always replace entities/relationships with fresh extraction (don't merge with old)
162
+ - **Be selective**: Focus on entities that would be valuable for knowledge graph connections
163
+ - **Descriptions**: Write descriptions from the perspective of how the entity is used/discussed in THIS document
@@ -0,0 +1,117 @@
1
+ ---
2
+ description: Extract entities from modified docs and sync to graph
3
+ model: sonnet
4
+ ---
5
+
6
+ Identify modified documents, extract entities from them, and sync to the knowledge graph.
7
+
8
+ ## Process
9
+
10
+ ### Step 1: Check What Needs Syncing
11
+
12
+ Run the status command to identify modified documents:
13
+
14
+ ```bash
15
+ lattice status
16
+ ```
17
+
18
+ This will show:
19
+ - **New** documents not yet in the graph
20
+ - **Updated** documents that have changed since last sync
21
+
22
+ If no documents need syncing, report that and exit.
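+
+ Illustrative output (exact formatting may differ):
+
+ ```
+ 2 documents need syncing
+   New:     docs/topic-a/README.md
+   Updated: docs/topic-b/notes.md
+ ```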
23
+
24
+ ### Step 2: Run Entity Extraction (Parallel Execution)
25
+
26
+ For each new or updated document identified:
27
+
28
+ 1. Use the **Task subagent pattern** with the Haiku model for parallel execution
29
+ 2. Launch multiple Task agents simultaneously (one per document)
30
+ 3. Each agent should:
31
+ - Invoke `/entity-extract <path>`
32
+ - Follow the command's expanded instructions
33
+ - Extract entities and update frontmatter
34
+ - Report completion
35
+
36
+ **Example Task agent invocation:**
37
+ ```
38
+ Task(
39
+ subagent_type="general-purpose",
40
+ model="haiku",
41
+ prompt="Use /entity-extract docs/topic/document.md to extract entities. Follow all instructions and report completion."
42
+ )
43
+ ```
44
+
45
+ **For multiple documents, launch agents in parallel:**
46
+ ```
47
+ // In a single message, launch multiple Task tool calls:
48
+ Task(subagent_type="general-purpose", model="haiku", prompt="/entity-extract docs/topic-a/README.md ...")
49
+ Task(subagent_type="general-purpose", model="haiku", prompt="/entity-extract docs/topic-b/notes.md ...")
50
+ Task(subagent_type="general-purpose", model="haiku", prompt="/entity-extract docs/topic-c/README.md ...")
51
+ ```
52
+
53
+ This is much faster than sequential execution for multiple documents.
54
+
55
+ ### Step 3: Sync to Graph
56
+
57
+ After all entity extractions are complete:
58
+
59
+ ```bash
60
+ lattice sync
61
+ ```
62
+
63
+ **Note:** The sync command validates frontmatter schema and will fail with errors if:
64
+ - Entities are malformed (strings instead of objects with `name`/`type`)
65
+ - Relationships are malformed (strings instead of objects with `source`/`relation`/`target`)
66
+
67
+ If sync fails due to schema errors, the entity extraction didn't follow the correct format.
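+
+ To illustrate the difference (entity name arbitrary):
+
+ ```yaml
+ # Malformed: a plain string fails schema validation
+ entities:
+   - React
+
+ # Correct: an object with name and type
+ entities:
+   - name: React
+     type: Technology
+     description: UI library discussed in this document
+ ```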
68
+
69
+ This will:
70
+ - Update document nodes in FalkorDB
71
+ - Generate embeddings for semantic search
72
+ - Create entity relationships
73
+ - Update the sync manifest
74
+
75
+ ### Step 4: Report Results
76
+
77
+ Summarize what was processed:
78
+ - Number of documents with entity extraction
79
+ - Entities extracted per document
80
+ - Graph sync statistics (added, updated, unchanged)
81
+ - Any errors encountered
82
+
83
+ ## Example Output
84
+
85
+ ```
86
+ ## Entity Extraction
87
+
88
+ Processed 3 documents:
89
+
90
+ 1. docs/american-holidays/README.md
91
+ - 4 entities extracted
92
+ - 3 relationships defined
93
+
94
+ 2. docs/american-holidays/thanksgiving-vs-christmas.md
95
+ - 8 entities extracted
96
+ - 5 relationships defined
97
+
98
+ 3. docs/bun-nestjs/notes.md
99
+ - 5 entities extracted
100
+ - 4 relationships defined
101
+
102
+ ## Graph Sync
103
+
104
+ - Added: 2
105
+ - Updated: 1
106
+ - Unchanged: 126
107
+ - Duration: 1.2s
108
+ ```
109
+
110
+ ## Important Notes
111
+
112
+ - **Parallel execution** - Launch all entity extractions simultaneously for speed
113
+ - Entity extraction runs per-document for quality
114
+ - Graph sync is incremental (only processes changes)
115
+ - Safe to run frequently - won't duplicate or corrupt data
116
+ - If extraction fails on a doc, other agents continue - report all errors at the end
117
+ - **Batch syncing**: You don't need to run this after each `/research` - run once after multiple sessions
@@ -0,0 +1,183 @@
1
+ ---
2
+ description: Research a topic - searches existing docs, asks before new research
3
+ argument-hint: topic-query
4
+ model: sonnet
5
+ ---
6
+
7
+ Research the topic "$ARGUMENTS" by first checking existing documentation, then performing new research if needed.
8
+
9
+ ## Process
10
+
11
+ ### Step 1: Search Existing Research
12
+
13
+ Run semantic search to find related documents:
14
+
15
+ ```bash
16
+ lattice search "$ARGUMENTS" --limit 10
17
+ ```
18
+
19
+ ### Step 2: Review Search Results
20
+
21
+ Review the top results from the semantic search:
22
+
23
+ 1. **Read top results** regardless of path - high similarity may indicate related content
24
+ 2. **Path/title matching** is a bonus signal, not a filter
25
+ 3. **Don't dismiss** high-similarity docs just because path doesn't match query
26
+ 4. Use judgment after reading - the doc content determines relevance, not the filename
27
+
28
+ **Calibration notes:**
29
+ - Exact topic matches often show 30-40% similarity
30
+ - Unrelated docs can sometimes show 60%+ similarity
31
+ - Read the actual content to determine true relevance
32
+
33
+ For each promising result:
34
+ - Read the document
35
+ - Check if it answers the user's question
36
+ - Note relevant sections
37
+
38
+ ### Step 3: Present Findings to User
39
+
40
+ Summarize what you found in existing docs:
41
+ - What topics are covered
42
+ - Quote relevant sections if helpful
43
+ - Identify gaps in existing research
44
+
45
+ Ask the user: **"Does this existing research cover your question?"**
46
+
47
+ ### Step 4: Ask About New Research
48
+
49
+ Use AskUserQuestion to ask:
50
+ - **"Should I perform new research on this topic?"**
51
+ - Options:
52
+ - Yes, research and create new docs
53
+ - Yes, research and update existing docs
54
+ - No, existing research is sufficient
55
+
56
+ If user says **No** → Done, conversation complete.
57
+
58
+ ### Step 5: Perform Research (if requested)
59
+
60
+ If user wants new research:
61
+ 1. Use WebSearch to find current information
62
+ 2. Gather and synthesize findings
63
+ 3. Focus on what's missing from existing docs
64
+
65
+ ### Step 6: Determine Topic and Filename
66
+
67
+ **Identify the topic directory:**
68
+ - Check if a relevant `docs/{topic-name}/` directory already exists
69
+ - If not, derive a new topic name from the query (kebab-case)
70
+
71
+ **Derive the research filename:**
72
+ Auto-derive from the specific focus of the query:
73
+
74
+ | Query | Topic Dir | Research File |
75
+ |-------|-----------|---------------|
76
+ | "tesla model s value retention" | `tesla-model-s/` | `value-retention.md` |
77
+ | "bun vs node performance" | `bun-nodejs/` | `performance-comparison.md` |
78
+ | "graphql authentication patterns" | `graphql/` | `authentication-patterns.md` |
79
+
80
+ **Filename guidelines:**
81
+ - Use kebab-case
82
+ - Be descriptive of the specific research focus
83
+ - Avoid generic names like `notes.md` or `research.md`
84
+ - Keep it concise (2-4 words)
85
+
86
+ ### Step 7: Create/Update Files
87
+
88
+ #### For NEW Topics (directory doesn't exist)
89
+
90
+ Create TWO files:
91
+
92
+ **1. `docs/{topic-name}/README.md`** (index):
93
+ ```markdown
94
+ ---
95
+ created: [TODAY'S DATE]
96
+ updated: [TODAY'S DATE]
97
+ status: active
98
+ topic: {topic-name}
99
+ summary: >
100
+ Brief description of the topic area for semantic search.
101
+ ---
102
+
103
+ # {Topic Title}
104
+
105
+ Brief description of what this topic covers.
106
+
107
+ ## Documents
108
+
109
+ | Document | Description |
110
+ |----------|-------------|
111
+ | [{Research Title}](./{research-filename}.md) | Brief description |
112
+
113
+ ## Related Research
114
+
115
+ - [Related Topic](../related-topic/)
116
+ ```
117
+
118
+ **2. `docs/{topic-name}/{research-filename}.md`** (content):
119
+ ```markdown
120
+ ---
121
+ created: [TODAY'S DATE]
122
+ updated: [TODAY'S DATE]
123
+ status: complete
124
+ topic: {topic-name}
125
+ summary: >
126
+ Detailed summary of this specific research for semantic search.
127
+ ---
128
+
129
+ # {Research Title}
130
+
131
+ ## Purpose
132
+
133
+ What this research addresses.
134
+
135
+ ## Key Findings
136
+
137
+ - Finding 1
138
+ - Finding 2
139
+
140
+ ## [Content sections as needed...]
141
+
142
+ ## Sources
143
+
144
+ 1. [Source](URL)
145
+ ```
146
+
147
+ #### For EXISTING Topics (directory exists)
148
+
149
+ **1. Create** `docs/{topic-name}/{research-filename}.md` with content template above
150
+
151
+ **2. Update** `docs/{topic-name}/README.md`:
152
+ - Add new row to the Documents table
153
+ - Update the `updated` date in frontmatter
154
+
155
+ ### Step 8: Confirmation
156
+
157
+ After creating files, confirm:
158
+ - Topic directory path
159
+ - README.md created/updated
160
+ - Research file created (include the filename)
161
+ - Remind user to run `/graph-sync` to extract entities
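+
+ A confirmation might look like (paths hypothetical):
+
+ ```
+ Created docs/graphql/authentication-patterns.md
+ Updated docs/graphql/README.md (added row to Documents table)
+ Next: run /graph-sync to extract entities
+ ```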
162
+
163
+ ## Important Notes
164
+
165
+ - **Do NOT** auto-run entity extraction - use `/graph-sync` separately
166
+ - **Always create README.md** for new topics (lightweight index)
167
+ - **Always create separate research file** (never put research content in README)
168
+ - Use kebab-case for all directory and file names
169
+ - Include today's date in YYYY-MM-DD format
170
+ - Always cite sources with URLs
171
+ - Cross-link to related research topics when relevant
172
+
173
+ ## File Structure Standard
174
+
175
+ ```
176
+ docs/{topic-name}/
177
+ ├── README.md # Index: links to docs, brief overview
178
+ ├── {research-1}.md # Specific research
179
+ ├── {research-2}.md # Additional research
180
+ └── {research-n}.md # Expandable as needed
181
+ ```
182
+
183
+ This structure allows topics to grow organically while keeping README as a clean navigation index.
package/dist/cli.js CHANGED
@@ -257,7 +257,7 @@ class GraphService {
257
257
  this.logger.log(`Created vector index on ${label}.${property} with ${dimensions} dimensions`);
258
258
  } catch (error) {
259
259
  const errorMessage = error instanceof Error ? error.message : String(error);
260
- if (!errorMessage.includes("already exists")) {
260
+ if (!errorMessage.includes("already indexed")) {
261
261
  this.logger.error(`Failed to create vector index: ${errorMessage}`);
262
262
  throw error;
263
263
  }
@@ -339,10 +339,10 @@ class GraphService {
339
339
  return String(value);
340
340
  }
341
341
  parseStats(result) {
342
- if (!Array.isArray(result) || result.length < 2) {
342
+ if (!Array.isArray(result) || result.length < 3) {
343
343
  return;
344
344
  }
345
- const statsStr = result[1];
345
+ const statsStr = result[2];
346
346
  if (!statsStr || typeof statsStr !== "string") {
347
347
  return;
348
348
  }
@@ -2473,9 +2473,14 @@ Note: Semantic search requires embeddings to be generated first.`);
2473
2473
  const outgoing = [];
2474
2474
  results.forEach((row) => {
2475
2475
  const [source, rel, target] = row;
2476
- const sourceName = source.properties?.name || "unknown";
2477
- const targetName = target.properties?.name || "unknown";
2478
- const relType = rel.type || "UNKNOWN";
2476
+ const sourceObj = Object.fromEntries(source);
2477
+ const targetObj = Object.fromEntries(target);
2478
+ const relObj = Object.fromEntries(rel);
2479
+ const sourceProps = Object.fromEntries(sourceObj.properties || []);
2480
+ const targetProps = Object.fromEntries(targetObj.properties || []);
2481
+ const sourceName = sourceProps.name || "unknown";
2482
+ const targetName = targetProps.name || "unknown";
2483
+ const relType = relObj.type || "UNKNOWN";
2479
2484
  if (sourceName === name) {
2480
2485
  outgoing.push(` -[${relType}]-> ${targetName}`);
2481
2486
  } else {
@@ -2666,8 +2671,88 @@ function registerOntologyCommand(program) {
2666
2671
  }
2667
2672
  });
2668
2673
  }
2674
+ // src/commands/init.command.ts
2675
+ import * as fs from "fs/promises";
2676
+ import * as path from "path";
2677
+ import { fileURLToPath } from "url";
2678
+ import { homedir } from "os";
2679
+ var __filename2 = fileURLToPath(import.meta.url);
2680
+ var __dirname2 = path.dirname(__filename2);
2681
+ var COMMANDS = ["research.md", "graph-sync.md", "entity-extract.md"];
2682
+ function registerInitCommand(program) {
2683
+ program.command("init").description("Install Claude Code slash commands for Lattice").option("-g, --global", "Install to ~/.claude/commands/ (available in all projects)").action(async (options) => {
2684
+ try {
2685
+ const targetDir = options.global ? path.join(homedir(), ".claude", "commands") : path.join(process.cwd(), ".claude", "commands");
2686
+ let commandsSourceDir = path.resolve(__dirname2, "..", "commands");
2687
+ try {
2688
+ await fs.access(commandsSourceDir);
2689
+ } catch {
2690
+ commandsSourceDir = path.resolve(__dirname2, "..", "..", "commands");
2691
+ }
2692
+ try {
2693
+ await fs.access(commandsSourceDir);
2694
+ } catch {
2695
+ console.error("Error: Commands source directory not found at", commandsSourceDir);
2696
+ console.error("This may indicate a corrupted installation. Try reinstalling @zabaca/lattice.");
2697
+ process.exit(1);
2698
+ }
2699
+ await fs.mkdir(targetDir, { recursive: true });
2700
+ let copied = 0;
2701
+ let skipped = 0;
2702
+ const installed = [];
2703
+ for (const file of COMMANDS) {
2704
+ const sourcePath = path.join(commandsSourceDir, file);
2705
+ const targetPath = path.join(targetDir, file);
2706
+ try {
2707
+ await fs.access(sourcePath);
2708
+ try {
2709
+ await fs.access(targetPath);
2710
+ const sourceContent = await fs.readFile(sourcePath, "utf-8");
2711
+ const targetContent = await fs.readFile(targetPath, "utf-8");
2712
+ if (sourceContent === targetContent) {
2713
+ skipped++;
2714
+ continue;
2715
+ }
2716
+ } catch {}
2717
+ await fs.copyFile(sourcePath, targetPath);
2718
+ installed.push(file);
2719
+ copied++;
2720
+ } catch (err) {
2721
+ console.error(`Warning: Could not copy ${file}:`, err instanceof Error ? err.message : String(err));
2722
+ }
2723
+ }
2724
+ console.log();
2725
+ console.log(`\u2705 Lattice commands installed to ${targetDir}`);
2726
+ console.log();
2727
+ if (copied > 0) {
2728
+ console.log(`Installed ${copied} command(s):`);
2729
+ installed.forEach((f) => {
2730
+ const name = f.replace(".md", "");
2731
+ console.log(` - /${name}`);
2732
+ });
2733
+ }
2734
+ if (skipped > 0) {
2735
+ console.log(`Skipped ${skipped} unchanged command(s)`);
2736
+ }
2737
+ console.log();
2738
+ console.log("Available commands in Claude Code:");
2739
+ console.log(" /research <topic> - AI-assisted research workflow");
2740
+ console.log(" /graph-sync - Extract entities and sync to graph");
2741
+ console.log(" /entity-extract - Extract entities from a single document");
2742
+ console.log();
2743
+ if (!options.global) {
2744
+ console.log("\uD83D\uDCA1 Tip: Use 'lattice init --global' to install for all projects");
2745
+ }
2746
+ process.exit(0);
2747
+ } catch (error) {
2748
+ console.error("Error:", error instanceof Error ? error.message : String(error));
2749
+ process.exit(1);
2750
+ }
2751
+ });
2752
+ }
2669
2753
  // src/main.ts
2670
- program.name("lattice").description("Human-initiated, AI-powered knowledge graph for markdown documentation").version("0.1.0");
2754
+ program.name("lattice").description("Human-initiated, AI-powered knowledge graph for markdown documentation").version("0.3.0");
2755
+ registerInitCommand(program);
2671
2756
  registerSyncCommand(program);
2672
2757
  registerStatusCommand(program);
2673
2758
  registerQueryCommands(program);
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "@zabaca/lattice",
3
- "version": "0.2.0",
3
+ "version": "0.3.1",
4
4
  "description": "Human-initiated, AI-powered knowledge graph for markdown documentation",
5
5
  "type": "module",
6
6
  "bin": {
@@ -8,6 +8,7 @@
8
8
  },
9
9
  "files": [
10
10
  "dist",
11
+ "commands",
11
12
  "README.md"
12
13
  ],
13
14
  "scripts": {