@getlore/cli 0.5.1 → 0.6.0

package/LICENSE CHANGED
@@ -1,8 +1,16 @@
- Copyright (c) 2026 Mishkin Faustini. All rights reserved.
+ MIT License
 
- This software is proprietary and confidential. No part of this software
- may be reproduced, distributed, or transmitted in any form or by any means
- without the prior written permission of the copyright holder.
+ Copyright (c) 2026 Mishkin Faustini
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
 
  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
package/README.md CHANGED
@@ -1,6 +1,8 @@
  # Lore
 
- > The lore behind your projects.
+ > The lore behind your projects. — [getlore.ai](https://getlore.ai) · [npm](https://www.npmjs.com/package/@getlore/cli)
+
+ Every project accumulates lore — the decisions, conversations, research, and context that explain why things are the way they are. Most of it gets lost between chat threads and forgotten docs. Lore keeps it searchable and citable.
 
  A research knowledge repository with **semantic search** and **citations**. Unlike memory systems that store processed facts, Lore preserves your original sources and lets you cite exactly what was said, by whom, and when.
 
@@ -31,14 +33,23 @@ lore search "user pain points"
 
  ## MCP Configuration
 
- Add to your Claude Code or Claude Desktop MCP config:
+ **One-click install:**
+
+ [![Install in Cursor](https://cursor.com/deeplink/mcp-install-dark.svg)](https://cursor.com/en-US/install-mcp?name=lore&config=eyJjb21tYW5kIjoibnB4IiwiYXJncyI6WyIteSIsIkBnZXRsb3JlL2NsaSIsIm1jcCJdfQ%3D%3D)
+ [![Install in VS Code](https://img.shields.io/badge/VS_Code-Install_Server-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://insiders.vscode.dev/redirect/mcp/install?name=lore&config=%7B%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22%40getlore%2Fcli%22%2C%22mcp%22%5D%7D)
+ [![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install_Server-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://insiders.vscode.dev/redirect/mcp/install?name=lore&config=%7B%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22%40getlore%2Fcli%22%2C%22mcp%22%5D%7D&quality=insiders)
+ [![Install in Goose](https://img.shields.io/badge/Goose-Install_Extension-F97316?style=flat-square&logoColor=white)](goose://extension?cmd=npx&arg=-y&arg=%40getlore%2Fcli&arg=mcp&timeout=300&id=lore-mcp&name=Lore&description=Research%20knowledge%20repository%20with%20semantic%20search%20and%20citations)
+
+ After installing, run `npx @getlore/cli setup` to configure API keys and sign in.
+
+ **Manual config** — add to your MCP client config (`.mcp.json`, `.cursor/mcp.json`, etc.):
 
  ```json
  {
  "mcpServers": {
  "lore": {
- "command": "lore",
- "args": ["mcp"]
+ "command": "npx",
+ "args": ["-y", "@getlore/cli", "mcp"]
  }
  }
  }
@@ -75,6 +86,56 @@ If the MCP host doesn't inherit your shell environment (e.g. Claude Desktop), ad
 
  Same content on different machines produces the same hash — no duplicate processing.
 
+ ## Agent Platform Install
+
+ Lore works with any agent that supports MCP. Use `lore skills install` or install directly from your platform's registry.
+
+ ### Claude Code
+
+ ```bash
+ # From plugin directory (once approved)
+ /plugin install lore
+
+ # Or install directly from GitHub
+ /plugin install https://github.com/getlore-ai/lore/tree/main/plugins/claude-code
+
+ # Or via Lore CLI
+ lore skills install claude-code
+ ```
+
+ ### Gemini CLI
+
+ ```bash
+ # From Extensions Gallery
+ gemini extensions install lore
+
+ # Or install directly from GitHub
+ gemini extensions install https://github.com/getlore-ai/lore --path plugins/gemini
+
+ # Or via Lore CLI
+ lore skills install gemini
+ ```
+
+ ### Codex CLI
+
+ ```bash
+ # Add MCP server
+ codex mcp add lore -- npx -y @getlore/cli mcp
+
+ # Install skill
+ lore skills install codex
+ ```
+
+ ### OpenClaw
+
+ ```bash
+ # From ClawHub
+ clawhub install lore
+
+ # Or via Lore CLI
+ lore skills install openclaw
+ ```
+
  ## License
 
- Proprietary. All rights reserved.
+ MIT
@@ -205,7 +205,7 @@ export function registerSyncCommand(program, defaultDataDir) {
  console.log(` ... and ${result.processing.titles.length - 10} more`);
  }
  if (result.processing.errors > 0) {
- console.log(` Errors: ${result.processing.errors}`);
+ console.log(` ${result.processing.errors} file(s) failed to process (check logs above)`);
  }
  }
  if (result.sources_found > 0 || result.sources_indexed > 0) {
@@ -214,6 +214,9 @@ export function registerSyncCommand(program, defaultDataDir) {
  console.log(` Newly indexed: ${result.sources_indexed}`);
  console.log(` Already indexed: ${result.already_indexed}`);
  }
+ if (result.reconciled > 0) {
+ console.log(`\nReconciled ${result.reconciled} source(s) missing local content`);
+ }
  if (result.git_pushed) {
  console.log('\n✓ Pushed changes to git');
  }
package/dist/core/git.js CHANGED
@@ -51,11 +51,43 @@ export async function gitPull(dir) {
  if (!(await hasRemote(dir))) {
  return { success: false, error: 'No remote configured' };
  }
- // Stash any local changes
- await execAsync('git stash', { cwd: dir }).catch(() => { });
+ // Stash any local changes before pulling
+ let didStash = false;
+ if (await hasChanges(dir)) {
+ try {
+ const { stdout: stashOut } = await execAsync('git stash', { cwd: dir });
+ didStash = !stashOut.includes('No local changes');
+ }
+ catch (stashErr) {
+ console.error(`[git] Stash failed: ${stashErr}`);
+ }
+ }
  // Pull with rebase
- const { stdout } = await execAsync('git pull --rebase', { cwd: dir });
- const pulled = !stdout.includes('Already up to date');
+ let pullOutput;
+ try {
+ const { stdout } = await execAsync('git pull --rebase', { cwd: dir });
+ pullOutput = stdout;
+ }
+ catch (pullErr) {
+ // Restore stashed changes before returning error
+ if (didStash) {
+ await execAsync('git stash pop', { cwd: dir }).catch((popErr) => {
+ console.error(`[git] Stash pop failed after pull error: ${popErr}`);
+ });
+ }
+ throw pullErr;
+ }
+ // Restore stashed changes after successful pull
+ if (didStash) {
+ try {
+ await execAsync('git stash pop', { cwd: dir });
+ }
+ catch (popErr) {
+ console.error(`[git] Stash pop failed (possible conflict): ${popErr}`);
+ // Don't fail the pull — stashed content is still in `git stash list`
+ }
+ }
+ const pulled = !pullOutput.includes('Already up to date');
  return {
  success: true,
  message: pulled ? 'Pulled new changes' : 'Already up to date'
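The stash-aware pull above has a specific call ordering: stash only when there are real changes, pop the stash on both the success and failure paths, and never let a failed pop mask the pull result. A minimal standalone sketch of that ordering, with the shell call injected as a plain async function so it can run without a git repo (`stashAwarePull` and `exec` are hypothetical names, not the package's API):

```javascript
// Sketch of gitPull's stash/pull/pop ordering. `exec` is an injected
// async (cmd) => stdout-string stand-in for the real execAsync wrapper.
async function stashAwarePull(exec, hasLocalChanges) {
  let didStash = false;
  if (hasLocalChanges) {
    // Only count the stash as created if git actually stashed something.
    const out = await exec('git stash');
    didStash = !out.includes('No local changes');
  }
  try {
    const out = await exec('git pull --rebase');
    return { pulled: !out.includes('Already up to date'), didStash };
  } finally {
    // Restore the stash whether or not the pull succeeded;
    // a pop failure is swallowed (content stays in the stash list).
    if (didStash) await exec('git stash pop').catch(() => {});
  }
}
```

The `finally` block is the key design choice: it replaces the two explicit pop sites in the diff with one restoration point.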
@@ -79,6 +79,7 @@ export declare function getAllSources(_dbPath: string, options?: {
  project?: string;
  source_type?: SourceType;
  limit?: number;
+ sort_by?: 'indexed_at' | 'created_at';
  }): Promise<Array<{
  id: string;
  title: string;
@@ -86,8 +87,19 @@ export declare function getAllSources(_dbPath: string, options?: {
  content_type: ContentType;
  projects: string[];
  created_at: string;
+ indexed_at: string;
  summary: string;
  }>>;
+ /**
+ * Get all sources that have a source_path set.
+ * Used by reconciliation to ensure local content.md files exist.
+ */
+ export declare function getSourcesWithPaths(_dbPath: string): Promise<Array<{
+ id: string;
+ title: string;
+ summary: string;
+ source_path: string;
+ }>>;
  export declare function getSourceById(_dbPath: string, sourceId: string): Promise<{
  id: string;
  title: string;
@@ -101,6 +113,7 @@ export declare function getSourceById(_dbPath: string, sourceId: string): Promis
  quotes: Quote[];
  source_url?: string;
  source_name?: string;
+ source_path?: string;
  } | null>;
  export declare function deleteSource(_dbPath: string, sourceId: string): Promise<{
  deleted: boolean;
@@ -321,12 +321,12 @@ export async function searchSources(_dbPath, queryVector, options = {}) {
  // Retrieval Operations
  // ============================================================================
  export async function getAllSources(_dbPath, options = {}) {
- const { project, source_type, limit } = options;
+ const { project, source_type, limit, sort_by = 'indexed_at' } = options;
  const client = await getSupabase();
  let query = client
  .from('sources')
- .select('id, title, source_type, content_type, projects, created_at, summary')
- .order('created_at', { ascending: false });
+ .select('id, title, source_type, content_type, projects, created_at, indexed_at, summary')
+ .order(sort_by, { ascending: false });
  if (source_type) {
  query = query.eq('source_type', source_type);
  }
@@ -348,9 +348,33 @@ export async function getAllSources(_dbPath, options = {}) {
  content_type: row.content_type,
  projects: row.projects,
  created_at: row.created_at,
+ indexed_at: row.indexed_at || row.created_at,
  summary: row.summary,
  }));
  }
+ /**
+ * Get all sources that have a source_path set.
+ * Used by reconciliation to ensure local content.md files exist.
+ */
+ export async function getSourcesWithPaths(_dbPath) {
+ const client = await getSupabase();
+ const { data, error } = await client
+ .from('sources')
+ .select('id, title, summary, source_path')
+ .not('source_path', 'is', null);
+ if (error) {
+ console.error('Error getting sources with paths:', error);
+ return [];
+ }
+ return (data || [])
+ .filter((row) => row.source_path)
+ .map((row) => ({
+ id: row.id,
+ title: row.title,
+ summary: row.summary || '',
+ source_path: row.source_path,
+ }));
+ }
  export async function getSourceById(_dbPath, sourceId) {
  const client = await getSupabase();
  const { data, error } = await client
@@ -375,6 +399,7 @@ export async function getSourceById(_dbPath, sourceId) {
  quotes: data.quotes_json || [],
  source_url: data.source_url || undefined,
  source_name: data.source_name || undefined,
+ source_path: data.source_path || undefined,
  };
  }
  export async function deleteSource(_dbPath, sourceId) {
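The new `sort_by` option defaults to `indexed_at`, and rows that predate the column fall back to `created_at` (mirroring the `row.indexed_at || row.created_at` mapping above). The ordering semantics can be sketched in isolation with a hypothetical in-memory helper (`orderSources` is illustrative, not part of the package):

```javascript
// In-memory mirror of getAllSources ordering: descending by the chosen
// column, with indexed_at falling back to created_at when missing.
// ISO-8601 date strings compare correctly as plain strings.
function orderSources(rows, sortBy = 'indexed_at') {
  const key = (row) =>
    sortBy === 'indexed_at' ? (row.indexed_at || row.created_at) : row.created_at;
  return [...rows].sort((a, b) => key(b).localeCompare(key(a)));
}
```

With this fallback, a source created long ago but synced yesterday still sorts to the top of the default view.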
@@ -38,6 +38,7 @@ interface SyncResult {
  errors: number;
  titles: string[];
  };
+ reconciled: number;
  }
  export declare function handleSync(dbPath: string, dataDir: string, args: SyncArgs, options?: {
  hookContext?: {
@@ -11,10 +11,10 @@
  * - Generate embeddings
  * - Store in Supabase + local data dir
  */
- import { readdir, readFile } from 'fs/promises';
+ import { readdir, readFile, mkdir, writeFile } from 'fs/promises';
  import { existsSync } from 'fs';
  import path from 'path';
- import { getAllSources, addSource, resetDatabaseConnection, } from '../../core/vector-store.js';
+ import { getAllSources, addSource, getSourcesWithPaths, resetDatabaseConnection, } from '../../core/vector-store.js';
  import { generateEmbedding, createSearchableText } from '../../core/embedder.js';
  import { gitPull, gitCommitAndPush } from '../../core/git.js';
  import { loadSyncConfig, getEnabledSources } from '../../sync/config.js';
@@ -106,6 +106,65 @@ async function legacyDiskSync(dbPath, dataDir) {
  return result;
  }
  // ============================================================================
+ // Local Content Reconciliation
+ // ============================================================================
+ /**
+ * Ensures every source in Supabase with a source_path has a local
+ * ~/.lore/sources/{id}/content.md file. This handles:
+ * - Sources indexed before storeSourceToDisk was implemented
+ * - Sources from other machines (in shared Supabase but no local content)
+ * - Any edge case where Supabase write succeeded but disk write failed
+ *
+ * Cost: One Supabase query + local filesystem checks. No LLM calls.
+ */
+ async function reconcileLocalContent(dataDir) {
+ const sourcesDir = path.join(dataDir, 'sources');
+ const textExts = ['.md', '.txt', '.json', '.jsonl', '.csv', '.xml', '.yaml', '.yml', '.html', '.log'];
+ // Get all sources that have a source_path in Supabase
+ const sourcesWithPaths = await getSourcesWithPaths('');
+ if (sourcesWithPaths.length === 0)
+ return 0;
+ let reconciled = 0;
+ for (const source of sourcesWithPaths) {
+ const sourceDir = path.join(sourcesDir, source.id);
+ const contentPath = path.join(sourceDir, 'content.md');
+ // Skip if content.md already exists
+ if (existsSync(contentPath))
+ continue;
+ // Try to create content.md from the original source_path
+ let content = null;
+ if (existsSync(source.source_path)) {
+ const ext = path.extname(source.source_path).toLowerCase();
+ if (textExts.includes(ext)) {
+ try {
+ content = await readFile(source.source_path, 'utf-8');
+ }
+ catch {
+ // File can't be read — fall through to summary
+ }
+ }
+ }
+ // If we couldn't read the original file, use the summary from Supabase
+ if (!content) {
+ content = [
+ `# ${source.title}`,
+ '',
+ source.summary,
+ ].join('\n');
+ }
+ // Create the source directory and content.md
+ try {
+ await mkdir(sourceDir, { recursive: true });
+ await writeFile(contentPath, content);
+ reconciled++;
+ }
+ catch {
+ // Skip on write failure — will retry on next sync
+ }
+ }
+ return reconciled;
+ }
+ // ============================================================================
  // Universal Sync (new system)
  // ============================================================================
  async function universalSync(dataDir, dryRun, hookContext) {
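The per-source decision inside `reconcileLocalContent` is a two-step fallback: read the original file only when it still exists and has a text extension, otherwise synthesize a stub from the Supabase title and summary. That decision is sketched below as a pure function (`fallbackContent` and the injected `readOriginal` reader are hypothetical names for illustration; the real code uses `existsSync` and `readFile` directly):

```javascript
// Content-choice logic from reconcileLocalContent, made pure for testing.
// `readOriginal(path)` returns the file text, or null when unreadable/missing.
const TEXT_EXTS = ['.md', '.txt', '.json', '.jsonl', '.csv', '.xml', '.yaml', '.yml', '.html', '.log'];

function fallbackContent(source, readOriginal) {
  const dot = source.source_path.lastIndexOf('.');
  const ext = dot === -1 ? '' : source.source_path.slice(dot).toLowerCase();
  if (TEXT_EXTS.includes(ext)) {
    const text = readOriginal(source.source_path);
    if (text != null) return text;
  }
  // Binary or unreadable original: stub content.md from Supabase metadata.
  return ['# ' + source.title, '', source.summary].join('\n');
}
```

Note that binary originals (e.g. images) never reach the reader at all; they always get the metadata stub.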
@@ -172,6 +231,7 @@ export async function handleSync(dbPath, dataDir, args, options = {}) {
  sources_found: 0,
  sources_indexed: 0,
  already_indexed: 0,
+ reconciled: 0,
  };
  // 1. Git pull
  if (doPull) {
@@ -198,10 +258,12 @@ export async function handleSync(dbPath, dataDir, args, options = {}) {
  result.sources_found = legacyResult.sources_found;
  result.sources_indexed = legacyResult.sources_indexed;
  result.already_indexed = legacyResult.already_indexed;
+ // Reconcile: ensure every Supabase source has local content.md
+ result.reconciled = await reconcileLocalContent(dataDir);
  }
  // 3. Git push
  if (doPush && !dryRun) {
- const totalNew = (result.processing?.processed || 0) + result.sources_indexed;
+ const totalNew = (result.processing?.processed || 0) + result.sources_indexed + result.reconciled;
  if (totalNew > 0) {
  const pushResult = await gitCommitAndPush(dataDir, `Sync: Added ${totalNew} source(s)`);
  result.git_pushed = pushResult.success && (pushResult.message?.includes('pushed') || false);
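Reconciled sources now count toward the push gate: a sync that only rebuilt missing `content.md` files still commits and pushes. The gating rule is small enough to sketch as a pure function (`pushDecision` is a hypothetical helper; the real code inlines this in `handleSync`):

```javascript
// Push gating from handleSync: commit and push only when this sync
// produced something new (processed, newly indexed, or reconciled).
function pushDecision(result) {
  const totalNew =
    (result.processing?.processed || 0) + result.sources_indexed + result.reconciled;
  return totalNew > 0
    ? { push: true, message: `Sync: Added ${totalNew} source(s)` }
    : { push: false };
}
```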
@@ -16,6 +16,7 @@ import { type ImageMediaType } from './processors.js';
  export interface ExtractedMetadata {
  title: string;
  summary: string;
+ description?: string;
  date: string | null;
  participants: string[];
  content_type: ContentType;
@@ -38,6 +39,13 @@ export declare function extractMetadata(content: string, filePath: string, optio
  base64: string;
  mediaType: ImageMediaType;
  };
+ fileMetadata?: {
+ filename: string;
+ sizeBytes: number;
+ createdAt: string;
+ modifiedAt: string;
+ exif?: Record<string, unknown>;
+ };
  }): Promise<ExtractedMetadata>;
  export declare function processFiles(files: DiscoveredFile[], dataDir: string, options?: {
  onProgress?: (completed: number, total: number, title: string) => void;
@@ -55,12 +55,24 @@ Content type guidelines:
 
  Be specific in the summary. Include concrete details, names, numbers when present.`;
  export async function extractMetadata(content, filePath, options = {}) {
- const { model = 'claude-sonnet-4-20250514', image } = options;
+ const { model = 'claude-sonnet-4-20250514', image, fileMetadata } = options;
  const client = getAnthropic();
  // Build message content based on whether we have an image or text
  let messageContent;
  if (image) {
- // Image analysis with Claude Vision
+ // Image analysis with Claude Vision — extract metadata AND a detailed text description
+ const imagePrompt = `Analyze this image and return ONLY valid JSON with these fields:
+
+ {
+ "title": "A descriptive title for this image",
+ "summary": "2-4 sentences capturing the key takeaway or purpose of this image",
+ "description": "A comprehensive text description of everything in this image. Include all text, data, labels, numbers, charts, diagrams, and visual elements. Transcribe any visible text verbatim. For charts/graphs, describe the data points and trends. For screenshots, describe the UI elements and content. Be thorough — this description replaces the image in a text-only knowledge base.",
+ "date": "ISO date string (YYYY-MM-DD) if mentioned, otherwise null",
+ "participants": ["list", "of", "names"] if people are mentioned, otherwise [],
+ "content_type": "one of: interview|meeting|conversation|document|note|analysis"
+ }
+
+ Be specific and thorough in the description. Include ALL visible text, numbers, and data.`;
  messageContent = [
  {
  type: 'image',
@@ -72,7 +84,7 @@ export async function extractMetadata(content, filePath, options = {}) {
  },
  {
  type: 'text',
- text: `${EXTRACTION_PROMPT}\n\nFile: ${path.basename(filePath)}\n\nAnalyze this image and extract metadata. Describe what's in the image in detail in the summary.`,
+ text: `${imagePrompt}\n\nFile: ${path.basename(filePath)}${fileMetadata ? `\nFile size: ${(fileMetadata.sizeBytes / 1024).toFixed(0)} KB\nFile created: ${fileMetadata.createdAt}\nFile modified: ${fileMetadata.modifiedAt}${fileMetadata.exif ? `\nEXIF data: ${JSON.stringify(fileMetadata.exif)}` : ''}` : ''}`,
  },
  ];
  }
@@ -86,7 +98,7 @@ export async function extractMetadata(content, filePath, options = {}) {
  }
  const response = await client.messages.create({
  model,
- max_tokens: 1000,
+ max_tokens: image ? 4000 : 1000,
  messages: [
  {
  role: 'user',
@@ -111,6 +123,7 @@ export async function extractMetadata(content, filePath, options = {}) {
  return {
  title: parsed.title || path.basename(filePath),
  summary: parsed.summary || 'No summary available',
+ description: parsed.description || undefined,
  date: parsed.date || null,
  participants: Array.isArray(parsed.participants) ? parsed.participants : [],
  content_type: validateContentType(parsed.content_type),
@@ -150,9 +163,12 @@ async function storeSourceToDisk(sourceId, file, metadata, processedContent, dat
  const sourceDir = path.join(sourcesDir, sourceId);
  // Create source directory
  await mkdir(sourceDir, { recursive: true });
- // Copy original file
- const originalExt = path.extname(file.absolutePath);
- await copyFile(file.absolutePath, path.join(sourceDir, `original${originalExt}`));
+ // Copy original file (skip binary formats — knowledge store is text-based)
+ const originalExt = path.extname(file.absolutePath).toLowerCase();
+ const binaryExts = ['.jpg', '.jpeg', '.png', '.gif', '.webp', '.bmp', '.tiff', '.ico', '.svg'];
+ if (!binaryExts.includes(originalExt)) {
+ await copyFile(file.absolutePath, path.join(sourceDir, `original${originalExt}`));
+ }
  // Save processed content
  await writeFile(path.join(sourceDir, 'content.md'), processedContent);
  // Save metadata
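The binary-skip check in `storeSourceToDisk` is extension-based and case-insensitive (note the new `.toLowerCase()`). Sketched as a standalone predicate (`shouldCopyOriginal` is a hypothetical name; the real code inlines the check):

```javascript
// Extension-based skip from storeSourceToDisk: image formats are not
// copied into the text-based store; all other originals are preserved.
const BINARY_EXTS = ['.jpg', '.jpeg', '.png', '.gif', '.webp', '.bmp', '.tiff', '.ico', '.svg'];

function shouldCopyOriginal(absolutePath) {
  const dot = absolutePath.lastIndexOf('.');
  const ext = dot === -1 ? '' : absolutePath.slice(dot).toLowerCase();
  return !BINARY_EXTS.includes(ext);
}
```

Lowercasing matters for camera exports like `IMG_0042.JPG`, which the previous case-sensitive `path.extname` comparison would have treated as non-binary.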
@@ -223,17 +239,59 @@ export async function processFiles(files, dataDir, options = {}) {
  // 1. Read and preprocess file
  const processed = await processFile(file.absolutePath);
  // 2. Extract metadata with Claude (handles both text and images)
- const metadata = await extractMetadata(processed.text, file.absolutePath, { model, image: processed.image });
- // For images, use the summary as the text content
- const contentText = processed.image
- ? `# ${metadata.title}\n\n${metadata.summary}`
- : processed.text;
+ const metadata = await extractMetadata(processed.text, file.absolutePath, { model, image: processed.image, fileMetadata: processed.fileMetadata });
+ // For images, use the detailed description as the text content
+ let contentText;
+ if (processed.image) {
+ const lines = [
+ `# ${metadata.title}`,
+ '',
+ metadata.description || metadata.summary,
+ '',
+ '---',
+ '',
+ `*Original file: ${path.basename(file.absolutePath)}*`,
+ `*Synced from: ${file.sourceName}*`,
+ metadata.date ? `*Date: ${metadata.date}*` : '',
+ ];
+ // Append EXIF metadata if available
+ const exif = processed.fileMetadata?.exif;
+ if (exif && Object.keys(exif).length > 0) {
+ lines.push('');
+ lines.push('## Image Metadata');
+ for (const [key, value] of Object.entries(exif)) {
+ if (value != null && value !== '') {
+ const label = key.replace(/([A-Z])/g, ' $1').replace(/^./, s => s.toUpperCase()).trim();
+ lines.push(`- **${label}:** ${Array.isArray(value) ? value.join(', ') : String(value)}`);
+ }
+ }
+ }
+ contentText = lines.filter(Boolean).join('\n');
+ }
+ else {
+ contentText = processed.text;
+ }
  // 3. Use existing ID for edits, generate new ID for new files
  const sourceId = file.existingId || generateSourceId();
- // 4. Index in Supabase FIRST (may fail on duplicate content_hash)
- await indexSource(sourceId, file, metadata, dbPath);
- // 5. Store source to disk ONLY if Supabase succeeded
- await storeSourceToDisk(sourceId, file, metadata, contentText, dataDir);
+ // 4. Store to disk FIRST ensures content.md always exists
+ // If this fails, we skip Supabase so the file stays "new" for retry.
+ try {
+ await storeSourceToDisk(sourceId, file, metadata, contentText, dataDir);
+ }
+ catch (diskError) {
+ console.error(`[process] Disk write failed for ${file.relativePath}: ${diskError}`);
+ throw new Error(`Disk write failed for ${file.relativePath}: ${diskError}`);
+ }
+ // 5. Index in Supabase — if this fails, disk content still exists
+ // and legacy sync will pick it up on the next run.
+ try {
+ await indexSource(sourceId, file, metadata, dbPath);
+ }
+ catch (supabaseError) {
+ console.error(`[process] Supabase index failed for ${file.relativePath}: ${supabaseError}`);
+ console.error(`[process] Content saved to disk — will be indexed on next sync via legacy path`);
+ // Don't re-throw: disk write succeeded, source is safe
+ }
  if (extensionRegistry && hookContext) {
  await extensionRegistry.runHook('onSourceCreated', {
  id: sourceId,
@@ -265,9 +323,11 @@ export async function processFiles(files, dataDir, options = {}) {
  onProgress?.(result.processed.length + result.errors.length, files.length, batchResult.value.metadata.title);
  }
  else {
+ const errorMsg = batchResult.reason?.message || String(batchResult.reason);
+ console.error(`[process] Failed to process ${file.relativePath}: ${errorMsg}`);
  result.errors.push({
  file,
- error: batchResult.reason?.message || String(batchResult.reason),
+ error: errorMsg,
  });
  onProgress?.(result.processed.length + result.errors.length, files.length, `Error: ${file.relativePath}`);
  }
@@ -17,6 +17,13 @@ export interface ProcessedContent {
  base64: string;
  mediaType: ImageMediaType;
  };
+ fileMetadata?: {
+ filename: string;
+ sizeBytes: number;
+ createdAt: string;
+ modifiedAt: string;
+ exif?: Record<string, unknown>;
+ };
  }
  export declare function processFile(filePath: string): Promise<ProcessedContent>;
  export declare function preprocessFiles(filePaths: string[], options?: {
@@ -4,7 +4,7 @@
  * Converts various file formats to plain text for Claude analysis.
  * All processing is IN MEMORY ONLY - original files are never modified.
  */
- import { readFile } from 'fs/promises';
+ import { readFile, stat } from 'fs/promises';
  import path from 'path';
  let pdfParser = null;
  async function getPdfParser() {
@@ -196,13 +196,107 @@ async function processImage(filePath) {
  }
  const buffer = await readFile(filePath);
  const base64 = buffer.toString('base64');
+ // Extract file-level metadata
+ const fileStat = await stat(filePath);
+ const filename = path.basename(filePath);
+ // Try to parse date from common filename patterns (e.g. WhatsApp, screenshots)
+ let dateFromFilename;
+ const whatsappMatch = filename.match(/(\d{4}-\d{2}-\d{2})/);
+ if (whatsappMatch) {
+ dateFromFilename = whatsappMatch[1];
+ }
+ // Extract EXIF metadata (GPS, camera, date, etc.)
+ let exifData;
+ try {
+ const exifr = await import('exifr');
+ const raw = await exifr.default.parse(buffer, {
+ // Request all available tags
+ tiff: true,
+ exif: true,
+ gps: true,
+ icc: false, // Skip color profile (not useful for knowledge)
+ iptc: true, // Keywords, captions, copyright
+ xmp: true, // Extended metadata
+ });
+ if (raw) {
+ // Extract the most useful fields
+ exifData = {};
+ // Camera info
+ if (raw.Make)
+ exifData.cameraMake = raw.Make;
+ if (raw.Model)
+ exifData.cameraModel = raw.Model;
+ if (raw.LensModel)
+ exifData.lens = raw.LensModel;
+ // Date
+ if (raw.DateTimeOriginal)
+ exifData.dateTaken = raw.DateTimeOriginal instanceof Date ? raw.DateTimeOriginal.toISOString() : String(raw.DateTimeOriginal);
+ if (raw.CreateDate)
+ exifData.dateCreated = raw.CreateDate instanceof Date ? raw.CreateDate.toISOString() : String(raw.CreateDate);
+ // GPS
+ if (raw.latitude != null && raw.longitude != null) {
+ exifData.gpsLatitude = raw.latitude;
+ exifData.gpsLongitude = raw.longitude;
+ }
+ if (raw.GPSAltitude != null)
+ exifData.gpsAltitude = raw.GPSAltitude;
+ // Image dimensions
+ if (raw.ImageWidth)
+ exifData.width = raw.ImageWidth;
+ if (raw.ImageHeight)
+ exifData.height = raw.ImageHeight;
+ if (raw.ExifImageWidth)
+ exifData.width = raw.ExifImageWidth;
+ if (raw.ExifImageHeight)
+ exifData.height = raw.ExifImageHeight;
+ // Software / source
+ if (raw.Software)
+ exifData.software = raw.Software;
+ if (raw.Artist)
+ exifData.artist = raw.Artist;
+ if (raw.Copyright)
+ exifData.copyright = raw.Copyright;
+ // IPTC/XMP tags
+ if (raw.Keywords)
+ exifData.keywords = raw.Keywords;
+ if (raw.Description)
+ exifData.description = raw.Description;
+ if (raw.Caption)
+ exifData.caption = raw.Caption;
+ if (raw.Subject)
+ exifData.subject = raw.Subject;
+ if (raw.Title)
+ exifData.title = raw.Title;
+ // Use EXIF date if no filename date
+ if (!dateFromFilename && exifData.dateTaken) {
+ const d = new Date(exifData.dateTaken);
+ if (!isNaN(d.getTime())) {
+ dateFromFilename = d.toISOString().split('T')[0];
+ }
+ }
+ // Drop empty objects
+ if (Object.keys(exifData).length === 0)
+ exifData = undefined;
+ }
+ }
+ catch (exifError) {
+ console.error(`[processors] EXIF extraction failed for ${path.basename(filePath)}: ${exifError}`);
+ }
  return {
  text: '', // Will be filled by Claude vision
  format: 'image',
+ metadata: dateFromFilename ? { date: dateFromFilename } : undefined,
  image: {
  base64,
  mediaType,
  },
+ fileMetadata: {
+ filename,
+ sizeBytes: fileStat.size,
+ createdAt: fileStat.birthtime.toISOString(),
+ modifiedAt: fileStat.mtime.toISOString(),
+ ...(exifData ? { exif: exifData } : {}),
+ },
  };
  }
  // ============================================================================
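The date-resolution order in `processImage` is: a `YYYY-MM-DD` embedded in the filename wins (WhatsApp exports and screenshots commonly carry one), then the EXIF `DateTimeOriginal` normalized to an ISO date, else no date. A pure sketch of that precedence (`resolveImageDate` is a hypothetical helper; the real code interleaves this with the exifr parse):

```javascript
// Date precedence from processImage: filename pattern first,
// then EXIF DateTimeOriginal truncated to YYYY-MM-DD.
function resolveImageDate(filename, exifDateTaken) {
  const m = filename.match(/(\d{4}-\d{2}-\d{2})/);
  if (m) return m[1];
  if (exifDateTaken) {
    const d = new Date(exifDateTaken);
    if (!isNaN(d.getTime())) return d.toISOString().split('T')[0];
  }
  return undefined;
}
```

One caveat carried over from the original regex: any eight-digit dash-separated run matches, so a filename like `report-9999-99-99.png` would "win" over valid EXIF data; the sketch reproduces that behavior rather than fixing it.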
@@ -90,47 +90,85 @@ export async function loadFullContent(state, ui, dbPath, sourcesDir) {
     const source = getSelectedSource(state);
     if (!source)
         return;
-    // Try to load from disk first
-    const contentPath = path.join(sourcesDir, source.id, 'content.md');
+    // Try to load from disk first (content.md, then original file)
+    const sourceDir = path.join(sourcesDir, source.id);
+    const contentPath = path.join(sourceDir, 'content.md');
     try {
         const { readFile } = await import('fs/promises');
         state.fullContent = await readFile(contentPath, 'utf-8');
     }
     catch {
-        // Fall back to database source details
-        const details = await getSourceById(dbPath, source.id);
-        if (details) {
-            state.fullContent = [
-                `# ${details.title}`,
-                '',
-                `**Type:** ${details.source_type} · ${details.content_type}`,
-                `**Date:** ${formatDate(details.created_at)}`,
-                `**Projects:** ${details.projects.join(', ') || '(none)'}`,
-                '',
-                '## Summary',
-                details.summary,
-                '',
-            ].join('\n');
-            if (details.themes && details.themes.length > 0) {
-                state.fullContent += '## Themes\n';
-                for (const theme of details.themes) {
-                    state.fullContent += `- **${theme.name}**`;
-                    if (theme.summary)
-                        state.fullContent += `: ${theme.summary}`;
-                    state.fullContent += '\n';
+        // content.md not found — try to find and read an original text file
+        let foundOriginal = false;
+        try {
+            const { readFile, readdir } = await import('fs/promises');
+            const files = await readdir(sourceDir);
+            const originalFile = files.find(f => f.startsWith('original.'));
+            if (originalFile) {
+                const textExts = ['.md', '.txt', '.json', '.jsonl', '.csv', '.xml', '.yaml', '.yml', '.html', '.log'];
+                const ext = path.extname(originalFile).toLowerCase();
+                if (textExts.includes(ext)) {
+                    state.fullContent = await readFile(path.join(sourceDir, originalFile), 'utf-8');
+                    foundOriginal = true;
                 }
-                state.fullContent += '\n';
             }
-        if (details.quotes && details.quotes.length > 0) {
-            state.fullContent += '## Key Quotes\n';
-            for (const quote of details.quotes.slice(0, 10)) {
-                const speaker = quote.speaker === 'user' ? '[You]' : `[${quote.speaker_name || 'Participant'}]`;
-                state.fullContent += `> ${speaker} "${quote.text}"\n\n`;
+        }
+        catch {
+            // Source directory doesn't exist locally — fall through to DB
+        }
+        if (!foundOriginal) {
+            // Try reading from source_path (original file in sync directory)
+            const details = await getSourceById(dbPath, source.id);
+            if (details?.source_path) {
+                try {
+                    const { readFile } = await import('fs/promises');
+                    const ext = path.extname(details.source_path).toLowerCase();
+                    const textExts = ['.md', '.txt', '.json', '.jsonl', '.csv', '.xml', '.yaml', '.yml', '.html', '.log'];
+                    if (textExts.includes(ext)) {
+                        state.fullContent = await readFile(details.source_path, 'utf-8');
+                        foundOriginal = true;
+                    }
+                }
+                catch {
+                    // source_path file doesn't exist or can't be read
+                }
+            }
+            if (!foundOriginal) {
+                // Final fallback: database summary view
+                if (details) {
+                    state.fullContent = [
+                        `# ${details.title}`,
+                        '',
+                        `**Type:** ${details.source_type} · ${details.content_type}`,
+                        `**Date:** ${formatDate(details.created_at)}`,
+                        `**Projects:** ${details.projects.join(', ') || '(none)'}`,
+                        '',
+                        '## Summary',
+                        details.summary,
+                        '',
+                    ].join('\n');
+                    if (details.themes && details.themes.length > 0) {
+                        state.fullContent += '## Themes\n';
+                        for (const theme of details.themes) {
+                            state.fullContent += `- **${theme.name}**`;
+                            if (theme.summary)
+                                state.fullContent += `: ${theme.summary}`;
+                            state.fullContent += '\n';
+                        }
+                        state.fullContent += '\n';
+                    }
+                    if (details.quotes && details.quotes.length > 0) {
+                        state.fullContent += '## Key Quotes\n';
+                        for (const quote of details.quotes.slice(0, 10)) {
+                            const speaker = quote.speaker === 'user' ? '[You]' : `[${quote.speaker_name || 'Participant'}]`;
+                            state.fullContent += `> ${speaker} "${quote.text}"\n\n`;
+                        }
+                    }
+                }
+                else {
+                    state.fullContent = `Could not load content for ${source.title}`;
                 }
             }
-        }
-        else {
-            state.fullContent = `Could not load content for ${source.title}`;
         }
     }
     // Store raw lines for searching
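Both disk fallbacks above gate on the same fixed extension list before reading a file as UTF-8. A minimal sketch of that check follows; the `extOf` and `isTextFile` helpers are invented here for illustration (the package uses Node's `path.extname` directly), but the extension list is copied verbatim from the diff.

```javascript
// Extension allow-list copied from the hunk above; anything not on it is
// treated as binary and skipped by the disk fallbacks.
const TEXT_EXTS = ['.md', '.txt', '.json', '.jsonl', '.csv', '.xml', '.yaml', '.yml', '.html', '.log'];

// Dependency-free stand-in for path.extname, for illustration only.
function extOf(file) {
  const i = file.lastIndexOf('.');
  return i === -1 ? '' : file.slice(i).toLowerCase();
}

const isTextFile = (file) => TEXT_EXTS.includes(extOf(file));

console.log(isTextFile('original.MD'));   // → true (extension is lowercased first)
console.log(isTextFile('original.png'));  // → false
console.log(isTextFile('original'));      // → false (no extension)
```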
@@ -370,6 +408,7 @@ export async function applyFilter(state, ui, query, filterMode, dbPath, dataDir,
         content_type: r.content_type,
         projects: r.projects,
         created_at: r.created_at,
+        indexed_at: r.created_at,
         summary: r.summary,
         score: r.score,
     }));
@@ -289,7 +289,7 @@ export function getSelectedSource(state) {
  */
 export function renderList(ui, state) {
     const width = ui.listContent.width - 2;
-    const height = ui.listContent.height - 1;
+    const height = ui.listContent.height;
     const lines = [];
     if (state.filtered.length === 0) {
         lines.push('');
@@ -328,8 +328,12 @@ function renderFlatList(ui, state, width, height, lines) {
     if (state.selectedIndex >= itemsVisible) {
         visibleStart = state.selectedIndex - itemsVisible + 1;
     }
-    const visibleEnd = Math.min(state.filtered.length, visibleStart + itemsVisible);
-    for (let i = visibleStart; i < visibleEnd; i++) {
+    // Render items until we fill the viewport
+    let linesUsed = 0;
+    for (let i = visibleStart; i < state.filtered.length; i++) {
+        if (linesUsed + linesPerItem > height)
+            break;
+        linesUsed += linesPerItem;
         const source = state.filtered[i];
         const isSelected = i === state.selectedIndex;
         renderDocumentItem(source, isSelected, width, lines, true);
@@ -339,15 +343,27 @@
  * Render grouped list with collapsible project folders
  */
 function renderGroupedList(ui, state, width, height, lines) {
-    // Calculate lines per item (headers take 2, docs take 3)
-    const avgLinesPerItem = 2.5;
-    const itemsVisible = Math.floor(height / avgLinesPerItem);
+    // Calculate line height for each item type
+    const itemLineHeight = (i) => state.listItems[i]?.type === 'header' ? 2 : 3;
+    // Find visibleStart: scroll so selectedIndex is visible
     let visibleStart = 0;
-    if (state.selectedIndex >= itemsVisible) {
-        visibleStart = state.selectedIndex - itemsVisible + 1;
-    }
-    const visibleEnd = Math.min(state.listItems.length, visibleStart + itemsVisible);
-    for (let i = visibleStart; i < visibleEnd; i++) {
+    // Count lines from visibleStart to selectedIndex (inclusive)
+    let linesFromStartToSelected = 0;
+    for (let i = 0; i <= state.selectedIndex; i++) {
+        linesFromStartToSelected += itemLineHeight(i);
+    }
+    // If selectedIndex doesn't fit, scroll forward
+    while (linesFromStartToSelected > height && visibleStart < state.selectedIndex) {
+        linesFromStartToSelected -= itemLineHeight(visibleStart);
+        visibleStart++;
+    }
+    // Render items until we fill the viewport
+    let linesUsed = 0;
+    for (let i = visibleStart; i < state.listItems.length; i++) {
+        const h = itemLineHeight(i);
+        if (linesUsed + h > height)
+            break;
+        linesUsed += h;
         const item = state.listItems[i];
         const isSelected = i === state.selectedIndex;
         if (item.type === 'header') {
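The replacement logic above scrolls a viewport over mixed-height items (2-line project headers, 3-line documents) instead of assuming a 2.5-line average. Extracted as a pure function with invented names (`computeVisibleStart` is not the package's API), the scroll calculation can be checked directly:

```javascript
// Pure-function sketch of the variable-height scroll calculation above:
// advance visibleStart until the selected item's last line fits in `height`.
function computeVisibleStart(items, selectedIndex, height, lineHeight) {
  let visibleStart = 0;
  let linesToSelected = 0;
  for (let i = 0; i <= selectedIndex; i++) {
    linesToSelected += lineHeight(items[i]);
  }
  while (linesToSelected > height && visibleStart < selectedIndex) {
    linesToSelected -= lineHeight(items[visibleStart]);
    visibleStart++;
  }
  return visibleStart;
}

// Headers take 2 lines, documents take 3, matching the diff's itemLineHeight.
const lineHeight = (item) => (item.type === 'header' ? 2 : 3);
const items = [
  { type: 'header' }, { type: 'doc' }, { type: 'doc' },
  { type: 'header' }, { type: 'doc' },
]; // heights: 2, 3, 3, 2, 3 → 13 lines total

// Selecting the last item in an 8-line viewport scrolls past the first two items.
console.log(computeVisibleStart(items, 4, 8, lineHeight)); // → 2
console.log(computeVisibleStart(items, 1, 8, lineHeight)); // → 0
```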
@@ -377,7 +393,7 @@ function renderProjectHeader(item, isSelected, width, lines) {
  * Render a document item row
  */
 function renderDocumentItem(source, isSelected, width, lines, showProject) {
-    const date = formatDate(source.created_at);
+    const date = formatDate(source.indexed_at || source.created_at);
     const contentType = source.content_type || 'document';
     const project = source.projects[0] || '';
     // Format content type as a tag
@@ -10,6 +10,7 @@ export interface SourceItem {
     content_type: ContentType;
     projects: string[];
     created_at: string;
+    indexed_at: string;
     summary: string;
     score?: number;
 }
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@getlore/cli",
-  "version": "0.5.1",
+  "version": "0.6.0",
   "description": "Research knowledge repository with semantic search, citations, and project lineage tracking",
   "type": "module",
   "main": "dist/index.js",
@@ -44,7 +44,7 @@
   "bugs": {
     "url": "https://github.com/getlore-ai/lore/issues"
   },
-  "license": "UNLICENSED",
+  "license": "MIT",
   "engines": {
     "node": ">=18"
   },
@@ -58,6 +58,7 @@
     "chokidar": "^5.0.0",
     "commander": "^12.1.0",
     "dotenv": "^17.2.3",
+    "exifr": "^7.1.3",
     "openai": "^4.77.0",
     "pdf-parse": "^2.4.5",
     "zod": "^3.24.0"
@@ -10,15 +10,23 @@ Lore is a research knowledge repository available via MCP. It stores documents,
 
 ## First-Time Setup
 
-If Lore is not installed yet, you can set it up for the user. The process requires their input at two points.
+If Lore is not installed yet, you can set it up for the user.
 
 1. **Install**: `npm install -g @getlore/cli` (requires Node.js 18+)
-2. **Ask the user** for their **email address** and their **OpenAI** and **Anthropic API keys**
-3. **Send OTP**: Run `lore setup --openai-key <key> --anthropic-key <key> --email <email> --data-dir ~/.lore` — this sends a 6-digit code to their email and exits
-4. **Ask the user** for the **6-digit code** from their email
-5. **Complete setup**: Re-run the same command with `--code <code>` appended — this verifies and finishes setup
-
-After setup, Lore works autonomously. No further user input is needed for searching, ingesting, or syncing.
+2. **Ask the user** for their **email address**
+3. **Ask for API keys**: present these two options:
+   - **Recommended**: Tell the user to run these commands themselves (keys stay out of chat history):
+     ```
+     export OPENAI_API_KEY="sk-..."
+     export ANTHROPIC_API_KEY="sk-ant-..."
+     ```
+     Then run: `lore setup --openai-key $OPENAI_API_KEY --anthropic-key $ANTHROPIC_API_KEY --email <email> --data-dir ~/.lore`
+   - **Convenient but riskier**: The user can paste keys directly into this chat and you run setup with them. Warn the user that keys shared in chat may be stored in conversation history.
+4. **Send OTP**: Run the setup command — this sends a 6-digit code to their email and exits
+5. **Ask the user** for the **6-digit code** from their email
+6. **Complete setup**: Re-run the same command with `--code <code>` appended
+
+After setup, Lore works autonomously.
 
 ## MCP Tools
 
@@ -9,15 +9,23 @@ Lore is a research knowledge repository available via MCP. It stores documents,
 
 ## First-Time Setup
 
-If Lore is not installed yet, you can set it up for the user. The process requires their input at two points.
+If Lore is not installed yet, you can set it up for the user.
 
 1. **Install**: `npm install -g @getlore/cli` (requires Node.js 18+)
-2. **Ask the user** for their **email address** and their **OpenAI** and **Anthropic API keys**
-3. **Send OTP**: Run `lore setup --openai-key <key> --anthropic-key <key> --email <email> --data-dir ~/.lore` — this sends a 6-digit code to their email and exits
-4. **Ask the user** for the **6-digit code** from their email
-5. **Complete setup**: Re-run the same command with `--code <code>` appended — this verifies and finishes setup
-
-After setup, Lore works autonomously. No further user input is needed for searching, ingesting, or syncing.
+2. **Ask the user** for their **email address**
+3. **Ask for API keys**: present these two options:
+   - **Recommended**: Tell the user to run these commands themselves (keys stay out of chat history):
+     ```
+     export OPENAI_API_KEY="sk-..."
+     export ANTHROPIC_API_KEY="sk-ant-..."
+     ```
+     Then run: `lore setup --openai-key $OPENAI_API_KEY --anthropic-key $ANTHROPIC_API_KEY --email <email> --data-dir ~/.lore`
+   - **Convenient but riskier**: The user can paste keys directly into this chat and you run setup with them. Warn the user that keys shared in chat may be stored in conversation history.
+4. **Send OTP**: Run the setup command — this sends a 6-digit code to their email and exits
+5. **Ask the user** for the **6-digit code** from their email
+6. **Complete setup**: Re-run the same command with `--code <code>` appended
+
+After setup, Lore works autonomously.
 
 ## MCP Tools
 
@@ -4,15 +4,23 @@ Lore is a research knowledge repository available via MCP. It stores documents,
 
 ## First-Time Setup
 
-If Lore is not installed yet, you can set it up for the user. The process requires their input at two points.
+If Lore is not installed yet, you can set it up for the user.
 
 1. **Install**: `npm install -g @getlore/cli` (requires Node.js 18+)
-2. **Ask the user** for their **email address** and their **OpenAI** and **Anthropic API keys**
-3. **Send OTP**: Run `lore setup --openai-key <key> --anthropic-key <key> --email <email> --data-dir ~/.lore` — this sends a 6-digit code to their email and exits
-4. **Ask the user** for the **6-digit code** from their email
-5. **Complete setup**: Re-run the same command with `--code <code>` appended — this verifies and finishes setup
-
-After setup, Lore works autonomously. No further user input is needed for searching, ingesting, or syncing.
+2. **Ask the user** for their **email address**
+3. **Ask for API keys**: present these two options:
+   - **Recommended**: Tell the user to run these commands themselves (keys stay out of chat history):
+     ```
+     export OPENAI_API_KEY="sk-..."
+     export ANTHROPIC_API_KEY="sk-ant-..."
+     ```
+     Then run: `lore setup --openai-key $OPENAI_API_KEY --anthropic-key $ANTHROPIC_API_KEY --email <email> --data-dir ~/.lore`
+   - **Convenient but riskier**: The user can paste keys directly into this chat and you run setup with them. Warn the user that keys shared in chat may be stored in conversation history.
+4. **Send OTP**: Run the setup command — this sends a 6-digit code to their email and exits
+5. **Ask the user** for the **6-digit code** from their email
+6. **Complete setup**: Re-run the same command with `--code <code>` appended
+
+After setup, Lore works autonomously.
 
 ## MCP Tools
 
@@ -4,15 +4,23 @@ Lore is a research knowledge repository accessible via MCP (Model Context Protoc
 
 ## First-Time Setup
 
-If Lore is not installed yet, you can set it up for the user. The process requires their input at two points.
+If Lore is not installed yet, you can set it up for the user.
 
 1. **Install**: `npm install -g @getlore/cli` (requires Node.js 18+)
-2. **Ask the user** for their **email address** and their **OpenAI** and **Anthropic API keys**
-3. **Send OTP**: Run `lore setup --openai-key <key> --anthropic-key <key> --email <email> --data-dir ~/.lore` — this sends a 6-digit code to their email and exits
-4. **Ask the user** for the **6-digit code** from their email
-5. **Complete setup**: Re-run the same command with `--code <code>` appended — this verifies and finishes setup
-
-After setup, Lore works autonomously. No further user input is needed for searching, ingesting, or syncing.
+2. **Ask the user** for their **email address**
+3. **Ask for API keys**: present these two options:
+   - **Recommended**: Tell the user to run these commands themselves (keys stay out of chat history):
+     ```
+     export OPENAI_API_KEY="sk-..."
+     export ANTHROPIC_API_KEY="sk-ant-..."
+     ```
+     Then run: `lore setup --openai-key $OPENAI_API_KEY --anthropic-key $ANTHROPIC_API_KEY --email <email> --data-dir ~/.lore`
+   - **Convenient but riskier**: The user can paste keys directly into this chat and you run setup with them. Warn the user that keys shared in chat may be stored in conversation history.
+4. **Send OTP**: Run the setup command — this sends a 6-digit code to their email and exits
+5. **Ask the user** for the **6-digit code** from their email
+6. **Complete setup**: Re-run the same command with `--code <code>` appended
+
+After setup, Lore works autonomously.
 
 ## Core Concepts