@memograph/cli 0.1.8 → 0.1.9

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
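Assuming npm 7 or later, an equivalent comparison can be reproduced locally with npm's built-in `diff` command, using the package name and versions shown in this page's header:

```shell
# Fetch both published versions from the registry and print a unified diff
npm diff --diff=@memograph/cli@0.1.8 --diff=@memograph/cli@0.1.9
```

This prints the same per-file hunks shown below (README.md and package.json).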
Files changed (2)
  1. package/README.md +21 -364
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -1,402 +1,59 @@
  # Memograph CLI

- **Memory Drift Inspector for Conversational AI**
+ Quickly inspect conversation transcripts and detect memory drift in AI assistants.

- Analyze conversation transcripts and detect when AI assistants lose context. Get a drift score, identify repetitions, forgotten preferences, and contradictions using AI-powered semantic analysis.
-
- ---
-
- ## Table of Contents
-
- - [What it does](#what-it-does)
- - [Why this exists](#why-this-exists)
- - [Try it now](#try-it-now)
- - [Install](#install)
- - [Quickstart](#quickstart)
- - [Using Memograph](#using-memograph)
-   - [Interactive Mode](#interactive-mode-recommended)
-   - [CLI Mode](#cli-mode-for-scripts--automation)
- - [Configuration](#configuration)
- - [Input Format](#input-format)
- - [Output](#output)
- - [Privacy & Security](#privacy--security)
- - [For Developers & Contributors](#for-developers--contributors)
- - [Troubleshooting](#troubleshooting)
- - [License](#license)
-
- ---
-
- ## What it does
-
- Memograph analyzes conversation transcripts to detect when AI assistants lose context or "forget" information:
-
- - **Detects repetitions**: User forced to repeat themselves
- - **Finds session resets**: Assistant language suggesting it forgot context
- - **Identifies forgotten preferences**: User restating preferences
- - **Spots contradictions**: Conflicting facts over time
- - **Calculates drift score** (0-100) and token waste percentage
-
- ---
-
- ## Why this exists
-
- When building conversational apps, memory failures often look like:
-
- - Users repeating preferences: "I already said I want Bangla…"
- - The assistant resets context: "Let's start over…"
- - The same question is asked multiple times because the assistant doesn't converge
- - Contradictory facts creep in
-
- Memograph CLI gives you a **quick, local diagnostic** before you rebuild prompts, memory layers, or retrieval logic.
-
- ---
-
- ## Try it now
+ ## Install

- Get started in one command:
+ ### Option A: Run instantly (no install)

  ```bash
  npx memograph-cli
  ```

- This launches interactive mode with:
- - Visual menu (arrow keys + Enter)
- - Setup wizard for AI configuration
- - Settings that persist across sessions
- - Real-time progress indicators
-
- ---
-
- ## Using Memograph
-
- ### Interactive Mode (Recommended)
-
- Run without arguments for a guided experience with arrow key navigation:
+ ### Option B: Install globally

  ```bash
- npx memograph-cli
- ```
-
- **Main features:**
- - **Visual menu** with ↑/↓ arrow key navigation
- - **Inspect transcripts**: Enter file path → Choose output format → View results
- - **Manage settings**: Configure once, settings persist in `~/.memograph/config.json`
- - **Setup wizard**: 5-step guided configuration for AI providers
-
- **Quick setup wizard:**
- 1. Select provider category (Cloud/Aggregators/Local)
- 2. Choose specific provider (OpenAI, Anthropic, Ollama, etc.)
- 3. Configure base URL (if needed)
- 4. Enter API key (if required)
- 5. Select model
-
- **Keyboard shortcuts:**
- - `↑` / `↓` - Navigate options
- - `Enter` - Select/confirm
- - `Ctrl+C` - Exit
-
- ---
-
- ## Install
-
- ### Option A: Try instantly (no installation) ⚡
-
- **Recommended for first-time users and quick analysis:**
-
- \`\`\`bash
- npx memograph-cli
- \`\`\`
-
- Launches the interactive mode immediately. Configure your AI model on first run, and you're ready to analyze transcripts!
-
- ### Option B: Install globally 📦
-
- **Best for regular use:**
-
- \`\`\`bash
  npm i -g memograph-cli
- \`\`\`
-
- After installation, run from anywhere:
-
- \`\`\`bash
- # Interactive mode
- memograph
-
- # Or CLI mode
- memograph inspect -i ./transcript.json
- \`\`\`
-
- The package name is `memograph-cli` and the command is `memograph`.
-
- ### Option C: Local development 🛠️
-
- **For contributors and local testing:**
-
- \`\`\`bash
- git clone https://github.com/yourusername/memograph-cli
- cd memograph-cli
- npm install
- npm run build
-
- # Run directly
- node dist/cli.js
-
- # Or use npm scripts
- npm start
- \`\`\`
-
- ---
-
- ## Quickstart
-
- ### Interactive Mode (Recommended)
-
- **Get started in 3 steps:**
-
- \`\`\`bash
- # 1. Launch interactive mode
- npx memograph-cli
-
- # 2. First time? Run the setup wizard
- #    - Select your AI provider (OpenAI, Anthropic, Ollama, etc.)
- #    - Enter API key (if required)
- #    - Choose a model
- #    Settings are saved to ~/.memograph/config.json
-
- # 3. Select "Inspect a transcript"
- #    - Enter path: ./transcript.json
- #    - Choose format: Text or JSON
- #    - View your drift analysis!
- \`\`\`
-
- ### CLI Mode (For Scripts & Power Users)
-
- **Quick example:**
-
- 1. Create a transcript file:
-
- **transcript.json**
-
- \`\`\`json
- {
-   "schema_version": "1.0",
-   "messages": [
-     { "idx": 0, "role": "user", "content": "My name is Tusher" },
-     { "idx": 1, "role": "assistant", "content": "Nice to meet you!" },
-     { "idx": 2, "role": "user", "content": "Please reply in Bangla from now on" },
-     { "idx": 3, "role": "assistant", "content": "Sure." },
-     { "idx": 4, "role": "user", "content": "Reply in Bangla please (I told you before)" }
-   ]
- }
- \`\`\`
-
- 2. Run inspect with flags:
-
- \`\`\`bash
- # Text output (uses settings from interactive mode or env vars)
- memograph inspect -i transcript.json
-
- # Or specify all options via CLI flags
- memograph inspect -i transcript.json \
-   --llm-provider openai \
-   --llm-model gpt-4o-mini \
-   --llm-api-key sk-...
-
- # JSON output for CI/scripts
- memograph inspect -i transcript.json --json
- \`\`\`
-
- **Note:** If you've configured settings in interactive mode, CLI commands automatically use those settings. You can override any setting with CLI flags.
-
- ---
-
- ### CLI Mode (For Scripts & Automation)
-
- For scripting and automation, use the `inspect` command directly:
-
- ```bash
- memograph-cli inspect -i transcript.json
- ```
-
- **When to use CLI mode:**
- - Automation scripts and CI/CD pipelines
- - Batch processing multiple files
- - When you already know your settings
-
- **Pro tip:** Configure settings once in interactive mode, then use CLI mode for automated workflows!
-
- ---
-
- **CLI inspect command:**
-
- ```bash
- memograph-cli inspect -i <path> [--json] [--llm-model <model>]
  ```

- **Common options:**
- - `-i, --input <path>` - Transcript file (required)
- - `--json` - Output JSON instead of text
- - `--llm-model <model>` - Override model (e.g., gpt-4o)
- - `--llm-provider <provider>` - Override provider (openai, anthropic)
- - `--max-messages <n>` - Limit messages processed
-
- **Examples:**
-
- ```bash
- # Basic usage (uses saved settings)
- memograph-cli inspect -i transcript.json
+ After install, the command is `memograph`.

- # JSON output for scripts
- memograph-cli inspect -i transcript.json --json
-
- # Use different model
- memograph-cli inspect -i transcript.json --llm-model gpt-4o
- ```
-
- For all options, run: `memograph-cli inspect --help`
-
- ---
-
- ## Configuration
+ ## Quickstart

- **Easiest: Interactive Setup**
+ ### 1) Interactive mode (recommended)

  ```bash
+ # Launch the guided menu
  npx memograph-cli
- # Select "Manage settings" → Follow wizard
- # Settings saved to ~/.memograph/config.json
  ```

- **Alternative: Environment Variables**
-
- ```bash
- # Create .env file
- OPENAI_API_KEY=sk-your-key-here
- LLM_MODEL=gpt-4o-mini
- ```
+ Follow the prompts to configure your AI provider and inspect a transcript.

- **Using Local Models (Ollama)**
+ ### 2) CLI mode (for scripts)

  ```bash
- # Install and start Ollama
- brew install ollama
- ollama pull llama3.2
- ollama serve
+ # Inspect a transcript file
+ memograph inspect -i ./transcript.json

- # Configure in interactive mode or use CLI flags
+ # JSON output (for automation)
+ memograph inspect -i ./transcript.json --json
  ```

- Settings priority: CLI flags > Environment variables > Config file
-
- ---
-
- ## Input Format
-
- Provide a JSON file with conversation messages:
+ ### Example transcript format

  ```json
  {
    "schema_version": "1.0",
    "messages": [
-     { "idx": 0, "role": "user", "content": "Hello" },
-     { "idx": 1, "role": "assistant", "content": "Hi!" }
+     { "idx": 0, "role": "user", "content": "My name is Tusher" },
+     { "idx": 1, "role": "assistant", "content": "Nice to meet you!" },
+     { "idx": 2, "role": "user", "content": "Please reply in Bangla from now on" },
+     { "idx": 3, "role": "assistant", "content": "Sure." },
+     { "idx": 4, "role": "user", "content": "Reply in Bangla please (I told you before)" }
    ]
  }
  ```

- **Required fields:**
- - `role`: "user", "assistant", "system", or "tool"
- - `content`: Message text
-
- **Optional fields:**
- - `idx`: Message index (auto-assigned if missing)
- - `ts`: ISO timestamp
- - `tokens`: Token count (estimated if missing)
-
- ---
-
- ## Output
-
- **Text output** (default): Human-readable report with drift score, events, and extracted facts.
-
- **JSON output** (`--json` flag): Machine-readable format for scripts and CI/CD.
-
- ```json
- {
-   "drift_score": 25,
-   "token_waste_pct": 7.1,
-   "events": [...],
-   "should_have_been_memory": [...]
- }
- ```
-
- ---
-
- ## Privacy & Security
-
- **Your data stays local:**
- - Memograph reads transcript files from your local filesystem
- - Only sends data to LLM APIs for analysis (or uses local models)
- - No data is stored or transmitted elsewhere
-
- **API Key Safety:**
- - Keys are stored in `~/.memograph/config.json` or environment variables
- - Never commit API keys to git (add `.env` to `.gitignore`)
- - Use local models (Ollama) to avoid sending data to external APIs
-
  ---

- ## For Developers & Contributors
-
- Interested in contributing or understanding how Memograph works? Check out [CONTRIBUTING.md](CONTRIBUTING.md) for:
-
- - **How it works**: Detection algorithms, scoring, performance optimizations
- - **Development setup**: Local environment, project structure, testing
- - **Roadmap**: Planned features and improvements
- - **Publishing**: Guidelines for releasing new versions
-
- ---
-
- ## Troubleshooting
-
- ### Common Issues
-
- **"API key not found"**
- - Run `npx memograph-cli` and use "Manage settings" → "Set/Update API Key"
- - Or set environment variable: `export OPENAI_API_KEY=sk-...`
-
- **Interactive mode doesn't start**
- - Don't pass any arguments (they trigger CLI mode)
- - Ensure terminal supports ANSI colors and arrow keys
-
- **Settings not saving**
- - Settings are in `~/.memograph/config.json`
- - Reset with: `rm ~/.memograph/config.json && npx memograph-cli`
-
- **Ollama not working**
- - Ensure Ollama is running: `ollama serve`
- - Use correct URL: `http://localhost:11434/v1`
- - Install model: `ollama pull llama3.2`
-
- **Network/API errors**
- - Check internet connection
- - Verify API status (status.openai.com / status.anthropic.com)
- - Try a different model or use local models
-
- **Where are settings stored?**
- - Location: `~/.memograph/config.json`
- - View: `cat ~/.memograph/config.json`
- - Edit via interactive mode: "Manage settings" → "Show raw config"
-
- **Settings priority:** CLI flags > Environment variables > Config file
-
- ---
-
- ## License
-
- MIT License - see LICENSE file for details.
+ For help: `memograph --help`
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@memograph/cli",
-   "version": "0.1.8",
+   "version": "0.1.9",
    "description": "Local-first CLI tool for analyzing conversation transcripts and detecting memory drift",
    "main": "dist/cli.js",
    "bin": {