psychmem 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (93)
  1. package/README.md +632 -0
  2. package/dist/adapters/claude-code/index.d.ts +125 -0
  3. package/dist/adapters/claude-code/index.d.ts.map +1 -0
  4. package/dist/adapters/claude-code/index.js +398 -0
  5. package/dist/adapters/claude-code/index.js.map +1 -0
  6. package/dist/adapters/opencode/index.d.ts +50 -0
  7. package/dist/adapters/opencode/index.d.ts.map +1 -0
  8. package/dist/adapters/opencode/index.js +793 -0
  9. package/dist/adapters/opencode/index.js.map +1 -0
  10. package/dist/adapters/types.d.ts +226 -0
  11. package/dist/adapters/types.d.ts.map +1 -0
  12. package/dist/adapters/types.js +6 -0
  13. package/dist/adapters/types.js.map +1 -0
  14. package/dist/cli.d.ts +19 -0
  15. package/dist/cli.d.ts.map +1 -0
  16. package/dist/cli.js +461 -0
  17. package/dist/cli.js.map +1 -0
  18. package/dist/hooks/index.d.ts +92 -0
  19. package/dist/hooks/index.d.ts.map +1 -0
  20. package/dist/hooks/index.js +304 -0
  21. package/dist/hooks/index.js.map +1 -0
  22. package/dist/hooks/post-tool-use.d.ts +26 -0
  23. package/dist/hooks/post-tool-use.d.ts.map +1 -0
  24. package/dist/hooks/post-tool-use.js +69 -0
  25. package/dist/hooks/post-tool-use.js.map +1 -0
  26. package/dist/hooks/session-end.d.ts +32 -0
  27. package/dist/hooks/session-end.d.ts.map +1 -0
  28. package/dist/hooks/session-end.js +66 -0
  29. package/dist/hooks/session-end.js.map +1 -0
  30. package/dist/hooks/session-start.d.ts +55 -0
  31. package/dist/hooks/session-start.d.ts.map +1 -0
  32. package/dist/hooks/session-start.js +173 -0
  33. package/dist/hooks/session-start.js.map +1 -0
  34. package/dist/hooks/stop.d.ts +72 -0
  35. package/dist/hooks/stop.d.ts.map +1 -0
  36. package/dist/hooks/stop.js +273 -0
  37. package/dist/hooks/stop.js.map +1 -0
  38. package/dist/index.d.ts +114 -0
  39. package/dist/index.d.ts.map +1 -0
  40. package/dist/index.js +191 -0
  41. package/dist/index.js.map +1 -0
  42. package/dist/memory/context-sweep.d.ts +107 -0
  43. package/dist/memory/context-sweep.d.ts.map +1 -0
  44. package/dist/memory/context-sweep.js +557 -0
  45. package/dist/memory/context-sweep.js.map +1 -0
  46. package/dist/memory/patterns.d.ts +106 -0
  47. package/dist/memory/patterns.d.ts.map +1 -0
  48. package/dist/memory/patterns.js +613 -0
  49. package/dist/memory/patterns.js.map +1 -0
  50. package/dist/memory/selective-memory.d.ts +78 -0
  51. package/dist/memory/selective-memory.d.ts.map +1 -0
  52. package/dist/memory/selective-memory.js +227 -0
  53. package/dist/memory/selective-memory.js.map +1 -0
  54. package/dist/memory/structural-analyzer.d.ts +75 -0
  55. package/dist/memory/structural-analyzer.d.ts.map +1 -0
  56. package/dist/memory/structural-analyzer.js +359 -0
  57. package/dist/memory/structural-analyzer.js.map +1 -0
  58. package/dist/retrieval/index.d.ts +106 -0
  59. package/dist/retrieval/index.d.ts.map +1 -0
  60. package/dist/retrieval/index.js +291 -0
  61. package/dist/retrieval/index.js.map +1 -0
  62. package/dist/storage/database.d.ts +138 -0
  63. package/dist/storage/database.d.ts.map +1 -0
  64. package/dist/storage/database.js +748 -0
  65. package/dist/storage/database.js.map +1 -0
  66. package/dist/storage/sqlite-adapter.d.ts +35 -0
  67. package/dist/storage/sqlite-adapter.d.ts.map +1 -0
  68. package/dist/storage/sqlite-adapter.js +103 -0
  69. package/dist/storage/sqlite-adapter.js.map +1 -0
  70. package/dist/transcript/index.d.ts +8 -0
  71. package/dist/transcript/index.d.ts.map +1 -0
  72. package/dist/transcript/index.js +6 -0
  73. package/dist/transcript/index.js.map +1 -0
  74. package/dist/transcript/parser.d.ts +93 -0
  75. package/dist/transcript/parser.d.ts.map +1 -0
  76. package/dist/transcript/parser.js +373 -0
  77. package/dist/transcript/parser.js.map +1 -0
  78. package/dist/transcript/sweep.d.ts +75 -0
  79. package/dist/transcript/sweep.d.ts.map +1 -0
  80. package/dist/transcript/sweep.js +202 -0
  81. package/dist/transcript/sweep.js.map +1 -0
  82. package/dist/types/index.d.ts +328 -0
  83. package/dist/types/index.d.ts.map +1 -0
  84. package/dist/types/index.js +80 -0
  85. package/dist/types/index.js.map +1 -0
  86. package/dist/utils/paths.d.ts +19 -0
  87. package/dist/utils/paths.d.ts.map +1 -0
  88. package/dist/utils/paths.js +43 -0
  89. package/dist/utils/paths.js.map +1 -0
  90. package/hooks/hooks.json +54 -0
  91. package/package.json +83 -0
  92. package/plugin.js +45 -0
  93. package/plugin.json +19 -0
package/README.md ADDED
@@ -0,0 +1,632 @@
1
+ # PsychMem
2
+
3
+ **Psychology-grounded selective memory for AI coding agents**
4
+
5
+ PsychMem gives AI agents (Claude Code, OpenCode) human-like memory that persists across sessions. Instead of treating all context equally, it implements cognitive science principles: important information consolidates into long-term memory while trivial details decay away.
6
+
7
+ ## Table of Contents
8
+
9
+ - [Why PsychMem?](#why-psychmem)
10
+ - [Psychological Foundations](#psychological-foundations)
11
+ - [How It Works](#how-it-works)
12
+ - [Mathematical Model](#mathematical-model)
13
+ - [Implementation Details](#implementation-details)
14
+ - [Installation](#installation)
15
+ - [Configuration](#configuration)
16
+ - [Architecture](#architecture)
17
+ - [Research References](#research-references)
18
+
19
+ ---
20
+
21
+ ## Why PsychMem?
22
+
23
+ Current AI agents have no persistent memory between sessions. Every conversation starts fresh, requiring users to re-explain preferences, project context, and past decisions. PsychMem solves this by:
24
+
25
+ 1. **Extracting** important information from conversations in real-time
26
+ 2. **Scoring** memories using psychology-based importance metrics
27
+ 3. **Consolidating** significant memories to long-term storage
28
+ 4. **Decaying** irrelevant memories using Ebbinghaus forgetting curves
29
+ 5. **Injecting** relevant memories into new sessions automatically
30
+
31
+ ---
32
+
33
+ ## Psychological Foundations
34
+
35
+ PsychMem is built on established cognitive science research:
36
+
37
+ ### Dual-Store Memory Model (Atkinson & Shiffrin, 1968)
38
+
39
+ Human memory operates in two stages:
40
+
41
+ - **Short-Term Memory (STM)**: Limited capacity (~4 items), rapid decay, holds task-relevant information
42
+ - **Long-Term Memory (LTM)**: Unlimited capacity, slow decay, consolidated through rehearsal/importance
43
+
44
+ PsychMem implements this with separate STM/LTM stores and different decay rates:
45
+ - STM decay rate: `λ = 0.05` (fast: memories lose ~5% of their strength per hour, ≈14 h half-life)
46
+ - LTM decay rate: `λ = 0.01` (slow: memories lose ~1% of their strength per hour, ≈69 h half-life)
47
+
48
+ ### Working Memory Capacity (Cowan, 2001)
49
+
50
+ Research shows humans can hold **4 ± 1 items** in working memory simultaneously. PsychMem uses a slightly higher limit based on Miller's 7±2:
51
+
52
+ ```typescript
53
+ maxMemoriesPerStop: 7 // Extract at most 7 memories per conversation turn
54
+ ```
55
+
56
+ This prevents memory bloat while ensuring the most important information is captured.
57
+
58
+ ### Forgetting Curve (Ebbinghaus, 1885)
59
+
60
+ Memory strength decays exponentially over time without reinforcement:
61
+
62
+ ```
63
+ S(t) = S₀ × e^(-λt)
64
+ ```
65
+
66
+ Where:
67
+ - `S(t)` = memory strength at time t
68
+ - `S₀` = initial strength
69
+ - `λ` = decay rate constant
70
+ - `t` = time elapsed (hours)
71
+
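+ To make the curve concrete, here is a minimal sketch using PsychMem's decay constants; the helper names are illustrative and not part of the package API:
+
+ ```typescript
+ // Ebbinghaus-style exponential decay: S(t) = S0 * e^(-λt)
+ function strengthAfter(initial: number, lambda: number, hours: number): number {
+   return initial * Math.exp(-lambda * hours);
+ }
+
+ // The half-life follows directly from the curve: t_half = ln(2) / λ
+ const stmHalfLife = Math.log(2) / 0.05; // ≈ 13.9 hours
+ const ltmHalfLife = Math.log(2) / 0.01; // ≈ 69.3 hours
+
+ strengthAfter(1.0, 0.05, 24); // ≈ 0.30 → an untouched STM memory loses ~70% of its strength in a day
+ ```
+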
72
+ ### Memory Reconsolidation (Nader et al., 2000)
73
+
74
+ When a memory is retrieved and new, conflicting information is presented, the memory becomes labile and can be updated. PsychMem implements this with similarity thresholds (sketched below the list):
75
+
76
+ - **Reinforcing evidence** (similarity > 0.7): Boosts confidence, increments frequency
77
+ - **Conflicting evidence** (similarity < 0.3): Triggers reconsolidation, adjusts confidence
78
+ - **Neutral evidence**: No update needed
79
+
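+ A minimal sketch of this thresholding (the function name and labels are illustrative):
+
+ ```typescript
+ type EvidenceEffect = 'reinforce' | 'reconsolidate' | 'neutral';
+
+ // Classify new evidence against an existing memory by text similarity.
+ function classifyEvidence(similarity: number): EvidenceEffect {
+   if (similarity > 0.7) return 'reinforce';     // boost confidence, increment frequency
+   if (similarity < 0.3) return 'reconsolidate'; // conflicting: adjust confidence
+   return 'neutral';                             // no update needed
+ }
+ ```
+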
80
+ ### Emotional Salience & Importance
81
+
82
+ Emotionally significant events are remembered better. PsychMem detects analogous "emotional" signals in technical contexts:
83
+ - Errors and bugs (frustration/urgency)
84
+ - Corrections and mistakes (self-reflection)
85
+ - Explicit emphasis (ALL CAPS, "always", "never")
86
+ - Repeated requests (importance through frequency)
87
+
88
+ ---
89
+
90
+ ## How It Works
91
+
92
+ ### Two-Stage Pipeline
93
+
94
+ ```
95
+ ┌─────────────────────────────────────────────────────────────────┐
96
+ │ STAGE 1: CONTEXT SWEEP │
97
+ │ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
98
+ │ │ Multilingual │───▶│ Structural │───▶│ Candidate │ │
99
+ │ │ Patterns │ │ Analysis │ │ Extraction │ │
100
+ │ │ (15 langs) │ │ (typography) │ │ │ │
101
+ │ └──────────────┘ └──────────────┘ └──────────────┘ │
102
+ └─────────────────────────────────────────────────────────────────┘
103
+
104
+
105
+ ┌─────────────────────────────────────────────────────────────────┐
106
+ │ STAGE 2: SELECTIVE MEMORY │
107
+ │ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
108
+ │ │ Feature │───▶│ Strength │───▶│ Store │ │
109
+ │ │ Scoring │ │ Calculation │ │ Allocation │ │
110
+ │ │ (7 factors) │ │ │ │ (STM/LTM) │ │
111
+ │ └──────────────┘ └──────────────┘ └──────────────┘ │
112
+ └─────────────────────────────────────────────────────────────────┘
113
+ ```
114
+
115
+ ### Stage 1: Context Sweep
116
+
117
+ Extracts memory candidates by detecting importance signals:
118
+
119
+ **Layer 1 - Multilingual Keyword Patterns (15 languages)**
120
+
121
+ | Signal Type | Examples | Weight |
122
+ |-------------|----------|--------|
123
+ | Explicit Remember | "remember this", "忘れないで", "не забудь" | 0.9 |
124
+ | Emphasis Cue | "always", "必ず", "никогда" | 0.8 |
125
+ | Bug/Fix | "error", "バグ", "ошибка" | 0.8 |
126
+ | Learning | "learned", "分かった", "узнал" | 0.8 |
127
+ | Correction | "actually", "実は", "на самом деле" | 0.7 |
128
+ | Decision | "decided", "決めた", "решил" | 0.7 |
129
+ | Constraint | "can't", "できない", "нельзя" | 0.7 |
130
+ | Preference | "prefer", "好き", "предпочитаю" | 0.6 |
131
+
132
+ Languages supported: English, Spanish, French, German, Portuguese, Japanese, Chinese (Simplified/Traditional), Korean, Russian, Arabic, Hindi, Italian, Dutch, Turkish, Polish
133
+
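+ For illustration only, a pattern entry could be shaped roughly like this (a sketch, not the actual `patterns.ts` interface):
+
+ ```typescript
+ // Hypothetical shape of one multilingual importance pattern.
+ interface ImportancePattern {
+   signal: 'explicit_remember' | 'emphasis' | 'bugfix' | 'preference';
+   weight: number;                   // e.g. 0.9 for an explicit "remember this"
+   patterns: Record<string, RegExp>; // keyed by language code
+ }
+
+ const explicitRemember: ImportancePattern = {
+   signal: 'explicit_remember',
+   weight: 0.9,
+   patterns: {
+     en: /remember (this|that)/i,
+     ja: /忘れないで/,
+     ru: /не забудь/i,
+   },
+ };
+ ```
+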
134
+ **Layer 2 - Structural Analysis (Language-Agnostic)**
135
+
136
+ | Signal Type | Detection Method | Weight |
137
+ |-------------|------------------|--------|
138
+ | Typography Emphasis | ALL CAPS, `!!`, bold markdown | 0.7 |
139
+ | Correction Pattern | Short reply after long message | 0.6 |
140
+ | Repetition | Trigram overlap > 40% | 0.7 |
141
+ | Elaboration | Reply > 2× median length | 0.5 |
142
+ | Enumeration | Lists, "first/then/finally" | 0.5 |
143
+ | Meta Reference | Near tool errors, stack traces | 0.6 |
144
+
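+ As one example, the typography-emphasis signal can be approximated with a few character-level checks; this is a hedged sketch, not the actual `structural-analyzer.ts` logic:
+
+ ```typescript
+ // Rough typography-emphasis check: ALL-CAPS runs, "!!", or bold markdown.
+ function hasTypographyEmphasis(text: string): boolean {
+   const hasCapsRun = /\b[A-Z]{4,}\b/.test(text); // e.g. "NEVER", "IMPORTANT"
+   const hasBangs = text.includes('!!');
+   const hasBold = /\*\*[^*]+\*\*/.test(text);
+   return hasCapsRun || hasBangs || hasBold;      // contributes weight 0.7 when true
+ }
+ ```
+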
145
+ ### Stage 2: Selective Memory
146
+
147
+ Scores candidates and allocates them to the appropriate store:
148
+
149
+ **7-Feature Scoring Model**
150
+
151
+ ```
152
+ Strength = Σ(wᵢ × fᵢ)
153
+ ```
154
+
155
+ | Feature | Weight | Description |
156
+ |---------|--------|-------------|
157
+ | Recency | 0.20 | Time since creation (inverted, week scale) |
158
+ | Frequency | 0.15 | Access count (log-normalized) |
159
+ | Importance | 0.25 | Explicit + inferred signals |
160
+ | Utility | 0.20 | Task usefulness (feedback-adjusted) |
161
+ | Novelty | 0.10 | Distinctiveness vs existing memories |
162
+ | Confidence | 0.10 | Evidence consensus |
163
+ | Interference | -0.10 | Conflict penalty (negative weight) |
164
+
165
+ **Store Allocation Rules**
166
+
167
+ | Classification | Default Store | Auto-Promote | Scope |
168
+ |----------------|---------------|--------------|-------|
169
+ | bugfix | LTM | Yes | Project |
170
+ | learning | LTM | Yes | User |
171
+ | decision | LTM | Yes | Project |
172
+ | constraint | STM | No | User |
173
+ | preference | STM | No | User |
174
+ | procedural | STM | No | User |
175
+ | semantic | STM | No | Project |
176
+ | episodic | STM | No | Project |
177
+
178
+ **Consolidation (STM → LTM)**
179
+
180
+ Memories are promoted from STM to LTM when any of the following conditions is met (a sketch follows the list):
181
+ - Strength ≥ 0.7 (high importance)
182
+ - Frequency ≥ 3 (repeated access/mention)
183
+ - Classification is auto-promote type
184
+
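+ A minimal sketch of this check, using the default thresholds and auto-promote list from the configuration section (field names are illustrative):
+
+ ```typescript
+ // STM → LTM promotion: any single condition is sufficient.
+ function shouldPromoteToLtm(mem: { strength: number; frequency: number; classification: string }): boolean {
+   return (
+     mem.strength >= 0.7 ||
+     mem.frequency >= 3 ||
+     ['bugfix', 'learning', 'decision'].includes(mem.classification)
+   );
+ }
+ ```
+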
185
+ ---
186
+
187
+ ## Mathematical Model
188
+
189
+ ### Memory Strength Calculation
190
+
191
+ ```typescript
192
+ function calculateStrength(features: MemoryFeatureVector): number {
193
+ const w = scoringWeights;
+ const { recency, frequency, importance, utility, novelty, confidence, interference } = features;
194
+
195
+ // Normalize frequency (log scale for diminishing returns)
196
+ const normalizedFrequency = Math.min(1, Math.log(frequency + 1) / Math.log(10));
197
+
198
+ // Recency factor (0 = now, 1 = old; 168 hours = 1 week)
199
+ const recencyFactor = 1 - Math.min(1, recency / 168);
200
+
201
+ const strength =
202
+ w.recency * recencyFactor +
203
+ w.frequency * normalizedFrequency +
204
+ w.importance * importance +
205
+ w.utility * utility +
206
+ w.novelty * novelty +
207
+ w.confidence * confidence +
208
+ w.interference * interference; // Negative contribution
209
+
210
+ return Math.min(1, Math.max(0, strength)); // clamp to [0, 1]
211
+ }
212
+ ```
213
+
214
+ ### Exponential Decay Application
215
+
216
+ ```typescript
217
+ function applyDecay(): void {
218
+ for (const memory of activeMemories) {
219
+ const dtHours = (now - memory.updatedAt) / (1000 * 60 * 60);
220
+ const newStrength = memory.strength * Math.exp(-memory.decayRate * dtHours);
221
+
222
+ if (newStrength < 0.1) {
223
+ memory.status = 'decayed'; // Below threshold, mark for removal
224
+ } else {
225
+ memory.strength = newStrength;
226
+ }
227
+ }
228
+ }
229
+ ```
230
+
231
+ ### Preliminary Importance (Signal Combination)
232
+
233
+ ```typescript
234
+ function calculatePreliminaryImportance(signals: ImportanceSignal[]): number {
235
+ // Sort by weight (strongest first)
236
+ const sorted = [...signals].sort((a, b) => b.weight - a.weight); // copy so the caller's array is not mutated
237
+
238
+ // Combine with diminishing returns
239
+ let importance = 0;
240
+ for (let i = 0; i < sorted.length; i++) {
241
+ importance += sorted[i].weight * Math.pow(0.7, i); // each additional signal's multiplier is 70% of the previous one
242
+ }
243
+
244
+ return Math.min(1, importance);
245
+ }
246
+ ```
247
+
248
+ ### Novelty Calculation (Inverse Similarity)
249
+
250
+ ```typescript
251
+ function calculateNovelty(candidate: MemoryCandidate): number {
252
+ const existingMemories = getTopMemories(50);
253
+
254
+ if (existingMemories.length === 0) return 1.0; // Everything is novel initially
255
+
256
+ // Find max similarity to any existing memory
257
+ let maxSimilarity = 0;
258
+ for (const mem of existingMemories) {
259
+ const similarity = jaccardSimilarity(candidate.summary, mem.summary);
260
+ maxSimilarity = Math.max(maxSimilarity, similarity);
261
+ }
262
+
263
+ return 1 - maxSimilarity; // Novelty is inverse of similarity
264
+ }
265
+ ```
266
+
267
+ ### Interference Detection
268
+
269
+ ```typescript
270
+ function detectInterference(candidate: MemoryCandidate): number {
271
+ let interference = 0;
272
+
273
+ for (const mem of existingMemories) {
274
+ const similarity = jaccardSimilarity(candidate.summary, mem.summary);
275
+
276
+ // Similar topic (0.3-0.8) but different content = potential conflict
277
+ if (similarity > 0.3 && similarity < 0.8) {
278
+ interference = Math.max(interference, similarity * 0.5);
279
+ }
280
+ }
281
+
282
+ return interference;
283
+ }
284
+ ```
285
+
286
+ ### Text Similarity (Jaccard Index)
287
+
288
+ ```typescript
289
+ function jaccardSimilarity(a: string, b: string): number {
290
+ const wordsA = new Set(a.toLowerCase().split(/\s+/));
291
+ const wordsB = new Set(b.toLowerCase().split(/\s+/));
292
+
293
+ const intersection = new Set([...wordsA].filter(x => wordsB.has(x)));
294
+ const union = new Set([...wordsA, ...wordsB]);
295
+
296
+ return intersection.size / union.size;
297
+ }
298
+ ```
299
+
300
+ ---
301
+
302
+ ## Implementation Details
303
+
304
+ ### Deduplication Strategy
305
+
306
+ Before storing, candidates are deduplicated using a **70% keyword-overlap threshold**:
307
+
308
+ ```typescript
309
+ deduplicationThreshold: 0.7 // If 70%+ words match, merge candidates
310
+ ```
311
+
312
+ This prevents storing "User prefers TypeScript" and "User prefers using TypeScript" as separate memories.
313
+
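+ With the `jaccardSimilarity` helper shown in the Mathematical Model section, the check itself is a one-liner (the wrapper name is illustrative):
+
+ ```typescript
+ // Two candidates are merged when their word overlap reaches the threshold.
+ function isDuplicate(a: string, b: string, threshold = 0.7): boolean {
+   return jaccardSimilarity(a, b) >= threshold;
+ }
+
+ // "User prefers TypeScript" vs "User prefers using TypeScript":
+ // intersection = 3 words, union = 4 words → similarity 0.75 ≥ 0.7 → merged
+ ```
+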
314
+ ### Memory Scoping (v1.6)
315
+
316
+ Memories are scoped to prevent project-specific knowledge from polluting unrelated sessions:
317
+
318
+ **User-Level** (always injected):
319
+ - Constraints: "Never use `var` in TypeScript"
320
+ - Preferences: "Prefers functional style"
321
+ - Learnings: "Learned that bun:sqlite has different API"
322
+ - Procedural: "Run tests before committing"
323
+
324
+ **Project-Level** (only injected for matching project):
325
+ - Decisions: "Decided to use SQLite over Postgres"
326
+ - Bugfixes: "Fixed null pointer in auth module"
327
+ - Semantic: "The API uses REST, not GraphQL"
328
+ - Episodic: "Yesterday we refactored the auth flow"
329
+
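+ At injection time this scoping amounts to a simple filter; the field names below are illustrative, not the stored schema:
+
+ ```typescript
+ interface ScopedMemory {
+   scope: 'user' | 'project';
+   projectId?: string;
+ }
+
+ // User-level memories are always eligible; project-level ones only for the matching project.
+ function isInScope(mem: ScopedMemory, currentProjectId: string): boolean {
+   return mem.scope === 'user' || mem.projectId === currentProjectId;
+ }
+ ```
+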
330
+ ### Per-Message Extraction (v1.9)
331
+
332
+ Instead of waiting for session end, PsychMem extracts memories after each message:
333
+
334
+ ```typescript
335
+ // Sliding window approach
336
+ messageWindowSize: 3 // Include last 3 messages for context
337
+ messageImportanceThreshold: 0.5 // Only extract if importance >= 0.5
338
+ ```
339
+
340
+ **Pre-filter for efficiency**: Before running full extraction, a lightweight regex check skips low-signal messages:
341
+
342
+ ```typescript
343
+ function preFilterImportance(content: string): boolean {
344
+ // Quick check for high-importance patterns
345
+ return /remember|important|always|never|error|bug|fix|learned|decided/i.test(content);
346
+ }
347
+ ```
348
+
349
+ ### Confidence Levels
350
+
351
+ Different extraction methods yield different confidence levels:
352
+
353
+ | Method | Confidence | Rationale |
354
+ |--------|------------|-----------|
355
+ | Multilingual regex match | 0.75 | Explicit language patterns are reliable |
356
+ | Structural analysis only | 0.50 | Typography/flow signals are suggestive |
357
+ | Tool event analysis | 0.60 | Errors/fixes are usually important |
358
+ | Repetition detection | 0.50 | Frequency suggests importance |
359
+
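+ A hedged sketch of how these base values could be attached to candidates (the map below is illustrative; the real constants live in the extraction code):
+
+ ```typescript
+ // Base confidence per extraction method, mirroring the table above.
+ const METHOD_CONFIDENCE = {
+   multilingualPattern: 0.75,
+   structuralAnalysis: 0.5,
+   toolEvent: 0.6,
+   repetition: 0.5,
+ } as const;
+
+ type ExtractionMethod = keyof typeof METHOD_CONFIDENCE;
+
+ function baseConfidence(method: ExtractionMethod): number {
+   return METHOD_CONFIDENCE[method];
+ }
+ ```
+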
360
+ ### Runtime Compatibility
361
+
362
+ PsychMem works in both:
363
+ - **Node.js** (Claude Code CLI): Uses `better-sqlite3`
364
+ - **Bun** (OpenCode plugins): Uses `bun:sqlite`
365
+
366
+ The `sqlite-adapter.ts` module abstracts over the differences:
367
+
368
+ ```typescript
369
+ export async function createDatabase(dbPath: string): Promise<SqliteDatabase> {
370
+ if (isBun()) {
371
+ const { Database } = await import('bun:sqlite');
372
+ return new Database(dbPath);
373
+ } else {
374
+ const Database = (await import('better-sqlite3')).default;
375
+ return new Database(dbPath);
376
+ }
377
+ }
378
+ ```
379
+
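+ The snippet above assumes an `isBun()` helper. One common way to detect the runtime looks like this; it is an assumption for illustration, not necessarily what `sqlite-adapter.ts` does:
+
+ ```typescript
+ // Bun exposes a `Bun` global; Node.js does not.
+ function isBun(): boolean {
+   return typeof (globalThis as { Bun?: unknown }).Bun !== 'undefined';
+ }
+ ```
+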
380
+ ---
381
+
382
+ ## Installation
383
+
384
+ ### Prerequisites
385
+
386
+ PsychMem uses `better-sqlite3`, which requires native compilation. Ensure you have:
387
+
388
+ - **Windows**: [Visual Studio Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/) with "Desktop development with C++" workload, and Python 3.x
389
+ - **macOS**: Xcode Command Line Tools (`xcode-select --install`)
390
+ - **Linux**: `build-essential` package (`sudo apt install build-essential`)
391
+
392
+ ### OpenCode Plugin
393
+
394
+ **Via npm (recommended):**
395
+
396
+ Add to `opencode.json`:
397
+ ```json
398
+ {
399
+ "plugin": ["psychmem"]
400
+ }
401
+ ```
402
+
403
+ That's it. OpenCode installs and loads it automatically on next startup.
404
+
405
+ **Via local clone (alternative):**
406
+
407
+ OpenCode auto-loads any `.js` or `.ts` files placed directly in its plugin directories — no `opencode.json` entry needed for local files.
408
+
409
+ ```bash
410
+ # Clone and build
411
+ git clone https://github.com/muratg98/psychmem.git
412
+ cd psychmem
413
+ npm install && npm run build
414
+
415
+ # Copy the npm-ready plugin entry point to the global auto-load directory
416
+ cp plugin.js ~/.config/opencode/plugins/psychmem.js
417
+ ```
418
+
419
+ Windows (PowerShell):
420
+ ```powershell
421
+ git clone https://github.com/muratg98/psychmem.git
422
+ cd psychmem
423
+ npm install && npm run build
424
+ Copy-Item "plugin.js" "$env:USERPROFILE\.config\opencode\plugins\psychmem.js"
425
+ ```
426
+
427
+ > **Note:** When using the local clone approach, `plugin.js` still imports from the `psychmem` package, so you'll need to run `npm install -g psychmem` or otherwise make the package resolvable. The npm-based install above avoids this extra step.
428
+
429
+ ### Claude Code Integration
430
+
431
+ PsychMem integrates with Claude Code via the hooks system.
432
+
433
+ **Linux/macOS:**
434
+ ```bash
435
+ # Clone to Claude Code's plugins directory
436
+ git clone https://github.com/muratg98/psychmem.git ~/.claude/plugins/psychmem
437
+
438
+ # Install and build
439
+ cd ~/.claude/plugins/psychmem
440
+ npm install
441
+ npm run build
442
+ ```
443
+
444
+ **Windows (PowerShell):**
445
+ ```powershell
446
+ # Clone to Claude Code's plugins directory
447
+ git clone https://github.com/muratg98/psychmem.git "$env:USERPROFILE\.claude\plugins\psychmem"
448
+
449
+ # Install and build
450
+ cd "$env:USERPROFILE\.claude\plugins\psychmem"
451
+ npm install
452
+ npm run build
453
+ ```
454
+
455
+ Then register the hooks in your Claude Code settings (`~/.claude/settings.json`):
456
+
457
+ ```json
458
+ {
459
+ "plugins": [
460
+ {
461
+ "name": "psychmem",
462
+ "root": "~/.claude/plugins/psychmem",
463
+ "hooks": "hooks/hooks.json"
464
+ }
465
+ ]
466
+ }
467
+ ```
468
+
469
+ PsychMem will then automatically:
470
+ - **Inject** relevant memories at session start
471
+ - **Track** tool usage during the session (async, non-blocking)
472
+ - **Extract** memories when you stop or end a session
473
+
474
+ Memories are written to Claude Code's auto-loaded memory location:
475
+ - `~/.claude/projects/<project>/memory/MEMORY.md` (first 200 lines auto-loaded)
476
+ - Topic files: `constraints.md`, `learnings.md`, `decisions.md`, `bugfixes.md`
477
+
478
+ ---
479
+
480
+ ## Configuration
481
+
482
+ ### Environment Variables
483
+
484
+ ```bash
485
+ # Core settings
486
+ PSYCHMEM_AGENT_TYPE=opencode # or "claude-code"
487
+ PSYCHMEM_DB_PATH=~/.psychmem/{agentType}/memory.db
488
+
489
+ # OpenCode-specific
490
+ PSYCHMEM_INJECT_ON_COMPACTION=true # Inject memories during compaction
491
+ PSYCHMEM_EXTRACT_ON_COMPACTION=true # Extract before compaction
492
+ PSYCHMEM_EXTRACT_ON_MESSAGE=true # Per-message extraction
493
+ PSYCHMEM_MAX_COMPACTION_MEMORIES=10 # Max memories on compaction
494
+ PSYCHMEM_MAX_SESSION_MEMORIES=10 # Max memories on session start
495
+ PSYCHMEM_MESSAGE_WINDOW_SIZE=3 # Messages for context window
496
+ PSYCHMEM_MESSAGE_IMPORTANCE_THRESHOLD=0.5 # Min importance for extraction
497
+ ```
498
+
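+ The `{agentType}` placeholder in `PSYCHMEM_DB_PATH` is filled in per adapter. A minimal sketch of that expansion (the helper name and exact behavior are assumptions, not the actual `utils/paths.ts` code):
+
+ ```typescript
+ import * as os from 'node:os';
+ import * as path from 'node:path';
+
+ // Expand "~" and "{agentType}" in the configured database path template.
+ function resolveDbPath(agentType: 'opencode' | 'claude-code', template = '~/.psychmem/{agentType}/memory.db'): string {
+   const expanded = template.replace('{agentType}', agentType);
+   return expanded.startsWith('~') ? path.join(os.homedir(), expanded.slice(1)) : expanded;
+ }
+
+ resolveDbPath('claude-code'); // → "<home>/.psychmem/claude-code/memory.db"
+ ```
+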
499
+ ### Default Configuration
500
+
501
+ ```typescript
502
+ const DEFAULT_CONFIG: PsychMemConfig = {
503
+ // Decay rates (per hour)
504
+ stmDecayRate: 0.05, // ≈14-hour half-life
505
+ ltmDecayRate: 0.01, // Slow decay for LTM
506
+
507
+ // Consolidation thresholds
508
+ stmToLtmStrengthThreshold: 0.7,
509
+ stmToLtmFrequencyThreshold: 3,
510
+
511
+ // Scoring weights (positive weights sum to 1.0)
512
+ scoringWeights: {
513
+ recency: 0.20,
514
+ frequency: 0.15,
515
+ importance: 0.25,
516
+ utility: 0.20,
517
+ novelty: 0.10,
518
+ confidence: 0.10,
519
+ interference: -0.10,
520
+ },
521
+
522
+ // Working memory limit (Miller's 7±2)
523
+ maxMemoriesPerStop: 7,
524
+
525
+ // Deduplication
526
+ deduplicationThreshold: 0.7,
527
+
528
+ // Auto-promote to LTM
529
+ autoPromoteToLtm: ['bugfix', 'learning', 'decision'],
530
+ };
531
+ ```
532
+
533
+ ---
534
+
535
+ ## Architecture
536
+
537
+ ```
538
+ src/
539
+ ├── index.ts # Main PsychMem class
540
+ ├── types/index.ts # TypeScript definitions, config
541
+ ├── storage/
542
+ │ ├── database.ts # SQLite storage with decay
543
+ │ └── sqlite-adapter.ts # Node.js/Bun compatibility
544
+ ├── memory/
545
+ │ ├── context-sweep.ts # Stage 1: Extract candidates
546
+ │ ├── selective-memory.ts # Stage 2: Score & consolidate
547
+ │ ├── patterns.ts # Multilingual importance patterns
548
+ │ └── structural-analyzer.ts # Typography/flow signals
549
+ ├── hooks/
550
+ │ └── stop.ts # Main extraction hook
551
+ ├── retrieval/
552
+ │ └── index.ts # Scope-aware retrieval
553
+ ├── adapters/
554
+ │ ├── opencode/index.ts # OpenCode plugin adapter
555
+ │ └── claude-code/index.ts # Claude Code auto-memory adapter
556
+ └── transcript/
557
+ └── sweep.ts # Transcript-based extraction
558
+ ```
559
+
560
+ ### Data Storage
561
+
562
+ ```
563
+ ~/.psychmem/
564
+ ├── opencode/memory.db # OpenCode memories
565
+ └── claude-code/memory.db # Claude Code memories
566
+
567
+ # Claude Code also uses:
568
+ ~/.claude/projects/<project>/memory/
569
+ ├── MEMORY.md # Main file (first 200 lines loaded)
570
+ ├── constraints.md
571
+ ├── learnings.md
572
+ ├── decisions.md
573
+ └── bugfixes.md
574
+ ```
575
+
576
+ ---
577
+
578
+ ## Research References
579
+
580
+ ### Primary Sources
581
+
582
+ 1. **Atkinson, R.C. & Shiffrin, R.M.** (1968). Human memory: A proposed system and its control processes. *Psychology of Learning and Motivation*, 2, 89-195.
583
+ - Foundation for dual-store (STM/LTM) memory model
584
+
585
+ 2. **Ebbinghaus, H.** (1885). *Über das Gedächtnis: Untersuchungen zur experimentellen Psychologie*. Leipzig: Duncker & Humblot.
586
+ - Original forgetting curve research: `R = e^(-t/S)`
587
+
588
+ 3. **Cowan, N.** (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. *Behavioral and Brain Sciences*, 24(1), 87-114.
589
+ - Working memory capacity of 4±1 items
590
+
591
+ 4. **Nader, K., Schafe, G.E., & LeDoux, J.E.** (2000). Fear memories require protein synthesis in the amygdala for reconsolidation after retrieval. *Nature*, 406(6797), 722-726.
592
+ - Memory reconsolidation theory
593
+
594
+ ### Supporting Research
595
+
596
+ 5. **Baddeley, A.D. & Hitch, G.** (1974). Working memory. *Psychology of Learning and Motivation*, 8, 47-89.
597
+ - Working memory model and capacity limits
598
+
599
+ 6. **McGaugh, J.L.** (2000). Memory—a century of consolidation. *Science*, 287(5451), 248-251.
600
+ - Memory consolidation mechanisms
601
+
602
+ 7. **Roediger, H.L. & Karpicke, J.D.** (2006). The power of testing memory: Basic research and implications for educational practice. *Perspectives on Psychological Science*, 1(3), 181-210.
603
+ - Spaced repetition and retrieval practice (basis for frequency-based promotion)
604
+
605
+ 8. **Craik, F.I.M. & Lockhart, R.S.** (1972). Levels of processing: A framework for memory research. *Journal of Verbal Learning and Verbal Behavior*, 11(6), 671-684.
606
+ - Depth of processing affects memory strength (basis for importance weighting)
607
+
608
+ ### Applied Cognitive Science
609
+
610
+ 9. **Anderson, J.R. & Schooler, L.J.** (1991). Reflections of the environment in memory. *Psychological Science*, 2(6), 396-408.
611
+ - Environmental statistics predict memory retrieval (basis for utility scoring)
612
+
613
+ 10. **Wickens, D.D.** (1970). Encoding categories of words: An empirical approach to meaning. *Psychological Review*, 77(1), 1-15.
614
+ - Interference effects in memory (basis for interference penalty)
615
+
616
+ ---
617
+
618
+ ## License
619
+
620
+ MIT
621
+
622
+ ---
623
+
624
+ ## Contributing
625
+
626
+ Contributions welcome! Key areas:
627
+ - Additional language patterns (currently 15 languages)
628
+ - Improved structural signal detection
629
+ - Learned scoring weights (currently rule-based)
630
+ - Integration with other AI agents
631
+
632
+ See [GitHub Issues](https://github.com/muratg98/psychmem/issues) for open tasks.