universal-agent-memory 1.0.5 → 1.0.7
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/package.json +1 -1
- package/templates/CLAUDE.template.md +147 -1
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "universal-agent-memory",
-  "version": "1.0.5",
+  "version": "1.0.7",
   "description": "Universal AI agent memory system - CLAUDE.md templates, memory, worktrees for Claude Code, Factory.AI, VSCode, OpenCode",
   "type": "module",
   "main": "dist/index.js",
package/templates/CLAUDE.template.md
CHANGED

@@ -1,7 +1,18 @@
 <!--
-CLAUDE.md Universal Template - v10.5
+CLAUDE.md Universal Template - v10.6

 CHANGES IN THIS VERSION:
+- Added Domain-Specific Patterns (P21-P26)
+- Pattern 21: Chess Engine Integration (CEI) - use Stockfish, not reasoning
+- Pattern 22: Git Recovery Forensics (GRF) - backup first, forensic approach
+- Pattern 23: Compression Impossibility Detection (CID) - refuse impossible tasks
+- Pattern 24: Polyglot Code Construction (PCC) - search for existing examples
+- Pattern 25: Service Configuration Pipeline (SCP) - ordered setup, test each
+- Pattern 26: Near-Miss Iteration (NMI) - iterate on partial success tasks
+- Updated Pattern Router to include domain patterns
+- Added 30% time budget reservation for iteration
+
+PREVIOUS (v10.5):
 - STRENGTHENED Pattern Router: Now requires explicit analysis block output
 - STRENGTHENED Constraint Extraction: Mandatory checklist with verification
 - STRENGTHENED Adversarial Thinking: Explicit attack vector enumeration
@@ -424,6 +435,141 @@ ADVERSARIAL: "How do I craft input that bypasses the filter?"
 
 ---
 
+## 🎯 DOMAIN-SPECIFIC PATTERNS
+
+**Task-specific guidance for common failure categories.**
+
+### Pattern 21: Chess Engine Integration (CEI)
+Chess tasks require external engines - reasoning alone cannot find optimal moves.
+
+**Detection**: Keywords "chess", "best move", "FEN", "PGN", "checkmate"
+
+**Protocol**:
+1. Check for Stockfish: `which stockfish || apt-get install -y stockfish`
+2. Parse board state to FEN format
+3. Use Stockfish for analysis: `echo "position fen $FEN" | stockfish`
+4. Extract best move from engine output
+
+```bash
+# Stockfish one-liner for best move
+echo -e "position fen rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1\ngo depth 20" | stockfish | grep "bestmove"
+```
+
+**NEVER**: Try to calculate chess moves through reasoning - use the engine.
+
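The `bestmove` extraction above can be exercised without an engine installed; a minimal sketch against captured UCI output (the sample text and file path are illustrative, not engine-generated):

```shell
# Sample UCI output in the shape Stockfish prints (illustrative)
cat > /tmp/uci.out <<'EOF'
info depth 20 seldepth 28 score cp 34 pv e2e4 e7e5
bestmove e2e4 ponder e7e5
EOF

# The same grep-based extraction, with awk pulling the move token
grep '^bestmove' /tmp/uci.out | awk '{print $2}'   # -> e2e4
```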
+### Pattern 22: Git Recovery Forensics (GRF)
+Git recovery tasks require a forensic approach, not standard git commands.
+
+**Detection**: Keywords "recover", "corrupted", "lost commit", "fix git", "reflog"
+
+**Protocol**:
+1. **BACKUP FIRST**: `cp -r .git .git.bak`
+2. Check integrity: `git fsck --full --no-dangling`
+3. Check reflog: `git reflog` (commits survive even after reset)
+4. Check loose objects: `find .git/objects -type f`
+5. Recover from pack files if needed
+
+```bash
+# Recovery sequence
+git fsck --full 2>&1 | tee /tmp/git-fsck.log
+git reflog --all
+git log --all --oneline --graph
+```
+
+**NEVER**: Run destructive commands without backup. Use `--dry-run` first.
+
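The reflog step is the workhorse of that protocol; a throwaway-repo sketch showing that a commit discarded by `git reset --hard` is still reachable (the scratch directory and demo identity are illustrative):

```shell
# Build a scratch repo, "lose" a commit, then find it again via reflog
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "first"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "second"
lost=$(git rev-parse --short=7 HEAD)    # sha of the commit we are about to discard
git reset -q --hard HEAD~1              # "second" is now on no branch
git reflog | grep -q "$lost" && echo "recoverable: $lost"
```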
+### Pattern 23: Compression Impossibility Detection (CID)
+Some compression tasks are mathematically impossible.
+
+**Detection**: Keywords "compress", "codegolf", "under X bytes", "minimal size"
+
+**Impossibility Markers**:
+- ML model weights (GPT-2 = 500MB+, cannot be <1MB without losing function)
+- Random/encrypted data (incompressible by definition)
+- Asking for compression ratio beyond information-theoretic limits
+
+```bash
+# If task asks for impossible compression:
+echo "IMPOSSIBLE: Task requires compressing ${ORIGINAL_SIZE} to ${TARGET_SIZE}.
+Shannon's source coding theorem proves this is impossible for this data type.
+Minimum achievable size is approximately ${MIN_POSSIBLE}." > /app/IMPOSSIBLE.md
+```
+
+**NEVER**: Attempt to compress ML weights to <1% of original size.
+
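The random-data marker is easy to demonstrate on any Linux box: gzip cannot meaningfully shrink bytes drawn from `/dev/urandom` (file paths and the 99% threshold are illustrative choices):

```shell
# Random bytes are incompressible; gzip output stays roughly the same size or larger
head -c 100000 /dev/urandom > /tmp/rand.bin
gzip -kf /tmp/rand.bin
orig=$(wc -c < /tmp/rand.bin)
comp=$(wc -c < /tmp/rand.bin.gz)
[ "$comp" -ge "$((orig * 99 / 100))" ] && echo "incompressible: $orig -> $comp bytes"
```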
+### Pattern 24: Polyglot Code Construction (PCC)
+Polyglot tasks require specific compiler/interpreter quirks.
+
+**Detection**: Keywords "polyglot", "works in both", "compile as X and Y"
+
+**Known Polyglot Techniques**:
+- C/Python: Use `#if 0` blocks and string literals
+- Rust/C: Careful use of macros and comments
+- Shell/Python: Shebang and triple-quote tricks
+
+```c
+#if 0  /* C and Python polyglot example */
+"""
+#endif
+#include <stdio.h>
+int main() { printf("Hello from C\n"); return 0; }
+#if 0
+"""
+print("Hello from Python")
+#endif
+```
+
+**Protocol**: Search for existing polyglot examples before implementing.
+
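A C/Python polyglot of this shape can be spot-checked on the Python side without a compiler (checking the C side would need gcc). Note the leading comment must itself be legal in both languages, so in this sketch it rides on the first `#if 0` line as a C comment; the file name is illustrative:

```shell
# Write the polyglot to a file and run the same bytes as Python
cat > /tmp/poly.c <<'EOF'
#if 0  /* C and Python polyglot */
"""
#endif
#include <stdio.h>
int main() { printf("Hello from C\n"); return 0; }
#if 0
"""
print("Hello from Python")
#endif
EOF
python3 /tmp/poly.c   # prints "Hello from Python"
```

Python sees `#if 0` as a comment and the C body as a discarded triple-quoted string; C's preprocessor skips the `"""` lines and the `print` call inside the `#if 0` blocks.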
+### Pattern 25: Service Configuration Pipeline (SCP)
+Multi-service configuration requires ordered setup.
+
+**Detection**: Keywords "configure", "server", "webserver", "service", "daemon"
+
+**Protocol**:
+1. **Identify all services** needed (nginx, git, ssh, etc.)
+2. **Check service status**: `systemctl status <service>`
+3. **Configure in dependency order** (base → dependent)
+4. **Test each service** before moving to next
+5. **Verify end-to-end** after all configured
+
+```bash
+# Service configuration pattern
+for svc in nginx git-daemon ssh; do
+  systemctl status $svc || systemctl start $svc
+  systemctl is-active $svc || echo "FAILED: $svc"
+done
+```
+
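Outside systemd the same fail-fast, base-to-dependent shape applies; a sketch where `check` stands in for `systemctl is-active` (the function, the `broken-svc` sentinel, and the service names are all illustrative):

```shell
# Ordered, fail-fast configuration loop with a stand-in health check
check() { [ "$1" != "broken-svc" ]; }   # pretend everything except broken-svc is healthy
for svc in network ssh nginx; do        # base -> dependent order
  if check "$svc"; then
    echo "OK: $svc"
  else
    echo "FAILED: $svc"; exit 1        # stop before configuring anything downstream
  fi
done
echo "end-to-end: all services verified"
```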
+### Pattern 26: Near-Miss Iteration (NMI)
+When tests show >50% passing, focus on specific failing tests.
+
+**Detection**: Test results show partial success (e.g., 8/9, 6/7, 5/6)
+
+**Protocol**:
+1. Run tests with verbose output: `pytest -v 2>&1 | tee results.txt`
+2. Extract ONLY failing test names
+3. Read failing test code to understand exact requirement
+4. Fix specific issue without breaking passing tests
+5. Re-run ONLY failing tests first: `pytest test_file.py::test_name -v`
+6. Then run full suite to verify no regressions
+
+```bash
+# Near-miss iteration loop
+while true; do
+  pytest -v 2>&1 | tee /tmp/results.txt
+  FAILED=$(grep "FAILED" /tmp/results.txt | head -1)
+  [ -z "$FAILED" ] && echo "ALL PASS" && break
+  echo "Fixing: $FAILED"
+  # ... fix specific test ...
+done
+```
+
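Step 2's name extraction can be sketched against canned pytest output, since `pytest -v` prints one `node_id STATUS` line per test (the sample results file is illustrative):

```shell
# Pull failing test ids out of a saved pytest run
cat > /tmp/sample-results.txt <<'EOF'
test_math.py::test_add PASSED
test_math.py::test_div FAILED
test_math.py::test_mul PASSED
EOF
grep "FAILED" /tmp/sample-results.txt | awk '{print $1}'   # -> test_math.py::test_div
```

The printed node id can be passed straight back to pytest for the targeted re-run in step 5.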
+**Reserve 30% of time budget for near-miss iteration.**
+
+---
+
 ## ⚡ SESSION START PROTOCOL
 
 **EXECUTE IMMEDIATELY before any response:**