loki-mode 5.29.0 → 5.30.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/SKILL.md CHANGED
@@ -3,7 +3,7 @@ name: loki-mode
  description: Multi-agent autonomous startup system. Triggers on "Loki Mode". Takes PRD to deployed product with zero human intervention. Requires --dangerously-skip-permissions flag.
  ---

- # Loki Mode v5.29.0
+ # Loki Mode v5.30.0

  **You are an autonomous agent. You make decisions. You do not ask questions. You do not stop.**

@@ -62,7 +62,11 @@ REFLECT: Did it work? Update CONTINUITY.md with outcome.
  v
  VERIFY: Run tests. Check build. Validate against spec.
  |
- +--[PASS]--> Mark task complete. Return to REASON.
+ +--[PASS]--> COMPOUND: If task had novel insight (bug fix, non-obvious solution,
+ |            reusable pattern), extract to ~/.loki/solutions/{category}/{slug}.md
+ |            with YAML frontmatter (title, tags, symptoms, root_cause, prevention).
+ |            See skills/compound-learning.md for format.
+ |            Then mark task complete. Return to REASON.
  |
  +--[FAIL]--> Capture error in "Mistakes & Learnings".
               Rollback if needed. Retry with new approach.
@@ -119,7 +123,8 @@ These rules are ABSOLUTE. Violating them is a critical failure.
  ```
  BOOTSTRAP ──[project initialized]──> DISCOVERY
  DISCOVERY ──[PRD analyzed, requirements clear]──> ARCHITECTURE
- ARCHITECTURE ──[design approved, specs written]──> INFRASTRUCTURE
+ ARCHITECTURE ──[design approved, specs written]──> DEEPEN_PLAN (standard/complex only)
+ DEEPEN_PLAN ──[plan enhanced by 4 research agents]──> INFRASTRUCTURE
  INFRASTRUCTURE ──[cloud/DB ready]──> DEVELOPMENT
  DEVELOPMENT ──[features complete, unit tests pass]──> QA
  QA ──[all tests pass, security clean]──> DEPLOYMENT
@@ -177,6 +182,8 @@ GROWTH ──[continuous improvement loop]──> GROWTH
  - Debugging? Load troubleshooting.md
  - Deploying? Load production.md
  - Parallel features? Load parallel-workflows.md
+ - Architecture planning? Load compound-learning.md (deepen-plan)
+ - Post-verification? Load compound-learning.md (knowledge extraction)
  3. Read the selected module(s)
  4. Execute with that context
  5. When task category changes: Load new modules (old context discarded)
@@ -260,4 +267,4 @@ Auto-detected or force with `LOKI_COMPLEXITY`:

  ---

- **v5.29.0 | Demo, quick mode, init, cost dashboard, 12 templates, GitHub Action | ~270 lines core**
+ **v5.30.0 | Compound learning, specialist reviewers, deepen-plan | ~280 lines core**
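The COMPOUND step above extracts solutions to `~/.loki/solutions/{category}/{slug}.md` with YAML frontmatter. As a hypothetical illustration (the title, tags, and field values below are invented; only the frontmatter keys and the parsing regex mirror what this diff actually writes and reads), such a file can be built and parsed like so:

```python
import re

# Hypothetical example of a solution file as described in the COMPOUND step.
# Field names match the frontmatter keys used elsewhere in this diff.
solution_md = '''---
title: "Flaky e2e tests from shared fixture state"
category: testing
tags: [test, fixture, flaky]
symptoms:
  - "intermittent assertion failures in CI"
root_cause: "Fixtures mutated module-level state"
prevention: "Isolate fixtures per test"
confidence: 0.70
---

## Solution

- Reset fixture state in teardown
'''

# Same frontmatter extraction pattern as the diff's load_solutions_context
fm = re.match(r'^---\n(.*?)\n---', solution_md, re.DOTALL).group(1)
title = re.search(r'title:\s*"([^"]*)"', fm).group(1)
print(title)
```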
package/VERSION CHANGED
@@ -1 +1 @@
- 5.29.0
+ 5.30.0
package/autonomy/loki CHANGED
@@ -318,6 +318,7 @@ show_help() {
  echo "  import            Import GitHub issues as tasks"
  echo "  config [cmd]      Manage configuration (show|init|edit|path)"
  echo "  memory [cmd]      Cross-project learnings (list|show|search|stats)"
+ echo "  compound [cmd]    Knowledge compounding (list|show|search|run|stats)"
  echo "  council [cmd]     Completion council (status|verdicts|convergence|force-review|report)"
  echo "  dogfood           Show self-development statistics"
  echo "  reset [target]    Reset session state (all|retries|failed)"
@@ -3546,6 +3547,9 @@ main() {
  memory)
      cmd_memory "$@"
      ;;
+ compound)
+     cmd_compound "$@"
+     ;;
  council)
      cmd_council "$@"
      ;;
@@ -4541,6 +4545,282 @@ except Exception as e:
  esac
  }

+ # Knowledge Compounding - Structured Solutions (v5.30.0)
+ # Inspired by Compound Engineering Plugin's docs/solutions/ with YAML frontmatter
+ cmd_compound() {
+     local subcommand="${1:-help}"
+     shift 2>/dev/null || true
+
+     local solutions_dir="${HOME}/.loki/solutions"
+
+     case "$subcommand" in
+         list|ls)
+             echo -e "${BOLD}Compound Solutions${NC}"
+             echo ""
+
+             if [ ! -d "$solutions_dir" ]; then
+                 echo "  No solutions yet. Solutions are created automatically during"
+                 echo "  Loki sessions or manually with 'loki compound run'."
+                 echo ""
+                 echo "  Location: $solutions_dir"
+                 return
+             fi
+
+             local total=0
+             for category in security performance architecture testing debugging deployment general; do
+                 local cat_dir="$solutions_dir/$category"
+                 if [ -d "$cat_dir" ]; then
+                     local count=$(find "$cat_dir" -name "*.md" 2>/dev/null | wc -l | tr -d ' ')
+                     if [ "$count" -gt 0 ]; then
+                         printf "  %-16s ${GREEN}%d${NC} solutions\n" "$category" "$count"
+                         total=$((total + count))
+                     fi
+                 fi
+             done
+
+             if [ "$total" -eq 0 ]; then
+                 echo "  No solutions yet."
+             else
+                 echo ""
+                 echo "  Total: ${total} solutions"
+             fi
+             echo ""
+             echo "  Location: $solutions_dir"
+             echo ""
+             echo "  Use 'loki compound show <category>' to view solutions"
+             ;;
+
+         show)
+             local category="${1:-}"
+             if [ -z "$category" ]; then
+                 echo -e "${RED}Error: Specify a category${NC}"
+                 echo "Categories: security, performance, architecture, testing, debugging, deployment, general"
+                 return 1
+             fi
+
+             local cat_dir="$solutions_dir/$category"
+             if [ ! -d "$cat_dir" ]; then
+                 echo "No solutions in category: $category"
+                 return
+             fi
+
+             echo -e "${BOLD}Solutions: $category${NC}"
+             echo ""
+
+             for file in "$cat_dir"/*.md; do
+                 [ -f "$file" ] || continue
+                 # Extract title from YAML frontmatter
+                 local title=$(grep '^title:' "$file" 2>/dev/null | head -1 | sed 's/title: *"//;s/"$//')
+                 local confidence=$(grep '^confidence:' "$file" 2>/dev/null | head -1 | sed 's/confidence: *//')
+                 local project=$(grep '^source_project:' "$file" 2>/dev/null | head -1 | sed 's/source_project: *"//;s/"$//')
+                 echo -e "  ${GREEN}*${NC} ${title:-$(basename "$file" .md)}"
+                 [ -n "$confidence" ] && echo -e "    confidence: ${confidence}  project: ${project:-unknown}"
+             done
+             echo ""
+             ;;
+
+         search)
+             local query="${1:-}"
+             if [ -z "$query" ]; then
+                 echo -e "${RED}Error: Specify a search query${NC}"
+                 echo "Usage: loki compound search <query>"
+                 return 1
+             fi
+
+             echo -e "${BOLD}Searching solutions for: ${query}${NC}"
+             echo ""
+
+             if [ ! -d "$solutions_dir" ]; then
+                 echo "No solutions directory found."
+                 return
+             fi
+
+             local found=0
+             while IFS= read -r file; do
+                 local title=$(grep '^title:' "$file" 2>/dev/null | head -1 | sed 's/title: *"//;s/"$//')
+                 local category=$(grep '^category:' "$file" 2>/dev/null | head -1 | sed 's/category: *//')
+                 echo -e "  [${CYAN}${category}${NC}] ${title:-$(basename "$file" .md)}"
+                 echo -e "    ${DIM}${file}${NC}"
+                 found=$((found + 1))
+             done < <(grep -rl "$query" "$solutions_dir" 2>/dev/null || true)
+
+             if [ "$found" -eq 0 ]; then
+                 echo "  No solutions matching: $query"
+             else
+                 echo ""
+                 echo "  Found: ${found} solutions"
+             fi
+             ;;
+
+         run)
+             echo -e "${BOLD}Compounding learnings into solutions...${NC}"
+             echo ""
+
+             local learnings_dir="${HOME}/.loki/learnings"
+             if [ ! -d "$learnings_dir" ]; then
+                 echo "  No learnings directory found at: $learnings_dir"
+                 echo "  Run a Loki session first to generate learnings."
+                 return
+             fi
+
+             python3 << 'COMPOUND_RUN_SCRIPT'
+ import json, os, re
+ from datetime import datetime, timezone
+ from collections import defaultdict
+ learnings_dir = os.path.expanduser("~/.loki/learnings")
+ solutions_dir = os.path.expanduser("~/.loki/solutions")
+ CATEGORIES = ["security", "performance", "architecture", "testing", "debugging", "deployment", "general"]
+ CATEGORY_KEYWORDS = {
+     "security": ["auth", "login", "password", "token", "injection", "xss", "csrf", "cors", "secret", "encrypt", "permission"],
+     "performance": ["cache", "query", "n+1", "memory", "leak", "slow", "timeout", "pool", "index", "optimize", "bundle"],
+     "architecture": ["pattern", "solid", "coupling", "abstraction", "module", "interface", "design", "refactor", "structure"],
+     "testing": ["test", "mock", "fixture", "coverage", "assert", "spec", "e2e", "playwright", "jest", "flaky"],
+     "debugging": ["debug", "error", "trace", "log", "stack", "crash", "exception", "breakpoint", "inspect"],
+     "deployment": ["deploy", "docker", "ci", "cd", "pipeline", "kubernetes", "nginx", "ssl", "domain", "env", "config"],
+ }
+ def load_jsonl(filepath):
+     entries = []
+     if not os.path.exists(filepath): return entries
+     with open(filepath, 'r') as f:
+         for line in f:
+             try:
+                 entry = json.loads(line)
+                 if 'description' in entry: entries.append(entry)
+             except: continue
+     return entries
+ def classify_category(description):
+     desc_lower = description.lower()
+     scores = {cat: sum(1 for kw in kws if kw in desc_lower) for cat, kws in CATEGORY_KEYWORDS.items()}
+     best = max(scores, key=scores.get)
+     return best if scores[best] > 0 else "general"
+ def slugify(text):
+     return re.sub(r'[^a-z0-9]+', '-', text.lower().strip()).strip('-')[:80]
+ def solution_exists(solutions_dir, title_slug):
+     for cat in CATEGORIES:
+         cat_dir = os.path.join(solutions_dir, cat)
+         if os.path.exists(cat_dir) and os.path.exists(os.path.join(cat_dir, f"{title_slug}.md")):
+             return True
+     return False
+ patterns = load_jsonl(os.path.join(learnings_dir, "patterns.jsonl"))
+ mistakes = load_jsonl(os.path.join(learnings_dir, "mistakes.jsonl"))
+ successes = load_jsonl(os.path.join(learnings_dir, "successes.jsonl"))
+ print(f"  Loaded: {len(patterns)} patterns, {len(mistakes)} mistakes, {len(successes)} successes")
+ grouped = defaultdict(list)
+ for entry in patterns + mistakes + successes:
+     grouped[classify_category(entry.get('description', ''))].append(entry)
+ created = 0
+ now = datetime.now(timezone.utc).isoformat().replace("+00:00", "Z")
+ for category, entries in grouped.items():
+     if len(entries) < 2: continue
+     cat_dir = os.path.join(solutions_dir, category)
+     os.makedirs(cat_dir, exist_ok=True)
+     best_entry = max(entries, key=lambda e: len(e.get('description', '')))
+     title = best_entry['description'][:120]
+     slug = slugify(title)
+     if solution_exists(solutions_dir, slug): continue
+     all_words = ' '.join(e.get('description', '') for e in entries).lower()
+     tags = []
+     for kw_list in CATEGORY_KEYWORDS.values():
+         for kw in kw_list:
+             if kw in all_words and kw not in tags: tags.append(kw)
+     tags = tags[:8]
+     symptoms = [e.get('description', '')[:200] for e in entries
+                 if any(w in e.get('description', '').lower() for w in ['error', 'fail', 'bug', 'crash', 'issue'])][:4]
+     if not symptoms: symptoms = [entries[0].get('description', '')[:200]]
+     solution_lines = [f"- {e.get('description', '')}" for e in entries
+                       if not any(w in e.get('description', '').lower() for w in ['error', 'fail', 'bug', 'crash'])]
+     if not solution_lines: solution_lines = [f"- {entries[0].get('description', '')}"]
+     project = best_entry.get('project', os.path.basename(os.getcwd()))
+     filepath = os.path.join(cat_dir, f"{slug}.md")
+     with open(filepath, 'w') as f:
+         f.write(f"---\ntitle: \"{title}\"\ncategory: {category}\ntags: [{', '.join(tags)}]\nsymptoms:\n")
+         for s in symptoms: f.write(f'  - "{s}"\n')
+         f.write(f'root_cause: "Identified from {len(entries)} related learnings"\n')
+         f.write(f'prevention: "See solution details below"\nconfidence: {min(0.5 + 0.1 * len(entries), 0.95):.2f}\n')
+         f.write(f'source_project: "{project}"\ncreated: "{now}"\napplied_count: 0\n---\n\n')
+         f.write("## Solution\n\n" + '\n'.join(solution_lines) + '\n\n')
+         f.write(f"## Context\n\nCompounded from {len(entries)} learnings from project: {project}\n")
+     created += 1
+     print(f"  Created: {category}/{slug}.md")
+ if created > 0: print(f"\n  Compounded {created} new solution files to {solutions_dir}")
+ else: print("  No new solutions to compound (need 2+ related learnings per category)")
+ COMPOUND_RUN_SCRIPT
+             ;;
+
+         stats)
+             echo -e "${BOLD}Compound Solution Statistics${NC}"
+             echo ""
+
+             if [ ! -d "$solutions_dir" ]; then
+                 echo "  No solutions directory found."
+                 return
+             fi
+
+             local total=0
+             local newest=""
+             local oldest=""
+
+             for category in security performance architecture testing debugging deployment general; do
+                 local cat_dir="$solutions_dir/$category"
+                 if [ -d "$cat_dir" ]; then
+                     local count=$(find "$cat_dir" -name "*.md" 2>/dev/null | wc -l | tr -d ' ')
+                     total=$((total + count))
+                 fi
+             done
+
+             echo "  Total solutions: ${total}"
+
+             if [ "$total" -gt 0 ]; then
+                 # Find newest and oldest
+                 newest=$(find "$solutions_dir" -name "*.md" -exec stat -f '%m %N' {} \; 2>/dev/null | sort -rn | head -1 | cut -d' ' -f2-)
+                 oldest=$(find "$solutions_dir" -name "*.md" -exec stat -f '%m %N' {} \; 2>/dev/null | sort -n | head -1 | cut -d' ' -f2-)
+
+                 if [ -n "$newest" ]; then
+                     local newest_title=$(grep '^title:' "$newest" 2>/dev/null | head -1 | sed 's/title: *"//;s/"$//')
+                     echo "  Newest: ${newest_title:-$(basename "$newest" .md)}"
+                 fi
+                 if [ -n "$oldest" ]; then
+                     local oldest_title=$(grep '^title:' "$oldest" 2>/dev/null | head -1 | sed 's/title: *"//;s/"$//')
+                     echo "  Oldest: ${oldest_title:-$(basename "$oldest" .md)}"
+                 fi
+             fi
+
+             echo ""
+             echo "  Location: $solutions_dir"
+             ;;
+
+         help|--help|-h)
+             echo -e "${BOLD}loki compound${NC} - Knowledge compounding system"
+             echo ""
+             echo "Extracts structured solutions from cross-project learnings."
+             echo "Solutions are stored as markdown files with YAML frontmatter"
+             echo "at ~/.loki/solutions/{category}/ and fed back into future planning."
+             echo ""
+             echo "Usage: loki compound <command>"
+             echo ""
+             echo "Commands:"
+             echo "  list             List all solutions by category"
+             echo "  show <category>  Show solutions in a category"
+             echo "  search <query>   Search across all solutions"
+             echo "  run              Manually compound current learnings into solutions"
+             echo "  stats            Solution statistics"
+             echo "  help             Show this help"
+             echo ""
+             echo "Categories: security, performance, architecture, testing,"
+             echo "            debugging, deployment, general"
+             echo ""
+             echo "Solutions are created automatically at session end and"
+             echo "loaded during the REASON phase for relevant tasks."
+             ;;
+
+         *)
+             echo -e "${RED}Unknown compound command: $subcommand${NC}"
+             echo "Run 'loki compound help' for usage."
+             return 1
+             ;;
+     esac
+ }
+
  # Completion Council management
  cmd_council() {
      local subcommand="${1:-status}"
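The `cmd_compound run` subcommand above classifies each learning by keyword hits per category and derives a confidence score from the number of supporting learnings. A minimal, standalone sketch of that logic (keyword lists abridged from the diff; the real script covers seven categories):

```python
# Sketch of the classification and confidence logic in `loki compound run`.
# Keyword lists here are a subset of those in the diff.
CATEGORY_KEYWORDS = {
    "security": ["auth", "token", "injection", "xss"],
    "performance": ["cache", "n+1", "memory", "slow"],
    "testing": ["test", "mock", "coverage", "flaky"],
}

def classify_category(description):
    # Score each category by keyword occurrences in the description;
    # fall back to "general" when nothing matches.
    desc_lower = description.lower()
    scores = {cat: sum(1 for kw in kws if kw in desc_lower)
              for cat, kws in CATEGORY_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "general"

def confidence(n_entries):
    # 0.5 base plus 0.1 per supporting learning, capped at 0.95
    return min(0.5 + 0.1 * n_entries, 0.95)

print(classify_category("flaky test caused by shared mock state"))  # -> testing
print(f"{confidence(3):.2f}")  # -> 0.80
```

Because entries need at least two related learnings before a solution file is written, low-signal one-off observations never reach the solutions directory.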
package/autonomy/run.sh CHANGED
@@ -3599,6 +3599,264 @@ print("Learning extraction complete")
  EXTRACT_SCRIPT
  }

+ # ============================================================================
+ # Knowledge Compounding - Structured Solutions (v5.30.0)
+ # Inspired by Compound Engineering Plugin's docs/solutions/ with YAML frontmatter
+ # ============================================================================
+
+ compound_session_to_solutions() {
+     # Compound JSONL learnings into structured solution markdown files
+     local learnings_dir="${HOME}/.loki/learnings"
+     local solutions_dir="${HOME}/.loki/solutions"
+
+     if [ ! -d "$learnings_dir" ]; then
+         return
+     fi
+
+     log_info "Compounding learnings into structured solutions..."
+
+     python3 << 'COMPOUND_SCRIPT'
+ import json
+ import os
+ import re
+ import hashlib
+ from datetime import datetime, timezone
+ from collections import defaultdict
+
+ learnings_dir = os.path.expanduser("~/.loki/learnings")
+ solutions_dir = os.path.expanduser("~/.loki/solutions")
+
+ # Fixed categories
+ CATEGORIES = ["security", "performance", "architecture", "testing", "debugging", "deployment", "general"]
+
+ # Category keyword mapping
+ CATEGORY_KEYWORDS = {
+     "security": ["auth", "login", "password", "token", "injection", "xss", "csrf", "cors", "secret", "encrypt", "permission", "role", "session", "cookie", "oauth", "jwt"],
+     "performance": ["cache", "query", "n+1", "memory", "leak", "slow", "timeout", "pool", "index", "optimize", "bundle", "lazy", "render", "batch"],
+     "architecture": ["pattern", "solid", "coupling", "abstraction", "module", "interface", "design", "refactor", "structure", "layer", "separation", "dependency"],
+     "testing": ["test", "mock", "fixture", "coverage", "assert", "spec", "e2e", "playwright", "jest", "flaky", "snapshot"],
+     "debugging": ["debug", "error", "trace", "log", "stack", "crash", "exception", "breakpoint", "inspect", "diagnose"],
+     "deployment": ["deploy", "docker", "ci", "cd", "pipeline", "kubernetes", "k8s", "nginx", "ssl", "domain", "env", "config", "build"],
+ }
+
+ def load_jsonl(filepath):
+     entries = []
+     if not os.path.exists(filepath):
+         return entries
+     with open(filepath, 'r') as f:
+         for line in f:
+             try:
+                 entry = json.loads(line)
+                 if 'description' in entry:
+                     entries.append(entry)
+             except:
+                 continue
+     return entries
+
+ def classify_category(description):
+     desc_lower = description.lower()
+     scores = {}
+     for cat, keywords in CATEGORY_KEYWORDS.items():
+         scores[cat] = sum(1 for kw in keywords if kw in desc_lower)
+     best = max(scores, key=scores.get)
+     return best if scores[best] > 0 else "general"
+
+ def slugify(text):
+     slug = re.sub(r'[^a-z0-9]+', '-', text.lower().strip())
+     return slug.strip('-')[:80]
+
+ def solution_exists(solutions_dir, title_slug):
+     for cat in CATEGORIES:
+         cat_dir = os.path.join(solutions_dir, cat)
+         if os.path.exists(cat_dir):
+             if os.path.exists(os.path.join(cat_dir, f"{title_slug}.md")):
+                 return True
+     return False
+
+ # Load all learnings
+ patterns = load_jsonl(os.path.join(learnings_dir, "patterns.jsonl"))
+ mistakes = load_jsonl(os.path.join(learnings_dir, "mistakes.jsonl"))
+ successes = load_jsonl(os.path.join(learnings_dir, "successes.jsonl"))
+
+ # Group by category
+ grouped = defaultdict(list)
+ for entry in patterns + mistakes + successes:
+     cat = classify_category(entry.get('description', ''))
+     grouped[cat].append(entry)
+
+ created = 0
+ now = datetime.now(timezone.utc).isoformat().replace("+00:00", "Z")
+
+ for category, entries in grouped.items():
+     if len(entries) < 2:
+         continue  # Need at least 2 related entries to compound
+
+     # Create category directory
+     cat_dir = os.path.join(solutions_dir, category)
+     os.makedirs(cat_dir, exist_ok=True)
+
+     # Group similar entries (simple: by shared keywords)
+     # Take the most descriptive entry as the title
+     best_entry = max(entries, key=lambda e: len(e.get('description', '')))
+     title = best_entry['description'][:120]
+     slug = slugify(title)
+
+     if solution_exists(solutions_dir, slug):
+         continue  # Already compounded
+
+     # Extract tags from all entries
+     all_words = ' '.join(e.get('description', '') for e in entries).lower()
+     tags = []
+     for kw_list in CATEGORY_KEYWORDS.values():
+         for kw in kw_list:
+             if kw in all_words and kw not in tags:
+                 tags.append(kw)
+     tags = tags[:8]  # Limit to 8 tags
+
+     # Build symptoms from mistake entries
+     symptoms = []
+     for e in entries:
+         desc = e.get('description', '')
+         if any(w in desc.lower() for w in ['error', 'fail', 'bug', 'crash', 'issue', 'problem']):
+             symptoms.append(desc[:200])
+     symptoms = symptoms[:4]
+     if not symptoms:
+         symptoms = [entries[0].get('description', '')[:200]]
+
+     # Build solution content from pattern/success entries
+     solution_lines = []
+     for e in entries:
+         desc = e.get('description', '')
+         if not any(w in desc.lower() for w in ['error', 'fail', 'bug', 'crash']):
+             solution_lines.append(f"- {desc}")
+     if not solution_lines:
+         solution_lines = [f"- {entries[0].get('description', '')}"]
+
+     project = best_entry.get('project', os.path.basename(os.getcwd()))
+
+     # Write solution file
+     filepath = os.path.join(cat_dir, f"{slug}.md")
+     with open(filepath, 'w') as f:
+         f.write(f"---\n")
+         f.write(f'title: "{title}"\n')
+         f.write(f"category: {category}\n")
+         f.write(f"tags: [{', '.join(tags)}]\n")
+         f.write(f"symptoms:\n")
+         for s in symptoms:
+             f.write(f'  - "{s}"\n')
+         f.write(f'root_cause: "Identified from {len(entries)} related learnings across sessions"\n')
+         f.write(f'prevention: "See solution details below"\n')
+         f.write(f"confidence: {min(0.5 + 0.1 * len(entries), 0.95):.2f}\n")
+         f.write(f'source_project: "{project}"\n')
+         f.write(f'created: "{now}"\n')
+         f.write(f"applied_count: 0\n")
+         f.write(f"---\n\n")
+         f.write(f"## Solution\n\n")
+         f.write('\n'.join(solution_lines) + '\n\n')
+         f.write(f"## Context\n\n")
+         f.write(f"Compounded from {len(entries)} learnings ")
+         f.write(f"({len([e for e in entries if e in patterns])} patterns, ")
+         f.write(f"{len([e for e in entries if e in mistakes])} mistakes, ")
+         f.write(f"{len([e for e in entries if e in successes])} successes) ")
+         f.write(f"from project: {project}\n")
+
+     created += 1
+
+ if created > 0:
+     print(f"Compounded {created} new solution files to {solutions_dir}")
+ else:
+     print("No new solutions to compound (need 2+ related learnings per category)")
+ COMPOUND_SCRIPT
+ }
+
+ load_solutions_context() {
+     # Load relevant structured solutions for the current task context
+     local context="$1"
+     local solutions_dir="${HOME}/.loki/solutions"
+     local output_file=".loki/state/relevant-solutions.json"
+
+     if [ ! -d "$solutions_dir" ]; then
+         echo '{"solutions":[]}' > "$output_file" 2>/dev/null || true
+         return
+     fi
+
+     export LOKI_SOL_CONTEXT="$context"
+     python3 << 'SOLUTIONS_SCRIPT'
+ import json
+ import os
+ import re
+
+ solutions_dir = os.path.expanduser("~/.loki/solutions")
+ context = os.environ.get("LOKI_SOL_CONTEXT", "").lower()
+ context_words = set(context.split())
+
+ results = []
+
+ for category in os.listdir(solutions_dir):
+     cat_dir = os.path.join(solutions_dir, category)
+     if not os.path.isdir(cat_dir):
+         continue
+     for filename in os.listdir(cat_dir):
+         if not filename.endswith('.md'):
+             continue
+         filepath = os.path.join(cat_dir, filename)
+         try:
+             with open(filepath, 'r') as f:
+                 content = f.read()
+         except:
+             continue
+
+         # Parse YAML frontmatter
+         fm_match = re.match(r'^---\n(.*?)\n---', content, re.DOTALL)
+         if not fm_match:
+             continue
+
+         fm = fm_match.group(1)
+         title = re.search(r'title:\s*"([^"]*)"', fm)
+         tags_match = re.search(r'tags:\s*\[([^\]]*)\]', fm)
+         root_cause = re.search(r'root_cause:\s*"([^"]*)"', fm)
+         prevention = re.search(r'prevention:\s*"([^"]*)"', fm)
+         symptoms = re.findall(r'^\s*-\s*"([^"]*)"', fm, re.MULTILINE)
+
+         title_str = title.group(1) if title else filename.replace('.md', '')
+         tags = [t.strip() for t in tags_match.group(1).split(',')] if tags_match else []
+
+         # Score by matching
+         score = 0
+         for tag in tags:
+             if tag.lower() in context:
+                 score += 2
+         for symptom in symptoms:
+             for word in symptom.lower().split():
+                 if word in context_words and len(word) > 3:
+                     score += 3
+         if category in context:
+             score += 1
+
+         if score > 0:
+             results.append({
+                 "score": score,
+                 "category": category,
+                 "title": title_str,
+                 "root_cause": root_cause.group(1) if root_cause else "",
+                 "prevention": prevention.group(1) if prevention else "",
+                 "file": filepath
+             })
+
+ # Sort by score, take top 3
+ results.sort(key=lambda x: x["score"], reverse=True)
+ top = results[:3]
+
+ output = {"solutions": top}
+ os.makedirs(".loki/state", exist_ok=True)
+ with open(".loki/state/relevant-solutions.json", 'w') as f:
+     json.dump(output, f, indent=2)
+
+ if top:
+     print(f"Loaded {len(top)} relevant solutions from cross-project knowledge base")
+ SOLUTIONS_SCRIPT
+ }
+
  start_dashboard() {
      log_header "Starting Loki Dashboard"

@@ -5429,8 +5687,10 @@ main() {
  # Load relevant learnings for this project context
  if [ -n "$PRD_PATH" ] && [ -f "$PRD_PATH" ]; then
      get_relevant_learnings "$(cat "$PRD_PATH" | head -100)"
+     load_solutions_context "$(cat "$PRD_PATH" | head -100)"
  else
      get_relevant_learnings "general development"
+     load_solutions_context "general development"
  fi

  # Log session start for audit
@@ -5488,6 +5748,9 @@ main() {
  # Extract and save learnings from this session
  extract_learnings_from_session

+ # Compound learnings into structured solution files (v5.30.0)
+ compound_session_to_solutions
+
  # Log session end for audit
  audit_log "SESSION_END" "result=$result,prd=$PRD_PATH"

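`load_solutions_context` ranks solutions with a simple additive score: +2 per tag found in the context, +3 per symptom word longer than three characters present in the context, and +1 if the category name itself appears. A standalone sketch of that scoring, extracted from the heredoc logic above (the example metadata is hypothetical, not from the diff):

```python
def score_solution(category, tags, symptoms, context):
    # Relevance scoring mirroring load_solutions_context:
    # +2 per tag substring found in the context, +3 per symptom word
    # (longer than 3 chars) present in the context, +1 if the category
    # name appears in the context.
    context = context.lower()
    context_words = set(context.split())
    score = 0
    for tag in tags:
        if tag.lower() in context:
            score += 2
    for symptom in symptoms:
        for word in symptom.lower().split():
            if word in context_words and len(word) > 3:
                score += 3
    if category in context:
        score += 1
    return score

# Hypothetical solution metadata: two tag hits (+4) and one symptom-word hit (+3)
print(score_solution("security",
                     ["auth", "token"],
                     ["login error on expired token"],
                     "fix auth token refresh"))  # -> 7
```

Only the top three scoring solutions are written to `.loki/state/relevant-solutions.json`, which keeps the injected planning context small.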
@@ -7,7 +7,7 @@ Modules:
  control: Session control API (start/stop/pause/resume)
  """

- __version__ = "5.29.0"
+ __version__ = "5.30.0"

  # Expose the control app for easy import
  try:
@@ -2,7 +2,7 @@

  Complete installation instructions for all platforms and use cases.

- **Version:** v5.29.0
+ **Version:** v5.30.0

  ---

@@ -36,7 +36,7 @@ npm install -g loki-mode
  brew tap asklokesh/tap && brew install loki-mode

  # Option C: Docker
- docker pull asklokesh/loki-mode:5.26.0
+ docker pull asklokesh/loki-mode:5.30.0

  # Option D: Git clone
  git clone https://github.com/asklokesh/loki-mode.git ~/.claude/skills/loki-mode
@@ -227,7 +227,7 @@ Run Loki Mode in a container for isolated execution.

  ```bash
  # Pull the image
- docker pull asklokesh/loki-mode:5.26.0
+ docker pull asklokesh/loki-mode:5.30.0

  # Or use docker-compose
  curl -o docker-compose.yml https://raw.githubusercontent.com/asklokesh/loki-mode/main/docker-compose.yml
@@ -237,10 +237,10 @@ curl -o docker-compose.yml https://raw.githubusercontent.com/asklokesh/loki-mode

  ```bash
  # Run with a PRD file
- docker run -v $(pwd):/workspace -w /workspace asklokesh/loki-mode:5.26.0 start ./my-prd.md
+ docker run -v $(pwd):/workspace -w /workspace asklokesh/loki-mode:5.30.0 start ./my-prd.md

  # Interactive mode
- docker run -it -v $(pwd):/workspace -w /workspace asklokesh/loki-mode:5.26.0
+ docker run -it -v $(pwd):/workspace -w /workspace asklokesh/loki-mode:5.30.0

  # Using docker-compose
  docker-compose run loki start ./my-prd.md
@@ -253,7 +253,7 @@ Pass your configuration via environment variables:
  ```bash
  docker run -e LOKI_MAX_RETRIES=100 -e LOKI_BASE_WAIT=120 \
    -v $(pwd):/workspace -w /workspace \
-   asklokesh/loki-mode:5.26.0 start ./my-prd.md
+   asklokesh/loki-mode:5.30.0 start ./my-prd.md
  ```

  ### Updating
@@ -369,12 +369,12 @@ Pass the provider as an environment variable:
  # Use Codex with Docker
  docker run -e LOKI_PROVIDER=codex \
    -v $(pwd):/workspace -w /workspace \
-   asklokesh/loki-mode:5.26.0 start ./my-prd.md
+   asklokesh/loki-mode:5.30.0 start ./my-prd.md

  # Use Gemini with Docker
  docker run -e LOKI_PROVIDER=gemini \
    -v $(pwd):/workspace -w /workspace \
-   asklokesh/loki-mode:5.26.0 start ./my-prd.md
+   asklokesh/loki-mode:5.30.0 start ./my-prd.md
  ```

  ### Degraded Mode
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "loki-mode",
-   "version": "5.29.0",
+   "version": "5.30.0",
    "description": "Multi-agent autonomous startup system for Claude Code, Codex CLI, and Gemini CLI",
    "keywords": [
      "claude",
@@ -27,6 +27,7 @@
  | Scale patterns (50+ agents) | `parallel-workflows.md` + `references/cursor-learnings.md` |
  | GitHub issues, PRs, syncing | `github-integration.md` |
  | Multi-provider (Codex, Gemini) | `providers.md` |
+ | Plan deepening, knowledge extraction | `compound-learning.md` |

  ## Module Descriptions

@@ -108,6 +109,13 @@
  - Filter by labels, milestone, assignee
  - Requires `gh` CLI authenticated

+ ### compound-learning.md (v5.30.0)
+ **When:** After architecture phase (deepen plan), after verification (extract learnings)
+ - Deepen-plan: 4 parallel research agents enhance plans before implementation
+ - Knowledge compounding: Extract structured solutions from task outcomes
+ - Solution retrieval: Load relevant cross-project solutions during REASON phase
+ - Composable phases: plan, deepen, work, review, compound
+
  ### providers.md (v5.0.0)
  **When:** Using non-Claude providers (Codex, Gemini), understanding degraded mode
  - Provider comparison matrix
package/skills/agents.md CHANGED
@@ -91,37 +91,50 @@ Success: Endpoint works, tests pass, matches OpenAPI spec.

  ---

- ## Parallel Review Pattern
+ ## Specialist Review Pattern (v5.30.0)

- **Code review uses 3 parallel reviewers with different focus areas:**
+ **Code review uses 3 specialist reviewers selected from a pool of 5 named experts.**
+
+ See `quality-gates.md` for full specialist definitions, selection rules, and prompt templates.
+
+ **Pool:** security-sentinel, performance-oracle, architecture-strategist, test-coverage-auditor, dependency-analyst
+
+ **Selection:** architecture-strategist always included + top 2 by trigger keyword match against diff.

  ```python
- # Launch all 3 in ONE message (parallel execution)
+ # Launch all 3 in ONE message (parallel, blind)
  Task(
      subagent_type="general-purpose",
-     model="opus",
-     description="Review: correctness",
-     prompt="Review for bugs, logic errors, edge cases. Files: {files}"
+     model="sonnet",
+     description="Review: Architecture Strategist",
+     prompt="You are Architecture Strategist. Review ONLY for: SOLID violations, "
+            "coupling, wrong patterns, missing abstractions. Files: {files}. Diff: {diff}. "
+            "Output: VERDICT (PASS/FAIL) + FINDINGS with severity."
  )
  Task(
      subagent_type="general-purpose",
-     model="opus",
-     description="Review: security",
-     prompt="Review for vulnerabilities, injection, auth issues. Files: {files}"
+     model="sonnet",
+     description="Review: Security Sentinel",
+     prompt="You are Security Sentinel. Review ONLY for: injection, XSS, auth bypass, "
+            "secrets, input validation, OWASP Top 10. Files: {files}. Diff: {diff}. "
+            "Output: VERDICT (PASS/FAIL) + FINDINGS with severity."
  )
  Task(
      subagent_type="general-purpose",
-     model="opus",
-     description="Review: performance",
-     prompt="Review for N+1 queries, memory leaks, slow operations. Files: {files}"
+     model="sonnet",
+     description="Review: {selected_specialist}",
+     prompt="You are {name}. Review ONLY for: {focus}. "
+            "Files: {files}. Diff: {diff}. "
+            "Output: VERDICT (PASS/FAIL) + FINDINGS with severity."
  )
  ```

  **Rules:**
- - ALWAYS use opus for reviews
- - ALWAYS launch all 3 in single message
+ - ALWAYS use sonnet for reviews (balanced quality/cost)
+ - ALWAYS launch all 3 in single message (parallel, blind)
  - WAIT for all 3 before aggregating
- - IF unanimous approval: run Devil's Advocate reviewer
+ - IF unanimous PASS: run Devil's Advocate reviewer (anti-sycophancy)
+ - Critical/High = BLOCK, Medium = TODO, Low = informational

  ---

package/skills/compound-learning.md ADDED
@@ -0,0 +1,274 @@
+ # Compound Learning & Deep Planning
+
+ **Inspired by:** Compound Engineering Plugin (Every/Kieran Klaassen) -- knowledge compounding philosophy where each unit of work makes subsequent work easier.
+
+ ---
+
+ ## Knowledge Compounding (Post-VERIFY)
+
+ After VERIFY passes, evaluate whether the task produced a **novel insight** worth preserving. Extract structured solutions that feed back into future planning.
+
+ ### When to Compound
+
+ Extract a solution when the task involved:
+ - Fixing a bug with a non-obvious root cause
+ - Solving a problem that required research or multiple attempts
+ - Discovering a reusable pattern or anti-pattern
+ - Hitting a pitfall that future projects could avoid
+ - Finding a performance optimization worth documenting
+
+ ### When NOT to Compound
+
+ Skip compounding for:
+ - Trivial changes (typos, formatting, renaming)
+ - Standard CRUD operations
+ - Changes with no novel insight
+ - Tasks that completed on first attempt with obvious approach
+
+ ### Solution File Format
+
+ Write to `~/.loki/solutions/{category}/{slug}.md` where slug is the title in kebab-case.
+
+ ```yaml
+ ---
+ title: "Connection pool exhaustion under load"
+ category: performance
+ tags: [database, pool, timeout, postgres]
+ symptoms:
+   - "ECONNREFUSED on database queries under load"
+   - "Pool timeout exceeded errors in production"
+ root_cause: "Default pool size of 10 insufficient for concurrent request volume"
+ prevention: "Set pool size to 2x expected concurrent connections, add health checks"
+ confidence: 0.85
+ source_project: "auth-service"
+ created: "2026-02-09T12:00:00Z"
+ applied_count: 0
+ ---
+
+ ## Solution
+
+ Increase the connection pool size in the database configuration. Add a connection
+ health check that validates connections before returning them to the pool.
+
+ ## Context
+
+ Discovered when the auth-service started timing out under moderate load (50 rps).
+ Default pg pool size of 10 caused request queuing when each request held a connection
+ for ~200ms. Fix: pool_size = 2 * max_concurrent_requests.
+
+ ## Related
+
+ - See also: `performance/database-query-optimization.md`
+ - See also: `deployment/connection-string-configuration.md`
+ ```
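The path convention above (kebab-case slug under a category directory) can be sketched as follows. This is an editor's illustration, not code shipped in the package; the function name `solution_path` and the regex-based slug rule are assumptions:

```python
import re
from pathlib import Path

def solution_path(title: str, category: str, root: str = "~/.loki/solutions") -> Path:
    """Kebab-case the title and build the solution file path (illustrative sketch)."""
    # Lowercase, collapse every non-alphanumeric run into a single hyphen.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return Path(root).expanduser() / category / f"{slug}.md"
```

For example, `solution_path("Connection pool exhaustion under load", "performance")` yields `~/.loki/solutions/performance/connection-pool-exhaustion-under-load.md`.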
+
+ ### Categories
+
+ Solutions are organized into 7 fixed categories:
+
+ | Category | What Goes Here |
+ |----------|---------------|
+ | `security` | Auth bugs, injection fixes, secret handling, OWASP findings |
+ | `performance` | N+1 queries, memory leaks, caching strategies, bundle optimization |
+ | `architecture` | Design patterns, coupling fixes, abstraction improvements, SOLID violations |
+ | `testing` | Test strategies, flaky test fixes, coverage improvements, mocking patterns |
+ | `debugging` | Root cause analysis techniques, diagnostic approaches, logging patterns |
+ | `deployment` | CI/CD fixes, Docker issues, environment config, infrastructure patterns |
+ | `general` | Anything that doesn't fit above categories |
+
+ ### YAML Frontmatter Fields
+
+ | Field | Required | Description |
+ |-------|----------|-------------|
+ | `title` | Yes | Concise description of the problem/solution |
+ | `category` | Yes | One of the 7 categories above |
+ | `tags` | Yes | Array of keywords for search/matching |
+ | `symptoms` | Yes | Observable indicators of the problem |
+ | `root_cause` | Yes | Underlying cause (not just the symptom) |
+ | `prevention` | Yes | How to avoid this in future projects |
+ | `confidence` | No | 0.0-1.0, how broadly applicable (default 0.7) |
+ | `source_project` | No | Project where this was discovered |
+ | `created` | Yes | ISO 8601 timestamp |
+ | `applied_count` | No | Times this solution was loaded for a task (default 0) |
+
+ ---
+
+ ## Loading Solutions (REASON Phase)
+
+ Before starting a task, check `~/.loki/solutions/` for relevant entries.
+
+ ### Matching Logic
+
+ 1. Extract keywords from the current task description/goal
+ 2. Scan solution files in `~/.loki/solutions/*/`
+ 3. Score each solution by:
+    - Tag matches against task keywords (2 points each)
+    - Symptom matches against error messages (3 points each)
+    - Category match against current phase (1 point)
+ 4. Return top 3 solutions sorted by score
+ 5. Inject into context as: `title | root_cause | prevention`
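The scoring rules above can be sketched in a few lines. This is an editor's illustration (not package code); the plain dicts stand in for parsed frontmatter, and the function names are placeholders:

```python
def score_solution(sol: dict, task_keywords: set, error_text: str, phase_category: str) -> int:
    """Score one solution: tags 2 pts each, symptoms 3 pts each, category match 1 pt."""
    score = 2 * sum(1 for tag in sol["tags"] if tag in task_keywords)
    score += 3 * sum(1 for s in sol["symptoms"] if s.lower() in error_text.lower())
    if sol["category"] == phase_category:
        score += 1
    return score

def top_solutions(solutions: list, task_keywords: set,
                  error_text: str = "", phase_category: str = "general") -> list:
    """Return the top 3 solutions by score, best first (step 4)."""
    ranked = sorted(solutions,
                    key=lambda s: score_solution(s, task_keywords, error_text, phase_category),
                    reverse=True)
    return ranked[:3]
```

A solution tagged `[database, pool]` whose symptom matches the current error and whose category matches the phase scores 2+2+3+1 = 8 and ranks ahead of unrelated entries.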
+
+ ### Context Injection Format
+
+ ```
+ RELEVANT SOLUTIONS FROM PAST PROJECTS:
+ 1. [performance] Connection pool exhaustion under load
+    Root cause: Default pool size insufficient for concurrent requests
+    Prevention: Set pool size to 2x concurrent connections
+ 2. [security] JWT token not validated on WebSocket upgrade
+    Root cause: WebSocket middleware bypassed auth middleware
+    Prevention: Apply auth middleware to ALL transport layers
+ ```
+
+ Keep injection under 500 tokens. Title + root_cause + prevention only.
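A sketch of a formatter for the injection block above, trimming to the token budget. Editor's illustration only; the ~4-characters-per-token estimate is a crude assumption, not a real tokenizer:

```python
def format_injection(solutions: list, max_tokens: int = 500) -> str:
    """Render top solutions in the injection format, trimmed to a rough token budget."""
    lines = ["RELEVANT SOLUTIONS FROM PAST PROJECTS:"]
    for i, sol in enumerate(solutions, 1):
        lines += [
            f"{i}. [{sol['category']}] {sol['title']}",
            f"   Root cause: {sol['root_cause']}",
            f"   Prevention: {sol['prevention']}",
        ]
        # ~4 characters per token is a rough heuristic, not a real tokenizer.
        if len("\n".join(lines)) > max_tokens * 4:
            lines = lines[:-3]  # drop the entry that blew the budget
            break
    return "\n".join(lines)
```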
+
+ ---
+
+ ## Deepen-Plan Phase
+
+ After the ARCHITECTURE phase produces an initial plan, BEFORE starting DEVELOPMENT, spawn 4 parallel research agents to enhance the plan with concrete findings.
+
+ ### When to Run
+
+ - After ARCHITECTURE phase completes and design is approved
+ - Before INFRASTRUCTURE or DEVELOPMENT phase begins
+ - **Only for standard/complex complexity tiers** (skip for simple -- overkill)
+ - **Only when using Claude provider** (requires Task tool for parallel agents)
+ - Skip if the project is a known template with well-understood patterns
+
+ ### Research Agents (4 parallel -- launch in ONE message)
+
+ #### 1. Repo Analyzer
+
+ ```python
+ Task(
+     subagent_type="general-purpose",
+     model="sonnet",
+     description="Research: Repo Analysis",
+     prompt="""Analyze this codebase for patterns relevant to the planned feature.
+
+ Plan summary: {plan_summary}
+
+ Find:
+ - Reusable components and utilities
+ - Established naming conventions and code patterns
+ - Similar past implementations to reference
+ - Shared infrastructure (auth, logging, error handling)
+
+ Output: Bullet list of patterns to follow and components to reuse.
+ Be specific -- include file paths and function names."""
+ )
+ ```
+
+ #### 2. Dependency Researcher
+
+ ```python
+ Task(
+     subagent_type="general-purpose",
+     model="sonnet",
+     description="Research: Dependencies",
+     prompt="""Research best practices for these technologies: {tech_stack}
+
+ Check:
+ - Official documentation for recommended patterns
+ - Known pitfalls and common mistakes
+ - Version compatibility between dependencies
+ - Recommended configuration defaults
+
+ Output: Best practices list and pitfalls to avoid.
+ Cite sources where possible."""
+ )
+ ```
+
+ #### 3. Edge Case Finder
+
+ ```python
+ Task(
+     subagent_type="general-purpose",
+     model="sonnet",
+     description="Research: Edge Cases",
+     prompt="""Identify edge cases and failure modes for this plan: {plan_summary}
+
+ Check:
+ - Concurrency and race conditions
+ - Network failures and timeouts
+ - Data validation (null, empty, oversized, malformed)
+ - Partial failures and error cascading
+ - Resource exhaustion (memory, connections, disk)
+ - Boundary conditions (first, last, zero, max)
+
+ Output: Prioritized list with suggested handling for each.
+ Mark each as Critical/High/Medium severity."""
+ )
+ ```
+
+ #### 4. Security Threat Modeler
+
+ ```python
+ Task(
+     subagent_type="general-purpose",
+     model="sonnet",
+     description="Research: Threat Model",
+     prompt="""Perform threat modeling for this architecture: {architecture_summary}
+
+ Check:
+ - Authentication and authorization flows
+ - Data exposure and privacy concerns
+ - Injection surfaces (SQL, XSS, command, template)
+ - Authorization bypass scenarios
+ - Third-party dependency risks
+ - Supply chain attack surface
+
+ Output: STRIDE-based threat model with specific mitigations.
+ Mark each threat as Critical/High/Medium severity."""
+ )
+ ```
+
+ ### After All 4 Complete
+
+ 1. **Update the architecture plan** with findings from all agents
+ 2. **Add edge cases** as explicit tasks in `.loki/queue/pending.json`
+ 3. **Save threat model** to `.loki/specs/threat-model.md`
+ 4. **Log enhancement summary** in CONTINUITY.md under "Plan Deepening Results"
+ 5. **Proceed** to INFRASTRUCTURE or DEVELOPMENT phase
+
+ ### Example Output Integration
+
+ ```markdown
+ ## Plan Deepening Results (4 agents, 2m 14s)
+
+ ### Repo Analysis
+ - Found existing auth middleware at src/middleware/auth.ts -- reuse for new endpoints
+ - Logging pattern uses winston with structured JSON -- follow same pattern
+ - Database queries use repository pattern -- add new repository for feature
+
+ ### Dependencies
+ - React Query v5 requires explicit cache invalidation -- don't rely on auto-refetch
+ - Prisma 6.x has breaking change in nested writes -- use transactions instead
+
+ ### Edge Cases (3 Critical, 5 High)
+ - [Critical] Concurrent user edits can cause data loss -- add optimistic locking
+ - [Critical] File upload >100MB causes OOM -- add streaming upload
+ - [High] Network timeout during payment creates orphaned transaction
+
+ ### Threat Model
+ - [Critical] API endpoint /admin/* missing rate limiting
+ - [High] User-uploaded filenames not sanitized -- path traversal risk
+ ```
+
+ ---
+
+ ## Composable Phases
+
+ These phases can be invoked individually or as part of the full RARV+C cycle:
+
+ | Phase | Maps To | What It Does |
+ |-------|---------|-------------|
+ | `plan` | REASON (first pass) | Analyze PRD, generate architecture, create task queue |
+ | `deepen` | REASON (enhanced) | 4 research agents enhance the plan |
+ | `work` | ACT | Execute highest-priority task |
+ | `review` | REFLECT + VERIFY | 3 specialist reviewers on recent changes |
+ | `compound` | COMPOUND | Extract structured solutions from learnings |
+
+ When running via `autonomy/run.sh`, the full cycle executes automatically.
+ When running via Claude Code skill directly, invoke phases as needed.
package/skills/quality-gates.md CHANGED
@@ -263,22 +263,97 @@ velocity_quality_balance:
 
  ---
 
- ## Blind Review System
+ ## Specialist Review Pool (v5.30.0)
 
- **Launch 3 reviewers in parallel in a single message:**
+ 5 named expert reviewers. Select 3 per review based on change type.
+
+ **Inspired by:** Compound Engineering Plugin's 14 named review agents -- specialized expertise catches more issues than generic reviewers.
+
+ | Specialist | Focus Area | Trigger Keywords |
+ |-----------|-----------|-----------------|
+ | **security-sentinel** | OWASP Top 10, injection, auth, secrets, input validation | auth, login, password, token, api, sql, query, cookie, cors, csrf |
+ | **performance-oracle** | N+1 queries, memory leaks, caching, bundle size, lazy loading | database, query, cache, render, loop, fetch, load, index, join, pool |
+ | **architecture-strategist** | SOLID, coupling, cohesion, patterns, abstraction, dependency direction | *(always included -- design quality affects everything)* |
+ | **test-coverage-auditor** | Missing tests, edge cases, error paths, boundary conditions | test, spec, coverage, assert, mock, fixture, expect, describe |
+ | **dependency-analyst** | Outdated packages, CVEs, bloat, unused deps, license issues | package, import, require, dependency, npm, pip, yarn, lock |
+
+ ### Selection Rules
+
+ 1. **architecture-strategist** is ALWAYS one of the 3 slots
+ 2. Score remaining 4 specialists by counting trigger keyword matches in the diff content and changed file names
+ 3. Top 2 scoring specialists fill the remaining slots
+ 4. **Tie-breaker priority:** security-sentinel > test-coverage-auditor > performance-oracle > dependency-analyst
+ 5. **No triggers match at all:** Default to security-sentinel + test-coverage-auditor
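The selection rules above reduce to a few lines of scoring. This is an editor's sketch, not package code; `TRIGGERS` abbreviates the keyword table (the full lists are above) and the function name is a placeholder:

```python
# Abbreviated trigger table -- see the full table above for the complete keyword lists.
TRIGGERS = {
    "security-sentinel": ["auth", "login", "password", "token", "sql", "cookie"],
    "performance-oracle": ["database", "query", "cache", "loop", "pool"],
    "test-coverage-auditor": ["test", "spec", "coverage", "mock", "assert"],
    "dependency-analyst": ["package", "import", "require", "dependency", "lock"],
}
# Tie-breaker order from rule 4.
PRIORITY = ["security-sentinel", "test-coverage-auditor",
            "performance-oracle", "dependency-analyst"]

def select_reviewers(diff_text: str) -> list:
    """Rule 1: architecture-strategist always in; rules 2-5 pick the other two slots."""
    text = diff_text.lower()
    scores = {name: sum(text.count(kw) for kw in kws) for name, kws in TRIGGERS.items()}
    if not any(scores.values()):  # rule 5: no triggers match at all
        return ["architecture-strategist", "security-sentinel", "test-coverage-auditor"]
    # Sort by score descending, breaking ties by rule 4's priority order.
    ranked = sorted(scores, key=lambda n: (-scores[n], PRIORITY.index(n)))
    return ["architecture-strategist"] + ranked[:2]
```

A diff touching `login`/`password`/`query` code would select architecture-strategist, security-sentinel, and performance-oracle.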
+
+ ### Dispatch Pattern
+
+ Launch all 3 in ONE message. Each reviewer sees ONLY the diff -- NOT other reviewers' findings (blind review preserved).
 
  ```python
- # ALWAYS launch all 3 in ONE message
- Task(model="sonnet", description="Code review: correctness", prompt="Review for bugs...")
- Task(model="sonnet", description="Code review: security", prompt="Review for vulnerabilities...")
- Task(model="sonnet", description="Code review: performance", prompt="Review for efficiency...")
+ # ALWAYS launch all 3 in ONE message (parallel, blind)
+ Task(
+     model="sonnet",
+     description="Review: Architecture Strategist",
+     prompt="""You are Architecture Strategist. Your SOLE focus is design quality.
+
+ Review ONLY for: SOLID violations, excessive coupling, wrong patterns,
+ missing abstractions, dependency direction issues, god classes/functions.
+
+ Files changed: {files}
+ Diff: {diff}
+
+ Output format:
+ VERDICT: PASS or FAIL
+ FINDINGS:
+ - [severity] description (file:line)
+ Severity levels: Critical, High, Medium, Low"""
+ )
+
+ Task(
+     model="sonnet",
+     description="Review: Security Sentinel",
+     prompt="""You are Security Sentinel. Your SOLE focus is security vulnerabilities.
+
+ Review ONLY for: injection (SQL, XSS, command, template), auth bypass,
+ secrets in code, missing input validation, OWASP Top 10, insecure defaults.
+
+ Files changed: {files}
+ Diff: {diff}
+
+ Output format:
+ VERDICT: PASS or FAIL
+ FINDINGS:
+ - [severity] description (file:line)
+ Severity levels: Critical, High, Medium, Low"""
+ )
+
+ Task(
+     model="sonnet",
+     description="Review: {3rd_selected_specialist}",
+     prompt="""You are {specialist_name}. Your SOLE focus is {focus_area}.
+
+ Review ONLY for: {specific_checks}
+
+ Files changed: {files}
+ Diff: {diff}
+
+ Output format:
+ VERDICT: PASS or FAIL
+ FINDINGS:
+ - [severity] description (file:line)
+ Severity levels: Critical, High, Medium, Low"""
+ )
  ```
 
- **Rules:**
+ ### Rules (unchanged from blind review)
+
  - ALWAYS use sonnet for reviews (balanced quality/cost)
  - NEVER aggregate before all 3 complete
  - ALWAYS re-run ALL 3 after fixes
- - If unanimous approval -> run Devil's Advocate
+ - If unanimous PASS -> run Devil's Advocate (anti-sycophancy check)
+ - Critical/High findings = BLOCK (must fix before merge)
+ - Medium findings = TODO (track but don't block)
+ - Low findings = informational only
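The three blocking rules amount to a small aggregation step over the reviewers' findings. Editor's sketch only; the `(severity, description)` tuples and the function name are illustrative, not the package's actual data shape:

```python
def aggregate(findings: list) -> dict:
    """Map severities to actions: Critical/High block, Medium becomes a TODO, Low is info."""
    blockers = [f for sev, f in findings if sev in ("Critical", "High")]
    todos = [f for sev, f in findings if sev == "Medium"]
    info = [f for sev, f in findings if sev == "Low"]
    return {
        "verdict": "FAIL" if blockers else "PASS",
        "block": blockers,  # must fix before merge
        "todo": todos,      # track but don't block
        "info": info,       # informational only
    }
```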
 
  ---