@lore-ai/cli 0.1.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (45)
  1. package/README.md +178 -0
  2. package/dist/bin/lore.js +14666 -0
  3. package/dist/bin/lore.js.map +1 -0
  4. package/dist/ui/assets/Analytics-W2ANIC2s.js +1 -0
  5. package/dist/ui/assets/ConversationDetail-Ct-hROwS.js +5 -0
  6. package/dist/ui/assets/Conversations-iK7E6GEl.js +1 -0
  7. package/dist/ui/assets/MarkdownPreview-2zDiish4.js +17 -0
  8. package/dist/ui/assets/MarkdownPreview-ZgkIHsf0.css +1 -0
  9. package/dist/ui/assets/Mcps-CCT1FQ4H.js +1 -0
  10. package/dist/ui/assets/Overview-B_jOY8il.js +1 -0
  11. package/dist/ui/assets/PhaseReview-B_DDY9YB.js +1 -0
  12. package/dist/ui/assets/RepoScans-FxiMynYO.js +2 -0
  13. package/dist/ui/assets/RepoSelector-DmPRS8kf.js +1 -0
  14. package/dist/ui/assets/ResizablePanels-Bbb4S6Ss.js +1 -0
  15. package/dist/ui/assets/Review-KjvS-DNP.js +3 -0
  16. package/dist/ui/assets/ScanConversation-BonEB7pv.js +1 -0
  17. package/dist/ui/assets/Scans-DXf2sNms.js +1 -0
  18. package/dist/ui/assets/Skills-MvVWWoB2.js +1 -0
  19. package/dist/ui/assets/ToolUsage-DA5MJNwl.js +33 -0
  20. package/dist/ui/assets/Vetting-BRGVrtOA.js +1 -0
  21. package/dist/ui/assets/index-CVSL0ryk.js +12 -0
  22. package/dist/ui/assets/index-DYKYIfPr.css +1 -0
  23. package/dist/ui/assets/markdown-CZuQZQX5.js +35 -0
  24. package/dist/ui/index.html +15 -0
  25. package/package.json +96 -0
  26. package/prompts/analyze-feedback.md +67 -0
  27. package/prompts/apply-refs-update.md +149 -0
  28. package/prompts/apply-skill-update.md +151 -0
  29. package/prompts/check-relevance.md +137 -0
  30. package/prompts/classify-conversations.md +78 -0
  31. package/prompts/cluster-repo-summaries.md +76 -0
  32. package/prompts/detect-staleness.md +42 -0
  33. package/prompts/distill-changes.md +62 -0
  34. package/prompts/distill-decisions.md +48 -0
  35. package/prompts/distill-patterns.md +39 -0
  36. package/prompts/generate-changelog.md +42 -0
  37. package/prompts/generate-references.md +192 -0
  38. package/prompts/generate-repo-skill.md +387 -0
  39. package/prompts/global-summary.md +55 -0
  40. package/prompts/orchestrate-merge.md +70 -0
  41. package/prompts/pr-description.md +49 -0
  42. package/prompts/research-repo.md +121 -0
  43. package/prompts/summarize-conversation.md +64 -0
  44. package/prompts/test-mcp.md +62 -0
  45. package/prompts/test-skill.md +72 -0
package/prompts/classify-conversations.md
@@ -0,0 +1,78 @@
+ <role>
+ You are a conversation classifier. Your task is to assign development conversations to the correct git repository based on contextual signals.
+ </role>
+
+ <context>
+ Each conversation is an AI coding session (from Cursor IDE or Claude Code CLI). They were run from ambiguous paths and could not be automatically matched to a repo via filesystem signals. You must classify them based on their content.
+ </context>
+
+ <repos>
+ {{REPO_CATALOG}}
+ </repos>
+
+ <conversations>
+ {{CONVERSATION_FINGERPRINTS}}
+ </conversations>
+
+ <task>
+ For each conversation, determine which repo it most likely belongs to.
+
+ Use the following decision tree to classify each conversation:
+
+ **Step 1 — Path match (highest confidence):**
+ If the conversation's raw project path contains a repo name or a unique directory from the repo catalog, assign it immediately with confidence 0.9+.
+
+ **Step 2 — File path match:**
+ If file paths mentioned in the conversation (modified files, read files) match a repo's known directory structure, assign with confidence 0.8+.
+ Example: `src/scanners/claude-code.ts` uniquely matches a repo with `src/scanners/`.
+
+ **Step 3 — Technology + domain signals:**
+ Cross-reference technology stack references (frameworks, libraries, CLI tools) AND domain-specific terms against the repo catalog.
+ - If BOTH tech stack AND domain terms match a single repo → confidence 0.7+
+ - If only ONE signal matches → confidence 0.5-0.7
+
+ **Step 4 — Title keywords (lowest confidence):**
+ If the title contains keywords matching a repo name or domain, assign with confidence 0.4-0.6.
+
+ **Step 5 — No match:**
+ If none of the above produce a match, set repo_index to null, repo_name to null, confidence to 0.0.
+ Do NOT guess — a null assignment is better than a wrong one.
+
+ Additional signals (use to adjust confidence, not as primary classifiers):
+ - Domain-specific terms (e.g. "domain", "DNS", "TLD" → domains repo; "cart", "checkout", "pricing" → cart repo)
+ - The raw project path (if provided) may contain hints even if lossy
+
+ Return a JSON array with one entry per conversation:
+ </task>
+
+ <progress_reporting>
+ IMPORTANT: As you classify conversations, you MUST report progress using Bash.
+ After every 3 conversations you classify, run this command:
+
+ curl -s -X POST http://localhost:{{PORT}}/api/conversations/classify/progress \
+   -H "Content-Type: application/json" \
+   -d '{"batchId": {{BATCH_ID}}, "processed": N, "total": {{BATCH_TOTAL}}}'
+
+ Replace N with how many conversations you have classified so far.
+ Report at: 3, 6, 9, 12, 15, 18, and when fully done ({{BATCH_TOTAL}}).
+
+ After ALL progress reports, output the final JSON array as your last message.
+ </progress_reporting>
+
+ <output_format>
+ [
+   {
+     "id": "cursor-abc123",
+     "repo_index": 5,
+     "repo_name": "org/repo-name",
+     "confidence": 0.85,
+     "reason": "Title mentions 'cart pricing' and file paths include cart/handler.ts"
+   }
+ ]
+
+ Rules:
+ - confidence must be between 0.0 and 1.0
+ - If you cannot determine the repo with any confidence, set repo_index to null, repo_name to null, and confidence to 0.0
+ - Do NOT guess if there are no meaningful signals. It is better to return null than a wrong assignment.
+ - Return ONLY the JSON array as your final message, no other text around it.
+ </output_format>
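The `<output_format>` above implies a stable contract between the prompt and the CLI code that consumes the model's reply. Below is a minimal TypeScript sketch of that contract; the field names come straight from the prompt, while the type name and the parsing helper are illustrative assumptions, not code from the package.

```typescript
// Field names mirror the <output_format> in classify-conversations.md.
// The type name and the helper below are hypothetical, for illustration only.
interface ConversationClassification {
  id: string;                 // conversation id, e.g. "cursor-abc123"
  repo_index: number | null;  // index into the repo catalog, or null when unmatched
  repo_name: string | null;   // "org/repo-name", or null when unmatched
  confidence: number;         // 0.0 to 1.0, per the prompt's rules
  reason: string;             // short justification citing the matching signals
}

// Hypothetical consumer: parse the final message and enforce the null/confidence rules.
function parseClassifications(finalMessage: string): ConversationClassification[] {
  const entries = JSON.parse(finalMessage) as ConversationClassification[];
  return entries.map((entry) => ({
    ...entry,
    confidence: Math.min(1, Math.max(0, entry.confidence)),
    repo_name: entry.repo_index === null ? null : entry.repo_name,
  }));
}
```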
package/prompts/cluster-repo-summaries.md
@@ -0,0 +1,76 @@
+ <role>
+ You are a knowledge synthesis engine. You analyze multiple conversation summaries for a single repository and produce a unified, conflict-resolved knowledge document.
+
+ CRITICAL: Output ONLY a single JSON object. No commentary, no explanation, no markdown fences. Raw JSON only.
+ </role>
+
+ <context>
+ Repository: {{REPO_NAME}}
+ Number of conversations: {{CONVERSATION_COUNT}}
+ Date range: {{DATE_RANGE}}
+ </context>
+
+ <conversation_summaries>
+ {{SUMMARIES}}
+ </conversation_summaries>
+
+ <task>
+ Analyze all conversation summaries above for repository "{{REPO_NAME}}" and produce a clustered knowledge document that:
+
+ 1. **Groups insights by theme** — identify recurring topics across conversations (e.g. "Auth", "UI Components", "Testing", "API Layer", "Deployment").
+
+ 2. **Detects conflicts** — when two conversations suggest contradictory approaches (e.g. "use Redux" in conv A vs "migrate to Zustand" in conv B).
+
+ 3. **Resolves conflicts** using these rules:
+    - Generally, the NEWER conversation wins (more recent context).
+    - However, if the newer conversation was clearly a failed experiment and the older approach was restored, the older wins.
+    - If both are valid for different contexts, merge them.
+    - Always explain the resolution.
+
+ 4. **Consolidates patterns** — merge similar patterns from different conversations into canonical entries.
+
+ 5. **Aggregates known issues** — collect all gotchas, warnings, and constraints.
+ </task>
+
+ <output_format>
+ {
+   "themes": [
+     {
+       "name": "Theme Name",
+       "insights": ["key insight 1", "key insight 2"],
+       "patterns": [{ "pattern": "description", "example": "optional code/file ref" }],
+       "decisions": [{ "decision": "what", "context": "why", "date": "when" }]
+     }
+   ],
+   "activeDecisions": [
+     { "decision": "current approach", "context": "why this was chosen", "date": "when" }
+   ],
+   "deprecatedDecisions": [
+     { "decision": "old approach", "context": "why it was superseded", "date": "when" }
+   ],
+   "patterns": [
+     { "pattern": "canonical pattern description", "example": "code example or file" }
+   ],
+   "conflicts": [
+     {
+       "topic": "what the conflict is about",
+       "approachA": "first approach",
+       "approachB": "second approach",
+       "resolution": "which was chosen and why",
+       "resolvedBy": "newer-wins | older-wins | merged | ai-judgment"
+     }
+   ],
+   "knownIssues": ["issue 1", "issue 2"],
+   "toolingPatterns": ["commonly used tools and MCP patterns"]
+ }
+ </output_format>
+
+ <rules>
+ - Output ONLY the JSON object. No markdown, no fences, no text before or after.
+ - Every field must be present. Use empty arrays [] if no data.
+ - For conflicts, always explain the resolution clearly.
+ - Merge duplicate patterns — don't repeat the same pattern from different conversations.
+ - Keep the output concise. Max 3-5 themes, max 10 decisions, max 10 patterns.
+ - For "resolvedBy": use "newer-wins" when the later conversation supersedes, "older-wins" when a rollback occurred, "merged" when both are valid, "ai-judgment" for nuanced cases.
+ - Date fields are optional strings.
+ </rules>
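For readers mapping this schema onto code, a hedged TypeScript rendering is sketched below. Only the field names and the `resolvedBy` values are taken from the prompt; the type names are assumptions made for illustration.

```typescript
// Illustrative types mirroring the clustered knowledge document schema above.
type ConflictResolution = "newer-wins" | "older-wins" | "merged" | "ai-judgment";

interface KnowledgeTheme {
  name: string;
  insights: string[];
  patterns: { pattern: string; example?: string }[];
  decisions: { decision: string; context: string; date?: string }[];
}

interface RepoKnowledgeDocument {
  themes: KnowledgeTheme[];
  activeDecisions: { decision: string; context: string; date?: string }[];
  deprecatedDecisions: { decision: string; context: string; date?: string }[];
  patterns: { pattern: string; example?: string }[];
  conflicts: {
    topic: string;
    approachA: string;
    approachB: string;
    resolution: string;
    resolvedBy: ConflictResolution;
  }[];
  knownIssues: string[];
  toolingPatterns: string[];
}
```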
package/prompts/detect-staleness.md
@@ -0,0 +1,42 @@
+ <role>
+ You are a documentation freshness checker that compares existing project documentation against recent development activity to identify stale, outdated, or missing documentation.
+ </role>
+
+ <task>
+ Compare the existing documentation against the recent development activity and identify any documentation that is stale, incorrect, or missing.
+ </task>
+
+ <existing_docs>
+ {docs}
+ </existing_docs>
+
+ <activity_data>
+ {activity}
+ </activity_data>
+
+ <instructions>
+ Check for:
+ - Documentation that references code, APIs, or features that have changed
+ - Missing documentation for new features or components
+ - Architecture docs that no longer reflect the current system design
+ - Outdated dependency references or version numbers
+ - Stale configuration examples or setup instructions
+ - README sections that describe removed or restructured functionality
+ </instructions>
+
+ <output_format>
+ [
+   {
+     "file": "path/to/doc.md",
+     "reason": "Why this documentation appears stale or what is missing",
+     "suggestedUpdate": "Brief description of what should be updated or added"
+   }
+ ]
+ </output_format>
+
+ <rules>
+ - Respond ONLY with a valid JSON array matching the schema above
+ - Do not wrap the response in markdown code fences
+ - If no staleness is detected, return an empty array: []
+ - Only flag documentation issues you can clearly identify from the evidence
+ </rules>
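Because the rules require a bare JSON array (possibly empty), a consumer only needs a small validation step. The sketch below is an assumption about how such a check could look; the type and function names are invented for illustration and are not taken from the package.

```typescript
// Fields as specified in <output_format>; the names below are illustrative.
interface StaleDocFlag {
  file: string;            // e.g. "path/to/doc.md"
  reason: string;          // why the doc appears stale or what is missing
  suggestedUpdate: string; // what should be updated or added
}

// Hypothetical guard: keep only well-formed entries, tolerating an empty array.
function toStaleDocFlags(response: string): StaleDocFlag[] {
  const parsed = JSON.parse(response);
  if (!Array.isArray(parsed)) return [];
  return parsed.filter(
    (v): v is StaleDocFlag =>
      typeof v === "object" && v !== null &&
      typeof v.file === "string" &&
      typeof v.reason === "string" &&
      typeof v.suggestedUpdate === "string",
  );
}
```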
package/prompts/distill-changes.md
@@ -0,0 +1,62 @@
+ <role>
+ You are a technical documentation assistant that analyzes development activity and extracts structured information about what changed and why.
+ </role>
+
+ <task>
+ Analyze the development activity provided below -- commits, AI coding conversations, and PR data -- and produce a comprehensive structured summary of changes, decisions, patterns, stale documentation, and changelog entries.
+ </task>
+
+ <context>
+ {context}
+ </context>
+
+ <instructions>
+ - For matched pairs (commit + conversation), extract both WHAT changed and WHY. The conversation is the richest source of intent.
+ - For unmatched commits, describe WHAT changed based on the diff and commit message.
+ - For unmatched conversations, extract decisions, research findings, and architectural discussions even if no code was committed.
+ - Prioritize substance over volume. One well-described change is worth more than five vague ones.
+ - Use the conversation context to infer the WHY behind changes -- do not just describe file diffs.
+ </instructions>
+
+ <output_format>
+ {
+   "changes": [
+     {
+       "summary": "Brief one-line description of the change",
+       "details": "Detailed explanation including the WHY behind the change",
+       "files": ["list", "of", "affected", "files"],
+       "type": "feature|bugfix|refactor|docs|chore"
+     }
+   ],
+   "decisions": [
+     {
+       "title": "Decision title",
+       "context": "What problem or situation prompted this decision",
+       "decision": "What was decided",
+       "alternatives": ["Alternatives considered and why they were rejected"]
+     }
+   ],
+   "patterns": [
+     {
+       "name": "Pattern name",
+       "description": "What the pattern is and why it is being adopted",
+       "examples": ["Specific files or code examples where this pattern appears"]
+     }
+   ],
+   "staleDocs": [
+     {
+       "file": "path/to/doc.md",
+       "reason": "Why this documentation may be stale",
+       "suggestedUpdate": "What should be updated or added"
+     }
+   ],
+   "changelog": "Formatted changelog entries in keep-a-changelog style"
+ }
+ </output_format>
+
+ <rules>
+ - Respond ONLY with valid JSON matching the schema above
+ - Do not wrap the response in markdown code fences
+ - If no items exist for a category, use an empty array
+ - The changelog field should be a plain string, not an array
+ </rules>
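Taken together, the schema above describes one composite result object. A minimal TypeScript sketch is given below, assuming the CLI types the distilled output roughly this way; only the field names and the "type" union come from the prompt, the type names are illustrative.

```typescript
// Illustrative shape of the distilled activity summary defined in <output_format>.
type ChangeType = "feature" | "bugfix" | "refactor" | "docs" | "chore";

interface DistilledActivity {
  changes: { summary: string; details: string; files: string[]; type: ChangeType }[];
  decisions: { title: string; context: string; decision: string; alternatives: string[] }[];
  patterns: { name: string; description: string; examples: string[] }[];
  staleDocs: { file: string; reason: string; suggestedUpdate: string }[];
  changelog: string; // a plain keep-a-changelog-style string, not an array
}
```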
package/prompts/distill-decisions.md
@@ -0,0 +1,48 @@
+ <role>
+ You are a technical documentation assistant specializing in Architectural Decision Records (ADRs). You identify and articulate significant technical decisions from development conversations.
+ </role>
+
+ <task>
+ Analyze the development conversations below and extract architectural decisions that were made, including the reasoning and tradeoffs involved.
+ </task>
+
+ <context>
+ {context}
+ </context>
+
+ <instructions>
+ Focus on decisions related to:
+ - Technology choices (frameworks, libraries, tools)
+ - Architecture patterns adopted or rejected
+ - Data model and schema decisions
+ - API design choices
+ - Performance tradeoffs
+ - Security considerations
+ - Build, deployment, and infrastructure choices
+
+ For each decision found, capture:
+ - A concise, descriptive title
+ - The problem or situation that prompted it
+ - What was ultimately decided
+ - Consequences and implications (both positive and negative)
+ - Alternatives that were considered but rejected, and why
+ </instructions>
+
+ <output_format>
+ [
+   {
+     "title": "Decision title",
+     "context": "Problem or situation that prompted this decision",
+     "decision": "What was decided",
+     "consequences": "Implications of the decision (positive and negative)",
+     "alternatives": ["Alternative 1: rejected because...", "Alternative 2: rejected because..."]
+   }
+ ]
+ </output_format>
+
+ <rules>
+ - Respond ONLY with a valid JSON array matching the schema above
+ - Do not wrap the response in markdown code fences
+ - If no decisions are found, return an empty array: []
+ - Only include genuinely architectural decisions, not trivial implementation choices
+ </rules>
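As a concrete illustration of the kind of entry this prompt is after (an architectural choice with tradeoffs, not a trivial implementation detail), a hypothetical record might look like the following. The decision, alternatives, and rationale are invented examples, not taken from this package.

```typescript
// Hypothetical ADR-style entry matching the schema above (contents are invented).
const exampleDecision = {
  title: "Adopt a local database for scan persistence",
  context: "Scan results must survive between CLI runs without requiring a server",
  decision: "Persist scans and summaries in an embedded local database",
  consequences: "Zero-setup persistence, but schema migrations become the CLI's responsibility",
  alternatives: [
    "Plain JSON files on disk: rejected because cross-record queries were slow and brittle",
    "A hosted database: rejected because it adds infrastructure a local CLI should not need",
  ],
};
```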
package/prompts/distill-patterns.md
@@ -0,0 +1,39 @@
+ <role>
+ You are a technical documentation assistant that identifies recurring engineering patterns and conventions emerging in a codebase.
+ </role>
+
+ <task>
+ Analyze the codebase activity below and identify patterns and conventions that are being established or followed consistently.
+ </task>
+
+ <context>
+ {context}
+ </context>
+
+ <instructions>
+ Look for:
+ - Recurring architectural patterns (e.g., repository pattern, service layer, middleware chains)
+ - Coding conventions being established (naming, file structure, module exports)
+ - Dependency usage patterns (how libraries are being used across the codebase)
+ - Testing patterns (test structure, mocking approaches, fixture handling)
+ - Error handling approaches (custom errors, error boundaries, retry logic)
+ - State management patterns (stores, caching, data flow)
+
+ Only report patterns you can identify from the evidence. Do not invent patterns that are not clearly present.
+ </instructions>
+
+ <output_format>
+ [
+   {
+     "name": "Pattern name",
+     "description": "What the pattern is and why it is being adopted",
+     "examples": ["Specific files or code examples where this pattern appears"]
+   }
+ ]
+ </output_format>
+
+ <rules>
+ - Respond ONLY with a valid JSON array matching the schema above
+ - Do not wrap the response in markdown code fences
+ - If no patterns are found, return an empty array: []
+ </rules>
package/prompts/generate-changelog.md
@@ -0,0 +1,42 @@
+ <role>
+ You are a changelog writer following the Keep a Changelog (https://keepachangelog.com) format.
+ </role>
+
+ <task>
+ Generate clean, user-facing changelog entries from the structured changes provided below.
+ </task>
+
+ <context>
+ {context}
+ </context>
+
+ <instructions>
+ - Use only the categories that have entries: Added, Changed, Fixed, Removed
+ - Each entry starts with "- " (dash space) and is a single concise line
+ - Describe the user-visible impact, not the implementation details
+ - Use present tense ("Add dark mode" not "Added dark mode")
+ - Group related changes under the most appropriate category
+ - Omit trivial changes that would not matter to a user or contributor reading the changelog
+ </instructions>
+
+ <output_format>
+ Plain text using these section headers (omit sections with no entries):
+
+ ### Added
+ - New features and capabilities
+
+ ### Changed
+ - Changes to existing functionality
+
+ ### Fixed
+ - Bug fixes
+
+ ### Removed
+ - Removed features or deprecated functionality
+ </output_format>
+
+ <rules>
+ - Respond with ONLY the formatted changelog text
+ - No JSON wrapping, no markdown code fences, no preamble
+ - Do not include empty categories
+ </rules>
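To make the formatting rules concrete, a hypothetical output (shown as a TypeScript string constant for consistency with the other sketches above) might read as follows. The entries are invented; what matters is the present tense, the user-facing wording, and the omitted empty categories.

```typescript
// Hypothetical example of the expected changelog text (entries are invented).
const exampleChangelog = `### Added
- Add dark mode toggle to the settings page

### Fixed
- Fix crash when a scan directory contains symlinked repositories`;
```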
package/prompts/generate-references.md
@@ -0,0 +1,192 @@
+ <role>
+ You are a technical writer generating deep-dive reference documentation for an AI coding agent's skill.
+ You have Bash access to read existing files, write generated files, and report progress.
+
+ Write each generated file directly to disk. Your stdout output is only for confirmation summaries.
+ </role>
+
+ <tools>
+ You have Bash access. Use it for:
+
+ 1. **Read the SKILL.md for context** on what the skill covers:
+ ```bash
+ cat "{{SKILL_DIR}}/SKILL.md"
+ ```
+
+ 2. **Read existing reference files** (if updating) to preserve content:
+ ```bash
+ ls "{{SKILL_DIR}}/references/"
+ cat "{{SKILL_DIR}}/references/INDEX.md"
+ ```
+
+ 3. **Write each generated file to disk** using a heredoc:
+ ```bash
+ mkdir -p "{{SKILL_DIR}}/references"
+ cat > "{{SKILL_DIR}}/references/INDEX.md" << 'LORE_EOF'
+ (full content)
+ LORE_EOF
+ ```
+ Use `cat >` with the `LORE_EOF` heredoc delimiter for EVERY file you write.
+ The single quotes around 'LORE_EOF' prevent shell variable expansion.
+
+ 4. **Report progress** after writing each file to disk (best-effort, silent on failure):
+ ```bash
+ echo '{"file":"FILENAME","completed":N,"total":{{TOTAL_FILES}},"phase":"gen-refs"}' > "{{PROGRESS_FILE}}"
+ curl -s -X POST http://localhost:{{UI_PORT}}/api/events/progress \
+   -H 'Content-Type: application/json' \
+   -d '{"file":"FILENAME","completed":N,"total":{{TOTAL_FILES}},"phase":"gen-refs"}' 2>/dev/null || true
+ ```
+ The `echo` line writes a local progress file so the CLI can show progress even when the web UI is not running. Always run BOTH lines.
+
+ If existing reference files exist, read them before writing to preserve valuable content.
+ If they don't exist, create directories with `mkdir -p` and generate from scratch.
+ </tools>
+
+ <context>
+ Repository: {{REPO_NAME}}
+ Skill directory: {{SKILL_DIR}}
+
+ The core skill files (SKILL.md, established-patterns.md, guidelines.md) have already been generated.
+ Your job is to produce the deep-dive reference documentation layer.
+ </context>
+
+ <installed_skills>
+ The user has the following skills already installed. When generating reference docs,
+ do NOT duplicate content already covered by these skills. Add brief cross-references
+ instead ("For [topic], see the [skill-name] skill.").
+
+ {{INSTALLED_SKILLS_DIGEST}}
+ </installed_skills>
+
+ <installed_mcps>
+ The user has the following public MCP servers configured. If reference docs cover
+ MCP-related subsystems, document the integration patterns for these public MCPs.
+
+ {{INSTALLED_MCPS_DIGEST}}
+ </installed_mcps>
+
+ <research_findings>
+ {{RESEARCH_DOC}}
+ </research_findings>
+
+ <task>
+ Generate deep-dive reference documentation for the major subsystems, modules, or topics in this repo.
+
+ Workflow:
+ 1. Read SKILL.md via Bash for context on the skill's scope
+ 2. Check for existing reference files (read via Bash if they exist)
+ 3. Generate each reference doc
+ 4. Write each file to disk via Bash (`cat > "{{SKILL_DIR}}/FILENAME" << 'LORE_EOF'`)
+ 5. Report progress after writing each file
+
+ Each reference doc covers ONE focused topic with real code examples and practical guidance.
+ Only generate docs for topics that have enough substance for 50+ lines of useful documentation.
+
+ **DEDUPLICATION (CRITICAL):** Before writing new reference files, list existing files with `ls "{{SKILL_DIR}}/references/"`.
+ If existing files cover the same topic as a new file (even under a different number/name), you MUST:
+ 1. Remove the stale file: `rm "{{SKILL_DIR}}/references/OLD-FILE.md"`
+ 2. Write the replacement file with the canonical number and name
+ 3. Never leave two files covering the same topic (e.g., `01-scan-pipeline.md` and `01-pipeline-architecture.md` both covering the scan pipeline is WRONG)
+
+ After writing all new files, remove any remaining stale files whose topics are now covered:
+ ```bash
+ # Example: remove old duplicates after writing canonical versions
+ rm -f "{{SKILL_DIR}}/references/01-scan-pipeline.md" # replaced by 01-pipeline-architecture.md
+ ```
+
+ You MUST write at least:
+ 1. `references/INDEX.md` — the reference folder index (NOT README.md)
+ 2. 3-8 numbered `references/NN-topic-name.md` files — the actual deep-dive docs (max 8 files — merge related topics)
+ </task>
+
+ <output_format>
+ Write each file to disk using Bash heredoc. After all files are written, output a summary:
+ `WRITTEN: references/INDEX.md, references/01-topic-name.md, ...`
+
+ ### File structure to write:
+
+ ```bash
+ mkdir -p "{{SKILL_DIR}}/references"
+ cat > "{{SKILL_DIR}}/references/INDEX.md" << 'LORE_EOF'
+ ```
+
+ ### references/INDEX.md content:
+
+ # {{REPO_NAME}} Reference Documentation
+
+ This folder contains deep-dive reference documentation for key topics.
+
+ ## Documentation Files
+
+ | # | File | Topic | When to Use |
+ |---|------|-------|-------------|
+ | 01 | [topic-name.md](01-topic-name.md) | [Topic] | [When to read it] |
+ | 02 | [topic-name.md](02-topic-name.md) | [Topic] | [When to read it] |
+
+ ## Quick Start
+
+ 1. **[Common task 1]?** → Start with [01-topic.md](01-topic.md)
+ 2. **[Common task 2]?** → See [02-topic.md](02-topic.md)
+
+ ### references/NN-topic-name.md content:
+
+ # [Topic Name]
+
+ > [One-line summary]
+
+ **Source**: [Key file path or module]
+
+ ## Overview
+ [Brief context — what this subsystem/module does and why it matters]
+
+ ## Key Concepts
+ [Core concepts, types, interfaces with code examples]
+
+ ## Usage
+ [How to use this — real code from the repo]
+
+ ## Common Patterns
+ [Patterns specific to this topic]
+
+ ## Gotchas
+ [Common mistakes, edge cases, things to watch out for]
+ </output_format>
+
+ <rules>
+ - **READ FIRST**: If {{SKILL_DIR}} exists, `cat` the SKILL.md to understand the skill's scope before writing reference docs.
+ - **WRITE TO DISK**: Write each file using `cat > "{{SKILL_DIR}}/FILENAME" << 'LORE_EOF'`. Do NOT output file content to stdout with markers.
+ - **XML STRUCTURE (CRITICAL)**: These files are consumed by AI agents as instructions/context.
+   Wrap every major section in descriptive XML tags to enable precise parsing and retrieval.
+   Use XML tags as the primary structural delimiter, with markdown headers inside for readability.
+   Example:
+   ```
+   <overview>
+   ## Overview
+   [content...]
+   </overview>
+
+   <key_concepts>
+   ## Key Concepts
+   [content...]
+   </key_concepts>
+
+   <usage>
+   ## Usage
+   [content...]
+   </usage>
+   ```
+   Tag names should be descriptive and lowercase (e.g., `<quick_access>`, `<gotchas>`,
+   `<common_patterns>`, `<when_to_read>`).
+ - **Heredoc delimiter**: Always use `LORE_EOF` (with single quotes to prevent expansion). Never use `EOF` or other common delimiters.
+ - **Directory creation**: Run `mkdir -p "{{SKILL_DIR}}/references"` before writing reference files.
+ - Number reference files 01-NN in logical reading order. Maximum 8 reference files — merge related subtopics into a single file.
+ - Each reference file covers ONE focused topic — don't combine unrelated topics, but DO merge closely related subtopics (e.g., "CLI commands" and "terminal UI" belong in one file, not two).
+ - **DEDUPLICATION**: After writing, remove any stale reference files whose topics are now covered by new files. Never leave duplicates (e.g., two files about the same pipeline or extension).
+ - Include real code examples from the repo, not theoretical examples.
+ - Use imperative form ("Use X" not "You should use X").
+ - Concise is key — only add context the agent doesn't already know.
+ - All cross-references between docs must use correct relative paths.
+ - Skip topics that don't have enough substance for 50+ lines.
+ - **MAX 8 REFERENCE FILES**: Generate at most 8 numbered reference files. If more than 8 topics exist, merge closely related topics into a single file. For example, merge "CLI commands" + "terminal UI" into one file, merge "align engine" + "drift detection" into one file. After writing, count files with `ls "{{SKILL_DIR}}/references/"` and merge/remove if over 8.
+ - **Bash usage**: Use Bash for `cat` (read/write), `grep`, `ls`, `mkdir`, `rm` (cleanup), and `curl` (progress). Do NOT use Bash to run builds or execute code.
+ </rules>
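The rules above amount to a small set of checkable invariants: an INDEX.md must exist, there are at most 8 numbered reference files, and no two files share a topic slot. A hedged sketch of a post-generation check is shown below; it is an assumption about how such a check could be written, not code from the package.

```typescript
import { readdirSync } from "node:fs";

// Hypothetical post-check for the invariants stated in <rules>.
function checkReferencesDir(referencesDir: string): string[] {
  const problems: string[] = [];
  const files = readdirSync(referencesDir);

  if (!files.includes("INDEX.md")) {
    problems.push("references/INDEX.md is missing");
  }

  // Numbered deep-dive docs follow the NN-topic-name.md convention.
  const numbered = files.filter((f) => /^\d{2}-[a-z0-9-]+\.md$/.test(f));
  if (numbered.length > 8) {
    problems.push(`found ${numbered.length} numbered reference files; the maximum is 8`);
  }

  // Two files sharing an NN prefix usually means a stale duplicate was left behind.
  const prefixes = numbered.map((f) => f.slice(0, 2));
  if (new Set(prefixes).size !== prefixes.length) {
    problems.push("duplicate NN prefixes detected; remove the stale file");
  }

  return problems;
}
```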