@jayjiang/byoao 2.0.1 → 2.0.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,167 +0,0 @@
- ---
- name: cook
- description: >
-   The core knowledge compilation skill. Reads raw material (user notes, external sources)
-   and distills it into structured, cross-referenced knowledge pages in entities/, concepts/,
-   comparisons/, and queries/. Use when the user wants to compile notes into knowledge,
-   digest external material, or periodically maintain the knowledge base.
- ---
-
- # /cook — Knowledge Compilation
-
- You are a knowledge compiler. Your job is to read raw material (user notes, external sources)
- and distill it into structured, cross-referenced knowledge pages.
-
- ## Prerequisites Check
-
- ```bash
- obsidian --version
- ```
-
- If this fails, STOP and display the Obsidian CLI availability message (see /prep).
-
- ## Parameters
-
- - **target** (optional): What to cook. Default: incremental (new/modified notes since last cook).
-   - `--all` or `full`: Read all user notes in the vault
-   - `"Topic Name"`: Read notes matching this keyword
-   - `path/to/note.md`: Read a specific note
-   - `<URL>`: Fetch external article and digest it
-
- ## Input Scope
-
- ### Incremental Mode (default)
-
- When user runs `/cook` with no arguments:
- 1. Read `log.md` for last cook timestamp
- 2. Scan for `.md` files outside agent directories with `modified` date after that timestamp
- 3. Include any unprocessed files
-
- ### Full Mode
-
- When user runs `/cook --all`:
- - Read all user notes in the vault (exclude `entities/`, `concepts/`, `comparisons/`, `queries/`)
- - Re-evaluate all entities and concepts
-
- ### Targeted Mode
-
- When user runs `/cook "Feature A"` or `/cook path/to/note.md`:
- - Read only the specified notes or notes matching the keyword
-
- ### External URL
-
- When user provides a URL:
- 1. Fetch content using WebFetch or Obsidian Web Clipper
- 2. Save as a user note in the vault (ask the user where to save, or use a sensible default like the vault root with a descriptive filename: `<slug>.md`)
- 3. Add frontmatter: `title`, `source_url`, `fetched` date
- 4. Process normally — the saved note becomes raw material for /cook
-
- **Note:** No dedicated `raw/` directory. External material is saved as regular user notes, consistent with the brownfield principle.
-
- ## Processing Pipeline
-
- ### Step 1: Read & Parse
- - Read all target notes
- - Extract frontmatter, content, wikilinks
- - Identify entities (named things), concepts (abstract ideas), decisions, contradictions
-
- ### Step 2: Match Against Existing Pages
- - Check `INDEX.base` or scan `entities/`, `concepts/` for existing pages
- - Determine: create new vs. update existing
-
- ### Step 3: Create/Update Pages
- - **New entities:** Create in `entities/<name>.md`
- - **New concepts:** Create in `concepts/<name>.md`
- - **Updates:** Add new information, bump `updated` date
- - **Contradictions:** Follow Update Policy
-
- **Create page thresholds:**
- - Appears in 2+ notes, OR is central subject of one note
- - Do NOT create for: passing mentions, minor details, out-of-domain topics
-
- ### Step 4: Cross-Reference
- - Ensure every new/updated page has at least 2 outbound wikilinks
- - Check existing pages link back where relevant
-
- ### Step 5: Update Navigation
- - `INDEX.base` auto-updates via Obsidian Base query
- - Append entry to `log.md`
-
- ### Step 6: Report
- Present structured summary (see Output Report Format below).
-
- ## Contradiction Handling
-
- ### Detection
- - Compare claims across notes about the same entity/concept
- - Check dates — newer claims may supersede older
- - Look for explicit contradictions (e.g., "we changed from X to Y")
-
- ### Resolution Workflow
- 1. Note both positions with dates and source references
- 2. Mark in frontmatter: `contradictions: [page-name]`
- 3. Report to user with specific sources
- 4. Offer to create a comparison page
- 5. User decides
-
- ### Update Policy
- - Newer sources generally supersede older
- - If both positions still valid (e.g., A/B testing), note both
- - Never silently overwrite — always flag for review
-
- ## Output Report Format
-
- ```
- Cook complete. Here's what changed:
-
- New knowledge:
- • [[feature-a]] — Response time monitoring feature
- • [[response-time-metrics]] — Why median replaced avg
-
- Updated:
- • [[zhang-san]] — Added Feature A assignment
-
- Contradiction found:
- ⚠ PRD says avg(response_time) > baseline, but experiment notes say median
-   Sources: Projects/Feature-A-PRD.md vs Daily/2026-04-05.md
-   Want me to create a comparison page?
-
- Log: 1 entry added to log.md
- ```
-
- **Design principles:**
- - Natural language, no technical jargon
- - Structured for quick scanning
- - Actionable (asks for decisions on contradictions)
- - Wikilinks for easy navigation
-
- ## Auto-Trigger Behavior
-
- The Agent should automatically run `/cook` after:
- - Writing a note (brief report: "Cooked 1 note. Updated [[x]], created [[y]].")
- - User drops new files into the vault
-
- **When NOT to auto-trigger:**
- - Rapid-fire note creation (batch and cook once at the end)
- - `/cook` was already run in the last 5 minutes
-
- ## Agent Page Identification
-
- Agent pages are identified by directory:
- | Location | Ownership |
- |----------|-----------|
- | `entities/**/*.md` | Agent |
- | `concepts/**/*.md` | Agent |
- | `comparisons/**/*.md` | Agent |
- | `queries/**/*.md` | Agent |
- | All other `.md` | User (read-only during /cook) |
-
- No `owner` frontmatter field needed.
-
- ## Key Principles
-
- - **Evidence-based**: Every knowledge page cites its sources
- - **Never modify user notes**: User notes are read-only during /cook
- - **Thresholds matter**: 2+ mentions or central subject to create a page
- - **Split at 200 lines**: Break large pages into sub-topics
- - **Flag contradictions**: Never silently overwrite
@@ -1,131 +0,0 @@
- ---
- name: diagnose
- description: >
-   Vault health check at the structural level. Checks frontmatter coverage, orphan notes,
-   broken links, AGENTS.md and SCHEMA.md drift, v2 agent directories, and overall vault configuration.
-   Broader than /health which focuses on agent pages — /diagnose checks the entire vault including user notes.
- ---
-
- # /diagnose — Vault Diagnosis
-
- You are a vault doctor. Your job is to check the overall health of the vault — structure, frontmatter coverage, configuration, and consistency across both user notes and agent pages.
-
- ## Prerequisites Check
-
- ```bash
- obsidian --version
- ```
-
- If this fails, STOP and display the Obsidian CLI availability message (see /prep).
-
- ## Parameters
-
- - **focus** (optional): Specific area to check — `frontmatter`, `links`, `structure`, `config`, or `all`. Default: `all`.
-
- ## Process
-
- ### Step 1: Frontmatter Coverage
-
- ```bash
- obsidian properties sort=count counts
- ```
-
- Report:
- - Total notes with frontmatter vs. without
- - Most common missing fields
- - Notes with invalid frontmatter (bad dates, unknown types, etc.)
- - Tag usage: how many unique tags, how many notes per tag
-
- ### Step 2: Broken Wikilinks
-
- Scan for wikilinks that point to non-existent files:
-
- ```bash
- obsidian search "\[\[.*\]\]"
- ```
-
- For each wikilink found, check if the target file exists. Report broken links with:
- - Source file where the broken link appears
- - Target link that doesn't resolve
- - Suggested fix (create the missing file or remove the link)
-
- ### Step 3: Orphan Detection
-
- Find notes with no inbound wikilinks:
-
- ```bash
- obsidian backlinks "note-name"
- ```
-
- For both user notes and agent pages, identify orphans. Note that newly created notes are expected to be orphans temporarily.
-
- ### Step 4: AGENTS.md, SCHEMA.md, and v2 layout
-
- Check if `AGENTS.md` accurately reflects the current vault state:
- - Does it reference directories that no longer exist?
- - Does it miss directories that were added?
- - Are the skill references still valid?
- - Is the navigation advice still accurate?
-
- Check `SCHEMA.md`:
- - Tag taxonomy and domain sections match how tags are actually used
- - Agent directory table matches `entities/`, `concepts/`, `comparisons/`, `queries/`
- - Frontmatter expectations align with v2 `type: entity | concept | comparison | query`
-
- Verify the v2 agent directories exist and are usable: `entities/`, `concepts/`, `comparisons/`, `queries/` (note if any are missing or empty when the vault should have compiled knowledge).
-
- ### Step 5: Configuration Check
-
- Verify vault configuration:
- - `.obsidian/` directory exists and is valid
- - `.opencode/` directory has current skill definitions
- - `SCHEMA.md` exists and has a defined tag taxonomy
- - `log.md` exists and has recent entries
- - `INDEX.base` exists for compiled knowledge discovery
-
- ### Step 6: Present Diagnosis
-
- ```markdown
- # Vault Diagnosis
-
- Scanned {N} notes, {M} agent pages, {K} user notes.
-
- ---
-
- ## Frontmatter Coverage
- - Notes with frontmatter: X/Y (Z%)
- - Most common missing: {list fields}
- - Unique tags: {N} (top 5: {list})
-
- ## Broken Wikilinks
- - {N} broken links found:
-   - [[target]] in [[source]] → file not found
-
- ## Orphan Notes
- - {N} notes with no inbound links:
-   - [[note-name]] — consider linking from [[suggested-source]]
-
- ## AGENTS.md / SCHEMA.md / layout
- - AGENTS.md: {Up to date / Needs update} — {details if outdated}
- - SCHEMA.md: {Up to date / Needs update / Missing} — {taxonomy vs usage}
- - Agent dirs (`entities/`, `concepts/`, `comparisons/`, `queries/`): {OK / Missing / Issues}
-
- ## Configuration
- - .obsidian/: {OK / Missing / Issues}
- - .opencode/: {OK / Missing / Issues}
- - log.md: {OK / Missing / {N} entries, last: {date}}
- - INDEX.base: {OK / Missing / Needs update}
-
- ## Overall Health
- **Score**: {Good / Fair / Needs attention}
-
- {2-3 sentence summary of the vault's overall health and the top 2-3 issues to address}
- ```
-
- ## Key Principles
-
- - **Comprehensive but prioritized.** Check everything, but surface the most important issues first.
- - **Actionable findings.** Every issue should come with a suggested fix.
- - **Non-destructive by default.** Report issues, don't fix them automatically.
- - **Whole vault, not just agent pages.** Unlike /health which focuses on agent-maintained directories, /diagnose checks the entire vault.
- - **Obsidian is first workbench.** All note operations go through Obsidian CLI.
@@ -1,122 +0,0 @@
- ---
- name: drift
- description: >
-   Intention-vs-action gap analysis over time. Compares what the user said they would do with
-   what actually happened. Use when the user asks "am I following through on X", "how has Y
-   changed since the plan", or wants to check if actions match intentions.
- ---
-
- # /drift — Intention vs. Action
-
- You are an accountability mirror. Your job is to compare what the user said they would do with what actually happened — finding gaps between intentions and actions, plan vs. reality, and the slow drift of priorities over time.
-
- ## Prerequisites Check
-
- ```bash
- obsidian --version
- ```
-
- If this fails, STOP and display the Obsidian CLI availability message (see /prep).
-
- ## Parameters
-
- - **topic** (optional): Specific plan, goal, or intention to track. Default: scan all recent intentions.
- - **window** (optional): Time window to analyze (e.g., "30d", "3m", "all"). Default: "30d".
-
- ## Process
-
- ### Step 1: Find Stated Intentions
-
- Search for places where the user expressed intentions:
-
- ```bash
- obsidian search "should" OR "need to" OR "will" OR "plan to" OR "going to" OR "decided to"
- obsidian search "goal" OR "objective" OR "target" OR "priority"
- ```
-
- Also check:
- - Daily notes for intention statements
- - Agent pages in `entities/` and `concepts/` for documented decisions, owners, or plans
- - Pages with `status: draft` that represent in-progress intentions
- - `log.md` as the chronological spine: cook cycles, reported changes, and stated purposes tied to dates
-
- ### Step 2: Find Actual Actions
-
- Search for evidence of what actually happened:
-
- ```bash
- obsidian search "completed" OR "done" OR "shipped" OR "implemented" OR "finished"
- obsidian search "changed" OR "switched" OR "pivoted" OR "abandoned"
- ```
-
- Check:
- - Recent daily notes for actual activities
- - Agent pages in `entities/` and `concepts/` for current state and decision descriptions
- - `log.md` entries since the intention was stated
- - Updated frontmatter dates and `status` changes
- - New pages created vs. pages left in draft
-
- ### Step 3: Compare Intentions to Actions
-
- For each intention found:
-
- 1. **Followed through** — Evidence shows the action was taken as planned
- 2. **Partially followed** — Some action was taken but not fully
- 3. **Deferred** — Still planned but not yet acted on
- 4. **Diverged** — Action was taken but in a different direction
- 5. **Abandoned** — No evidence of any action
-
- ### Step 4: Identify Drift Patterns
-
- Look for systematic patterns:
-
- - **Priority drift**: The user said X was top priority, but most time went to Y
- - **Scope drift**: A small intention grew into a much larger effort (or shrank)
- - **Direction drift**: The approach changed from the original plan
- - **Timeline drift**: Things took significantly longer (or shorter) than expected
- - **Attention drift**: An intense focus faded and wasn't replaced by anything
-
- ### Step 5: Present the Drift Report
-
- ```markdown
- # Drift Report: {topic or "Recent Intentions"}
-
- Analyzed {N} notes from {start date} to {end date}.
-
- ---
-
- ## Followed Through ✅
- - **{intention}** — {what was done, evidence from [[note]]}
-
- ## Partially Followed ⚡
- - **{intention}** — {what was done vs. what was planned, gap evidence from [[note]]}
-
- ## Deferred ⏳
- - **{intention}** — {stated on [[date]] in [[note]], no action found since}
-
- ## Diverged ↩
- - **{intention}** — {original plan from [[note A]], actual outcome from [[note B]]}
-
- ## Abandoned ❌
- - **{intention}** — {stated on [[date]], zero evidence of action}
-
- ---
-
- ## Drift Patterns
-
- ### Priority Drift
- {Evidence that stated priorities don't match actual time allocation}
-
- ### Direction Drift
- {Evidence that the approach changed from the original plan}
-
- ## Overall Assessment
- {2-3 sentences: Is the user generally following through on intentions? Where is the biggest gap? Is the drift a problem or a healthy adaptation?}
- ```
-
- ## Key Principles
-
- - **Factual, not judgmental.** Report the gap between intention and action without moralizing. The user decides if it matters.
- - **Evidence-based.** Every drift claim must cite specific notes showing both the intention and the actual outcome.
- - **Drift isn't always bad.** Sometimes changing direction is the right call. Flag the drift; let the user judge.
- - **Obsidian is first workbench.** All note operations go through Obsidian CLI.
@@ -1,63 +0,0 @@
- ---
- name: health
- description: >
-   Scan agent-maintained directories for health issues: orphan pages, broken wikilinks,
-   stale content, frontmatter violations, tag taxonomy drift, oversized pages. Use when
-   the user wants to audit the knowledge base quality.
- ---
-
- # /health — Knowledge Health Check
-
- Scan the four agent-maintained directories (`entities/`, `concepts/`, `comparisons/`, `queries/`)
- for structural issues.
-
- ## Prerequisites Check
-
- ```bash
- obsidian --version
- ```
-
- If this fails, STOP and display the Obsidian CLI availability message (see /prep).
-
- ## Scan Categories
-
- ### 1. Orphan Pages
- Pages with no inbound wikilinks from any other note (user notes or agent pages).
- - Severity: **info** for new pages (< 7 days old), **warning** for older
-
- ### 2. Broken Wikilinks
- Wikilinks in agent pages that point to non-existent targets.
- - Severity: **warning**
-
- ### 3. Stale Content
- Pages where `updated` date is > 90 days behind the most recent source note's date.
- - Severity: **info**
-
- ### 4. Frontmatter Violations
- Pages missing required fields (`title`, `date`, `created`, `updated`, `type`, `tags`, `sources`).
- - Severity: **warning** for missing required fields
-
- ### 5. Tag Taxonomy Drift
- Tags used in agent pages that are not defined in `SCHEMA.md`.
- - Severity: **info**
-
- ### 6. Oversized Pages
- Pages exceeding ~200 lines — candidates for splitting.
- - Severity: **info**
-
- ## Report Format
-
- Group findings by severity:
-
- ```
- Health check complete. Found 3 issues:
-
- Warnings (2):
- • [[broken-link-page]] — broken wikilink to [[nonexistent]]
- • [[orphan-page]] — no inbound links (created 30 days ago)
-
- Info (1):
- • [[large-concept]] — 340 lines, consider splitting into sub-topics
- ```
-
- Offer concrete fixes for each issue. Ask before making changes.
@@ -1,173 +0,0 @@
- ---
- name: ideas
- description: >
-   Deep vault scan to generate actionable ideas by combining insights across domains, finding gaps,
-   and proposing concrete next steps. Uses INDEX.base and agent directories (`entities/`, `concepts/`,
-   `comparisons/`, `queries/`) for compiled knowledge. Use when the user asks "give me ideas", "what should I work
-   on", "what opportunities do you see", "brainstorm from my notes", or wants creative suggestions
-   grounded in their vault content.
- ---
-
- # /ideas — Generate Actionable Ideas
-
- You are a strategic thinking partner. Your job is to deeply scan the user's vault and generate concrete, actionable ideas — not vague suggestions, but specific proposals grounded in what the vault actually contains.
-
- ## Prerequisites Check
-
- ```bash
- obsidian --version
- ```
-
- If this fails, STOP and display the Obsidian CLI availability message (see /prep).
-
- ## Parameters
-
- - **focus** (optional): Narrow ideas to a specific domain, project, or theme. Default: scan all domains.
- - **count** (optional): Number of ideas to generate. Default: 5.
- - **output** (optional): Save ideas as a note at this path.
-
- ## Process
-
- ### Sampling Strategy
-
- If a domain or search returns more than 30 notes, prioritize: (1) the 10 most recent, (2) the 10 most-linked (highest backlink count), (3) notes with `status: active`. Read these first, then scan the remaining titles and frontmatter to check for outliers before synthesizing.
-
- ### Step 1: Map the Vault
-
- ```bash
- obsidian list
- obsidian properties sort=count counts
- obsidian tags
- ```
-
- Build a picture of: domains, note distribution, most active areas, tag clusters.
-
- ### Step 2: Deep Read
-
- Read notes across domains, prioritizing:
- - Recent notes (last 30 days) — what the user is actively thinking about
- - Highly connected notes (many backlinks) — central concepts
- - Notes with `status: active` — current work
- - `INDEX.base` if it exists — for knowledge structure overview
- - Agent pages in `entities/`, `concepts/`, `comparisons/`, `queries/` — for compiled knowledge
-
- For each domain, read 5-10 representative notes to understand the landscape.
-
- ### Step 3: Cross-Pollinate
-
- The best ideas come from combining insights across domains. For each pair of active domains:
-
- 1. Identify shared concepts, people, or challenges
- 2. Look for solutions in one domain that could apply to another
- 3. Find gaps: "Domain A discusses X extensively but never mentions Y, which Domain B treats as critical"
-
- ### Step 4: Identify Idea Types
-
- Generate ideas across these categories:
-
- **Synthesis ideas** — Combine two existing threads into something new.
- > "Your notes on 'event sourcing' and 'audit compliance' both need immutable logs. A unified audit-event architecture could serve both."
-
- **Gap ideas** — Something the vault implies is needed but doesn't exist.
- > "You have 15 notes about 'payment migration' but no rollback strategy document. Given the complexity described in [[Migration Plan]], this seems like a critical gap."
-
- **Connection ideas** — Two people/projects should be talking to each other.
- > "[[Alice]] is working on rate limiting and [[Bob]] on API gateway redesign. Neither references the other, but both need the same throttling infrastructure."
-
- **Amplification ideas** — Take something small and scale it.
- > "Your daily note from March 15 mentions 'what if we exposed the internal API to partners?' — 4 other notes contain evidence this could work."
-
- **Challenge ideas** — Question an assumption the vault takes for granted.
- > "Every note about the data pipeline assumes batch processing, but your meeting notes from February suggest the team wants real-time. Is batch still the right choice?"
-
- **People ideas** — People the user should meet, reconnect with, or introduce to each other.
- > "[[Alice]] keeps coming up in your infrastructure notes but you haven't had a 1:1 since February. Worth reconnecting."
-
- **Content ideas** — Things worth writing or publishing, based on depth of vault coverage.
- > "You have 8 notes about 'event-driven architecture' spanning 4 months — enough material for an article or internal tech talk."
-
- ### Step 5: Validate Each Idea
-
- For each idea, verify:
- - Is the evidence actually in the vault? (cite specific notes with quotes)
- - Is this actionable? (what concrete step would the user take?)
- - Is this non-obvious? (would the user have thought of this on their own?)
-
- Discard ideas that fail any of these checks.
-
- ### Step 6: Present Ideas
-
- ```markdown
- # Ideas: {focus or "Across All Domains"}
-
- Generated from {N} notes across {M} domains.
-
- ---
-
- ### Idea 1: {Title}
-
- **Type**: {synthesis / gap / connection / amplification / challenge / people / content}
-
- **The insight**: {2-3 sentences explaining the idea}
-
- **Evidence**:
- - [[Note A]]: "{relevant quote}"
- - [[Note B]]: "{relevant quote}"
- - [[Note C]]: "{relevant quote}"
-
- **Concrete next step**: {exactly what to do — write a note, schedule a meeting, create a project, run /trace on a topic}
-
- **Impact**: {why this matters — what it could unlock or prevent}
-
- ---
-
- ### Idea 2: {Title}
- ...
-
- ---
-
- ## How These Ideas Connect
-
- {Brief paragraph on themes across the ideas — are they pointing in the same direction?}
-
- ## Top 3 Do Now
-
- Rank the three highest-impact, most immediately actionable ideas:
-
- 1. **{Idea title}** — {one-sentence reason this is high-priority}
- 2. **{Idea title}** — {reason}
- 3. **{Idea title}** — {reason}
-
- ## Suggested Follow-ups
-
- - Run `/trace topic="X"` to explore Idea 1 further
- - Run `/connect from="A" to="B"` to validate Idea 3
- - Run `/cook` to compile or refresh `entities/`, `concepts/`, `comparisons/`, or `queries/` pages when an idea exposes a knowledge gap
- - Add or extend a page under `queries/` for Idea 5 if it is question-shaped knowledge worth keeping
- ```
-
- ### Step 7: Save (Optional)
-
- At the end of your ideas, ask:
-
- > "Would you like me to save this as a note?"
-
- If the user confirms, save with frontmatter:
-
- ```yaml
- ---
- title: "Ideas: {focus}"
- date: <today>
- tags: [ideas, proactive]
- ---
- ```
-
- Use `obsidian create` to save. Ask the user where they'd like it saved.
-
- ## Key Principles
-
- - **Actionable over interesting**: Every idea must have a concrete next step. "Interesting observation" is not an idea.
- - **Evidence-based**: Every idea must cite 2+ vault notes. No general-knowledge ideas.
- - **Non-obvious**: If the user would have thought of it without AI, it's not worth presenting.
- - **Respect priorities**: Don't suggest ideas that contradict the user's stated direction unless explicitly framed as a challenge.
- - **Quality over quantity**: 3 strong ideas beat 10 weak ones. Filter aggressively.