@ztffn/presentation-generator-plugin 1.3.9 → 1.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,11 @@
1
+ # Deferred Fixes
2
+
3
+ Issues that require architectural changes and should be addressed in targeted, focused fixes rather than prompt-level patches.
4
+
5
+ ---
6
+
7
+ ## Issue 3 — `topic` badge truncation ✓ resolved
8
+
9
+ **Was:** Badge clips at ~180px wide when `topic` strings are long.
10
+
11
+ **Resolution:** The root cause was slide titles being written at full descriptive length (10–15 words) rather than as sharp 6–10 word claims. Fixed by adding slide title length rules to `slide-content/SKILL.md` (narrative agent) and reinforcing the hard limit in the style agent's label rewriting rules. Short titles produce short `topic` strings naturally — no schema change or UI change required.
@@ -6,6 +6,7 @@ skills:
6
6
  - presentation-generator:frameworks
7
7
  - presentation-generator:slide-content
8
8
  - presentation-generator:graph-topology
9
+ - presentation-generator:outline-format
9
10
  ---
10
11
 
11
12
  # Narrative Design Agent
@@ -20,8 +21,9 @@ Read the skill files at the explicit paths provided by the orchestrator in your
20
21
  - `{PLUGIN_ROOT}/skills/frameworks/SKILL.md`
21
22
  - `{PLUGIN_ROOT}/skills/slide-content/SKILL.md`
22
23
  - `{PLUGIN_ROOT}/skills/graph-topology/SKILL.md`
24
+ - `{PLUGIN_ROOT}/skills/outline-format/SKILL.md`
23
25
 
24
- These files define framework selection logic, slide content quality rules, visual intent annotations, and topology patterns. You must follow them exactly.
26
+ These files define framework selection logic, slide content quality rules, visual intent annotations, topology patterns, and the exact markdown format the Phase 5 parser requires. You must follow them exactly — the outline format in particular is a hard contract with the parser.
25
27
 
26
28
  ### Step 2 — Read the content brief
27
29
 
@@ -44,23 +46,36 @@ Return a summary: framework selected, spine node count, drill-down node count, a
44
46
 
45
47
  ## Output Format
46
48
 
47
- Write a markdown file to `_temp/presentation-outline.md` with these sections:
49
+ Write a markdown file to `_temp/presentation-outline.md`. The format is a hard contract with the Phase 5 parser — follow `outline-format/SKILL.md` exactly. Key rules:
50
+
51
+ - Title line: `# Presentation Outline: [Deck Title]`
52
+ - SPINE items: `N. **Title**` (number, period, bold)
53
+ - DRILL-DOWNS parent headers: `### Under Spine N` (H3, exact wording)
54
+ - DRILL-DOWNS child entries: `- **N.M** Title: description` (numeric minor index, never letters)
55
+ - CONTENT headers for spine: `### SPINE N: Title`
56
+ - CONTENT headers for drill-downs: `### SPINE N.M: Title`
48
57
 
49
58
  ```markdown
50
- # Presentation Outline
59
+ # Presentation Outline: [Deck Title]
51
60
 
52
61
  ## Framework
53
62
  [Which framework was selected and why]
54
63
 
55
64
  ## SPINE
56
- [Ordered list of main-flow slides with title and key message]
65
+
66
+ 1. **[Spine 1 Title]**
67
+ 2. **[Spine 2 Title]**
68
+ 3. **[Spine 3 Title]**
57
69
 
58
70
  ## DRILL-DOWNS
59
- [Which spine nodes have drill-downs, with titles and content summaries]
71
+
72
+ ### Under Spine 2
73
+ - **2.1** [Child Title]: [short description]
74
+ - **2.2** [Child Title]: [short description]
60
75
 
61
76
  ## CONTENT PER SLIDE
62
77
 
63
- ### [Slide Title] (spine | drill-down under [parent])
78
+ ### SPINE 1: [Spine 1 Title]
64
79
  **Key message:** [one sentence]
65
80
  **Content:**
66
81
  [Detailed bullet points and text for this slide]
@@ -68,6 +83,15 @@ Write a markdown file to `_temp/presentation-outline.md` with these sections:
68
83
  [What the presenter should say that isn't on screen]
69
84
  **Transition to next:**
70
85
  [How this slide connects to the next]
86
+
87
+ ### SPINE 2.1: [Child Title]
88
+ **Key message:** [one sentence]
89
+ **Content:**
90
+ [Detail]
91
+ **Speaker notes:**
92
+ [Notes]
93
+ **Transition to next:**
94
+ [Transition]
71
95
  ```
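Because format drift silently drops nodes, it can help to sanity-check the written outline before Phase 5 runs. A minimal sketch of such a pre-flight check (the patterns mirror the key rules above; `outline-format/SKILL.md` and the parser itself remain the source of truth):

```python
import re

def check_outline(text: str) -> list[str]:
    """Return contract violations found in an outline draft (empty list = looks parseable)."""
    problems = []
    # Title line: colon and deck title on the same line
    if not re.search(r"^# Presentation Outline: .+$", text, re.MULTILINE):
        problems.append("missing '# Presentation Outline: [Deck Title]' title line")
    # All four sections must be present as exact H2 headers
    for header in ("## Framework", "## SPINE", "## DRILL-DOWNS", "## CONTENT PER SLIDE"):
        if not re.search(rf"^{re.escape(header)}\s*$", text, re.MULTILINE):
            problems.append(f"missing section header '{header}'")
    # Letter minor indices (e.g. **2.A**) are a known drift the canonical pattern rejects
    if re.search(r"\*\*\d+\.[A-Za-z]\*\*", text):
        problems.append("letter minor index found; use numeric **N.M**")
    return problems
```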
72
96
 
73
97
  ## Rules
@@ -44,9 +44,9 @@ Walk through every node and assign a visual intent. Use the outline's `**Slide t
44
44
  | Slide after 2+ consecutive bullet-heavy slides | `breathing-room` | Centered, background image, minimal text |
45
45
  | Drill-down with detailed specs or numbers | `workhorse` or `evidence` | Depends on whether data is comparative or sequential |
46
46
 
47
- ### Step 4 — Rewrite content for slide readability
47
+ ### Step 4 — Rewrite content and labels for slide readability
48
48
 
49
- The script extracted outline content verbatim. Outline content is written for reading, not presenting. For each slide, rewrite `data.content` to work on screen:
49
+ The script extracted outline content verbatim. Outline content is written for reading, not presenting. For each slide, rewrite `data.content` and related fields to work on screen.
50
50
 
51
51
  **Content rules:**
52
52
  - Lead with a `## Heading` that states the insight or claim (not a topic label)
@@ -58,14 +58,45 @@ The script extracted outline content verbatim. Outline content is written for re
58
58
  - For two-column slides, ensure the content has a `---` delimiter separating left and right columns.
59
59
  - For `impact` slides, content should be 1-2 sentences maximum. Let whitespace carry the message.
60
60
  - For `bookend` slides, content is a single compelling sentence or tagline.
61
+ - **Closing lines:** A trailing paragraph after a bullet list must be absorbed as the final bullet, formatted as an explicit callout (`→ Closing thought` or `**Takeaway:** ...`), or removed if the bullets already make the point. Never leave a standalone sentence after a list.
62
+
63
+ **Label rewriting rules — two patterns based on slide type:**
64
+
65
+ `data.label` is the largest text on the slide. **Target 6–10 words. Hard limit: 12.** A label that wraps to two lines hasn't been edited yet. Cut every word that doesn't change the meaning if removed.
66
+
67
+ | Slide type | `data.label` | Longer text |
68
+ |---|---|---|
69
+ | Concept slide (problem, argument, CTA) | Short assertive claim, 6–10 words | Goes in `data.subtitle` |
70
+ | Product/brand slide (named technology or system) | Product or brand name only | Goes in `content` as H1/H2 |
71
+
72
+ For concept slides — `label` states the claim; detail and data go in `subtitle`:
73
+ ```
74
+ Weak: label: "Annual output: 1,680 tonnes wax, 1,365 tonnes biochar, hydrogen, -10,500 CO₂-eq"
75
+ Strong: label: "Three products. Carbon-negative by design."
76
+ subtitle: "1,680t wax · 1,365t biochar · hydrogen · −10,500 CO₂-eq/yr"
77
+ ```
78
+
79
+ For product/brand slides — promote the product name to `label`; move the descriptive headline into `content` as `## H1/H2`:
80
+ ```
81
+ Weak: label: "Ceramic Spiralis geometry solves isothermal operation"
82
+ Strong: label: "Spiralis"
83
+ content: ## Ceramic Spiralis geometry solves isothermal operation
84
+ - **Spiralis geometry:** Dean flow mixing + isothermal control
85
+ - **Modular stacking:** Add capacity without redesigning the system
86
+ ```
87
+ When to apply the product pattern: use it when the slide's central subject is a named product, technology, or system, not for abstract concepts, problems, or CTAs.
88
+
89
+ **Speaker notes:**
90
+ For every node, read the corresponding speaker notes from the outline and write them to `data.notes`. Create the field if it does not already exist. Do NOT rely on Phase 5 to have pre-populated this field — always source notes from the outline directly.
61
91
 
62
92
  **Do NOT change:**
63
- - `data.label` — the slide title in the graph editor
64
- - `data.notes` — speaker notes are already correct from the outline
65
93
  - `data.transitionToNext` — transition cues are already correct
66
94
 
67
95
  ### Step 5 — Apply visual fields
68
96
 
97
+ **Branding — apply to every node:**
98
+ Set `brandingText` on every node in the deck, not just bookends. Infer the correct value from the deck title or topic (e.g., the company domain or product name). Also set `showBranding: true` on bookend slides; on other slides `showBranding` can be omitted (it defaults to off), but `brandingText` must always be present so the presenter has a correct fallback.
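As a concrete sketch, a bookend node's data would carry both fields (values here are illustrative, drawn from the Acme example deck, not a real output):

```json
{
  "data": {
    "label": "Acme Platform Pitch",
    "brandingText": "acme.com",
    "showBranding": true
  }
}
```

A non-bookend node would carry only `brandingText`, leaving `showBranding` unset.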
99
+
69
100
  For each slide, set the appropriate data fields based on visual intent:
70
101
 
71
102
  #### Bookend (cover / CTA)
@@ -185,8 +216,8 @@ Your completion message must include:
185
216
  - **NEVER modify** `id`, `type` (on the node wrapper), `position`, `style`, or `measured` on any node
186
217
  - **NEVER modify** any edge — edges are structurally fixed
187
218
  - **NEVER add or remove** nodes or edges
188
- - **NEVER change** `data.label` this is the slide title shown in the graph editor
189
- - **NEVER change** `data.notes` speaker notes are set by the outline
219
+ - **Rewrite** `data.label` per the headline rules in Step 4: short assertive claim or product name only
220
+ - **Write** `data.notes` from the outline for every node — create the field if absent, never leave it empty
190
221
  - **NEVER change** `data.transitionToNext` — transition cues are set by the outline
191
222
  - Only use field names from the `ALLOWED_DATA_FIELDS` set in the validator
192
223
  - For background images, use Unsplash URLs with `?w=1920&q=80`
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "@ztffn/presentation-generator-plugin",
3
- "version": "1.3.9",
3
+ "version": "1.4.0",
4
4
  "description": "Claude Code plugin for generating graph-based presentations",
5
5
  "bin": {
6
6
  "presentation-generator-plugin": "bin/index.js"
@@ -38,12 +38,28 @@ def parse_spine(text):
38
38
 
39
39
 
40
40
  def parse_drilldowns(text):
41
- """Parse DRILL-DOWNS into {parent_spine_num: [(child_num, child_subnum, title), ...]}."""
41
+ """Parse DRILL-DOWNS into {parent_spine_num: [(child_num, child_subnum, title), ...]}.
42
+
43
+ Canonical format:
44
+ ### Under Spine N
45
+ - **N.M** Title: description
46
+
47
+ Also accepts common LLM format variants as a fallback:
48
+ **Under Spine Node N (description):**
49
+ - Drill-down A: Title
50
+ - **N.A**: Title (letter suffix)
51
+ """
42
52
  drilldowns = {}
43
53
  current_parent = None
54
+ _letter = {chr(ord('A') + i): i + 1 for i in range(26)}
44
55
 
45
56
  for line in text.splitlines():
46
- parent_match = re.match(r"###\s+Under Spine (\d+)", line)
57
+ # Parent header — canonical: "### Under Spine N"
58
+ # Fallback: "**Under Spine Node N ..." or "**Under Spine N"
59
+ parent_match = (
60
+ re.match(r"###\s+Under Spine\s+(\d+)", line) or
61
+ re.match(r"\*\*Under Spine(?:\s+Node)?\s+(\d+)", line)
62
+ )
47
63
  if parent_match:
48
64
  current_parent = int(parent_match.group(1))
49
65
  if current_parent not in drilldowns:
@@ -51,22 +67,71 @@ def parse_drilldowns(text):
51
67
  continue
52
68
 
53
69
  if current_parent is not None:
70
+ # Canonical: "- **N.M** Title: description"
54
71
  child_match = re.match(
55
- r"-\s+\*\*(\d+)\.(\d+)\*\*\s+(.+?):\s*(.*)", line
72
+ r"-\s+\*\*(\d+)\.(\d+)\*\*\s+(.+?)(?::\s.*)?$", line
56
73
  )
57
74
  if child_match:
58
75
  major = int(child_match.group(1))
59
76
  minor = int(child_match.group(2))
60
- title = child_match.group(3).strip()
77
+ title = child_match.group(3).strip().rstrip("*")
78
+ drilldowns[current_parent].append((major, minor, title))
79
+ continue
80
+
81
+ # Fallback A: "- **N.A** Title:" or "- **N.A**: Title" (letter suffix)
82
+ bold_letter = re.match(r"-\s+\*\*(\d+)\.([A-Za-z])\*\*:?\s+(.+)", line)
83
+ if bold_letter:
84
+ major = int(bold_letter.group(1))
85
+ minor = _letter.get(bold_letter.group(2).upper(), 1)
86
+ title = bold_letter.group(3).strip().rstrip("*:")
61
87
  drilldowns[current_parent].append((major, minor, title))
88
+ continue
89
+
90
+ # Fallback B: "- Drill-down A: Title"
91
+ dd_letter = re.match(r"-\s+Drill-down\s+([A-Za-z]):\s+(.+)", line)
92
+ if dd_letter:
93
+ minor = _letter.get(dd_letter.group(1).upper(), 1)
94
+ title = dd_letter.group(2).strip()
95
+ drilldowns[current_parent].append((current_parent, minor, title))
62
96
 
63
97
  return drilldowns
64
98
 
65
99
 
66
100
  def parse_content_blocks(text):
67
- """Parse CONTENT PER SLIDE into a dict keyed by slide identifier (e.g. '1', '2.1')."""
101
+ """Parse CONTENT PER SLIDE into a dict keyed by slide identifier (e.g. '1', '2.1').
102
+
103
+ Canonical header: ### SPINE N: Title
104
+ ### SPINE N.M: Title
105
+
106
+ Fallback header: ### Slide N: Title (letter or number suffix accepted)
107
+ ### Slide N.A: Title
108
+
109
+ Note: Phase 6 (style agent) re-reads the outline and overwrites all content fields
110
+ anyway, so content parsing here is best-effort pre-population only.
111
+ """
112
+ _letter = {chr(ord('A') + i): i + 1 for i in range(26)}
113
+
114
+ def _normalize_slide_header(m):
115
+ """Map '### Slide N[.X]: Title' → '### SPINE N[.M]: Title'."""
116
+ major = m.group(1)
117
+ raw_suffix = m.group(2)
118
+ title = m.group(3).strip()
119
+ if raw_suffix:
120
+ minor = _letter.get(raw_suffix.upper(), raw_suffix) if raw_suffix.isalpha() else raw_suffix
121
+ slide_key = f"{major}.{minor}"
122
+ else:
123
+ slide_key = major
124
+ return f"### SPINE {slide_key}: {title}"
125
+
126
+ # Normalize fallback "### Slide N[.X]: Title (...)" → canonical before splitting
127
+ text = re.sub(
128
+ r"^### Slide (\d+)(?:\.([A-Za-z0-9]+))?[:\s]+(.+?)(?:\s*\(.*\))?\s*$",
129
+ _normalize_slide_header,
130
+ text,
131
+ flags=re.MULTILINE,
132
+ )
133
+
68
134
  blocks = {}
69
- # Split on ### SPINE headers
70
135
  parts = re.split(r"(?=^### SPINE )", text, flags=re.MULTILINE)
71
136
 
72
137
  for part in parts:
@@ -0,0 +1,267 @@
1
+ ---
2
+ name: outline-format
3
+ description: >
4
+ Exact markdown format specification for _temp/presentation-outline.md.
5
+ The outline_to_graph.py script parses this file deterministically — any
6
+ deviation causes nodes to be silently dropped from the output JSON.
7
+ user-invocable: false
8
+ ---
9
+
10
+ # Outline Format Specification
11
+
12
+ Defines the exact markdown format that `outline_to_graph.py` (Phase 5) expects.
13
+ Follow it character-for-character. Format drift causes nodes to be silently dropped.
14
+
15
+ ---
16
+
17
+ ## File structure
18
+
19
+ Four sections in this order:
20
+
21
+ ```
22
+ # Presentation Outline: [Deck Title]
23
+
24
+ ## Framework
25
+
26
+ ## SPINE
27
+
28
+ ## DRILL-DOWNS
29
+
30
+ ## CONTENT PER SLIDE
31
+ ```
32
+
33
+ ---
34
+
35
+ ## Title line
36
+
37
+ ```
38
+ # Presentation Outline: The Exact Deck Title
39
+ ```
40
+
41
+ The colon and the deck title must appear on the **same line**. The script errors and exits if this line is missing.
42
+
43
+ ---
44
+
45
+ ## SPINE section
46
+
47
+ Numbered list with bold titles:
48
+
49
+ ```markdown
50
+ ## SPINE
51
+
52
+ 1. **Cover Slide Title**
53
+ 2. **The Problem We Solve**
54
+ 3. **Our Solution**
55
+ 4. **Evidence**
56
+ 5. **Call to Action**
57
+ ```
58
+
59
+ Rules:
60
+ - Format per item: `N. **Title**` — number, period, space, `**`, title, `**`
61
+ - No trailing colon, asterisk, or other punctuation inside the bold markers
62
+ - Order determines left-to-right position in the graph
63
+
64
+ ---
65
+
66
+ ## DRILL-DOWNS section
67
+
68
+ Group children under their parent spine node using H3 headers:
69
+
70
+ ```markdown
71
+ ## DRILL-DOWNS
72
+
73
+ ### Under Spine 3
74
+ - **3.1** Integration Detail: How the system connects to existing infrastructure
75
+ - **3.2** Change Management: What changes for the people using it
76
+
77
+ ### Under Spine 4
78
+ - **4.1** Case Study: Full methodology and 12-month results
79
+ - **4.2** ROI Model: Payback timeline for the audience's context
80
+ ```
81
+
82
+ Rules:
83
+ - Parent header: exactly `### Under Spine N` — H3, `Under Spine`, space, integer. No additional text.
84
+ - Child entry: `- **N.M** Title: description` where N is the parent spine number and M starts at 1
85
+ - Use **numeric** minor indices only (1, 2, 3…). Do NOT use letter suffixes (A, B, C).
86
+ - The colon after the title is required; the description after it is optional but recommended
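The canonical child-entry shape can be exercised directly. A sketch using the same regex shape the Phase 5 parser applies (note the description tail is optional):

```python
import re

# Canonical child entry: "- **N.M** Title: description" (description optional)
CHILD = re.compile(r"-\s+\*\*(\d+)\.(\d+)\*\*\s+(.+?)(?::\s.*)?$")

m = CHILD.match("- **3.1** Integration Detail: How the system connects")
assert m and (int(m.group(1)), int(m.group(2))) == (3, 1)
assert m.group(3).strip() == "Integration Detail"

# The description is optional; bare titles still parse
m = CHILD.match("- **3.2** Change Management")
assert m and m.group(3).strip() == "Change Management"

# Letter suffixes do NOT match the canonical pattern
assert CHILD.match("- **3.A** Wrong Format") is None
```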
87
+
88
+ ---
89
+
90
+ ## CONTENT PER SLIDE section
91
+
92
+ One H3 block per slide for every spine and drill-down node declared above:
93
+
94
+ ```markdown
95
+ ## CONTENT PER SLIDE
96
+
97
+ ### SPINE 1: Cover Slide Title
98
+ **Key message:** One sentence stating what the audience should remember.
99
+ **Content:**
100
+ The detailed slide content. Use bullet points and markdown formatting.
101
+ - Point one with a specific number or named entity
102
+ - Point two — consequence or contrast
103
+ **Speaker notes:**
104
+ What the presenter says that is not on screen. Talking points, anticipated questions, timing.
105
+ **Transition to next:**
106
+ One sentence bridging this slide to the next.
107
+
108
+ ### SPINE 2: The Problem We Solve
109
+ **Key message:** ...
110
+ **Content:**
111
+ ...
112
+ **Speaker notes:**
113
+ ...
114
+ **Transition to next:**
115
+ ...
116
+
117
+ ### SPINE 3.1: Integration Detail
118
+ **Key message:** Integration is a one-day process, not a project.
119
+ **Content:**
120
+ ...
121
+ **Speaker notes:**
122
+ ...
123
+ **Transition to next:**
124
+ ...
125
+ ```
126
+
127
+ Rules:
128
+ - Spine header: `### SPINE N: Title`
129
+ - Drill-down header: `### SPINE N.M: Title` — same N.M notation as DRILL-DOWNS section
130
+ - Every node declared in SPINE and DRILL-DOWNS must have a corresponding content block
131
+ - Field markers must be exactly: `**Key message:**`, `**Content:**`, `**Speaker notes:**`, `**Transition to next:**`
132
+ - Do NOT add parenthetical context like `(drill-down under Slide 3)` after the title
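A quick mechanical check for the field markers in a content block (a sketch; marker strings must match character-for-character, so plain substring tests suffice):

```python
# Exact marker strings the parser expects in every content block
REQUIRED_MARKERS = (
    "**Key message:**",
    "**Content:**",
    "**Speaker notes:**",
    "**Transition to next:**",
)

def block_missing_markers(block: str) -> list[str]:
    """Return the markers absent from one '### SPINE ...' content block."""
    return [m for m in REQUIRED_MARKERS if m not in block]
```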
133
+
134
+ ---
135
+
136
+ ## Complete minimal example
137
+
138
+ ```markdown
139
+ # Presentation Outline: Acme Platform Pitch
140
+
141
+ ## Framework
142
+ Problem-Solution-Evidence. Chosen because the audience is unfamiliar with the product
143
+ and needs to understand the problem before evaluating the solution.
144
+
145
+ ## SPINE
146
+
147
+ 1. **Introduction**
148
+ 2. **The Problem**
149
+ 3. **Our Approach**
150
+ 4. **Results**
151
+ 5. **Next Steps**
152
+
153
+ ## DRILL-DOWNS
154
+
155
+ ### Under Spine 2
156
+ - **2.1** Root Cause: Why existing tools fail at scale
157
+
158
+ ### Under Spine 3
159
+ - **3.1** How It Works: Technical walkthrough for engineering audiences
160
+ - **3.2** Integrations: Connects to Salesforce, SAP, and custom APIs
161
+
162
+ ## CONTENT PER SLIDE
163
+
164
+ ### SPINE 1: Introduction
165
+ **Key message:** Acme eliminates the manual reconciliation layer that costs mid-market finance teams 11 hours a week.
166
+ **Content:**
167
+ ## One integration. Every system in sync.
168
+
169
+ - Finance teams spend 11 hrs/week reconciling three systems that can't agree
170
+ - Acme replaces the reconciliation step with a live shared record
171
+ - Live in one day — no migration, no rip-and-replace
172
+ **Speaker notes:**
173
+ Open with the cost before introducing the product. Let the number land. If anyone nods, they're the champion.
174
+ **Transition to next:**
175
+ Let's look at exactly where those 11 hours go.
176
+
177
+ ### SPINE 2: The Problem
178
+ **Key message:** Three siloed systems mean no single source of truth — ever.
179
+ **Content:**
180
+ ## The reconciliation trap
181
+
182
+ - ERP, CRM, and billing each hold a different version of reality
183
+ - Finance exports all three every Monday morning and manually merges them
184
+ - One bad merge cascades into wrong forecasts, wrong invoices, wrong decisions
185
+ **Speaker notes:**
186
+ Pause after "wrong decisions" — let the audience feel the compounding failure. If someone raises a specific system, note it and offer to address it in the drill-down below.
187
+ **Transition to next:**
188
+ The root cause isn't the systems — it's the gap between them.
189
+
190
+ ### SPINE 2.1: Root Cause
191
+ **Key message:** Integration middleware solves point-to-point connections but not semantic alignment.
192
+ **Content:**
193
+ ## Why middleware isn't enough
194
+
195
+ - Middleware syncs data but doesn't resolve conflicting field definitions
196
+ - "Customer" means different things in ERP vs CRM — no sync resolves that
197
+ - Result: data arrives but still requires human interpretation on the other end
198
+ **Speaker notes:**
199
+ This is a technical drill-down — adjust depth based on audience. For non-technical audiences, focus on the "different definitions" point and skip the middleware detail.
200
+ **Transition to next:**
201
+ Acme solves the semantic layer, not just the transport layer.
202
+
203
+ ### SPINE 3: Our Approach
204
+ **Key message:** A shared semantic layer makes all three systems agree on meaning, not just data.
205
+ **Content:**
206
+ ## One record. Three systems. Zero reconciliation.
207
+
208
+ - Acme sits between your systems and maintains a canonical data model
209
+ - Each system reads and writes to its own view — Acme translates in real time
210
+ - No migration required: connect in one day, live by end of week
211
+ **Speaker notes:**
212
+ The "canonical data model" framing resonates with technical buyers. For business buyers, use "single source of truth." Have the integration diagram ready to show if they ask how it connects.
213
+ **Transition to next:**
214
+ Here's what it looks like in practice for a customer similar to yours.
215
+
216
+ ### SPINE 3.1: How It Works
217
+ **Key message:** Three API calls replace an 11-step manual process.
218
+ **Content:**
219
+ ## From 11 steps to 3 API calls
220
+
221
+ - Step 1: Connect your systems via pre-built connectors (30 minutes)
222
+ - Step 2: Map your field definitions once using the visual schema editor
223
+ - Step 3: Acme maintains alignment in real time — no ongoing maintenance
224
+ **Speaker notes:**
225
+ If the audience is technical, offer a live demo. If not, skip this slide and go straight to Results.
226
+ **Transition to next:**
227
+ Let's look at what changed for a customer who went live six months ago.
228
+
229
+ ### SPINE 3.2: Integrations
230
+ **Key message:** Pre-built connectors for the 12 systems mid-market companies actually use.
231
+ **Content:**
232
+ ## Connect what you have — not what you planned to have
233
+
234
+ - Salesforce, SAP, NetSuite, HubSpot, QuickBooks, and 7 more
235
+ - Custom API connector for any REST endpoint
236
+ - No professional services required for standard connectors
237
+ **Speaker notes:**
238
+ List connectors only if the audience asks. Lead with "we probably already connect to your stack" rather than listing all 12 upfront.
239
+ **Transition to next:**
240
+ These integrations are what made the Oslo deployment possible in one day.
241
+
242
+ ### SPINE 4: Results
243
+ **Key message:** Customers recover the 11-hour weekly loss in the first month.
244
+ **Content:**
245
+ ## 11 hours → 0 hours in 30 days
246
+
247
+ - Oslo Pilot (6-month data): reconciliation time dropped from 11 hrs/week to zero
248
+ - Forecast accuracy improved from 74% to 96% within the first quarter
249
+ - Zero incidents of invoice errors in the post-deployment period (vs. 3/month prior)
250
+ **Speaker notes:**
251
+ These are real numbers from Oslo — use them confidently. If they ask for more detail, the case study drill-down has the full methodology. The ROI model drill-down has the payback math for their specific context.
252
+ **Transition to next:**
253
+ This is what we'd like to replicate for your team in a structured 90-day pilot.
254
+
255
+ ### SPINE 5: Next Steps
256
+ **Key message:** A 90-day pilot delivers a live system and a verified business case.
257
+ **Content:**
258
+ ## From this meeting to live in 90 days
259
+
260
+ - Day 1–5: Connect your systems, map your schema
261
+ - Day 6–30: Run Acme in parallel — verify accuracy before switching
262
+ - Day 31–90: Full cutover, team training, and business case documentation
263
+ **Speaker notes:**
264
+ Make the ask specific: "Can we agree to the 90-day pilot today?" Have the pilot agreement ready. If they want to think about it, offer a follow-up with their IT lead present.
265
+ **Transition to next:**
266
+ No transition — this is the closing slide.
267
+ ```
@@ -85,13 +85,16 @@ Read these skill files first:
85
85
  - {PLUGIN_ROOT}/skills/frameworks/SKILL.md
86
86
  - {PLUGIN_ROOT}/skills/slide-content/SKILL.md
87
87
  - {PLUGIN_ROOT}/skills/graph-topology/SKILL.md
88
+ - {PLUGIN_ROOT}/skills/outline-format/SKILL.md
88
89
 
89
90
  Read the content brief at _temp/presentation-content-brief.json
90
91
 
91
92
  Audience: {AUDIENCE}
92
93
  Goal: {GOAL}
93
94
 
94
- Write the outline as MARKDOWN to _temp/presentation-outline.md
95
+ Pacing constraint: for any run of 3 or more consecutive workhorse or evidence slides, include at least one breathing-room or chapter-opener beat — even in short decks. Do not compress this out to save slide count.
96
+
97
+ Write the outline as MARKDOWN to _temp/presentation-outline.md following the format in outline-format/SKILL.md exactly — the Phase 5 parser is strict about header syntax.
95
98
  Report back: framework chosen, spine count, drill-down count.
96
99
  ```
97
100
 
@@ -128,7 +131,41 @@ Run this bash command:
128
131
  python3 "{PLUGIN_ROOT}/scripts/outline_to_graph.py" _temp/presentation-outline.md -o "presentations/{SLUG}/{SLUG}.json"
129
132
  ```
130
133
 
131
- If exit code is 0, move to Phase 6. If non-zero, show the errors.
134
+ If exit code is non-zero, show the errors and stop.
135
+
136
+ After a successful run, verify node count against the outline:
137
+
138
+ ```bash
139
+ python3 - <<'EOF'
140
+ import json, re, sys
141
+
142
+ with open("_temp/presentation-outline.md") as f:
143
+ outline = f.read()
144
+
145
+ # Scope counts to their respective sections to avoid false matches
146
+ spine_m = re.search(r"^## SPINE\b", outline, re.MULTILINE)
147
+ dd_m = re.search(r"^## DRILL-DOWNS\b", outline, re.MULTILINE)
148
+ cont_m = re.search(r"^## CONTENT PER SLIDE\b", outline, re.MULTILINE)
149
+
150
+ spine_text = outline[spine_m.end():dd_m.start()] if spine_m and dd_m else ""
151
+ dd_text = outline[dd_m.end():cont_m.start()] if dd_m and cont_m else ""
152
+
153
+ spine_count = len(re.findall(r"^\d+\.\s+\*\*", spine_text, re.MULTILINE))
154
+ drill_count = len(re.findall(r"^-\s+\*\*\d+\.\d+\*\*", dd_text, re.MULTILINE))
155
+ expected = spine_count + drill_count
156
+
157
+ with open("presentations/{SLUG}/{SLUG}.json") as f:
158
+ data = json.load(f)
159
+ actual = len(data["nodes"])
160
+
161
+ print(f"Outline declares {expected} nodes ({spine_count} spine + {drill_count} drill-down), JSON has {actual}")
162
+ if actual < expected:
163
+ print(f"MISMATCH: {expected - actual} node(s) missing — inspect DRILL-DOWNS headers and child entry format in the outline")
164
+ sys.exit(1)
165
+ EOF
166
+ ```
167
+
168
+ If this count check fails: read the DRILL-DOWNS section of `_temp/presentation-outline.md`, identify format mismatches against `outline-format/SKILL.md`, and do NOT proceed to Phase 6 until the outline is corrected and Phase 5 re-run.
132
169
 
133
170
  ---
134
171
 
@@ -153,6 +190,8 @@ Read the outline at _temp/presentation-outline.md for context.
153
190
  Apply visual treatments to each slide. Edit only data fields using the Edit tool.
154
191
  Never modify node wrappers (id, type, position, style, measured), edges, or positions.
155
192
 
193
+ Speaker notes: for every node, read the corresponding speaker notes from the outline and write them to data.notes, creating the field if it does not already exist. Do not rely on Phase 5 to have pre-populated this field — always write notes from the outline directly.
194
+
156
195
  After all edits, run the validator:
157
196
  python3 "{PLUGIN_ROOT}/scripts/validate_draft.py" "presentations/{SLUG}/{SLUG}.json"
158
197
 
@@ -62,6 +62,27 @@ The most common failure: slides containing everything the presenter plans to say
62
62
 
63
63
  Use headlines on the spine (main flow). Titles are acceptable on drill-down detail slides where the parent headline provides context.
64
64
 
65
+ ## Slide Title Length
66
+
67
+ **6–10 words is the target. Never exceed 12.**
68
+
69
+ The slide title is the largest text on the screen. Long titles wrap, shrink, and lose impact. A title that needs two lines to fit is a title that hasn't been sharpened yet.
70
+
71
+ Rules:
72
+ - State the claim, not the description of the claim
73
+ - Cut every word that doesn't change the meaning if removed
74
+ - Prefer a verb: "Spiralis enables compact FT operation" over "The Role of Spiralis in Enabling Compact FT Operation"
75
+ - Numbers and specifics are allowed: "Q3 revenue up 18% — cost held flat" is 7 words and stronger than any 15-word version
76
+
77
+ | Too long (rewrite this) | Sharp (target) |
78
+ |---|---|
79
+ | "The Spiralis Ceramic Reactor Makes Compact, Isothermal FT Operation Physically Achievable" | "Spiralis makes compact, isothermal FT achievable" |
80
+ | "Revenue grew 40% quarter-on-quarter after the pricing change in Q2 of last year" | "Pricing change drove 40% QoQ growth" |
81
+ | "Why Existing Integration Middleware Tools Fail to Solve the Semantic Alignment Problem" | "Middleware moves data — it doesn't resolve meaning" |
82
+ | "Annual Output: 1,680 Tonnes Wax, 1,365 Tonnes Biochar, Hydrogen, and Negative CO₂" | "Three products. Carbon-negative by design." |
83
+
84
+ If you find yourself writing a title longer than 10 words, stop. Name the single thing the slide proves, then state it in the fewest words that are still specific.
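The hard limit can be checked mechanically. A sketch (word counting here is naive whitespace splitting; tokens with no letters or digits, like a bare dash, are not counted as words):

```python
def title_word_count(title: str) -> int:
    # A token counts as a word only if it contains at least one letter or digit
    return sum(1 for tok in title.split() if any(ch.isalnum() for ch in tok))

def title_ok(title: str, hard_limit: int = 12) -> bool:
    return title_word_count(title) <= hard_limit

assert title_ok("Spiralis makes compact, isothermal FT achievable")  # 6 words
assert not title_ok(
    "The Spiralis Ceramic Reactor Makes Compact, Isothermal FT "
    "Operation Physically Achievable and Commercially Viable at Scale"
)
```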
85
+
65
86
  ## Speaker Notes
66
87
 
67
88
  Notes should add context the presenter says aloud, not repeat what's on screen.
@@ -220,11 +241,11 @@ The narrative agent should annotate each slide in the outline with a `**Visual i
220
241
  | `bookend` | Cover or CTA, branded, symmetrical with its pair |
221
242
  | `breathing-room` | Minimal text over image or color, resets attention after dense slides |
222
243
 
223
- Example annotation in an outline:
244
+ Example annotation in an outline (use `### SPINE N:` header format — the parser requires it):
224
245
 
225
246
  ```
226
- Slide 4: "Response time dropped 60% — infrastructure cost followed"
227
- Key message: The platform delivers measurable speed and cost gains.
247
+ ### SPINE 4: Response time dropped 60% — infrastructure cost followed
248
+ **Key message:** The platform delivers measurable speed and cost gains.
228
249
  **Visual intent:** evidence
229
250
  ```
230
251
 
@@ -79,6 +79,11 @@ Set `showBranding: false` for immersive slides (R3F, full-bleed video).
79
79
  - Set `backgroundImageOverlay: true` and `lightText: true` when text appears over the image
80
80
  - Use Unsplash URLs with `?w=1920&q=80`
81
81
 
82
+ **Default-image decision rule:** If a slide has `centered: true` and its content is under ~30 words (i.e. a `bookend` or `impact` slide, not a content-heavy workhorse), apply a contextually appropriate background image:
83
+ - Set `backgroundImage`, `backgroundImageFit: "cover"`, `backgroundImageOverlay: true`, `lightText: true`
84
+ - Choose Unsplash keywords from the deck's topic and industry: energy → `solar`, `infrastructure`; finance → `architecture`, `city`; technology → `abstract`, `circuit`; climate → `nature`, `landscape`; health → `light`, `science`
85
+ - Plain white backgrounds on impact and bookend slides are the wrong default — use an image unless the slide explicitly needs a solid colour treatment
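A sketch of the resulting fields on a qualifying `impact` slide (the Unsplash photo ID is a placeholder, not a real URL):

```json
{
  "data": {
    "centered": true,
    "lightText": true,
    "backgroundImage": "https://images.unsplash.com/photo-PLACEHOLDER?w=1920&q=80",
    "backgroundImageFit": "cover",
    "backgroundImageOverlay": true
  }
}
```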
86
+
82
87
  **Background video** — cinematic section openers or demo context:
83
88
  - Set URL to placeholder: `"PLACEHOLDER: [description of needed video]"`
84
89
  - Set `backgroundVideoFit: "cover"`, `backgroundVideoLoop: true`
@@ -95,6 +100,22 @@ Set `showBranding: false` for immersive slides (R3F, full-bleed video).
95
100
  **Full-viewport** (`type: "chart"`) — when the data IS the slide.
96
101
  **Inline** (`[chart:name]` in content) — when data supports a text argument.
97
102
 
103
+ **Chart-trigger signals** — when outline content matches any of these patterns, synthesise a chart rather than leaving data as bullets:
104
+
105
+ | Content signal | Chart type |
106
+ |---|---|
107
+ | 2+ quantities of the same type being compared | `bar` or grouped bar |
108
+ | Named items scored across two options (us vs. them, A vs. B) | `radar` |
109
+ | Values changing across 3+ time periods | `area` or `line` |
110
+ | Capital vs. revenue with a breakeven point | `bar` with threshold line |
111
+ | A single proportion of a whole | Large number callout — **not** a pie chart |
112
+
113
+ When a signal is present: synthesise the `charts` dict from the outline figures, embed `[chart:name]` in the content markdown, and remove the now-redundant raw data from the bullets. Extract the values from the outline text; never leave chart-ready data sitting in bullet form.
114
+
115
+ Chart placement:
116
+ - Data IS the primary message of the slide → `type: "chart"` (full-viewport)
117
+ - Data supports a text argument → `[chart:name]` inline in a `two-column` layout
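For instance, the Results slide in the outline example compares quantities over time, so its raw bullets become an inline chart in the content markdown (the chart name is illustrative; the corresponding entry must exist in the `charts` dict):

```markdown
## 11 hours → 0 hours in 30 days

[chart:reconciliation-hours]

Forecast accuracy: 74% → 96% in the first quarter.
```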
118
+
98
119
  ### R3F Scene Decisions
99
120
 
100
121
  Available scenes: