@fredcallagan/arn-spark 5.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude-plugin/plugin.json +9 -0
- package/.opencode/plugins/arn-spark.js +272 -0
- package/package.json +17 -0
- package/plugins/arn-spark/.claude-plugin/plugin.json +9 -0
- package/plugins/arn-spark/LICENSE +21 -0
- package/plugins/arn-spark/README.md +25 -0
- package/plugins/arn-spark/agents/arn-spark-brand-strategist.md +299 -0
- package/plugins/arn-spark/agents/arn-spark-dev-env-builder.md +228 -0
- package/plugins/arn-spark/agents/arn-spark-doctor.md +92 -0
- package/plugins/arn-spark/agents/arn-spark-forensic-investigator.md +181 -0
- package/plugins/arn-spark/agents/arn-spark-market-researcher.md +232 -0
- package/plugins/arn-spark/agents/arn-spark-marketing-pm.md +225 -0
- package/plugins/arn-spark/agents/arn-spark-persona-architect.md +259 -0
- package/plugins/arn-spark/agents/arn-spark-persona-impersonator.md +183 -0
- package/plugins/arn-spark/agents/arn-spark-product-strategist.md +191 -0
- package/plugins/arn-spark/agents/arn-spark-prototype-builder.md +497 -0
- package/plugins/arn-spark/agents/arn-spark-scaffolder.md +228 -0
- package/plugins/arn-spark/agents/arn-spark-spike-runner.md +209 -0
- package/plugins/arn-spark/agents/arn-spark-style-capture.md +196 -0
- package/plugins/arn-spark/agents/arn-spark-tech-evaluator.md +229 -0
- package/plugins/arn-spark/agents/arn-spark-ui-interactor.md +235 -0
- package/plugins/arn-spark/agents/arn-spark-use-case-writer.md +280 -0
- package/plugins/arn-spark/agents/arn-spark-ux-judge.md +215 -0
- package/plugins/arn-spark/agents/arn-spark-ux-specialist.md +200 -0
- package/plugins/arn-spark/agents/arn-spark-visual-sketcher.md +285 -0
- package/plugins/arn-spark/agents/arn-spark-visual-test-engineer.md +224 -0
- package/plugins/arn-spark/references/copilot-tools.md +62 -0
- package/plugins/arn-spark/skills/arn-brainstorming/SKILL.md +520 -0
- package/plugins/arn-spark/skills/arn-brainstorming/references/add-feature-flow.md +155 -0
- package/plugins/arn-spark/skills/arn-spark-arch-vision/SKILL.md +226 -0
- package/plugins/arn-spark/skills/arn-spark-arch-vision/references/architecture-vision-template.md +153 -0
- package/plugins/arn-spark/skills/arn-spark-arch-vision/references/technology-evaluation-guide.md +86 -0
- package/plugins/arn-spark/skills/arn-spark-clickable-prototype/SKILL.md +471 -0
- package/plugins/arn-spark/skills/arn-spark-clickable-prototype/references/clickable-prototype-criteria.md +65 -0
- package/plugins/arn-spark/skills/arn-spark-clickable-prototype/references/journey-template.md +62 -0
- package/plugins/arn-spark/skills/arn-spark-clickable-prototype/references/review-report-template.md +75 -0
- package/plugins/arn-spark/skills/arn-spark-clickable-prototype/references/showcase-capture-guide.md +213 -0
- package/plugins/arn-spark/skills/arn-spark-clickable-prototype-teams/SKILL.md +642 -0
- package/plugins/arn-spark/skills/arn-spark-clickable-prototype-teams/references/debate-protocol.md +242 -0
- package/plugins/arn-spark/skills/arn-spark-clickable-prototype-teams/references/debate-review-report-template.md +161 -0
- package/plugins/arn-spark/skills/arn-spark-clickable-prototype-teams/references/expert-interaction-review-template.md +152 -0
- package/plugins/arn-spark/skills/arn-spark-concept-review/SKILL.md +350 -0
- package/plugins/arn-spark/skills/arn-spark-concept-review/references/conflict-resolution-protocol.md +145 -0
- package/plugins/arn-spark/skills/arn-spark-concept-review/references/review-report-template.md +185 -0
- package/plugins/arn-spark/skills/arn-spark-dev-setup/SKILL.md +366 -0
- package/plugins/arn-spark/skills/arn-spark-dev-setup/references/dev-setup-checklist.md +84 -0
- package/plugins/arn-spark/skills/arn-spark-dev-setup/references/dev-setup-template.md +205 -0
- package/plugins/arn-spark/skills/arn-spark-discover/SKILL.md +303 -0
- package/plugins/arn-spark/skills/arn-spark-discover/references/competitive-landscape-template.md +87 -0
- package/plugins/arn-spark/skills/arn-spark-discover/references/discovery-questions.md +120 -0
- package/plugins/arn-spark/skills/arn-spark-discover/references/persona-profile-template.md +97 -0
- package/plugins/arn-spark/skills/arn-spark-discover/references/product-concept-template.md +253 -0
- package/plugins/arn-spark/skills/arn-spark-ensure-config/SKILL.md +23 -0
- package/plugins/arn-spark/skills/arn-spark-ensure-config/references/ensure-config.md +388 -0
- package/plugins/arn-spark/skills/arn-spark-ensure-config/references/step-0-fast-path.md +25 -0
- package/plugins/arn-spark/skills/arn-spark-ensure-config/scripts/cache-check.sh +127 -0
- package/plugins/arn-spark/skills/arn-spark-feature-extract/SKILL.md +483 -0
- package/plugins/arn-spark/skills/arn-spark-feature-extract/references/feature-backlog-template.md +176 -0
- package/plugins/arn-spark/skills/arn-spark-feature-extract/references/feature-entry-template.md +209 -0
- package/plugins/arn-spark/skills/arn-spark-help/SKILL.md +149 -0
- package/plugins/arn-spark/skills/arn-spark-help/references/pipeline-map.md +211 -0
- package/plugins/arn-spark/skills/arn-spark-init/SKILL.md +312 -0
- package/plugins/arn-spark/skills/arn-spark-init/references/agent-models-presets/all-opus.md +23 -0
- package/plugins/arn-spark/skills/arn-spark-init/references/agent-models-presets/balanced.md +23 -0
- package/plugins/arn-spark/skills/arn-spark-init/references/bkt-setup.md +55 -0
- package/plugins/arn-spark/skills/arn-spark-init/references/jira-mcp-setup.md +61 -0
- package/plugins/arn-spark/skills/arn-spark-init/references/platform-labels.md +97 -0
- package/plugins/arn-spark/skills/arn-spark-naming/SKILL.md +275 -0
- package/plugins/arn-spark/skills/arn-spark-naming/references/creative-brief-template.md +146 -0
- package/plugins/arn-spark/skills/arn-spark-naming/references/naming-methodology.md +237 -0
- package/plugins/arn-spark/skills/arn-spark-naming/references/naming-report-template.md +122 -0
- package/plugins/arn-spark/skills/arn-spark-naming/references/trademark-databases.md +88 -0
- package/plugins/arn-spark/skills/arn-spark-naming/references/whois-server-map.md +164 -0
- package/plugins/arn-spark/skills/arn-spark-naming/scripts/whois-check.js +502 -0
- package/plugins/arn-spark/skills/arn-spark-naming/scripts/whois-check.py +533 -0
- package/plugins/arn-spark/skills/arn-spark-prototype-lock/SKILL.md +260 -0
- package/plugins/arn-spark/skills/arn-spark-prototype-lock/references/lock-report-template.md +68 -0
- package/plugins/arn-spark/skills/arn-spark-prototype-lock/references/pretooluse-hook-template.json +35 -0
- package/plugins/arn-spark/skills/arn-spark-prototype-lock/references/prototype-guardrail-rules.md +38 -0
- package/plugins/arn-spark/skills/arn-spark-report/SKILL.md +144 -0
- package/plugins/arn-spark/skills/arn-spark-report/references/issue-template.md +81 -0
- package/plugins/arn-spark/skills/arn-spark-report/references/spark-knowledge-base.md +293 -0
- package/plugins/arn-spark/skills/arn-spark-scaffold/SKILL.md +239 -0
- package/plugins/arn-spark/skills/arn-spark-scaffold/references/scaffold-checklist.md +79 -0
- package/plugins/arn-spark/skills/arn-spark-scaffold/references/scaffold-summary-template.md +74 -0
- package/plugins/arn-spark/skills/arn-spark-spike/SKILL.md +209 -0
- package/plugins/arn-spark/skills/arn-spark-spike/references/spike-report-template.md +123 -0
- package/plugins/arn-spark/skills/arn-spark-static-prototype/SKILL.md +362 -0
- package/plugins/arn-spark/skills/arn-spark-static-prototype/references/review-report-template.md +65 -0
- package/plugins/arn-spark/skills/arn-spark-static-prototype/references/showcase-capture-guide.md +153 -0
- package/plugins/arn-spark/skills/arn-spark-static-prototype/references/static-prototype-criteria.md +54 -0
- package/plugins/arn-spark/skills/arn-spark-static-prototype-teams/SKILL.md +518 -0
- package/plugins/arn-spark/skills/arn-spark-static-prototype-teams/references/debate-protocol.md +230 -0
- package/plugins/arn-spark/skills/arn-spark-static-prototype-teams/references/debate-review-report-template.md +148 -0
- package/plugins/arn-spark/skills/arn-spark-static-prototype-teams/references/expert-visual-review-template.md +130 -0
- package/plugins/arn-spark/skills/arn-spark-stress-competitive/SKILL.md +166 -0
- package/plugins/arn-spark/skills/arn-spark-stress-competitive/references/competitive-report-template.md +139 -0
- package/plugins/arn-spark/skills/arn-spark-stress-competitive/references/gap-analysis-framework.md +111 -0
- package/plugins/arn-spark/skills/arn-spark-stress-interview/SKILL.md +257 -0
- package/plugins/arn-spark/skills/arn-spark-stress-interview/references/interview-protocol.md +140 -0
- package/plugins/arn-spark/skills/arn-spark-stress-interview/references/interview-report-template.md +165 -0
- package/plugins/arn-spark/skills/arn-spark-stress-interview/references/persona-casting-spec.md +138 -0
- package/plugins/arn-spark/skills/arn-spark-stress-premortem/SKILL.md +181 -0
- package/plugins/arn-spark/skills/arn-spark-stress-premortem/references/premortem-protocol.md +112 -0
- package/plugins/arn-spark/skills/arn-spark-stress-premortem/references/premortem-report-template.md +158 -0
- package/plugins/arn-spark/skills/arn-spark-stress-prfaq/SKILL.md +206 -0
- package/plugins/arn-spark/skills/arn-spark-stress-prfaq/references/prfaq-report-template.md +139 -0
- package/plugins/arn-spark/skills/arn-spark-stress-prfaq/references/prfaq-workflow.md +118 -0
- package/plugins/arn-spark/skills/arn-spark-style-explore/SKILL.md +281 -0
- package/plugins/arn-spark/skills/arn-spark-style-explore/references/style-brief-template.md +198 -0
- package/plugins/arn-spark/skills/arn-spark-use-cases/SKILL.md +359 -0
- package/plugins/arn-spark/skills/arn-spark-use-cases/references/expert-review-template.md +94 -0
- package/plugins/arn-spark/skills/arn-spark-use-cases/references/review-protocol.md +150 -0
- package/plugins/arn-spark/skills/arn-spark-use-cases/references/use-case-index-template.md +108 -0
- package/plugins/arn-spark/skills/arn-spark-use-cases/references/use-case-template.md +125 -0
- package/plugins/arn-spark/skills/arn-spark-use-cases-teams/SKILL.md +306 -0
- package/plugins/arn-spark/skills/arn-spark-use-cases-teams/references/debate-protocol.md +272 -0
- package/plugins/arn-spark/skills/arn-spark-use-cases-teams/references/review-report-template.md +112 -0
- package/plugins/arn-spark/skills/arn-spark-visual-readiness/SKILL.md +293 -0
- package/plugins/arn-spark/skills/arn-spark-visual-readiness/references/readiness-checklist.md +196 -0
- package/plugins/arn-spark/skills/arn-spark-visual-sketch/SKILL.md +376 -0
- package/plugins/arn-spark/skills/arn-spark-visual-sketch/references/aesthetic-philosophy.md +210 -0
- package/plugins/arn-spark/skills/arn-spark-visual-sketch/references/sketch-gallery-guide.md +282 -0
- package/plugins/arn-spark/skills/arn-spark-visual-sketch/references/visual-direction-template.md +174 -0
- package/plugins/arn-spark/skills/arn-spark-visual-strategy/SKILL.md +447 -0
- package/plugins/arn-spark/skills/arn-spark-visual-strategy/references/baseline-capture-script-template.js +89 -0
- package/plugins/arn-spark/skills/arn-spark-visual-strategy/references/journey-schema.md +375 -0
- package/plugins/arn-spark/skills/arn-spark-visual-strategy/references/spike-checklist.md +122 -0
- package/plugins/arn-spark/skills/arn-spark-visual-strategy/references/strategy-layers-guide.md +132 -0
- package/plugins/arn-spark/skills/arn-spark-visual-strategy/references/visual-strategy-template.md +141 -0
package/plugins/arn-spark/skills/arn-spark-stress-premortem/references/premortem-protocol.md
ADDED

@@ -0,0 +1,112 @@
# Pre-Mortem Protocol

Adapted from Gary Klein's pre-mortem methodology for product concept stress testing. This document is consumed by the `arn-spark-stress-premortem` skill to frame the forensic investigation.

## Overview

The pre-mortem technique inverts the typical risk assessment. Instead of asking "what could go wrong?" (which invites optimism bias and superficial answers), the pre-mortem declares that the product has already failed and asks "why did it fail?" This psychological reframing unlocks deeper, more specific failure analysis because it removes the social pressure to be optimistic and replaces it with the intellectual challenge of explaining a known outcome.

**The premise:** It is 12 months after launch. The product was shut down today. You are not predicting failure -- you are investigating a failure that has already happened.

---

## Methodology

### Step 1: Establish the Failure Premise

Set the temporal frame:
- The product launched 12 months ago with the features and scope described in the product concept
- Initial reception was [variable -- the investigator determines this based on the concept's strengths and weaknesses]
- Despite the team's best efforts, the product was shut down today
- Your job is to explain why

The failure premise must be stated explicitly at the beginning of the investigation. This is not a hypothetical -- it is a forensic reconstruction.

### Step 2: Work Backward from Failure

The investigator works backward from the shutdown to identify root causes. Each root cause follows a causal chain:

```
Shutdown decision
<- Trigger event (the specific metric or incident that forced the decision)
<- Compounding effect (what made recovery impossible)
<- Execution failure (how the design decision played out in practice)
<- Design assumption (the original decision or assumption in the product concept)
```
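
The chain above can be sketched as a small ordered structure. A minimal Python sketch follows; the class, field, and method names are illustrative, not part of the skill's schema:

```python
from dataclasses import dataclass

@dataclass
class RootCause:
    """One root cause, stored from original assumption to shutdown trigger."""
    design_assumption: str   # original decision or assumption in the product concept
    execution_failure: str   # how the design decision played out in practice
    compounding_effect: str  # what made recovery impossible
    trigger_event: str       # the metric or incident that forced the decision

    def causal_chain(self) -> list[str]:
        # Render the chain backward from the shutdown, as Step 2 describes.
        return [
            "Shutdown decision",
            f"<- {self.trigger_event}",
            f"<- {self.compounding_effect}",
            f"<- {self.execution_failure}",
            f"<- {self.design_assumption}",
        ]
```

Keeping the four links as distinct fields forces each root cause to articulate a complete chain rather than collapsing assumption and outcome into one claim.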

### Step 3: Identify Root Causes Across Failure Dimensions

The standard investigation produces 3 root causes, each targeting a distinct failure dimension:

#### Dimension A: User Adoption Failure / Core Experience Flaw

The product's central interaction model had a fundamental flaw. Users tried it and left. This is not about missing features -- it is about the core experience itself being wrong, insufficient, or misaligned with actual user needs.

Investigation angles:
- Gap between promised experience and actual experience
- Onboarding friction that prevented users from reaching the value
- Core interaction that was interesting in demos but tedious in daily use
- Product pillars that conflicted with each other in practice

#### Dimension B: Trust / Security / Compliance Failure

The product had a trust assumption that proved catastrophically wrong. A data breach, a privacy scandal, a compliance failure, or a trust violation that destroyed user confidence overnight.

Investigation angles:
- Data handling assumptions that were tested by a real incident
- Security architecture decisions that seemed reasonable but failed under load or adversarial conditions
- Compliance requirements that emerged post-launch and could not be met
- Trust signals that were sufficient for early adopters but not for mainstream users

#### Dimension C: Market / Audience Misread

The product was built for the wrong people, or the right people in the wrong context. The personas were plausible but did not match reality. The market existed but the product's entry point was misaligned.

Investigation angles:
- Which persona assumption was most wrong, and how the actual early adopters differed
- Market timing: too early, too late, or right time but wrong entry point
- Competitive response that the product concept underestimated
- Adjacent market or use case that users actually wanted but the product did not pivot toward

### Step 4: Evaluate Each Root Cause

For each root cause, assess:

**Likelihood:** How probable is this failure chain given the product concept's current design?
- **High:** The product concept contains specific elements that directly increase this risk
- **Medium:** The product concept does not directly address this risk but is not uniquely vulnerable
- **Low:** The product concept has elements that mitigate this risk, but it cannot be eliminated

**Severity:** If this failure chain occurs, how bad is the outcome?
- **Critical:** Product shutdown -- unrecoverable
- **High:** Significant user loss or trust damage -- recoverable but costly
- **Medium:** Growth stalled or market position weakened -- manageable with pivots

### Step 5: Construct Risk Priority Matrix

Map root causes on a 3x3 matrix:

| | Low Likelihood | Medium Likelihood | High Likelihood |
|--|---------------|-------------------|-----------------|
| **Critical Severity** | Monitor | Mitigate | Address immediately |
| **High Severity** | Monitor | Mitigate | Address immediately |
| **Medium Severity** | Accept | Monitor | Mitigate |

Root causes in "Address immediately" cells must appear in the Recommended Concept Updates table.

---

## Psychological Framing Instructions for the Forensic Investigator

These instructions are passed to the `arn-spark-forensic-investigator` agent to establish the correct mindset:

1. **You are not defending this product.** You are not an advocate, a coach, or a well-wisher. You are a forensic investigator called in after the shutdown, piecing together what went wrong and why nobody saw it coming.

2. **The failure has already happened.** Do not hedge with "might" or "could." The product failed. Your job is to explain why, not to predict whether it will.

3. **Be specific, not generic.** "Users did not find it useful" is not a root cause. "Users expected [specific claim from concept] but experienced [specific reality], leading to [specific metric decline] by month [N]" is a root cause.

4. **Use the product concept against itself.** Quote specific claims, features, and design decisions. A pre-mortem that could apply to any product is a failed pre-mortem.

5. **Product pillars are forensic evidence.** Pillars often become failure vectors. A "zero-configuration" pillar might mean the product could not accommodate enterprise deployment requirements. A "privacy-first" pillar might mean the product could not implement the analytics needed to detect churn early enough.

6. **Early warning signals must be observable.** Not "user satisfaction declining" but "NPS scores for the [specific feature] flow dropping below 30 within 60 days" or "support ticket volume for [specific issue] exceeding [threshold] by month 2."

7. **Mitigation strategies must change the product concept.** Not "improve onboarding" but "add a guided first-run experience that demonstrates [specific value] within 90 seconds by [specific mechanism]."
package/plugins/arn-spark/skills/arn-spark-stress-premortem/references/premortem-report-template.md
ADDED

@@ -0,0 +1,158 @@
# Pre-Mortem Report Template

Template for the pre-mortem risk mitigation stress test report. This document is consumed by the `arn-spark-stress-premortem` skill when assembling the final report from the forensic investigator's output.

## Instructions for arn-spark-stress-premortem

When populating this template:

- Every section below MUST appear in the output
- Replace all bracketed placeholders with concrete content from the forensic investigator's output
- Each Root Cause must have ALL 6 subsections: Failure Narrative, Causal Chain, Early Warning Signals, Mitigation Strategies, Likelihood, Severity
- The Risk Priority Matrix must accurately reflect the Likelihood/Severity assessments from the root causes
- The Recommended Concept Updates table MUST use the standardized schema exactly as shown
- Unresolved Questions should capture questions that the pre-mortem raised but could not answer
- If the forensic investigator produced fewer than 3 root causes or overlapping root causes, note this in the report and explain what happened

---

## Template

```markdown
# Pre-Mortem Investigation Report

**Product:** [product name]
**Date:** [ISO 8601 date]
**Failure premise:** It is [date + 12 months]. [Product name] launched 12 months ago and was shut down today.

---

## Executive Summary

[3-5 sentences summarizing the investigation findings. What was the most likely failure mode? What was the most severe? What was the most surprising finding? Overall risk posture of the product concept.]

---

## Failure Premise

[2-3 sentences establishing the temporal frame and initial conditions. How did the launch go? What was the initial reception? What happened in the months that followed?]

---

## Root Cause A: Core Experience Flaw -- [Specific Flaw Title]

**Failure Narrative:**
[3-5 sentences describing what happened from the user's perspective. Specific, vivid, grounded in the product concept's own claims. Reference specific features, interactions, and personas.]

**Causal Chain:**
1. [Design assumption in the product concept]
2. [How that assumption played out in practice]
3. [The compounding effect that made recovery impossible]
4. [The specific trigger event that forced shutdown]

**Early Warning Signals:**
- [Signal 1 -- observable in month 1-2, with specific metric or behavior]
- [Signal 2 -- a metric or user behavior pattern]
- [Signal 3 -- a qualitative signal from feedback or support]

**Mitigation Strategies:**
1. [Specific change to the product concept that addresses the root cause]
2. [Monitoring or validation approach to catch early warning signals]
3. [Design alternative that avoids the failure chain entirely]

**Likelihood:** [High / Medium / Low] -- [1-sentence justification referencing specific product concept elements]
**Severity:** [Critical / High / Medium] -- [1-sentence justification]

---

## Root Cause B: Trust & Security Blind Spot -- [Specific Blind Spot Title]

**Failure Narrative:**
[Same structure as Root Cause A]

**Causal Chain:**
1. [Trust/security assumption in the product concept]
2. [How that assumption was tested by real-world conditions]
3. [The compounding effect -- loss of user confidence, regulatory response, competitive messaging]
4. [The trigger event -- specific incident or revelation]

**Early Warning Signals:**
- [Signal 1]
- [Signal 2]
- [Signal 3]

**Mitigation Strategies:**
1. [Strategy 1]
2. [Strategy 2]
3. [Strategy 3]

**Likelihood:** [High / Medium / Low] -- [justification]
**Severity:** [Critical / High / Medium] -- [justification]

---

## Root Cause C: Target Audience Assumption -- [Specific Assumption Title]

**Failure Narrative:**
[Same structure as Root Cause A]

**Causal Chain:**
1. [Audience assumption in the product concept]
2. [How the actual market differed from the assumed market]
3. [The compounding effect -- wrong users, wrong positioning, wrong growth strategy]
4. [The trigger event -- specific metric or market shift]

**Early Warning Signals:**
- [Signal 1]
- [Signal 2]
- [Signal 3]

**Mitigation Strategies:**
1. [Strategy 1]
2. [Strategy 2]
3. [Strategy 3]

**Likelihood:** [High / Medium / Low] -- [justification]
**Severity:** [Critical / High / Medium] -- [justification]

---

## Risk Priority Matrix

| | Low Likelihood | Medium Likelihood | High Likelihood |
|--|---------------|-------------------|-----------------|
| **Critical Severity** | [Root Cause letter if applicable] | [Root Cause letter if applicable] | [Root Cause letter if applicable] |
| **High Severity** | [Root Cause letter if applicable] | [Root Cause letter if applicable] | [Root Cause letter if applicable] |
| **Medium Severity** | [Root Cause letter if applicable] | [Root Cause letter if applicable] | [Root Cause letter if applicable] |

---

## Recommended Concept Updates

| # | Section | Current State | Recommended Change | Type | Rationale |
|---|---------|---------------|--------------------|------|-----------|
| 1 | [product concept section] | [what the concept currently says or assumes] | [specific change recommended] | [Add/Modify/Remove] | [which root cause this addresses -- reference Root Cause letter and mitigation strategy number] |
| 2 | ... | ... | ... | ... | ... |

---

## Unresolved Questions

| # | Section | Question | Options | Assessment |
|---|---------|----------|---------|------------|
| 1 | [product concept section] | [question that the pre-mortem raised but could not answer] | [possible approaches to answering this] | [preliminary assessment based on investigation findings] |
| 2 | ... | ... | ... | ... |
```

---

## Section Guidance

| Section | Source | Depth |
|---------|--------|-------|
| Executive Summary | Synthesized by skill from forensic investigator output | 3-5 sentences, overall risk posture |
| Failure Premise | Set by skill based on product concept | 2-3 sentences establishing temporal frame |
| Root Cause A/B/C | Forensic investigator output, one per failure dimension | Each root cause: narrative (3-5 sentences), causal chain (4 links), early warning signals (3), mitigation strategies (3), likelihood and severity with justification |
| Risk Priority Matrix | Derived from root cause likelihood/severity assessments | 3x3 matrix with root cause letters placed in cells |
| Recommended Concept Updates | Derived from root causes in "Address immediately" or "Mitigate" cells | One row per recommendation, Type must be Add/Modify/Remove, rationale must reference specific root cause and mitigation strategy |
| Unresolved Questions | Identified during investigation as questions requiring real data | One row per question, must specify which product concept section is affected |
package/plugins/arn-spark/skills/arn-spark-stress-prfaq/SKILL.md
ADDED

@@ -0,0 +1,206 @@
---
name: arn-spark-stress-prfaq
description: >-
  This skill should be used when the user says "prfaq", "pr faq", "pr/faq",
  "press release stress test", "stress prfaq", "amazon pr faq method",
  "test the pitch with a pr/faq", "validate concept through pr/faq",
  "critique press release", "pr faq stress test",
  "will this marketing story hold up", or wants to stress-test a product concept by drafting a
  compelling press release and FAQ, then adversarially critiquing it to find
  where the concept cracks under scrutiny. Produces a PR/FAQ report with the
  full draft, adversarial questions, crack point analysis, and recommended
  concept updates.
version: 1.0.0
---

# Arness Spark Stress PR/FAQ

Stress-test a product concept using Amazon's PR/FAQ method. This technique forces the product concept through two filters:

1. **Draft phase:** A marketing PM agent writes the best possible public story -- a compelling press release, customer FAQ, and internal FAQ. If the press release is unconvincing, the product concept likely has clarity problems.
2. **Critique phase:** The same marketing PM agent (in a separate invocation, with no memory of drafting) adversarially attacks the draft -- finding questions the PR dodges, identifying crack points where claims exceed substance, and recommending concept changes.

The two phases use **separate agent invocations** to prevent rubber-stamping. A critic who remembers being the drafter unconsciously defends what it wrote. Separate invocations force genuine adversarial evaluation.

The product concept is read but never modified -- all recommendations are captured in the PR/FAQ report for later review.

## Prerequisites

### Configuration Check

1. Read the project's `arness.md` and check for a `## Arness` section
2. If found, extract the configured **Vision directory** and **Reports directory** paths
3. If no `## Arness` section exists or Arness Spark fields are missing, inform the user: "Arness Spark is not configured for this project yet. Run `/arn-brainstorming` to get started -- it will set everything up automatically." Do not proceed without it.
4. If the Reports directory does not exist, create it with `mkdir -p <reports-dir>/stress-tests/`

### Data Availability

| Artifact | Status | Location | Fallback |
|----------|--------|----------|----------|
| Product concept | REQUIRED | `<vision-dir>/product-concept.md` | Cannot proceed without it -- suggest running `/arn-spark-discover` |
| Product pillars | ENRICHES | Product Pillars section of product concept | Draft messaging is less focused; critique has fewer anchors |
| Competitive landscape | ENRICHES | Competitive Landscape section of product concept | Draft positioning is less grounded in market context |
| Target personas | ENRICHES | Target Personas section of product concept | Customer quote in press release is less persona-specific |

**Standard fallback cascade:**

If no product concept exists:

Ask the user: **"No product concept found. The PR/FAQ stress test needs a product concept to draft and critique messaging for. How would you like to proceed?"**
1. Run `/arn-spark-discover` to create a product concept first
2. Describe the product now (I will conduct the PR/FAQ from your description)
3. Skip the PR/FAQ stress test

If the user chooses option 2, collect a product description and proceed with a reduced-fidelity test (note in the report that the test was based on a verbal description rather than a full product concept).

## Workflow

### Step 1: Load References

Load the PR/FAQ workflow and report template:
> Read `${CLAUDE_PLUGIN_ROOT}/skills/arn-spark-stress-prfaq/references/prfaq-workflow.md`
> Read `${CLAUDE_PLUGIN_ROOT}/skills/arn-spark-stress-prfaq/references/prfaq-report-template.md`

### Step 2: Read Product Concept and Extract Context

Read the product concept from `<vision-dir>/product-concept.md`. Extract:
- Full product concept (both draft and critique need the complete document)
- Product pillars (anchors messaging in draft, tested for sincerity in critique)
- Competitive landscape (grounds positioning in real market context)
- Target personas (for realistic customer quote in press release)
- Core experience (primary material for the solution paragraph)
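
The extraction step above can be illustrated with a small Node.js sketch. The section titles and the assumption that each one is an H2 heading in `product-concept.md` are illustrative, not confirmed by the skill.

```javascript
// Pull a named section (heading line plus body) out of a markdown document.
function extractSection(markdown, title) {
  const sections = markdown.split(/\n(?=## )/);
  const hit = sections.find((s) => s.startsWith("## " + title));
  return hit ? hit.trim() : null;
}

// Sections that ENRICH the stress test; per the Data Availability table,
// a missing one degrades quality but does not block the run.
const ENRICHING_SECTIONS = [
  "Product Pillars",
  "Competitive Landscape",
  "Target Personas",
  "Core Experience",
];

function extractContext(concept) {
  const context = { fullConcept: concept };
  for (const title of ENRICHING_SECTIONS) {
    context[title] = extractSection(concept, title); // null when absent
  }
  return context;
}
```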

### Step 3: Phase 1 -- Draft

Invoke the `arn-spark-marketing-pm` agent in **draft mode** via the Task tool, passing the model from `.arness/agent-models/spark.md` as the `model` parameter (see `plugins/arn-spark/skills/arn-spark-ensure-config/references/ensure-config.md` "Dispatch convention" for fallback). Context:

--- PRODUCT CONCEPT ---
[full product concept document]
--- END PRODUCT CONCEPT ---

--- PRODUCT PILLARS ---
[product pillars section]
--- END PRODUCT PILLARS ---

--- OPERATING MODE ---
draft
--- END OPERATING MODE ---
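
Mechanically, the delimited context above is just string assembly. `buildDraftContext` is a hypothetical helper shown for illustration, not part of the plugin's API:

```javascript
// Assemble the delimited context blocks for the draft-mode invocation.
function buildDraftContext(productConcept, productPillars) {
  const block = (name, body) =>
    `--- ${name} ---\n${body}\n--- END ${name} ---`;
  return [
    block("PRODUCT CONCEPT", productConcept),
    block("PRODUCT PILLARS", productPillars),
    block("OPERATING MODE", "draft"),
  ].join("\n\n");
}
```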

Receive back the complete draft: press release (400-600 words), customer FAQ (5-8 entries), internal FAQ (3-5 entries).

**Quality check before proceeding:**
- Press release must be 400-600 words and compelling (would a reporter find this newsworthy? Does it lead with customer value, not features?)
- Customer FAQ answers must be concrete with specific examples, not evasive hedging
- Internal FAQ questions must address genuine tensions or risky claims in the press release
- Customer quote must use colloquial language and personal emotion, not corporate phrasing
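
The only mechanically checkable item in that list is the 400-600 word bound; a minimal sketch follows (function names are illustrative). The editorial checks still require agent or human judgment.

```javascript
// Count whitespace-separated words; filter(Boolean) drops the empty
// strings produced by leading/trailing whitespace.
function pressReleaseWordCount(text) {
  return text.split(/\s+/).filter(Boolean).length;
}

// The press release must land in the 400-600 word window.
function withinLengthBound(text) {
  const n = pressReleaseWordCount(text);
  return n >= 400 && n <= 600;
}
```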

If the draft is too thin or generic, retry with more specific context:

"The draft needs to be more specific. Ground it in these details:
- Target persona for the customer quote: [specific persona from product concept]
- Key competitive differentiator: [from competitive landscape]
- Core interaction to highlight: [from core experience]
- Most important pillar to anchor messaging: [from product pillars]"

### Step 4: Phase 2 -- Critique

Invoke the `arn-spark-marketing-pm` agent in **critique mode** via the Task tool, passing the model from `.arness/agent-models/spark.md` as the `model` parameter (see `plugins/arn-spark/skills/arn-spark-ensure-config/references/ensure-config.md` "Dispatch convention" for fallback). Context:

--- PRODUCT CONCEPT ---
[full product concept document -- same as Phase 1]
--- END PRODUCT CONCEPT ---

--- PRODUCT PILLARS ---
[product pillars section -- same as Phase 1]
--- END PRODUCT PILLARS ---

--- OPERATING MODE ---
critique
--- END OPERATING MODE ---

--- DRAFT OUTPUT ---
[complete draft from Phase 1 -- press release + customer FAQ + internal FAQ]
--- END DRAFT OUTPUT ---

**Critical:** This MUST be a separate agent invocation from Phase 1. Include ONLY: the full product concept, the product pillars, the complete draft output (press release + FAQs), and the critique task instructions. Do NOT include any conversation context from the draft phase or any conversational recap.

Receive back: adversarial questions (5-8), crack points (3-5), recommended concept updates table, unresolved questions.

**Quality check:**
- Adversarial questions must target concept substance, not word choice
- Crack points must identify real gaps between claims and substance
- At least one crack point should reference a product pillar
- Recommendations must change the product concept, not the press release

If the critique is too soft, retry with explicit adversarial instruction:

"The critique needs to be sharper. Requirements:
- At least 2 adversarial questions rated High damage potential
- Crack points must reference specific claims from the press release and specific gaps in the product concept
- If no crack point makes someone say 'we need to address that before building this,' the critique is too gentle
- Do not find ways the messaging could be improved -- find ways the underlying concept fails"

### Step 5: Draft Recommended Concept Updates

Review the critique's Recommended Concept Updates table. Ensure:
- Each recommendation uses the standardized schema
- Each recommendation traces to a specific crack point
- Type column uses Add/Modify/Remove
- Every crack point tied to a High damage potential adversarial question has a corresponding recommendation

If the critique's recommendations are incomplete, supplement from the crack point "What needs strengthening" fields.
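
The schema checks above can be sketched as a small validator. The row shape is hypothetical; a real implementation would first parse the markdown table out of the critique output.

```javascript
// Allowed values for the Type column of the Recommended Concept Updates table.
const VALID_TYPES = new Set(["Add", "Modify", "Remove"]);

// Return a list of problems with one recommendation row (empty = valid).
function validateRecommendation(row) {
  const problems = [];
  if (!VALID_TYPES.has(row.type)) {
    problems.push(`invalid Type "${row.type}" (must be Add/Modify/Remove)`);
  }
  // Each recommendation must trace to a specific crack point,
  // e.g. a rationale like "Addresses crack point 2".
  if (!/crack point \d+/i.test(row.rationale || "")) {
    problems.push("rationale does not reference a crack point");
  }
  return problems;
}
```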

### Step 6: Assemble and Write Report

Using the PR/FAQ report template:
1. Populate all sections with draft output and critique output
2. Include the full press release text, all FAQ entries, all adversarial questions, and all crack points
3. Write the report to `<reports-dir>/stress-tests/prfaq-report.md`

Present a summary to the user:

"PR/FAQ stress test complete. Report saved to `[path]`.

**Draft assessment:** [1 sentence on whether the product story was compelling]

**Key critique findings:**
- **Adversarial questions:** [N] questions the PR dodges ([X] High damage, [Y] Medium, [Z] Low)
- **Crack points:** [N] places where the concept cracks under scrutiny

**Top crack point:** [The highest-impact crack point in 1 sentence]

**Recommended concept updates:** [N] recommendations ([X] Add, [Y] Modify, [Z] Remove)
**Unresolved questions:** [N]

This report will be used by `/arn-spark-concept-review` to propose changes to the product concept."

## Agent Invocation Guide

| Situation | Agent | Mode/Context |
|-----------|-------|--------------|
| Write press release and FAQ | `arn-spark-marketing-pm` | Draft mode with product concept and pillars |
| Adversarially critique the draft | `arn-spark-marketing-pm` | Critique mode with product concept, pillars, and draft output (separate invocation) |

## Error Handling

- **Marketing PM produces generic/thin draft:** Retry with more specific context from the product concept -- highlight specific features, personas, and competitive positioning. If retry still produces thin output:
  Ask the user: **"The PR/FAQ draft is too generic to produce a useful critique. How would you like to proceed?"**
  1. Retry with additional context
  2. Proceed with the current draft (critique may be less effective)
  3. Abort the PR/FAQ stress test

- **Marketing PM produces soft critique:** Retry with explicit adversarial instruction emphasizing that the critique must target concept substance, not copywriting quality. Include: "If no crack point makes someone uncomfortable, the critique has failed."

- **Critique mode receives draft context (accidental context leak):** This should not happen if invocations are properly separated. If detected (the critique references drafting decisions or uses phrases like "when I wrote..."), discard the critique and re-invoke in critique mode with only the draft output and product concept -- no conversational context.

- **Any agent invocation fails entirely:** Retry once with a simplified prompt. If retry fails:
  Ask the user: **"Agent invocation failed. How would you like to proceed?"**
  1. Retry
  2. Skip this step
  3. Abort

## Constraints

- **Read-only with respect to product-concept.md.** The PR/FAQ skill reads the product concept but NEVER modifies it. All recommendations are captured in the PR/FAQ report.
- **Separate invocations for draft and critique.** This is a hard requirement, not a preference. Same-context self-critique produces rubber-stamp results. The draft and critique MUST be separate agent invocations with no shared conversational context.
- **Report overwrites on re-run.** If `prfaq-report.md` already exists, it is overwritten. Git provides history.

# PR/FAQ Report Template

Template for the PR/FAQ stress test report. This document is consumed by the `arn-spark-stress-prfaq` skill when assembling the final report from the marketing PM's draft and critique outputs.

## Instructions for arn-spark-stress-prfaq

When populating this template:

- Every section below MUST appear in the output
- Replace all bracketed placeholders with concrete content from the marketing PM's draft and critique outputs
- The Press Release should be the FULL text from draft mode, not a summary
- Customer FAQ and Internal FAQ should include ALL entries from draft mode
- Adversarial Questions and Crack Points should include ALL entries from critique mode
- The Recommended Concept Updates table MUST use the standardized schema exactly as shown
- Unresolved Questions should capture questions that emerged from the critique but cannot be answered without real user data or market research
- If either draft or critique mode failed, note what was captured and explain the gap

---

## Template

```markdown
# PR/FAQ Stress Test Report

**Product:** [product name]
**Date:** [ISO 8601 date]

---

## Executive Summary

[3-5 sentences summarizing the PR/FAQ stress test findings. Was the product story compelling? Where did it crack? What was the most significant finding from the critique?]

---

## Press Release

### [Headline]

**[Subheading]**

[Problem paragraph]

[Solution paragraph]

> "[Customer quote]"
> -- [Persona name], [role/context]

[Product details paragraph]

**[Call to action]**

---

## Customer FAQ

### Q: [Question 1]
[Answer]

### Q: [Question 2]
[Answer]

[... 5-8 entries total]

---

## Internal FAQ

### Q: [Question 1]
[Answer]

### Q: [Question 2]
[Answer]

[... 3-5 entries total]

---

## Adversarial Questions

### 1. [Question]
**Why the PR dodges this:** [explanation of what claim is made and what evidence is missing]
**Damage potential:** [High/Medium/Low] -- [brief justification]

### 2. [Question]
**Why the PR dodges this:** [explanation]
**Damage potential:** [High/Medium/Low] -- [justification]

[... 5-8 entries total]

---

## Crack Point Analysis

### 1. [Crack Point Title]
- **What the concept claims:** [specific claim from the PR/FAQ]
- **What the question reveals:** [the gap, assumption, or contradiction exposed]
- **What needs strengthening:** [actionable recommendation for the product concept]

### 2. [Crack Point Title]
- **What the concept claims:** [specific claim]
- **What the question reveals:** [gap exposed]
- **What needs strengthening:** [recommendation]

[... 3-5 entries total]

---

## Recommended Concept Updates

| # | Section | Current State | Recommended Change | Type | Rationale |
|---|---------|---------------|--------------------|------|-----------|
| 1 | [product concept section] | [what the concept currently says or assumes] | [specific change recommended] | [Add/Modify/Remove] | [which crack point this addresses -- reference crack point number] |
| 2 | ... | ... | ... | ... | ... |

---

## Unresolved Questions

| # | Section | Question | Options | Assessment |
|---|---------|----------|---------|------------|
| 1 | [product concept section] | [question that the PR/FAQ critique raised but cannot be answered without real data] | [possible approaches to answering this] | [preliminary assessment based on critique findings] |
| 2 | ... | ... | ... | ... |
```

---

## Section Guidance

| Section | Source | Depth |
|---------|--------|-------|
| Executive Summary | Synthesized by skill from draft quality assessment and critique findings | 3-5 sentences, overall messaging integrity assessment |
| Press Release | Marketing PM draft mode output | Full text, 400-600 words, Amazon PR/FAQ format |
| Customer FAQ | Marketing PM draft mode output | 5-8 entries with concrete, specific answers |
| Internal FAQ | Marketing PM draft mode output | 3-5 entries with honest, hard-question answers |
| Adversarial Questions | Marketing PM critique mode output | 5-8 questions with dodge explanations and damage potential ratings |
| Crack Point Analysis | Marketing PM critique mode output | 3-5 crack points with: claim, revelation, and strengthening recommendation |
| Recommended Concept Updates | Marketing PM critique mode output, reviewed by skill | One row per recommendation, Type must be Add/Modify/Remove, rationale must reference specific crack point |
| Unresolved Questions | Identified during critique as questions requiring real data | One row per question, must specify which product concept section is affected |