autodoc-agent-kit 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (187)
  1. package/README.md +362 -0
  2. package/package.json +49 -0
  3. package/src/core/module.yaml +5 -0
  4. package/src/modules/design/module.yaml +9 -0
  5. package/src/modules/design/skills/brand-guidelines/LICENSE.txt +202 -0
  6. package/src/modules/design/skills/brand-guidelines/SKILL.md +73 -0
  7. package/src/modules/design/skills/frontend-design/LICENSE.txt +177 -0
  8. package/src/modules/design/skills/frontend-design/SKILL.md +42 -0
  9. package/src/modules/design/skills/web-artifacts-builder/SKILL.md +229 -0
  10. package/src/modules/devops/module.yaml +10 -0
  11. package/src/modules/devops/skills/devops-helper/SKILL.md +60 -0
  12. package/src/modules/devops/skills/k8s-helm/SKILL.md +360 -0
  13. package/src/modules/devops/skills/monitoring-observability/SKILL.md +240 -0
  14. package/src/modules/devops/skills/security-auditor/SKILL.md +105 -0
  15. package/src/modules/engineering/module.yaml +22 -0
  16. package/src/modules/engineering/skills/ai-sdk/SKILL.md +314 -0
  17. package/src/modules/engineering/skills/api-designer/SKILL.md +77 -0
  18. package/src/modules/engineering/skills/code-reviewer/SKILL.md +71 -0
  19. package/src/modules/engineering/skills/db-architect/SKILL.md +50 -0
  20. package/src/modules/engineering/skills/debugger/SKILL.md +59 -0
  21. package/src/modules/engineering/skills/docs-generator/SKILL.md +51 -0
  22. package/src/modules/engineering/skills/git-workflow/SKILL.md +258 -0
  23. package/src/modules/engineering/skills/mcp-builder/LICENSE.txt +202 -0
  24. package/src/modules/engineering/skills/mcp-builder/SKILL.md +236 -0
  25. package/src/modules/engineering/skills/mcp-builder/reference/evaluation.md +602 -0
  26. package/src/modules/engineering/skills/mcp-builder/reference/mcp_best_practices.md +249 -0
  27. package/src/modules/engineering/skills/mcp-builder/reference/node_mcp_server.md +970 -0
  28. package/src/modules/engineering/skills/mcp-builder/reference/python_mcp_server.md +719 -0
  29. package/src/modules/engineering/skills/mcp-builder/scripts/connections.py +151 -0
  30. package/src/modules/engineering/skills/mcp-builder/scripts/evaluation.py +373 -0
  31. package/src/modules/engineering/skills/mcp-builder/scripts/example_evaluation.xml +22 -0
  32. package/src/modules/engineering/skills/mcp-builder/scripts/requirements.txt +2 -0
  33. package/src/modules/engineering/skills/nextjs-15/SKILL.md +312 -0
  34. package/src/modules/engineering/skills/perf-optimizer/SKILL.md +60 -0
  35. package/src/modules/engineering/skills/react-19/SKILL.md +257 -0
  36. package/src/modules/engineering/skills/refactorer/SKILL.md +60 -0
  37. package/src/modules/engineering/skills/skill-authoring-workflow/SKILL.md +183 -0
  38. package/src/modules/engineering/skills/skill-creator/LICENSE.txt +202 -0
  39. package/src/modules/engineering/skills/skill-creator/SKILL.md +356 -0
  40. package/src/modules/engineering/skills/skill-creator/references/output-patterns.md +82 -0
  41. package/src/modules/engineering/skills/skill-creator/references/workflows.md +28 -0
  42. package/src/modules/engineering/skills/skill-creator/scripts/__pycache__/quick_validate.cpython-313.pyc +0 -0
  43. package/src/modules/engineering/skills/skill-creator/scripts/init_skill.py +303 -0
  44. package/src/modules/engineering/skills/skill-creator/scripts/package_skill.py +110 -0
  45. package/src/modules/engineering/skills/skill-creator/scripts/quick_validate.py +95 -0
  46. package/src/modules/engineering/skills/typescript/SKILL.md +231 -0
  47. package/src/modules/engineering/skills/zod-4/SKILL.md +223 -0
  48. package/src/modules/product/module.yaml +51 -0
  49. package/src/modules/product/skills/acquisition-channel-advisor/SKILL.md +643 -0
  50. package/src/modules/product/skills/acquisition-channel-advisor/examples/conversation-flow.md +531 -0
  51. package/src/modules/product/skills/ai-shaped-readiness-advisor/SKILL.md +923 -0
  52. package/src/modules/product/skills/altitude-horizon-framework/SKILL.md +250 -0
  53. package/src/modules/product/skills/altitude-horizon-framework/examples/sample.md +85 -0
  54. package/src/modules/product/skills/business-health-diagnostic/SKILL.md +783 -0
  55. package/src/modules/product/skills/company-research/SKILL.md +385 -0
  56. package/src/modules/product/skills/company-research/examples/sample.md +164 -0
  57. package/src/modules/product/skills/company-research/template.md +60 -0
  58. package/src/modules/product/skills/context-engineering-advisor/SKILL.md +763 -0
  59. package/src/modules/product/skills/customer-journey-map/SKILL.md +346 -0
  60. package/src/modules/product/skills/customer-journey-map/examples/meta-product-manager-skills.md +40 -0
  61. package/src/modules/product/skills/customer-journey-map/examples/sample.md +33 -0
  62. package/src/modules/product/skills/customer-journey-map/template.md +28 -0
  63. package/src/modules/product/skills/customer-journey-mapping-workshop/SKILL.md +523 -0
  64. package/src/modules/product/skills/director-readiness-advisor/SKILL.md +351 -0
  65. package/src/modules/product/skills/director-readiness-advisor/examples/conversation-flow.md +96 -0
  66. package/src/modules/product/skills/discovery-interview-prep/SKILL.md +410 -0
  67. package/src/modules/product/skills/discovery-process/SKILL.md +504 -0
  68. package/src/modules/product/skills/discovery-process/examples/sample.md +60 -0
  69. package/src/modules/product/skills/discovery-process/template.md +39 -0
  70. package/src/modules/product/skills/eol-message/SKILL.md +348 -0
  71. package/src/modules/product/skills/eol-message/examples/sample.md +87 -0
  72. package/src/modules/product/skills/eol-message/template.md +74 -0
  73. package/src/modules/product/skills/epic-breakdown-advisor/SKILL.md +665 -0
  74. package/src/modules/product/skills/epic-hypothesis/SKILL.md +277 -0
  75. package/src/modules/product/skills/epic-hypothesis/examples/sample.md +104 -0
  76. package/src/modules/product/skills/epic-hypothesis/template.md +30 -0
  77. package/src/modules/product/skills/executive-onboarding-playbook/SKILL.md +280 -0
  78. package/src/modules/product/skills/executive-onboarding-playbook/examples/sample.md +116 -0
  79. package/src/modules/product/skills/feature-investment-advisor/SKILL.md +639 -0
  80. package/src/modules/product/skills/feature-investment-advisor/examples/conversation-flow.md +538 -0
  81. package/src/modules/product/skills/finance-based-pricing-advisor/SKILL.md +763 -0
  82. package/src/modules/product/skills/finance-metrics-quickref/SKILL.md +309 -0
  83. package/src/modules/product/skills/jobs-to-be-done/SKILL.md +370 -0
  84. package/src/modules/product/skills/jobs-to-be-done/examples/sample.md +80 -0
  85. package/src/modules/product/skills/jobs-to-be-done/template.md +65 -0
  86. package/src/modules/product/skills/lean-ux-canvas/SKILL.md +561 -0
  87. package/src/modules/product/skills/lean-ux-canvas/examples/sample.md +88 -0
  88. package/src/modules/product/skills/lean-ux-canvas/template.md +32 -0
  89. package/src/modules/product/skills/opportunity-solution-tree/SKILL.md +420 -0
  90. package/src/modules/product/skills/opportunity-solution-tree/examples/sample.md +104 -0
  91. package/src/modules/product/skills/opportunity-solution-tree/template.md +33 -0
  92. package/src/modules/product/skills/pestel-analysis/SKILL.md +376 -0
  93. package/src/modules/product/skills/pestel-analysis/examples/sample.md +143 -0
  94. package/src/modules/product/skills/pestel-analysis/template.md +53 -0
  95. package/src/modules/product/skills/pol-probe/SKILL.md +217 -0
  96. package/src/modules/product/skills/pol-probe/examples/sample.md +136 -0
  97. package/src/modules/product/skills/pol-probe/template.md +59 -0
  98. package/src/modules/product/skills/pol-probe-advisor/SKILL.md +492 -0
  99. package/src/modules/product/skills/positioning-statement/SKILL.md +230 -0
  100. package/src/modules/product/skills/positioning-statement/examples/sample.md +51 -0
  101. package/src/modules/product/skills/positioning-statement/template.md +25 -0
  102. package/src/modules/product/skills/positioning-workshop/SKILL.md +424 -0
  103. package/src/modules/product/skills/prd-development/SKILL.md +655 -0
  104. package/src/modules/product/skills/prd-development/examples/sample.md +43 -0
  105. package/src/modules/product/skills/prd-development/template.md +55 -0
  106. package/src/modules/product/skills/press-release/SKILL.md +269 -0
  107. package/src/modules/product/skills/press-release/examples/sample.md +73 -0
  108. package/src/modules/product/skills/press-release/template.md +39 -0
  109. package/src/modules/product/skills/prioritization-advisor/SKILL.md +448 -0
  110. package/src/modules/product/skills/problem-framing-canvas/SKILL.md +466 -0
  111. package/src/modules/product/skills/problem-framing-canvas/examples/sample.md +58 -0
  112. package/src/modules/product/skills/problem-framing-canvas/template.md +22 -0
  113. package/src/modules/product/skills/problem-statement/SKILL.md +246 -0
  114. package/src/modules/product/skills/problem-statement/examples/sample.md +82 -0
  115. package/src/modules/product/skills/problem-statement/template.md +37 -0
  116. package/src/modules/product/skills/product-strategy-session/SKILL.md +426 -0
  117. package/src/modules/product/skills/product-strategy-session/examples/sample.md +67 -0
  118. package/src/modules/product/skills/product-strategy-session/template.md +38 -0
  119. package/src/modules/product/skills/proto-persona/SKILL.md +326 -0
  120. package/src/modules/product/skills/proto-persona/examples/sample.md +97 -0
  121. package/src/modules/product/skills/proto-persona/template.md +45 -0
  122. package/src/modules/product/skills/recommendation-canvas/SKILL.md +375 -0
  123. package/src/modules/product/skills/recommendation-canvas/examples/sample.md +94 -0
  124. package/src/modules/product/skills/recommendation-canvas/template.md +86 -0
  125. package/src/modules/product/skills/roadmap-planning/SKILL.md +505 -0
  126. package/src/modules/product/skills/roadmap-planning/examples/sample.md +62 -0
  127. package/src/modules/product/skills/roadmap-planning/template.md +30 -0
  128. package/src/modules/product/skills/saas-economics-efficiency-metrics/SKILL.md +694 -0
  129. package/src/modules/product/skills/saas-economics-efficiency-metrics/examples/cash-trap.md +365 -0
  130. package/src/modules/product/skills/saas-economics-efficiency-metrics/examples/healthy-unit-economics.md +279 -0
  131. package/src/modules/product/skills/saas-economics-efficiency-metrics/template.md +263 -0
  132. package/src/modules/product/skills/saas-revenue-growth-metrics/SKILL.md +630 -0
  133. package/src/modules/product/skills/saas-revenue-growth-metrics/examples/healthy-saas.md +131 -0
  134. package/src/modules/product/skills/saas-revenue-growth-metrics/examples/warning-signs.md +229 -0
  135. package/src/modules/product/skills/saas-revenue-growth-metrics/template.md +192 -0
  136. package/src/modules/product/skills/storyboard/SKILL.md +252 -0
  137. package/src/modules/product/skills/storyboard/examples/sample.md +71 -0
  138. package/src/modules/product/skills/storyboard/template.md +41 -0
  139. package/src/modules/product/skills/tam-sam-som-calculator/SKILL.md +392 -0
  140. package/src/modules/product/skills/tam-sam-som-calculator/examples/sample.md +142 -0
  141. package/src/modules/product/skills/tam-sam-som-calculator/scripts/market-sizing.py +95 -0
  142. package/src/modules/product/skills/tam-sam-som-calculator/template.md +35 -0
  143. package/src/modules/product/skills/user-story/SKILL.md +272 -0
  144. package/src/modules/product/skills/user-story/examples/sample.md +110 -0
  145. package/src/modules/product/skills/user-story/scripts/user-story-template.py +65 -0
  146. package/src/modules/product/skills/user-story/template.md +32 -0
  147. package/src/modules/product/skills/user-story-mapping/SKILL.md +285 -0
  148. package/src/modules/product/skills/user-story-mapping/examples/sample.md +77 -0
  149. package/src/modules/product/skills/user-story-mapping/template.md +41 -0
  150. package/src/modules/product/skills/user-story-mapping-workshop/SKILL.md +477 -0
  151. package/src/modules/product/skills/user-story-mapping-workshop/template.md +28 -0
  152. package/src/modules/product/skills/user-story-splitting/SKILL.md +303 -0
  153. package/src/modules/product/skills/user-story-splitting/examples/sample.md +147 -0
  154. package/src/modules/product/skills/user-story-splitting/template.md +37 -0
  155. package/src/modules/product/skills/vp-cpo-readiness-advisor/SKILL.md +409 -0
  156. package/src/modules/product/skills/vp-cpo-readiness-advisor/examples/conversation-flow.md +95 -0
  157. package/src/modules/product/skills/workshop-facilitation/SKILL.md +87 -0
  158. package/src/modules/productivity/module.yaml +9 -0
  159. package/src/modules/productivity/skills/doc-coauthoring/SKILL.md +375 -0
  160. package/src/modules/productivity/skills/internal-comms/LICENSE.txt +202 -0
  161. package/src/modules/productivity/skills/internal-comms/SKILL.md +32 -0
  162. package/src/modules/productivity/skills/internal-comms/examples/3p-updates.md +47 -0
  163. package/src/modules/productivity/skills/internal-comms/examples/company-newsletter.md +65 -0
  164. package/src/modules/productivity/skills/internal-comms/examples/faq-answers.md +30 -0
  165. package/src/modules/productivity/skills/internal-comms/examples/general-comms.md +16 -0
  166. package/src/modules/productivity/skills/technical-writing/SKILL.md +266 -0
  167. package/src/modules/qa/module.yaml +9 -0
  168. package/src/modules/qa/skills/test-strategy/SKILL.md +263 -0
  169. package/src/modules/qa/skills/test-writer/SKILL.md +57 -0
  170. package/src/modules/qa/skills/webapp-testing/LICENSE.txt +202 -0
  171. package/src/modules/qa/skills/webapp-testing/SKILL.md +96 -0
  172. package/src/modules/qa/skills/webapp-testing/examples/console_logging.py +35 -0
  173. package/src/modules/qa/skills/webapp-testing/examples/element_discovery.py +40 -0
  174. package/src/modules/qa/skills/webapp-testing/examples/static_html_automation.py +33 -0
  175. package/src/modules/qa/skills/webapp-testing/scripts/with_server.py +106 -0
  176. package/tools/autodoc-npx-wrapper.js +34 -0
  177. package/tools/cli/autodoc-cli.js +55 -0
  178. package/tools/cli/commands/install.js +36 -0
  179. package/tools/cli/commands/status.js +35 -0
  180. package/tools/cli/commands/uninstall.js +60 -0
  181. package/tools/cli/installers/lib/core/installer.js +164 -0
  182. package/tools/cli/installers/lib/core/manifest.js +49 -0
  183. package/tools/cli/installers/lib/ide/manager.js +112 -0
  184. package/tools/cli/installers/lib/ide/platform-codes.yaml +207 -0
  185. package/tools/cli/installers/lib/modules/manager.js +59 -0
  186. package/tools/cli/lib/ui.js +199 -0
  187. package/tools/cli/lib/welcome.js +82 -0
@@ -0,0 +1,923 @@
---
name: ai-shaped-readiness-advisor
description: Assess whether your product work is AI-first or AI-shaped. Use when evaluating AI maturity and choosing the next team capability to build.
intent: >-
  Assess whether your product work is **"AI-first"** (using AI to automate existing tasks faster) or **"AI-shaped"** (fundamentally redesigning how product teams operate around AI capabilities). Use this to evaluate your readiness across **5 essential PM competencies for 2026**, identify gaps, and get concrete recommendations on which capability to build first.
type: interactive
theme: ai-agents
best_for:
  - "Assessing whether your team is AI-first or genuinely AI-shaped"
  - "Identifying which of the 5 AI competencies to build next"
  - "Understanding your product org's AI maturity honestly"
scenarios:
  - "My team uses AI tools but I'm not sure if we're working differently or just automating the same tasks"
  - "I want to assess my product org's AI maturity and prioritize where to invest next quarter"
estimated_time: "15-20 min"
---

## Purpose

Assess whether your product work is **"AI-first"** (using AI to automate existing tasks faster) or **"AI-shaped"** (fundamentally redesigning how product teams operate around AI capabilities). Use this to evaluate your readiness across **5 essential PM competencies for 2026**, identify gaps, and get concrete recommendations on which capability to build first.

**Key Distinction:** AI-first is cute (using Copilot to write PRDs faster). AI-shaped is survival (building a durable "reality layer" that both humans and AI trust, orchestrating AI workflows, compressing learning cycles).

This is not about AI tools—it's about **organizational redesign around AI as co-intelligence**. The interactive skill guides you through a maturity assessment, then recommends your next move.

## Key Concepts

### AI-First vs. AI-Shaped

| Dimension | AI-First (Cute) | AI-Shaped (Survival) |
|-----------|-----------------|----------------------|
| **Mindset** | Automate existing tasks | Redesign how work gets done |
| **Goal** | Speed up artifact creation | Compress learning cycles |
| **AI Role** | Task assistant | Strategic co-intelligence |
| **Advantage** | Temporary efficiency gains | Defensible competitive moat |
| **Example** | "Copilot writes PRDs 2x faster" | "AI agent validates hypotheses in 48 hours instead of 3 weeks" |

**Critical Insight:** If a competitor can replicate your AI usage by throwing bodies at it, it's not differentiation—it's just efficiency (which becomes table stakes within months).

---

### The 5 Essential PM Competencies (2026)

These competencies define AI-shaped product work. You'll assess your maturity on each.

#### 1. **Context Design**
Building a durable **"reality layer"** that both humans and AI can trust—treating AI attention as a scarce resource and allocating it deliberately.

**What it includes:**
- Documenting what's true vs. assumed
- Immutable constraints (technical, regulatory, strategic)
- Operational glossary (shared definitions)
- Evidence standards (what counts as validation)
- **Context boundaries** (what to persist vs. retrieve)
- **Memory architecture** (short-term conversational + long-term persistent)
- **Retrieval strategies** (semantic search, contextual retrieval)

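The two-layer memory split above can be sketched in code. This is only an illustration; the `TwoLayerMemory` class and its storage choices are hypothetical (a real setup might back the long-term layer with a vector database rather than a dict):

```python
# Illustrative sketch of a two-layer memory architecture:
# a short-term conversational buffer plus a long-term persistent store.
# All names are hypothetical, not part of this skill or package.

class TwoLayerMemory:
    def __init__(self, short_term_limit=20):
        self.short_term = []          # recent conversational turns (disposable)
        self.long_term = {}           # durable facts, keyed by topic
        self.short_term_limit = short_term_limit

    def add_turn(self, text):
        """Record a conversational turn; evict the oldest beyond the limit."""
        self.short_term.append(text)
        if len(self.short_term) > self.short_term_limit:
            self.short_term.pop(0)

    def persist(self, topic, fact):
        """Promote a fact into the durable long-term layer."""
        self.long_term[topic] = fact

    def retrieve(self, topic):
        """Retrieve with intent: pull one bounded fact, not everything."""
        return self.long_term.get(topic)
```

The point of the split is that only `persist`-ed facts survive a context reset; everything in `short_term` is disposable by design.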
**Key Principle:** *"If you can't point to evidence, constraints, and definitions, you don't have context. You have vibes."*

**Critical Distinction: Context Stuffing vs. Context Engineering**
- **Context Stuffing (AI-first):** Jamming volume without intent ("paste entire PRD")
- **Context Engineering (AI-shaped):** Shaping structure for attention (bounded domains, retrieve with intent)

**The 5 Diagnostic Questions:**
1. What specific decision does this support?
2. Can retrieval replace persistence?
3. Who owns the context boundary?
4. What fails if we exclude this?
5. Are we fixing structure or avoiding it?

**AI-first version:** Pasting PRDs into ChatGPT; no context boundaries; "more is better" mentality
**AI-shaped version:** CLAUDE.md files, evidence databases, constraint registries AI agents reference; two-layer memory architecture; Research→Plan→Reset→Implement cycle to prevent context rot

**Deep Dive:** See [`context-engineering-advisor`](../context-engineering-advisor/SKILL.md) for detailed guidance on diagnosing context stuffing and implementing memory architecture.

---

#### 2. **Agent Orchestration**
Creating repeatable, traceable AI workflows (not one-off prompts).

**What it includes:**
- Defined workflow loops: research → synthesis → critique → decision → log rationale
- Each step shows its work (traceable reasoning)
- Workflows run consistently (same inputs = predictable process)
- Version-controlled prompts and agents

**Key Principle:** One-off prompts are tactical. Orchestrated workflows are strategic.

**AI-first version:** "Ask ChatGPT to analyze this user feedback"
**AI-shaped version:** Automated workflow that ingests feedback, tags themes, generates hypotheses, flags contradictions, logs decisions

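A traceable workflow loop of the kind described above can be sketched as a small pipeline. This is a minimal illustration under stated assumptions: the step functions are placeholders standing in for AI agent calls, and the `run_workflow` helper and its `trace` log are hypothetical, not part of this package:

```python
# Minimal sketch of an orchestrated, traceable workflow loop.
# Each step logs its name and output so the run can be audited later.

def run_workflow(feedback_items, steps):
    """Run each named step in order, recording a trace entry per step."""
    trace, data = [], feedback_items
    for name, step in steps:
        data = step(data)
        trace.append({"step": name, "output": data})
    return data, trace

# Placeholder steps standing in for AI agent calls.
steps = [
    ("research",  lambda items: [i.lower() for i in items]),
    ("synthesis", lambda items: sorted(set(items))),
    ("critique",  lambda themes: [t for t in themes if t]),
    ("decision",  lambda themes: themes[0] if themes else None),
]

decision, trace = run_workflow(["Slow export", "slow export", "Billing bug"], steps)
```

Because the trace is plain data, the "log rationale" step can be as simple as persisting `trace` alongside the decision, which is what makes the workflow auditable rather than ad hoc.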
92
+ ---
93
+
94
+ #### 3. **Outcome Acceleration**
95
+ Using AI to compress **learning cycles** (not just speed up tasks).
96
+
97
+ **What it includes:**
98
+ - Eliminate validation lag (PoL probes run in days, not weeks)
99
+ - Remove approval delays (AI pre-validates against constraints)
100
+ - Cut meeting overhead (async AI synthesis replaces status meetings)
101
+
102
+ **Key Principle:** Do less, purposefully. AI removes bottlenecks, not generates more work.
103
+
104
+ **AI-first version:** "AI writes user stories faster"
105
+ **AI-shaped version:** "AI runs feasibility checks overnight, eliminating 2 weeks of technical discovery"
106
+
107
+ ---
108
+
109
+ #### 4. **Team-AI Facilitation**
110
+ Redesigning team systems so AI operates as **co-intelligence**, not an accountability shield.
111
+
112
+ **What it includes:**
113
+ - Review norms (who checks AI outputs, when, how)
114
+ - Evidence standards (AI must cite sources, not hallucinate)
115
+ - Decision authority (AI recommends, humans decide—clear boundaries)
116
+ - Psychological safety (team can challenge AI without feeling "dumb")
117
+
118
+ **Key Principle:** AI amplifies judgment, doesn't replace accountability.
119
+
120
+ **AI-first version:** "I used AI" as excuse for bad outputs
121
+ **AI-shaped version:** Clear review protocols; AI outputs treated as drafts requiring human validation
122
+
123
+ ---
124
+
125
+ #### 5. **Strategic Differentiation**
126
+ Moving beyond efficiency to create **defensible competitive advantages**.
127
+
128
+ **What it includes:**
129
+ - New customer capabilities (what can users do now that they couldn't before?)
130
+ - Workflow rewiring (processes competitors can't replicate without full redesign)
131
+ - Economics competitors can't match (10x cost advantage through AI)
132
+
133
+ **Key Principle:** *"If a competitor can copy it by throwing bodies at it, it's not differentiation."*
134
+
135
+ **AI-first version:** "We use AI to write better docs"
136
+ **AI-shaped version:** "We validate product hypotheses in 2 days vs. industry standard 3 weeks—ship 6x more validated features per quarter"
137
+
138
+ ---
139
+
140
+ ### Anti-Patterns (What This Is NOT)
141
+
142
+ - **Not about AI tools:** Using Claude vs. ChatGPT doesn't matter. Redesigning workflows matters.
143
+ - **Not about speed:** Writing PRDs 2x faster isn't strategic if PRDs weren't the bottleneck.
144
+ - **Not about automation:** Automating bad processes just scales the bad.
145
+ - **Not about replacing humans:** AI-shaped orgs augment judgment, not eliminate it.
146
+
147
+ ---
148
+
149
+ ### When to Use This Skill
150
+
151
+ ✅ **Use this when:**
152
+ - You're using AI tools but not seeing strategic advantage
153
+ - You suspect you're "AI-first" (efficiency) but want to be "AI-shaped" (transformation)
154
+ - You need to prioritize which AI capability to build next
155
+ - Leadership asks "How are we using AI?" and you're not sure how to answer strategically
156
+ - You want to assess team readiness for AI-powered product work
157
+
158
+ ❌ **Don't use this when:**
159
+ - You haven't started using AI at all (start with basic tools first)
160
+ - You're looking for tool recommendations (this is about organizational design, not tooling)
161
+ - You need tactical "how to write a prompt" guidance (use skills for that)
162
+
163
+ ---
164
+
165
+ ### Facilitation Source of Truth
166
+
167
+ Use [`workshop-facilitation`](../workshop-facilitation/SKILL.md) as the default interaction protocol for this skill.
168
+
169
+ It defines:
170
+ - session heads-up + entry mode (Guided, Context dump, Best guess)
171
+ - one-question turns with plain-language prompts
172
+ - progress labels (for example, Context Qx/8 and Scoring Qx/5)
173
+ - interruption handling and pause/resume behavior
174
+ - numbered recommendations at decision points
175
+ - quick-select numbered response options for regular questions (include `Other (specify)` when useful)
176
+
177
+ This file defines the domain-specific assessment content. If there is a conflict, follow this file's domain logic.
178
+
179
+ ## Application
180
+
181
+ This interactive skill uses **adaptive questioning** to assess your maturity across 5 competencies, then recommends which to prioritize.
182
+
183
+ ### Facilitation Protocol (Mandatory)
184
+
185
+ 1. Ask exactly **one question per turn**.
186
+ 2. Wait for the user's answer before asking the next question.
187
+ 3. Use plain-language questions (no shorthand labels as the primary question). If needed, include an example response format.
188
+ 4. Show progress on every turn using user-facing labels:
189
+ - `Context Qx/8` during context gathering
190
+ - `Scoring Qx/5` during maturity scoring
191
+ - Include "questions remaining" when practical.
192
+ 5. Do not use internal phase labels (like "Step 0") in user-facing prompts unless the user asks for internal structure details.
193
+ 6. For maturity scoring questions, present concise 1-4 choices first; share full rubric details only if requested.
194
+ 7. For context questions, offer concise numbered quick-select options when practical, plus `Other (specify)` for open-ended answers. Accept multi-select replies like `1,3` or `1 and 3`.
195
+ 8. Give numbered recommendations **only at decision points**, not after every answer.
196
+ 9. Decision points include:
197
+ - After the full context summary
198
+ - After the 5-dimension maturity profile
199
+ - During priority selection and action-plan path selection
200
+ 10. When recommendations are shown, enumerate clearly (`1.`, `2.`, `3.`) and accept selections like `#1`, `1`, `1 and 3`, `1,3`, or custom text.
201
+ 11. If multiple options are selected, synthesize a combined path and continue.
202
+ 12. If custom text is provided, map it to the closest valid path and continue without forcing re-entry.
203
+ 13. Interruption handling is mandatory: if the user asks a meta question ("how many left?", "why this label?", "pause"), answer directly first, then restate current progress and resume with the pending question.
204
+ 14. If the user says to stop or pause, halt the assessment immediately and wait for explicit resume.
205
+ 15. If the user asks for "one question at a time," keep that mode for the rest of the session unless they explicitly opt out.
206
+ 16. Before any assessment question, give a short heads-up on time/length and let the user choose an entry mode.
207
+
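As a sketch of what "accept selections like `#1`, `1`, `1 and 3`, `1,3`, or custom text" could look like in code (a hypothetical helper, not part of this package's CLI):

```python
import re

def parse_selection(reply, num_options):
    """Parse replies like '#1', '1', '1 and 3', or '1,3' into option numbers.

    Returns a list of selected option numbers, or None when the reply
    contains no in-range numbers and should be treated as custom text.
    """
    numbers = [int(n) for n in re.findall(r"\d+", reply)]
    valid = [n for n in numbers if 1 <= n <= num_options]
    return valid or None
```

A `None` result is the signal to fall back to rule 12: map the free-text reply to the closest valid path instead of forcing re-entry.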
208
+ ---
209
+
210
+ ### Session Start: Heads-Up + Entry Mode (Mandatory)
211
+
212
+ **Agent opening prompt (use this first):**
213
+
214
+ "Quick heads-up before we start: this usually takes about 7-10 minutes and up to 13 questions total (8 context + 5 scoring).
215
+
216
+ How do you want to do this?
217
+ 1. Guided mode: I’ll ask one question at a time.
218
+ 2. Context dump: you paste what you already know, and I’ll skip anything redundant.
219
+ 3. Best guess mode: I’ll make reasonable assumptions where details are missing, label them, and keep moving."
220
+
221
+ Accept selections as `#1`, `1`, `1 and 3`, `1,3`, or custom text.
222
+
223
+ **Mode behavior:**
224
+
225
+ - **If Guided mode:** Run Step 0 as written, then scoring.
226
+ - **If Context dump:** Ask for pasted context once, summarize it, identify gaps, and:
227
+ - Skip any context questions already answered.
228
+ - Ask only the minimum missing context needed (0-2 clarifying questions).
229
+ - Move to scoring as soon as context is sufficient.
230
+ - **If Best guess mode:** Ask for the smallest viable starting input (role/team + primary goal), then:
231
+ - Infer missing details using reasonable defaults.
232
+ - Label each inferred item as `Assumption`.
233
+ - Include confidence tags (`High`, `Medium`, `Low`) for each assumption.
234
+ - Continue without blocking on unknowns.
235
+
236
+ At the final summary, include an **Assumptions to Validate** section when context dump or best guess mode was used.
237
+
238
+ ---
239
+
240
+ ### Step 0: Gather Context
241
+
242
+ **Agent asks:**
243
+
244
+ Collect context using this exact sequence, one question at a time:
245
+
246
+ 1. "Which AI tools are you using today?"
247
+ 2. "How does your team usually use AI today: one-off prompts, reusable templates, or multi-step workflows?"
248
+ 3. "Who uses AI consistently today: just you, PMs, or cross-functional teams?"
249
+ 4. "About how many PMs, engineers, and designers are on your team?"
250
+ 5. "What stage are you in: startup, growth, or enterprise?"
251
+ 6. "How are decisions made: centralized, distributed, or consensus-driven?"
252
+ 7. "What competitive advantage are you trying to build with AI?"
253
+ 8. "What's the biggest bottleneck slowing learning and iteration today?"
254
+
255
+ After question 8, summarize back in 4 lines:
256
+ - Current AI usage pattern
257
+ - Team context
258
+ - Strategic intent
259
+ - Primary bottleneck
260
+
261
+ ---
262
+
263
+ ### Step 1: Context Design Maturity
264
+
265
+ **Agent asks:**
266
+
267
+ Let's assess your **Context Design** capability—how well you've built a "reality layer" that both humans and AI can trust, and whether you're doing **context stuffing** (volume without intent) or **context engineering** (structure for attention).
268
+
269
+ **Which statement best describes your current state?**
270
+
271
1. **Level 1 (AI-First / Context Stuffing):** "I paste entire documents into ChatGPT every time I need something. No shared knowledge base. No context boundaries."
   - Reality: One-off prompting with no durability; "more is better" mentality
   - Problem: AI has no memory; you repeat yourself constantly; context stuffing degrades attention
   - **Context Engineering Gap:** No answers to the 5 diagnostic questions; persisting everything "just in case"

2. **Level 2 (Emerging / Early Structure):** "We have some docs (PRDs, strategy memos), but they're scattered. No consistent format. Starting to notice context stuffing issues (vague responses, normalized retries)."
   - Reality: Context exists but isn't structured for AI consumption; no retrieval strategy
   - Problem: AI can't reliably find or trust information; mixing always-needed with episodic context
   - **Context Engineering Gap:** No context boundary owner; no distinction between persist vs. retrieve

3. **Level 3 (Transitioning / Context Engineering Emerging):** "We've started using CLAUDE.md files and project instructions. Constraints registry exists. We're identifying what to persist vs. retrieve. Experimenting with Research→Plan→Reset→Implement cycle."
   - Reality: Structured context emerging, but not comprehensive; context boundaries defined but not fully enforced
   - Problem: Coverage is patchy; some areas well-documented, others vibe-driven; inconsistent retrieval practices
   - **Context Engineering Progress:** Can answer 3-4 of the 5 diagnostic questions; context boundary owner assigned; starting to use two-layer memory

4. **Level 4 (AI-Shaped / Context Engineering Mastery):** "We maintain a durable reality layer: constraints registry (20+ entries), evidence database, operational glossary (30+ terms). Two-layer memory architecture (short-term conversational + long-term persistent via vector DB). Context boundaries defined and owned. AI agents reference these automatically. We use Research→Plan→Reset→Implement to prevent context rot."
   - Reality: Comprehensive, version-controlled context both humans and AI trust; retrieval with intent (not completeness)
   - Outcome: AI operates with high confidence; reduces hallucination and rework; token usage optimized; no context stuffing
   - **Context Engineering Mastery:** Can answer all 5 diagnostic questions; context boundary audited quarterly; quantitative efficiency tracking: (Accuracy × Coherence) / (Tokens × Latency)
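The Level 4 efficiency metric can be tracked with a few lines of code. A minimal sketch (the 0-to-1 quality scales and the sample numbers are assumptions, not part of the framework):

```python
def context_efficiency(accuracy, coherence, tokens, latency_s):
    """Context-engineering efficiency: (Accuracy × Coherence) / (Tokens × Latency).

    accuracy, coherence: quality ratings in [0, 1] (assumed scale).
    tokens: total tokens consumed; latency_s: response latency in seconds.
    Higher is better; track the trend over time, not the absolute value.
    """
    if tokens <= 0 or latency_s <= 0:
        raise ValueError("tokens and latency must be positive")
    return (accuracy * coherence) / (tokens * latency_s)

# Stuffed vs. engineered context: same quality, fewer tokens, lower latency
# means a higher score for the engineered version.
stuffed = context_efficiency(0.9, 0.8, tokens=40_000, latency_s=12.0)
engineered = context_efficiency(0.9, 0.8, tokens=8_000, latency_s=4.0)
assert engineered > stuffed
```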

**Select your level:** [1, 2, 3, or 4]

**Note:** If you selected Level 1-2 and struggle with context stuffing, consider using [`context-engineering-advisor`](../context-engineering-advisor/SKILL.md) to diagnose and fix Context Hoarding Disorder before proceeding.

**User response:** [Selection]

**Agent records:** Context Design maturity = [Level X]

---

### Step 2: Agent Orchestration Maturity

**Agent asks:**

Now let's assess **Agent Orchestration**—whether you have repeatable AI workflows or just one-off prompts.

**Which statement best describes your current state?**

1. **Level 1 (AI-First):** "I type prompts into ChatGPT as needed. No saved workflows or templates."
   - Reality: Tactical, ad-hoc usage
   - Problem: Inconsistent results; can't scale or audit

2. **Level 2 (Emerging):** "I have a few saved prompts I reuse. Maybe some custom GPTs or Claude Projects."
   - Reality: Repeatable prompts, but not full workflows
   - Problem: Each step is manual; no orchestration

3. **Level 3 (Transitioning):** "We've built some multi-step workflows (research → synthesis → critique). Tracked in tools like Notion or Linear."
   - Reality: Workflows exist but require manual handoffs
   - Problem: Still human-in-the-loop for every step; not fully automated

4. **Level 4 (AI-Shaped):** "We have orchestrated AI workflows that run autonomously: research → synthesis → critique → decision → log rationale. Each step is traceable and version-controlled."
   - Reality: Workflows run consistently; show their work at each step
   - Outcome: Reliable, auditable, scalable AI processes

**Select your level:** [1, 2, 3, or 4]

**User response:** [Selection]

**Agent records:** Agent Orchestration maturity = [Level X]

---

### Step 3: Outcome Acceleration Maturity

**Agent asks:**

Next, **Outcome Acceleration**—are you using AI to compress learning cycles, or just speed up tasks?

**Which statement best describes your current state?**

1. **Level 1 (AI-First):** "AI helps me write docs faster (PRDs, user stories). Saves me a few hours per week."
   - Reality: Efficiency gains on artifact creation
   - Problem: Docs weren't the bottleneck; learning cycles unchanged

2. **Level 2 (Emerging):** "AI helps with research and synthesis (summarize user feedback, analyze competitors). Saves research time."
   - Reality: Modest learning acceleration
   - Problem: Still sequential; AI doesn't eliminate validation lag

3. **Level 3 (Transitioning):** "We use AI to run experiments faster (PoL probes, feasibility checks). Cut validation time from weeks to days."
   - Reality: Learning cycles compressing
   - Problem: Not yet systematic; only applied to some experiments

4. **Level 4 (AI-Shaped):** "AI systematically removes bottlenecks: overnight feasibility checks, async synthesis replaces meetings, automated validation against constraints. Learning cycles 5-10x faster."
   - Reality: Fundamental redesign of how learning happens
   - Outcome: Ship validated features 6x faster than competitors

**Select your level:** [1, 2, 3, or 4]

**User response:** [Selection]

**Agent records:** Outcome Acceleration maturity = [Level X]

---

### Step 4: Team-AI Facilitation Maturity

**Agent asks:**

Now assess **Team-AI Facilitation**—how well you've redesigned team systems for AI as co-intelligence.

**Which statement best describes your current state?**

1. **Level 1 (AI-First):** "I use AI privately. Team doesn't know or doesn't use it. No shared norms."
   - Reality: Individual tool usage, no team integration
   - Problem: Inconsistent quality; no accountability for AI outputs

2. **Level 2 (Emerging):** "Team uses AI, but no formal review process. 'I used AI' mentioned casually."
   - Reality: Awareness but no structure
   - Problem: AI outputs treated as final; errors slip through

3. **Level 3 (Transitioning):** "We have review norms emerging (AI outputs are drafts, not finals). Evidence standards discussed but not codified."
   - Reality: Cultural shift underway
   - Problem: Norms are informal; not everyone follows them

4. **Level 4 (AI-Shaped):** "Clear protocols: AI outputs require human validation, evidence standards codified, decision authority explicit (AI recommends, humans decide). Team treats AI as co-intelligence."
   - Reality: AI integrated into team operating system
   - Outcome: High-quality outputs; psychological safety maintained

**Select your level:** [1, 2, 3, or 4]

**User response:** [Selection]

**Agent records:** Team-AI Facilitation maturity = [Level X]

---

### Step 5: Strategic Differentiation Maturity

**Agent asks:**

Finally, **Strategic Differentiation**—are you creating defensible competitive advantages, or just efficiency gains?

**Which statement best describes your current state?**

1. **Level 1 (AI-First):** "We use AI to work faster (write better docs, respond to customers quicker). Efficiency gains only."
   - Reality: Table-stakes improvements
   - Problem: Competitors can copy this within months

2. **Level 2 (Emerging):** "AI enables us to do things we couldn't before (analyze 10x more data, test more hypotheses). New capabilities, but competitors could replicate."
   - Reality: Capability expansion, but not defensible
   - Problem: No moat; competitors hire more people to match

3. **Level 3 (Transitioning):** "We've redesigned some workflows around AI (e.g., validate hypotheses in 2 days vs. 3 weeks). Starting to create separation."
   - Reality: Workflow advantages emerging
   - Problem: Not yet systematic; only applied in pockets

4. **Level 4 (AI-Shaped):** "We've fundamentally rewired how we operate: customers get capabilities they can't get elsewhere, our learning cycles are 10x faster than industry standard, our economics are 5x better. Competitors can't replicate without full org redesign."
   - Reality: Defensible competitive moat
   - Outcome: Strategic advantage that compounds over time

**Select your level:** [1, 2, 3, or 4]

**User response:** [Selection]

**Agent records:** Strategic Differentiation maturity = [Level X]

---

### Step 6: Assess Maturity Profile

**Agent synthesizes:**

Here's your AI-Shaped Readiness Profile:

```
┌─────────────────────────────┬───────┬──────────┐
│ Competency                  │ Level │ Maturity │
├─────────────────────────────┼───────┼──────────┤
│ 1. Context Design           │   X   │ [Label]  │
│ 2. Agent Orchestration      │   X   │ [Label]  │
│ 3. Outcome Acceleration     │   X   │ [Label]  │
│ 4. Team-AI Facilitation     │   X   │ [Label]  │
│ 5. Strategic Differentiation│   X   │ [Label]  │
└─────────────────────────────┴───────┴──────────┘

Overall Assessment: [AI-First / Emerging / Transitioning / AI-Shaped]
```

**Maturity Labels:**
- **Level 1:** AI-First (efficiency only)
- **Level 2:** Emerging (early capabilities)
- **Level 3:** Transitioning (redesign underway)
- **Level 4:** AI-Shaped (strategic transformation)

**Overall Assessment Logic:**
- **AI-First:** Average below 2.0 (mostly Level 1s)
- **Emerging:** Average 2.0 to 2.9 (mostly Level 2s)
- **Transitioning:** Average 3.0 to 3.4 (mostly Level 3s)
- **AI-Shaped:** Average 3.5 or higher (mostly Level 4s)
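The averaging logic can be made executable. A small sketch (the exact cutoffs are one reading of the bands; the sample profile mirrors Example 1 in the Examples section):

```python
def overall_assessment(levels):
    """Collapse five competency levels (1-4) into an overall label.

    Cutoffs are one reading of the assessment bands; adjust to your rubric.
    """
    avg = sum(levels) / len(levels)
    if avg < 2.0:
        return "AI-First"
    if avg < 3.0:
        return "Emerging"
    if avg < 3.5:
        return "Transitioning"
    return "AI-Shaped"

# Example 1's profile: mostly Level 1s across the five competencies.
profile = [1, 1, 1, 2, 1]
print(overall_assessment(profile))  # AI-First (average 1.2)
```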

---

### Step 7: Identify Priority Gap

**Agent asks:**

Based on your maturity profile, which competency should you prioritize first?

**Agent analyzes dependencies:**

**Dependency Logic:**
1. **Context Design is foundational** — If Level 1-2, this must be priority #1 (Agent Orchestration and Outcome Acceleration depend on it)
2. **Agent Orchestration enables Outcome Acceleration** — If Context Design is Level 3+, but Agent Orchestration is Level 1-2, prioritize orchestration
3. **Team-AI Facilitation is parallel** — Can be developed alongside others, but required for scale
4. **Strategic Differentiation requires Levels 3+ on others** — Don't focus here until foundational competencies are built

**Agent recommends:**

Based on your profile, I recommend focusing on **[Competency Name]** first because:

**Option 1: Context Design (if Level 1-2)**
- **Why:** Without durable context, AI operates on vibes. Every workflow will be fragile.
- **Impact:** Unlocks Agent Orchestration and Outcome Acceleration
- **Next Steps:** Build CLAUDE.md files, start constraints registry, create operational glossary

**Option 2: Agent Orchestration (if Context is 3+, but Orchestration is 1-2)**
- **Why:** You have context, but no repeatable workflows. Scaling requires orchestration.
- **Impact:** Turn one-off prompts into reliable, traceable workflows
- **Next Steps:** Document your most frequent AI workflow, version-control prompts, add traceability

**Option 3: Outcome Acceleration (if Context + Orchestration are 3+)**
- **Why:** You have infrastructure; now compress learning cycles
- **Impact:** Strategic advantage emerges from speed-to-learning
- **Next Steps:** Identify biggest bottleneck in learning cycle, design AI workflow to eliminate it

**Option 4: Team-AI Facilitation (if usage is individual, not team-wide)**
- **Why:** Can't scale if only you're AI-shaped; team must adopt
- **Impact:** Organizational transformation, not just individual productivity
- **Next Steps:** Establish review norms, codify evidence standards, create decision authority framework

**Option 5: Strategic Differentiation (if all others are 3+)**
- **Why:** You have the foundation; now build the moat
- **Impact:** Create defensible competitive advantage
- **Next Steps:** Identify workflow competitors can't replicate, design AI-enabled customer capabilities

**Which would you like to focus on?**

**Options:**
1. **Accept recommendation** — [Agent provides detailed action plan]
2. **Choose different priority** — [Agent warns about dependencies but allows override]
3. **Focus on multiple simultaneously** — [Agent suggests parallel tracks if feasible]

**User response:** [Selection]

---

### Step 8: Generate Action Plan

**Agent provides tailored action plan based on selected priority:**

---

#### If Priority = Context Design

**Goal:** Build a durable "reality layer" that both humans and AI trust—move from context stuffing to context engineering.

**Pre-Phase: Diagnose Context Stuffing (If Needed)**
If you're at Level 1-2, first diagnose context stuffing symptoms:
1. Run through the 5 diagnostic questions (see [`context-engineering-advisor`](../context-engineering-advisor/SKILL.md))
2. Identify what you're persisting that should be retrieved
3. Assign context boundary owner
4. Create Context Manifest (what's always-needed vs. episodic)

**Phase 1: Document Constraints (Week 1)**
1. Create a constraints registry:
   - Technical constraints (APIs, data models, performance limits)
   - Regulatory constraints (GDPR, HIPAA, etc.)
   - Strategic constraints (we will/won't build X)
2. Apply diagnostic question #4 to each constraint: "What fails if we exclude this?"
3. Format: Structured file AI agents can parse (YAML, JSON, or Markdown with frontmatter)
4. Version control in Git
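One way to make this phase concrete: keep each entry in a structured file and validate it on commit. A hypothetical sketch (the schema and field names are illustrative, not prescribed by this skill):

```python
# Hypothetical constraints-registry shape; store as YAML/JSON in Git
# and run this check in CI so malformed entries never land.
REQUIRED_FIELDS = {"id", "type", "constraint", "fails_if_excluded"}

registry = [
    {
        "id": "TECH-001",
        "type": "technical",
        "constraint": "Public API rate limit is 100 req/min per tenant",
        "fails_if_excluded": "AI proposes batch jobs that get throttled in production",
    },
    {
        "id": "REG-001",
        "type": "regulatory",
        "constraint": "EU user data must stay in EU regions (GDPR)",
        "fails_if_excluded": "AI suggests architectures that violate data residency",
    },
]

def validate(entries):
    """Reject entries missing required fields, especially the
    'what fails if we exclude this?' answer from diagnostic question #4."""
    for entry in entries:
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            raise ValueError(f"{entry.get('id', '?')}: missing {sorted(missing)}")
    return True

validate(registry)
```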

**Phase 2: Build Operational Glossary (Week 2)**
1. List top 20-30 terms your team uses (e.g., "user," "customer," "activation," "churn")
2. Define each unambiguously (avoid "it depends")
3. Include edge cases and exceptions
4. Add to CLAUDE.md or project instructions
5. This becomes your **long-term persistent memory** (Declarative Memory)

**Phase 3: Establish Evidence Standards + Context Boundaries (Week 3)**
1. Define what counts as validation:
   - User feedback: "X users said Y" (with quotes)
   - Analytics: "Metric Z changed by N%" (with dashboard link)
   - Competitive intel: "Competitor A launched B" (with source)
2. Reject: "I think," "We feel," "It seems like"
3. Define context boundaries using the 5 diagnostic questions:
   - What specific decision does each piece of context support?
   - Can retrieval replace persistence?
   - Who owns the context boundary?
4. Create Context Manifest document
5. Codify in team docs

**Phase 4: Implement Memory Architecture + Workflows (Week 4)**
1. **Set up two-layer memory:**
   - **Short-term (conversational):** Summarize/truncate older parts of conversation
   - **Long-term (persistent):** Constraints registry + operational glossary (consider vector database for retrieval)
2. **Implement Research→Plan→Reset→Implement cycle:**
   - Research: Allow chaotic context gathering
   - Plan: Synthesize into high-density SPEC.md or PLAN.md
   - Reset: Clear context window
   - Implement: Use only the plan as context
3. Update AI prompts to reference constraints registry and glossary
4. Test: Ask AI to cite constraints when making recommendations
5. Measure: % of AI outputs that cite evidence vs. hallucinate; token usage efficiency
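The Research→Plan→Reset→Implement cycle above can be expressed as a tiny driver that makes the reset explicit. A sketch under assumptions: `ask_model` stands in for whatever LLM call you use, and the stub only illustrates the data flow.

```python
def run_cycle(ask_model, sources):
    """Research -> Plan -> Reset -> Implement.

    The key move: implementation sees only the distilled plan,
    never the raw research context, which prevents context rot.
    """
    # Research: chaotic context gathering is fine here.
    research_context = "\n\n".join(sources)
    # Plan: synthesize into a high-density PLAN.md.
    plan = ask_model("Synthesize these notes into a terse PLAN.md.", research_context)
    # Reset: drop the research context entirely.
    del research_context
    # Implement: the plan is the only context carried forward.
    return ask_model("Implement exactly what PLAN.md specifies.", plan)

# Stub "model" for illustration only: reports which stage ran.
def stub(prompt, context):
    return f"{prompt.split()[0]} done ({len(context)} chars in)"

print(run_cycle(stub, ["interview notes", "analytics export"]))
```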

**Success Criteria:**
- ✅ Constraints registry has 20+ entries
- ✅ Operational glossary has 20-30 terms
- ✅ Evidence standards documented and shared
- ✅ Context Manifest created (always-needed vs. episodic)
- ✅ Context boundary owner assigned
- ✅ Two-layer memory architecture implemented
- ✅ Research→Plan→Reset→Implement cycle tested on 1 workflow
- ✅ AI agents reference these automatically
- ✅ Token usage down 30%+ (less context stuffing)
- ✅ Output consistency up (fewer retries)

**Related Skills:**
- **[`context-engineering-advisor`](../context-engineering-advisor/SKILL.md)** (Interactive) — Deep dive on diagnosing context stuffing and implementing memory architecture
- `problem-statement.md` — Define constraints before framing problems
- `epic-hypothesis.md` — Evidence-based hypothesis writing

---

#### If Priority = Agent Orchestration

**Goal:** Turn one-off prompts into repeatable, traceable AI workflows.

**Phase 1: Map Current Workflows (Week 1)**
1. Pick your most frequent AI use case (e.g., "analyze user feedback")
2. Document every step you currently take:
   - Copy/paste feedback into ChatGPT
   - Ask for themes
   - Manually categorize
   - Write summary
3. Identify pain points (manual handoffs, inconsistent results)

**Phase 2: Design Orchestrated Workflow (Week 2)**
1. Define workflow loop:
   - **Research:** AI reads all feedback (structured input)
   - **Synthesis:** AI identifies themes (with evidence)
   - **Critique:** AI flags contradictions or weak signals
   - **Decision:** Human reviews and decides next steps
   - **Log:** AI records rationale and sources
2. Each step must be traceable (show sources, reasoning)
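The loop above can be wired so every step leaves a trace entry. A minimal sketch (again, `ask_model` is a placeholder for your LLM call; only synthesis and critique are automated, and the decision stays human):

```python
from datetime import datetime, timezone

def run_feedback_workflow(feedback_items, ask_model):
    """Research -> Synthesis -> Critique, with one trace entry per step.

    ask_model(prompt, context) is a placeholder for your LLM call.
    The Decision step stays with a human; the trace records what they saw.
    """
    trace = []

    def step(name, prompt, context):
        output = ask_model(prompt, context)
        trace.append({
            "step": name,
            "prompt": prompt,
            "sources": context,
            "output": output,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return output

    # Research: number each item so later steps can cite sources.
    corpus = "\n".join(f"[{i}] {item}" for i, item in enumerate(feedback_items, 1))
    themes = step("synthesis", "Identify themes, citing item numbers.", corpus)
    critique = step("critique", "Flag contradictions or weak signals.", themes)
    return {"themes": themes, "critique": critique, "trace": trace}
```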

**Phase 3: Build and Test (Week 3)**
1. Implement workflow using:
   - Claude Projects (if simple)
   - Custom GPTs (if moderate)
   - API orchestration (if complex)
2. Run on 3 past examples; compare to manual process
3. Measure: Time saved, consistency improved, traceability added

**Phase 4: Document and Scale (Week 4)**
1. Version-control prompts (Git)
2. Document workflow steps for team
3. Train 2 teammates; observe results
4. Iterate based on feedback

**Success Criteria:**
- ✅ At least 1 workflow runs consistently (same inputs → predictable process)
- ✅ Each step is traceable (AI cites sources)
- ✅ Team can replicate workflow without your involvement

**Related Skills:**
- `pol-probe-advisor.md` — Use orchestrated workflows for validation experiments

---

#### If Priority = Outcome Acceleration

**Goal:** Use AI to compress learning cycles, not just speed up tasks.

**Phase 1: Identify Bottleneck (Week 1)**
1. Map your current learning cycle (e.g., hypothesis → experiment → analysis → decision)
2. Time each step
3. Identify slowest step (usually: validation lag, approval delays, or meeting overhead)

**Phase 2: Design AI Intervention (Week 2)**
1. Ask: "What if this step happened overnight?"
   - Feasibility checks: AI spike in 2 hours vs. 2 days
   - User research synthesis: AI analysis in 1 hour vs. 1 week
   - Approval pre-checks: AI validates against constraints before meeting
2. Design minimal AI workflow to eliminate bottleneck

**Phase 3: Run Pilot (Week 3)**
1. Test AI intervention on 1 real initiative
2. Measure cycle time: before vs. after
3. Validate quality: Did AI maintain rigor, or cut corners?
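The before/after measurement is simple arithmetic, but standardizing it keeps "cycle time down 50%+" meaning the same thing across pilots. A sketch (the report fields are assumptions):

```python
def cycle_report(name, before_days, after_days):
    """Summarize a pilot: speedup factor and percent reduction in cycle time."""
    speedup = before_days / after_days
    reduction_pct = round(100 * (1 - after_days / before_days))
    return {"initiative": name,
            "speedup": round(speedup, 1),
            "reduction_pct": reduction_pct}

# Example 2's numbers: discovery cycles cut from 3 weeks to 5 days.
print(cycle_report("discovery", before_days=21, after_days=5))
```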

**Phase 4: Scale (Week 4)**
1. If successful (cycle time down 50%+, quality maintained), apply to 3 more initiatives
2. Document workflow
3. Train team

**Success Criteria:**
- ✅ Learning cycle compressed by 50%+ on at least 1 initiative
- ✅ Quality maintained (no shortcuts that compromise rigor)
- ✅ Team adopts the accelerated workflow

**Related Skills:**
- `pol-probe.md` — Use AI to run PoL probes faster
- `discovery-process.md` — Compress discovery cycles with AI

---

#### If Priority = Team-AI Facilitation

**Goal:** Redesign team systems so AI operates as co-intelligence, not an accountability shield.

**Phase 1: Establish Review Norms (Week 1)**
1. Codify rule: "AI outputs are drafts, not finals"
2. Define review protocol:
   - Who reviews AI outputs? (peer, lead PM, cross-functional partner)
   - When? (before sharing externally, before decisions)
   - What to check? (accuracy, completeness, evidence citation)
3. Share with team, get buy-in

**Phase 2: Set Evidence Standards (Week 2)**
1. AI must cite sources (no hallucinations)
2. Reject outputs that say "I think" or "it seems"
3. Require: "According to [source], [fact]"
4. Add to team operating docs
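This evidence standard can be partially automated with a lint-style check on AI drafts. A sketch (the patterns are illustrative, not a complete policy):

```python
import re

# Hedge phrases the standard rejects; extend to match your team's list.
HEDGES = re.compile(r"\b(i think|we feel|it seems(?: like)?)\b", re.IGNORECASE)
# Looks for an "According to [source], ..." style citation.
CITATION = re.compile(r"according to \[?[^\]\n,]+\]?", re.IGNORECASE)

def review_output(text):
    """Return a list of problems; an empty list means the draft passes review."""
    problems = []
    if HEDGES.search(text):
        problems.append("contains hedge language instead of evidence")
    if not CITATION.search(text):
        problems.append("no 'According to [source], ...' citation found")
    return problems

assert review_output("According to [Q3 survey], 42% of users churned.") == []
assert review_output("I think churn is fine.") != []
```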

**Phase 3: Define Decision Authority (Week 3)**
1. Clarify: AI recommends, humans decide
2. Document who has authority to override AI recommendations (PM, team lead, cross-functional consensus)
3. Create escalation path (what if AI and human disagree?)

**Phase 4: Build Psychological Safety (Week 4)**
1. Team exercise: Share an AI mistake you caught (normalize catching errors)
2. Reward critical thinking ("Good catch on that AI hallucination!")
3. Avoid: "Why didn't you just use AI?" (shaming)

**Success Criteria:**
- ✅ Review norms documented and followed by team
- ✅ Evidence standards codified
- ✅ Decision authority clear
- ✅ Team comfortable challenging AI outputs

**Related Skills:**
- `problem-statement.md` — Evidence-based problem framing
- `epic-hypothesis.md` — Testable, evidence-backed hypotheses

---

#### If Priority = Strategic Differentiation

**Goal:** Create defensible competitive advantages, not just efficiency gains.

**Phase 1: Identify Moat Opportunities (Week 1)**
1. Ask: "What could we do with AI that competitors can't replicate by adding headcount?"
   - New customer capabilities (e.g., "AI advisor suggests personalized roadmap")
   - Workflow rewiring (e.g., "Validate product ideas in 2 days vs. 3 weeks")
   - Economics shift (e.g., "Deliver enterprise features at SMB prices via AI automation")
2. List 5 candidates
3. Prioritize by defensibility (how hard to copy?)

**Phase 2: Design AI-Enabled Capability (Week 2)**
1. Pick top candidate
2. Design end-to-end workflow:
   - What does the customer experience?
   - What does AI do behind the scenes?
   - What human judgment is required?
3. Sketch MVP (minimum viable moat)

**Phase 3: Build and Test (Weeks 3-4)**
1. Build prototype (can be PoL probe, not production)
2. Test with 5 customers
3. Measure: Does this create value competitors can't match?

**Phase 4: Validate Moat (Week 5)**
1. Ask: "How would a competitor replicate this?"
   - If answer is "hire more people," it's not a moat
   - If answer is "redesign their entire org," you have a moat
2. Document competitive analysis
3. Decide: Build full version, pivot, or kill

**Success Criteria:**
- ✅ Identified at least 1 AI-enabled capability competitors can't easily copy
- ✅ Validated with customers (they see the value)
- ✅ Confirmed defensibility (competitor analysis)

**Related Skills:**
- `positioning-statement.md` — Articulate your AI-driven differentiation
- `jobs-to-be-done.md` — Understand what customers hire your AI capabilities to do

---

### Step 9: Track Progress (Optional)

**Agent offers:**

Would you like me to create a progress tracker for your AI-shaped transformation?

**Tracker includes:**
- Current maturity levels (baseline)
- Target maturity levels (goal state)
- Action plan milestones (from Step 8)
- Review cadence (weekly, monthly)
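The checklist the agent offers can be generated mechanically from the assessment results. A hypothetical sketch (the structure and field names are illustrative):

```python
def tracker_markdown(baseline, targets, milestones):
    """Render a simple Markdown progress tracker from assessment results."""
    lines = ["# AI-Shaped Transformation Tracker", "", "## Maturity (current -> target)"]
    for competency, level in baseline.items():
        lines.append(f"- [ ] {competency}: Level {level} -> Level {targets[competency]}")
    lines += ["", "## Milestones"]
    lines += [f"- [ ] {milestone}" for milestone in milestones]
    return "\n".join(lines)

md = tracker_markdown(
    baseline={"Context Design": 1, "Agent Orchestration": 1},
    targets={"Context Design": 3, "Agent Orchestration": 2},
    milestones=["Week 1: constraints registry", "Week 2: operational glossary"],
)
print(md)
```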

**Options:**
1. **Yes, create tracker** — [Agent generates Markdown checklist]
2. **No, I'll track separately** — [Agent provides summary]

---

## Examples

### Example 1: Early-Stage Startup (AI-First → Emerging)

**Context:**
- Team: 2 PMs, 5 engineers
- AI Usage: ChatGPT for writing PRDs, occasional Copilot usage
- Goal: Move faster than larger competitors

**Assessment Results:**
- Context Design: Level 1 (no structured context)
- Agent Orchestration: Level 1 (one-off prompts)
- Outcome Acceleration: Level 1 (docs faster, but learning cycles unchanged)
- Team-AI Facilitation: Level 2 (team uses AI, but no norms)
- Strategic Differentiation: Level 1 (efficiency only)

**Recommendation:** Focus on **Context Design** first.

**Action Plan (Week 1-4):**
- Week 1: Create constraints registry (10 technical constraints)
- Week 2: Build operational glossary (15 terms)
- Week 3: Establish evidence standards
- Week 4: Add context to CLAUDE.md files

**Outcome:** After 4 weeks, Context Design → Level 3. Unlocks Agent Orchestration next quarter.

---

### Example 2: Growth-Stage Company (Transitioning → AI-Shaped)

**Context:**
- Team: 10 PMs, 50 engineers, 5 designers
- AI Usage: Claude Projects for research, custom workflows emerging
- Goal: Build defensible AI advantage before IPO

**Assessment Results:**
- Context Design: Level 3 (structured context, not comprehensive)
- Agent Orchestration: Level 3 (some workflows, manual handoffs)
- Outcome Acceleration: Level 2 (modest gains, not systematic)
- Team-AI Facilitation: Level 3 (norms emerging, not codified)
- Strategic Differentiation: Level 2 (new capabilities, but copyable)

**Recommendation:** Focus on **Outcome Acceleration** (foundation is solid; now compress learning cycles).

**Action Plan (Week 1-4):**
- Week 1: Identify bottleneck (discovery cycles take 3 weeks)
- Week 2: Design AI workflow to run overnight feasibility checks
- Week 3: Pilot on 1 initiative (cut cycle to 5 days)
- Week 4: Scale to 3 initiatives

**Outcome:** Learning cycles 5x faster → strategic separation from competitors → Level 4 Outcome Acceleration + Level 3 Strategic Differentiation.

---

### Example 3: Enterprise Company (AI-First, Scattered Usage)

**Context:**
- Team: 50 PMs, 300 engineers
- AI Usage: Individual PMs use various tools, no consistency
- Goal: Standardize AI usage, create cross-functional workflows

**Assessment Results:**
- Context Design: Level 2 (docs exist, not structured for AI)
- Agent Orchestration: Level 1 (no shared workflows)
- Outcome Acceleration: Level 1 (efficiency only)
- Team-AI Facilitation: Level 1 (private usage, no norms)
- Strategic Differentiation: Level 1 (no advantage)

**Recommendation:** Focus on **Team-AI Facilitation** first (distributed team needs shared norms before building infrastructure).

**Action Plan (Week 1-4):**
- Week 1: Establish review norms (AI outputs are drafts)
- Week 2: Set evidence standards (AI must cite sources)
- Week 3: Define decision authority (AI recommends, leads decide)
- Week 4: Pilot with 3 teams, gather feedback

**Outcome:** Team-AI Facilitation → Level 3. Creates foundation for Context Design and Agent Orchestration next.

---

## Common Pitfalls

### 1. **Mistaking Efficiency for Differentiation**
**Failure Mode:** "We use AI to write PRDs 2x faster—we're AI-shaped!"

**Consequence:** Competitors copy within 3 months; no lasting advantage.

**Fix:** Ask: "If a competitor threw 2x more people at this, could they match us?" If yes, it's efficiency (table stakes), not differentiation.

---

### 2. **Skipping Context Design**
**Failure Mode:** Building Agent Orchestration workflows without durable context.

**Consequence:** AI workflows are fragile (context changes break everything).

**Fix:** Context Design is foundational. Don't skip it. Build constraints registry, glossary, evidence standards first.

---

### 3. **Individual Usage, Not Team Transformation**
**Failure Mode:** "I'm AI-shaped, but my team isn't."

**Consequence:** Can't scale; workflows die when you're on vacation.

**Fix:** Prioritize Team-AI Facilitation. Shared norms > individual productivity.

---

### 4. **Focusing on Tools, Not Workflows**
**Failure Mode:** "Should we use Claude or ChatGPT?"

**Consequence:** Tool debates distract from organizational redesign.

**Fix:** Tools don't matter. Workflows matter. Focus on redesigning how work gets done, not which AI you use.

---

### 5. **Speed Over Learning**
**Failure Mode:** "AI helps us ship faster!"

**Consequence:** Ship the wrong thing faster (if you're not compressing learning cycles).

**Fix:** Outcome Acceleration is about learning faster, not building faster. Validate hypotheses in days, not weeks.

---

## References

### Related Skills
- **[context-engineering-advisor](../context-engineering-advisor/SKILL.md)** (Interactive) — **Deep dive on Context Design competency:** Diagnose context stuffing, implement memory architecture, use Research→Plan→Reset→Implement cycle
- **[problem-statement](../problem-statement/SKILL.md)** (Component) — Evidence-based problem framing (Context Design)
- **[epic-hypothesis](../epic-hypothesis/SKILL.md)** (Component) — Testable hypotheses with evidence standards
- **[pol-probe-advisor](../pol-probe-advisor/SKILL.md)** (Interactive) — Use AI to compress validation cycles (Outcome Acceleration)
- **[discovery-process](../discovery-process/SKILL.md)** (Workflow) — Apply AI-shaped principles to discovery
- **[positioning-statement](../positioning-statement/SKILL.md)** (Component) — Articulate your AI-driven differentiation (Strategic Differentiation)

### External Frameworks
- **Dean Peters** — [*AI-First Is Cute. AI-Shaped Is Survival.*](https://deanpeters.substack.com/p/ai-first-is-cute-ai-shaped-is-survival) (Dean Peters' Substack, 2026)
- **Dean Peters** — [*Context Stuffing Is Not Context Engineering*](https://deanpeters.substack.com/p/context-stuffing-is-not-context-engineering) (Dean Peters' Substack, 2026) — Deep dive on Competency #1 (Context Design)

### Further Reading
- **Ethan Mollick** — *Co-Intelligence* (on AI as co-intelligence, not replacement)
- **Shreyas Doshi** — Twitter threads on PM judgment augmentation with AI
- **Lenny Rachitsky** — Newsletter interviews with AI-forward PMs