@laith-wallace/crisp 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,158 @@
1
+ ---
2
+ name: crisp-audit
3
+ description: Full CRISP framework evaluation of a UI design. Scores all five dimensions — Contextual, Responsive, Intelligent, Seamless, Powerful — with P0–P3 severity ratings, benchmarks against Stripe, Linear, Notion, Asana, and Slack, and delivers a prioritised action plan. Use when doing a thorough design review. If .crisp.md exists, load it first for project context.
4
+ user-invocable: true
5
+ ---
6
+
7
+ # /crisp-audit — Full CRISP Evaluation
8
+
9
+ Analyse the provided design (screenshot, Figma link, or description) with the critical eye of a senior product designer. If `.crisp.md` exists in the project root, load it before beginning — your analysis should be grounded in the specific product context, users, and benchmarks documented there.
10
+
11
+ ## Step 1: 30-Second Scan
12
+
13
+ Before structured analysis, capture first impressions:
14
+ - **Strengths**: What works immediately
15
+ - **Red Flags**: The most critical issues visible at a glance
16
+ - **Overall Grade**: A–F with one-line justification
17
+
18
+ ## Step 2: CRISP Dimension Scoring
19
+
20
+ Rate each dimension 1–10 and identify specific violations. Use the failure indicators below as your diagnostic criteria.
21
+
22
+ ### C — Contextual
23
+ **Test:** Can the user tell where they are and what this page does within 5 seconds?
24
+
25
+ Fail indicators:
26
+ - Generic empty states ("No data available" vs. "You haven't added any deals yet. Add your first one to start tracking.")
27
+ - Missing breadcrumbs or location signals on deep pages
28
+ - Page title or heading doesn't reflect what the user is doing
29
+ - No orientation after a user action ("What just happened?")
30
+
31
+ Violation examples:
32
+ - Generic "Success" message → Fails C. Tell the user exactly what changed.
33
+ - Empty dashboard with no call-to-action → Fails C. Show what they're missing and how to get it.
34
+
35
+ ### R — Responsive
36
+ **Test:** Does the UI update immediately on every interaction?
37
+
38
+ Fail indicators:
39
+ - Spinner appears on filter changes, tab switches, or any action the user can predict the result of
40
+ - Click-wait-update patterns where the user has to wait to see their action reflected
41
+ - No hover feedback on interactive elements
42
+ - No loading skeleton — blank space appears while content loads
43
+
44
+ Violation examples:
45
+ - Spinner on search → Fails R. Use debounced optimistic filtering.
46
+ - Page refresh on form submit → Fails R. Update inline, background sync.
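The "debounced optimistic filtering" fix named above can be sketched concretely. This is a minimal illustration, not part of the framework: the visible list is filtered from a local cache synchronously (no spinner), while the authoritative server query is debounced so it fires only after the user stops typing. `query_fn` is a hypothetical stand-in for the real API call.

```python
import threading

class DebouncedFilter:
    """Filter a locally cached list immediately; schedule the server
    query only after the user pauses typing (debounce)."""

    def __init__(self, items, query_fn, delay=0.25):
        self.items = items          # local cache, filtered optimistically
        self.query_fn = query_fn    # background reconciliation call
        self.delay = delay
        self._timer = None

    def on_input(self, text):
        # Optimistic: filter the cache synchronously, so no spinner.
        visible = [i for i in self.items if text.lower() in i.lower()]
        # Debounced: every keystroke resets the pending server call.
        if self._timer:
            self._timer.cancel()
        self._timer = threading.Timer(self.delay, self.query_fn, args=[text])
        self._timer.start()
        return visible
```

The user sees results on every keystroke; the server sees only the final query.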
47
+
48
+ ### I — Intelligent
49
+ **Test:** Are we showing insight, not raw data?
50
+
51
+ Fail indicators:
52
+ - Numbers displayed without context, comparison, or suggested action
53
+ - Blank/empty forms when we already know the user's data
54
+ - No next-best-action when the user reaches a dead end
55
+ - We know the user's history but present them with a generic experience
56
+
57
+ Violation examples:
58
+ - "1,247" with no label, no comparison, no action → Fails I.
59
+ - Empty form when user has done this before → Fails I. Pre-populate from their last session.
60
+
61
+ ### S — Seamless
62
+ **Test:** Are we fitting into their day — not forcing them into ours?
63
+
64
+ Fail indicators:
65
+ - Redirect to a portal or separate app for a task the user considers routine
66
+ - Forced login to complete an action that could be handled inline or via email
67
+ - Custom UI components that break familiar mental models (e.g. a custom dropdown that doesn't behave like a dropdown)
68
+ - Unnecessary page reloads for contextual tasks
69
+
70
+ Violation examples:
71
+ - "Open in portal to approve" → Fails S. Inline approval card, one click.
72
+ - Custom date picker with non-standard interactions → Fails S. Use the native or library standard.
73
+
74
+ ### P — Powerful
75
+ **Test:** Is complexity hidden appropriately for each user type?
76
+
77
+ Fail indicators:
78
+ - All settings visible to all users regardless of role or experience level
79
+ - No keyboard shortcuts for power users
80
+ - No undo — destructive actions are permanent without confirmation
81
+ - Advanced options surfaced to novices who don't need them
82
+
83
+ Violation examples:
84
+ - 23 settings on the main view → Fails P. Surface 3–5, collapse the rest.
85
+ - Delete with no undo or confirm → Fails P. Soft delete with 5-second undo.
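The "soft delete with 5-second undo" fix can work like this: the item disappears from the UI immediately (which also satisfies R), but the irreversible call is deferred until the undo window lapses. A sketch with an injectable clock so the window is testable; `commit_fn` is a hypothetical stand-in for the real delete call.

```python
import time

class SoftDelete:
    """Hide an item immediately; commit the destructive call only
    after the undo window has lapsed without an undo."""

    def __init__(self, commit_fn, window=5.0, clock=None):
        self.commit_fn = commit_fn   # the real, irreversible delete
        self.window = window         # undo window in seconds
        self.clock = clock or time.monotonic
        self.pending = {}            # item_id -> deletion timestamp

    def delete(self, item_id):
        self.pending[item_id] = self.clock()  # hidden in the UI from now

    def undo(self, item_id):
        return self.pending.pop(item_id, None) is not None

    def flush(self):
        """Commit deletions whose undo window has lapsed."""
        now = self.clock()
        for item_id, stamp in list(self.pending.items()):
            if now - stamp >= self.window:
                self.commit_fn(item_id)
                del self.pending[item_id]
```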
86
+
87
+ ## Step 3: Severity Rating
88
+
89
+ Rate each violation using this scale:
90
+
91
+ | Priority | Definition | Example |
92
+ |----------|-----------|---------|
93
+ | P0 | Blocks the user entirely | Empty state with no recovery path |
94
+ | P1 | Major friction — user can work around it but shouldn't have to | Spinner on every filter change |
95
+ | P2 | Noticeable degradation in experience | Generic empty state copy |
96
+ | P3 | Minor polish issue | Missing hover state on secondary action |
97
+
98
+ ## Step 4: Benchmark Comparison
99
+
100
+ Compare against one or more of these exemplars (or the benchmarks from `.crisp.md`):
101
+ - **Stripe**: Clean, trustworthy, progressive disclosure, excellent error handling
102
+ - **Linear**: Minimal, fast, keyboard-native, excellent micro-interactions
103
+ - **Notion**: Flexible, intuitive, powerful yet approachable, great onboarding
104
+ - **Asana**: Task-focused, clear status indicators, seamless collaboration
105
+ - **Slack**: Clear hierarchy, efficient workflows, contextual design
106
+
107
+ ## Output Format
108
+
109
+ Structure the audit as:
110
+
111
+ ```
112
+ ## CRISP Audit: [Screen/Feature Name]
113
+
114
+ **Grade: [A–F]** — [One-line verdict]
115
+
116
+ ### 30-Second Impression
117
+ Strengths: [2–3 bullet points]
118
+ Red Flags: [2–3 bullet points]
119
+
120
+ ### CRISP Scorecard
121
+ | Dimension | Score | Key Violation |
122
+ |-------------|-------|--------------------------------------|
123
+ | Contextual | /10 | [Most critical C failure] |
124
+ | Responsive | /10 | [Most critical R failure] |
125
+ | Intelligent | /10 | [Most critical I failure] |
126
+ | Seamless | /10 | [Most critical S failure] |
127
+ | Powerful | /10 | [Most critical P failure] |
128
+ | **Total** | **/50** | |
129
+
130
+ ### Violations by Priority
131
+ **P0 — Fix immediately**
132
+ - [Dimension tag] [Specific violation] → [Specific fix]
133
+
134
+ **P1 — Fix this sprint**
135
+ - [Dimension tag] [Specific violation] → [Specific fix]
136
+
137
+ **P2–P3 — Backlog**
138
+ - [List]
139
+
140
+ ### Quick Wins (high-impact, low-effort)
141
+ - [List]
142
+
143
+ ### Benchmark Comparison
144
+ Compared against [Stripe / Linear / Notion / project benchmark]:
145
+ [2–3 sentences on where this design falls vs. the benchmark]
146
+
147
+ ### Strategic Recommendations
148
+ - [How to elevate to world-class — 2–3 points]
149
+ - [User research questions to validate]
150
+ - [Success metrics to track]
151
+ ```
152
+
153
+ ## Analysis Style
154
+
155
+ - Be direct and specific. "This fails I" is useful. "The UX could be improved" is not.
156
+ - Every violation should have a specific fix, not a direction.
157
+ - Prioritise by user impact, not by what's easiest to say.
158
+ - Reference the user context from `.crisp.md` if available — a violation matters more or less depending on who the user is.
@@ -0,0 +1,84 @@
1
+ ---
2
+ name: crisp-review
3
+ description: 30-second CRISP design scan. Returns a grade A–F and the top 3 issues by user impact with specific fix suggestions. Use during rapid design iteration when a full audit would slow you down. If .crisp.md exists, load it for project context.
4
+ user-invocable: true
5
+ ---
6
+
7
+ # /crisp-review — Quick CRISP Scan
8
+
9
+ A fast, high-signal design pass. Not a full audit — a diagnostic. Use this during iteration when you need clear direction, not a comprehensive report.
10
+
11
+ If `.crisp.md` exists in the project root, load it. Your review should be grounded in the specific product, users, and priorities documented there.
12
+
13
+ ## What to Evaluate
14
+
15
+ Scan the design against all five CRISP dimensions, but don't score each one individually. Instead:
16
+
17
+ 1. Identify the **single most critical issue per dimension** (if one exists)
18
+ 2. From those, surface the **top 3 issues by user impact**
19
+ 3. Assign a **grade** that reflects the overall quality
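The three steps above reduce to a simple selection: worst issue per dimension, then the three with the highest user impact. A sketch under the assumption that issues carry a numeric impact score, which is an illustrative device, not something the command requires the reviewer to compute.

```python
def top_three(issues):
    """issues: list of (dimension, impact_score, description).
    Keep the single worst issue per dimension, then return the
    top 3 of those by user impact."""
    worst = {}
    for dim, impact, desc in issues:
        if dim not in worst or impact > worst[dim][1]:
            worst[dim] = (dim, impact, desc)
    return sorted(worst.values(), key=lambda i: -i[1])[:3]
```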
20
+
21
+ ## Grading Scale
22
+
23
+ | Grade | Meaning |
24
+ |-------|---------|
25
+ | A | World-class. Ship it. Minor polish only. |
26
+ | B | Good. One or two fixable issues. |
27
+ | C | Functional but frustrating. Multiple P1s. |
28
+ | D | Users will struggle. Core experience broken. |
29
+ | F | Blocks users entirely. Don't ship. |
30
+
31
+ ## CRISP Quick-Check
32
+
33
+ Use these as your diagnostic lens during the scan:
34
+
35
+ - **C** — Does the user know where they are within 5 seconds?
36
+ - **R** — Does every interaction feel instant?
37
+ - **I** — Is data presented as insight, not just numbers?
38
+ - **S** — Does the user stay in their flow, or get pushed out of it?
39
+ - **P** — Is complexity hidden from those who don't need it?
40
+
41
+ ## Output Format
42
+
43
+ Keep it tight. No lengthy explanations.
44
+
45
+ ```
46
+ ## CRISP Review: [Screen/Feature Name]
47
+
48
+ **Grade: [A–F]** — [One punchy verdict sentence]
49
+
50
+ **Strengths**
51
+ - [What's working — 1–2 points max]
52
+
53
+ **Top 3 Issues**
54
+
55
+ 1. [C/R/I/S/P] **[Issue title]**
56
+ What's wrong: [One sentence, specific]
57
+ Fix: [One sentence, specific — not a direction, an action]
58
+
59
+ 2. [C/R/I/S/P] **[Issue title]**
60
+ What's wrong: [One sentence]
61
+ Fix: [One sentence]
62
+
63
+ 3. [C/R/I/S/P] **[Issue title]**
64
+ What's wrong: [One sentence]
65
+ Fix: [One sentence]
66
+
67
+ **Quick Wins**
68
+ - [High-impact, low-effort items that didn't make the top 3]
69
+ ```
70
+
71
+ ## Examples of Good vs. Weak Feedback
72
+
73
+ **Weak:** "The empty state could be improved."
74
+ **Good:** "[C] Empty state says 'No data' with no CTA. Replace with: 'You haven't added any suppliers yet. [Add your first supplier]'"
75
+
76
+ **Weak:** "Loading feels slow."
77
+ **Good:** "[R] Filter results wait for API response before updating. Switch to optimistic filtering — show results immediately, reconcile in background."
78
+
79
+ **Weak:** "The dashboard shows too much."
80
+ **Good:** "[P] 11 metrics visible at once, all with equal visual weight. Promote 3 most-used to hero cards. Collapse the rest into a secondary grid."
81
+
82
+ ## Tone
83
+
84
+ Direct. Specific. Actionable. No softening. If the design fails, say it fails and say exactly why. The goal is to make the next iteration better, not to protect feelings.
@@ -0,0 +1,90 @@
1
+ ---
2
+ name: crisp-teach
3
+ description: CRISP onboarding command. Run once per project to teach the AI your product context — users, jobs-to-be-done, design system, benchmark references, and anti-references. Writes .crisp.md which all subsequent CRISP commands read automatically.
4
+ user-invocable: true
5
+ ---
6
+
7
+ # /crisp-teach — Project Onboarding
8
+
9
+ Run this command once per project. It learns your design context through a structured interview, then writes `.crisp.md` to your project root. All other CRISP commands (`/crisp-audit`, `/crisp-review`, `/feature-design`, `/handoff`) inherit this context automatically.
10
+
11
+ ## Interview Protocol
12
+
13
+ Work through the following questions with the user. Ask them one section at a time. Don't rush — good context makes every subsequent command significantly more accurate.
14
+
15
+ ---
16
+
17
+ ### Section 1: Product Overview
18
+ Ask:
19
+ - "What is this product? Describe it in one sentence as if explaining to a potential customer."
20
+ - "What stage is it at? (Pre-launch / early customers / growth / mature)"
21
+ - "What's the primary action a user takes in this product?"
22
+
23
+ ---
24
+
25
+ ### Section 2: Users
26
+ Ask:
27
+ - "Who is your primary user? Describe them — their role, their day, their level of technical sophistication."
28
+ - "What is the job they're hiring your product to do? What were they doing before?"
29
+ - "What does failure look like for them? What happens if the product lets them down?"
30
+
31
+ ---
32
+
33
+ ### Section 3: Design System
34
+ Ask:
35
+ - "Do you have a design system or component library? If so, name it or describe it briefly."
36
+ - "What design tokens or visual constraints should I know about? (colours, type scale, spacing)"
37
+ - "Are there any components that are off-limits to change?"
38
+
39
+ ---
40
+
41
+ ### Section 4: Benchmarks
42
+ Ask:
43
+ - "Which products do you most admire from a design perspective? These become your positive references."
44
+ - "Which products do you NOT want to look or feel like? These become your anti-references."
45
+ - "Of the CRISP dimensions — Contextual, Responsive, Intelligent, Seamless, Powerful — which matters most for your users right now?"
46
+
47
+ ---
48
+
49
+ ### Section 5: Known Weaknesses
50
+ Ask:
51
+ - "What's the biggest UX problem you already know exists?"
52
+ - "Which user flows feel most broken or incomplete?"
53
+ - "Is there anything that's off-limits for this audit? (legacy constraints, upcoming changes)"
54
+
55
+ ---
56
+
57
+ ## Output Format
58
+
59
+ Once the interview is complete, write a `.crisp.md` file to the project root with this structure:
60
+
61
+ ```markdown
62
+ # .crisp.md — CRISP Design Context
63
+ *Generated by /crisp-teach. Update by running /crisp-teach again.*
64
+
65
+ ## Product
66
+ [One-sentence product description]
67
+ Stage: [Pre-launch / Early / Growth / Mature]
68
+ Primary action: [What users mainly do]
69
+
70
+ ## Users
71
+ Primary user: [Role + sophistication level]
72
+ Job-to-be-done: [What they're hiring the product for]
73
+ Failure mode: [What happens if the product lets them down]
74
+
75
+ ## Design System
76
+ [System name or description]
77
+ Key tokens: [Colours, type, spacing constraints]
78
+ Constraints: [What can't be changed]
79
+
80
+ ## Benchmarks
81
+ Positive references: [Products to aspire to]
82
+ Anti-references: [Products to avoid resembling]
83
+ Priority CRISP dimension: [C / R / I / S / P]
84
+
85
+ ## Known Issues
86
+ [List of acknowledged UX problems]
87
+ [Off-limits areas]
88
+ ```
89
+
90
+ Confirm to the user that `.crisp.md` has been written and that all CRISP commands will now use this context automatically.
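The template above is a straightforward fill-in; for tooling that wants to automate the final write step, it can be rendered from the interview answers like this. A sketch only: the template is abridged (Design System and Known Issues sections omitted for brevity) and the `answers` keys are hypothetical.

```python
from pathlib import Path

TEMPLATE = """\
# .crisp.md — CRISP Design Context
*Generated by /crisp-teach. Update by running /crisp-teach again.*

## Product
{product}
Stage: {stage}
Primary action: {primary_action}

## Users
Primary user: {primary_user}
Job-to-be-done: {jtbd}
Failure mode: {failure_mode}

## Benchmarks
Positive references: {positive}
Anti-references: {anti}
Priority CRISP dimension: {priority_dimension}
"""

def write_crisp_md(answers, root="."):
    """Render the interview answers into .crisp.md at the project root."""
    path = Path(root) / ".crisp.md"
    path.write_text(TEMPLATE.format(**answers), encoding="utf-8")
    return path
```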
@@ -0,0 +1,131 @@
1
+ ---
2
+ name: feature-design
3
+ description: Design a new product feature using CRISP principles. Takes a problem statement and produces user flows, component decisions, CRISP compliance checks, and decision rationale benchmarked against world-class products. If .crisp.md exists, load it first.
4
+ user-invocable: true
5
+ ---
6
+
7
+ # /feature-design — CRISP Feature Design
8
+
9
+ Design a new feature from scratch, grounded in CRISP principles. This is not wireframe generation — it's structured design thinking that produces flows, decisions, rationale, and open questions.
10
+
11
+ If `.crisp.md` exists, load it before beginning. The design should be grounded in the specific users, design system, and benchmarks documented there.
12
+
13
+ ## Step 1: Problem Framing
14
+
15
+ Before designing anything, clarify the problem. If the user hasn't provided these, ask:
16
+
17
+ 1. **Who is experiencing this problem?** (Role, sophistication level, context)
18
+ 2. **What are they trying to do?** (The job, not the feature)
19
+ 3. **What do they do today?** (Workaround, existing tool, manual process)
20
+ 4. **What does success look like?** (Observable outcome, not feature completion)
21
+ 5. **What constraints exist?** (Technical, timeline, design system)
22
+
23
+ If `.crisp.md` exists, cross-reference with the documented user persona and known issues before proceeding.
24
+
25
+ ## Step 2: CRISP Design Principles to Apply
26
+
27
+ Design the feature so that each dimension is explicitly addressed:
28
+
29
+ **Contextual** — The user should always know:
30
+ - Where they are in the flow
31
+ - What they've just done
32
+ - What happens next
33
+
34
+ **Responsive** — Every action should feel immediate:
35
+ - Optimistic UI for predictable state changes
36
+ - Skeleton loading, not blank spaces
37
+ - No spinners for actions the user initiated
38
+
39
+ **Intelligent** — The feature should leverage what we know:
40
+ - Pre-populate from history or context
41
+ - Surface the next-best-action at every step
42
+ - Show data as insight, not raw numbers
43
+
44
+ **Seamless** — The feature should fit into their existing workflow:
45
+ - Don't redirect users out of their context unnecessarily
46
+ - Use familiar patterns before inventing new ones
47
+ - Keep the feature's footprint as small as possible
48
+
49
+ **Powerful** — Complexity should be progressive:
50
+ - Surface the most-used 20% by default
51
+ - Hide the other 80% behind deliberate disclosure
52
+ - Add keyboard shortcuts for power users
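Several of the principles above (optimistic UI, update inline with background sync) share one mechanism: apply the change locally first, reconcile in the background, roll back only on failure. A minimal sketch; `sync_fn` is a hypothetical stand-in for the real API call and returns whether the sync succeeded.

```python
class OptimisticStore:
    """Apply a predictable state change immediately; roll back only
    if the background sync reports failure."""

    def __init__(self, state, sync_fn):
        self.state = dict(state)
        self.sync_fn = sync_fn            # returns True on success

    def update(self, key, value):
        previous = self.state.get(key)
        self.state[key] = value           # UI reflects the change instantly
        if not self.sync_fn(key, value):  # background reconciliation
            if previous is None:
                del self.state[key]       # roll back an insert
            else:
                self.state[key] = previous
            return False
        return True
```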
53
+
54
+ ## Step 3: Flow Design
55
+
56
+ Map the user flow as a numbered sequence of steps. For each step, document:
57
+
58
+ ```
59
+ Step [N]: [What the user does / sees]
60
+ - State: [What the UI shows]
61
+ - CRISP check: [Which dimension(s) this step addresses or risks]
62
+ - Decision: [Why this approach over alternatives]
63
+ - Edge cases: [Empty state / error state / loading state]
64
+ ```
65
+
66
+ ## Step 4: Component Decisions
67
+
68
+ For each key UI element in the flow, document:
69
+ - **What it is** (component name/type)
70
+ - **Why this pattern** (over alternatives)
71
+ - **Benchmark reference** (where this pattern works well — Stripe, Linear, etc.)
72
+ - **CRISP alignment** (which dimension it serves)
73
+
74
+ ## Step 5: Open Questions
75
+
76
+ List the research questions this design surfaces — things that should be validated before building or in a usability test:
77
+
78
+ - User behaviour questions (do users actually do X?)
79
+ - Technical feasibility questions
80
+ - Edge cases that need product decisions
81
+
82
+ ## Output Format
83
+
84
+ ```
85
+ ## Feature Design: [Feature Name]
86
+
87
+ ### Problem
88
+ Who: [User]
89
+ Job: [What they're trying to do]
90
+ Today: [Current workaround]
91
+ Success: [Observable outcome]
92
+
93
+ ### User Flow
94
+
95
+ **Step 1: [Name]**
96
+ State: [UI description]
97
+ CRISP: [Dimension + how it's addressed]
98
+ Decision: [Why this approach]
99
+ Edge cases: [Empty / error / loading]
100
+
101
+ [Repeat for each step]
102
+
103
+ ### Key Component Decisions
104
+
105
+ | Component | Pattern | Benchmark | CRISP Dimension |
106
+ |-----------|---------|-----------|-----------------|
107
+ | [Name] | [Type] | [Reference] | [C/R/I/S/P] |
108
+
109
+ ### CRISP Compliance Summary
110
+
111
+ | Dimension | How addressed | Risk |
112
+ |-------------|--------------|------|
113
+ | Contextual | [How] | [Any gaps] |
114
+ | Responsive | [How] | [Any gaps] |
115
+ | Intelligent | [How] | [Any gaps] |
116
+ | Seamless | [How] | [Any gaps] |
117
+ | Powerful | [How] | [Any gaps] |
118
+
119
+ ### Open Questions
120
+ - [Research / product / technical questions to resolve]
121
+
122
+ ### Next Steps
123
+ - [Recommended actions before building]
124
+ ```
125
+
126
+ ## Design Style
127
+
128
+ - Design for the primary user first. Edge cases come second.
129
+ - Favour familiar patterns over novel ones unless novelty is the point.
130
+ - Every design decision should be justifiable against CRISP.
131
+ - If a step in the flow fails a CRISP dimension, flag it and propose a fix before moving on.
@@ -0,0 +1,219 @@
1
+ ---
2
+ name: handoff
3
+ description: Convert a CRISP-reviewed design into a developer-ready specification. Produces component states, implementation notes, token references, edge cases, and an accessibility checklist. Use after a design has passed /crisp-audit or /crisp-review. If .crisp.md exists, load it first.
4
+ user-invocable: true
5
+ ---
6
+
7
+ # /handoff — Developer Handoff Spec
8
+
9
+ Convert a design into a complete, implementation-ready specification. The goal is to eliminate ambiguity before a single line of code is written.
10
+
11
+ If `.crisp.md` exists, load it. Your spec should reference the documented design system tokens and constraints.
12
+
13
+ ## What to Produce
14
+
15
+ A handoff spec that covers everything a developer needs to build the design correctly, without having to interpret, invent, or ask follow-up questions.
16
+
17
+ ---
18
+
19
+ ## Section 1: Overview
20
+
21
+ ```
22
+ ## [Component / Screen Name] — Handoff Spec
23
+ Date: [Today]
24
+ Status: Ready for development
25
+
26
+ ### Summary
27
+ [One paragraph: what this is, what it does, who uses it]
28
+
29
+ ### Scope
30
+ [What's included in this spec]
31
+ [What's explicitly out of scope]
32
+ ```
33
+
34
+ ---
35
+
36
+ ## Section 2: Component Inventory
37
+
38
+ List every distinct component in the design:
39
+
40
+ ```
41
+ ### Components
42
+
43
+ | Component | Type | Existing / New | Notes |
44
+ |-----------|------|---------------|-------|
45
+ | [Name] | [Button / Card / Modal / etc.] | [Existing — link to design system] / [New] | [Any notes] |
46
+ ```
47
+
48
+ For new components only, add:
49
+ - **Props**: What data does it accept?
50
+ - **Variants**: What visual/functional variants exist?
51
+ - **Behaviour**: How does it respond to interactions?
52
+
53
+ ---
54
+
55
+ ## Section 3: States
56
+
57
+ For every interactive element and every screen, document all states:
58
+
59
+ ```
60
+ ### States: [Component/Screen Name]
61
+
62
+ **Default**
63
+ [Description of the resting state]
64
+
65
+ **Hover / Focus**
66
+ [Description — include cursor, outline, colour change]
67
+
68
+ **Active / Pressed**
69
+ [Description]
70
+
71
+ **Loading**
72
+ [Description — skeleton, spinner, optimistic UI? Which and why]
73
+
74
+ **Empty**
75
+ [What the user sees when there's no data — never "No data available"]
76
+
77
+ **Error**
78
+ [What the user sees when something goes wrong — specific error message, recovery action]
79
+
80
+ **Disabled**
81
+ [When is it disabled? What does it look like? Can the user tell why?]
82
+
83
+ **Success**
84
+ [What confirms the action completed — and what did it do specifically?]
85
+ ```
86
+
87
+ ---
88
+
89
+ ## Section 4: Tokens & Spacing
90
+
91
+ Reference the design system from `.crisp.md` or document the values directly:
92
+
93
+ ```
94
+ ### Design Tokens
95
+
96
+ **Colours**
97
+ | Token | Value | Usage |
98
+ |-------|-------|-------|
99
+ | [name] | [value] | [where used in this component] |
100
+
101
+ **Typography**
102
+ | Token | Size | Weight | Line-height | Usage |
103
+ |-------|------|--------|-------------|-------|
104
+
105
+ **Spacing**
106
+ | Context | Value | Token |
107
+ |---------|-------|-------|
108
+ | Internal padding | [value] | [token name] |
109
+ | Gap between elements | [value] | [token name] |
110
+
111
+ **Border / Radius**
112
+ [Document radius values and border styles used]
113
+ ```
114
+
115
+ ---
116
+
117
+ ## Section 5: Interactions & Animation
118
+
119
+ ```
120
+ ### Interactions
121
+
122
+ | Trigger | Action | Duration | Easing | Notes |
123
+ |---------|--------|----------|--------|-------|
124
+ | Button click | Optimistic state update | immediate | — | Don't wait for API |
125
+ | Hover on card | Background colour shift | 150ms | ease-out | |
126
+ | Modal open | Fade + scale from 95% | 200ms | ease-out | |
127
+ ```
128
+
129
+ ---
130
+
131
+ ## Section 6: Edge Cases
132
+
133
+ Document the cases that aren't shown in the main design but must be handled:
134
+
135
+ ```
136
+ ### Edge Cases
137
+
138
+ | Scenario | Expected behaviour |
139
+ |----------|-------------------|
140
+ | User has no data yet | [Empty state with CTA — exact copy] |
141
+ | API returns error | [Error message — exact copy + recovery action] |
142
+ | User has 1 item | [Singular vs. plural label handling] |
143
+ | User has 1,000+ items | [Truncation / pagination behaviour] |
144
+ | Long text / content | [How labels, headings, descriptions handle overflow] |
145
+ | Slow connection | [Loading state — duration before skeleton appears] |
146
+ | User lacks permission | [What they see — not a blank page] |
147
+ ```
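The singular/plural row in the table above is trivial to get right once, and wrong surprisingly often. A sketch of a label helper, assuming the simplest English pluralisation with an override for irregular nouns:

```python
def count_label(n, singular, plural=None):
    """Singular/plural handling from the edge-case table:
    '1 item', not '1 items'; thousands separated for large counts."""
    plural = plural or singular + "s"
    return f"{n:,} {singular if n == 1 else plural}"
```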
148
+
149
+ ---
150
+
151
+ ## Section 7: Accessibility
152
+
153
+ ```
154
+ ### Accessibility Checklist
155
+
156
+ **Keyboard**
157
+ - [ ] All interactive elements reachable via Tab
158
+ - [ ] Focus order matches visual order
159
+ - [ ] Custom components have keyboard equivalents (Escape to close, Enter to confirm)
160
+ - [ ] Focus is managed correctly after modal open/close
161
+
162
+ **Screen reader**
163
+ - [ ] All interactive elements have accessible labels (aria-label or visible text)
164
+ - [ ] Icons without text have aria-hidden="true" or aria-label
165
+ - [ ] Dynamic content updates announced via aria-live
166
+ - [ ] Error messages associated with form fields via aria-describedby
167
+
168
+ **Visual**
169
+ - [ ] Text contrast meets WCAG AA (4.5:1 for body, 3:1 for large text)
170
+ - [ ] UI doesn't rely on colour alone to convey meaning
171
+ - [ ] Touch targets are minimum 44×44px
172
+ - [ ] Focus indicator is visible and has 3:1 contrast against background
173
+
174
+ **Specific to this component**
175
+ [Any component-specific accessibility considerations]
176
+ ```
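The contrast items in the checklist are mechanically checkable before handoff. A sketch of the WCAG 2.x definition: sRGB channels are linearised, combined into relative luminance, and the ratio is (L1 + 0.05) / (L2 + 0.05) with the lighter colour first.

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB colour (0-255 channels)."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio: (L1 + 0.05) / (L2 + 0.05), lighter colour first."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def meets_aa(fg, bg, large_text=False):
    """AA thresholds from the checklist: 4.5:1 body, 3:1 large text."""
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)
```

For example, #777777 on white comes out around 4.48:1, which narrowly fails AA for body text even though it looks acceptable on many screens.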
177
+
178
+ ---
179
+
180
+ ## Section 8: Copy
181
+
182
+ Document every string that appears in this design:
183
+
184
+ ```
185
+ ### Copy
186
+
187
+ | Location | String | Notes |
188
+ |----------|--------|-------|
189
+ | Page heading | "[exact copy]" | [Tone note if relevant] |
190
+ | CTA button | "[exact copy]" | |
191
+ | Empty state heading | "[exact copy]" | |
192
+ | Empty state body | "[exact copy]" | |
193
+ | Success message | "[exact copy]" | |
194
+ | Error message | "[exact copy]" | [What error condition triggers this] |
195
+ ```
196
+
197
+ ---
198
+
199
+ ## Section 9: Open Questions
200
+
201
+ ```
202
+ ### Open Questions for Engineering
203
+ - [Technical questions the developer needs answered before building]
204
+
205
+ ### Open Questions for Product
206
+ - [Product decisions still outstanding]
207
+
208
+ ### Decisions Made
209
+ - [Key decisions already resolved — document so there's no revisiting]
210
+ ```
211
+
212
+ ---
213
+
214
+ ## Handoff Style
215
+
216
+ - Be precise. "Fade in" is not enough. "Opacity 0→1, 200ms, ease-out" is.
217
+ - Every string should be exact. No "TBD" in copy.
218
+ - Every error state needs a recovery action, not just an error message.
219
+ - If a state isn't documented, a developer will invent it. That's a design decision you've abdicated.