picasso-skill 2.8.0 → 3.0.0

# Figma MCP Integration

Reference for using Figma's Model Context Protocol (MCP) server to access design data directly from Figma files. This replaces or supplements Playwright screenshots with structured design information.

## Why Figma MCP > Screenshots

Screenshots show pixels. Figma MCP gives you the **design graph**: layers, tokens, spacing values, component instances, styles, and constraints. This means:

- **Accurate color extraction:** Get exact hex/OKLCH values, not color-picked approximations
- **Real spacing values:** Read auto-layout gaps, padding, and margins as the designer set them
- **Typography facts:** Font family, weight, size, line-height, letter-spacing — all exact
- **Component structure:** See which components are instances, variants, and overrides
- **Design intent:** Understand constraints, auto-layout direction, and responsive behavior

## When to Use Figma MCP vs Playwright

| Scenario | Use Figma MCP | Use Playwright |
|---|---|---|
| Auditing a Figma design before implementation | ✅ | ❌ |
| Extracting design tokens from a Figma file | ✅ | ❌ |
| Reviewing a live deployed site | ❌ | ✅ |
| Comparing Figma design vs live implementation | ✅ + ✅ | ✅ only |
| Generating a DESIGN.md from existing design | ✅ preferred | ✅ fallback |
| Roasting a design that only exists in Figma | ✅ | ❌ |

**Rule:** If the user provides a Figma URL or mentions a Figma file, always prefer Figma MCP. If they provide a live URL or localhost, use Playwright. If both exist, use both for comparison.

## MCP Server Options

Two main Figma MCP implementations:

### Option A: Figma's Official MCP (REST API)
Package: `@anthropic/figma-mcp` or Figma's official MCP server
- Uses the Figma REST API with a personal access token
- Read-only: fetch files, nodes, styles, components, export images
- Best for: CI/CD, automated audits, headless analysis
- Tools: `get_file`, `get_node`, `get_styles`, `get_components`, `get_image`
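
As a sketch of what these tools do under the hood, the REST calls can be made directly with the stdlib. This assumes a personal access token; the helper names (`build_request`, `get_file`, `get_nodes`) are ours for illustration, not MCP tool names:

```python
import json
import urllib.request

FIGMA_API = "https://api.figma.com/v1"

def build_request(path: str, token: str) -> urllib.request.Request:
    # Figma's REST API authenticates via a personal access token
    # passed in the X-Figma-Token header.
    return urllib.request.Request(f"{FIGMA_API}{path}",
                                  headers={"X-Figma-Token": token})

def get_file(file_key: str, token: str) -> dict:
    # GET /v1/files/:key returns the full document tree.
    with urllib.request.urlopen(build_request(f"/files/{file_key}", token)) as resp:
        return json.load(resp)

def get_nodes(file_key: str, node_ids: list[str], token: str) -> dict:
    # GET /v1/files/:key/nodes?ids=... returns only the requested nodes.
    ids = ",".join(node_ids)
    with urllib.request.urlopen(
        build_request(f"/files/{file_key}/nodes?ids={ids}", token)
    ) as resp:
        return json.load(resp)
```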

### Option B: Talk to Figma MCP (Live Connection)
Package: `cursor-talk-to-figma-mcp` (localhost:3055)
- Connects to a running Figma instance via plugin
- **Read AND write**: read frames, update text/properties, create shapes, adjust spacing
- Best for: interactive design work, syncing copy, wireframing
- Can snapshot frames and verify changes visually

**Which to use:**
- For Picasso audits/roasts/scoring → Either works. REST API is simpler.
- For `/steal` token extraction → REST API preferred (structured data).
- For `/figma --compare` → REST API for Figma data + Playwright for live site.
- For interactive design updates (changing copy, adjusting spacing) → Talk to Figma MCP.

## Available MCP Tools (REST API)

### `mcp__figma__get_file`
Fetch the full structure of a Figma file.
- Input: `file_key` (extracted from the Figma URL)
- Returns: Document tree with pages, frames, components, styles
- Use for: Understanding overall file structure, finding specific frames

### `mcp__figma__get_node`
Fetch a specific node (frame, component, group) by ID.
- Input: `file_key`, `node_id`
- Returns: Full node properties including fills, strokes, effects, layout, children
- Use for: Deep-diving into specific components or sections

### `mcp__figma__get_styles`
Fetch all published styles from the file.
- Input: `file_key`
- Returns: Color styles, text styles, effect styles, grid styles
- Use for: Extracting the design system / token set

### `mcp__figma__get_components`
Fetch all components and component sets.
- Input: `file_key`
- Returns: Component names, descriptions, variant properties
- Use for: Understanding the component library structure

### `mcp__figma__get_image`
Export a node as a rendered image (PNG/SVG/PDF).
- Input: `file_key`, `node_id`, `format`, `scale`
- Returns: Image URL
- Use for: Visual verification when you need to see rendered output alongside data

## Available MCP Tools (Talk to Figma — Live Connection)

When using the Talk to Figma plugin (localhost:3055), additional capabilities:

- **Read frames**: List pages, get frame contents, read text layers
- **Update text**: Change text content in any text layer
- **Update properties**: Modify fills, strokes, effects, spacing
- **Create shapes**: Add rectangles, frames, text nodes
- **Adjust spacing**: Modify auto-layout padding and gaps
- **Snapshot**: Export current frame state for visual verification

**Critical rule:** After any Figma write operation, snapshot the frame and verify it looks correct before proceeding. Figma changes are live — there's no undo via MCP.

## Extracting a Figma File Key

From a Figma URL:
```
https://www.figma.com/design/ABC123xyz/My-Design-File?node-id=0-1
                             ^^^^^^^^^
                             file_key = ABC123xyz
```

From a Figma node URL:
```
https://www.figma.com/design/ABC123xyz/My-Design-File?node-id=123-456
                             ^^^^^^^^^                        ^^^^^^^
                             file_key                         node_id (use 123:456 in API)
```

**Note:** The URL uses a `-` separator in `node-id`, but the API expects a `:` separator. Convert `123-456` → `123:456`.

## Design Token Extraction Workflow

When extracting design tokens from Figma for DESIGN.md generation:

1. **Get styles** via `get_styles` — these are the designer's intended token set
2. **Get the root frame** via `get_node` — check auto-layout settings for spacing rhythm
3. **Map to Picasso tokens:**
   - Color styles → `--color-*` tokens (convert to OKLCH)
   - Text styles → typography scale (check for consistent ratio)
   - Effect styles → shadow scale, blur values
   - Grid styles → layout columns, gutter, margin

### Spacing Extraction
Read auto-layout `itemSpacing` and `paddingTop/Right/Bottom/Left` from frames. Look for patterns:
- If spacing values are multiples of 4 or 8 → 4px or 8px base unit
- If spacing values follow a ratio → extract the scale

### Color Extraction
Figma stores colors as RGBA (0-1 float). Convert to OKLCH for Picasso:
- Extract fills from color styles
- Group into: primary, secondary, accent, neutral, semantic (success/warning/error)
- Check that neutrals are tinted (not pure gray) — flag if they aren't
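
One way to do the RGBA → OKLCH conversion, using Björn Ottosson's published OKLab matrices (alpha ignored; a sketch, not a full color pipeline):

```python
import math

def _srgb_to_linear(c: float) -> float:
    # Undo the sRGB transfer curve (input 0-1, as Figma stores fills).
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def _cbrt(x: float) -> float:
    # Sign-safe cube root (avoids complex results for tiny negatives).
    return math.copysign(abs(x) ** (1 / 3), x)

def rgba_to_oklch(r: float, g: float, b: float) -> tuple[float, float, float]:
    """Convert a Figma fill (floats 0-1) to (L, C, h_degrees) in OKLCH."""
    rl, gl, bl = (_srgb_to_linear(x) for x in (r, g, b))
    # Linear sRGB -> LMS (Ottosson's OKLab matrices)
    l = 0.4122214708 * rl + 0.5363325363 * gl + 0.0514459929 * bl
    m = 0.2119034982 * rl + 0.6806995451 * gl + 0.1073969566 * bl
    s = 0.0883024619 * rl + 0.2817188376 * gl + 0.6299787005 * bl
    l_, m_, s_ = _cbrt(l), _cbrt(m), _cbrt(s)
    # Nonlinear LMS -> OKLab
    L = 0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_
    a = 1.9779984951 * l_ - 2.4285922050 * m_ + 0.4505937099 * s_
    bb = 0.0259040371 * l_ + 0.7827717662 * m_ - 0.8086757660 * s_
    # Lab -> LCh
    C = math.hypot(a, bb)
    h = math.degrees(math.atan2(bb, a)) % 360
    return L, C, h
```

White should land at L ≈ 1 with chroma ≈ 0, black at L = 0 — a quick sanity check that the matrices were transcribed correctly.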

### Typography Extraction
From text styles, extract:
- Font family (flag if it's Inter/Roboto/system default — suggest alternatives)
- Size scale (check for a consistent modular ratio)
- Weight usage (should have clear hierarchy: regular body, medium labels, semibold headings)
- Line-height (check it's proportional, not fixed px for all sizes)

## Anti-Patterns to Flag

When analyzing Figma files, watch for these common design issues:

1. **Detached instances** — Components used but detached from the library. Design debt.
2. **Inconsistent spacing** — Auto-layout frames with ad-hoc spacing values (17px, 23px, 31px instead of a rhythm).
3. **Unnamed layers** — "Frame 247", "Group 13". Signals hasty work.
4. **Color styles not used** — Hardcoded colors instead of style references.
5. **Text styles not used** — Hardcoded typography instead of style references.
6. **Missing auto-layout** — Frames positioned absolutely instead of using auto-layout. Breaks responsive behavior.
7. **Single-variant components** — Components that should have variants but don't (e.g., a button with only one state).
8. **Enormous frame nesting** — 10+ levels deep. Simplify.

## Copy Sync Workflow

When working with both Figma designs and code, copy (text content) is a common source of drift. Use Figma MCP to keep them synchronized:

1. **Read current copy from Figma** — extract all text layers from target frames
2. **Compare against code** — diff the Figma text against what's rendered in the implementation
3. **Determine source of truth** — typically Figma is upstream (design → code), but if copy was updated in code first, flag it
4. **Sync direction:**
   - Figma newer → update code to match
   - Code newer → flag for designer review (don't auto-write to Figma without confirmation)
5. **Verify** — after syncing, screenshot the live site and compare against the Figma export

This is especially useful for:
- Marketing pages where copy changes frequently
- Multi-language sites where translations update in Figma
- Design handoff where developers may use placeholder text
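
The compare step of this workflow can be sketched with stdlib `difflib`. The layer-keyed dict shape is an assumption of this sketch (in practice the keys would come from Figma layer names and DOM selectors):

```python
import difflib

def diff_copy(figma_text: dict[str, str], code_text: dict[str, str]) -> list[str]:
    """Report drift between Figma text layers and rendered copy.

    Keys identify a text layer / element; values are the text content.
    Returns one human-readable line per mismatch.
    """
    report = []
    for key in sorted(figma_text.keys() | code_text.keys()):
        a, b = figma_text.get(key, ""), code_text.get(key, "")
        if a == b:
            continue
        # Similarity ratio helps triage typo-level vs rewritten copy.
        ratio = difflib.SequenceMatcher(None, a, b).ratio()
        report.append(f"{key}: figma={a!r} code={b!r} (similarity {ratio:.2f})")
    return report
```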

## Comparison Workflow: Figma vs Implementation

When both a Figma file and a live implementation exist:

1. Extract tokens from Figma via MCP
2. Screenshot the live site via Playwright
3. Compare:
   - Are the Figma fonts actually loaded on the site?
   - Do spacing values match? (Common non-drift: Figma says 24px, CSS says `1.5rem`, which computes to 24px — match. Or CSS says `gap-6`, which is 24px — match.)
   - Are colors within a ΔE < 3 tolerance? (Figma RGBA → site computed OKLCH)
   - Are components structurally similar, or did the dev reinterpret the design?
4. Report discrepancies with severity:
   - **Critical:** Wrong font, wrong primary color, missing sections
   - **High:** Spacing off by >8px, wrong font weights, missing states
   - **Medium:** Minor color drift, slightly different border radius, extra whitespace
   - **Low:** Subpixel differences, minor animation timing differences
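
The ΔE < 3 check can be approximated with CIE76 distance in CIELAB, computed from sRGB floats (a sketch; CIEDE2000 is more perceptually accurate but much longer):

```python
import math

def _linear(c: float) -> float:
    # Undo the sRGB transfer curve (inputs are 0-1 floats).
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def _to_lab(rgb: tuple[float, float, float]) -> tuple[float, float, float]:
    # sRGB -> XYZ (D65 white point) -> CIELAB.
    r, g, b = (_linear(c) for c in rgb)
    x = (0.4124564 * r + 0.3575761 * g + 0.1804375 * b) / 0.95047
    y = (0.2126729 * r + 0.7151522 * g + 0.0721750 * b) / 1.00000
    z = (0.0193339 * r + 0.1191920 * g + 0.9503041 * b) / 1.08883
    def f(t: float) -> float:
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x), f(y), f(z)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e76(rgb1, rgb2) -> float:
    """CIE76 color difference; below ~3 reads as 'the same color'."""
    return math.dist(_to_lab(rgb1), _to_lab(rgb2))
```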

---

# UX Evaluation Reference

Structured frameworks for evaluating interface quality. Use these during /score, /roast, /audit, and the visual discovery crawl phase.

---

## 1. Nielsen's 10 Usability Heuristics (Evaluation Checklist)

For each heuristic, check the listed indicators. Score pass/fail for each.

### H1: Visibility of System Status
The system should always keep users informed about what is going on.
- [ ] Loading states exist for async actions (skeletons, spinners, progress bars)
- [ ] Form submission shows pending/success/error feedback
- [ ] Current page/section is highlighted in navigation
- [ ] Active filters/sorts are visually indicated
- [ ] Upload progress is shown
- **Check in code:** grep for loading states, skeleton components, progress indicators
- **Check in screenshot:** is the current nav item highlighted? Are there loading indicators?

### H2: Match Between System and Real World
Use language and concepts familiar to the user, not system-oriented terms.
- [ ] Button labels use verbs the user understands ("Save changes", not "Submit")
- [ ] Error messages explain the problem in plain language
- [ ] Navigation labels match user mental models
- [ ] Icons are conventional (trash = delete, pencil = edit, plus = add)
- **Check in code:** grep for generic labels ("Submit", "Click here", "Data")

### H3: User Control and Freedom
Users need a clear emergency exit when they make mistakes.
- [ ] Modals have close buttons AND escape-key support
- [ ] Destructive actions have confirmation OR undo
- [ ] Multi-step flows have back navigation
- [ ] Users can cancel in-progress operations
- **Check in code:** grep for confirm() dialogs, undo patterns, modal close handlers

### H4: Consistency and Standards
Follow platform conventions. Same action = same result everywhere.
- [ ] Primary buttons look the same across all pages
- [ ] The same icon means the same thing everywhere
- [ ] Spacing and typography follow a consistent scale
- [ ] Color meanings are consistent (red = error, green = success)
- **Check in code:** grep for hardcoded colors, inconsistent button styles

### H5: Error Prevention
Prevent problems from occurring in the first place.
- [ ] Required fields are marked before submission
- [ ] Date inputs use pickers (not free text)
- [ ] Destructive buttons are visually distinct (red/outlined, not primary)
- [ ] Inline validation catches errors before form submission
- **Check in code:** grep for required fields, inline validation, input types

### H6: Recognition Rather Than Recall
Minimize memory load. Make options visible.
- [ ] Navigation is always visible (not hidden behind a hamburger on desktop)
- [ ] Search results show context around matches
- [ ] Forms show labels (not placeholder-only)
- [ ] Recent items, favorites, or shortcuts are available
- **Check in screenshot:** are labels visible? Is navigation persistent?

### H7: Flexibility and Efficiency of Use
Allow experts to speed up their workflow.
- [ ] Keyboard shortcuts exist for frequent actions
- [ ] Bulk operations are available for lists
- [ ] A command palette or search exists (Cmd+K)
- [ ] Default values are intelligent
- **Check in code:** grep for keyboard event listeners, bulk action patterns

### H8: Aesthetic and Minimalist Design
Every extra element competes with relevant information.
- [ ] No decorative elements that don't serve a purpose
- [ ] Information hierarchy is clear (most important = most prominent)
- [ ] White space is used to group related elements
- [ ] No more than 3-4 colors for data categories
- **Check in screenshot:** squint test — does the hierarchy still read?

### H9: Help Users Recognize, Diagnose, and Recover from Errors
Error messages should be in plain language, indicate the problem, and suggest a fix.
- [ ] Error messages follow: what happened + why + how to fix it
- [ ] Form errors appear next to the relevant field
- [ ] API errors don't show raw technical messages to users
- [ ] Empty states guide the user on what to do next
- **Check in code:** grep for error handling, error messages, empty states

### H10: Help and Documentation
Even though a system should be usable without docs, help should be available.
- [ ] Tooltips explain non-obvious UI elements
- [ ] Onboarding exists for first-time users
- [ ] Complex features have inline help or documentation links
- [ ] Keyboard shortcuts are discoverable
- **Check in code:** grep for tooltip components, help text, onboarding flows
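
The "check in code" greps can be automated. A sketch for H1: the pattern list is illustrative and the file extensions assume a component-based frontend; tune both per codebase.

```python
import re
from pathlib import Path

# Illustrative, not exhaustive: strings that suggest H1 affordances exist.
H1_PATTERNS = re.compile(r"isLoading|Skeleton|Spinner|<Progress|aria-busy|Suspense", re.I)

def has_loading_signal(source: str) -> bool:
    """Does this source text show any evidence of loading-state handling?"""
    return bool(H1_PATTERNS.search(source))

def files_missing_loading_states(root: str,
                                 exts=(".tsx", ".jsx", ".vue", ".svelte")) -> list[str]:
    """Component files with no loading signal -- candidates for an H1 flag."""
    return sorted(
        str(p) for p in Path(root).rglob("*")
        if p.suffix in exts and not has_loading_signal(p.read_text(errors="ignore"))
    )
```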

---

## 2. Jobs to Be Done (JTBD) Framework

Use JTBD to understand WHY users interact with the app, not just WHAT they do. This informs design decisions during the crawl phase.

### Extracting JTBD from Code

Analyze the codebase to identify user jobs:

1. **Route structure** reveals user tasks:
   - `/dashboard` = "When I start my day, I want to see what needs attention"
   - `/clients/[id]` = "When I work on a client, I want all their info in one place"
   - `/billing` = "When I need to invoice, I want to track time and generate bills"
   - `/analyze` = "When I receive a contract, I want to understand the risks"

2. **API endpoints** reveal user actions:
   - `POST /api/clients` = "I want to onboard a new client"
   - `POST /api/analyze` = "I want AI to review this document"
   - `GET /api/dashboard` = "I want a summary of my practice"

3. **Component names** reveal UI functions:
   - `<ClientForm>` = data entry job
   - `<TimerWidget>` = time tracking job
   - `<RedlineView>` = document review job

### Using JTBD to Inform Design

For each identified job, ask:
- **What's the trigger?** When does the user need to do this?
- **What's the desired outcome?** What does success look like?
- **What's the anxiety?** What could go wrong?
- **What's the context?** Where/when do they do this? (Mobile? Desktop? In a meeting?)

Design decisions should optimize for the job:
- High-frequency jobs need the fastest path (fewest clicks, most prominent placement)
- High-stakes jobs need the most clarity (larger text, explicit confirmation, clear feedback)
- Time-pressured jobs need efficiency (keyboard shortcuts, bulk actions, smart defaults)

---

## 3. Prompt Enhancement

When a user gives a vague design request, enhance it before proceeding.

### Vague-to-Specific Mapping

| User Says | What They Mean | What to Do |
|-----------|---------------|------------|
| "Make it look good" | It looks amateur, fix the obvious issues | Run /audit, fix critical+high |
| "Make it modern" | It looks dated, update the aesthetic | Check font (is it Arial?), colors (pure gray?), radius (sharp corners?) |
| "Make it clean" | Too much visual noise, simplify | Remove decorative elements, increase whitespace, reduce color count |
| "Make it pop" | Not enough visual hierarchy, too flat | Increase contrast, add depth, strengthen heading sizes |
| "Make it professional" | It looks like a student project | Fix typography scale, add consistent spacing, tighten color palette |
| "I don't know what I want" | They need visual discovery | Generate the 10-20 sample gallery and let them react |

### Enhancement Process

1. Identify the complaint (what's wrong) vs. the goal (what they want)
2. Map to specific design properties (typography, color, spacing, layout, motion)
3. Propose concrete changes with a before/after preview
4. Never ask "what do you mean by modern?" — instead, show 3 interpretations and ask which fits

---

## 4. State Machine for Interactive Components

Map all states for each interactive element. Missing states are the #1 source of unpolished UI.

### The 8 States

Every interactive element should define:

| State | Visual Treatment | Trigger |
|-------|-----------------|---------|
| **Default** | Base appearance | Page load |
| **Hover** | Subtle background/border change | Mouse enters |
| **Focus** | Visible ring/outline (2px+ solid) | Tab navigation |
| **Active/Pressed** | Scale down slightly (0.97-0.98) | Mouse down |
| **Disabled** | Reduced opacity (0.5), no pointer | Programmatic |
| **Loading** | Spinner or pulse, disabled interaction | Async action |
| **Error** | Red border/text, error message | Validation fail |
| **Success** | Green indicator, confirmation | Action complete |

### Audit Checklist

For each component type, verify states exist:

| Component | States to Check |
|-----------|----------------|
| Button | default, hover, focus, active, disabled, loading |
| Input | default, hover, focus, filled, error, disabled |
| Card (clickable) | default, hover, focus, active |
| Link | default, hover, focus, visited |
| Toggle | off, on, hover, focus, disabled |
| Select | default, hover, focus, open, selected, error |
| Modal | enter, exit, backdrop |
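
The checklist table can be turned into data so an audit can mechanically report gaps. A sketch (the dict mirrors the table; the function name is ours):

```python
# The audit checklist above, as data: component -> states to check.
REQUIRED_STATES = {
    "button": {"default", "hover", "focus", "active", "disabled", "loading"},
    "input": {"default", "hover", "focus", "filled", "error", "disabled"},
    "card": {"default", "hover", "focus", "active"},
    "link": {"default", "hover", "focus", "visited"},
    "toggle": {"off", "on", "hover", "focus", "disabled"},
    "select": {"default", "hover", "focus", "open", "selected", "error"},
    "modal": {"enter", "exit", "backdrop"},
}

def missing_states(component: str, implemented: set[str]) -> set[str]:
    """Which states from the checklist has this component not implemented?"""
    return REQUIRED_STATES.get(component.lower(), set()) - implemented
```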

---

## 5. Scoring with Heuristics

When running /score, add heuristic evaluation points:

```
Heuristic Evaluation (0-20 pts):
  H1  System status:     /2  (loading states, feedback)
  H2  Real world match:  /2  (language, icons)
  H3  User control:      /2  (undo, escape, back)
  H4  Consistency:       /2  (styles, patterns)
  H5  Error prevention:  /2  (validation, confirmation)
  H6  Recognition:       /2  (labels, navigation)
  H7  Efficiency:        /2  (shortcuts, bulk ops)
  H8  Minimal design:    /2  (hierarchy, whitespace)
  H9  Error recovery:    /2  (messages, guidance)
  H10 Help:              /2  (tooltips, onboarding)
```

This replaces the ad-hoc accessibility scoring with a structured UX evaluation.
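
A small tally helper makes the arithmetic explicit (names ours; each heuristic is clamped to its 2-point cap, missing heuristics score 0):

```python
HEURISTICS = [f"H{i}" for i in range(1, 11)]

def heuristic_total(points: dict[str, float]) -> float:
    """Sum per-heuristic scores into the 0-20 block, clamping each to 0-2."""
    unknown = set(points) - set(HEURISTICS)
    if unknown:
        raise ValueError(f"unknown heuristics: {unknown}")
    return sum(min(max(points.get(h, 0.0), 0.0), 2.0) for h in HEURISTICS)
```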