ui-mirror-skill 1.0.0
- package/README.md +95 -0
- package/bin/cli.mjs +121 -0
- package/package.json +34 -0
- package/skill/SKILL.md +751 -0
- package/skill/references/analysis-dimensions.md +382 -0
- package/skill/references/component-catalog.md +758 -0
- package/skill/references/css-token-mapping.md +359 -0
- package/skill/references/output-template.md +249 -0
- package/skill/scripts/compare_tokens.py +741 -0
- package/skill/scripts/download_screenshot.py +125 -0
- package/skill/scripts/extract_design_tokens.py +617 -0
- package/skill/scripts/generate_migration.py +580 -0
- package/skill/scripts/generate_radar_chart.py +267 -0
package/skill/SKILL.md
ADDED
@@ -0,0 +1,751 @@
---
name: ui-mirror
description: "벤치마킹 사이트 UI 분석 및 재현 스킬. 대상 웹사이트의 스크린샷을 캡처하고 디자인 토큰(색상, 타이포, 간격, 컴포넌트)을 추출하여 현재 프로젝트 UI와 7축 비교 분석 후 CSS 토큰 오버라이드 + 컴포넌트 코드 스니펫 + 마이그레이션 플랜을 자동 생성한다. '/ui-mirror', '/mirror', 'UI 미러', '디자인 미러', '벤치마크 분석', '이 사이트처럼 만들어줘' 시 트리거."
allowed-tools: "Bash(python3:*), Bash(agent-browser:*), mcp__jina__capture_screenshot_url, mcp__jina__read_url, mcp__chrome-devtools__take_screenshot, mcp__chrome-devtools__navigate_page, mcp__chrome-devtools__evaluate_script, mcp__sequential-thinking__sequentialthinking"
---

# ui-mirror

## 1. Overview

Capture screenshots and extract design tokens (colors, typography, spacing, border-radius, shadows, components, layout) from a benchmark website, then compare them against the current project's UI across 7 analysis axes. Produce a comprehensive analysis report together with directly usable code artifacts: a CSS token override file, component TSX snippets, and a prioritized migration plan.

**Input**: One or more benchmark URLs provided by the user.
**Output**: A self-contained analysis directory at `docs/ui-mirror/{name}/` containing `analysis.md`, `tokens-override.css`, `components/*.tsx`, `migration-plan.md`, and all captured screenshots.

## 2. Trigger Patterns

Activate this skill when the user issues any of the following:

- `/ui-mirror <url>` or `/mirror <url>`
- Natural language containing: "이 사이트처럼 만들어줘", "벤치마크 분석", "UI 미러", "디자인 미러", "디자인 캡처"
- Any request that provides a URL and asks to copy, match, emulate, reproduce, or benchmark against the design of that site
- "이 디자인 따라해줘", "이 사이트 분석해줘", "여기 UI 참고해서 만들어줘"

## 3. Prerequisites

Check for available browser/capture tools in this priority order:

1. **agent-browser** — `which agent-browser` or attempt `agent-browser --version`
2. **Chrome DevTools MCP** — verify `mcp__chrome-devtools__take_screenshot` is callable
3. **Jina MCP** — verify `mcp__jina__capture_screenshot_url` is callable

At least one tool must be available. If none are reachable, **HARD STOP** with this message:

```
HARD STOP: 스크린샷 도구를 찾을 수 없습니다.
다음 중 하나를 설치/활성화하세요:
1. agent-browser (권장): npm install -g @anthropic/agent-browser
2. Chrome DevTools MCP: .claude/settings.json에 MCP 설정 추가
3. Jina MCP: .claude/settings.json에 Jina MCP 설정 추가
```

Also verify Python 3.10+ is available (`python3 --version`) for token extraction and comparison scripts.

## 4. Phase 0: Input & Scope

### Step 0A — Parse User Input

Extract the following from the user's message:

| Parameter | Required | Default | Description |
|-----------|----------|---------|-------------|
| `BENCHMARK_URL` | Yes | — | The primary URL to analyze |
| `--depth N` | No | `1` | How many levels of subpages to crawl (0 = landing only) |
| `--name` | No | Auto-derived from URL hostname | Output directory name under `docs/ui-mirror/` |
| `--pages` | No | Auto-discovered | Comma-separated list of specific subpage paths |

Auto-derive `--name` from the URL hostname: strip `www.`, take the domain name before the TLD, lowercase, kebab-case. Example: `https://www.notion.so/product` becomes `notion`.
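
The derivation above can be sketched as follows (a minimal illustration; `derive_name` is a hypothetical helper, not one of the bundled scripts):

```python
from urllib.parse import urlparse

def derive_name(url: str) -> str:
    """Derive an output-directory name from a benchmark URL hostname."""
    host = urlparse(url).hostname or ""
    if host.startswith("www."):
        host = host[4:]
    # Take the registrable label before the TLD and kebab-case it.
    label = host.split(".")[0]
    return label.lower().replace("_", "-")

# derive_name("https://www.notion.so/product") → "notion"
```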

### Step 0B — URL Validation

- Jina MCP requires HTTPS URLs. If the URL is HTTP-only and Jina is the only available tool, warn the user.
- agent-browser and Chrome DevTools MCP accept both HTTP and HTTPS.
- Reject obviously invalid URLs (no protocol, localhost references for the benchmark).

### Step 0C — Subpage Discovery

When `--depth >= 1`, discover subpages:

```
mcp__jina__read_url(url: BENCHMARK_URL, withAllLinks: true)
```

Filter the returned links:
- Same-origin only (matching hostname)
- Exclude anchors (#), query-only variants, and asset URLs (.css, .js, .png, .jpg, .svg, .ico)
- Exclude duplicate paths that differ only by a trailing slash
- Limit to the top **5** most structurally distinct pages (prioritize paths with different first segments)
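
The filter rules above can be sketched as (a minimal illustration; `filter_links` is a hypothetical helper and omits `--depth` handling):

```python
from urllib.parse import urlparse

ASSET_EXTS = (".css", ".js", ".png", ".jpg", ".svg", ".ico")

def filter_links(base_url: str, links: list[str], limit: int = 5) -> list[str]:
    """Apply the Step 0C filters: same-origin, no anchors/assets, dedupe, cap at 5."""
    base_host = urlparse(base_url).hostname
    seen_paths: set[str] = set()
    seen_segments: list[str] = []   # first path segments, to prefer distinct sections
    kept: list[str] = []
    for link in links:
        p = urlparse(link)
        if p.hostname != base_host or p.fragment:
            continue
        path = p.path.rstrip("/") or "/"          # trailing-slash dedupe
        if path.endswith(ASSET_EXTS) or path in seen_paths:
            continue                               # assets and query-only variants
        first_seg = path.split("/")[1] if path != "/" else ""
        if first_seg in seen_segments:
            continue                               # keep only structurally distinct pages
        seen_paths.add(path)
        seen_segments.append(first_seg)
        kept.append(path)
        if len(kept) >= limit:
            break
    return kept
```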

### Step 0D — Scope Confirmation

Present the discovered scope to the user before proceeding:

```
── ui-mirror 범위 확인 ──
벤치마크: {name} ({BENCHMARK_URL})
분석 페이지:
1. / (메인)
2. /features
3. /pricing
4. /docs/getting-started
5. /blog

총 {N}장 캡처 예정. 진행할까요? (Y/n)
```

Wait for user confirmation. If the user modifies the list, accept the changes.

### Step 0E — Create Output Directory

```bash
mkdir -p docs/ui-mirror/{name}/screenshots
mkdir -p docs/ui-mirror/{name}/components
mkdir -p docs/ui-mirror/{name}/raw
```

## 5. Phase 1: Benchmark Capture

Execute the following steps for each page in scope. Parallelize Step 1A and Step 1B where tool capabilities allow.

### Step 1A — Visual Capture

Derive a `{slug}` from each page path: `/features/pricing` becomes `features-pricing`, `/` becomes `index`.
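
A minimal sketch of the slug rule (the helper name is hypothetical):

```python
def path_to_slug(path: str) -> str:
    """Turn a page path into a filename-safe slug ("/" becomes "index")."""
    trimmed = path.strip("/")
    return trimmed.replace("/", "-") if trimmed else "index"

# path_to_slug("/features/pricing") → "features-pricing"
```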

**Primary (Jina MCP)**:
```
mcp__jina__capture_screenshot_url(url: PAGE_URL, firstScreenOnly: false)
```

If Jina MCP returns a URL (not inline binary), download it to a local file:
```bash
python3 $HOME/.agents/skills/ui-mirror/scripts/download_screenshot.py \
  --url "RETURNED_URL" \
  --output "docs/ui-mirror/{name}/screenshots/benchmark-{slug}-full.png"
```

If it returns a base64 data URL (`data:image/png;base64,...`), the same script handles decoding:
```bash
python3 $HOME/.agents/skills/ui-mirror/scripts/download_screenshot.py \
  --url "data:image/png;base64,..." \
  --output "docs/ui-mirror/{name}/screenshots/benchmark-{slug}-full.png"
```

If neither a URL nor a data URL is returned (inline binary via MCP), write the binary data directly to `screenshots/benchmark-{slug}-full.png`.

**Fallback 1 (agent-browser)**:
```bash
agent-browser open "{PAGE_URL}"
agent-browser wait --load networkidle
agent-browser screenshot --full "screenshots/benchmark-{slug}-full.png"
```

Also capture an annotated screenshot if agent-browser is available:
```bash
agent-browser screenshot --annotate "screenshots/benchmark-{slug}-annotated.png"
```

**Fallback 2 (Chrome DevTools MCP)**:
```
mcp__chrome-devtools__navigate_page(url: PAGE_URL)
mcp__chrome-devtools__take_screenshot(fullPage: true)
```
Save the returned screenshot data to `screenshots/benchmark-{slug}-full.png`.

### Step 1B — Content Extraction (parallel with 1A)

```
mcp__jina__read_url(url: PAGE_URL, withAllImages: true, withAllLinks: true)
```

Save the structured content output to `raw/benchmark-{slug}-content.md` for reference during analysis.

### Step 1C — CSS Extraction

Execute the following JavaScript on each page to extract computed styles. Use the appropriate tool based on availability:

**agent-browser**:
```bash
agent-browser eval --stdin <<'JSEOF'
(function() {
  const result = {};

  /* Body */
  const body = document.body;
  const bs = getComputedStyle(body);
  result.body = {
    fontFamily: bs.fontFamily,
    fontSize: bs.fontSize,
    lineHeight: bs.lineHeight,
    color: bs.color,
    backgroundColor: bs.backgroundColor,
    letterSpacing: bs.letterSpacing
  };

  /* Headings — first 20 */
  result.headings = [];
  document.querySelectorAll('h1,h2,h3,h4,h5,h6').forEach((el, i) => {
    if (i >= 20) return;
    const s = getComputedStyle(el);
    result.headings.push({
      tag: el.tagName, text: el.textContent.slice(0, 60),
      fontSize: s.fontSize, fontWeight: s.fontWeight, lineHeight: s.lineHeight,
      color: s.color, letterSpacing: s.letterSpacing, marginBottom: s.marginBottom
    });
  });

  /* Buttons — first 15 */
  result.buttons = [];
  document.querySelectorAll('button, [role="button"], a.btn, .button, [class*="btn"]').forEach((el, i) => {
    if (i >= 15) return;
    const s = getComputedStyle(el);
    result.buttons.push({
      text: el.textContent.trim().slice(0, 40),
      fontSize: s.fontSize, fontWeight: s.fontWeight, padding: s.padding,
      borderRadius: s.borderRadius, backgroundColor: s.backgroundColor,
      color: s.color, border: s.border, boxShadow: s.boxShadow,
      height: s.height, minHeight: s.minHeight
    });
  });

  /* Cards — first 10 */
  result.cards = [];
  document.querySelectorAll('[class*="card"], [class*="Card"], article, .panel, [class*="tile"]').forEach((el, i) => {
    if (i >= 10) return;
    const s = getComputedStyle(el);
    result.cards.push({
      tagName: el.tagName, className: el.className.toString().slice(0, 80),
      padding: s.padding, borderRadius: s.borderRadius, border: s.border,
      backgroundColor: s.backgroundColor, boxShadow: s.boxShadow,
      gap: s.gap, width: s.width
    });
  });

  /* Inputs — first 10 */
  result.inputs = [];
  document.querySelectorAll('input, textarea, select').forEach((el, i) => {
    if (i >= 10) return;
    const s = getComputedStyle(el);
    result.inputs.push({
      type: el.type || el.tagName.toLowerCase(),
      fontSize: s.fontSize, padding: s.padding, borderRadius: s.borderRadius,
      border: s.border, backgroundColor: s.backgroundColor,
      height: s.height, boxShadow: s.boxShadow
    });
  });

  /* Nav */
  const nav = document.querySelector('nav, [role="navigation"], header');
  if (nav) {
    const ns = getComputedStyle(nav);
    result.nav = {
      height: ns.height, backgroundColor: ns.backgroundColor,
      padding: ns.padding, position: ns.position, boxShadow: ns.boxShadow,
      borderBottom: ns.borderBottom, backdropFilter: ns.backdropFilter
    };
  }

  /* Color inventory — first 500 elements */
  const bgColors = new Map();
  const textColors = new Map();
  document.querySelectorAll('*').forEach((el, i) => {
    if (i >= 500) return;
    const s = getComputedStyle(el);
    const bg = s.backgroundColor;
    if (bg && bg !== 'rgba(0, 0, 0, 0)' && bg !== 'transparent') {
      bgColors.set(bg, (bgColors.get(bg) || 0) + 1);
    }
    const tc = s.color;
    if (tc) {
      textColors.set(tc, (textColors.get(tc) || 0) + 1);
    }
  });
  result.colorInventory = {
    backgrounds: Object.fromEntries([...bgColors.entries()].sort((a,b) => b[1]-a[1]).slice(0, 25)),
    texts: Object.fromEntries([...textColors.entries()].sort((a,b) => b[1]-a[1]).slice(0, 25))
  };

  /* Layout info */
  const main = document.querySelector('main, [role="main"], .main, #main, .container, .wrapper');
  if (main) {
    const ms = getComputedStyle(main);
    result.layout = {
      maxWidth: ms.maxWidth, width: ms.width, padding: ms.padding,
      margin: ms.margin, display: ms.display,
      gridTemplateColumns: ms.gridTemplateColumns || null,
      gap: ms.gap
    };
  }

  return JSON.stringify(result, null, 2);
})()
JSEOF
```

**Chrome DevTools MCP alternative**:
```
mcp__chrome-devtools__evaluate_script(expression: <same JS as above>)
```

Save the output to `raw/benchmark-styles.json`.

### Step 1D — Token Processing

Run the design token extraction script:

```bash
python3 $HOME/.agents/skills/ui-mirror/scripts/extract_design_tokens.py raw/benchmark-styles.json > raw/benchmark-tokens.json
```

If the Python script is not present or fails, perform inline token extraction: parse `raw/benchmark-styles.json`, deduplicate colors, normalize font stacks, convert px values to rem, and produce a structured token JSON manually. Save to `raw/benchmark-tokens.json`.
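
The px-to-rem step of the inline fallback can be sketched as (assuming the conventional 16px root font size; the helper is hypothetical):

```python
def px_to_rem(value: str, base: float = 16.0) -> str:
    """Convert a CSS px string like '20px' to rem, assuming a 16px root font size."""
    if not value.endswith("px"):
        return value          # leave non-px values untouched
    rem = float(value[:-2]) / base
    return f"{rem:g}rem"

# px_to_rem("20px") → "1.25rem"
```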

## 6. Phase 2: Current Site Capture

### Step 2A — Page Mapping

Auto-map benchmark page types to equivalent routes in the current project. Detect the project's framework and route structure:

1. **Route Discovery**: Scan `src/app/` (Next.js App Router), `pages/` (Next.js Pages Router), `src/routes/` (SvelteKit/Remix), or `src/views/` (Vue) to identify available routes.
2. **Map by page type**:

| Benchmark Page Type | How to Find Equivalent |
|--------------------|------------------------|
| Landing / Home | Root route (`/` or `/dashboard`) |
| Content / Article | Content-heavy routes (blog, docs, wiki) |
| Admin / Dashboard / Settings | Routes under `/admin`, `/settings`, `/dashboard` |
| Form / Input-heavy | Routes with forms (creation pages, auth pages) |
| List / Table | Routes with list/table views |
| Profile / User | Routes under `/profile`, `/account`, `/me` |

3. If no matching route is found for a benchmark page type, skip that mapping and note it in the analysis.
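
Route discovery for the App Router case can be sketched as (a simplification; `discover_app_routes` is hypothetical and ignores dynamic segments and parallel routes):

```python
from pathlib import Path

def discover_app_routes(app_dir: str = "src/app") -> list[str]:
    """List App Router routes by locating page.* files under the app directory."""
    root = Path(app_dir)
    routes = []
    for page in root.rglob("page.*"):
        rel = page.parent.relative_to(root)
        # Route groups like (marketing) do not contribute URL segments.
        segments = [s for s in rel.parts if not s.startswith("(")]
        routes.append("/" + "/".join(segments) if segments else "/")
    return sorted(routes)
```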

### Step 2B — Dev Server Auto-Detection & Capture

Perform multi-step dev server detection before capturing current project pages:

**Step 2B-1: Port Discovery**

Check ports in priority order:
1. Read `.env.local` for the `PORT` variable
2. Parse `package.json` `scripts.dev` for `--port N` or `-p N`
3. Try default ports in order: `3000 → 3001 → 5173 → 4321 → 8080`

Detection method (run via Bash):
```bash
# macOS: check if port is listening
lsof -i :3000 -sTCP:LISTEN -P -n 2>/dev/null | head -2

# Cross-platform fallback: HTTP probe
curl -s -o /dev/null -w '%{http_code}' --connect-timeout 3 http://localhost:3000/ 2>/dev/null
```

**Step 2B-2: Process Detection**

If no port responds, check for running dev server processes:
```bash
ps aux | grep -E 'next dev|npm run dev|vercel dev|vite|turbopack' | grep -v grep | head -3
```

**Step 2B-3: Result Branching**

| Scenario | Action |
|----------|--------|
| Port found responding | Use that port for Phase 2 capture |
| Dev process running but port not responding | Wait 10 seconds, retry the port probe once |
| Nothing detected | Present the user with options (see below) |

When no dev server is detected, present the user with a choice:

```
INFO: Dev server가 감지되지 않습니다.

[1] 직접 `npm run dev` 실행 후 재시도 (권장)
[2] Phase 2 건너뛰고 벤치마크 전용 분석
[3] dev server 자동 시작 (background, npm run dev)
```

- Option 1: Pause and wait for the user to confirm the dev server is running, then re-probe
- Option 2: Skip to Phase 3 with benchmark-only analysis (no current-site column in comparison tables)
- Option 3: Run `npm run dev &` in the background, wait 15 seconds, verify the port responds

**Step 2B-4: Capture Pages**

Once a responding port `PORT` is confirmed, capture each mapped current project page:

```
mcp__chrome-devtools__navigate_page(url: "http://localhost:{PORT}/{mapped-route}")
mcp__chrome-devtools__take_screenshot(fullPage: true)
```

Save to `screenshots/current-{slug}-full.png`.

### Step 2C — CSS Extraction for Current Project

Execute the same JavaScript extraction from Step 1C on each captured current project page. Save to `raw/current-styles.json`.

### Step 2D — Read Authoritative Tokens

Auto-detect the project's global CSS file containing CSS custom properties:
1. Check `src/app/globals.css` (Next.js App Router)
2. Check `src/styles/globals.css`, `src/index.css`, `src/global.css`
3. Check `app/globals.css`, `styles/globals.css`
4. Search for files matching `glob("**/globals.css")` or `glob("**/*.css")` containing `:root {`

Read the found file to extract the canonical CSS custom properties (the `:root` block). This is the source of truth for the project's design tokens — computed styles may differ from declared tokens due to component overrides.

Save parsed tokens to `raw/current-tokens.json`.
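
Parsing the `:root` block can be sketched as (a regex-based simplification; a hypothetical helper, not one of the bundled scripts — a real parser should handle nested braces and `@layer`):

```python
import re

def parse_root_tokens(css: str) -> dict[str, str]:
    """Extract `--name: value;` pairs from the first `:root { ... }` block."""
    m = re.search(r":root\s*\{(.*?)\}", css, re.DOTALL)
    if not m:
        return {}
    return dict(re.findall(r"(--[\w-]+)\s*:\s*([^;]+);", m.group(1)))

# parse_root_tokens(":root { --primary: oklch(0.55 0.25 280); }")
# → {"--primary": "oklch(0.55 0.25 280)"}
```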

## 7. Phase 3: 7-Axis Analysis

### Step 3A — Load Analysis Rubric

Read `references/analysis-dimensions.md` for the detailed scoring rubric and comparison criteria for each axis. If the file is missing, use the built-in 7-axis framework described below.

### Step 3B — Structured Analysis via Sequential Thinking

Invoke Sequential Thinking MCP with exactly 9 thoughts:

| Thought # | Content |
|-----------|---------|
| 1 | **Setup**: Summarize benchmark identity (brand, target audience, design language) and the current project's design posture |
| 2 | **Axis 1 — Color Palette**: Compare primary, secondary, accent, semantic, neutral palettes. Count unique hues. Map to OKLCH. |
| 3 | **Axis 2 — Typography**: Font families, size scale, weight distribution, line-height ratios, letter-spacing. |
| 4 | **Axis 3 — Spacing & Layout**: Grid system, max-width, padding rhythm, gap patterns, section spacing. |
| 5 | **Axis 4 — Border Radius & Shape Language**: Radius scale, card vs button vs input differentiation, shape consistency. |
| 6 | **Axis 5 — Shadows & Elevation**: Shadow layers, elevation hierarchy, blur/spread patterns, colored shadows. |
| 7 | **Axis 6 — Components**: Button styles, card anatomy, input styling, navigation, badges, modals, empty states. |
| 8 | **Axis 7 — Motion & Interaction**: Transitions, hover effects, animation library, duration/easing conventions. |
| 9 | **Synthesis**: Overall design gap score (0-100), top 5 highest-impact changes, design system rule conflict inventory. |

### Step 3C — Token Comparison Script

```bash
python3 $HOME/.agents/skills/ui-mirror/scripts/compare_tokens.py raw/benchmark-tokens.json raw/current-tokens.json > raw/token-diff.json
```

If the script is not present or fails, perform inline comparison: diff each token category, identify additions/removals/changes, and compute delta magnitudes.
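
The inline comparison fallback can be sketched as (a hypothetical helper mirroring the added/removed/changed categories; delta-magnitude computation is omitted):

```python
def diff_tokens(benchmark: dict[str, str], current: dict[str, str]) -> dict:
    """Categorize token differences between benchmark and current token maps."""
    return {
        "added":   {k: benchmark[k] for k in benchmark.keys() - current.keys()},
        "removed": {k: current[k] for k in current.keys() - benchmark.keys()},
        "changed": {k: {"benchmark": benchmark[k], "current": current[k]}
                    for k in benchmark.keys() & current.keys()
                    if benchmark[k] != current[k]},
    }
```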

### Step 3D — Per-Axis Comparison Tables

For each of the 7 axes, produce a markdown table:

```markdown
| Property | Benchmark | Current Project | Gap | Action |
|----------|-----------|-----------------|-----|--------|
| Primary color | oklch(0.65 0.28 264) | oklch(0.55 0.25 280) | Hue shift 16 | Override --primary |
```

### Step 3E — Design System Rule Conflict Detection

Auto-detect the project's design system documentation:
1. Check for `FRONTEND_CONSISTENCY.md` in the project root
2. Check for `DESIGN_SYSTEM.md`, `STYLE_GUIDE.md`, `docs/design-system.md`
3. Check `CLAUDE.md` for a frontend design rules section
4. If no design system file is found, skip conflict detection and note it in analysis.md

If found, cross-reference every proposed change against the documented design rules.

For each conflict, document:
- Rule ID and rule text (if rules are numbered)
- What the benchmark requires
- Whether the user has opted for "제약 무시, 완전 재현 우선" mode
- Proposed rule amendment if full reproduction is requested

### Step 3F — Write Analysis Report

Write the full analysis to `docs/ui-mirror/{name}/analysis.md`. Use the template from `references/output-template.md` if available. The report must include:

1. Executive summary (3-5 sentences)
2. Screenshot gallery table (all captured pages in one overview table)
3. Benchmark identity profile
4. 7-axis comparison tables (from Step 3D), **each with inline screenshot evidence** (from Step 3F-3)
5. Design system rule conflict inventory (from Step 3E)
6. Overall gap score with visual charts (see below)
7. Recommended implementation priority (P0/P1/P2)

### Step 3F-2 — Generate Visual Charts

Generate both an SVG radar chart and a Mermaid bar chart for the 7-axis scores.

**SVG Radar Chart** (primary visualization):
```bash
python3 $HOME/.agents/skills/ui-mirror/scripts/generate_radar_chart.py \
  --scores '{"color":SCORE,"typography":SCORE,"spacing":SCORE,"radius":SCORE,"shadows":SCORE,"components":SCORE,"motion":SCORE}' \
  --max-score 5 \
  --lang kr \
  --output docs/ui-mirror/{name}/radar-chart.svg
```

Embed in analysis.md:
```markdown
### Score Radar Chart

![7축 비교 레이더 차트](./radar-chart.svg)
```

**Mermaid Bar Chart** (fallback for markdown renderers without SVG support):

Generate via script or inline. Use the `--mermaid` flag for stdout output:
```bash
python3 $HOME/.agents/skills/ui-mirror/scripts/generate_radar_chart.py \
  --scores '{"color":SCORE,...}' \
  --mermaid
```

Embed the output in analysis.md as a fenced Mermaid code block. This renders natively on GitHub, GitLab, and most modern markdown viewers.

### Step 3F-3 — Inline Screenshot Embedding

Each axis section in analysis.md **must** include the most relevant benchmark screenshot(s) as inline markdown images, placed immediately after the axis heading and before the descriptive paragraph. This provides visual evidence for every comparison claim.

**Page-to-axis mapping** — use the following default mapping to select which screenshot to embed in each axis section. If a page was not captured, skip that axis's screenshot.

| Axis | Primary Screenshot | Secondary Screenshot | Rationale |
|------|-------------------|---------------------|-----------|
| 1. Color Palette | `benchmark-article-full.png` | — | Shows accent colors, link colors, navbar, body/content bg |
| 2. Typography | `benchmark-article-full.png` | — | Shows heading scale, body text, font rendering |
| 3. Spacing & Layout | `benchmark-main-full.png` | `benchmark-article-full.png` | Shows page chrome, column structure, padding |
| 4. Border Radius | `benchmark-article-full.png` | — | Shows button/card/input radius in context |
| 5. Shadows & Elevation | `benchmark-article-full.png` | — | Shows shadow presence/absence, border-based depth |
| 6. Components | `benchmark-recentchanges-full.png` | `benchmark-history-full.png`, `benchmark-discussion-full.png` | Shows component anatomy, density, patterns |
| 7. Interaction | `benchmark-article-full.png` | — | Shows hover targets, edit entry points |

**Embedding format** — use blockquote + descriptive caption:

```markdown
### Axis N: Title (Score: X/5)

> **참고 스크린샷**: {page name} — {what to observe in the screenshot}
> ![{page name} 전체](./screenshots/benchmark-{slug}-full.png)

{axis description paragraph...}
```

For axes with multiple screenshots (especially Components), embed all relevant screenshots with separate captions:

```markdown
> **참고 스크린샷**: 최근 변경 — action type + byte delta + 직접 링크 고밀도 테이블
> ![최근 변경](./screenshots/benchmark-recentchanges-full.png)
>
> **참고 스크린샷**: 문서 역사 — 리비전 목록 + diff 링크
> ![문서 역사](./screenshots/benchmark-history-full.png)
```

**Rules**:
- Every axis section must have at least one screenshot. If no dedicated page maps to the axis, use the article page as the default.
- Use relative paths (`./screenshots/`) — never absolute paths.
- Captions must describe **what to observe** in the screenshot for that specific axis, not just the page name.
- The screenshot gallery table at the top of analysis.md (all screenshots in one table) is still included as an overview. The inline screenshots within axes provide contextual evidence.

### Step 3F-4 — Human-Readable Token Diff

In addition to the raw JSON diff, generate a human-readable summary:
```bash
python3 $HOME/.agents/skills/ui-mirror/scripts/compare_tokens.py \
  raw/benchmark-tokens.json raw/current-tokens.json --format human \
  > raw/token-diff-readable.txt
```

Include this text output as an appendix in analysis.md or as a standalone file for quick review.

## 8. Phase 4: Code Artifact Generation

### Step 4A — CSS Token Override (`tokens-override.css`)

Generate a CSS file with a `:root` block containing OKLCH token overrides:

```css
/* ui-mirror: {name} ({BENCHMARK_URL})
 * Generated: {ISO date}
 * Apply by importing after globals.css or merging into globals.css
 */

:root {
  /* === Colors === */
  --primary: oklch(0.65 0.28 264); /* was: oklch(0.55 0.25 280) */
  --primary-foreground: oklch(0.98 0 0); /* was: oklch(0.98 0.01 280) */

  /* === Typography === */
  --font-sans: "Inter", "Pretendard", sans-serif; /* was: "Pretendard", "Inter", sans-serif */

  /* === Spacing (as reference — apply via Tailwind config) === */
  /* card-padding: 20px (was: 16px) → p-5 instead of p-4 */

  /* === New tokens (not in current project) === */
  --accent-gradient: linear-gradient(135deg, oklch(0.65 0.28 264), oklch(0.70 0.20 300)); /* NEW */
}
```

Every declaration must include a `/* was: ... */` comment showing the current project value, or `/* NEW */` for tokens that do not exist in the current project.

All colors must be in OKLCH format. Convert any RGB/HSL values from the benchmark extraction using the Python conversion utility or inline calculation.
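
For the inline calculation, sRGB can be converted to OKLCH using Björn Ottosson's published OKLab matrices; a sketch (a hypothetical helper, not the bundled conversion utility):

```python
import math

def srgb_to_oklch(r: int, g: int, b: int) -> tuple[float, float, float]:
    """Convert 8-bit sRGB to OKLCH (L, C, H in degrees) via OKLab."""
    def lin(c: float) -> float:  # undo the sRGB transfer function
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = lin(r), lin(g), lin(b)
    # Linear sRGB → LMS cone response
    l = 0.4122214708 * rl + 0.5363325363 * gl + 0.0514459929 * bl
    m = 0.2119034982 * rl + 0.6806995451 * gl + 0.1073969566 * bl
    s = 0.0883024619 * rl + 0.2817188376 * gl + 0.6299787005 * bl
    l_, m_, s_ = l ** (1 / 3), m ** (1 / 3), s ** (1 / 3)
    # LMS → OKLab
    L = 0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_
    a = 1.9779984951 * l_ - 2.4285922050 * m_ + 0.4505937099 * s_
    bb = 0.0259040371 * l_ + 0.7827717662 * m_ - 0.8086757660 * s_
    C = math.hypot(a, bb)
    H = math.degrees(math.atan2(bb, a)) % 360
    return (round(L, 4), round(C, 4), round(H, 1))
```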
|
|
582
|
+
|
|
583
|
+
### Step 4B — Component Code Snippets (`components/`)
|
|
584
|
+
|
|
585
|
+
For each major component where the benchmark significantly differs from the current project's implementation, generate a TSX file. Typical components:
|
|
586
|
+
|
|
587
|
+
- `mirror-button.tsx` — Button variant matching benchmark style
|
|
588
|
+
- `mirror-card.tsx` — Card component with benchmark proportions
|
|
589
|
+
- `mirror-input.tsx` — Input styling
|
|
590
|
+
- `mirror-nav.tsx` — Navigation bar structure
|
|
591
|
+
|
|
592
|
+
Each TSX file should follow the project's component conventions. If the project uses shadcn/ui + CVA, use this pattern:

```tsx
import * as React from "react"
import { cva, type VariantProps } from "class-variance-authority"
import { cn } from "@/lib/utils"

const mirrorButtonVariants = cva(
  "inline-flex items-center justify-center font-medium transition-all",
  {
    variants: {
      variant: {
        default: "bg-primary text-primary-foreground shadow-md",
        outline: "border border-input bg-background hover:bg-accent",
      },
      size: {
        default: "h-10 px-5 text-sm rounded-lg", // benchmark: rounded-lg, h-10
        sm: "h-8 px-3 text-xs rounded-md",
        lg: "h-12 px-8 text-base rounded-lg",
      },
    },
    defaultVariants: { variant: "default", size: "default" },
  }
)

interface MirrorButtonProps
  extends React.ButtonHTMLAttributes<HTMLButtonElement>,
    VariantProps<typeof mirrorButtonVariants> {}

function MirrorButton({ className, variant, size, ...props }: MirrorButtonProps) {
  return (
    <button
      data-slot="mirror-button"
      className={cn(mirrorButtonVariants({ variant, size }), className)}
      {...props}
    />
  )
}

export { MirrorButton, mirrorButtonVariants }
```

Include inline comments noting which benchmark property each class maps to. Prefix all component names with `Mirror` to avoid collisions with existing project components.

### Step 4C — Migration Plan (`migration-plan.md`)

Run the migration plan generator:

```bash
python3 $HOME/.agents/skills/ui-mirror/scripts/generate_migration.py raw/token-diff.json > migration-plan.md
```

If the script is unavailable, generate the migration plan inline with the following structure:

```markdown
# Migration Plan: {name}

## Priority Matrix

### P0 — Immediate (tokens-override.css import)
Changes achievable by importing the generated CSS token file.
No component code changes required.

| Change | File | Effort | Risk |
|--------|------|--------|------|
| Primary color shift | globals.css | 1 line | Low |

### P1 — Short-term (component updates)
Changes requiring component-level modifications using the generated snippets.

| Change | File(s) | Effort | Risk | Design Rule Conflict |
|--------|---------|--------|------|----------------------|

### P2 — Medium-term (structural changes)
Layout, grid, or navigation restructuring.

### P3 — Deferred / Needs Evaluation
Changes that conflict with the project's design principles or require a user decision.

## Design System Rule Amendments
If full reproduction is requested, list each rule that needs amendment:

| Rule ID | Current Rule | Proposed Amendment | Justification |
|---------|--------------|--------------------|---------------|

## Estimated Total Effort
- P0: ~{N} minutes
- P1: ~{N} hours
- P2: ~{N} hours
- P3: Requires a design decision
```

Copy the generated migration plan to `docs/ui-mirror/{name}/migration-plan.md`.
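When `generate_migration.py` cannot be run, the core of what it does — bucketing token diffs into the P0–P3 tiers — can be approximated inline. A sketch under an assumed `token-diff.json` entry shape (the real schema may differ):

```python
# Assumed diff entry shape (illustrative, not the real schema):
# {"token": "--primary", "kind": "color", "current": "...",
#  "benchmark": "...", "conflicts_design_rule": bool, "structural": bool}
def bucket_changes(diff_entries: list[dict]) -> dict[str, list[dict]]:
    """Sort token diffs into the migration plan's P0-P3 priority tiers."""
    buckets = {"P0": [], "P1": [], "P2": [], "P3": []}
    for entry in diff_entries:
        if entry.get("conflicts_design_rule"):
            buckets["P3"].append(entry)   # deferred: needs a user decision
        elif entry.get("structural"):
            buckets["P2"].append(entry)   # layout / nav restructuring
        elif entry.get("kind") in ("color", "radius", "shadow", "spacing"):
            buckets["P0"].append(entry)   # pure CSS token override
        else:
            buckets["P1"].append(entry)   # component-level change
    return buckets
```

Each non-empty bucket then becomes one Priority Matrix section in the template above.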

## 9. Output Summary

After all phases complete, present the completion summary:

```
── ui-mirror complete ──
Benchmark: {name} ({BENCHMARK_URL})
Pages: {N} analyzed | Tokens: {N} compared
Rule conflicts: {N} (with proposed resolutions)
Overall gap score: {score}/100

Artifacts:
docs/ui-mirror/{name}/analysis.md — 7-axis comparison analysis report
docs/ui-mirror/{name}/tokens-override.css — CSS token overrides
docs/ui-mirror/{name}/components/ — {N} component snippets
docs/ui-mirror/{name}/migration-plan.md — prioritized migration plan
docs/ui-mirror/{name}/screenshots/ — {N} screenshots
docs/ui-mirror/{name}/raw/ — raw data (JSON)

Next steps:
1. Review analysis.md and decide the application scope
2. Merge tokens-override.css into globals.css (P0)
3. Integrate components/ snippets into src/components/ui/ (P1)
```

Do not automatically commit any generated artifacts. Wait for user review and an explicit commit request.

## 10. Error Handling & Fallback

| Situation | Fallback |
|-----------|----------|
| Jina screenshot fails | agent-browser `screenshot --full` |
| agent-browser unavailable | Chrome DevTools MCP `take_screenshot` |
| All screenshot tools fail | **HARD STOP** + install instructions (see Section 3) |
| Benchmark requires authentication | Guide user through agent-browser auth vault: `agent-browser auth add {domain}` |
| Benchmark blocks bots / returns 403 | Suggest agent-browser `--headed` mode for manual navigation, then capture |
| Python script not found or fails | Claude performs inline analysis (degraded mode — slower, no automated diffing) |
| localhost:3000 not running | Skip Phase 2, produce benchmark-only analysis with WARN |
| 10+ pages discovered | Confirm with user, then batch in groups of 5 with progress updates |
| `raw/` JSON is malformed | Re-run CSS extraction with simplified JS (headings + buttons + colors only) |
| Design system file not found | Warn user, skip conflict detection, note in analysis.md |
| Output directory already exists | Ask user: overwrite / rename with timestamp suffix / abort |
| Network timeout on capture | Retry once with 30s delay, then skip page with WARN |
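The timeout fallback in the last row can be expressed as a small retry wrapper. A sketch mirroring the table's policy (the `capture` callable and names are illustrative, not part of the skill's scripts):

```python
import time

def capture_with_retry(capture, url: str, retries: int = 1, delay_s: float = 30.0):
    """Call capture(url); on failure retry after a delay, else skip with WARN."""
    for attempt in range(retries + 1):
        try:
            return capture(url)
        except Exception as exc:  # e.g. network timeout
            if attempt < retries:
                print(f"WARN: capture failed ({exc}); retrying in {delay_s:.0f}s")
                time.sleep(delay_s)
    print(f"WARN: skipping {url} after {retries + 1} failed attempts")
    return None  # caller records the skipped page in the analysis
```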

## 11. Safety Rules

### NEVER

- Modify existing source files (`src/`, `globals.css`, any `.tsx`/`.ts`) automatically. All generated code goes to `docs/ui-mirror/{name}/` only.
- Commit generated artifacts without explicit user review and approval.
- Send credentials, cookies, or auth tokens to Jina MCP. Use agent-browser for authenticated pages.
- Process more than 10 pages without explicit user confirmation.
- Overwrite an existing `docs/ui-mirror/{name}/` analysis directory without asking the user first.
- Skip the scope confirmation step (Step 0D).
- Use hardcoded HEX/RGB in generated `tokens-override.css` — always convert to OKLCH.

### ALWAYS

- Save all screenshots to the output directory before analysis (never analyze from memory alone).
- Convert all color values to OKLCH format in generated artifacts.
- Flag every conflict with the project's design system rules (if a design system file is found).
- Use Sequential Thinking MCP for the 7-axis analysis (structured reasoning prevents skipped dimensions).
- Present scope confirmation (Step 0D) before starting Phase 1 captures.
- Clean up any `/tmp/ui-mirror-*` temporary files after copying results to the output directory.
- Prefix generated component names with `Mirror` to avoid naming collisions.
- Include `/* was: ... */` comments in every token override line.
- Record the benchmark URL, capture date, and tool versions used in `analysis.md` metadata.
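The last rule can be mechanized when generating `analysis.md`. A minimal sketch of rendering the metadata as a front-matter block (field names here are illustrative, not a fixed schema):

```python
from datetime import datetime, timezone

def analysis_frontmatter(benchmark_url: str, tool_versions: dict[str, str]) -> str:
    """Render the metadata block that heads analysis.md."""
    lines = [
        "---",
        f"benchmark_url: {benchmark_url}",
        f"captured_at: {datetime.now(timezone.utc).strftime('%Y-%m-%dT%H:%M:%SZ')}",
        "tools:",
    ]
    # One indented line per tool, sorted for stable diffs across runs.
    lines += [f"  {name}: {version}" for name, version in sorted(tool_versions.items())]
    lines.append("---")
    return "\n".join(lines)
```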
|