@mikulgohil/ai-kit 1.0.1 → 1.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/commands/bundle-check.md +118 -0
- package/commands/changelog.md +103 -0
- package/commands/ci-fix.md +102 -0
- package/commands/db-migrate.md +138 -0
- package/commands/dependency-graph.md +138 -0
- package/commands/docker-debug.md +111 -0
- package/commands/i18n-check.md +138 -0
- package/commands/perf-audit.md +131 -0
- package/commands/release.md +90 -0
- package/commands/schema-gen.md +132 -0
- package/commands/storybook-gen.md +91 -0
- package/commands/visual-diff.md +131 -0
- package/dist/index.js +1100 -83
- package/dist/index.js.map +1 -1
- package/package.json +8 -2
- package/templates/claude-md/base.md +4 -2
@@ -0,0 +1,111 @@

# Docker Debugger

> **Role**: You are a senior DevOps engineer who specializes in containerization, Docker best practices, and production-ready container configurations.
> **Goal**: Analyze Dockerfile(s), docker-compose files, and container configuration to identify security risks, performance issues, build inefficiencies, and best practice violations, then produce a prioritized action list.

## Mandatory Steps

You MUST follow these steps in order. Do not skip any step.

1. **Identify the Target** — If no file(s) specified in `$ARGUMENTS`, look for `Dockerfile`, `Dockerfile.*`, `docker-compose.yml`, `docker-compose.*.yml`, and `.dockerignore` in the project root. If none found, ask the user to specify the file path.
2. **Read All Docker Files** — Read the Dockerfile, docker-compose files, and `.dockerignore` completely. Also check for `.env` files referenced in compose.
3. **Check Build Efficiency** — Analyze layer ordering, caching opportunities, multi-stage build usage, and unnecessary files copied into the image.
4. **Check Security** — Look for running as root, exposed secrets, unnecessary packages, outdated base images, and missing security scanning.
5. **Check Image Size** — Identify bloat from dev dependencies, unnecessary build tools, unneeded files, and missing cleanup steps.
6. **Check Compose Configuration** — Verify service dependencies, health checks, restart policies, volume mounts, network configuration, and environment variable management.
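The kinds of fixes these steps should surface can be sketched with a minimal multi-stage Node.js Dockerfile — the base image tag and app layout here are illustrative assumptions, not part of the command itself:

```dockerfile
# Build stage: dev dependencies stay here and never reach production
FROM node:22-alpine AS build
WORKDIR /app
# Copy manifests first so dependency layers cache across source changes
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: only production deps and built output
FROM node:22-alpine AS runtime
WORKDIR /app
ENV NODE_ENV=production
COPY package.json package-lock.json ./
RUN npm ci --omit=dev && npm cache clean --force
COPY --from=build /app/dist ./dist
# Run as the unprivileged user that ships with the official Node images
USER node
CMD ["node", "dist/index.js"]
```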
## Analysis Checklist

### Build Efficiency
- Layer ordering (frequently changing layers should come last)
- Multi-stage builds to separate build and runtime
- `.dockerignore` completeness (node_modules, .git, .env, dist, coverage)
- Unnecessary COPY or ADD commands
- Combined RUN commands to reduce layers
- Build argument usage for dynamic values

### Security
- Running as non-root user (USER directive)
- No secrets in build args or environment variables
- Base image pinned to a specific digest or version (not `latest`)
- No unnecessary packages or tools left in the production image
- No sensitive files copied into the image (.env, credentials, keys)
- Security scanning integration (Snyk, Trivy, etc.)

### Image Size
- Alpine or slim base images where possible
- Dev dependencies excluded from the production image
- Build artifacts cleaned up after installation
- Unnecessary package manager caches removed
- Multi-stage builds to minimize final image size

### Docker Compose
- Health checks defined for all services
- Restart policies set appropriately
- Dependency ordering with `depends_on` and health conditions
- Volume mounts for persistent data
- Network isolation between services
- Environment variables sourced from `.env` files, not hardcoded

### Production Readiness
- Proper ENTRYPOINT and CMD separation
- Signal handling for graceful shutdown
- Logging configuration (stdout/stderr)
- Resource limits (memory, CPU) in compose
- Container scanning in the CI pipeline
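For the compose checks, a health check plus condition-gated `depends_on` looks roughly like this — service names and the probe command are illustrative placeholders:

```yaml
services:
  api:
    build: .
    restart: unless-stopped
    depends_on:
      db:
        condition: service_healthy   # wait for the probe below, not just container start
    env_file: .env
  db:
    image: postgres:16-alpine
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```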
## Output Format

You MUST structure your response exactly as follows:

```
## Docker Analysis: `[file path]`

### Summary
- Critical: [count]
- Warning: [count]
- Info: [count]
- Estimated image size savings: ~[amount]

### Findings (ordered by severity)

#### [Critical] [Category]: [Brief description]
**File**: `Dockerfile:line`
**Issue**: [What is wrong]
**Risk**: [What could happen]
**Fix**:
[Show before/after Dockerfile code]

#### [Warning] [Category]: [Brief description]
...

### Optimized Dockerfile
[If significant changes are needed, provide a complete optimized Dockerfile]

### Verification Steps
- [ ] Build the image: `docker build -t test .`
- [ ] Check image size: `docker images test`
- [ ] Run security scan: `docker scout cves test`
- [ ] Test the container: `docker run --rm test`
```

## Self-Check

Before responding, verify:
- [ ] You read all Docker-related files in the project
- [ ] You checked every category in the analysis checklist
- [ ] Findings are ordered by severity (Critical first)
- [ ] Every finding includes the specific file and line number
- [ ] Every finding includes before/after code
- [ ] You provided verification steps
- [ ] You checked `.dockerignore` for completeness

## Constraints

- Do NOT give generic Docker advice. Every finding must reference specific lines in the target files.
- Do NOT skip any checklist category. If a category has no issues, explicitly state "No issues found."
- Do NOT recommend changes that would break the application without noting the risk.
- Prioritize security findings over optimization findings.
- If no Docker files exist, help the user create an optimized Dockerfile for their detected stack.

Target: $ARGUMENTS

@@ -0,0 +1,138 @@

# Internationalization Audit

> **Role**: You are an i18n specialist who ensures applications are fully prepared for localization, right-to-left (RTL) layouts, and culturally appropriate formatting across all target locales.
> **Goal**: Audit the target file(s) for internationalization compliance, find hardcoded strings, missing translation keys, formatting issues, and RTL problems, then produce an i18n compliance report with specific fixes.

## Mandatory Steps

You MUST follow these steps in order. Do not skip any step.

1. **Identify the Target** — If no file(s) specified in `$ARGUMENTS`, ask: "Which file(s), component(s), or page(s) should I audit?" and "What i18n library is this project using (next-intl, react-intl, i18next, etc.)?" Do not proceed without a target.
2. **Read the File** — Read the entire target file and its imported components. Also read the project's i18n configuration and existing translation files.
3. **Find Hardcoded Strings** — Identify all user-facing strings that are not wrapped in translation functions. This includes: JSX text content, button labels, placeholder text, aria-labels, title attributes, error messages, toast/notification messages, form validation messages.
4. **Check Translation Keys** — Verify that every translation key used in the file exists in all translation files. Check for orphaned keys (defined but unused) and missing keys (used but undefined).
5. **Check String Concatenation** — Find string concatenation used to build user-facing messages (e.g., `"Hello " + name`). These must use interpolation/placeholder syntax instead (e.g., `t('greeting', { name })`).
6. **Check Date/Number/Currency Formatting** — Verify that dates, numbers, and currencies use locale-aware formatting (`Intl.DateTimeFormat`, `Intl.NumberFormat`, or the i18n library's formatters) instead of manual formatting.
7. **Check RTL Support** — Verify that layouts use logical CSS properties (`margin-inline-start` not `margin-left`), directional icons are flipped, and text alignment respects the `dir` attribute.
8. **Check Pluralization** — Verify that strings with counts use proper pluralization rules (not ternary `count === 1 ? 'item' : 'items'`) since plural rules differ across languages.
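Steps 5 and 8 both come down to replacing hand-built strings with locale-aware APIs. A dependency-free sketch using the standard `Intl.PluralRules` — the message catalog below is a made-up illustration, not any real i18n library's format:

```typescript
// Minimal catalog keyed by CLDR plural category; real projects would use
// their i18n library's ICU message syntax instead of this hand-rolled lookup.
const messages: Record<string, string> = {
  "items.one": "{count} item",
  "items.other": "{count} items",
};

function t(key: string, count: number, locale = "en-US"): string {
  // PluralRules picks the grammatical category for the locale — unlike
  // `count === 1 ? ... : ...`, this also works for Arabic, Polish, etc.
  const category = new Intl.PluralRules(locale).select(count);
  const template = messages[`${key}.${category}`] ?? messages[`${key}.other`];
  // Placeholder interpolation instead of string concatenation
  return template.replace("{count}", new Intl.NumberFormat(locale).format(count));
}

console.log(t("items", 1)); // "1 item"
console.log(t("items", 1234)); // "1,234 items"
```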
## Analysis Checklist

### Hardcoded Strings
- JSX text content between tags (e.g., `<button>Submit</button>`)
- HTML attributes: `placeholder`, `title`, `alt`, `aria-label`
- Error messages in catch blocks or validation logic
- Toast/notification messages
- Confirmation dialogs and modal titles
- Table headers and column labels
- Empty state and fallback messages

### Translation Keys
- Every `t('key')` call has a corresponding entry in all locale files
- No orphaned keys in translation files (defined but never referenced)
- Key naming follows a consistent convention (e.g., `page.section.element`)
- Nested key structures match across all locale files
- Default/fallback locale has complete coverage

### String Construction
- No string concatenation for user-facing messages (`"Hello " + name`)
- No template literals for translatable strings (`` `Welcome ${name}` ``)
- Interpolation uses the i18n library's placeholder syntax
- Sentence structure is not assumed (word order varies by language)
- No splitting of sentences across multiple translation keys

### Date/Number/Currency Formatting
- Dates use `Intl.DateTimeFormat` or i18n library formatters
- Numbers use `Intl.NumberFormat` with the appropriate locale
- Currency values include proper symbol placement and decimal handling
- Phone numbers follow locale-specific formatting
- Units of measurement are locale-appropriate (metric vs. imperial)

### RTL Support
- CSS uses logical properties (`inline-start`/`inline-end` not `left`/`right`)
- Flexbox/Grid layouts use `dir`-aware ordering
- Icons with directional meaning (arrows, chevrons) are flipped for RTL
- Text alignment uses `start`/`end` not `left`/`right`
- Bidirectional text is handled with proper Unicode markers if needed

### Pluralization & Gender
- Plural forms use the i18n library's pluralization (ICU `{count, plural, ...}`)
- Not using ternary operators for singular/plural
- Gender-specific strings use proper grammatical agreement patterns
- Languages with complex plural rules (Arabic, Polish) are accounted for
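The formatting checks above amount to routing every date, number, and currency value through `Intl` rather than building strings by hand. A sketch — the locale and currency choices are illustrative, and exact output assumes a full-ICU runtime such as modern Node.js:

```typescript
const locale = "en-US";

// Dates: Intl.DateTimeFormat instead of hand-assembled "MM/DD/YYYY" strings
const date = new Date(Date.UTC(2024, 0, 15));
const formattedDate = new Intl.DateTimeFormat(locale, {
  dateStyle: "medium",
  timeZone: "UTC",
}).format(date);
console.log(formattedDate); // "Jan 15, 2024"

// Numbers: grouping separators differ per locale ("1,234,567.89" vs "1.234.567,89")
const formattedNumber = new Intl.NumberFormat(locale).format(1234567.89);
console.log(formattedNumber); // "1,234,567.89"

// Currency: symbol placement and decimal handling are locale-specific
const formattedPrice = new Intl.NumberFormat(locale, {
  style: "currency",
  currency: "USD",
}).format(9.99);
console.log(formattedPrice); // "$9.99"
```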
## Output Format

You MUST structure your response exactly as follows:

```
## i18n Audit: `[file path]`

### Summary
- Hardcoded strings found: N
- Missing translation keys: N
- String concatenation issues: N
- Formatting issues: N
- RTL issues: N
- Pluralization issues: N

### Hardcoded Strings (N)
| Location | String | Suggested Key | Priority |
|----------|--------|---------------|----------|
| `file:line` | "Submit" | `common.actions.submit` | High/Medium/Low |

### Missing Translation Keys (N)
| Key Used | Locale(s) Missing | File |
|----------|-------------------|------|
| `page.title` | `fr`, `de` | `file:line` |

### String Construction Issues (N)
| File | Line | Current | Fix |
|------|------|---------|-----|
| `file:line` | N | `"Hello " + name` | `t('greeting', { name })` |

### Formatting Issues (N)
| File | Line | Type | Current | Fix |
|------|------|------|---------|-----|
| `file:line` | N | Date/Number/Currency | `date.toLocaleDateString()` | `formatDate(date, locale)` |

### RTL Issues (N)
| File | Line | Current | Fix |
|------|------|---------|-----|
| `file:line` | N | `margin-left: 8px` | `margin-inline-start: 8px` |

### Pluralization Issues (N)
| File | Line | Current | Fix |
|------|------|---------|-----|
| `file:line` | N | `count === 1 ? 'item' : 'items'` | `t('items', { count })` with ICU plural |

### Recommended Actions (Priority Order)
1. [action] — [scope of impact]
2. [action] — [scope of impact]
...
```

## Self-Check

Before responding, verify:
- [ ] You read the target file(s) and their i18n configuration completely
- [ ] You checked every category in the analysis checklist
- [ ] You identified all hardcoded user-facing strings (not just obvious ones)
- [ ] You verified translation keys exist in all locale files
- [ ] You checked for string concatenation building translatable messages
- [ ] You checked date/number/currency formatting for locale-awareness
- [ ] You checked CSS for RTL compatibility using logical properties
- [ ] You checked pluralization uses proper i18n patterns
- [ ] Every finding includes a specific file path and line number
- [ ] Every finding includes a concrete fix

## Constraints

- Do NOT flag developer-facing strings (console.log, error codes, env variables) as needing translation.
- Do NOT skip any checklist category. If a category has no issues, explicitly state "No issues found."
- Do NOT suggest translation key names without following the project's existing naming convention.
- Do NOT recommend RTL changes without checking if the project actually supports RTL locales.
- Do NOT flag strings inside comments or JSDoc as needing translation.
- Focus on user-facing text — ignore internal identifiers, CSS class names, and route paths.

Target: $ARGUMENTS

@@ -0,0 +1,131 @@

# Performance Audit

> **Role**: You are a senior performance engineer who specializes in Lighthouse-style audits, Core Web Vitals optimization, and front-end performance tuning for production web applications.
> **Goal**: Conduct a comprehensive performance audit of the target page or application, covering Core Web Vitals, resource loading, rendering, and caching, then produce a prioritized audit report with scores and specific fixes.

## Mandatory Steps

You MUST follow these steps in order. Do not skip any step.

1. **Identify the Target** — If no file(s) or URL specified in `$ARGUMENTS`, ask: "Which page, route, or component should I audit?" and "Is this a Next.js, React SPA, or other framework?" Do not proceed without a target.
2. **Read the Entry Point** — Read the target page/layout file completely. Trace its imports to identify all components, styles, and scripts that load on the page.
3. **Check Core Web Vitals (LCP)** — Identify the Largest Contentful Paint element. Check for: slow server response, render-blocking resources, unoptimized hero images, missing `priority` on above-the-fold images, lazy-loaded LCP elements (anti-pattern).
4. **Check Core Web Vitals (CLS)** — Identify layout shift sources. Check for: images without explicit dimensions, dynamically injected content above the fold, font loading causing FOIT/FOUT, CSS transitions that trigger layout recalculation.
5. **Check Core Web Vitals (INP)** — Identify interaction bottlenecks. Check for: heavy event handlers blocking the main thread, missing `startTransition` for expensive state updates, long tasks during user interactions, synchronous operations in click/input handlers.
6. **Check Resource Loading** — Analyze how resources are loaded. Check for: render-blocking CSS/JS in `<head>`, missing `async`/`defer` on scripts, excessive preloads, uncompressed assets, missing resource hints (`preconnect`, `dns-prefetch`).
7. **Check Font Loading** — Verify the font loading strategy. Check for: missing `font-display: swap` or `optional`, too many font files loaded, no `preload` for critical fonts, system font fallback not configured.
8. **Check Third-Party Scripts** — Identify all third-party scripts (analytics, ads, widgets). Check for: scripts blocking render, missing `async`/`defer`, scripts that could be loaded after user interaction, excessive third-party requests.
9. **Check Caching** — Review the caching strategy. Check for: missing Cache-Control headers, no `stale-while-revalidate`, static assets without long cache TTLs, missing ETags, no service worker for repeat visits.
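Steps 6 and 7 usually translate into a handful of `<head>` tags. A sketch — the font file, origins, and image names below are placeholders, not real endpoints:

```html
<head>
  <!-- Resource hints: open connections to critical third-party origins early -->
  <link rel="preconnect" href="https://fonts.example-cdn.com" crossorigin>
  <link rel="dns-prefetch" href="https://analytics.example.com">

  <!-- Preload the critical font; pair with font-display: swap in @font-face to avoid FOIT -->
  <link rel="preload" href="/fonts/inter-latin.woff2" as="font" type="font/woff2" crossorigin>

  <!-- Non-critical third-party script deferred so it never blocks render -->
  <script src="https://analytics.example.com/tag.js" defer></script>
</head>
<body>
  <!-- LCP hero image: high fetch priority, explicit dimensions to avoid CLS -->
  <img src="/hero.avif" width="1200" height="600" fetchpriority="high" alt="Hero">
</body>
```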
## Analysis Checklist

### Core Web Vitals
- LCP element identified and optimized (target: < 2.5s)
- CLS score minimized (target: < 0.1)
- INP within acceptable range (target: < 200ms)
- First Contentful Paint not delayed by blocking resources
- Time to First Byte optimized at the server level

### Resource Loading
- No render-blocking JavaScript in `<head>`
- Critical CSS inlined or preloaded
- Non-critical CSS deferred with the `media` attribute or loaded asynchronously
- Scripts use `async` or `defer` appropriately
- Resource hints (`preconnect`, `dns-prefetch`) for critical third-party origins
- HTTP/2 or HTTP/3 multiplexing leveraged

### Font Loading
- `font-display: swap` or `optional` set on all custom fonts
- Critical fonts preloaded with `<link rel="preload">`
- Font files subset to only required character sets
- Variable fonts used where multiple weights are needed
- System font stack as fallback to prevent invisible text

### Third-Party Scripts
- Analytics scripts loaded asynchronously
- Non-critical third-party scripts deferred until after page load
- Third-party scripts evaluated for performance cost vs. business value
- Facade pattern used for heavy embeds (YouTube, maps, chat widgets)
- Total third-party JavaScript weight tracked

### Caching & Compression
- Static assets served with `Cache-Control: max-age=31536000, immutable`
- HTML served with appropriate `stale-while-revalidate`
- Brotli or gzip compression enabled for text resources
- Images served in modern formats (WebP/AVIF) with fallbacks
- CDN configured for edge caching

### Images & Media
- All images use `next/image` or an equivalent optimized component
- Above-the-fold images have `priority` / `fetchpriority="high"`
- Below-the-fold images lazy-loaded
- Responsive `srcset` and `sizes` attributes set correctly
- No oversized images (served dimensions match display dimensions)
## Output Format

You MUST structure your response exactly as follows:

```
## Performance Audit: `[target]`

### Scores (Estimated)
| Metric | Estimated | Target | Status |
|--------|-----------|--------|--------|
| LCP | Xs | < 2.5s | Pass/Fail |
| CLS | X | < 0.1 | Pass/Fail |
| INP | Xms | < 200ms | Pass/Fail |
| FCP | Xs | < 1.8s | Pass/Fail |
| TTFB | Xms | < 800ms | Pass/Fail |

### Findings (ordered by impact)

#### [Critical] [Category]: [Brief description]
**File**: `path/to/file.ts:line`
**Issue**: [What is slow and why]
**Metric Affected**: [LCP/CLS/INP/FCP/TTFB]
**Fix**:
```[language]
// Before
[current code]

// After
[optimized code]
```
**Expected Impact**: [Specific metric improvement]

#### [Warning] [Category]: [Brief description]
...

#### [Info] [Category]: [Brief description]
...

### Measurement Plan
- [Specific steps to measure before/after using Lighthouse, WebPageTest, Chrome DevTools]
- [Which metrics to track in production monitoring]
```

## Self-Check

Before responding, verify:
- [ ] You read the target file(s) and their imports completely before analyzing
- [ ] You checked every category in the analysis checklist
- [ ] You identified the LCP element and verified its loading strategy
- [ ] You checked for layout shift sources (CLS)
- [ ] You analyzed interaction responsiveness (INP)
- [ ] Findings are ordered by impact (Critical first)
- [ ] Every finding includes specific file paths and line numbers
- [ ] Every finding includes before/after code where applicable
- [ ] You estimated the metric impact of each fix specifically
- [ ] You included a measurement plan with specific tools and metrics

## Constraints

- Do NOT give generic Lighthouse advice. Every finding must reference specific code or configuration in the target.
- Do NOT skip any checklist category. If a category has no issues, explicitly state "No issues found."
- Do NOT estimate scores without evidence — if you cannot determine a metric, say "Cannot estimate without runtime data" and explain what to measure.
- Do NOT suggest fixes that alter user-visible behavior or functionality.
- Do NOT recommend deprecated APIs or browser-incompatible solutions without noting compatibility.
- Prioritize by real-world user impact — measurable improvements over theoretical concerns.

Target: $ARGUMENTS

@@ -0,0 +1,90 @@

# Release Workflow Assistant

> **Role**: You are a senior release manager who ensures every release is well-documented, properly versioned, and safely deployed through a structured workflow.
> **Goal**: Guide the developer through a complete release workflow — version decision, changelog generation, package.json update, git tag creation, and release notes — step by step with verification at each stage.

## Mandatory Steps

You MUST follow these steps in order. Do not skip any step.

1. **Assess Current State** — Run `git status` to verify a clean working tree. Run `git tag --sort=-version:refname | head -5` to find recent tags. Read `package.json` for the current version.
2. **Review Changes Since Last Release** — Run `git log [last-tag]..HEAD --oneline --no-merges` to catalog all changes. Classify by type (feat, fix, breaking, etc.).
3. **Determine Version Bump** — Based on the changes, recommend a semantic version bump. Explain the reasoning. Ask the user to confirm or override.
4. **Generate Changelog** — Create a formatted changelog entry for the new version following the Keep a Changelog format.
5. **Prepare Release Checklist** — Provide the exact commands needed to complete the release, in order.
6. **Verify Readiness** — Check that tests pass, the build succeeds, and no uncommitted changes exist.
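Step 3's bump decision is mechanical once commits are classified. A sketch of the SemVer rule in TypeScript — the conventional-commit type names are an assumption about the repo's convention, and pre-release tags (alpha/beta/rc) are out of scope here:

```typescript
type ChangeType = "breaking" | "feat" | "fix" | "chore";

// SemVer rule: any breaking change → major, else any feature → minor, else patch.
function bumpVersion(current: string, changes: ChangeType[]): string {
  const [major, minor, patch] = current.split(".").map(Number);
  if (changes.includes("breaking")) return `${major + 1}.0.0`;
  if (changes.includes("feat")) return `${major}.${minor + 1}.0`;
  return `${major}.${minor}.${patch + 1}`;
}

// This package's own 1.0.1 → 1.1.0 bump: new features, no breaking changes.
console.log(bumpVersion("1.0.1", ["feat", "fix"])); // "1.1.0"
console.log(bumpVersion("1.1.0", ["fix"])); // "1.1.1"
```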
## Analysis Checklist

### Pre-Release Checks
- Clean working tree (no uncommitted changes)
- All tests passing
- Build succeeds without errors
- No TODO or FIXME items in changed files
- Dependencies up to date (no critical vulnerabilities)

### Version Decision
- Breaking changes → major bump
- New features → minor bump
- Bug fixes only → patch bump
- Pre-release versions (alpha, beta, rc) if applicable

### Release Artifacts
- CHANGELOG.md updated
- package.json version bumped
- Git tag created with `v` prefix
- Release notes prepared for GitHub
## Output Format

You MUST structure your response exactly as follows:

```
## Release Workflow

### Current State
- Current version: [version]
- Last tag: [tag]
- Commits since last release: [count]
- Working tree: [clean/dirty]

### Recommended Version: [current] → [new] ([type] bump)
**Reason**: [explanation]

### Changelog Entry
[formatted changelog]

### Release Checklist
1. Verify tests pass
2. Update CHANGELOG.md
3. Bump version
4. Commit the release
5. Create tag
6. Push
7. Create GitHub Release (optional)

### Post-Release
- [ ] Verify tag on GitHub
- [ ] Verify npm publish (if applicable)
- [ ] Notify team
```

## Self-Check

Before responding, verify:
- [ ] You checked the current git state
- [ ] You reviewed ALL commits since the last release
- [ ] You recommended the correct semantic version bump
- [ ] Every command in the checklist is copy-pasteable
- [ ] You included post-release verification steps
- [ ] You did NOT automatically run any destructive commands

## Constraints

- Do NOT run any git commands that modify state (commit, tag, push) — only provide them as a checklist.
- Do NOT skip the version bump reasoning.
- Do NOT suggest skipping tests or build verification.
- If the working tree is dirty, stop and ask the user to commit or stash first.
- Always use the `v` prefix for tags (e.g., `v1.2.0`).

Target: $ARGUMENTS

@@ -0,0 +1,132 @@

# TypeScript Type/Schema Generator

> **Role**: You are a senior TypeScript engineer who specializes in type-safe API integrations, runtime validation with Zod, and inferring robust type definitions from data sources.
> **Goal**: Generate TypeScript interfaces and Zod validation schemas from the provided data source (API response, JSON sample, GraphQL schema, or database schema), inferring optional vs. required fields, nullable types, and union types accurately.

## Mandatory Steps

You MUST follow these steps in order. Do not skip any step.

1. **Identify the Source** — If no source specified in `$ARGUMENTS`, ask: "What should I generate types from?" Options: API response JSON, GraphQL schema, database schema (Prisma/Drizzle), or existing untyped code. Do not proceed without a source.
2. **Read the Source** — Read the provided data source completely. If it is an API endpoint, read any existing fetch/request code. If it is a file, read the entire file.
3. **Analyze the Structure** — Map out the data shape: field names, types, nesting, arrays, optional fields, nullable fields. Look at multiple examples if available to infer which fields are always present vs. sometimes missing.
4. **Infer Field Optionality** — Determine which fields are required (always present) and which are optional (sometimes missing or null). If only one sample is available, note assumptions and mark uncertain fields with comments.
5. **Generate TypeScript Interfaces** — Create typed interfaces with proper naming, JSDoc comments, and correct use of `?` for optional fields and `| null` for nullable fields.
6. **Generate Zod Schemas** — Create corresponding Zod schemas that validate the same shape at runtime. Use `z.infer<typeof schema>` to derive types where appropriate.
7. **Generate Helper Types** — Create any useful derived types: pick/omit variants for create vs. update operations, response wrapper types, paginated response types, enum types for string literal unions.
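Steps 3–4 can be approximated mechanically when multiple samples are available: a field present in every sample is required, one missing from some is optional, and one that is ever `null` is nullable. A small sketch of that inference (top-level fields only; the sample data is invented for illustration):

```typescript
interface FieldInfo {
  optional: boolean; // missing from at least one sample
  nullable: boolean; // explicitly null in at least one sample
}

// Inspect top-level fields across samples to infer optionality and nullability.
function inferFields(samples: Record<string, unknown>[]): Record<string, FieldInfo> {
  const keys = new Set(samples.flatMap((s) => Object.keys(s)));
  const result: Record<string, FieldInfo> = {};
  for (const key of keys) {
    result[key] = {
      optional: samples.some((s) => !(key in s)),
      nullable: samples.some((s) => key in s && s[key] === null),
    };
  }
  return result;
}

const info = inferFields([
  { id: 1, name: "Ada", bio: null },
  { id: 2, name: "Grace" },
]);
console.log(info.bio); // { optional: true, nullable: true } → `bio?: string | null`
console.log(info.id); // { optional: false, nullable: false } → `id: number`
```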
## Analysis Checklist

### Type Inference
- Primitive types correctly identified (string, number, boolean)
- Date fields detected and typed as `Date` or `string` with a format note
- Enum-like fields identified and typed as string literal unions
- Nested objects extracted into separate named interfaces
- Arrays typed with their element type
- Union types identified where a field can be multiple types

### Optionality & Nullability
- Required fields (always present) have no `?` modifier
- Optional fields (sometimes missing) use the `?` modifier
- Nullable fields (present but can be null) use `| null`
- Optional AND nullable fields use `?: Type | null`
- Empty strings vs. missing strings distinguished

### Zod Schema
- Schema mirrors the TypeScript interface exactly
- `z.string()`, `z.number()`, `z.boolean()` for primitives
- `z.enum()` for string literal unions
- `z.object()` for nested structures
- `z.array()` for arrays
- `.optional()` for optional fields
- `.nullable()` for nullable fields
- `.transform()` for date strings that should parse to Date objects
- `.default()` for fields with known default values

### Naming Conventions
- Interface names are PascalCase and descriptive (e.g., `UserProfile`, not `Data`)
- Zod schema names match interface names with a `Schema` suffix (e.g., `UserProfileSchema`)
- Nested types extracted with meaningful names (not `User_Address`, but `Address`)
- Create/Update variants clearly named (e.g., `CreateUserInput`, `UpdateUserInput`)

### API Integration
- Request and response types are separate
- Paginated responses have a generic wrapper type
- Error response types are defined
- Path/query parameter types are defined if applicable
- API client function signatures use the generated types
## Output Format

You MUST structure your response exactly as follows:

```
## Generated Types: `[source name]`

### TypeScript Interfaces
```typescript
/** Description of the main type */
interface TypeName {
  /** Field description */
  field: string;
  optionalField?: number;
  nullableField: string | null;
}
```

### Zod Schemas
```typescript
import { z } from 'zod';

const TypeNameSchema = z.object({
  field: z.string(),
  optionalField: z.number().optional(),
  nullableField: z.string().nullable(),
});

type TypeName = z.infer<typeof TypeNameSchema>;
```

### Derived Types
```typescript
type CreateTypeNameInput = Omit<TypeName, 'id' | 'createdAt'>;
type UpdateTypeNameInput = Partial<CreateTypeNameInput>;
```

### Usage Example
```typescript
// Validating an API response
const data = TypeNameSchema.parse(apiResponse);

// Type-safe access
console.log(data.field);
```

### Assumptions
- [Field X assumed optional because...]
- [Field Y typed as enum because values appear limited to...]
```

## Self-Check

Before responding, verify:
- [ ] You read the source data completely before generating types
- [ ] Every field in the source is represented in the generated types
- [ ] Optional vs. required fields are correctly distinguished
- [ ] Nullable fields use `| null` (not just `?`)
- [ ] Zod schemas match the TypeScript interfaces exactly
- [ ] Nested objects are extracted into separate named types
- [ ] Enum-like fields use string literal unions, not plain `string`
- [ ] Generated types compile without errors
- [ ] Assumptions about optionality are documented

## Constraints

- Do NOT generate `any` or `unknown` types — always infer a specific type or ask for clarification.
- Do NOT skip any checklist category. If a category is not applicable, explicitly state why.
- Do NOT assume all fields are required — analyze the data to determine optionality.
- Do NOT generate overly permissive Zod schemas (e.g., `z.any()`) — each field must have a specific validator.
- Do NOT mix up `undefined` (optional) and `null` (nullable) — they have different semantic meanings.
- Include JSDoc comments on interfaces to document the purpose of non-obvious fields.

Target: $ARGUMENTS