@mikulgohil/ai-kit 1.3.1 → 1.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,58 +1,76 @@
1
- <p align="center">
2
- <h1 align="center">AI Kit</h1>
3
- <p align="center">
4
- Make AI coding assistants actually useful.<br/>
5
- One command. Project-aware AI from the first conversation.
6
- </p>
7
- </p>
8
-
9
- <p align="center">
10
- <a href="https://www.npmjs.com/package/@mikulgohil/ai-kit"><img src="https://img.shields.io/npm/v/@mikulgohil/ai-kit.svg" alt="npm version" /></a>
11
- <a href="https://www.npmjs.com/package/@mikulgohil/ai-kit"><img src="https://img.shields.io/npm/dm/@mikulgohil/ai-kit.svg" alt="npm downloads" /></a>
12
- <a href="https://github.com/mikulgohil/ai-kit/blob/main/LICENSE"><img src="https://img.shields.io/npm/l/@mikulgohil/ai-kit.svg" alt="license" /></a>
13
- <a href="https://mikulgohil.github.io/ai-kit-docs"><img src="https://img.shields.io/badge/docs-ai--kit--docs-blue" alt="documentation" /></a>
14
- </p>
1
+ # AI Kit
15
2
 
16
- ---
3
+ **Make AI coding assistants actually useful.**
4
+ One command. Project-aware AI from the first conversation.
17
5
 
18
- ## The Problem
6
+ [![npm version](https://img.shields.io/npm/v/@mikulgohil/ai-kit.svg)](https://www.npmjs.com/package/@mikulgohil/ai-kit)
7
+ [![npm downloads](https://img.shields.io/npm/dm/@mikulgohil/ai-kit.svg)](https://www.npmjs.com/package/@mikulgohil/ai-kit)
8
+ [![license](https://img.shields.io/npm/l/@mikulgohil/ai-kit.svg)](https://github.com/mikulgohil/ai-kit/blob/main/LICENSE)
19
9
 
20
- AI coding tools are powerful, but they start every project from zero:
10
+ > **[Read the full documentation](https://mikulgohil.github.io/ai-kit-docs)** | [Getting Started](https://mikulgohil.github.io/ai-kit-docs/getting-started) | [CLI Reference](https://mikulgohil.github.io/ai-kit-docs/cli-reference) | [Skills & Commands](https://mikulgohil.github.io/ai-kit-docs/slash-commands) | [Hooks](https://mikulgohil.github.io/ai-kit-docs/hooks) | [Agents](https://mikulgohil.github.io/ai-kit-docs/agents) | [Changelog](https://mikulgohil.github.io/ai-kit-docs/changelog)
21
11
 
22
- - **Wrong patterns** — The AI writes Pages Router code when you use App Router, CSS when you use Tailwind
23
- - **No standards** — No docs, no tests, no JSDoc, no error boundaries unless you explicitly ask every time
24
- - **Repeated context** — You explain the same conventions in every conversation, every day
25
- - **Inconsistent output** — Developer A gets different AI behavior than Developer B on the same project
26
- - **No quality gates** — AI-generated code goes straight to PR without validation
27
- - **Same mistakes** — No system to track what went wrong, so the team keeps hitting the same issues
12
+ ```bash
13
+ npx @mikulgohil/ai-kit init
14
+ ```
28
15
 
29
- **The result:** Teams spend more time fixing AI output than they save generating it.
16
+ ---
30
17
 
31
- ## The Solution
18
+ ## Problems AI Kit Solves
19
+
20
+ Every team using AI coding assistants hits these problems. AI Kit solves each one.
21
+
22
+ | # | Problem | How AI Kit Solves It |
23
+ |---|---------|---------------------|
24
+ | 1 | **AI forgets everything each session** — Every new chat starts from zero. No memory of project rules, patterns, or past decisions. | Generates a persistent `CLAUDE.md` with project rules, conventions, and stack details. The AI knows your project from the first prompt, every time. |
25
+ | 2 | **AI generates wrong framework patterns** — Writes Pages Router code when you use App Router. Uses CSS when you use Tailwind. Creates default exports when your project uses named exports. | Auto-detects your exact stack (framework, router, CMS, styling, TypeScript config) and generates rules specific to your setup. The AI can't use the wrong patterns. |
26
+ | 3 | **Developers write bad prompts** — Vague or incorrect prompts lead to wrong code, wasted time, and rework. Junior developers waste the most time. | Ships **46 pre-built skills** so developers don't write prompts from scratch — just run `/review`, `/security-check`, `/new-component`, `/refactor`, etc. |
27
+ | 4 | **Same mistakes happen repeatedly** — No system to track what went wrong, so the team keeps hitting the same build failures and lint errors. | Generates a **mistakes log** (`docs/mistakes-log.md`) with **auto-capture hook** that logs every build/lint failure automatically. The AI references it to avoid repeating them. |
28
+ | 5 | **Every developer gets different AI behavior** — No consistency in how the team uses AI tools, leading to inconsistent code quality and style. | One `ai-kit init` command generates the same rules for the entire team — everyone's AI follows identical project standards. Commit the generated files to the repo. |
29
+ | 6 | **No quality checks on AI-generated code** — AI output goes straight to PR without type checking, linting, or security review. | Automated **hooks** run formatting, type-checking, linting, and git safety checks in real-time as the AI writes code. **Quality gate** runs everything before merge. |
30
+ | 7 | **AI generates insecure code** — No guardrails for secrets exposure, XSS, SQL injection, or other vulnerabilities. AI doesn't scan its own output. | Built-in **security audit** scans for exposed secrets, OWASP risks, and misconfigurations. **Security review agent** catches issues at development time, not production. |
31
+ | 8 | **AI can't handle multi-file reasoning** — Changes to one component break related files. AI loses context across linked models and shared types. | **10 specialized agents** with focused expertise — planner, code-reviewer, build-resolver, doc-updater, refactor-cleaner — each maintains context for their domain. |
32
+ | 9 | **No decision trail** — Nobody remembers why a technical decision was made 3 months ago. Knowledge walks out the door when developers leave. | Auto-scaffolds a **decisions log** (`docs/decisions-log.md`) to capture what was decided, why, and by whom — fully searchable and traceable. |
33
+ | 10 | **Onboarding takes too long** — New developers spend days understanding the project and its AI setup before they can contribute. | AI Kit generates developer guides and project-aware configurations — new team members get productive AI assistance from day one with zero manual setup. |
34
+ | 11 | **Context gets repeated every conversation** — You explain the same conventions in every session: import order, naming, component structure, testing patterns. | All conventions are encoded in the generated rules file. The AI reads them automatically at session start. You explain once, it remembers forever. |
35
+ | 12 | **AI doesn't improve over time** — The AI makes the same wrong suggestions regardless of past feedback, team patterns, or previous failures. | The system **learns as you use it** — mistakes log, decisions log, and updated rules mean the AI gets smarter with every session. Mistakes auto-capture builds the log organically. |
36
+ | 13 | **Complex tasks need multiple manual AI passes** — Developers manually coordinate review + test + docs updates across separate conversations. | **Multi-agent orchestration** runs multiple specialized agents in parallel — review, test, document, and refactor in one command with `/orchestrate`. |
37
+ | 14 | **Switching AI tools means starting over** — Moving from Cursor to Claude Code (or vice versa) loses all configuration and project context. | Generates configs for **5+ AI tools** (Claude Code, Cursor, Windsurf, Aider, Cline) from a single source — switch tools without losing project knowledge. |
38
+ | 15 | **AI creates components without tests, docs, or types** — Every AI-generated file needs manual follow-up to add what was missed. | Skills like `/new-component` enforce a structured workflow: asks 10 questions, reads existing patterns, generates component + types + tests + docs together. |
39
+ | 16 | **No visibility into AI usage costs** — Management has no idea how many tokens the team is consuming or which projects cost the most. | Built-in **token tracking** provides daily/weekly/monthly usage summaries, per-project cost breakdown, budget alerts, and ROI estimates. |
40
+ | 17 | **Cursor copies entire modules instead of targeted edits** — AI bloats the repo with unnecessary file duplication, especially in CMS and monorepo setups. | Generated rules include explicit instructions for editing patterns — update in place, respect package boundaries, follow existing structure. Rules prevent over-generation. |
41
+ | 18 | **No component-level AI awareness** — AI doesn't know which components have tests, stories, Sitecore integration, or documentation gaps. | **Component scanner** discovers all React components and generates `.ai.md` docs with health scores, props tables, Sitecore field mappings, and dependency trees. |
42
+ | 19 | **Setup is manual and error-prone** — Configuring AI assistants requires deep knowledge of each tool's config format. Most teams skip it entirely. | **Zero manual configuration** — one command auto-detects your stack and generates everything. Update with one command when the project evolves. |
43
+ | 20 | **AI hallucinates framework-specific APIs** — Generates incorrect hook usage, wrong data fetching patterns, or non-existent component APIs for your framework version. | Stack-specific template fragments include exact API patterns for your detected framework version (e.g., Next.js 15 App Router patterns, Sitecore Content SDK v2 patterns). |
44
+
45
+ ---
46
+
47
+ ## Quick Start
32
48
 
33
49
  ```bash
50
+ # Install and configure in any project (30 seconds)
34
51
  npx @mikulgohil/ai-kit init
35
- ```
36
52
 
37
- One command. 30 seconds. Your AI assistant goes from generic to project-aware.
53
+ # Check your project health
54
+ npx @mikulgohil/ai-kit health
38
55
 
39
- AI Kit scans your project, detects your tech stack, and generates tailored rules, skills, agents, hooks, and guides so every AI interaction follows your standards, from the first conversation.
56
+ # Open in Claude Code or Cursor — AI now knows your project
57
+ ```
40
58
 
41
59
  ---
42
60
 
43
61
  ## What You Get
44
62
 
45
63
  | Generated | What It Does |
46
- |-----------|-------------|
47
- | **CLAUDE.md** | Project-aware rules for Claude Code — your stack, conventions, and patterns |
48
- | **.cursorrules** + `.cursor/rules/*.mdc` | Same rules formatted for Cursor AI with scoped file matching |
49
- | **39 Skills** | Auto-discovered workflows — `/review`, `/new-component`, `/security-check`, `/pre-pr`, and 35 more |
50
- | **8 Agents** | Specialized AI assistants for delegation — planner, reviewer, security, E2E, build-resolver, and more |
51
- | **3 Context Modes** | Switch between dev (build fast), review (check quality), and research (understand code) |
52
- | **Automated Hooks** | Auto-format, TypeScript checks, console.log warnings, mistakes auto-capture, git safety |
53
- | **6 Guides** | Developer playbooks for prompts, tokens, hooks, agents, Figma workflow |
54
- | **Doc Scaffolds** | Mistakes log, decisions log, time log — structured knowledge tracking |
55
- | **Component Docs** | Auto-generated `.ai.md` files per component with health scores and Sitecore integration |
64
+ |---|---|
65
+ | `CLAUDE.md` | Project-aware rules for Claude Code — your stack, conventions, and patterns |
66
+ | `.cursorrules` + `.cursor/rules/*.mdc` | Same rules formatted for Cursor AI with scoped file matching |
67
+ | 46 Skills | Auto-discovered workflows — `/review`, `/new-component`, `/security-check`, `/pre-pr`, and 42 more |
68
+ | 10 Agents | Specialized AI assistants — planner, reviewer, security, E2E, build-resolver, ci-debugger, and more |
69
+ | 3 Context Modes | Switch between dev (build fast), review (check quality), and research (understand code) |
70
+ | Automated Hooks | Auto-format, TypeScript checks, console.log warnings, mistakes auto-capture, git safety |
71
+ | 6 Guides | Developer playbooks for prompts, tokens, hooks, agents, Figma workflow |
72
+ | Doc Scaffolds | Mistakes log, decisions log, time log — structured knowledge tracking |
73
+ | Component Docs | Auto-generated `.ai.md` per component with health scores and Sitecore integration |
56
74
 
57
75
  ---
58
76
 
@@ -60,7 +78,7 @@ AI Kit scans your project, detects your tech stack, and generates tailored rules
60
78
 
61
79
  ### Auto Stack Detection
62
80
 
63
- Scans your `package.json`, config files, and directory structure to detect your exact stack — then generates rules tailored to it.
81
+ Scans your `package.json`, config files, and directory structure to detect your exact stack:
64
82
 
65
83
  | What It Detects | What the AI Learns |
66
84
  |---|---|
@@ -71,23 +89,21 @@ Scans your `package.json`, config files, and directory structure to detect your
71
89
  | Turborepo monorepo | Workspace conventions, cross-package imports |
72
90
  | Figma + design tokens | Token mapping, design-to-code workflow |
73
91
 
74
- ### 39 Pre-Built Skills
92
+ ### 46 Pre-Built Skills
75
93
 
76
- Skills are structured AI workflows that get applied automatically — you don't type a command, the AI recognizes what you're doing and loads the right skill.
94
+ Structured AI workflows applied automatically — the AI recognizes what you're doing and loads the right skill:
77
95
 
78
96
  | Category | Skills |
79
97
  |---|---|
80
- | **Getting Started** | `prompt-help`, `understand` |
81
- | **Building** | `new-component`, `new-page`, `api-route`, `error-boundary`, `extract-hook`, `figma-to-code`, `design-tokens`, `schema-gen`, `storybook-gen` |
82
- | **Quality & Review** | `review`, `pre-pr`, `test`, `accessibility-audit`, `security-check`, `responsive-check`, `type-fix`, `perf-audit`, `bundle-check`, `i18n-check` |
83
- | **Maintenance** | `fix-bug`, `refactor`, `optimize`, `migrate`, `dep-check`, `sitecore-debug` |
84
- | **Workflow** | `document`, `commit-msg`, `env-setup`, `changelog`, `release` |
85
- | **Session** | `save-session`, `resume-session`, `checkpoint` |
86
- | **Orchestration** | `orchestrate`, `quality-gate`, `harness-audit` |
98
+ | Getting Started | `prompt-help`, `understand` |
99
+ | Building | `new-component`, `new-page`, `api-route`, `error-boundary`, `extract-hook`, `figma-to-code`, `design-tokens`, `schema-gen`, `storybook-gen`, `scaffold-spec` |
100
+ | Quality & Review | `review`, `pre-pr`, `test`, `accessibility-audit`, `security-check`, `responsive-check`, `type-fix`, `perf-audit`, `bundle-check`, `i18n-check`, `test-gaps` |
101
+ | Maintenance | `fix-bug`, `refactor`, `optimize`, `migrate`, `dep-check`, `sitecore-debug`, `upgrade` |
102
+ | Workflow | `document`, `commit-msg`, `env-setup`, `changelog`, `release`, `pr-description`, `standup`, `learn-from-pr`, `release-notes` |
103
+ | Session | `save-session`, `resume-session`, `checkpoint` |
104
+ | Orchestration | `orchestrate`, `quality-gate`, `harness-audit` |
87
105
 
88
- ### 8 Specialized Agents
89
-
90
- Agents handle delegated tasks with focused expertise:
106
+ ### 10 Specialized Agents
91
107
 
92
108
  | Agent | Purpose | Conditional |
93
109
  |---|---|---|
@@ -97,28 +113,27 @@ Agents handle delegated tasks with focused expertise:
97
113
  | `build-resolver` | Diagnose and fix build/type errors | No |
98
114
  | `doc-updater` | Keep documentation in sync with code | No |
99
115
  | `refactor-cleaner` | Find and remove dead code | No |
100
- | `e2e-runner` | Playwright tests with Page Object Model | Yes — only if Playwright installed |
101
- | `sitecore-specialist` | Sitecore XM Cloud patterns and debugging | Yes only if Sitecore detected |
116
+ | `tdd-guide` | Test-driven development guidance and workflow | No |
117
+ | `ci-debugger` | Diagnose CI/CD failures from pipeline logs and suggest fixes | No |
118
+ | `e2e-runner` | Playwright tests with Page Object Model | Yes — Playwright only |
119
+ | `sitecore-specialist` | Sitecore XM Cloud patterns and debugging | Yes — Sitecore only |
102
120
 
103
121
  ### Automated Quality Hooks
104
122
 
105
- Hooks run automatically as you code. Choose a profile during init:
106
-
107
- | Profile | What Runs |
123
+ | Profile | What Runs Automatically |
108
124
  |---|---|
109
- | **Minimal** | Auto-format + git push safety |
110
- | **Standard** | + TypeScript type-check + console.log warnings + mistakes auto-capture |
111
- | **Strict** | + ESLint check + stop-time console.log audit |
125
+ | Minimal | Auto-format + git push safety |
126
+ | Standard | + TypeScript type-check + console.log warnings + mistakes auto-capture + bundle impact warning |
127
+ | Strict | + ESLint check + stop-time console.log audit + pre-commit AI review |
112
128
 
113
- **Mistakes auto-capture** — When a build or lint command fails, the hook automatically logs the error to `docs/mistakes-log.md` with a timestamp and error preview. Your mistakes log builds itself over time.
129
+ **Mistakes auto-capture** — When a build/lint command fails, the hook logs the error to `docs/mistakes-log.md` with timestamp and error preview. The mistakes log builds itself over time.
114
130
 
115
131
  ### Component Scanner & Docs
116
132
 
117
- Discovers all React components and generates `.ai.md` documentation files with:
118
-
133
+ Discovers all React components and generates `.ai.md` documentation:
119
134
  - Props table with types and required flags
120
- - Health score (0-100) based on tests, stories, docs, and Sitecore integration
121
- - Sitecore integration details (datasource fields, rendering params, placeholders, GraphQL queries)
135
+ - Health score (0-100) based on tests, stories, docs, Sitecore integration
136
+ - Sitecore details: datasource fields, rendering params, placeholders, GraphQL queries
122
137
  - Smart merge — updates auto-generated sections while preserving manual edits
123
138
 
124
139
  ### Project Health Dashboard
@@ -127,15 +142,7 @@ Discovers all React components and generates `.ai.md` documentation files with:
127
142
  npx @mikulgohil/ai-kit health
128
143
  ```
129
144
 
130
- One-glance view of your project's AI setup health across 5 sections: setup integrity, security, stack detection, tools/MCP status, and documentation. Outputs an A-F grade with actionable recommendations.
131
-
132
- ### Security Audit
133
-
134
- ```bash
135
- npx @mikulgohil/ai-kit audit
136
- ```
137
-
138
- Scans for secrets in CLAUDE.md, MCP config security, .env gitignore status, hook validity, agent configuration, and more. Outputs an A-F health grade.
145
+ One-glance view across 5 sections: setup integrity, security, stack detection, tools/MCP, and documentation. Outputs an A-F grade with actionable recommendations.
139
146
 
140
147
  ### Token Tracking & Cost Estimates
141
148
 
@@ -143,24 +150,17 @@ Scans for secrets in CLAUDE.md, MCP config security, .env gitignore status, hook
143
150
  npx @mikulgohil/ai-kit tokens
144
151
  ```
145
152
 
146
- - Period summaries (today, this week, this month)
147
- - Budget progress with alerts at 50%, 75%, 90%
148
- - Per-project cost breakdown
149
- - Week-over-week trends
150
- - Model recommendations (Sonnet vs Opus optimization)
151
- - ROI estimate (time saved vs cost)
153
+ Period summaries, budget progress with alerts, per-project cost breakdown, week-over-week trends, model recommendations (Sonnet vs Opus), and ROI estimates.
152
154
 
153
155
  ### Multi-Tool Support
154
156
 
155
- Generate configs once, use across 5+ AI tools:
156
-
157
157
  | Tool | Output |
158
158
  |---|---|
159
- | **Claude Code** | `CLAUDE.md` + skills + agents + contexts + hooks |
160
- | **Cursor** | `.cursorrules` + `.cursor/rules/*.mdc` + skills |
161
- | **Windsurf** | `.windsurfrules` (via `ai-kit export`) |
162
- | **Aider** | `.aider.conf.yml` (via `ai-kit export`) |
163
- | **Cline** | `.clinerules` (via `ai-kit export`) |
159
+ | Claude Code | `CLAUDE.md` + skills + agents + contexts + hooks |
160
+ | Cursor | `.cursorrules` + `.cursor/rules/*.mdc` + skills |
161
+ | Windsurf | `.windsurfrules` (via `ai-kit export`) |
162
+ | Aider | `.aider.conf.yml` (via `ai-kit export`) |
163
+ | Cline | `.clinerules` (via `ai-kit export`) |
164
164
 
165
165
  ---
166
166
 
@@ -169,7 +169,7 @@ Generate configs once, use across 5+ AI tools:
169
169
  | Command | Description |
170
170
  |---|---|
171
171
  | `ai-kit init [path]` | Scan project and generate all configs |
172
- | `ai-kit update [path]` | Re-scan and update existing generated files |
172
+ | `ai-kit update [path]` | Re-scan and update generated files (safe merge) |
173
173
  | `ai-kit reset [path]` | Remove all AI Kit generated files |
174
174
  | `ai-kit health [path]` | One-glance project health dashboard |
175
175
  | `ai-kit audit [path]` | Security and configuration health audit |
@@ -178,8 +178,9 @@ Generate configs once, use across 5+ AI tools:
178
178
  | `ai-kit tokens` | Token usage summary and cost estimates |
179
179
  | `ai-kit stats [path]` | Project complexity metrics |
180
180
  | `ai-kit export [path]` | Export rules to Windsurf, Aider, Cline |
181
-
182
- `path` defaults to the current directory if omitted.
181
+ | `ai-kit patterns [path]` | Generate pattern library from recurring code patterns |
182
+ | `ai-kit dead-code [path]` | Find unused components and dead code |
183
+ | `ai-kit drift [path]` | Detect drift between code and .ai.md docs |
183
184
 
184
185
  ---
185
186
 
@@ -187,97 +188,80 @@ Generate configs once, use across 5+ AI tools:
187
188
 
188
189
  | Category | Technologies |
189
190
  |---|---|
190
- | **Frameworks** | Next.js (App Router, Pages Router, Hybrid), React |
191
- | **CMS** | Sitecore XM Cloud (Content SDK v2), Sitecore JSS |
192
- | **Styling** | Tailwind CSS (v3 + v4), SCSS, CSS Modules, styled-components |
193
- | **Language** | TypeScript (with strict mode detection) |
194
- | **Formatters** | Prettier, Biome (auto-detected for hooks) |
195
- | **Monorepos** | Turborepo, Nx, Lerna, pnpm workspaces |
196
- | **Design** | Figma MCP, Figma Code CLI, design tokens, visual tests |
197
- | **Testing** | Playwright, Storybook, axe-core |
198
- | **Quality** | ESLint, Snyk, Knip, @next/bundle-analyzer |
199
- | **Package Managers** | npm, pnpm, yarn, bun |
191
+ | Frameworks | Next.js (App Router, Pages Router, Hybrid), React |
192
+ | CMS | Sitecore XM Cloud (Content SDK v2), Sitecore JSS |
193
+ | Styling | Tailwind CSS (v3 + v4), SCSS, CSS Modules, styled-components |
194
+ | Language | TypeScript (with strict mode detection) |
195
+ | Formatters | Prettier, Biome (auto-detected for hooks) |
196
+ | Monorepos | Turborepo, Nx, Lerna, pnpm workspaces |
197
+ | Design | Figma MCP, Figma Code CLI, design tokens, visual tests |
198
+ | Testing | Playwright, Storybook, axe-core |
199
+ | Quality | ESLint, Snyk, Knip, @next/bundle-analyzer |
200
+ | Package Managers | npm, pnpm, yarn, bun |
200
201
 
201
202
  ---
202
203
 
203
- ## Quick Start
204
-
205
- ```bash
206
- # 1. Run in any project directory
207
- npx @mikulgohil/ai-kit init
208
-
209
- # 2. Follow the interactive prompts (30 seconds)
210
-
211
- # 3. Check your project health
212
- npx @mikulgohil/ai-kit health
213
-
214
- # 4. Open in Claude Code or Cursor — AI now knows your project
215
- ```
216
-
217
- ### Updating
218
-
219
- When your project evolves (new dependencies, framework upgrades):
220
-
221
- ```bash
222
- npx @mikulgohil/ai-kit update
223
- ```
204
+ ## The Impact
224
205
 
225
- Only content between `AI-KIT:START/END` markers is refreshed. Your custom rules and manual edits are preserved.
206
+ | Metric | Before AI Kit | After AI Kit |
207
+ |---|---|---|
208
+ | Context setup per conversation | 5-10 min | 0 min (auto-loaded) |
209
+ | Code review cycles per PR | 2-4 rounds | 1-2 rounds |
210
+ | Component creation time | 30-60 min | 10-15 min |
211
+ | New developer onboarding | 1-2 weeks | 2-3 days |
212
+ | Security issues caught | At PR review or production | At development time |
213
+ | Knowledge retention | Lost when developers leave | Logged in decisions and mistakes |
214
+ | AI tool switching cost | Start over from scratch | Zero — same rules across 5+ tools |
215
+ | AI-generated code quality | Inconsistent, needs manual fixing | Follows project standards automatically |
226
216
 
227
217
  ---
228
218
 
229
219
  ## Who Is This For?
230
220
 
231
- **Individual developers** — Stop re-explaining context. Let AI Kit teach the AI your project once. Every conversation starts informed.
221
+ **Individual developers** — Stop re-explaining context. The AI knows your project from the first conversation.
232
222
 
233
- **Tech leads** — Enforce coding standards through AI tools instead of code review comments. Standards are followed automatically, not policed manually.
223
+ **Tech leads** — Enforce coding standards through AI tools instead of code review comments.
234
224
 
235
- **Teams** — Same AI experience across every developer and every project. New hires get the same AI context as senior engineers.
225
+ **Teams** — Same AI experience across every developer. New hires get the same AI context as senior engineers.
236
226
 
237
227
  **Enterprise** — Consistent AI governance across projects. Security audit, token tracking, and quality hooks provide visibility and control.
238
228
 
239
229
  ---
240
230
 
241
- ## The Impact
231
+ ## Updating
242
232
 
243
- | Metric | Before AI Kit | After AI Kit |
244
- |---|---|---|
245
- | Context setup per conversation | 5-10 min | 0 min (auto-loaded) |
246
- | Code review cycles per PR | 2-4 rounds | 1-2 rounds |
247
- | Component creation time | 30-60 min | 10-15 min |
248
- | New developer onboarding | 1-2 weeks | 2-3 days |
249
- | Security issues caught | At PR review or production | At development time |
250
- | Knowledge retention | Lost when developers leave | Logged in decisions, mistakes, and time logs |
251
- | AI tool switching cost | Start over from scratch | Zero — same rules across 5+ tools |
233
+ When your project evolves:
234
+
235
+ ```bash
236
+ npx @mikulgohil/ai-kit update
237
+ ```
238
+
239
+ Only content between `AI-KIT:START/END` markers is refreshed. Your custom rules and manual edits are preserved.
252
240
 
253
241
  ---
254
242
 
255
243
  ## Documentation
256
244
 
257
- Full documentation is available at **[mikulgohil.github.io/ai-kit-docs](https://mikulgohil.github.io/ai-kit-docs)**
245
+ **[mikulgohil.github.io/ai-kit-docs](https://mikulgohil.github.io/ai-kit-docs)**
258
246
 
259
247
  | Page | What You'll Learn |
260
248
  |---|---|
261
249
  | [Getting Started](https://mikulgohil.github.io/ai-kit-docs/getting-started) | Step-by-step setup walkthrough |
262
- | [CLI Reference](https://mikulgohil.github.io/ai-kit-docs/cli-reference) | All 10 commands with examples |
263
- | [Skills & Commands](https://mikulgohil.github.io/ai-kit-docs/slash-commands) | All 39 skills with usage guides |
264
- | [Hooks](https://mikulgohil.github.io/ai-kit-docs/hooks) | Hook profiles and configuration |
265
- | [Agents](https://mikulgohil.github.io/ai-kit-docs/agents) | 8 specialized agents |
250
+ | [CLI Reference](https://mikulgohil.github.io/ai-kit-docs/cli-reference) | All 13 commands with examples |
251
+ | [Skills & Commands](https://mikulgohil.github.io/ai-kit-docs/slash-commands) | All 46 skills with usage guides |
266
252
  | [What Gets Generated](https://mikulgohil.github.io/ai-kit-docs/what-gets-generated) | Detailed breakdown of every generated file |
253
+ | [Hooks](https://mikulgohil.github.io/ai-kit-docs/hooks) | Hook profiles, mistakes auto-capture |
254
+ | [Agents](https://mikulgohil.github.io/ai-kit-docs/agents) | 10 specialized agents |
267
255
  | [Changelog](https://mikulgohil.github.io/ai-kit-docs/changelog) | Version history and release notes |
268
256
 
269
257
  ---
270
258
 
271
259
  ## Requirements
272
260
 
273
- - **Node.js 18+**
274
- - **A project with `package.json`**
275
- - **Claude Code or Cursor** (at least one AI tool installed)
276
-
277
- ## Repository
278
-
279
- [github.com/mikulgohil/ai-kit](https://github.com/mikulgohil/ai-kit)
261
+ - Node.js 18+
262
+ - A project with `package.json`
263
+ - Claude Code or Cursor (at least one AI tool)
280
264
 
281
265
  ## License
282
266
 
283
- MIT
267
+ MIT — [github.com/mikulgohil/ai-kit](https://github.com/mikulgohil/ai-kit)
@@ -0,0 +1,136 @@
1
+ ---
2
+ name: ci-debugger
3
+ description: CI/CD failure debugger — analyzes pipeline logs, identifies root causes, and suggests fixes for GitHub Actions, Vercel, and Netlify failures.
4
+ tools: Read, Edit, Glob, Grep, Bash
5
+ ---
6
+
7
+ # CI Failure Debugger
8
+
9
+ You are a CI/CD failure specialist. Analyze pipeline logs, identify root causes, and apply targeted fixes for GitHub Actions, Vercel, and Netlify deployments.
10
+
11
+ ## Process
12
+
13
+ ### 1. Parse CI Log
14
+
15
+ - Obtain the full CI log (from terminal output, log file, or CI platform URL)
16
+ - Identify the **error type** from the log output:
17
+ - **Build failure** — compilation, bundling, or asset generation errors
18
+ - **Test failure** — unit, integration, or e2e test assertions
19
+ - **Lint failure** — ESLint, Prettier, or type-check violations
20
+ - **Deploy failure** — deployment target rejections, permission errors, or resource limits
21
+ - **Timeout** — job exceeded time limit, hanging processes, or infinite loops
22
+ - **Infrastructure** — runner unavailable, Docker issues, or service container failures
23
+ - Extract the **first error** in the log — later errors are often cascading symptoms
24
+ - Note the **exit code**, **failed step name**, and **runner environment** (OS, Node version, package manager)
25
+
26
+ ### 2. Diagnose by Platform
27
+
28
+ #### GitHub Actions
29
+
30
+ - Check workflow YAML syntax: indentation, `uses` action versions, `with` parameters
31
+ - Verify `runs-on` runner availability (e.g., `ubuntu-latest` vs pinned versions)
32
+ - Check `actions/checkout` depth — shallow clones can break git-dependent tools
33
+ - Inspect secret and environment variable availability per job/environment
34
+ - Review `if` conditionals and job dependency chains (`needs`)
35
+ - Check for action version deprecations (`set-output`, `save-state`, Node 16 actions)
36
+ - Examine concurrency settings — jobs may be cancelled by newer runs
37
+ - Review caching: `actions/cache` key mismatches, cache size limits (10 GB)
38
+ - Check permissions: `GITHUB_TOKEN` scope, `permissions` block in workflow
39
+
40
+ #### Vercel
41
+
42
+ - Check build command and output directory in `vercel.json` or project settings
43
+ - Verify framework detection — wrong framework = wrong build pipeline
44
+ - Review environment variables: check if they are set for Preview vs Production
45
+ - Check function size limits (50 MB compressed) and serverless function timeout
46
+ - Inspect `vercel build` output for missing dependencies or peer dep warnings
47
+ - Edge Runtime errors: verify API routes use supported Node.js APIs
48
+ - Check `maxDuration` for serverless functions (default varies by plan)
49
+ - Review redirects/rewrites — syntax errors cause silent deployment failures
50
+
#### Netlify

- Check `netlify.toml` for build command, publish directory, and plugin configuration
- Verify build image — check Node.js version via `NODE_VERSION` env var or `.node-version`
- Review Netlify Functions directory and bundling (esbuild vs zip-it-and-ship-it)
- Check deploy context settings (production, deploy-preview, branch-deploy)
- Inspect plugin errors — community plugins can fail silently or break builds
- Review redirect rules — `_redirects` file vs `netlify.toml` conflicts
- Check bandwidth and build minute limits on the current plan

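The corresponding knobs live in `netlify.toml`; the values below are placeholders, not a recommended setup:

```toml
[build]
  command = "npm run build"
  publish = "dist"

[build.environment]
  NODE_VERSION = "20"   # pins the Node version used by the build image

# Deploy contexts can override the build per environment.
[context.deploy-preview]
  command = "npm run build -- --mode preview"
```
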
#### Generic CI (Jenkins, CircleCI, GitLab CI, etc.)

- Check pipeline configuration syntax and stage ordering
- Verify Docker image availability and version compatibility
- Review artifact passing between stages
- Check for resource constraints (memory, disk, CPU)

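For the resource-constraint case, a cheap diagnostic is to print runner resources at the top of the pipeline so OOM and disk failures are readable from the log alone (a sketch; Linux runners assumed):

```shell
echo "--- runner resources ---"
nproc                   # CPU count
free -h || true         # memory; absent on some minimal images, hence || true
df -h . | tail -1       # free disk on the workspace volume
```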
### 3. Common CI Failures

#### Node.js Version Mismatch

- **Symptom**: `SyntaxError: Unexpected token`, unsupported API calls, or engine incompatibility
- **Check**: Compare CI runner Node version with `.nvmrc`, `.node-version`, `package.json` `engines` field
- **Fix**: Pin Node version in CI config using `actions/setup-node`, `NODE_VERSION` env, or engine-strict

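In GitHub Actions, reading the version from the repo's own pin file keeps CI and local development aligned (assumes an `.nvmrc` is committed):

```yaml
- uses: actions/setup-node@v4
  with:
    node-version-file: ".nvmrc"   # single source of truth for the Node version
    cache: "npm"                  # also caches ~/.npm keyed on the lockfile
```
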
#### Missing Environment Variables

- **Symptom**: `undefined` values at build time, API connection failures, empty config
- **Check**: Compare required env vars (from `.env.example` or docs) against CI platform secrets
- **Fix**: Add missing secrets in CI platform settings; verify they are exposed to the correct step/environment

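A small preflight script turns silent `undefined` values into one clear failure at the start of the build. A sketch for a Node project; the variable names are placeholders:

```javascript
// check-env.js: run before the build (e.g. from a "prebuild" npm script).
// REQUIRED is an assumed list; replace it with your project's variables.
const REQUIRED = ["API_URL", "DATABASE_URL"];

function missingEnvVars(env = process.env, required = REQUIRED) {
  // Treat empty or whitespace-only values as missing too.
  return required.filter((name) => !env[name] || String(env[name]).trim() === "");
}

const missing = missingEnvVars();
if (missing.length > 0) {
  // Report every missing variable at once instead of failing one at a time.
  console.error(`Missing required env vars: ${missing.join(", ")}`);
  // In a real build script, fail fast here: process.exit(1)
}
```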
#### Dependency Conflicts

- **Symptom**: `ERESOLVE`, peer dependency warnings, lockfile out of date
- **Check**: Compare `package-lock.json` / `pnpm-lock.yaml` / `yarn.lock` with `package.json`
- **Fix**: Regenerate lockfile locally, or pin conflicting dependency versions; avoid `--legacy-peer-deps` in CI unless truly necessary

#### Out of Memory (OOM)

- **Symptom**: `FATAL ERROR: Reached heap limit`, `JavaScript heap out of memory`, `Killed` with exit code 137
- **Check**: Build process memory usage, number of parallel processes, large asset processing
- **Fix**: Increase `NODE_OPTIONS=--max-old-space-size=4096`, reduce parallelism, split large builds, or use a larger runner

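The heap bump is a one-line environment change in the failing step. The 4096 MB value is an assumption; size it below the runner's available RAM:

```shell
# NODE_OPTIONS applies to every Node process spawned from this shell,
# including npm scripts and their child processes.
export NODE_OPTIONS="--max-old-space-size=4096"
echo "building with NODE_OPTIONS=$NODE_OPTIONS"
# npm run build
```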
#### Timeout

- **Symptom**: Job cancelled after time limit, `ETIMEDOUT`, hanging step with no output
- **Check**: Network calls to external services, long-running test suites, missing test cleanup, deadlocks
- **Fix**: Add timeout limits to individual steps, mock external services, parallelize test suites, check for hanging processes

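In GitHub Actions, `timeout-minutes` works at both the job and step level; a tighter per-step limit pinpoints which step hangs (the values here are illustrative):

```yaml
jobs:
  test:
    runs-on: ubuntu-22.04
    timeout-minutes: 20              # hard cap for the whole job
    steps:
      - name: Integration tests
        run: npm run test:integration
        timeout-minutes: 10          # surfaces the hanging step by name
```
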
#### Cache Invalidation

- **Symptom**: Stale dependencies, "works locally but fails in CI", intermittent build failures
- **Check**: Cache key strategy — does it include lockfile hash? Is the cache corrupted?
- **Fix**: Bust the cache by changing the key prefix, verify restore-keys fallback chain, clear platform cache manually if needed

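A key that hashes the lockfile invalidates automatically when dependencies change, while `restore-keys` still allows partial reuse. An `actions/cache` sketch for npm:

```yaml
- uses: actions/cache@v4
  with:
    path: ~/.npm
    # Exact hit only while the lockfile is unchanged.
    key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
    # Otherwise fall back to the newest cache for this OS.
    restore-keys: |
      npm-${{ runner.os }}-
```
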
#### Permission and Authentication Errors

- **Symptom**: `403 Forbidden`, `401 Unauthorized`, `Permission denied`, deploy token expired
- **Check**: Token expiration dates, repository access scopes, OIDC configuration
- **Fix**: Rotate tokens/secrets, verify `permissions` block in GitHub Actions, check deploy key read/write access

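In GitHub Actions, the `permissions` block sets the `GITHUB_TOKEN` scopes explicitly; defaulting to read and granting write per need avoids both 403s and over-broad tokens (the scopes shown are examples):

```yaml
permissions:
  contents: read        # default everything to read-only
  deployments: write    # grant write only where a step actually needs it
```
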
#### Lockfile Drift

- **Symptom**: `The lockfile is not up to date`, `--frozen-lockfile` failures
- **Check**: `package.json` was changed without re-running install, or local and CI use different package manager versions
- **Fix**: Run `npm ci` / `pnpm install --frozen-lockfile` locally to verify, commit the updated lockfile

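Using the CI-oriented install command in the pipeline itself makes drift fail loudly instead of silently regenerating the lockfile (npm shown; pnpm's `--frozen-lockfile` above is the equivalent):

```yaml
- name: Install dependencies
  run: npm ci   # errors out if package-lock.json disagrees with package.json
```
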
### 4. Apply Fix

- Identify the **root cause**, not just the failing line
- Make the minimal targeted change to resolve the failure
- If the fix is in CI config (workflow YAML, `vercel.json`, `netlify.toml`), validate syntax before committing
- If the fix is in application code, verify it passes locally first
- Suggest re-running the pipeline to confirm the fix
- If the failure is flaky (intermittent), identify the non-deterministic source and add resilience (retries, mocks, deterministic seeds)

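For JSON configs like `vercel.json`, the syntax check is a single stdlib command; the file below is a throwaway demo created only for illustration (workflow YAML needs a YAML-aware tool such as yamllint instead):

```shell
# Write a throwaway config to validate (stand-in for a real vercel.json).
cat > /tmp/vercel-demo.json <<'EOF'
{
  "buildCommand": "npm run build",
  "outputDirectory": "dist"
}
EOF

# json.tool exits non-zero on a syntax error, failing this line.
python3 -m json.tool /tmp/vercel-demo.json > /dev/null && echo "vercel-demo.json: valid"
```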
## Rules

- Always read the **full log output** before diagnosing — do not jump to conclusions from partial output
- Fix the **root cause**, not the symptom — suppressing errors or adding retries without understanding why is not a fix
- Verify the fix by suggesting a pipeline re-run — never assume a fix works without validation
- Check for **cascading failures** — the first error often causes many others; fix the first one and re-evaluate
- Do not hardcode secrets or tokens in workflow files — always use platform secret management
- When modifying CI config, preserve existing caching and optimization strategies unless they are the cause
- Two-attempt rule: if an approach fails twice, try a different strategy
- Document the failure and fix so the team can learn from it