@iceinvein/agent-skills 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json ADDED
@@ -0,0 +1,27 @@
+ {
+ "name": "@iceinvein/agent-skills",
+ "version": "0.1.0",
+ "description": "Install agent skills into AI coding tools",
+ "author": "iceinvein",
+ "license": "MIT",
+ "type": "module",
+ "bin": {
+ "agent-skills": "./dist/cli/index.js"
+ },
+ "files": [
+ "dist",
+ "skills"
+ ],
+ "scripts": {
+ "build": "bun build src/cli/index.ts --outdir dist/cli --target node",
+ "test": "bun test",
+ "dev": "bun run src/cli/index.ts"
+ },
+ "repository": {
+ "type": "git",
+ "url": "https://github.com/iceinvein/agent-skills"
+ },
+ "devDependencies": {
+ "bun-types": "^1.3.11"
+ }
+ }
@@ -0,0 +1,31 @@
+ {
+ "name": "code-intelligence",
+ "version": "1.0.0",
+ "description": "Semantic code search, call hierarchy, dependency graphs, and impact analysis via MCP",
+ "author": "iceinvein",
+ "type": "code",
+ "tools": ["claude", "cursor"],
+ "mcp": {
+ "package": "@iceinvein/code-intelligence-mcp",
+ "command": "npx",
+ "args": ["-y", "@iceinvein/code-intelligence-mcp"]
+ },
+ "install": {
+ "claude": {
+ "mcpServers": {
+ "code-intelligence": {
+ "command": "npx",
+ "args": ["-y", "@iceinvein/code-intelligence-mcp"]
+ }
+ }
+ },
+ "cursor": {
+ "mcpServers": {
+ "code-intelligence": {
+ "command": "npx",
+ "args": ["-y", "@iceinvein/code-intelligence-mcp"]
+ }
+ }
+ }
+ }
+ }
@@ -0,0 +1,183 @@
+ ---
+ name: codebase-architecture
+ description: Use when designing architecture for a new project, reviewing an existing codebase for structural health, identifying design patterns and anti-patterns, or when asked to refactor module boundaries, dependency direction, or component organization
+ ---
+
+ # Codebase Architecture
+
+ Design architecture for new projects or review existing codebases. Covers both macro (architectural patterns, module boundaries, dependency direction) and micro (design patterns within components) levels. Language-agnostic.
+
+ **Core principle:** Evidence before opinion. Read code before diagnosing. Name every pattern explicitly. Cite specific files and lines.
+
+ ## When to Use
+
+ - "Review the architecture of this codebase"
+ - "How should I structure this new project?"
+ - "What design patterns are we using?"
+ - "Help me refactor these module boundaries"
+ - "Map the dependencies in this project"
+ - Codebase feels tangled, coupled, or inconsistent
+
+ **Not for:** Reviewing design documents (use `design-integrity-review`), bug hunting (use `find-bugs`), performance optimization.
+
+ ## Mode Selection
+
+ ```dot
+ digraph mode {
+ "Skill invoked" [shape=doublecircle];
+ "Mode explicit?" [shape=diamond];
+ "Codebase has substantial code?" [shape=diamond];
+ "Ask user: new design or review?" [shape=box];
+ "New-Project Mode" [shape=doublecircle];
+ "Review Mode" [shape=doublecircle];
+
+ "Skill invoked" -> "Mode explicit?";
+ "Mode explicit?" -> "New-Project Mode" [label="user said design/new"];
+ "Mode explicit?" -> "Review Mode" [label="user said review/audit/refactor"];
+ "Mode explicit?" -> "Codebase has substantial code?" [label="ambiguous"];
+ "Codebase has substantial code?" -> "Ask user: new design or review?" [label="yes"];
+ "Codebase has substantial code?" -> "New-Project Mode" [label="no/empty"];
+ "Ask user: new design or review?" -> "New-Project Mode" [label="new"];
+ "Ask user: new design or review?" -> "Review Mode" [label="review"];
+ }
+ ```
+
+ ---
+
+ ## New-Project Mode
+
+ Structured dialogue that produces an architecture spec. **Do NOT design in silence** — ask questions, wait for answers.
+
+ ### Phase 1: Understand the Domain
+
+ Before patterns or layers — understand what's being built:
+ - What does this system do? (one sentence)
+ - Who are the actors? (users, services, external systems)
+ - What are the 3-5 key operations?
+ - What are the hard constraints? (performance, team size, deployment, compliance)
+
+ **One question at a time.** Do not dump all questions at once.
+
+ ### Phase 2: Identify Architectural Drivers
+
+ Surface the forces that should *shape* the architecture:
+ - **Scale axis** — single-user CLI vs multi-tenant SaaS vs embedded
+ - **Change axis** — what changes frequently vs what remains stable?
+ - **Integration axis** — what external systems must this talk to?
+ - **Team axis** — how many people/teams? (Conway's Law)
+
+ These drivers determine which patterns are appropriate. A 500-line CLI does not need hexagonal architecture.
+
+ ### Phase 3: Select Architectural Patterns
+
+ Using the Phase 2 drivers, recommend macro-level patterns from `patterns-reference.md`:
+ - Overall structure (layered, hexagonal, event-driven, etc.)
+ - Communication style (sync, async, event sourcing)
+ - Data strategy (single DB, CQRS, shared-nothing)
+
+ **MUST present 2-3 options with trade-offs.** Recommend one with rationale tied explicitly to the drivers from Phase 2. Do not present a single approach as inevitable.
+
+ ### Phase 4: Define Boundaries & Interfaces
+
+ The most important phase:
+ - Each module gets a clear responsibility (one sentence)
+ - Define interfaces between modules
+ - **Establish dependency direction** — which modules know about which? Dependencies should point toward stable abstractions, not toward volatile details.
+ - **Name the design patterns** that apply within each module (e.g., "plugin system uses Strategy", "event bus uses Observer"). Refer to `patterns-reference.md`.
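To make the dependency-direction rule concrete, here is a minimal TypeScript sketch (all names hypothetical, not from this package): the domain module owns the interface, and the volatile storage detail implements it, so the arrow points from detail to abstraction.

```typescript
// domain/user-store.ts (hypothetical): the domain owns the abstraction.
interface UserStore {
  findByEmail(email: string): string | undefined;
}

// domain/register.ts: domain logic depends only on the interface.
function isRegistered(store: UserStore, email: string): boolean {
  return store.findByEmail(email) !== undefined;
}

// adapters/memory-store.ts: the detail depends on the domain, not vice versa.
class MemoryUserStore implements UserStore {
  private users = new Map([["a@b.c", "alice"]]);
  findByEmail(email: string): string | undefined {
    return this.users.get(email);
  }
}

const store = new MemoryUserStore();
if (!isRegistered(store, "a@b.c")) throw new Error("expected registered user");
```

Swapping `MemoryUserStore` for a database-backed store would not touch the domain module at all, which is the point of the rule.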
+
+ ### Phase 5: Produce Architecture Spec
+
+ Write a structured document to `docs/architecture/architecture-spec.md` (or the user's preferred location):
+ - System overview (one paragraph)
+ - Architectural drivers & constraints
+ - Module map with responsibilities
+ - Interface definitions
+ - Dependency graph (text-based, showing direction)
+ - Pattern decisions with rationale
+ - Risk areas & open questions
+
+ Ask the user how they want to proceed — their planning workflow, direct implementation, or keeping the spec as reference.
+
+ ---
+
+ ## Review Mode
+
+ Scan-first, dialogue-second. **Read code before forming opinions.**
+
+ ### Phase 1: Codebase Scan
+
+ Use available tools (code-intelligence MCP, grep, glob, file reads):
+ - **Dependency mapping** — trace imports/requires to build a module dependency graph. Note direction and cycles.
+ - **Structure mapping** — directory layout, module boundaries, entry points
+ - **Size analysis** — identify oversized files/modules (a signal of unclear responsibilities)
+ - **Pattern detection** — recognize patterns in use (repository classes, factory functions, event emitters, middleware chains)
+
+ **Do not skip this phase.** Do not rely on CLAUDE.md or README alone. Read actual source files.
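As an illustration of the dependency-mapping step, a small TypeScript sketch (hypothetical, not part of this package) that extracts relative imports and flags a cycle; a real scan would read files from disk instead of inlined sources.

```typescript
// Inlined sources standing in for files on disk.
const sources: Record<string, string> = {
  "a.ts": 'import { b } from "./b";',
  "b.ts": 'import { c } from "./c";',
  "c.ts": 'import { a } from "./a";', // completes a cycle a -> b -> c -> a
};

// Crude import extraction: matches `from "./name"` and captures the name.
const importRe = /from\s+["']\.\/(\w+)["']/g;
const graph = new Map<string, string[]>();
for (const [file, src] of Object.entries(sources)) {
  graph.set(file.replace(".ts", ""), [...src.matchAll(importRe)].map((m) => m[1]));
}

// Depth-first walk: a node revisited while still on the stack is a cycle.
function hasCycle(node: string, stack: string[] = []): boolean {
  if (stack.includes(node)) return true;
  return (graph.get(node) ?? []).some((dep) => hasCycle(dep, [...stack, node]));
}

if (!hasCycle("a")) throw new Error("expected cycle a -> b -> c -> a");
```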
+
+ ### Phase 2: Architectural Health Assessment
+
+ Evaluate findings against these dimensions:
+
+ | Dimension | Check | Rating |
+ |-----------|-------|--------|
+ | **Dependency direction** | Do deps point toward abstractions or toward details? Circular deps? | strong / adequate / weak |
+ | **Module cohesion** | Single clear responsibility per module? | strong / adequate / weak |
+ | **Coupling** | Can you change one module without rippling? | strong / adequate / weak |
+ | **Boundary clarity** | Explicit interfaces, or modules reach into internals? | strong / adequate / weak |
+ | **Pattern consistency** | Same problem solved the same way everywhere? | strong / adequate / weak |
+ | **Abstraction quality** | Abstractions earn their keep? Leaky or unnecessary? | strong / adequate / weak |
+
+ **Distinguish intentional from accidental.** A codebase using 3 patterns for the same problem might be evolving. Flag inconsistency but ask before assuming it's a mistake.
+
+ ### Phase 3: Health Report
+
+ Save to `docs/architecture/architecture-review-YYYY-MM-DD.md`:
+ - **Architecture overview** — what the structure actually is
+ - **Dependency graph** — text-based, showing direction and cycles
+ - **Pattern inventory** — each pattern named, where it's used, file:line references, consistency assessment
+ - **Smell inventory** — specific issues with severity (critical / warning / info)
+ - **Strengths** — what's working well and why. This section is mandatory.
+ - **Health score** — per-dimension rating table from Phase 2
+ - **Priority ordering** — what to fix first, based on impact-to-effort ratio
+
+ ### Phase 4: Prescriptions
+
+ For each smell or weakness:
+ - What to change and why
+ - Which pattern to apply (reference `patterns-reference.md`)
+ - Impact estimate (how much code changes, risk of breakage)
+ - Priority (high / medium / low) with rationale
+
+ ### Phase 5: Transition to Action
+
+ Present the prescriptions and ask:
+ - Which fixes to pursue
+ - How they want to proceed (use their planning skill, start executing, or keep the report as reference)
+
+ ---
+
+ ## Guard Rails
+
+ **Evidence before opinion.** Read code before diagnosing. Use code-intelligence MCP tools, grep, glob — whatever is available.
+
+ **Name the pattern, cite the location.** "Strategy pattern in `src/providers/types.ts:24`" — not "you seem to use a strategy-like approach."
+
+ **Strengths matter as much as weaknesses.** A review that only lists problems is misleading. Call out what's well-structured.
+
+ **Don't prescribe patterns for their own sake.** Every recommendation answers: "What problem does this solve in this codebase?" If the answer is "best practice," that's not good enough.
+
+ **Scale to codebase size.** 500-line CLI? Don't recommend hexagonal architecture. 50-file project? Don't recommend CQRS. Actively resist over-engineering.
+
+ **Don't assume before asking.** In new-project mode, do not state 10 assumptions and build a full design. Ask, listen, then design.
+
+ ## Common Mistakes
+
+ | Mistake | Fix |
+ |---------|-----|
+ | Designing without understanding domain constraints | Complete Phases 1-2 before Phase 3 |
+ | Presenting one architecture as inevitable | Always present 2-3 options with trade-offs |
+ | Using patterns but not naming them | Explicitly name every pattern by its standard name |
+ | Tracing module layout but not dependency direction | Map which modules import which and assess direction |
+ | Listing only problems in a review | Strengths section is mandatory |
+ | Over-engineering for a small codebase | Match pattern complexity to actual codebase scale |
+ | Assuming inconsistency is accidental | Ask before diagnosing — it might be intentional evolution |
@@ -0,0 +1,168 @@
+ # Architecture & Design Patterns Reference
+
+ Decision-aid for the `codebase-architecture` skill. Concise per entry — enough to make a decision, not a textbook.
+
+ ---
+
+ ## Architectural Patterns (Macro)
+
+ ### Layered (N-Tier)
+ Horizontal layers where each depends only on the layer below.
+ - **Use when:** Clear separation of concerns is needed; team members work on different layers; standard business applications.
+ - **Avoid when:** Heavy cross-cutting concerns; performance-critical paths that suffer from layer traversal.
+ - **Trade-off:** Simple to understand vs rigid boundaries that can lead to "pass-through" layers with no logic.
+
+ ### Hexagonal (Ports & Adapters)
+ Core domain logic has no external dependencies. External systems connect through ports (interfaces) and adapters (implementations).
+ - **Use when:** Domain logic is complex and must be testable in isolation; multiple external integrations (DBs, APIs, queues).
+ - **Avoid when:** Simple CRUD apps where the indirection adds no value; small scripts or CLIs.
+ - **Trade-off:** Excellent testability and replaceability vs more files and indirection.
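A minimal TypeScript sketch of the port/adapter split (all names hypothetical): the domain defines a port as an interface, and each adapter, including a test double, implements it at the edge.

```typescript
// Port: defined by the core domain, no infrastructure imports.
interface ClockPort {
  now(): number;
}

// Domain logic depends only on the port.
function isExpired(clock: ClockPort, deadline: number): boolean {
  return clock.now() > deadline;
}

// Adapters: the real one and a fixed one for tests.
const systemClock: ClockPort = { now: () => Date.now() };
const fixedClock: ClockPort = { now: () => 1000 };

if (!isExpired(fixedClock, 500)) throw new Error("1000 > 500 should be expired");
if (isExpired(fixedClock, 2000)) throw new Error("1000 < 2000 is not expired");
```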
+
+ ### Event-Driven
+ Components communicate through events. Producers don't know about consumers.
+ - **Use when:** Loose coupling between subsystems; async workflows; audit trails; multiple consumers for the same event.
+ - **Avoid when:** Simple request-response flows; you need synchronous consistency; small teams where the indirection isn't justified.
+ - **Trade-off:** Extreme decoupling vs harder debugging and eventual consistency challenges.
+
+ ### CQRS (Command Query Responsibility Segregation)
+ Separate models for reading and writing data.
+ - **Use when:** Read and write patterns differ significantly; high-read/low-write or vice versa; complex domain with simple queries.
+ - **Avoid when:** Read and write models are nearly identical; adds complexity without benefit for simple domains.
+ - **Trade-off:** Optimized read/write paths vs maintaining two models and their synchronization.
+
+ ### Pipe-and-Filter
+ Data flows through a sequence of processing steps, each transforming the input.
+ - **Use when:** Data transformation pipelines; compiler stages; ETL processes; middleware chains.
+ - **Avoid when:** Complex branching logic; interactive applications.
+ - **Trade-off:** Composable and reusable stages vs limited to sequential data flow.
+
+ ### Microkernel (Plugin)
+ Minimal core with extensible functionality via plugins.
+ - **Use when:** Product must support customization; IDE-like extensibility; varying feature sets per deployment.
+ - **Avoid when:** All features are always needed; plugin overhead isn't justified.
+ - **Trade-off:** Maximum extensibility vs a plugin API that is hard to get right and hard to change.
+
+ ### Monolith-First
+ Single deployable unit. Extract services only when proven necessary.
+ - **Use when:** Starting a new project; small team; unclear domain boundaries; rapid prototyping.
+ - **Avoid when:** Domain boundaries are well-understood AND team scale demands independent deployment.
+ - **Trade-off:** Simple deployment and debugging vs harder to scale individual components independently.
+
+ ---
+
+ ## Design Patterns (Micro)
+
+ ### Creating Things
+
+ **Factory** — Encapsulates object creation logic. Use when: construction is complex, multiple variants exist, or you want to decouple creation from usage.
+ - Detect: Functions named `createX()`, `buildX()`, or `XFactory` classes.
+ - Misuse: Factory for objects with trivial constructors (just use `new`).
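A minimal Factory sketch in TypeScript (hypothetical names): callers ask one function for a `Notifier` and never name the concrete variants.

```typescript
interface Notifier {
  send(msg: string): string;
}

// The factory: the only place that knows which variant gets built.
function createNotifier(kind: "email" | "sms"): Notifier {
  if (kind === "email") return { send: (m) => `email: ${m}` };
  return { send: (m) => `sms: ${m}` };
}

const n = createNotifier("sms");
if (n.send("hi") !== "sms: hi") throw new Error("factory returned wrong variant");
```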
+
+ **Builder** — Step-by-step construction of complex objects. Use when: objects have many optional parameters or construction requires validation.
+ - Detect: Method chaining patterns (`.setX().setY().build()`).
+ - Misuse: Builder for objects with 2-3 parameters (use constructor or options object).
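A compact Builder sketch (hypothetical names): chained setters accumulate state, and `build()` is where validation lives.

```typescript
class RequestBuilder {
  private parts: { url?: string; method: string } = { method: "GET" };
  url(u: string) { this.parts.url = u; return this; }
  method(m: string) { this.parts.method = m; return this; }
  build() {
    // Validation happens once, at the end of construction.
    if (!this.parts.url) throw new Error("url is required");
    return { ...this.parts } as { url: string; method: string };
  }
}

const req = new RequestBuilder().url("/users").method("POST").build();
if (req.method !== "POST") throw new Error("builder lost the method");
```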
+
+ **Singleton** — Single instance shared globally. Use when: truly global resources (DB connection pool, logger). **Use sparingly** — often a sign of hidden global state.
+ - Detect: `getInstance()`, module-level `export const instance = new X()`.
+ - Misuse: Using a singleton when dependency injection would be cleaner and more testable.
+
+ ### Structuring Relationships
+
+ **Adapter** — Translates one interface to another. Use when: integrating with external APIs or libraries whose interface doesn't match yours.
+ - Detect: Classes wrapping third-party libraries; `XAdapter`, `XWrapper` names.
+ - Misuse: Adapting internal code to internal code (just change the interface).
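A small Adapter sketch (all names hypothetical, the "vendor" object stands in for a third-party library): the adapter translates the vendor's interface into the one your code expects.

```typescript
// Stand-in for a third-party logger with an awkward interface.
const vendorLogger = {
  writeEntry(level: number, text: string): string { return `${level}|${text}`; },
};

// The interface our own code is written against.
interface Logger {
  info(msg: string): string;
}

// The adapter: the only code that knows both interfaces.
const loggerAdapter: Logger = {
  info: (msg) => vendorLogger.writeEntry(1, msg),
};

if (loggerAdapter.info("up") !== "1|up") throw new Error("adapter mistranslated");
```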
+
+ **Facade** — Simplified interface over a complex subsystem. Use when: callers need a simple API but the underlying system is complex.
+ - Detect: Classes that delegate to multiple subsystem objects; `XService`, `XManager` names.
+ - Misuse: Facade that just passes through to one class (unnecessary indirection).
+
+ **Decorator** — Wraps an object to add behavior without modifying it. Use when: cross-cutting concerns (logging, caching, auth); composable behavior layers.
+ - Detect: Classes/functions that wrap another and add behavior; middleware patterns.
+ - Misuse: Decorator chains so deep that debugging becomes impossible.
+
+ **Composite** — Tree structure where individual objects and compositions share the same interface. Use when: hierarchical data (file systems, UI component trees, org charts).
+ - Detect: Recursive structures where a node can contain children of the same type.
+
+ ### Managing Behavior
+
+ **Strategy** — Swappable algorithms behind a common interface. Use when: multiple approaches to the same operation; user-selectable behavior; provider abstraction.
+ - Detect: Interface + multiple implementations; `XStrategy`, `XProvider` names; config-driven selection.
+ - Misuse: Strategy for a single implementation with no planned alternatives.
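A minimal Strategy sketch (hypothetical names): pricing algorithms live behind one function type and are selected by key rather than by branching at every call site.

```typescript
type PricingStrategy = (base: number) => number;

const strategies: Record<string, PricingStrategy> = {
  standard: (base) => base,
  premium: (base) => base * 1.5,
  discounted: (base) => base * 0.8,
};

// Config-driven selection with a safe fallback.
function price(plan: string, base: number): number {
  return (strategies[plan] ?? strategies.standard)(base);
}

if (price("premium", 100) !== 150) throw new Error("premium strategy misapplied");
if (price("unknown", 100) !== 100) throw new Error("fallback strategy misapplied");
```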
+
+ **Observer / EventEmitter** — Objects subscribe to notifications from a subject. Use when: one-to-many notifications; decoupled event handling.
+ - Detect: `.on()`, `.subscribe()`, `.addEventListener()`; `EventEmitter` usage; pub/sub patterns.
+ - Misuse: Observer between two tightly-coupled objects (just call a method).
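A bare-bones Observer sketch (hypothetical names): the emitter notifies every subscriber without knowing who they are.

```typescript
type Listener = (event: string) => void;

class Emitter {
  private listeners: Listener[] = [];
  on(fn: Listener) { this.listeners.push(fn); }
  emit(event: string) { this.listeners.forEach((fn) => fn(event)); }
}

const seen: string[] = [];
const bus = new Emitter();
bus.on((e) => seen.push(`audit:${e}`));   // two independent consumers
bus.on((e) => seen.push(`metrics:${e}`)); // of the same event
bus.emit("user.created");

if (seen.join(",") !== "audit:user.created,metrics:user.created")
  throw new Error("both subscribers should be notified");
```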
+
+ **State Machine** — Object behavior changes based on internal state. Use when: complex state transitions with rules; workflow engines; protocol implementations.
+ - Detect: Switch/match on state; `status` fields with transition logic; state transition tables.
+ - Misuse: State machine for simple boolean flags.
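A minimal state-machine sketch (hypothetical states): legal transitions are a data table, not conditionals scattered through the code.

```typescript
type State = "draft" | "review" | "published";

// The transition table is the single source of truth for legal moves.
const transitions: Record<State, State[]> = {
  draft: ["review"],
  review: ["draft", "published"],
  published: [],
};

function next(current: State, target: State): State {
  if (!transitions[current].includes(target))
    throw new Error(`illegal transition ${current} -> ${target}`);
  return target;
}

if (next("draft", "review") !== "review") throw new Error("legal transition failed");
let rejected = false;
try { next("draft", "published"); } catch { rejected = true; }
if (!rejected) throw new Error("draft -> published should be rejected");
```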
+
+ **Command** — Encapsulates a request as an object. Use when: undo/redo; queuing operations; macro recording.
+ - Detect: Classes with `execute()` method; command queues; action objects.
+
+ **Middleware / Chain of Responsibility** — Request passes through a chain of handlers. Use when: HTTP request processing; plugin hooks; validation pipelines.
+ - Detect: `app.use()` patterns; `next()` callbacks; ordered handler arrays.
+ - Misuse: Chain with only one handler (just call it directly).
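A small middleware-chain sketch (hypothetical names), mirroring `app.use()` style APIs: each handler may transform the request and must call `next()` to continue.

```typescript
type Handler = (req: string, next: (req: string) => string) => string;

// Compose handlers right-to-left so the first handler runs first.
function chain(handlers: Handler[]): (req: string) => string {
  return (req) =>
    handlers.reduceRight<(r: string) => string>(
      (next, handler) => (r) => handler(r, next),
      (r) => r, // terminal handler: echo the request as the response
    )(req);
}

const pipeline = chain([
  (req, next) => next(req.trim()),       // normalize the request
  (req, next) => `handled:${next(req)}`, // wrap the downstream result
]);

if (pipeline("  ping ") !== "handled:ping") throw new Error("chain misordered");
```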
+
+ ### Accessing Data
+
+ **Repository** — Abstraction over data access that presents a collection-like interface. Use when: separating domain logic from persistence; multiple data sources; testability.
+ - Detect: `XRepository` classes with `find()`, `save()`, `delete()`; data access layer.
+ - Misuse: Repository that just wraps an ORM with identical methods (adds nothing).
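A minimal Repository sketch (hypothetical names): domain code sees a collection-like interface, an in-memory implementation backs tests, and a database-backed one could replace it without touching callers.

```typescript
interface User { id: number; name: string; }

interface UserRepository {
  save(user: User): void;
  findById(id: number): User | undefined;
}

// In-memory implementation; a DB-backed class would implement the same interface.
class InMemoryUserRepository implements UserRepository {
  private rows = new Map<number, User>();
  save(user: User) { this.rows.set(user.id, user); }
  findById(id: number) { return this.rows.get(id); }
}

const repo: UserRepository = new InMemoryUserRepository();
repo.save({ id: 1, name: "ada" });
if (repo.findById(1)?.name !== "ada") throw new Error("repository lost the row");
if (repo.findById(2) !== undefined) throw new Error("missing row should be undefined");
```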
+
+ **Unit of Work** — Tracks changes to objects and coordinates writing them back. Use when: multiple related changes must be atomic; transaction management.
+ - Detect: Transaction wrappers; `commit()`/`rollback()` patterns.
+
+ **Data Mapper** — Separates domain objects from database representation. Use when: domain model differs from storage schema; complex mapping logic.
+ - Detect: Mapping functions between DB rows and domain objects; `toEntity()`, `toRow()`.
+
+ **Active Record** — Domain objects handle their own persistence. Use when: simple CRUD; domain closely mirrors the database; rapid prototyping.
+ - Detect: Model classes with `.save()`, `.delete()` on instances.
+ - Misuse: Active Record with complex domain logic (business rules mixed with persistence).
+
+ ---
+
+ ## Anti-Patterns & Smells
+
+ ### Structural
+ | Smell | How to Detect | Cost | Typical Fix |
+ |-------|--------------|------|-------------|
+ | **God Object** | File >500 lines with many unrelated methods | Hard to test, understand, modify | Extract focused services |
+ | **Circular Dependencies** | Module A imports B, B imports A | Build failures, initialization order bugs | Introduce shared interface or restructure |
+ | **Feature Envy** | Method uses more data from another class than its own | Misplaced responsibility | Move method to the class whose data it uses |
+ | **Shotgun Surgery** | One change requires editing many files | High cost of change | Consolidate related logic into one module |
+
+ ### Abstraction
+ | Smell | How to Detect | Cost | Typical Fix |
+ |-------|--------------|------|-------------|
+ | **Leaky Abstraction** | Callers need to know implementation details | Abstraction provides false safety | Fix the abstraction or remove it |
+ | **Speculative Generality** | Abstractions for hypothetical future needs | Complexity without benefit | YAGNI — remove until actually needed |
+ | **Dead Abstraction** | Interface with exactly one implementation, no plan for more | Unnecessary indirection | Inline it; add interface when a second impl appears |
+
+ ### Coupling
+ | Smell | How to Detect | Cost | Typical Fix |
+ |-------|--------------|------|-------------|
+ | **Inappropriate Intimacy** | Module reaches into another's private state | Brittle coupling | Define explicit interface at the boundary |
+ | **Hidden Dependencies** | Module uses globals or singletons not in its interface | Surprises, hard to test | Make dependencies explicit (constructor/parameter injection) |
+ | **Global State** | Module-level mutable state shared across callers | Race conditions, test pollution | Scope state to instances; inject as dependency |
+
+ ---
+
+ ## Decision Matrix
+
+ | Situation | Consider These Patterns |
+ |-----------|------------------------|
+ | High change frequency in one area | Strategy, Plugin, Adapter |
+ | Multiple data sources or external integrations | Repository, Adapter, Hexagonal |
+ | Complex object creation with many variants | Factory, Builder |
+ | Cross-cutting concerns (logging, auth, caching) | Middleware, Decorator |
+ | Async coordination between subsystems | Observer, Event-Driven |
+ | Complex state transitions with rules | State Machine |
+ | Need to undo/replay operations | Command |
+ | Callers need simple API over complex internals | Facade |
+ | Same operation, different algorithms | Strategy |
+ | Hierarchical/recursive data structures | Composite |
+ | Separating domain logic from external dependencies | Hexagonal, Repository |
+ | Starting a new project, unclear boundaries | Monolith-First, Layered |
+ | Data transformation pipelines | Pipe-and-Filter, Middleware |
+ | Read-heavy vs write-heavy divergence | CQRS |
@@ -0,0 +1,36 @@
+ {
+ "name": "codebase-architecture",
+ "version": "1.0.0",
+ "description": "Architecture review for existing codebases or structured design for new projects, with patterns reference",
+ "author": "iceinvein",
+ "type": "prompt",
+ "tools": ["claude", "cursor", "codex", "gemini"],
+ "files": {
+ "prompt": "SKILL.md",
+ "supporting": ["patterns-reference.md"]
+ },
+ "install": {
+ "claude": {
+ "prompt": ".claude/skills/codebase-architecture/SKILL.md",
+ "supporting": {
+ "patterns-reference.md": ".claude/skills/codebase-architecture/patterns-reference.md"
+ }
+ },
+ "cursor": {
+ "prompt": ".cursor/rules/codebase-architecture.mdc",
+ "supporting": {
+ "patterns-reference.md": ".cursor/rules/codebase-architecture-patterns.mdc"
+ }
+ },
+ "codex": {
+ "prompt": "AGENTS.md",
+ "append": true
+ },
+ "gemini": {
+ "prompt": ".gemini/skills/codebase-architecture.md",
+ "supporting": {
+ "patterns-reference.md": ".gemini/skills/codebase-architecture-patterns.md"
+ }
+ }
+ }
+ }
@@ -0,0 +1,150 @@
+ ---
+ name: design-integrity-review
+ description: Use when reviewing, evaluating, or giving feedback on a design document, technical spec, architecture doc, system design, product spec, API design, or database schema — especially after AI helped write or brainstorm it. Always use this skill when the user shares a design and asks you to review it, check if it holds together, or help scope it down. Trigger on phrases like "review my design", "does this design make sense", "can you look at this spec", "not sure what to cut", "scope this down", "feels like too much", "check if this holds together", "design review", or any request to evaluate a design document. Also trigger when the user describes a design that sounds like a feature list without a unifying idea, mentions AI helped create it, or expresses uncertainty about whether the design is coherent.
+ ---
+
+ # Design Integrity Review
+
+ ## Overview
+
+ A structured design review inspired by Frederick Brooks' *The Design of Design*. Walks the designer through hard questions about conceptual integrity, constraint exploitation, and vision coherence — especially after AI-assisted design sessions, where accretion without vision is the default failure mode.
+
+ **Core principle:** Great design comes from a coherent vision held by one mind. AI is a powerful collaborator, but it has no taste, no sense of budget, and no opinion about what the design is *about*. This review forces you to prove that you do.
+
+ ## When to Use
+
+ - After writing or revising a design document, spec, or architectural plan
+ - After a design session where AI generated significant portions of the design
+ - When a design "feels done" but you can't explain its central idea in one sentence
+ - When scope has grown and you're unsure what to cut
+ - Before presenting a design to stakeholders or beginning implementation
+
+ **Do NOT use for:**
+ - Code review (use code-review skills)
+ - Debugging or investigating failures
+ - Requirements gathering (this reviews an existing design, not creates one)
+
+ ## The Review Process
+
+ You are a thoughtful, experienced design partner. Not an adversary — but you ask hard questions and don't accept hand-waving. Short sentences. Direct feedback. Credit solid thinking when you see it.
+
+ **IMPORTANT:** This is an interactive interview, not a checklist. Ask ONE question at a time. Listen to the answer. Follow up based on what was actually said, not what you planned to ask next.
+
+ ### Phase 1: The One-Sentence Test
+
+ Start here. Always.
+
+ > "In one sentence, what is this design *about*? Not what it does — what it's about."
+
+ If the designer cannot answer this clearly, the design lacks conceptual integrity. Do not proceed to other phases until this is resolved. Help them find it by asking:
+
+ - "If you had to remove half the features, which half survives? Why?"
+ - "What would a user say this is, in five words?"
+ - "What existing thing is this most like — and where does it deliberately diverge?"
+
+ The one-sentence answer becomes the **design thesis**. Every subsequent question tests against it.
+
+ **Scan for signals while asking the thesis question.** Phase 1 gates progress, but that doesn't mean you ignore everything else the designer says. If they mention AI involvement, team size, constraints, or scope concerns in their opening description, weave those into your thesis question — don't save them for later phases. For example: "You mentioned AI helped lay out the components — I want to come back to that. But first: in one sentence, what is this design *about*?" This acknowledges what they said, signals you'll probe it, and keeps the thesis question front and center.
+
+ ### Phase 2: Conceptual Integrity
+
+ Walk through these branches, adapting to what the designer says:
+
+ **Coherence** — Does every part serve the thesis?
+ - "Walk me through each major component. For each one: how does it serve the central idea?"
+ - "Which parts feel bolted on? Which parts feel inevitable?"
+ - "If a stranger read this design, would they guess the same thesis you stated?"
+
+ **Authorship** — Is there a clear point of view?
+ - "Where in this design do I see *your* judgment, not just AI suggestions you accepted?"
+ - "What did the AI suggest that you rejected? Why?"
+ - "What decision in this design would a reasonable person disagree with? Good — that means someone made a choice."
+
+ **Unity of style** — Does it feel like one mind designed it?
+ - "Are there places where the design contradicts itself in tone, complexity, or approach?"
+ - "Does the system-design portion feel like it was written by the same person as the UI portion?"
+
+ ### Phase 3: Constraint Exploitation
+
+ Brooks argues constraints improve design. Probe whether constraints are being exploited or merely tolerated.
+
+ - "What are your hardest constraints? (Time, performance, team size, budget, technical debt)"
+ - "For each constraint — did it *shape* the design, or are you just working around it?"
+ - "Which constraint, if removed, would make you redesign from scratch? That's your most important constraint. Is the design honoring it?"
+ - "What did the constraints force you to invent that you wouldn't have thought of otherwise?"
+
+ If the designer can't point to a single constraint that *improved* the design, they may be fighting their constraints instead of using them.
+
+ ### Phase 4: What Was Removed
+
+ Design is as much about removal as addition. This phase catches accretion — the primary failure mode of AI-assisted design.
+
+ - "What was in an earlier draft that you cut? Why?"
+ - "What feature or component are you most tempted to add that isn't in the design? Why haven't you? (If the answer is 'no reason' — cut something.)"
+ - "If you had to ship this with 30% less scope, what goes? Does the design still hold together?"
+ - "Is there any part that exists because AI suggested it and it seemed reasonable, but you never asked whether it was *necessary*?"
+
+ **The AI accretion test:** If you removed every element that originated from AI and wasn't independently validated against the thesis, would the design still stand? If not, the AI is the designer and you are the editor. Reverse those roles.
+
+ ### Phase 5: Second-System Check
+
+ Brooks' second-system effect: the tendency to over-engineer, especially when you have powerful tools (like AI) that make adding complexity feel free.
+
+ - "Where is this design more complex than it needs to be?"
+ - "What's the simplest version of this that still delivers the thesis?"
+ - "Are there abstractions here that serve hypothetical future needs rather than current ones?"
+ - "If a junior engineer had to maintain this, what would confuse them first?"
+
+ ### Phase 6: Budget Review
+
+ Every design has budgets — explicit or implicit. Surface them.
+
+ - "What are your budgets? (Complexity budget, performance budget, cognitive load budget, time-to-ship budget)"
+ - "Which budget is most at risk of being blown?"
+ - "Have you allocated budget to the parts that matter most to the thesis, or spread it evenly?"
+ - "What would you sacrifice to stay within budget?"
+
+ ### Synthesis
+
+ After walking the branches, synthesize:
+
+ 1. **Design thesis** — restate the one-sentence answer (refined through discussion)
+ 2. **Integrity score** — how well does every part serve the thesis? (strong / mixed / weak)
+ 3. **Top risks** — the 2-3 things most likely to undermine the design
+ 4. **Recommended cuts** — what should be removed to strengthen coherence
+ 5. **Constraints to exploit** — constraints that could become creative advantages
+ 6. **Next questions** — what the designer should think about before implementation
+
+ ## Tone Guide
+
+ - You are Brooks at a whiteboard: experienced, curious, direct
+ - Credit strong thinking explicitly: "That's a solid constraint exploitation — the limitation forced a better interaction model"
+ - Challenge weak thinking directly: "That sounds like a feature you accepted because AI suggested it, not because the design demanded it"
+ - One question at a time. Wait for the answer. Follow the thread
+ - Use the designer's own words back to them — "You said the thesis is X. This component seems to serve Y. Help me reconcile that."
+ - Never generate design solutions. Your job is to ask questions that help the designer find their own answers
+
+ ## Red Flags During Review
+
+ If you notice any of these, name them directly:
+
+ | Signal | What it means |
+ |--------|---------------|
+ | Designer can't state thesis in one sentence | Design lacks conceptual integrity |
+ | Every feature "is important" | No prioritization — accretion, not design |
+ | No constraints shaped the design | Constraints are being fought, not used |
+ | Nothing was removed from earlier drafts | Design by addition, not by judgment |
+ | "The AI suggested it and it made sense" | AI is the designer, human is the editor |
+ | Design is equally detailed everywhere | No budget allocation — everything got equal attention |
+ | Can't identify a controversial decision | No point of view — committee design |
+ | Removing 30% breaks everything | Tightly coupled — fragile under change |
+
+ ## Common Mistakes
+
+ **Treating this as a checklist:** Don't mechanically ask every question. Follow the conversation. Some designs need 20 minutes on Phase 1 and nothing else. Others breeze through the thesis but fall apart on constraints.
145
+
146
+ **Being adversarial instead of Socratic:** The goal is clarity, not winning. If the designer has strong answers, say so and move on.
147
+
148
+ **Generating solutions:** Your job is questions, not answers. When the designer is stuck, ask a question that reframes the problem — don't propose a design.
149
+
150
+ **Skipping Phase 4 (removal):** This is the most uncomfortable phase and the most valuable. AI-assisted designs almost always have accretion. Always probe here.
@@ -0,0 +1,26 @@
+ {
+   "name": "design-review",
+   "version": "1.0.0",
+   "description": "Brooks-inspired design integrity review — tests conceptual integrity, constraint exploitation, removal discipline, and scope control",
+   "author": "iceinvein",
+   "type": "prompt",
+   "tools": ["claude", "cursor", "codex", "gemini"],
+   "files": {
+     "prompt": "SKILL.md"
+   },
+   "install": {
+     "claude": {
+       "prompt": ".claude/skills/design-review/SKILL.md"
+     },
+     "cursor": {
+       "prompt": ".cursor/rules/design-review.mdc"
+     },
+     "codex": {
+       "prompt": "AGENTS.md",
+       "append": true
+     },
+     "gemini": {
+       "prompt": ".gemini/skills/design-review.md"
+     }
+   }
+ }
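The per-tool `install` map above tells the CLI where each tool's prompt file should land, and `append` marks tools that share a single prompt file. A minimal sketch of how an installer might resolve this map — the types and the `resolveInstall` helper are illustrative assumptions, not the package's actual API:

```typescript
// Resolve the destination path and write mode for one tool from a
// skill manifest shaped like the JSON above. Hypothetical sketch only.
type InstallEntry = { prompt: string; append?: boolean };
type Manifest = { name: string; install: Record<string, InstallEntry> };

function resolveInstall(
  m: Manifest,
  tool: string,
): { path: string; mode: "write" | "append" } {
  const entry = m.install[tool];
  if (!entry) {
    throw new Error(`skill "${m.name}" has no install target for "${tool}"`);
  }
  // Tools like codex append to a shared AGENTS.md; others get their own file.
  return { path: entry.prompt, mode: entry.append ? "append" : "write" };
}

const manifest: Manifest = {
  name: "design-review",
  install: {
    claude: { prompt: ".claude/skills/design-review/SKILL.md" },
    codex: { prompt: "AGENTS.md", append: true },
  },
};

console.log(resolveInstall(manifest, "codex")); // path "AGENTS.md", mode "append"
```

The `append` flag is what distinguishes overwrite-safe per-skill files from shared files that must be appended to rather than replaced.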
@@ -0,0 +1,20 @@
+ [
+   {
+     "name": "design-review",
+     "description": "Brooks-inspired design integrity review — tests conceptual integrity, constraint exploitation, removal discipline, and scope control",
+     "type": "prompt",
+     "version": "1.0.0"
+   },
+   {
+     "name": "codebase-architecture",
+     "description": "Architecture review for existing codebases or structured design for new projects, with patterns reference",
+     "type": "prompt",
+     "version": "1.0.0"
+   },
+   {
+     "name": "code-intelligence",
+     "description": "Semantic code search, call hierarchy, dependency graphs, and impact analysis via MCP",
+     "type": "code",
+     "version": "1.0.0"
+   }
+ ]
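The registry above mixes the two skill types: `prompt` skills install a markdown file, while `code` skills configure an MCP server. A CLI listing them would typically group by type first. A hypothetical sketch (the `groupByType` helper is illustrative, not part of the package):

```typescript
// Group registry entries by type so a CLI can list prompt skills and
// MCP-backed code skills separately. Names are illustrative only.
type SkillMeta = { name: string; type: "prompt" | "code"; version: string };

function groupByType(registry: SkillMeta[]): Record<string, string[]> {
  const groups: Record<string, string[]> = {};
  for (const skill of registry) {
    // Create the bucket on first sight, then collect the skill name.
    (groups[skill.type] ??= []).push(skill.name);
  }
  return groups;
}

const registry: SkillMeta[] = [
  { name: "design-review", type: "prompt", version: "1.0.0" },
  { name: "codebase-architecture", type: "prompt", version: "1.0.0" },
  { name: "code-intelligence", type: "code", version: "1.0.0" },
];

const groups = groupByType(registry);
// groups.prompt holds the two prompt skills; groups.code holds ["code-intelligence"]
```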