@codyswann/lisa 1.0.5 → 1.0.9
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/all/copy-overwrite/.claude/commands/project/add-test-coverage.md +34 -35
- package/all/copy-overwrite/.claude/commands/project/archive.md +2 -1
- package/all/copy-overwrite/.claude/commands/project/bootstrap.md +26 -31
- package/all/copy-overwrite/.claude/commands/project/debrief.md +30 -35
- package/all/copy-overwrite/.claude/commands/project/execute.md +37 -19
- package/all/copy-overwrite/.claude/commands/project/fix-linter-error.md +40 -61
- package/all/copy-overwrite/.claude/commands/project/implement.md +9 -9
- package/all/copy-overwrite/.claude/commands/project/lower-code-complexity.md +42 -30
- package/all/copy-overwrite/.claude/commands/project/plan.md +32 -12
- package/all/copy-overwrite/.claude/commands/project/reduce-max-lines-per-function.md +35 -46
- package/all/copy-overwrite/.claude/commands/project/reduce-max-lines.md +33 -46
- package/all/copy-overwrite/.claude/commands/project/research.md +25 -0
- package/all/copy-overwrite/.claude/commands/project/review.md +30 -20
- package/all/copy-overwrite/.claude/commands/project/setup.md +51 -15
- package/all/copy-overwrite/.claude/commands/project/verify.md +26 -54
- package/all/copy-overwrite/.claude/commands/pull-request/review.md +62 -20
- package/all/copy-overwrite/.claude/settings.json +1 -1
- package/all/copy-overwrite/HUMAN.md +0 -12
- package/cdk/copy-overwrite/.github/workflows/deploy.yml +1 -1
- package/expo/copy-overwrite/.github/workflows/lighthouse.yml +17 -0
- package/expo/copy-overwrite/knip.json +2 -0
- package/nestjs/copy-overwrite/.github/workflows/deploy.yml +1 -1
- package/package.json +4 -1
- package/typescript/copy-overwrite/.github/workflows/quality.yml +46 -0
- package/all/copy-overwrite/.claude/commands/project/complete-task.md +0 -59

package/all/copy-overwrite/.claude/commands/project/add-test-coverage.md

@@ -1,6 +1,6 @@
 ---
 description: Increase test coverage to a specified threshold percentage
-allowed-tools: Read,
+allowed-tools: Read, Bash, Glob, Grep
 argument-hint: <threshold-percentage>
 model: sonnet
 ---

@@ -11,48 +11,47 @@ Target threshold: $ARGUMENTS%

 If no argument provided, prompt the user for a target.

-##
+## Step 1: Gather Requirements

-
+1. **Find coverage config** (jest.config.js, vitest.config.ts, .nycrc, etc.)
+2. **Run coverage report** to get current state:
+```bash
+bun run test:cov 2>&1 | head -100
+```
+3. **Identify the 20 files with lowest coverage**, noting:
+- File path
+- Current coverage % (lines, branches, functions)
+- Which lines/branches are uncovered

-
+## Step 2: Generate Brief

-
-cat .claude-active-project 2>/dev/null
-```
-
-If a project is active, include `metadata: { "project": "<project-name>" }` in all TaskCreate calls.
-
-### Step 1: Locate Configuration
-
-Find the test coverage config (jest.config.js, vitest.config.ts, .nycrc, etc.).
-
-### Step 2: Update Thresholds
-
-Set any threshold below $ARGUMENTS% to $ARGUMENTS% (line, branch, function, statement).
+Compile findings into a detailed brief:

-
-
-Run coverage and identify the **20 files** with the lowest coverage.
-
-### Step 4: Create Task List
-
-Create a task for each file needing test coverage, ordered by coverage gap (lowest first).
+```
+Increase test coverage from [current]% to $ARGUMENTS%.

-
-- **subject**: "Add test coverage for [file]" (imperative form)
-- **description**: File path, current coverage %, target threshold, notes about uncovered lines/branches
-- **activeForm**: "Adding tests for [file]" (present continuous)
-- **metadata**: `{ "project": "<active-project>" }` if project context exists
+## Files Needing Coverage (ordered by coverage gap)

-
+1. src/services/user.ts - 23% coverage (target: $ARGUMENTS%)
+- Uncovered: lines 45-67, 89-102
+- Missing branch coverage: lines 34, 56
+2. src/utils/helpers.ts - 34% coverage (target: $ARGUMENTS%)
+- Uncovered: lines 12-45
+...

-
+## Configuration
+- Config file: [path to coverage config]
+- Update thresholds to $ARGUMENTS% for: lines, branches, functions, statements

-
+## Acceptance Criteria
+- All files meet $ARGUMENTS% coverage threshold
+- `bun run test:cov` passes with no threshold violations

-
+## Verification
+Command: `bun run test:cov`
+Expected: All thresholds pass at $ARGUMENTS%
+```

-
+## Step 3: Bootstrap Project

-
+Run `/project:bootstrap` with the generated brief as a text prompt.
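
The brief's Configuration section asks for every coverage threshold to be raised to the target percentage. As a point of reference only (the package does not ship this file), the equivalent change in a Jest setup would look roughly like the sketch below; the `80` values stand in for `$ARGUMENTS%` and `jest.config.ts` for whatever config file the command detects.

```ts
// Illustrative only — assumes Jest; the filename and the 80% target are placeholders.
import type { Config } from 'jest';

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      lines: 80,       // raise each metric to the requested threshold
      branches: 80,
      functions: 80,
      statements: 80,
    },
  },
};

export default config;
```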

package/all/copy-overwrite/.claude/commands/project/bootstrap.md

@@ -1,49 +1,44 @@
 ---
 description: Automated project setup and research with gap detection
-argument-hint: <
+argument-hint: <file-path|jira-issue|"text description">
 ---

 Complete all of the following steps for $ARGUMENTS:

-## Step 0: MANDATORY SETUP
-
-Create workflow tracking todos:
-- Step 1: Setup
-- Step 2: Research
-- Step 3: Gap Detection
-
-⚠️ **CRITICAL**: DO NOT STOP until all 3 todos are marked completed.
-
 ## Step 1: Setup
-Mark "Step 1: Setup" as in_progress.

-
+Run `/project:setup $ARGUMENTS` directly (not via Task tool).
 - Creates project directory, brief.md, findings.md, git branch
+- Creates `.claude-active-project` marker file
+- Outputs the project name (e.g., `2026-01-26-my-feature`)

-
-
-## Step 2: Research
-Mark "Step 2: Research" as in_progress.
+Capture the project name from the output for use in subsequent steps.

-
-- Generates research.md with findings
+## Step 2: Create and Execute Tasks

-
+Create workflow tracking tasks with `metadata.project` set to the project name from Step 1:

-
-
+```
+TaskCreate:
+subject: "Research project requirements"
+description: "Run /project:research projects/<project-name> to gather codebase and web research."
+metadata: { project: "<project-name-from-step-1>" }

-
-
-- Check
-
-
-- If no gaps:
-✅ Report: "Bootstrap complete. Research has no gaps. Ready to run /project:execute @projects/$PROJECT"
+TaskCreate:
+subject: "Gap detection and execution"
+description: "Read projects/<project-name>/research.md. Check '## Open Questions' section. If unresolved questions exist, STOP and report to human. If no gaps, immediately run /project:execute @projects/<project-name>"
+metadata: { project: "<project-name-from-step-1>" }
+```

-
+**Execute each task via a subagent** to preserve main context. Launch up to 6 in parallel where tasks don't have dependencies. Do not stop until both are completed.

 ## Output to Human

-- "
--
+- If gaps exist: "⚠️ Bootstrap complete but needs human review - see Open Questions in research.md"
+- If no gaps: Execution will begin automatically
+
+---
+
+## Next Step
+
+After completing this phase, tell the user: "To continue, run `/project:execute @projects/<project-name>`"
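
The `TaskCreate:` blocks above are pseudo-YAML prompts, not an API the package exposes. A hypothetical TypeScript shape for the same payload, useful only for seeing which fields the new flow relies on:

```ts
// Hypothetical shape of the TaskCreate payloads shown above; the real tool is
// invoked by the agent, so this type is illustrative rather than part of lisa.
interface TaskCreatePayload {
  subject: string;                 // imperative summary, e.g. "Research project requirements"
  description: string;             // everything a subagent needs to run the step in isolation
  metadata: { project: string };   // ties the task to the active project directory
}

const researchTask: TaskCreatePayload = {
  subject: 'Research project requirements',
  description:
    'Run /project:research projects/2026-01-26-my-feature to gather codebase and web research.',
  metadata: { project: '2026-01-26-my-feature' },
};
```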

package/all/copy-overwrite/.claude/commands/project/debrief.md

@@ -1,54 +1,49 @@
 ---
-description:
+description: Aggregates learnings from tasks and findings, uses skill-evaluator to decide where each belongs (new skill, .claude/rules/PROJECT_RULES.md, or omit)
 argument-hint: <project-directory>
 allowed-tools: Read, Write, Edit, Bash, Glob, Grep, Task, TaskCreate, TaskUpdate, TaskList, Skill
 ---

 ## Setup

-
+Set active project marker: `echo "$ARGUMENTS" | sed 's|.*/||' > .claude-active-project`

-
-2. Evaluate each finding
-3. Apply decisions
+Extract `<project-name>` from the last segment of `$ARGUMENTS`.

-##
+## Create and Execute Tasks

-
-
-Extract each distinct finding/learning as a separate item.
-
-## Step 2: Evaluate Each Finding
-
-For each finding, use the Task tool with `subagent_type: "skill-evaluator"`:
+Create workflow tracking tasks with `metadata.project` set to the project name:

 ```
-
+TaskCreate:
+subject: "Aggregate project learnings"
+description: "Collect all learnings from two sources: 1) Read $ARGUMENTS/findings.md for manual findings. 2) Read all task files in $ARGUMENTS/tasks/*.json and extract metadata.learnings arrays. Compile into a single list of distinct findings/learnings."
+metadata: { project: "<project-name>" }
+
+TaskCreate:
+subject: "Evaluate each learning"
+description: "For each learning, use Task tool with subagent_type 'skill-evaluator' to determine: CREATE SKILL (complex, reusable pattern), ADD TO RULES (simple never/always rule), or OMIT ENTIRELY (already covered or too project-specific). Collect all decisions."
+metadata: { project: "<project-name>" }
+
+TaskCreate:
+subject: "Apply decisions"
+description: "For each learning based on skill-evaluator decision: CREATE SKILL → run /skill-creator with details. ADD TO RULES → add succinctly to .claude/rules/PROJECT_RULES.md. OMIT → no action. Report summary: skills created, rules added, omitted count."
+metadata: { project: "<project-name>" }
+```

-
+**Execute each task via a subagent** to preserve main context. Launch up to 6 in parallel where tasks don't have dependencies. Do not stop until all are completed.

-
-1. CREATE SKILL - if it's a complex, reusable pattern
-2. ADD TO RULES - if it's a simple never/always rule for .claude/rules/PROJECT_RULES.md
-3. OMIT ENTIRELY - if it's already covered or too project-specific
-```
+## Important: Rules vs Skills

-
+**⚠️ WARNING about PROJECT_RULES.md**: Rules in `.claude/rules/` are **always loaded** at session start for every request. Only add learnings to PROJECT_RULES.md if they:
+- Apply to **every** request in this codebase (not just specific features)
+- Are simple "never do X" or "always do Y" statements
+- Cannot be scoped to a skill that's invoked on-demand

-
+If a learning only applies to certain types of work (e.g., "when writing GraphQL resolvers..."), it should be a **skill** instead, not a rule.

-
+---

-
-|----------|--------|
-| CREATE SKILL | Use Task tool: "run /skill-creator with [finding details]" |
-| ADD TO RULES | Add the rule succinctly to @.claude/rules/PROJECT_RULES.md |
-| OMIT ENTIRELY | No action needed |
+## Next Step

-
-```
-Debrief complete:
-- Skills created: [X]
-- Rules added: [Y]
-- Omitted (redundant/narrow): [Z]
-```
+After completing this phase, tell the user: "To continue, run `/project:archive $ARGUMENTS`"
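
The "Aggregate project learnings" task reads `metadata.learnings` arrays out of `$ARGUMENTS/tasks/*.json` — the field the new plan.md tells implementers to populate. A minimal sketch of that aggregation step, assuming the JSON layout implied by the diff:

```ts
// Minimal sketch, assuming each task file may carry metadata.learnings as the
// new plan.md "Learnings" section describes; error handling is omitted.
import { readdirSync, readFileSync } from 'node:fs';
import { join } from 'node:path';

interface TaskFile {
  metadata?: { learnings?: string[] };
}

function collectLearnings(projectDir: string): string[] {
  const tasksDir = join(projectDir, 'tasks');
  return readdirSync(tasksDir)
    .filter((name) => name.endsWith('.json'))
    .flatMap((name) => {
      const task = JSON.parse(readFileSync(join(tasksDir, name), 'utf8')) as TaskFile;
      return task.metadata?.learnings ?? [];
    });
}

// e.g. collectLearnings('projects/2026-01-26-my-feature')
```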

package/all/copy-overwrite/.claude/commands/project/execute.md

@@ -21,30 +21,48 @@ Execute complete implementation workflow for $ARGUMENTS.
 3. Check if planning is already complete: `ls $ARGUMENTS/tasks/*.md 2>/dev/null | head -3`
 - If task files exist: Skip planning, start at implementation

-##
+## Create and Execute Tasks

-Create workflow tracking tasks with `metadata
+Create workflow tracking tasks with `metadata.project` set to the project name:

-
-
-
-
-
-6. Step 6: Archive
+```
+TaskCreate:
+subject: "Planning"
+description: "Run /project:plan $ARGUMENTS to create implementation tasks."
+metadata: { project: "<project-name>" }

-
+TaskCreate:
+subject: "Implementation"
+description: "Run /project:implement $ARGUMENTS to execute all planned tasks."
+metadata: { project: "<project-name>" }

-
+TaskCreate:
+subject: "Review"
+description: "Run /project:review $ARGUMENTS to review code changes."
+metadata: { project: "<project-name>" }

-
-
-
-
-| Review | `run /project:review $ARGUMENTS` |
-| Verification | `run /project:verify $ARGUMENTS` |
-| Debrief | `run /project:debrief $ARGUMENTS` |
-| Archive | `run /project:archive $ARGUMENTS` |
+TaskCreate:
+subject: "Verification"
+description: "Run /project:verify $ARGUMENTS to verify all requirements are met."
+metadata: { project: "<project-name>" }

-
+TaskCreate:
+subject: "Debrief"
+description: "Run /project:debrief $ARGUMENTS to capture learnings."
+metadata: { project: "<project-name>" }
+
+TaskCreate:
+subject: "Archive"
+description: "Run /project:archive $ARGUMENTS to archive the completed project."
+metadata: { project: "<project-name>" }
+```
+
+**Execute each task via a subagent** to preserve main context. Launch up to 6 in parallel where tasks don't have dependencies. Do not stop until all are completed.

 Report "Project complete and archived" when done.
+
+---
+
+## Next Step
+
+The project workflow is now complete. The implementation is done, reviewed, verified, learnings captured, and the project is archived.
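
The six workflow tasks are created up front but must still run in order; `addBlockedBy` on TaskUpdate (referenced in the plan.md hunks below) is the mechanism the commands name for that. A hypothetical sketch of the dependency chain, with made-up task IDs:

```ts
// Hypothetical dependency chain for the six phases above; task IDs are made up,
// and addBlockedBy mirrors the TaskUpdate option referenced in plan.md.
const phases = ['Planning', 'Implementation', 'Review', 'Verification', 'Debrief', 'Archive'];

const blockingUpdates = phases.slice(1).map((subject, i) => ({
  taskId: `task-${subject.toLowerCase()}`,
  addBlockedBy: [`task-${phases[i].toLowerCase()}`], // each phase waits for the previous one
}));

// blockingUpdates[0] → { taskId: 'task-implementation', addBlockedBy: ['task-planning'] }
```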

package/all/copy-overwrite/.claude/commands/project/fix-linter-error.md

@@ -1,6 +1,6 @@
 ---
 description: Fix all violations of one or more ESLint rules across the codebase
-allowed-tools: Read,
+allowed-tools: Read, Bash, Glob, Grep
 argument-hint: <rule-1> [rule-2] [rule-3] ...
 model: sonnet
 ---

@@ -9,79 +9,58 @@ model: sonnet

 Target rules: $ARGUMENTS

-
+If no arguments provided, prompt the user for at least one lint rule name.

-##
+## Step 1: Gather Requirements

-
+1. **Parse rules** from $ARGUMENTS (space-separated)
+2. **Run linter** to collect all violations:
+```bash
+bun run lint 2>&1
+```
+3. **Group violations** by rule, then by file, noting:
+- File path and line numbers
+- Violation count per file
+- Sample error messages

-
+## Step 2: Generate Brief

-
+Compile findings into a detailed brief:

-Split `$ARGUMENTS` into individual rule names (space-separated).
-
-Example inputs:
-- `sonarjs/cognitive-complexity` → 1 rule
-- `sonarjs/cognitive-complexity @typescript-eslint/no-explicit-any` → 2 rules
-- `react-hooks/exhaustive-deps import/order prefer-const` → 3 rules
-
-## Step 2: Enable Rules
-
-For each rule, find the ESLint config and set it to `"error"` severity if not already enabled. NOTE: Make sure to scan for overrides that need to be changed too. For example eslint.config.local.ts.
-
-## Step 3: Identify Violations
-
-Run linting and collect violations for all target rules:
-
-```bash
-bun run lint 2>&1 | grep -E "(rule-1|rule-2|...)"
 ```
+Fix ESLint violations for rules: $ARGUMENTS

-
-1. **Rule name** (primary grouping)
-2. **File path** (secondary grouping)
+## Violations by Rule

-
+### [rule-name-1] (X total violations across Y files)

-
+1. src/services/user.ts (5 violations)
+- Line 23: [error message]
+- Line 45: [error message]
+- Line 67: [error message]
+...
+2. src/utils/helpers.ts (3 violations)
+- Line 12: [error message]
+...

-
+### [rule-name-2] (X total violations across Y files)
+...

-
--
--
+## Fix Strategies
+- **Complexity rules** (sonarjs/*): Extract functions, early returns, simplify conditions
+- **Style rules** (prettier/*, import/order): Apply formatting fixes
+- **Best practice rules** (react-hooks/*, prefer-const): Refactor to recommended pattern
+- **Type rules** (@typescript-eslint/*): Add proper types, remove `any`

-
--
--
-- Fix approach based on rule type:
-- **Complexity rules** (`sonarjs/*`): Extract functions, use early returns, simplify conditions
-- **Style rules** (`prettier/*`, `import/order`): Apply formatting fixes
-- **Best practice rules** (`react-hooks/*`, `prefer-const`): Refactor to follow recommended pattern
-- **Type rules** (`@typescript-eslint/*`): Add proper types, remove `any`
-
-## Step 5: Execute
-
-Process rules sequentially (to avoid conflicts), but parallelize file fixes within each rule:
-
-For each rule:
-1. Launch up to 5 sub-agents to fix files for that rule in parallel
-2. Wait for all files to be fixed
-3. Run `bun run lint` to verify rule is now clean
-4. Commit all fixes for that rule with message: `fix(lint): resolve [rule-name] violations`
-5. Move to next rule
-
-## Step 6: Report
+## Acceptance Criteria
+- `bun run lint` passes with zero violations for: $ARGUMENTS
+- Each rule's fixes committed separately: `fix(lint): resolve [rule-name] violations`

+## Verification
+Command: `bun run lint 2>&1 | grep -E "($ARGUMENTS)" | wc -l`
+Expected: 0
 ```
-Lint rule fix complete:

-
-|------|-------------|---------------------|
-| rule-1 | N1 | M1 |
-| rule-2 | N2 | M2 |
-| ... | ... | ... |
+## Step 3: Bootstrap Project

-
-```
+Run `/project:bootstrap` with the generated brief as a text prompt.
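
The old "Enable Rules" step (set each target rule to `"error"`, including overrides such as `eslint.config.local.ts`) is gone from the command text, but the acceptance criteria still assume the rules are enforced. For reference only, enforcing two of the example rules in a flat config could look roughly like this; the file name and rule choice are illustrative:

```ts
// Illustrative flat-config fragment (e.g. eslint.config.local.ts); the rules
// shown are the examples used in the old command text, not a lisa default.
import tseslint from 'typescript-eslint';

export default tseslint.config({
  rules: {
    'prefer-const': 'error',
    '@typescript-eslint/no-explicit-any': 'error',
  },
});
```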

package/all/copy-overwrite/.claude/commands/project/implement.md

@@ -15,17 +15,17 @@ allowed-tools: Read, Write, Edit, Bash, Glob, Grep, Task, TaskCreate, TaskUpdate

 Use **TaskList** to get current task status.

+**Always execute tasks via subagents** to keep the main context window clean. Launch up to 6 subagents in parallel for unblocked tasks.
+
 For each pending, unblocked task (filter by `metadata.project` = `<project-name>`):

 1. Use **TaskUpdate** to mark it `in_progress`
-2. Use **TaskGet** to retrieve full task details
-3.
-
-
-
-
-5. If verification fails, keep task `in_progress` and report the failure
-6. Check **TaskList** for newly unblocked tasks
+2. Use **TaskGet** to retrieve full task details
+3. Complete the task following the instructions in its description
+4. Run the verification command and confirm expected output
+5. If verification passes, use **TaskUpdate** to mark it `completed`
+6. If verification fails, keep task `in_progress` and report the failure
+7. Check **TaskList** for newly unblocked tasks

 Continue until all tasks are completed.


@@ -35,4 +35,4 @@ Use **TaskList** to generate a summary showing:
 - Total tasks completed
 - Any tasks that failed or remain in progress

-
+After completing this phase, tell the user: "To continue, run `/project:review $ARGUMENTS`"

package/all/copy-overwrite/.claude/commands/project/lower-code-complexity.md

@@ -1,49 +1,61 @@
 ---
 description: Reduces the code complexity of the codebase by 2 on each run
-allowed-tools: Read,
+allowed-tools: Read, Bash, Glob, Grep
 ---

-
+# Lower Code Complexity

-
+Reduces the cognitive complexity threshold by 2 and fixes all violations.

-
+## Step 1: Gather Requirements

-
+1. **Read current threshold** from eslint config (cognitive-complexity rule)
+2. **Calculate new threshold**: current - 2 (e.g., 15 → 13)
+3. **Run lint** with the new threshold to find violations:
+```bash
+bun run lint 2>&1 | grep "cognitive-complexity"
+```
+4. **Note for each violation**:
+- File path and line number
+- Function name
+- Current complexity score

-
-2. Lower the threshold by 2 (e.g., 15 → 13)
+If no violations at new threshold, report success and exit.

-## Step 2:
+## Step 2: Generate Brief

-
+Compile findings into a detailed brief:

-
+```
+Reduce cognitive complexity threshold from [current] to [new].

-##
+## Functions Exceeding Threshold (ordered by complexity)

-
+1. src/services/user.ts:processUser (complexity: 18, target: [new])
+- Line 45, function spans lines 45-120
+2. src/utils/helpers.ts:validateInput (complexity: 15, target: [new])
+- Line 23, function spans lines 23-67
+...

-
-- File
--
-- Refactoring strategies:
-- **Extract functions**: Break complex logic into smaller, named functions
-- **Early returns**: Reduce nesting with guard clauses
-- **Extract conditions**: Move complex boolean logic into named variables
-- **Use lookup tables**: Replace complex switch/if-else chains with object maps
+## Configuration Change
+- File: [eslint config path]
+- Change: cognitive-complexity threshold from [current] to [new]

-##
+## Refactoring Strategies
+- **Extract functions**: Break complex logic into smaller, named functions
+- **Early returns**: Reduce nesting with guard clauses
+- **Extract conditions**: Move complex boolean logic into named variables
+- **Use lookup tables**: Replace complex switch/if-else chains with object maps

-
+## Acceptance Criteria
+- All functions at or below complexity [new]
+- `bun run lint` passes with no cognitive-complexity violations

-
+## Verification
+Command: `bun run lint 2>&1 | grep "cognitive-complexity" | wc -l`
+Expected: 0
+```

-## Step
+## Step 3: Bootstrap Project

-
-Code complexity reduction complete:
-- Previous threshold: [X]
-- New threshold: [Y]
-- Functions simplified: [list]
-```
+Run `/project:bootstrap` with the generated brief as a text prompt.
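
The "Configuration Change" section boils down to a single option on the `sonarjs/cognitive-complexity` rule. Assuming `eslint-plugin-sonarjs` and a flat config, the 15 → 13 example from the command maps to something like the sketch below; the value is taken from the command's example, not from this package's actual config.

```ts
// Sketch only — lowers the cognitive-complexity threshold by 2, as the command describes.
import sonarjs from 'eslint-plugin-sonarjs';

export default [
  {
    plugins: { sonarjs },
    rules: {
      // previously ['error', 15]; lowered by 2 per /project:lower-code-complexity
      'sonarjs/cognitive-complexity': ['error', 13],
    },
  },
];
```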

package/all/copy-overwrite/.claude/commands/project/plan.md

@@ -109,7 +109,16 @@ For each task, use **TaskCreate** with:
 [Or "N/A - no user-facing changes"]

 ## Verification
-**Type:** `ui-recording` | `test-coverage` | `api-test` | `manual-check` | `documentation`
+**Type:** `test` | `ui-recording` | `test-coverage` | `api-test` | `manual-check` | `documentation`
+
+| Type | When to Use | Example |
+|------|-------------|---------|
+| `test` | Run specific tests | `bun run test -- src/services/user.spec.ts` |
+| `ui-recording` | UI/UX changes | `bun run playwright:test ...` |
+| `test-coverage` | Coverage threshold | `bun run test:cov -- --collectCoverageFrom='...'` |
+| `api-test` | API endpoints | `./scripts/verify/<task-name>.sh` |
+| `documentation` | Docs, README | `cat path/to/doc.md` |
+| `manual-check` | Config, setup | Command showing config exists |

 **Proof Command:**
 ```bash

@@ -118,6 +127,21 @@ For each task, use **TaskCreate** with:

 **Expected Output:**
 [What success looks like]
+
+## Learnings
+During implementation, collect any discoveries valuable for future developers:
+- Gotchas or unexpected behavior encountered
+- Edge cases that weren't obvious from requirements
+- Better approaches discovered during implementation
+- Patterns that should be reused or avoided
+- Documentation gaps or misleading information found
+
+**On task completion**, use `TaskUpdate` to save learnings:
+```
+TaskUpdate:
+taskId: "<this-task-id>"
+metadata: { learnings: ["Learning 1", "Learning 2", ...] }
+```
 ```

 **metadata**:

@@ -127,7 +151,7 @@ For each task, use **TaskCreate** with:
 "type": "bug|task|epic|story",
 "skills": ["/coding-philosophy", ...],
 "verification": {
-"type": "test
+"type": "test|ui-recording|test-coverage|api-test|manual-check|documentation",
 "command": "the proof command",
 "expected": "what success looks like"
 }
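
Pulling the pieces of the metadata block together, the per-task shape plan.md now expects is roughly the following. The TypeScript type is illustrative (the tasks themselves are JSON), and `project` / `learnings` come from the other command files rather than this hunk.

```ts
// Illustrative type for the per-task metadata assembled across the diff;
// "project" (used for filtering in implement.md) and "learnings" (written via
// TaskUpdate) are referenced elsewhere rather than shown in this hunk.
type VerificationType =
  | 'test' | 'ui-recording' | 'test-coverage'
  | 'api-test' | 'manual-check' | 'documentation';

interface TaskMetadata {
  project?: string;                 // active project name, e.g. "2026-01-26-my-feature"
  type: 'bug' | 'task' | 'epic' | 'story';
  skills: string[];                 // e.g. ["/coding-philosophy"]
  verification: {
    type: VerificationType;
    command: string;                // the proof command
    expected: string;               // what success looks like
  };
  learnings?: string[];             // appended on completion via TaskUpdate
}
```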

@@ -138,16 +162,6 @@ For each task, use **TaskCreate** with:

 After creating all tasks, use **TaskUpdate** with `addBlockedBy` to establish task order where needed.

-**Verification Type Reference:**
-
-| Type | When to Use | Example |
-|------|-------------|---------|
-| `ui-recording` | UI/UX changes | `bun run playwright:test ...` |
-| `test-coverage` | New code with tests | `bun run test:cov -- --collectCoverageFrom='...'` |
-| `api-test` | New API endpoints | `./scripts/verify/<task-name>.sh` |
-| `documentation` | Docs, README | `cat path/to/doc.md` |
-| `manual-check` | Config, setup | Command showing config exists |
-
 ## Step 5: Report

 Report: "Planning complete - X tasks created for project <project-name>"

@@ -157,3 +171,9 @@ Use **TaskList** to show the created tasks.
 ---

 **IMPORTANT**: Each task description should contain all necessary information from `brief.md` and `research.md` to complete in isolation. Tasks should be independent and as small in scope as possible.
+
+---
+
+## Next Step
+
+After completing this phase, tell the user: "To continue, run `/project:implement $ARGUMENTS`"