@codyswann/lisa 1.0.5 → 1.0.9

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (25)
  1. package/all/copy-overwrite/.claude/commands/project/add-test-coverage.md +34 -35
  2. package/all/copy-overwrite/.claude/commands/project/archive.md +2 -1
  3. package/all/copy-overwrite/.claude/commands/project/bootstrap.md +26 -31
  4. package/all/copy-overwrite/.claude/commands/project/debrief.md +30 -35
  5. package/all/copy-overwrite/.claude/commands/project/execute.md +37 -19
  6. package/all/copy-overwrite/.claude/commands/project/fix-linter-error.md +40 -61
  7. package/all/copy-overwrite/.claude/commands/project/implement.md +9 -9
  8. package/all/copy-overwrite/.claude/commands/project/lower-code-complexity.md +42 -30
  9. package/all/copy-overwrite/.claude/commands/project/plan.md +32 -12
  10. package/all/copy-overwrite/.claude/commands/project/reduce-max-lines-per-function.md +35 -46
  11. package/all/copy-overwrite/.claude/commands/project/reduce-max-lines.md +33 -46
  12. package/all/copy-overwrite/.claude/commands/project/research.md +25 -0
  13. package/all/copy-overwrite/.claude/commands/project/review.md +30 -20
  14. package/all/copy-overwrite/.claude/commands/project/setup.md +51 -15
  15. package/all/copy-overwrite/.claude/commands/project/verify.md +26 -54
  16. package/all/copy-overwrite/.claude/commands/pull-request/review.md +62 -20
  17. package/all/copy-overwrite/.claude/settings.json +1 -1
  18. package/all/copy-overwrite/HUMAN.md +0 -12
  19. package/cdk/copy-overwrite/.github/workflows/deploy.yml +1 -1
  20. package/expo/copy-overwrite/.github/workflows/lighthouse.yml +17 -0
  21. package/expo/copy-overwrite/knip.json +2 -0
  22. package/nestjs/copy-overwrite/.github/workflows/deploy.yml +1 -1
  23. package/package.json +4 -1
  24. package/typescript/copy-overwrite/.github/workflows/quality.yml +46 -0
  25. package/all/copy-overwrite/.claude/commands/project/complete-task.md +0 -59
@@ -1,6 +1,6 @@
  ---
  description: Reduce max lines per function threshold and fix violations
- allowed-tools: Read, Write, Edit, Bash, Glob, Grep, Task, TaskCreate, TaskUpdate, TaskList, TaskGet
+ allowed-tools: Read, Bash, Glob, Grep
  argument-hint: <max-lines-per-function-value>
  model: sonnet
  ---
@@ -11,66 +11,55 @@ Target threshold: $ARGUMENTS lines per function
 
  If no argument provided, prompt the user for a target.
 
- ## Process
+ ## Step 1: Gather Requirements
 
- ### Step 0: Check Project Context
+ 1. **Read current config** from eslint thresholds (eslint.thresholds.json or similar)
+ 2. **Run lint** with the new threshold to find violations:
+ ```bash
+ bun run lint 2>&1 | grep "max-lines-per-function"
+ ```
+ 3. **Note for each violation**:
+ - File path and line number
+ - Function name
+ - Current line count
 
- Check if there's an active project for task syncing:
+ If no violations at $ARGUMENTS, report success and exit.
 
- ```bash
- cat .claude-active-project 2>/dev/null
- ```
-
- If a project is active, include `metadata: { "project": "<project-name>" }` in all TaskCreate calls.
-
- ### Step 1: Locate Configuration
-
- Read the eslint thresholds config (`eslint.thresholds.json` or similar).
+ ## Step 2: Generate Brief
 
- ### Step 2: Update Threshold
+ Compile findings into a detailed brief:
 
- Set the `maxLinesPerFunction` threshold to $ARGUMENTS (e.g., `"maxLinesPerFunction": $ARGUMENTS`).
-
- ### Step 3: Identify Violations
-
- Run lint to find all functions exceeding the new threshold. Note file path, function name, and current line count.
-
- If no violations, report success and exit.
+ ```
+ Reduce max lines per function threshold to $ARGUMENTS.
 
- ### Step 4: Create Task List
+ ## Functions Exceeding Threshold (ordered by line count)
 
- Create a task for each function needing refactoring, ordered by line count (highest first).
+ 1. src/services/user.ts:processUser (95 lines, target: $ARGUMENTS)
+ - Line 45, function spans lines 45-140
+ 2. src/utils/helpers.ts:validateInput (82 lines, target: $ARGUMENTS)
+ - Line 23, function spans lines 23-105
+ ...
 
- Each task should have:
- - **subject**: "Reduce lines in [function-name]" (imperative form)
- - **description**: File path and line number, function name, current line count, target threshold, refactoring strategies
- - **activeForm**: "Reducing lines in [function-name]" (present continuous)
- - **metadata**: `{ "project": "<active-project>" }` if project context exists
+ ## Configuration Change
+ - File: eslint.thresholds.json
+ - Change: maxLinesPerFunction to $ARGUMENTS
 
- Refactoring strategies:
+ ## Refactoring Strategies
  - **Extract functions**: Break function into smaller named functions
  - **Early returns**: Reduce nesting with guard clauses
  - **Extract conditions**: Move complex boolean logic into named variables
  - **Use lookup tables**: Replace complex switch/if-else chains with object maps
  - **Consolidate logic**: Merge similar code paths
 
- ### Step 5: Parallel Execution
+ ## Acceptance Criteria
+ - All functions at or below $ARGUMENTS lines
+ - `bun run lint` passes with no max-lines-per-function violations
 
- Launch **up to 5 sub-agents** using the `code-simplifier` subagent to refactor in parallel.
-
- ### Step 6: Iterate
-
- Check for remaining pending tasks. Re-run lint to verify.
-
- If violations remain, repeat from Step 3.
-
- Continue until all functions meet or are under $ARGUMENTS lines.
+ ## Verification
+ Command: `bun run lint 2>&1 | grep "max-lines-per-function" | wc -l`
+ Expected: 0
+ ```
 
- ### Step 7: Report
+ ## Step 3: Bootstrap Project
 
- ```
- Max lines per function reduction complete:
- - Target threshold: $ARGUMENTS
- - Functions refactored: [count]
- - Functions reduced: [list with line counts]
- ```
+ Run `/project:bootstrap` with the generated brief as a text prompt.
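
The brief above hands the actual work to a later agent: change `eslint.thresholds.json`, then verify with the lint pipeline. A minimal sketch of what that eventually looks like, assuming the thresholds file is flat JSON with a top-level `maxLinesPerFunction` key (the file's real layout is not shown in this diff):

```bash
# Hypothetical sketch only; the real eslint.thresholds.json shape may differ.
TARGET=60   # stand-in for $ARGUMENTS

# Configuration change described in the brief: lower maxLinesPerFunction.
jq --argjson t "$TARGET" '.maxLinesPerFunction = $t' eslint.thresholds.json > tmp.json \
  && mv tmp.json eslint.thresholds.json

# Verification command from the brief: 0 remaining violations means the
# acceptance criteria are met.
bun run lint 2>&1 | grep "max-lines-per-function" | wc -l
```
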
@@ -1,6 +1,6 @@
  ---
  description: Reduce max file lines threshold and fix violations
- allowed-tools: Read, Write, Edit, Bash, Glob, Grep, Task, TaskCreate, TaskUpdate, TaskList, TaskGet
+ allowed-tools: Read, Bash, Glob, Grep
  argument-hint: <max-lines-value>
  model: sonnet
  ---
@@ -11,65 +11,52 @@ Target threshold: $ARGUMENTS lines per file
 
  If no argument provided, prompt the user for a target.
 
- ## Process
+ ## Step 1: Gather Requirements
 
- ### Step 0: Check Project Context
+ 1. **Read current config** from eslint thresholds (eslint.thresholds.json or similar)
+ 2. **Run lint** with the new threshold to find violations:
+ ```bash
+ bun run lint 2>&1 | grep "max-lines"
+ ```
+ 3. **Note for each violation**:
+ - File path
+ - Current line count
 
- Check if there's an active project for task syncing:
+ If no violations at $ARGUMENTS, report success and exit.
 
- ```bash
- cat .claude-active-project 2>/dev/null
- ```
-
- If a project is active, include `metadata: { "project": "<project-name>" }` in all TaskCreate calls.
-
- ### Step 1: Locate Configuration
-
- Read the eslint thresholds config (`eslint.thresholds.json` or similar).
+ ## Step 2: Generate Brief
 
- ### Step 2: Update Threshold
+ Compile findings into a detailed brief:
 
- Set the `maxLines` threshold to $ARGUMENTS (e.g., `"maxLines": $ARGUMENTS`).
-
- ### Step 3: Identify Violations
-
- Run lint to find all files exceeding the new threshold. Note file path and current line count.
-
- If no violations, report success and exit.
+ ```
+ Reduce max file lines threshold to $ARGUMENTS.
 
- ### Step 4: Create Task List
+ ## Files Exceeding Threshold (ordered by line count)
 
- Create a task for each file needing refactoring, ordered by line count (highest first).
+ 1. src/services/user.ts (450 lines, target: $ARGUMENTS)
+ 2. src/utils/helpers.ts (380 lines, target: $ARGUMENTS)
+ 3. src/components/Dashboard.tsx (320 lines, target: $ARGUMENTS)
+ ...
 
- Each task should have:
- - **subject**: "Reduce lines in [file]" (imperative form)
- - **description**: File path, current line count, target threshold, refactoring strategies
- - **activeForm**: "Reducing lines in [file]" (present continuous)
- - **metadata**: `{ "project": "<active-project>" }` if project context exists
+ ## Configuration Change
+ - File: eslint.thresholds.json
+ - Change: maxLines to $ARGUMENTS
 
- Refactoring strategies:
+ ## Refactoring Strategies
  - **Extract modules**: Break file into smaller focused modules
  - **Remove duplication**: Consolidate repeated logic
  - **Delete dead code**: Remove unused functions/code paths
  - **Simplify logic**: Use early returns, reduce nesting
 
- ### Step 5: Parallel Execution
+ ## Acceptance Criteria
+ - All files at or below $ARGUMENTS lines
+ - `bun run lint` passes with no max-lines violations
 
- Launch **up to 5 sub-agents** using the `code-simplifier` subagent to refactor in parallel.
-
- ### Step 6: Iterate
-
- Check for remaining pending tasks. Re-run lint to verify.
-
- If violations remain, repeat from Step 3.
-
- Continue until all files meet or are under $ARGUMENTS lines.
+ ## Verification
+ Command: `bun run lint 2>&1 | grep "max-lines" | wc -l`
+ Expected: 0
+ ```
 
- ### Step 7: Report
+ ## Step 3: Bootstrap Project
 
- ```
- Max lines reduction complete:
- - Target threshold: $ARGUMENTS
- - Files refactored: [count]
- - Files reduced: [list with line counts]
- ```
+ Run `/project:bootstrap` with the generated brief as a text prompt.
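
The same pattern applies at file level. To seed the brief's "Files Exceeding Threshold" list, line counts can come straight from the shell; a rough sketch, where the `src/` path and extensions are assumptions about the target project rather than anything this package prescribes:

```bash
# Largest source files first, to order the brief's violation list.
find src -type f \( -name '*.ts' -o -name '*.tsx' \) -exec wc -l {} + \
  | grep -v ' total$' | sort -rn | head -20
```
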
@@ -103,6 +103,15 @@ last_updated: [YYYY-MM-DD]
  ## Code References
  - `path/to/file.py:123` - Description
 
+ ## Reusable Code
+ ### Existing Functions/Modules
+ - `path/to/utils.ts:45` - `functionName()` - description of what it does and how it can be reused
+ - `path/to/service.ts:120` - `ClassName` - description of reusable functionality
+
+ ### Existing Patterns to Follow
+ - Similar feature implemented in `path/to/feature/` - follow same structure
+ - Existing implementation of X in `path/to/file.ts` - can be extended/adapted
+
  ## Architecture Documentation
  [Patterns, conventions, design implementations]
 
@@ -119,6 +128,15 @@ last_updated: [YYYY-MM-DD]
  ### E2E Test Patterns
  [Similar structure]
 
+ ## Impacted Tests
+ ### Tests Requiring Modification
+ - `tests/example.spec.ts` - tests X functionality, will need updates for Y
+ - `e2e/feature.spec.ts` - may need new assertions for Z
+
+ ### Test Gaps
+ - No existing tests for X functionality - will need new test file
+ - Missing edge case coverage for Y scenario
+
  ## Documentation Patterns
  ### JSDoc Conventions
  ### Database Comments (Backend)
@@ -130,6 +148,7 @@ last_updated: [YYYY-MM-DD]
  **Question**: [Full question]
  **Context**: [Why this arose]
  **Impact**: [What it affects]
+ **Recommendation**: [Researcher's best recommendation based on findings]
  **Answer**: _[Human fills before /project:plan]_
  ```
 
@@ -144,3 +163,9 @@ Run `/git:commit`
  - Focus on finding concrete file paths and line numbers
  - Each sub-agent prompt should be specific and focused on read-only documentation
  - **REMEMBER**: Document what IS, not what SHOULD BE
+
+ ---
+
+ ## Next Step
+
+ After completing this phase, tell the user: "To continue, run `/project:plan $ARGUMENTS`"
@@ -10,35 +10,45 @@ The current branch is a feature branch with full implementation of the project i
 
  ## Setup
 
- Create workflow tracking tasks with `metadata: { "project": "<project-name>", "phase": "review" }`:
+ Set active project marker: `echo "$ARGUMENTS" | sed 's|.*/||' > .claude-active-project`
 
- 1. Perform Claude Review
- 2. Implement Claude Review Fixes
- 3. Perform CodeRabbit Review
- 4. Implement CodeRabbit Review Fixes
- 5. Perform Claude Optimizations
+ Extract `<project-name>` from the last segment of `$ARGUMENTS`.
 
- ## Step 1: Perform Claude Review
+ ## Create and Execute Tasks
 
- If `$ARGUMENTS/claude-review.md` already exists, skip to Step 2.
+ Create workflow tracking tasks with `metadata.project` set to the project name:
 
- Otherwise, run `/project:local-code-review $ARGUMENTS`
+ ```
+ TaskCreate:
+ subject: "Perform Claude review"
+ description: "If $ARGUMENTS/claude-review.md already exists, skip this task. Otherwise, run /project:local-code-review $ARGUMENTS"
+ metadata: { project: "<project-name>" }
 
- ## Step 2: Implement Claude Review Fixes
+ TaskCreate:
+ subject: "Implement Claude review fixes"
+ description: "Read $ARGUMENTS/claude-review.md and fix any suggestions that score above 45."
+ metadata: { project: "<project-name>" }
 
- 1. Read `$ARGUMENTS/claude-review.md`
- 2. Fix any suggestions that score above 45
+ TaskCreate:
+ subject: "Perform CodeRabbit review"
+ description: "If $ARGUMENTS/coderabbit-review.md already exists, skip this task. Otherwise, run `coderabbit review --plain || true` and write results to $ARGUMENTS/coderabbit-review.md"
+ metadata: { project: "<project-name>" }
 
- ## Step 3: Perform CodeRabbit Review
+ TaskCreate:
+ subject: "Implement CodeRabbit review fixes"
+ description: "Evaluate suggestions in $ARGUMENTS/coderabbit-review.md and implement fixes for valid findings."
+ metadata: { project: "<project-name>" }
 
- If `$ARGUMENTS/coderabbit-review.md` already exists, skip to Step 4.
+ TaskCreate:
+ subject: "Perform Claude optimizations"
+ description: "Use the code simplifier agent to clean up code added to the current branch."
+ metadata: { project: "<project-name>" }
+ ```
 
- Otherwise, use Task tool with prompt: "Run `coderabbit review --plain || true` and write results to $ARGUMENTS/coderabbit-review.md"
+ **Execute each task via a subagent** to preserve main context. Launch up to 6 in parallel where tasks don't have dependencies. Do not stop until all are completed.
 
- ## Step 4: Implement CodeRabbit Review Fixes
-
- Evaluate suggestions in `$ARGUMENTS/coderabbit-review.md` and implement fixes for valid findings.
+ ---
 
- ## Step 5: Perform Claude Optimizations
+ ## Next Step
 
- Use the code simplifier agent to clean up code added to the current branch.
+ After completing this phase, tell the user: "To continue, run `/project:verify $ARGUMENTS`"
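
The one-line Setup step replaces the removed task-creation boilerplate: the `sed` expression strips everything through the last `/`, leaving only the project directory name as the marker. For example (the path is illustrative):

```bash
echo "projects/2026-01-26-add-auth" | sed 's|.*/||' > .claude-active-project
cat .claude-active-project
# 2026-01-26-add-auth
```
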
@@ -1,19 +1,55 @@
  ---
- description: Initialize a comprehensive NestJS backend project with full requirements analysis, planning, and structure setup
- argument-hint: <project-brief-file-or-jira-issue-number>
+ description: Initialize a project with full requirements analysis, planning, and structure setup
+ argument-hint: <file-path|jira-issue|text-description>
  allowed-tools: Read, Write, Bash(git*), Glob, Grep, Task, TaskCreate, TaskUpdate, TaskList
  ---
 
- 1. Decide if $ARGUMENTS is a Jira issue number or a path to a file
- 2. If $ARGUMENTS is a Jira issue number
- 1. Use the atlassian MCP server to Read the issue FULLY. If the MCP server is not working, STOP WORKING AND LET THE HUMAN KNOW
- 2. Otherwise: $ARGUMENTS is a brief for a project. Read it FULLY (no limit/offset)
- 3. Create a project directory in projects/ that is appropriately named for the project and prefixed with today's date like `YYYY-MM-DD-<project-name>`
- 4. If $ARGUMENTS is a Jira issue number
- 1. Create a file called `brief.md` in the newly created project directory and populate it with the jira issue number, title and description
- 2. Otherwise: Move $ARGUMENTS into the newly created project directory and rename it `brief.md`
- 5. Create an empty file in the new project directory called `findings.md`
- 6. If $ARGUMENTS is a Jira issue number
- 1. run /git:commit and add the jira issue number to the newly created branch (e.g. feature/SE-111-<branch-name>)
- 2. OTHERWISE: run /git:commit
- 7. If $ARGUMENTS is a Jira issue number Use the atlassian MCP server to update the issue with status "In Progress"
+ ## Step 1: Determine Input Type
+
+ Examine $ARGUMENTS to determine which type:
+ - **Jira issue**: Matches pattern like `SE-123`, `PROJ-456` (letters-dash-numbers)
+ - **File path**: Path exists as a file (check with Glob or Read)
+ - **Text prompt**: Everything else - a description of work to be done
+
+ ## Step 2: Get Brief Content
+
+ Based on input type:
+
+ **If Jira issue:**
+ 1. Use the atlassian MCP server to Read the issue FULLY
+ 2. If MCP server not working, STOP and let human know
+
+ **If file path:**
+ 1. Read the file FULLY (no limit/offset)
+
+ **If text prompt:**
+ 1. The prompt IS the brief content - use $ARGUMENTS directly
+
+ ## Step 3: Create Project Structure
+
+ 1. Create project directory in `projects/` named `YYYY-MM-DD-<project-name>` where `<project-name>` is derived from:
+ - Jira: the issue key and title (e.g., `2026-01-26-se-123-add-auth`)
+ - File: the filename without extension
+ - Text prompt: a kebab-case summary of the prompt (e.g., `2026-01-26-add-user-authentication`)
+
+ 2. Create `brief.md` in the project directory:
+ - Jira: populate with issue number, title, and description
+ - File: move/copy the file and rename to `brief.md`
+ - Text prompt: create `brief.md` with the prompt text as content
+
+ 3. Create empty `findings.md` in the project directory
+
+ 4. Create `.claude-active-project` marker file:
+ ```bash
+ echo "YYYY-MM-DD-<project-name>" > .claude-active-project
+ ```
+
+ ## Step 4: Git and Finalize
+
+ 1. If Jira issue:
+ - run /git:commit with branch name including issue number (e.g., `feature/SE-111-<branch-name>`)
+ - Use atlassian MCP server to update issue status to "In Progress"
+ 2. Otherwise:
+ - run /git:commit
+
+ 3. Output: "Project created: `YYYY-MM-DD-<project-name>`"
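
Step 1's classification is easy to picture in shell terms; a minimal sketch of the decision order (the regex and variable names are illustrative, not part of the command file itself):

```bash
ARG="$1"
if [[ "$ARG" =~ ^[A-Za-z]+-[0-9]+$ ]]; then
  echo "jira-issue"     # e.g. SE-123, PROJ-456
elif [[ -f "$ARG" ]]; then
  echo "file-path"      # the brief lives in an existing file
else
  echo "text-prompt"    # the argument itself is the brief
fi
```
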
@@ -8,68 +8,40 @@ The current branch is a feature branch with full implementation of the project i
 
  ## Setup
 
- Create workflow tracking tasks with `metadata: { "project": "<project-name>", "phase": "verify" }`:
+ Set active project marker: `echo "$ARGUMENTS" | sed 's|.*/||' > .claude-active-project`
 
- 1. Review Requirements
- 2. Verify Implementation
- 3. Run Task Verification Commands
- 4. Document Drift
+ Extract `<project-name>` from the last segment of `$ARGUMENTS`.
 
- ## Step 1: Review Requirements
+ ## Create and Execute Tasks
 
- Read all requirements for $ARGUMENTS.
+ Create workflow tracking tasks with `metadata.project` set to the project name:
 
- ## Step 2: Verify Implementation
+ ```
+ TaskCreate:
+ subject: "Review requirements"
+ description: "Read all requirements for $ARGUMENTS (brief.md, research.md, task files)."
+ metadata: { project: "<project-name>" }
 
- Verify the implementation completely and fully satisfies all requirements from Step 1.
+ TaskCreate:
+ subject: "Verify implementation"
+ description: "Verify the implementation completely and fully satisfies all requirements from the brief and research."
+ metadata: { project: "<project-name>" }
 
- ## Step 3: Run Task Verification Commands
+ TaskCreate:
+ subject: "Run task verification commands"
+ description: "Read all task files in $ARGUMENTS/tasks/. For each task with verification metadata (JSON: metadata.verification, or Markdown: ## Verification section), create a verification task with subject 'Verify: <original-subject>' and metadata including originalTaskId and verification details. Then execute each verification task: run the command, compare output to expected. If pass, mark completed. If fail, keep in_progress and document failure in $ARGUMENTS/drift.md. Report summary: total tasks, passed, failed, blocked."
+ metadata: { project: "<project-name>" }
 
- ### 3a: Create Verification Tasks
+ TaskCreate:
+ subject: "Document drift"
+ description: "If there is any divergence from requirements or verification failures, ensure all drift is documented in $ARGUMENTS/drift.md."
+ metadata: { project: "<project-name>" }
+ ```
 
- First, read all task files in `$ARGUMENTS/tasks/` and create a new task for each one that has verification metadata:
+ **Execute each task via a subagent** to preserve main context. Launch up to 6 in parallel where tasks don't have dependencies. Do not stop until all are completed.
 
- For each task file:
- 1. **Read the task file** (JSON or markdown)
- 2. **Check for verification metadata**:
- - JSON tasks: Look for `metadata.verification`
- - Markdown tasks: Look for `## Verification` section with `### Proof Command`
- 3. **If verification exists**, create a new task:
- ```
- subject: "Verify: <original-task-subject>"
- description: "Run verification for task <id>: <verification.command>"
- activeForm: "Verifying <original-task-subject>"
- metadata: {
- "project": "<project-name>",
- "phase": "verify",
- "originalTaskId": "<id>",
- "verification": <copy the verification object>
- }
- ```
-
- ### 3b: Execute Verification Tasks
-
- Work through each verification task:
-
- 1. **Run verification command** using Bash tool:
- - JSON: Execute `metadata.verification.command`
- - Markdown: Execute the command in `### Proof Command` code block
- 2. **Compare output to expected**:
- - JSON: Compare to `metadata.verification.expected`
- - Markdown: Compare to `### Expected Output` section
- 3. **Record results and mark task**:
- - If verification passes → Mark task completed
- - If verification fails → Keep task in_progress, document failure in drift.md
- - If command cannot run → Keep task in_progress, document blocker in drift.md
-
- ### Verification Summary
-
- After running all verification tasks, report:
- - Total tasks with verification: X
- - Passed: Y
- - Failed: Z
- - Blocked: W
+ ---
 
- ## Step 4: Document Drift
+ ## Next Step
 
- If there is any divergence from the requirements or verification failures, document the drift to `$ARGUMENTS/drift.md`.
+ After completing this phase, tell the user: "To continue, run `/project:debrief $ARGUMENTS`"
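
For the "Run task verification commands" task, the JSON branch boils down to reading `metadata.verification` and comparing output; a sketch under the assumption that each task file is JSON with `command` and `expected` fields (the Markdown branch, with its `### Proof Command` and `### Expected Output` sections, follows the same pattern):

```bash
# Hypothetical single-task check; field names follow the metadata.verification
# shape referenced above, but the exact schema is an assumption.
TASK_FILE="$1"
CMD=$(jq -r '.metadata.verification.command' "$TASK_FILE")
EXPECTED=$(jq -r '.metadata.verification.expected' "$TASK_FILE")
ACTUAL=$(bash -c "$CMD" 2>&1)

if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "PASS: mark task completed"
else
  echo "FAIL: keep in_progress and record the mismatch in drift.md"
fi
```
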
@@ -1,30 +1,72 @@
  ---
  description: Checks for code review comments on a PR and implements them if required.
  argument-hint: <github-pr-link>
- allowed-tools: Read, Write, Edit, Bash(git*), Glob, Grep, Task, TaskCreate, TaskUpdate, TaskList, TaskGet, Bash(gh*)
+ allowed-tools: Read, Bash, Glob, Grep
  ---
 
- Use the GitHub CLI to fetch all review comments on $ARGUMENTS:
+ # Review PR Comments
 
- ```bash
- gh pr view $ARGUMENTS --json reviews,comments
- gh api repos/{owner}/{repo}/pulls/{pr_number}/comments
- ```
+ Target PR: $ARGUMENTS
+
+ If no argument provided, prompt the user for a PR link or number.
+
+ ## Step 1: Gather Requirements
+
+ 1. **Fetch PR metadata and comments** using the GitHub CLI:
+ ```bash
+ gh pr view $ARGUMENTS --json number,title,body,reviews,comments
+ gh api repos/{owner}/{repo}/pulls/{pr_number}/comments
+ ```
+ 2. **Extract each unresolved review comment**:
+ - Comment ID
+ - File path
+ - Line number
+ - Comment body
+ - Author
+
+ If no unresolved comments exist, report success and exit.
+
+ ## Step 2: Generate Brief
+
+ Compile findings into a detailed brief:
+
+ ```markdown
+ Implement PR review feedback for $ARGUMENTS.
 
- Extract each unresolved review comment as a separate item. Note the comment ID, file path, line number, and comment body.
+ ## PR Overview
+ - Title: [PR title]
+ - Description: [PR description summary]
 
- Create a task for each unresolved comment with `metadata: { "pr": "$ARGUMENTS" }`:
- - **subject**: Brief description of the requested change
- - **description**: Full comment body, file path, line number, comment ID, and these instructions:
- 1. Evaluate if the requested change is valid
- 2. If not valid, reply explaining why and mark resolved, skip remaining steps
- 3. If valid, make appropriate code updates
- 4. Ensure changes follow project coding standards
- 5. Run relevant tests to verify changes work
- 6. Run `/git:commit` to commit changes
- 7. If hooks fail, fix errors and re-run `/git:commit`
- - **activeForm**: "Implementing PR feedback for [file]"
+ ## Review Comments to Address (ordered by file)
+
+ ### 1. [file_path]:[line_number] (Comment ID: [id])
+ **Reviewer**: [author]
+ **Comment**: [full comment body]
+ **Action Required**: [brief description of what needs to change]
+
+ ### 2. [file_path]:[line_number] (Comment ID: [id])
+ **Reviewer**: [author]
+ **Comment**: [full comment body]
+ **Action Required**: [brief description of what needs to change]
+
+ ...
+
+ ## Implementation Guidelines
+ - Evaluate each comment for validity before implementing
+ - If a comment is not valid, document the reasoning
+ - Ensure changes follow project coding standards
+ - Run relevant tests to verify changes work
+
+ ## Acceptance Criteria
+ - All valid review comments addressed
+ - Tests pass after changes
+ - `bun run lint` passes
+
+ ## Verification
+ Command: `bun run lint && bun run test`
+ Expected: All checks pass
+ ```
 
- Launch up to 6 subagents to work through the task list in parallel.
+ ## Step 3: Bootstrap Project
 
- When all tasks are completed, run `/git:commit-and-submit-pr`.
+ Run `/project:bootstrap` with the generated brief as a text prompt.
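
The two `gh` calls in Step 1 return the raw data the brief template needs. A sketch of flattening the review comments into one line per finding (`OWNER`, `REPO`, and `PR` are placeholders; note that thread resolution state is not part of this REST endpoint, so filtering to truly unresolved comments would still need the GraphQL `reviewThreads` API):

```bash
gh api "repos/$OWNER/$REPO/pulls/$PR/comments" \
  --jq '.[] | "\(.path):\(.line // .original_line) [\(.id)] \(.user.login): \(.body)"'
```
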
@@ -59,4 +59,4 @@
  "BASH_MAX_TIMEOUT_MS": "7200000"
  },
  "includeCoAuthoredBy": true
- }
+ }
@@ -75,7 +75,6 @@ Commands are organized by category. Sub-commands (commands that are called by ot
  | `/project:verify` | Verify implementation matches all project requirements | `<project-directory>` (required) | `/project:execute` |
  | `/project:debrief` | Evaluate findings and create skills or rules from learnings | `<project-directory>` (required) | `/project:execute` |
  | `/project:archive` | Move completed project to projects/archive | `<project-directory>` (required) | `/project:execute` |
- | `/project:complete-task` | Complete a single task using a subagent with fresh context | `<task-file>` (required) | `/project:implement` |
  | `/project:local-code-review` | Code review local changes on current branch | `<project-directory>` (required) | `/project:review` |
  | `/project:fix-linter-error` | Fix all violations of a specific ESLint rule | `<eslint-rule-name>` (required) | - |
  | `/project:lower-code-complexity` | Reduce code complexity threshold by 2 and fix violations | none | - |
@@ -298,16 +297,6 @@ Moves the completed project to `projects/archive` and submits a PR.
 
  ---
 
- ### `/project:complete-task`
-
- **Arguments:** `<task-file>` (required)
-
- Completes a single task within a project using a subagent with fresh context. Must execute verification commands before marking complete. If verification requires Docker/external services and they're unavailable, marks task as blocked.
-
- **Called by:** `/project:implement`
-
- ---
-
  ### `/project:fix-linter-error`
 
  **Arguments:** `<eslint-rule-name>` (required)
@@ -495,7 +484,6 @@ Run FROM a project with Lisa applied. Compares the project's Lisa-managed files
  /project:execute
  ├── /project:plan
  ├── /project:implement
- │ └── /project:complete-task
  ├── /project:review
  │ └── /project:local-code-review
  ├── /project:verify
@@ -47,7 +47,7 @@ jobs:
  needs: [determine_environment]
  with:
  environment: ${{ needs.determine_environment.outputs.environment }}
- release_strategy: 'semantic'
+ release_strategy: 'standard-version'
  skip_jobs: 'test:e2e,test:integration'
  require_approval: false
  require_signatures: false