@codyswann/lisa 1.67.3 → 1.69.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (143)
  1. package/README.md +44 -49
  2. package/all/copy-overwrite/.claude/rules/base-rules.md +0 -50
  3. package/all/copy-overwrite/.claude/rules/intent-routing.md +126 -0
  4. package/all/copy-overwrite/.claude/rules/security-audit-handling.md +17 -0
  5. package/all/copy-overwrite/.claude/rules/verification.md +27 -538
  6. package/package.json +1 -1
  7. package/plugins/lisa/.claude-plugin/plugin.json +1 -1
  8. package/plugins/lisa/agents/architecture-specialist.md +4 -9
  9. package/plugins/lisa/agents/bug-fixer.md +40 -0
  10. package/plugins/lisa/agents/builder.md +41 -0
  11. package/plugins/lisa/agents/debug-specialist.md +4 -93
  12. package/plugins/lisa/agents/jira-agent.md +103 -0
  13. package/plugins/lisa/agents/performance-specialist.md +2 -11
  14. package/plugins/lisa/agents/product-specialist.md +2 -10
  15. package/plugins/lisa/agents/quality-specialist.md +2 -0
  16. package/plugins/lisa/agents/security-specialist.md +3 -9
  17. package/plugins/lisa/agents/test-specialist.md +2 -16
  18. package/plugins/lisa/agents/verification-specialist.md +38 -103
  19. package/plugins/lisa/commands/build.md +10 -0
  20. package/plugins/lisa/commands/fix.md +10 -0
  21. package/plugins/lisa/commands/improve.md +16 -0
  22. package/plugins/lisa/commands/investigate.md +10 -0
  23. package/plugins/lisa/commands/jira/triage.md +7 -0
  24. package/plugins/lisa/commands/monitor.md +10 -0
  25. package/plugins/lisa/commands/plan/create.md +1 -1
  26. package/plugins/lisa/commands/plan/execute.md +1 -2
  27. package/plugins/lisa/commands/plan/improve-tests.md +7 -0
  28. package/plugins/lisa/commands/plan.md +10 -0
  29. package/plugins/lisa/commands/review.md +10 -0
  30. package/plugins/lisa/commands/ship.md +10 -0
  31. package/plugins/lisa/skills/acceptance-criteria/SKILL.md +71 -0
  32. package/plugins/lisa/skills/bug-triage/SKILL.md +23 -0
  33. package/plugins/lisa/skills/codebase-research/SKILL.md +87 -0
  34. package/plugins/lisa/skills/epic-triage/SKILL.md +28 -0
  35. package/plugins/lisa/skills/nightly-add-test-coverage/SKILL.md +27 -0
  36. package/plugins/lisa/skills/nightly-improve-tests/SKILL.md +31 -0
  37. package/plugins/lisa/skills/nightly-lower-code-complexity/SKILL.md +25 -0
  38. package/plugins/lisa/skills/performance-review/SKILL.md +94 -0
  39. package/plugins/lisa/skills/plan-improve-tests/SKILL.md +47 -0
  40. package/plugins/lisa/skills/quality-review/SKILL.md +54 -0
  41. package/plugins/lisa/skills/reproduce-bug/SKILL.md +96 -0
  42. package/plugins/lisa/skills/root-cause-analysis/SKILL.md +155 -0
  43. package/plugins/lisa/skills/security-review/SKILL.md +57 -0
  44. package/plugins/lisa/skills/task-decomposition/SKILL.md +95 -0
  45. package/plugins/lisa/skills/task-triage/SKILL.md +23 -0
  46. package/plugins/lisa/skills/tdd-implementation/SKILL.md +83 -0
  47. package/plugins/lisa/skills/test-strategy/SKILL.md +63 -0
  48. package/plugins/lisa/skills/ticket-triage/SKILL.md +150 -0
  49. package/plugins/lisa/skills/verification-lifecycle/SKILL.md +292 -0
  50. package/plugins/lisa-cdk/.claude-plugin/plugin.json +1 -1
  51. package/plugins/lisa-expo/.claude-plugin/plugin.json +1 -1
  52. package/plugins/lisa-nestjs/.claude-plugin/plugin.json +1 -1
  53. package/plugins/lisa-rails/.claude-plugin/plugin.json +1 -1
  54. package/plugins/lisa-typescript/.claude-plugin/plugin.json +1 -1
  55. package/plugins/src/base/agents/architecture-specialist.md +4 -9
  56. package/plugins/src/base/agents/bug-fixer.md +40 -0
  57. package/plugins/src/base/agents/builder.md +41 -0
  58. package/plugins/src/base/agents/debug-specialist.md +4 -93
  59. package/plugins/src/base/agents/jira-agent.md +103 -0
  60. package/plugins/src/base/agents/performance-specialist.md +2 -11
  61. package/plugins/src/base/agents/product-specialist.md +2 -10
  62. package/plugins/src/base/agents/quality-specialist.md +2 -0
  63. package/plugins/src/base/agents/security-specialist.md +3 -9
  64. package/plugins/src/base/agents/test-specialist.md +2 -16
  65. package/plugins/src/base/agents/verification-specialist.md +38 -103
  66. package/plugins/src/base/commands/build.md +10 -0
  67. package/plugins/src/base/commands/fix.md +10 -0
  68. package/plugins/src/base/commands/improve.md +16 -0
  69. package/plugins/src/base/commands/investigate.md +10 -0
  70. package/plugins/src/base/commands/jira/triage.md +7 -0
  71. package/plugins/src/base/commands/monitor.md +10 -0
  72. package/plugins/src/base/commands/plan/create.md +1 -1
  73. package/plugins/src/base/commands/plan/execute.md +1 -2
  74. package/plugins/src/base/commands/plan/improve-tests.md +7 -0
  75. package/plugins/src/base/commands/plan.md +10 -0
  76. package/plugins/src/base/commands/review.md +10 -0
  77. package/plugins/src/base/commands/ship.md +10 -0
  78. package/plugins/src/base/skills/acceptance-criteria/SKILL.md +71 -0
  79. package/plugins/src/base/skills/bug-triage/SKILL.md +23 -0
  80. package/plugins/src/base/skills/codebase-research/SKILL.md +87 -0
  81. package/plugins/src/base/skills/epic-triage/SKILL.md +28 -0
  82. package/plugins/src/base/skills/nightly-add-test-coverage/SKILL.md +27 -0
  83. package/plugins/src/base/skills/nightly-improve-tests/SKILL.md +31 -0
  84. package/plugins/src/base/skills/nightly-lower-code-complexity/SKILL.md +25 -0
  85. package/plugins/src/base/skills/performance-review/SKILL.md +94 -0
  86. package/plugins/src/base/skills/plan-improve-tests/SKILL.md +47 -0
  87. package/plugins/src/base/skills/quality-review/SKILL.md +54 -0
  88. package/plugins/src/base/skills/reproduce-bug/SKILL.md +96 -0
  89. package/plugins/src/base/skills/root-cause-analysis/SKILL.md +155 -0
  90. package/plugins/src/base/skills/security-review/SKILL.md +57 -0
  91. package/plugins/src/base/skills/task-decomposition/SKILL.md +95 -0
  92. package/plugins/src/base/skills/task-triage/SKILL.md +23 -0
  93. package/plugins/src/base/skills/tdd-implementation/SKILL.md +83 -0
  94. package/plugins/src/base/skills/test-strategy/SKILL.md +63 -0
  95. package/plugins/src/base/skills/ticket-triage/SKILL.md +150 -0
  96. package/plugins/src/base/skills/verification-lifecycle/SKILL.md +292 -0
  97. package/expo/copy-overwrite/.claude/rules/expo-verification.md +0 -261
  98. package/plugins/lisa/agents/agent-architect.md +0 -310
  99. package/plugins/lisa/agents/hooks-expert.md +0 -74
  100. package/plugins/lisa/agents/implementer.md +0 -54
  101. package/plugins/lisa/agents/slash-command-architect.md +0 -87
  102. package/plugins/lisa/agents/web-search-researcher.md +0 -112
  103. package/plugins/lisa/commands/git/commit-and-submit-pr.md +0 -7
  104. package/plugins/lisa/commands/git/commit-submit-pr-and-verify.md +0 -7
  105. package/plugins/lisa/commands/git/commit-submit-pr-deploy-and-verify.md +0 -7
  106. package/plugins/lisa/commands/jira/fix.md +0 -7
  107. package/plugins/lisa/commands/jira/implement.md +0 -7
  108. package/plugins/lisa/commands/sonarqube/check.md +0 -6
  109. package/plugins/lisa/commands/sonarqube/fix.md +0 -6
  110. package/plugins/lisa/commands/tasks/load.md +0 -7
  111. package/plugins/lisa/commands/tasks/sync.md +0 -7
  112. package/plugins/lisa/skills/git-commit-and-submit-pr/SKILL.md +0 -8
  113. package/plugins/lisa/skills/git-commit-submit-pr-and-verify/SKILL.md +0 -7
  114. package/plugins/lisa/skills/git-commit-submit-pr-deploy-and-verify/SKILL.md +0 -7
  115. package/plugins/lisa/skills/jira-fix/SKILL.md +0 -16
  116. package/plugins/lisa/skills/jira-implement/SKILL.md +0 -18
  117. package/plugins/lisa/skills/sonarqube-check/SKILL.md +0 -11
  118. package/plugins/lisa/skills/sonarqube-fix/SKILL.md +0 -8
  119. package/plugins/lisa/skills/tasks-load/SKILL.md +0 -88
  120. package/plugins/lisa/skills/tasks-sync/SKILL.md +0 -108
  121. package/plugins/src/base/agents/agent-architect.md +0 -310
  122. package/plugins/src/base/agents/hooks-expert.md +0 -74
  123. package/plugins/src/base/agents/implementer.md +0 -54
  124. package/plugins/src/base/agents/slash-command-architect.md +0 -87
  125. package/plugins/src/base/agents/web-search-researcher.md +0 -112
  126. package/plugins/src/base/commands/git/commit-and-submit-pr.md +0 -7
  127. package/plugins/src/base/commands/git/commit-submit-pr-and-verify.md +0 -7
  128. package/plugins/src/base/commands/git/commit-submit-pr-deploy-and-verify.md +0 -7
  129. package/plugins/src/base/commands/jira/fix.md +0 -7
  130. package/plugins/src/base/commands/jira/implement.md +0 -7
  131. package/plugins/src/base/commands/sonarqube/check.md +0 -6
  132. package/plugins/src/base/commands/sonarqube/fix.md +0 -6
  133. package/plugins/src/base/commands/tasks/load.md +0 -7
  134. package/plugins/src/base/commands/tasks/sync.md +0 -7
  135. package/plugins/src/base/skills/git-commit-and-submit-pr/SKILL.md +0 -8
  136. package/plugins/src/base/skills/git-commit-submit-pr-and-verify/SKILL.md +0 -7
  137. package/plugins/src/base/skills/git-commit-submit-pr-deploy-and-verify/SKILL.md +0 -7
  138. package/plugins/src/base/skills/jira-fix/SKILL.md +0 -16
  139. package/plugins/src/base/skills/jira-implement/SKILL.md +0 -18
  140. package/plugins/src/base/skills/sonarqube-check/SKILL.md +0 -11
  141. package/plugins/src/base/skills/sonarqube-fix/SKILL.md +0 -8
  142. package/plugins/src/base/skills/tasks-load/SKILL.md +0 -88
  143. package/plugins/src/base/skills/tasks-sync/SKILL.md +0 -108
@@ -0,0 +1,155 @@
+ ---
+ name: root-cause-analysis
+ description: "Root cause analysis methodology. Evidence gathering from logs, execution path tracing, strategic log placement, and building irrefutable proof chains."
+ ---
+
+ # Root Cause Analysis
+
+ Definitively prove what is causing a problem. Do not guess. Do not theorize without evidence. Trace the actual execution path, read real logs, and produce irrefutable proof of root cause.
+
+ **Core principle: "Show me the proof."** Every conclusion must be backed by concrete evidence -- a log line, a stack trace, a reproducible sequence, or a failing test.
+
+ ## Phase 1: Gather Evidence from Logs
+
+ ### Local Logs
+
+ - Search application logs in the project directory (`logs/`, `tmp/`, stdout/stderr output)
+ - Run tests with verbose logging enabled to capture execution flow
+ - Check framework-specific log locations (e.g., `.next/`, `dist/`, build output)
+
+ ### Remote Logs (AWS CloudWatch, etc.)
+
+ - Discover existing scripts and tools in the project for tailing logs:
+   - Check `package.json` scripts for log-related commands
+   - Search for shell scripts: `scripts/*log*`, `scripts/*tail*`, `scripts/*watch*`
+   - Look for AWS CLI wrappers, CloudWatch log group configurations
+   - Check for `.env` files referencing log groups or log streams
+ - Use discovered tools first before falling back to raw CLI commands
+ - When using the AWS CLI directly:
+ ```bash
+ # Discover available log groups
+ aws logs describe-log-groups --query 'logGroups[].logGroupName' --output text
+
+ # Tail recent logs with filter (GNU date shown; on BSD/macOS use: date -v-30M +%s000)
+ aws logs filter-log-events \
+   --log-group-name "/aws/lambda/function-name" \
+   --start-time $(date -d '30 minutes ago' +%s000) \
+   --filter-pattern "ERROR" \
+   --query 'events[].message' --output text
+
+ # Follow live logs
+ aws logs tail "/aws/lambda/function-name" --follow --since 10m
+ ```
+
+ ## Phase 2: Trace the Execution Path
+
+ - Start from the error and work backward through the call stack
+ - Read every function in the chain -- do not skip intermediate code
+ - Identify the exact line where behavior diverges from expectation
+ - Map the data flow: what value was expected vs. what value was actually present
+
+ ## Phase 3: Strategic Log Placement
+
+ When existing logs are insufficient, add targeted log statements to prove or disprove hypotheses.
+
+ ### Log Statement Guidelines
+
+ - **Be surgical** -- add the minimum number of log statements needed to confirm the root cause
+ - **Include context** -- log the actual values, not just "reached here"
+ - **Use structured format** -- make logs easy to find and parse
+
+ ```typescript
+ // Bad: Vague, unhelpful
+ console.log("here");
+ console.log("data:", data);
+
+ // Good: Precise, searchable, includes context
+ console.log("[DEBUG:issue-123] processOrder entry", {
+   orderId: order.id,
+   status: order.status,
+   itemCount: order.items.length,
+   timestamp: new Date().toISOString(),
+ });
+ ```
+
+ ### Placement Strategy
+
+ | Placement | Purpose |
+ |-----------|---------|
+ | Function entry | Confirm the function is called and with what arguments |
+ | Before conditional branches | Verify which branch is taken and why |
+ | Before/after async operations | Detect timing issues, race conditions, failed awaits |
+ | Before/after data transformations | Catch where data becomes corrupted or unexpected |
+ | Error handlers and catch blocks | Ensure errors are not silently swallowed |
+
+ ### Hypothesis Elimination
+
+ When multiple hypotheses exist, design a log placement strategy that eliminates all but one. Each log statement should be placed to confirm or rule out a specific hypothesis.
+
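The elimination strategy above can be sketched in TypeScript. The order shape, field names, and both hypotheses are illustrative assumptions, not code from the package; the point is that each probe is positioned to rule out exactly one hypothesis.

```typescript
// Hypothetical bug: an order total comes out wrong. Two competing hypotheses:
//   H1: the discount is applied twice
//   H2: the tax rate is read from the wrong config key
interface Order {
  subtotal: number;
  discount: number;
  taxRate: number;
}

function computeTotal(order: Order): number {
  const discounted = order.subtotal - order.discount;
  // H1 probe: if the discount were applied twice, `discounted` would be lower than expected
  console.log("[DEBUG:wrong-total] after discount", { discounted, discount: order.discount });
  const total = discounted * (1 + order.taxRate);
  // H2 probe: if the wrong config key were read, `taxRate` would show an unexpected value
  console.log("[DEBUG:wrong-total] after tax", { taxRate: order.taxRate, total });
  return total;
}
```

Comparing the two probe lines against one failing request is enough to keep one hypothesis and discard the other.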
+ ## Phase 4: Prove the Root Cause
+
+ Build an evidence chain that is irrefutable:
+
+ 1. **The symptom** -- what the user observes (error message, wrong output, crash)
+ 2. **The proximate cause** -- the line of code that directly produces the symptom
+ 3. **The root cause** -- the underlying reason the proximate cause occurs
+ 4. **The proof** -- log output, test result, or reproduction steps that confirm each link
+
+ ### Evidence Chain Format
+
+ ```text
+ Symptom: [exact error message or behavior]
+   |
+   v
+ Proximate cause: [file:line] -- [the line that directly produces the error]
+   |
+   v
+ Root cause: [file:line] -- [the underlying reason]
+   |
+   v
+ Proof: [log output / test result / reproduction that confirms the chain]
+ ```
+
+ ## Phase 5: Clean Up
+
+ After root cause is confirmed, **remove all debug log statements** added during investigation. Leave only:
+
+ - Log statements that belong in the application permanently (error logging, audit trails)
+ - Statements explicitly requested by the user
+
+ Verify cleanup:
+ ```bash
+ # Search for any remaining debug markers
+ grep -rn "\[DEBUG:" src/ --include="*.ts" --include="*.tsx" --include="*.js"
+ ```
+
+ ## Output Format
+
+ ```text
+ ## Root Cause Analysis
+
+ ### Evidence Trail
+ | Step | Location | Evidence | Conclusion |
+ |------|----------|----------|------------|
+ | 1 | file:line | Log output or observed value | What this proves |
+ | 2 | file:line | Log output or observed value | What this proves |
+
+ ### Root Cause
+ **Proximate cause:** The line that directly produces the error.
+ **Root cause:** The underlying reason this line behaves incorrectly.
+ **Proof:** The specific evidence that confirms this beyond doubt.
+
+ ### Recommended Fix
+ What needs to change and why. Include file:line references.
+ ```
+
+ ## Rules
+
+ - Never guess at root cause -- prove it with evidence
+ - Read the actual code in the execution path -- do not rely on function names or comments to infer behavior
+ - When adding debug logs, use a consistent prefix (e.g., `[DEBUG:issue-name]`) so they are easy to find and clean up
+ - Remove all temporary debug log statements after investigation is complete
+ - If remote log access is unavailable, report what logs would be needed and from where
+ - Prefer project-specific tooling and scripts over raw CLI commands for log access
+ - If the root cause is in a third-party dependency, identify the exact version and known issue
+ - Always verify the fix resolves the issue -- do not mark investigation complete without proof
@@ -0,0 +1,57 @@
+ ---
+ name: security-review
+ description: "Security review methodology. STRIDE threat modeling, OWASP Top 10 vulnerability checks, auth/validation/secrets handling review, and mitigation recommendations."
+ ---
+
+ # Security Review
+
+ Identify vulnerabilities, evaluate threats, and recommend mitigations for code changes.
+
+ ## Analysis Process
+
+ 1. **Read affected files** -- understand the current security posture of the code being changed
+ 2. **STRIDE analysis** -- evaluate Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege risks
+ 3. **Check input validation** -- are user inputs sanitized at system boundaries?
+ 4. **Check secrets handling** -- are credentials, tokens, or API keys exposed in code, logs, or error messages?
+ 5. **Check auth/authz** -- are access controls properly enforced for new endpoints or features?
+ 6. **Review dependencies** -- do new dependencies introduce known vulnerabilities?
+
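Step 3 of the process above (validation at system boundaries) can be sketched as a small TypeScript guard. The payload shape, field names, and rules are illustrative assumptions, not an API defined by this package; the pattern is simply "reject unexpected shapes at the edge so inner code can trust its inputs."

```typescript
// Hypothetical request payload validated at the boundary.
interface CreateUserInput {
  email: string;
  displayName: string;
}

function validateCreateUser(raw: unknown): CreateUserInput {
  if (typeof raw !== "object" || raw === null) throw new Error("payload must be an object");
  const { email, displayName } = raw as Record<string, unknown>;
  // Coarse email shape check -- enough to reject obviously malformed input
  if (typeof email !== "string" || !/^[^@\s]+@[^@\s]+$/.test(email)) {
    throw new Error("invalid email");
  }
  // Bound the length so downstream storage and rendering cannot be abused
  if (typeof displayName !== "string" || displayName.length === 0 || displayName.length > 64) {
    throw new Error("invalid displayName");
  }
  return { email, displayName };
}
```

Everything past this function can then treat the value as a trusted `CreateUserInput`.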
+ ## Output Format
+
+ Structure findings as:
+
+ ```text
+ ## Security Analysis
+
+ ### Threat Model (STRIDE)
+ | Threat | Applies? | Description | Mitigation |
+ |--------|----------|-------------|------------|
+ | Spoofing | Yes/No | ... | ... |
+ | Tampering | Yes/No | ... | ... |
+ | Repudiation | Yes/No | ... | ... |
+ | Info Disclosure | Yes/No | ... | ... |
+ | Denial of Service | Yes/No | ... | ... |
+ | Elevation of Privilege | Yes/No | ... | ... |
+
+ ### Security Checklist
+ - [ ] Input validation at system boundaries
+ - [ ] No secrets in code or logs
+ - [ ] Auth/authz enforced on new endpoints
+ - [ ] No SQL/NoSQL injection vectors
+ - [ ] No XSS vectors in user-facing output
+ - [ ] Dependencies free of known CVEs
+
+ ### Vulnerabilities Found
+ - [vulnerability] -- where in the code, how to prevent
+
+ ### Recommendations
+ - [recommendation] -- priority (critical/warning/suggestion)
+ ```
+
+ ## Rules
+
+ - Focus on the specific changes proposed, not a full security audit of the entire codebase
+ - Flag only real risks -- do not invent hypothetical threats for internal tooling with no user input
+ - Prioritize OWASP Top 10 vulnerabilities
+ - If the changes are purely internal (config, refactoring, docs), report "No security concerns" and explain why
+ - Always check `.gitleaksignore` patterns to understand what secrets scanning is already in place
@@ -0,0 +1,95 @@
+ ---
+ name: task-decomposition
+ description: "Methodology for breaking work into ordered tasks. Each task gets acceptance criteria, verification type, dependencies, and skills required."
+ ---
+
+ # Task Decomposition
+
+ Break work into ordered, well-scoped tasks that can be independently implemented and verified.
+
+ ## Decomposition Process
+
+ ### 1. Identify Units of Work
+
+ - Break the work into the smallest units that are independently valuable
+ - Each unit should produce a verifiable outcome (a passing test, a working endpoint, observable behavior)
+ - Avoid tasks that are too large to complete in a single session
+ - Avoid tasks that are too small to be meaningful (e.g., "add an import statement")
+
+ ### 2. Define Acceptance Criteria
+
+ For each task, define what "done" looks like:
+
+ - Be specific and measurable -- avoid vague criteria like "works correctly"
+ - Include both positive cases (what should work) and negative cases (what should be rejected)
+ - Reference exact behavior: error messages, status codes, output format, performance thresholds
+ - If a task modifies existing behavior, state both the before and after
+
+ ### 3. Assign Verification Type
+
+ Each task must have a verification method. Choose the most appropriate:
+
+ | Verification Type | When to Use |
+ |-------------------|-------------|
+ | **Unit test** | Pure logic, data transformations, utility functions |
+ | **Integration test** | Cross-module interactions, database operations, API contracts |
+ | **E2E test** | User-facing workflows, multi-service interactions |
+ | **Manual verification** | UI/UX behavior, visual correctness, one-time infrastructure changes |
+ | **Build verification** | Compilation, type checking, linting, bundle size |
+ | **Deploy verification** | Service health checks, smoke tests, monitoring dashboards |
+
+ ### 4. Map Dependencies
+
+ - Identify which tasks must complete before others can start
+ - Order tasks so that each builds on a stable foundation
+ - Prefer independent tasks that can run in parallel where possible
+ - Flag external dependencies (other teams, services, permissions, data) that may block progress
+
+ ### 5. Determine Execution Order
+
+ - Place foundational tasks first (types, schemas, interfaces, shared utilities)
+ - Follow with implementation tasks (business logic, handlers, services)
+ - Then integration tasks (wiring, configuration, API routes)
+ - Finish with verification tasks (test suites, documentation, cleanup)
+
+ ### 6. Assign Required Skills
+
+ Map each task to the skills needed to complete it. This enables delegation to specialized agents or helps identify what expertise is required.
+
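The process above implies a task data structure. A minimal TypeScript sketch follows; the field names mirror the sections above but are an assumption, not a format shipped by this package. The helper shows the dependency rule from step 4: a task is runnable only when every task it depends on is done.

```typescript
// Illustrative shape for a decomposed task.
type VerificationType = "unit" | "integration" | "e2e" | "manual" | "build" | "deploy";

interface PlannedTask {
  id: string;
  description: string;          // imperative, e.g. "Add retry logic to the uploader"
  acceptanceCriteria: string[]; // each one empirically verifiable
  verification: VerificationType;
  dependsOn: string[];          // ids of tasks that must complete first
  skills: string[];
}

// Tasks whose dependencies are all satisfied can run (possibly in parallel).
function runnableTasks(tasks: PlannedTask[], done: Set<string>): PlannedTask[] {
  return tasks.filter(
    (t) => !done.has(t.id) && t.dependsOn.every((d) => done.has(d)),
  );
}
```

Calling `runnableTasks` repeatedly as tasks finish yields the execution order described in step 5.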
+ ## Output Format
+
+ ```text
+ ## Task Breakdown
+
+ ### Task 1: [imperative description]
+ - **Acceptance criteria:**
+   - [specific, measurable criterion]
+   - [specific, measurable criterion]
+ - **Verification:** [type] -- [how to verify]
+ - **Dependencies:** [none | task IDs that must complete first]
+ - **Skills:** [list of skills needed]
+
+ ### Task 2: [imperative description]
+ - **Acceptance criteria:**
+   - [specific, measurable criterion]
+ - **Verification:** [type] -- [how to verify]
+ - **Dependencies:** [Task 1]
+ - **Skills:** [list of skills needed]
+
+ ### Execution Order
+ 1. [Task 1, Task 3] (parallel -- no dependencies)
+ 2. [Task 2] (depends on Task 1)
+ 3. [Task 4] (depends on Task 2, Task 3)
+
+ ### External Dependencies
+ - [dependency] -- [who owns it] -- [current status]
+ ```
+
+ ## Rules
+
+ - Every task must have at least one acceptance criterion that can be empirically verified
+ - Do not create tasks that cannot be verified -- if you cannot define how to prove it is done, the task is not well-scoped
+ - Keep tasks ordered so that no task references work that has not been completed by a prior task
+ - Flag any task that requires access, permissions, or external input not yet available
+ - Prefer more small tasks over fewer large tasks -- smaller tasks are easier to verify and less costly when they fail
+ - Do not create placeholder or "TODO" tasks -- every task should describe concrete work
@@ -0,0 +1,23 @@
+ ---
+ name: task-triage
+ description: "8-step task triage and implementation workflow. Ensures tasks have clear requirements, dependencies, and verification plans before implementation begins."
+ ---
+
+ # Task Triage
+
+ Follow this 8-step triage process before implementing any task. Do not skip triage.
+
+ ## Triage Steps
+
+ 1. Verify you have all the information needed to implement this task (acceptance criteria, design specs, environment information, dependencies, etc.). Do not make assumptions. If anything is missing, stop and ask before proceeding.
+ 2. Verify you have a clear understanding of the expected behavior or outcome when the task is complete. If not, stop and clarify before starting.
+ 3. Identify all dependencies (other tasks, services, APIs, data) that must be in place before you can complete this task. If any are unresolved, stop and raise them before starting implementation.
+ 4. Verify you have access to the tools, environments, and permissions needed to deploy and verify this task (e.g., CI/CD pipelines, deployment targets, logging/monitoring systems, API access, database access). If any are missing or inaccessible, stop and raise them before starting implementation.
+ 5. Define the tests you will write to confirm the task is implemented correctly and to prevent regressions.
+ 6. Define the documentation you will create or update to explain the "how" and "what" behind this task so another developer understands it.
+ 7. If you can verify your implementation before deploying to the target environment (e.g., start the app, invoke the API, open a browser, run the process, check logs), do so before deploying.
+ 8. Define how you will verify the task is complete beyond a shadow of a doubt (e.g., deploy to the target environment, invoke the API, open a browser, run the process, check logs).
+
+ ## Implementation
+
+ Use the output of the triage steps above as your guide. Do not skip triage.
@@ -0,0 +1,83 @@
+ ---
+ name: tdd-implementation
+ description: "Test-Driven Development implementation workflow. RED: write failing test, GREEN: minimum code to pass, REFACTOR: clean up. Includes task metadata requirements, verification, and atomic commit practices."
+ ---
+
+ # TDD Implementation
+
+ Implement code changes using the Test-Driven Development (RED/GREEN/REFACTOR) cycle. This skill defines the complete workflow from task metadata validation through atomic commit.
+
+ ## Task Metadata
+
+ Each task you work on must have the following in its metadata:
+
+ ```json
+ {
+   "plan": "<plan-name>",
+   "type": "spike|bug|task|epic|story",
+   "acceptance_criteria": ["..."],
+   "relevant_documentation": "",
+   "testing_requirements": ["..."],
+   "skills": ["..."],
+   "learnings": ["..."],
+   "verification": {
+     "type": "test|ui-recording|test-coverage|api-test|manual-check|documentation",
+     "command": "the proof command",
+     "expected": "what success looks like"
+   }
+ }
+ ```
+
+ All fields are mandatory — empty arrays are OK. If any are missing, ask the agent team to fill them in and wait for a response.
+
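The "all fields are mandatory" rule above is mechanically checkable. A minimal sketch follows; the field list is taken from the JSON shape above, but the function itself is illustrative and not part of the package.

```typescript
// Required top-level keys, mirroring the task metadata shape above.
const REQUIRED_FIELDS = [
  "plan", "type", "acceptance_criteria", "relevant_documentation",
  "testing_requirements", "skills", "learnings", "verification",
] as const;

// Returns the names of any mandatory fields absent from the metadata.
// Empty arrays count as present -- only missing keys are flagged.
function missingMetadataFields(metadata: Record<string, unknown>): string[] {
  return REQUIRED_FIELDS.filter((f) => !(f in metadata));
}
```

A non-empty result would be the trigger for "ask the agent team to fill them in."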
+ ## Workflow
+
+ 1. **Verify task metadata** — All fields are mandatory. If any are missing, ask the agent team to fill them in and wait for a response.
+ 2. **Load skills** — Load the skills listed in the `skills` property of the task metadata.
+ 3. **Read before writing** — Read existing code before modifying it. Understand acceptance criteria, verification, and relevant research.
+ 4. **Follow existing patterns** — Match the style, naming, and structure of surrounding code.
+ 5. **One task at a time** — Complete the current task before moving on.
+ 6. **RED** — Write a failing test that captures the expected behavior from the task description. Focus on testing behavior, not implementation details.
+ 7. **GREEN** — Write the minimum production code to make the test pass.
+ 8. **REFACTOR** — Clean up while keeping tests green.
+ 9. **Verify empirically** — Run the task's proof command and confirm the expected output.
+ 10. **Update documentation** — Add, remove, or modify all relevant JSDoc preambles, explaining "why", not "what".
+ 11. **Update the learnings** — Add what you learned during implementation to the task's `metadata.learnings` array. These should be things that are relevant for other implementers to know.
+ 12. **Commit atomically** — Once verified, run the `/git-commit` skill.
+
+ ## TDD Cycle
+
+ **Always write failing tests before implementation code.** This is mandatory, not optional.
+
+ ```text
+ TDD Cycle:
+ 1. RED: Write a failing test that defines expected behavior
+ 2. GREEN: Write the minimum code to make the test pass
+ 3. REFACTOR: Clean up while keeping tests green
+ ```
+
+ ### RED Phase
+
+ - Write a test that captures the expected behavior from the task description
+ - Focus on testing behavior, not implementation details
+ - The test must fail before you write any production code
+ - If the imported module doesn't exist, Jest reports 0 tests found (not N failed) — this is expected RED behavior
+
+ ### GREEN Phase
+
+ - Write the minimum production code to make the test pass
+ - Do not optimize, do not add features beyond what the test requires
+ - The goal is the simplest code that makes the test green
+
+ ### REFACTOR Phase
+
+ - Clean up code while keeping all tests green
+ - Remove duplication, improve naming, simplify structure
+ - Run tests after every refactor step to confirm nothing breaks
+
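The RED/GREEN/REFACTOR phases above can be made concrete with a toy walk-through. The `slugify` task is hypothetical, not from the package, and the plain assertions stand in for a real test framework: written first, they fail against an empty stub (RED); the implementation below is roughly the minimum that makes them pass (GREEN).

```typescript
// GREEN: minimum implementation for the behavior the tests demand.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics into one dash
    .replace(/^-+|-+$/g, "");    // strip leading/trailing dashes
}

// RED first: these behavior-level expectations existed before the code above.
const testCases: Array<[string, string]> = [
  ["Hello, World!", "hello-world"],
  ["  TDD Implementation  ", "tdd-implementation"],
];
for (const [input, expected] of testCases) {
  if (slugify(input) !== expected) throw new Error(`slugify(${JSON.stringify(input)}) !== ${expected}`);
}
```

A REFACTOR pass might extract the regexes into named constants; the assertions stay green throughout.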
+ ## When Stuck
+
+ - Re-read the task description and acceptance criteria
+ - Check relevant research for reusable code references
+ - Search the codebase for similar implementations
+ - Ask the team lead if the task is ambiguous — do not guess
@@ -0,0 +1,63 @@
+ ---
+ name: test-strategy
+ description: "Test strategy design. Coverage matrix, edge cases, TDD sequence planning, test quality review. Behavior-focused testing over implementation details."
+ ---
+
+ # Test Strategy
+
+ Design test strategies, write tests, and review test quality.
+
+ ## Analysis Process
+
+ 1. **Read existing tests** -- understand the project's test conventions (describe/it structure, naming, helpers)
+ 2. **Identify test types needed** -- unit, integration, E2E based on the scope of changes
+ 3. **Map edge cases** -- boundary values, empty inputs, error states, concurrency scenarios
+ 4. **Check coverage gaps** -- run existing tests to understand current coverage of affected files
+ 5. **Design verification commands** -- proof commands that empirically demonstrate the code works
+
+ ## Test Writing Process
+
+ 1. **Analyze the source file** to understand its functionality
+ 2. **Identify untested code paths**, edge cases, and error conditions
+ 3. **Write comprehensive, meaningful tests** (not just coverage padding)
+ 4. **Follow the project's existing test patterns** and conventions
+ 5. **Ensure tests are readable and maintainable**
+
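"Testing behavior, not implementation details" is worth one concrete contrast. The coupon example below is illustrative, not from the package, and the plain assertion stands in for the project's test framework: it pins down observable behavior that survives refactors, while the anti-pattern in the trailing comment would not.

```typescript
// Hypothetical unit under test.
function applyCoupon(total: number, code: string): number {
  const discounts: Record<string, number> = { SAVE10: 0.10, SAVE25: 0.25 };
  const rate = discounts[code] ?? 0;
  return Math.round(total * (1 - rate) * 100) / 100; // round to cents
}

// Behavior-level test: an unknown code leaves the total unchanged.
// This keeps passing even if the lookup table moves to a database or service.
if (applyCoupon(100, "BOGUS") !== 100) throw new Error("unknown code must be a no-op");

// Implementation-level test to AVOID: asserting on the internal `discounts`
// object would break the moment the lookup is relocated, without any
// user-visible behavior changing.
```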
+ ## Output Format
+
+ Structure findings as:
+
+ ```text
+ ## Test Analysis
+
+ ### Test Matrix
+ | Component | Test Type | What to Test | Priority |
+ |-----------|-----------|--------------|----------|
+
+ ### Edge Cases
+ - [edge case] -- why it matters
+
+ ### Coverage Targets
+ - `path/to/file.ts` -- current: X%, target: Y%
+
+ ### Test Patterns (from codebase)
+ - Pattern: [description] -- found in `path/to/test.spec.ts`
+
+ ### Verification Commands
+ | Task | Proof Command | Expected Output |
+ |------|---------------|-----------------|
+
+ ### TDD Sequence
+ 1. [first test to write] -- covers [behavior]
+ 2. [second test] -- covers [behavior]
+ ```
+
+ ## Rules
+
+ - Always run `bun run test` to understand the current test state before recommending or writing new tests
+ - Match existing test conventions -- do not introduce new test patterns
+ - Every test must have a clear "why" -- no tests for testing's sake
+ - Focus on testing behavior, not implementation details
+ - Verification commands must be runnable locally (no CI/CD dependencies)
+ - Prioritize tests that catch regressions over tests that verify happy paths
+ - Write comprehensive tests, not just coverage padding
---
name: ticket-triage
description: "Analytical triage gate for JIRA tickets. Detects requirement ambiguities, identifies edge cases from codebase analysis, and plans verification methodology. Posts findings to the ticket and produces a verdict (BLOCKED/PASSED_WITH_FINDINGS/PASSED) that gates whether implementation can proceed."
allowed-tools: ["Read", "Glob", "Grep", "Bash"]
---

# Ticket Triage: $ARGUMENTS

Perform analytical triage on the JIRA ticket. The caller has fetched the ticket details (summary, description, acceptance criteria, labels, status, comments) and provided them in context.

Determine the repository name used for scoped labels and comment headers via `basename $(git rev-parse --show-toplevel)`.
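As a self-contained sketch, the label derivation looks like this (the repository name `acme-api` and the throwaway checkout are hypothetical; in practice you run the derivation at the root of the real checkout):

```bash
# Create a throwaway repo so the sketch runs anywhere; "acme-api" is a
# hypothetical stand-in for the real repository name.
tmp=$(mktemp -d)
git init -q "$tmp/acme-api"
cd "$tmp/acme-api"
# The actual derivation used for scoped labels and comment headers:
repo=$(basename "$(git rev-parse --show-toplevel)")
echo "claude-triaged-${repo}"   # prints "claude-triaged-acme-api"
```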
## Phase 1 -- Relevance Check

Search the local codebase using Glob and Grep for code related to the ticket's subject matter:
- Keywords from summary and description
- Component names, API endpoints, database tables
- Error messages or log strings mentioned in the ticket

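A minimal sketch of the keyword pass, assuming a hypothetical ticket summary (the real summary comes from the fetched ticket, and each derived keyword would feed Grep or `git grep`):

```bash
# Hypothetical ticket summary; in practice this comes from the JIRA ticket.
summary="Payment retry fails when gateway returns 503"
# Derive distinctive keywords: lowercase, split on non-alphanumerics,
# drop short stopword-like tokens.
keywords=$(echo "$summary" | tr 'A-Z' 'a-z' | tr -cs 'a-z0-9' '\n' | awk 'length > 4')
echo "$keywords"
# Each keyword would then drive a codebase search, e.g.:
# git grep -il "payment" -- '*.ts'
```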
If NO relevant code is found in this repo:
- Output: `## Verdict: NOT_RELEVANT`
- Instruct caller to add the label `claude-triaged-{repo}` and skip this ticket

If relevant code IS found, proceed to Phase 2.

## Phase 2 -- Cross-Repo Awareness

Parse the ticket's existing comments for triage headers from OTHER repositories. Look for patterns like:
- `*[some-repo-name] Ambiguity detected*`
- `*[some-repo-name] Edge cases*`
- `*[some-repo-name] Verification methodology*`

Note which phases other repos have already covered and what findings they posted. In subsequent phases:
- Do NOT duplicate findings already posted by another repo
- DO add supplementary findings specific to THIS repo's codebase

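For example, assuming the fetched comments have been written to a file (the repository name `billing-service` and the comment text are hypothetical), the headers from other repos can be pulled out with a single grep:

```bash
# Hypothetical comments dump; in practice the caller provides these in context.
cat > comments.txt <<'EOF'
*[billing-service] Ambiguity detected*
The term "settlement window" is never defined.
*[billing-service] Edge cases*
Payout list may be empty on the first run.
EOF
# List which repos have already posted triage findings, and for which phases.
grep -oE '^\*\[[A-Za-z0-9._-]+\] (Ambiguity detected|Edge cases|Verification methodology)\*' comments.txt
# -> *[billing-service] Ambiguity detected*
# -> *[billing-service] Edge cases*
```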
## Phase 3 -- Ambiguity Detection

Examine the ticket summary, description, and acceptance criteria. Look for:

| Signal | Example |
|--------|---------|
| Vague language | "should work properly", "handle edge cases", "improve performance" |
| Untestable criteria | No measurable outcome defined |
| Undefined terms | Acronyms or domain terms not explained in context |
| Missing scope boundaries | What's included vs excluded is unclear |
| Implicit assumptions | Assumptions not stated explicitly |

Skip ambiguities already raised by another repo's triage comments.

For each NEW ambiguity found, produce:

```text
### Ambiguity: [short title]
**Description:** [what is ambiguous]
**Suggested clarification:** [specific question to resolve it]
```

Be specific -- every ambiguity must have a concrete clarifying question.

## Phase 4 -- Edge Case Analysis

Search the codebase using Glob and Grep for files related to the ticket's subject matter. Check git history for recent changes in those areas:

```bash
git log --oneline -20 -- <relevant-paths>
```

Identify:
- Boundary conditions (empty inputs, max values, concurrent access)
- Error handling gaps in related code
- Integration risks with other components
- Data migration or backward compatibility concerns

Reference only files in THIS repo. Acknowledge edge cases from other repos if relevant, but do not duplicate them.

For each edge case, produce:

```text
### Edge Case: [title]
**Description:** [what could go wrong]
**Code reference:** [file path and relevant lines or patterns]
```

Every edge case must reference specific code files or patterns found in the codebase. If no relevant code exists, note that this appears to be a new feature with no existing code to analyze.

## Phase 5 -- Verification Methodology

For each acceptance criterion, specify a concrete verification method scoped to what THIS repo can test:

| Verification Type | When to Use | What to Specify |
|-------------------|-------------|-----------------|
| UI | Change affects user-visible interface | Playwright test description with specific assertions |
| API | Change affects HTTP/GraphQL/RPC endpoints | curl command with expected response status and body |
| Data | Change involves schema, migrations, queries | Database query or service call to verify state |
| Performance | Change claims performance improvement | Benchmark description with target metrics |

Do not duplicate verification methods already posted by other repos.

Produce a table:

```text
| Acceptance Criterion | Verification Method | Type |
|----------------------|---------------------|------|
```

Every verification method must be specific enough that an automated agent could execute it.

## Phase 6 -- Verdict

Evaluate the findings and produce exactly one verdict:

- **`NOT_RELEVANT`** -- No relevant code was found in this repository (Phase 1). The caller should add the triage label and skip implementation in this repo.
- **`BLOCKED`** -- Ambiguities were found in Phase 3. Work MUST NOT proceed until the ambiguities are resolved by a human. The caller should post findings, add the triage label, and STOP.
- **`PASSED_WITH_FINDINGS`** -- No ambiguities, but edge cases or verification findings were identified. Work can proceed. The caller should post findings and add the triage label.
- **`PASSED`** -- No ambiguities, edge cases, or verification gaps found. Work can proceed. The caller should add the triage label.

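The decision rule can be sketched as a small shell function (a hypothetical helper, not part of the plugin; the counts would come from Phases 1, 3, 4, and 5):

```bash
# Map triage findings to a verdict. Arguments: relevant-code-found (0/1),
# ambiguity count, combined edge-case/verification-finding count.
verdict() {
  local relevant=$1 ambiguities=$2 findings=$3
  if [ "$relevant" -eq 0 ]; then echo "NOT_RELEVANT"
  elif [ "$ambiguities" -gt 0 ]; then echo "BLOCKED"
  elif [ "$findings" -gt 0 ]; then echo "PASSED_WITH_FINDINGS"
  else echo "PASSED"
  fi
}

verdict 1 2 3   # -> BLOCKED: ambiguities gate the work even when other findings exist
verdict 1 0 3   # -> PASSED_WITH_FINDINGS
verdict 1 0 0   # -> PASSED
```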
Output format:

```text
## Verdict: [NOT_RELEVANT | BLOCKED | PASSED_WITH_FINDINGS | PASSED]

**Ambiguities found:** [count]
**Edge cases identified:** [count]
**Verification methods defined:** [count]
```

## Output Structure

Structure all output with clear section headers so the caller can parse and post findings:

```text
## Triage: [TICKET-KEY] ([repo-name])

### Ambiguities
[Phase 3 findings, or "None found."]

### Edge Cases
[Phase 4 findings, or "None found."]

### Verification Methodology
[Phase 5 table, or "No acceptance criteria to verify."]

## Verdict: [NOT_RELEVANT | BLOCKED | PASSED_WITH_FINDINGS | PASSED]
```

The caller is responsible for:
1. Posting the findings as comments on the ticket (using whatever Jira mechanism is available)
2. Adding the `claude-triaged-{repo}` label to the ticket
3. If `BLOCKED`: stopping all work and reporting the ambiguities to the human