@codyswann/lisa 1.78.8 → 1.79.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/package.json +1 -1
- package/plugins/lisa/.claude-plugin/plugin.json +1 -1
- package/plugins/lisa/skills/nightly-add-test-coverage/SKILL.md +23 -10
- package/plugins/lisa/skills/nightly-improve-tests/SKILL.md +12 -14
- package/plugins/lisa/skills/nightly-lower-code-complexity/SKILL.md +12 -13
- package/plugins/lisa/skills/notion-to-jira/SKILL.md +163 -0
- package/plugins/lisa-cdk/.claude-plugin/plugin.json +1 -1
- package/plugins/lisa-expo/.claude-plugin/plugin.json +1 -1
- package/plugins/lisa-nestjs/.claude-plugin/plugin.json +1 -1
- package/plugins/lisa-rails/.claude-plugin/plugin.json +1 -1
- package/plugins/lisa-typescript/.claude-plugin/plugin.json +1 -1
- package/plugins/src/base/skills/nightly-add-test-coverage/SKILL.md +23 -10
- package/plugins/src/base/skills/nightly-improve-tests/SKILL.md +12 -14
- package/plugins/src/base/skills/nightly-lower-code-complexity/SKILL.md +12 -13
- package/plugins/src/base/skills/notion-to-jira/SKILL.md +163 -0
package/package.json
CHANGED
@@ -76,7 +76,7 @@
     "lodash": ">=4.18.1"
   },
   "name": "@codyswann/lisa",
-  "version": "1.78.8",
+  "version": "1.79.0",
   "description": "Claude Code governance framework that applies guardrails, guidance, and automated enforcement to projects",
   "main": "dist/index.js",
   "exports": {
package/plugins/lisa/skills/nightly-add-test-coverage/SKILL.md
CHANGED
@@ -15,13 +15,26 @@ The caller provides pre-computed context:
 
 ## Instructions
 
-
-
-
-
-
-
-
-
-
-
+### Phase 1: Identify gaps
+
+1. Run the project's coverage script with the provided package manager (e.g., `npm run test:cov`, `yarn test:cov`, or `bun run test:cov`) to get the coverage report -- identify gaps BEFORE reading any source files
+2. Parse the coverage output to identify the specific files and lines with the lowest coverage. Prioritize files with the most uncovered lines/branches.
+3. Read ONE existing test file to learn the project's testing patterns (mocks, imports, assertions). Do not read more than one.
+
+### Phase 2: Write tests incrementally — one file at a time
+
+For each source file with low coverage (starting with the lowest):
+
+4. Read only the uncovered sections of the source file using the coverage report line numbers -- do not explore the codebase broadly
+5. Write a test file targeting the uncovered branches/lines for that source file
+6. Run the test file in isolation (e.g., `npx vitest run tests/my-new-test.test.ts`) to verify it passes before moving on
+7. If the test fails, fix it immediately. Do not move to the next file until the current test passes.
+8. Repeat steps 4-7 for the next source file. Stop once you estimate the proposed thresholds will be met.
+
+### Phase 3: Verify and ship
+
+9. Run the full coverage script to verify the new thresholds pass
+10. Update the thresholds file with the proposed new threshold values
+11. Re-run the coverage script to confirm the updated thresholds pass
+12. Commit all changes (new tests + updated thresholds file) with conventional commit messages
+13. Create a PR with `gh pr create` with a title like "test: increase test coverage: [metrics being bumped]" summarizing coverage improvements
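Step 2 of the new Phase 1 above asks the agent to parse the coverage report and rank files by coverage. A minimal sketch of that ranking, assuming an Istanbul-style pipe-separated summary table; the table format, sample rows, and `rankByCoverage` helper are illustrative, not part of the package:

```typescript
// Rank files from a pipe-separated coverage summary, lowest coverage first.
// The row shape (" file | stmts% | branch% ") is an assumed format.
interface CoverageRow {
  file: string;
  stmts: number; // statement coverage percentage
}

function rankByCoverage(report: string): CoverageRow[] {
  const rows: CoverageRow[] = [];
  for (const line of report.split("\n")) {
    const cells = line.split("|").map((c) => c.trim());
    if (cells.length < 2) continue;
    const stmts = Number(cells[1]);
    if (!cells[0] || Number.isNaN(stmts)) continue;
    rows.push({ file: cells[0], stmts });
  }
  // Lowest-covered files sort first: those get new tests first.
  return rows.sort((a, b) => a.stmts - b.stmts);
}

const sample = [
  "src/parser.ts | 42.85 | 30.00",
  "src/format.ts | 91.30 | 88.88",
  "src/index.ts  | 12.50 | 0.00",
].join("\n");

console.log(rankByCoverage(sample)[0].file); // lowest-covered file first
```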
package/plugins/lisa/skills/nightly-improve-tests/SKILL.md
CHANGED
@@ -12,20 +12,18 @@ The caller provides:
 
 ## Nightly Mode
 
-1.
-2.
-3.
-4.
-5.
-6.
-7. Create a PR with `gh pr create` summarizing what was improved and why
+1. For each changed source file, find its corresponding test file(s)
+2. Analyze those test files for: missing edge cases, weak assertions (toBeTruthy instead of specific values), missing error path coverage, tests that test implementation rather than behavior
+3. Improve the test files with the most impactful changes
+4. Run the full test suite to verify all tests pass
+5. Commit changes with conventional commit messages
+6. Create a PR with `gh pr create` summarizing what was improved and why
 
 ## General Mode
 
-1.
-2.
-3.
-4.
-5.
-6.
-7. Create a PR with `gh pr create` summarizing what was improved and why
+1. Scan the test files to find weak, brittle, or poorly-written tests
+2. Look for: missing edge cases, weak assertions (toBeTruthy instead of specific values), missing error path coverage, tests that test implementation rather than behavior
+3. Improve 3-5 test files with the most impactful changes
+4. Run the full test suite to verify all tests pass
+5. Commit changes with conventional commit messages
+6. Create a PR with `gh pr create` summarizing what was improved and why
package/plugins/lisa/skills/nightly-lower-code-complexity/SKILL.md
CHANGED
@@ -14,16 +14,15 @@ The caller provides pre-computed context:
 
 ## Instructions
 
-1.
-2.
-3.
-4.
-5.
-6. For
-7.
-8.
-9.
-10. Run the
-11.
-12.
-13. Create a PR with `gh pr create` with a title like "refactor: reduce code complexity: [metrics being reduced]" summarizing the changes
+1. Update eslint.thresholds.json with the proposed new threshold values (do NOT change the maxLines threshold)
+2. Run the project's lint script with the provided package manager (e.g., `npm run lint`, `yarn lint`, or `bun run lint`) to find functions that violate the new stricter thresholds
+3. **Before editing**, check each violating file's total line count (`wc -l`). If a file is within 20 lines of its `max-lines` ESLint limit (typically 300), extract helpers into a **separate companion file** (e.g., `fooHelpers.ts`) instead of adding them to the same file. Extracting functions into the same file adds net lines and can create new max-lines violations.
+4. Fix violations one file at a time. Read only the specific function that violates — do not pre-read all files upfront. Fix it, then move to the next.
+5. For cognitive complexity violations: use early returns, extract helper functions, replace conditionals with lookup tables
+6. For max-lines-per-function violations: split large functions, extract helper functions, separate concerns
+7. After each file edit, run the project's formatter (e.g., `bun run format` or `npx prettier --write <file>`) to ensure line counts reflect the final formatted state before moving on
+8. Re-run the lint script with the provided package manager to verify all violations are resolved (both the target metric AND max-lines)
+9. Run the TypeScript compiler to catch type errors early: `npx tsc --noEmit 2>&1 | head -30`. If there are type errors, fix them now — do NOT wait until the commit step. Pre-commit hooks run type checking, and discovering errors at commit time wastes turns.
+10. Run the project's test script with the provided package manager (e.g., `npm run test`, `yarn test`, or `bun run test`) to verify no tests are broken by the refactoring
+11. Commit all changes (refactored code + updated eslint.thresholds.json) with conventional commit messages
+12. Create a PR with `gh pr create` with a title like "refactor: reduce code complexity: [metrics being reduced]" summarizing the changes
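Step 5's "replace conditionals with lookup tables" tactic can be sketched with a toy example; `statusLabel` is hypothetical and not code from the package:

```typescript
// Before: a conditional chain whose cognitive complexity grows with each case.
function statusLabelBefore(code: number): string {
  if (code === 200) return "OK";
  else if (code === 404) return "Not Found";
  else if (code === 500) return "Server Error";
  else return "Unknown";
}

// After: a lookup table keeps the function at one branch no matter how
// many codes are added.
const STATUS_LABELS: Record<number, string> = {
  200: "OK",
  404: "Not Found",
  500: "Server Error",
};

function statusLabelAfter(code: number): string {
  return STATUS_LABELS[code] ?? "Unknown";
}

// Both shapes behave identically; only the second stays flat as it grows.
for (const code of [200, 404, 500, 418]) {
  console.assert(statusLabelBefore(code) === statusLabelAfter(code));
}
```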
package/plugins/lisa/skills/notion-to-jira/SKILL.md
ADDED
@@ -0,0 +1,163 @@
+---
+name: notion-to-jira
+description: >
+  Break down a Notion PRD into JIRA epics, stories, and sub-tasks. Use this skill whenever the user
+  shares a Notion PRD URL and wants it converted into JIRA tickets, or asks to "break down a PRD",
+  "create tickets from a PRD", "turn this PRD into JIRA", or similar. Also trigger when the user
+  mentions creating epics/stories/tasks from a Notion document. This skill handles the full pipeline:
+  fetching the PRD, analyzing comments, researching the codebase, identifying blockers, and creating
+  all tickets with empirical verification plans.
+---
+
+# Notion PRD to JIRA Breakdown
+
+Convert a Notion PRD into a structured JIRA ticket hierarchy: Epics > Stories > Sub-tasks.
+Each sub-task is scoped to exactly one repo and includes an empirical verification plan.
+
+## Input
+
+A Notion PRD URL. The PRD is expected to have:
+- A main page with context, problems, and links to Epic sub-pages
+- Epic sub-pages with User Stories and functional/non-functional requirements
+- Comments/discussions on each page with engineering notes and product decisions
+
+## Configuration
+
+This skill reads project-specific configuration from environment variables. If these are not set,
+ask the user for the values before proceeding.
+
+| Variable | Purpose | Example |
+|----------|---------|---------|
+| `JIRA_PROJECT` | JIRA project key for ticket creation | `SE` |
+| `JIRA_SERVER` | Atlassian instance URL (site host) | `mycompany.atlassian.net` |
+| `E2E_TEST_PHONE` | Test user phone number for verification plans | `0000000099` |
+| `E2E_TEST_OTP` | Test user OTP code | `555555` |
+| `E2E_TEST_ORG` | Test organization name | `Arsenal` |
+| `E2E_BASE_URL` | Frontend base URL for Playwright tests | `https://dev.example.io/` |
+| `E2E_GRAPHQL_URL` | GraphQL API URL for curl verification | `https://gql.dev.example.io/graphql` |
+
+If env vars are not available, ask the user to provide them explicitly before proceeding.
+Do not retrieve credentials from repository files or local agent settings.
+
+## Workflow
+
+### Phase 1: Fetch & Analyze the PRD
+
+1. **Fetch the main PRD page** with `include_discussions: true`
+2. **Identify all Epic sub-pages** from the content (look for child page links)
+3. **Fetch all Epic pages** in parallel with `include_discussions: true`
+4. **Fetch full comments** from every page using `notion-get-comments` with `include_all_blocks: true`
+5. **Synthesize decisions and blockers** from the PRD content + all comments:
+   - Decisions already confirmed by the team (look for agreement in comment threads)
+   - Open questions that need product/engineering input
+   - Engineering comments (prefixed with "Engineering:" or wrench emoji) that identify technical constraints
+   - Cross-PRD dependencies (references to other features or shared infrastructure)
+
+### Phase 2: Codebase Research (if needed)
+
+If the session doesn't already have codebase context, explore the repos to understand what exists.
+Use Explore agents for repos not yet examined. Skip repos already explored in the current session.
+
+Key things to look for:
+- Existing entities/modules that overlap with the PRD
+- Patterns to reuse (auth decorators, S3 services, existing UI components)
+- Data model gaps the PRD requires (new fields, new entities)
+- The tech stack per repo (framework, ORM, UI library, deployment)
+
+### Phase 3: Create Epics
+
+Create one Epic per PRD epic using the JIRA project key from config. Each Epic description should include:
+- Summary of the epic from the PRD
+- List of user stories it contains
+- Key decisions from comments (with attribution)
+- Blockers and open questions
+- Dependencies on other epics or PRDs
+
+Use `contentFormat: "markdown"` for all descriptions.
+
+### Phase 4: Create Stories
+
+For each Epic, create Stories:
+- **One "X.0 Setup" story** for data model and infrastructure prerequisites (new entities, migrations, new fields, infrastructure like S3 buckets or SQS queues)
+- **One story per user story** from the PRD (numbered to match the PRD)
+
+**Story naming convention**: Prefix with a short code derived from the PRD title:
+- "Contract Upload" -> `[CU-1.1]`
+- "Squad Planning" -> `[SP-1.1]`
+- Use your judgment for other PRDs
+
+Each story description should include:
+- The user story statement from the PRD
+- Acceptance criteria (from functional requirements)
+- Technical notes from engineering comments
+- Blockers with recommendation + alternatives (if any)
+
+Set `parent` to the Epic key to link stories to their epic.
+
+### Phase 5: Create Sub-tasks
+
+Delegate sub-task creation to **parallel agents** (one per epic or batch of stories) for efficiency.
+
+Each sub-task MUST:
+1. **Be scoped to exactly ONE repo** — indicated in brackets in the summary: `[repo-name]`
+2. **Include an Empirical Verification Plan** — real user-like verification, NOT unit tests, linting, or typechecking
+
+**Verification plan examples by stack:**
+- **Backend APIs**: curl GraphQL/REST calls with auth token, database queries, checking audit entries
+- **Frontend web**: Playwright browser tests (login with test user, navigate, interact, screenshot)
+- **Infrastructure**: `cdk synth` / `terraform plan` verification, CLI checks after deploy
+
+Use the test user credentials from config for all verification plans.
+
+Set `parent` to the Story key. Use `issueTypeName: "Sub-task"`.
+
+### Phase 6: Report Results
+
+After all tickets are created, present a summary table to the user:
+- All Epics with keys and URLs
+- All Stories grouped by Epic
+- All Sub-tasks grouped by Story with repo tags
+- Repo distribution (how many tasks per repo)
+- Blockers list with recommendations and alternatives
+- Cross-PRD dependencies
+
+## Handling Ambiguities and Blockers
+
+When you encounter something the PRD + comments + codebase can't resolve:
+
+1. **Don't guess** — mark the ticket with a BLOCKER section
+2. **Include your recommendation** with rationale (what you'd pick and why)
+3. **List 2-3 alternatives** so the user/product can choose
+4. **State what's needed to unblock** (e.g., "Product decision on X", "Sync with Y on Z")
+
+Common blocker categories:
+- Missing data model decisions (field types, entity relationships)
+- Permission model unclear (who can view/edit)
+- UX decisions not finalized (display format, input mechanism)
+- Cross-team dependencies (shared infrastructure, coordinated rollouts)
+- Preset values not defined (enums, tags, labels)
+
+## Agent Prompt Template for Sub-task Creation
+
+When delegating to agents, provide this context:
+
+```text
+Create JIRA sub-tasks in the [PROJECT] project at [CLOUD_ID].
+Use issueTypeName "Sub-task" and set parent to the story key.
+Use contentFormat "markdown".
+
+Test user info: [credentials from config]
+
+Each sub-task must:
+1. Be scoped to ONE repo only
+2. Include an **Empirical Verification Plan** section
+3. Include the repo in brackets in the summary
+
+[Then list all sub-tasks grouped by parent story with details]
+```
+
+## Cross-PRD Shared Infrastructure
+
+Track tickets that are shared across PRDs to avoid duplication.
+When a sub-task overlaps with an existing ticket, reference it instead of creating a duplicate.
+Search JIRA for existing tickets in the project before creating new ones for shared infrastructure.
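The skill's configuration rule (ask the user for missing env vars instead of proceeding) can be sketched as a small preflight check; the `missingConfig` helper is illustrative, using the variable names from the skill's table:

```typescript
// The required variables from the skill's Configuration table.
const REQUIRED_VARS = [
  "JIRA_PROJECT",
  "JIRA_SERVER",
  "E2E_TEST_PHONE",
  "E2E_TEST_OTP",
  "E2E_TEST_ORG",
  "E2E_BASE_URL",
  "E2E_GRAPHQL_URL",
];

// Return the names that are unset or blank, so the caller can ask the
// user for exactly those values instead of guessing defaults.
function missingConfig(env: Record<string, string | undefined>): string[] {
  return REQUIRED_VARS.filter((name) => !env[name]?.trim());
}

const example = { JIRA_PROJECT: "SE", JIRA_SERVER: "mycompany.atlassian.net" };
console.log(missingConfig(example)); // the five unset E2E_* variables
```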
package/plugins/src/base/skills/nightly-add-test-coverage/SKILL.md
CHANGED
(identical changes to package/plugins/lisa/skills/nightly-add-test-coverage/SKILL.md above)

package/plugins/src/base/skills/nightly-improve-tests/SKILL.md
CHANGED
(identical changes to package/plugins/lisa/skills/nightly-improve-tests/SKILL.md above)

package/plugins/src/base/skills/nightly-lower-code-complexity/SKILL.md
CHANGED
(identical changes to package/plugins/lisa/skills/nightly-lower-code-complexity/SKILL.md above)

package/plugins/src/base/skills/notion-to-jira/SKILL.md
ADDED
(identical content to package/plugins/lisa/skills/notion-to-jira/SKILL.md above)