ai-devkit 0.4.2 → 0.5.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (53)
  1. package/CHANGELOG.md +1 -1
  2. package/README.md +2 -1
  3. package/dist/lib/TemplateManager.d.ts +5 -0
  4. package/dist/lib/TemplateManager.d.ts.map +1 -1
  5. package/dist/lib/TemplateManager.js +24 -3
  6. package/dist/lib/TemplateManager.js.map +1 -1
  7. package/dist/types.d.ts +1 -1
  8. package/dist/types.d.ts.map +1 -1
  9. package/dist/util/env.d.ts.map +1 -1
  10. package/dist/util/env.js +7 -1
  11. package/dist/util/env.js.map +1 -1
  12. package/package.json +2 -1
  13. package/templates/commands/capture-knowledge.md +4 -0
  14. package/templates/commands/check-implementation.md +4 -0
  15. package/templates/commands/code-review.md +4 -0
  16. package/templates/commands/debug.md +4 -0
  17. package/templates/commands/execute-plan.md +4 -0
  18. package/templates/commands/new-requirement.md +4 -0
  19. package/templates/commands/review-design.md +4 -0
  20. package/templates/commands/review-requirements.md +4 -0
  21. package/templates/commands/simplify-implementation.md +148 -0
  22. package/templates/commands/update-planning.md +4 -0
  23. package/templates/commands/writing-test.md +4 -0
  24. package/dist/__tests__/lib/Config.test.d.ts +0 -2
  25. package/dist/__tests__/lib/Config.test.d.ts.map +0 -1
  26. package/dist/__tests__/lib/Config.test.js +0 -281
  27. package/dist/__tests__/lib/Config.test.js.map +0 -1
  28. package/dist/__tests__/lib/EnvironmentSelector.test.d.ts +0 -2
  29. package/dist/__tests__/lib/EnvironmentSelector.test.d.ts.map +0 -1
  30. package/dist/__tests__/lib/EnvironmentSelector.test.js +0 -117
  31. package/dist/__tests__/lib/EnvironmentSelector.test.js.map +0 -1
  32. package/dist/__tests__/lib/PhaseSelector.test.d.ts +0 -2
  33. package/dist/__tests__/lib/PhaseSelector.test.d.ts.map +0 -1
  34. package/dist/__tests__/lib/PhaseSelector.test.js +0 -77
  35. package/dist/__tests__/lib/PhaseSelector.test.js.map +0 -1
  36. package/dist/__tests__/lib/TemplateManager.test.d.ts +0 -2
  37. package/dist/__tests__/lib/TemplateManager.test.d.ts.map +0 -1
  38. package/dist/__tests__/lib/TemplateManager.test.js +0 -351
  39. package/dist/__tests__/lib/TemplateManager.test.js.map +0 -1
  40. package/dist/__tests__/util/env.test.d.ts +0 -2
  41. package/dist/__tests__/util/env.test.d.ts.map +0 -1
  42. package/dist/__tests__/util/env.test.js +0 -166
  43. package/dist/__tests__/util/env.test.js.map +0 -1
  44. package/templates/commands/capture-knowledge.toml +0 -49
  45. package/templates/commands/check-implementation.toml +0 -21
  46. package/templates/commands/code-review.toml +0 -83
  47. package/templates/commands/debug.toml +0 -48
  48. package/templates/commands/execute-plan.toml +0 -74
  49. package/templates/commands/new-requirement.toml +0 -129
  50. package/templates/commands/review-design.toml +0 -13
  51. package/templates/commands/review-requirements.toml +0 -11
  52. package/templates/commands/update-planning.toml +0 -63
  53. package/templates/commands/writing-test.toml +0 -46
package/templates/commands/code-review.toml
@@ -1,83 +0,0 @@
- description='''Perform a local code review before pushing changes, ensuring
- alignment with design docs and best practices.'''
- prompt='''# Local Code Review Assistant
-
- You are helping me perform a local code review **before** I push changes. Please follow this structured workflow.
-
- ## Step 1: Gather Context
- Ask me for:
- - Brief feature/branch description
- - List of modified files (with optional summaries)
- - Relevant design doc(s) (e.g., `docs/ai/design/feature-{name}.md` or project-level design)
- - Any known constraints or risky areas
- - Any open bugs or TODOs linked to this work
- - Which tests have already been run
-
- If possible, request the latest diff:
- ```bash
- git status -sb
- git diff --stat
- ```
-
- ## Step 2: Understand Design Alignment
- For each provided design doc:
- - Summarize the architectural intent
- - Note critical requirements, patterns, or constraints the design mandates
-
- ## Step 3: File-by-File Review
- For every modified file:
- 1. Highlight deviations from the referenced design or requirements
- 2. Spot potential logic or flow issues and edge cases
- 3. Identify redundant or duplicate code
- 4. Suggest simplifications or refactors (prefer clarity over cleverness)
- 5. Flag security concerns (input validation, secrets, auth, data handling)
- 6. Check for performance pitfalls or scalability risks
- 7. Ensure error handling, logging, and observability are appropriate
- 8. Note any missing comments or docs
- 9. Flag missing or outdated tests related to this file
-
- ## Step 4: Cross-Cutting Concerns
- - Verify naming consistency and adherence to project conventions
- - Confirm documentation/comments are updated where the behavior changed
- - Identify missing tests (unit, integration, E2E) needed to cover the changes
- - Ensure configuration/migration updates are captured if applicable
-
- ## Step 5: Summarize Findings
- Provide results in this structure:
- ```
- ### Summary
- - Blocking issues: [count]
- - Important follow-ups: [count]
- - Nice-to-have improvements: [count]
-
- ### Detailed Notes
- 1. **[File or Component]**
-    - Issue/Observation: ...
-    - Impact: (e.g., blocking / important / nice-to-have)
-    - Recommendation: ...
-    - Design reference: [...]
-
- 2. ... (repeat per finding)
-
- ### Recommended Next Steps
- - [ ] Address blocking issues
- - [ ] Update design/implementation docs if needed
- - [ ] Add/adjust tests:
-   - Unit:
-   - Integration:
-   - E2E:
- - [ ] Rerun local test suite
- - [ ] Re-run code review command after fixes
- ```
-
- ## Step 6: Final Checklist
- Confirm whether each item is complete (yes/no/needs follow-up):
- - Implementation matches design & requirements
- - No obvious logic or edge-case gaps remain
- - Redundant code removed or justified
- - Security considerations addressed
- - Tests cover new/changed behavior
- - Documentation/design notes updated
-
- ---
- Let me know when you're ready to begin the review.'''
package/templates/commands/debug.toml
@@ -1,48 +0,0 @@
- description='''Guide me through debugging a code issue by clarifying
- expectations, identifying gaps, and agreeing on a fix plan before changing
- code.'''
- prompt='''# Local Debugging Assistant
-
- Help me debug an issue by clarifying expectations, identifying gaps, and agreeing on a fix plan before changing code.
-
- ## Step 1: Gather Context
- Ask me for:
- - Brief issue description (what is happening?)
- - Expected behavior or acceptance criteria (what should happen?)
- - Current behavior and any error messages/logs
- - Recent related changes or deployments
- - Scope of impact (users, services, environments)
-
- ## Step 2: Clarify Reality vs Expectation
- - Restate the observed behavior vs the expected outcome
- - Confirm relevant requirements, tickets, or docs that define the expectation
- - Identify acceptance criteria for the fix (how we know it is resolved)
-
- ## Step 3: Reproduce & Isolate
- - Determine reproducibility (always, intermittent, environment-specific)
- - Capture reproduction steps or commands
- - Note any available tests that expose the failure
- - List suspected components, services, or modules
-
- ## Step 4: Analyze Potential Causes
- - Brainstorm plausible root causes (data, config, code regressions, external dependencies)
- - Gather supporting evidence (logs, metrics, traces, screenshots)
- - Highlight gaps or unknowns that need investigation
-
- ## Step 5: Surface Options
- - Present possible resolution paths (quick fix, deeper refactor, rollback, feature flag, etc.)
- - For each option, list pros/cons, risks, and verification steps
- - Consider required approvals or coordination
-
- ## Step 6: Confirm Path Forward
- - Ask which option we should pursue
- - Summarize chosen approach, required pre-work, and success criteria
- - Plan validation steps (tests, monitoring, user sign-off)
-
- ## Step 7: Next Actions & Tracking
- - Document tasks, owners, and timelines for the selected option
- - Note follow-up actions after deployment (monitoring, comms, postmortem if needed)
- - Encourage updating relevant docs/tests once resolved
-
- Let me know when you're ready to walk through the debugging flow.
- '''
package/templates/commands/execute-plan.toml
@@ -1,74 +0,0 @@
- description='''Execute a feature plan interactively, guiding me through each
- task while referencing relevant docs and updating status.'''
- prompt='''# Feature Plan Execution Assistant
-
- Help me work through a feature plan one task at a time.
-
- ## Step 1: Gather Context
- Ask me for:
- - Feature name (kebab-case, e.g., `user-authentication`)
- - Brief feature/branch description
- - Relevant planning doc path (default `docs/ai/planning/feature-{name}.md`)
- - Any supporting design/implementation docs (design, requirements, implementation)
- - Current branch and latest diff summary (`git status -sb`, `git diff --stat`)
-
- ## Step 2: Load the Plan
- Request the planning doc contents or offer commands like:
- ```bash
- cat docs/ai/planning/feature-<name>.md
- ```
- Parse sections that represent task lists (look for headings + checkboxes `[ ]`, `[x]`).
- Build an ordered queue of tasks grouped by section (e.g., Foundation, Core Features, Testing).
-
- ## Step 3: Present Task Queue
- Show an overview:
- ```
- ### Task Queue: <Feature Name>
- 1. [status] Section • Task title
- 2. ...
- ```
- Status legend: `todo`, `in-progress`, `done`, `blocked` (based on checkbox/notes if present).
-
- ## Step 4: Interactive Task Execution
- For each task in order:
- 1. Display the section/context, full bullet text, and any existing notes.
- 2. Suggest relevant docs to reference (requirements/design/implementation).
- 3. Ask: "Plan for this task?" Offer to outline sub-steps using the design doc.
- 4. Prompt to mark status (`done`, `in-progress`, `blocked`, `skipped`) and capture short notes/next steps.
- 5. Encourage code/document edits inside Cursor; offer commands/snippets when useful.
- 6. If blocked, record blocker info and move task to the end or into a "Blocked" list.
-
- ## Step 5: Update Planning Doc
- After each status change, generate a Markdown snippet the user can paste back into the planning doc, e.g.:
- ```
- - [x] Task: Implement auth service (Notes: finished POST /auth/login, tests added)
- ```
- Remind the user to keep the source doc updated.
-
- ## Step 6: Check for Newly Discovered Work
- After each section, ask if new tasks were discovered. If yes, capture them in a "New Work" list with status `todo` and include in the summary.
-
- ## Step 7: Session Summary
- Produce a summary table:
- ```
- ### Execution Summary
- - Completed: (list)
- - In Progress: (list + owners/next steps)
- - Blocked: (list + blockers)
- - Skipped / Deferred: (list + rationale)
- - New Tasks: (list)
- ```
-
- ## Step 8: Next Actions
- Remind the user to:
- - Update `docs/ai/planning/feature-{name}.md` with the new statuses
- - Sync related docs (requirements/design/implementation/testing) if decisions changed
- - Run `/check-implementation` to validate changes against design docs
- - Run `/writing-test` to produce unit/integration tests targeting 100% coverage
- - Run `/update-planning` to reconcile the planning doc with the latest status
- - Run `/code-review` when ready for final review
- - Run test suites relevant to completed tasks
-
- ---
- Let me know when you're ready to start executing the plan. Provide the feature
- name and planning doc first.'''
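Step 2 of the removed execute-plan template asks the assistant to turn headings plus `[ ]`/`[x]` checkboxes into an ordered task queue. A minimal sketch of that parsing, in TypeScript since the package ships compiled TS; the `PlanTask` type and `parsePlanningDoc` helper are illustrative only and not part of ai-devkit's API:

```typescript
// Illustrative sketch of the checkbox parsing the execute-plan template
// describes: group "[ ]" / "[x]" items under their nearest heading.
interface PlanTask {
  section: string;
  title: string;
  status: "todo" | "done";
}

function parsePlanningDoc(markdown: string): PlanTask[] {
  const tasks: PlanTask[] = [];
  let section = "General"; // tasks before any heading land here
  for (const line of markdown.split("\n")) {
    const heading = line.match(/^#{1,6}\s+(.*)/);
    if (heading) {
      section = heading[1].trim();
      continue;
    }
    // "- [ ] title" or "* [x] title", with optional leading indent
    const box = line.match(/^\s*[-*]\s+\[([ xX])\]\s+(.*)/);
    if (box) {
      tasks.push({
        section,
        title: box[2].trim(),
        status: box[1] === " " ? "todo" : "done",
      });
    }
  }
  return tasks;
}
```

Statuses beyond `todo`/`done` (`in-progress`, `blocked`) would have to come from trailing notes on each bullet, as the template's status legend suggests.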
package/templates/commands/new-requirement.toml
@@ -1,129 +0,0 @@
- description='''Add new feature/requirement documentation and guide me through
- the development workflow from requirements to testing.'''
- prompt='''I want to add a new feature/requirement. Please guide me through the complete development workflow:
-
- ## Step 1: Capture Requirement
- First, ask me:
- - What is the feature name? (e.g., "user-authentication", "payment-integration")
- - What problem does it solve?
- - Who will use it?
- - What are the key user stories?
-
- ## Step 2: Create Feature Documentation Structure
- Once I provide the requirement, create the following files (copy the existing template content so sections/frontmatter match exactly):
- - Start from `docs/ai/requirements/README.md` → save as `docs/ai/requirements/feature-{name}.md`
- - Start from `docs/ai/design/README.md` → save as `docs/ai/design/feature-{name}.md`
- - Start from `docs/ai/planning/README.md` → save as `docs/ai/planning/feature-{name}.md`
- - Start from `docs/ai/implementation/README.md` → save as `docs/ai/implementation/feature-{name}.md`
- - Start from `docs/ai/testing/README.md` → save as `docs/ai/testing/feature-{name}.md`
-
- Ensure the YAML frontmatter and section headings remain identical to the templates before filling in feature-specific content.
-
- ## Step 3: Requirements Phase
- Help me fill out `docs/ai/requirements/feature-{name}.md`:
- - Clarify the problem statement
- - Define goals and non-goals
- - Write detailed user stories
- - Establish success criteria
- - Identify constraints and assumptions
- - List open questions
-
- ## Step 4: Design Phase
- Guide me through `docs/ai/design/feature-{name}.md`:
- - Propose system architecture changes needed
- - Define data models/schema changes
- - Design API endpoints or interfaces
- - Identify components to create/modify
- - Document key design decisions
- - Note security and performance considerations
-
- ## Step 5: Planning Phase
- Help me break down work in `docs/ai/planning/feature-{name}.md`:
- - Create task breakdown with subtasks
- - Identify dependencies (on other features, APIs, etc.)
- - Estimate effort for each task
- - Suggest implementation order
- - Identify risks and mitigation strategies
-
- ## Step 6: Documentation Review (Chained Commands)
- Once the docs above are drafted, run the following commands to tighten them up:
- - `/review-requirements` to validate the requirements doc for completeness and clarity
- - `/review-design` to ensure the design doc aligns with requirements and highlights key decisions
-
- (If you are using Claude Code, reference the `review-requirements` and `review-design` commands instead.)
-
- ## Step 7: Implementation Phase (Deferred)
- This command focuses on documentation only. Actual implementation happens later via `/execute-plan`.
- For each task in the plan:
- 1. Review the task requirements and design
- 2. Ask me to confirm I'm starting this task
- 3. Guide implementation with reference to design docs
- 4. Suggest code structure and patterns
- 5. Help with error handling and edge cases
- 6. Update `docs/ai/implementation/feature-{name}.md` with notes
-
- ## Step 8: Testing Phase
- Guide testing in `docs/ai/testing/feature-{name}.md`:
- - Draft unit test cases with `/writing-test`
- - Draft integration test scenarios with `/writing-test`
- - Recommend manual testing steps
- - Help write test code
- - Verify all success criteria are testable
-
- ## Step 9: Local Testing & Verification
- Guide me through:
- 1. Running all tests locally
- 2. Manual testing checklist
- 3. Reviewing against requirements
- 4. Checking design compliance
- 5. Preparing for code review (diff summary, list of files, design references)
-
- ## Step 10: Local Code Review (Optional but recommended)
- Before pushing, ask me to run `/code-review` with the modified file list and relevant docs.
-
- ## Step 11: Implementation Execution Reminder
- When ready to implement, run `/execute-plan` to work through the planning doc tasks interactively. That command will orchestrate implementation, testing, and follow-up documentation.
-
- ## Step 12: Create Merge/Pull Request
- Provide the MR/PR description:
- ```markdown
- ## Feature: [Feature Name]
-
- ### Summary
- [Brief description of what this feature does]
-
- ### Requirements
- - Documented in: `docs/ai/requirements/feature-{name}.md`
- - Related to: [issue/ticket number if applicable]
-
- ### Changes
- - [List key changes]
- - [List new files/components]
- - [List modified files]
-
- ### Design
- - Architecture: [Link to design doc section]
- - Key decisions: [Brief summary]
-
- ### Testing
- - Unit tests: [coverage/status]
- - Integration tests: [status]
- - Manual testing: Completed
- - Test documentation: `docs/ai/testing/feature-{name}.md`
-
- ### Checklist
- - [ ] Code follows project standards
- - [ ] All tests pass
- - [ ] Documentation updated
- - [ ] No breaking changes (or documented if any)
- - [ ] Ready for review
- ```
-
- Then provide the appropriate command:
- - **GitHub**: `gh pr create --title "feat: [feature-name]" --body-file pr-description.md`
- - **GitLab**: `glab mr create --title "feat: [feature-name]" --description "$(cat mr-description.md)"`
-
- ---
-
- **Let's start! Tell me about the feature you want to build.**
- '''
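Step 2 of the removed new-requirement template copies each phase's README template to a feature-specific file. A TypeScript sketch of that scaffolding under stated assumptions: `scaffoldFeatureDocs` and `PHASES` are hypothetical names, and ai-devkit's real template handling (in `TemplateManager`, per the files-changed list above) may work differently:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Illustrative sketch of Step 2's scaffolding: copy each phase's README
// template to docs/ai/<phase>/feature-<name>.md, leaving the template's
// frontmatter and section headings intact. Not ai-devkit's actual API.
const PHASES = ["requirements", "design", "planning", "implementation", "testing"];

function scaffoldFeatureDocs(root: string, name: string): string[] {
  const created: string[] = [];
  for (const phase of PHASES) {
    const template = path.join(root, "docs/ai", phase, "README.md");
    const target = path.join(root, "docs/ai", phase, `feature-${name}.md`);
    if (!fs.existsSync(template)) continue; // skip phases without a template
    fs.copyFileSync(template, target);
    created.push(target);
  }
  return created;
}
```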
package/templates/commands/review-design.toml
@@ -1,13 +0,0 @@
- description='''Review the design documentation for a feature to ensure
- completeness and accuracy.'''
- prompt='''Review the design documentation in `docs/ai/design/feature-{name}.md` (and the project-level README if relevant). Summarize:
- - Architecture overview (ensure mermaid diagram is present and accurate)
- - Key components and their responsibilities
- - Technology choices and rationale
- - Data models and relationships
- - API/interface contracts (inputs, outputs, auth)
- - Major design decisions and trade-offs
- - Non-functional requirements that must be preserved
-
- Highlight any inconsistencies, missing sections, or diagrams that need updates.
- '''
package/templates/commands/review-requirements.toml
@@ -1,11 +0,0 @@
- description='''Review the requirements documentation for a feature to ensure
- completeness and alignment with project standards.'''
- prompt='''Please review `docs/ai/requirements/feature-{name}.md` and the project-level template `docs/ai/requirements/README.md` to ensure structure and content alignment. Summarize:
- - Core problem statement and affected users
- - Goals, non-goals, and success criteria
- - Primary user stories & critical flows
- - Constraints, assumptions, open questions
- - Any missing sections or deviations from the template
-
- Identify gaps or contradictions and suggest clarifications.
- '''
package/templates/commands/update-planning.toml
@@ -1,63 +0,0 @@
- description='''Assist in updating planning documentation to reflect current
- implementation progress for a feature.'''
- prompt='''# Planning Update Assistant
-
- Please help me reconcile the current implementation progress with our planning documentation.
-
- ## Step 1: Gather Context
- Ask me for:
- - Feature/branch name and brief status
- - Tasks completed since last update
- - Any new tasks discovered
- - Current blockers or risks
- - Relevant planning docs (e.g., `docs/ai/planning/feature-{name}.md`)
-
- ## Step 2: Review Planning Doc
- If a planning doc exists:
- - Summarize existing milestones and task breakdowns
- - Note expected sequencing and dependencies
- - Identify outstanding tasks in the plan
-
- ## Step 3: Reconcile Progress
- For each planned task:
- - Mark status (done / in progress / blocked / not started)
- - Note actual work completed vs. planned scope
- - Record blockers or changes in approach
- - Identify tasks that were skipped or added
-
- ## Step 4: Update Task List
- Help me produce an updated checklist such as:
- ```
- ### Current Status: [Feature Name]
-
- #### Done
- - [x] Task A - short note on completion or link to commit/pr
- - [x] Task B
-
- #### In Progress
- - [ ] Task C - waiting on [dependency]
-
- #### Blocked
- - [ ] Task D - blocked by [issue/owner]
-
- #### Newly Discovered Work
- - [ ] Task E - reason discovered
- - [ ] Task F - due by [date]
- ```
-
- ## Step 5: Next Steps & Priorities
- - Suggest the next 2-3 actionable tasks
- - Highlight risky areas needing attention
- - Recommend coordination (design changes, stakeholder sync, etc.)
- - List documentation updates needed
-
- ## Step 6: Summary for Planning Doc
- Prepare a summary paragraph to copy into the planning doc, covering:
- - Current state and progress
- - Major risks/blockers
- - Upcoming focus items
- - Any changes to scope or timeline
-
- ---
- Let me know when you're ready to begin the planning update.
- '''
package/templates/commands/writing-test.toml
@@ -1,46 +0,0 @@
- description='''Add tests for a new feature'''
- prompt='''Review `docs/ai/testing/feature-{name}.md` and ensure it mirrors the base template before writing tests.
-
- ## Step 1: Gather Context
- Ask me for:
- - Feature name and branch
- - Summary of what changed (link to design & requirements docs)
- - Target environment (backend, frontend, full-stack)
- - Existing automated test suites (unit, integration, E2E)
- - Any flaky or slow tests to avoid
-
- ## Step 2: Analyze Testing Template
- - Identify required sections from `docs/ai/testing/feature-{name}.md` (unit, integration, manual verification, coverage targets)
- - Confirm success criteria and edge cases from requirements & design docs
- - Note any mocks/stubs or fixtures already available
-
- ## Step 3: Unit Tests (Aim for 100% coverage)
- For each module/function:
- 1. List behavior scenarios (happy path, edge cases, error handling)
- 2. Generate concrete test cases with assertions and inputs
- 3. Reference existing utilities/mocks to accelerate implementation
- 4. Provide pseudocode or actual test snippets
- 5. Highlight potential missing branches preventing full coverage
-
- ## Step 4: Integration Tests
- 1. Identify critical flows that span multiple components/services
- 2. Define setup/teardown steps (databases, APIs, queues)
- 3. Outline test cases validating interaction boundaries, data contracts, and failure modes
- 4. Suggest instrumentation/logging to debug failures
-
- ## Step 5: Coverage Strategy
- - Recommend tooling commands (e.g., `npm run test -- --coverage`)
- - Call out files/functions that still need coverage and why
- - Suggest additional tests if coverage <100%
-
- ## Step 6: Manual & Exploratory Testing
- - Propose manual test checklist covering UX, accessibility, and error handling
- - Identify exploratory scenarios or chaos/failure injection tests if relevant
-
- ## Step 7: Update Documentation & TODOs
- - Summarize which tests were added or still missing
- - Update `docs/ai/testing/feature-{name}.md` sections with links to test files and results
- - Flag follow-up tasks for deferred tests (with owners/dates)
-
- Let me know when you have the latest code changes ready; we'll write tests together until we hit 100% coverage.
- '''
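Step 3 of the removed writing-test template ("list behavior scenarios, then generate concrete test cases with assertions and inputs") is easiest to picture as a table-driven test. A short sketch; `slugify` is a made-up example function for illustration, not ai-devkit code:

```typescript
// Illustrative table-driven layout for Step 3: one row per behavior
// scenario (happy path, edge case, error handling) for a single function.
// `slugify` is a hypothetical example, not part of the package.
function slugify(input: string): string {
  if (input.trim() === "") throw new Error("empty input");
  return input
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/(^-|-$)/g, "");
}

interface Case {
  name: string;
  input: string;
  expect?: string;   // omitted when the case expects a throw
  throws?: boolean;
}

const cases: Case[] = [
  { name: "happy path", input: "User Authentication", expect: "user-authentication" },
  { name: "edge: punctuation and padding", input: "  Hello, World! ", expect: "hello-world" },
  { name: "error: empty input", input: "   ", throws: true },
];
```

Each row then becomes one assertion in the test runner, which keeps the scenario list and the test code in sync.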