ai-devkit 0.4.0 → 0.4.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (37)
  1. package/CHANGELOG.md +28 -0
  2. package/README.md +10 -10
  3. package/dist/__tests__/lib/EnvironmentSelector.test.js +3 -3
  4. package/dist/__tests__/lib/EnvironmentSelector.test.js.map +1 -1
  5. package/dist/__tests__/lib/PhaseSelector.test.js +3 -3
  6. package/dist/__tests__/lib/PhaseSelector.test.js.map +1 -1
  7. package/dist/__tests__/lib/TemplateManager.test.d.ts +2 -0
  8. package/dist/__tests__/lib/TemplateManager.test.d.ts.map +1 -0
  9. package/dist/__tests__/lib/TemplateManager.test.js +351 -0
  10. package/dist/__tests__/lib/TemplateManager.test.js.map +1 -0
  11. package/dist/lib/EnvironmentSelector.js +1 -1
  12. package/dist/lib/EnvironmentSelector.js.map +1 -1
  13. package/dist/lib/PhaseSelector.js +1 -1
  14. package/dist/lib/PhaseSelector.js.map +1 -1
  15. package/dist/lib/TemplateManager.d.ts +3 -1
  16. package/dist/lib/TemplateManager.d.ts.map +1 -1
  17. package/dist/lib/TemplateManager.js +50 -23
  18. package/dist/lib/TemplateManager.js.map +1 -1
  19. package/dist/types.d.ts +2 -0
  20. package/dist/types.d.ts.map +1 -1
  21. package/dist/types.js.map +1 -1
  22. package/dist/util/env.d.ts.map +1 -1
  23. package/dist/util/env.js +3 -1
  24. package/dist/util/env.js.map +1 -1
  25. package/package.json +1 -1
  26. package/templates/commands/capture-knowledge.toml +49 -0
  27. package/templates/commands/check-implementation.toml +21 -0
  28. package/templates/commands/code-review.toml +83 -0
  29. package/templates/commands/debug.toml +48 -0
  30. package/templates/commands/execute-plan.toml +74 -0
  31. package/templates/commands/new-requirement.toml +129 -0
  32. package/templates/commands/review-design.toml +13 -0
  33. package/templates/commands/review-requirements.toml +11 -0
  34. package/templates/commands/update-planning.toml +63 -0
  35. package/templates/commands/writing-test.toml +46 -0
  36. package/templates/env/base.md +51 -0
  37. package/web/content/pages/vision.md +2 -0
package/templates/commands/review-requirements.toml
@@ -0,0 +1,11 @@
+ description='''Review the requirements documentation for a feature to ensure
+ completeness and alignment with project standards.'''
+ prompt='''Please review `docs/ai/requirements/feature-{name}.md` and the project-level template `docs/ai/requirements/README.md` to ensure structure and content alignment. Summarize:
+ - Core problem statement and affected users
+ - Goals, non-goals, and success criteria
+ - Primary user stories & critical flows
+ - Constraints, assumptions, open questions
+ - Any missing sections or deviations from the template
+
+ Identify gaps or contradictions and suggest clarifications.
+ '''
package/templates/commands/update-planning.toml
@@ -0,0 +1,63 @@
+ description='''Assist in updating planning documentation to reflect current
+ implementation progress for a feature.'''
+ prompt='''# Planning Update Assistant
+
+ Please help me reconcile the current implementation progress with our planning documentation.
+
+ ## Step 1: Gather Context
+ Ask me for:
+ - Feature/branch name and brief status
+ - Tasks completed since last update
+ - Any new tasks discovered
+ - Current blockers or risks
+ - Relevant planning docs (e.g., `docs/ai/planning/feature-{name}.md`)
+
+ ## Step 2: Review Planning Doc
+ If a planning doc exists:
+ - Summarize existing milestones and task breakdowns
+ - Note expected sequencing and dependencies
+ - Identify outstanding tasks in the plan
+
+ ## Step 3: Reconcile Progress
+ For each planned task:
+ - Mark status (done / in progress / blocked / not started)
+ - Note actual work completed vs. planned scope
+ - Record blockers or changes in approach
+ - Identify tasks that were skipped or added
+
+ ## Step 4: Update Task List
+ Help me produce an updated checklist such as:
+ ```
+ ### Current Status: [Feature Name]
+
+ #### Done
+ - [x] Task A - short note on completion or link to commit/pr
+ - [x] Task B
+
+ #### In Progress
+ - [ ] Task C - waiting on [dependency]
+
+ #### Blocked
+ - [ ] Task D - blocked by [issue/owner]
+
+ #### Newly Discovered Work
+ - [ ] Task E - reason discovered
+ - [ ] Task F - due by [date]
+ ```
+
+ ## Step 5: Next Steps & Priorities
+ - Suggest the next 2-3 actionable tasks
+ - Highlight risky areas needing attention
+ - Recommend coordination (design changes, stakeholder sync, etc.)
+ - List documentation updates needed
+
+ ## Step 6: Summary for Planning Doc
+ Prepare a summary paragraph to copy into the planning doc, covering:
+ - Current state and progress
+ - Major risks/blockers
+ - Upcoming focus items
+ - Any changes to scope or timeline
+
+ ---
+ Let me know when you're ready to begin the planning update.
+ '''
package/templates/commands/writing-test.toml
@@ -0,0 +1,46 @@
+ description='''Add tests for a new feature'''
+ prompt='''Review `docs/ai/testing/feature-{name}.md` and ensure it mirrors the base template before writing tests.
+
+ ## Step 1: Gather Context
+ Ask me for:
+ - Feature name and branch
+ - Summary of what changed (link to design & requirements docs)
+ - Target environment (backend, frontend, full-stack)
+ - Existing automated test suites (unit, integration, E2E)
+ - Any flaky or slow tests to avoid
+
+ ## Step 2: Analyze Testing Template
+ - Identify required sections from `docs/ai/testing/feature-{name}.md` (unit, integration, manual verification, coverage targets)
+ - Confirm success criteria and edge cases from requirements & design docs
+ - Note any mocks/stubs or fixtures already available
+
+ ## Step 3: Unit Tests (Aim for 100% coverage)
+ For each module/function:
+ 1. List behavior scenarios (happy path, edge cases, error handling)
+ 2. Generate concrete test cases with assertions and inputs
+ 3. Reference existing utilities/mocks to accelerate implementation
+ 4. Provide pseudocode or actual test snippets
+ 5. Highlight potential missing branches preventing full coverage
+
+ ## Step 4: Integration Tests
+ 1. Identify critical flows that span multiple components/services
+ 2. Define setup/teardown steps (databases, APIs, queues)
+ 3. Outline test cases validating interaction boundaries, data contracts, and failure modes
+ 4. Suggest instrumentation/logging to debug failures
+
+ ## Step 5: Coverage Strategy
+ - Recommend tooling commands (e.g., `npm run test -- --coverage`)
+ - Call out files/functions that still need coverage and why
+ - Suggest additional tests if coverage <100%
+
+ ## Step 6: Manual & Exploratory Testing
+ - Propose manual test checklist covering UX, accessibility, and error handling
+ - Identify exploratory scenarios or chaos/failure injection tests if relevant
+
+ ## Step 7: Update Documentation & TODOs
+ - Summarize which tests were added or still missing
+ - Update `docs/ai/testing/feature-{name}.md` sections with links to test files and results
+ - Flag follow-up tasks for deferred tests (with owners/dates)
+
+ Let me know when you have the latest code changes ready; we'll write tests together until we hit 100% coverage.
+ '''
package/templates/env/base.md
@@ -0,0 +1,51 @@
+ # AI DevKit Rules
+
+ ## Project Context
+ This project uses ai-devkit for structured AI-assisted development. Phase documentation is located in `docs/ai/`.
+
+ ## Documentation Structure
+ - `docs/ai/requirements/` - Problem understanding and requirements
+ - `docs/ai/design/` - System architecture and design decisions (include mermaid diagrams)
+ - `docs/ai/planning/` - Task breakdown and project planning
+ - `docs/ai/implementation/` - Implementation guides and notes
+ - `docs/ai/testing/` - Testing strategy and test cases
+ - `docs/ai/deployment/` - Deployment and infrastructure docs
+ - `docs/ai/monitoring/` - Monitoring and observability setup
+
+ ## Code Style & Standards
+ - Follow the project's established code style and conventions
+ - Write clear, self-documenting code with meaningful variable names
+ - Add comments for complex logic or non-obvious decisions
+
+ ## Development Workflow
+ - Review phase documentation in `docs/ai/` before implementing features
+ - Keep requirements, design, and implementation docs updated as the project evolves
+ - Reference the planning doc for task breakdown and priorities
+ - Copy the testing template (`docs/ai/testing/README.md`) before creating feature-specific testing docs
+
+ ## AI Interaction Guidelines
+ - When implementing features, first check relevant phase documentation
+ - For new features, start with requirements clarification
+ - Update phase docs when significant changes or decisions are made
+
+ ## Testing & Quality
+ - Write tests alongside implementation
+ - Follow the testing strategy defined in `docs/ai/testing/`
+ - Use `/writing-test` to generate unit and integration tests targeting 100% coverage
+ - Ensure code passes all tests before considering it complete
+
+ ## Documentation
+ - Update phase documentation when requirements or design changes
+ - Keep inline code comments focused and relevant
+ - Document architectural decisions and their rationale
+ - Use mermaid diagrams for any architectural or data-flow visuals (update existing diagrams if needed)
+ - Record test coverage results and outstanding gaps in `docs/ai/testing/`
+
+ ## Key Commands
+ When working on this project, you can run commands to:
+ - Understand project requirements and goals (`review-requirements`)
+ - Review architectural decisions (`review-design`)
+ - Plan and execute tasks (`execute-plan`)
+ - Verify implementation against design (`check-implementation`)
+ - Suggest missing tests (`suggest-tests`)
+ - Perform structured code reviews (`code-review`)
package/web/content/pages/vision.md
@@ -11,6 +11,8 @@ AI DevKit exists to bridge the gap between AI-assisted development and structure
 
  ## Our Purpose
 
+ It inherits the idea from [The New Engineering Workflow](https://ownthecraftbook.com/chapters/9-the-new-engineering-workflow/) in [AI changes the Tools, you still own the Craft](https://ownthecraftbook.com/): empowering engineers to integrate AI effectively into a structured, test-driven, and craftsmanship-focused development process.
+
  We believe that AI coding assistants are most powerful when they work within a well-defined engineering framework where documentation is clear, plans are systematic, and architecture is intentional.
 
  AI DevKit provides the scaffolding for that structure. It helps engineers reduce the back-and-forth of prompting, keep context in sync, and share memory between themselves and their AI tools.