ai-workflow-init 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,18 @@
+ Compare current implementation against planning and implementation notes.
+
+ 1) Inputs:
+    - Ask for the feature name (if not provided)
+    - Then locate docs by feature name:
+      - Planning: `docs/ai/planning/feature-{name}.md`
+      - Implementation (optional): `docs/ai/implementation/feature-{name}.md`
+
+ 2) Validation Scope (no inference):
+    - Verify code follows the acceptance criteria from the planning doc
+    - Verify code matches the steps/changes in the implementation notes (when present)
+    - Do NOT invent or infer alternative logic beyond what the docs specify
+
+ 3) Output
+    - List concrete mismatches between code and docs
+    - List missing pieces the docs require but code lacks
+    - Short, actionable next steps
+
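+ Example output shape (illustrative only; the paths and findings below are placeholders, not from a real review):
+ ```
+ Mismatches:
+ - src/auth/login.ts: plan requires lockout after 5 failed attempts; code has no lockout branch
+ Missing:
+ - Planning doc lists a "remember me" acceptance criterion; no corresponding implementation found
+ Next steps:
+ - Add the lockout handling described in the planning doc, then re-run this check
+ ```
+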
@@ -0,0 +1,67 @@
+ You are helping me perform a local code review **before** I push changes. This review is restricted to standards conformance only.
+
+ ## Step 1: Gather Context (minimal)
+ - Ask for feature name if not provided (must be kebab-case).
+ - Then locate and read:
+   - Planning doc: `docs/ai/planning/feature-{name}.md` (for file list context only)
+   - Implementation doc: `docs/ai/implementation/feature-{name}.md` (for file list context only)
+
+ ## Step 2: Standards Focus (only)
+ - Load project standards:
+   - `docs/ai/project/CODE_CONVENTIONS.md`
+   - `docs/ai/project/PROJECT_STRUCTURE.md`
+ - Review code strictly for violations against these two documents.
+ - **Do NOT** provide design opinions, performance guesses, or alternative architectures.
+ - **Do NOT** infer requirements beyond what standards explicitly state.
+
+ ## Step 3: File-by-File Review (standards violations only)
+ For every relevant file, report ONLY standards violations per the two docs above. Do not assess broader design or run git commands.
+
+ ## Step 4: Cross-Cutting Concerns (standards only)
+ - Naming consistency and adherence to CODE_CONVENTIONS
+ - Structure/module boundaries per PROJECT_STRUCTURE
+
+ ### Standards Conformance Report (required)
+ - After reviewing `CODE_CONVENTIONS.md` and `PROJECT_STRUCTURE.md`, list violations in this exact format:
+ ```
+ - path/to/file.ext — [Rule]: short description of the violated rule
+ ```
+ - Only include clear violations. Group similar violations by file when helpful.
+
+ ## Step 5: Summarize Findings (rules-focused)
+ Provide results in this structure:
+ ```
+ ### Summary
+ - Blocking issues: [count]
+ - Important follow-ups: [count]
+ - Nice-to-have improvements: [count]
+
+ Severity criteria:
+ - Blocking: Violates CODE_CONVENTIONS or PROJECT_STRUCTURE causing build/test failure, security risk, or clear architectural breach.
+ - Important: Violations that don't break the build but degrade maintainability/performance or developer ergonomics.
+ - Nice-to-have: Style/consistency improvements with low impact.
+
+ ### Detailed Notes
+ 1. **[File or Component]**
+    - Issue/Observation: ...
+    - Impact: (blocking / important / nice-to-have)
+    - Rule violated: [Rule name from CODE_CONVENTIONS or PROJECT_STRUCTURE]
+    - Recommendation: Fix to comply with [Rule]
+
+ 2. ... (repeat per finding)
+
+ ### Recommended Next Steps
+ - [ ] Address blocking issues
+ - [ ] Fix important violations
+ - [ ] Consider nice-to-have improvements
+ - [ ] Re-run code review command after fixes
+ ```
+
+ ## Step 6: Final Checklist (rules-focused)
+ Confirm whether each item is complete (yes/no/needs follow-up):
+ - Naming and formatting adhere to CODE_CONVENTIONS
+ - Structure and boundaries adhere to PROJECT_STRUCTURE
+
+ ---
+
+ Let me know when you're ready to begin the review.
@@ -0,0 +1,71 @@
+
+ ## Goal
+ Generate a planning doc at `docs/ai/planning/feature-{name}.md` using the template, with minimal, actionable content aligned to the 4-phase workflow.
+
+ ## Step 1: Clarify Scope (Focused Q&A Guidelines)
+ Purpose: the agent MUST generate a short, numbered Q&A for the user to clarify scope; keep it relevant, avoid off-topic questions, and do not build a static question bank.
+
+ Principles:
+ - Quickly classify context: a) Micro-UI, b) Page/Flow, c) Service/API/Data, d) Cross-cutting.
+ - Ask only what is missing to produce Goal, Tasks, Risks, and DoD. Keep to 3–7 questions.
+ - Do not re-ask what the user already stated; if ambiguous, confirm briefly (yes/no or single choice).
+ - Keep each question short and single-purpose; avoid multi-part questions.
+ - Answers may be a/b/c or free text; the agent is not required to present fixed option lists.
+
+ Output format for Q&A:
+ - Number questions sequentially starting at 1 (e.g., "1.", "2.").
+ - Under each question, provide 2–4 suggested options labeled with lowercase letters + ")" (e.g., "a)", "b)").
+ - Keep options short (≤7 words) and add an "other" when useful.
+ - Example:
+   1. UI library?
+      a) TailwindCSS b) Bootstrap c) SCSS d) Other
+
+ Scope checklist to cover (ask only missing items, based on context):
+ 1) Problem & Users: the core problem and target user groups.
+ 2) In-scope vs Out-of-scope: what is included and excluded (e.g., MVP, no i18n, no payments).
+ 3) Acceptance Criteria (GWT): 2–3 key Given–When–Then scenarios.
+ 4) Constraints & Dependencies: technical constraints, libraries, real API vs mock, deadlines, external deps.
+ 5) Risks & Assumptions: known risks and key assumptions.
+ 6) Tasks Overview: 3–7 high-level work items.
+ 7) Definition of Done: completion criteria (build/test/docs/review).
+
+ Adaptive behavior:
+ - Always reduce questions to what is necessary; once Goal/Tasks/Risks/DoD can be written, stop asking.
+ - Prioritize clarifying scope and acceptance criteria before implementation details.
+ - If the user already specified items (framework, API/Mock, deadlines, etc.), confirm briefly only.
+
+ Then collect inputs (after Q&A):
+ - Feature name (kebab-case, e.g., `user-authentication`)
+ - Short goal and scope
+ - High-level tasks overview (3–7 items)
+ - Definition of Done (build/test/review/docs)
+
+ ## Step 2: Load Templates
+ **Before creating docs, read the following files:**
+ - `docs/ai/planning/feature-template.md` - Template structure to follow
+ - `docs/ai/testing/feature-template.md` - Template structure for test plan skeleton
+
+ These templates define the required structure and format. Use them as the baseline for creating feature docs.
+
+ ## Step 3: Draft the Plan (auto-generate)
+ Using the Q&A results and templates, immediately generate the plan without asking for confirmation.
+
+ Auto-name feature:
+ - Derive `feature-{name}` from user prompt + Q&A (kebab-case, concise, specific).
+ - Example: "Login Page (HTML/CSS)" → `feature-login-page`.
+ - If a file with the same name already exists, append a numeric suffix: `feature-{name}-2`, `feature-{name}-3`, ... (see the sketch below).
+
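+ A minimal naming sketch (Node.js assumed purely for illustration; the slug rule here is a suggestion, not a mandated implementation):
+ ```js
+ const { existsSync } = require('fs');
+
+ // Turn a free-form title into the kebab-case plan file name described above.
+ function featureFileName(title, dir = 'docs/ai/planning') {
+   const slug = title
+     .toLowerCase()
+     .replace(/[^a-z0-9]+/g, '-') // non-alphanumerics become single dashes
+     .replace(/^-+|-+$/g, '');    // trim leading/trailing dashes
+   let name = `feature-${slug}`;
+   // Append -2, -3, ... while a plan with this name already exists.
+   for (let i = 2; existsSync(`${dir}/${name}.md`); i += 1) {
+     name = `feature-${slug}-${i}`;
+   }
+   return `${name}.md`;
+ }
+
+ console.log(featureFileName('Login Page')); // -> feature-login-page.md
+ ```
+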
+ Create the following files automatically and populate initial content:
+ - `docs/ai/planning/feature-{name}.md` - Use structure from `feature-template.md`
+ - `docs/ai/testing/feature-{name}.md` - Use structure from testing `feature-template.md` (skeleton)
+
+ Do NOT create the implementation file at this step.
+ Notify the user when done.
+
+ Produce a Markdown doc following the template structure:
+ - Match sections from `docs/ai/planning/feature-template.md`
+ - Fill in content from Q&A inputs
+ - Ensure all required sections are present
+
+ ## Step 4: Next Actions
+ Suggest running `execute-plan` to begin task execution.
@@ -0,0 +1,66 @@
+ ## Goal
+ Execute the feature plan by implementing tasks and persisting notes to docs.
+
+ ### Prerequisites
+ - Feature name (kebab-case, e.g., `user-authentication`)
+ - Planning doc exists: `docs/ai/planning/feature-{name}.md`
+
+ ## Step 1: Gather Context
+ - Ask for feature name if not provided (must be kebab-case).
+ - Load plan: `docs/ai/planning/feature-{name}.md`.
+ - **Load template:** Read `docs/ai/implementation/feature-template.md` to understand required structure.
+ - Ensure implementation doc exists or will be created: `docs/ai/implementation/feature-{name}.md`.
+
+ ## Step 2: Build Task Queue
+ - Parse tasks (checkboxes `[ ]`, `[x]`) from the plan (see the sketch below).
+ - Build prioritized task queue (top-to-bottom unless dependencies block).
+ - Identify blocked tasks and note reasons.
+
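+ A minimal parsing sketch (Node.js assumed for illustration; any equivalent approach works):
+ ```js
+ const { readFileSync } = require('fs');
+
+ // Collect unchecked tasks from the plan, keeping the plan's top-to-bottom order.
+ function buildTaskQueue(planPath) {
+   const lines = readFileSync(planPath, 'utf8').split('\n');
+   const tasks = [];
+   for (const line of lines) {
+     const match = line.match(/^\s*-\s*\[( |x)\]\s*(.+)$/i);
+     if (match) {
+       tasks.push({ done: match[1].toLowerCase() === 'x', title: match[2].trim() });
+     }
+   }
+   return tasks.filter((task) => !task.done);
+ }
+
+ // Example with a hypothetical plan file:
+ console.log(buildTaskQueue('docs/ai/planning/feature-user-authentication.md'));
+ ```
+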
+ ## Step 3: Implement Iteratively (per task)
+ For each task in queue:
+ 1. Plan minimal change set:
+    - Identify files/regions to modify
+    - Map changes to acceptance criteria from plan
+ 2. Implement changes:
+    - Write/edit code according to the plan
+    - Keep changes minimal and incremental
+    - Avoid speculative changes beyond plan scope
+ 3. Quick validation:
+    - Run build/compile if available
+    - Run fast unit/smoke tests if available
+    - Fix immediate issues before proceeding
+ 4. Persist notes to implementation doc:
+    - File: `docs/ai/implementation/feature-{name}.md`
+    - Append entry per completed task:
+      - Files touched: `path/to/file.ext` (lines: x–y)
+      - Approach/pattern used: brief description
+      - Edge cases handled: list if any
+      - Risks/notes: any concerns
+ 5. Update planning doc:
+    - Mark completed tasks `[x]` with brief note
+    - Mark blocked tasks with reason
+
+ ## Step 4: Implementation Doc Structure
+ **Before creating the implementation doc, ensure you have read:**
+ - `docs/ai/implementation/feature-template.md` - Template structure to follow
+
+ If creating implementation doc for first task:
+ - Use the structure from `feature-template.md` exactly
+ - Create `docs/ai/implementation/feature-{name}.md` with:
+   - `# Implementation Notes: {Feature Name}`
+   - `## Summary` - Brief description of overall approach
+   - `## Changes` - Per-task entries with file paths, line ranges, approach
+   - `## Edge Cases` - List of handled edge cases
+   - `## Follow-ups` - TODOs or deferred work
+ - Follow the template format strictly
+
+ ## Step 5: Next Actions
+ After completing tasks:
+ - Suggest running `code-review` to verify against standards
+ - Suggest running `writing-test` if edge cases need coverage
+ - Suggest running `check-implementation` to validate alignment with plan
+
+ ## Notes
+ - Keep code changes minimal and focused on plan tasks
+ - Document all changes in implementation doc for later review/refactor
+ - Avoid implementing features not in the plan
@@ -0,0 +1,72 @@
+ ## Goal
+ Generate or update `docs/ai/project/CODE_CONVENTIONS.md` and `PROJECT_STRUCTURE.md` from the current codebase with brief Q&A refinement.
+
+ ## Step 1: Clarify Scope (3–6 questions max)
+ Quick classification and targeted questions:
+ - Languages/frameworks detected: confirm correct? (a/b/other)
+ - Import/style tools in use: (a) ESLint/Prettier (b) Other formatter (c) None
+ - Test placement preference: (a) Colocated `*.spec.*` (b) `__tests__/` directory (c) Other
+ - Error handling strategy: (a) Exceptions/try-catch (b) Result types (c) Other
+ - Module organization: (a) By feature (b) By layer (c) Mixed
+ - Any performance/security constraints to encode? (yes/no, brief)
+
+ Keep questions short and single-purpose. Stop once sufficient info is gathered.
+
+ ## Step 2: Auto-Discovery
+ Analyze the repository to infer the following (a minimal discovery sketch follows this list):
+ - Dominant naming patterns:
+   - Variables/functions: camelCase/PascalCase/snake_case
+   - Classes/types: PascalCase
+   - Constants: CONSTANT_CASE/UPPER_SNAKE_CASE
+ - Import patterns:
+   - Import order (node/builtin, third-party, internal)
+   - Grouping style
+ - Typical folder structure:
+   - Organization under `src/` (by feature/by layer/mixed)
+   - Common directories (components/, utils/, services/, etc.)
+ - Test file locations/naming if present:
+   - Colocated patterns
+   - Test directory structure
+ - Common patterns observed:
+   - Repository/Service patterns
+   - Factory patterns
+   - Strategy patterns
+   - Other architectural patterns
+
+ ## Step 3: Draft Standards
37
+ Generate two documents:
38
+
39
+ ### CODE_CONVENTIONS.md
40
+ - Naming conventions (variables, functions, classes, constants)
41
+ - Import order and grouping
42
+ - Formatting tools (ESLint/Prettier/etc.) if detected
43
+ - Function size and complexity guidelines
44
+ - Error handling strategy (exceptions/result types)
45
+ - Test rules (unit first, integration when needed)
46
+ - Comments policy (only for complex logic)
47
+ - Async/await patterns if applicable
48
+
49
+ ### PROJECT_STRUCTURE.md
50
+ - Folder layout summary:
51
+ - `src/`: source code organization
52
+ - `docs/ai/**`: documentation structure
53
+ - Module boundaries and dependency direction
54
+ - Design patterns actually observed in codebase
55
+ - Test placement and naming conventions
56
+ - Config/secrets handling summary
57
+
58
+ ## Step 4: Persist
59
+ - Overwrite or create:
60
+ - `docs/ai/project/CODE_CONVENTIONS.md`
61
+ - `docs/ai/project/PROJECT_STRUCTURE.md`
62
+ - Add header note: "This document is auto-generated from codebase analysis + brief Q&A. Edit manually as needed."
63
+
64
+ ## Step 5: Next Actions
65
+ - Suggest running `code-review` to validate new standards are being followed
66
+ - Inform user they can manually edit these files anytime
67
+
68
+ ## Notes
69
+ - Focus on patterns actually present in codebase, not ideal patterns
70
+ - Keep generated docs concise and actionable
71
+ - User can refine standards manually after generation
72
+
@@ -0,0 +1,54 @@
+ Use `docs/ai/testing/feature-{name}.md` as the source of truth.
+
+ ## Step 1: Gather Context (minimal)
+ - Ask for feature name if not provided (must be kebab-case).
+ - **Load template:** Read `docs/ai/testing/feature-template.md` to understand required structure.
+ - Then locate docs by convention:
+   - Planning: `docs/ai/planning/feature-{name}.md`
+   - Implementation (optional): `docs/ai/implementation/feature-{name}.md`
+
+ Always align test cases with acceptance criteria from the planning doc. If implementation notes are missing, treat planning as the single source of truth.
+
+ ## Step 2: Scope (simple tests only)
+ - Focus on pure functions, small utilities, and isolated component logic.
+ - Test edge cases and logic branches.
+ - **Do NOT** write complex rendering tests, E2E flows, or integration tests requiring heavy setup.
+ - Keep tests simple and fast to run via command line.
+
+ ## Step 3: Generate Unit Tests
+ - List scenarios: happy path, edge cases, error cases.
+ - Propose concrete test cases/snippets per main function/module (see the example spec below).
+ - Ensure tests are deterministic (avoid external IO when possible).
+ - Focus on logic validation, not complex UI rendering or user flows.
+
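+ An illustrative spec (the `formatPrice` utility and the Vitest runner are assumptions used only to show the shape of a small, deterministic test):
+ ```js
+ import { describe, expect, it } from 'vitest';
+ import { formatPrice } from '../src/utils/format-price'; // hypothetical pure utility
+
+ describe('formatPrice', () => {
+   it('formats a whole number (happy path)', () => {
+     expect(formatPrice(1000)).toBe('1,000.00');
+   });
+
+   it('handles zero (edge case)', () => {
+     expect(formatPrice(0)).toBe('0.00');
+   });
+
+   it('rejects negative input (error case)', () => {
+     expect(() => formatPrice(-1)).toThrow();
+   });
+ });
+ ```
+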
+ ## Step 4: Placement & Commands
+ - Place tests per `PROJECT_STRUCTURE.md`:
+   - Colocated `*.spec.*` files with source, or
+   - Under `__tests__/` mirroring source structure
+ - Provide run commands consistent with project:
+   - Example: `npm test -- --run <pattern>` or language-specific runner
+   - Ensure tests can be run individually for quick iteration
+ - Keep each test file small and focused.
+
+ ## Step 5: Coverage Strategy (lightweight)
+ - Suggest coverage command if available (e.g., `npm test -- --coverage`).
+ - Default targets (adjust if project-specific):
+   - Lines: 80%
+   - Branches: 70%
+ - Highlight files still lacking coverage for critical paths.
+
+ ## Step 6: Update Testing Doc
+ - Use structure from `docs/ai/testing/feature-template.md` to populate `docs/ai/testing/feature-{name}.md`.
+ - Fill cases/snippets/coverage notes following the template sections:
+   - `## Unit Tests`
+   - `## Integration Tests`
+   - `## Manual Checklist`
+   - `## Coverage Targets`
+ - Keep the document brief and actionable.
+ - Include run commands for quick verification.
+ - Ensure all required sections from template are present.
+
+ ## Notes
+ - Tests should be simple enough to run quickly and verify logic correctness.
+ - Avoid complex test setup or mocking unless necessary.
+ - Focus on catching logic errors and edge cases, not testing frameworks or flows.
package/.npmignore ADDED
@@ -0,0 +1,2 @@
+ node_modules
+ .git
package/AGENTS.md ADDED
@@ -0,0 +1,21 @@
+ # AI DevKit Rules (Personal Workflow)
+
+ ## Documentation Structure
+ - `docs/ai/project/` — Project docs (PROJECT_STRUCTURE, CODE_CONVENTIONS, patterns)
+ - `docs/ai/planning/` — Feature plans (`feature-{name}.md`)
+ - `docs/ai/implementation/` — Implementation notes per feature (`feature-{name}.md`)
+ - `docs/ai/testing/` — Test plans per feature (`feature-{name}.md`)
+
+ ## Development Workflow (4 phases)
+ 1. Plan: define goals, scope, and acceptance criteria
+ 2. Implementation: code against acceptance criteria, small and shippable changes
+ 3. Testing: cover main flows and critical edges
+ 4. Review: self-review + tool-assisted review, then ship
+
+ ## Code Style & Standards
+ - Follow `docs/ai/project/CODE_CONVENTIONS.md`
+ - Use clear names and keep comments focused on non-obvious logic
+
+ ## AI Interaction Guidelines
+ - Reference the active feature docs in planning/implementation/testing
+ - Keep docs concise and aligned with actual work
package/README.md ADDED
@@ -0,0 +1,58 @@
+ # AI DevKit Workflow Quick Start
+
+ This repository contains standardized project documentation, workflow templates, and AI agent configuration for structured development in any codebase.
+
+ ## How to Integrate into Any Project
+
+ There are three ways to quickly add the workflow docs & agent commands into any repo:
+
+ ---
+
+ ## 1. Using degit (**Recommended, cross-platform**)
+
+ > **Requires:** [Node.js](https://nodejs.org/) (ships with npx on Windows, Ubuntu, and macOS; degit is fetched on demand via npx)
+
+ **Step 1:** Install degit *(optional; you can use `npx degit` directly)*
+ ```bash
+ npm install -g degit
+ ```
+
+ **Step 2:** Run (in your target repo root):
+ ```bash
+ npx degit your-username/ai-devkit-workflow-template/docs/ai docs/ai
+ npx degit your-username/ai-devkit-workflow-template/.cursor/commands .cursor/commands
+ ```
+ - This copies all workflow docs and AI command templates into the current project.
+ - Works in Windows PowerShell, CMD, Git Bash, WSL, and Ubuntu/macOS terminals.
+
+ ---
+
+ ## 2. Using npx script (when available)
+
+ > **Requires:** Node.js (>= v14). No other global install required.
+
+ When a published init package is available, just run:
+ ```bash
+ npx @your-org/ai-devkit-workflow-init
+ ```
+ - The script will automatically fetch the workflow files and set up your repo.
+
+ ---
+
+ ## 3. Manual method (fallback)
+
+ Clone this repo and copy files:
+ ```bash
+ git clone https://github.com/your-username/ai-devkit-workflow-template.git ai-template-tmp
+ cp -r ai-template-tmp/docs/ai ./docs/
+ cp -r ai-template-tmp/.cursor/commands ./.cursor/
+ rm -rf ai-template-tmp
+ ```
+
+ ---
+ ## Notes
+ - All commands work identically on Windows, Ubuntu, and macOS if Node.js is installed.
+ - If you want to contribute updates to templates, fork this repo!
+
+ ---
+ *See INTEGRATE.md in this repo for more advanced integration and troubleshooting.*
package/cli.js ADDED
@@ -0,0 +1,48 @@
+ #!/usr/bin/env node
+
+ const { execSync } = require('child_process');
+ const { existsSync, mkdirSync } = require('fs');
+
+ // Your source workflow repo
+ const REPO = 'phananhtuan09/ai-agent-workflow';
+ const RAW_BASE = 'https://raw.githubusercontent.com/phananhtuan09/ai-agent-workflow/main';
+
+ // Helper for printing step logs
+ function step(msg) {
+   console.log('\x1b[36m%s\x1b[0m', msg); // cyan
+ }
+
+ // Run a command; on failure, report it and exit
+ function run(cmd) {
+   try {
+     execSync(cmd, { stdio: 'inherit' });
+   } catch (e) {
+     console.error('❌ Failed:', cmd);
+     process.exit(1);
+   }
+ }
+
+ // Create the target folders if they don't exist yet
+ if (!existsSync('docs/ai')) {
+   mkdirSync('docs/ai', { recursive: true });
+ }
+ if (!existsSync('.cursor/commands')) {
+   mkdirSync('.cursor/commands', { recursive: true });
+ }
+
+ step('🚚 Downloading workflow template (docs/ai)...');
+ run(`npx degit ${REPO}/docs/ai docs/ai --force`);
+
+ step('🚚 Downloading agent commands (.cursor/commands)...');
+ run(`npx degit ${REPO}/.cursor/commands .cursor/commands --force`);
+
+ step('🚚 Downloading AGENTS.md ...');
+ // degit cannot fetch a single file, so use the raw GitHub URL instead.
+ // Call execSync directly here so a curl failure falls through to the wget
+ // fallback instead of exiting inside run().
+ try {
+   execSync(`curl -fsSL ${RAW_BASE}/AGENTS.md -o AGENTS.md`, { stdio: 'inherit' });
+ } catch (_) {
+   // Fallback for environments without curl
+   run(`wget -qO AGENTS.md ${RAW_BASE}/AGENTS.md`);
+ }
+
+ step('✅ All AI workflow docs, command templates, and AGENTS.md have been copied!');
+ console.log('\n🌱 You can now use your AI workflow! Edit docs/ai/ and AGENTS.md as needed.\n');
@@ -0,0 +1,46 @@
+ ---
+ phase: implementation
+ title: Implementation Documentation
+ description: Feature implementation notes and tracking
+ ---
+
+ # Implementation Documentation
+
+ ## Purpose
+ This directory contains implementation notes for individual features. These docs track what code was written, why, and how it aligns with the plan.
+
+ ## Implementation Documentation Workflow
+
+ ### Creating Implementation Notes
+ Implementation notes are created automatically during `execute-plan`:
+ - Command: `.cursor/commands/execute-plan.md`
+ - Output: `docs/ai/implementation/feature-{name}.md`
+ - Template: `docs/ai/implementation/feature-template.md`
+
+ ### Implementation Doc Structure
+ Each implementation doc follows the template structure:
+ - **Summary**: Brief description of overall solution approach
+ - **Changes**: Per-task entries with:
+   - File paths and line ranges
+   - Approach/pattern used
+   - Brief description of changes
+ - **Edge Cases**: List of handled edge cases
+ - **Follow-ups**: TODOs or deferred work
+
+ ### Why These Notes Matter
+ Implementation docs serve as:
+ - **Audit trail**: What code was written for this feature
+ - **Review reference**: For code-review and check-implementation commands
+ - **Refactor guide**: Understanding what was done for future improvements
+
+ ## Template Reference
+ See `feature-template.md` for the exact structure required for implementation notes.
+
+ ## Related Documentation
+ - Planning docs: `../planning/`
+ - Test plans: `../testing/`
+ - Project standards: `../project/`
+
+ ---
+
+ **Note**: For general implementation patterns and best practices, refer to `../project/CODE_CONVENTIONS.md` and `../project/PROJECT_STRUCTURE.md`.
@@ -0,0 +1,14 @@
+ # Implementation Notes: {Feature Name}
+
+ ## Summary
+ - Short description of the solution approach
+
+ ## Changes
+ - File: path/to/file1 (lines: x–y) — solution / API / pattern used
+ - File: path/to/file2 (lines: a–b) — change summary
+
+ ## Edge Cases
+ - List of handled edge cases
+
+ ## Follow-ups
+ - TODOs or deferred work
@@ -0,0 +1,42 @@
+ ---
+ phase: planning
+ title: Planning Documentation
+ description: Feature planning docs and workflow guidelines
+ ---
+
+ # Planning Documentation
+
+ ## Purpose
+ This directory contains planning documents for individual features. Each feature follows a structured planning process before implementation.
+
+ ## Feature Planning Workflow
+
+ ### Creating a Feature Plan
+ Use the `create-plan` command to generate a new feature plan:
+ - Command: `.cursor/commands/create-plan.md`
+ - Output: `docs/ai/planning/feature-{name}.md`
+ - Template: `docs/ai/planning/feature-template.md`
+
+ ### Feature Plan Structure
+ Each feature plan follows the template structure:
+ - **Goal**: Objectives, scope, acceptance criteria (Given-When-Then format)
+ - **Tasks**: Checklist of high-level work items (3–7 tasks)
+ - **Risks/Assumptions**: Key risks and assumptions
+ - **Metrics / Definition of Done**: Completion criteria (build/test/review/docs)
+
+ ### Feature Plan Naming Convention
+ - Format: `feature-{name}.md` (kebab-case)
+ - Example: `feature-user-authentication.md`
+ - If duplicate name exists, append suffix: `feature-{name}-2.md`
+
+ ## Template Reference
+ See `feature-template.md` for the exact structure required for feature plans.
+
+ ## Related Documentation
+ - Implementation notes: `../implementation/`
+ - Test plans: `../testing/`
+ - Project standards: `../project/`
+
+ ---
+
+ **Note**: For project-level planning (milestones, phases, timelines), use a separate project planning document outside this directory structure.
@@ -0,0 +1,15 @@
+ # Plan: {Feature Name}
+
+ ## Goal
+ - Objectives, scope, acceptance criteria (Given-When-Then)
+
+ ## Tasks (overview)
+ - [ ] Task 1
+ - [ ] Task 2
+ - [ ] Task 3
+
+ ## Risks/Assumptions
+ - Key risks / assumptions
+
+ ## Metrics / Definition of Done
+ - Build green, tests added, review passed, docs updated
@@ -0,0 +1,31 @@
+ # Code Conventions
+
+ > This document can be auto-generated via `generate-standards`. Edit manually as needed.
+
+ ## Naming
+ - camelCase: variables, functions
+ - PascalCase: classes, types
+ - CONSTANT_CASE: constants
+
+ ## Structure
+ - One primary concept per file
+ - Prefer functions < 50 lines; refactor into helpers when needed
+
+ ## Error Handling & Logging
+ - Throw errors with clear messages
+ - Use appropriate log levels: debug/info/warn/error
+
+ ## Tests
+ - Write unit tests first; add integration tests when necessary
+ - Test names should describe behavior
+
+ ## Comments
+ - Only for complex logic or non-obvious decisions
+
+ ## Guiding Questions (for AI regeneration)
+ - What languages/frameworks are used, and do they impose naming/structure patterns?
+ - What are the preferred file/module boundaries for this project?
+ - What error handling strategy is standard (exceptions vs result types), and logging levels?
+ - What is the minimum test level required per change (unit/integration/E2E)?
+ - Are there performance/security constraints that influence coding style?
+ - Any repository-wide conventions (imports order, lint rules, formatting tools)?
@@ -0,0 +1,27 @@
+ # Project Structure
+
+ > This document can be auto-generated via `generate-standards`. Edit manually as needed.
+
+ ## Folders
+ - src/: source code
+ - docs/ai/project/: project docs (structure, conventions, patterns)
+ - docs/ai/planning/: feature plans
+ - docs/ai/implementation/: implementation notes per feature
+ - docs/ai/testing/: test plans per feature
+
+ ## Design Patterns (in use)
+ - Pattern A: short description + when to use
+ - Pattern B: short description + when to use
+
+ ## Notes
+ - Import/module conventions
+ - Config & secrets handling (if applicable)
+
+ ## Guiding Questions (for AI regeneration)
+ - How is the codebase organized by domain/feature vs layers?
+ - What are the module boundaries and dependency directions to preserve?
+ - Which design patterns are officially adopted and where?
+ - Where do configs/secrets live and how are they injected?
+ - What is the expected test file placement and naming?
+ - Any build/deployment constraints affecting structure (monorepo, packages)?
+
@@ -0,0 +1,65 @@
+ ---
+ phase: project
+ title: Project Standards
+ description: Code conventions and project structure standards
+ ---
+
+ # Project Standards
+
+ ## Purpose
+ This directory contains project-wide standards that govern code quality, structure, and conventions. These standards are used by the `code-review` command to validate code.
+
+ ## Standards Documentation
+
+ ### CODE_CONVENTIONS.md
+ Defines coding standards including:
+ - Naming conventions (variables, functions, classes, constants)
+ - Import order and formatting
+ - Function size and complexity guidelines
+ - Error handling strategy
+ - Test rules
+ - Comments policy
+
+ ### PROJECT_STRUCTURE.md
+ Defines project organization including:
+ - Folder layout and directory structure
+ - Module boundaries and dependency direction
+ - Design patterns in use
+ - Test file placement and naming
+ - Config/secrets handling
+
+ ## Generating Standards
+
+ ### Auto-Generation
+ Use the `generate-standards` command to auto-generate these files from your codebase:
+ - Command: `.cursor/commands/generate-standards.md`
+ - Output: Updates `CODE_CONVENTIONS.md` and `PROJECT_STRUCTURE.md`
+ - Process: Analyzes codebase + brief Q&A to infer standards
+
+ ### Manual Editing
+ Both files can be edited manually at any time. They are marked with a note indicating they can be auto-generated.
+
+ ## Usage in Workflow
+
+ ### Code Review
+ The `code-review` command strictly checks code against these two standards:
+ - Only reports violations of explicit rules
+ - Does not provide design opinions or performance guesses
+ - Focuses on conformance to standards
+
+ ### Implementation
+ When implementing features (`execute-plan`), follow these standards:
+ - Adhere to naming conventions
+ - Respect module boundaries
+ - Follow error handling patterns
+ - Place tests according to structure rules
+
+ ## Related Documentation
+ - Feature planning: `../planning/`
+ - Implementation notes: `../implementation/`
+ - Test plans: `../testing/`
+
+ ---
+
+ **Note**: These standards are project-specific and should reflect actual patterns in your codebase. Regenerate when the codebase evolves significantly.
+
@@ -0,0 +1,51 @@
+ ---
+ phase: testing
+ title: Testing Documentation
+ description: Feature test plans and testing guidelines
+ ---
+
+ # Testing Documentation
+
+ ## Purpose
+ This directory contains test plans for individual features. These docs focus on simple, fast-running tests to verify logic correctness.
+
+ ## Testing Workflow
+
+ ### Creating Test Plans
+ Use the `writing-test` command to generate test plans:
+ - Command: `.cursor/commands/writing-test.md`
+ - Output: `docs/ai/testing/feature-{name}.md`
+ - Template: `docs/ai/testing/feature-template.md`
+
+ ### Test Plan Structure
+ Each test plan follows the template structure:
+ - **Unit Tests**: Simple test cases for functions/components
+   - Happy path scenarios
+   - Edge cases
+   - Error cases
+ - **Integration Tests**: Simple component interaction tests (if needed)
+ - **Manual Checklist**: Steps for manual verification
+ - **Coverage Targets**: Coverage goals and gaps
+
+ ### Testing Philosophy
+ - **Focus**: Pure functions, small utilities, isolated component logic
+ - **Speed**: Tests must run quickly via command line
+ - **Simplicity**: Avoid complex rendering tests, E2E flows, or heavy setup
+ - **Purpose**: Catch logic errors and edge cases, not test frameworks
+
+ ### Coverage Targets
+ Default targets (adjust if project-specific):
+ - Lines: 80%
+ - Branches: 70%
+
+ ## Template Reference
+ See `feature-template.md` for the exact structure required for test plans.
+
+ ## Related Documentation
+ - Planning docs: `../planning/`
+ - Implementation notes: `../implementation/`
+ - Project standards: `../project/`
+
+ ---
+
+ **Note**: For complex E2E tests, performance testing, or bug tracking strategies, document these separately or in project-level documentation.
@@ -0,0 +1,16 @@
+ # Test Plan: {Feature Name}
+
+ ## Unit Tests
+ - Case 1: ...
+ - Case 2: ...
+ - Edge: ...
+
+ ## Integration Tests
+ - Main flow: ...
+ - Failure modes: ...
+
+ ## Manual Checklist
+ - Steps for manual verification
+
+ ## Coverage Targets
+ - Coverage goals and remaining gaps
package/package.json ADDED
@@ -0,0 +1,22 @@
+ {
+   "name": "ai-workflow-init",
+   "version": "1.0.0",
+   "description": "Initialize AI workflow docs & commands into any repo with one command",
+   "bin": {
+     "ai-workflow-init": "./cli.js"
+   },
+   "author": "your-name",
+   "license": "MIT",
+   "repository": {
+     "type": "git",
+     "url": "git+https://github.com/phananhtuan09/ai-agent-workflow.git"
+   },
+   "homepage": "https://github.com/phananhtuan09/ai-agent-workflow#readme",
+   "keywords": ["ai", "workflow", "docs", "degit", "init"],
+   "dependencies": {
+     "degit": "^2.8.4"
+   },
+   "engines": {
+     "node": ">=14"
+   }
+ }
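
Note: the `bin` field above maps the `ai-workflow-init` command to `cli.js`, so once the package is published it can be run from a repo root as `npx ai-workflow-init`; the scoped `@your-org/ai-devkit-workflow-init` name shown in the README is a placeholder.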