@kienha/anti-chaotic 1.0.7 → 1.0.8
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.agent/rules/tests.md +55 -0
- package/.agent/workflows/break-tasks.md +82 -0
- package/.agent/workflows/debug.md +71 -0
- package/.agent/workflows/development.md +54 -0
- package/.agent/workflows/gen-tests.md +58 -0
- package/.agent/workflows/qa.md +59 -0
- package/README.md +14 -9
- package/bin/anti-chaotic.js +164 -50
- package/package.json +1 -1
package/.agent/rules/tests.md
ADDED
@@ -0,0 +1,55 @@
+---
+trigger: model_decision
+description: Always run tests to ensure no regressions when implementing new features or fixing bugs
+---
+
+# Testing and Regression Rule
+
+> [!IMPORTANT]
+> This rule is **MANDATORY** when implementing new features, fixing bugs, or performing major refactors. Ensuring system stability is a top priority.
+
+## Critical Rules (MUST Follow)
+
+1. **MUST** run existing tests before starting any work to establish a baseline.
+2. **MUST** run tests after completing changes to ensure no regressions were introduced.
+3. **MUST** add new tests for any new features implemented.
+4. **MUST** add reproduction tests for any bug fixes to ensure the bug does not return.
+5. **MUST** report test results (pass/fail) in the task summary and walkthrough.
+6. **MUST NOT** proceed to finalize a task if tests are failing, unless explicitly instructed by the user after explaining the failure and its impact.
+
+## Decision Flow
+
+```
+┌─────────────────────────────────────────────────────────────┐
+│ WHEN implementing a feature or fixing a bug:                │
+├─────────────────────────────────────────────────────────────┤
+│ 1. Run baseline tests.                                      │
+│    Are they passing?                                        │
+│    NO → Notify user of existing failures before proceeding. │
+│    YES → Continue.                                          │
+├─────────────────────────────────────────────────────────────┤
+│ 2. Implement changes (Feature/Fix/Refactor).                │
+├─────────────────────────────────────────────────────────────┤
+│ 3. Add/Update tests for the new code.                       │
+├─────────────────────────────────────────────────────────────┤
+│ 4. Run ALL relevant tests.                                  │
+│    Are they passing?                                        │
+│    NO → Analyze failures, fix code/tests, repeat step 4.    │
+│    YES → Continue.                                          │
+├─────────────────────────────────────────────────────────────┤
+│ 5. Document test results in Walkthrough/Task Summary.       │
+└─────────────────────────────────────────────────────────────┘
+```
+
+## Running Tests
+
+- Use `npm test` to run the test suite.
+- For specific tests, use `npm test -- <path_to_test_file>`.
+- If the project uses a different test runner (e.g., `pytest`, `jest`, `vitest`), adapt the command accordingly based on `package.json` or project documentation.
+
+## What to do on Failure
+
+1. **Analyze logs**: Look for the specific assertion failure or error message.
+2. **Determine Root Cause**: Is it a bug in the code, a bug in the test, or a change in requirements?
+3. **Fix and Re-run**: Apply the necessary fix and run the tests again.
+4. **Communicate**: If a failure is expected or cannot be fixed easily, notify the user with a detailed explanation.
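The decision flow in the new rule above reduces to a small state check. A minimal sketch in JavaScript; the function and field names (`nextAction`, `baselinePassing`, etc.) are illustrative assumptions, not part of the kit:

```javascript
// Illustrative restatement of the decision flow above. Given the current
// test status, return the next step the agent should take. All names here
// are hypothetical, not defined by the package.
function nextAction({ baselinePassing, changesDone = false, postChangePassing = false }) {
  if (!baselinePassing) return "notify-user-of-existing-failures"; // step 1, NO branch
  if (!changesDone) return "implement-changes";                    // steps 2-3
  if (!postChangePassing) return "analyze-failures-and-rerun";     // step 4, NO branch
  return "document-results";                                       // step 5
}
```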
package/.agent/workflows/break-tasks.md
ADDED
@@ -0,0 +1,82 @@
+---
+description: Orchestrates breaking down requirements into actionable tasks for implementation.
+---
+
+# Break Tasks Workflow
+
+> [!IMPORTANT]
+> **MANDATORY**: Follow `.agent/rules/documents.md` for all task-related documentation.
+
+---
+
+## MCP Usage Guidelines
+
+| MCP Tool                                     | When to Use                                               |
+| :------------------------------------------- | :-------------------------------------------------------- |
+| `mcp_sequential-thinking_sequentialthinking` | **REQUIRED** to break down requirements into atomic tasks |
+| `mcp_context7_query-docs`                    | To check best practices for specific technologies         |
+
+---
+
+## Step 1: Identify Source Document
+
+1. Locate the source document (PRD, User Story, Feature Spec, or SDD).
+2. If multiple versions exist, ask the user for clarification.
+3. Relevant folders to check:
+   - `docs/020-Requirements/`
+   - `docs/022-User-Stories/`
+   - `docs/030-Specs/`
+
+---
+
+## Step 2: Analyze Requirements
+
+// turbo
+
+1. **Invoke `[business-analysis]` skill** to extract key features and acceptance criteria.
+2. Use `sequential-thinking` to:
+   - Identify technical dependencies.
+   - Separate backend, frontend, and QA requirements.
+   - Spot ambiguous or missing details.
+3. List any clarifying questions for the user.
+4. **WAIT** for user clarification if needed.
+
+---
+
+## Step 3: Atomic Task Breakdown
+
+// turbo
+
+> 💡 **MCP**: **MUST** use `sequential-thinking` here to ensure tasks are atomic and manageable.
+
+1. **Invoke `[lead-architect]` skill** to create a structured task list.
+2. Group tasks by component or phase (e.g., Database, API, Logic, UI, Testing).
+3. For each task, include:
+   - Goal/Description.
+   - Acceptance Criteria.
+   - Estimated complexity (if applicable).
+4. Create a `task-breakdown.md` artifact representing the proposed sequence.
+
+---
+
+## Step 4: Finalize Task Documentation
+
+// turbo
+
+1. After user approves the `task-breakdown.md` artifact:
+2. Update the `task.md` of the current session or create a new task file in `docs/050-Tasks/`.
+3. If creating a new file, follow standard naming: `docs/050-Tasks/Task-{FeatureName}.md`.
+4. Update `docs/050-Tasks/Tasks-MOC.md`.
+5. Present the finalized task list to the user.
+
+---
+
+## Quick Reference
+
+| Role               | Skill                | Responsibility                          |
+| :----------------- | :------------------- | :-------------------------------------- |
+| Product Manager    | `product-manager`    | Requirement validation & prioritization |
+| Lead Architect     | `lead-architect`     | Technical breakdown & dependencies      |
+| Developer          | `backend-developer`  | Backend/API specific tasks              |
+| Frontend Developer | `frontend-developer` | UI/UX specific tasks                    |
+| QA Tester          | `qa-tester`          | Verification & Edge case tasks          |
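One atomic task from Step 3 of the workflow above might be represented as a plain object like the following. The field names are assumptions drawn from the bullet list in that step (goal, acceptance criteria, complexity), not a schema the kit defines:

```javascript
// Hypothetical shape for one entry in task-breakdown.md, based on the
// fields listed in Step 3 above. Not a schema defined by the package.
const exampleTask = {
  id: "API-003",                       // illustrative identifier
  group: "API",                        // component/phase grouping
  goal: "Add POST /sessions endpoint", // Goal/Description
  acceptanceCriteria: [
    "Returns 201 with a session token on valid credentials",
    "Returns 401 on invalid credentials",
  ],
  complexity: "M",                     // optional estimate
};
```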
package/.agent/workflows/debug.md
ADDED
@@ -0,0 +1,71 @@
+---
+description: Scientific debugging workflow: Hypothesize, Instrument, Reproduce, Analyze, Fix.
+---
+
+# Scientific Debug & Fix Workflow
+
+> [!IMPORTANT]
+> **GOAL**: Follow a scientific process to identify root causes with EVIDENCE before applying fixes.
+> **Best for**: Hard-to-reproduce bugs, race conditions, performance issues, and regressions.
+
+---
+
+## Step 1: Hypothesis Generation
+
+1. **Analyze the Issue**: Review the bug report and available context.
+2. **Generate Hypotheses**: Brainstorm multiple potential causes (e.g., "Race condition in data fetching", "Incorrect state update logic", "Edge case in input validation").
+3. **Select Top Candidates**: Prioritize the most likely hypotheses to investigate first.
+
+---
+
+## Step 2: Instrumentation
+
+1. **Plan Logging**: Decide WHERE key information is missing to validate your hypotheses.
+2. **Add Logging**: Instrument the code with targeted `console.log`, specific logger calls, or performance markers.
+   - _Goal_: Capture runtime state, variable values, and execution flow relevant to the hypotheses.
+   - _Tip_: Add unique prefixes to logs (e.g., `[DEBUG-HYPOTHESIS-1]`) for easy filtering.
+
+---
+
+## Step 3: Reproduction & Data Collection
+
+1. **Execution**: Run the application or test case to reproduce the bug.
+   - If a reproduction script doesn't exist, create one now if possible.
+2. **Collect Data**: Capture the output from your instrumentation.
+
+---
+
+## Step 4: Analysis & Root Cause
+
+1. **Analyze Evidence**: Look at the collected logs/data.
+   - Does the data confirm a hypothesis?
+   - Does it rule one out?
+2. **Pinpoint Root Cause**: Identify exactly _why_ the bug is happening based on the evidence.
+3. **Iterate (if needed)**: If inconclusive, return to Step 1 or 2 with new knowledge.
+
+---
+
+## Step 5: Targeted Implementation
+
+1. **Apply Fix**: Implement a targeted fix based _only_ on the confirmed root cause. Avoid "shotgun debugging" (changing things randomly).
+2. **Cleanup**: Remove the temporary debugging instrumentation.
+
+---
+
+## Step 6: Verification
+
+// turbo
+
+1. **Run Reproduction Test**: Verify that the bug is gone.
+2. **Run Regression Tests**: Run related unit tests to ensure no side effects.
+3. **Lint & Type Check**: Ensure code quality standards are met.
+
+---
+
+## Step 7: Finalize
+
+1. **Draft Commit**: Create a concise commit message (e.g., `fix(module): description of fix`).
+2. **Report**: Summarize the process:
+   - What was the hypothesis?
+   - What evidence confirmed it?
+   - How was it fixed?
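The prefixing tip in Step 2 of the debug workflow above can be wrapped in a tiny helper so all instrumentation for one hypothesis is greppable and removable in one pass. A minimal sketch; `makeHypothesisLogger` is a hypothetical name, not part of the kit:

```javascript
// Minimal sketch of the [DEBUG-HYPOTHESIS-n] prefixing tip above.
// The helper name and signature are illustrative assumptions.
function makeHypothesisLogger(id, sink = console.log) {
  const prefix = `[DEBUG-HYPOTHESIS-${id}]`;
  return (...args) => sink(prefix, ...args);
}

// Usage: const log1 = makeHypothesisLogger(1); log1("fetch state:", someState);
// Later, grep for DEBUG-HYPOTHESIS to strip all instrumentation at once.
```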
package/.agent/workflows/development.md
ADDED
@@ -0,0 +1,54 @@
+---
+description: General coding workflow for implementing changes, bug fixes, or minor features.
+---
+
+# Development Workflow
+
+> [!IMPORTANT]
+> **MANDATORY**: Always read `.agent/rules/documents.md` before creating or modifying any documentation related to development.
+
+---
+
+## Step 1: Analyze & Plan
+
+// turbo
+
+1. Understand the requirement or bug report.
+2. **MUST** use `mcp_sequential-thinking_sequentialthinking` to:
+   - Analyze the existing code structure.
+   - Design the solution.
+   - Identify potential edge cases and impacts.
+3. If the task is complex, create an `implementation_plan.md` artifact.
+4. **WAIT** for user confirmation if the plan involves major architectural changes.
+
+---
+
+## Step 2: Execute Code Changes
+
+// turbo
+
+1. Implement the planned changes iteratively.
+2. **Backend**: Update models, logic, and APIs as needed.
+3. **Frontend**: Update UI components and state management.
+4. Ensure code follows project standards and linting rules.
+
+---
+
+## Step 3: Verify & Test
+
+// turbo
+
+1. Run existing tests to ensure no regressions.
+2. Add new unit or integration tests for the changes.
+3. Perform manual verification (e.g., using the browser tool for UI changes).
+4. **MUST** document proof of work in a `walkthrough.md` artifact if the change is significant.
+
+---
+
+## Step 4: Finalize
+
+// turbo
+
+1. Update related documentation (MOCs, API specs, etc.).
+2. Clean up any temporary files or comments.
+3. Present a summary of changes and verification results.
package/.agent/workflows/gen-tests.md
ADDED
@@ -0,0 +1,58 @@
+---
+description: Generate unit, E2E, security, and performance tests using the qa-tester skill.
+---
+
+# Generate Tests Workflow
+
+> [!IMPORTANT]
+> **MANDATORY**: Apply `.agent/rules/documents.md` for all document creation and directory structure. All QA documents MUST be stored under `docs/035-QA/`.
+
+---
+
+## Step 1: Discovery & Strategy
+
+// turbo
+
+1. **Invoke `[qa-tester]` skill** to analyze the `docs/` folder and current codebase structure.
+2. Ask the user which type of tests they want to generate:
+   - **Unit Tests**: For specific functions or utilities (e.g., `tests/unit/`).
+   - **E2E Tests**: For user flows (e.g., `tests/e2e/`).
+   - **Security Tests**: For vulnerability assessments.
+   - **Performance Tests**: For load and responsiveness checks.
+3. Identify the specific files or features that need testing based on user input.
+
+---
+
+## Step 2: Test Plan & Case Generation
+
+// turbo
+
+1. **Invoke `[qa-tester]` skill** to create/update the Test Plan and Test Cases:
+   - For **Unit Tests**: Identify edge cases, boundary conditions, and happy paths.
+   - For **E2E Tests**: valid/invalid user flows.
+   - For **Security**: potential injection points, auth flaws.
+2. Generate the test documentation in `docs/035-QA/Test-Cases/` following the naming convention `TC-{Feature}-{NNN}.md`.
+3. Create a `draft-test-docs.md` artifact with the proposed test cases for review.
+
+---
+
+## Step 3: Test Code Generation
+
+1. **Wait** for user approval of the test cases.
+2. **Invoke `[qa-tester]` skill** to generate the actual test code.
+   - Use the project's existing testing framework (e.g., Jest, Playwright, Vitest).
+   - Ensure mocks and stubs are correctly implemented for unit tests.
+   - Ensure selectors and interaction steps are robust for E2E tests.
+3. Save the generated test code to the appropriate directories (e.g., `tests/unit/`, `tests/e2e/`).
+
+---
+
+## Step 4: Verification & Reporting
+
+1. Run the generated tests using the project's test runner.
+2. If tests fail:
+   - Analyze the failure.
+   - **Invoke `[qa-tester]` skill** to fix the test code or report the bug if it's a real issue.
+3. **Mandatory**:
+   - Update `docs/035-QA/QA-MOC.md`.
+   - Update `docs/000-Index.md` if needed.
package/.agent/workflows/qa.md
ADDED
@@ -0,0 +1,59 @@
+---
+description: Create comprehensive test case documents and test plans based on project requirements.
+---
+
+# QA Workflow
+
+> [!IMPORTANT]
+> **MANDATORY**: Apply `.agent/rules/documents.md` for all document creation and directory structure. All QA documents MUST be stored under `docs/035-QA/`.
+
+---
+
+## MCP Usage Guidelines
+
+| MCP Tool                                     | When to Use                                      |
+| :------------------------------------------- | :----------------------------------------------- |
+| `mcp_sequential-thinking_sequentialthinking` | Analyze complex application logic and edge cases |
+| `mcp_context7_query-docs`                    | Research testing frameworks or best practices    |
+
+---
+
+## Step 1: Requirement Discovery
+
+// turbo
+
+1. **Invoke `[qa-tester]` skill** to analyze the `docs/` folder.
+2. Identify features, constraints, and business logic that require testing.
+3. Map out:
+   - Happy Paths (Golden Flows)
+   - Negative Paths (Error handling)
+   - Boundary Cases
+   - Security/Performance considerations
+4. **WAIT** for user to confirm the list of scenarios to be documented.
+
+---
+
+## Step 2: Draft Test Documentation
+
+// turbo
+
+1. **Invoke `[qa-tester]` skill** to create:
+   - **Test Plan**: High-level strategy for the current release/feature (`docs/035-QA/Test-Plans/`).
+   - **Test Cases**: Detailed step-by-step cases (`docs/035-QA/Test-Cases/`).
+2. Follow the standard mapping in `.agent/rules/documents.md`:
+   - Test Plan naming: `MTP-{Name}.md`
+   - Test Case naming: `TC-{Feature}-{NNN}.md`
+3. Create a `draft-qa-docs.md` artifact for review.
+
+---
+
+## Step 3: Finalize and Organize
+
+// turbo
+
+1. After approval, save all files to their respective folders in `docs/035-QA/`.
+2. **Mandatory**:
+   - Update `docs/035-QA/QA-MOC.md`.
+   - Update `docs/000-Index.md` if needed.
+   - Ensure all frontmatter (id, type, status, created) is correctly populated according to `.agent/rules/documents.md`.
+3. Present summary of created tests.
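The naming conventions in the QA workflow above (`MTP-{Name}.md`, `TC-{Feature}-{NNN}.md`) can be made concrete with a small helper. These functions are hypothetical, for illustration only; in particular, the zero-padded three-digit width of `{NNN}` is an assumption:

```javascript
// Hypothetical helpers illustrating the QA file-naming conventions above.
// Assumes {NNN} is a zero-padded three-digit counter.
function testCaseFileName(feature, n) {
  return `TC-${feature}-${String(n).padStart(3, "0")}.md`;
}

function testPlanFileName(name) {
  return `MTP-${name}.md`;
}
```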
package/README.md
CHANGED
@@ -25,7 +25,7 @@ _We encourage teams to customize these skills and define their own rules & workf
 ### 📦 Components
 
 - 🧠 **12+ Multi-domain AI Skills** - From Product Manager, Business Analyst to Lead Architect.
-- 🔄 **
+- 🔄 **11 Automated Workflows** - Pre-defined, reusable work processes.
 - 📜 **Rules Engine** - A rule system that ensures AI Agents follow project standards.
 - 📚 **References Library** - Documentation references for various technologies.
 
@@ -139,14 +139,19 @@ Read and execute the workflow at .agent/workflows/brainstorm.md
 
 ## 🔄 Automated Workflows
 
-| Workflow | Description | Use Case
-| :----------------------- | :--------------------------------------------------------------------------------------------------------- |
-| **`/bootstrap`** | Sets up project structure, installs dependencies, and configures environment based on architectural specs. | Start of Implementation Phase.
-| **`/brainstorm`** | Analyze ideas with the user and create preliminary high-level documents (Roadmap, PRD). | Start of a new project or feature when you only have a rough idea.
-| **`/
-| **`/
-| **`/
-| **`/
+| Workflow                 | Description                                                                                                | Use Case                                                           |
+| :----------------------- | :--------------------------------------------------------------------------------------------------------- | :----------------------------------------------------------------- |
+| **`/bootstrap`**         | Sets up project structure, installs dependencies, and configures environment based on architectural specs. | Start of Implementation Phase.                                     |
+| **`/brainstorm`**        | Analyze ideas with the user and create preliminary high-level documents (Roadmap, PRD).                    | Start of a new project or feature when you only have a rough idea. |
+| **`/break-tasks`**       | Orchestrates breaking down requirements into actionable tasks for implementation.                          | When you have a PRD and need a task list.                          |
+| **`/custom-behavior`**   | Safely customize Agent rules and workflows with impact analysis and user confirmation.                     | Adjust Agent behavior or fix recurring mistakes.                   |
+| **`/debug`**             | Scientific debugging workflow: Hypothesize, Instrument, Reproduce, Analyze, Fix.                           | When facing complex bugs that need systematic analysis.            |
+| **`/development`**       | General coding workflow for implementing changes, bug fixes, or minor features.                            | Day-to-day coding tasks.                                           |
+| **`/documentation`**     | Generate comprehensive documentation (Architecture, API, Specs) from either Codebase or Requirements.      | Onboarding or creating detailed specs.                             |
+| **`/gen-tests`**         | Generate unit, E2E, security, and performance tests using the qa-tester skill.                             | Improving test coverage for new or existing code.                  |
+| **`/implement-feature`** | Orchestrates feature implementation from specification to deployment.                                      | End-to-end feature development.                                    |
+| **`/qa`**                | Create comprehensive test case documents and test plans based on project requirements.                     | Planning testing strategy for a feature.                           |
+| **`/ui-ux-design`**      | Transform requirements into comprehensive UI/UX design deliverables.                                       | After requirements are finalized, before coding.                   |
 
 ---
 
package/bin/anti-chaotic.js
CHANGED
@@ -4,11 +4,39 @@ const { program } = require("commander");
 const fs = require("fs-extra");
 const path = require("path");
 const chalk = require("chalk");
+const crypto = require("crypto");
+const os = require("os");
 
 program.version("0.0.1").description("Anti-Chaotic Agent Kit CLI");
 
 const REPO_URI = "kienhaminh/anti-chaotic/.agent";
 
+/**
+ * Calculates SHA-256 hash of a file
+ */
+async function getFileHash(filePath) {
+  const buffer = await fs.readFile(filePath);
+  return crypto.createHash("sha256").update(buffer).digest("hex");
+}
+
+/**
+ * Recursively gets all files in a directory
+ */
+async function getAllFiles(dirPath, baseDir = dirPath) {
+  let results = [];
+  const list = await fs.readdir(dirPath, { withFileTypes: true });
+
+  for (const item of list) {
+    const fullPath = path.join(dirPath, item.name);
+    if (item.isDirectory()) {
+      results = results.concat(await getAllFiles(fullPath, baseDir));
+    } else {
+      results.push(path.relative(baseDir, fullPath));
+    }
+  }
+  return results;
+}
+
 program
   .command("init")
   .description("Initialize the Anti-Chaotic Agent Kit (download from GitHub)")
@@ -26,22 +54,6 @@ program
         force: true,
       });
 
-      // Cleanup deprecated files
-      const deprecatedFiles = [
-        "rules/documentation.md",
-        "workflows/docs-from-codebase.md",
-        "workflows/requirement-analysis.md",
-        "workflows/setup-codebase.md",
-      ];
-
-      for (const file of deprecatedFiles) {
-        const filePath = path.join(targetAgentDir, file);
-        if (await fs.pathExists(filePath)) {
-          await fs.remove(filePath);
-          console.log(chalk.dim(`  Removed legacy file: ${file}`));
-        }
-      }
-
       console.log(
         chalk.green("✔ Successfully installed Anti-Chaotic Agent Kit."),
       );
@@ -56,56 +68,158 @@ program
   .command("update")
   .description("Update .agent configuration from GitHub")
   .action(async () => {
-    const
+    const projectRoot = process.cwd();
+    const targetAgentDir = path.join(projectRoot, ".agent");
+    const tempDir = path.join(os.tmpdir(), `anti-chaotic-update-${Date.now()}`);
     const { default: inquirer } = await import("inquirer");
 
     try {
-      if (await fs.pathExists(targetAgentDir)) {
-
-
-
-
-            message: `Directory ${targetAgentDir} already exists. Do you want to overwrite it?`,
-            default: false,
-          },
-        ]);
-
-        if (!confirm) {
-          console.log(chalk.yellow("Operation cancelled by user."));
-          return;
-        }
+      if (!(await fs.pathExists(targetAgentDir))) {
+        console.log(
+          chalk.red("✘ .agent directory not found. Please run 'init' first."),
+        );
+        return;
       }
 
-      console.log(
-        chalk.blue(`Updating Anti-Chaotic Agent Kit from ${REPO_URI}...`),
-      );
+      console.log(chalk.blue("Checking for updates from GitHub..."));
 
       const { downloadTemplate } = await import("giget");
       await downloadTemplate(`github:${REPO_URI}`, {
-        dir:
+        dir: tempDir,
        force: true,
      });
 
-
-      const
-
-
-
-
-
-      for (const file of
-        const
-
-
-
+      const localFiles = await getAllFiles(targetAgentDir);
+      const remoteFiles = await getAllFiles(tempDir);
+
+      const modified = [];
+      const added = [];
+      const deleted = [];
+
+      // Check for modified and deleted files
+      for (const file of localFiles) {
+        const localPath = path.join(targetAgentDir, file);
+        const remotePath = path.join(tempDir, file);
+
+        if (await fs.pathExists(remotePath)) {
+          const localHash = await getFileHash(localPath);
+          const remoteHash = await getFileHash(remotePath);
+          if (localHash !== remoteHash) {
+            modified.push(file);
+          }
+        } else {
+          deleted.push(file);
        }
      }
 
-
-
+      // Check for new files from remote
+      for (const file of remoteFiles) {
+        if (!localFiles.includes(file)) {
+          added.push(file);
+        }
+      }
+
+      if (modified.length === 0 && added.length === 0 && deleted.length === 0) {
+        console.log(
+          chalk.green("✔ Your .agent configuration is already up to date."),
+        );
+        await fs.remove(tempDir);
+        return;
+      }
+
+      console.log(chalk.yellow("\nUpdate Summary:"));
+
+      let filesToOverwrite = [];
+      let filesToAdd = [];
+      let filesToDelete = [];
+
+      if (modified.length > 0) {
+        const { selectedModified } = await inquirer.prompt([
+          {
+            type: "checkbox",
+            name: "selectedModified",
+            message:
+              "Select MODIFIED files to OVERWRITE (Unselected = Keep Local):",
+            choices: modified.map((f) => ({
+              name: f,
+              value: f,
+              checked: false,
+            })),
+            pageSize: 20,
+          },
+        ]);
+        filesToOverwrite = selectedModified;
+      }
+
+      if (added.length > 0) {
+        const { selectedAdded } = await inquirer.prompt([
+          {
+            type: "checkbox",
+            name: "selectedAdded",
+            message: "Select NEW files to ADD (Unselected = Skip):",
+            choices: added.map((f) => ({ name: f, value: f, checked: true })),
+            pageSize: 20,
+          },
+        ]);
+        filesToAdd = selectedAdded;
+      }
+
+      if (deleted.length > 0) {
+        const { selectedDeleted } = await inquirer.prompt([
+          {
+            type: "checkbox",
+            name: "selectedDeleted",
+            message:
+              "Select DELETED files to REMOVE locally (Unselected = Keep Local):",
+            choices: deleted.map((f) => ({ name: f, value: f, checked: true })),
+            pageSize: 20,
+          },
+        ]);
+        filesToDelete = selectedDeleted;
+      }
+
+      if (
+        filesToOverwrite.length === 0 &&
+        filesToAdd.length === 0 &&
+        filesToDelete.length === 0
+      ) {
+        console.log(chalk.yellow("No changes selected. Update cancelled."));
+        await fs.remove(tempDir);
+        return;
+      }
+
+      console.log(chalk.blue("\nApplying updates..."));
+
+      // Process Overwrites
+      for (const file of filesToOverwrite) {
+        const src = path.join(tempDir, file);
+        const dest = path.join(targetAgentDir, file);
+        await fs.copy(src, dest, { overwrite: true });
+        console.log(chalk.green(`  ✔ Overwritten: ${file}`));
+      }
+
+      // Process New Files
+      for (const file of filesToAdd) {
+        const src = path.join(tempDir, file);
+        const dest = path.join(targetAgentDir, file);
+        await fs.copy(src, dest, { overwrite: false });
+        console.log(chalk.green(`  ✔ Added: ${file}`));
+      }
+
+      // Process Deletions
+      for (const file of filesToDelete) {
+        const dest = path.join(targetAgentDir, file);
+        await fs.remove(dest);
+        console.log(chalk.red(`  ✘ Deleted: ${file}`));
+      }
+
+      console.log(chalk.green("\n✔ Selected updates applied successfully."));
+
+      // Cleanup
+      await fs.remove(tempDir);
    } catch (err) {
      console.error(chalk.red("✘ Error updating framework:"), err.message);
+      if (await fs.pathExists(tempDir)) await fs.remove(tempDir);
    }
  });
 
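The update command's comparison step above boils down to a three-way set difference over content hashes. Here is a pure-function restatement for illustration; `classifyChanges` is not the CLI's actual API, and it takes plain path→hash maps instead of reading the filesystem:

```javascript
// Pure-function sketch of the modified/added/deleted classification the
// update command performs above. Takes path -> hash maps; illustrative only.
function classifyChanges(localHashes, remoteHashes) {
  const modified = [];
  const added = [];
  const deleted = [];
  for (const [file, hash] of Object.entries(localHashes)) {
    if (file in remoteHashes) {
      if (remoteHashes[file] !== hash) modified.push(file); // content differs
    } else {
      deleted.push(file); // present locally, gone upstream
    }
  }
  for (const file of Object.keys(remoteHashes)) {
    if (!(file in localHashes)) added.push(file); // new upstream
  }
  return { modified, added, deleted };
}
```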