@paulduvall/claude-dev-toolkit 0.0.1-alpha.2 → 0.0.1-alpha.5
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +44 -6
- package/commands/active/xarchitecture.md +393 -0
- package/commands/active/xconfig.md +127 -0
- package/commands/active/xdebug.md +130 -0
- package/commands/active/xdocs.md +178 -0
- package/commands/active/xgit.md +149 -0
- package/commands/active/xpipeline.md +152 -0
- package/commands/active/xquality.md +96 -0
- package/commands/active/xrefactor.md +198 -0
- package/commands/active/xrelease.md +142 -0
- package/commands/active/xsecurity.md +92 -0
- package/commands/active/xspec.md +174 -0
- package/commands/active/xtdd.md +151 -0
- package/commands/active/xtest.md +89 -0
- package/commands/experiments/xact.md +742 -0
- package/commands/experiments/xanalytics.md +113 -0
- package/commands/experiments/xanalyze.md +70 -0
- package/commands/experiments/xapi.md +161 -0
- package/commands/experiments/xatomic.md +112 -0
- package/commands/experiments/xaws.md +85 -0
- package/commands/experiments/xcicd.md +337 -0
- package/commands/experiments/xcommit.md +122 -0
- package/commands/experiments/xcompliance.md +182 -0
- package/commands/experiments/xconstraints.md +89 -0
- package/commands/experiments/xcoverage.md +90 -0
- package/commands/experiments/xdb.md +102 -0
- package/commands/experiments/xdesign.md +121 -0
- package/commands/experiments/xevaluate.md +111 -0
- package/commands/experiments/xfootnote.md +12 -0
- package/commands/experiments/xgenerate.md +117 -0
- package/commands/experiments/xgovernance.md +149 -0
- package/commands/experiments/xgreen.md +66 -0
- package/commands/experiments/xiac.md +118 -0
- package/commands/experiments/xincident.md +137 -0
- package/commands/experiments/xinfra.md +115 -0
- package/commands/experiments/xknowledge.md +115 -0
- package/commands/experiments/xmaturity.md +120 -0
- package/commands/experiments/xmetrics.md +118 -0
- package/commands/experiments/xmonitoring.md +128 -0
- package/commands/experiments/xnew.md +898 -0
- package/commands/experiments/xobservable.md +114 -0
- package/commands/experiments/xoidc.md +165 -0
- package/commands/experiments/xoptimize.md +115 -0
- package/commands/experiments/xperformance.md +112 -0
- package/commands/experiments/xplanning.md +131 -0
- package/commands/experiments/xpolicy.md +115 -0
- package/commands/experiments/xproduct.md +98 -0
- package/commands/experiments/xreadiness.md +75 -0
- package/commands/experiments/xred.md +55 -0
- package/commands/experiments/xrisk.md +128 -0
- package/commands/experiments/xrules.md +124 -0
- package/commands/experiments/xsandbox.md +120 -0
- package/commands/experiments/xscan.md +102 -0
- package/commands/experiments/xsetup.md +123 -0
- package/commands/experiments/xtemplate.md +116 -0
- package/commands/experiments/xtrace.md +212 -0
- package/commands/experiments/xux.md +171 -0
- package/commands/experiments/xvalidate.md +104 -0
- package/commands/experiments/xworkflow.md +113 -0
- package/hooks/README.md +231 -0
- package/hooks/file-logger.sh +98 -0
- package/hooks/lib/argument-parser.sh +422 -0
- package/hooks/lib/config-constants.sh +230 -0
- package/hooks/lib/context-manager.sh +549 -0
- package/hooks/lib/error-handler.sh +412 -0
- package/hooks/lib/execution-engine.sh +627 -0
- package/hooks/lib/file-utils.sh +375 -0
- package/hooks/lib/subagent-discovery.sh +465 -0
- package/hooks/lib/subagent-validator.sh +597 -0
- package/hooks/on-error-debug.sh +221 -0
- package/hooks/pre-commit-quality.sh +204 -0
- package/hooks/pre-write-security.sh +107 -0
- package/hooks/prevent-credential-exposure.sh +265 -0
- package/hooks/subagent-trigger-simple.sh +193 -0
- package/hooks/subagent-trigger.sh +253 -0
- package/lib/hook-installer-core.js +2 -2
- package/package.json +3 -1
- package/scripts/postinstall.js +28 -10
- package/templates/README.md +100 -0
- package/templates/basic-settings.json +30 -0
- package/templates/comprehensive-settings.json +206 -0
- package/templates/hybrid-hook-config.yaml +133 -0
- package/templates/security-focused-settings.json +62 -0
- package/templates/subagent-hooks.yaml +188 -0

package/commands/active/xtdd.md
@@ -0,0 +1,151 @@
---
description: Complete Test-Driven Development workflow automation with Red-Green-Refactor-Commit cycle
tags: [tdd, testing, red-green-refactor, workflow, specifications, automation]
---

Execute the complete TDD workflow automation based on the arguments provided in $ARGUMENTS.

## Usage Examples

**Basic TDD workflow:**
```
/xtdd
```

**Start RED phase:**
```
/xtdd --red ContactForm
```

**Implement GREEN phase:**
```
/xtdd --green
```

**Help and options:**
```
/xtdd --help
```

## Implementation

If $ARGUMENTS contains "help" or "--help":
Display this usage information and exit.

First, verify project structure and TDD readiness:
!ls -la specs/ 2>/dev/null || echo "No specs directory found"
!find specs/tests/ -name "*.py" 2>/dev/null | head -5 | grep . || echo "No tests found"
!python -c "import pytest; print('pytest available')" 2>/dev/null || echo "pytest not available"

Based on $ARGUMENTS, perform the appropriate TDD phase:

## 1. RED Phase - Write Failing Test

If starting the RED phase (--red):
!grep -r "#{#$spec_id" specs/specifications/ 2>/dev/null || echo "Specification not found"
!find specs/tests/ -name "*.py" -exec grep -l "$spec_id" {} \; 2>/dev/null | head -3

Create a failing test for the specification:
- Verify the specification exists and is readable
- Analyze the specification's requirements and criteria
- Create the test file if it does not exist
- Write a test that exercises the requirement
- Ensure the test fails (RED phase validation)
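
The RED check in the bullets above can be enforced mechanically by inverting the test runner's exit status: the phase is valid only while the new test fails. A minimal sketch, assuming a POSIX shell; the `red_gate` helper name is hypothetical, not part of the toolkit:

```shell
# Hypothetical RED-phase gate: succeeds only when the test command FAILS,
# which is exactly what the RED phase requires of a freshly written test.
red_gate() {
    if "$@" >/dev/null 2>&1; then
        echo "RED gate FAILED: tests already pass; write a failing test first"
        return 1
    fi
    echo "RED gate OK: test fails as expected"
}
```

For example, `red_gate python -m pytest specs/tests/test_contact_form.py` would pass the gate only while the new test is still red.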

## 2. GREEN Phase - Minimal Implementation

If implementing the GREEN phase (--green):
!python -m pytest specs/tests/ -x --tb=short 2>/dev/null || echo "Tests currently failing"
!find . -name "*.py" | grep -v test | head -5

Implement minimal passing code:
- Run tests to identify failures
- Implement the simplest code that makes the tests pass
- Focus on meeting test requirements only
- Avoid over-engineering or premature optimization
- Verify all tests pass (GREEN phase validation)
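
The complementary GREEN check is the straightforward one: the suite must exit cleanly before the cycle may advance. A minimal sketch with a hypothetical `green_gate` helper:

```shell
# Hypothetical GREEN-phase gate: the full test command must succeed
# before the workflow moves on to refactoring.
green_gate() {
    if "$@" >/dev/null 2>&1; then
        echo "GREEN gate OK: all tests pass"
    else
        echo "GREEN gate FAILED: fix the implementation before refactoring"
        return 1
    fi
}
```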

## 3. REFACTOR Phase - Code Quality Improvement

If refactoring (--refactor):
!python -m pytest specs/tests/ -v 2>/dev/null || echo "Tests must pass before refactoring"
!python -c "import mypy" 2>/dev/null && echo "MyPy available" || echo "MyPy not available"
!python -c "import ruff" 2>/dev/null && echo "Ruff available" || echo "Ruff not available"

Improve code quality while keeping the tests passing:
- Verify tests pass before starting
- Run quality checks (type checking, linting)
- Apply code formatting and style improvements
- Remove duplication and improve naming
- Ensure tests still pass after changes
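
One way to keep refactoring honest is to re-run the suite after every quality step and stop at the first step that breaks it. A sketch, assuming a POSIX shell; `refactor_step` is a hypothetical helper, not part of the toolkit:

```shell
# Hypothetical refactor guard: run one quality step, then re-run the
# test suite; report whether the step left the tests passing.
refactor_step() {
    step=$1
    shift
    sh -c "$step" >/dev/null 2>&1 || echo "note: quality step reported issues: $step"
    if "$@" >/dev/null 2>&1; then
        echo "tests still pass after: $step"
    else
        echo "tests BROKEN by: $step"
        return 1
    fi
}
```

Used like `refactor_step "ruff format ." python -m pytest specs/tests/`, it aborts the refactor the moment a formatting or lint fix regresses behavior.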

## 4. COMMIT Phase - Traceability Documentation

If committing changes (--commit):
!git status --porcelain
!grep -r "#{#$spec_id" specs/specifications/ 2>/dev/null | grep -o "authority=[^}]*"
!python -m pytest --cov=. --cov-report=term specs/tests/ 2>/dev/null | grep "TOTAL" || echo "Coverage not available"

Create a commit with TDD traceability:
- Final test validation before commit
- Extract specification authority level
- Generate coverage metrics
- Stage all changes for commit
- Create detailed commit message with TDD cycle documentation
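
The commit message itself can carry the traceability data the bullets call for. A minimal sketch; the trailer names (`TDD-Cycle`, `Spec`, `Coverage`) are illustrative, not a format the toolkit prescribes:

```shell
# Hypothetical builder for a TDD-traceable commit message.
# $1 = specification id, $2 = summary, $3 = coverage figure.
tdd_commit_message() {
    printf 'feat: %s\n\nTDD-Cycle: red-green-refactor-commit\nSpec: %s\nCoverage: %s\n' \
        "$2" "$1" "$3"
}
```

Usage might look like `git commit -m "$(tdd_commit_message CONTACT-001 "validate contact form" "87%")"`.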

## 5. Workflow Validation and Guidance

If validating the TDD state:
!find specs/tests/ -name "*.py" -exec grep -l "def test_" {} \; | wc -l
!python -m pytest specs/tests/ --collect-only 2>/dev/null | grep "test" | wc -l || echo "0"

Validate TDD workflow compliance:
- Check for existing tests and specifications
- Verify test-to-specification traceability
- Analyze the current TDD cycle phase
- Provide next-step recommendations
- Ensure proper TDD discipline

## 6. Quality Gates and Automation

If running quality checks:
!mypy . --ignore-missing-imports 2>/dev/null || echo "Type checking not available"
!ruff check . 2>/dev/null || echo "Linting not available"
!ruff format . --check 2>/dev/null || echo "Formatting needed"

Automated quality validation:
- Type checking with MyPy
- Code linting with Ruff
- Style formatting validation
- Test coverage analysis
- Security and compliance checks

Think step by step about the TDD workflow requirements and provide:

1. **Current TDD State Analysis**:
   - Active TDD phase identification
   - Test and implementation status
   - Specification traceability validation
   - Quality metrics assessment

2. **Phase-Specific Guidance**:
   - RED: Test creation and failure validation
   - GREEN: Minimal implementation strategy
   - REFACTOR: Quality improvement opportunities
   - COMMIT: Traceability and documentation

3. **Quality Assurance**:
   - Test coverage and effectiveness
   - Code quality metrics
   - Specification compliance
   - TDD discipline adherence

4. **Workflow Optimization**:
   - Cycle efficiency improvements
   - Automation opportunities
   - Quality gate enforcement
   - Team collaboration enhancement

Generate comprehensive TDD workflow automation with the complete Red-Green-Refactor-Commit cycle, specification traceability, and quality assurance integration.

If no specific phase is provided, analyze the current TDD state and recommend the next appropriate action based on project status and requirements.
package/commands/active/xtest.md
@@ -0,0 +1,89 @@
---
description: Run tests with smart defaults (runs all tests if no arguments)
tags: [testing, coverage, quality]
---

# Test Execution

Run tests with intelligent defaults. No parameters are needed for basic usage.

## Usage Examples

**Basic usage (runs all available tests):**
```
/xtest
```

**Run with coverage report:**
```
/xtest coverage
```

**Quick unit tests only:**
```
/xtest unit
```

**Help and options:**
```
/xtest help
/xtest --help
```

## Implementation

If $ARGUMENTS contains "help" or "--help":
Display this usage information and exit.

First, examine the project structure and detect the testing framework:
!ls -la | grep -E "(test|spec|__tests__|\.test\.|\.spec\.)"
!find . -name "*test*" -o -name "*spec*" -o -name "__tests__" | head -5
!python -c "import pytest; print('✓ pytest available')" 2>/dev/null || npm test --version 2>/dev/null || echo "Detecting test framework..."
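
The fallback chain above can be made explicit as a small detector that prefers Python project markers, then an npm test script. A sketch, assuming a POSIX shell; the marker files checked are a reasonable but not exhaustive set, and `detect_test_runner` is a hypothetical name:

```shell
# Hypothetical runner detector mirroring the fallback chain: look for
# Python project markers first, then a package.json "test" script.
detect_test_runner() {
    dir=${1:-.}
    if [ -f "$dir/pytest.ini" ] || [ -f "$dir/pyproject.toml" ] || [ -f "$dir/setup.py" ]; then
        echo "pytest"
    elif [ -f "$dir/package.json" ] && grep -q '"test"' "$dir/package.json"; then
        echo "npm test"
    else
        echo "unknown"
    fi
}
```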

Determine the testing approach based on $ARGUMENTS (default to running all tests):

**Mode 1: Default Test Run (no arguments)**
If $ARGUMENTS is empty or contains "all":

Auto-detect and run available tests:
- **Python projects**: Run pytest with sensible defaults
- **Node.js projects**: Run npm test or jest
- **Other frameworks**: Detect and run appropriately

!python -m pytest -v --tb=short 2>/dev/null || npm test 2>/dev/null || echo "No standard test configuration found"

**Mode 2: Unit Tests Only (argument: "unit")**
If $ARGUMENTS contains "unit":
!python -m pytest -v -k "unit" --tb=short 2>/dev/null || npm test -- --testNamePattern="unit" 2>/dev/null || echo "Running unit tests..."

Focus on fast, isolated tests:
- Skip integration and e2e tests
- Quick feedback on core logic
- Fast execution for frequent testing

**Mode 3: Coverage Analysis (argument: "coverage")**
If $ARGUMENTS contains "coverage":
!python -m pytest --cov=. --cov-report=term-missing -v 2>/dev/null || npm test -- --coverage 2>/dev/null || echo "Coverage analysis..."

Generate a coverage report:
- Show the percentage of code tested
- Identify untested code areas
- Highlight coverage gaps
- Suggest areas for additional testing
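
When a numeric figure is needed from the terminal report, the TOTAL line can be parsed directly; in pytest-cov's default layout its last field is the percentage. A sketch with a hypothetical `coverage_percent` filter:

```shell
# Hypothetical filter: pull the percentage (last field) from the
# TOTAL line of a pytest-cov terminal report read on stdin.
coverage_percent() {
    grep '^TOTAL' | awk '{print $NF}'
}
```

Usage might be `python -m pytest --cov=. --cov-report=term specs/tests/ | coverage_percent`.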

## Test Results Analysis

Think step by step about test execution and provide:

1. **Test Summary**: Clear pass/fail status with a count of tests run
2. **Failed Tests**: List any failures with concise explanations
3. **Coverage Status**: Coverage percentage, if available
4. **Next Steps**: Specific actions to improve test quality

Generate a focused test report showing:
- ✅ Tests passed
- ❌ Tests failed (with brief error summaries)
- 📊 Coverage percentage (if requested)
- 🔧 Recommended improvements

Keep output concise and actionable, focusing on what developers need to know immediately.