@paulduvall/claude-dev-toolkit 0.0.1-alpha.2 → 0.0.1-alpha.21
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +21 -0
- package/README.md +88 -37
- package/bin/claude-commands +307 -65
- package/commands/active/xarchitecture.md +393 -0
- package/commands/active/xconfig.md +127 -0
- package/commands/active/xcontinue.md +92 -0
- package/commands/active/xdebug.md +130 -0
- package/commands/active/xdocs.md +178 -0
- package/commands/active/xexplore.md +94 -0
- package/commands/active/xgit.md +149 -0
- package/commands/active/xpipeline.md +152 -0
- package/commands/active/xquality.md +96 -0
- package/commands/active/xrefactor.md +198 -0
- package/commands/active/xrelease.md +142 -0
- package/commands/active/xsecurity.md +92 -0
- package/commands/active/xspec.md +174 -0
- package/commands/active/xtdd.md +151 -0
- package/commands/active/xtest.md +89 -0
- package/commands/active/xverify.md +80 -0
- package/commands/experiments/xact.md +742 -0
- package/commands/experiments/xanalytics.md +113 -0
- package/commands/experiments/xanalyze.md +70 -0
- package/commands/experiments/xapi.md +161 -0
- package/commands/experiments/xatomic.md +112 -0
- package/commands/experiments/xaws.md +85 -0
- package/commands/experiments/xcicd.md +337 -0
- package/commands/experiments/xcommit.md +122 -0
- package/commands/experiments/xcompliance.md +182 -0
- package/commands/experiments/xconstraints.md +89 -0
- package/commands/experiments/xcoverage.md +90 -0
- package/commands/experiments/xdb.md +102 -0
- package/commands/experiments/xdesign.md +121 -0
- package/commands/experiments/xdevcontainer.md +238 -0
- package/commands/experiments/xevaluate.md +111 -0
- package/commands/experiments/xfootnote.md +12 -0
- package/commands/experiments/xgenerate.md +117 -0
- package/commands/experiments/xgovernance.md +149 -0
- package/commands/experiments/xgreen.md +66 -0
- package/commands/experiments/xiac.md +118 -0
- package/commands/experiments/xincident.md +137 -0
- package/commands/experiments/xinfra.md +115 -0
- package/commands/experiments/xknowledge.md +115 -0
- package/commands/experiments/xmaturity.md +120 -0
- package/commands/experiments/xmetrics.md +118 -0
- package/commands/experiments/xmonitoring.md +128 -0
- package/commands/experiments/xnew.md +903 -0
- package/commands/experiments/xobservable.md +114 -0
- package/commands/experiments/xoidc.md +165 -0
- package/commands/experiments/xoptimize.md +115 -0
- package/commands/experiments/xperformance.md +112 -0
- package/commands/experiments/xplanning.md +131 -0
- package/commands/experiments/xpolicy.md +115 -0
- package/commands/experiments/xproduct.md +98 -0
- package/commands/experiments/xreadiness.md +75 -0
- package/commands/experiments/xred.md +55 -0
- package/commands/experiments/xrisk.md +128 -0
- package/commands/experiments/xrules.md +124 -0
- package/commands/experiments/xsandbox.md +120 -0
- package/commands/experiments/xscan.md +102 -0
- package/commands/experiments/xsetup.md +123 -0
- package/commands/experiments/xtemplate.md +116 -0
- package/commands/experiments/xtrace.md +212 -0
- package/commands/experiments/xux.md +171 -0
- package/commands/experiments/xvalidate.md +104 -0
- package/commands/experiments/xworkflow.md +113 -0
- package/hooks/.smellrc.example.json +19 -0
- package/hooks/README.md +263 -0
- package/hooks/check-commit-signing.py +127 -0
- package/hooks/check-complexity.py +38 -0
- package/hooks/check-security.py +37 -0
- package/hooks/claude-wrapper.sh +29 -0
- package/hooks/config.py +110 -0
- package/hooks/file-logger.sh +100 -0
- package/hooks/lib/argument-parser.sh +427 -0
- package/hooks/lib/config-constants.sh +230 -0
- package/hooks/lib/context-manager.sh +560 -0
- package/hooks/lib/error-handler.sh +423 -0
- package/hooks/lib/execution-engine.sh +444 -0
- package/hooks/lib/execution-results.sh +113 -0
- package/hooks/lib/execution-simulation.sh +114 -0
- package/hooks/lib/field-validators.sh +104 -0
- package/hooks/lib/file-utils.sh +398 -0
- package/hooks/lib/subagent-discovery.sh +468 -0
- package/hooks/lib/subagent-validator.sh +407 -0
- package/hooks/lib/validation-reporter.sh +134 -0
- package/hooks/on-error-debug.sh +226 -0
- package/hooks/pre-commit-quality.sh +204 -0
- package/hooks/pre-commit-test-runner.sh +132 -0
- package/hooks/pre-write-security.sh +115 -0
- package/hooks/prevent-credential-exposure.sh +279 -0
- package/hooks/security_bandit.py +177 -0
- package/hooks/security_checks.py +97 -0
- package/hooks/security_secrets.py +81 -0
- package/hooks/security_trojan.py +61 -0
- package/hooks/settings.example.json +52 -0
- package/hooks/smell_checks.py +238 -0
- package/hooks/smell_javascript.py +231 -0
- package/hooks/smell_python.py +110 -0
- package/hooks/smell_ruff.py +70 -0
- package/hooks/smell_types.py +72 -0
- package/hooks/subagent-trigger-simple.sh +202 -0
- package/hooks/subagent-trigger.sh +253 -0
- package/hooks/suppression.py +82 -0
- package/hooks/tab-color.sh +70 -0
- package/hooks/verify-before-edit.sh +135 -0
- package/lib/backup-restore-command.js +140 -0
- package/lib/base/base-command.js +252 -0
- package/lib/base/command-result.js +184 -0
- package/lib/config/constants.js +255 -0
- package/lib/config.js +48 -6
- package/lib/configure-command.js +428 -0
- package/lib/dependency-validator.js +64 -5
- package/lib/hook-installer-core.js +2 -2
- package/lib/installation-instruction-generator.js +213 -495
- package/lib/installer.js +134 -56
- package/lib/oidc-command.js +740 -0
- package/lib/services/backup-list-service.js +226 -0
- package/lib/services/backup-service.js +230 -0
- package/lib/services/command-installer-service.js +217 -0
- package/lib/services/logger-service.js +201 -0
- package/lib/services/package-manager-service.js +319 -0
- package/lib/services/platform-instruction-service.js +294 -0
- package/lib/services/recovery-instruction-service.js +348 -0
- package/lib/services/restore-service.js +221 -0
- package/lib/setup-command.js +359 -0
- package/lib/setup-wizard.js +155 -262
- package/lib/uninstall-command.js +100 -0
- package/lib/utils/claude-path-config.js +184 -0
- package/lib/utils/file-system-utils.js +152 -0
- package/lib/utils.js +8 -4
- package/lib/verify-command.js +430 -0
- package/package.json +7 -3
- package/scripts/postinstall.js +172 -157
- package/subagents/debug-specialist.md +7 -0
- package/templates/README.md +115 -0
- package/templates/basic-settings.json +30 -0
- package/templates/comprehensive-settings.json +57 -0
- package/templates/global-claude.md +344 -0
- package/templates/hybrid-hook-config.yaml +132 -0
- package/templates/security-focused-settings.json +62 -0
- package/templates/subagent-hooks.yaml +188 -0
- package/lib/package-manager-service.js +0 -270
- package/subagents/debug-context.md +0 -197
package/commands/active/xspec.md
@@ -0,0 +1,174 @@
---
description: Machine-readable specifications with unique identifiers and authority levels for precise AI code generation
tags: [specifications, traceability, ai-generation, coverage, requirements, authority]
---

Manage SpecDriven AI specifications based on the arguments provided in $ARGUMENTS.

## Usage Examples

**Basic specification analysis:**
```
/xspec
```

**Read specifications:**
```
/xspec --read
```

**Create new specification:**
```
/xspec --new "Add contact form"
```

**Help and options:**
```
/xspec --help
```

## Implementation

If $ARGUMENTS contains "help" or "--help":
Display this usage information and exit.

First, verify SpecDriven AI project structure:
!ls -la specs/ 2>/dev/null || echo "No specs directory found"
!find specs/specifications/ -name "*.md" 2>/dev/null | head -5 || echo "No specifications found"
!find specs/tests/ -name "*.py" 2>/dev/null | head -5 || echo "No tests found"

Based on $ARGUMENTS, perform the appropriate specification operation:

## 1. Specification Reading and Discovery

If reading specifications (--read, --find):
!grep -r "#{#" specs/specifications/ 2>/dev/null | head -10 || echo "No specification IDs found"
!find specs/specifications/ -name "*.md" -exec grep -l "authority=" {} \; 2>/dev/null | head -5

Read and analyze specifications:
- Parse specification IDs and authority levels
- Extract requirement descriptions and criteria
- Identify implementation dependencies
- Display specification content and metadata
- Cross-reference related specifications

## 2. Traceability Analysis

If tracing specifications (--trace):
!grep -r "$spec_id" specs/tests/ 2>/dev/null || echo "No tests found for specification"
!grep -r "$spec_id" . --exclude-dir=specs --exclude-dir=.git 2>/dev/null | head -5

Analyze specification traceability:
- Find tests implementing the specification
- Locate code referencing specification ID
- Map requirement-to-implementation relationships
- Validate traceability completeness
- Generate traceability reports

## 3. Specification Validation

If validating specifications (--validate, --machine-readable):
!grep -r "#{#[a-z]{3}[0-9][a-z] authority=" specs/specifications/ 2>/dev/null || echo "Invalid specification format"
!find specs/specifications/ -name "*.md" -exec grep -E "authority=(system|platform|developer)" {} \; | wc -l

Validate specification compliance:
- Check ID format (3 letters + 1 digit + 1 letter)
- Verify authority levels (system/platform/developer)
- Validate specification structure
- Ensure machine-readable format compliance
- Report format violations

## 4. Specification Creation

If creating new specifications (--new):
!find specs/specifications/ -name "*.md" -exec grep -o "#{#[a-z]*[0-9][a-z]" {} \; | sort | tail -5
!mkdir -p specs/specifications specs/tests

Create new specification with proper format:
- Generate unique specification ID
- Apply appropriate authority level
- Create specification template
- Include acceptance criteria
- Add traceability placeholders

## 5. Coverage Analysis

If analyzing coverage (--coverage, --dual-coverage):
!grep -r "#{#" specs/specifications/ | wc -l
!grep -r "#{#" specs/tests/ | wc -l
!python -m pytest --cov=. --cov-report=term-missing specs/tests/ 2>/dev/null || echo "Code coverage not available"

Analyze dual coverage metrics:
- Specification coverage (tests exist for requirements)
- Code coverage (tests execute relevant code)
- Traceability coverage (links maintained)
- Gap analysis and recommendations
- Coverage trend reporting
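The specification-coverage half of the dual-coverage metric above reduces to a set intersection over the IDs found in specifications and tests. A minimal Python sketch, assuming the `#{#abc1d ...}` marker shape implied by the hunk's grep patterns; the sample strings are purely illustrative:

```python
import re

# Marker shape implied by the command's grep patterns: #{#abc1d authority=...}
SPEC_ID = re.compile(r"#\{#([a-z]{3}[0-9][a-z])")

def spec_coverage(spec_text: str, test_text: str) -> float:
    """Fraction of specification IDs referenced by at least one test."""
    spec_ids = set(SPEC_ID.findall(spec_text))
    test_ids = set(SPEC_ID.findall(test_text))
    if not spec_ids:
        return 0.0
    return len(spec_ids & test_ids) / len(spec_ids)

# Illustrative content only; the real input comes from files under specs/.
specs = "#{#frm1a authority=developer} #{#frm2b authority=system}"
tests = "def test_form():  # covers #{#frm1a}\n    ..."
print(spec_coverage(specs, tests))  # -> 0.5
```

Uncovered IDs (here `frm2b`) are exactly the gap-analysis output described in section 8 below.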
## 6. AI Code Generation

If generating from specifications (--generate-test, --ai-implement):
@specs/specifications/$spec_file 2>/dev/null || echo "Specification file not found"
!find specs/tests/ -name "*test*" | grep "$component" | head -3

Generate AI implementation:
- Extract requirements from specification
- Generate test cases covering all criteria
- Create minimal implementation code
- Ensure specification traceability
- Validate generated code compliance

## 7. Authority Management

If filtering by authority (--authority):
!grep -r "authority=$authority_level" specs/specifications/ 2>/dev/null || echo "No specifications with authority level found"

Manage specification authority:
- system: Critical system requirements (highest priority)
- platform: Infrastructure/framework requirements
- developer: Application/feature requirements (lowest priority)
- Authority-based filtering and prioritization
- Compliance validation by authority level

## 8. Gap Analysis

If identifying gaps (--gaps):
!find specs/specifications/ -name "*.md" -exec grep -o "#{#[a-z]{3}[0-9][a-z]" {} \; | sort | uniq > /tmp/spec_ids
!find specs/tests/ -name "*.py" -exec grep -o "#{#[a-z]{3}[0-9][a-z]" {} \; | sort | uniq > /tmp/test_ids || touch /tmp/test_ids

Identify specification gaps:
- Specifications without corresponding tests
- Tests without specification references
- Missing implementation coverage
- Broken traceability links
- Prioritized gap remediation

Think step by step about specification management and provide:

1. **Specification Analysis**:
   - Current specification inventory
   - Authority level distribution
   - Format compliance status
   - Coverage metrics and gaps

2. **Traceability Assessment**:
   - Requirement-to-test mapping completeness
   - Implementation traceability status
   - Broken or missing links
   - Cross-reference validation

3. **Quality Metrics**:
   - Specification coverage percentage
   - Code coverage achieved by tests
   - Authority level compliance
   - Format standardization status

4. **Improvement Recommendations**:
   - Missing specifications to create
   - Tests requiring implementation
   - Traceability links to establish
   - Coverage improvement opportunities

Generate comprehensive specification management report with dual coverage analysis, traceability validation, and actionable recommendations for improving SpecDriven AI development practices.

If no specific operation is provided, analyze current specification state and suggest priorities for improvement.
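The ID format (3 letters + 1 digit + 1 letter) and the three authority levels that xspec validates can be checked with a small script. The exact delimiter syntax is inferred from the hunk's grep patterns, so treat this as a sketch rather than the command's actual implementation:

```python
import re

# Marker shape inferred from the grep patterns in the xspec hunk:
# three letters + one digit + one letter, plus a recognized authority level.
MARKER = re.compile(
    r"#\{#(?P<id>[a-z]{3}[0-9][a-z]) authority=(?P<authority>system|platform|developer)\}"
)

def validate_marker(line: str):
    """Return (id, authority) if the line carries a valid marker, else None."""
    m = MARKER.search(line)
    return (m.group("id"), m.group("authority")) if m else None

print(validate_marker("## Contact form #{#cfm1a authority=developer}"))  # -> ('cfm1a', 'developer')
print(validate_marker("## Bad marker #{#cf1a authority=admin}"))         # -> None
```

The second line fails twice over: `cf1a` is too short for the ID grammar, and `admin` is not one of the three authority levels.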
package/commands/active/xtdd.md
@@ -0,0 +1,151 @@
---
description: Complete Test-Driven Development workflow automation with Red-Green-Refactor-Commit cycle
tags: [tdd, testing, red-green-refactor, workflow, specifications, automation]
---

Execute complete TDD workflow automation based on the arguments provided in $ARGUMENTS.

## Usage Examples

**Basic TDD workflow:**
```
/xtdd
```

**Start RED phase:**
```
/xtdd --red ContactForm
```

**Implement GREEN phase:**
```
/xtdd --green
```

**Help and options:**
```
/xtdd --help
```

## Implementation

If $ARGUMENTS contains "help" or "--help":
Display this usage information and exit.

First, verify project structure and TDD readiness:
!ls -la specs/ 2>/dev/null || echo "No specs directory found"
!find specs/tests/ -name "*.py" 2>/dev/null | head -5 || echo "No tests found"
!python -c "import pytest; print('pytest available')" 2>/dev/null || echo "pytest not available"

Based on $ARGUMENTS, perform the appropriate TDD phase:

## 1. RED Phase - Write Failing Test

If starting RED phase (--red):
!grep -r "#{#$spec_id" specs/specifications/ 2>/dev/null || echo "Specification not found"
!find specs/tests/ -name "*.py" -exec grep -l "$spec_id" {} \; 2>/dev/null | head -3

Create failing test for specification:
- Verify specification exists and is readable
- Analyze specification requirements and criteria
- Create test file if not exists
- Write test that exercises the requirement
- Ensure test fails (RED phase validation)

## 2. GREEN Phase - Minimal Implementation

If implementing GREEN phase (--green):
!python -m pytest specs/tests/ -x --tb=short 2>/dev/null || echo "Tests currently failing"
!find . -name "*.py" | grep -v test | head -5

Implement minimal passing code:
- Run tests to identify failures
- Implement simplest code to make tests pass
- Focus on meeting test requirements only
- Avoid over-engineering or premature optimization
- Verify all tests pass (GREEN phase validation)

## 3. REFACTOR Phase - Code Quality Improvement

If refactoring (--refactor):
!python -m pytest specs/tests/ -v 2>/dev/null || echo "Tests must pass before refactoring"
!python -c "import mypy" 2>/dev/null && echo "MyPy available" || echo "MyPy not available"
!python -c "import ruff" 2>/dev/null && echo "Ruff available" || echo "Ruff not available"

Improve code quality while maintaining tests:
- Verify tests pass before starting
- Run quality checks (type checking, linting)
- Apply code formatting and style improvements
- Remove duplication and improve naming
- Ensure tests still pass after changes
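The RED and GREEN phases above boil down to writing a failing test against a spec ID, then the smallest implementation that passes it. A self-contained sketch using plain asserts instead of pytest; `ContactForm` and the `#{#cfm1a}` spec ID are hypothetical names used only for illustration:

```python
# GREEN: the minimal implementation. In a real cycle this class would not
# exist yet when the test below is first written and run (the RED step).
class ContactForm:
    def __init__(self, email: str):
        self.email = email

    def is_valid(self) -> bool:
        # Simplest rule that satisfies the test; refined later, in REFACTOR.
        return "@" in self.email

# RED: the test comes first and traces to a specification ID,
# e.g. #{#cfm1a authority=developer} (hypothetical).
def test_contact_form_requires_email():
    assert not ContactForm(email="").is_valid()
    assert ContactForm(email="a@example.org").is_valid()

test_contact_form_requires_email()
print("GREEN: test passes")
```

Running the test before the class exists raises `NameError`, which is the RED failure the workflow insists on seeing before any implementation work begins.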
## 4. COMMIT Phase - Traceability Documentation

If committing changes (--commit):
!git status --porcelain
!grep -r "#{#$spec_id" specs/specifications/ 2>/dev/null | grep -o "authority=[^}]*"
!python -m pytest --cov=. --cov-report=term specs/tests/ 2>/dev/null | grep "TOTAL" || echo "Coverage not available"

Create commit with TDD traceability:
- Final test validation before commit
- Extract specification authority level
- Generate coverage metrics
- Stage all changes for commit
- Create detailed commit message with TDD cycle documentation

## 5. Workflow Validation and Guidance

If validating TDD state:
!find specs/tests/ -name "*.py" -exec grep -l "def test_" {} \; | wc -l
!python -m pytest specs/tests/ --collect-only 2>/dev/null | grep "test" | wc -l || echo "0"

Validate TDD workflow compliance:
- Check for existing tests and specifications
- Verify test-to-specification traceability
- Analyze current TDD cycle phase
- Provide next step recommendations
- Ensure proper TDD discipline

## 6. Quality Gates and Automation

If running quality checks:
!mypy . --ignore-missing-imports 2>/dev/null || echo "Type checking not available"
!ruff check . 2>/dev/null || echo "Linting not available"
!ruff format . --check 2>/dev/null || echo "Formatting needed"

Automated quality validation:
- Type checking with MyPy
- Code linting with Ruff
- Style formatting validation
- Test coverage analysis
- Security and compliance checks

Think step by step about TDD workflow requirements and provide:

1. **Current TDD State Analysis**:
   - Active TDD phase identification
   - Test and implementation status
   - Specification traceability validation
   - Quality metrics assessment

2. **Phase-Specific Guidance**:
   - RED: Test creation and failure validation
   - GREEN: Minimal implementation strategy
   - REFACTOR: Quality improvement opportunities
   - COMMIT: Traceability and documentation

3. **Quality Assurance**:
   - Test coverage and effectiveness
   - Code quality metrics
   - Specification compliance
   - TDD discipline adherence

4. **Workflow Optimization**:
   - Cycle efficiency improvements
   - Automation opportunities
   - Quality gate enforcement
   - Team collaboration enhancement

Generate comprehensive TDD workflow automation with complete Red-Green-Refactor-Commit cycle, specification traceability, and quality assurance integration.

If no specific phase is provided, analyze current TDD state and recommend next appropriate action based on project status and requirements.
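The COMMIT phase gathers the spec ID, its authority level, and the coverage figure into the commit message. One plausible shape for that message, sketched below; this diff does not show the toolkit's actual template, so the layout is illustrative only:

```python
def tdd_commit_message(spec_id: str, authority: str, coverage: float) -> str:
    """Assemble a commit message carrying TDD traceability metadata.

    The layout is illustrative, not the toolkit's actual template.
    """
    header = "feat: implement spec #{#%s}" % spec_id
    body = [
        "",
        "TDD cycle: red -> green -> refactor",
        "Spec authority: %s" % authority,
        "Test coverage: %.0f%%" % (coverage * 100),
    ]
    return "\n".join([header] + body)

print(tdd_commit_message("cfm1a", "developer", 0.92))
```

Embedding the spec ID in the subject line is what makes the `--trace` greps in xspec find the commit later.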
package/commands/active/xtest.md
@@ -0,0 +1,89 @@
---
description: Run tests with smart defaults (runs all tests if no arguments)
tags: [testing, coverage, quality]
---

# Test Execution

Run tests with intelligent defaults. No parameters needed for basic usage.

## Usage Examples

**Basic usage (runs all available tests):**
```
/xtest
```

**Run with coverage report:**
```
/xtest coverage
```

**Quick unit tests only:**
```
/xtest unit
```

**Help and options:**
```
/xtest help
/xtest --help
```

## Implementation

If $ARGUMENTS contains "help" or "--help":
Display this usage information and exit.

First, examine the project structure and detect testing framework:
!ls -la | grep -E "(test|spec|__tests__|\.test\.|\.spec\.)"
!find . -name "*test*" -o -name "*spec*" -o -name "__tests__" | head -5
!python -c "import pytest; print('✓ pytest available')" 2>/dev/null || npm test --version 2>/dev/null || echo "Detecting test framework..."

Determine testing approach based on $ARGUMENTS (default to running all tests):

**Mode 1: Default Test Run (no arguments)**
If $ARGUMENTS is empty or contains "all":

Auto-detect and run available tests:
- **Python projects**: Run pytest with sensible defaults
- **Node.js projects**: Run npm test or jest
- **Other frameworks**: Detect and run appropriately

!python -m pytest -v --tb=short 2>/dev/null || npm test 2>/dev/null || echo "No standard test configuration found"

**Mode 2: Unit Tests Only (argument: "unit")**
If $ARGUMENTS contains "unit":
!python -m pytest -v -k "unit" --tb=short 2>/dev/null || npm test -- --testNamePattern="unit" 2>/dev/null || echo "Running unit tests..."

Focus on fast, isolated tests:
- Skip integration and e2e tests
- Quick feedback on core logic
- Fast execution for frequent testing

**Mode 3: Coverage Analysis (argument: "coverage")**
If $ARGUMENTS contains "coverage":
!python -m pytest --cov=. --cov-report=term-missing -v 2>/dev/null || npm test -- --coverage 2>/dev/null || echo "Coverage analysis..."

Generate coverage report:
- Show percentage of code tested
- Identify untested code areas
- Highlight coverage gaps
- Suggest areas for additional testing

## Test Results Analysis

Think step by step about test execution and provide:

1. **Test Summary**: Clear pass/fail status with count of tests run
2. **Failed Tests**: List any failures with concise explanations
3. **Coverage Status**: Coverage percentage if available
4. **Next Steps**: Specific actions to improve test quality

Generate a focused test report showing:
- ✅ Tests passed
- ❌ Tests failed (with brief error summaries)
- 📊 Coverage percentage (if requested)
- 🔧 Recommended improvements

Keep output concise and actionable, focusing on what developers need to know immediately.
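Mode 1's auto-detection can be approximated by probing for well-known config files. The markers below are common conventions, not the command's actual probe list, so treat this as a sketch:

```python
import os
import tempfile
from pathlib import Path

def detect_test_framework(root: str = ".") -> str:
    """Best-effort guess at a project's test runner from marker files."""
    base = Path(root)
    # pyproject.toml does not guarantee pytest, but it is a common signal.
    if any((base / f).exists() for f in ("pytest.ini", "pyproject.toml", "conftest.py")):
        return "pytest"
    if (base / "package.json").exists():
        return "npm test"
    return "unknown"

# Demo against a throwaway directory containing only a conftest.py.
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "conftest.py"), "w").close()
    print(detect_test_framework(d))  # -> pytest
```

Falling through to "unknown" is what triggers the "No standard test configuration found" branch in the Mode 1 pipeline above.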
package/commands/active/xverify.md
@@ -0,0 +1,80 @@
---
description: "Verify references before taking action — catch fabricated URLs, placeholder IDs, and unverified claims"
tags: ["verification", "quality", "safety", "pre-action"]
---

# Pre-Action Verification

Scan proposed changes, generated content, or referenced artifacts for fabricated URLs, placeholder IDs, nonexistent file paths, and unverified references. Run this before committing, publishing, or acting on generated content.

## Usage Examples

**Verify current staged changes:**
```
/xverify
```

**Verify a specific file:**
```
/xverify README.md
```

**Verify generated documentation:**
```
/xverify docs/
```

## What Gets Checked

### URLs and Endpoints
- HTTP/HTTPS URLs: attempt to confirm they are plausible (check format, known domains)
- API endpoints: verify they match project routes or documented APIs
- Flag common fabrication patterns: `example.com` placeholders, sequential IDs, lorem ipsum domains

### File Paths and References
- Verify referenced file paths exist on disk
- Check import/require statements resolve to real modules
- Flag references to deleted or renamed files

### IDs and Tokens
- Flag placeholder patterns: `xxx`, `TODO`, `FIXME`, `your-*-here`, `placeholder`
- Check for hardcoded test IDs that may not be valid in production
- Flag UUIDs or IDs that appear fabricated (all zeros, sequential)

### Claims and Assertions
- Cross-check version numbers against package.json or lock files
- Verify command names match actual available commands
- Check that referenced environment variables are documented

## Output Format

For each issue found, report:
1. **File and line** where the reference appears
2. **What was found** (the suspicious reference)
3. **Why it's suspicious** (pattern match or verification failure)
4. **Suggested fix** (if determinable)

## Implementation

When invoked:

1. **Determine scope**: If a file or directory argument is provided, scan that. Otherwise scan staged git changes (`git diff --cached`), or if nothing is staged, scan recent modifications.

2. **Extract references**: Parse the scoped content for:
   - URLs (http/https links)
   - File paths (relative and absolute)
   - Import/require statements
   - Version strings
   - Environment variable references
   - Command names with `/x` prefix

3. **Verify each reference**:
   - File paths: check `fs.existsSync` or equivalent
   - URLs: validate format, flag known placeholder domains
   - Versions: cross-check against package.json
   - Commands: cross-check against slash-commands/active/ and experiments/
   - Env vars: check against .env.example or documentation

4. **Report findings**: Group by severity (error, warning, info) and output a summary table.

5. **Exit cleanly**: This is a read-only verification. No files are modified.
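The placeholder and fabrication patterns listed under "IDs and Tokens" lend themselves to a simple regex scanner. A minimal sketch; the pattern list is a starting point drawn from the bullets above, not the command's actual implementation:

```python
import re

# Patterns for obviously-fabricated references, mirroring the checks
# described in the xverify hunk. Each entry pairs a regex with a reason.
SUSPICIOUS = [
    (re.compile(r"https?://(www\.)?example\.(com|org|net)"), "placeholder domain"),
    (re.compile(r"your-[a-z-]+-here", re.I), "fill-in placeholder"),
    (re.compile(r"\b0{8}-0{4}-0{4}-0{4}-0{12}\b"), "all-zero UUID"),
    (re.compile(r"\b(TODO|FIXME|xxx)\b", re.I), "unfinished marker"),
]

def scan_line(line: str):
    """Return (matched text, reason) findings for one line of content."""
    return [(m.group(0), why) for rx, why in SUSPICIOUS for m in rx.finditer(line)]

print(scan_line("See https://example.com/docs and token your-api-key-here"))
```

A real scanner would also carry file and line numbers through to the severity-grouped report described in step 4.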