@paulduvall/claude-dev-toolkit 0.0.1-alpha.2 → 0.0.1-alpha.21
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +21 -0
- package/README.md +88 -37
- package/bin/claude-commands +307 -65
- package/commands/active/xarchitecture.md +393 -0
- package/commands/active/xconfig.md +127 -0
- package/commands/active/xcontinue.md +92 -0
- package/commands/active/xdebug.md +130 -0
- package/commands/active/xdocs.md +178 -0
- package/commands/active/xexplore.md +94 -0
- package/commands/active/xgit.md +149 -0
- package/commands/active/xpipeline.md +152 -0
- package/commands/active/xquality.md +96 -0
- package/commands/active/xrefactor.md +198 -0
- package/commands/active/xrelease.md +142 -0
- package/commands/active/xsecurity.md +92 -0
- package/commands/active/xspec.md +174 -0
- package/commands/active/xtdd.md +151 -0
- package/commands/active/xtest.md +89 -0
- package/commands/active/xverify.md +80 -0
- package/commands/experiments/xact.md +742 -0
- package/commands/experiments/xanalytics.md +113 -0
- package/commands/experiments/xanalyze.md +70 -0
- package/commands/experiments/xapi.md +161 -0
- package/commands/experiments/xatomic.md +112 -0
- package/commands/experiments/xaws.md +85 -0
- package/commands/experiments/xcicd.md +337 -0
- package/commands/experiments/xcommit.md +122 -0
- package/commands/experiments/xcompliance.md +182 -0
- package/commands/experiments/xconstraints.md +89 -0
- package/commands/experiments/xcoverage.md +90 -0
- package/commands/experiments/xdb.md +102 -0
- package/commands/experiments/xdesign.md +121 -0
- package/commands/experiments/xdevcontainer.md +238 -0
- package/commands/experiments/xevaluate.md +111 -0
- package/commands/experiments/xfootnote.md +12 -0
- package/commands/experiments/xgenerate.md +117 -0
- package/commands/experiments/xgovernance.md +149 -0
- package/commands/experiments/xgreen.md +66 -0
- package/commands/experiments/xiac.md +118 -0
- package/commands/experiments/xincident.md +137 -0
- package/commands/experiments/xinfra.md +115 -0
- package/commands/experiments/xknowledge.md +115 -0
- package/commands/experiments/xmaturity.md +120 -0
- package/commands/experiments/xmetrics.md +118 -0
- package/commands/experiments/xmonitoring.md +128 -0
- package/commands/experiments/xnew.md +903 -0
- package/commands/experiments/xobservable.md +114 -0
- package/commands/experiments/xoidc.md +165 -0
- package/commands/experiments/xoptimize.md +115 -0
- package/commands/experiments/xperformance.md +112 -0
- package/commands/experiments/xplanning.md +131 -0
- package/commands/experiments/xpolicy.md +115 -0
- package/commands/experiments/xproduct.md +98 -0
- package/commands/experiments/xreadiness.md +75 -0
- package/commands/experiments/xred.md +55 -0
- package/commands/experiments/xrisk.md +128 -0
- package/commands/experiments/xrules.md +124 -0
- package/commands/experiments/xsandbox.md +120 -0
- package/commands/experiments/xscan.md +102 -0
- package/commands/experiments/xsetup.md +123 -0
- package/commands/experiments/xtemplate.md +116 -0
- package/commands/experiments/xtrace.md +212 -0
- package/commands/experiments/xux.md +171 -0
- package/commands/experiments/xvalidate.md +104 -0
- package/commands/experiments/xworkflow.md +113 -0
- package/hooks/.smellrc.example.json +19 -0
- package/hooks/README.md +263 -0
- package/hooks/check-commit-signing.py +127 -0
- package/hooks/check-complexity.py +38 -0
- package/hooks/check-security.py +37 -0
- package/hooks/claude-wrapper.sh +29 -0
- package/hooks/config.py +110 -0
- package/hooks/file-logger.sh +100 -0
- package/hooks/lib/argument-parser.sh +427 -0
- package/hooks/lib/config-constants.sh +230 -0
- package/hooks/lib/context-manager.sh +560 -0
- package/hooks/lib/error-handler.sh +423 -0
- package/hooks/lib/execution-engine.sh +444 -0
- package/hooks/lib/execution-results.sh +113 -0
- package/hooks/lib/execution-simulation.sh +114 -0
- package/hooks/lib/field-validators.sh +104 -0
- package/hooks/lib/file-utils.sh +398 -0
- package/hooks/lib/subagent-discovery.sh +468 -0
- package/hooks/lib/subagent-validator.sh +407 -0
- package/hooks/lib/validation-reporter.sh +134 -0
- package/hooks/on-error-debug.sh +226 -0
- package/hooks/pre-commit-quality.sh +204 -0
- package/hooks/pre-commit-test-runner.sh +132 -0
- package/hooks/pre-write-security.sh +115 -0
- package/hooks/prevent-credential-exposure.sh +279 -0
- package/hooks/security_bandit.py +177 -0
- package/hooks/security_checks.py +97 -0
- package/hooks/security_secrets.py +81 -0
- package/hooks/security_trojan.py +61 -0
- package/hooks/settings.example.json +52 -0
- package/hooks/smell_checks.py +238 -0
- package/hooks/smell_javascript.py +231 -0
- package/hooks/smell_python.py +110 -0
- package/hooks/smell_ruff.py +70 -0
- package/hooks/smell_types.py +72 -0
- package/hooks/subagent-trigger-simple.sh +202 -0
- package/hooks/subagent-trigger.sh +253 -0
- package/hooks/suppression.py +82 -0
- package/hooks/tab-color.sh +70 -0
- package/hooks/verify-before-edit.sh +135 -0
- package/lib/backup-restore-command.js +140 -0
- package/lib/base/base-command.js +252 -0
- package/lib/base/command-result.js +184 -0
- package/lib/config/constants.js +255 -0
- package/lib/config.js +48 -6
- package/lib/configure-command.js +428 -0
- package/lib/dependency-validator.js +64 -5
- package/lib/hook-installer-core.js +2 -2
- package/lib/installation-instruction-generator.js +213 -495
- package/lib/installer.js +134 -56
- package/lib/oidc-command.js +740 -0
- package/lib/services/backup-list-service.js +226 -0
- package/lib/services/backup-service.js +230 -0
- package/lib/services/command-installer-service.js +217 -0
- package/lib/services/logger-service.js +201 -0
- package/lib/services/package-manager-service.js +319 -0
- package/lib/services/platform-instruction-service.js +294 -0
- package/lib/services/recovery-instruction-service.js +348 -0
- package/lib/services/restore-service.js +221 -0
- package/lib/setup-command.js +359 -0
- package/lib/setup-wizard.js +155 -262
- package/lib/uninstall-command.js +100 -0
- package/lib/utils/claude-path-config.js +184 -0
- package/lib/utils/file-system-utils.js +152 -0
- package/lib/utils.js +8 -4
- package/lib/verify-command.js +430 -0
- package/package.json +7 -3
- package/scripts/postinstall.js +172 -157
- package/subagents/debug-specialist.md +7 -0
- package/templates/README.md +115 -0
- package/templates/basic-settings.json +30 -0
- package/templates/comprehensive-settings.json +57 -0
- package/templates/global-claude.md +344 -0
- package/templates/hybrid-hook-config.yaml +132 -0
- package/templates/security-focused-settings.json +62 -0
- package/templates/subagent-hooks.yaml +188 -0
- package/lib/package-manager-service.js +0 -270
- package/subagents/debug-context.md +0 -197
--- /dev/null
+++ package/commands/experiments/xevaluate.md
@@ -0,0 +1,111 @@
+---
+description: Comprehensive evaluation and assessment tools for code quality and project health
+tags: [evaluation, assessment, quality, metrics, analysis]
+---
+
+Perform comprehensive evaluation and assessment based on the arguments provided in $ARGUMENTS.
+
+First, examine the project structure and available metrics:
+!find . -name "*.py" -o -name "*.js" -o -name "*.ts" | head -15
+!ls -la | grep -E "(test|spec|coverage|metrics)"
+!git log --oneline -10 2>/dev/null || echo "No git repository found"
+
+Based on $ARGUMENTS, perform the appropriate evaluation:
+
+## 1. Code Quality Assessment
+
+If evaluating quality (--quality):
+!python -m flake8 . --count 2>/dev/null || echo "No Python linting available"
+!eslint . --format compact 2>/dev/null || echo "No JavaScript linting available"
+!find . -name "*.py" -exec wc -l {} \; | awk '{sum+=$1} END {print "Total Python lines:", sum}'
+
+Analyze code quality metrics:
+- Code complexity and maintainability
+- Test coverage percentage
+- Linting and style violations
+- Documentation coverage
+- Technical debt indicators
+
+## 2. Project Health Evaluation
+
+If evaluating project health (--project):
+!git log --since="30 days ago" --pretty=format:"%h %s" | wc -l
+!git log --since="30 days ago" --pretty=format:"%an" | sort | uniq -c | sort -nr
+!find . -name "TODO" -o -name "FIXME" | xargs grep -i "todo\|fixme" | wc -l 2>/dev/null || echo "0"
+
+Assess project health indicators:
+- Development velocity and commit frequency
+- Issue resolution rate
+- Technical debt accumulation
+- Team collaboration patterns
+- Release readiness
+
+## 3. Team Performance Assessment
+
+If evaluating team performance (--team):
+!git shortlog -sn --since="30 days ago" 2>/dev/null || echo "No git history available"
+!git log --since="7 days ago" --pretty=format:"%ad" --date=short | sort | uniq -c
+
+Evaluate team metrics:
+- Individual and team velocity
+- Code review participation
+- Knowledge sharing patterns
+- Skill development indicators
+- Collaboration effectiveness
+
+## 4. Process Effectiveness Analysis
+
+If evaluating process (--process):
+!find . -name "*.yml" -o -name "*.yaml" | grep -E "(ci|pipeline|workflow)" | head -5
+!ls -la .github/workflows/ 2>/dev/null || echo "No GitHub workflows found"
+!find . -name "*test*" | wc -l
+
+Analyze development processes:
+- CI/CD pipeline effectiveness
+- Testing process maturity
+- Code review process efficiency
+- Release management effectiveness
+- Incident response capabilities
+
+## 5. Comprehensive Reporting
+
+If generating reports (--report):
+!date
+!uptime
+!df -h . | tail -1
+
+Generate evaluation metrics:
+- Overall project health score
+- Quality trend analysis
+- Risk assessment summary
+- Improvement recommendations
+- Benchmarking against industry standards
+
+Think step by step about the evaluation results and provide:
+
+1. **Current Status Assessment**:
+   - Overall health score (0-100)
+   - Key strengths identified
+   - Critical areas for improvement
+   - Risk factors and mitigation strategies
+
+2. **Trend Analysis**:
+   - Performance trends over time
+   - Quality trajectory
+   - Team productivity patterns
+   - Process improvement opportunities
+
+3. **Actionable Recommendations**:
+   - Prioritized improvement actions
+   - Resource allocation suggestions
+   - Timeline for improvements
+   - Success metrics and KPIs
+
+4. **Benchmarking Results**:
+   - Industry standard comparisons
+   - Best practice alignment
+   - Competitive positioning
+   - Excellence opportunities
+
+Generate comprehensive evaluation report with specific, actionable insights and improvement roadmap.
+
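The evaluation command above asks for an "overall health score (0-100)" but does not define how one is computed. A minimal sketch is a weighted average of per-metric scores; the metric names and weights below are illustrative assumptions, not part of the package:

```python
# Hypothetical health score: weighted average of per-metric scores,
# each metric already normalized to the 0-100 range.
def health_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-metric scores (0-100 each) into one weighted 0-100 score."""
    total_weight = sum(weights[name] for name in metrics)
    if total_weight == 0:
        return 0.0
    weighted = sum(metrics[name] * weights[name] for name in metrics)
    return round(weighted / total_weight, 1)

# Example: coverage weighted highest, team velocity lowest.
score = health_score(
    {"coverage": 80.0, "lint": 95.0, "velocity": 60.0},
    {"coverage": 0.5, "lint": 0.3, "velocity": 0.2},
)
```

Any real scoring scheme would also need agreed normalization rules per metric (e.g. mapping lint-violation counts onto 0-100) before the weighting step is meaningful.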
--- /dev/null
+++ package/commands/experiments/xfootnote.md
@@ -0,0 +1,12 @@
+---
+description: Track machine-readable requirement links for SpecDriven AI development
+tags: [requirements, traceability, specifications, footnotes, coverage]
+---
+
+Track and manage requirement links based on the arguments provided in $ARGUMENTS.
+
+First, verify this is a SpecDriven AI project structure:
+!ls -la specs/ 2>/dev/null || echo "No specs directory found"
+!find specs/specifications/ -name "*.md" 2>/dev/null | head -5 || echo "No specifications found"
+
+Based on $ARGUMENTS, perform the appropriate footnote operation:
+
+## 1. Find Requirements by ID
+
+If finding requirements (--find):
+!grep -r "#{#$footnote_id" specs/specifications/ 2>/dev/null || echo "Footnote ID not found"
+!grep -r "authority=" specs/specifications/ | grep "$footnote_id" | head -1
+
+Search for:
+- Specification containing the footnote ID
+- Authority level (system/platform/developer)
+- Context and description
+- Related requirements
+
+## 2. Generate Next Available ID
+
+If generating next ID (--next):
+!find specs/specifications/ -name "*.md" -exec grep -o "#{#[a-z]\{3\}[0-9][a-z]" {} \; 2>/dev/null | sort | tail -5
+
+Generate next footnote ID:
+- Extract component prefix (first 3 chars)
+- Find highest existing sequence number
+- Generate next available ID with proper format
+- Validate format: ^[a-z]{3}[0-9][a-z]
+
+## 3. Trace Test Implementations
+
+If tracing implementations (--trace):
+!grep -r "$footnote_id" specs/tests/ 2>/dev/null || echo "No tests found for $footnote_id"
+!grep -r "$footnote_id" . --exclude-dir=specs --exclude-dir=.git 2>/dev/null | head -5
+
+Trace requirement implementation:
+- Find tests referencing the footnote ID
+- Locate code implementing the requirement
+- Check traceability links
+- Count implementation coverage
+
+## 4. Validate ID Format
+
+If validating format (--validate):
+!echo "$footnote_id" | grep -E "^[a-z]{3}[0-9][a-z]$" >/dev/null && echo "Valid format" || echo "Invalid format"
+
+Validate footnote ID:
+- Check format compliance (3 letters + 1 digit + 1 letter)
+- Extract component prefix
+- Verify sequence number
+- Confirm version suffix
+
+## 5. Check Authority Level
+
+If checking authority (--authority):
+!grep -r "#{#$footnote_id.*authority=" specs/specifications/ 2>/dev/null | grep -o "authority=[^}]*"
+
+Determine authority level:
+- system: Critical system requirements (highest)
+- platform: Framework/infrastructure requirements (medium)
+- developer: Application/feature requirements (lowest)
+
+## 6. Dual Coverage Analysis
+
+If checking coverage (--coverage):
+!grep -r "$footnote_id" specs/tests/ >/dev/null && echo "✓ Has tests" || echo "✗ No tests"
+!python -m pytest specs/tests/ --cov=. -q 2>/dev/null | grep "TOTAL" || echo "Coverage analysis not available"
+
+Analyze dual coverage:
+- Specification coverage (tests exist for requirement)
+- Code coverage (tests execute relevant code)
+- Traceability coverage (links between spec-test-code)
+
+Think step by step about requirement traceability and provide:
+
+1. **Requirement Status**:
+   - Specification exists and is well-defined
+   - Authority level and compliance requirements
+   - Implementation completeness
+
+2. **Test Coverage**:
+   - Tests exist for the requirement
+   - Test quality and completeness
+   - Code coverage achieved by tests
+
+3. **Traceability Links**:
+   - Clear links from specification to tests
+   - Code references to requirement ID
+   - End-to-end traceability validation
+
+4. **Recommendations**:
+   - Missing tests or implementations
+   - Coverage improvement opportunities
+   - Traceability enhancement suggestions
+
+Generate requirement traceability report with coverage metrics and improvement recommendations.
+
+If no specific operation is provided, show available footnote IDs and their status.
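The footnote command above defines an ID grammar (`^[a-z]{3}[0-9][a-z]$`: 3-letter component prefix, 1-digit sequence number, 1-letter version suffix) and a "next available ID" operation. A minimal sketch of both rules, assuming the digit at index 3 is the sequence number and new IDs start at version suffix `a`:

```python
import re

# Footnote ID grammar from the specification: e.g. "aut1a".
ID_PATTERN = re.compile(r"^[a-z]{3}[0-9][a-z]$")

def is_valid_footnote_id(footnote_id: str) -> bool:
    """Check format compliance (3 letters + 1 digit + 1 letter)."""
    return bool(ID_PATTERN.match(footnote_id))

def next_footnote_id(existing: list[str], prefix: str) -> str:
    """Next available ID for a component prefix.

    Finds the highest existing sequence number for the prefix and
    increments it, resetting the version suffix to 'a' (assumption).
    """
    seqs = [
        int(fid[3])
        for fid in existing
        if fid.startswith(prefix) and is_valid_footnote_id(fid)
    ]
    return f"{prefix}{max(seqs, default=-1) + 1}a"
```

For example, `next_footnote_id(["aut0a", "aut1b", "cmd0a"], "aut")` yields `"aut2a"`. Note the single-digit sequence caps a prefix at ten requirements; the specification does not say what happens past `9`.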
--- /dev/null
+++ package/commands/experiments/xgenerate.md
@@ -0,0 +1,117 @@
+---
+description: Auto-generate code, tests, and documentation from specifications using AI
+tags: [generation, ai, code, tests, documentation, automation]
+---
+
+Generate code, tests, and documentation based on the arguments provided in $ARGUMENTS.
+
+First, examine the project structure and identify generation targets:
+!find . -name "*.py" -o -name "*.js" -o -name "*.ts" | head -10
+!ls -la specs/ 2>/dev/null || echo "No specs directory found"
+!find specs/specifications/ -name "*.md" 2>/dev/null | head -5 || echo "No specifications found"
+
+Based on $ARGUMENTS, perform the appropriate code generation:
+
+## 1. Generate Tests from Specifications
+
+If generating tests (--test):
+!find specs/specifications/ -name "*.md" -exec grep -l "$spec_id" {} \; 2>/dev/null || echo "Specification not found"
+@specs/specifications/$specification_file 2>/dev/null || echo "Unable to read specification"
+
+Generate test file:
+- Extract requirements from specification
+- Create test cases covering all scenarios
+- Include proper traceability links to specification ID
+- Follow testing framework conventions (pytest, jest, etc.)
+- Add assertion patterns based on specification authority
+
+## 2. Generate Implementation Code
+
+If generating code (--code):
+!find . -name "*test*" | grep "$test_name" | head -1
+@$test_file 2>/dev/null || echo "Test file not found"
+
+Generate minimal implementation:
+- Analyze test requirements and assertions
+- Create minimal code to satisfy all tests
+- Follow project coding standards
+- Include proper error handling
+- Add docstrings and type hints
+
+## 3. Generate Schema Definitions
+
+If generating schema (--schema):
+!find . -name "*.py" -exec grep -l "$model_name" {} \; | head -3
+!python -c "import inspect; print('Python inspection available')" 2>/dev/null || echo "Python not available"
+
+Generate Pydantic schema:
+- Extract field definitions from existing models
+- Add proper type annotations
+- Include validation rules
+- Add field descriptions and examples
+- Handle relationships and dependencies
+
+## 4. Generate Documentation
+
+If generating documentation (--docs):
+!find . -name "*.py" -o -name "*.js" -o -name "*.ts" | xargs grep -l "$component" | head -5
+!python -c "import ast; print('AST parsing available')" 2>/dev/null || echo "AST parsing not available"
+
+Generate component documentation:
+- Extract docstrings and comments
+- Generate API reference documentation
+- Create usage examples
+- Add parameter and return type documentation
+- Include integration examples
+
+## 5. Generate Configuration Files
+
+If generating config (--config):
+!ls -la | grep -E "(config|env|yml|yaml|json|toml)"
+!find . -name "*.example" -o -name "*.template" | head -3
+
+Generate configuration:
+- Create environment-specific config files
+- Add validation schemas
+- Include default values and examples
+- Add documentation comments
+- Handle sensitive data properly
+
+## 6. Template-Based Generation
+
+Check for existing templates:
+!find . -name "*template*" -o -name "*scaffold*" | head -5
+!ls -la templates/ 2>/dev/null || echo "No templates directory"
+
+Use templates when available:
+- Load appropriate template for generation type
+- Replace placeholders with actual values
+- Customize based on project structure
+- Maintain consistent formatting
+
+Think step by step about the generation requirements and:
+
+1. **Analyze Input**:
+   - Understand the specification or test requirements
+   - Identify the target output format and structure
+   - Determine dependencies and relationships
+
+2. **Generate Content**:
+   - Create minimal, focused implementation
+   - Follow established patterns and conventions
+   - Include proper error handling and validation
+   - Add comprehensive documentation
+
+3. **Ensure Quality**:
+   - Validate generated code syntax
+   - Check compliance with project standards
+   - Verify traceability links are maintained
+   - Test generated code functionality
+
+4. **Integration**:
+   - Place generated files in appropriate locations
+   - Update imports and dependencies
+   - Integrate with existing codebase
+   - Maintain project structure consistency
+
+Generate high-quality, well-documented code that follows SpecDriven AI principles and integrates seamlessly with the existing project structure.
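The generation command above probes for `ast` availability before its --docs operation ("Extract docstrings and comments"). The package does not show the extraction step itself; a minimal sketch using only the stdlib `ast` module could look like:

```python
import ast

# Sketch of the docstring-extraction step behind a --docs operation:
# parse source with the stdlib ast module and collect each function's
# docstring for an API reference.
def extract_docstrings(source: str) -> dict[str, str]:
    """Map function names to their docstrings (empty string if none)."""
    docs = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            docs[node.name] = ast.get_docstring(node) or ""
    return docs

sample = '''
def add(a, b):
    """Return the sum of a and b."""
    return a + b
'''
docs = extract_docstrings(sample)
```

Because `ast.parse` works on source text without importing it, this approach also handles files whose imports are unavailable in the current environment.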
--- /dev/null
+++ package/commands/experiments/xgovernance.md
@@ -0,0 +1,149 @@
+---
+description: Comprehensive development governance framework for policies, audits, and compliance
+tags: [governance, policies, audits, compliance, controls, standards]
+---
+
+Manage development governance based on the arguments provided in $ARGUMENTS.
+
+First, examine current governance structure and documentation:
+!find . -name "*policy*" -o -name "*governance*" -o -name "*compliance*" | head -10
+!ls -la | grep -E "(POLICY|COMPLIANCE|GOVERNANCE)"
+!find . -name "*.md" | grep -E "(policy|standard|procedure)" | head -5
+
+Based on $ARGUMENTS, perform the appropriate governance operation:
+
+## 1. Policy Management
+
+If managing policies (--policy):
+!find . -name "POLICY.md" -o -name "policies/" | head -3
+!grep -r "policy" . --include="*.md" | head -5
+
+Policy operations:
+- Create new development policies
+- Validate existing policy compliance
+- Update policies based on requirements
+- Track policy exceptions and approvals
+- Enforce policy across projects
+
+## 2. Governance Audit
+
+If running audit (--audit):
+!git log --since="30 days ago" --pretty=format:"%h %s" | head -10
+!find . -name "*audit*" -o -name "*review*" | head -5
+!ls -la .github/ 2>/dev/null || echo "No GitHub configuration found"
+
+Audit activities:
+- Review code quality standards compliance
+- Check security policy adherence
+- Validate development process maturity
+- Assess risk management effectiveness
+- Generate audit findings and recommendations
+
+## 3. Compliance Assessment
+
+If checking compliance (--compliance):
+!grep -r "compliance" . --include="*.md" --include="*.yml" | head -5
+!find . -name "*cert*" -o -name "*standard*" | head -3
+
+Compliance checks:
+- SOC 2 compliance validation
+- ISO 27001 adherence assessment
+- GDPR data protection compliance
+- Industry-specific regulatory requirements
+- Certification readiness evaluation
+
+## 4. Controls Implementation
+
+If managing controls (--controls):
+!find . -name "*.yml" -o -name "*.yaml" | grep -E "(workflow|action|pipeline)" | head -5
+!ls -la .github/workflows/ 2>/dev/null || echo "No CI/CD workflows found"
+
+Governance controls:
+- Implement automated compliance checks
+- Set up governance monitoring
+- Configure approval workflows
+- Establish access controls
+- Monitor control effectiveness
+
+## 5. Standards Management
+
+If managing standards (--standards):
+!find . -name "*standard*" -o -name "*guideline*" | head -5
+!python -m flake8 --version 2>/dev/null || echo "No Python linting standards"
+!eslint --version 2>/dev/null || echo "No JavaScript linting standards"
+
+Standards enforcement:
+- Define coding standards
+- Implement documentation standards
+- Establish security standards
+- Create architecture guidelines
+- Monitor standards compliance
+
+## 6. Review Processes
+
+If managing reviews (--review):
+!git log --grep="review" --oneline | head -5
+!find . -name "CODEOWNERS" -o -name "*review*" | head -3
+
+Review governance:
+- Code review requirements and processes
+- Architecture review checkpoints
+- Security review mandatory gates
+- Compliance review procedures
+- Approval workflow management
+
+## 7. Gap Analysis
+
+If performing gap analysis (--gap-analysis):
+!find . -name "*.md" | xargs grep -l "requirement" | head -5
+!grep -r "TODO\|FIXME" . --include="*.py" --include="*.js" | wc -l
+
+Identify gaps in:
+- Policy coverage and implementation
+- Compliance requirements fulfillment
+- Control effectiveness
+- Process maturity
+- Documentation completeness
+
+## 8. Metrics and Reporting
+
+If generating reports (--metrics, --dashboard):
+!git shortlog -sn --since="30 days ago" | head -10
+!find . -name "*test*" | wc -l
+!uptime
+
+Governance metrics:
+- Policy compliance rates
+- Audit finding resolution time
+- Control effectiveness measures
+- Process maturity indicators
+- Risk exposure levels
+
+Think step by step about governance requirements and provide:
+
+1. **Current State Assessment**:
+   - Existing governance structure
+   - Policy coverage and gaps
+   - Compliance status
+   - Control effectiveness
+
+2. **Risk Analysis**:
+   - Governance risk exposure
+   - Compliance risks
+   - Process risks
+   - Technology risks
+
+3. **Improvement Plan**:
+   - Priority governance actions
+   - Policy updates needed
+   - Control enhancements
+   - Process improvements
+
+4. **Implementation Roadmap**:
+   - Phased implementation approach
+   - Resource requirements
+   - Timeline and milestones
+   - Success metrics
+
+Generate comprehensive governance assessment with actionable recommendations for improving organizational governance maturity.
+
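The governance command above lists "policy compliance rates" as a metric without defining it. One minimal reading is the percentage of checked items satisfying a policy predicate; the repository fields and predicate here are hypothetical, for illustration only:

```python
# Illustrative "policy compliance rate": percentage of items that pass
# a policy predicate. An empty inventory is treated as fully compliant
# (an assumption; a real policy might instead flag it).
def compliance_rate(items, passes):
    """Return percentage (0-100) of items for which passes(item) is true."""
    if not items:
        return 100.0
    passing = sum(1 for item in items if passes(item))
    return round(100.0 * passing / len(items), 1)

# Hypothetical check: every repository must declare a CODEOWNERS file.
repos = [
    {"name": "api", "has_codeowners": True},
    {"name": "web", "has_codeowners": True},
    {"name": "infra", "has_codeowners": False},
]
rate = compliance_rate(repos, lambda r: r["has_codeowners"])
```

Tracking this rate per policy over time is what turns the audit findings above into the trend data the reporting section asks for.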
--- /dev/null
+++ package/commands/experiments/xgreen.md
@@ -0,0 +1,66 @@
+---
+description: Make failing tests pass following TDD Green phase principles with minimal implementation
+tags: [tdd, testing, green-phase, minimal-implementation, specifications]
+---
+
+# /xgreen — Make Tests Pass
+
+Implement minimal code to make failing tests pass following TDD Green phase principles.
+
+Think step by step:
+1. Check for SpecDriven AI project structure and existing tests
+2. Identify currently failing tests and their requirements
+3. Guide minimal implementation to make tests pass
+4. Verify all tests pass before proceeding to refactor phase
+
+## Usage
+
+```bash
+/xgreen --minimal    # Implement just enough to pass
+/xgreen --check      # Verify tests pass
+```
+
+## Implementation Steps
+
+When implementing code to make tests pass:
+
+1. **For minimal implementation (--minimal)**:
+   - Check if SpecDriven AI project structure exists (specs/ directory)
+   - If not found, suggest running `!xsetup --env` to initialize
+   - Verify that failing tests exist in @specs/tests/
+   - If no tests found, suggest creating tests first with `/xred --spec <spec-id>`
+   - Run test suite to identify failing tests and their requirements
+   - Provide guidance on GREEN phase principles for minimal implementation
+   - After implementation, verify tests pass with detailed output
+
+2. **For verification (--check)**:
+   - Run comprehensive test suite with detailed reporting
+   - Show test coverage information if available
+   - Provide clear pass/fail status for GREEN phase completion
+   - Guide next steps in TDD workflow based on results
+
+3. **Error handling**:
+   - Validate project structure and test environment
+   - Handle cases where tests are already passing
+   - Provide clear feedback on test failures and requirements
+   - Suggest appropriate next steps based on current state
+
+## GREEN Phase Principles
+
+Guide implementation following these principles:
+- Make tests pass with MINIMAL code only
+- Don't worry about code quality or elegance yet
+- Hardcode values if necessary to make tests pass
+- Focus on making tests green, not perfect code
+- Avoid adding extra functionality beyond test requirements
+- Save optimization and refactoring for the next phase
+
+## Expected Outputs
+
+- Clear identification of failing tests and requirements
+- Guidance for minimal implementation strategies
+- Verification that all tests pass after implementation
+- Test coverage reporting when available
+- Next steps in TDD workflow (refactor or commit)
+
+Use $ARGUMENTS to handle command-line parameters and `!` prefix for running test commands and coverage analysis.
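The GREEN-phase principles above ("hardcode values if necessary", "avoid adding extra functionality") can be sketched with a hypothetical test and its deliberately minimal implementation:

```python
# Hypothetical GREEN-phase step: the implementation hardcodes the one
# value the current test asserts. Generalization is deferred until the
# refactor phase, or until a second failing test forces it.
def greet(name: str) -> str:
    return "Hello, Ada!"  # deliberately hardcoded: just enough to pass

def test_greet_ada():
    assert greet("Ada") == "Hello, Ada!"

test_greet_ada()  # passes
```

A follow-up test such as `greet("Bob") == "Hello, Bob!"` would fail, which is exactly how the red/green cycle drives the implementation toward the general `f"Hello, {name}!"`.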
@@ -0,0 +1,118 @@
|
|
|
1
|
+
---
|
|
2
|
+
description: Comprehensive Infrastructure as Code management with focus on AWS IAM, Terraform, CloudFormation, and infrastructure validation
|
|
3
|
+
tags: [infrastructure, terraform, cloudformation, iam, aws, security, compliance]
|
|
4
|
+
---
|
|
5
|
+
|
|
6
|
+
Manage Infrastructure as Code operations based on the arguments provided in $ARGUMENTS.
|
|
7
|
+
|
|
8
|
+
First, examine the current IaC setup:
|
|
9
|
+
!find . -name "*.tf" -o -name "*.yml" -o -name "*.yaml" | grep -E "(terraform|cloudformation|infra)" | head -10
|
|
10
|
+
!ls -la terraform/ cloudformation/ infrastructure/ iac/ 2>/dev/null || echo "No IaC directories found"
|
|
11
|
+
!which terraform 2>/dev/null && terraform version || echo "Terraform not available"
|
|
12
|
+
!which aws 2>/dev/null && aws --version || echo "AWS CLI not available"
|
|
13
|
+
!docker --version 2>/dev/null || echo "Docker not available"
|
|
14
|
+
|
|
15
|
+
Based on $ARGUMENTS, perform the appropriate Infrastructure as Code operation:

## 1. Infrastructure Scanning and Discovery

If scanning infrastructure (--scan, --discover, --inventory):
!find . -name "*.tf" | head -10
!find . -name "*.yml" -o -name "*.yaml" | xargs grep -l "Resources\|AWSTemplateFormatVersion" 2>/dev/null | head -5
!aws sts get-caller-identity 2>/dev/null || echo "AWS credentials not configured"
!aws iam list-roles --max-items 5 2>/dev/null || echo "No AWS access or roles not accessible"

Scan and discover infrastructure:
- Analyze existing IaC files and configurations
- Discover cloud resources and dependencies
- Generate infrastructure inventory
- Detect configuration drift
- Map resource relationships

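Drift detection in the list above can be sketched with `terraform plan -detailed-exitcode`, which reserves exit code 2 for "changes present". The helper below is an illustrative sketch, not part of this command: `drift_status` is a hypothetical name, and the live check is skipped when Terraform is not installed.

```shell
# Sketch: interpret `terraform plan -detailed-exitcode` for drift detection.
# With -detailed-exitcode, terraform plan exits 0 (no changes), 1 (error),
# or 2 (changes/drift present). drift_status is a hypothetical helper.
drift_status() {
  case "$1" in
    0) echo "no-drift" ;;
    2) echo "drift-detected" ;;
    *) echo "plan-error" ;;
  esac
}

if command -v terraform >/dev/null 2>&1; then
  terraform plan -detailed-exitcode -input=false >/dev/null 2>&1
  drift_status "$?"
else
  echo "terraform not installed; skipping live drift check"
fi
```

Mapping the exit code to a label makes the result easy to feed into CI gates or scheduled drift reports.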
## 2. Terraform Operations

If managing Terraform (--terraform, --tf-validate, --tf-plan):
!terraform version 2>/dev/null || echo "Terraform not installed"
!ls -la *.tf terraform/ 2>/dev/null || echo "No Terraform files found"
!terraform init -backend=false 2>/dev/null || echo "Terraform not initialized"
!terraform validate 2>/dev/null || echo "Terraform validation failed"

Manage Terraform infrastructure:
- Validate and format Terraform configurations
- Plan and apply infrastructure changes
- Manage Terraform state and modules
- Handle provider configurations
- Perform Terraform operations safely

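The "operate safely" point can be sketched as a plan-file workflow: format-check and validate first, write the plan to a file, then apply only that reviewed file. `run_tf_step` is a hypothetical wrapper, assumed here only to log each step and skip execution when Terraform is absent.

```shell
# Sketch: a plan-file based Terraform workflow. run_tf_step is a
# hypothetical wrapper: it logs the step, then runs terraform only
# when the binary is actually installed.
run_tf_step() {
  echo "step: terraform $*"
  command -v terraform >/dev/null 2>&1 || return 0
  terraform "$@"
}

run_tf_step fmt -check -recursive       # fail fast on unformatted files
run_tf_step validate                    # syntax and internal consistency
run_tf_step plan -input=false -out=tfplan
run_tf_step apply -input=false tfplan   # apply exactly the reviewed plan
```

Applying the saved `tfplan` file, rather than re-planning at apply time, guarantees that what was reviewed is what runs.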
## 3. CloudFormation Operations

If managing CloudFormation (--cloudformation, --cf-validate, --cf-deploy):
!find . -name "*.yml" -o -name "*.yaml" -o -name "*.json" | xargs grep -l "AWSTemplateFormatVersion" 2>/dev/null | head -5
!aws cloudformation validate-template --template-body file://template.yml 2>/dev/null || echo "No valid CloudFormation templates found"
!aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE UPDATE_COMPLETE 2>/dev/null | head -10 || echo "No CloudFormation access"

Manage CloudFormation infrastructure:
- Validate and lint CloudFormation templates
- Deploy and manage CloudFormation stacks
- Handle stack updates and rollbacks
- Manage nested stacks and dependencies
- Monitor stack events and status

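The single validate call above assumes a file named `template.yml` exists. A sketch that instead discovers templates by their `AWSTemplateFormatVersion` key and validates each one (when the AWS CLI and credentials are available) might look like this; `find_cfn_templates` and the demo file are illustrative only.

```shell
# Sketch: discover CloudFormation templates by their top-level key and
# validate each one. find_cfn_templates is a hypothetical helper; the
# demo template exists only to make the sketch self-contained.
find_cfn_templates() {
  grep -rl --include="*.yml" --include="*.yaml" --include="*.json" \
    "AWSTemplateFormatVersion" "$1" 2>/dev/null
}

demo_dir=$(mktemp -d)
printf 'AWSTemplateFormatVersion: "2010-09-09"\nResources: {}\n' > "$demo_dir/stack.yml"

for tpl in $(find_cfn_templates "$demo_dir"); do
  if command -v aws >/dev/null 2>&1; then
    aws cloudformation validate-template --template-body "file://$tpl" \
      2>/dev/null || echo "could not validate $tpl (no AWS access?)"
  else
    echo "found template: $tpl"
  fi
done
```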
## 4. IAM Security Management

If managing IAM (--iam-roles, --iam-policies, --iam-validate):
!find . -name "*.tf" -o -name "*.yml" -o -name "*.yaml" | xargs grep -l "iam\|IAM" 2>/dev/null | head -5
!aws iam list-roles --max-items 10 2>/dev/null || echo "IAM access not available"
!grep -r "aws_iam\|AWS::IAM" . --include="*.tf" --include="*.yml" --include="*.yaml" 2>/dev/null | head -5

Manage IAM security:
- Analyze and validate IAM roles and policies
- Check least privilege compliance
- Scan for overly permissive policies
- Validate IAM policy syntax and logic
- Assess security posture and risks

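A least-privilege check can start as simply as flagging wildcard actions in source. The sketch below covers the JSON policy form and the Terraform `aws_iam_policy_document` form; `flag_wildcards` and the demo file are hypothetical, and a real audit should also use tools such as IAM Access Analyzer or checkov.

```shell
# Sketch: flag potentially over-permissive IAM statements by searching
# for wildcard actions. flag_wildcards and the demo file are hypothetical.
flag_wildcards() {
  grep -rn '"Action"[[:space:]]*:[[:space:]]*"\*"' "$1" 2>/dev/null
  grep -rn 'actions[[:space:]]*=[[:space:]]*\["\*"\]' "$1" 2>/dev/null
}

demo_dir=$(mktemp -d)
cat > "$demo_dir/policy.tf" <<'EOF'
statement {
  actions   = ["*"]
  resources = ["*"]
}
EOF
flag_wildcards "$demo_dir"
```

Each hit is printed with its file and line number, which makes a convenient starting point for a least-privilege review.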
## 5. Security and Compliance Scanning

If performing security analysis (--security-scan, --compliance, --secrets-scan):
!pip install checkov 2>/dev/null || echo "Install checkov: pip install checkov"
!checkov -d . --framework terraform cloudformation 2>/dev/null || echo "Checkov not available"
!grep -r "password\|secret\|key" . --include="*.tf" --include="*.yml" --include="*.yaml" 2>/dev/null | grep -v "example\|template" | head -5

Perform security analysis:
- Scan for security vulnerabilities
- Check compliance with security standards
- Detect hardcoded secrets and credentials
- Validate encryption and security controls
- Generate security assessment reports

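The grep heuristic above can be tightened to match only assigned literal values, which cuts noise from comments and variable declarations. `scan_secrets` below is a hypothetical sketch with a demo file for illustration; a dedicated scanner (checkov, gitleaks) should be preferred for real assessments.

```shell
# Sketch: a minimal hardcoded-secret heuristic over IaC sources.
# scan_secrets and the demo file are hypothetical; prefer a dedicated
# scanner for real assessments.
scan_secrets() {
  grep -rniE --include="*.tf" --include="*.yml" --include="*.yaml" \
    '(password|secret|access_key)[[:space:]]*[:=][[:space:]]*"[^"]+"' "$1" 2>/dev/null \
    | grep -viE 'example|template|variable'
}

demo_dir=$(mktemp -d)
echo 'db_password = "hunter2"' > "$demo_dir/main.tf"
scan_secrets "$demo_dir"
```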
Think step by step about Infrastructure as Code requirements and provide:

1. **Current State Assessment**:
   - Existing IaC tool usage and maturity
   - Infrastructure security posture
   - Compliance gaps and risks
   - Resource organization and management

2. **IaC Strategy**:
   - Tool selection and standardization
   - Module and template design patterns
   - State management and collaboration
   - Security and compliance integration

3. **Implementation Plan**:
   - IaC adoption and migration strategy
   - CI/CD pipeline integration
   - Testing and validation framework
   - Team training and knowledge transfer

4. **Operational Excellence**:
   - Monitoring and drift detection
   - Cost optimization strategies
   - Disaster recovery and backup
   - Continuous security assessment

Generate a comprehensive Infrastructure as Code management plan with security controls, compliance checks, automation strategies, and operational best practices.

If no specific operation is provided, perform an IaC readiness assessment and recommend an implementation strategy based on the current infrastructure setup and organizational needs.