@paulduvall/claude-dev-toolkit 0.0.1-alpha.1 → 0.0.1-alpha.11
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +74 -23
- package/bin/claude-commands +263 -64
- package/commands/active/xarchitecture.md +393 -0
- package/commands/active/xconfig.md +127 -0
- package/commands/active/xdebug.md +130 -0
- package/commands/active/xdocs.md +178 -0
- package/commands/active/xgit.md +149 -0
- package/commands/active/xpipeline.md +152 -0
- package/commands/active/xquality.md +96 -0
- package/commands/active/xrefactor.md +198 -0
- package/commands/active/xrelease.md +142 -0
- package/commands/active/xsecurity.md +92 -0
- package/commands/active/xspec.md +174 -0
- package/commands/active/xtdd.md +151 -0
- package/commands/active/xtest.md +89 -0
- package/commands/experiments/xact.md +742 -0
- package/commands/experiments/xanalytics.md +113 -0
- package/commands/experiments/xanalyze.md +70 -0
- package/commands/experiments/xapi.md +161 -0
- package/commands/experiments/xatomic.md +112 -0
- package/commands/experiments/xaws.md +85 -0
- package/commands/experiments/xcicd.md +337 -0
- package/commands/experiments/xcommit.md +122 -0
- package/commands/experiments/xcompliance.md +182 -0
- package/commands/experiments/xconstraints.md +89 -0
- package/commands/experiments/xcoverage.md +90 -0
- package/commands/experiments/xdb.md +102 -0
- package/commands/experiments/xdesign.md +121 -0
- package/commands/experiments/xevaluate.md +111 -0
- package/commands/experiments/xfootnote.md +12 -0
- package/commands/experiments/xgenerate.md +117 -0
- package/commands/experiments/xgovernance.md +149 -0
- package/commands/experiments/xgreen.md +66 -0
- package/commands/experiments/xiac.md +118 -0
- package/commands/experiments/xincident.md +137 -0
- package/commands/experiments/xinfra.md +115 -0
- package/commands/experiments/xknowledge.md +115 -0
- package/commands/experiments/xmaturity.md +120 -0
- package/commands/experiments/xmetrics.md +118 -0
- package/commands/experiments/xmonitoring.md +128 -0
- package/commands/experiments/xnew.md +898 -0
- package/commands/experiments/xobservable.md +114 -0
- package/commands/experiments/xoidc.md +165 -0
- package/commands/experiments/xoptimize.md +115 -0
- package/commands/experiments/xperformance.md +112 -0
- package/commands/experiments/xplanning.md +131 -0
- package/commands/experiments/xpolicy.md +115 -0
- package/commands/experiments/xproduct.md +98 -0
- package/commands/experiments/xreadiness.md +75 -0
- package/commands/experiments/xred.md +55 -0
- package/commands/experiments/xrisk.md +128 -0
- package/commands/experiments/xrules.md +124 -0
- package/commands/experiments/xsandbox.md +120 -0
- package/commands/experiments/xscan.md +102 -0
- package/commands/experiments/xsetup.md +123 -0
- package/commands/experiments/xtemplate.md +116 -0
- package/commands/experiments/xtrace.md +212 -0
- package/commands/experiments/xux.md +171 -0
- package/commands/experiments/xvalidate.md +104 -0
- package/commands/experiments/xworkflow.md +113 -0
- package/hooks/README.md +231 -0
- package/hooks/file-logger.sh +98 -0
- package/hooks/lib/argument-parser.sh +422 -0
- package/hooks/lib/config-constants.sh +230 -0
- package/hooks/lib/context-manager.sh +549 -0
- package/hooks/lib/error-handler.sh +412 -0
- package/hooks/lib/execution-engine.sh +627 -0
- package/hooks/lib/file-utils.sh +375 -0
- package/hooks/lib/subagent-discovery.sh +465 -0
- package/hooks/lib/subagent-validator.sh +597 -0
- package/hooks/on-error-debug.sh +221 -0
- package/hooks/pre-commit-quality.sh +204 -0
- package/hooks/pre-write-security.sh +107 -0
- package/hooks/prevent-credential-exposure.sh +265 -0
- package/hooks/subagent-trigger-simple.sh +193 -0
- package/hooks/subagent-trigger.sh +253 -0
- package/lib/backup-restore-command.js +140 -0
- package/lib/base/base-command.js +252 -0
- package/lib/base/command-result.js +184 -0
- package/lib/config/constants.js +255 -0
- package/lib/config.js +228 -3
- package/lib/configure-command.js +428 -0
- package/lib/dependency-validator.js +64 -5
- package/lib/hook-installer-core.js +2 -2
- package/lib/installation-instruction-generator-backup.js +579 -0
- package/lib/installation-instruction-generator.js +213 -495
- package/lib/installer.js +134 -56
- package/lib/oidc-command.js +363 -0
- package/lib/result.js +138 -0
- package/lib/services/backup-list-service.js +226 -0
- package/lib/services/backup-service.js +230 -0
- package/lib/services/command-installer-service.js +217 -0
- package/lib/services/logger-service.js +201 -0
- package/lib/services/package-manager-service.js +319 -0
- package/lib/services/platform-instruction-service.js +294 -0
- package/lib/services/recovery-instruction-service.js +348 -0
- package/lib/services/restore-service.js +221 -0
- package/lib/setup-command.js +309 -0
- package/lib/subagent-formatter.js +278 -0
- package/lib/subagents-core.js +237 -0
- package/lib/subagents.js +508 -0
- package/lib/types.d.ts +183 -0
- package/lib/utils/claude-path-config.js +184 -0
- package/lib/utils/file-system-utils.js +152 -0
- package/lib/utils.js +8 -4
- package/lib/verify-command.js +430 -0
- package/package.json +17 -4
- package/scripts/postinstall.js +28 -10
- package/subagents/api-guardian.md +29 -0
- package/subagents/audit-trail-verifier.md +24 -0
- package/subagents/change-scoper.md +23 -0
- package/subagents/ci-pipeline-curator.md +24 -0
- package/subagents/code-review-assistant.md +258 -0
- package/subagents/continuous-release-orchestrator.md +29 -0
- package/subagents/contract-tester.md +24 -0
- package/subagents/data-steward.md +29 -0
- package/subagents/debug-context.md +197 -0
- package/subagents/debug-specialist.md +138 -0
- package/subagents/dependency-steward.md +24 -0
- package/subagents/deployment-strategist.md +29 -0
- package/subagents/documentation-curator.md +29 -0
- package/subagents/environment-guardian.md +29 -0
- package/subagents/license-compliance-guardian.md +29 -0
- package/subagents/observability-engineer.md +25 -0
- package/subagents/performance-guardian.md +29 -0
- package/subagents/product-owner-proxy.md +28 -0
- package/subagents/requirements-reviewer.md +26 -0
- package/subagents/rollback-first-responder.md +24 -0
- package/subagents/sbom-provenance.md +25 -0
- package/subagents/security-auditor.md +29 -0
- package/subagents/style-enforcer.md +23 -0
- package/subagents/test-writer.md +24 -0
- package/subagents/trunk-guardian.md +29 -0
- package/subagents/workflow-coordinator.md +26 -0
- package/templates/README.md +100 -0
- package/templates/basic-settings.json +30 -0
- package/templates/comprehensive-settings.json +206 -0
- package/templates/hybrid-hook-config.yaml +133 -0
- package/templates/security-focused-settings.json +62 -0
- package/templates/subagent-hooks.yaml +188 -0
- package/tsconfig.json +37 -0
+++ package/commands/experiments/xconstraints.md
@@ -0,0 +1,89 @@
+---
+description: Manage and enforce development constraints for quality and compliance
+tags: [constraints, quality, compliance, validation, governance]
+---
+
+Manage development constraints based on the arguments provided in $ARGUMENTS.
+
+First, check for existing constraint configuration:
+!ls -la .constraints.yml .constraints.yaml 2>/dev/null || echo "No constraint configuration found"
+!find . -name "*constraint*" -o -name "*rule*" | head -5
+
+Based on $ARGUMENTS, perform the appropriate constraint operation:
+
+## 1. Define New Constraints
+
+If defining constraints (--define):
+!touch .constraints.yml
+!echo "Adding constraint: $constraint_name" >> .constraints.yml
+
+Common constraint types to define:
+- Code complexity limits (max_complexity=10)
+- File size limits (max_lines=500)
+- Naming conventions (snake_case, camelCase)
+- Security patterns (no_secrets, https_only)
+- Architecture boundaries (no_direct_db_access)
+
+## 2. Enforce Constraints
+
+If enforcing constraints (--enforce):
+!python -m flake8 --max-complexity=10 . 2>/dev/null || echo "No Python linter available"
+!eslint --max-complexity 10 . 2>/dev/null || echo "No JavaScript linter available"
+!grep -r "password\|secret\|key" . --exclude-dir=.git | head -5 || echo "No hardcoded secrets found"
+
+Check for:
+- Code complexity violations
+- File size violations
+- Naming convention violations
+- Security violations
+- Architecture violations
+
+## 3. Validate Compliance
+
+If validating constraints (--validate):
+!find . -name "*.py" -exec wc -l {} \; | awk '$1 > 500 {print $2 ": " $1 " lines (exceeds 500)"}'
+!find . -name "*.js" -exec wc -l {} \; | awk '$1 > 300 {print $2 ": " $1 " lines (exceeds 300)"}'
+
+Validate:
+- Code meets complexity limits
+- Files are within size limits
+- Naming follows conventions
+- No security violations
+- Architecture boundaries respected
+
+## 4. List Current Constraints
+
+If listing constraints (--list):
+@.constraints.yml 2>/dev/null || echo "No constraints file found"
+!echo "Active constraints:"
+!echo "- Max complexity: 10"
+!echo "- Max file lines: 500"
+!echo "- Naming: snake_case (Python), camelCase (JavaScript)"
+!echo "- Security: No hardcoded secrets"
+
+## 5. Generate Compliance Report
+
+If generating report (--report):
+!date
+!echo "=== Constraint Compliance Report ==="
+!echo "Project: $(basename $(pwd))"
+
+Run constraint checks:
+!python -c "import ast; print('Python syntax check: OK')" 2>/dev/null || echo "Python syntax issues found"
+!node -c "console.log('JavaScript syntax check: OK')" 2>/dev/null || echo "JavaScript syntax issues found"
+
+Generate summary:
+- Total files checked
+- Violations found
+- Compliance percentage
+- Recommendations for fixes
+
+Think step by step about constraint violations and provide:
+- Current compliance status
+- Specific violations found
+- Prioritized fix recommendations
+- Prevention strategies
+- Integration suggestions
+
+Report overall constraint health and suggest improvements for maintaining code quality and compliance.
+
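The `--validate` size check above can also run as a standalone script outside the slash command; a minimal sketch, assuming POSIX sh and the same 500-line default the command uses:

```shell
#!/bin/sh
# Flag any Python file over a configurable line limit (500 matches the
# command's default). .git is excluded so repository internals are skipped.
MAX_LINES=${MAX_LINES:-500}
find . -name "*.py" -not -path "./.git/*" -exec wc -l {} \; |
  awk -v max="$MAX_LINES" '$1 > max {print $2 ": " $1 " lines (exceeds " max ")"}'
```

Piping `wc -l` output into `awk` keeps the script dependency-free; the limit can be overridden per run with `MAX_LINES=300 sh check.sh`.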
+++ package/commands/experiments/xcoverage.md
@@ -0,0 +1,90 @@
+---
+description: Comprehensive dual coverage analysis for code and specifications
+tags: [coverage, testing, specifications, quality, metrics]
+---
+
+Perform dual coverage analysis based on the arguments provided in $ARGUMENTS.
+
+First, examine the project structure for test files and coverage tools:
+!find . -name "*test*" -o -name "*spec*" | grep -E "\.(py|js|ts)$" | head -10
+!ls -la | grep -E "(pytest|jest|coverage|nyc)"
+!which pytest 2>/dev/null || which npm 2>/dev/null || echo "No test runners found"
+
+Based on $ARGUMENTS, perform the appropriate coverage analysis:
+
+## 1. HTML Coverage Report Generation
+
+If generating HTML report (--html):
+!python -m pytest --cov=. --cov-report=html 2>/dev/null || npm test -- --coverage 2>/dev/null || echo "No coverage tools configured"
+!ls htmlcov/ 2>/dev/null && echo "HTML report generated in htmlcov/" || echo "No HTML coverage report found"
+
+## 2. Missing Coverage Analysis
+
+If checking missing coverage (--missing):
+!python -m pytest --cov=. --cov-report=term-missing 2>/dev/null || echo "Python coverage not available"
+!npm test -- --coverage --verbose 2>/dev/null || echo "JavaScript coverage not available"
+
+Show uncovered lines and specifications that need attention.
+
+## 3. Specification Coverage Analysis
+
+If checking specific specification (--spec):
+@specs/ 2>/dev/null || echo "No specs directory found"
+!find . -name "*test*" -exec grep -l "$spec_id" {} \; 2>/dev/null
+
+Analyze:
+- Tests linked to the specification
+- Code coverage for specification implementation
+- Traceability from spec to test to code
+
+## 4. Dual Coverage Metrics
+
+If showing dual coverage (--dual):
+!python -m pytest --cov=. --cov-report=term 2>/dev/null | grep "TOTAL" || echo "Code coverage not available"
+!find specs/ -name "*.md" 2>/dev/null | wc -l | xargs echo "Total specifications:"
+!find . -name "*test*" 2>/dev/null | wc -l | xargs echo "Total test files:"
+
+Calculate:
+- Code coverage percentage
+- Specification coverage percentage
+- Traceability coverage percentage
+- Combined dual coverage score
+
+## 5. Authority Level Coverage
+
+If checking by authority (--authority):
+!grep -r "authority=$authority_level" specs/ 2>/dev/null || echo "No authority specifications found"
+
+Break down coverage by:
+- System level specifications
+- Platform level specifications
+- Developer level specifications
+
+## 6. Coverage Gaps Analysis
+
+If identifying gaps (--gaps):
+!find specs/ -name "*.md" -exec basename {} \; 2>/dev/null | sed 's/\.md$//' > /tmp/specs.txt
+!find . -name "*test*" -exec grep -l "spec" {} \; 2>/dev/null | xargs grep -o "spec[0-9a-zA-Z]*" | sort -u > /tmp/tested_specs.txt
+!comm -23 <(sort /tmp/specs.txt) <(sort /tmp/tested_specs.txt) 2>/dev/null || echo "Gap analysis not available"
+
+Identify:
+- Specifications without tests
+- Code without specification coverage
+- Missing traceability links
+
+## 7. Comprehensive Metrics Dashboard
+
+If generating metrics (--metrics):
+!uptime
+!date
+
+Think step by step about coverage analysis and provide:
+- Current code coverage percentage
+- Specification coverage percentage
+- Traceability coverage percentage
+- Gap analysis summary
+- Recommendations for improvement
+- Coverage trends and targets
+
+Generate a comprehensive coverage report with actionable insights and recommendations.
+
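The "combined dual coverage score" in the `--dual` section is never given a formula. One plausible reading, averaging the code and specification coverage percentages, can be sketched as follows; the equal weighting and all the sample numbers are assumptions, not behavior defined by the command:

```shell
#!/bin/sh
# Hypothetical dual-coverage score: mean of code coverage and spec coverage.
code_cov=78          # e.g. parsed from the pytest --cov TOTAL line
specs_total=40       # find specs/ -name "*.md" | wc -l
specs_tested=30      # specs referenced from at least one test file
spec_cov=$(( specs_tested * 100 / specs_total ))
dual=$(( (code_cov + spec_cov) / 2 ))
echo "code=${code_cov}% spec=${spec_cov}% dual=${dual}%"
# → code=78% spec=75% dual=76%
```

Integer arithmetic in `$(( ))` truncates, which is usually acceptable for a whole-percent score; a real implementation might weight the two inputs differently.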
+++ package/commands/experiments/xdb.md
@@ -0,0 +1,102 @@
+---
+description: Comprehensive database management, migrations, and performance operations
+tags: [database, schema, migration, performance, backup]
+---
+
+Perform database operations based on the arguments provided in $ARGUMENTS.
+
+First, examine the project for database configuration and tools:
+!ls -la | grep -E "(database|db|migration|schema)"
+!find . -name "*.sql" -o -name "*migration*" -o -name "*schema*" | head -10
+!which psql 2>/dev/null || which mysql 2>/dev/null || which sqlite3 2>/dev/null || echo "No database clients found"
+
+Based on $ARGUMENTS, perform the appropriate database operation:
+
+## 1. Schema Management
+
+If managing schema (--schema):
+!find . -name "schema.sql" -o -name "*.schema" | head -5
+!ls models/ 2>/dev/null || ls app/models/ 2>/dev/null || echo "No models directory found"
+
+For schema operations:
+- Check existing schema files
+- Validate schema syntax
+- Generate schema documentation
+- Compare schema versions
+
+## 2. Migration Operations
+
+If handling migrations (--migrate):
+!find . -name "*migration*" -o -path "*/migrations/*" | head -10
+!python manage.py showmigrations 2>/dev/null || rails db:migrate:status 2>/dev/null || echo "No migration framework detected"
+
+Migration tasks:
+- Check migration status
+- Run pending migrations
+- Create new migration files
+- Rollback migrations if needed
+
+## 3. Data Seeding
+
+If seeding data (--seed):
+!find . -name "*seed*" -o -name "*fixture*" | head -5
+!python manage.py loaddata 2>/dev/null || rails db:seed 2>/dev/null || echo "No seeding framework detected"
+
+Seeding operations:
+- Load test fixtures
+- Populate sample data
+- Environment-specific seeding
+- Data validation after seeding
+
+## 4. Performance Analysis
+
+If analyzing performance (--performance):
+!ps aux | grep -E "(postgres|mysql|sqlite)" | head -3
+!top -l 1 | grep -E "(CPU|Memory)" 2>/dev/null || echo "System stats not available"
+
+Performance checks:
+- Database connection status
+- Query performance analysis
+- Index optimization suggestions
+- Resource usage monitoring
+
+## 5. Backup Operations
+
+If performing backup (--backup):
+!ls -la *.sql *.dump 2>/dev/null || echo "No backup files found"
+!which pg_dump 2>/dev/null || which mysqldump 2>/dev/null || echo "No backup tools found"
+
+Backup tasks:
+- Create database backups
+- Verify backup integrity
+- Schedule automated backups
+- Test restore procedures
+
+## 6. Database Testing
+
+If testing database (--test):
+!python -m pytest tests/test_*db* 2>/dev/null || npm test 2>/dev/null || echo "No database tests found"
+!find . -name "*test*" | grep -i db | head -5
+
+Testing operations:
+- Run database unit tests
+- Test migration scripts
+- Validate data integrity
+- Check constraint violations
+
+## 7. Connection and Status
+
+Check database connectivity:
+!python -c "import sqlite3; print('SQLite available')" 2>/dev/null || echo "SQLite not available"
+!python -c "import psycopg2; print('PostgreSQL client available')" 2>/dev/null || echo "PostgreSQL client not available"
+!python -c "import pymongo; print('MongoDB client available')" 2>/dev/null || echo "MongoDB client not available"
+
+Think step by step about database operations and provide:
+- Current database status
+- Available operations for detected database type
+- Recommendations for database optimization
+- Best practices for data management
+- Security considerations
+
+Generate database management report with actionable recommendations.
+
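The client-detection chains above use `which`, which is not standardized and behaves differently across systems; an equivalent probe with `command -v` (which POSIX does specify) might look like this sketch:

```shell
#!/bin/sh
# Report the first available database client, mirroring the `which` chain in
# the command above but using the portable `command -v` builtin.
for client in psql mysql sqlite3; do
  if command -v "$client" > /dev/null 2>&1; then
    echo "found client: $client"
    exit 0
  fi
done
echo "No database clients found"
```

The same pattern works for the `pg_dump`/`mysqldump` backup-tool probe; only the list of names changes.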
+++ package/commands/experiments/xdesign.md
@@ -0,0 +1,121 @@
+---
+description: Apply design patterns and architectural principles to improve code quality
+tags: [design-patterns, architecture, solid, refactoring, best-practices]
+---
+
+Analyze code structure and apply design patterns based on the arguments provided in $ARGUMENTS.
+
+First, examine the project structure and identify current patterns:
+!find . -name "*.py" -o -name "*.js" -o -name "*.ts" -o -name "*.java" | head -15
+!ls -la src/ app/ lib/ 2>/dev/null || echo "No standard source directories found"
+
+Based on $ARGUMENTS, perform the appropriate design analysis:
+
+## 1. Pattern Analysis and Suggestions
+
+If analyzing patterns (--patterns, --analyze):
+!grep -r "class" . --include="*.py" --include="*.js" --include="*.ts" | head -10
+!grep -r "interface\|abstract" . --include="*.py" --include="*.js" --include="*.ts" | head -5
+
+Analyze current code for:
+- Existing design patterns
+- Anti-patterns and code smells
+- Opportunities for pattern application
+- Architectural structure
+
+## 2. SOLID Principles Assessment
+
+If checking SOLID principles (--solid, --principles):
+!find . -name "*.py" -exec grep -l "class" {} \; | head -5
+!python -c "import ast; print('Analyzing class structures...')" 2>/dev/null || echo "Python AST analysis not available"
+
+Check for:
+- Single Responsibility Principle violations
+- Open/Closed Principle compliance
+- Liskov Substitution Principle adherence
+- Interface Segregation implementation
+- Dependency Inversion usage
+
+## 3. Code Quality Analysis
+
+If checking DRY violations (--dry):
+!grep -r "def\|function" . --include="*.py" --include="*.js" | cut -d: -f2 | sort | uniq -c | sort -nr | head -10
+!find . -name "*.py" -exec grep -l "copy\|duplicate" {} \; 2>/dev/null
+
+Identify:
+- Duplicated code blocks
+- Similar functions/methods
+- Copy-paste patterns
+- Refactoring opportunities
+
+## 4. Coupling and Cohesion Analysis
+
+If analyzing coupling (--coupling):
+!find . -name "*.py" -exec grep -c "import" {} \; | sort -nr | head -10
+!grep -r "from.*import" . --include="*.py" | wc -l
+
+Evaluate:
+- Module dependencies
+- Import complexity
+- Circular dependencies
+- Cohesion within modules
+
+## 5. Refactoring Suggestions
+
+If providing refactoring guidance (--refactor):
+!find . -name "*.py" -exec wc -l {} \; | awk '$1 > 100 {print $2 ": " $1 " lines (consider refactoring)"}'
+!grep -r "def" . --include="*.py" | wc -l | xargs echo "Total functions:"
+
+Suggest:
+- Extract method opportunities
+- Class decomposition
+- Interface extraction
+- Dependency injection improvements
+
+## 6. Specific Pattern Implementation
+
+If implementing specific patterns (--factory, --observer, --strategy):
+@src/ 2>/dev/null || @app/ 2>/dev/null || echo "No source directory to analyze"
+
+Pattern suggestions based on context:
+- Factory patterns for object creation
+- Observer patterns for event handling
+- Strategy patterns for algorithm selection
+- Repository patterns for data access
+- Decorator patterns for feature extension
+
+## 7. Architecture Pattern Assessment
+
+If checking architecture patterns (--mvc, --repository):
+!find . -name "*model*" -o -name "*view*" -o -name "*controller*" | head -10
+!find . -name "*repository*" -o -name "*service*" -o -name "*dao*" | head -5
+
+Assess current architecture:
+- MVC pattern implementation
+- Layer separation
+- Service layer design
+- Data access patterns
+
+## 8. Best Practices Review
+
+If reviewing best practices (--best-practices, --clean-code):
+!python -m flake8 . 2>/dev/null | head -10 || echo "No Python linting available"
+!eslint . 2>/dev/null | head -10 || echo "No JavaScript linting available"
+
+Review:
+- Naming conventions
+- Function/method length
+- Class responsibilities
+- Code complexity
+- Documentation quality
+
+Think step by step about design improvements and provide:
+- Current design pattern usage
+- Anti-pattern identification
+- SOLID principle compliance
+- Refactoring recommendations
+- Architecture improvement suggestions
+- Implementation guidance for suggested patterns
+
+Generate a comprehensive design analysis with actionable recommendations for code quality improvement.
+
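The `--refactor` check above flags long files; a similar sketch can flag long functions as extract-method candidates. It counts lines between consecutive `def` markers, so it is a rough approximation (one file at a time, nested defs and trailing class code skew the counts), not a parser:

```shell
#!/bin/sh
# Usage: sh longfuncs.sh FILE  -- list Python functions longer than 30 lines.
# Approximation only: measures the distance between consecutive def lines.
[ $# -gt 0 ] || { echo "usage: longfuncs.sh FILE"; exit 0; }
awk '
  /^[[:space:]]*def / {
    if (name != "" && NR - start > 30)
      print FILENAME ":" name " (" NR - start " lines)"
    name = $2; start = NR
  }
  END {
    if (name != "" && NR - start > 30)
      print FILENAME ":" name " (" NR - start " lines)"
  }
' "$1"
```

For anything beyond a quick survey, an `ast`-based walker would give exact function extents, but the awk version needs no Python at all.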
+++ package/commands/experiments/xevaluate.md
@@ -0,0 +1,111 @@
+---
+description: Comprehensive evaluation and assessment tools for code quality and project health
+tags: [evaluation, assessment, quality, metrics, analysis]
+---
+
+Perform comprehensive evaluation and assessment based on the arguments provided in $ARGUMENTS.
+
+First, examine the project structure and available metrics:
+!find . -name "*.py" -o -name "*.js" -o -name "*.ts" | head -15
+!ls -la | grep -E "(test|spec|coverage|metrics)"
+!git log --oneline -10 2>/dev/null || echo "No git repository found"
+
+Based on $ARGUMENTS, perform the appropriate evaluation:
+
+## 1. Code Quality Assessment
+
+If evaluating quality (--quality):
+!python -m flake8 . --count 2>/dev/null || echo "No Python linting available"
+!eslint . --format compact 2>/dev/null || echo "No JavaScript linting available"
+!find . -name "*.py" -exec wc -l {} \; | awk '{sum+=$1} END {print "Total Python lines:", sum}'
+
+Analyze code quality metrics:
+- Code complexity and maintainability
+- Test coverage percentage
+- Linting and style violations
+- Documentation coverage
+- Technical debt indicators
+
+## 2. Project Health Evaluation
+
+If evaluating project health (--project):
+!git log --since="30 days ago" --pretty=format:"%h %s" | wc -l
+!git log --since="30 days ago" --pretty=format:"%an" | sort | uniq -c | sort -nr
+!find . -name "TODO" -o -name "FIXME" | xargs grep -i "todo\|fixme" | wc -l 2>/dev/null || echo "0"
+
+Assess project health indicators:
+- Development velocity and commit frequency
+- Issue resolution rate
+- Technical debt accumulation
+- Team collaboration patterns
+- Release readiness
+
+## 3. Team Performance Assessment
+
+If evaluating team performance (--team):
+!git shortlog -sn --since="30 days ago" 2>/dev/null || echo "No git history available"
+!git log --since="7 days ago" --pretty=format:"%ad" --date=short | sort | uniq -c
+
+Evaluate team metrics:
+- Individual and team velocity
+- Code review participation
+- Knowledge sharing patterns
+- Skill development indicators
+- Collaboration effectiveness
+
+## 4. Process Effectiveness Analysis
+
+If evaluating process (--process):
+!find . -name "*.yml" -o -name "*.yaml" | grep -E "(ci|pipeline|workflow)" | head -5
+!ls -la .github/workflows/ 2>/dev/null || echo "No GitHub workflows found"
+!find . -name "*test*" | wc -l
+
+Analyze development processes:
+- CI/CD pipeline effectiveness
+- Testing process maturity
+- Code review process efficiency
+- Release management effectiveness
+- Incident response capabilities
+
+## 5. Comprehensive Reporting
+
+If generating reports (--report):
+!date
+!uptime
+!df -h . | tail -1
+
+Generate evaluation metrics:
+- Overall project health score
+- Quality trend analysis
+- Risk assessment summary
+- Improvement recommendations
+- Benchmarking against industry standards
+
+Think step by step about the evaluation results and provide:
+
+1. **Current Status Assessment**:
+   - Overall health score (0-100)
+   - Key strengths identified
+   - Critical areas for improvement
+   - Risk factors and mitigation strategies
+
+2. **Trend Analysis**:
+   - Performance trends over time
+   - Quality trajectory
+   - Team productivity patterns
+   - Process improvement opportunities
+
+3. **Actionable Recommendations**:
+   - Prioritized improvement actions
+   - Resource allocation suggestions
+   - Timeline for improvements
+   - Success metrics and KPIs
+
+4. **Benchmarking Results**:
+   - Industry standard comparisons
+   - Best practice alignment
+   - Competitive positioning
+   - Excellence opportunities
+
+Generate comprehensive evaluation report with specific, actionable insights and improvement roadmap.
+
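The command asks for an "overall health score (0-100)" but defines no formula. A toy scoring scheme, combining 30-day commit activity with a TODO/FIXME debt penalty, could look like the sketch below; the weighting and the sample inputs are invented purely for illustration:

```shell
#!/bin/sh
# Hypothetical 0-100 health score: up to 50 points for recent activity,
# up to 50 points for low technical-debt markers.
commits_30d=42      # git log --since="30 days ago" --oneline | wc -l
todo_count=18       # grep -ri "todo\|fixme" count
activity=$commits_30d
if [ "$activity" -gt 50 ]; then activity=50; fi
debt=$(( 50 - todo_count ))
if [ "$debt" -lt 0 ]; then debt=0; fi
score=$(( activity + debt ))
echo "health score: ${score}/100"
# → health score: 74/100
```

Capping each component keeps the score bounded; a real rubric would calibrate the inputs against project size.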
@@ -0,0 +1,12 @@
|
|
|
1
|
+
---
|
|
2
|
+
description: Track machine-readable requirement links for SpecDriven AI development
|
|
3
|
+
tags: [requirements, traceability, specifications, footnotes, coverage]
|
|
4
|
+
---
|
|
5
|
+
|
|
6
|
+
Track and manage requirement links based on the arguments provided in $ARGUMENTS.

First, verify this is a SpecDriven AI project structure:
!ls -la specs/ 2>/dev/null || echo "No specs directory found"
!find specs/specifications/ -name "*.md" 2>/dev/null | head -5 || echo "No specifications found"

Based on $ARGUMENTS, perform the appropriate footnote operation:

## 1. Find Requirements by ID

If finding requirements (--find):
!grep -r "#{#$footnote_id" specs/specifications/ 2>/dev/null || echo "Footnote ID not found"
!grep -r "authority=" specs/specifications/ | grep "$footnote_id" | head -1

Search for:
- Specification containing the footnote ID
- Authority level (system/platform/developer)
- Context and description
- Related requirements

## 2. Generate Next Available ID

If generating the next ID (--next):
!find specs/specifications/ -name "*.md" -exec grep -o "#{#[a-z]\{3\}[0-9][a-z]" {} \; 2>/dev/null | sort | tail -5

Generate the next footnote ID:
- Extract the component prefix (first 3 characters)
- Find the highest existing sequence number
- Generate the next available ID in the proper format
- Validate the format: ^[a-z]{3}[0-9][a-z]$

## 3. Trace Test Implementations

If tracing implementations (--trace):
!grep -r "$footnote_id" specs/tests/ 2>/dev/null || echo "No tests found for $footnote_id"
!grep -r "$footnote_id" . --exclude-dir=specs --exclude-dir=.git 2>/dev/null | head -5

Trace the requirement implementation:
- Find tests referencing the footnote ID
- Locate code implementing the requirement
- Check traceability links
- Count implementation coverage

## 4. Validate ID Format

If validating format (--validate):
!echo "$footnote_id" | grep -E "^[a-z]{3}[0-9][a-z]$" >/dev/null && echo "Valid format" || echo "Invalid format"

Validate the footnote ID:
- Check format compliance (3 letters + 1 digit + 1 letter)
- Extract the component prefix
- Verify the sequence number
- Confirm the version suffix

## 5. Check Authority Level

If checking authority (--authority):
!grep -r "#{#$footnote_id.*authority=" specs/specifications/ 2>/dev/null | grep -o "authority=[^}]*"

Determine the authority level:
- system: critical system requirements (highest)
- platform: framework/infrastructure requirements (medium)
- developer: application/feature requirements (lowest)

## 6. Dual Coverage Analysis

If checking coverage (--coverage):
!grep -r "$footnote_id" specs/tests/ >/dev/null && echo "✓ Has tests" || echo "✗ No tests"
!python -m pytest specs/tests/ --cov=. -q 2>/dev/null | grep "TOTAL" || echo "Coverage analysis not available"

Analyze dual coverage:
- Specification coverage (tests exist for the requirement)
- Code coverage (tests execute the relevant code)
- Traceability coverage (links between spec, test, and code)

Think step by step about requirement traceability and provide:

1. **Requirement Status**:
   - Specification exists and is well-defined
   - Authority level and compliance requirements
   - Implementation completeness

2. **Test Coverage**:
   - Tests exist for the requirement
   - Test quality and completeness
   - Code coverage achieved by the tests

3. **Traceability Links**:
   - Clear links from specification to tests
   - Code references to the requirement ID
   - End-to-end traceability validation

4. **Recommendations**:
   - Missing tests or implementations
   - Coverage improvement opportunities
   - Traceability enhancement suggestions

Generate a requirement traceability report with coverage metrics and improvement recommendations.

If no specific operation is provided, show the available footnote IDs and their status.
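The ID format and next-ID rules above can be sketched in Python; the helper names and the sample IDs are illustrative, not part of the toolkit:

```python
import re

# Format: 3-letter component prefix + 1-digit sequence + 1-letter version suffix
FOOTNOTE_ID = re.compile(r"^[a-z]{3}[0-9][a-z]$")

def is_valid_footnote_id(footnote_id: str) -> bool:
    """Check format compliance, e.g. 'aut1a' for a hypothetical auth component."""
    return bool(FOOTNOTE_ID.match(footnote_id))

def next_footnote_id(existing: list[str], prefix: str) -> str:
    """Generate the next available ID for a component prefix."""
    seqs = [int(fid[3]) for fid in existing
            if fid.startswith(prefix) and is_valid_footnote_id(fid)]
    return f"{prefix}{max(seqs, default=0) + 1}a"  # new sequences start at version 'a'
```

For example, with existing IDs `aut1a` and `aut2b`, the next ID for the `aut` component would be `aut3a`.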
@@ -0,0 +1,117 @@
---
description: Auto-generate code, tests, and documentation from specifications using AI
tags: [generation, ai, code, tests, documentation, automation]
---

Generate code, tests, and documentation based on the arguments provided in $ARGUMENTS.

First, examine the project structure and identify generation targets:
!find . -name "*.py" -o -name "*.js" -o -name "*.ts" | head -10
!ls -la specs/ 2>/dev/null || echo "No specs directory found"
!find specs/specifications/ -name "*.md" 2>/dev/null | head -5 || echo "No specifications found"

Based on $ARGUMENTS, perform the appropriate code generation:

## 1. Generate Tests from Specifications

If generating tests (--test):
!find specs/specifications/ -name "*.md" -exec grep -l "$spec_id" {} \; 2>/dev/null || echo "Specification not found"
@specs/specifications/$specification_file 2>/dev/null || echo "Unable to read specification"

Generate a test file:
- Extract requirements from the specification
- Create test cases covering all scenarios
- Include proper traceability links to the specification ID
- Follow testing framework conventions (pytest, jest, etc.)
- Add assertion patterns based on specification authority
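As an illustration, a generated pytest-style test with a traceability link might look like the following; the spec ID, file path, and function under test are hypothetical stand-ins:

```python
# Hypothetical generated test for requirement #{#aut1a} (illustrative spec ID)

def slugify(title: str) -> str:
    """Stand-in for the code under test."""
    return title.strip().lower().replace(" ", "-")

def test_slugify_normalizes_title():
    # Traceability: specs/specifications/slugs.md #{#aut1a} authority=developer
    assert slugify("  Hello World ") == "hello-world"
```

Keeping the requirement ID in a comment lets later traceability greps (e.g. xfootnote --trace) find the test.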

## 2. Generate Implementation Code

If generating code (--code):
!find . -name "*test*" | grep "$test_name" | head -1
@$test_file 2>/dev/null || echo "Test file not found"

Generate minimal implementation:
- Analyze test requirements and assertions
- Create minimal code to satisfy all tests
- Follow project coding standards
- Include proper error handling
- Add docstrings and type hints
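For instance, given a test asserting `add_percent(200, 50) == 300.0`, a minimal generated implementation might be (the function name and behavior are hypothetical):

```python
def add_percent(base: float, percent: float) -> float:
    """Increase base by the given percentage.

    Minimal code to satisfy the test, plus basic error handling.
    """
    if percent < 0:
        raise ValueError("percent must be non-negative")
    return base * (1 + percent / 100)
```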

## 3. Generate Schema Definitions

If generating schema (--schema):
!find . -name "*.py" -exec grep -l "$model_name" {} \; | head -3
!python -c "import inspect; print('Python inspection available')" 2>/dev/null || echo "Python not available"

Generate Pydantic schema:
- Extract field definitions from existing models
- Add proper type annotations
- Include validation rules
- Add field descriptions and examples
- Handle relationships and dependencies
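The target output is a Pydantic model; as a dependency-free sketch, the same shape is shown here with stdlib dataclasses, with illustrative field names:

```python
from dataclasses import dataclass, field

@dataclass
class UserSchema:
    """Illustrative schema; a real run would emit a Pydantic BaseModel."""
    name: str                                      # display name, e.g. "Ada"
    email: str                                     # contact address
    tags: list[str] = field(default_factory=list)  # optional labels

    def __post_init__(self) -> None:
        # Validation rules, analogous to Pydantic field validators
        if not self.name:
            raise ValueError("name must not be empty")
        if "@" not in self.email:
            raise ValueError(f"invalid email: {self.email!r}")
```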

## 4. Generate Documentation

If generating documentation (--docs):
!find . -name "*.py" -o -name "*.js" -o -name "*.ts" | xargs grep -l "$component" | head -5
!python -c "import ast; print('AST parsing available')" 2>/dev/null || echo "AST parsing not available"

Generate component documentation:
- Extract docstrings and comments
- Generate API reference documentation
- Create usage examples
- Add parameter and return type documentation
- Include integration examples
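Since the command probes for `ast` above, docstring extraction can be sketched with the stdlib; the sample source is illustrative:

```python
import ast

SAMPLE_SOURCE = '''
def greet(name: str) -> str:
    """Return a greeting for name."""
    return f"Hello, {name}!"
'''

def extract_docstrings(source: str) -> dict[str, str]:
    """Map each function name to its docstring via the AST."""
    return {
        node.name: ast.get_docstring(node) or ""
        for node in ast.walk(ast.parse(source))
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
    }
```

The extracted mapping can then be rendered into an API reference page.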

## 5. Generate Configuration Files

If generating config (--config):
!ls -la | grep -E "(config|env|yml|yaml|json|toml)"
!find . -name "*.example" -o -name "*.template" | head -3

Generate configuration:
- Create environment-specific config files
- Add validation schemas
- Include default values and examples
- Add documentation comments
- Handle sensitive data properly
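A minimal sketch of environment-specific config generation, assuming JSON output and hypothetical setting names; secrets stay out of the generated file as environment-variable references rather than literal values:

```python
import json

DEFAULTS = {"debug": False, "log_level": "INFO", "db_url": "sqlite:///app.db"}

# Per-environment overrides (illustrative)
OVERRIDES = {
    "development": {"debug": True, "log_level": "DEBUG"},
    "production": {"db_url": "${DATABASE_URL}"},  # resolved at deploy time
}

def render_config(env: str) -> str:
    """Merge defaults with environment overrides into a JSON document."""
    merged = {**DEFAULTS, **OVERRIDES.get(env, {})}
    return json.dumps(merged, indent=2)
```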

## 6. Template-Based Generation

Check for existing templates:
!find . -name "*template*" -o -name "*scaffold*" | head -5
!ls -la templates/ 2>/dev/null || echo "No templates directory"

Use templates when available:
- Load appropriate template for generation type
- Replace placeholders with actual values
- Customize based on project structure
- Maintain consistent formatting

Think step by step about the generation requirements and:

1. **Analyze Input**:
   - Understand the specification or test requirements
   - Identify the target output format and structure
   - Determine dependencies and relationships

2. **Generate Content**:
   - Create minimal, focused implementation
   - Follow established patterns and conventions
   - Include proper error handling and validation
   - Add comprehensive documentation

3. **Ensure Quality**:
   - Validate generated code syntax
   - Check compliance with project standards
   - Verify traceability links are maintained
   - Test generated code functionality
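The syntax-validation step above can be sketched with stdlib `ast.parse`:

```python
import ast

def is_valid_python(source: str) -> bool:
    """Syntax-check generated code before writing it to disk."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False
```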

4. **Integration**:
   - Place generated files in appropriate locations
   - Update imports and dependencies
   - Integrate with existing codebase
   - Maintain project structure consistency

Generate high-quality, well-documented code that follows SpecDriven AI principles and integrates seamlessly with the existing project structure.