agcel 1.0.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.agent/workflows/api-gen.md +59 -0
- package/.agent/workflows/architect.md +44 -0
- package/.agent/workflows/brainstorm.md +223 -0
- package/.agent/workflows/build.md +38 -0
- package/.agent/workflows/changelog.md +51 -0
- package/.agent/workflows/checkpoint.md +138 -0
- package/.agent/workflows/commit.md +223 -0
- package/.agent/workflows/debug.md +57 -0
- package/.agent/workflows/deploy.md +76 -0
- package/.agent/workflows/doc.md +247 -0
- package/.agent/workflows/execute-plan.md +225 -0
- package/.agent/workflows/feature.md +255 -0
- package/.agent/workflows/fix.md +323 -0
- package/.agent/workflows/help.md +63 -0
- package/.agent/workflows/index.md +148 -0
- package/.agent/workflows/load.md +112 -0
- package/.agent/workflows/mode.md +170 -0
- package/.agent/workflows/optimize.md +53 -0
- package/.agent/workflows/plan.md +337 -0
- package/.agent/workflows/pr.md +74 -0
- package/.agent/workflows/product-plan.md +36 -0
- package/.agent/workflows/production-deploy.md +39 -0
- package/.agent/workflows/refactor.md +63 -0
- package/.agent/workflows/research.md +116 -0
- package/.agent/workflows/review.md +344 -0
- package/.agent/workflows/security-scan.md +56 -0
- package/.agent/workflows/ship.md +221 -0
- package/.agent/workflows/spawn.md +177 -0
- package/.agent/workflows/status.md +59 -0
- package/.agent/workflows/tdd.md +139 -0
- package/.agent/workflows/test.md +340 -0
- package/.agent/workflows/verify.md +35 -0
- package/LICENSE +21 -0
- package/README.md +67 -0
- package/dist/commands/init.js +142 -0
- package/dist/commands/install.js +98 -0
- package/dist/commands/list.js +49 -0
- package/dist/commands/restart.js +17 -0
- package/dist/commands/start.js +41 -0
- package/dist/commands/status.js +24 -0
- package/dist/commands/stop.js +29 -0
- package/dist/commands/uninstall.js +78 -0
- package/dist/index.js +58 -0
- package/dist/server/index.js +174 -0
- package/dist/utils/index.js +63 -0
- package/package.json +54 -0
- package/skills/api-security-best-practices/SKILL.md +291 -0
- package/skills/api-security-best-practices/references/examples.md +617 -0
- package/skills/architecture/SKILL.md +59 -0
- package/skills/architecture/context-discovery.md +43 -0
- package/skills/architecture/examples.md +94 -0
- package/skills/architecture/pattern-selection.md +68 -0
- package/skills/architecture/patterns-reference.md +50 -0
- package/skills/architecture/trade-off-analysis.md +77 -0
- package/skills/aws-serverless/SKILL.md +327 -0
- package/skills/brainstorming/SKILL.md +234 -0
- package/skills/c4-context/SKILL.md +154 -0
- package/skills/ci-cd-engineer/SKILL.md +50 -0
- package/skills/code-auditing/SKILL.md +55 -0
- package/skills/copywriting/SKILL.md +248 -0
- package/skills/database-engineer/SKILL.md +47 -0
- package/skills/doc-coauthoring/SKILL.md +379 -0
- package/skills/docker-expert/SKILL.md +412 -0
- package/skills/langgraph/SKILL.md +291 -0
- package/skills/postgresql/SKILL.md +73 -0
- package/skills/pricing-strategy/SKILL.md +360 -0
- package/skills/product-manager/SKILL.md +57 -0
- package/skills/prompt-engineer/README.md +659 -0
- package/skills/prompt-engineer/SKILL.md +256 -0
- package/skills/python-patterns/SKILL.md +445 -0
- package/skills/qa-engineer/SKILL.md +42 -0
- package/skills/rag-engineer/SKILL.md +94 -0
- package/skills/react-patterns/SKILL.md +202 -0
- package/skills/secure-refactoring/SKILL.md +54 -0
- package/skills/security-documentation/SKILL.md +53 -0
- package/skills/senior-architect/SKILL.md +213 -0
- package/skills/senior-architect/references/architecture_patterns.md +103 -0
- package/skills/senior-architect/references/system_design_workflows.md +103 -0
- package/skills/senior-architect/references/tech_decision_guide.md +103 -0
- package/skills/senior-architect/scripts/architecture_diagram_generator.py +114 -0
- package/skills/senior-architect/scripts/dependency_analyzer.py +114 -0
- package/skills/senior-architect/scripts/project_architect.py +114 -0
- package/skills/seo-audit/SKILL.md +491 -0
- package/skills/sql-injection-testing/SKILL.md +452 -0
- package/skills/test-driven-development/SKILL.md +375 -0
- package/skills/test-driven-development/testing-anti-patterns.md +299 -0
- package/skills/test-fixing/SKILL.md +123 -0
- package/skills/testing-patterns/SKILL.md +263 -0
- package/skills/typescript-expert/SKILL.md +202 -0
- package/skills/typescript-expert/references/advanced-topics.md +252 -0
- package/skills/typescript-expert/references/tsconfig-strict.json +92 -0
- package/skills/typescript-expert/references/typescript-cheatsheet.md +383 -0
- package/skills/typescript-expert/references/utility-types.ts +335 -0
- package/skills/typescript-expert/scripts/ts_diagnostic.py +203 -0
- package/skills/ui-ux-designer/SKILL.md +46 -0
- package/skills/vercel-deployment/SKILL.md +83 -0
- package/skills/vulnerability-scanner/SKILL.md +280 -0
- package/skills/vulnerability-scanner/checklists.md +121 -0
- package/skills/vulnerability-scanner/scripts/security_scan.py +458 -0
- package/skills/writing-plans/SKILL.md +120 -0
package/skills/vulnerability-scanner/checklists.md
@@ -0,0 +1,121 @@
# Security Checklists

> Quick reference checklists for security audits. Use alongside vulnerability-scanner principles.

---

## OWASP Top 10 Audit Checklist

### A01: Broken Access Control
- [ ] Authorization on all protected routes
- [ ] Deny by default
- [ ] Rate limiting implemented
- [ ] CORS properly configured

### A02: Cryptographic Failures
- [ ] Passwords hashed (bcrypt/argon2, cost 12+)
- [ ] Sensitive data encrypted at rest
- [ ] TLS 1.2+ for all connections
- [ ] No secrets in code/logs

### A03: Injection
- [ ] Parameterized queries
- [ ] Input validation on all user data
- [ ] Output encoding for XSS
- [ ] No eval() or dynamic code execution

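The "Parameterized queries" item above is the core defense for A03. A minimal, self-contained sketch of the safe pattern using Python's stdlib `sqlite3`; the table and values are illustrative only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

user_input = "alice' OR '1'='1"  # hostile input

# Vulnerable: string concatenation splices raw input into the SQL text:
#   query = "SELECT * FROM users WHERE name = '" + user_input + "'"

# Safe: the driver binds the value, so the input is treated as data, not SQL
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(rows))  # 0 -- the injection payload matches no user
```

With the concatenated version, the same input would match every row; with binding it matches none.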
### A04: Insecure Design
- [ ] Threat modeling done
- [ ] Security requirements defined
- [ ] Business logic validated

### A05: Security Misconfiguration
- [ ] Unnecessary features disabled
- [ ] Error messages sanitized
- [ ] Security headers configured
- [ ] Default credentials changed

### A06: Vulnerable Components
- [ ] Dependencies up to date
- [ ] No known vulnerabilities
- [ ] Unused dependencies removed

### A07: Authentication Failures
- [ ] MFA available
- [ ] Session invalidation on logout
- [ ] Session timeout implemented
- [ ] Brute force protection

### A08: Integrity Failures
- [ ] Dependency integrity verified
- [ ] CI/CD pipeline secured
- [ ] Update mechanism secured

### A09: Logging Failures
- [ ] Security events logged
- [ ] Logs protected
- [ ] No sensitive data in logs
- [ ] Alerting configured

### A10: SSRF
- [ ] URL validation implemented
- [ ] Allow-list for external calls
- [ ] Network segmentation

---

## Authentication Checklist

- [ ] Strong password policy
- [ ] Account lockout
- [ ] Secure password reset
- [ ] Session management
- [ ] Token expiration
- [ ] Logout invalidation

---

## API Security Checklist

- [ ] Authentication required
- [ ] Authorization per endpoint
- [ ] Input validation
- [ ] Rate limiting
- [ ] Output sanitization
- [ ] Error handling

---

## Data Protection Checklist

- [ ] Encryption at rest
- [ ] Encryption in transit
- [ ] Key management
- [ ] Data minimization
- [ ] Secure deletion

---

## Security Headers

| Header | Purpose |
|--------|---------|
| **Content-Security-Policy** | Prevents XSS |
| **X-Content-Type-Options** | Prevents MIME sniffing |
| **X-Frame-Options** | Prevents clickjacking |
| **Strict-Transport-Security** | Forces HTTPS |
| **Referrer-Policy** | Controls referrer leakage |

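The headers in the table above are typically applied from one place in the response path. A minimal sketch; the values shown are common defaults rather than requirements (the CSP in particular must be tuned per application), and the helper name is illustrative:

```python
# Example values only; tune per application (especially the CSP).
SECURITY_HEADERS = {
    "Content-Security-Policy": "default-src 'self'",  # XSS prevention
    "X-Content-Type-Options": "nosniff",              # blocks MIME sniffing
    "X-Frame-Options": "DENY",                        # blocks framing/clickjacking
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",  # force HTTPS
    "Referrer-Policy": "no-referrer",                 # suppress referrer leakage
}

def apply_security_headers(response_headers: dict) -> dict:
    """Merge the security headers into an outgoing response's header dict,
    keeping any values the application already set."""
    merged = dict(response_headers)
    for name, value in SECURITY_HEADERS.items():
        merged.setdefault(name, value)
    return merged
```

Calling `apply_security_headers({"Content-Type": "text/html"})` returns the original header plus the five defaults.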
---

## Quick Audit Commands

| Check | What to Look For |
|-------|------------------|
| Secrets in code | password, api_key, secret |
| Dangerous patterns | eval, innerHTML, SQL concat |
| Dependency issues | npm audit, snyk |

---

> **Usage:** Copy relevant checklists into your PLAN.md or security report.
package/skills/vulnerability-scanner/scripts/security_scan.py
@@ -0,0 +1,458 @@
#!/usr/bin/env python3
"""
Skill: vulnerability-scanner
Script: security_scan.py
Purpose: Validate that security principles from SKILL.md are applied correctly
Usage: python security_scan.py <project_path> [--scan-type all|deps|secrets|patterns|config]
Output: JSON with validation findings

This script verifies:
1. Dependencies - Supply chain security (OWASP A06)
2. Secrets - No hardcoded credentials (OWASP A02)
3. Code Patterns - Dangerous patterns identified (OWASP A03)
4. Configuration - Security settings validated (OWASP A05)
"""
import subprocess
import json
import os
import sys
import re
import argparse
from pathlib import Path
from typing import Dict, List, Any
from datetime import datetime

# Fix Windows console encoding for Unicode output
try:
    sys.stdout.reconfigure(encoding='utf-8', errors='replace')
    sys.stderr.reconfigure(encoding='utf-8', errors='replace')
except AttributeError:
    pass  # Python < 3.7


# ============================================================================
# CONFIGURATION
# ============================================================================

SECRET_PATTERNS = [
    # API Keys & Tokens
    (r'api[_-]?key\s*[=:]\s*["\'][^"\']{10,}["\']', "API Key", "high"),
    (r'token\s*[=:]\s*["\'][^"\']{10,}["\']', "Token", "high"),
    (r'bearer\s+[a-zA-Z0-9\-_.]+', "Bearer Token", "critical"),

    # Cloud Credentials
    (r'AKIA[0-9A-Z]{16}', "AWS Access Key", "critical"),
    (r'aws[_-]?secret[_-]?access[_-]?key\s*[=:]\s*["\'][^"\']+["\']', "AWS Secret", "critical"),
    (r'AZURE[_-]?[A-Z_]+\s*[=:]\s*["\'][^"\']+["\']', "Azure Credential", "critical"),
    (r'GOOGLE[_-]?[A-Z_]+\s*[=:]\s*["\'][^"\']+["\']', "GCP Credential", "critical"),

    # Database & Connections
    (r'password\s*[=:]\s*["\'][^"\']{4,}["\']', "Password", "high"),
    (r'(mongodb|postgres|mysql|redis):\/\/[^\s"\']+', "Database Connection String", "critical"),

    # Private Keys
    (r'-----BEGIN\s+(RSA|PRIVATE|EC)\s+KEY-----', "Private Key", "critical"),
    (r'ssh-rsa\s+[A-Za-z0-9+/]+', "SSH Key", "critical"),

    # JWT
    (r'eyJ[A-Za-z0-9-_]+\.eyJ[A-Za-z0-9-_]+\.[A-Za-z0-9-_]+', "JWT Token", "high"),
]

DANGEROUS_PATTERNS = [
    # Injection risks
    (r'eval\s*\(', "eval() usage", "critical", "Code Injection risk"),
    (r'exec\s*\(', "exec() usage", "critical", "Code Injection risk"),
    (r'new\s+Function\s*\(', "Function constructor", "high", "Code Injection risk"),
    (r'child_process\.exec\s*\(', "child_process.exec", "high", "Command Injection risk"),
    (r'subprocess\.call\s*\([^)]*shell\s*=\s*True', "subprocess with shell=True", "high", "Command Injection risk"),

    # XSS risks
    (r'dangerouslySetInnerHTML', "dangerouslySetInnerHTML", "high", "XSS risk"),
    (r'\.innerHTML\s*=', "innerHTML assignment", "medium", "XSS risk"),
    (r'document\.write\s*\(', "document.write", "medium", "XSS risk"),

    # SQL Injection indicators
    (r'["\'][^"\']*\+\s*[a-zA-Z_]+\s*\+\s*["\'].*(?:SELECT|INSERT|UPDATE|DELETE)', "SQL String Concat", "critical", "SQL Injection risk"),
    (r'f"[^"]*(?:SELECT|INSERT|UPDATE|DELETE)[^"]*\{', "SQL f-string", "critical", "SQL Injection risk"),

    # Insecure configurations
    (r'verify\s*=\s*False', "SSL Verify Disabled", "high", "MITM risk"),
    (r'--insecure', "Insecure flag", "medium", "Security disabled"),
    (r'disable[_-]?ssl', "SSL Disabled", "high", "MITM risk"),

    # Unsafe deserialization
    (r'pickle\.loads?\s*\(', "pickle usage", "high", "Deserialization risk"),
    (r'yaml\.load\s*\([^)]*\)(?!\s*,\s*Loader)', "Unsafe YAML load", "high", "Deserialization risk"),
]

SKIP_DIRS = {'node_modules', '.git', 'dist', 'build', '__pycache__', '.venv', 'venv', '.next'}
CODE_EXTENSIONS = {'.js', '.ts', '.jsx', '.tsx', '.py', '.go', '.java', '.rb', '.php'}
CONFIG_EXTENSIONS = {'.json', '.yaml', '.yml', '.toml', '.env', '.env.local', '.env.development'}


# ============================================================================
# SCANNING FUNCTIONS
# ============================================================================

def scan_dependencies(project_path: str) -> Dict[str, Any]:
    """
    Validate supply chain security (OWASP A06).
    Checks: npm audit, lock file presence, dependency age.
    """
    results = {"tool": "dependency_scanner", "findings": [], "status": "[OK] Secure"}

    # Check for lock files
    lock_files = {
        "npm": ["package-lock.json", "npm-shrinkwrap.json"],
        "yarn": ["yarn.lock"],
        "pnpm": ["pnpm-lock.yaml"],
        "pip": ["requirements.txt", "Pipfile.lock", "poetry.lock"],
    }

    found_locks = []
    missing_locks = []

    for manager, files in lock_files.items():
        pkg_file = "package.json" if manager in ["npm", "yarn", "pnpm"] else "setup.py"
        pkg_path = Path(project_path) / pkg_file

        if pkg_path.exists() or (manager == "pip" and (Path(project_path) / "requirements.txt").exists()):
            has_lock = any((Path(project_path) / f).exists() for f in files)
            if has_lock:
                found_locks.append(manager)
            else:
                missing_locks.append(manager)
                results["findings"].append({
                    "type": "Missing Lock File",
                    "severity": "high",
                    "message": f"{manager}: No lock file found. Supply chain integrity at risk."
                })

    # Run npm audit if applicable
    if (Path(project_path) / "package.json").exists():
        try:
            result = subprocess.run(
                ["npm", "audit", "--json"],
                cwd=project_path,
                capture_output=True,
                text=True,
                timeout=60
            )

            try:
                audit_data = json.loads(result.stdout)
                vulnerabilities = audit_data.get("vulnerabilities", {})

                severity_count = {"critical": 0, "high": 0, "moderate": 0, "low": 0}
                for vuln in vulnerabilities.values():
                    sev = vuln.get("severity", "low").lower()
                    if sev in severity_count:
                        severity_count[sev] += 1

                if severity_count["critical"] > 0:
                    results["status"] = "[!!] Critical vulnerabilities"
                    results["findings"].append({
                        "type": "npm audit",
                        "severity": "critical",
                        "message": f"{severity_count['critical']} critical vulnerabilities in dependencies"
                    })
                elif severity_count["high"] > 0:
                    results["status"] = "[!] High vulnerabilities"
                    results["findings"].append({
                        "type": "npm audit",
                        "severity": "high",
                        "message": f"{severity_count['high']} high severity vulnerabilities"
                    })

                results["npm_audit"] = severity_count

            except json.JSONDecodeError:
                pass

        except (FileNotFoundError, subprocess.TimeoutExpired):
            pass

    if not results["findings"]:
        results["status"] = "[OK] Supply chain checks passed"

    return results


def scan_secrets(project_path: str) -> Dict[str, Any]:
    """
    Validate no hardcoded secrets (OWASP A02).
    Checks: API keys, tokens, passwords, cloud credentials.
    """
    results = {
        "tool": "secret_scanner",
        "findings": [],
        "status": "[OK] No secrets detected",
        "scanned_files": 0,
        "by_severity": {"critical": 0, "high": 0, "medium": 0}
    }

    for root, dirs, files in os.walk(project_path):
        dirs[:] = [d for d in dirs if d not in SKIP_DIRS]

        for file in files:
            ext = Path(file).suffix.lower()
            if ext not in CODE_EXTENSIONS and ext not in CONFIG_EXTENSIONS:
                continue

            filepath = Path(root) / file
            results["scanned_files"] += 1

            try:
                with open(filepath, 'r', encoding='utf-8', errors='ignore') as f:
                    content = f.read()

                for pattern, secret_type, severity in SECRET_PATTERNS:
                    matches = re.findall(pattern, content, re.IGNORECASE)
                    if matches:
                        results["findings"].append({
                            "file": str(filepath.relative_to(project_path)),
                            "type": secret_type,
                            "severity": severity,
                            "count": len(matches)
                        })
                        results["by_severity"][severity] += len(matches)

            except Exception:
                pass

    if results["by_severity"]["critical"] > 0:
        results["status"] = "[!!] CRITICAL: Secrets exposed!"
    elif results["by_severity"]["high"] > 0:
        results["status"] = "[!] HIGH: Secrets found"
    elif sum(results["by_severity"].values()) > 0:
        results["status"] = "[?] Potential secrets detected"

    # Limit findings for output
    results["findings"] = results["findings"][:15]

    return results


def scan_code_patterns(project_path: str) -> Dict[str, Any]:
    """
    Validate dangerous code patterns (OWASP A03).
    Checks: Injection risks, XSS, unsafe deserialization.
    """
    results = {
        "tool": "pattern_scanner",
        "findings": [],
        "status": "[OK] No dangerous patterns",
        "scanned_files": 0,
        "by_category": {}
    }

    for root, dirs, files in os.walk(project_path):
        dirs[:] = [d for d in dirs if d not in SKIP_DIRS]

        for file in files:
            ext = Path(file).suffix.lower()
            if ext not in CODE_EXTENSIONS:
                continue

            filepath = Path(root) / file
            results["scanned_files"] += 1

            try:
                with open(filepath, 'r', encoding='utf-8', errors='ignore') as f:
                    lines = f.readlines()

                for line_num, line in enumerate(lines, 1):
                    for pattern, name, severity, category in DANGEROUS_PATTERNS:
                        if re.search(pattern, line, re.IGNORECASE):
                            results["findings"].append({
                                "file": str(filepath.relative_to(project_path)),
                                "line": line_num,
                                "pattern": name,
                                "severity": severity,
                                "category": category,
                                "snippet": line.strip()[:80]
                            })
                            results["by_category"][category] = results["by_category"].get(category, 0) + 1

            except Exception:
                pass

    critical_count = sum(1 for f in results["findings"] if f["severity"] == "critical")
    high_count = sum(1 for f in results["findings"] if f["severity"] == "high")

    if critical_count > 0:
        results["status"] = f"[!!] CRITICAL: {critical_count} dangerous patterns"
    elif high_count > 0:
        results["status"] = f"[!] HIGH: {high_count} risky patterns"
    elif results["findings"]:
        results["status"] = "[?] Some patterns need review"

    # Limit findings
    results["findings"] = results["findings"][:20]

    return results


def scan_configuration(project_path: str) -> Dict[str, Any]:
    """
    Validate security configuration (OWASP A05).
    Checks: Security headers, CORS, debug modes.
    """
    results = {
        "tool": "config_scanner",
        "findings": [],
        "status": "[OK] Configuration secure",
        "checks": {}
    }

    # Check common config files for issues
    config_issues = [
        (r'"DEBUG"\s*:\s*true', "Debug mode enabled", "high"),
        (r'debug\s*=\s*True', "Debug mode enabled", "high"),
        (r'NODE_ENV.*development', "Development mode in config", "medium"),
        (r'"CORS_ALLOW_ALL".*true', "CORS allow all origins", "high"),
        (r'"Access-Control-Allow-Origin".*\*', "CORS wildcard", "high"),
        (r'allowCredentials.*true.*origin.*\*', "Dangerous CORS combo", "critical"),
    ]

    for root, dirs, files in os.walk(project_path):
        dirs[:] = [d for d in dirs if d not in SKIP_DIRS]

        for file in files:
            ext = Path(file).suffix.lower()
            if ext not in CONFIG_EXTENSIONS and file not in ['next.config.js', 'webpack.config.js', '.eslintrc.js']:
                continue

            filepath = Path(root) / file

            try:
                with open(filepath, 'r', encoding='utf-8', errors='ignore') as f:
                    content = f.read()

                for pattern, issue, severity in config_issues:
                    if re.search(pattern, content, re.IGNORECASE):
                        results["findings"].append({
                            "file": str(filepath.relative_to(project_path)),
                            "issue": issue,
                            "severity": severity
                        })

            except Exception:
                pass

    # Check for security header configurations
    header_files = ["next.config.js", "next.config.mjs", "middleware.ts", "nginx.conf"]
    for hf in header_files:
        hf_path = Path(project_path) / hf
        if hf_path.exists():
            results["checks"]["security_headers_config"] = True
            break
    else:
        results["checks"]["security_headers_config"] = False
        results["findings"].append({
            "issue": "No security headers configuration found",
            "severity": "medium",
            "recommendation": "Configure CSP, HSTS, X-Frame-Options headers"
        })

    if any(f["severity"] == "critical" for f in results["findings"]):
        results["status"] = "[!!] CRITICAL: Configuration issues"
    elif any(f["severity"] == "high" for f in results["findings"]):
        results["status"] = "[!] HIGH: Configuration review needed"
    elif results["findings"]:
        results["status"] = "[?] Minor configuration issues"

    return results


# ============================================================================
# MAIN
# ============================================================================

def run_full_scan(project_path: str, scan_type: str = "all") -> Dict[str, Any]:
    """Execute security validation scans."""

    report = {
        "project": project_path,
        "timestamp": datetime.now().isoformat(),
        "scan_type": scan_type,
        "scans": {},
        "summary": {
            "total_findings": 0,
            "critical": 0,
            "high": 0,
            "overall_status": "[OK] SECURE"
        }
    }

    scanners = {
        "deps": ("dependencies", scan_dependencies),
        "secrets": ("secrets", scan_secrets),
        "patterns": ("code_patterns", scan_code_patterns),
        "config": ("configuration", scan_configuration),
    }

    for key, (name, scanner) in scanners.items():
        if scan_type == "all" or scan_type == key:
            result = scanner(project_path)
            report["scans"][name] = result

            findings_count = len(result.get("findings", []))
            report["summary"]["total_findings"] += findings_count

            for finding in result.get("findings", []):
                sev = finding.get("severity", "low")
                if sev == "critical":
                    report["summary"]["critical"] += 1
                elif sev == "high":
                    report["summary"]["high"] += 1

    # Determine overall status
    if report["summary"]["critical"] > 0:
        report["summary"]["overall_status"] = "[!!] CRITICAL ISSUES FOUND"
    elif report["summary"]["high"] > 0:
        report["summary"]["overall_status"] = "[!] HIGH RISK ISSUES"
    elif report["summary"]["total_findings"] > 0:
        report["summary"]["overall_status"] = "[?] REVIEW RECOMMENDED"

    return report


def main():
    parser = argparse.ArgumentParser(
        description="Validate security principles from vulnerability-scanner skill"
    )
    parser.add_argument("project_path", nargs="?", default=".", help="Project directory to scan")
    parser.add_argument("--scan-type", choices=["all", "deps", "secrets", "patterns", "config"],
                        default="all", help="Type of scan to run")
    parser.add_argument("--output", choices=["json", "summary"], default="json",
                        help="Output format")

    args = parser.parse_args()

    if not os.path.isdir(args.project_path):
        print(json.dumps({"error": f"Directory not found: {args.project_path}"}))
        sys.exit(1)

    result = run_full_scan(args.project_path, args.scan_type)

    if args.output == "summary":
        print(f"\n{'='*60}")
        print(f"Security Scan: {result['project']}")
        print(f"{'='*60}")
        print(f"Status: {result['summary']['overall_status']}")
        print(f"Total Findings: {result['summary']['total_findings']}")
        print(f"  Critical: {result['summary']['critical']}")
        print(f"  High: {result['summary']['high']}")
        print(f"{'='*60}\n")

        for scan_name, scan_result in result['scans'].items():
            print(f"\n{scan_name.upper()}: {scan_result['status']}")
            for finding in scan_result.get('findings', [])[:5]:
                print(f"  - {finding}")
    else:
        print(json.dumps(result, indent=2))


if __name__ == "__main__":
    main()
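The "AWS Access Key" entry in `SECRET_PATTERNS` can be exercised in isolation to see what the scanner flags. The sample key below is the well-known placeholder from AWS documentation, not a live credential:

```python
import re

# Same pattern as the "AWS Access Key" entry in SECRET_PATTERNS above.
AWS_KEY_PATTERN = r'AKIA[0-9A-Z]{16}'

sample = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"  # AWS docs example key'
clean = 'aws_access_key_id comes from the environment, not source code'

print(bool(re.search(AWS_KEY_PATTERN, sample)))  # True  -- would be reported
print(bool(re.search(AWS_KEY_PATTERN, clean)))   # False -- nothing to flag
```

The prefix `AKIA` plus exactly 16 uppercase alphanumerics is the shape of an AWS access key ID, which is why this pattern is rated critical.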
package/skills/writing-plans/SKILL.md
@@ -0,0 +1,120 @@
---
name: writing-plans
description: Use when you have a spec or requirements for a multi-step task, before touching code
---

# Writing Plans

## Overview

Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, the code, the tests, any docs they might need to check, and how to test it. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.

Assume they are a skilled developer, but know almost nothing about our toolset or problem domain. Assume they don't know good test design very well.

**Announce at start:** "I'm using the writing-plans skill to create the implementation plan."

**Context:** This should be run in a dedicated worktree (created by the brainstorming skill).

**Save plans to:** `docs/plans/YYYY-MM-DD-<feature-name>.md`

## Bite-Sized Task Granularity

**Each step is one action (2-5 minutes):**
- "Write the failing test" - step
- "Run it to make sure it fails" - step
- "Implement the minimal code to make the test pass" - step
- "Run the tests and make sure they pass" - step
- "Commit" - step

## Plan Document Header

**Every plan MUST start with this header:**

```markdown
# [Feature Name] Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** [One sentence describing what this builds]

**Architecture:** [2-3 sentences about approach]

**Tech Stack:** [Key technologies/libraries]

---
```

## Task Structure

```markdown
### Task N: [Component Name]

**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`

**Step 1: Write the failing test**

```python
def test_specific_behavior():
    result = function(input)
    assert result == expected
```

**Step 2: Run test to verify it fails**

Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"

**Step 3: Write minimal implementation**

```python
def function(input):
    return expected
```

**Step 4: Run test to verify it passes**

Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS

**Step 5: Commit**

```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```
```

## Remember
- Exact file paths always
- Complete code in plan (not "add validation")
- Exact commands with expected output
- Reference relevant skills with @ syntax
- DRY, YAGNI, TDD, frequent commits

## Execution Handoff

After saving the plan, offer execution choice:

**"Plan complete and saved to `docs/plans/<filename>.md`. Two execution options:**

**1. Subagent-Driven (this session)** - I dispatch a fresh subagent per task, review between tasks, fast iteration

**2. Parallel Session (separate)** - Open a new session with executing-plans, batch execution with checkpoints

**Which approach?"**

**If Subagent-Driven chosen:**
- **REQUIRED SUB-SKILL:** Use superpowers:subagent-driven-development
- Stay in this session
- Fresh subagent per task + code review

**If Parallel Session chosen:**
- Guide them to open a new session in the worktree
- **REQUIRED SUB-SKILL:** New session uses superpowers:executing-plans

## Gap Analysis Rule
Always identify gaps and suggest next steps to users. If no gaps remain, clearly state that none are left.