@torka/claude-workflows 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,194 @@
+ ---
+ description: 'Analyze epic files to identify which epics can run in parallel Git worktrees'
+ ---
+
+ # Epic Parallelization Analysis Command
+
+ You are a project planning analyst. Analyze the provided epic files to identify which **epics** can be worked on in parallel using Git worktrees. Each epic is treated as an atomic unit that will be implemented in a separate work session.
+
+ **Key assumption**: Stories within an epic are handled sequentially in their own worktree session. This analysis focuses on epic-to-epic dependencies only.
+
+ ## Input Handling
+
+ The user may provide one or more of:
+ - Epic files (markdown files with story definitions)
+ - Sprint status YAML file
+ - Epic folder path
+
+ <steps>
+ 1. **Context Detection (Auto)**
+
+    **Always check for sprint-status.yaml:**
+    - Look in `_bmad-output/implementation-artifacts/sprint-status.yaml`
+    - If found, read it to get the current epic/story status
+    - Determine epic-level status: an epic is "done" only if ALL its stories are done
+    - An epic is "in-progress" if ANY story is in-progress
+    - Otherwise the epic is "pending"
+
+    **Check for a previous parallelization plan:**
+    - Glob for `_bmad-output/planning-artifacts/parallelization-analysis-*.md`
+    - If found, read the most recent one
+    - This enables a delta/comparison in the output
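The status rollup described in step 1 can be sketched as follows (a minimal sketch, assuming story statuses arrive as plain strings; the real sprint-status.yaml schema may differ):

```python
def epic_status(story_statuses):
    """Roll story statuses up to an epic-level status.

    done        -> every story is done
    in-progress -> any story is in-progress
    pending     -> otherwise (including an epic with no stories started)
    """
    if story_statuses and all(s == "done" for s in story_statuses):
        return "done"
    if any(s == "in-progress" for s in story_statuses):
        return "in-progress"
    return "pending"
```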
+
+ 2. **Identify and Load Input Files**
+    - If a folder is provided, glob for `epic-*.md` files
+    - Read all provided epic files completely
+    - If no specific input is given, default to `_bmad-output/planning-artifacts/epics/`
+
+ 3. **Parse Epics (Treat as Atomic Units)**
+    For each epic file:
+    - Extract the epic number and title from the filename/header
+    - Parse all `## Story N.M:` sections to understand scope
+    - Capture ALL acceptance criteria across ALL stories (for dependency detection)
+    - Determine epic status from sprint-status (done/in-progress/pending)
+    - If file parsing fails, log a warning and skip the file (don't crash)
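Step 3's heading scan can be sketched with a regular expression over the `## Story N.M:` convention (the epic fragment below is hypothetical):

```python
import re

# Hypothetical epic-file fragment following the `## Story N.M:` convention.
epic_md = """# Epic 2: User Authentication

## Story 2.1: Sign-up form
- AC: user is authenticated after sign-up

## Story 2.2: Password reset
- AC: password reset email is sent
"""

# Capture (epic number, story number, title) for every story heading.
stories = re.findall(r"^## Story (\d+)\.(\d+):\s*(.+)$", epic_md, re.MULTILINE)
# stories -> [('2', '1', 'Sign-up form'), ('2', '2', 'Password reset')]
```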
+
+ 4. **Categorize Epics by Status**
+
+    | Epic Status | Condition | Include in Plan? |
+    |-------------|-----------|------------------|
+    | done | ALL stories done | No (summary only) |
+    | in-progress | ANY story in-progress | Yes (active section) |
+    | pending | No stories started | Yes (pending section) |
+
+ 5. **Analyze Epic-to-Epic Dependencies**
+
+    Scan ALL acceptance criteria within an epic for CAPABILITY REFERENCES:
+
+    | When any story AC mentions... | Epic depends on... | How to find it |
+    |-------------------------------|--------------------|----------------|
+    | "email is sent", "verification email", "password reset email" | Email epic | Epic with *Email* in title |
+    | "user is authenticated", "signed in", "session", "logged in" | Auth epic | Epic with *Auth* in title |
+    | "admin user", "admin role", "admin access" | Admin epic | Epic with *Admin* in title |
+    | "analytics", "track event", "metrics" | Analytics epic | Epic with *Analytics* in title |
+    | "I have verified my email", "email verified" | Auth epic | Epic with *Auth* in title |
+    | "payment", "subscription", "billing" | Payments epic | Epic with *Payment* or *Billing* in title |
+    | "notification", "notify user" | Notifications epic | Epic with *Notification* in title |
+
+    **Key insight**: Match capability names in epic TITLES, not epic numbers. This makes detection project-agnostic.
+
+    **Handle completed dependencies:**
+    - If a dependency epic is already "done", don't block the waiting epics
+    - Mark it as "dependency satisfied"
+
+ 6. **Build Epic Execution Plan for Worktrees**
+    Group the remaining (non-done) epics into phases:
+    - **In Progress**: Epics currently being worked on
+    - **Phase 1**: Foundation epics (no pending dependencies) - can start worktrees in parallel
+    - **Phase 2+**: Epics that depend on earlier phases
+    - **Parallel Groups**: Epics within a phase that can have concurrent worktrees
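The phase grouping in step 6 amounts to topological layering over the epic dependency graph; a sketch (epic names below are illustrative):

```python
def build_phases(deps, done=frozenset()):
    """Group epics into phases by repeatedly taking epics whose
    dependencies are all satisfied (already done or scheduled earlier).

    deps: {epic: set of epics it depends on}
    """
    remaining = {e for e in deps if e not in done}
    satisfied = set(done)
    phases = []
    while remaining:
        phase = sorted(e for e in remaining if deps[e] <= satisfied)
        if not phase:  # dependency cycle: surface it rather than loop forever
            raise ValueError(f"Cyclic dependencies among: {sorted(remaining)}")
        phases.append(phase)
        satisfied |= set(phase)
        remaining -= set(phase)
    return phases
```

Epics in the same phase have no pending dependencies on each other, so each can get its own worktree.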
+
+ 7. **Generate Output**
+    Create a markdown report with:
+    - Context summary (what was detected)
+    - Epic-level progress summary
+    - What's changed since the last analysis (if a prior plan exists)
+    - Worktree execution phases (which epics can run in parallel)
+    - Epic dependency matrix
+    - Recommended worktree strategy
+
+ 8. **Save Report**
+    Write to: `_bmad-output/planning-artifacts/parallelization-analysis-{YYYY-MM-DD-HHmm}.md`
+    (The timestamp prevents same-day collisions)
+ </steps>
+
+ ## Output Template
+
+ Use this structure for the report (the outer fence uses four backticks because the template itself contains a fenced code block):
+
+ ````markdown
+ # Epic Parallelization Analysis
+ Generated: {date}
+
+ ## Context
+ - **Sprint Status**: Found / Not Found
+ - **Previous Plan**: Found ({date}) / Not Found
+ - **Analysis Mode**: Fresh / Incremental
+ - **Parse Warnings**: None / [list of skipped files]
+
+ ## Epic Progress Summary
+ | Status | Epics | Stories | Percentage |
+ |--------|-------|---------|------------|
+ | Completed | X | Y | Z% |
+ | In Progress | X | Y | Z% |
+ | Pending | X | Y | Z% |
+ | **Total** | X | Y | 100% |
+
+ ## What's Changed Since Last Analysis
+ <!-- Only include if a previous plan was found -->
+ - **New Epics**: [list or "None"]
+ - **Completed Epics**: [list of epics that moved to done]
+ - **Status Changes**: [epics that changed status]
+
+ ## Currently In Progress (Active Worktrees)
+ <!-- Epics with any in-progress stories -->
+ | Epic | Title | Stories Done | Blocked By |
+ |------|-------|--------------|------------|
+ | 2 | User Authentication | 2/5 | None |
+
+ ## Worktree Execution Plan
+
+ ### Phase 1: Foundation Epics
+ These epics have no pending dependencies - **start worktrees in parallel**:
+
+ | Epic | Title | Stories | Depends On | Notes |
+ |------|-------|---------|------------|-------|
+ | 1 | Core Infrastructure | 3 | None | Foundation |
+ | 3 | Email System | 4 | None | Independent |
+
+ **Worktree commands:**
+ ```bash
+ git worktree add ../epic-1-core-infrastructure feature/epic-1
+ git worktree add ../epic-3-email-system feature/epic-3
+ ```
+
+ ### Phase 2: Dependent Epics
+ **Requires**: Phase 1 completion (or the specific epics noted)
+
+ | Epic | Title | Stories | Depends On | Can Parallel With |
+ |------|-------|---------|------------|-------------------|
+ | 2 | User Auth | 5 | Epic 1 | Epic 4 |
+ | 4 | Admin Dashboard | 3 | Epic 1 | Epic 2 |
+
+ ### Phase 3+
+ [Continue the pattern for remaining phases...]
+
+ ## Completed Epics
+ <details>
+ <summary>X epics completed (Y stories)</summary>
+
+ | Epic | Title | Stories |
+ |------|-------|---------|
+ | ... | ... | ... |
+
+ </details>
+
+ ## Epic Dependency Matrix
+
+ | Epic | Title | Depends On | Dependency Status |
+ |------|-------|------------|-------------------|
+ | 2 | User Auth | Epic 1 (Infrastructure) | Pending |
+ | 5 | Analytics | Epic 2 (Auth) | Pending |
+
+ ## Worktree Strategy Recommendations
+ - **Max parallel worktrees**: [recommended number based on dependencies]
+ - **Critical path**: Epic X → Epic Y → Epic Z
+ - **Bottleneck epics**: [epics that block the most others]
+ - **Quick wins**: [small epics that can be completed to unblock others]
+ ````
+
+ ## Important Notes
+ - **Epic-level focus**: Treat each epic as an atomic unit for a separate worktree
+ - Stories within an epic are NOT analyzed for cross-epic parallelization
+ - Auto-detect context: never require the user to specify a "mode"
+ - Sprint status is the source of truth for completion (parse slugs: `2-1-foo` → `2.1`)
+ - An epic is "done" only when ALL its stories are done
+ - An epic is "in-progress" if ANY story is in-progress
+ - Done epics are excluded from dependency blocking
+ - Highlight NEW epics when comparing to the prior plan
+ - Match dependencies by CAPABILITY (epic titles like "Email", "Auth"), not epic numbers
+ - Gracefully skip malformed files with warnings
+ - Include worktree commands for easy copy-paste
+ - Identify the critical path and bottleneck epics
+ - Keep output concise but actionable
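The slug convention noted above (`2-1-foo` → story `2.1`) can be parsed with, for example:

```python
import re

def parse_story_slug(slug):
    """Convert a sprint-status slug like '2-1-foo' into a story id like '2.1'."""
    m = re.match(r"^(\d+)-(\d+)-", slug)
    if not m:
        return None
    return f"{m.group(1)}.{m.group(2)}"
```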
@@ -0,0 +1,57 @@
+ {
+   "permissions": {
+     "allow": [
+       "Bash(git status:*)",
+       "Bash(git diff:*)",
+       "Bash(git log:*)",
+       "Bash(git branch:*)",
+       "Bash(git fetch:*)",
+       "Bash(gh pr:*)",
+       "Bash(gh repo:*)",
+       "Bash(npm test:*)",
+       "Bash(npm run:*)",
+       "WebSearch"
+     ],
+     "deny": [],
+     "ask": []
+   },
+   "hooks": {
+     "PreToolUse": [
+       {
+         "matcher": "Bash|Read|Grep|Glob|Write|Edit|MultiEdit",
+         "hooks": [
+           {
+             "type": "command",
+             "command": "python3 .claude/scripts/auto_approve_safe.py"
+           }
+         ]
+       }
+     ],
+     "PostToolUse": [
+       {
+         "matcher": "Edit|MultiEdit",
+         "hooks": [
+           {
+             "type": "command",
+             "command": "if [[ \"$CLAUDE_TOOL_FILE_PATH\" == *.js || \"$CLAUDE_TOOL_FILE_PATH\" == *.ts || \"$CLAUDE_TOOL_FILE_PATH\" == *.jsx || \"$CLAUDE_TOOL_FILE_PATH\" == *.tsx ]]; then npx eslint \"$CLAUDE_TOOL_FILE_PATH\" --fix 2>/dev/null || true; fi"
+           }
+         ]
+       }
+     ],
+     "Stop": [
+       {
+         "matcher": "*",
+         "hooks": [
+           {
+             "type": "command",
+             "command": "if command -v osascript >/dev/null 2>&1; then osascript -e 'display notification \"Claude Code task completed\" with title \"Claude Code\"'; elif command -v notify-send >/dev/null 2>&1; then notify-send 'Claude Code' 'Task completed'; fi"
+           }
+         ]
+       }
+     ]
+   },
+   "statusLine": {
+     "type": "command",
+     "command": "python3 .claude/scripts/context-monitor.py"
+   }
+ }
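For reference, the PreToolUse hook wired above exchanges JSON with the script over stdin/stdout. Judging from the bundled `auto_approve_safe.py`, the hook receives a payload shaped like:

```json
{"tool_name": "Bash", "tool_input": {"command": "git status"}}
```

and replies with a decision shaped like:

```json
{"hookSpecificOutput": {"hookEventName": "PreToolUse", "permissionDecision": "allow", "permissionDecisionReason": "Matches safe allowlist"}}
```

(Field names are taken from the script; treat this as an illustrative sketch of the payloads, not the full event schema.)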
@@ -0,0 +1,261 @@
+ #!/usr/bin/env python3
+ """
+ Claude Code Hook: Auto-approve safe tool usage for solo dev workflows.
+
+ Handles PreToolUse events to:
+ - Auto-allow known-safe commands (read-only, tests, linting)
+ - Deny obviously dangerous commands
+ - Defer everything else to the normal permission system ("ask")
+
+ Install (as wired in this package's settings.json):
+ 1. Save this file to .claude/scripts/auto_approve_safe.py
+ 2. chmod +x .claude/scripts/auto_approve_safe.py
+ 3. Optionally add extra rules in .claude/scripts/auto_approve_safe.rules.json
+ 4. Reference it from the PreToolUse hook in .claude/settings.json
+ """
+
+ import json
+ import os
+ import re
+ import sys
+ from datetime import datetime, timezone
+ from pathlib import Path
+
+ # Max-autonomy default:
+ # - Allow reads/searches
+ # - Allow edits/writes except for sensitive paths
+ # - Allow bash commands only if they match the allowlist (supports simple compound commands)
+ #
+ # Debugging:
+ # Decisions are logged to a local jsonl file; set this to False to disable.
+ ENABLE_DECISION_LOG = True
+
+
+ def load_rules() -> dict:
+     """Load rules from global and project-specific config files."""
+     rules = {"allow_patterns": [], "deny_patterns": [], "sensitive_paths": []}
+
+     # # Load global rules
+     # global_rules_path = Path.home() / ".claude" / "hooks" / "auto_approve_safe.rules.json"
+     # if global_rules_path.exists():
+     #     try:
+     #         with open(global_rules_path) as f:
+     #             global_rules = json.load(f)
+     #         for key in rules:
+     #             rules[key].extend(global_rules.get(key, []))
+     #     except (json.JSONDecodeError, IOError) as e:
+     #         print(f"Warning: Could not load global rules: {e}", file=sys.stderr)
+
+     # Load project-specific rules (merge with global)
+     project_rules_path = Path.cwd() / ".claude" / "scripts" / "auto_approve_safe.rules.json"
+     if project_rules_path.exists():
+         try:
+             with open(project_rules_path) as f:
+                 project_rules = json.load(f)
+             for key in rules:
+                 rules[key].extend(project_rules.get(key, []))
+         except (json.JSONDecodeError, IOError) as e:
+             print(f"Warning: Could not load project rules: {e}", file=sys.stderr)
+
+     return rules
+
+
+ def matches_any_pattern(text: str, patterns: list[str]) -> bool:
+     """Check if text matches any of the given regex patterns."""
+     for pattern in patterns:
+         try:
+             if re.search(pattern, text, re.IGNORECASE):
+                 return True
+         except re.error:
+             continue
+     return False
+
+
+ def check_sensitive_path(file_path: str, sensitive_patterns: list[str]) -> bool:
+     """Check if a file path matches the sensitive path patterns."""
+     if not file_path:
+         return False
+     return matches_any_pattern(file_path, sensitive_patterns)
+
+
+ def split_compound_shell_command(command: str) -> list[str]:
+     """Split a shell command on simple compound operators (heuristic, not a full parser)."""
+     command = (command or "").strip()
+     if not command:
+         return []
+     # Common patterns produced by agents: `cd x && pnpm test`, `cmd1; cmd2`
+     return [p.strip() for p in re.split(r"\s*(?:&&|;)\s*", command) if p.strip()]
+
+
+ def is_shell_file_read_command(command: str) -> bool:
+     """Detect common shell file-read commands that could exfiltrate secrets."""
+     return bool(re.search(r"^\s*(cat|head|tail|less)\b", command or "", re.IGNORECASE))
+
+
+ def summarize_tool_input(tool_name: str, tool_input: dict) -> dict:
+     """Small, reviewable summary for decision logs."""
+     if tool_name == "Bash":
+         return {"command": tool_input.get("command", "")}
+     if tool_name in ("Read", "Write", "Edit", "MultiEdit"):
+         return {"file_path": tool_input.get("file_path", "")}
+     return {"tool_input_keys": list((tool_input or {}).keys())}
+
+
+ def log_decision(tool_name: str, tool_input: dict, decision: str, reason: str) -> None:
+     """Append a decision record to a jsonl file when debugging is enabled."""
+     if not ENABLE_DECISION_LOG:
+         return
+
+     log_path = Path.cwd() / ".claude" / "auto_approve_safe.decisions.jsonl"
+     record = {
+         "ts": datetime.now(timezone.utc).isoformat(),
+         "cwd": str(Path.cwd()),
+         "tool_name": tool_name,
+         "decision": decision,
+         "reason": reason,
+         "input": summarize_tool_input(tool_name, tool_input or {}),
+     }
+
+     try:
+         log_path.parent.mkdir(parents=True, exist_ok=True)
+         with open(log_path, "a", encoding="utf-8") as f:
+             f.write(json.dumps(record, ensure_ascii=False) + "\n")
+     except Exception as e:
+         # Never break tool execution because logging failed.
+         print(f"Warning: Could not write decision log: {e}", file=sys.stderr)
+
+
+ def make_decision(tool_name: str, tool_input: dict, rules: dict) -> tuple[str, str]:
+     """
+     Determine the permission decision for a tool call.
+
+     Returns:
+         tuple: (decision, reason)
+             decision: "allow", "deny", or "ask"
+             reason: Human-readable explanation
+
+     Notes on integration with Claude Code:
+     - "allow" short-circuits Claude Code prompts
+     - "deny" blocks the tool
+     - "ask" defers to Claude Code's built-in permission system
+
+     This file is tuned for maximum autonomy by default, while:
+     - denying obviously dangerous commands
+     - blocking edits to sensitive paths
+     - prompting for reads of sensitive paths
+     """
+     tool_input = tool_input or {}
+
+     # Handle Bash commands
+     if tool_name == "Bash":
+         command = (tool_input.get("command", "") or "").strip()
+         if not command:
+             return "ask", "Empty command"
+
+         segments = split_compound_shell_command(command)
+         if not segments:
+             return "ask", "Empty command"
+
+         # Deny wins if any segment matches a deny pattern.
+         for seg in segments:
+             if matches_any_pattern(seg, rules["deny_patterns"]):
+                 return "deny", "Command matches dangerous pattern"
+
+         # If a segment looks like it could read a file, apply sensitive path checks.
+         # (Prevents silently allowing: `cat .env`, `head ~/.ssh/id_rsa`, etc.)
+         for seg in segments:
+             if is_shell_file_read_command(seg) and matches_any_pattern(seg, rules["sensitive_paths"]):
+                 return "ask", "Bash command may read sensitive data"
+
+         # Max autonomy, but still require an allowlist match per segment.
+         # Add common "glue" patterns that agents use.
+         glue_allow_patterns = [
+             r"^cd\s+\S+(\s+.*)?$",
+             r"^pushd\s+\S+(\s+.*)?$",
+             r"^popd$",
+             r"^export\s+[A-Za-z_][A-Za-z0-9_]*=.*$",
+             r"^(true|false)$",
+         ]
+
+         for seg in segments:
+             if matches_any_pattern(seg, rules["allow_patterns"]):
+                 continue
+             if matches_any_pattern(seg, glue_allow_patterns):
+                 continue
+             return "ask", f"Command not in allowlist: {seg}"
+
+         return "allow", "Matches safe allowlist"
+
+     # Handle Read tool - check for sensitive files
+     if tool_name == "Read":
+         file_path = tool_input.get("file_path", "")
+         if check_sensitive_path(file_path, rules["sensitive_paths"]):
+             return "ask", "File may contain sensitive data"
+         return "allow", "Read operations are generally safe"
+
+     # Handle Grep/Glob - generally safe read-only operations
+     if tool_name in ("Grep", "Glob"):
+         return "allow", "Search operations are read-only"
+
+     # Handle Write/Edit - max autonomy by default; still protect sensitive paths.
+     if tool_name in ("Write", "Edit", "MultiEdit"):
+         file_path = tool_input.get("file_path", "")
+         if check_sensitive_path(file_path, rules["sensitive_paths"]):
+             return "deny", "Cannot modify sensitive files"
+         return "allow", "Write operations are generally safe"
+
+     # Default: defer to the normal permission system
+     return "ask", "Unknown tool, deferring to permission system"
+
+
+ def output_decision(decision: str, reason: str) -> None:
+     """Output the hook decision in Claude Code's expected format."""
+     output = {
+         "hookSpecificOutput": {
+             "hookEventName": "PreToolUse",
+             "permissionDecision": decision,
+             "permissionDecisionReason": reason,
+         }
+     }
+     print(json.dumps(output))
+
+
+ def main():
+     """Main entry point for the hook."""
+     try:
+         # Read input from stdin
+         input_data = sys.stdin.read()
+         if not input_data.strip():
+             output_decision("ask", "No input received")
+             return
+
+         data = json.loads(input_data)
+
+         # Extract tool information
+         tool_name = data.get("tool_name", "")
+         tool_input = data.get("tool_input", {})
+
+         # Load rules
+         rules = load_rules()
+
+         # Make decision
+         decision, reason = make_decision(tool_name, tool_input, rules)
+
+         # Optional debug log
+         log_decision(tool_name, tool_input, decision, reason)
+
+         # Output result
+         output_decision(decision, reason)
+
+     except json.JSONDecodeError as e:
+         print(f"Error parsing input JSON: {e}", file=sys.stderr)
+         output_decision("ask", "Failed to parse input")
+     except Exception as e:
+         print(f"Hook error: {e}", file=sys.stderr)
+         output_decision("ask", f"Hook error: {e}")
+
+
+ if __name__ == "__main__":
+     main()
+
+
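The compound-command splitting heuristic used by the script can be exercised standalone (a self-contained copy of the same regex, for illustration):

```python
import re

def split_compound(command):
    """Split a shell command on `&&` and `;` (same heuristic as the hook above)."""
    command = (command or "").strip()
    if not command:
        return []
    return [p.strip() for p in re.split(r"\s*(?:&&|;)\s*", command) if p.strip()]
```

Each resulting segment is then checked against the deny list, sensitive-path patterns, and allowlist independently, so `cd x && rm -rf y` cannot hide behind an allowed `cd`.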
@@ -0,0 +1,134 @@
+ {
+   "allow_patterns": [
+     "^pwd$",
+     "^whoami$",
+     "^date$",
+     "^uname(\\s+-a)?$",
+     "^which\\s+\\S+$",
+     "^echo\\s+",
+
+     "^ls(\\s+.*)?$",
+     "^cat\\s+",
+     "^head\\s+",
+     "^tail\\s+",
+     "^wc\\s+",
+     "^less\\s+",
+     "^file\\s+",
+     "^stat\\s+",
+     "^du\\s+",
+     "^df\\s+",
+     "^tree(\\s+.*)?$",
+
+     "^python(3)?\\s+--version$",
+     "^node\\s+--version$",
+     "^npm\\s+--version$",
+     "^pnpm\\s+--version$",
+     "^yarn\\s+--version$",
+     "^uv\\s+--version$",
+
+     "^git\\s+(status|diff|log|show|branch|remote|stash\\s+list)(\\s+.*)?$",
+
+     "^pnpm\\s+(test|run\\s+(test|lint|typecheck|type-check|check|build|dev|start)|install|i|add|remove)(\\s+.*)?$",
+     "^npm\\s+(test|run\\s+(test|lint|typecheck|type-check|check|build|dev|start)|install|i|ci)(\\s+.*)?$",
+     "^yarn\\s+(test|lint|typecheck|type-check|check|build|dev|start|install|add|remove)(\\s+.*)?$",
+     "^npx\\s+(tsc|eslint|prettier|vitest|jest)(\\s+.*)?$",
+
+     "^pytest(\\s+.*)?$",
+     "^python(3)?\\s+-m\\s+pytest(\\s+.*)?$",
+     "^uv\\s+run\\s+(pytest|python|ruff|mypy)(\\s+.*)?$",
+     "^ruff\\s+(check|format)(\\s+.*)?$",
+     "^mypy(\\s+.*)?$",
+     "^black\\s+--check(\\s+.*)?$",
+     "^isort\\s+--check(\\s+.*)?$",
+     "^pip\\s+(list|show|freeze)$",
+     "^uv\\s+(pip\\s+list|pip\\s+show|sync|lock)(\\s+.*)?$",
+
+     "^cargo\\s+(check|test|clippy|fmt\\s+--check|build)(\\s+.*)?$",
+     "^go\\s+(test|vet|fmt|build)(\\s+.*)?$",
+
+     "^jq\\s+",
+     "^grep\\s+",
+     "^rg\\s+",
+     "^find\\s+",
+     "^fd\\s+",
+     "^ag\\s+",
+     "^awk\\s+",
+     "^sed\\s+-n\\s+",
+     "^sort(\\s+.*)?$",
+     "^uniq(\\s+.*)?$",
+     "^cut\\s+",
+     "^tr\\s+",
+     "^diff\\s+",
+     "^comm\\s+",
+
+     "^curl\\s+.*--head",
+     "^curl\\s+-I\\s+",
+     "^ping\\s+-c\\s+\\d+\\s+",
+     "^dig\\s+",
+     "^nslookup\\s+",
+     "^host\\s+",
+
+     "^mkdir(\\s+.*)?$",
+     "^touch\\s+",
+     "^cp\\s+",
+     "^mv\\s+",
+
+     "^git\\s+(add|commit|checkout|fetch|pull|push|worktree|merge|rebase|stash\\s+(push|pop|drop|apply)|tag|switch|restore)(\\s+.*)?$",
+
+     "^gh\\s+(pr|issue|repo|release|workflow|run|api)(\\s+.*)?$",
+
+     "^chmod\\s+[0-6][0-7][0-7]\\s+"
+   ],
+
+   "deny_patterns": [
+     "^sudo\\b",
+     "^doas\\b",
+     "\\brm\\s+.*(-r|-rf|-fr|--recursive)",
+     "\\brm\\s+-[^\\s]*r",
+     "^rm\\s+/",
+     "\\bmkfs\\.",
+     "\\bdd\\b.*\\bof=",
+     "\\bshutdown\\b",
+     "\\breboot\\b",
+     "\\bsystemctl\\s+(start|stop|restart|enable|disable)",
+     "\\bchmod\\s+777",
+     "\\bchown\\s+.*:.*\\s+/",
+     ">\\s*/etc/",
+     ">\\s*~/\\.",
+     "\\bcurl\\b.*\\|.*\\b(bash|sh|zsh)\\b",
+     "\\bwget\\b.*\\|.*\\b(bash|sh|zsh)\\b",
+     "\\beval\\s+.*\\$\\(",
+     ":(){ :|:& };:",
+     "\\bfork\\s*bomb",
+     "\\bkill\\s+-9\\s+-1",
+     "\\bpkill\\s+-9",
+     "\\bkillall\\b"
+   ],
+
+   "sensitive_paths": [
+     "\\.env$",
+     "\\.env\\.",
+     "\\.pem$",
+     "\\.key$",
+     "\\.crt$",
+     "\\.p12$",
+     "\\.pfx$",
+     "id_rsa",
+     "id_ed25519",
+     "id_ecdsa",
+     "\\.ssh/",
+     "\\.gnupg/",
+     "\\.git/config$",
+     "\\.gitconfig$",
+     "credentials",
+     "\\.aws/",
+     "\\.gcloud/",
+     "\\.azure/",
+     "\\.npmrc$",
+     "\\.pypirc$",
+     "\\.netrc$",
+     "\\bsecrets?\\b",
+     "\\bpassw",
+     "\\btoken"
+   ]
+ }
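To sanity-check these rules, a couple of the patterns above can be exercised with Python's `re` (patterns copied verbatim from this file; the `matches` helper mirrors the hook's `matches_any_pattern`):

```python
import re

# A small excerpt of the allow/deny patterns defined in this rules file.
ALLOW = [r"^git\s+(status|diff|log|show|branch|remote|stash\s+list)(\s+.*)?$"]
DENY = [r"^sudo\b", r"\brm\s+.*(-r|-rf|-fr|--recursive)"]

def matches(text, patterns):
    """Return True if any pattern matches, case-insensitively (as the hook does)."""
    return any(re.search(p, text, re.IGNORECASE) for p in patterns)
```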