awesome-slash 2.4.2 → 2.4.4

package/CHANGELOG.md CHANGED
@@ -7,9 +7,41 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
  ## [Unreleased]
 
+ ## [2.4.4] - 2026-01-18
+
+ ### Added
+ - **PR Auto-Review Process** - Added mandatory workflow for 4 auto-reviewers (Copilot, Claude, Gemini, Codex)
+ - **Agent Responsibilities** - Documented required tools and MUST-CALL agents for /next-task and /ship
+ - **CLAUDE.md Enhancement** - Comprehensive agent workflow documentation with tool restrictions
+
+ ### Changed
+ - Updated ci-monitor.md with 4-reviewer process details
+ - Updated ship-ci-review-loop.md with PR auto-review section
+
+ ## [2.4.3] - 2026-01-18
+
+ ### Added
+ - **CLAUDE.md** - Project guidelines with release process and PR auto-review workflow
+ - **npm installation option** - Added npm as primary installation method to INSTALLATION.md
+
+ ### Fixed
+ - **Documentation sync** - Fixed outdated references across all documentation:
+   - Fixed plugin install commands in adapters/README.md (`deslop-around` → `awesome-slash`)
+   - Updated phase counts in CROSS_PLATFORM.md (`17-phase` → `13/12-phase`)
+   - Completed agent list in CROSS_PLATFORM.md (8 → 18 agents)
+   - Updated version references throughout docs
+
+ ### Changed
+ - Reorganized INSTALLATION.md with npm as Option 1 (Recommended)
+
  ## [2.4.2] - 2026-01-18
 
  ### Fixed
+ - **Security**: Addressed 32 technical debt issues from multi-agent review (#84)
+   - Fixed command injection vulnerabilities in context-optimizer.js
+   - Addressed path traversal risks in workflow-state.js
+   - Enhanced input validation across core libraries
+   - Added 255 new tests (total: 180 → 435 tests)
  - **Renamed package** from `awsome-slash` to `awesome-slash` (fixed typo)
    - Updated all internal references, repository URLs, and environment variable names
 
package/README.md CHANGED
@@ -6,17 +6,17 @@ A cross-platform plugin providing powerful, zero-configuration slash commands fo
 
  [![npm](https://img.shields.io/npm/v/awesome-slash?color=red)](https://www.npmjs.com/package/awesome-slash)
  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
- [![Version](https://img.shields.io/badge/version-2.4.2-blue)](https://github.com/avifenesh/awesome-slash/releases)
+ [![Version](https://img.shields.io/badge/version-2.4.4-blue)](https://github.com/avifenesh/awesome-slash/releases)
  [![GitHub stars](https://img.shields.io/github/stars/awesome-slash?style=flat&color=yellow)](https://github.com/avifenesh/awesome-slash/stargazers)
  [![Claude Code](https://img.shields.io/badge/Claude-Code%20Plugin-blue)](https://docs.anthropic.com/en/docs/claude-code)
  [![Codex CLI](https://img.shields.io/badge/Codex-CLI%20Compatible-green)](https://developers.openai.com/codex/cli)
  [![OpenCode](https://img.shields.io/badge/OpenCode-Compatible-orange)](https://opencode.ai)
 
- ## What's New in v2.4.0
+ ## What's New in v2.4.4
 
- - **Reality Check Plugin** - Deep repository analysis to detect plan drift and gaps
- - **Multi-Agent Parallel Scanning** - Issue scanner, doc analyzer, code explorer run simultaneously
- - **Prioritized Reconstruction Plans** - Automated drift detection with priority-weighted action items
+ - **PR Auto-Review Process** - Added mandatory workflow for 4 auto-reviewers (Copilot, Claude, Gemini, Codex)
+ - **Agent Responsibilities** - Documented required tools and MUST-CALL agents for /next-task and /ship
+ - **CLAUDE.md Enhancement** - Comprehensive agent workflow documentation
 
  ---
 
@@ -96,7 +96,7 @@ Complete task-to-production automation with state management and resume capabili
  **Features:**
  - **Fully autonomous** after plan approval - no human in the loop
  - Resume capability with `.claude/.workflow-state.json`
- - 14 specialist agents with model optimization (opus/sonnet)
+ - 18 specialist agents with model optimization (opus/sonnet)
  - Quality gates: deslop-work, test-coverage-checker, delivery-validator, docs-updater
  - SubagentStop hooks for automatic workflow transitions
  - Policy-based stopping points (pr-created, merged, deployed, production)
@@ -274,7 +274,7 @@ Workflows persist state in `.claude/.workflow-state.json`:
  }
  ```
 
- ### Specialist Agents (12 Total)
+ ### Specialist Agents (18 Total)
 
  **Core Workflow (Opus - Complex Tasks):**
  | Agent | Purpose |
@@ -300,6 +300,14 @@ Workflows persist state in `.claude/.workflow-state.json`:
  | worktree-manager | Create isolated worktrees |
  | ci-monitor | Monitor CI/PR status with sleep loops |
 
+ **Reality Check (Sonnet + Opus - Plan Drift Detection):**
+ | Agent | Purpose |
+ |-------|---------|
+ | issue-scanner | Analyze GitHub issues, PRs, milestones |
+ | doc-analyzer | Examine documentation for plans and roadmaps |
+ | code-explorer | Deep codebase structure analysis |
+ | plan-synthesizer | Combine findings into prioritized plan (opus) |
+
  ---
 
  ## Repository Structure
@@ -311,11 +319,12 @@ awesome-slash/
  ├── plugins/
  │   ├── next-task/          # Master workflow orchestrator
  │   │   ├── commands/       # next-task, update-docs-around, delivery-approval
- │   │   ├── agents/         # 14 specialist agents
+ │   │   ├── agents/         # 18 specialist agents
  │   │   └── hooks/          # SubagentStop hooks for workflow automation
  │   ├── ship/               # PR workflow
  │   ├── deslop-around/      # AI slop cleanup
- │   └── project-review/     # Multi-agent review
+ │   ├── project-review/     # Multi-agent review
+ │   └── reality-check/      # Plan drift detection
  ├── lib/
  │   ├── state/              # Workflow state management
  │   ├── platform/           # Auto-detection
package/adapters/README.md CHANGED
@@ -8,7 +8,7 @@ This directory contains adapters for using awesome-slash commands with different
  The primary target. Install via marketplace:
  ```bash
  claude plugin marketplace add avifenesh/awesome-slash
- claude plugin install deslop-around@awesome-slash
+ claude plugin install awesome-slash@awesome-slash
  ```
 
  See main [README.md](../README.md) for details.
@@ -130,7 +130,7 @@ Installers automatically handle these substitutions.
  ```bash
  # Via marketplace (easiest)
  claude plugin marketplace add avifenesh/awesome-slash
- claude plugin install deslop-around@awesome-slash
+ claude plugin install awesome-slash@awesome-slash
  ```
 
  **Pros:**
package/lib/index.js ADDED
@@ -0,0 +1,127 @@
+ /**
+  * Awesome-Slash Core Library
+  *
+  * Unified entry point for all core library modules.
+  * Provides platform detection, pattern matching, workflow state management,
+  * and context optimization utilities.
+  *
+  * @module awesome-slash/lib
+  * @author Avi Fenesh
+  * @license MIT
+  */
+
+ const detectPlatform = require('./platform/detect-platform');
+ const verifyTools = require('./platform/verify-tools');
+ const reviewPatterns = require('./patterns/review-patterns');
+ const slopPatterns = require('./patterns/slop-patterns');
+ const workflowState = require('./state/workflow-state');
+ const contextOptimizer = require('./utils/context-optimizer');
+
+ /**
+  * Platform detection and verification utilities
+  */
+ const platform = {
+   /**
+    * Detect project platform configuration
+    * @see module:platform/detect-platform
+    */
+   detect: detectPlatform.detect,
+   detectAsync: detectPlatform.detectAsync,
+   detectCI: detectPlatform.detectCI,
+   detectDeployment: detectPlatform.detectDeployment,
+   detectProjectType: detectPlatform.detectProjectType,
+   detectPackageManager: detectPlatform.detectPackageManager,
+   detectBranchStrategy: detectPlatform.detectBranchStrategy,
+   detectMainBranch: detectPlatform.detectMainBranch,
+   invalidateCache: detectPlatform.invalidateCache,
+
+   /**
+    * Verify tool availability
+    * @see module:platform/verify-tools
+    */
+   verifyTools: verifyTools.verify,
+   verifyToolsAsync: verifyTools.verifyAsync,
+   checkTool: verifyTools.checkTool,
+   checkToolAsync: verifyTools.checkToolAsync,
+   TOOL_DEFINITIONS: verifyTools.TOOL_DEFINITIONS
+ };
+
+ /**
+  * Code pattern matching utilities
+  */
+ const patterns = {
+   /**
+    * Review patterns for code quality analysis
+    * @see module:patterns/review-patterns
+    */
+   review: reviewPatterns,
+
+   /**
+    * Slop patterns for AI-generated code detection
+    * @see module:patterns/slop-patterns
+    */
+   slop: slopPatterns
+ };
+
+ /**
+  * Workflow state management
+  * @see module:state/workflow-state
+  */
+ const state = {
+   // Constants
+   SCHEMA_VERSION: workflowState.SCHEMA_VERSION,
+   PHASES: workflowState.PHASES,
+   DEFAULT_POLICY: workflowState.DEFAULT_POLICY,
+
+   // Core functions
+   generateWorkflowId: workflowState.generateWorkflowId,
+   getStatePath: workflowState.getStatePath,
+   ensureStateDir: workflowState.ensureStateDir,
+
+   // CRUD operations
+   createState: workflowState.createState,
+   readState: workflowState.readState,
+   writeState: workflowState.writeState,
+   updateState: workflowState.updateState,
+   deleteState: workflowState.deleteState,
+
+   // Phase management
+   startPhase: workflowState.startPhase,
+   completePhase: workflowState.completePhase,
+   failPhase: workflowState.failPhase,
+   skipToPhase: workflowState.skipToPhase,
+
+   // Workflow lifecycle
+   completeWorkflow: workflowState.completeWorkflow,
+   abortWorkflow: workflowState.abortWorkflow,
+   hasActiveWorkflow: workflowState.hasActiveWorkflow,
+   getWorkflowSummary: workflowState.getWorkflowSummary,
+
+   // Agent management
+   updateAgentResult: workflowState.updateAgentResult,
+   incrementIteration: workflowState.incrementIteration
+ };
+
+ /**
+  * Git command optimization utilities
+  * @see module:utils/context-optimizer
+  */
+ const utils = {
+   contextOptimizer
+ };
+
+ // Main exports
+ module.exports = {
+   platform,
+   patterns,
+   state,
+   utils,
+
+   // Direct module access for backward compatibility
+   detectPlatform,
+   verifyTools,
+   reviewPatterns,
+   slopPatterns,
+   workflowState,
+   contextOptimizer
+ };
package/lib/patterns/slop-patterns.js CHANGED
@@ -24,8 +24,14 @@ function deepFreeze(obj) {
  const MAX_PATTERN_CACHE_SIZE = 50;
  const _compiledExcludePatterns = new Map();
 
+ /**
+  * Maximum allowed wildcards in a glob pattern to prevent ReDoS
+  */
+ const MAX_GLOB_WILDCARDS = 10;
+
  /**
   * Get a compiled regex for an exclude pattern (cached)
+  * Uses safe regex construction to prevent catastrophic backtracking
   * @param {string} pattern - Glob pattern to compile
   * @returns {RegExp} Compiled regex
   */
@@ -36,9 +42,27 @@ function getCompiledPattern(pattern) {
      const firstKey = _compiledExcludePatterns.keys().next().value;
      _compiledExcludePatterns.delete(firstKey);
    }
-   // Escape all regex metacharacters except *, then replace * with .*
+
+   // Count wildcards to prevent overly complex patterns
+   const wildcardCount = (pattern.match(/\*/g) || []).length;
+   if (wildcardCount > MAX_GLOB_WILDCARDS) {
+     // Too many wildcards - use a safe fallback that matches nothing dangerous
+     _compiledExcludePatterns.set(pattern, /^$/);
+     return _compiledExcludePatterns.get(pattern);
+   }
+
+   // Escape all regex metacharacters except *
    const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, '\\$&');
-   const regexStr = '^' + escaped.replace(/\*/g, '.*') + '$';
+
+   // Convert glob patterns to regex:
+   // - Both * and ** use .* for backward compatibility (patterns match anywhere in path)
+   // - ReDoS protection is provided by MAX_GLOB_WILDCARDS limit above
+   let regexStr = escaped
+     .replace(/\*\*/g, '\0GLOBSTAR\0') // Temporarily mark globstar
+     .replace(/\*/g, '.*')             // Single star: match anything (backward compatible)
+     .replace(/\0GLOBSTAR\0/g, '.*');  // Globstar: match anything
+
+   regexStr = '^' + regexStr + '$';
    _compiledExcludePatterns.set(pattern, new RegExp(regexStr));
  }
  return _compiledExcludePatterns.get(pattern);
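For context, the capped glob-to-regex compilation above can be sketched as a standalone function. This is illustrative only: `MAX_WILDCARDS` and `globToRegex` are local names, not the library's exports, and the cache layer is omitted.

```javascript
// Sketch of the glob compilation from the patch: cap wildcard count first
// (ReDoS guard), then escape metacharacters and map * / ** to '.*'.
const MAX_WILDCARDS = 10;

function globToRegex(pattern) {
  // Reject overly complex patterns up front; /^$/ matches only the empty string.
  const wildcardCount = (pattern.match(/\*/g) || []).length;
  if (wildcardCount > MAX_WILDCARDS) return /^$/;

  // Escape regex metacharacters except '*'.
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, '\\$&');

  // Mark '**' so it is not double-expanded, then turn both forms into '.*'.
  const body = escaped
    .replace(/\*\*/g, '\0GLOBSTAR\0')
    .replace(/\*/g, '.*')
    .replace(/\0GLOBSTAR\0/g, '.*');

  return new RegExp('^' + body + '$');
}
```

The wildcard cap is a blunt but effective guard: since every `*` becomes `.*`, the number of stars bounds the amount of backtracking the engine can do.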
@@ -240,9 +264,12 @@ const slopPatterns = {
 
  /**
   * Hardcoded credentials patterns (expanded for comprehensive detection)
+  * Excludes common false positives:
+  * - Template placeholders: ${VAR}, {{VAR}}, <VAR>
+  * - Masked/example values: xxxxxxxx, ********
   */
  hardcoded_secrets: {
-   pattern: /(password|secret|api[_-]?key|token|credential|auth)[_-]?(key|token|secret|pass)?\s*[:=]\s*["'`][^"'`\s]{8,}["'`]/i,
+   pattern: /(password|secret|api[_-]?key|token|credential|auth)[_-]?(key|token|secret|pass)?\s*[:=]\s*["'`](?!\$\{)(?!\{\{)(?!<[A-Z_])(?![x*#]{8,})(?![X*#]{8,})[^"'`\s]{8,}["'`]/i,
    exclude: ['*.test.*', '*.spec.*', '*.example.*', '*.sample.*', 'README.*', '*.md'],
    severity: 'critical',
    autoFix: 'flag',
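To see what the tightened detector now skips, the new regex can be exercised directly (copied from the diff above; `hardcodedSecret` is just a local name for this sketch):

```javascript
// Regex from the patched hardcoded_secrets rule. The added negative lookaheads
// reject template placeholders (${VAR}, {{VAR}}, <VAR>) and masked values
// (runs of x, *, or #) before the 8+ character quoted-value match runs.
const hardcodedSecret = /(password|secret|api[_-]?key|token|credential|auth)[_-]?(key|token|secret|pass)?\s*[:=]\s*["'`](?!\$\{)(?!\{\{)(?!<[A-Z_])(?![x*#]{8,})(?![X*#]{8,})[^"'`\s]{8,}["'`]/i;

hardcodedSecret.test('const password = "hunter2hunter2"'); // a real secret shape: flagged
hardcodedSecret.test('password = "${DB_PASSWORD}"');       // template placeholder: skipped
hardcodedSecret.test('token: "xxxxxxxxxxxx"');             // masked example value: skipped
```

Because the lookaheads sit right after the opening quote, they veto the match before the expensive `[^"'`\s]{8,}` run ever starts.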
package/lib/platform/detect-platform.js CHANGED
@@ -17,6 +17,68 @@ const { promisify } = require('util');
  const execAsync = promisify(exec);
  const fsPromises = fs.promises;
 
+ /**
+  * Default timeout for async operations (5 seconds)
+  */
+ const DEFAULT_ASYNC_TIMEOUT_MS = 5000;
+
+ /**
+  * Maximum JSON file size to parse (1MB) - prevents DoS via large files
+  */
+ const MAX_JSON_SIZE_BYTES = 1024 * 1024;
+
+ /**
+  * Safely parse JSON content with size limit
+  * @param {string} content - JSON string to parse
+  * @param {string} filename - Filename for error messages
+  * @returns {Object|null} Parsed object or null if invalid/too large
+  */
+ function safeJSONParse(content, filename = 'unknown') {
+   if (!content || typeof content !== 'string') {
+     return null;
+   }
+   if (content.length > MAX_JSON_SIZE_BYTES) {
+     // File too large - skip parsing to prevent DoS
+     return null;
+   }
+   try {
+     return JSON.parse(content);
+   } catch {
+     return null;
+   }
+ }
+
+ /**
+  * Wrap a promise with a timeout
+  * @param {Promise} promise - Promise to wrap
+  * @param {number} timeoutMs - Timeout in milliseconds
+  * @param {string} operation - Operation name for error message
+  * @returns {Promise} Promise that rejects on timeout
+  */
+ function withTimeout(promise, timeoutMs = DEFAULT_ASYNC_TIMEOUT_MS, operation = 'operation') {
+   let timeoutId;
+   const timeoutPromise = new Promise((_, reject) => {
+     timeoutId = setTimeout(() => {
+       reject(new Error(`${operation} timed out after ${timeoutMs}ms`));
+     }, timeoutMs);
+   });
+
+   return Promise.race([promise, timeoutPromise]).finally(() => {
+     clearTimeout(timeoutId);
+   });
+ }
+
+ /**
+  * Execute a command with timeout protection
+  * @param {string} cmd - Command to execute
+  * @param {Object} options - exec options
+  * @param {number} timeoutMs - Timeout in milliseconds
+  * @returns {Promise<{stdout: string, stderr: string}>}
+  */
+ async function execAsyncWithTimeout(cmd, options = {}, timeoutMs = DEFAULT_ASYNC_TIMEOUT_MS) {
+   return withTimeout(execAsync(cmd, options), timeoutMs, `exec: ${cmd.substring(0, 50)}`);
+ }
+
  // Detection cache for performance (platform rarely changes during session)
  let _cachedDetection = null;
  let _cacheExpiry = 0;
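The `safeJSONParse` helper added above is small enough to restate as a self-contained sketch (constant and name mirror the patch; this is illustrative, not the shipped module):

```javascript
// Size-capped, throw-free JSON parsing: oversized or malformed input yields
// null instead of an exception, so callers can treat "bad config" and
// "missing config" identically.
const MAX_JSON_SIZE_BYTES = 1024 * 1024; // 1MB cap, as in the patch

function safeJSONParse(content) {
  if (!content || typeof content !== 'string') return null; // reject non-strings
  if (content.length > MAX_JSON_SIZE_BYTES) return null;    // DoS guard: skip huge inputs
  try {
    return JSON.parse(content);
  } catch {
    return null; // malformed JSON is not fatal
  }
}
```

One nuance worth knowing: `content.length` counts UTF-16 code units, not bytes, so the cap is approximate for multi-byte text; that is fine for a DoS guard.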
@@ -25,18 +87,22 @@ const CACHE_TTL_MS = 60000; // 1 minute cache
  // File read cache to avoid reading the same file multiple times (#17)
  // Limited to prevent unbounded memory growth in long-running processes
  const MAX_CACHE_SIZE = 100;
+ const MAX_CACHED_FILE_SIZE = 64 * 1024; // 64KB max per cached file
  const _fileCache = new Map();
  const _existsCache = new Map();
 
  /**
-  * Enforce cache size limits using FIFO eviction
+  * Enforce cache size limits using O(1) FIFO eviction
+  * Uses Map's insertion order guarantee for efficient eviction
   * @param {Map} cache - Cache to limit
   * @param {number} maxSize - Maximum entries
   */
  function enforceMaxCacheSize(cache, maxSize = MAX_CACHE_SIZE) {
-   if (cache.size > maxSize) {
-     const keysToDelete = Array.from(cache.keys()).slice(0, cache.size - maxSize);
-     keysToDelete.forEach(key => cache.delete(key));
+   // O(1) eviction: delete oldest entries one at a time
+   // Map maintains insertion order, so first key is oldest
+   while (cache.size > maxSize) {
+     const firstKey = cache.keys().next().value;
+     cache.delete(firstKey);
    }
  }
 
@@ -78,6 +144,7 @@ async function existsCachedAsync(filepath) {
 
  /**
   * Read file contents (cached)
+  * Only caches files smaller than MAX_CACHED_FILE_SIZE to prevent memory bloat
   * @param {string} filepath - Path to read
   * @returns {string|null}
   */
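The eviction rewrite above leans on a JavaScript guarantee worth making explicit: `Map` iterates keys in insertion order, so the first key is always the oldest entry. A minimal demonstration:

```javascript
// FIFO eviction via Map insertion order, as in the patched enforceMaxCacheSize:
// deleting the first key repeatedly drops the oldest entries without building
// an intermediate array of keys.
function enforceMaxCacheSize(cache, maxSize) {
  while (cache.size > maxSize) {
    const oldestKey = cache.keys().next().value; // first-inserted key
    cache.delete(oldestKey);
  }
}

const cache = new Map([['a', 1], ['b', 2], ['c', 3], ['d', 4]]);
enforceMaxCacheSize(cache, 2); // evicts 'a' and 'b', keeps 'c' and 'd'
```

Note this is FIFO, not LRU: `Map.prototype.set` on an existing key keeps its original position, so re-reading a cached file does not protect it from eviction.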
@@ -87,10 +154,14 @@ function readFileCached(filepath) {
  }
  try {
    const content = fs.readFileSync(filepath, 'utf8');
-   _fileCache.set(filepath, content);
-   enforceMaxCacheSize(_fileCache);
+   // Only cache small files to prevent memory bloat
+   if (content.length <= MAX_CACHED_FILE_SIZE) {
+     _fileCache.set(filepath, content);
+     enforceMaxCacheSize(_fileCache);
+   }
    return content;
  } catch {
+   // Cache null for missing files (small memory footprint)
    _fileCache.set(filepath, null);
    enforceMaxCacheSize(_fileCache);
    return null;
99
170
 
100
171
  /**
101
172
  * Read file contents (cached, async)
173
+ * Only caches files smaller than MAX_CACHED_FILE_SIZE to prevent memory bloat
102
174
  * @param {string} filepath - Path to read
103
175
  * @returns {Promise<string|null>}
104
176
  */
@@ -108,10 +180,14 @@ async function readFileCachedAsync(filepath) {
  }
  try {
    const content = await fsPromises.readFile(filepath, 'utf8');
-   _fileCache.set(filepath, content);
-   enforceMaxCacheSize(_fileCache);
+   // Only cache small files to prevent memory bloat
+   if (content.length <= MAX_CACHED_FILE_SIZE) {
+     _fileCache.set(filepath, content);
+     enforceMaxCacheSize(_fileCache);
+   }
    return content;
  } catch {
+   // Cache null for missing files (small memory footprint)
    _fileCache.set(filepath, null);
    enforceMaxCacheSize(_fileCache);
    return null;
@@ -298,7 +374,7 @@ function detectBranchStrategy() {
  try {
    const content = readFileCached('railway.json');
    if (content) {
-     const config = JSON.parse(content);
+     const config = safeJSONParse(content, 'railway.json');
      // Validate JSON structure before accessing properties
      if (config &&
          typeof config === 'object' &&
@@ -323,10 +399,10 @@ function detectBranchStrategy() {
   */
  async function detectBranchStrategyAsync() {
    try {
-     // Run git commands in parallel
+     // Run git commands in parallel with timeout protection
      const [localResult, remoteResult] = await Promise.all([
-       execAsync('git branch', { encoding: 'utf8' }).catch(() => ({ stdout: '' })),
-       execAsync('git branch -r', { encoding: 'utf8' }).catch(() => ({ stdout: '' }))
+       execAsyncWithTimeout('git branch', { encoding: 'utf8' }).catch(() => ({ stdout: '' })),
+       execAsyncWithTimeout('git branch -r', { encoding: 'utf8' }).catch(() => ({ stdout: '' }))
      ]);
 
      const allBranches = (localResult.stdout || '') + (remoteResult.stdout || '');
@@ -343,7 +419,7 @@ async function detectBranchStrategyAsync() {
  try {
    const content = await readFileCachedAsync('railway.json');
    if (content) {
-     const config = JSON.parse(content);
+     const config = safeJSONParse(content, 'railway.json');
      if (config &&
          typeof config === 'object' &&
          typeof config.environments === 'object' &&
@@ -394,12 +470,12 @@ function detectMainBranch() {
   */
  async function detectMainBranchAsync() {
    try {
-     const { stdout } = await execAsync('git symbolic-ref refs/remotes/origin/HEAD', { encoding: 'utf8' });
+     const { stdout } = await execAsyncWithTimeout('git symbolic-ref refs/remotes/origin/HEAD', { encoding: 'utf8' });
      return stdout.trim().replace('refs/remotes/origin/', '');
    } catch {
      // Fallback: check common names
      try {
-       await execAsync('git rev-parse --verify main', { encoding: 'utf8' });
+       await execAsyncWithTimeout('git rev-parse --verify main', { encoding: 'utf8' });
        return 'main';
      } catch {
        return 'master';
package/lib/state/workflow-state.js CHANGED
@@ -13,6 +13,118 @@ const SCHEMA_VERSION = '2.0.0';
  const STATE_DIR = '.claude';
  const STATE_FILE = 'workflow-state.json';
 
+ /**
+  * State cache configuration
+  */
+ const STATE_CACHE_TTL_MS = 200; // Cache TTL for rapid successive reads
+ const _stateCache = new Map(); // Cache keyed by resolved base directory
+
+ /**
+  * Get cached state if valid
+  * @param {string} cacheKey - Cache key (resolved base path)
+  * @returns {Object|null} Cached state or null if expired/missing
+  */
+ function getCachedState(cacheKey) {
+   const cached = _stateCache.get(cacheKey);
+   if (cached && Date.now() < cached.expiry) {
+     return cached.state;
+   }
+   return null;
+ }
+
+ /**
+  * Set state cache
+  * @param {string} cacheKey - Cache key (resolved base path)
+  * @param {Object} state - State to cache
+  */
+ function setCachedState(cacheKey, state) {
+   _stateCache.set(cacheKey, {
+     state,
+     expiry: Date.now() + STATE_CACHE_TTL_MS
+   });
+ }
+
+ /**
+  * Invalidate state cache for a directory
+  * @param {string} cacheKey - Cache key (resolved base path)
+  */
+ function invalidateStateCache(cacheKey) {
+   _stateCache.delete(cacheKey);
+ }
+
+ /**
+  * Clear all state caches (useful for testing)
+  */
+ function clearAllStateCaches() {
+   _stateCache.clear();
+ }
+
+ /**
+  * Validate and normalize base directory path to prevent path traversal
+  * @param {string} baseDir - Base directory path
+  * @returns {string} Validated absolute path
+  * @throws {Error} If path is invalid or potentially dangerous
+  */
+ function validateBasePath(baseDir) {
+   if (typeof baseDir !== 'string' || baseDir.length === 0) {
+     throw new Error('Base directory must be a non-empty string');
+   }
+
+   // Resolve to absolute path
+   const resolvedPath = path.resolve(baseDir);
+
+   // Check for null bytes (path traversal via null byte injection)
+   if (resolvedPath.includes('\0')) {
+     throw new Error('Path contains invalid null byte');
+   }
+
+   // Ensure the path exists and is a directory
+   try {
+     const stats = fs.statSync(resolvedPath);
+     if (!stats.isDirectory()) {
+       throw new Error('Path is not a directory');
+     }
+   } catch (error) {
+     if (error.code === 'ENOENT') {
+       // Directory doesn't exist yet - that's OK, it will be created
+       // But the parent must exist and be a directory
+       const parentDir = path.dirname(resolvedPath);
+       try {
+         const parentStats = fs.statSync(parentDir);
+         if (!parentStats.isDirectory()) {
+           throw new Error('Parent path is not a directory');
+         }
+       } catch (parentError) {
+         if (parentError.code === 'ENOENT') {
+           throw new Error('Parent directory does not exist');
+         }
+         throw parentError;
+       }
+     } else if (error.message) {
+       throw error;
+     }
+   }
+
+   return resolvedPath;
+ }
+
+ /**
+  * Validate that the final state path is within the base directory
+  * @param {string} statePath - The full state file path
+  * @param {string} baseDir - The validated base directory
+  * @throws {Error} If path traversal is detected
+  */
+ function validateStatePathWithinBase(statePath, baseDir) {
+   const resolvedStatePath = path.resolve(statePath);
+   const resolvedBaseDir = path.resolve(baseDir);
+
+   // Ensure state path is within base directory
+   if (!resolvedStatePath.startsWith(resolvedBaseDir + path.sep) &&
+       resolvedStatePath !== resolvedBaseDir) {
+     throw new Error('Path traversal detected: state path is outside base directory');
+   }
+ }
+
  const PHASES = [
    'policy-selection',
    'task-discovery',
@@ -34,6 +146,27 @@ const PHASES = [
    'complete'
  ];
 
+ // Pre-computed phase index map for O(1) lookup (vs O(n) array indexOf)
+ const PHASE_INDEX = new Map(PHASES.map((phase, index) => [phase, index]));
+
+ /**
+  * Check if a phase name is valid (O(1) lookup)
+  * @param {string} phaseName - Phase to check
+  * @returns {boolean} True if valid phase
+  */
+ function isValidPhase(phaseName) {
+   return PHASE_INDEX.has(phaseName);
+ }
+
+ /**
+  * Get the index of a phase (O(1) lookup)
+  * @param {string} phaseName - Phase name
+  * @returns {number} Phase index or -1 if invalid
+  */
+ function getPhaseIndex(phaseName) {
+   return PHASE_INDEX.has(phaseName) ? PHASE_INDEX.get(phaseName) : -1;
+ }
+
  const DEFAULT_POLICY = {
    taskSource: 'gh-issues',
    priorityFilter: 'continue',
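The `PHASE_INDEX` addition in one picture: build the name-to-index `Map` once, after which every validity check and ordering comparison is an O(1) lookup instead of an O(n) `PHASES.indexOf` scan. The phase list here is abbreviated for illustration:

```javascript
// Precomputed name→index Map, mirroring the PHASE_INDEX addition above.
// The phase names are an illustrative subset of the real PHASES array.
const PHASES = ['policy-selection', 'task-discovery', 'implementation', 'complete'];
const PHASE_INDEX = new Map(PHASES.map((phase, index) => [phase, index]));

const isValidPhase = (name) => PHASE_INDEX.has(name);
const getPhaseIndex = (name) => (PHASE_INDEX.has(name) ? PHASE_INDEX.get(name) : -1);
```

Keeping `-1` as the "not found" sentinel preserves the `indexOf` contract, so existing comparisons like `currentIndex < targetIndex` behave unchanged.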
@@ -57,20 +190,33 @@ function generateWorkflowId() {
  }
 
  /**
-  * Get the state file path
+  * Get the state file path with validation
   * @param {string} [baseDir=process.cwd()] - Base directory
   * @returns {string} Full path to state file
+  * @throws {Error} If path validation fails
   */
  function getStatePath(baseDir = process.cwd()) {
-   return path.join(baseDir, STATE_DIR, STATE_FILE);
+   const validatedBase = validateBasePath(baseDir);
+   const statePath = path.join(validatedBase, STATE_DIR, STATE_FILE);
+
+   // Verify the state path is still within the base directory
+   validateStatePathWithinBase(statePath, validatedBase);
+
+   return statePath;
  }
 
  /**
-  * Ensure state directory exists
+  * Ensure state directory exists with validation
   * @param {string} [baseDir=process.cwd()] - Base directory
+  * @throws {Error} If path validation fails
   */
  function ensureStateDir(baseDir = process.cwd()) {
-   const stateDir = path.join(baseDir, STATE_DIR);
+   const validatedBase = validateBasePath(baseDir);
+   const stateDir = path.join(validatedBase, STATE_DIR);
+
+   // Verify the state dir path is still within the base directory
+   validateStatePathWithinBase(stateDir, validatedBase);
+
    if (!fs.existsSync(stateDir)) {
      fs.mkdirSync(stateDir, { recursive: true });
    }
@@ -122,12 +268,24 @@ function createState(type = 'next-task', policy = {}) {
  }
 
  /**
-  * Read workflow state from file
+  * Read workflow state from file (with caching for rapid successive reads)
   * @param {string} [baseDir=process.cwd()] - Base directory
+  * @param {Object} [options={}] - Options
+  * @param {boolean} [options.skipCache=false] - Skip cache and read from file
   * @returns {Object|Error|null} Workflow state, Error if corrupted, or null if not found
   */
- function readState(baseDir = process.cwd()) {
+ function readState(baseDir = process.cwd(), options = {}) {
    const statePath = getStatePath(baseDir);
+   const cacheKey = path.resolve(baseDir);
+
+   // Check cache first (unless skipCache is true)
+   if (!options.skipCache) {
+     const cached = getCachedState(cacheKey);
+     if (cached !== null) {
+       // Return a deep copy to prevent mutations affecting cache
+       return JSON.parse(JSON.stringify(cached));
+     }
+   }
 
    if (!fs.existsSync(statePath)) {
      return null;
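Why the cached path returns `JSON.parse(JSON.stringify(cached))` rather than the cached object itself: handing out the cached reference would let a caller's mutation silently rewrite what every later cached read sees. A JSON round-trip is a cheap deep copy for JSON-shaped state like this:

```javascript
// Deep-copy-on-read: the caller gets an independent object, so mutating the
// copy leaves the cached state untouched. Illustrative state shape only.
const cachedState = { phases: { current: 'task-discovery', history: [] } };

const copy = JSON.parse(JSON.stringify(cachedState)); // deep copy
copy.phases.current = 'complete'; // caller mutates its copy...
// ...while cachedState.phases.current still reads 'task-discovery'
```

On Node 17+, `structuredClone` is an alternative that also handles non-JSON values, but the JSON round-trip is sufficient for state that is persisted as JSON anyway.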
@@ -143,6 +301,9 @@ function readState(baseDir = process.cwd()) {
      // Future: Add migration logic here
    }
 
+   // Cache the state
+   setCachedState(cacheKey, state);
+
    return state;
  } catch (error) {
    const corrupted = new Error(`Corrupted workflow state: ${error.message}`);
@@ -154,7 +315,7 @@ function readState(baseDir = process.cwd()) {
  }
 
  /**
-  * Write workflow state to file
+  * Write workflow state to file (invalidates cache)
   * @param {Object} state - Workflow state
   * @param {string} [baseDir=process.cwd()] - Base directory
   * @returns {boolean} Success status
@@ -162,6 +323,7 @@ function readState(baseDir = process.cwd()) {
  function writeState(state, baseDir = process.cwd()) {
    ensureStateDir(baseDir);
    const statePath = getStatePath(baseDir);
+   const cacheKey = path.resolve(baseDir);
 
    try {
      // Update timestamp
@@ -169,8 +331,14 @@ function writeState(state, baseDir = process.cwd()) {
 
      const content = JSON.stringify(state, null, 2);
      fs.writeFileSync(statePath, content, 'utf8');
+
+     // Update cache with new state
+     setCachedState(cacheKey, state);
+
      return true;
    } catch (error) {
+     // Invalidate cache on write error
+     invalidateStateCache(cacheKey);
      console.error(`Error writing state: ${error.message}`);
      return false;
    }
@@ -205,12 +373,24 @@ function updateState(updates, baseDir = process.cwd()) {
  }
 
  /**
-  * Deep merge two objects (with prototype pollution protection)
+  * Maximum recursion depth for deepMerge to prevent stack overflow attacks
+  */
+ const MAX_MERGE_DEPTH = 50;
+
+ /**
+  * Deep merge two objects (with prototype pollution and stack overflow protection)
   * @param {Object} target - Target object
   * @param {Object} source - Source object
+  * @param {number} [depth=0] - Current recursion depth (internal)
   * @returns {Object} Merged object
+  * @throws {Error} If recursion depth exceeds MAX_MERGE_DEPTH
   */
- function deepMerge(target, source) {
+ function deepMerge(target, source, depth = 0) {
+   // Protect against stack overflow from deeply nested objects
+   if (depth > MAX_MERGE_DEPTH) {
+     throw new Error(`Maximum merge depth (${MAX_MERGE_DEPTH}) exceeded - possible circular reference or attack`);
+   }
+
    // Handle null/undefined cases
    if (!source || typeof source !== 'object') return target;
    if (!target || typeof target !== 'object') return source;
@@ -234,9 +414,9 @@ function deepMerge(target, source) {
    else if (sourceVal === null) {
      result[key] = null;
    }
-   // Recursively merge plain objects
+   // Recursively merge plain objects (with depth tracking)
    else if (sourceVal && typeof sourceVal === 'object' && !Array.isArray(sourceVal)) {
-     result[key] = deepMerge(targetVal || {}, sourceVal);
+     result[key] = deepMerge(targetVal || {}, sourceVal, depth + 1);
    }
    // Replace arrays and primitives
    else {
@@ -254,7 +434,7 @@ function deepMerge(target, source) {
254
434
  * @returns {Object|null} Updated state or null on error
255
435
  */
256
436
  function startPhase(phaseName, baseDir = process.cwd()) {
257
- if (!PHASES.includes(phaseName)) {
437
+ if (!isValidPhase(phaseName)) {
258
438
  console.error(`Invalid phase: ${phaseName}`);
259
439
  return null;
260
440
  }
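The swap from `PHASES.includes`/`PHASES.indexOf` to `isValidPhase`/`getPhaseIndex` turns repeated O(n) array scans into O(1) lookups. The helper definitions are not shown in this diff, so the sketch below is an assumption about how the exported `PHASE_INDEX` is presumably built; the phase names are placeholders:

```javascript
// Hypothetical sketch: precompute phase positions once at module load
// so validity checks and index lookups avoid scanning the array.
const PHASES = ['plan', 'implement', 'review', 'ship']; // placeholder names

// name -> index map, built once
const PHASE_INDEX = new Map(PHASES.map((name, i) => [name, i]));

// O(1) membership check, replacing PHASES.includes(name)
function isValidPhase(name) {
  return PHASE_INDEX.has(name);
}

// O(1) position lookup, replacing PHASES.indexOf(name); -1 when unknown
function getPhaseIndex(name) {
  return PHASE_INDEX.has(name) ? PHASE_INDEX.get(name) : -1;
}

console.log(isValidPhase('review'));   // true
console.log(getPhaseIndex('ship'));    // 3
console.log(getPhaseIndex('unknown')); // -1
```

With a handful of phases the constant factor barely matters; the win is that call sites such as `startPhase` and `skipToPhase` no longer pay a scan per call.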
@@ -324,7 +504,7 @@ function completePhase(result = {}, baseDir = process.cwd()) {
   if (!state) return null;
 
   const history = finalizePhaseEntry(state, 'completed', result);
-  const currentIndex = PHASES.indexOf(state.phases.current);
+  const currentIndex = getPhaseIndex(state.phases.current);
   const nextPhase = currentIndex < PHASES.length - 1 ? PHASES[currentIndex + 1] : 'complete';
 
   return updateState({
@@ -369,7 +549,7 @@ function failPhase(reason, context = {}, baseDir = process.cwd()) {
  * @returns {Object|null} Updated state or null on error
  */
 function skipToPhase(phaseName, reason = 'manual skip', baseDir = process.cwd()) {
-  if (!PHASES.includes(phaseName)) {
+  if (!isValidPhase(phaseName)) {
     console.error(`Invalid phase: ${phaseName}`);
     return null;
   }
@@ -381,8 +561,8 @@ function skipToPhase(phaseName, reason = 'manual skip', baseDir = process.cwd())
   }
   if (!state) return null;
 
-  const currentIndex = PHASES.indexOf(state.phases.current);
-  const targetIndex = PHASES.indexOf(phaseName);
+  const currentIndex = getPhaseIndex(state.phases.current);
+  const targetIndex = getPhaseIndex(phaseName);
 
   // Add skipped entries for phases we're jumping over
   const history = [...(state.phases.history || [])];
@@ -603,7 +783,13 @@ module.exports = {
   // Constants
   SCHEMA_VERSION,
   PHASES,
+  PHASE_INDEX,
   DEFAULT_POLICY,
+  MAX_MERGE_DEPTH,
+
+  // Phase helpers (O(1) lookup)
+  isValidPhase,
+  getPhaseIndex,
 
   // Core functions
   generateWorkflowId,
@@ -631,5 +817,19 @@ module.exports = {
 
   // Agent management
   updateAgentResult,
-  incrementIteration
+  incrementIteration,
+
+  // Cache management
+  clearAllStateCaches,
+
+  // Internal functions for testing
+  _internal: {
+    validateBasePath,
+    validateStatePathWithinBase,
+    deepMerge,
+    getCachedState,
+    setCachedState,
+    invalidateStateCache,
+    STATE_CACHE_TTL_MS
+  }
 };
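The depth guard added to `deepMerge` can be exercised in isolation. The sketch below is a stripped-down copy of the merge logic from this diff; the prototype-pollution checks the real function also performs are omitted for brevity:

```javascript
// Minimal sketch of the depth-guarded merge pattern from the diff above.
const MAX_MERGE_DEPTH = 50;

function deepMerge(target, source, depth = 0) {
  // Refuse pathologically deep input instead of risking a stack overflow
  if (depth > MAX_MERGE_DEPTH) {
    throw new Error(`Maximum merge depth (${MAX_MERGE_DEPTH}) exceeded`);
  }
  if (!source || typeof source !== 'object') return target;
  if (!target || typeof target !== 'object') return source;
  const result = { ...target };
  for (const key of Object.keys(source)) {
    const sourceVal = source[key];
    if (sourceVal && typeof sourceVal === 'object' && !Array.isArray(sourceVal)) {
      // Each level of nesting increments depth
      result[key] = deepMerge(result[key] || {}, sourceVal, depth + 1);
    } else {
      result[key] = sourceVal;
    }
  }
  return result;
}

// An object nested more than 50 levels deep trips the guard
let nested = { v: 1 };
for (let i = 0; i < 60; i++) nested = { child: nested };
try {
  deepMerge({}, nested);
} catch (err) {
  console.log(err.message); // "Maximum merge depth (50) exceeded"
}
```

A fixed cap of 50 is far deeper than any legitimate workflow state, so ordinary merges are unaffected while crafted or accidentally circular inputs fail fast.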
@@ -45,6 +45,79 @@ function sanitizeExtension(ext) {
   return safe || 'ts';
 }
 
+/**
+ * Validate git branch name to prevent command injection
+ * @param {string} branch - Branch name to validate
+ * @returns {string} Validated branch name
+ * @throws {Error} If branch name contains invalid characters
+ */
+function validateBranchName(branch) {
+  if (typeof branch !== 'string' || branch.length === 0) {
+    throw new Error('Branch name must be a non-empty string');
+  }
+  if (branch.length > 255) {
+    throw new Error('Branch name too long (max 255 characters)');
+  }
+  // Allow alphanumeric, underscore, hyphen, forward slash, and dot
+  if (!/^[a-zA-Z0-9/_.-]+$/.test(branch)) {
+    throw new Error('Branch name contains invalid characters');
+  }
+  // Prevent git option injection
+  if (branch.startsWith('-')) {
+    throw new Error('Branch name cannot start with hyphen');
+  }
+  return branch;
+}
+
+/**
+ * Validate git reference to prevent command injection
+ * @param {string} ref - Git reference to validate
+ * @returns {string} Validated reference
+ * @throws {Error} If reference contains invalid characters
+ */
+function validateGitRef(ref) {
+  if (typeof ref !== 'string' || ref.length === 0) {
+    throw new Error('Git reference must be a non-empty string');
+  }
+  if (ref.length > 255) {
+    throw new Error('Git reference too long (max 255 characters)');
+  }
+  // Allow alphanumeric, tilde, caret, dot, hyphen, underscore, forward slash
+  if (!/^[a-zA-Z0-9~^._/-]+$/.test(ref)) {
+    throw new Error('Git reference contains invalid characters');
+  }
+  // Prevent git option injection
+  if (ref.startsWith('-')) {
+    throw new Error('Git reference cannot start with hyphen');
+  }
+  return ref;
+}
+
+/**
+ * Validate numeric limit parameter
+ * @param {number} limit - Limit value to validate
+ * @param {number} max - Maximum allowed value (default: 1000)
+ * @returns {number} Validated limit
+ * @throws {Error} If limit is invalid
+ */
+function validateLimit(limit, max = 1000) {
+  // Strict type check - must be number or numeric string
+  if (typeof limit === 'string') {
+    // Only allow pure numeric strings
+    if (!/^\d+$/.test(limit)) {
+      throw new Error('Limit must be a positive integer');
+    }
+  }
+  const num = typeof limit === 'number' ? limit : parseInt(limit, 10);
+  if (!Number.isInteger(num) || num < 1) {
+    throw new Error('Limit must be a positive integer');
+  }
+  if (num > max) {
+    throw new Error(`Limit cannot exceed ${max}`);
+  }
+  return num;
+}
+
 /**
  * Git command optimization utilities for context efficiency
  */
@@ -54,8 +127,10 @@ const contextOptimizer = {
   * @param {number} limit - Number of commits to retrieve (default: 10)
   * @returns {string} Git command
   */
-  recentCommits: (limit = 10) =>
-    `git log --oneline --no-decorate -${limit} --format="%h %s"`,
+  recentCommits: (limit = 10) => {
+    const safeLimit = validateLimit(limit);
+    return `git log --oneline --no-decorate -${safeLimit} --format="%h %s"`;
+  },
 
   /**
    * Get compact git status (untracked files excluded)
@@ -69,8 +144,10 @@ const contextOptimizer = {
   * @param {string} ref - Reference to compare from (default: 'HEAD~5')
   * @returns {string} Git command
   */
-  fileChanges: (ref = 'HEAD~5') =>
-    `git diff ${ref}..HEAD --name-status`,
+  fileChanges: (ref = 'HEAD~5') => {
+    const safeRef = validateGitRef(ref);
+    return `git diff ${safeRef}..HEAD --name-status`;
+  },
 
   /**
    * Get current branch name
@@ -107,11 +184,15 @@ const contextOptimizer = {
   * @returns {string} Git command
   */
   lineAge: (file, line) => {
-    // Validate line is a positive integer
+    // Validate line is a positive integer with reasonable bounds
     const lineNum = parseInt(line, 10);
+    const MAX_LINE_NUMBER = 10000000; // 10 million lines - reasonable upper bound
     if (!Number.isInteger(lineNum) || lineNum < 1) {
       throw new Error('Line must be a positive integer');
     }
+    if (lineNum > MAX_LINE_NUMBER) {
+      throw new Error(`Line number cannot exceed ${MAX_LINE_NUMBER}`);
+    }
     // Escape file path for safe shell usage
     const safeFile = escapeShell(file);
     return `git blame -L ${lineNum},${lineNum} "${safeFile}" --porcelain | grep '^committer-time' | cut -d' ' -f2`;
@@ -132,8 +213,10 @@ const contextOptimizer = {
   * @param {string} ref - Reference to compare from (default: 'HEAD~5')
   * @returns {string} Git command
   */
-  diffStat: (ref = 'HEAD~5') =>
-    `git diff ${ref}..HEAD --stat | head -20`,
+  diffStat: (ref = 'HEAD~5') => {
+    const safeRef = validateGitRef(ref);
+    return `git diff ${safeRef}..HEAD --stat | head -20`;
+  },
 
   /**
    * Get contributors list (limited to top 10)
@@ -161,24 +244,30 @@ const contextOptimizer = {
   * @param {number} limit - Number of branches (default: 10)
   * @returns {string} Git command
   */
-  branches: (limit = 10) =>
-    `git branch --format='%(refname:short)' | head -${limit}`,
+  branches: (limit = 10) => {
+    const safeLimit = validateLimit(limit);
+    return `git branch --format='%(refname:short)' | head -${safeLimit}`;
+  },
 
   /**
    * Get tags list (limited)
    * @param {number} limit - Number of tags (default: 10)
    * @returns {string} Git command
   */
-  tags: (limit = 10) =>
-    `git tag --sort=-creatordate | head -${limit}`,
+  tags: (limit = 10) => {
+    const safeLimit = validateLimit(limit);
+    return `git tag --sort=-creatordate | head -${safeLimit}`;
+  },
 
   /**
    * Get count of commits on current branch since branching from main
    * @param {string} mainBranch - Main branch name (default: 'main')
    * @returns {string} Git command
   */
-  commitsSinceBranch: (mainBranch = 'main') =>
-    `git rev-list --count ${mainBranch}..HEAD`,
+  commitsSinceBranch: (mainBranch = 'main') => {
+    const safeBranch = validateBranchName(mainBranch);
+    return `git rev-list --count ${safeBranch}..HEAD`;
+  },
 
   /**
    * Check if working directory is clean
@@ -192,16 +281,20 @@ const contextOptimizer = {
   * @param {string} mainBranch - Main branch name (default: 'main')
   * @returns {string} Git command
   */
-  mergeBase: (mainBranch = 'main') =>
-    `git merge-base ${mainBranch} HEAD`,
+  mergeBase: (mainBranch = 'main') => {
+    const safeBranch = validateBranchName(mainBranch);
+    return `git merge-base ${safeBranch} HEAD`;
+  },
 
   /**
    * Get files modified in current branch (since branching)
    * @param {string} mainBranch - Main branch name (default: 'main')
   * @returns {string} Git command
   */
-  branchChangedFiles: (mainBranch = 'main') =>
-    `git diff ${mainBranch}...HEAD --name-only`,
+  branchChangedFiles: (mainBranch = 'main') => {
+    const safeBranch = validateBranchName(mainBranch);
+    return `git diff ${safeBranch}...HEAD --name-only`;
+  },
 
   /**
    * Get commit count by author
@@ -224,4 +317,15 @@ const contextOptimizer = {
   }
 };
 
+// Export main API
 module.exports = contextOptimizer;
+
+// Export internal functions for testing
+module.exports._internal = {
+  escapeShell,
+  escapeSingleQuotes,
+  sanitizeExtension,
+  validateBranchName,
+  validateGitRef,
+  validateLimit
+};
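Because the new validators are pure string and number checks, their effect is easy to demonstrate standalone. This copy of `validateLimit` mirrors the hunk above; the injection payload is a hypothetical attacker input:

```javascript
// Copy of validateLimit from the diff: rejects anything that is not a
// pure positive integer before it can be interpolated into a shell command.
function validateLimit(limit, max = 1000) {
  // Only allow pure numeric strings
  if (typeof limit === 'string' && !/^\d+$/.test(limit)) {
    throw new Error('Limit must be a positive integer');
  }
  const num = typeof limit === 'number' ? limit : parseInt(limit, 10);
  if (!Number.isInteger(num) || num < 1) {
    throw new Error('Limit must be a positive integer');
  }
  if (num > max) {
    throw new Error(`Limit cannot exceed ${max}`);
  }
  return num;
}

console.log(validateLimit(10));   // 10
console.log(validateLimit('25')); // 25
try {
  // A shell-injection payload is rejected before reaching `head -${limit}`
  validateLimit('10; rm -rf /');
} catch (err) {
  console.log(err.message); // "Limit must be a positive integer"
}
```

Note that an allow-list regex, rather than escaping, is the design choice here: the commands remain plain strings, so the only safe inputs are ones that cannot alter the command at all.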
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "awesome-slash",
-  "version": "2.4.2",
+  "version": "2.4.4",
   "description": "Professional-grade slash commands for Claude Code that work across any project",
   "main": "lib/platform/detect-platform.js",
   "type": "commonjs",
@@ -15,6 +15,25 @@ and focused on observation and coordination, not complex reasoning.
 - This agent (haiku): Poll status, detect issues, report findings
 - ci-fixer (sonnet): Diagnose and fix CI failures, address PR comments
 
+## PR Auto-Review Process
+
+> **CRITICAL**: Every PR receives automatic reviews from **4 agents**:
+> - **Copilot** - GitHub's AI reviewer
+> - **Claude** - Anthropic's AI reviewer
+> - **Gemini** - Google's AI reviewer
+> - **Codex** - OpenAI's AI reviewer
+
+**Mandatory workflow:**
+1. After PR creation, wait **at least 3 minutes** for first review round
+2. Read **ALL comments** from all 4 reviewers
+3. Address **EVERY comment** - no exceptions
+4. Iterate until **zero unresolved threads** (typically 2-4 rounds)
+
+**Rules:**
+- ALWAYS address all comments, including "minor" or "nit" suggestions
+- NEVER skip a comment unless factually wrong or user-approved
+- Treat all feedback as **required changes**, not suggestions
+
 ## Configuration
 
 ```javascript
@@ -4,6 +4,25 @@ This file contains detailed implementation for the CI & Review Monitor Loop phas
 
 **Parent document**: `ship.md`
 
+## PR Auto-Review Process
+
+> **CRITICAL**: Every PR receives automatic reviews from **4 agents**:
+> - **Copilot** - GitHub's AI reviewer
+> - **Claude** - Anthropic's AI reviewer
+> - **Gemini** - Google's AI reviewer
+> - **Codex** - OpenAI's AI reviewer
+
+**Mandatory workflow:**
+1. After PR creation, wait **at least 3 minutes** for first review round
+2. Read **ALL comments** from all 4 reviewers
+3. Address **EVERY comment** - no exceptions
+4. Iterate until **zero unresolved threads** (typically 2-4 rounds)
+
+**Rules:**
+- ALWAYS address all comments, including "minor" or "nit" suggestions
+- NEVER skip a comment unless factually wrong or user-approved
+- Treat all feedback as **required changes**, not suggestions
+
 ## Overview
 
 The monitor loop waits for: