@sandrinio/vbounce 1.5.0 → 1.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/brains/AGENTS.md CHANGED
@@ -45,11 +45,13 @@ Before starting any sprint, the Team Lead MUST:
  0. Team Lead runs `./scripts/pre_bounce_sync.sh` to ensure LanceDB RAG context is fresh.
  1. Team Lead sends Story context pack to Developer.
  2. Developer queries LanceDB, implements code, writes Implementation Report. CLI Orchestrator must run `./scripts/validate_report.mjs` on the report to enforce YAML strictness.
- 3. QA runs Quick Scan + PR Review, validates against Story §2 The Truth. If fail → Bug Report to Dev. CLI Orchestrator must run `./scripts/validate_report.mjs` on the QA report before passing to Architect/Dev.
- 4. Dev fixes and resubmits. 3+ failures → Escalated.
- 5. Architect runs Deep Audit + Trend Check, validates Safe Zone compliance and ADR adherence.
- 6. DevOps merges story branch into sprint branch, validates post-merge, handles release tagging.
- 7. Team Lead consolidates reports into Sprint Report.
+ 3. **Pre-QA Gate Scan:** Team Lead runs `./scripts/pre_gate_runner.sh qa` to catch mechanical failures before spawning QA. If trivial issues → return to Dev.
+ 4. QA runs Quick Scan + PR Review (skipping pre-scanned checks), validates against Story §2 The Truth. If fail → Bug Report to Dev. CLI Orchestrator must run `./scripts/validate_report.mjs` on the QA report.
+ 5. Dev fixes and resubmits. 3+ failures → Escalated.
+ 6. **Pre-Architect Gate Scan:** Team Lead runs `./scripts/pre_gate_runner.sh arch` to catch structural issues before spawning Architect. If mechanical failures → return to Dev.
+ 7. Architect runs Deep Audit + Trend Check (skipping pre-scanned checks), validates Safe Zone compliance and ADR adherence.
+ 8. DevOps merges story branch into sprint branch, validates post-merge (tests + lint + build), handles release tagging.
+ 9. Team Lead consolidates reports into Sprint Report.
 
  **Hotfix Path (L1 Trivial Tasks):**
  1. Team Lead evaluates request and creates `HOTFIX-{Date}-{Name}.md`.
@@ -6,6 +6,17 @@ Per **Rule 13: Framework Integrity**, anytime an entry is made here, the orchest
  ## [2026-03-02]
  - **Initialized**: Created strict Framework Integrity tracking, YAML context handoffs, and RAG validation pipeline.
 
+ ## [2026-03-06] — Pre-Gate Automation
+ - **Added**: `scripts/pre_gate_common.sh` — shared gate check functions with auto-detection for JS/TS, Python, Rust, Go stacks.
+ - **Added**: `scripts/pre_gate_runner.sh` — universal pre-gate scanner. Reads `.bounce/gate-checks.json` config or falls back to auto-detected defaults. Runs before QA (`qa`) and Architect (`arch`) agents to catch mechanical failures with zero tokens.
+ - **Added**: `scripts/init_gate_config.sh` — auto-detects project stack (language, framework, test runner, build/lint commands) and generates `.bounce/gate-checks.json`.
+ - **Modified**: `brains/claude-agents/qa.md` — added Pre-Computed Scan Results section. QA skips checks covered by `pre-qa-scan.txt`.
+ - **Modified**: `brains/claude-agents/architect.md` — added Pre-Computed Scan Results section. Architect skips mechanical checks covered by `pre-arch-scan.txt`.
+ - **Modified**: `brains/claude-agents/devops.md` — added `npm run lint` (tsc --noEmit) to post-merge validation.
+ - **Modified**: `skills/agent-team/SKILL.md` — wired pre-gate scans into Steps 3 (QA) and 4 (Architect). Added gate config init to Step 0 sprint setup. Added lint to post-merge validation.
+ - **Modified**: All brain files (`CLAUDE.md`, `GEMINI.md`, `AGENTS.md`, `cursor-rules/vbounce-process.mdc`) — updated bounce sequence to include pre-gate scan steps.
+ - **Modified**: `skills/improve/SKILL.md` — added Gate Check Proposals section for iteratively growing project-specific checks via `gate-checks.json`.
+
  ## [2026-03-06]
  - **Fixed**: All brain files (`CLAUDE.md`, `GEMINI.md`, `AGENTS.md`) incorrectly referenced `DELIVERY_PLAN.md §5 Open Questions` or `ACTIVE_SPRINT.md §3 Open Questions`. Corrected to `sprint-{XX}.md §2 Sprint Open Questions` to match the authoritative `agent-team/SKILL.md` and `sprint.md` template.
  - **Fixed**: Removed duplicate "Product Plans Folder Structure" header in `doc-manager/SKILL.md`.
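
For reference, the scan file the QA and Architect agents consume is a small Markdown table written by `write_report` in `scripts/pre_gate_common.sh` (shown later in this diff). An illustrative `pre-qa-scan.txt` — all values hypothetical — would look like:

```text
# Pre-Gate Scan Results
Date: 2026-03-06 14:00 UTC
Target: /path/to/story-worktree
Gate: qa

| Check | Status | Detail |
|-------|--------|--------|
| tests_pass | PASS | npm test |
| build | PASS | npm run build |
| no_debug_output | FAIL | 2 files contain debug statements |

PASS: 2 FAIL: 1 SKIP: 0
```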
package/brains/CLAUDE.md CHANGED
@@ -59,11 +59,13 @@ Before starting any sprint, the Team Lead MUST:
  0. Team Lead runs `./scripts/pre_bounce_sync.sh` to ensure LanceDB RAG context is fresh.
  1. Team Lead sends Story context pack to Developer.
  2. Developer queries LanceDB, implements code, writes Implementation Report. CLI Orchestrator must run `./scripts/validate_report.mjs` on the report to enforce YAML strictness.
- 3. QA runs Quick Scan + PR Review, validates against Story §2 The Truth. If fail → Bug Report to Dev. CLI Orchestrator must run `./scripts/validate_report.mjs` on the QA report before passing to Architect/Dev.
- 4. Dev fixes and resubmits. 3+ failures → Escalated.
- 5. Architect runs Deep Audit + Trend Check, validates Safe Zone compliance and ADR adherence.
- 6. DevOps merges story branch into sprint branch, validates post-merge, handles release tagging.
- 7. Team Lead consolidates reports into Sprint Report.
+ 3. **Pre-QA Gate Scan:** Team Lead runs `./scripts/pre_gate_runner.sh qa` to catch mechanical failures (tests, build, lint, debug output, JSDoc) before spawning QA. If trivial issues found → return to Dev.
+ 4. QA runs Quick Scan + PR Review (skipping pre-scanned checks), validates against Story §2 The Truth. If fail → Bug Report to Dev. CLI Orchestrator must run `./scripts/validate_report.mjs` on the QA report.
+ 5. Dev fixes and resubmits. 3+ failures → Escalated.
+ 6. **Pre-Architect Gate Scan:** Team Lead runs `./scripts/pre_gate_runner.sh arch` to catch structural issues (new deps, file sizes) before spawning Architect. If mechanical failures → return to Dev.
+ 7. Architect runs Deep Audit + Trend Check (skipping pre-scanned checks), validates Safe Zone compliance and ADR adherence.
+ 8. DevOps merges story branch into sprint branch, validates post-merge (tests + lint + build), handles release tagging.
+ 9. Team Lead consolidates reports into Sprint Report.
 
  **Hotfix Path (L1 Trivial Tasks):**
  1. Team Lead evaluates request and creates `HOTFIX-{Date}-{Name}.md`.
package/brains/GEMINI.md CHANGED
@@ -50,11 +50,13 @@ Before starting any sprint, the Team Lead MUST:
  0. Team Lead runs `./scripts/pre_bounce_sync.sh` to ensure LanceDB RAG context is fresh.
  1. Team Lead sends Story context pack to Developer.
  2. Developer queries LanceDB, implements code, writes Implementation Report. CLI Orchestrator must run `./scripts/validate_report.mjs` on the report to enforce YAML strictness.
- 3. QA runs Quick Scan + PR Review, validates against Story §2 The Truth. If fail → Bug Report to Dev. CLI Orchestrator must run `./scripts/validate_report.mjs` on the QA report before passing to Architect/Dev.
- 4. Dev fixes and resubmits. 3+ failures → Escalated.
- 5. Architect runs Deep Audit + Trend Check, validates Safe Zone compliance and ADR adherence.
- 6. DevOps merges story branch into sprint branch, validates post-merge, handles release tagging.
- 7. Team Lead consolidates reports into Sprint Report.
+ 3. **Pre-QA Gate Scan:** Team Lead runs `./scripts/pre_gate_runner.sh qa` to catch mechanical failures before spawning QA. If trivial issues → return to Dev.
+ 4. QA runs Quick Scan + PR Review (skipping pre-scanned checks), validates against Story §2 The Truth. If fail → Bug Report to Dev. CLI Orchestrator must run `./scripts/validate_report.mjs` on the QA report.
+ 5. Dev fixes and resubmits. 3+ failures → Escalated.
+ 6. **Pre-Architect Gate Scan:** Team Lead runs `./scripts/pre_gate_runner.sh arch` to catch structural issues before spawning Architect. If mechanical failures → return to Dev.
+ 7. Architect runs Deep Audit + Trend Check (skipping pre-scanned checks), validates Safe Zone compliance and ADR adherence.
+ 8. DevOps merges story branch into sprint branch, validates post-merge (tests + lint + build), handles release tagging.
+ 9. Team Lead consolidates reports into Sprint Report.
 
  **Hotfix Path (L1 Trivial Tasks):**
  1. Team Lead evaluates request and creates `HOTFIX-{Date}-{Name}.md`.
package/brains/claude-agents/architect.md CHANGED
@@ -19,6 +19,15 @@ Audit the codebase for structural integrity, standards compliance, and long-term
  3. **Read the full Story spec** — especially §3 Implementation Guide and §3.1 ADR References.
  4. **Read Roadmap §3 ADRs** — every architecture decision the implementation must comply with.
 
+ ## Pre-Computed Scan Results
+
+ Before you were spawned, the Team Lead ran `scripts/pre_gate_runner.sh arch`. Read the results at `.bounce/reports/pre-arch-scan.txt`.
+
+ - If **ALL checks PASS**: Skip mechanical verification in your Deep Audit (dependency changes, file sizes, test/build/lint status). Focus on **judgment-based dimensions**: architectural consistency, error handling quality, data flow traceability, coupling assessment, and AI-ism detection.
+ - If **ANY check FAILS**: Note failures in your report. Focus your audit on the areas that failed.
+
+ The 6-dimension evaluation should focus on qualitative judgment. Mechanical checks (new deps, file sizes, exports documentation) are pre-computed — reference `pre-arch-scan.txt` rather than re-running them.
+
  ## Your Audit Process
 
  ### Deep Audit (Full Codebase Analysis)
package/brains/claude-agents/devops.md CHANGED
@@ -63,6 +63,9 @@ When resolving conflicts:
  # Run test suite on the merged sprint branch
  npm test # or project-appropriate test command
 
+ # Type check (if applicable)
+ npm run lint # tsc --noEmit or equivalent linter
+
  # Verify no regressions
  npm run build # or project-appropriate build command
 
package/brains/claude-agents/qa.md CHANGED
@@ -16,12 +16,19 @@ Validate that the Developer's implementation meets the Story's acceptance criter
  2. **Read the Developer Implementation Report** (`.bounce/reports/STORY-{ID}-{StoryName}-dev.md`) to understand what was built.
  3. **Read Story §2 The Truth** — these are your pass/fail criteria. If the Gherkin scenarios don't pass, the bounce failed.
 
+ ## Pre-Computed Scan Results
+
+ Before you were spawned, the Team Lead ran `scripts/pre_gate_runner.sh qa`. Read the results at `.bounce/reports/pre-qa-scan.txt`.
+
+ - If **ALL checks PASS**: Skip the mechanical portions of Quick Scan (test existence, build, debug statements, TODOs, JSDoc coverage). Focus your Quick Scan on **architectural consistency and error handling** only.
+ - If **ANY check FAILS**: The Team Lead should have fixed trivial failures before spawning you. If failures remain, note them in your report but do not re-run the checks — trust the scan output.
+
  ## Your Testing Process
 
  ### Quick Scan (Health Check)
- Run a fast structural check of the project using the vibe-code-review skill (Quick Scan mode):
+ Run a fast structural check using the vibe-code-review skill (Quick Scan mode):
  - Read `skills/vibe-code-review/SKILL.md` and `skills/vibe-code-review/references/quick-scan.md`
- - Execute the checks against the codebase
+ - Skip checks already covered by `pre-qa-scan.txt` (tests exist, build passes, no debug output, no TODOs, JSDoc coverage). Focus on **judgment-based structural assessment**.
  - Flag any obvious structural issues
 
  ### PR Review (Diff Analysis)
package/brains/cursor-rules/vbounce-process.mdc CHANGED
@@ -22,11 +22,13 @@ Before sprints: Lead triages request (L1 Trivial → Hotfix Path. L2-L4 → Epic
  0. Team Lead runs `./scripts/pre_bounce_sync.sh` to ensure LanceDB RAG context is fresh.
  1. Team Lead sends Story context pack to Developer.
  2. Developer reads LESSONS.md, implements code, writes Implementation Report. CLI Orchestrator must run `./scripts/validate_report.mjs` on the report to enforce YAML strictness.
- 3. QA validates against Story §2 The Truth. If fail → Bug Report to Dev. CLI Orchestrator must run `./scripts/validate_report.mjs` on the QA report before passing to Architect/Dev.
- 4. Dev fixes and resubmits. 3+ failures → Escalated.
- 5. Architect validates Safe Zone compliance and ADR adherence.
- 6. DevOps merges story into sprint branch, validates, handles releases.
- 7. Team Lead consolidates into Sprint Report.
+ 3. Pre-QA Gate Scan: `./scripts/pre_gate_runner.sh qa` catches mechanical failures before spawning QA. Trivial issues → return to Dev.
+ 4. QA validates against Story §2 The Truth (skipping pre-scanned checks). If fail → Bug Report to Dev.
+ 5. Dev fixes and resubmits. 3+ failures → Escalated.
+ 6. Pre-Architect Gate Scan: `./scripts/pre_gate_runner.sh arch` catches structural issues before spawning Architect.
+ 7. Architect validates Safe Zone compliance and ADR adherence (skipping pre-scanned checks).
+ 8. DevOps merges story into sprint branch, validates (tests + lint + build), handles releases.
+ 9. Team Lead consolidates into Sprint Report.
 
  **Hotfix Path (L1 Trivial Task):**
  1. Team Lead evaluates request and creates `HOTFIX-{Date}-{Name}.md`.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@sandrinio/vbounce",
-   "version": "1.5.0",
+   "version": "1.6.0",
    "description": "V-Bounce OS: Turn your AI coding assistant into a full engineering team through structured SDLC skills.",
    "type": "module",
    "bin": {
package/scripts/init_gate_config.sh ADDED
@@ -0,0 +1,151 @@
+ #!/usr/bin/env bash
+ # init_gate_config.sh — Auto-detect project stack and generate .bounce/gate-checks.json
+ # Usage: ./scripts/init_gate_config.sh [project-path]
+ #
+ # Run once during project setup or when the improve skill suggests new checks.
+ # Safe to re-run — merges with existing config (preserves custom checks).
+
+ set -euo pipefail
+
+ PROJECT_PATH="${1:-.}"
+ PROJECT_PATH="$(cd "$PROJECT_PATH" && pwd)"
+ CONFIG_PATH="${PROJECT_PATH}/.bounce/gate-checks.json"
+
+ RED='\033[0;31m'
+ GREEN='\033[0;32m'
+ YELLOW='\033[1;33m'
+ CYAN='\033[0;36m'
+ NC='\033[0m'
+
+ echo -e "${CYAN}V-Bounce OS Gate Config Initializer${NC}"
+ echo -e "Project: ${PROJECT_PATH}"
+ echo ""
+
+ # ── Detect stack ─────────────────────────────────────────────────────
+
+ LANGUAGE="unknown"
+ FRAMEWORK="unknown"
+ TEST_RUNNER="unknown"
+ BUILD_CMD=""
+ LINT_CMD=""
+ TEST_CMD=""
+
+ # Language detection
+ if [[ -f "${PROJECT_PATH}/tsconfig.json" ]]; then
+   LANGUAGE="typescript"
+ elif [[ -f "${PROJECT_PATH}/package.json" ]]; then
+   LANGUAGE="javascript"
+ elif [[ -f "${PROJECT_PATH}/pyproject.toml" || -f "${PROJECT_PATH}/setup.py" || -f "${PROJECT_PATH}/requirements.txt" ]]; then
+   LANGUAGE="python"
+ elif [[ -f "${PROJECT_PATH}/Cargo.toml" ]]; then
+   LANGUAGE="rust"
+ elif [[ -f "${PROJECT_PATH}/go.mod" ]]; then
+   LANGUAGE="go"
+ elif [[ -f "${PROJECT_PATH}/build.gradle" || -f "${PROJECT_PATH}/pom.xml" ]]; then
+   LANGUAGE="java"
+ elif [[ -f "${PROJECT_PATH}/Package.swift" ]]; then
+   LANGUAGE="swift"
+ fi
+
+ echo -e "Language: ${GREEN}${LANGUAGE}${NC}"
+
+ # Framework detection (JS/TS ecosystem)
+ if [[ -f "${PROJECT_PATH}/package.json" ]]; then
+   PKG_CONTENT=$(cat "${PROJECT_PATH}/package.json")
+
+   if echo "$PKG_CONTENT" | grep -q '"next"'; then FRAMEWORK="nextjs"
+   elif echo "$PKG_CONTENT" | grep -q '"react"'; then FRAMEWORK="react"
+   elif echo "$PKG_CONTENT" | grep -q '"vue"'; then FRAMEWORK="vue"
+   elif echo "$PKG_CONTENT" | grep -q '"svelte"'; then FRAMEWORK="svelte"
+   elif echo "$PKG_CONTENT" | grep -q '"express"'; then FRAMEWORK="express"
+   elif echo "$PKG_CONTENT" | grep -q '"fastify"'; then FRAMEWORK="fastify"
+   elif echo "$PKG_CONTENT" | grep -q '"@angular/core"'; then FRAMEWORK="angular"
+   fi
+
+   # Test runner
+   if echo "$PKG_CONTENT" | grep -q '"vitest"'; then TEST_RUNNER="vitest"
+   elif echo "$PKG_CONTENT" | grep -q '"jest"'; then TEST_RUNNER="jest"
+   elif echo "$PKG_CONTENT" | grep -q '"mocha"'; then TEST_RUNNER="mocha"
+   elif echo "$PKG_CONTENT" | grep -q '"ava"'; then TEST_RUNNER="ava"
+   fi
+
+   # Commands from scripts
+   BUILD_CMD=$(node -e "try{const p=require('${PROJECT_PATH}/package.json');console.log(p.scripts&&p.scripts.build||'')}catch(e){}" 2>/dev/null || echo "")
+   LINT_CMD=$(node -e "try{const p=require('${PROJECT_PATH}/package.json');console.log(p.scripts&&p.scripts.lint||'')}catch(e){}" 2>/dev/null || echo "")
+   TEST_CMD=$(node -e "try{const p=require('${PROJECT_PATH}/package.json');const t=p.scripts&&p.scripts.test||'';if(t&&!t.includes('no test specified'))console.log(t);else console.log('')}catch(e){}" 2>/dev/null || echo "")
+ elif [[ "$LANGUAGE" == "python" ]]; then
+   if command -v pytest &>/dev/null; then TEST_RUNNER="pytest"; fi
+   if command -v ruff &>/dev/null; then LINT_CMD="ruff check ."; fi
+ elif [[ "$LANGUAGE" == "rust" ]]; then
+   TEST_RUNNER="cargo"
+   BUILD_CMD="cargo build"
+   LINT_CMD="cargo clippy"
+ elif [[ "$LANGUAGE" == "go" ]]; then
+   TEST_RUNNER="go"
+   BUILD_CMD="go build ./..."
+   LINT_CMD="golangci-lint run"
+ fi
+
+ echo -e "Framework: ${GREEN}${FRAMEWORK}${NC}"
+ echo -e "Test runner: ${GREEN}${TEST_RUNNER}${NC}"
+ [[ -n "$BUILD_CMD" ]] && echo -e "Build: ${GREEN}${BUILD_CMD}${NC}"
+ [[ -n "$LINT_CMD" ]] && echo -e "Lint: ${GREEN}${LINT_CMD}${NC}"
+ echo ""
+
+ # ── Generate config ──────────────────────────────────────────────────
+
+ # Build QA checks array
+ QA_CHECKS='[
+   { "id": "tests_exist", "enabled": true, "description": "Verify test files exist for modified source files" },
+   { "id": "tests_pass", "enabled": true, "description": "Run test suite" },
+   { "id": "build", "enabled": true, "description": "Run build command" },
+   { "id": "lint", "enabled": true, "description": "Run linter" },
+   { "id": "no_debug_output", "enabled": true, "description": "No debug statements in modified files" },
+   { "id": "no_todo_fixme", "enabled": true, "description": "No TODO/FIXME in modified files" },
+   { "id": "exports_have_docs", "enabled": true, "description": "Exported symbols have doc comments" }
+ ]'
+
+ # Build Architect checks array
+ ARCH_CHECKS='[
+   { "id": "tests_exist", "enabled": true, "description": "Verify test files exist for modified source files" },
+   { "id": "tests_pass", "enabled": true, "description": "Run test suite" },
+   { "id": "build", "enabled": true, "description": "Run build command" },
+   { "id": "lint", "enabled": true, "description": "Run linter" },
+   { "id": "no_debug_output", "enabled": true, "description": "No debug statements in modified files" },
+   { "id": "no_todo_fixme", "enabled": true, "description": "No TODO/FIXME in modified files" },
+   { "id": "exports_have_docs", "enabled": true, "description": "Exported symbols have doc comments" },
+   { "id": "no_new_deps", "enabled": true, "description": "No new dependencies without review" },
+   { "id": "file_size", "enabled": true, "max_lines": 500, "description": "Source files under 500 lines" }
+ ]'
+
+ # Write config
+ mkdir -p "$(dirname "$CONFIG_PATH")"
+
+ cat > "$CONFIG_PATH" << HEREDOC
+ {
+   "version": 1,
+   "detected_stack": {
+     "language": "${LANGUAGE}",
+     "framework": "${FRAMEWORK}",
+     "test_runner": "${TEST_RUNNER}",
+     "build_cmd": "${BUILD_CMD}",
+     "lint_cmd": "${LINT_CMD}",
+     "test_cmd": "${TEST_CMD}"
+   },
+   "qa_checks": ${QA_CHECKS},
+   "arch_checks": ${ARCH_CHECKS},
+   "custom_checks": []
+ }
+ HEREDOC
+
+ echo -e "${GREEN}Generated: ${CONFIG_PATH}${NC}"
+ echo ""
+ echo "Universal checks enabled. To add project-specific checks:"
+ echo "  1. Run sprints and let agents collect Process Feedback"
+ echo "  2. Use the 'improve' skill to propose new checks"
+ echo "  3. Or manually add entries to the custom_checks array"
+ echo ""
+ echo -e "Example custom check (add to custom_checks):"
+ echo '  { "id": "custom_grep", "gate": "arch", "enabled": true,'
+ echo '    "pattern": "var\\(--my-prefix-", "glob": "*.tsx",'
+ echo '    "should_find": false, "description": "No raw CSS vars in components" }'
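
Based on the heredoc above, the generated `.bounce/gate-checks.json` for a TypeScript/vitest project would look roughly like the following (stack values and the abridged check arrays are illustrative; the key structure comes from the script):

```json
{
  "version": 1,
  "detected_stack": {
    "language": "typescript",
    "framework": "react",
    "test_runner": "vitest",
    "build_cmd": "tsc -p tsconfig.json",
    "lint_cmd": "eslint .",
    "test_cmd": "vitest run"
  },
  "qa_checks": [
    { "id": "tests_pass", "enabled": true, "description": "Run test suite" },
    { "id": "no_debug_output", "enabled": true, "description": "No debug statements in modified files" }
  ],
  "arch_checks": [
    { "id": "no_new_deps", "enabled": true, "description": "No new dependencies without review" },
    { "id": "file_size", "enabled": true, "max_lines": 500, "description": "Source files under 500 lines" }
  ],
  "custom_checks": []
}
```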
package/scripts/pre_gate_common.sh ADDED
@@ -0,0 +1,576 @@
+ #!/usr/bin/env bash
+ # pre_gate_common.sh — Shared gate check functions for V-Bounce OS
+ # Sourced by pre_gate_runner.sh. Never run directly.
+
+ set -euo pipefail
+
+ # ── Colors & formatting ──────────────────────────────────────────────
+ RED='\033[0;31m'
+ GREEN='\033[0;32m'
+ YELLOW='\033[1;33m'
+ CYAN='\033[0;36m'
+ NC='\033[0m'
+
+ PASS_COUNT=0
+ FAIL_COUNT=0
+ SKIP_COUNT=0
+ RESULTS=""
+
+ record_result() {
+   local id="$1" status="$2" detail="$3"
+   case "$status" in
+     PASS) PASS_COUNT=$((PASS_COUNT + 1)); RESULTS+="| ${id} | ${GREEN}PASS${NC} | ${detail} |"$'\n' ;;
+     FAIL) FAIL_COUNT=$((FAIL_COUNT + 1)); RESULTS+="| ${id} | ${RED}FAIL${NC} | ${detail} |"$'\n' ;;
+     SKIP) SKIP_COUNT=$((SKIP_COUNT + 1)); RESULTS+="| ${id} | ${YELLOW}SKIP${NC} | ${detail} |"$'\n' ;;
+   esac
+ }
+
+ record_result_plain() {
+   local id="$1" status="$2" detail="$3"
+   PLAIN_RESULTS+="| ${id} | ${status} | ${detail} |"$'\n'
+ }
+
+ print_summary() {
+   echo ""
+   echo -e "${CYAN}── Gate Check Results ──${NC}"
+   echo "| Check | Status | Detail |"
+   echo "|-------|--------|--------|"
+   echo -e "$RESULTS"
+   echo ""
+   echo -e "PASS: ${GREEN}${PASS_COUNT}${NC} FAIL: ${RED}${FAIL_COUNT}${NC} SKIP: ${YELLOW}${SKIP_COUNT}${NC}"
+ }
+
+ write_report() {
+   local output_path="$1"
+   mkdir -p "$(dirname "$output_path")"
+   {
+     echo "# Pre-Gate Scan Results"
+     echo "Date: $(date -u '+%Y-%m-%d %H:%M UTC')"
+     echo "Target: ${WORKTREE_PATH}"
+     echo "Gate: ${GATE_TYPE}"
+     echo ""
+     echo "| Check | Status | Detail |"
+     echo "|-------|--------|--------|"
+     echo -e "$PLAIN_RESULTS"
+     echo ""
+     echo "PASS: ${PASS_COUNT} FAIL: ${FAIL_COUNT} SKIP: ${SKIP_COUNT}"
+   } > "$output_path"
+ }
+
+ # ── Stack detection helpers ──────────────────────────────────────────
+
+ detect_test_cmd() {
+   local dir="$1"
+   if [[ -f "${dir}/package.json" ]]; then
+     local test_script
+     test_script=$(node -e "try{const p=require('${dir}/package.json');console.log(p.scripts&&p.scripts.test||'')}catch(e){}" 2>/dev/null || echo "")
+     if [[ -n "$test_script" && "$test_script" != "echo \"Error: no test specified\" && exit 1" ]]; then
+       echo "npm test"
+       return
+     fi
+   fi
+   if [[ -f "${dir}/pytest.ini" || -f "${dir}/pyproject.toml" || -f "${dir}/setup.cfg" ]]; then
+     echo "pytest"
+     return
+   fi
+   if [[ -f "${dir}/Cargo.toml" ]]; then
+     echo "cargo test"
+     return
+   fi
+   if [[ -f "${dir}/go.mod" ]]; then
+     echo "go test ./..."
+     return
+   fi
+   echo ""
+ }
+
+ detect_build_cmd() {
+   local dir="$1"
+   if [[ -f "${dir}/package.json" ]]; then
+     local build_script
+     build_script=$(node -e "try{const p=require('${dir}/package.json');console.log(p.scripts&&p.scripts.build||'')}catch(e){}" 2>/dev/null || echo "")
+     if [[ -n "$build_script" ]]; then
+       echo "npm run build"
+       return
+     fi
+   fi
+   if [[ -f "${dir}/Cargo.toml" ]]; then
+     echo "cargo build"
+     return
+   fi
+   if [[ -f "${dir}/go.mod" ]]; then
+     echo "go build ./..."
+     return
+   fi
+   echo ""
+ }
+
+ detect_lint_cmd() {
+   local dir="$1"
+   if [[ -f "${dir}/package.json" ]]; then
+     local lint_script
+     lint_script=$(node -e "try{const p=require('${dir}/package.json');console.log(p.scripts&&p.scripts.lint||'')}catch(e){}" 2>/dev/null || echo "")
+     if [[ -n "$lint_script" ]]; then
+       echo "npm run lint"
+       return
+     fi
+   fi
+   if command -v ruff &>/dev/null && [[ -f "${dir}/pyproject.toml" ]]; then
+     echo "ruff check ."
+     return
+   fi
+   if [[ -f "${dir}/Cargo.toml" ]]; then
+     echo "cargo clippy"
+     return
+   fi
+   echo ""
+ }
+
+ detect_source_glob() {
+   local dir="$1"
+   if [[ -f "${dir}/tsconfig.json" ]]; then
+     echo "*.{ts,tsx}"
+     return
+   fi
+   if [[ -f "${dir}/package.json" ]]; then
+     echo "*.{js,jsx}"
+     return
+   fi
+   if [[ -f "${dir}/pyproject.toml" || -f "${dir}/setup.py" || -f "${dir}/setup.cfg" ]]; then
+     echo "*.py"
+     return
+   fi
+   if [[ -f "${dir}/Cargo.toml" ]]; then
+     echo "*.rs"
+     return
+   fi
+   if [[ -f "${dir}/go.mod" ]]; then
+     echo "*.go"
+     return
+   fi
+   echo "*"
+ }
+
+ detect_dep_file() {
+   local dir="$1"
+   if [[ -f "${dir}/package-lock.json" ]]; then echo "package.json"; return; fi
+   if [[ -f "${dir}/yarn.lock" ]]; then echo "package.json"; return; fi
+   if [[ -f "${dir}/pnpm-lock.yaml" ]]; then echo "package.json"; return; fi
+   if [[ -f "${dir}/requirements.txt" ]]; then echo "requirements.txt"; return; fi
+   if [[ -f "${dir}/Pipfile.lock" ]]; then echo "Pipfile"; return; fi
+   if [[ -f "${dir}/pyproject.toml" ]]; then echo "pyproject.toml"; return; fi
+   if [[ -f "${dir}/Cargo.lock" ]]; then echo "Cargo.toml"; return; fi
+   if [[ -f "${dir}/go.sum" ]]; then echo "go.mod"; return; fi
+   echo ""
+ }
+
+ detect_test_pattern() {
+   local dir="$1"
+   if [[ -f "${dir}/tsconfig.json" || -f "${dir}/package.json" ]]; then
+     echo '\.test\.\|\.spec\.\|__tests__'
+     return
+   fi
+   if [[ -f "${dir}/pyproject.toml" || -f "${dir}/setup.py" ]]; then
+     echo 'test_\|_test\.py'
+     return
+   fi
+   if [[ -f "${dir}/Cargo.toml" ]]; then
+     echo '_test\.rs\|tests/'
+     return
+   fi
+   echo '\.test\.\|\.spec\.\|test_'
+ }
+
+ detect_doc_comment_pattern() {
+   local dir="$1"
+   if [[ -f "${dir}/tsconfig.json" || -f "${dir}/package.json" ]]; then
+     echo '/\*\*'
+     return
+   fi
+   if [[ -f "${dir}/pyproject.toml" || -f "${dir}/setup.py" ]]; then
+     echo '"""'
+     return
+   fi
+   if [[ -f "${dir}/Cargo.toml" ]]; then
+     echo '///'
+     return
+   fi
+   echo '/\*\*\|"""\|///'
+ }
+
+ detect_export_pattern() {
+   local dir="$1"
+   if [[ -f "${dir}/tsconfig.json" || -f "${dir}/package.json" ]]; then
+     echo 'export '
+     return
+   fi
+   if [[ -f "${dir}/pyproject.toml" || -f "${dir}/setup.py" ]]; then
+     echo '^def \|^class '
+     return
+   fi
+   if [[ -f "${dir}/Cargo.toml" ]]; then
+     echo '^pub '
+     return
+   fi
+   if [[ -f "${dir}/go.mod" ]]; then
+     echo '^func [A-Z]'
+     return
+   fi
+   echo 'export \|^def \|^class \|^pub \|^func [A-Z]'
+ }
+
+ detect_debug_pattern() {
+   local dir="$1"
+   if [[ -f "${dir}/tsconfig.json" || -f "${dir}/package.json" ]]; then
+     echo 'console\.log\|console\.debug'
+     return
+   fi
+   if [[ -f "${dir}/pyproject.toml" || -f "${dir}/setup.py" ]]; then
+     echo 'print(\|breakpoint()'
+     return
+   fi
+   if [[ -f "${dir}/Cargo.toml" ]]; then
+     echo 'dbg!\|println!'
+     return
+   fi
+   if [[ -f "${dir}/go.mod" ]]; then
+     echo 'fmt\.Print'
+     return
+   fi
+   echo 'console\.log\|print(\|dbg!\|fmt\.Print'
+ }
+
+ # ── Get modified files from git diff ─────────────────────────────────
+
+ get_modified_files() {
+   local dir="$1"
+   local base_branch="${2:-}"
+   cd "$dir"
+   if [[ -n "$base_branch" ]]; then
+     git diff --name-only "$base_branch"...HEAD -- . 2>/dev/null || git diff --name-only HEAD~1 -- . 2>/dev/null || echo ""
+   else
+     git diff --name-only HEAD~1 -- . 2>/dev/null || echo ""
+   fi
+ }
+
+ # ── Universal check functions ────────────────────────────────────────
+
+ check_tests_exist() {
+   local dir="$1" modified_files="$2"
+   local test_pattern
+   test_pattern=$(detect_test_pattern "$dir")
+   local source_glob
+   source_glob=$(detect_source_glob "$dir")
+
+   if [[ -z "$modified_files" ]]; then
+     record_result "tests_exist" "SKIP" "No modified files detected"
+     record_result_plain "tests_exist" "SKIP" "No modified files detected"
+     return
+   fi
+
+   local missing=0
+   local checked=0
+   while IFS= read -r file; do
+     [[ -z "$file" ]] && continue
+     # Skip test files themselves, configs, docs
+     if echo "$file" | grep -qE '(\.test\.|\.spec\.|__tests__|test_|_test\.|\.md$|\.json$|\.yml$|\.yaml$|\.config\.)'; then
+       continue
+     fi
+     # Only check source files
+     if ! echo "$file" | grep -qE "\.(ts|tsx|js|jsx|py|rs|go)$"; then
+       continue
+     fi
+     checked=$((checked + 1))
+     local basename
+     basename=$(basename "$file" | sed 's/\.[^.]*$//')
+     # Look for a corresponding test file anywhere in the tree
+     if ! find "$dir" -name "*${basename}*" 2>/dev/null | grep -q "$test_pattern"; then
+       missing=$((missing + 1))
+     fi
+   done <<< "$modified_files"
+
+   if [[ $checked -eq 0 ]]; then
+     record_result "tests_exist" "SKIP" "No source files in diff"
+     record_result_plain "tests_exist" "SKIP" "No source files in diff"
+   elif [[ $missing -eq 0 ]]; then
+     record_result "tests_exist" "PASS" "${checked} source files have tests"
+     record_result_plain "tests_exist" "PASS" "${checked} source files have tests"
+   else
+     record_result "tests_exist" "FAIL" "${missing}/${checked} source files missing tests"
+     record_result_plain "tests_exist" "FAIL" "${missing}/${checked} source files missing tests"
+   fi
+ }
+
+ check_tests_pass() {
+   local dir="$1"
+   local test_cmd
+   test_cmd=$(detect_test_cmd "$dir")
+
+   if [[ -z "$test_cmd" ]]; then
+     record_result "tests_pass" "SKIP" "No test runner detected"
+     record_result_plain "tests_pass" "SKIP" "No test runner detected"
+     return
+   fi
+
+   if (cd "$dir" && eval "$test_cmd" > /dev/null 2>&1); then
+     record_result "tests_pass" "PASS" "${test_cmd}"
+     record_result_plain "tests_pass" "PASS" "${test_cmd}"
+   else
+     record_result "tests_pass" "FAIL" "${test_cmd} failed"
+     record_result_plain "tests_pass" "FAIL" "${test_cmd} failed"
+   fi
+ }
+
+ check_build() {
+   local dir="$1"
+   local build_cmd
+   build_cmd=$(detect_build_cmd "$dir")
+
+   if [[ -z "$build_cmd" ]]; then
+     record_result "build" "SKIP" "No build command detected"
+     record_result_plain "build" "SKIP" "No build command detected"
+     return
+   fi
+
+   if (cd "$dir" && eval "$build_cmd" > /dev/null 2>&1); then
+     record_result "build" "PASS" "${build_cmd}"
+     record_result_plain "build" "PASS" "${build_cmd}"
+   else
+     record_result "build" "FAIL" "${build_cmd} failed"
+     record_result_plain "build" "FAIL" "${build_cmd} failed"
+   fi
+ }
+
+ check_lint() {
+   local dir="$1"
+   local lint_cmd
+   lint_cmd=$(detect_lint_cmd "$dir")
+
+   if [[ -z "$lint_cmd" ]]; then
+     record_result "lint" "SKIP" "No linter detected"
+     record_result_plain "lint" "SKIP" "No linter detected"
+     return
+   fi
+
+   if (cd "$dir" && eval "$lint_cmd" > /dev/null 2>&1); then
+     record_result "lint" "PASS" "${lint_cmd}"
+     record_result_plain "lint" "PASS" "${lint_cmd}"
+   else
+     record_result "lint" "FAIL" "${lint_cmd} failed"
+     record_result_plain "lint" "FAIL" "${lint_cmd} failed"
+   fi
+ }
+
+ check_no_debug_output() {
+   local dir="$1" modified_files="$2"
+   local debug_pattern
+   debug_pattern=$(detect_debug_pattern "$dir")
+
+   if [[ -z "$modified_files" ]]; then
+     record_result "no_debug_output" "SKIP" "No modified files"
+     record_result_plain "no_debug_output" "SKIP" "No modified files"
+     return
+   fi
+
+   local found=0
+   while IFS= read -r file; do
+     [[ -z "$file" ]] && continue
+     [[ ! -f "${dir}/${file}" ]] && continue
+     # Skip test files and configs
+     if echo "$file" | grep -qE '(\.test\.|\.spec\.|__tests__|test_|_test\.|\.config\.|\.md$|\.json$)'; then
+       continue
+     fi
+     if grep -qE "$debug_pattern" "${dir}/${file}" 2>/dev/null; then
+       found=$((found + 1))
+     fi
+   done <<< "$modified_files"
+
+   if [[ $found -eq 0 ]]; then
+     record_result "no_debug_output" "PASS" "No debug statements in modified files"
+     record_result_plain "no_debug_output" "PASS" "No debug statements in modified files"
+   else
+     record_result "no_debug_output" "FAIL" "${found} files contain debug statements"
+     record_result_plain "no_debug_output" "FAIL" "${found} files contain debug statements"
+   fi
+ }
+
+ check_no_todo_fixme() {
+   local dir="$1" modified_files="$2"
+
+   if [[ -z "$modified_files" ]]; then
+     record_result "no_todo_fixme" "SKIP" "No modified files"
+     record_result_plain "no_todo_fixme" "SKIP" "No modified files"
+     return
+   fi
+
+   local found=0
+   while IFS= read -r file; do
+     [[ -z "$file" ]] && continue
+     [[ ! -f "${dir}/${file}" ]] && continue
+     if grep -qiE '(TODO|FIXME|HACK|XXX)' "${dir}/${file}" 2>/dev/null; then
+       found=$((found + 1))
+     fi
+   done <<< "$modified_files"
414
+
415
+ if [[ $found -eq 0 ]]; then
416
+ record_result "no_todo_fixme" "PASS" "No TODO/FIXME in modified files"
417
+ record_result_plain "no_todo_fixme" "PASS" "No TODO/FIXME in modified files"
418
+ else
419
+ record_result "no_todo_fixme" "FAIL" "${found} files contain TODO/FIXME"
420
+ record_result_plain "no_todo_fixme" "FAIL" "${found} files contain TODO/FIXME"
421
+ fi
422
+ }
423
+
424
+ check_exports_have_docs() {
425
+ local dir="$1" modified_files="$2"
426
+ local export_pattern doc_pattern
427
+ export_pattern=$(detect_export_pattern "$dir")
428
+ doc_pattern=$(detect_doc_comment_pattern "$dir")
429
+
430
+ if [[ -z "$modified_files" ]]; then
431
+ record_result "exports_have_docs" "SKIP" "No modified files"
432
+ record_result_plain "exports_have_docs" "SKIP" "No modified files"
433
+ return
434
+ fi
435
+
436
+ local missing=0
437
+ local total=0
438
+ while IFS= read -r file; do
439
+ [[ -z "$file" ]] && continue
440
+ [[ ! -f "${dir}/${file}" ]] && continue
441
+ # Skip test files and configs
442
+ if echo "$file" | grep -qE '(\.test\.|\.spec\.|__tests__|test_|_test\.|\.config\.|\.md$|\.json$)'; then
443
+ continue
444
+ fi
445
+ if ! echo "$file" | grep -qE "\.(ts|tsx|js|jsx|py|rs|go)$"; then
446
+ continue
447
+ fi
448
+ # Count exports without preceding doc comments
+ local exports_in_file
+ # grep -c prints "0" yet exits non-zero on no match; "|| true" avoids capturing a stray second "0"
+ exports_in_file=$(grep -c "$export_pattern" "${dir}/${file}" 2>/dev/null || true)
+ if [[ $exports_in_file -gt 0 ]]; then
+ total=$((total + exports_in_file))
+ local docs_in_file
+ docs_in_file=$(grep -c "$doc_pattern" "${dir}/${file}" 2>/dev/null || true)
+ if [[ $docs_in_file -lt $exports_in_file ]]; then
+ missing=$((missing + (exports_in_file - docs_in_file)))
+ fi
+ fi
+ done <<< "$modified_files"
+
+ if [[ $total -eq 0 ]]; then
+ record_result "exports_have_docs" "SKIP" "No exports in modified files"
+ record_result_plain "exports_have_docs" "SKIP" "No exports in modified files"
+ elif [[ $missing -eq 0 ]]; then
+ record_result "exports_have_docs" "PASS" "${total} exports documented"
+ record_result_plain "exports_have_docs" "PASS" "${total} exports documented"
+ else
+ record_result "exports_have_docs" "FAIL" "${missing}/${total} exports missing doc comments"
+ record_result_plain "exports_have_docs" "FAIL" "${missing}/${total} exports missing doc comments"
+ fi
+ }
+
+ check_no_new_dependencies() {
+ local dir="$1" base_branch="${2:-}"
+ local dep_file
+ dep_file=$(detect_dep_file "$dir")
+
+ if [[ -z "$dep_file" ]]; then
+ record_result "no_new_deps" "SKIP" "No dependency file detected"
+ record_result_plain "no_new_deps" "SKIP" "No dependency file detected"
+ return
+ fi
+
+ if [[ -z "$base_branch" ]]; then
+ record_result "no_new_deps" "SKIP" "No base branch to compare"
+ record_result_plain "no_new_deps" "SKIP" "No base branch to compare"
+ return
+ fi
+
+ local diff_output
+ # git -C scopes the diff to the worktree without leaking a cd into the caller's shell
+ diff_output=$(git -C "$dir" diff "$base_branch"...HEAD -- "$dep_file" 2>/dev/null || echo "")
+
+ if [[ -z "$diff_output" ]]; then
+ record_result "no_new_deps" "PASS" "No changes to ${dep_file}"
+ record_result_plain "no_new_deps" "PASS" "No changes to ${dep_file}"
+ else
+ local added
+ added=$(echo "$diff_output" | grep -c "^+" | head -1 || echo "0")
+ record_result "no_new_deps" "FAIL" "${dep_file} modified (+${added} lines) — review new dependencies"
+ record_result_plain "no_new_deps" "FAIL" "${dep_file} modified (+${added} lines) — review new dependencies"
+ fi
+ }
+
+ check_file_size_limit() {
+ local dir="$1" modified_files="$2" max_lines="${3:-500}"
+
+ if [[ -z "$modified_files" ]]; then
+ record_result "file_size" "SKIP" "No modified files"
+ record_result_plain "file_size" "SKIP" "No modified files"
+ return
+ fi
+
+ local oversized=0
+ local details=""
+ while IFS= read -r file; do
+ [[ -z "$file" ]] && continue
+ [[ ! -f "${dir}/${file}" ]] && continue
+ if ! echo "$file" | grep -qE "\.(ts|tsx|js|jsx|py|rs|go|swift|kt|java)$"; then
+ continue
+ fi
+ local lines
+ lines=$(wc -l < "${dir}/${file}" | tr -d ' ')
+ if [[ $lines -gt $max_lines ]]; then
+ oversized=$((oversized + 1))
+ details="${details}${file}(${lines}L) "
+ fi
+ done <<< "$modified_files"
+
+ if [[ $oversized -eq 0 ]]; then
+ record_result "file_size" "PASS" "All files under ${max_lines} lines"
+ record_result_plain "file_size" "PASS" "All files under ${max_lines} lines"
+ else
+ record_result "file_size" "FAIL" "${oversized} files over ${max_lines}L: ${details}"
+ record_result_plain "file_size" "FAIL" "${oversized} files over ${max_lines}L: ${details}"
+ fi
+ }
+
+ # ── Custom check runner ──────────────────────────────────────────────
+
+ run_custom_check() {
+ local dir="$1" id="$2" cmd="$3" description="${4:-Custom check}"
+
+ if (cd "$dir" && eval "$cmd" > /dev/null 2>&1); then
+ record_result "$id" "PASS" "$description"
+ record_result_plain "$id" "PASS" "$description"
+ else
+ record_result "$id" "FAIL" "$description"
+ record_result_plain "$id" "FAIL" "$description"
+ fi
+ }
+
+ run_custom_grep_check() {
+ local dir="$1" id="$2" pattern="$3" glob="$4" should_find="${5:-false}"
+
+ local count
+ count=$(find "$dir" -name "$glob" -not -path '*/node_modules/*' -not -path '*/.git/*' -exec grep -l "$pattern" {} \; 2>/dev/null | wc -l | tr -d ' ')
+
+ if [[ "$should_find" == "true" ]]; then
+ if [[ $count -gt 0 ]]; then
+ record_result "$id" "PASS" "Pattern found in ${count} files"
+ record_result_plain "$id" "PASS" "Pattern found in ${count} files"
+ else
+ record_result "$id" "FAIL" "Expected pattern not found"
+ record_result_plain "$id" "FAIL" "Expected pattern not found"
+ fi
+ else
+ if [[ $count -eq 0 ]]; then
+ record_result "$id" "PASS" "Pattern not found (good)"
+ record_result_plain "$id" "PASS" "Pattern not found (good)"
+ else
+ record_result "$id" "FAIL" "Unwanted pattern in ${count} files"
+ record_result_plain "$id" "FAIL" "Unwanted pattern in ${count} files"
+ fi
+ fi
+ }
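The helpers above repeatedly capture `grep -c` counts with command substitution. A standalone sketch of the one behavioral quirk worth knowing when extending them (the temp file and variable names are illustrative, not part of the package): `grep -c` prints a count of `0` and still exits non-zero when nothing matches, so a `|| echo 0` fallback inside `$(...)` appends a second `0` to the captured value, while `|| true` keeps the single `0` intact.

```shell
#!/usr/bin/env bash
# Illustrative only: how grep -c interacts with command substitution fallbacks.
tmp=$(mktemp)
printf 'export function a() {}\n' > "$tmp"

hits=$(grep -c 'export' "$tmp" || true)             # one match: captures "1"
misses=$(grep -c 'no_such_token' "$tmp" || true)    # no match: captures a clean "0"
# grep -c already printed "0" before exiting non-zero, so the fallback echo
# appends a second "0" on a new line, corrupting the captured count:
doubled=$(grep -c 'no_such_token' "$tmp" || echo 0)

rm -f "$tmp"
echo "hits=${hits} misses=${misses}"
```

This is why counts captured this way are safer with `|| true` as the fallback than `|| echo 0`.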
@@ -0,0 +1,176 @@
+ #!/usr/bin/env bash
+ # pre_gate_runner.sh — Runs pre-gate checks before QA or Architect agents
+ # Usage: ./scripts/pre_gate_runner.sh <qa|arch> [worktree-path] [base-branch]
+ #
+ # Reads .bounce/gate-checks.json for check configuration.
+ # If no config exists, runs universal defaults with auto-detected stack.
+
+ set -euo pipefail
+
+ SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+ source "${SCRIPT_DIR}/pre_gate_common.sh"
+
+ # ── Arguments ────────────────────────────────────────────────────────
+
+ GATE_TYPE="${1:-}"
+ WORKTREE_PATH="${2:-.}"
+ BASE_BRANCH="${3:-}"
+ PLAIN_RESULTS=""
+
+ if [[ -z "$GATE_TYPE" ]] || [[ "$GATE_TYPE" != "qa" && "$GATE_TYPE" != "arch" ]]; then
+ echo "Usage: ./scripts/pre_gate_runner.sh <qa|arch> [worktree-path] [base-branch]"
+ echo ""
+ echo " qa — Run QA pre-gate checks (before QA agent)"
+ echo " arch — Run Architect pre-gate checks (before Architect agent)"
+ echo ""
+ echo " worktree-path — Path to story worktree (default: current dir)"
+ echo " base-branch — Branch to diff against (default: auto-detect)"
+ exit 1
+ fi
+
+ # Resolve to absolute path
+ WORKTREE_PATH="$(cd "$WORKTREE_PATH" && pwd)"
+
+ echo -e "${CYAN}V-Bounce OS Pre-Gate Scanner${NC}"
+ echo -e "Gate: ${YELLOW}${GATE_TYPE}${NC}"
+ echo -e "Target: ${WORKTREE_PATH}"
+ echo ""
+
+ # ── Auto-detect base branch if not provided ──────────────────────────
+
+ if [[ -z "$BASE_BRANCH" ]]; then
+ cd "$WORKTREE_PATH"
+ # Try to find the sprint branch this story branched from
+ BASE_BRANCH=$(git log --merges -1 --format=%H 2>/dev/null || echo "")
+ if [[ -z "$BASE_BRANCH" ]]; then
+ # Fall back to parent branch detection
+ BASE_BRANCH=$(git rev-parse --abbrev-ref HEAD@{upstream} 2>/dev/null || echo "")
+ fi
+ fi
+
+ # ── Load config or use defaults ──────────────────────────────────────
+
+ CONFIG_PATH="${WORKTREE_PATH}/.bounce/gate-checks.json"
+ HAS_CONFIG=false
+
+ if [[ -f "$CONFIG_PATH" ]]; then
+ HAS_CONFIG=true
+ echo -e "Config: ${GREEN}${CONFIG_PATH}${NC}"
+ else
+ # Check parent repo too (worktree might not have it)
+ REPO_ROOT=$(cd "$WORKTREE_PATH" && git rev-parse --show-toplevel 2>/dev/null || echo "$WORKTREE_PATH")
+ CONFIG_PATH="${REPO_ROOT}/.bounce/gate-checks.json"
+ if [[ -f "$CONFIG_PATH" ]]; then
+ HAS_CONFIG=true
+ echo -e "Config: ${GREEN}${CONFIG_PATH}${NC}"
+ else
+ echo -e "Config: ${YELLOW}None found — using universal defaults${NC}"
+ fi
+ fi
+
+ echo ""
+
+ # ── Get modified files ───────────────────────────────────────────────
+
+ MODIFIED_FILES=$(get_modified_files "$WORKTREE_PATH" "$BASE_BRANCH")
+
+ # ── Run checks ───────────────────────────────────────────────────────
+
+ run_checks_from_config() {
+ local gate="$1"
+ local checks_key="${gate}_checks"
+
+ # Parse config with node (available since V-Bounce requires it)
+ local check_ids
+ check_ids=$(node -e "
+ const fs = require('fs');
+ const cfg = JSON.parse(fs.readFileSync('${CONFIG_PATH}', 'utf8'));
+ const checks = cfg['${checks_key}'] || [];
+ checks.filter(c => c.enabled !== false).forEach(c => {
+ console.log(JSON.stringify(c));
+ });
+ " 2>/dev/null)
+
+ while IFS= read -r check_json; do
+ [[ -z "$check_json" ]] && continue
+
+ local id cmd pattern glob should_find max_lines description
+ id=$(echo "$check_json" | node -e "const d=JSON.parse(require('fs').readFileSync('/dev/stdin','utf8'));console.log(d.id||'')" 2>/dev/null)
+ cmd=$(echo "$check_json" | node -e "const d=JSON.parse(require('fs').readFileSync('/dev/stdin','utf8'));console.log(d.cmd||'')" 2>/dev/null)
+ pattern=$(echo "$check_json" | node -e "const d=JSON.parse(require('fs').readFileSync('/dev/stdin','utf8'));console.log(d.pattern||'')" 2>/dev/null)
+ glob=$(echo "$check_json" | node -e "const d=JSON.parse(require('fs').readFileSync('/dev/stdin','utf8'));console.log(d.glob||'')" 2>/dev/null)
+ should_find=$(echo "$check_json" | node -e "const d=JSON.parse(require('fs').readFileSync('/dev/stdin','utf8'));console.log(d.should_find||'false')" 2>/dev/null)
+ max_lines=$(echo "$check_json" | node -e "const d=JSON.parse(require('fs').readFileSync('/dev/stdin','utf8'));console.log(d.max_lines||'500')" 2>/dev/null)
+ description=$(echo "$check_json" | node -e "const d=JSON.parse(require('fs').readFileSync('/dev/stdin','utf8'));console.log(d.description||d.id||'')" 2>/dev/null)
+
+ case "$id" in
+ tests_exist) check_tests_exist "$WORKTREE_PATH" "$MODIFIED_FILES" ;;
+ tests_pass) check_tests_pass "$WORKTREE_PATH" ;;
+ build) check_build "$WORKTREE_PATH" ;;
+ lint) check_lint "$WORKTREE_PATH" ;;
+ no_debug_output) check_no_debug_output "$WORKTREE_PATH" "$MODIFIED_FILES" ;;
+ no_todo_fixme) check_no_todo_fixme "$WORKTREE_PATH" "$MODIFIED_FILES" ;;
+ exports_have_docs) check_exports_have_docs "$WORKTREE_PATH" "$MODIFIED_FILES" ;;
+ no_new_deps) check_no_new_dependencies "$WORKTREE_PATH" "$BASE_BRANCH" ;;
+ file_size) check_file_size_limit "$WORKTREE_PATH" "$MODIFIED_FILES" "$max_lines" ;;
+ custom_cmd) run_custom_check "$WORKTREE_PATH" "$description" "$cmd" "$description" ;;
+ custom_grep) run_custom_grep_check "$WORKTREE_PATH" "$description" "$pattern" "$glob" "$should_find" ;;
+ *)
+ # Unknown built-in — try as custom command if cmd is provided
+ if [[ -n "$cmd" ]]; then
+ run_custom_check "$WORKTREE_PATH" "$id" "$cmd" "$description"
+ else
+ record_result "$id" "SKIP" "Unknown check type"
+ record_result_plain "$id" "SKIP" "Unknown check type"
+ fi
+ ;;
+ esac
+ done <<< "$check_ids"
+ }
+
+ run_universal_defaults() {
+ local gate="$1"
+
+ # QA-level checks (always run)
+ check_tests_exist "$WORKTREE_PATH" "$MODIFIED_FILES"
+ check_tests_pass "$WORKTREE_PATH"
+ check_build "$WORKTREE_PATH"
+ check_lint "$WORKTREE_PATH"
+ check_no_debug_output "$WORKTREE_PATH" "$MODIFIED_FILES"
+ check_no_todo_fixme "$WORKTREE_PATH" "$MODIFIED_FILES"
+ check_exports_have_docs "$WORKTREE_PATH" "$MODIFIED_FILES"
+
+ # Architect-level checks (only for arch gate)
+ if [[ "$gate" == "arch" ]]; then
+ check_no_new_dependencies "$WORKTREE_PATH" "$BASE_BRANCH"
+ check_file_size_limit "$WORKTREE_PATH" "$MODIFIED_FILES" 500
+ fi
+ }
+
+ # ── Execute ──────────────────────────────────────────────────────────
+
+ if [[ "$HAS_CONFIG" == "true" ]]; then
+ run_checks_from_config "$GATE_TYPE"
+ else
+ run_universal_defaults "$GATE_TYPE"
+ fi
+
+ # ── Output ───────────────────────────────────────────────────────────
+
+ print_summary
+
+ # Write report
+ REPORT_DIR="${WORKTREE_PATH}/.bounce/reports"
+ REPORT_FILE="${REPORT_DIR}/pre-${GATE_TYPE}-scan.txt"
+ write_report "$REPORT_FILE"
+ echo ""
+ echo -e "Report: ${CYAN}${REPORT_FILE}${NC}"
+
+ # Exit code
+ if [[ $FAIL_COUNT -gt 0 ]]; then
+ echo -e "\n${RED}Gate check failed with ${FAIL_COUNT} failure(s).${NC}"
+ exit 1
+ else
+ echo -e "\n${GREEN}All checks passed.${NC}"
+ exit 0
+ fi
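For orientation, a minimal `.bounce/gate-checks.json` that this runner could parse might look like the following. The shape (top-level `qa_checks`/`arch_checks` arrays with `id`, `enabled`, `cmd`, `pattern`, `glob`, `should_find`, `max_lines`, and `description` fields) is read off the parsing code above; the concrete commands and patterns are illustrative assumptions, not defaults shipped with the package.

```json
{
  "qa_checks": [
    { "id": "tests_pass", "enabled": true },
    { "id": "no_debug_output", "enabled": true },
    { "id": "typecheck", "cmd": "npm run typecheck", "description": "TypeScript strict check" }
  ],
  "arch_checks": [
    { "id": "no_new_deps", "enabled": true },
    { "id": "file_size", "enabled": true, "max_lines": 400 },
    { "id": "custom_grep", "pattern": "style=\\{\\{", "glob": "*.tsx", "should_find": false, "description": "No inline styles" }
  ]
}
```

Note that an unrecognized `id` with a `cmd` (like the hypothetical `typecheck` entry) falls through to the runner's `*)` branch and executes as a custom command.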
@@ -201,20 +201,24 @@ Examples:
  e. DevOps runs `hotfix_manager.sh sync` to update any active story worktrees.
  f. Update Delivery Plan Status to "Done".

- 6. **Parallel Readiness Check** (before bouncing multiple stories simultaneously):
+ 6. **Gate Config Check**:
+ - If `.bounce/gate-checks.json` does not exist, run `./scripts/init_gate_config.sh` to auto-detect the project stack and generate default gate checks.
+ - If it exists, verify it's current (stack detection may have changed).
+
+ 7. **Parallel Readiness Check** (before bouncing multiple stories simultaneously):
  - Verify test runner config excludes `.worktrees/` (vitest, jest, pytest, etc.)
  - Verify no shared mutable state between worktrees (e.g., shared temp files, singletons writing to same path)
  - Verify `.gitignore` includes `.worktrees/`
  If any check fails, fix before spawning parallel stories. Intermittent test failures from worktree cross-contamination erode trust in the test suite fast.

- 7. **Dependency Check & Execution Mode**:
+ 8. **Dependency Check & Execution Mode**:
  - For each story, check the `Depends On:` field in its template.
  - If Story B depends on Story A, you MUST execute them sequentially. Do not create Story B's worktree or spawn its Developer until Story A has successfully passed the DevOps merge step.
  - Determine Execution Mode:
  - **Full Bounce (Default)**: Normal L2-L4 stories go through full Dev → QA → Architect → DevOps flow.
  - **Fast Track (L1/L2 Minor)**: For cosmetic UI tweaks or isolated refactors, execute Dev → DevOps only. Skip QA and Architect loops to save overhead. Validate manually during Sprint Review.

- 8. Update sprint-{XX}.md: Status → "Active"
+ 9. Update sprint-{XX}.md: Status → "Active"
  ```

  ### Step 1: Story Initialization
@@ -245,11 +249,18 @@ mkdir -p .worktrees/STORY-{ID}-{StoryName}/.bounce/{tasks,reports}

  ### Step 3: QA Pass
  ```
+ 0. Run pre-QA gate scan:
+ ./scripts/pre_gate_runner.sh qa .worktrees/STORY-{ID}-{StoryName}/ sprint/S-{XX}
+ - If scan FAILS on trivial issues (debug statements, missing JSDoc, TODOs):
+ Return to Developer for quick fix. Do NOT spawn QA for mechanical failures.
+ - If scan PASSES: Include scan output path in the QA task file.
  1. Spawn qa subagent in .worktrees/STORY-{ID}-{StoryName}/ with:
  - Developer Implementation Report
+ - Pre-QA scan results (.bounce/reports/pre-qa-scan.txt)
  - Story §2 The Truth (acceptance criteria)
  - LESSONS.md
  2. QA validates against Gherkin scenarios, runs vibe-code-review
+ (skipping checks already covered by pre-qa-scan.txt)
  3. If FAIL:
  - QA writes Bug Report (STORY-{ID}-{StoryName}-qa-bounce{N}.md)
  - Increment bounce counter
@@ -262,8 +273,14 @@ mkdir -p .worktrees/STORY-{ID}-{StoryName}/.bounce/{tasks,reports}

  ### Step 4: Architect Pass
  ```
+ 0. Run pre-Architect gate scan:
+ ./scripts/pre_gate_runner.sh arch .worktrees/STORY-{ID}-{StoryName}/ sprint/S-{XX}
+ - If scan reveals new dependencies or structural violations:
+ Return to Developer for resolution. Do NOT spawn Architect for mechanical failures.
+ - If scan PASSES: Include scan output path in the Architect task file.
  1. Spawn architect subagent in .worktrees/STORY-{ID}-{StoryName}/ with:
  - All reports for this story
+ - Pre-Architect scan results (.bounce/reports/pre-arch-scan.txt)
  - Full Story spec + Roadmap §3 ADRs
  - LESSONS.md
  2. If FAIL:
@@ -286,7 +303,7 @@ mkdir -p .worktrees/STORY-{ID}-{StoryName}/.bounce/{tasks,reports}
  - Pre-merge checks (worktree clean, gate reports verified)
  - Archive reports to .bounce/archive/S-{XX}/STORY-{ID}-{StoryName}/
  - Merge story branch into sprint branch (--no-ff)
- - Post-merge validation (tests + build on sprint branch)
+ - Post-merge validation (tests + lint + build on sprint branch)
  - Worktree removal and story branch cleanup
  3. DevOps writes Merge Report to .bounce/archive/S-{XX}/STORY-{ID}-{StoryName}/STORY-{ID}-{StoryName}-devops.md
  4. If merge conflicts:
@@ -107,6 +107,32 @@ For script changes, describe the new behavior.}
  **Reversibility:** {Easy — revert the edit / Medium — downstream docs may need updating}
  ```

+ #### Special Case: Gate Check Proposals
+
+ When agent feedback reveals a mechanical check that was repeated manually across multiple stories (e.g., "QA checked for inline styles 4 times"), propose adding it as a pre-gate check instead of a skill/template change:
+
+ ```markdown
+ ### Proposal {N}: Add pre-gate check — {check name}
+
+ **Finding:** {Agent} manually performed {check description} in {N} stories this sprint.
+ **Tokens saved:** ~{estimate} per story (based on agent token usage for this check type)
+ **Gate:** qa / arch
+ **Check config to add to `.bounce/gate-checks.json`:**
+ ```json
+ {
+ "id": "custom_grep",
+ "gate": "arch",
+ "enabled": true,
+ "pattern": "{regex}",
+ "glob": "{file pattern}",
+ "should_find": false,
+ "description": "{human-readable description}"
+ }
+ ```
+ ```
+
+ This is the primary mechanism for the gate system to grow organically — the `improve` skill reads what agents repeatedly checked by hand and proposes automating those checks via `gate-checks.json`.
+
  ### Step 5: Present to Human
  Present ALL proposals as a numbered list. The human can:
  - **Approve** — apply the change