triflux 8.12.5 → 9.1.0

This diff shows the changes between two publicly released versions of this package, as published to a supported public registry. It is provided for informational purposes only.
@@ -9,7 +9,7 @@
  {
  "name": "triflux",
  "description": "CLI-first multi-model orchestrator for Claude Code. Routes tasks to Codex, Gemini, and Claude CLIs with automatic triage (Sonnet classification + Opus decomposition), DAG-based parallel execution, and cost-optimized routing. Includes 16 skills, HUD status bar, and shell-based CLI routing wrapper.",
- "version": "7.1.4",
+ "version": "9.0.0",
  "author": {
  "name": "tellang"
  },
@@ -27,5 +27,5 @@
  ]
  }
  ],
- "version": "7.1.4"
+ "version": "9.0.0"
  }
@@ -1,6 +1,6 @@
  {
  "name": "triflux",
- "version": "8.11.2",
+ "version": "9.0.0",
  "description": "CLI-first multi-model orchestrator for Claude Code — route tasks to Codex, Gemini, and Claude",
  "author": {
  "name": "tellang"
package/README.ko.md CHANGED
@@ -10,7 +10,7 @@

  <p align="center">
  <strong>Consensus Intelligence 기반 Tri-CLI 오케스트레이션</strong><br>
- <em>Claude + Codex + Gemini — 3자 토론, Anti-Herding 검증, Deep/Light 변형을 갖춘 35개 스킬.</em>
+ <em>Claude + Codex + Gemini — 자연어 라우팅, 교차 모델 리뷰, Deep/Light 변형을 갖춘 38개 스킬.</em>
  </p>

  <p align="center">
@@ -27,7 +27,7 @@
  <p align="center">
  <a href="#빠른-시작">빠른 시작</a> ·
  <a href="#tri-cli-합의-엔진">Tri-CLI 합의 엔진</a> ·
- <a href="#35개-스킬">35개 스킬</a> ·
+ <a href="#38개-스킬">38개 스킬</a> ·
  <a href="#아키텍처">아키텍처</a> ·
  <a href="#deep-vs-light">Deep vs Light</a> ·
  <a href="#보안">보안</a>
@@ -80,21 +80,26 @@ tfx setup

  ---

- ## v8의 새로운 기능
+ ## v9의 새로운 기능

- **triflux v8**은 **Tri-CLI Consensus Intelligence**를 도입합니다. Claude, Codex, Gemini가 각각 독립적으로 분석한 뒤, 구조화된 토론을 거쳐 교차 검증하는 근본적으로 새로운 접근 방식입니다. 모든 Deep 스킬은 Anti-Herding(편향 오염 방지)과 Consensus Gate를 통한 출력 보장을 제공합니다.
+ **triflux v9**은 **하네스 네이티브 인텔리전스**를 도입합니다. 자연어로 말하면 적절한 스킬로 자동 라우팅되고, 교차 모델 리뷰로 동일 모델의 self-approve를 차단합니다.

- ### 주요 특징
+ ### v9 주요 특징
+
+ - **자연어 라우팅** — "리뷰해줘"라고 말하면 `/tfx-review`가 자동 호출. "제대로/꼼꼼히" 수정자로 Deep 변형 자동 에스컬레이션
+ - **교차 모델 리뷰** — Claude가 작성하면 Codex가 리뷰, Codex가 작성하면 Claude가 리뷰. 동일 모델 self-approve 차단. 커밋 전 미검증 파일 nudge
+ - **맥락 격리** — 현재 맥락과 무관한 요청을 감지하면 별도 psmux 세션으로 분리 제안
+ - **38개 스킬** — Light 14개 + Deep 10개 + Infrastructure 14개, 10개 도메인으로 구성
+ - **Codex Swarm 강화** — PowerShell `.ps1` 런처, 프로파일 기반 실행, `/merge-worktree`로 결과 자동 수집
+ - **스킬 메타데이터** — 모든 스킬에 래퍼/인프라/Light-Deep 관계 표기. 트리거 충돌 해소
+
+ ### v8 기반 (계속 유지)

- - **35개 스킬** — Light 11개 + Deep 10개 + Infrastructure 12개, 9개 도메인으로 구성
  - **Tri-Debate Engine** — 3개 CLI가 독립 분석 후 Anti-Herding, 교차 검증, 합의 점수 산출
  - **Deep/Light 변형** — 모든 기능에 토큰 효율적인 Light 모드와 정밀한 Deep 모드를 제공
- - **Consensus Gate** — Deep 스킬은 3개 CLI 중 2개 이상의 동의를 요구하며, 학습된 가중치로 CLI 신뢰도를 추적
- - **Anti-Herding** — 1라운드는 상호 참조 없이 병렬 실행하여 편향 오염을 원천 차단
- - **Expert Panel** — `tfx-panel`을 통한 가상 전문가 시뮬레이션 (Fowler, Newman, Porter 등)
- - **94% 토큰 절감** — `tfx-index`가 58K 토큰 분량의 파일 읽기를 3KB 프로젝트 맵으로 대체
- - **Persistence Loop** — `tfx-persist`(정식 이름, 3자 검증), `/tfx-ralph`(호환 별칭), `tfx-sisyphus`(자동 라우팅)가 검증 완료까지 반복 실행
- - **Hub IPC** — Named Pipe 및 HTTP MCP 브리지를 활용한 초고속 상주형 Hub 서버
+ - **Consensus Gate** — Deep 스킬은 3개 CLI 중 2개 이상의 동의 요구
+ - **Expert Panel** — `tfx-panel`을 통한 가상 전문가 시뮬레이션
+ - **Hub IPC** — Named Pipe 및 HTTP MCP 브리지를 활용한 상주형 Hub 서버
  - **psmux / Windows 네이티브** — `tmux`(WSL)와 `psmux`(Windows Terminal) 하이브리드 지원

  ---
@@ -128,7 +133,7 @@ Phase 3: Resolution (합의율 < 70%일 경우)

  ---

- ## 35개 스킬
+ ## 38개 스킬

  ### 리서치

package/README.md CHANGED
@@ -10,7 +10,7 @@

  <p align="center">
  <strong>Tri-CLI Orchestration with Consensus Intelligence</strong><br>
- <em>Claude + Codex + Gemini — 3-party debate, anti-herding verification, and 35 skills with Deep/Light variants.</em>
+ <em>Claude + Codex + Gemini — natural language routing, cross-model review, 38 skills with Deep/Light variants.</em>
  </p>

  <p align="center">
@@ -27,7 +27,7 @@
  <p align="center">
  <a href="#quick-start">Quick Start</a> ·
  <a href="#tri-cli-consensus">Tri-CLI Consensus</a> ·
- <a href="#35-skills">35 Skills</a> ·
+ <a href="#38-skills">38 Skills</a> ·
  <a href="#architecture">Architecture</a> ·
  <a href="#deep-vs-light">Deep vs Light</a> ·
  <a href="#security">Security</a>
@@ -80,21 +80,26 @@ tfx setup

  ---

- ## What's New in v8
+ ## What's New in v9

- **triflux v8** introduces **Tri-CLI Consensus Intelligence** — a fundamentally new approach where Claude, Codex, and Gemini independently analyze, then cross-validate through structured debate. Every Deep skill guarantees anti-herding (no bias contamination) and consensus-gated output.
+ **triflux v9** introduces **Harness-Native Intelligence** — speak naturally, and triflux routes to the right skill automatically. Cross-model review ensures no model approves its own work.

- ### Highlights
+ ### v9 Highlights
+
+ - **Natural Language Routing** — Say "review this" or "리뷰해줘" instead of memorizing `/tfx-review`. Depth modifiers ("thoroughly", "제대로") auto-escalate to Deep variants
+ - **Cross-Model Review** — Claude writes → Codex reviews. Codex writes → Claude reviews. Same-model self-approve is blocked. Pre-commit nudge for unreviewed files
+ - **Context Isolation** — Off-topic requests auto-detected; spawns a clean psmux session so your main context stays focused
+ - **38 Skills** — 14 Light + 10 Deep + 14 Infrastructure, organized across 10 domains
+ - **Codex Swarm Hardened** — PowerShell `.ps1` launchers, profile-based execution (no `--dangerously` flag), `/merge-worktree` auto-invocation for result collection
+ - **Skill Metadata** — Every skill labeled: wrapper/infrastructure/Light-Deep pairs. Trigger conflicts resolved
+
+ ### v8 Foundations (carried forward)

- - **35 Skills** — 11 Light + 10 Deep + 12 Infrastructure, organized across 9 domains
  - **Tri-Debate Engine** — 3-CLI independent analysis with anti-herding, cross-validation, and consensus scoring
  - **Deep/Light Variants** — Every capability has a token-efficient Light mode and a thorough Deep mode
  - **Consensus Gate** — Deep skills require 2/3+ CLI agreement; learned weights track CLI reliability over time
- - **Anti-Herding** — Round 1 runs in parallel with zero cross-visibility to prevent bias contamination
- - **Expert Panel** — Virtual expert simulation (Fowler, Newman, Porter, etc.) via `tfx-panel`
- - **94% Token Reduction** — `tfx-index` creates a 3KB project map replacing 58K tokens of file reads
- - **Persistence Loops** — `tfx-persist` (canonical, 3-party verified), `/tfx-ralph` (compat alias), and `tfx-sisyphus` (auto-routing) run until verified complete
- - **Hub IPC** — Lightning-fast resident Hub server with Named Pipe & HTTP MCP bridge
+ - **Expert Panel** — Virtual expert simulation via `tfx-panel`
+ - **Hub IPC** — Resident Hub server with Named Pipe & HTTP MCP bridge
  - **psmux / Windows Native** — Hybrid support for `tmux` (WSL) and `psmux` (Windows Terminal)

  ---
@@ -128,7 +133,7 @@ Phase 3: Resolution (if consensus < 70%)

  ---

- ## 35 Skills
+ ## 38 Skills

  ### Research

package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "triflux",
- "version": "8.12.5",
+ "version": "9.1.0",
  "description": "CLI-first multi-model orchestrator for Claude Code — route tasks to Codex, Gemini, and Claude",
  "type": "module",
  "bin": {
@@ -0,0 +1,180 @@
+ #!/usr/bin/env node
+
+ import { existsSync, readFileSync, unlinkSync } from "node:fs";
+ import { join } from "node:path";
+
+ const SESSION_TTL_SEC = 30 * 60;
+ const STATE_REL_PATH = join(".omc", "state", "cross-review.json");
+
+ function readStdin() {
+   return new Promise((resolve) => {
+     let raw = "";
+     process.stdin.setEncoding("utf8");
+     process.stdin.on("data", (chunk) => {
+       raw += chunk;
+     });
+     process.stdin.on("end", () => resolve(raw));
+     process.stdin.on("error", () => resolve(""));
+   });
+ }
+
+ function parseJson(raw) {
+   try {
+     return JSON.parse(raw);
+   } catch {
+     return null;
+   }
+ }
+
+ function nowSec() {
+   return Math.floor(Date.now() / 1000);
+ }
+
+ function resolveBaseDir(payload) {
+   if (typeof payload?.cwd === "string" && payload.cwd.trim()) return payload.cwd;
+   if (typeof payload?.directory === "string" && payload.directory.trim()) return payload.directory;
+   return process.cwd();
+ }
+
+ function expectedReviewer(author) {
+   if (author === "claude") return "codex";
+   if (author === "codex") return "claude";
+   if (author === "gemini") return "claude";
+   return "";
+ }
+
+ function shouldTrackPath(filePath) {
+   if (typeof filePath !== "string" || !filePath.trim()) return false;
+
+   const lower = filePath.toLowerCase();
+   if (lower.startsWith(".omc/") || lower.startsWith(".claude/")) return false;
+   if (lower === "package-lock.json" || lower.endsWith("/package-lock.json")) return false;
+   if (/\.(md|lock|yml|yaml)$/i.test(lower)) return false;
+   return true;
+ }
+
+ function loadState(statePath) {
+   if (!existsSync(statePath)) return null;
+
+   try {
+     const state = JSON.parse(readFileSync(statePath, "utf8"));
+     const startedAt = Number(state?.session_start || 0);
+     const expired = !startedAt || nowSec() - startedAt > SESSION_TTL_SEC;
+     if (expired) {
+       try {
+         unlinkSync(statePath);
+       } catch {}
+       return null;
+     }
+     return state;
+   } catch {
+     return null;
+   }
+ }
+
+ function isGitCommitCommand(command) {
+   if (typeof command !== "string") return false;
+   return /\bgit\s+commit\b/i.test(command);
+ }
+
+ function nudge(message) {
+   process.stdout.write(JSON.stringify({
+     hookSpecificOutput: {
+       hookEventName: "PreToolUse",
+       additionalContext: message,
+     },
+   }));
+   process.exit(0);
+ }
+
+ function deny(message) {
+   process.stderr.write(message);
+   process.exit(2);
+ }
+
+ function summarizePending(entries) {
+   return entries
+     .map((item) => {
+       const reviewer = item.expectedReviewer || "cross-reviewer";
+       return ` * ${item.path} (author=${item.author}, reviewer=${reviewer})`;
+     })
+     .join("\n");
+ }
+
+ async function main() {
+   if (process.env.TFX_SKIP_CROSS_REVIEW === "1") {
+     process.exit(0);
+   }
+
+   const raw = await readStdin();
+   if (!raw.trim()) process.exit(0);
+
+   const payload = parseJson(raw);
+   if (!payload) process.exit(0);
+
+   const toolName = payload.tool_name || "";
+   const toolInput = payload.tool_input || {};
+
+   if (toolName !== "Bash") process.exit(0);
+   if (!isGitCommitCommand(toolInput.command || "")) process.exit(0);
+
+   // Propagate cwd: use the same resolveBaseDir as the tracker
+   const baseDir = resolveBaseDir(payload);
+   const statePath = join(baseDir, STATE_REL_PATH);
+
+   const state = loadState(statePath);
+   if (!state || !state.files || typeof state.files !== "object") process.exit(0);
+
+   const pending = [];
+   const selfApproved = [];
+
+   for (const [path, info] of Object.entries(state.files)) {
+     if (!shouldTrackPath(path)) continue;
+     const meta = info && typeof info === "object" ? info : {};
+     const author = String(meta.author || "").toLowerCase();
+     const reviewer = String(meta.reviewer || "").toLowerCase();
+     const reviewed = meta.reviewed === true;
+     const requiredReviewer = expectedReviewer(author);
+
+     // Explicitly check the self_approved flag set by the tracker
+     if (meta.self_approved === true) {
+       selfApproved.push({ path, author, reviewer: meta.reviewer || author, expectedReviewer: requiredReviewer });
+       continue;
+     }
+
+     if (reviewed && reviewer && reviewer === author) {
+       selfApproved.push({ path, author, reviewer, expectedReviewer: requiredReviewer });
+       continue;
+     }
+
+     if (reviewed && requiredReviewer && reviewer && reviewer !== requiredReviewer) {
+       selfApproved.push({ path, author, reviewer, expectedReviewer: requiredReviewer });
+       continue;
+     }
+
+     if (!reviewed) {
+       pending.push({ path, author, expectedReviewer: requiredReviewer });
+     }
+   }
+
+   if (selfApproved.length > 0) {
+     const lines = selfApproved
+       .map((item) => ` * ${item.path} (author=${item.author}, reviewer=${item.reviewer}, required=${item.expectedReviewer || "n/a"})`)
+       .join("\n");
+     deny(
+       `[cross-review] self-approve 차단: 동일/비허용 reviewer가 감지되었습니다.\n${lines}\n` +
+       "규칙: author=claude -> reviewer=codex, author=codex -> reviewer=claude",
+     );
+   }
+
+   if (pending.length > 0) {
+     nudge(
+       `[cross-review] git commit 전에 교차 검증이 필요합니다.\n${summarizePending(pending)}\n` +
+       "규칙: author=claude -> reviewer=codex, author=codex -> reviewer=claude",
+     );
+   }
+
+   process.exit(0);
+ }
+
+ main().catch(() => process.exit(0));
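For illustration only (this is not part of the package), the core rule the gate script above enforces can be reduced to a standalone sketch. `expectedReviewer` mirrors the hook's function; `classify` and the sample file entries are hypothetical stand-ins for the persisted state:

```javascript
// Reviewer-pairing rule, as in the gate hook above.
function expectedReviewer(author) {
  if (author === "claude") return "codex";
  if (author === "codex") return "claude";
  if (author === "gemini") return "claude";
  return "";
}

// Simplified classification: a commit is denied when a file's reviewer
// equals its author, and nudged when a tracked file is still unreviewed.
function classify(files) {
  const pending = [];
  const selfApproved = [];
  for (const [path, meta] of Object.entries(files)) {
    if (meta.reviewed && meta.reviewer === meta.author) selfApproved.push(path);
    else if (!meta.reviewed) pending.push(path);
  }
  return { pending, selfApproved };
}

// Hypothetical state entries, shaped like the tracker's records.
const { pending, selfApproved } = classify({
  "src/app.js": { author: "claude", reviewed: false },
  "src/db.js": { author: "codex", reviewed: true, reviewer: "codex" },
  "src/ok.js": { author: "codex", reviewed: true, reviewer: "claude" },
});

console.log(expectedReviewer("claude")); // → codex
console.log(pending); // → [ 'src/app.js' ]
console.log(selfApproved); // → [ 'src/db.js' ]
```

The full hook is stricter than this sketch: it also treats a reviewer other than the designated cross-model reviewer as a violation, so only the expected CLI can clear a file before commit.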
@@ -0,0 +1,279 @@
+ #!/usr/bin/env node
+
+ import { existsSync, mkdirSync, readFileSync, unlinkSync, writeFileSync } from "node:fs";
+ import { dirname, isAbsolute, join, relative } from "node:path";
+
+ const SESSION_TTL_SEC = 30 * 60;
+ const STATE_REL_PATH = join(".omc", "state", "cross-review.json");
+ const EXCLUDED_FILE_PATTERN = /\.(md|lock|yml|yaml)$/i;
+
+ function nowSec() {
+   return Math.floor(Date.now() / 1000);
+ }
+
+ function readStdin() {
+   return new Promise((resolve) => {
+     let raw = "";
+     process.stdin.setEncoding("utf8");
+     process.stdin.on("data", (chunk) => {
+       raw += chunk;
+     });
+     process.stdin.on("end", () => resolve(raw));
+     process.stdin.on("error", () => resolve(""));
+   });
+ }
+
+ function parseJson(raw) {
+   try {
+     return JSON.parse(raw);
+   } catch {
+     return null;
+   }
+ }
+
+ function resolveBaseDir(payload) {
+   if (typeof payload?.cwd === "string" && payload.cwd.trim()) return payload.cwd;
+   if (typeof payload?.directory === "string" && payload.directory.trim()) return payload.directory;
+   return process.cwd();
+ }
+
+ function resolveStatePath(baseDir) {
+   return join(baseDir, STATE_REL_PATH);
+ }
+
+ function createEmptyState() {
+   return {
+     files: {},
+     session_start: nowSec(),
+   };
+ }
+
+ function loadState(statePath) {
+   if (!existsSync(statePath)) return createEmptyState();
+
+   try {
+     const parsed = JSON.parse(readFileSync(statePath, "utf8"));
+     const sessionStart = Number(parsed?.session_start || 0);
+     const expired = !sessionStart || nowSec() - sessionStart > SESSION_TTL_SEC;
+     if (expired) {
+       try {
+         unlinkSync(statePath);
+       } catch {}
+       return createEmptyState();
+     }
+
+     return {
+       files: parsed?.files && typeof parsed.files === "object" ? parsed.files : {},
+       session_start: sessionStart,
+     };
+   } catch {
+     return createEmptyState();
+   }
+ }
+
+ function saveState(statePath, state) {
+   mkdirSync(dirname(statePath), { recursive: true });
+   writeFileSync(statePath, `${JSON.stringify(state, null, 2)}\n`, "utf8");
+ }
+
+ function normalizePath(filePath, baseDir) {
+   if (typeof filePath !== "string" || !filePath.trim()) return "";
+
+   const raw = filePath.trim();
+   let normalized = raw;
+
+   if (isAbsolute(raw)) {
+     const relPath = relative(baseDir, raw);
+     if (relPath.startsWith("..")) return "";
+     normalized = relPath;
+   }
+
+   return normalized.replace(/\\/g, "/").replace(/^\.\//, "");
+ }
+
+ function shouldTrackPath(filePath) {
+   if (!filePath) return false;
+   const lower = filePath.toLowerCase();
+
+   if (lower.startsWith(".omc/") || lower.startsWith(".claude/")) return false;
+   if (lower === "package-lock.json" || lower.endsWith("/package-lock.json")) return false;
+   if (EXCLUDED_FILE_PATTERN.test(lower)) return false;
+   return true;
+ }
+
+ function extractFilePath(toolInput) {
+   if (!toolInput || typeof toolInput !== "object") return "";
+   const candidate = toolInput.file_path ?? toolInput.path ?? toolInput.filePath ?? "";
+   return typeof candidate === "string" ? candidate : "";
+ }
+
+ function extractCandidatePaths(payload, baseDir) {
+   const candidates = new Set();
+
+   const looksLikePath = (value) => {
+     if (typeof value !== "string") return false;
+     const trimmed = value.trim();
+     if (!trimmed || /\s/.test(trimmed)) return false;
+     if (trimmed.length > 260) return false;
+     if (!trimmed.includes(".") && !trimmed.includes("/") && !trimmed.includes("\\")) return false;
+     return /^[./\\A-Za-z0-9_-]/.test(trimmed);
+   };
+
+   const addPath = (value) => {
+     if (!looksLikePath(value)) return;
+     const normalized = normalizePath(value, baseDir);
+     if (shouldTrackPath(normalized)) candidates.add(normalized);
+   };
+
+   const scanValue = (value, depth = 0) => {
+     if (depth > 3 || value == null) return;
+     if (typeof value === "string") {
+       addPath(value);
+       return;
+     }
+     if (Array.isArray(value)) {
+       for (const item of value) scanValue(item, depth + 1);
+       return;
+     }
+     if (typeof value !== "object") return;
+
+     for (const [key, child] of Object.entries(value)) {
+       const keyLower = key.toLowerCase();
+       if (keyLower.includes("file") || keyLower.includes("path")) {
+         scanValue(child, depth + 1);
+       }
+     }
+   };
+
+   addPath(extractFilePath(payload?.tool_input));
+
+   scanValue(payload?.tool_response);
+   scanValue(payload?.tool_output);
+   scanValue(payload?.result);
+   scanValue(payload?.output);
+
+   return [...candidates];
+ }
+
+ function collectStrings(value, out = [], depth = 0) {
+   if (depth > 4) return out;
+   if (typeof value === "string") {
+     out.push(value);
+     return out;
+   }
+   if (!value || typeof value !== "object") return out;
+   if (Array.isArray(value)) {
+     for (const item of value) collectStrings(item, out, depth + 1);
+     return out;
+   }
+
+   for (const key of Object.keys(value)) {
+     collectStrings(value[key], out, depth + 1);
+   }
+   return out;
+ }
+
+ function detectCliActor(payload) {
+   const lines = collectStrings(payload).join("\n");
+   const match = lines.match(/\bcli\s*[:=]\s*(claude|codex|gemini)\b/i);
+   return match ? match[1].toLowerCase() : "";
+ }
+
+ function detectAuthor(payload) {
+   const actor = detectCliActor(payload);
+   if (actor) return actor;
+   return "claude";
+ }
+
+ function expectedReviewer(author) {
+   if (author === "claude") return "codex";
+   if (author === "codex") return "claude";
+   if (author === "gemini") return "claude";
+   return "";
+ }
+
+ function applyReviewer(state, reviewer, ts) {
+   for (const [filePath, meta] of Object.entries(state.files)) {
+     if (!meta || typeof meta !== "object") continue;
+     if (!shouldTrackPath(filePath)) continue;
+
+     const author = String(meta.author || "").toLowerCase();
+     const expected = expectedReviewer(author);
+
+     if (expected && reviewer === expected) {
+       meta.reviewed = true;
+       meta.reviewer = reviewer;
+       meta.reviewed_ts = ts;
+       delete meta.self_approved;
+       continue;
+     }
+
+     if (reviewer === author) {
+       meta.reviewed = false;
+       meta.reviewer = reviewer;
+       meta.reviewed_ts = ts;
+       meta.self_approved = true;
+     }
+   }
+ }
+
+ async function main() {
+   if (process.env.TFX_SKIP_CROSS_REVIEW === "1") {
+     process.exit(0);
+   }
+
+   const raw = await readStdin();
+   if (!raw.trim()) {
+     process.exit(0);
+   }
+
+   const payload = parseJson(raw);
+   if (!payload) {
+     process.exit(0);
+   }
+
+   const baseDir = resolveBaseDir(payload);
+   const statePath = resolveStatePath(baseDir);
+   const state = loadState(statePath);
+   const toolName = payload.tool_name || "";
+   const ts = nowSec();
+   let changed = false;
+
+   if (toolName === "Edit" || toolName === "Write") {
+     const toolInput = payload.tool_input || {};
+     const normalizedPath = normalizePath(extractFilePath(toolInput), baseDir);
+     if (shouldTrackPath(normalizedPath)) {
+       state.files[normalizedPath] = {
+         author: detectAuthor(payload),
+         ts,
+         reviewed: false,
+       };
+       changed = true;
+     }
+   } else if (toolName === "Bash") {
+     const actor = detectCliActor(payload);
+     if (actor) {
+       const paths = extractCandidatePaths(payload, baseDir);
+       if (paths.length > 0) {
+         for (const path of paths) {
+           state.files[path] = {
+             author: actor,
+             ts,
+             reviewed: false,
+           };
+         }
+       } else {
+         applyReviewer(state, actor, ts);
+       }
+       changed = true;
+     }
+   }
+
+   if (changed) {
+     saveState(statePath, state);
+   }
+
+   process.exit(0);
+ }
+
+ main().catch(() => process.exit(0));
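Tracing one file through `applyReviewer` shows how the state the tracker persists to `.omc/state/cross-review.json` evolves. This is a trimmed, standalone copy of the tracker functions above; the path and timestamps are illustrative, not from the package:

```javascript
// Trimmed copies of the tracker's pairing and review-application logic.
function expectedReviewer(author) {
  if (author === "claude") return "codex";
  if (author === "codex") return "claude";
  if (author === "gemini") return "claude";
  return "";
}

function applyReviewer(state, reviewer, ts) {
  for (const meta of Object.values(state.files)) {
    const expected = expectedReviewer(meta.author);
    if (expected && reviewer === expected) {
      // Designated cross-model reviewer: mark reviewed, clear any flag.
      meta.reviewed = true;
      meta.reviewer = reviewer;
      meta.reviewed_ts = ts;
      delete meta.self_approved;
    } else if (reviewer === meta.author) {
      // Author reviewing its own edit: flag as self-approved.
      meta.reviewed = false;
      meta.reviewer = reviewer;
      meta.reviewed_ts = ts;
      meta.self_approved = true;
    }
  }
}

// 1. Claude edits a file: the tracker records it as unreviewed.
const state = {
  files: { "src/router.ts": { author: "claude", ts: 1730000000, reviewed: false } },
  session_start: 1730000000,
};

// 2. Claude tries to review its own edit: flagged, so the gate would deny.
applyReviewer(state, "claude", 1730000100);
console.log(state.files["src/router.ts"].self_approved); // → true

// 3. Codex, the expected reviewer, reviews: the file clears the gate.
applyReviewer(state, "codex", 1730000200);
console.log(state.files["src/router.ts"].reviewed); // → true
```

Because the gate hook reads exactly this structure before `git commit`, step 2 alone would block the commit, while step 3 clears both the `self_approved` flag and the pending-review nudge.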