scene-capability-engine 3.6.47 → 3.6.49

package/CHANGELOG.md CHANGED
@@ -7,6 +7,22 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
  ## [Unreleased]
 
+ ## [3.6.49] - 2026-03-14
+
+ ### Changed
+ - Added `scripts/release-doc-version-audit.js` and wired it into package scripts, CI, release, and steering-hygiene workflows so README release metadata drift now blocks publish instead of relying on manual sync.
+ - Synced `README.md`, `README.zh.md`, `.sce/README.md`, and `template/.sce/README.md` release footers to the current package version/date and removed stale hardcoded capability-version headings from long-lived project guide READMEs.
+ - Added a new project-calibrated refactor trigger capability via `scripts/refactor-trigger-audit.js`, plus a matching steering baseline rule: each SCE project should periodically evaluate its own file-size distribution, while `2000 / 4000 / 10000` remains the default source-file fallback when no project-specific threshold is set.
+
+ ## [3.6.48] - 2026-03-14
+
+ ### Changed
+ - Added `governance.duplicate_detection_scope` to studio spec governance policy with supported values `all`, `non_completed`, and `active_only`.
+ - Changed the default duplicate governance scope to `non_completed`, so duplicate detection compares active and stale specs but no longer treats completed historical specs as duplicate noise.
+ - Updated studio intake normalization and scene governance reporting to filter duplicate candidate sets by the configured scope before generating duplicate alerts.
+ - Synced the new duplicate governance scope into takeover baseline defaults so adopted and upgraded projects inherit the same non-completed duplicate detection behavior by default.
+ - Added regression coverage proving completed history specs do not trigger duplicate governance alerts under the new default scope.
+
  ## [3.6.47] - 2026-03-14
 
  ### Changed
package/README.md CHANGED
@@ -143,6 +143,7 @@ SCE is opinionated by default.
  - `verify` and `release` enforce problem-closure and related gates when a spec is bound.
  - Autonomous program execution applies gate evaluation, fallback-chain logic, governance replay, and auto-remediation.
  - State persistence prefers SQLite, not ad hoc local caches.
+ - Oversized source files must trigger periodic refactor assessment; SCE recommends project-specific thresholds, with `2000 / 4000 / 10000` as the default source-file fallback.
  - Release validation defaults to integration test coverage via `npm run test:release` for faster publish feedback.
 
  ---
@@ -216,5 +217,5 @@ MIT. See [LICENSE](LICENSE).
 
  ---
 
- **Version**: 3.6.34
- **Last Updated**: 2026-03-08
+ **Version**: 3.6.49
+ **Last Updated**: 2026-03-14
package/README.zh.md CHANGED
@@ -148,6 +148,7 @@ SCE 默认是强治理的。
  - 当 spec 绑定时,`verify` 和 `release` 默认执行 problem-closure 等相关门禁
  - `close-loop-program` 默认带 gate 评估、fallback-chain、governance replay、auto-remediation
  - 状态持久化默认优先走 SQLite,而不是零散本地缓存
+ - 超大源文件必须定期触发重构评估;SCE 优先建议按项目给出阈值,若项目尚未设定,则默认参考 `2000 / 4000 / 10000`
  - 发布默认验证走 integration gate:`npm run test:release`
 
  ---
@@ -221,5 +222,5 @@ MIT,见 [LICENSE](LICENSE)。
 
  ---
 
- **版本**:3.6.34
- **最后更新**:2026-03-08
+ **版本**:3.6.49
+ **最后更新**:2026-03-14
@@ -9,6 +9,7 @@ This directory stores release-facing documents:
  ## Archived Versions
 
  - [Release checklist](../release-checklist.md)
+ - [v3.6.48 release notes](./v3.6.48.md)
  - [v3.6.47 release notes](./v3.6.47.md)
  - [v3.6.46 release notes](./v3.6.46.md)
  - [v3.6.45 release notes](./v3.6.45.md)
@@ -0,0 +1,19 @@
+ # v3.6.48 Release Notes
+
+ Release date: 2026-03-14
+
+ ## Highlights
+
+ - Added `governance.duplicate_detection_scope` to studio spec governance with three supported values: `all`, `non_completed`, and `active_only`.
+ - Changed the default duplicate governance scope to `non_completed`, so duplicate checks now compare active and stale specs while excluding completed historical specs from duplicate pair detection.
+ - Synced the same default into takeover baseline, so adopted and upgraded projects inherit the lower-noise duplicate governance behavior automatically.
+
+ ## Validation
+
+ - `npx jest tests/unit/studio/spec-intake-governor.test.js --runInBand`
+ - `npm run prepublishOnly`
+
+ ## Release Notes
+
+ - This patch reduces governance noise in repositories with many completed template-style specs, where historical completed work previously flooded duplicate alerts and hid active governance problems.
+ - The default keeps duplicate sensitivity on unclosed work (`active` + `stale`) without requiring every upgraded project to hand-tune its own studio intake policy.
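The scope semantics described in these release notes can be sketched as follows. This is a minimal illustration based on the `lifecycle_state` values and scope names visible in this diff; the helper name is invented, not part of the published package API.

```javascript
// Minimal sketch of duplicate-candidate filtering under
// governance.duplicate_detection_scope. The helper name is illustrative.
function selectDuplicateCandidates(specs, scope = 'non_completed') {
  if (scope === 'active_only') {
    return specs.filter((item) => item.lifecycle_state === 'active');
  }
  if (scope === 'non_completed') {
    return specs.filter((item) => item.lifecycle_state !== 'completed');
  }
  return specs; // scope === 'all'
}

const specs = [
  { id: 'a', lifecycle_state: 'active' },
  { id: 'b', lifecycle_state: 'stale' },
  { id: 'c', lifecycle_state: 'completed' }
];

// Default scope keeps unclosed work only; completed history is excluded.
console.log(selectDuplicateCandidates(specs).map((s) => s.id)); // [ 'a', 'b' ]
console.log(selectDuplicateCandidates(specs, 'active_only').map((s) => s.id)); // [ 'a' ]
```

Under the old behavior (equivalent to `all`), spec `c` would still enter the pairwise duplicate comparison.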
@@ -9,6 +9,7 @@
  ## 历史版本归档
 
  - [发布检查清单](../release-checklist.md)
+ - [v3.6.48 发布说明](./v3.6.48.md)
  - [v3.6.47 发布说明](./v3.6.47.md)
  - [v3.6.46 发布说明](./v3.6.46.md)
  - [v3.6.45 发布说明](./v3.6.45.md)
@@ -0,0 +1,19 @@
+ # v3.6.48 发布说明
+
+ 发布日期:2026-03-14
+
+ ## 重点变化
+
+ - 为 studio spec governance 增加 `governance.duplicate_detection_scope`,支持 `all`、`non_completed`、`active_only` 三种取值。
+ - 将默认 duplicate 检测策略调整为 `non_completed`:只比较 `active` 与 `stale` spec,不再把 `completed` 历史 spec 纳入 duplicate 对比。
+ - 将同样的默认值同步到 takeover baseline,确保 adopt / upgrade 后的项目自动继承这套降噪后的治理行为。
+
+ ## 验证
+
+ - `npx jest tests/unit/studio/spec-intake-governor.test.js --runInBand`
+ - `npm run prepublishOnly`
+
+ ## 发布说明
+
+ - 这个补丁版主要解决“历史 completed spec 太多时 duplicate 告警淹没真实 active 治理问题”的噪音问题。
+ - 新默认值仍保留对未收口 spec 的治理敏感度,但不再让历史归档 spec 持续制造无效 duplicate 告警。
@@ -82,7 +82,8 @@ const DEFAULT_STUDIO_INTAKE_POLICY = Object.freeze({
  require_auto_on_plan: true,
  max_active_specs_per_scene: 3,
  stale_days: 14,
- duplicate_similarity_threshold: 0.66
+ duplicate_similarity_threshold: 0.66,
+ duplicate_detection_scope: 'non_completed'
  },
  backfill: {
  enabled: true,
@@ -132,6 +133,14 @@ function normalizeBoolean(value, fallback = false) {
  return fallback;
  }
 
+ function normalizeDuplicateDetectionScope(value, fallback = 'non_completed') {
+ const normalized = normalizeText(value).toLowerCase();
+ if (['all', 'non_completed', 'active_only'].includes(normalized)) {
+ return normalized;
+ }
+ return fallback;
+ }
+
  function normalizeTextList(value = []) {
  if (!Array.isArray(value)) {
  return [];
@@ -330,6 +339,10 @@ function normalizeStudioIntakePolicy(raw = {}) {
  governance.duplicate_similarity_threshold,
  DEFAULT_STUDIO_INTAKE_POLICY.governance.duplicate_similarity_threshold
  ))
+ ),
+ duplicate_detection_scope: normalizeDuplicateDetectionScope(
+ governance.duplicate_detection_scope,
+ DEFAULT_STUDIO_INTAKE_POLICY.governance.duplicate_detection_scope
  )
  },
  backfill: {
@@ -898,6 +911,10 @@ function buildSceneGovernanceReport(records = [], policy = DEFAULT_STUDIO_INTAKE
  const governance = policy.governance || DEFAULT_STUDIO_INTAKE_POLICY.governance;
  const threshold = normalizeNumber(governance.duplicate_similarity_threshold, 0.66);
  const maxActive = normalizeInteger(governance.max_active_specs_per_scene, 3, 1, 200);
+ const duplicateScope = normalizeDuplicateDetectionScope(
+ governance.duplicate_detection_scope,
+ DEFAULT_STUDIO_INTAKE_POLICY.governance.duplicate_detection_scope
+ );
 
  const sceneMap = new Map();
  for (const record of records) {
@@ -918,12 +935,17 @@ function buildSceneGovernanceReport(records = [], policy = DEFAULT_STUDIO_INTAKE
  const activeSpecs = sortedSpecs.filter((item) => item.lifecycle_state === 'active');
  const staleSpecs = sortedSpecs.filter((item) => item.lifecycle_state === 'stale');
  const completedSpecs = sortedSpecs.filter((item) => item.lifecycle_state === 'completed');
+ const duplicateCandidateSpecs = duplicateScope === 'active_only'
+ ? activeSpecs
+ : (duplicateScope === 'non_completed'
+ ? sortedSpecs.filter((item) => item.lifecycle_state !== 'completed')
+ : sortedSpecs);
 
  const duplicates = [];
- for (let i = 0; i < sortedSpecs.length; i += 1) {
- for (let j = i + 1; j < sortedSpecs.length; j += 1) {
- const left = sortedSpecs[i];
- const right = sortedSpecs[j];
+ for (let i = 0; i < duplicateCandidateSpecs.length; i += 1) {
+ for (let j = i + 1; j < duplicateCandidateSpecs.length; j += 1) {
+ const left = duplicateCandidateSpecs[i];
+ const right = duplicateCandidateSpecs[j];
  const similarity = computeJaccard(left.tokens, right.tokens);
  if (similarity >= threshold) {
  duplicatePairs += 1;
@@ -45,6 +45,16 @@ const BACKEND_API_PRECEDENCE_CORE_PRINCIPLE_SECTION = [
  '- 除非明确要求新建接口或修改后端接口,否则禁止为了迁就前端错误调用去随意改后端实现或契约。',
  '- 默认优先修正前端请求、映射、类型和兼容处理,使其与后端接口保持一致;若怀疑后端契约错误,应先确认再改。'
  ].join('\n');
+ const LARGE_FILE_REFACTOR_CORE_PRINCIPLE_HEADING = '## 15. 单文件规模过大必须触发重构评估,禁止无限堆积';
+ const LARGE_FILE_REFACTOR_CORE_PRINCIPLE_SECTION = [
+ LARGE_FILE_REFACTOR_CORE_PRINCIPLE_HEADING,
+ '',
+ '- SCE 应为每个项目定期评估代码规模分布,并给出项目级的重构参考节点;禁止假设所有项目都适用同一个固定行数阈值。',
+ '- 若项目尚未建立自己的阈值,默认参考源文件 `2000 / 4000 / 10000` 行三档触发:分别对应“必须评估”“必须发起重构收敛”“进入红线区”。',
+ '- 达到项目级或默认阈值后,后续改动必须优先评估拆分模块、服务、命令面或数据职责;超过重构/红线阈值时,不得继续无计划堆积复杂度。',
+ '- 项目开始较小时,阈值应更早触发;项目进入长期演进后,也必须按周或发布前重新评估,而不是让早期设定永久失效。',
+ '- 行数阈值只是强触发信号,不代表低于阈值就可以忽略耦合、职责混杂、测试失控和理解成本问题;若复杂度已明显失控,应提前启动重构。'
+ ].join('\n');
  const REQUIRED_CORE_PRINCIPLE_SECTIONS = Object.freeze([
  {
  heading: CLARIFICATION_FIRST_CORE_PRINCIPLE_HEADING,
@@ -61,6 +71,10 @@ const REQUIRED_CORE_PRINCIPLE_SECTIONS = Object.freeze([
  {
  heading: BACKEND_API_PRECEDENCE_CORE_PRINCIPLE_HEADING,
  section: BACKEND_API_PRECEDENCE_CORE_PRINCIPLE_SECTION
+ },
+ {
+ heading: LARGE_FILE_REFACTOR_CORE_PRINCIPLE_HEADING,
+ section: LARGE_FILE_REFACTOR_CORE_PRINCIPLE_SECTION
  }
  ]);
 
@@ -235,7 +249,8 @@ const STUDIO_INTAKE_POLICY_DEFAULTS = Object.freeze({
  require_auto_on_plan: true,
  max_active_specs_per_scene: 3,
  stale_days: 14,
- duplicate_similarity_threshold: 0.66
+ duplicate_similarity_threshold: 0.66,
+ duplicate_detection_scope: 'non_completed'
  },
  backfill: {
  enabled: true,
@@ -319,7 +334,8 @@ const TAKEOVER_DEFAULTS = Object.freeze({
  require_auto_on_plan: true,
  max_active_specs_per_scene: 3,
  stale_days: 14,
- duplicate_similarity_threshold: 0.66
+ duplicate_similarity_threshold: 0.66,
+ duplicate_detection_scope: 'non_completed'
  },
  backfill: {
  enabled: true,
@@ -884,6 +900,8 @@ module.exports = {
  STEERING_CHANGE_EVALUATION_CORE_PRINCIPLE_SECTION,
  BACKEND_API_PRECEDENCE_CORE_PRINCIPLE_HEADING,
  BACKEND_API_PRECEDENCE_CORE_PRINCIPLE_SECTION,
+ LARGE_FILE_REFACTOR_CORE_PRINCIPLE_HEADING,
+ LARGE_FILE_REFACTOR_CORE_PRINCIPLE_SECTION,
  REQUIRED_CORE_PRINCIPLE_SECTIONS,
  ERRORBOOK_REGISTRY_DEFAULTS,
  ERRORBOOK_CONVERGENCE_DEFAULTS,
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "scene-capability-engine",
- "version": "3.6.47",
+ "version": "3.6.49",
  "description": "SCE (Scene Capability Engine) - A CLI tool and npm package for spec-driven development with AI coding assistants.",
  "main": "index.js",
  "bin": {
@@ -40,9 +40,13 @@
  "test:sce-tracking": "node scripts/check-sce-tracking.js",
  "gate:npm-runtime-assets": "node scripts/npm-package-runtime-asset-check.js --fail-on-violation",
  "test:brand-consistency": "node scripts/check-branding-consistency.js",
+ "audit:release-docs": "node scripts/release-doc-version-audit.js --fail-on-error",
+ "audit:refactor-trigger": "node scripts/refactor-trigger-audit.js",
  "audit:steering": "node scripts/steering-content-audit.js --fail-on-error",
  "audit:clarification-first": "node scripts/clarification-first-audit.js --fail-on-violation",
  "audit:state-storage": "node scripts/state-storage-tiering-audit.js",
+ "report:release-docs": "node scripts/release-doc-version-audit.js --json",
+ "report:refactor-trigger": "node scripts/refactor-trigger-audit.js --json",
  "report:steering-audit": "node scripts/steering-content-audit.js --json",
  "report:clarification-first-audit": "node scripts/clarification-first-audit.js --json",
  "report:state-storage": "node scripts/state-storage-tiering-audit.js --json",
@@ -86,7 +90,7 @@
  "gate:release-asset-integrity": "node scripts/release-asset-integrity-check.js",
  "report:release-risk-remediation": "node scripts/release-risk-remediation-bundle.js --json",
  "report:moqui-core-regression": "node scripts/moqui-core-regression-suite.js --json",
- "prepublishOnly": "npm run test:release && npm run test:skip-audit && npm run test:sce-tracking && npm run gate:npm-runtime-assets && npm run test:brand-consistency && npm run audit:steering && npm run audit:clarification-first && npm run gate:git-managed && npm run gate:errorbook-registry-health && npm run gate:errorbook-release && npm run report:interactive-governance -- --fail-on-alert",
+ "prepublishOnly": "npm run test:release && npm run test:skip-audit && npm run test:sce-tracking && npm run gate:npm-runtime-assets && npm run test:brand-consistency && npm run audit:release-docs && npm run audit:steering && npm run audit:clarification-first && npm run gate:git-managed && npm run gate:errorbook-registry-health && npm run gate:errorbook-release && npm run report:interactive-governance -- --fail-on-alert",
  "publish:manual": "npm publish --access public",
  "install-global": "npm install -g .",
  "uninstall-global": "npm uninstall -g scene-capability-engine"
@@ -0,0 +1,357 @@
+ #!/usr/bin/env node
+ 'use strict';
+
+ const fs = require('fs');
+ const path = require('path');
+
+ const CODE_EXTENSIONS = new Set([
+ '.js', '.cjs', '.mjs', '.jsx',
+ '.ts', '.tsx',
+ '.py',
+ '.java',
+ '.go',
+ '.rb',
+ '.php',
+ '.cs'
+ ]);
+
+ const DEFAULT_SCAN_DIRS = [
+ 'src',
+ 'lib',
+ 'scripts',
+ 'bin',
+ 'app',
+ 'server',
+ 'client',
+ 'packages',
+ 'tests'
+ ];
+
+ const SKIP_DIRS = new Set([
+ '.git',
+ '.hg',
+ '.svn',
+ 'node_modules',
+ 'dist',
+ 'build',
+ 'coverage',
+ '.next',
+ '.turbo',
+ '.idea',
+ '.vscode',
+ '.sce/reports'
+ ]);
+
+ const THRESHOLD_PROFILES = Object.freeze({
+ source: {
+ floors: {
+ assessment: 800,
+ refactor: 1800
+ },
+ defaults: {
+ assessment: 2000,
+ refactor: 4000,
+ redline: 10000
+ }
+ },
+ test: {
+ floors: {
+ assessment: 1200,
+ refactor: 3000
+ },
+ defaults: {
+ assessment: 3000,
+ refactor: 6000,
+ redline: 15000
+ }
+ }
+ });
+
+ function parseArgs(argv = process.argv.slice(2)) {
+ const options = {
+ projectPath: process.cwd(),
+ json: false,
+ failOnRedline: false,
+ out: null,
+ scanDirs: DEFAULT_SCAN_DIRS.slice()
+ };
+
+ for (let index = 0; index < argv.length; index += 1) {
+ const token = argv[index];
+ const next = argv[index + 1];
+ if (token === '--project-path' && next) {
+ options.projectPath = path.resolve(next);
+ index += 1;
+ continue;
+ }
+ if (token === '--json') {
+ options.json = true;
+ continue;
+ }
+ if (token === '--fail-on-redline') {
+ options.failOnRedline = true;
+ continue;
+ }
+ if (token === '--out' && next) {
+ options.out = path.resolve(next);
+ index += 1;
+ continue;
+ }
+ if (token === '--scan-dir' && next) {
+ options.scanDirs.push(next.trim());
+ index += 1;
+ continue;
+ }
+ }
+
+ options.scanDirs = Array.from(new Set(options.scanDirs.filter(Boolean)));
+ return options;
+ }
+
+ function normalizePath(value) {
+ return `${value || ''}`.replace(/\\/g, '/');
+ }
+
+ function shouldSkipDir(relativePath, entryName) {
+ if (SKIP_DIRS.has(entryName)) {
+ return true;
+ }
+ return SKIP_DIRS.has(normalizePath(relativePath));
+ }
+
+ function collectFilesRecursive(rootDir, relativeRoot = '') {
+ if (!fs.existsSync(rootDir)) {
+ return [];
+ }
+
+ const results = [];
+ const entries = fs.readdirSync(rootDir, { withFileTypes: true });
+ for (const entry of entries) {
+ const absolutePath = path.join(rootDir, entry.name);
+ const relativePath = normalizePath(path.join(relativeRoot, entry.name));
+ if (entry.isDirectory()) {
+ if (shouldSkipDir(relativePath, entry.name)) {
+ continue;
+ }
+ results.push(...collectFilesRecursive(absolutePath, relativePath));
+ continue;
+ }
+ if (!entry.isFile()) {
+ continue;
+ }
+ if (!CODE_EXTENSIONS.has(path.extname(entry.name).toLowerCase())) {
+ continue;
+ }
+ results.push({
+ absolutePath,
+ relativePath
+ });
+ }
+ return results;
+ }
+
+ function classifyFile(relativePath) {
+ const normalized = normalizePath(relativePath).toLowerCase();
+ if (
+ normalized.startsWith('tests/')
+ || normalized.includes('/__tests__/')
+ || normalized.includes('.test.')
+ || normalized.includes('.spec.')
+ ) {
+ return 'test';
+ }
+ return 'source';
+ }
+
+ function countLines(content) {
+ if (!content) {
+ return 0;
+ }
+ return content.split(/\r?\n/).length;
+ }
+
+ function roundUp(value, step) {
+ if (!Number.isFinite(value) || value <= 0) {
+ return 0;
+ }
+ return Math.ceil(value / step) * step;
+ }
+
+ function quantile(sortedValues, ratio) {
+ if (!Array.isArray(sortedValues) || sortedValues.length === 0) {
+ return 0;
+ }
+ if (sortedValues.length === 1) {
+ return sortedValues[0];
+ }
+ const index = Math.max(0, Math.min(sortedValues.length - 1, Math.ceil(sortedValues.length * ratio) - 1));
+ return sortedValues[index];
+ }
+
+ function buildStats(files) {
+ const lineValues = files.map((item) => item.lines).sort((left, right) => left - right);
+ const count = lineValues.length;
+ return {
+ count,
+ max: count > 0 ? lineValues[count - 1] : 0,
+ p50: quantile(lineValues, 0.5),
+ p90: quantile(lineValues, 0.9),
+ p95: quantile(lineValues, 0.95)
+ };
+ }
+
+ function buildRecommendedThresholds(stats, profile) {
+ const assessmentRaw = roundUp(Math.max(profile.floors.assessment, stats.p90 * 1.25), 50);
+ const refactorRaw = roundUp(Math.max(profile.floors.refactor, stats.p95 * 1.35, assessmentRaw + 400), 50);
+
+ const assessment = Math.min(profile.defaults.assessment, Math.max(profile.floors.assessment, assessmentRaw));
+ const refactor = Math.min(profile.defaults.refactor, Math.max(profile.floors.refactor, refactorRaw, assessment + 400));
+ const redline = profile.defaults.redline;
+
+ return {
+ assessment,
+ refactor,
+ redline
+ };
+ }
+
+ function evaluateFile(lines, thresholds) {
+ if (lines >= thresholds.redline) {
+ return 'redline';
+ }
+ if (lines >= thresholds.refactor) {
+ return 'refactor';
+ }
+ if (lines >= thresholds.assessment) {
+ return 'assessment';
+ }
+ return 'ok';
+ }
+
+ function auditRefactorTriggers(options = {}) {
+ const projectPath = path.resolve(options.projectPath || process.cwd());
+ const scanDirs = Array.isArray(options.scanDirs) && options.scanDirs.length > 0
+ ? options.scanDirs
+ : DEFAULT_SCAN_DIRS;
+
+ const files = Array.from(new Set(scanDirs))
+ .flatMap((dirName) => collectFilesRecursive(path.join(projectPath, dirName), dirName))
+ .map((file) => {
+ const content = fs.readFileSync(file.absolutePath, 'utf8');
+ const kind = classifyFile(file.relativePath);
+ return {
+ file: file.relativePath,
+ kind,
+ lines: countLines(content)
+ };
+ })
+ .sort((left, right) => right.lines - left.lines || left.file.localeCompare(right.file));
+
+ const sourceFiles = files.filter((item) => item.kind === 'source');
+ const testFiles = files.filter((item) => item.kind === 'test');
+ const sourceStats = buildStats(sourceFiles);
+ const testStats = buildStats(testFiles);
+ const sourceThresholds = buildRecommendedThresholds(sourceStats, THRESHOLD_PROFILES.source);
+ const testThresholds = buildRecommendedThresholds(testStats, THRESHOLD_PROFILES.test);
+
+ const evaluatedFiles = files.map((file) => {
+ const thresholds = file.kind === 'test' ? testThresholds : sourceThresholds;
+ return {
+ ...file,
+ thresholds,
+ trigger: evaluateFile(file.lines, thresholds)
+ };
+ });
+
+ const offenders = evaluatedFiles.filter((item) => item.trigger !== 'ok');
+ const redline = offenders.filter((item) => item.trigger === 'redline');
+ const refactor = offenders.filter((item) => item.trigger === 'refactor');
+ const assessment = offenders.filter((item) => item.trigger === 'assessment');
+
+ const recommendations = [
+ 'Run this audit weekly and before each release to recalibrate project-specific refactor trigger points.',
+ 'When no project-specific threshold is agreed yet, keep the SCE default source thresholds as the outer guardrail: 2000 / 4000 / 10000 lines.',
+ redline.length > 0
+ ? 'Redline files already exist; new non-emergency changes touching those files should prioritize decomposition instead of feature accretion.'
+ : 'No redline file detected under the current recommended thresholds.'
+ ];
+
+ return {
+ mode: 'refactor-trigger-audit',
+ project_path: projectPath,
+ passed: redline.length === 0,
+ scan_dirs: scanDirs,
+ scanned_file_count: files.length,
+ cadence_recommendation: ['weekly', 'before_release'],
+ thresholds: {
+ source: sourceThresholds,
+ test: testThresholds
+ },
+ stats: {
+ source: sourceStats,
+ test: testStats
+ },
+ summary: {
+ offender_count: offenders.length,
+ assessment_count: assessment.length,
+ refactor_count: refactor.length,
+ redline_count: redline.length
+ },
+ offenders: offenders.slice(0, 50),
+ top_files: evaluatedFiles.slice(0, 20),
+ recommendations
+ };
+ }
+
+ function maybeWriteReport(result, outPath) {
+ if (!outPath) {
+ return;
+ }
+ fs.mkdirSync(path.dirname(outPath), { recursive: true });
+ fs.writeFileSync(outPath, `${JSON.stringify(result, null, 2)}\n`, 'utf8');
+ }
+
+ function printHumanReport(result) {
+ console.log(
+ `[refactor-trigger-audit] scanned=${result.scanned_file_count} offenders=${result.summary.offender_count} `
+ + `assessment=${result.summary.assessment_count} refactor=${result.summary.refactor_count} redline=${result.summary.redline_count}`
+ );
+ console.log(
+ `[refactor-trigger-audit] source-thresholds=${result.thresholds.source.assessment}/${result.thresholds.source.refactor}/${result.thresholds.source.redline} `
+ + `test-thresholds=${result.thresholds.test.assessment}/${result.thresholds.test.refactor}/${result.thresholds.test.redline}`
+ );
+ if (result.summary.redline_count > 0) {
+ result.offenders
+ .filter((item) => item.trigger === 'redline')
+ .slice(0, 10)
+ .forEach((item) => {
+ console.error(`[refactor-trigger-audit] redline ${item.kind} ${item.file} (${item.lines} lines)`);
+ });
+ }
+ }
+
+ if (require.main === module) {
+ const options = parseArgs(process.argv.slice(2));
+ const result = auditRefactorTriggers(options);
+ maybeWriteReport(result, options.out);
+
+ if (options.json) {
+ process.stdout.write(`${JSON.stringify(result, null, 2)}\n`);
+ } else {
+ printHumanReport(result);
+ }
+
+ if (options.failOnRedline && result.summary.redline_count > 0) {
+ process.exit(1);
+ }
+ }
+
+ module.exports = {
+ DEFAULT_SCAN_DIRS,
+ THRESHOLD_PROFILES,
+ auditRefactorTriggers,
+ buildRecommendedThresholds,
+ buildStats,
+ classifyFile,
+ parseArgs
+ };
@@ -0,0 +1,317 @@
+ #!/usr/bin/env node
+ 'use strict';
+
+ const fs = require('fs');
+ const path = require('path');
+
+ const RELEASE_DOCS = [
+ {
+ file: 'README.md',
+ label: 'README.md',
+ versionField: 'Version',
+ versionPattern: /\*\*Version\*\*:\s*([^\s]+)/,
+ updatedField: 'Last Updated',
+ updatedPattern: /\*\*Last Updated\*\*:\s*(\d{4}-\d{2}-\d{2})/
+ },
+ {
+ file: 'README.zh.md',
+ label: 'README.zh.md',
+ versionField: '版本',
+ versionPattern: /\*\*版本\*\*[::]\s*([^\s]+)/,
+ updatedField: '最后更新',
+ updatedPattern: /\*\*最后更新\*\*[::]\s*(\d{4}-\d{2}-\d{2})/
+ },
+ {
+ file: '.sce/README.md',
+ label: '.sce/README.md',
+ versionField: 'sce Version',
+ versionPattern: /\*\*sce Version\*\*:\s*([^\s]+)/,
+ updatedField: 'Last Updated',
+ updatedPattern: /\*\*Last Updated\*\*:\s*(\d{4}-\d{2}-\d{2})/,
+ forbidVersionedHeadings: true
+ },
+ {
+ file: 'template/.sce/README.md',
+ label: 'template/.sce/README.md',
+ versionField: 'sce Version',
+ versionPattern: /\*\*sce Version\*\*:\s*([^\s]+)/,
+ updatedField: 'Last Updated',
+ updatedPattern: /\*\*Last Updated\*\*:\s*(\d{4}-\d{2}-\d{2})/,
+ forbidVersionedHeadings: true
+ }
+ ];
+
+ const VERSIONED_HEADING_PATTERN = /^#{1,6}\s+.*\((?:v)?\d+\.\d+(?:\.\d+|\.x)\)\s*$/gm;
+ const CHANGELOG_RELEASE_PATTERN = /^## \[([^\]]+)\] - (\d{4}-\d{2}-\d{2})(?:\s.*)?$/gm;
+
+ function parseArgs(argv = process.argv.slice(2)) {
+ const options = {
+ projectPath: process.cwd(),
+ json: false,
+ failOnError: false,
+ out: null
+ };
+
+ for (let index = 0; index < argv.length; index += 1) {
+ const value = argv[index];
+ if (value === '--json') {
+ options.json = true;
+ continue;
+ }
+ if (value === '--fail-on-error') {
+ options.failOnError = true;
+ continue;
+ }
+ if (value === '--project-path') {
+ options.projectPath = path.resolve(argv[index + 1] || process.cwd());
+ index += 1;
+ continue;
+ }
+ if (value === '--out') {
+ options.out = path.resolve(argv[index + 1] || '');
+ index += 1;
+ continue;
+ }
+ }
+
+ return options;
+ }
+
+ function pushViolation(violations, file, rule, message, suggestion) {
+ violations.push({
+ severity: 'error',
+ file,
+ rule,
+ message,
+ suggestion
+ });
+ }
+
+ function loadPackageVersion(projectPath, violations) {
+ const packageJsonPath = path.join(projectPath, 'package.json');
+ if (!fs.existsSync(packageJsonPath)) {
+ pushViolation(
+ violations,
+ 'package.json',
+ 'missing_package_json',
+ 'package.json is required to resolve the current release version.',
+ 'Restore package.json before running the release doc audit.'
+ );
+ return null;
+ }
+
+ const packageJson = JSON.parse(fs.readFileSync(packageJsonPath, 'utf8'));
+ return typeof packageJson.version === 'string' ? packageJson.version.trim() : null;
+ }
+
+ function extractLatestChangelogRelease(changelogContent) {
+ CHANGELOG_RELEASE_PATTERN.lastIndex = 0;
+ let match = CHANGELOG_RELEASE_PATTERN.exec(changelogContent);
+ if (!match) {
+ return null;
+ }
+
+ return {
+ version: match[1].trim(),
+ date: match[2].trim()
+ };
+ }
+
+ function loadLatestChangelogRelease(projectPath, packageVersion, violations) {
+ const changelogPath = path.join(projectPath, 'CHANGELOG.md');
+ if (!fs.existsSync(changelogPath)) {
+ pushViolation(
+ violations,
+ 'CHANGELOG.md',
+ 'missing_changelog',
+ 'CHANGELOG.md is required to resolve the latest release date.',
+ 'Restore CHANGELOG.md before running the release doc audit.'
+ );
+ return null;
+ }
+
+ const changelogContent = fs.readFileSync(changelogPath, 'utf8');
+ const latestRelease = extractLatestChangelogRelease(changelogContent);
+ if (!latestRelease) {
+ pushViolation(
+ violations,
+ 'CHANGELOG.md',
+ 'missing_release_entry',
+ 'Could not find a released version entry in CHANGELOG.md.',
+ 'Add a release entry like `## [x.y.z] - YYYY-MM-DD` before publishing.'
+ );
+ return null;
+ }
+
+ if (packageVersion && latestRelease.version !== packageVersion) {
+ pushViolation(
+ violations,
+ 'CHANGELOG.md',
+ 'stale_latest_release_entry',
+ `CHANGELOG.md latest release is ${latestRelease.version} but package.json is ${packageVersion}.`,
+ 'Update the top released CHANGELOG entry so version and release date match package.json.'
+ );
+ }
+
+ return latestRelease;
+ }
+
+ function extractField(content, pattern) {
+ const match = content.match(pattern);
+ return match ? match[1].trim() : null;
+ }
+
+ function collectVersionedHeadings(content) {
+ const matches = [];
+ let match = VERSIONED_HEADING_PATTERN.exec(content);
+ while (match) {
+ matches.push(match[0].trim());
+ match = VERSIONED_HEADING_PATTERN.exec(content);
+ }
+ VERSIONED_HEADING_PATTERN.lastIndex = 0;
+ return matches;
+ }
+
+ function auditReleaseDocs(options = {}) {
+ const projectPath = path.resolve(options.projectPath || process.cwd());
+ const violations = [];
+ const packageVersion = loadPackageVersion(projectPath, violations);
+ const latestRelease = loadLatestChangelogRelease(projectPath, packageVersion, violations);
+ const expectedVersion = packageVersion;
+ const expectedDate = latestRelease ? latestRelease.date : null;
+ const documents = [];
+
+ for (const doc of RELEASE_DOCS) {
+ const absolutePath = path.join(projectPath, doc.file);
+ if (!fs.existsSync(absolutePath)) {
+ pushViolation(
+ violations,
+ doc.file,
+ 'missing_release_doc',
+ `${doc.file} is missing.`,
+ `Restore ${doc.file} so release metadata stays auditable.`
+ );
+ continue;
+ }
+
+ const content = fs.readFileSync(absolutePath, 'utf8');
+ const actualVersion = extractField(content, doc.versionPattern);
+ const actualUpdated = extractField(content, doc.updatedPattern);
+ const versionedHeadings = doc.forbidVersionedHeadings
+ ? collectVersionedHeadings(content)
+ : [];
+
+ if (!actualVersion) {
+ pushViolation(
+ violations,
+ doc.file,
+ 'missing_doc_version_field',
+ `${doc.file} is missing the "${doc.versionField}" footer field.`,
+ `Add a "${doc.versionField}" footer line that matches package.json version ${expectedVersion || '<unknown>'}.`
+ );
+ } else if (expectedVersion && actualVersion !== expectedVersion) {
+ pushViolation(
+ violations,
+ doc.file,
+ 'stale_doc_version',
+ `${doc.file} tracks version ${actualVersion} but package.json is ${expectedVersion}.`,
+ `Refresh ${doc.file} so "${doc.versionField}" matches ${expectedVersion}.`
+ );
+ }
+
+ if (!actualUpdated) {
+ pushViolation(
+ violations,
+ doc.file,
+ 'missing_doc_updated_field',
+ `${doc.file} is missing the "${doc.updatedField}" footer field.`,
+ `Add a "${doc.updatedField}" footer line that matches the latest CHANGELOG release date ${expectedDate || '<unknown>'}.`
+ );
+ } else if (expectedDate && actualUpdated !== expectedDate) {
+ pushViolation(
+ violations,
+ doc.file,
+ 'stale_doc_updated_date',
+ `${doc.file} last updated date is ${actualUpdated} but latest CHANGELOG release date is ${expectedDate}.`,
+ `Refresh ${doc.file} so "${doc.updatedField}" matches ${expectedDate}.`
+ );
+ }
+
+ if (versionedHeadings.length > 0) {
+ pushViolation(
+ violations,
+ doc.file,
+ 'versioned_capability_headings',
+ `${doc.file} contains version-stamped headings: ${versionedHeadings.join(' | ')}.`,
+ 'Remove release/version markers from long-lived README headings and keep current version tracking only in the footer.'
+ );
+ }
+
+ documents.push({
+ file: doc.file,
+ path: absolutePath,
+ actual_version: actualVersion,
+ expected_version: expectedVersion,
+ actual_updated: actualUpdated,
+ expected_updated: expectedDate,
+ versioned_heading_count: versionedHeadings.length
+ });
+ }
+
+ return {
+ mode: 'release-doc-version-audit',
+ passed: violations.length === 0,
+ project_path: projectPath,
+ package_version: packageVersion,
266
+ changelog_release: latestRelease,
267
+ error_count: violations.length,
268
+ documents,
269
+ violations
270
+ };
271
+ }
272
+
273
+ function printHumanReport(result) {
274
+ if (result.violations.length === 0) {
275
+ console.log('Release doc version audit passed: README release metadata matches package.json and CHANGELOG.');
276
+ return;
277
+ }
278
+
279
+ console.error(`Release doc version audit found ${result.error_count} error(s).`);
280
+ for (const violation of result.violations) {
281
+ console.error(`- ${violation.file} / ${violation.rule}: ${violation.message}`);
282
+ if (violation.suggestion) {
283
+ console.error(` suggestion: ${violation.suggestion}`);
284
+ }
285
+ }
286
+ }
287
+
288
+ function maybeWriteReport(outputPath, result) {
289
+ if (!outputPath) {
290
+ return;
291
+ }
292
+ fs.mkdirSync(path.dirname(outputPath), { recursive: true });
293
+ fs.writeFileSync(outputPath, `${JSON.stringify(result, null, 2)}\n`, 'utf8');
294
+ }
295
+
296
+ if (require.main === module) {
297
+ const options = parseArgs(process.argv.slice(2));
298
+ const result = auditReleaseDocs(options);
299
+ maybeWriteReport(options.out, result);
300
+
301
+ if (options.json) {
302
+ process.stdout.write(`${JSON.stringify(result, null, 2)}\n`);
303
+ } else {
304
+ printHumanReport(result);
305
+ }
306
+
307
+ if (options.failOnError && result.error_count > 0) {
308
+ process.exit(1);
309
+ }
310
+ }
311
+
312
+ module.exports = {
313
+ RELEASE_DOCS,
314
+ auditReleaseDocs,
315
+ extractLatestChangelogRelease,
316
+ parseArgs
317
+ };
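The main block above consumes `options.out`, `options.json`, `options.failOnError`, and `options.projectPath` from `parseArgs`, which is defined earlier in the file and not shown in this hunk. A hypothetical standalone sketch of such a parser; the flag spellings (`--json`, `--out`, etc.) are assumptions, only the option names come from the code above:

```javascript
// Hypothetical sketch of a parseArgs compatible with the main block above.
// Flag spellings are assumptions; only the resulting option names
// (projectPath, out, json, failOnError) are taken from the shown code.
function parseArgs(argv) {
  const options = { json: false, failOnError: false };
  for (let i = 0; i < argv.length; i += 1) {
    const arg = argv[i];
    if (arg === '--json') options.json = true;
    else if (arg === '--fail-on-error') options.failOnError = true;
    else if (arg === '--out') options.out = argv[++i];
    else if (arg === '--project-path') options.projectPath = argv[++i];
  }
  return options;
}

const parsed = parseArgs(['--json', '--out', 'report.json']);
console.log(parsed.json); // true
console.log(parsed.out);  // "report.json"
```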
@@ -20,7 +20,7 @@ This project uses **Spec-driven development** - a structured approach where:
 
 ---
 
-## 🚀 sce Capabilities (v1.45.x)
+## 🚀 sce Capabilities
 
 **IMPORTANT**: After installing or updating sce, read this section to understand all available capabilities. Using the right tool for the job ensures efficient, high-quality development.
 
@@ -63,7 +63,7 @@ This project uses **Spec-driven development** - a structured approach where:
 - Master Spec + Sub-Specs with dependency management
 - Interface contracts for cross-Spec compatibility
 
-### Multi-Agent Parallel Coordination (v1.43.0)
+### Multi-Agent Parallel Coordination
 When multiple AI agents work on the same project simultaneously:
 - **AgentRegistry** (`lib/collab`) — Agent lifecycle with heartbeat monitoring
 - **TaskLockManager** (`lib/lock`) — File-based task mutual exclusion
@@ -75,7 +75,7 @@ When multiple AI agents work on the same project simultaneously:
 - All components are no-ops in single-agent mode (zero overhead)
 - See `docs/multi-agent-coordination-guide.md` for full API reference
 
-### Spec-Level Steering & Context Sync (v1.44.0)
+### Spec-Level Steering & Context Sync
 Fourth steering layer (L4) and Spec lifecycle coordination for multi-agent scenarios:
 - **SpecSteering** (`lib/steering`) — Per-Spec `steering.md` CRUD with template generation, Markdown ↔ structured object roundtrip
 - **SteeringLoader** (`lib/steering`) — Unified L1-L4 four-layer steering loader with merged output
@@ -92,7 +92,7 @@ Fourth steering layer (L4) and Spec lifecycle coordination for multi-agent scena
 - `sce auto status/resume/stop/config` — Manage autonomous execution
 - Intelligent error recovery, checkpoint system, learning from history
 
-### Agent Orchestrator — Multi-Agent Spec Execution (v1.45.0)
+### Agent Orchestrator — Multi-Agent Spec Execution
 Automate parallel Spec execution via Codex CLI sub-agents (replaces manual multi-terminal workflow):
 - `sce orchestrate run --specs "spec-a,spec-b,spec-c" --max-parallel 3` — Start multi-agent orchestration
 - `sce orchestrate status` — View orchestration progress (per-Spec status, overall state)
@@ -243,6 +243,6 @@ A Spec is a complete feature definition with three parts:
 ---
 
 **Project Type**: Spec-driven development
-**sce Version**: 1.44.x
-**Last Updated**: 2026-02-12
+**sce Version**: 3.6.49
+**Last Updated**: 2026-03-14
 **Purpose**: Guide AI tools to work effectively with this project
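The footer fields synced above are exactly what the audit's `extractField` helper pulls out with a per-document regex before comparing against package.json. A minimal standalone sketch; the two regexes here are illustrative assumptions, not the script's actual `versionPattern`/`updatedPattern` entries:

```javascript
// Standalone sketch of footer-field extraction, mirroring the audit script's
// extractField helper. The regexes below are illustrative assumptions; the
// real patterns live in the script's RELEASE_DOCS configuration.
function extractField(content, pattern) {
  const match = content.match(pattern);
  return match ? match[1].trim() : null;
}

const readmeFooter = [
  '**Project Type**: Spec-driven development',
  '**sce Version**: 3.6.49',
  '**Last Updated**: 2026-03-14'
].join('\n');

const version = extractField(readmeFooter, /\*\*sce Version\*\*:\s*(.+)/);
const updated = extractField(readmeFooter, /\*\*Last Updated\*\*:\s*(.+)/);

console.log(version); // "3.6.49"
console.log(updated); // "2026-03-14"
```

A mismatch between these extracted values and package.json / the latest CHANGELOG entry is what the audit reports as `stale_doc_version` or `stale_doc_updated_date`.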
@@ -58,3 +58,11 @@
 - When frontend calls to backend APIs do not match, default to the backend's existing interface contract as authoritative.
 - Unless explicitly asked to create or modify backend interfaces, do not change the backend to accommodate incorrect frontend calls.
 - Prefer adjusting frontend requests, mappings, types, and compatibility handling so they match the backend interfaces.
+
+## 11. Oversized single files must trigger a refactor evaluation
+
+- SCE should first assess the current project's code-size distribution before setting project-level refactor reference points; do not force one fixed threshold onto every project.
+- If the project has no thresholds of its own, fall back to the default three-tier source-file triggers of `2000 / 4000 / 10000` lines, corresponding to "must evaluate", "must start refactoring convergence", and "entering the red-line zone".
+- Once a project-level or default threshold is hit, evaluate whether responsibilities should be split before adding more features; past the refactor/red-line thresholds, prioritize splitting and reducing complexity over piling on more code.
+- Trigger the evaluation earlier in a project's early phase, and re-evaluate weekly or before each release later on, so thresholds do not go stale.
+- Line-count thresholds are only a strong trigger signal; if a file mixes responsibilities, is hard to test, or has runaway comprehension cost, refactor early even below the thresholds.
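The three-tier line-count fallback described above can be sketched as a simple classifier. The function and its tier names are illustrative only; the shipped `scripts/refactor-trigger-audit.js` may structure this differently, and projects are expected to calibrate their own thresholds:

```javascript
// Illustrative classifier for the default 2000 / 4000 / 10000 line fallback.
// Tier names are assumptions; project-specific thresholds override defaults.
function refactorTier(lineCount, thresholds = [2000, 4000, 10000]) {
  const [evaluate, refactor, redline] = thresholds;
  if (lineCount >= redline) return 'redline';   // red-line zone: split before any new code
  if (lineCount >= refactor) return 'refactor'; // must start refactoring convergence
  if (lineCount >= evaluate) return 'evaluate'; // must evaluate splitting responsibilities
  return 'ok';                                  // below all triggers
}

console.log(refactorTier(1500));  // "ok"
console.log(refactorTier(2500));  // "evaluate"
console.log(refactorTier(12000)); // "redline"
```

A project-calibrated setup would pass its own `thresholds` array instead of relying on the default.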