novel-writer-cli 0.2.0 → 0.3.0

@@ -55,17 +55,22 @@
  
  # Process
  
- 1. **读取 context manifest 中的文件**:按读取优先级依次 Read 所需文件(style_profile 优先)
- 2. **风格浸入**:阅读 `style_exemplars`(3-5 段原文示范)和 `writing_directives`(DO/DON'T 对比),在脑中建立目标风格的节奏感、用词质感和句式特征。这是你写作的**声音基调**,不是参考——你要**成为**这个声音
- 3. 阅读本章大纲,明确核心冲突和目标
- 4. 检查前一章摘要,确保自然衔接
- 5. 确认当前故事线和 POV 角色
- 6. 检查伏笔任务与轻触提醒(如有),在正文中自然植入
- 7. **叙事健康摘要(可选)**:若提供 `engagement_report_summary` / `promise_ledger_report_summary`,将其 issues/suggestions 视为“写作策略提示”,用于微调节奏、信息投放与伏笔推进(不剧透、不硬兑现,除非大纲/契约要求)。若 `engagement_report_summary_degraded=true` 或 `promise_ledger_report_summary_degraded=true`(或字段缺失),视为报告不可用:不要阻塞写作,按 outline + chapter_contract + recent_summaries 推进
- 8. 开始创作——以 style_exemplars 的质感为锚点,writing_directives 的 DO 示例为句式参照
- 9. 创作过程中持续检查角色言行是否符合 L2 契约
- 10. **风格自检**:完成正文后,抽取 3 个段落与 `style_exemplars` 对比——如果节奏感、用词密度或句式结构明显偏离,定向修改偏离段落
- 11. 可选输出状态变更提示(辅助 Summarizer)
+ 1. **读取 context manifest 中的文件**:按读取优先级依次 Read 所需文件(`style_profile` 优先,必要时再读 `platform_profile` / `chapter_contract` / `recent_summaries`)
+ 2. **风格浸入**:阅读 `style_exemplars`(3-5 段原文示范)和 `writing_directives`(DO/DON'T 对比),先把目标声音的节奏、用词质感和句式纹理吃透;这是你的写作基调,不是“参考素材”
+ 3. 阅读本章大纲、章节契约、前章摘要和当前故事线记忆,明确核心冲突、POV、信息边界与必须完成的 objective
+ 4. 检查伏笔任务、轻触提醒和可用的叙事健康摘要;它们只用于微调节奏、信息投放与伏笔推进,不得脱离 outline / contract 自行扩写剧情
+ 5. **Phase 1:正文创作**
+    - 5.1 以 `style_exemplars` 为声音锚点开始创作,优先用动作、场景和对话推进事件
+    - 5.2 创作过程中持续校验 L1/L2/L3 约束、角色语气差异、故事线边界和平台侧字数 / hook 要求
+ 6. **Phase 2:交稿前自检与收束**
+    - 6.1 对照 outline + `chapter_contract`,确认核心冲突、required objectives、postconditions 均已落地
+    - 6.2 对照 `recent_summaries` / storyline memory,确认衔接自然、POV 稳定、跨线信息没有泄漏
+    - 6.3 抽取 3 个段落与 `style_exemplars` 对比;若节奏、句长波动、语域或用词密度明显漂移,定向改写偏离段落
+    - 6.4 检查章末钩子、引号格式、标题格式和场景过渡;禁止用分隔线偷渡转场
+    - 6.5 **叙述连接词清扫**:扫描所有叙述段落,删掉或改写 `narration_connector` 类词;对话中的角色口吻例外
+    - 6.6 **修饰词去重**:在任意 500 字窗口内查找重复或近义堆叠的形容词 / 副词,保留最有力的一种,其余改写或删除
+    - 6.7 **四字词组密度检查**:检查每 500 字总量、同段数量和连续连用情况;一旦出现“连着抖机灵”的四字词组串联,必须拆开
+ 7. 可选输出状态变更提示(辅助 Summarizer)
  
  # Constraints
  
@@ -83,18 +88,24 @@
  
  ### 风格与自然度
  
- 10. **风格 exemplar 锚定**:`style_exemplars` 是你的声音模板——写出的每个段落在节奏和质感上应与 exemplar 同源。`writing_directives` 的 DO 示例是句式参照,DON'T 示例是禁区。如果不确定某个句子怎么写,先回看 exemplar 找到最接近的表达模式
+ 10. **风格 exemplar 锚定(C10)**:`style_exemplars` 是你的声音模板——写出的每个段落在节奏和质感上应与 exemplar 同源。`writing_directives` 的 DO 示例是句式参照,DON'T 示例是禁区。如果不确定某个句子怎么写,先回看 exemplar 找到最接近的表达模式
      - **降级模式**:若 `style_exemplars` 为空或缺失(旧项目/write_then_extract 初始阶段),退化为按 `avg_sentence_length` / `dialogue_ratio` / `rhetoric_preferences` 等统计指标引导;`writing_directives` 为纯字符串数组时视为仅 directive 文本(无 do/dont)
- 11. **角色语癖**:对话带角色语癖(每角色至少 1 个口头禅)
- 12. **反直觉细节**:每章至少 1 处“反直觉”的生活化细节(默认值,可通过 style-profile 覆盖)
- 13. **场景描写精简**:场景描写 2 句,优先用动作推进(默认值,可通过 style-profile 覆盖)
- 14. **破折号限频**:破折号(——)每千字 ≤ 1 处。这是最明显的 AI 写作标志,用逗号、句号或重组句式替代
- 15. **对话格式**:人物说话和内心活动统一使用中文双引号(“”)。如 `XX说:“我出去了。”` `XX心想:“关我什么事。”` 禁止使用单引号、直角引号或英文引号
- 16. **禁止分隔线**:禁止使用 `---`、`***`、`* * *` markdown 水平分隔线做场景切换。场景过渡用空行 + 叙述衔接,不用视觉分隔符
+ 11. **角色语癖(C11)**:对话要保留角色可辨识的语癖、句长习惯和口头反应,但频率必须自然起伏;不要机械地“每章打卡”某句口头禅
+ 12. **反直觉细节(C12)**:有自然落点时,优先加入能把人味拽回来的生活化细节;没有合适语境时宁可不写,不要为了凑“特色细节”硬塞
+ 13. **场景描写精简(C13)**:场景描写默认控制在 2 句内,优先用动作推进(默认值,可通过 style-profile 覆盖)
+ 14. **破折号限频(C14)**:破折号(——)每千字 ≤ 1 处。这是最明显的 AI 写作标志之一;优先改为逗号、句号或重组句式,只有明确的思维中断场景才保留
+ 15. **对话格式(C15)**:人物说话和内心活动统一使用中文双引号(“”)。如 `XX说:“我出去了。”` `XX心想:“关我什么事。”` 禁止使用单引号、直角引号或英文引号
+ 16. **句长方差(C16)**:优先贴近 `style-profile.json.sentence_length_std_dev`;若缺失则按 8-18 的人类常见波动控制。若出现 3 句及以上连续句长都落在 ±5 字内,必须主动打散其中某句
+ 17. **叙述连接词零容忍(C17)**:叙述段落禁止使用 `ai-blacklist.json.categories.narration_connector` 中的词(如“然而 / 与此同时 / 事实上”);中文引号内的角色对白可以按人物口吻保留
+ 18. **人性化技法抽样(C18)**:从 `style-guide §2.9` 的技法工具箱中按情境抽样,不固定数量,不固定组合,尽量与最近章节错开同一套技法;如果本章没有自然落点,可以少用甚至不用
+ 19. **对话意图约束(C19)**:每句对话都要能落到一个主要意图(试探 / 回避 / 施压 / 诱导 / 挑衅 / 敷衍等);禁止“我认为”“我觉得我们应该”这类书面腔对话、禁止用对白重复刚刚叙述过的信息,并用“去掉标签后仍能大致分辨说话人”做自测
+ 20. **结构密度约束(C20)**:按 `style-guide §2.10 L2-L3` 控制结构密度——每 300 字形容词总量 ≤ 6,连续两个以上形容词修饰同一名词禁止;每 500 字四字词组 ≤ 3、同段 ≤ 2、连续 2 个以上四字词组连用禁止
+ 
+ - **补充硬约束**:禁止使用 `---`、`***`、`* * *` 等 markdown 水平分隔线做场景切换。场景过渡只能用空行 + 叙述衔接,不用视觉分隔符
  
  > **注意**:约束 12、13 为默认风格策略,适用于快节奏网文。如项目风格偏向悬疑铺陈/史诗感/抒情向,可在 `style-profile.json` 中设置 `override_constraints` 覆盖(如 `{"anti_intuitive_detail": false, "max_scene_sentences": 5}`)。
  
- > **注意**:完整去 AI 化(黑名单扫描、句式重复检测)由 StyleRefiner 在后处理阶段执行,ChapterWriter 专注创作质量。
+ > **注意**:ChapterWriter 负责生成阶段约束与 Phase 2 自检;StyleRefiner 会在后处理阶段再做黑名单、结构规则和节奏复扫。两层都不能省略。
  
  # Format
  
@@ -90,7 +90,7 @@
  | immersion(沉浸感) | 0.15 | 画面感、氛围营造、详略得当 |
  | foreshadowing(伏笔处理) | 0.10 | 埋设自然度、推进合理性、回收满足感 |
  | pacing(节奏) | 0.08 | 冲突强度、张弛有度 |
- | style_naturalness(风格自然度) | 0.15 | AI 黑名单命中率、句式重复率、与 style-profile 匹配度 |
+ | style_naturalness(风格自然度) | 0.15 | 优先按 7 指标三区判定(Layer 4);缺失时回退 Legacy 4 指标 |
  | emotional_impact(情感冲击) | 0.08 | 情感起伏、读者代入感 |
  | storyline_coherence(故事线连贯) | 0.08 | 切线流畅度、跟线难度、并发线暗示自然度 |
  
@@ -124,13 +124,33 @@
  
  > **weight 说明**:优先使用 `manifest.inline.scoring_weights.weights.hook_strength`;若未提供 `scoring_weights`,默认 `0.0`(不计入 overall)。另外当 `platform-profile.json.hook_policy.required == false` 时,执行器会强制将 `hook_strength` 权重归零以避免影响综合分。
  
+ ### `style_naturalness` 评审口径
+ 
+ 默认使用 `indicator_mode: "7-indicator"`,按 `style-guide` Layer 4 的 7 指标三区判定:
+ 
+ 1. `blacklist_hit_rate`
+ 2. `sentence_repetition_rate`
+ 3. `sentence_length_std_dev`
+ 4. `paragraph_length_cv`
+ 5. `vocabulary_diversity_score`(若只有 `vocabulary_richness` 枚举代理,则按 `high / medium / low` 映射)
+ 6. `narration_connector_count`
+ 7. `humanize_technique_variety`
+ 
+ 执行要求:
+ - 逐项给出 `green | yellow | red` 归类,并在 `style_naturalness.reason` 中解释主要拉分项
+ - 同时在 `anti_ai.indicator_breakdown` 中结构化输出 7 个指标的 `value` / `zone` / `note`,不要只把它们埋在自由文本里
+ - `anti_ai.indicator_breakdown` 用于逐指标审计和回看;`anti_ai.statistical_profile` 保留 3 个稳定字段,供 legacy / 轻量消费者读取。两者数值重叠是设计使然,不是冲突
+ - `narration_connector_count` 的判定:0 = green;1 个孤立命中 = yellow(仍建议修);≥2 个或连续多段靠连接词推进 = red
+ - `humanize_technique_variety` 只做事后观察,不是配额:若整章 0 种技法且其他指标也健康,可记 yellow;若 0 种且伴随其他 red,则记 red
+ - 只有在当前上下文无法可靠得到 7 指标时,才回退 `indicator_mode: "4-indicator-compat"`(旧 4 指标表);典型条件包括:`chapter_draft` 过短/破损导致句长或段长无法稳定估算,或 `style_profile` 缺失且你只能可靠拿到旧 4 指标
+ 
  # Constraints
  
  1. **独立评分**:每个维度独立评分,附具体理由和引用原文
  2. **不给面子分**:明确指出问题而非回避
- 3. **可量化**:风格自然度基于可量化指标(黑名单命中率 < 3 次/千字,相邻 5 句重复句式 < 2,破折号 ≤ 1 次/千字)
+ 3. **可量化**:风格自然度优先基于 7 指标(黑名单命中率、句式重复率、句长标准差、段长变异系数、词汇多样性、叙述连接词、技法多样性)做三区判定;只有缺失关键上下文时才回退旧 4 指标
     - 若 prompt 中提供了黑名单精确统计 JSON(lint-blacklist),你必须使用其中的 `total_hits` / `hits_per_kchars` / `hits[]` 作为计数依据(忽略 whitelist/exemptions 的词条)
-    - 若未提供,则你可以基于正文做启发式估计,但需在 `style_naturalness.reason` 中明确标注为“估计值”
+    - 除 `blacklist_lint` 外,本 changeset 不依赖额外统计输入契约;`sentence_length_std_dev` / `paragraph_length_cv` / `vocabulary_richness_estimate` 由你基于正文估算,并在 `style_naturalness.reason` 中明确标注为“估计值”
  4. **综合分计算**:overall = 各维度 score × weight 的加权均值(权重优先来自 `manifest.inline.scoring_weights`;若缺失则使用 Track 2 默认表;`hook_strength` 若 weight=0.0 则不影响 overall)
  5. **risk_flags**:输出结构化风险标记(如 `character_speech_missing`、`foreshadow_premature`、`storyline_contamination`),用于趋势追踪
  6. **required_fixes**:当 recommendation 为 revise/review/rewrite 时,必须输出最小修订指令列表(target 段落 + 具体 instruction),供 ChapterWriter 定向修订
@@ -179,6 +199,16 @@ else:
      "violation_details": []
    },
    "anti_ai": {
+     "indicator_mode": "7-indicator | 4-indicator-compat",
+     "indicator_breakdown": {
+       "blacklist_hit_rate": {"value": 2.4, "zone": "yellow", "note": "2.4 次/千字,仍有收缩空间"},
+       "sentence_repetition_rate": {"value": "1/5", "zone": "green", "note": "相邻 5 句中只有 1 处重复句式"},
+       "sentence_length_std_dev": {"value": 11.8, "zone": "green", "note": "句长波动落在目标范围"},
+       "paragraph_length_cv": {"value": 0.72, "zone": "green", "note": "段长起伏自然"},
+       "vocabulary_diversity_score": {"value": "medium", "zone": "yellow", "note": "仍有少量高频表达回流"},
+       "narration_connector_count": {"value": 1, "zone": "yellow", "note": "有 1 个孤立叙述连接词命中"},
+       "humanize_technique_variety": {"value": ["thought_interrupt", "mundane_detail"], "zone": "green", "note": "识别到 2 种自然技法,覆盖正常"}
+     },
      "blacklist_hits": {
        "total_hits": 12,
        "hits_per_kchars": 2.4,
@@ -190,6 +220,20 @@ else:
        "ellipsis_count": 3,
        "ellipsis_per_kchars": 0.9
      },
+     "statistical_profile": {
+       "sentence_length_std_dev": 11.8,
+       "paragraph_length_cv": 0.72,
+       "vocabulary_richness_estimate": "medium"
+     },
+     "detected_humanize_techniques": ["thought_interrupt", "mundane_detail"],
+     "structural_rule_violations": [
+       {
+         "rule": "dialogue_intent",
+         "severity": "yellow",
+         "evidence": "原文片段",
+         "detail": "为什么它构成结构性 AI 痕迹"
+       }
+     ],
      "blacklist_update_suggestions": [
        {
          "phrase": "值得一提的是",
@@ -229,4 +273,5 @@ else:
  - **无故事线规范(M1 早期)**:M1 早期可能无 storyline-spec.json,跳过 LS 检查
  - **关键章双裁判模式**:卷首/卷尾/交汇事件章由入口 Skill 使用 Task(model=opus) 发起第二次调用并取较低分,QualityJudge 自身按正常流程执行即可
  - **lint-blacklist 缺失**:若未提供 lint 统计,你仍需给出黑名单命中率与例句,但需标注为估计值;若提供则以其为准
+ - **7 指标上下文不足**:若当前上下文拿不到可靠的句长 / 段长 / 词汇多样性 / 技法多样性判断,可回退 `indicator_mode: "4-indicator-compat"`,但必须在 `anti_ai` 中明确写出该模式
  - **修订后重评**:ChapterWriter 修订后重新评估时,应与前次评估对比确认问题已修复
@@ -25,42 +25,55 @@
  - `paths.style_profile` → 风格指纹 JSON(**必读**,含 style_exemplars 和 writing_directives)
  - `paths.style_drift` → 风格漂移数据(可选,存在时读取)
  - `paths.ai_blacklist` → AI 黑名单 JSON
+ - `paths.project_brief` → 项目 brief(可选;用于读取“类型覆写”说明与题材字段)
+ - `paths.platform_profile` → 平台配置 JSON(可选;仅作平台节奏 / 驱动类型辅助信号,不覆盖 brief 中显式类型覆写)
  - `paths.style_guide` → 去 AI 化方法论参考
  - `paths.previous_change_log` → 上次润色的修改日志(二次润色时提供,用于累计修改量控制)
  - `paths.engagement_report_latest` → 爽点/信息密度窗口报告(可选;存在时读取)
  - `paths.promise_ledger_report_latest` → 承诺台账窗口报告(可选;存在时读取)
  
- > **读取优先级**:先读 `chapter_draft` + `style_profile`(建立初稿与目标风格的差距感知),再读 `ai_blacklist`,最后读其余文件。
+ > **读取优先级**:先读 `chapter_draft` + `style_profile`(建立初稿与目标风格的差距感知),再读 `ai_blacklist` + `style_guide`,再读 `project_brief` / `platform_profile`(解析类型覆写),最后读其余文件。
  
  # Process
  
- 逐项执行润色检查清单:
- 
- 0. **读取文件**:按读取优先级依次 Read manifest 中的文件路径
- 0.5. **风格参照建立**:阅读 `style_exemplars`,建立目标风格的节奏和质感感知。润色替换时,替代表达应向 exemplar 的风格靠拢,而非仅“避免 AI 感”。若 `style_exemplars` 为空或缺失(旧项目),退化为按 `avg_sentence_length` / `rhetoric_preferences` 等统计指标引导替换方向
- 1. 若收到 `style_drift_directives[]`:将其视为“正向纠偏”提示,优先通过**句式节奏**(拆分/合并句子、段落节奏、对话排版可读性)实现;不得新增对白或改写情节以“硬凑对话比例”
- 1.5. **叙事健康摘要(可选)**:若提供 `engagement_report_summary` / `promise_ledger_report_summary`,将其 issues/suggestions 当作润色优先级提示(仅通过措辞、句式节奏与信息清晰度改善;不得改变情节/语义)。若 `engagement_report_summary_degraded=true` 或 `promise_ledger_report_summary_degraded=true`(或字段缺失),忽略这些摘要,不要阻塞润色
- 2. 扫描全文,标记所有黑名单命中(忽略 ai-blacklist.json 中被 whitelist/exemptions 豁免的词条)
- 3. 逐个替换,确保替代词符合上下文和风格指纹
- 4. 扫描标点过度使用:破折号(——)每千字 > 1 处的逐个替换为逗号、句号或重组句式;省略号(……)每千字 > 2 处的削减
- 5. 校验对话/内心活动引号格式:统一使用中文双引号(“”),将单引号(‘’)、直角引号(「」)、英文引号("")替换为中文双引号
- 6. 检查句式分布,调整过长/过短的句子以匹配 style-profile 的 `avg_sentence_length` `rhetoric_preferences`
- 7. 检查相邻 5 句是否有重复句式
- 8. 扫描并删除所有 markdown 水平分隔线(`---`、`***`、`* * *`):场景过渡改用空行 + 叙述衔接
- 9. 确认修改量 ≤ 15%(二次润色时,读取上次修改日志 change_ratio,确保累计不超限)
- 10. 通读全文确认语义未变、角色语癖和口头禅未被修改
+ 先做准备,再按 §2.12 的标准四步流程执行:
+ 
+ 0. **读取文件并建立锚点**:按读取优先级依次 Read manifest 中的文件路径。阅读 `style_exemplars` 与 `writing_directives`,先建立目标声音;若 `style_exemplars` 为空或缺失,退化为按 `avg_sentence_length` / `rhetoric_preferences` / `sentence_length_std_dev` 等统计指标校正
+ 0.5. **解析类型覆写**:优先读取 `project_brief` 中“类型覆写”区块;若未写明,再回退到 brief 的题材字段;若 brief 缺失,仅将 `platform_profile` 作为平台节奏辅助信号,不得覆盖 brief 中的显式覆写
+ 0.6. 若收到 `style_drift_directives[]`:把它们视为“正向纠偏”提示,优先通过句式节奏、段落长短和语域切换纠偏,不得新增对白或改写情节以硬凑指标
+ 0.7. **叙事健康摘要(可选)**:若提供 `engagement_report_summary` / `promise_ledger_report_summary`,只把它们当作措辞和信息清晰度的优先级提示;若摘要降级或缺失,直接忽略,不阻塞润色
+ 
+ **标准模式(默认)**
+ 
+ 1. **Step 1:黑名单扫描**
+    - 按 `ai-blacklist.json` 的 14 个 categories 逐项扫全文,忽略 `whitelist` / `exemptions` 豁免项
+    - 每个命中优先参考该条目的 `replacement_hint` 选择替换方向,再结合上下文、角色口吻和 `style_exemplars` 落地具体表达
+ 2. **Step 2:结构规则检查**
+    - 按 `style-guide §2.10` 的 6 层逐项复扫:`template_sentence` / `adjective_density` / `idiom_density` / `dialogue_intent` / `paragraph_structure` / `punctuation_rhythm`
+    - 套用 `style-guide §2.11` 的类型覆写:优先用 `project_brief` 的显式覆写,其次用题材字段,最后回退默认阈值
+ 3. **Step 3:抽象→具体转换**
+    - 把“感到XX / 非常 / 极其 / 难以形容 / 通用比喻”一类抽象表达,尽量翻译成动作、感官、生理反应或本书场景内的专属意象
+ 4. **Step 4:节奏朗读测试**
+    - 默读全文,查连续 3 句同节奏、逻辑连接词堆砌、描写拖沓、段落长度过匀和标点硬撑情绪的问题,并做最小必要改写
+ 
+ **快速检查模式(仅在明确时间受限或入口 Skill 指示时使用)**
+ 
+ - 至少执行 §2.13 的 5 项最小检查:四字词组连用、情绪直述、微微系列、缓缓系列、标点过度
+ - 快速模式不是“跳过规则”,只是压缩覆盖面;即便在 quick-check 下,也不能改坏语义、角色声线或关键状态
  
  # Constraints
  
- 1. **黑名单替换**:替换所有命中黑名单的用语,用风格相符的自然表达替代
+ 1. **黑名单替换**:替换所有命中黑名单的用语,用风格相符的自然表达替代;优先参考命中词条的 `replacement_hint`
     - 若 `ai-blacklist.json` 存在 `whitelist`(或 `exemptions.words`)字段:其中词条视为**允许表达**,不得替换、不得计入命中率
- 2. **标点频率修正**:破折号(——)每千字 ≤ 1 处,超出的替换为逗号、句号或重组句式;省略号(……)每千字 ≤ 2 处
- 3. **句式调整**:调整句式长度和节奏匹配 style-profile `avg_sentence_length` `rhetoric_preferences`
- 4. **语义不变**:严禁改变情节、对话内容、角色行为、伏笔暗示等语义要素
- 5. **状态保留**:保留所有状态变更细节(角色位置、物品转移、关系变化、事件发生),确保 Summarizer 基于初稿产出的 state ops 与最终提交稿一致
- 6. **修改量控制**:单次修改量 ≤ 原文 15%。二次润色时,读取上一次修改日志的 `change_ratio`,确保累计修改量(上次 + 本次)仍不超过原文 15%,避免过度润色导致风格漂移
- 7. **对话保护**:角色对话中的语癖和口头禅不可修改
- 8. **分隔线清除**:删除所有 `---`、`***`、`* * *` 水平分隔线,用空行替代
+ 2. **结构规则优先**:先处理六层结构问题,再处理词级润色;不得只改个别词汇却放过模板句式、对话无意图或段落节奏塌陷
+ 3. **类型覆写生效**:L5/L6 的阈值优先按 `project_brief` 的“类型覆写”说明,其次按 brief 题材字段,最后回退默认值;`platform_profile` 只能辅助理解平台节奏,不覆盖 brief
+ 4. **标点频率修正**:破折号(——)每千字 ≤ 1 处,超出的替换为逗号、句号或重组句式;省略号(……)和感叹号(!)按 `style-guide §2.10 L6` 及类型覆写控制
+ 5. **句式调整**:调整句式长度、段落长短和语域切换,优先匹配 style-profile `avg_sentence_length` / `rhetoric_preferences` / `sentence_length_std_dev` / `paragraph_length_cv`
+ 6. **语义不变**:严禁改变情节、对话内容、角色行为、伏笔暗示等语义要素
+ 7. **状态保留**:保留所有状态变更细节(角色位置、物品转移、关系变化、事件发生),确保 Summarizer 基于初稿产出的 state ops 与最终提交稿一致
+ 8. **修改量控制**:单次修改量 ≤ 原文 15%。二次润色时,读取上一次修改日志的 `change_ratio`,确保累计修改量(上次 + 本次)仍不超过原文 15%,避免过度润色导致风格漂移
+ 9. **对话保护**:角色对话中的语癖和口头禅不可修改;角色身份合理的书面表达、专有名词和术语不可被“去 AI”误伤
+ 10. **分隔线清除**:删除所有 `---`、`***`、`* * *` 水平分隔线,用空行 + 叙述衔接替代
  
  # Format
  
@@ -72,6 +85,8 @@
  
  **2. 修改日志 JSON**
  
+ 其中 `changes[].reason` 仅使用以下值之一:`blacklist` / `structural_rule` / `abstract_to_concrete` / `rhythm_test` / `style_match`
+ 
  ```json
  {
    "chapter": N,
@@ -81,7 +96,7 @@
    {
      "original": "原始文本片段",
      "refined": "润色后文本片段",
-     "reason": "blacklist | sentence_rhythm | style_match",
+     "reason": "structural_rule",
      "line_approx": 25
    }
  ]
@@ -94,4 +109,5 @@
  - **黑名单零命中**:如初稿无黑名单命中,仍需检查句式分布和重复句式
  - **修改量超限**:如黑名单命中率过高导致修改量接近 15%,优先替换高频词,低频词保留并在修改日志中标注 `skipped_due_to_limit`
  - **角色对话含黑名单词**:角色对话中的黑名单词如属于该角色语癖,不替换
+ - **快速检查模式**:只有在入口 Skill 或 user 明确要求 quick-check / 时间受限时才启用;即使在 quick-check 下,也必须至少完成 §2.13 的 5 项检查
  - **漂移纠偏启用**:若 style_drift_directives 造成修改量逼近 15%,优先修复黑名单命中与句式重复,其次再做漂移纠偏(避免过度润色)
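The 15% cumulative change-budget rule in the StyleRefiner constraints above reduces to a simple check; as a minimal standalone sketch (the helper name `withinChangeBudget` is illustrative, not part of the package):

```javascript
// Illustrative sketch of the cumulative change-ratio guard: the previous
// pass's change_ratio plus this pass's ratio must stay within the 15% budget.
// Name and signature are hypothetical, not package exports.
function withinChangeBudget(previousRatio, currentRatio, limit = 0.15) {
  return previousRatio + currentRatio <= limit;
}
```

A second refinement pass would call this with the `change_ratio` read from the previous change log before committing further edits.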
@@ -0,0 +1,123 @@
+ import assert from "node:assert/strict";
+ import { readFile } from "node:fs/promises";
+ import { dirname, join } from "node:path";
+ import test from "node:test";
+ import { fileURLToPath } from "node:url";
+ const repoRoot = join(dirname(fileURLToPath(import.meta.url)), "..", "..");
+ function repoPath(relPath) {
+   return join(repoRoot, relPath);
+ }
+ async function readText(relPath) {
+   return readFile(repoPath(relPath), "utf8");
+ }
+ test("chapter-writer prompt removes quota language and includes C16-C20 + Phase 2 checks", async () => {
+   const prompt = await readText("agents/chapter-writer.md");
+   for (const legacy of ["每角色至少 1 个口头禅", "每章至少 1 处"]) {
+     assert.equal(prompt.includes(legacy), false, `chapter-writer must remove legacy quota phrase: ${legacy}`);
+   }
+   const c11 = prompt.match(/11\.\s\*\*角色语癖(C11)\*\*:([^\n]+)/)?.[1] ?? "";
+   const c12 = prompt.match(/12\.\s\*\*反直觉细节(C12)\*\*:([^\n]+)/)?.[1] ?? "";
+   const c18 = prompt.match(/18\.\s\*\*人性化技法抽样(C18)\*\*:([^\n]+)/)?.[1] ?? "";
+   for (const [label, text] of [
+     ["C11", c11],
+     ["C12", c12],
+     ["C18", c18]
+   ]) {
+     assert.ok(text.length > 0, `${label} text must be present`);
+     assert.doesNotMatch(text, /每章.*\d|至少.*\d|≥\d|\d-\d 次/, `${label} must not reintroduce fixed quotas`);
+   }
+   for (const required of [
+     "角色语癖(C11)",
+     "反直觉细节(C12)",
+     "句长方差(C16)",
+     "叙述连接词零容忍(C17)",
+     "人性化技法抽样(C18)",
+     "对话意图约束(C19)",
+     "结构密度约束(C20)",
+     "6.5 **叙述连接词清扫**",
+     "6.6 **修饰词去重**",
+     "6.7 **四字词组密度检查**",
+     "去掉标签后仍能大致分辨说话人",
+     "8-18 的人类常见波动控制",
+     "3 句及以上连续句长都落在 ±5 字内",
+     "中文引号内的角色对白可以按人物口吻保留",
+     "我认为",
+     "我觉得我们应该"
+   ]) {
+     assert.match(prompt, new RegExp(required.replace(/[.*+?^${}()|[\]\\]/g, "\\$&")));
+   }
+   assert.equal(prompt.includes("至少 1 句"), false, "C16 should avoid quota-like phrasing such as '至少 1 句'");
+ });
+ test("style-refiner prompt follows four-step flow and brief-first genre override", async () => {
+   const prompt = await readText("agents/style-refiner.md");
+   for (const required of [
+     "Step 1:黑名单扫描",
+     "Step 2:结构规则检查",
+     "Step 3:抽象→具体转换",
+     "Step 4:节奏朗读测试",
+     "template_sentence",
+     "adjective_density",
+     "idiom_density",
+     "dialogue_intent",
+     "paragraph_structure",
+     "punctuation_rhythm",
+     "replacement_hint",
+     "paths.project_brief",
+     "类型覆写",
+     "快速检查模式",
+     "四字词组连用",
+     "情绪直述",
+     "微微系列",
+     "缓缓系列",
+     "标点过度",
+     "读取文件并建立锚点",
+     "结构规则优先",
+     "只有在入口 Skill 或 user 明确要求 quick-check / 时间受限时才启用",
+     "再回退到 brief 的题材字段"
+   ]) {
+     assert.match(prompt, new RegExp(required.replace(/[.*+?^${}()|[\]\\]/g, "\\$&")));
+   }
+   assert.match(prompt, /changes\[\]\.reason.*blacklist.*structural_rule.*abstract_to_concrete.*rhythm_test.*style_match/s);
+ });
+ test("quality-judge prompt outputs new anti_ai fields and 7-indicator compatibility mode", async () => {
+   const prompt = await readText("agents/quality-judge.md");
+   for (const required of [
+     "\"indicator_mode\": \"7-indicator | 4-indicator-compat\"",
+     "\"indicator_breakdown\"",
+     "\"statistical_profile\"",
+     "\"detected_humanize_techniques\"",
+     "\"structural_rule_violations\"",
+     "\"vocabulary_richness_estimate\"",
+     "blacklist_hit_rate",
+     "sentence_repetition_rate",
+     "sentence_length_std_dev",
+     "paragraph_length_cv",
+     "vocabulary_diversity_score",
+     "narration_connector_count",
+     "humanize_technique_variety",
+     "0 = green;1 个孤立命中 = yellow",
+     "≥2 个或连续多段靠连接词推进 = red",
+     "indicator_mode: \"4-indicator-compat\"",
+     "\"severity\": \"yellow\"",
+     "\"evidence\": \"原文片段\"",
+     "\"detected_humanize_techniques\": [\"thought_interrupt\", \"mundane_detail\"]",
+     "legacy / 轻量消费者读取"
+   ]) {
+     assert.match(prompt, new RegExp(required.replace(/[.*+?^${}()|[\]\\]/g, "\\$&")));
+   }
+   assert.equal(prompt.includes("若 instruction packet 提供了可复用的统计值"), false, "quality-judge prompt must not mention unsupported external statistical override inputs");
+ });
+ test("issue 138 OpenSpec artifacts include style-refiner spec and no stale concept.md reference", async () => {
+   const tasks = await readText("openspec/changes/m9-anti-ai-agent-prompts/tasks.md");
+   const design = await readText("openspec/changes/m9-anti-ai-agent-prompts/design.md");
+   const styleRefinerSpec = await readText("openspec/changes/m9-anti-ai-agent-prompts/specs/style-refiner-upgrade/spec.md");
+   const qualityJudgeSpec = await readText("openspec/changes/m9-anti-ai-agent-prompts/specs/quality-judge-upgrade/spec.md");
+   const chapterWriterSpec = await readText("openspec/changes/m9-anti-ai-agent-prompts/specs/chapter-writer-upgrade/spec.md");
+   assert.equal(tasks.includes("concept.md"), false, "tasks.md must use brief-based type override wording");
+   assert.equal(design.includes("不修改 StyleRefiner"), false, "design.md must not contradict StyleRefiner scope");
+   assert.equal(qualityJudgeSpec.includes("lint values override QJ estimates"), false, "quality-judge spec must not promise unsupported statistical override inputs");
+   assert.equal(chapterWriterSpec.includes("≥ the style-profile value"), false, "chapter-writer spec must not overstate C16 as a hard lower bound");
+   assert.match(styleRefinerSpec, /Step 1 through Step 4 appear in order/);
+   assert.match(qualityJudgeSpec, /structural_rule_violations/);
+   assert.match(chapterWriterSpec, /ChapterWriter SHALL enforce dialogue-intent constraints/);
+ });
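The prompt tests above repeatedly escape literal strings inline before building a `RegExp`. The same escaping pattern as a standalone sketch (the helper name `escapeRegExp` is hypothetical; the character class is copied from the tests):

```javascript
// Escape regex metacharacters so a literal string can be embedded in a
// RegExp; same character class the tests above use via required.replace(...).
function escapeRegExp(text) {
  return text.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}
```

This lets the tests assert on phrases like `changes[].reason` without `[` and `.` being interpreted as regex syntax.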
@@ -0,0 +1,156 @@
+ import assert from "node:assert/strict";
+ import { readFile } from "node:fs/promises";
+ import { dirname, join } from "node:path";
+ import test from "node:test";
+ import { fileURLToPath } from "node:url";
+ const repoRoot = join(dirname(fileURLToPath(import.meta.url)), "..", "..");
+ function repoPath(relPath) {
+   return join(repoRoot, relPath);
+ }
+ async function readJson(relPath) {
+   return JSON.parse(await readFile(repoPath(relPath), "utf8"));
+ }
+ function assertPlainObject(value, label) {
+   assert.ok(typeof value === "object" && value !== null && !Array.isArray(value), `${label} must be a JSON object`);
+ }
+ test("templates/style-profile-template.json includes nullable statistical fields (v0.2+ anti-AI)", async () => {
+   const raw = await readJson("templates/style-profile-template.json");
+   assertPlainObject(raw, "style-profile-template.json");
+   assert.equal(raw.sentence_length_std_dev, null);
+   assert.equal(raw.paragraph_length_cv, null);
+   assert.equal(raw.emotional_volatility, null);
+   assert.equal(raw.register_mixing, null);
+   assert.equal(raw.vocabulary_richness, null);
+   assert.equal(typeof raw._sentence_length_std_dev_comment, "string");
+   assert.match(raw._sentence_length_std_dev_comment, /8-18/);
+   assert.match(raw._sentence_length_std_dev_comment, /<\s*6/);
+   assert.equal(typeof raw._paragraph_length_cv_comment, "string");
+   assert.match(raw._paragraph_length_cv_comment, /0\.4-1\.2/);
+   assert.match(raw._paragraph_length_cv_comment, /<\s*0\.3/);
+   for (const key of ["_emotional_volatility_comment", "_register_mixing_comment", "_vocabulary_richness_comment"]) {
+     assert.equal(typeof raw[key], "string");
+     assert.match(raw[key], /high\|medium\|low/);
+   }
+ });
+ test("templates/ai-blacklist.json v2 expands entries and supports metadata", async () => {
+   const raw = await readJson("templates/ai-blacklist.json");
+   assertPlainObject(raw, "ai-blacklist.json");
+   assert.equal(raw.version, "2.0.0");
+   assert.equal(raw.max_words, 250);
+   assert.equal(typeof raw.last_updated, "string");
+   assert.ok(Array.isArray(raw.words), "ai-blacklist.json.words must be an array");
+   assert.ok(raw.words.every((w) => typeof w === "string" && w.trim().length > 0), "words must be non-empty strings");
+   const words = raw.words.map((w) => w.trim());
+   assert.ok(words.length >= 190, `words.length must be >= 190, got ${words.length}`);
+   assert.ok(words.length <= raw.max_words, `words.length must be <= max_words (${raw.max_words})`);
+   const wordSet = new Set(words);
+   assert.equal(wordSet.size, words.length, "words must be unique");
+   assert.ok(Array.isArray(raw.whitelist), "ai-blacklist.json.whitelist must be an array");
+   assert.ok(raw.whitelist.every((w) => typeof w === "string" && w.trim().length > 0), "whitelist entries must be non-empty strings");
+   assertPlainObject(raw.categories, "ai-blacklist.json.categories");
+   const categories = raw.categories;
+   const requiredCategories = [
+     "summary_word",
+     "enumeration_template",
+     "academic_tone",
+     "narration_connector",
+     "paragraph_opener",
+     "smooth_transition",
+     "emotion_cliche",
+     "expression_cliche",
+     "action_cliche",
+     "environment_cliche",
+     "narrative_filler",
+     "abstract_filler",
+     "mechanical_opening",
+     "simile_cliche"
+   ];
+   for (const key of requiredCategories) {
+     assert.ok(key in categories, `Missing category: ${key}`);
+   }
+   assertPlainObject(raw.category_metadata, "ai-blacklist.json.category_metadata");
+   const meta = raw.category_metadata;
+   assertPlainObject(meta.narration_connector, "category_metadata.narration_connector");
+   assert.equal(meta.narration_connector.context, "narration_only");
+   assertPlainObject(meta.abstract_filler, "category_metadata.abstract_filler");
+   assertPlainObject(meta.abstract_filler.genre_override, "category_metadata.abstract_filler.genre_override");
+   assertPlainObject(meta.abstract_filler.genre_override["sci-fi"], "category_metadata.abstract_filler.genre_override.sci-fi");
+   assertPlainObject(meta.abstract_filler.genre_override["sci-fi"].per_chapter_max, "category_metadata.abstract_filler.genre_override.sci-fi.per_chapter_max");
+   const allCategoryWords = new Set();
+   const narrationConnectorWords = new Set();
+   const abstractFillerWords = new Set();
+   const categorizedWordCounts = new Map();
+   const categoryWordSets = new Map();
+   const entryIndex = new Map();
+   for (const [categoryName, entries] of Object.entries(categories)) {
+     assert.ok(Array.isArray(entries), `categories.${categoryName} must be an array`);
+     const categoryWords = new Set();
+     categoryWordSets.set(categoryName, categoryWords);
+     for (const entry of entries) {
+       assertPlainObject(entry, `categories.${categoryName}[]`);
+       assert.equal(typeof entry.word, "string");
+       const word = entry.word.trim();
+       assert.ok(word.length > 0, `categories.${categoryName}[] word must be non-empty`);
+       categoryWords.add(word);
+       entryIndex.set(`${categoryName}:${word}`, entry);
+       assert.equal(typeof entry.replacement_hint, "string");
+       assert.ok(entry.replacement_hint.trim().length > 0, `categories.${categoryName}[] replacement_hint must be non-empty`);
+       const perChapterMax = entry.per_chapter_max;
+       if (perChapterMax !== undefined) {
+         assert.ok(Number.isInteger(perChapterMax) && perChapterMax > 0, `Invalid per_chapter_max for word: ${word}`);
+       }
+       if (categoryName === "narration_connector") {
+         narrationConnectorWords.add(word);
+         continue;
+       }
+       if (categoryName === "abstract_filler")
+         abstractFillerWords.add(word);
+       allCategoryWords.add(word);
+       categorizedWordCounts.set(word, (categorizedWordCounts.get(word) ?? 0) + 1);
+     }
+   }
+   // narration_connector is intentionally excluded from flat words until context-aware lint exists.
+   for (const w of narrationConnectorWords) {
+     assert.equal(wordSet.has(w), false, `narration_connector word must not appear in words[]: ${w}`);
+   }
+   assert.equal(allCategoryWords.size, wordSet.size, "categories (excluding narration_connector) must cover words[] exactly");
+   for (const w of allCategoryWords) {
+     assert.ok(wordSet.has(w), `Missing from words[]: ${w}`);
+   }
+   for (const w of wordSet) {
+     assert.equal(categorizedWordCounts.get(w), 1, `Word must appear exactly once across categories: ${w}`);
+   }
+   const sciFiPerChapterMax = meta.abstract_filler.genre_override["sci-fi"].per_chapter_max;
+   for (const [key, value] of Object.entries(sciFiPerChapterMax)) {
+     assert.ok(abstractFillerWords.has(key), `genre_override.sci-fi.per_chapter_max references missing abstract_filler word: ${key}`);
+     assert.ok(Number.isInteger(value), `genre_override.sci-fi.per_chapter_max must be int: ${key}`);
+     assert.ok(value > 0, `genre_override.sci-fi.per_chapter_max must be positive: ${key}`);
+   }
+   for (const word of ["宛如", "恍若", "仿佛置身于"]) {
+     assert.ok(wordSet.has(word), `Missing from words[]: ${word}`);
+     assert.ok(categoryWordSets.get("simile_cliche")?.has(word), `Missing from simile_cliche: ${word}`);
+   }
+   assert.ok(categoryWordSets.get("paragraph_opener")?.has("下一刻"), "下一刻 should be classified as paragraph_opener");
+   assert.equal(categoryWordSets.get("narrative_filler")?.has("下一刻"), false, "下一刻 should not remain in narrative_filler");
+   for (const [categoryName, word, expectedMax] of [
+     ["enumeration_template", "首先", 2],
+     ["enumeration_template", "其次", 2],
+     ["enumeration_template", "最后", 2],
+     ["academic_tone", "例如", 2],
+     ["emotion_cliche", "不禁", 1],
+     ["emotion_cliche", "心中暗道", 1],
+     ["action_cliche", "缓缓说道", 1],
+     ["action_cliche", "微微一笑", 1]
+   ]) {
+     const entry = entryIndex.get(`${categoryName}:${word}`);
+     assert.ok(entry, `Missing entry metadata: ${categoryName}:${word}`);
+     assert.equal(entry?.per_chapter_max, expectedMax, `Unexpected per_chapter_max for ${categoryName}:${word}`);
+   }
+   assert.ok(Array.isArray(raw.update_log), "ai-blacklist.json.update_log must be an array");
+   const updateLog = raw.update_log;
+   assert.ok(updateLog.length >= 1, "update_log should have at least one entry");
+   const latest = updateLog[updateLog.length - 1];
+   assertPlainObject(latest, "update_log[-1]");
+   assert.equal(latest.version, "2.0.0");
+   assert.equal(latest.words_count, words.length);
+ });
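The `per_chapter_max` metadata validated above could be enforced against a chapter draft roughly as follows. This is a minimal sketch; the function name and the naive substring-counting strategy are assumptions for illustration, not code shipped in the package:

```javascript
// Count non-overlapping occurrences of a blacklisted phrase in a draft and
// compare against its per_chapter_max budget from ai-blacklist.json.
function exceedsPerChapterMax(draft, phrase, perChapterMax) {
  const hits = draft.split(phrase).length - 1;
  return hits > perChapterMax;
}
```

A real linter would additionally honor `whitelist` / `exemptions` and, for `narration_connector` entries, skip quoted dialogue.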
@@ -0,0 +1,38 @@
+ import assert from "node:assert/strict";
+ import { createRequire } from "node:module";
+ import test from "node:test";
+ import { main } from "../cli.js";
+ const require = createRequire(import.meta.url);
+ const pkg = require("../../package.json");
+ async function runCli(argv) {
+   let stdout = "";
+   let stderr = "";
+   const origOut = process.stdout.write;
+   const origErr = process.stderr.write;
+   process.stdout.write = (chunk) => {
+     stdout += typeof chunk === "string" ? chunk : Buffer.from(chunk).toString("utf8");
+     return true;
+   };
+   process.stderr.write = (chunk) => {
+     stderr += typeof chunk === "string" ? chunk : Buffer.from(chunk).toString("utf8");
+     return true;
+   };
+   const prevExitCode = process.exitCode;
+   try {
+     const code = await main(argv);
+     return { code, stdout, stderr };
+   }
+   finally {
+     process.exitCode = prevExitCode;
+     process.stdout.write = origOut;
+     process.stderr.write = origErr;
+   }
+ }
+ test("novel --version prints the package version", async () => {
+   const expected = typeof pkg.version === "string" ? pkg.version : null;
+   assert.ok(expected, "Expected package.json to contain a string version.");
+   const res = await runCli(["--version"]);
+   assert.equal(res.code, 0);
+   assert.equal(res.stdout.trim(), expected);
+   assert.equal(res.stderr, "");
+ });
package/dist/cli.js CHANGED
@@ -1,6 +1,7 @@
  #!/usr/bin/env node
  import { Command, CommanderError } from "commander";
  import { realpathSync } from "node:fs";
+ import { createRequire } from "node:module";
  import { join, resolve } from "node:path";
  import { fileURLToPath, pathToFileURL } from "node:url";
  import { buildCharacterVoiceProfiles, clearCharacterVoiceDriftFile, computeCharacterVoiceDrift, loadActiveCharacterVoiceDriftIds, loadCharacterVoiceProfiles, writeCharacterVoiceDriftFile, writeCharacterVoiceProfilesFile } from "./character-voice.js";
@@ -25,6 +26,17 @@ import { isPlainObject } from "./type-guards.js";
  import { validateStep } from "./validate.js";
  import { VOL_REVIEW_RELS, collectVolumeData, computeBridgeCheck, computeForeshadowingAudit, computeStorylineRhythm } from "./volume-review.js";
  import { tryResolveVolumeChapterRange } from "./consistency-auditor.js";
+ const require = createRequire(import.meta.url);
+ function resolveCliVersion() {
+   try {
+     const pkg = require("../package.json");
+     return typeof pkg.version === "string" ? pkg.version : "0.0.0";
+   }
+   catch {
+     return "0.0.0";
+   }
+ }
+ const CLI_VERSION = resolveCliVersion();
  function detectCommandName(argv) {
    for (let i = 0; i < argv.length; i++) {
      const token = argv[i];
@@ -49,6 +61,7 @@ function buildProgram(argv) {
    const jsonMode = isJsonMode(argv);
    const program = new Command();
    program.name("novel").description("Executor-agnostic novel orchestration CLI.");
+   program.version(CLI_VERSION);
    program.option("--json", "Emit machine-readable JSON (single object).");
    program.option("--project <dir>", "Project root directory (defaults to auto-detect via .checkpoint.json).");
    program.configureOutput({
@@ -870,7 +883,7 @@ export async function main(argv = process.argv.slice(2)) {
        return err.exitCode;
      }
      if (err instanceof CommanderError) {
-       if (err.code === "commander.helpDisplayed") {
+       if (err.code === "commander.helpDisplayed" || err.code === "commander.version") {
          return 0;
        }
        if (jsonMode) {
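The `commander.version` handling in the last hunk boils down to a small exit-code mapping. As an isolated sketch (`mapCommanderExit` is a hypothetical name, not an export of the CLI):

```javascript
// Help and version displays are successful exits; any other CommanderError
// propagates its own exitCode, mirroring the branch in main() above.
function mapCommanderExit(code, exitCode) {
  if (code === "commander.helpDisplayed" || code === "commander.version") {
    return 0;
  }
  return exitCode;
}
```

Without this mapping, Commander's thrown `commander.version` error would make `novel --version` exit non-zero even though the version printed correctly.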