geo-ai-search-optimization 1.2.15 → 1.2.17
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +60 -0
- package/package.json +1 -1
- package/resources/geo-ai-search-optimization/references/skill-bundle-map.md +20 -0
- package/resources/geo-ai-search-optimization-agent-playbook-pack/SKILL.md +23 -0
- package/resources/geo-ai-search-optimization-agent-playbook-pack/agents/openai.yaml +4 -0
- package/resources/geo-ai-search-optimization-agent-retrospective/SKILL.md +23 -0
- package/resources/geo-ai-search-optimization-agent-retrospective/agents/openai.yaml +4 -0
- package/resources/geo-ai-search-optimization-usage/SKILL.md +24 -14
- package/src/agent-playbook-pack.js +267 -0
- package/src/agent-retrospective.js +452 -0
- package/src/agent-session.js +20 -0
- package/src/auto-flow.js +49 -0
- package/src/cli.js +69 -0
- package/src/completion-report.js +31 -0
- package/src/index.js +2 -0
- package/src/skills.js +6 -0
package/README.md CHANGED

````diff
@@ -226,6 +226,47 @@ geo-ai-search-optimization agent-decision-log ./your-site --append-from ./report
 - the latest suggested next command
 - a decision log prompt you can paste directly to an agent
 
+## Agent Retrospective command
+
+If you want more than the multi-round decision history, and would rather get a direct summary of why recent rounds went smoothly or kept stalling, use `agent-retrospective`:
+
+```bash
+geo-ai-search-optimization agent-retrospective ./your-site
+geo-ai-search-optimization agent-retrospective ./reports/agent-decision-log.json
+geo-ai-search-optimization agent-retrospective ./reports/agent-decision-log.json --format json --out ./reports/agent-retrospective.json
+```
+
+`agent-retrospective` outputs:
+
+- the current retrospective status
+- recurring blockers and repeated task packets
+- key learnings
+- next-round advice
+- the decision distribution
+- a multi-round timeline
+- a retrospective prompt you can paste directly to an agent
+
+## Agent Playbook Pack command
+
+If you want to compress the multi-round retrospective, decision history, and handoff information into one single-entry artifact that the next agent can pick up and continue from immediately, use `agent-playbook-pack`:
+
+```bash
+geo-ai-search-optimization agent-playbook-pack ./your-site
+geo-ai-search-optimization agent-playbook-pack ./reports/agent-decision-log.json
+geo-ai-search-optimization agent-playbook-pack ./reports/agent-retrospective.json --format json --out ./reports/agent-playbook-pack.json
+```
+
+`agent-playbook-pack` outputs:
+
+- the current status
+- a resume summary
+- the start command
+- the current and next packets
+- what to read first
+- what to do now / what not to do now
+- follow-up commands
+- a playbook prompt you can paste directly to an agent
+
 ## Quick Start
 
 If you are starting a GEO project from 0 to 1, follow this order.
````
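The `--format json` artifacts above can also be consumed programmatically. As a minimal sketch (the sample object is hypothetical, but the field names `retrospectiveStatus`, `recurringBlockers`, and `nextRoundAdvice` match the package's `agent-retrospective` source), a follow-up script might summarize the artifact like this:

```javascript
// Minimal sketch: summarize an agent-retrospective JSON artifact.
// The sample data below is hypothetical; the field names mirror the
// output shape produced by the package's agent-retrospective command.
const retrospective = {
  retrospectiveStatus: "repeated-blocker",
  recurringBlockers: [{ title: "missing template", count: 3 }],
  nextRoundAdvice: ["clear the recurring blocker before editing pages"]
};

function summarizeRetrospective(artifact) {
  const topBlocker = artifact.recurringBlockers[0];
  const lines = [`status: ${artifact.retrospectiveStatus}`];
  if (topBlocker && topBlocker.count >= 2) {
    // A blocker that appeared in two or more rounds is worth surfacing.
    lines.push(`recurring blocker: ${topBlocker.title} (x${topBlocker.count})`);
  }
  return lines.concat(artifact.nextRoundAdvice.map((a) => `advice: ${a}`)).join("\n");
}

console.log(summarizeRetrospective(retrospective));
```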
````diff
@@ -628,6 +669,8 @@ geo-ai-search-optimization agent-progress-tracker ./your-site
 geo-ai-search-optimization agent-status-board ./your-site
 geo-ai-search-optimization agent-checkpoint ./your-site
 geo-ai-search-optimization agent-decision-log ./your-site
+geo-ai-search-optimization agent-retrospective ./your-site
+geo-ai-search-optimization agent-playbook-pack ./your-site
 geo-ai-search-optimization skills
 geo-ai-search-optimization where
 geo-ai-search-optimization doctor
````
````diff
@@ -698,6 +741,21 @@ geo-ai-search-optimization help
 - outputs the do-now checklist, stop checklist, success checklist, verification commands, and report-back template
 - added the `geo-ai-search-optimization-agent-executor` skill
 
+## New in 1.2.17
+
+- added `agent-playbook-pack`
+- compresses `agent-retrospective + agent-decision-log + handoff-bundle` into one single-entry execution pack
+- lets the next agent continue directly from the current packet, the start command, and the do-now checklist
+- `completion-report` can now also consume retrospective / handoff bundle / playbook pack artifacts directly
+- added the `geo-ai-search-optimization-agent-playbook-pack` skill
+
+## New in 1.2.16
+
+- added the `agent-retrospective` command
+- compresses multi-round decision logs into a retrospective view that identifies recurring blockers, repeated task packets, and recurring decision patterns
+- supports generating the retrospective from inputs such as `agent-decision-log`, `agent-checkpoint`, directories, or URLs
+- added the `geo-ai-search-optimization-agent-retrospective` skill
+
 ## New in 1.2.15
 
 - added the `agent-decision-log` command
````
````diff
@@ -928,6 +986,8 @@ The installed package now includes a bundled GEO skill pack, including:
 - `geo-ai-search-optimization-agent-status-board`
 - `geo-ai-search-optimization-agent-checkpoint`
 - `geo-ai-search-optimization-agent-decision-log`
+- `geo-ai-search-optimization-agent-retrospective`
+- `geo-ai-search-optimization-agent-playbook-pack`
 - `geo-ai-search-optimization-usage`
 - `geo-ai-search-optimization-agent-handoff`
 - `geo-ai-search-optimization-repair-loop`
````
package/package.json CHANGED
package/resources/geo-ai-search-optimization/references/skill-bundle-map.md CHANGED

````diff
@@ -112,6 +112,26 @@ Best for:
 - giving PM and the next agent a reusable decision history across rounds
 - appending a fresh checkpoint onto an existing GEO execution history
 
+### `geo-ai-search-optimization-agent-retrospective`
+
+Use this when the next agent or PM should understand the pattern across several rounds, not just inherit the last decision.
+
+Best for:
+
+- explaining why the same blocker or packet kept recurring
+- turning decision history into lessons learned and next-round advice
+- producing a cross-round retrospective before meetings, closeout, or final delivery
+
+### `geo-ai-search-optimization-agent-playbook-pack`
+
+Use this when the next agent should get one single-entry resume pack instead of piecing together retrospective, decision history, and handoff artifacts manually.
+
+Best for:
+
+- handing one GEO artifact to the next agent and saying "continue from here"
+- compressing multi-round history into one current packet, one start command, and one follow-up chain
+- reducing restart risk after several GEO rounds or multiple handoffs
+
 ## Usage guide
 
 ### `geo-ai-search-optimization-usage`
````
package/resources/geo-ai-search-optimization-agent-playbook-pack/SKILL.md ADDED

````diff
@@ -0,0 +1,23 @@
+---
+name: geo-ai-search-optimization-agent-playbook-pack
+description: Bundle GEO retrospective, decision history, and handoff artifacts into one single-entry playbook for the next agent. Use when a PM or agent wants the next agent to continue from prior GEO rounds without rebuilding the whole chain, and needs a current packet, start command, do-now checklist, avoid-now checklist, and follow-up commands in one artifact.
+---
+
+# GEO Agent Playbook Pack
+
+Use this skill when the next agent should not rebuild the GEO chain from scratch.
+
+`GEO = Generative Engine Optimization`
+
+## What it does
+
+- reads GEO retrospective, decision log, and handoff bundle style inputs
+- compresses them into one agent-ready entrypoint
+- tells the next agent where to resume, what to do now, and what not to do now
+- keeps cross-round context without forcing the next agent to read every artifact first
+
+## Best use
+
+- when PM wants to hand one artifact to the next agent and say "continue from here"
+- when GEO work has already gone through multiple rounds and needs a stable resume point
+- when the team wants one start command, one current packet, and one follow-up chain instead of many separate artifacts
````
package/resources/geo-ai-search-optimization-agent-playbook-pack/agents/openai.yaml ADDED

````diff
@@ -0,0 +1,4 @@
+interface:
+  display_name: "GEO Agent Playbook Pack"
+  short_description: "Compress GEO history into one resume entrypoint"
+  default_prompt: "Use $geo-ai-search-optimization-agent-playbook-pack to turn this GEO history into one playbook artifact with a resume summary, start command, do-now checklist, and follow-up commands for the next agent."
````
package/resources/geo-ai-search-optimization-agent-retrospective/SKILL.md ADDED

````diff
@@ -0,0 +1,23 @@
+---
+name: geo-ai-search-optimization-agent-retrospective
+description: Turn GEO decision history into a multi-round retrospective. Use when an agent or PM should understand why prior GEO rounds kept advancing, stalled on blockers, or reached closeout, and needs recurring patterns, lessons learned, and next-round guidance instead of a single checkpoint.
+---
+
+# GEO Agent Retrospective
+
+Use this skill when one checkpoint or one decision log is not enough, and the team needs to understand the pattern across rounds.
+
+`GEO = Generative Engine Optimization`
+
+## What it does
+
+- reads GEO decision history across multiple rounds
+- highlights recurring blockers, repeated packets, and repeated decisions
+- explains what changed from round to round
+- turns history into lessons learned and next-round advice
+
+## Best use
+
+- when PM asks why the team keeps getting blocked in the same place
+- when the next agent should inherit not only the latest decision, but the pattern behind it
+- when the team wants a reusable retrospective before closeout, meetings, or final delivery
````
package/resources/geo-ai-search-optimization-agent-retrospective/agents/openai.yaml ADDED

````diff
@@ -0,0 +1,4 @@
+interface:
+  display_name: "GEO Agent Retrospective"
+  short_description: "Summarize cross-round GEO patterns and lessons"
+  default_prompt: "Use $geo-ai-search-optimization-agent-retrospective to turn this GEO decision history into a multi-round retrospective with recurring patterns, lessons learned, and next-round guidance."
````
package/resources/geo-ai-search-optimization-usage/SKILL.md CHANGED

````diff
@@ -13,7 +13,7 @@ Treat this tool as a PM-friendly GEO workflow for websites.
 
 `GEO = Generative Engine Optimization`
 
-The package is best explained as twenty-two layers:
+The package is best explained as twenty-four layers:
 
 1. `auto-flow`: auto-select the next skill and command chain
 2. `agent-session`: build a runnable session for the next agent
@@ -24,19 +24,21 @@ The package is best explained as twenty-two layers:
 7. `agent-status-board`: turn the execution state into a board view for PM and agents
 8. `agent-checkpoint`: freeze the current round into a continue / unblock / closeout decision
 9. `agent-decision-log`: preserve why each round continued, paused, or closed out
-10. `skills`: inspect the bundled skill package
-11. `onboard-url` / `onboard`: first look
-12. `scan`: raw signal check
-13. `audit` / `report`: diagnosis
-14. `fix-plan` / `owner-board`: execution planning
-15. `agent-handoff`: agent takeover package
-16. `apply-plan`: execution loop
-17. `completion-report`: closeout
-18. `handoff-bundle`: all-in-one package
-19. `share-pack`: audience-ready delivery
-20. `export-pack`: folder export
-21. `html-pack` / `publish-pack`: browsable and final delivery output
-22. `pm-brief` / `roadmap`: stakeholder alignment
+10. `agent-retrospective`: explain multi-round patterns, lessons, and next-round advice
+11. `agent-playbook-pack`: compress retrospective, decision history, and handoff into one resume entrypoint
+12. `skills`: inspect the bundled skill package
+13. `onboard-url` / `onboard`: first look
+14. `scan`: raw signal check
+15. `audit` / `report`: diagnosis
+16. `fix-plan` / `owner-board`: execution planning
+17. `agent-handoff`: agent takeover package
+18. `apply-plan`: execution loop
+19. `completion-report`: closeout
+20. `handoff-bundle`: all-in-one package
+21. `share-pack`: audience-ready delivery
+22. `export-pack`: folder export
+23. `html-pack` / `publish-pack`: browsable and final delivery output
+24. `pm-brief` / `roadmap`: stakeholder alignment
 
 ## Recommended command order
 
@@ -52,6 +54,8 @@ npx geo-ai-search-optimization agent-progress-tracker https://example.com
 npx geo-ai-search-optimization agent-status-board https://example.com
 npx geo-ai-search-optimization agent-checkpoint https://example.com
 npx geo-ai-search-optimization agent-decision-log https://example.com
+npx geo-ai-search-optimization agent-retrospective https://example.com
+npx geo-ai-search-optimization agent-playbook-pack https://example.com
 npx geo-ai-search-optimization onboard-url https://example.com
 npx geo-ai-search-optimization pm-brief https://example.com
 npx geo-ai-search-optimization roadmap https://example.com
@@ -69,6 +73,8 @@ npx geo-ai-search-optimization agent-progress-tracker ./your-site
 npx geo-ai-search-optimization agent-status-board ./your-site
 npx geo-ai-search-optimization agent-checkpoint ./your-site
 npx geo-ai-search-optimization agent-decision-log ./your-site
+npx geo-ai-search-optimization agent-retrospective ./your-site
+npx geo-ai-search-optimization agent-playbook-pack ./your-site
 npx geo-ai-search-optimization scan ./your-site
 npx geo-ai-search-optimization audit ./your-site
 npx geo-ai-search-optimization fix-plan ./your-site
@@ -95,6 +101,8 @@ npx geo-ai-search-optimization roadmap ./your-site
 - `agent-status-board`: present the current execution state as a board with done, in-progress, blocked, next, and queued columns
 - `agent-checkpoint`: convert the current round into a checkpoint decision for continue, unblock, or closeout
 - `agent-decision-log`: preserve multiple rounds of checkpoint history so the next agent can inherit the reasoning
+- `agent-retrospective`: explain why the last few rounds advanced or stalled, and turn that into lessons and next-round guidance
+- `agent-playbook-pack`: compress retrospective, decision history, and handoff into one single-entry resume pack for the next agent
 - `onboard-url`: first-time website check from a live URL
 - `onboard`: interactive first-time onboarding
 - `skills`: list the bundled skills and decide which skill or command chain fits the task
@@ -128,6 +136,8 @@ When explaining the tool to a user:
 - if the user wants the next agent to present execution state as a board for PM and agent coordination, move them to `agent-status-board`
 - if the user wants a per-round decision artifact that says continue, unblock, or close out, move them to `agent-checkpoint`
 - if the user wants cross-round decision memory and not just one checkpoint, move them to `agent-decision-log`
+- if the user wants to understand why multiple rounds kept advancing or getting stuck, move them to `agent-retrospective`
+- if the user wants one artifact that the next agent can take over from immediately, move them to `agent-playbook-pack`
 - explain the result in PM language, not implementation jargon
 - if the user sounds new, start with `onboard-url` or `quick-start`
 - if the user wants another agent to take over, move them to `agent-handoff`
````
package/src/agent-playbook-pack.js ADDED

````js
import { createAgentDecisionLog } from "./agent-decision-log.js";
import { createAgentRetrospective } from "./agent-retrospective.js";
import { createHandoffBundle } from "./handoff-bundle.js";
import { writeScanOutput } from "./scan.js";

const VALID_FORMATS = new Set(["markdown", "json"]);

function normalizeFormat(format) {
  const resolved = (format || "markdown").toLowerCase();
  if (!VALID_FORMATS.has(resolved)) {
    throw new Error(`不支持的 agent-playbook-pack 格式:${format}。可选值:${Array.from(VALID_FORMATS).join(", ")}`);
  }
  return resolved;
}

function inferPlaybookStatus(retrospective, decisionLog, handoffBundle) {
  if (
    retrospective.retrospectiveStatus === "ready-for-closeout" ||
    decisionLog.latestDecision === "move-to-closeout"
  ) {
    return "closeout-ready";
  }
  if (
    retrospective.retrospectiveStatus === "repeated-blocker" ||
    retrospective.retrospectiveStatus === "blocked-this-round" ||
    decisionLog.latestDecision === "resolve-blockers"
  ) {
    return "unblock-first";
  }
  if (handoffBundle.summary.executionMode === "implementation-ready") {
    return "implementation-ready";
  }
  if (handoffBundle.summary.executionMode === "advice-only") {
    return "advice-ready";
  }
  return "artifact-guided";
}

function buildResumeSummary(status, retrospective, decisionLog) {
  if (status === "closeout-ready") {
    return "这轮已经接近或进入收尾,不要再扩展新任务,直接整理 closeout 与交付。";
  }
  if (status === "unblock-first") {
    return `当前应先解除阻塞,再继续 ${decisionLog.currentPacket?.id || "当前包"}。`;
  }
  if (status === "implementation-ready") {
    return `当前可以直接接着做 ${decisionLog.currentPacket?.id || "第一包"},并按既有节奏推进。`;
  }
  if (status === "advice-ready") {
    return "当前更适合先给建议与实施路径,不要假设已经拥有仓库写入权限。";
  }
  return `当前建议沿着最近一次决策继续推进 ${decisionLog.currentPacket?.id || "下一包"}。`;
}

function buildStartCommand(status, retrospective, decisionLog, handoffBundle) {
  if (status === "closeout-ready") {
    return `geo-ai-search-optimization completion-report ${decisionLog.source}`;
  }
  if (retrospective.suggestedNextCommand) {
    return retrospective.suggestedNextCommand;
  }
  if (decisionLog.suggestedNextCommand) {
    return decisionLog.suggestedNextCommand;
  }
  const firstPacket = handoffBundle.applyPlan.packets[0];
  return firstPacket
    ? `geo-ai-search-optimization agent-executor ${decisionLog.source} --task ${firstPacket.id}`
    : `geo-ai-search-optimization agent-session ${decisionLog.source}`;
}

function buildReadOrder() {
  return [
    "先读 retrospective,总结多轮里为什么推进顺利或反复卡住。",
    "再读 decision log,确认最近一次决策、当前包和下一步命令。",
    "最后读 handoff bundle,接着看 apply-plan 和 completion-report 的执行细节。"
  ];
}

function buildDoNowChecklist(status, retrospective, decisionLog) {
  const items = [];

  if (status === "closeout-ready") {
    items.push("不要再新开执行包。");
    items.push("直接整理 completion-report、meeting-pack 和 publish-pack。");
  } else if (status === "unblock-first") {
    items.push("先解决当前阻塞,不要直接继续改代码。");
    items.push("阻塞解除后,再回到当前包继续推进。");
  } else {
    items.push(`沿着当前包 ${decisionLog.currentPacket?.id || "第一包"} 继续,不要重新生成一条新链。`);
    items.push("先按已有 runbook / executor 节奏推进,再更新状态。");
  }

  items.push(...retrospective.nextRoundAdvice.slice(0, 2));
  return Array.from(new Set(items));
}

function buildAvoidNowChecklist(status, retrospective) {
  const items = [];

  if (status !== "closeout-ready") {
    items.push("不要同时切换目标、任务包和执行模式。");
  }
  if (status === "unblock-first") {
    items.push("不要假装阻塞不存在,也不要跳过 preflight 直接改动。");
  }
  if (retrospective.retrospectiveStatus === "restarting-too-often") {
    items.push("不要再从第一包重新开始,优先延续最近一次 current packet。");
  }
  if (retrospective.recurringBlockers[0]?.count >= 2) {
    items.push(`不要忽视重复阻塞「${retrospective.recurringBlockers[0].title}」,先把它变成固定检查项。`);
  }
  if (status === "closeout-ready") {
    items.push("不要在 closeout 阶段继续扩展新的修复范围。");
  }

  return items;
}

function buildFollowupCommands(status, source) {
  if (status === "closeout-ready") {
    return [
      `geo-ai-search-optimization completion-report ${source}`,
      `geo-ai-search-optimization meeting-pack ${source}`,
      `geo-ai-search-optimization publish-pack ${source}`
    ];
  }

  return [
    `geo-ai-search-optimization agent-status-board ${source}`,
    `geo-ai-search-optimization agent-decision-log ${source}`,
    `geo-ai-search-optimization agent-retrospective ${source}`
  ];
}

function buildPlaybookPrompt(pack) {
  const lines = [
    "你现在进入 GEO Agent Playbook 模式。",
    `输入来源:${pack.source}`,
    `当前状态:${pack.playbookStatus}`,
    `恢复摘要:${pack.resumeSummary}`,
    `启动命令:${pack.startCommand}`,
    `当前包:${pack.currentPacket?.id || "无"}`
  ];

  if (pack.nextPacket) {
    lines.push(`下一包:${pack.nextPacket.id}|${pack.nextPacket.title}`);
  }
  if (pack.retrospective.recurringBlockers.length > 0) {
    lines.push(
      `重复阻塞:${pack.retrospective.recurringBlockers.map((item) => `${item.title} (${item.count})`).join(";")}`
    );
  }

  lines.push("请先解释你为什么沿着这条 playbook 继续,而不是重新规划一整条新链。");
  lines.push("然后按 read order、do now checklist、avoid now checklist 输出执行建议。");
  lines.push("最后给出本轮完成后应该更新哪些工件。");
  return lines.join("\n");
}

export async function createAgentPlaybookPack(input, options = {}) {
  const format = normalizeFormat(options.format);
  const retrospective = await createAgentRetrospective(input, { format: "json" });
  const decisionLog = retrospective.decisionLog || (await createAgentDecisionLog(input, { format: "json" }));
  const baseSource = retrospective.source || decisionLog.source || input;
  const handoffBundle = await createHandoffBundle(baseSource, { format: "json", taskId: options.taskId });

  const playbookStatus = inferPlaybookStatus(retrospective, decisionLog, handoffBundle);
  const source = baseSource;
  const startCommand = buildStartCommand(playbookStatus, retrospective, decisionLog, handoffBundle);

  const pack = {
    kind: "geo-agent-playbook-pack",
    input,
    source,
    sourceType: decisionLog.sourceType,
    artifactKind: decisionLog.kind,
    format,
    playbookStatus,
    resumeSummary: buildResumeSummary(playbookStatus, retrospective, decisionLog),
    currentPacket: decisionLog.currentPacket,
    nextPacket: decisionLog.nextPacket,
    startCommand,
    readOrder: buildReadOrder(),
    doNowChecklist: buildDoNowChecklist(playbookStatus, retrospective, decisionLog),
    avoidNowChecklist: buildAvoidNowChecklist(playbookStatus, retrospective),
    followupCommands: buildFollowupCommands(playbookStatus, source),
    decisionLog,
    retrospective,
    handoffBundle,
    playbookPrompt: ""
  };

  pack.playbookPrompt = buildPlaybookPrompt(pack);
  return pack;
}

export function renderAgentPlaybookPackMarkdown(pack) {
  const lines = [
    "# GEO Agent Playbook Pack",
    "",
    `- 输入:\`${pack.source}\``,
    `- 来源类型:\`${pack.sourceType}\``,
    `- 当前状态:\`${pack.playbookStatus}\``,
    `- 恢复摘要:${pack.resumeSummary}`,
    `- 启动命令:\`${pack.startCommand}\``,
    ""
  ];

  if (pack.currentPacket) {
    lines.push("## 当前包", "", `- ${pack.currentPacket.id}|${pack.currentPacket.title}`);
    lines.push(`- Owner:${pack.currentPacket.owner}`);
    lines.push(`- 优先级:${pack.currentPacket.priority}`);
    lines.push("");
  }

  if (pack.nextPacket) {
    lines.push("## 下一包", "", `- ${pack.nextPacket.id}|${pack.nextPacket.title}`);
    lines.push(`- Owner:${pack.nextPacket.owner}`);
    lines.push(`- 优先级:${pack.nextPacket.priority}`);
    lines.push("");
  }

  lines.push("## 先读什么", "");
  for (const item of pack.readOrder) {
    lines.push(`- ${item}`);
  }

  lines.push("", "## 现在先做什么", "");
  for (const item of pack.doNowChecklist) {
    lines.push(`- ${item}`);
  }

  lines.push("", "## 现在不要做什么", "");
  if (pack.avoidNowChecklist.length === 0) {
    lines.push("- 当前没有额外禁忌项。");
  } else {
    for (const item of pack.avoidNowChecklist) {
      lines.push(`- ${item}`);
    }
  }

  lines.push("", "## 后续命令", "");
  for (const command of pack.followupCommands) {
    lines.push(`- \`${command}\``);
  }

  lines.push(
    "",
    "## 内含工件摘要",
    "",
    `- decision-log:${pack.decisionLog.totalEntries} 轮`,
    `- retrospective:${pack.retrospective.retrospectiveStatus}`,
    `- handoff-bundle:${pack.handoffBundle.summary.executionMode} / ${pack.handoffBundle.summary.executionType}`,
    "",
    "## 可直接复制给 Agent 的 Playbook Prompt",
    "",
    "```text",
    pack.playbookPrompt,
    "```"
  );

  return `${lines.join("\n")}\n`;
}

export async function writeAgentPlaybookPackOutput(outputPath, content) {
  return writeScanOutput(outputPath, content);
}
````
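The status inference in `agent-playbook-pack.js` is order-sensitive: closeout signals win over blocker signals, which win over the handoff bundle's execution mode. The sketch below copies that precedence into a standalone function so it can be exercised directly; the sample inputs are hypothetical.

```javascript
// Standalone copy of the inferPlaybookStatus precedence from
// agent-playbook-pack.js. The sample inputs below are hypothetical.
function inferPlaybookStatus(retrospective, decisionLog, handoffBundle) {
  if (
    retrospective.retrospectiveStatus === "ready-for-closeout" ||
    decisionLog.latestDecision === "move-to-closeout"
  ) {
    return "closeout-ready";
  }
  if (
    retrospective.retrospectiveStatus === "repeated-blocker" ||
    retrospective.retrospectiveStatus === "blocked-this-round" ||
    decisionLog.latestDecision === "resolve-blockers"
  ) {
    return "unblock-first";
  }
  if (handoffBundle.summary.executionMode === "implementation-ready") {
    return "implementation-ready";
  }
  if (handoffBundle.summary.executionMode === "advice-only") {
    return "advice-ready";
  }
  return "artifact-guided";
}

// Even with an implementation-ready handoff bundle, a repeated blocker
// forces "unblock-first", because the blocker checks run earlier.
const status = inferPlaybookStatus(
  { retrospectiveStatus: "repeated-blocker" },
  { latestDecision: "continue-current-packet" },
  { summary: { executionMode: "implementation-ready" } }
);
console.log(status); // "unblock-first"
```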
package/src/agent-retrospective.js ADDED (the diff is truncated; the listing below covers the portion shown)

````js
import fs from "node:fs/promises";
import path from "node:path";
import { createAgentDecisionLog } from "./agent-decision-log.js";
import { writeScanOutput } from "./scan.js";

const VALID_FORMATS = new Set(["markdown", "json"]);

function normalizeFormat(format) {
  const resolved = (format || "markdown").toLowerCase();
  if (!VALID_FORMATS.has(resolved)) {
    throw new Error(`不支持的 agent-retrospective 格式:${format}。可选值:${Array.from(VALID_FORMATS).join(", ")}`);
  }
  return resolved;
}

async function pathExists(targetPath) {
  try {
    await fs.access(targetPath);
    return true;
  } catch {
    return false;
  }
}

function cloneForFormat(record, format) {
  return {
    ...record,
    format
  };
}

function incrementMapCount(map, key) {
  if (!key) {
    return;
  }
  map.set(key, (map.get(key) || 0) + 1);
}

function toSortedFrequencyList(map, formatter) {
  return [...map.entries()]
    .sort((left, right) => right[1] - left[1] || String(left[0]).localeCompare(String(right[0])))
    .map(([key, count]) => formatter(key, count));
}

async function resolveDecisionLog(input, options = {}) {
  const resolvedInput = path.resolve(input);
  if (await pathExists(resolvedInput)) {
    try {
      const raw = await fs.readFile(resolvedInput, "utf8");
      const parsed = JSON.parse(raw);
      if (parsed?.kind === "geo-agent-retrospective" && !options.forceRefresh) {
        return {
          retrospective: cloneForFormat(parsed, normalizeFormat(options.format)),
          decisionLog: parsed.decisionLog || null
        };
      }
      if (parsed?.kind === "geo-agent-decision-log") {
        return {
          retrospective: null,
          decisionLog: parsed
        };
      }
    } catch {
      // Fall through to log generation.
    }
  }

  return {
    retrospective: null,
    decisionLog: await createAgentDecisionLog(input, { format: "json" })
  };
}

function analyzeProgressTrend(entries) {
  if (entries.length === 0) {
    return {
      progressDelta: 0,
      firstProgress: 0,
      latestProgress: 0
    };
  }

  return {
    progressDelta: (entries.at(-1)?.progressPercent || 0) - (entries[0]?.progressPercent || 0),
    firstProgress: entries[0]?.progressPercent || 0,
    latestProgress: entries.at(-1)?.progressPercent || 0
  };
}

function inferRetrospectiveStatus(log, blockerFrequency, decisionCounts, progressTrend) {
  if (log.latestDecision === "move-to-closeout") {
    return "ready-for-closeout";
  }
  if (
    log.latestDecision === "resolve-blockers" &&
    blockerFrequency.length > 0 &&
    blockerFrequency[0].count >= 2
  ) {
    return "repeated-blocker";
  }
  if (log.latestDecision === "resolve-blockers") {
    return "blocked-this-round";
  }
  if ((decisionCounts["start-first-packet"] || 0) >= 2) {
    return "restarting-too-often";
  }
  if (progressTrend.progressDelta > 0) {
    return "advancing";
  }
  if (log.totalEntries <= 1) {
    return "early-cycle";
  }
  return "stable-but-flat";
}

function buildKeyLearnings(log, blockerFrequency, packetFrequency, decisionCounts, progressTrend) {
  const learnings = [];

  if (blockerFrequency[0]?.count >= 2) {
    learnings.push(`阻塞「${blockerFrequency[0].title}」重复出现 ${blockerFrequency[0].count} 次,说明进入执行前的上下文准备还不够稳定。`);
  }
  if ((decisionCounts["start-first-packet"] || 0) >= 2) {
    learnings.push("多轮都回到 start-first-packet,说明执行连续性不足,团队还在反复重启。");
  }
  if (packetFrequency[0]?.count >= 2) {
    learnings.push(`当前包 ${packetFrequency[0].id} 在多轮里重复出现,说明这一包是整条链最容易卡住的地方。`);
  }
  if (progressTrend.progressDelta > 0) {
    learnings.push(`整体进度从 ${progressTrend.firstProgress}% 提升到 ${progressTrend.latestProgress}%,说明执行链并非停滞,而是在缓慢推进。`);
  }
  if (log.latestDecision === "move-to-closeout") {
    learnings.push("最新决策已经进入 closeout,下一步重点不是再开新包,而是整理复盘与交付。");
  }
  if (log.latestDecision === "resolve-blockers" && blockerFrequency[0]?.count === 1) {
    learnings.push("当前阻塞是本轮新出现的问题,适合先快速排除,再继续当前包。");
  }

  if (learnings.length === 0) {
    learnings.push("当前决策链整体稳定,但还没有足够多的轮次来识别更强的模式。");
  }

  return learnings;
}

function buildRepeatedPatterns(blockerFrequency, packetFrequency, decisionCounts) {
  const patterns = [];

  if (blockerFrequency[0] && blockerFrequency[0].count >= 2) {
    patterns.push({
      type: "blocker",
      label: `重复阻塞:${blockerFrequency[0].title}`,
      count: blockerFrequency[0].count
    });
  }
  if (packetFrequency[0] && packetFrequency[0].count >= 2) {
    patterns.push({
      type: "packet",
      label: `反复卡在同一包:${packetFrequency[0].id}|${packetFrequency[0].title}`,
      count: packetFrequency[0].count
    });
  }
  for (const [decision, count] of Object.entries(decisionCounts)) {
    if (count >= 2) {
      patterns.push({
        type: "decision",
        label: `重复决策:${decision}`,
        count
      });
    }
  }

  return patterns.sort((left, right) => right.count - left.count || left.label.localeCompare(right.label));
}

function buildNextRoundAdvice(log, status, blockerFrequency) {
  const advice = [];

  if (status === "repeated-blocker") {
    advice.push("下一轮不要先做页面修改,先把重复出现的阻塞前置清掉。");
    advice.push("把权限、模板、数据源或生成链问题单独列成 preflight checklist。");
  } else if (status === "blocked-this-round") {
    advice.push("这一轮先解除当前阻塞,再回到当前包继续推进。");
  } else if (status === "ready-for-closeout") {
    advice.push("下一轮不需要再开新执行包,应直接整理 closeout、meeting pack 和交付包。");
  } else if (status === "restarting-too-often") {
    advice.push("下一轮要保持执行连续性,不要再从第一包重新开始。");
    advice.push("优先沿着最近一次 current packet 继续,而不是重新生成新的执行队列。");
  } else if (status === "advancing") {
    advice.push("执行链在推进,下一轮重点是保留当前节奏并继续当前包。");
  } else {
    advice.push("下一轮先延续最近一次决策,不要同时改变执行顺序和目标。");
  }

  if (blockerFrequency[0]?.count >= 2) {
````
advice.push(`把「${blockerFrequency[0].title}」变成固定的进入执行前检查项,避免下一轮再次卡住。`);
|
|
196
|
+
}
|
|
197
|
+
|
|
198
|
+
if (log.currentPacket) {
|
|
199
|
+
advice.push(`当前最优先仍是 ${log.currentPacket.id}|${log.currentPacket.title}`);
|
|
200
|
+
}
|
|
201
|
+
|
|
202
|
+
return advice;
|
|
203
|
+
}
|
|
204
|
+
|
|
205
|
+
function buildSuggestedNextCommand(log, status) {
|
|
206
|
+
if (status === "ready-for-closeout") {
|
|
207
|
+
return `geo-ai-search-optimization completion-report ${log.source}`;
|
|
208
|
+
}
|
|
209
|
+
if (status === "repeated-blocker" || status === "blocked-this-round") {
|
|
210
|
+
return log.suggestedNextCommand || `geo-ai-search-optimization agent-runbook ${log.source}`;
|
|
211
|
+
}
|
|
212
|
+
if (log.latestDecision === "start-first-packet" || log.latestDecision === "continue-current-packet") {
|
|
213
|
+
return log.suggestedNextCommand || `geo-ai-search-optimization agent-executor ${log.source}`;
|
|
214
|
+
}
|
|
215
|
+
return `geo-ai-search-optimization agent-decision-log ${log.source}`;
|
|
216
|
+
}
|
|
217
|
+
|
|
218
|
+
function buildFollowupCommands(log, status) {
|
|
219
|
+
if (status === "ready-for-closeout") {
|
|
220
|
+
return [
|
|
221
|
+
`geo-ai-search-optimization completion-report ${log.source}`,
|
|
222
|
+
`geo-ai-search-optimization meeting-pack ${log.source}`,
|
|
223
|
+
`geo-ai-search-optimization publish-pack ${log.source}`
|
|
224
|
+
];
|
|
225
|
+
}
|
|
226
|
+
|
|
227
|
+
return [
|
|
228
|
+
buildSuggestedNextCommand(log, status),
|
|
229
|
+
`geo-ai-search-optimization agent-status-board ${log.source}`,
|
|
230
|
+
`geo-ai-search-optimization agent-decision-log ${log.source}`
|
|
231
|
+
].filter(Boolean);
|
|
232
|
+
}
|
|
233
|
+
|
|
234
|
+
function buildRoundNarrative(entries) {
|
|
235
|
+
return entries.map((entry, index) => {
|
|
236
|
+
const previous = entries[index - 1];
|
|
237
|
+
let whatChanged = "这是目前的起始轮次。";
|
|
238
|
+
|
|
239
|
+
if (previous) {
|
|
240
|
+
if (previous.decision !== entry.decision) {
|
|
241
|
+
whatChanged = `决策从 ${previous.decision} 切换到 ${entry.decision}。`;
|
|
242
|
+
} else if ((previous.progressPercent || 0) !== (entry.progressPercent || 0)) {
|
|
243
|
+
whatChanged = `进度从 ${previous.progressPercent || 0}% 变化到 ${entry.progressPercent || 0}%。`;
|
|
244
|
+
} else if ((previous.blockedItems?.length || 0) !== (entry.blockedItems?.length || 0)) {
|
|
245
|
+
whatChanged = `阻塞数从 ${previous.blockedItems?.length || 0} 变化到 ${entry.blockedItems?.length || 0}。`;
|
|
246
|
+
} else {
|
|
247
|
+
whatChanged = "当前轮次延续了上一轮的总体判断。";
|
|
248
|
+
}
|
|
249
|
+
}
|
|
250
|
+
|
|
251
|
+
return {
|
|
252
|
+
id: entry.id,
|
|
253
|
+
createdAt: entry.createdAt,
|
|
254
|
+
decision: entry.decision,
|
|
255
|
+
checkpointType: entry.checkpointType,
|
|
256
|
+
currentPacket: entry.currentPacket,
|
|
257
|
+
note: entry.note,
|
|
258
|
+
whatChanged,
|
|
259
|
+
decisionReason: entry.decisionReason
|
|
260
|
+
};
|
|
261
|
+
});
|
|
262
|
+
}
|
|
263
|
+
|
|
264
|
+
function buildRetrospectivePrompt(report) {
|
|
265
|
+
const lines = [
|
|
266
|
+
"你现在进入 GEO 多轮复盘模式。",
|
|
267
|
+
`当前输入:${report.source}`,
|
|
268
|
+
`轮次数:${report.totalRounds}`,
|
|
269
|
+
`当前复盘状态:${report.retrospectiveStatus}`,
|
|
270
|
+
`最新决策:${report.latestDecision}`,
|
|
271
|
+
`复盘结论:${report.retrospectiveSummary}`
|
|
272
|
+
];
|
|
273
|
+
|
|
274
|
+
if (report.recurringBlockers.length > 0) {
|
|
275
|
+
lines.push(`重复阻塞:${report.recurringBlockers.map((item) => `${item.title} (${item.count})`).join(";")}`);
|
|
276
|
+
}
|
|
277
|
+
if (report.currentPacket) {
|
|
278
|
+
lines.push(`当前包:${report.currentPacket.id}|${report.currentPacket.title}`);
|
|
279
|
+
}
|
|
280
|
+
|
|
281
|
+
lines.push("请先解释这几轮里发生了什么变化,再给出下一轮最该避免的错误和最该保留的节奏。");
|
|
282
|
+
lines.push("最后输出建议下一步命令,以及对 PM 和下一位 agent 的简短建议。");
|
|
283
|
+
return lines.join("\n");
|
|
284
|
+
}
|
|
285
|
+
|
|
286
|
+
export async function createAgentRetrospective(input, options = {}) {
|
|
287
|
+
const format = normalizeFormat(options.format);
|
|
288
|
+
const { retrospective, decisionLog } = await resolveDecisionLog(input, { ...options, format });
|
|
289
|
+
|
|
290
|
+
if (retrospective) {
|
|
291
|
+
return retrospective;
|
|
292
|
+
}
|
|
293
|
+
|
|
294
|
+
if (!decisionLog) {
|
|
295
|
+
throw new Error("无法从当前输入生成 decision log,因此不能创建 retrospective。");
|
|
296
|
+
}
|
|
297
|
+
|
|
298
|
+
const entries = decisionLog.entries || [];
|
|
299
|
+
const decisionCounts = entries.reduce((accumulator, entry) => {
|
|
300
|
+
accumulator[entry.decision] = (accumulator[entry.decision] || 0) + 1;
|
|
301
|
+
return accumulator;
|
|
302
|
+
}, {});
|
|
303
|
+
|
|
304
|
+
const blockerMap = new Map();
|
|
305
|
+
const packetMap = new Map();
|
|
306
|
+
|
|
307
|
+
for (const entry of entries) {
|
|
308
|
+
for (const blocker of entry.blockedItems || []) {
|
|
309
|
+
incrementMapCount(blockerMap, blocker.title);
|
|
310
|
+
}
|
|
311
|
+
if (entry.currentPacket?.id) {
|
|
312
|
+
incrementMapCount(packetMap, `${entry.currentPacket.id}|||${entry.currentPacket.title}`);
|
|
313
|
+
}
|
|
314
|
+
}
|
|
315
|
+
|
|
316
|
+
const recurringBlockers = toSortedFrequencyList(blockerMap, (title, count) => ({
|
|
317
|
+
title,
|
|
318
|
+
count
|
|
319
|
+
}));
|
|
320
|
+
const repeatedPackets = toSortedFrequencyList(packetMap, (packedValue, count) => {
|
|
321
|
+
const [id, title] = String(packedValue).split("|||");
|
|
322
|
+
return { id, title, count };
|
|
323
|
+
});
|
|
324
|
+
const progressTrend = analyzeProgressTrend(entries);
|
|
325
|
+
const retrospectiveStatus = inferRetrospectiveStatus(decisionLog, recurringBlockers, decisionCounts, progressTrend);
|
|
326
|
+
const keyLearnings = buildKeyLearnings(decisionLog, recurringBlockers, repeatedPackets, decisionCounts, progressTrend);
|
|
327
|
+
const repeatedPatterns = buildRepeatedPatterns(recurringBlockers, repeatedPackets, decisionCounts);
|
|
328
|
+
const nextRoundAdvice = buildNextRoundAdvice(decisionLog, retrospectiveStatus, recurringBlockers);
|
|
329
|
+
const followupCommands = buildFollowupCommands(decisionLog, retrospectiveStatus);
|
|
330
|
+
|
|
331
|
+
const report = {
|
|
332
|
+
kind: "geo-agent-retrospective",
|
|
333
|
+
input,
|
|
334
|
+
source: decisionLog.source,
|
|
335
|
+
sourceType: decisionLog.sourceType,
|
|
336
|
+
artifactKind: decisionLog.kind,
|
|
337
|
+
format,
|
|
338
|
+
totalRounds: decisionLog.totalEntries,
|
|
339
|
+
retrospectiveStatus,
|
|
340
|
+
latestDecision: decisionLog.latestDecision,
|
|
341
|
+
latestDecisionReason: decisionLog.latestDecisionReason,
|
|
342
|
+
latestCheckpointType: decisionLog.latestCheckpointType,
|
|
343
|
+
currentPacket: decisionLog.currentPacket,
|
|
344
|
+
nextPacket: decisionLog.nextPacket,
|
|
345
|
+
progressTrend,
|
|
346
|
+
decisionCounts,
|
|
347
|
+
recurringBlockers,
|
|
348
|
+
repeatedPackets,
|
|
349
|
+
repeatedPatterns,
|
|
350
|
+
retrospectiveSummary: `${decisionLog.logSummary} 当前更适合:${nextRoundAdvice[0]}`,
|
|
351
|
+
keyLearnings,
|
|
352
|
+
nextRoundAdvice,
|
|
353
|
+
suggestedNextCommand: buildSuggestedNextCommand(decisionLog, retrospectiveStatus),
|
|
354
|
+
followupCommands,
|
|
355
|
+
roundNarrative: buildRoundNarrative(entries),
|
|
356
|
+
decisionLog,
|
|
357
|
+
retrospectivePrompt: ""
|
|
358
|
+
};
|
|
359
|
+
|
|
360
|
+
report.retrospectivePrompt = buildRetrospectivePrompt(report);
|
|
361
|
+
return report;
|
|
362
|
+
}
|
|
363
|
+
|
|
364
|
+
export function renderAgentRetrospectiveMarkdown(report) {
|
|
365
|
+
const lines = [
|
|
366
|
+
"# GEO Agent Retrospective",
|
|
367
|
+
"",
|
|
368
|
+
`- 输入:\`${report.source}\``,
|
|
369
|
+
`- 来源类型:\`${report.sourceType}\``,
|
|
370
|
+
`- 决策工件:\`${report.artifactKind}\``,
|
|
371
|
+
`- 轮次数:\`${report.totalRounds}\``,
|
|
372
|
+
`- 当前复盘状态:\`${report.retrospectiveStatus}\``,
|
|
373
|
+
`- 最新决策:\`${report.latestDecision}\``,
|
|
374
|
+
`- 最新检查点类型:\`${report.latestCheckpointType}\``,
|
|
375
|
+
`- 复盘总结:${report.retrospectiveSummary}`,
|
|
376
|
+
""
|
|
377
|
+
];
|
|
378
|
+
|
|
379
|
+
if (report.currentPacket) {
|
|
380
|
+
lines.push("## 当前包", "", `- ${report.currentPacket.id}|${report.currentPacket.title}`);
|
|
381
|
+
lines.push(`- Owner:${report.currentPacket.owner}`);
|
|
382
|
+
lines.push(`- 优先级:${report.currentPacket.priority}`);
|
|
383
|
+
lines.push("");
|
|
384
|
+
}
|
|
385
|
+
|
|
386
|
+
if (report.nextPacket) {
|
|
387
|
+
lines.push("## 下一包", "", `- ${report.nextPacket.id}|${report.nextPacket.title}`);
|
|
388
|
+
lines.push(`- Owner:${report.nextPacket.owner}`);
|
|
389
|
+
lines.push(`- 优先级:${report.nextPacket.priority}`);
|
|
390
|
+
lines.push("");
|
|
391
|
+
}
|
|
392
|
+
|
|
393
|
+
lines.push("## 重复模式", "");
|
|
394
|
+
if (report.repeatedPatterns.length === 0) {
|
|
395
|
+
lines.push("- 当前还没有明显重复模式。", "");
|
|
396
|
+
} else {
|
|
397
|
+
for (const pattern of report.repeatedPatterns) {
|
|
398
|
+
lines.push(`- ${pattern.label}|出现 ${pattern.count} 次`);
|
|
399
|
+
}
|
|
400
|
+
lines.push("");
|
|
401
|
+
}
|
|
402
|
+
|
|
403
|
+
lines.push("## 关键学习", "");
|
|
404
|
+
for (const item of report.keyLearnings) {
|
|
405
|
+
lines.push(`- ${item}`);
|
|
406
|
+
}
|
|
407
|
+
|
|
408
|
+
lines.push("", "## 下一轮建议", "");
|
|
409
|
+
for (const item of report.nextRoundAdvice) {
|
|
410
|
+
lines.push(`- ${item}`);
|
|
411
|
+
}
|
|
412
|
+
|
|
413
|
+
lines.push("", "## 决策分布", "");
|
|
414
|
+
for (const [decision, count] of Object.entries(report.decisionCounts)) {
|
|
415
|
+
lines.push(`- ${decision}:${count} 次`);
|
|
416
|
+
}
|
|
417
|
+
|
|
418
|
+
lines.push("", "## 轮次时间线", "");
|
|
419
|
+
for (const round of report.roundNarrative) {
|
|
420
|
+
lines.push(`### ${round.id}`);
|
|
421
|
+
lines.push("");
|
|
422
|
+
lines.push(`- 时间:\`${round.createdAt}\``);
|
|
423
|
+
lines.push(`- 决策:\`${round.decision}\``);
|
|
424
|
+
lines.push(`- 检查点类型:\`${round.checkpointType}\``);
|
|
425
|
+
lines.push(`- 原因:${round.decisionReason}`);
|
|
426
|
+
lines.push(`- 变化:${round.whatChanged}`);
|
|
427
|
+
if (round.currentPacket) {
|
|
428
|
+
lines.push(`- 当前包:${round.currentPacket.id}|${round.currentPacket.title}`);
|
|
429
|
+
}
|
|
430
|
+
if (round.note) {
|
|
431
|
+
lines.push(`- 备注:${round.note}`);
|
|
432
|
+
}
|
|
433
|
+
lines.push("");
|
|
434
|
+
}
|
|
435
|
+
|
|
436
|
+
lines.push("## 建议下一步命令", "", `- \`${report.suggestedNextCommand}\``);
|
|
437
|
+
|
|
438
|
+
if (report.followupCommands.length > 0) {
|
|
439
|
+
lines.push("", "## 后续命令", "");
|
|
440
|
+
for (const command of report.followupCommands) {
|
|
441
|
+
lines.push(`- \`${command}\``);
|
|
442
|
+
}
|
|
443
|
+
}
|
|
444
|
+
|
|
445
|
+
lines.push("", "## 可直接复制给 Agent 的 Retrospective Prompt", "", "```text", report.retrospectivePrompt, "```");
|
|
446
|
+
|
|
447
|
+
return `${lines.join("\n")}\n`;
|
|
448
|
+
}
|
|
449
|
+
|
|
450
|
+
export async function writeAgentRetrospectiveOutput(outputPath, content) {
|
|
451
|
+
return writeScanOutput(outputPath, content);
|
|
452
|
+
}
|
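The retrospective's pattern detection above comes down to counting decisions and blocker titles across rounds and flagging anything that appears at least twice. A minimal standalone sketch of that counting and sorting logic, using made-up sample entries rather than the package's real data:

```javascript
// Hypothetical decision-log entries, shaped like those consumed by createAgentRetrospective.
const entries = [
  { decision: "resolve-blockers", blockedItems: [{ title: "missing sitemap" }] },
  { decision: "continue-current-packet", blockedItems: [{ title: "missing sitemap" }] },
  { decision: "continue-current-packet", blockedItems: [] }
];

// Tally how often each decision appears, as in the reduce in createAgentRetrospective.
const decisionCounts = entries.reduce((accumulator, entry) => {
  accumulator[entry.decision] = (accumulator[entry.decision] || 0) + 1;
  return accumulator;
}, {});

// Tally blocker titles across all rounds.
const blockerMap = new Map();
for (const entry of entries) {
  for (const blocker of entry.blockedItems || []) {
    blockerMap.set(blocker.title, (blockerMap.get(blocker.title) || 0) + 1);
  }
}

// Sort by frequency, then title, mirroring the toSortedFrequencyList helper's ordering.
const recurringBlockers = [...blockerMap.entries()]
  .map(([title, count]) => ({ title, count }))
  .sort((a, b) => b.count - a.count || a.title.localeCompare(b.title));

console.log(decisionCounts["continue-current-packet"]); // 2
console.log(recurringBlockers[0]); // { title: "missing sitemap", count: 2 }
```

With these counts in hand, the status inference is just threshold checks: a blocker count of 2 or more drives the "repeated-blocker" status, and two or more `start-first-packet` decisions drive "restarting-too-often".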
package/src/agent-session.js
CHANGED

```diff
@@ -67,6 +67,12 @@ function inferSkillForCommand(commandName, flow) {
   if (commandName === "agent-decision-log") {
     return "geo-ai-search-optimization-agent-decision-log";
   }
+  if (commandName === "agent-retrospective") {
+    return "geo-ai-search-optimization-agent-retrospective";
+  }
+  if (commandName === "agent-playbook-pack") {
+    return "geo-ai-search-optimization-agent-playbook-pack";
+  }
   if (commandName === "skills" || commandName === "quick-start") {
     return "geo-ai-search-optimization-usage";
   }
@@ -141,6 +147,10 @@ function inferStepPurpose(commandName, flow) {
       return "Compress the current stage into a continue / blocked / closeout decision checkpoint.";
     case "agent-decision-log":
       return "Persist each round's checkpoint into an inheritable decision history.";
+    case "agent-retrospective":
+      return "Compress the multi-round decision history into retrospective conclusions, repeated patterns, and next-round advice.";
+    case "agent-playbook-pack":
+      return "Compress the retrospective, decision history, and handoff info into a single-entry execution pack for the next agent.";
     case "apply-plan":
       return "Advance the handoff result into concrete execution packets.";
     case "completion-report":
@@ -195,6 +205,10 @@ function inferExpectedArtifact(commandName) {
       return "agent stage checkpoint artifact";
     case "agent-decision-log":
       return "agent decision history artifact";
+    case "agent-retrospective":
+      return "agent multi-round retrospective artifact";
+    case "agent-playbook-pack":
+      return "agent single-entry playbook pack";
     case "apply-plan":
       return "execution packet";
     case "completion-report":
@@ -248,6 +262,12 @@ function buildStepInstructions(parsedCommand, flow) {
   if (parsedCommand.commandName === "agent-decision-log") {
     lines.push("This step preserves cross-round decision history so the next agent can pick up the last judgment directly.");
   }
+  if (parsedCommand.commandName === "agent-retrospective") {
+    lines.push("This step summarizes why earlier rounds stalled or advanced smoothly, and turns those patterns into next-round advice.");
+  }
+  if (parsedCommand.commandName === "agent-playbook-pack") {
+    lines.push("This step compresses the multi-round retrospective and handoff results into a single-entry execution pack so the next agent can start immediately.");
+  }
   if (parsedCommand.commandName === "agent-handoff" && flow.intent === "execute") {
     lines.push("If it is still advice-only, repository or local project context is still missing.");
   }
```
package/src/auto-flow.js
CHANGED

```diff
@@ -59,6 +59,12 @@ function inferTaskTextMode(text) {
   if (/(decision-log|decision log|决策历史|决策日志|为什么这样决定|历史决策)/i.test(normalized)) {
     return "execute";
   }
+  if (/(playbook-pack|playbook pack|playbook 包|agent playbook|单入口执行包|复盘执行包|接着干的入口)/i.test(normalized)) {
+    return "execute";
+  }
+  if (/(retrospective|retro|多轮复盘|执行复盘|为什么总卡住|经验总结|回顾前几轮)/i.test(normalized)) {
+    return "closeout";
+  }
   if (/(executor|先做哪一个|先做哪一包|single task|执行第一包|先执行一个任务)/i.test(normalized)) {
     return "execute";
   }
@@ -159,6 +165,12 @@ function resolveEffectiveIntent(intent, detected) {
   if (detected.artifactKind === "geo-completion-report") {
     return "closeout";
   }
+  if (detected.artifactKind === "geo-agent-retrospective") {
+    return "closeout";
+  }
+  if (detected.artifactKind === "geo-agent-playbook-pack") {
+    return detected.parsed?.playbookStatus === "closeout-ready" ? "closeout" : "execute";
+  }
   if (
     [
       "geo-share-pack",
@@ -403,6 +415,26 @@ function buildCommandChain(detected, intent) {
         `geo-ai-search-optimization agent-decision-log ${baseSource} --append-from ${source}`
       ];
     }
+    case "geo-agent-retrospective":
+      return [
+        `geo-ai-search-optimization completion-report ${source}`,
+        `geo-ai-search-optimization meeting-pack ${source}`,
+        `geo-ai-search-optimization publish-pack ${source}`
+      ];
+    case "geo-agent-playbook-pack": {
+      const baseSource = detected.parsed?.source || source;
+      return detected.parsed?.playbookStatus === "closeout-ready"
+        ? [
+            detected.parsed?.startCommand || `geo-ai-search-optimization completion-report ${baseSource}`,
+            `geo-ai-search-optimization meeting-pack ${baseSource}`,
+            `geo-ai-search-optimization publish-pack ${baseSource}`
+          ]
+        : [
+            detected.parsed?.startCommand || `geo-ai-search-optimization agent-executor ${baseSource}`,
+            `geo-ai-search-optimization agent-status-board ${baseSource}`,
+            `geo-ai-search-optimization agent-decision-log ${baseSource}`
+          ];
+    }
     case "geo-apply-plan":
       return [
         `geo-ai-search-optimization agent-executor ${source}`,
@@ -486,6 +518,10 @@ function pickSkillName(detected, intent) {
       return "geo-ai-search-optimization-agent-checkpoint";
     case "geo-agent-decision-log":
       return "geo-ai-search-optimization-agent-decision-log";
+    case "geo-agent-retrospective":
+      return "geo-ai-search-optimization-agent-retrospective";
+    case "geo-agent-playbook-pack":
+      return "geo-ai-search-optimization-agent-playbook-pack";
     case "geo-completion-report":
       return "geo-ai-search-optimization-completion-report";
     case "geo-handoff-bundle":
@@ -527,6 +563,7 @@ function buildSecondarySkillNames(primarySkill, intent, detected) {
       "geo-agent-status-board",
       "geo-agent-checkpoint",
       "geo-agent-decision-log",
+      "geo-agent-playbook-pack",
       "geo-apply-plan"
     ].includes(
       detected.artifactKind
@@ -537,6 +574,7 @@ function buildSecondarySkillNames(primarySkill, intent, detected) {
     names.add("geo-ai-search-optimization-agent-status-board");
     names.add("geo-ai-search-optimization-agent-checkpoint");
     names.add("geo-ai-search-optimization-agent-decision-log");
+    names.add("geo-ai-search-optimization-agent-playbook-pack");
     names.add("geo-ai-search-optimization-agent-executor");
     names.add("geo-ai-search-optimization-agent-runbook");
     names.add("geo-ai-search-optimization-agent-handoff");
@@ -544,6 +582,7 @@ function buildSecondarySkillNames(primarySkill, intent, detected) {
   }
   if (intent === "closeout") {
     names.add("geo-ai-search-optimization-completion-report");
+    names.add("geo-ai-search-optimization-agent-retrospective");
   }
 
   names.delete(primarySkill);
@@ -571,6 +610,8 @@ function buildStage(intent, detected) {
       "geo-agent-status-board",
       "geo-agent-checkpoint",
       "geo-agent-decision-log",
+      "geo-agent-playbook-pack",
+      "geo-agent-retrospective",
       "geo-apply-plan",
       "geo-handoff-bundle"
     ].includes(detected.artifactKind)
@@ -588,6 +629,8 @@ function buildStage(intent, detected) {
       "geo-agent-status-board",
       "geo-agent-checkpoint",
       "geo-agent-decision-log",
+      "geo-agent-playbook-pack",
+      "geo-agent-retrospective",
       "geo-apply-plan",
       "geo-handoff-bundle"
     ].includes(detected.artifactKind)
@@ -680,9 +723,15 @@ function buildNextAction(detected, intent, commands) {
     if (detected.artifactKind === "geo-agent-decision-log") {
       return `Run \`${commands[0]}\` first and continue from the most recent stage decision instead of re-judging the whole chain.`;
     }
+    if (detected.artifactKind === "geo-agent-playbook-pack") {
+      return `Run \`${commands[0]}\` first and continue straight from the playbook's start command; do not re-split the chain.`;
+    }
     return `Run \`${commands[0]}\` first to advance the current input into an agent-executable state.`;
   }
   if (intent === "closeout") {
+    if (detected.artifactKind === "geo-agent-retrospective") {
+      return `Run \`${commands[0]}\` first to turn the multi-round retrospective conclusions into a formal closeout and deliverables.`;
+    }
     return `Run \`${commands[0]}\` first to summarize this round's completion status and remaining risks.`;
   }
   if (intent === "guide") {
```
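The keyword routing added to `inferTaskTextMode` is plain regex alternation over a normalized task string, with branch order deciding ties. A self-contained sketch of the same routing idea, with the pattern lists abbreviated from the diff and a fallback value that is an assumption (the real function has many more branches):

```javascript
// Simplified re-implementation of the routing idea in inferTaskTextMode;
// only two of the diff's branches are kept, and "guide" is an assumed fallback.
function inferTaskTextMode(text) {
  const normalized = String(text || "").toLowerCase();
  // Playbook-pack requests route to "execute" and are checked before "retro",
  // matching the branch order in the diff.
  if (/(playbook-pack|playbook pack|agent playbook)/i.test(normalized)) {
    return "execute";
  }
  // Retrospective requests route to "closeout".
  if (/(retrospective|retro)/i.test(normalized)) {
    return "closeout";
  }
  return "guide"; // assumed fallback for this sketch
}

console.log(inferTaskTextMode("build the agent playbook pack")); // "execute"
console.log(inferTaskTextMode("run a retrospective over the last rounds")); // "closeout"
```

Ordering matters here: "agent playbook retro notes" would hit the playbook branch first, which is why the diff inserts the playbook check ahead of the retrospective check.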
package/src/cli.js
CHANGED

```diff
@@ -16,6 +16,12 @@ import {
 } from "./agent-progress-tracker.js";
 import { createAgentCheckpoint, renderAgentCheckpointMarkdown, writeAgentCheckpointOutput } from "./agent-checkpoint.js";
 import { createAgentDecisionLog, renderAgentDecisionLogMarkdown, writeAgentDecisionLogOutput } from "./agent-decision-log.js";
+import {
+  createAgentPlaybookPack,
+  renderAgentPlaybookPackMarkdown,
+  writeAgentPlaybookPackOutput
+} from "./agent-playbook-pack.js";
+import { createAgentRetrospective, renderAgentRetrospectiveMarkdown, writeAgentRetrospectiveOutput } from "./agent-retrospective.js";
 import { createAgentStatusBoard, renderAgentStatusBoardMarkdown, writeAgentStatusBoardOutput } from "./agent-status-board.js";
 import { createAgentRunbook, renderAgentRunbookMarkdown, writeAgentRunbookOutput } from "./agent-runbook.js";
 import { createAgentSession, renderAgentSessionMarkdown, writeAgentSessionOutput } from "./agent-session.js";
@@ -83,6 +89,8 @@ function printHelp() {
     " geo-ai-search-optimization agent-status-board <input> [--current <id>] [--completed <id,id>] [--blocked <reason,reason>] [--format <markdown|json>] [--out <file>]",
     " geo-ai-search-optimization agent-checkpoint <input> [--current <id>] [--completed <id,id>] [--blocked <reason,reason>] [--format <markdown|json>] [--out <file>]",
     " geo-ai-search-optimization agent-decision-log <input> [--append-from <file>] [--note <text>] [--current <id>] [--completed <id,id>] [--blocked <reason,reason>] [--format <markdown|json>] [--out <file>]",
+    " geo-ai-search-optimization agent-retrospective <input> [--format <markdown|json>] [--out <file>]",
+    " geo-ai-search-optimization agent-playbook-pack <input> [--task <id>] [--format <markdown|json>] [--out <file>]",
     " geo-ai-search-optimization skills [--json]",
     " geo-ai-search-optimization where",
     " geo-ai-search-optimization doctor [--json]",
@@ -395,6 +403,57 @@ async function handleAgentDecisionLog(args) {
   process.stdout.write(renderedOutput);
 }
 
+async function handleAgentRetrospective(args) {
+  const input = args.find((value) => !value.startsWith("-"));
+  if (!input) {
+    throw new Error("agent-retrospective requires an input: a project path, a site URL, or an exported artifact");
+  }
+
+  const format = getFlagValue(args, "--format") || (hasFlag(args, "--json") ? "json" : undefined);
+  const retrospective = await createAgentRetrospective(input, {
+    format
+  });
+  const outputJson = retrospective.format === "json";
+  const renderedOutput = outputJson
+    ? `${JSON.stringify(retrospective, null, 2)}\n`
+    : renderAgentRetrospectiveMarkdown(retrospective);
+
+  const outputPath = getFlagValue(args, "--out");
+  if (outputPath) {
+    const resolvedOutputPath = await writeAgentRetrospectiveOutput(outputPath, renderedOutput);
+    process.stdout.write(`Saved agent retrospective: ${resolvedOutputPath}\n`);
+    return;
+  }
+
+  process.stdout.write(renderedOutput);
+}
+
+async function handleAgentPlaybookPack(args) {
+  const input = args.find((value) => !value.startsWith("-"));
+  if (!input) {
+    throw new Error("agent-playbook-pack requires an input: a project path, a site URL, or an exported artifact");
+  }
+
+  const format = getFlagValue(args, "--format") || (hasFlag(args, "--json") ? "json" : undefined);
+  const pack = await createAgentPlaybookPack(input, {
+    format,
+    taskId: getFlagValue(args, "--task")
+  });
+  const outputJson = pack.format === "json";
+  const renderedOutput = outputJson
+    ? `${JSON.stringify(pack, null, 2)}\n`
+    : renderAgentPlaybookPackMarkdown(pack);
+
+  const outputPath = getFlagValue(args, "--out");
+  if (outputPath) {
+    const resolvedOutputPath = await writeAgentPlaybookPackOutput(outputPath, renderedOutput);
+    process.stdout.write(`Saved agent playbook pack: ${resolvedOutputPath}\n`);
+    return;
+  }
+
+  process.stdout.write(renderedOutput);
+}
+
 function handleWhere() {
   process.stdout.write(
     [
@@ -980,6 +1039,16 @@ export async function runCli(args = []) {
     return;
   }
 
+  if (command === "agent-retrospective") {
+    await handleAgentRetrospective(rest);
+    return;
+  }
+
+  if (command === "agent-playbook-pack") {
+    await handleAgentPlaybookPack(rest);
+    return;
+  }
+
   if (command === "skills") {
     await handleSkills(rest);
     return;
```
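Both new handlers lean on the CLI's existing `getFlagValue` and `hasFlag` helpers to pull `--format`, `--out`, and `--json` out of the argument list. Their implementations are not shown in this diff, so here is an assumed minimal sketch of how such helpers typically behave, just to make the handler logic concrete:

```javascript
// Assumed minimal flag helpers; the package's real implementations may differ.
function getFlagValue(args, flag) {
  const index = args.indexOf(flag);
  // The flag's value is the token that immediately follows it, if any.
  return index >= 0 ? args[index + 1] : undefined;
}

function hasFlag(args, flag) {
  return args.includes(flag);
}

// Mirrors how handleAgentRetrospective derives its output format.
const args = ["./reports/agent-decision-log.json", "--format", "json", "--out", "./reports/out.json"];
const format = getFlagValue(args, "--format") || (hasFlag(args, "--json") ? "json" : undefined);

console.log(format); // "json"
console.log(getFlagValue(args, "--out")); // "./reports/out.json"
```

Note that `--format json` takes precedence over the `--json` shorthand in the handlers, since the `||` only falls through when `--format` is absent.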
package/src/completion-report.js
CHANGED

```diff
@@ -131,6 +131,37 @@ async function resolveApplyPlan(input) {
     return parsed;
   }
 
+  if (parsed.kind === "geo-handoff-bundle" && parsed.applyPlan?.kind === "geo-apply-plan") {
+    return parsed.applyPlan;
+  }
+
+  if (parsed.kind === "geo-agent-progress-tracker" && parsed.applyPlan?.kind === "geo-apply-plan") {
+    return parsed.applyPlan;
+  }
+
+  if (parsed.kind === "geo-agent-status-board" && parsed.tracker?.applyPlan?.kind === "geo-apply-plan") {
+    return parsed.tracker.applyPlan;
+  }
+
+  if (parsed.kind === "geo-agent-checkpoint" && parsed.statusBoard?.tracker?.applyPlan?.kind === "geo-apply-plan") {
+    return parsed.statusBoard.tracker.applyPlan;
+  }
+
+  if (parsed.kind === "geo-agent-decision-log") {
+    return createApplyPlan(parsed.source || parsed.latestCheckpoint?.source || resolvedPath, { format: "json" });
+  }
+
+  if (
+    parsed.kind === "geo-agent-retrospective" &&
+    parsed.decisionLog?.latestCheckpoint?.statusBoard?.tracker?.applyPlan?.kind === "geo-apply-plan"
+  ) {
+    return parsed.decisionLog.latestCheckpoint.statusBoard.tracker.applyPlan;
+  }
+
+  if (parsed.kind === "geo-agent-playbook-pack" && parsed.handoffBundle?.applyPlan?.kind === "geo-apply-plan") {
+    return parsed.handoffBundle.applyPlan;
+  }
+
   return createApplyPlan(resolvedPath, { format: "json" });
 }
```
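The new `resolveApplyPlan` branches probe progressively deeper artifact nests with optional chaining, so a missing link anywhere in the chain yields `undefined` instead of throwing. A small standalone illustration of that mechanic, with made-up sample objects:

```javascript
// A deeply nested retrospective artifact, shaped like the one resolveApplyPlan probes.
const retrospective = {
  kind: "geo-agent-retrospective",
  decisionLog: {
    latestCheckpoint: {
      statusBoard: { tracker: { applyPlan: { kind: "geo-apply-plan", tasks: [] } } }
    }
  }
};

// One expression walks the whole path; every `?.` guards the next step.
const plan = retrospective.decisionLog?.latestCheckpoint?.statusBoard?.tracker?.applyPlan;
console.log(plan?.kind); // "geo-apply-plan"

// A shallower artifact simply lacks the nested path and yields undefined, not a TypeError.
const checkpoint = { kind: "geo-agent-checkpoint" };
console.log(checkpoint.statusBoard?.tracker?.applyPlan?.kind); // undefined
```

This is why each branch can safely compare `...applyPlan?.kind === "geo-apply-plan"` before dereferencing: the comparison is simply false when any intermediate object is missing.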
package/src/index.js
CHANGED

```diff
@@ -10,6 +10,8 @@ export { createApplyPlan, renderApplyPlanMarkdown, writeApplyPlanOutput } from "
 export { createAgentBatchExecutor, renderAgentBatchExecutorMarkdown, writeAgentBatchExecutorOutput } from "./agent-batch-executor.js";
 export { createAgentCheckpoint, renderAgentCheckpointMarkdown, writeAgentCheckpointOutput } from "./agent-checkpoint.js";
 export { createAgentDecisionLog, renderAgentDecisionLogMarkdown, writeAgentDecisionLogOutput } from "./agent-decision-log.js";
+export { createAgentPlaybookPack, renderAgentPlaybookPackMarkdown, writeAgentPlaybookPackOutput } from "./agent-playbook-pack.js";
+export { createAgentRetrospective, renderAgentRetrospectiveMarkdown, writeAgentRetrospectiveOutput } from "./agent-retrospective.js";
 export { createAgentHandoff, renderAgentHandoffMarkdown, writeAgentHandoffOutput } from "./agent-handoff.js";
 export { createAgentExecutor, renderAgentExecutorMarkdown, writeAgentExecutorOutput } from "./agent-executor.js";
 export {
```
package/src/skills.js
CHANGED

```diff
@@ -13,6 +13,8 @@ const SKILL_ORDER = [
   "geo-ai-search-optimization-agent-status-board",
   "geo-ai-search-optimization-agent-checkpoint",
   "geo-ai-search-optimization-agent-decision-log",
+  "geo-ai-search-optimization-agent-retrospective",
+  "geo-ai-search-optimization-agent-playbook-pack",
   "geo-ai-search-optimization-usage",
   "geo-ai-search-optimization-agent-handoff",
   "geo-ai-search-optimization-repair-loop",
@@ -35,6 +37,8 @@ const SKILL_CATEGORY = {
   "geo-ai-search-optimization-agent-status-board": "execution",
   "geo-ai-search-optimization-agent-checkpoint": "execution",
   "geo-ai-search-optimization-agent-decision-log": "execution",
+  "geo-ai-search-optimization-agent-retrospective": "execution",
+  "geo-ai-search-optimization-agent-playbook-pack": "execution",
   "geo-ai-search-optimization-usage": "guidance",
   "geo-ai-search-optimization-agent-handoff": "execution",
   "geo-ai-search-optimization-repair-loop": "execution",
@@ -176,6 +180,8 @@ export function renderBundledSkillsMarkdown(bundle) {
     "- To turn the current execution state straight into a board, go to agent-status-board.",
     "- To make a continue / blocked / closeout decision at the end of each round, go to agent-checkpoint.",
     "- To persist multi-round decisions into an inheritable history, go to agent-decision-log.",
+    "- To summarize why rounds advanced smoothly or kept stalling, go to agent-retrospective.",
+    "- To compress retrospective, decisions, and handoff into a single entry point for the next agent, go to agent-playbook-pack.",
     "- Then read the usage skill to know when to run which command.",
     "- To hand execution to an agent, go to the handoff / apply / completion chain.",
     "- To produce output for team distribution, go to the share / export / html / publish chain.",
```
|