geo-ai-search-optimization 1.2.15 → 1.2.16
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +29 -0
- package/package.json +1 -1
- package/resources/geo-ai-search-optimization/references/skill-bundle-map.md +10 -0
- package/resources/geo-ai-search-optimization-agent-retrospective/SKILL.md +23 -0
- package/resources/geo-ai-search-optimization-agent-retrospective/agents/openai.yaml +4 -0
- package/resources/geo-ai-search-optimization-usage/SKILL.md +19 -14
- package/src/agent-retrospective.js +452 -0
- package/src/agent-session.js +10 -0
- package/src/auto-flow.js +20 -0
- package/src/cli.js +32 -0
- package/src/index.js +1 -0
- package/src/skills.js +3 -0
package/README.md
CHANGED

@@ -226,6 +226,26 @@ geo-ai-search-optimization agent-decision-log ./your-site --append-from ./report
 - the latest suggested next-step command
 - a decision log prompt that can be copied straight to an agent
 
+## Agent Retrospective command
+
+If you want more than the multi-round decision history and would rather get a direct summary of why recent rounds went smoothly or kept stalling, use `agent-retrospective` directly:
+
+```bash
+geo-ai-search-optimization agent-retrospective ./your-site
+geo-ai-search-optimization agent-retrospective ./reports/agent-decision-log.json
+geo-ai-search-optimization agent-retrospective ./reports/agent-decision-log.json --format json --out ./reports/agent-retrospective.json
+```
+
+`agent-retrospective` outputs:
+
+- the current retrospective status
+- recurring blockers and repeated task packets
+- key learnings
+- next-round advice
+- the decision distribution
+- a multi-round timeline
+- a retrospective prompt that can be copied straight to an agent
+
 ## Quick Start
 
 If you are taking a GEO project from zero to one, follow this order.

@@ -628,6 +648,7 @@ geo-ai-search-optimization agent-progress-tracker ./your-site
 geo-ai-search-optimization agent-status-board ./your-site
 geo-ai-search-optimization agent-checkpoint ./your-site
 geo-ai-search-optimization agent-decision-log ./your-site
+geo-ai-search-optimization agent-retrospective ./your-site
 geo-ai-search-optimization skills
 geo-ai-search-optimization where
 geo-ai-search-optimization doctor

@@ -698,6 +719,13 @@ geo-ai-search-optimization help
 - outputs a do-now checklist, stop checklist, success checklist, verification commands, and a report-back template
 - adds the `geo-ai-search-optimization-agent-executor` skill
 
+## New in 1.2.16
+
+- adds the `agent-retrospective` command
+- compresses multi-round decision logs into a retrospective view that surfaces recurring blockers, repeated task packets, and recurring decision patterns
+- can generate a retrospective from `agent-decision-log`, `agent-checkpoint`, directory, or URL inputs
+- adds the `geo-ai-search-optimization-agent-retrospective` skill
+
 ## New in 1.2.15
 
 - adds the `agent-decision-log` command

@@ -928,6 +956,7 @@ The installed package now includes a bundled GEO skill pack, including:
 - `geo-ai-search-optimization-agent-status-board`
 - `geo-ai-search-optimization-agent-checkpoint`
 - `geo-ai-search-optimization-agent-decision-log`
+- `geo-ai-search-optimization-agent-retrospective`
 - `geo-ai-search-optimization-usage`
 - `geo-ai-search-optimization-agent-handoff`
 - `geo-ai-search-optimization-repair-loop`
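The `--format json` artifact described above can be post-processed by scripts. A minimal sketch, assuming the field names that `createAgentRetrospective` emits in this package's source; the `report` object below is a hand-written sample for illustration, not real command output:

```javascript
// Summarize a retrospective artifact produced with
// `agent-retrospective ... --format json`.
// The sample object is invented; field names follow the package source.
const report = {
  kind: "geo-agent-retrospective",
  retrospectiveStatus: "repeated-blocker",
  totalRounds: 3,
  recurringBlockers: [{ title: "missing sitemap", count: 2 }],
  nextRoundAdvice: ["clear the recurring blocker before editing pages"]
};

function summarize(report) {
  // Keep only blockers that recurred across rounds.
  const blockers = report.recurringBlockers
    .filter((item) => item.count >= 2)
    .map((item) => `${item.title} (x${item.count})`);
  return `${report.retrospectiveStatus} after ${report.totalRounds} rounds; recurring: ${blockers.join(", ") || "none"}`;
}

console.log(summarize(report));
```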
package/package.json
CHANGED

(one-line change: version bumped from 1.2.15 to 1.2.16)

package/resources/geo-ai-search-optimization/references/skill-bundle-map.md
CHANGED

@@ -112,6 +112,16 @@ Best for:
 - giving PM and the next agent a reusable decision history across rounds
 - appending a fresh checkpoint onto an existing GEO execution history
 
+### `geo-ai-search-optimization-agent-retrospective`
+
+Use this when the next agent or PM should understand the pattern across several rounds, not just inherit the last decision.
+
+Best for:
+
+- explaining why the same blocker or packet kept recurring
+- turning decision history into lessons learned and next-round advice
+- producing a cross-round retrospective before meetings, closeout, or final delivery
+
 ## Usage guide
 
 ### `geo-ai-search-optimization-usage`
package/resources/geo-ai-search-optimization-agent-retrospective/SKILL.md
ADDED

@@ -0,0 +1,23 @@
+---
+name: geo-ai-search-optimization-agent-retrospective
+description: Turn GEO decision history into a multi-round retrospective. Use when an agent or PM should understand why prior GEO rounds kept advancing, stalled on blockers, or reached closeout, and needs recurring patterns, lessons learned, and next-round guidance instead of a single checkpoint.
+---
+
+# GEO Agent Retrospective
+
+Use this skill when one checkpoint or one decision log is not enough, and the team needs to understand the pattern across rounds.
+
+`GEO = Generative Engine Optimization`
+
+## What it does
+
+- reads GEO decision history across multiple rounds
+- highlights recurring blockers, repeated packets, and repeated decisions
+- explains what changed from round to round
+- turns history into lessons learned and next-round advice
+
+## Best use
+
+- when PM asks why the team keeps getting blocked in the same place
+- when the next agent should inherit not only the latest decision, but the pattern behind it
+- when the team wants a reusable retrospective before closeout, meetings, or final delivery
package/resources/geo-ai-search-optimization-agent-retrospective/agents/openai.yaml
ADDED

@@ -0,0 +1,4 @@
+interface:
+  display_name: "GEO Agent Retrospective"
+  short_description: "Summarize cross-round GEO patterns and lessons"
+  default_prompt: "Use $geo-ai-search-optimization-agent-retrospective to turn this GEO decision history into a multi-round retrospective with recurring patterns, lessons learned, and next-round guidance."
package/resources/geo-ai-search-optimization-usage/SKILL.md
CHANGED

@@ -13,7 +13,7 @@ Treat this tool as a PM-friendly GEO workflow for websites.
 
 `GEO = Generative Engine Optimization`
 
-The package is best explained as twenty-two layers:
+The package is best explained as twenty-three layers:
 
 1. `auto-flow`: auto-select the next skill and command chain
 2. `agent-session`: build a runnable session for the next agent

@@ -24,19 +24,20 @@ The package is best explained as twenty-two layers:
 7. `agent-status-board`: turn the execution state into a board view for PM and agents
 8. `agent-checkpoint`: freeze the current round into a continue / unblock / closeout decision
 9. `agent-decision-log`: preserve why each round continued, paused, or closed out
-10. `
-11. `
-12. `
-13. `
-14. `
-15. `
-16. `
-17. `
-18. `
-19. `
-20. `
-21. `
-22. `
+10. `agent-retrospective`: explain multi-round patterns, lessons, and next-round advice
+11. `skills`: inspect the bundled skill package
+12. `onboard-url` / `onboard`: first look
+13. `scan`: raw signal check
+14. `audit` / `report`: diagnosis
+15. `fix-plan` / `owner-board`: execution planning
+16. `agent-handoff`: agent takeover package
+17. `apply-plan`: execution loop
+18. `completion-report`: closeout
+19. `handoff-bundle`: all-in-one package
+20. `share-pack`: audience-ready delivery
+21. `export-pack`: folder export
+22. `html-pack` / `publish-pack`: browsable and final delivery output
+23. `pm-brief` / `roadmap`: stakeholder alignment
 
 ## Recommended command order
 

@@ -52,6 +53,7 @@ npx geo-ai-search-optimization agent-progress-tracker https://example.com
 npx geo-ai-search-optimization agent-status-board https://example.com
 npx geo-ai-search-optimization agent-checkpoint https://example.com
 npx geo-ai-search-optimization agent-decision-log https://example.com
+npx geo-ai-search-optimization agent-retrospective https://example.com
 npx geo-ai-search-optimization onboard-url https://example.com
 npx geo-ai-search-optimization pm-brief https://example.com
 npx geo-ai-search-optimization roadmap https://example.com

@@ -69,6 +71,7 @@ npx geo-ai-search-optimization agent-progress-tracker ./your-site
 npx geo-ai-search-optimization agent-status-board ./your-site
 npx geo-ai-search-optimization agent-checkpoint ./your-site
 npx geo-ai-search-optimization agent-decision-log ./your-site
+npx geo-ai-search-optimization agent-retrospective ./your-site
 npx geo-ai-search-optimization scan ./your-site
 npx geo-ai-search-optimization audit ./your-site
 npx geo-ai-search-optimization fix-plan ./your-site

@@ -95,6 +98,7 @@ npx geo-ai-search-optimization roadmap ./your-site
 - `agent-status-board`: present the current execution state as a board with done, in-progress, blocked, next, and queued columns
 - `agent-checkpoint`: convert the current round into a checkpoint decision for continue, unblock, or closeout
 - `agent-decision-log`: preserve multiple rounds of checkpoint history so the next agent can inherit the reasoning
+- `agent-retrospective`: explain why the last few rounds advanced or stalled, and turn that into lessons and next-round guidance
 - `onboard-url`: first-time website check from a live URL
 - `onboard`: interactive first-time onboarding
 - `skills`: list the bundled skills and decide which skill or command chain fits the task

@@ -128,6 +132,7 @@ When explaining the tool to a user:
 - if the user wants the next agent to present execution state as a board for PM and agent coordination, move them to `agent-status-board`
 - if the user wants a per-round decision artifact that says continue, unblock, or close out, move them to `agent-checkpoint`
 - if the user wants cross-round decision memory and not just one checkpoint, move them to `agent-decision-log`
+- if the user wants to understand why multiple rounds kept advancing or getting stuck, move them to `agent-retrospective`
 - explain the result in PM language, not implementation jargon
 - if the user sounds new, start with `onboard-url` or `quick-start`
 - if the user wants another agent to take over, move them to `agent-handoff`
package/src/agent-retrospective.js
ADDED

@@ -0,0 +1,452 @@
+import fs from "node:fs/promises";
+import path from "node:path";
+import { createAgentDecisionLog } from "./agent-decision-log.js";
+import { writeScanOutput } from "./scan.js";
+
+const VALID_FORMATS = new Set(["markdown", "json"]);
+
+function normalizeFormat(format) {
+  const resolved = (format || "markdown").toLowerCase();
+  if (!VALID_FORMATS.has(resolved)) {
+    throw new Error(`不支持的 agent-retrospective 格式:${format}。可选值:${Array.from(VALID_FORMATS).join(", ")}`);
+  }
+  return resolved;
+}
+
+async function pathExists(targetPath) {
+  try {
+    await fs.access(targetPath);
+    return true;
+  } catch {
+    return false;
+  }
+}
+
+function cloneForFormat(record, format) {
+  return {
+    ...record,
+    format
+  };
+}
+
+function incrementMapCount(map, key) {
+  if (!key) {
+    return;
+  }
+  map.set(key, (map.get(key) || 0) + 1);
+}
+
+function toSortedFrequencyList(map, formatter) {
+  return [...map.entries()]
+    .sort((left, right) => right[1] - left[1] || String(left[0]).localeCompare(String(right[0])))
+    .map(([key, count]) => formatter(key, count));
+}
+
+async function resolveDecisionLog(input, options = {}) {
+  const resolvedInput = path.resolve(input);
+  if (await pathExists(resolvedInput)) {
+    try {
+      const raw = await fs.readFile(resolvedInput, "utf8");
+      const parsed = JSON.parse(raw);
+      if (parsed?.kind === "geo-agent-retrospective" && !options.forceRefresh) {
+        return {
+          retrospective: cloneForFormat(parsed, normalizeFormat(options.format)),
+          decisionLog: parsed.decisionLog || null
+        };
+      }
+      if (parsed?.kind === "geo-agent-decision-log") {
+        return {
+          retrospective: null,
+          decisionLog: parsed
+        };
+      }
+    } catch {
+      // Fall through to log generation.
+    }
+  }
+
+  return {
+    retrospective: null,
+    decisionLog: await createAgentDecisionLog(input, { format: "json" })
+  };
+}
+
+function analyzeProgressTrend(entries) {
+  if (entries.length === 0) {
+    return {
+      progressDelta: 0,
+      firstProgress: 0,
+      latestProgress: 0
+    };
+  }
+
+  return {
+    progressDelta: (entries.at(-1)?.progressPercent || 0) - (entries[0]?.progressPercent || 0),
+    firstProgress: entries[0]?.progressPercent || 0,
+    latestProgress: entries.at(-1)?.progressPercent || 0
+  };
+}
+
+function inferRetrospectiveStatus(log, blockerFrequency, decisionCounts, progressTrend) {
+  if (log.latestDecision === "move-to-closeout") {
+    return "ready-for-closeout";
+  }
+  if (
+    log.latestDecision === "resolve-blockers" &&
+    blockerFrequency.length > 0 &&
+    blockerFrequency[0].count >= 2
+  ) {
+    return "repeated-blocker";
+  }
+  if (log.latestDecision === "resolve-blockers") {
+    return "blocked-this-round";
+  }
+  if ((decisionCounts["start-first-packet"] || 0) >= 2) {
+    return "restarting-too-often";
+  }
+  if (progressTrend.progressDelta > 0) {
+    return "advancing";
+  }
+  if (log.totalEntries <= 1) {
+    return "early-cycle";
+  }
+  return "stable-but-flat";
+}
+
+function buildKeyLearnings(log, blockerFrequency, packetFrequency, decisionCounts, progressTrend) {
+  const learnings = [];
+
+  if (blockerFrequency[0]?.count >= 2) {
+    learnings.push(`阻塞「${blockerFrequency[0].title}」重复出现 ${blockerFrequency[0].count} 次,说明进入执行前的上下文准备还不够稳定。`);
+  }
+  if ((decisionCounts["start-first-packet"] || 0) >= 2) {
+    learnings.push("多轮都回到 start-first-packet,说明执行连续性不足,团队还在反复重启。");
+  }
+  if (packetFrequency[0]?.count >= 2) {
+    learnings.push(`当前包 ${packetFrequency[0].id} 在多轮里重复出现,说明这一包是整条链最容易卡住的地方。`);
+  }
+  if (progressTrend.progressDelta > 0) {
+    learnings.push(`整体进度从 ${progressTrend.firstProgress}% 提升到 ${progressTrend.latestProgress}%,说明执行链并非停滞,而是在缓慢推进。`);
+  }
+  if (log.latestDecision === "move-to-closeout") {
+    learnings.push("最新决策已经进入 closeout,下一步重点不是再开新包,而是整理复盘与交付。");
+  }
+  if (log.latestDecision === "resolve-blockers" && blockerFrequency[0]?.count === 1) {
+    learnings.push("当前阻塞是本轮新出现的问题,适合先快速排除,再继续当前包。");
+  }
+
+  if (learnings.length === 0) {
+    learnings.push("当前决策链整体稳定,但还没有足够多的轮次来识别更强的模式。");
+  }
+
+  return learnings;
+}
+
+function buildRepeatedPatterns(blockerFrequency, packetFrequency, decisionCounts) {
+  const patterns = [];
+
+  if (blockerFrequency[0] && blockerFrequency[0].count >= 2) {
+    patterns.push({
+      type: "blocker",
+      label: `重复阻塞:${blockerFrequency[0].title}`,
+      count: blockerFrequency[0].count
+    });
+  }
+  if (packetFrequency[0] && packetFrequency[0].count >= 2) {
+    patterns.push({
+      type: "packet",
+      label: `反复卡在同一包:${packetFrequency[0].id}|${packetFrequency[0].title}`,
+      count: packetFrequency[0].count
+    });
+  }
+  for (const [decision, count] of Object.entries(decisionCounts)) {
+    if (count >= 2) {
+      patterns.push({
+        type: "decision",
+        label: `重复决策:${decision}`,
+        count
+      });
+    }
+  }
+
+  return patterns.sort((left, right) => right.count - left.count || left.label.localeCompare(right.label));
+}
+
+function buildNextRoundAdvice(log, status, blockerFrequency) {
+  const advice = [];
+
+  if (status === "repeated-blocker") {
+    advice.push("下一轮不要先做页面修改,先把重复出现的阻塞前置清掉。");
+    advice.push("把权限、模板、数据源或生成链问题单独列成 preflight checklist。");
+  } else if (status === "blocked-this-round") {
+    advice.push("这一轮先解除当前阻塞,再回到当前包继续推进。");
+  } else if (status === "ready-for-closeout") {
+    advice.push("下一轮不需要再开新执行包,应直接整理 closeout、meeting pack 和交付包。");
+  } else if (status === "restarting-too-often") {
+    advice.push("下一轮要保持执行连续性,不要再从第一包重新开始。");
+    advice.push("优先沿着最近一次 current packet 继续,而不是重新生成新的执行队列。");
+  } else if (status === "advancing") {
+    advice.push("执行链在推进,下一轮重点是保留当前节奏并继续当前包。");
+  } else {
+    advice.push("下一轮先延续最近一次决策,不要同时改变执行顺序和目标。");
+  }
+
+  if (blockerFrequency[0]?.count >= 2) {
+    advice.push(`把「${blockerFrequency[0].title}」变成固定的进入执行前检查项,避免下一轮再次卡住。`);
+  }
+
+  if (log.currentPacket) {
+    advice.push(`当前最优先仍是 ${log.currentPacket.id}|${log.currentPacket.title}`);
+  }
+
+  return advice;
+}
+
+function buildSuggestedNextCommand(log, status) {
+  if (status === "ready-for-closeout") {
+    return `geo-ai-search-optimization completion-report ${log.source}`;
+  }
+  if (status === "repeated-blocker" || status === "blocked-this-round") {
+    return log.suggestedNextCommand || `geo-ai-search-optimization agent-runbook ${log.source}`;
+  }
+  if (log.latestDecision === "start-first-packet" || log.latestDecision === "continue-current-packet") {
+    return log.suggestedNextCommand || `geo-ai-search-optimization agent-executor ${log.source}`;
+  }
+  return `geo-ai-search-optimization agent-decision-log ${log.source}`;
+}
+
+function buildFollowupCommands(log, status) {
+  if (status === "ready-for-closeout") {
+    return [
+      `geo-ai-search-optimization completion-report ${log.source}`,
+      `geo-ai-search-optimization meeting-pack ${log.source}`,
+      `geo-ai-search-optimization publish-pack ${log.source}`
+    ];
+  }
+
+  return [
+    buildSuggestedNextCommand(log, status),
+    `geo-ai-search-optimization agent-status-board ${log.source}`,
+    `geo-ai-search-optimization agent-decision-log ${log.source}`
+  ].filter(Boolean);
+}
+
+function buildRoundNarrative(entries) {
+  return entries.map((entry, index) => {
+    const previous = entries[index - 1];
+    let whatChanged = "这是目前的起始轮次。";
+
+    if (previous) {
+      if (previous.decision !== entry.decision) {
+        whatChanged = `决策从 ${previous.decision} 切换到 ${entry.decision}。`;
+      } else if ((previous.progressPercent || 0) !== (entry.progressPercent || 0)) {
+        whatChanged = `进度从 ${previous.progressPercent || 0}% 变化到 ${entry.progressPercent || 0}%。`;
+      } else if ((previous.blockedItems?.length || 0) !== (entry.blockedItems?.length || 0)) {
+        whatChanged = `阻塞数从 ${previous.blockedItems?.length || 0} 变化到 ${entry.blockedItems?.length || 0}。`;
+      } else {
+        whatChanged = "当前轮次延续了上一轮的总体判断。";
+      }
+    }
+
+    return {
+      id: entry.id,
+      createdAt: entry.createdAt,
+      decision: entry.decision,
+      checkpointType: entry.checkpointType,
+      currentPacket: entry.currentPacket,
+      note: entry.note,
+      whatChanged,
+      decisionReason: entry.decisionReason
+    };
+  });
+}
+
+function buildRetrospectivePrompt(report) {
+  const lines = [
+    "你现在进入 GEO 多轮复盘模式。",
+    `当前输入:${report.source}`,
+    `轮次数:${report.totalRounds}`,
+    `当前复盘状态:${report.retrospectiveStatus}`,
+    `最新决策:${report.latestDecision}`,
+    `复盘结论:${report.retrospectiveSummary}`
+  ];
+
+  if (report.recurringBlockers.length > 0) {
+    lines.push(`重复阻塞:${report.recurringBlockers.map((item) => `${item.title} (${item.count})`).join(";")}`);
+  }
+  if (report.currentPacket) {
+    lines.push(`当前包:${report.currentPacket.id}|${report.currentPacket.title}`);
+  }
+
+  lines.push("请先解释这几轮里发生了什么变化,再给出下一轮最该避免的错误和最该保留的节奏。");
+  lines.push("最后输出建议下一步命令,以及对 PM 和下一位 agent 的简短建议。");
+  return lines.join("\n");
+}
+
+export async function createAgentRetrospective(input, options = {}) {
+  const format = normalizeFormat(options.format);
+  const { retrospective, decisionLog } = await resolveDecisionLog(input, { ...options, format });
+
+  if (retrospective) {
+    return retrospective;
+  }
+
+  if (!decisionLog) {
+    throw new Error("无法从当前输入生成 decision log,因此不能创建 retrospective。");
+  }
+
+  const entries = decisionLog.entries || [];
+  const decisionCounts = entries.reduce((accumulator, entry) => {
+    accumulator[entry.decision] = (accumulator[entry.decision] || 0) + 1;
+    return accumulator;
+  }, {});
+
+  const blockerMap = new Map();
+  const packetMap = new Map();
+
+  for (const entry of entries) {
+    for (const blocker of entry.blockedItems || []) {
+      incrementMapCount(blockerMap, blocker.title);
+    }
+    if (entry.currentPacket?.id) {
+      incrementMapCount(packetMap, `${entry.currentPacket.id}|||${entry.currentPacket.title}`);
+    }
+  }
+
+  const recurringBlockers = toSortedFrequencyList(blockerMap, (title, count) => ({
+    title,
+    count
+  }));
+  const repeatedPackets = toSortedFrequencyList(packetMap, (packedValue, count) => {
+    const [id, title] = String(packedValue).split("|||");
+    return { id, title, count };
+  });
+  const progressTrend = analyzeProgressTrend(entries);
+  const retrospectiveStatus = inferRetrospectiveStatus(decisionLog, recurringBlockers, decisionCounts, progressTrend);
+  const keyLearnings = buildKeyLearnings(decisionLog, recurringBlockers, repeatedPackets, decisionCounts, progressTrend);
+  const repeatedPatterns = buildRepeatedPatterns(recurringBlockers, repeatedPackets, decisionCounts);
+  const nextRoundAdvice = buildNextRoundAdvice(decisionLog, retrospectiveStatus, recurringBlockers);
+  const followupCommands = buildFollowupCommands(decisionLog, retrospectiveStatus);
+
+  const report = {
+    kind: "geo-agent-retrospective",
+    input,
+    source: decisionLog.source,
+    sourceType: decisionLog.sourceType,
+    artifactKind: decisionLog.kind,
+    format,
+    totalRounds: decisionLog.totalEntries,
+    retrospectiveStatus,
+    latestDecision: decisionLog.latestDecision,
+    latestDecisionReason: decisionLog.latestDecisionReason,
+    latestCheckpointType: decisionLog.latestCheckpointType,
+    currentPacket: decisionLog.currentPacket,
+    nextPacket: decisionLog.nextPacket,
+    progressTrend,
+    decisionCounts,
+    recurringBlockers,
+    repeatedPackets,
+    repeatedPatterns,
+    retrospectiveSummary: `${decisionLog.logSummary} 当前更适合:${nextRoundAdvice[0]}`,
+    keyLearnings,
+    nextRoundAdvice,
+    suggestedNextCommand: buildSuggestedNextCommand(decisionLog, retrospectiveStatus),
+    followupCommands,
+    roundNarrative: buildRoundNarrative(entries),
+    decisionLog,
+    retrospectivePrompt: ""
+  };
+
+  report.retrospectivePrompt = buildRetrospectivePrompt(report);
+  return report;
+}
+
+export function renderAgentRetrospectiveMarkdown(report) {
+  const lines = [
+    "# GEO Agent Retrospective",
+    "",
+    `- 输入:\`${report.source}\``,
+    `- 来源类型:\`${report.sourceType}\``,
+    `- 决策工件:\`${report.artifactKind}\``,
+    `- 轮次数:\`${report.totalRounds}\``,
+    `- 当前复盘状态:\`${report.retrospectiveStatus}\``,
+    `- 最新决策:\`${report.latestDecision}\``,
+    `- 最新检查点类型:\`${report.latestCheckpointType}\``,
+    `- 复盘总结:${report.retrospectiveSummary}`,
+    ""
+  ];
+
+  if (report.currentPacket) {
+    lines.push("## 当前包", "", `- ${report.currentPacket.id}|${report.currentPacket.title}`);
+    lines.push(`- Owner:${report.currentPacket.owner}`);
+    lines.push(`- 优先级:${report.currentPacket.priority}`);
+    lines.push("");
+  }
+
+  if (report.nextPacket) {
+    lines.push("## 下一包", "", `- ${report.nextPacket.id}|${report.nextPacket.title}`);
+    lines.push(`- Owner:${report.nextPacket.owner}`);
+    lines.push(`- 优先级:${report.nextPacket.priority}`);
+    lines.push("");
+  }
+
+  lines.push("## 重复模式", "");
+  if (report.repeatedPatterns.length === 0) {
+    lines.push("- 当前还没有明显重复模式。", "");
+  } else {
+    for (const pattern of report.repeatedPatterns) {
+      lines.push(`- ${pattern.label}|出现 ${pattern.count} 次`);
+    }
+    lines.push("");
+  }
+
+  lines.push("## 关键学习", "");
+  for (const item of report.keyLearnings) {
+    lines.push(`- ${item}`);
+  }
+
+  lines.push("", "## 下一轮建议", "");
+  for (const item of report.nextRoundAdvice) {
+    lines.push(`- ${item}`);
+  }
+
+  lines.push("", "## 决策分布", "");
+  for (const [decision, count] of Object.entries(report.decisionCounts)) {
+    lines.push(`- ${decision}:${count} 次`);
+  }
+
+  lines.push("", "## 轮次时间线", "");
+  for (const round of report.roundNarrative) {
+    lines.push(`### ${round.id}`);
+    lines.push("");
+    lines.push(`- 时间:\`${round.createdAt}\``);
+    lines.push(`- 决策:\`${round.decision}\``);
+    lines.push(`- 检查点类型:\`${round.checkpointType}\``);
+    lines.push(`- 原因:${round.decisionReason}`);
+    lines.push(`- 变化:${round.whatChanged}`);
+    if (round.currentPacket) {
+      lines.push(`- 当前包:${round.currentPacket.id}|${round.currentPacket.title}`);
+    }
+    if (round.note) {
+      lines.push(`- 备注:${round.note}`);
+    }
+    lines.push("");
+  }
+
+  lines.push("## 建议下一步命令", "", `- \`${report.suggestedNextCommand}\``);
+
+  if (report.followupCommands.length > 0) {
+    lines.push("", "## 后续命令", "");
+    for (const command of report.followupCommands) {
+      lines.push(`- \`${command}\``);
+    }
+  }
+
+  lines.push("", "## 可直接复制给 Agent 的 Retrospective Prompt", "", "```text", report.retrospectivePrompt, "```");
+
+  return `${lines.join("\n")}\n`;
+}
+
+export async function writeAgentRetrospectiveOutput(outputPath, content) {
+  return writeScanOutput(outputPath, content);
+}
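The recurring-blocker analysis above reduces to Map-based counting plus a count-descending sort with an alphabetical tiebreak. A standalone sketch of that logic (the sample rounds below are invented for illustration):

```javascript
// Sketch of the frequency logic from agent-retrospective.js:
// count blocker titles across rounds, then sort by count descending,
// breaking ties alphabetically so output is deterministic.
function tally(entries) {
  const map = new Map();
  for (const entry of entries) {
    for (const blocker of entry.blockedItems || []) {
      if (blocker.title) map.set(blocker.title, (map.get(blocker.title) || 0) + 1);
    }
  }
  return [...map.entries()]
    .sort((a, b) => b[1] - a[1] || String(a[0]).localeCompare(String(b[0])))
    .map(([title, count]) => ({ title, count }));
}

// Three hypothetical checkpoint rounds; one blocker recurs twice.
const rounds = [
  { blockedItems: [{ title: "no CMS access" }] },
  { blockedItems: [{ title: "no CMS access" }, { title: "template frozen" }] },
  { blockedItems: [] }
];

console.log(tally(rounds));
// → [ { title: 'no CMS access', count: 2 }, { title: 'template frozen', count: 1 } ]
```

With this ordering, `tally(rounds)[0]` is exactly what `inferRetrospectiveStatus` inspects to decide between a one-off blocker and a `repeated-blocker` status.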
package/src/agent-session.js
CHANGED

@@ -67,6 +67,9 @@ function inferSkillForCommand(commandName, flow) {
   if (commandName === "agent-decision-log") {
     return "geo-ai-search-optimization-agent-decision-log";
   }
+  if (commandName === "agent-retrospective") {
+    return "geo-ai-search-optimization-agent-retrospective";
+  }
   if (commandName === "skills" || commandName === "quick-start") {
     return "geo-ai-search-optimization-usage";
   }

@@ -141,6 +144,8 @@ function inferStepPurpose(commandName, flow) {
       return "把当前阶段压成继续 / 阻塞 / 收尾的决策检查点。";
     case "agent-decision-log":
       return "把每一轮 checkpoint 沉淀成可继承的决策历史。";
+    case "agent-retrospective":
+      return "把多轮决策历史压成复盘结论、重复模式和下一轮建议。";
     case "apply-plan":
       return "把交接结果推进到具体执行包。";
     case "completion-report":

@@ -195,6 +200,8 @@ function inferExpectedArtifact(commandName) {
       return "agent 阶段检查点工件";
     case "agent-decision-log":
       return "agent 决策历史工件";
+    case "agent-retrospective":
+      return "agent 多轮复盘工件";
     case "apply-plan":
       return "执行包";
     case "completion-report":

@@ -248,6 +255,9 @@ function buildStepInstructions(parsedCommand, flow) {
   if (parsedCommand.commandName === "agent-decision-log") {
     lines.push("这一步用于保留跨轮决策历史,方便下一位 agent 直接承接上一次判断。");
   }
+  if (parsedCommand.commandName === "agent-retrospective") {
+    lines.push("这一步用于总结为什么前几轮卡住或推进顺利,并把这些模式转成下一轮建议。");
+  }
   if (parsedCommand.commandName === "agent-handoff" && flow.intent === "execute") {
     lines.push("如果还是 advice-only,说明还缺仓库或本地项目上下文。");
   }
package/src/auto-flow.js
CHANGED
```diff
@@ -59,6 +59,9 @@ function inferTaskTextMode(text) {
   if (/(decision-log|decision log|决策历史|决策日志|为什么这样决定|历史决策)/i.test(normalized)) {
     return "execute";
   }
+  if (/(retrospective|retro|多轮复盘|执行复盘|为什么总卡住|经验总结|回顾前几轮)/i.test(normalized)) {
+    return "closeout";
+  }
   if (/(executor|先做哪一个|先做哪一包|single task|执行第一包|先执行一个任务)/i.test(normalized)) {
     return "execute";
   }
@@ -159,6 +162,9 @@ function resolveEffectiveIntent(intent, detected) {
   if (detected.artifactKind === "geo-completion-report") {
     return "closeout";
   }
+  if (detected.artifactKind === "geo-agent-retrospective") {
+    return "closeout";
+  }
   if (
     [
       "geo-share-pack",
@@ -403,6 +409,12 @@ function buildCommandChain(detected, intent) {
         `geo-ai-search-optimization agent-decision-log ${baseSource} --append-from ${source}`
       ];
     }
+    case "geo-agent-retrospective":
+      return [
+        `geo-ai-search-optimization completion-report ${source}`,
+        `geo-ai-search-optimization meeting-pack ${source}`,
+        `geo-ai-search-optimization publish-pack ${source}`
+      ];
     case "geo-apply-plan":
       return [
         `geo-ai-search-optimization agent-executor ${source}`,
@@ -486,6 +498,8 @@ function pickSkillName(detected, intent) {
       return "geo-ai-search-optimization-agent-checkpoint";
     case "geo-agent-decision-log":
       return "geo-ai-search-optimization-agent-decision-log";
+    case "geo-agent-retrospective":
+      return "geo-ai-search-optimization-agent-retrospective";
     case "geo-completion-report":
       return "geo-ai-search-optimization-completion-report";
     case "geo-handoff-bundle":
@@ -544,6 +558,7 @@ function buildSecondarySkillNames(primarySkill, intent, detected) {
   }
   if (intent === "closeout") {
     names.add("geo-ai-search-optimization-completion-report");
+    names.add("geo-ai-search-optimization-agent-retrospective");
   }

   names.delete(primarySkill);
@@ -571,6 +586,7 @@ function buildStage(intent, detected) {
       "geo-agent-status-board",
       "geo-agent-checkpoint",
       "geo-agent-decision-log",
+      "geo-agent-retrospective",
       "geo-apply-plan",
       "geo-handoff-bundle"
     ].includes(detected.artifactKind)
@@ -588,6 +604,7 @@ function buildStage(intent, detected) {
       "geo-agent-status-board",
       "geo-agent-checkpoint",
       "geo-agent-decision-log",
+      "geo-agent-retrospective",
       "geo-apply-plan",
       "geo-handoff-bundle"
     ].includes(detected.artifactKind)
@@ -683,6 +700,9 @@ function buildNextAction(detected, intent, commands) {
     return `先运行 \`${commands[0]}\`,把当前输入推进到 agent 可执行状态。`;
   }
   if (intent === "closeout") {
+    if (detected.artifactKind === "geo-agent-retrospective") {
+      return `先运行 \`${commands[0]}\`,把多轮复盘结论整理成正式 closeout 与交付结果。`;
+    }
     return `先运行 \`${commands[0]}\`,整理本轮完成情况和剩余风险。`;
   }
   if (intent === "guide") {
```
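The `inferTaskTextMode` hunk above is a keyword-based classifier: free-text task descriptions are matched against ordered regexes, so decision-log phrases win over retrospective phrases. A self-contained sketch of that logic, using the two regexes from the diff verbatim — the normalization step and the `"guide"` fallback are assumptions for illustration:

```javascript
// Sketch of the ordered keyword matching in inferTaskTextMode.
// The two regexes are copied from the diff; lowercasing and the
// "guide" default are illustrative assumptions.
function inferTaskTextMode(text) {
  const normalized = String(text).toLowerCase();
  if (/(decision-log|decision log|决策历史|决策日志|为什么这样决定|历史决策)/i.test(normalized)) {
    return "execute";
  }
  if (/(retrospective|retro|多轮复盘|执行复盘|为什么总卡住|经验总结|回顾前几轮)/i.test(normalized)) {
    return "closeout";
  }
  return "guide"; // illustrative default
}
```

Because the checks run in order, a phrase containing both 决策历史 and 复盘 still resolves to `execute`; the retrospective branch only fires when no earlier pattern matched.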
package/src/cli.js
CHANGED
```diff
@@ -16,6 +16,7 @@ import {
 } from "./agent-progress-tracker.js";
 import { createAgentCheckpoint, renderAgentCheckpointMarkdown, writeAgentCheckpointOutput } from "./agent-checkpoint.js";
 import { createAgentDecisionLog, renderAgentDecisionLogMarkdown, writeAgentDecisionLogOutput } from "./agent-decision-log.js";
+import { createAgentRetrospective, renderAgentRetrospectiveMarkdown, writeAgentRetrospectiveOutput } from "./agent-retrospective.js";
 import { createAgentStatusBoard, renderAgentStatusBoardMarkdown, writeAgentStatusBoardOutput } from "./agent-status-board.js";
 import { createAgentRunbook, renderAgentRunbookMarkdown, writeAgentRunbookOutput } from "./agent-runbook.js";
 import { createAgentSession, renderAgentSessionMarkdown, writeAgentSessionOutput } from "./agent-session.js";
@@ -83,6 +84,7 @@ function printHelp() {
   " geo-ai-search-optimization agent-status-board <input> [--current <id>] [--completed <id,id>] [--blocked <reason,reason>] [--format <markdown|json>] [--out <file>]",
   " geo-ai-search-optimization agent-checkpoint <input> [--current <id>] [--completed <id,id>] [--blocked <reason,reason>] [--format <markdown|json>] [--out <file>]",
   " geo-ai-search-optimization agent-decision-log <input> [--append-from <file>] [--note <text>] [--current <id>] [--completed <id,id>] [--blocked <reason,reason>] [--format <markdown|json>] [--out <file>]",
+  " geo-ai-search-optimization agent-retrospective <input> [--format <markdown|json>] [--out <file>]",
   " geo-ai-search-optimization skills [--json]",
   " geo-ai-search-optimization where",
   " geo-ai-search-optimization doctor [--json]",
@@ -395,6 +397,31 @@ async function handleAgentDecisionLog(args) {
   process.stdout.write(renderedOutput);
 }

+async function handleAgentRetrospective(args) {
+  const input = args.find((value) => !value.startsWith("-"));
+  if (!input) {
+    throw new Error("agent-retrospective 需要一个输入值,可以是项目路径、网站网址或已导出的工件");
+  }
+
+  const format = getFlagValue(args, "--format") || (hasFlag(args, "--json") ? "json" : undefined);
+  const retrospective = await createAgentRetrospective(input, {
+    format
+  });
+  const outputJson = retrospective.format === "json";
+  const renderedOutput = outputJson
+    ? `${JSON.stringify(retrospective, null, 2)}\n`
+    : renderAgentRetrospectiveMarkdown(retrospective);
+
+  const outputPath = getFlagValue(args, "--out");
+  if (outputPath) {
+    const resolvedOutputPath = await writeAgentRetrospectiveOutput(outputPath, renderedOutput);
+    process.stdout.write(`已保存 agent retrospective:${resolvedOutputPath}\n`);
+    return;
+  }
+
+  process.stdout.write(renderedOutput);
+}
+
 function handleWhere() {
   process.stdout.write(
     [
@@ -980,6 +1007,11 @@ export async function runCli(args = []) {
     return;
   }

+  if (command === "agent-retrospective") {
+    await handleAgentRetrospective(rest);
+    return;
+  }
+
   if (command === "skills") {
     await handleSkills(rest);
     return;
```
package/src/index.js
CHANGED
```diff
@@ -10,6 +10,7 @@ export { createApplyPlan, renderApplyPlanMarkdown, writeApplyPlanOutput } from "
 export { createAgentBatchExecutor, renderAgentBatchExecutorMarkdown, writeAgentBatchExecutorOutput } from "./agent-batch-executor.js";
 export { createAgentCheckpoint, renderAgentCheckpointMarkdown, writeAgentCheckpointOutput } from "./agent-checkpoint.js";
 export { createAgentDecisionLog, renderAgentDecisionLogMarkdown, writeAgentDecisionLogOutput } from "./agent-decision-log.js";
+export { createAgentRetrospective, renderAgentRetrospectiveMarkdown, writeAgentRetrospectiveOutput } from "./agent-retrospective.js";
 export { createAgentHandoff, renderAgentHandoffMarkdown, writeAgentHandoffOutput } from "./agent-handoff.js";
 export { createAgentExecutor, renderAgentExecutorMarkdown, writeAgentExecutorOutput } from "./agent-executor.js";
 export {
```
package/src/skills.js
CHANGED
```diff
@@ -13,6 +13,7 @@ const SKILL_ORDER = [
   "geo-ai-search-optimization-agent-status-board",
   "geo-ai-search-optimization-agent-checkpoint",
   "geo-ai-search-optimization-agent-decision-log",
+  "geo-ai-search-optimization-agent-retrospective",
   "geo-ai-search-optimization-usage",
   "geo-ai-search-optimization-agent-handoff",
   "geo-ai-search-optimization-repair-loop",
@@ -35,6 +36,7 @@ const SKILL_CATEGORY = {
   "geo-ai-search-optimization-agent-status-board": "execution",
   "geo-ai-search-optimization-agent-checkpoint": "execution",
   "geo-ai-search-optimization-agent-decision-log": "execution",
+  "geo-ai-search-optimization-agent-retrospective": "execution",
   "geo-ai-search-optimization-usage": "guidance",
   "geo-ai-search-optimization-agent-handoff": "execution",
   "geo-ai-search-optimization-repair-loop": "execution",
@@ -176,6 +178,7 @@ export function renderBundledSkillsMarkdown(bundle) {
   "- 如果要把当前执行状态直接整理成看板,再进入 agent-status-board。",
   "- 如果要在每轮结束时做继续 / 阻塞 / 收尾决策,再进入 agent-checkpoint。",
   "- 如果要把多轮决策沉淀成可继承的历史,再进入 agent-decision-log。",
+  "- 如果要总结多轮为什么推进顺利或反复卡住,再进入 agent-retrospective。",
   "- 再看 usage skill,知道什么时候该跑哪个命令。",
   "- 如果要交给 agent 执行,再进入 handoff / apply / completion 这一条执行链。",
   "- 如果要产出给团队分发,再进入 share / export / html / publish 这一条交付链。",
```
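The skills.js hunks touch two registries that must stay in lockstep: `SKILL_ORDER` (display order) and `SKILL_CATEGORY` (execution vs guidance). A small consistency check catches a skill added to one list but not the other — the entries below are trimmed to the skills visible in this diff:

```javascript
// Trimmed copies of the two registries from the diff; the full package
// lists many more skills. The check itself is an illustrative sketch.
const SKILL_ORDER = [
  "geo-ai-search-optimization-agent-decision-log",
  "geo-ai-search-optimization-agent-retrospective",
  "geo-ai-search-optimization-usage"
];

const SKILL_CATEGORY = {
  "geo-ai-search-optimization-agent-decision-log": "execution",
  "geo-ai-search-optimization-agent-retrospective": "execution",
  "geo-ai-search-optimization-usage": "guidance"
};

// Every ordered skill should have a category; an empty result means
// the two registries were updated together.
const missing = SKILL_ORDER.filter((name) => !(name in SKILL_CATEGORY));
```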