@chllming/wave-orchestration 0.6.3 → 0.7.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +82 -1
- package/README.md +40 -7
- package/docs/agents/wave-orchestrator-role.md +50 -0
- package/docs/agents/wave-planner-role.md +39 -0
- package/docs/context7/bundles.json +9 -0
- package/docs/context7/planner-agent/README.md +25 -0
- package/docs/context7/planner-agent/manifest.json +83 -0
- package/docs/context7/planner-agent/papers/cooperbench-why-coding-agents-cannot-be-your-teammates-yet.md +3283 -0
- package/docs/context7/planner-agent/papers/dova-deliberation-first-multi-agent-orchestration-for-autonomous-research-automation.md +1699 -0
- package/docs/context7/planner-agent/papers/dpbench-large-language-models-struggle-with-simultaneous-coordination.md +2251 -0
- package/docs/context7/planner-agent/papers/incremental-planning-to-control-a-blackboard-based-problem-solver.md +1729 -0
- package/docs/context7/planner-agent/papers/silo-bench-a-scalable-environment-for-evaluating-distributed-coordination-in-multi-agent-llm-systems.md +3747 -0
- package/docs/context7/planner-agent/papers/todoevolve-learning-to-architect-agent-planning-systems.md +1675 -0
- package/docs/context7/planner-agent/papers/verified-multi-agent-orchestration-a-plan-execute-verify-replan-framework-for-complex-query-resolution.md +1173 -0
- package/docs/context7/planner-agent/papers/why-do-multi-agent-llm-systems-fail.md +5211 -0
- package/docs/context7/planner-agent/topics/planning-and-orchestration.md +24 -0
- package/docs/evals/README.md +96 -1
- package/docs/evals/arm-templates/README.md +13 -0
- package/docs/evals/arm-templates/full-wave.json +15 -0
- package/docs/evals/arm-templates/single-agent.json +15 -0
- package/docs/evals/benchmark-catalog.json +7 -0
- package/docs/evals/cases/README.md +47 -0
- package/docs/evals/cases/wave-blackboard-inbox-targeting.json +73 -0
- package/docs/evals/cases/wave-contradiction-conflict.json +104 -0
- package/docs/evals/cases/wave-expert-routing-preservation.json +69 -0
- package/docs/evals/cases/wave-hidden-profile-private-evidence.json +81 -0
- package/docs/evals/cases/wave-premature-closure-guard.json +71 -0
- package/docs/evals/cases/wave-silo-cross-agent-state.json +77 -0
- package/docs/evals/cases/wave-simultaneous-lockstep.json +92 -0
- package/docs/evals/cooperbench/real-world-mitigation.md +341 -0
- package/docs/evals/external-benchmarks.json +85 -0
- package/docs/evals/external-command-config.sample.json +9 -0
- package/docs/evals/external-command-config.swe-bench-pro.json +8 -0
- package/docs/evals/pilots/README.md +47 -0
- package/docs/evals/pilots/swe-bench-pro-public-full-wave-review-10.json +64 -0
- package/docs/evals/pilots/swe-bench-pro-public-pilot.json +111 -0
- package/docs/evals/wave-benchmark-program.md +302 -0
- package/docs/guides/planner.md +67 -11
- package/docs/guides/terminal-surfaces.md +12 -0
- package/docs/plans/context7-wave-orchestrator.md +20 -0
- package/docs/plans/current-state.md +8 -1
- package/docs/plans/examples/wave-benchmark-improvement.md +108 -0
- package/docs/plans/examples/wave-example-live-proof.md +1 -1
- package/docs/plans/examples/wave-example-rollout-fidelity.md +340 -0
- package/docs/plans/migration.md +26 -0
- package/docs/plans/wave-orchestrator.md +60 -12
- package/docs/plans/waves/reviews/wave-1-benchmark-operator.md +118 -0
- package/docs/reference/cli-reference.md +547 -0
- package/docs/reference/coordination-and-closure.md +436 -0
- package/docs/reference/live-proof-waves.md +25 -3
- package/docs/reference/npmjs-trusted-publishing.md +3 -3
- package/docs/reference/proof-metrics.md +90 -0
- package/docs/reference/runtime-config/README.md +63 -2
- package/docs/reference/runtime-config/codex.md +2 -1
- package/docs/reference/sample-waves.md +29 -18
- package/docs/reference/wave-control.md +164 -0
- package/docs/reference/wave-planning-lessons.md +131 -0
- package/package.json +5 -4
- package/releases/manifest.json +40 -0
- package/scripts/research/agent-context-archive.mjs +18 -0
- package/scripts/research/manifests/agent-context-expanded-2026-03-22.mjs +17 -0
- package/scripts/research/sync-planner-context7-bundle.mjs +133 -0
- package/scripts/wave-orchestrator/agent-state.mjs +11 -2
- package/scripts/wave-orchestrator/artifact-schemas.mjs +232 -0
- package/scripts/wave-orchestrator/autonomous.mjs +7 -0
- package/scripts/wave-orchestrator/benchmark-cases.mjs +374 -0
- package/scripts/wave-orchestrator/benchmark-external.mjs +1384 -0
- package/scripts/wave-orchestrator/benchmark.mjs +972 -0
- package/scripts/wave-orchestrator/clarification-triage.mjs +78 -12
- package/scripts/wave-orchestrator/config.mjs +175 -0
- package/scripts/wave-orchestrator/control-cli.mjs +1216 -0
- package/scripts/wave-orchestrator/control-plane.mjs +697 -0
- package/scripts/wave-orchestrator/coord-cli.mjs +360 -2
- package/scripts/wave-orchestrator/coordination-store.mjs +211 -9
- package/scripts/wave-orchestrator/coordination.mjs +84 -0
- package/scripts/wave-orchestrator/dashboard-renderer.mjs +120 -5
- package/scripts/wave-orchestrator/dashboard-state.mjs +22 -0
- package/scripts/wave-orchestrator/evals.mjs +23 -0
- package/scripts/wave-orchestrator/executors.mjs +3 -2
- package/scripts/wave-orchestrator/feedback.mjs +55 -0
- package/scripts/wave-orchestrator/install.mjs +151 -2
- package/scripts/wave-orchestrator/launcher-closure.mjs +4 -1
- package/scripts/wave-orchestrator/launcher-runtime.mjs +33 -30
- package/scripts/wave-orchestrator/launcher.mjs +884 -36
- package/scripts/wave-orchestrator/planner-context.mjs +75 -0
- package/scripts/wave-orchestrator/planner.mjs +2270 -136
- package/scripts/wave-orchestrator/proof-cli.mjs +195 -0
- package/scripts/wave-orchestrator/proof-registry.mjs +317 -0
- package/scripts/wave-orchestrator/replay.mjs +10 -4
- package/scripts/wave-orchestrator/retry-cli.mjs +184 -0
- package/scripts/wave-orchestrator/retry-control.mjs +225 -0
- package/scripts/wave-orchestrator/shared.mjs +26 -0
- package/scripts/wave-orchestrator/swe-bench-pro-task.mjs +1004 -0
- package/scripts/wave-orchestrator/terminals.mjs +1 -1
- package/scripts/wave-orchestrator/traces.mjs +157 -2
- package/scripts/wave-orchestrator/wave-control-client.mjs +532 -0
- package/scripts/wave-orchestrator/wave-control-schema.mjs +309 -0
- package/scripts/wave-orchestrator/wave-files.mjs +144 -23
- package/scripts/wave.mjs +27 -0
- package/skills/repo-coding-rules/SKILL.md +1 -0
- package/skills/role-cont-eval/SKILL.md +1 -0
- package/skills/role-cont-qa/SKILL.md +13 -6
- package/skills/role-deploy/SKILL.md +1 -0
- package/skills/role-documentation/SKILL.md +4 -0
- package/skills/role-implementation/SKILL.md +4 -0
- package/skills/role-infra/SKILL.md +2 -1
- package/skills/role-integration/SKILL.md +15 -8
- package/skills/role-planner/SKILL.md +39 -0
- package/skills/role-planner/skill.json +21 -0
- package/skills/role-research/SKILL.md +1 -0
- package/skills/role-security/SKILL.md +2 -2
- package/skills/runtime-claude/SKILL.md +2 -1
- package/skills/runtime-codex/SKILL.md +1 -0
- package/skills/runtime-local/SKILL.md +2 -0
- package/skills/runtime-opencode/SKILL.md +1 -0
- package/skills/wave-core/SKILL.md +25 -6
- package/skills/wave-core/references/marker-syntax.md +16 -8
- package/wave.config.json +45 -0
--- /dev/null
+++ package/docs/evals/external-command-config.swe-bench-pro.json
@@ -0,0 +1,8 @@
+{
+  "adapters": {
+    "swe-bench-pro": {
+      "single-agent": "node \"scripts/wave-orchestrator/swe-bench-pro-task.mjs\" run --instance \"{task_id}\" --arm \"{arm}\" --model \"{model_id}\" --reasoning-effort \"{reasoning_effort}\" --max-wall-clock-minutes \"{max_wall_clock_minutes}\" --max-turns \"{max_turns}\"",
+      "full-wave": "node \"scripts/wave-orchestrator/swe-bench-pro-task.mjs\" run --instance \"{task_id}\" --arm \"{arm}\" --model \"{model_id}\" --reasoning-effort \"{reasoning_effort}\" --max-wall-clock-minutes \"{max_wall_clock_minutes}\" --max-turns \"{max_turns}\""
+    }
+  }
+}
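The adapter commands above use `{token}` placeholders for run parameters. A minimal sketch of how such a template might be expanded; the helper below is hypothetical, not the package's actual runner code (which lives in `scripts/wave-orchestrator/benchmark-external.mjs`):

```javascript
// Hypothetical helper: fill {token} placeholders in an adapter command
// template with run parameters. Unknown tokens are left untouched so a
// misconfigured run fails loudly instead of silently dropping a flag.
function expandCommandTemplate(template, params) {
  return template.replace(/\{([a-z_]+)\}/g, (match, key) =>
    key in params ? String(params[key]) : match
  );
}

const cmd = expandCommandTemplate(
  'node task.mjs run --instance "{task_id}" --arm "{arm}"',
  {
    task_id: "instance_NodeBB__NodeBB-04998908ba6721d64eba79ae3b65a351dcfbc5b5-vnan",
    arm: "full-wave",
  }
);
```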
--- /dev/null
+++ package/docs/evals/pilots/README.md
@@ -0,0 +1,47 @@
+---
+title: "External Benchmark Pilots"
+summary: "Frozen pilot manifests for the first honest direct benchmark runs."
+---
+
+# External Benchmark Pilots
+
+These manifests freeze the first-run task selections for direct external benchmarks.
+
+They exist to prevent:
+
+- ad hoc task picking
+- silent pilot drift between runs
+- unfair re-sampling after seeing results
+
+The current frozen direct pilot is:
+
+- `SWE-bench Pro`
+
+Each manifest records:
+
+- benchmark id
+- split assumptions
+- sample strategy
+- exact task ids
+- task-level metadata needed for later aggregation
+
+These manifests are benchmark inputs, not run history.
+
+If a smaller or narrower batch is needed after the canonical pilot is frozen, create a
+new derivative manifest rather than editing the original file in place.
+
+Derivative manifests must:
+
+- name the parent frozen manifest they were derived from
+- explain the deterministic subset rule they use
+- state whether they are review-only or comparison-ready
+
+Example:
+
+- `docs/evals/pilots/swe-bench-pro-public-full-wave-review-10.json`
+  is a review-only 10-task subset derived from the frozen 20-task SWE-bench Pro public pilot.
+  It exists for a multi-agent diagnostic sweep and does not replace the canonical
+  single-agent versus full-wave comparison.
+
+When a derivative review batch is run, inspect the generated `failure-review.md` before
+treating any aggregate score as capability evidence.
--- /dev/null
+++ package/docs/evals/pilots/swe-bench-pro-public-full-wave-review-10.json
@@ -0,0 +1,64 @@
+{
+  "version": 1,
+  "id": "swe-bench-pro-public-full-wave-review-10",
+  "benchmarkId": "swe-bench-pro",
+  "title": "SWE-bench Pro Public Full-Wave Review 10",
+  "split": "public",
+  "sampleStrategy": "first-listed-per-repo-from-frozen-20-task-pilot",
+  "sampleSource": "Derived from docs/evals/pilots/swe-bench-pro-public-pilot.json by taking the first listed task for each repository pair in the frozen 20-task public pilot.",
+  "derivedFromManifestPath": "docs/evals/pilots/swe-bench-pro-public-pilot.json",
+  "reviewOnly": true,
+  "reviewScope": "multi-agent-only-diagnostic",
+  "tasks": [
+    {
+      "taskId": "instance_NodeBB__NodeBB-04998908ba6721d64eba79ae3b65a351dcfbc5b5-vnan",
+      "repo": "NodeBB/NodeBB",
+      "repoLanguage": "js"
+    },
+    {
+      "taskId": "instance_qutebrowser__qutebrowser-f91ace96223cac8161c16dd061907e138fe85111-v059c6fdc75567943479b23ebca7c07b5e9a7f34c",
+      "repo": "qutebrowser/qutebrowser",
+      "repoLanguage": "python"
+    },
+    {
+      "taskId": "instance_ansible__ansible-f327e65d11bb905ed9f15996024f857a95592629-vba6da65a0f3baefda7a058ebbd0a8dcafb8512f5",
+      "repo": "ansible/ansible",
+      "repoLanguage": "python"
+    },
+    {
+      "taskId": "instance_internetarchive__openlibrary-4a5d2a7d24c9e4c11d3069220c0685b736d5ecde-v13642507b4fc1f8d234172bf8129942da2c2ca26",
+      "repo": "internetarchive/openlibrary",
+      "repoLanguage": "python"
+    },
+    {
+      "taskId": "instance_gravitational__teleport-3fa6904377c006497169945428e8197158667910-v626ec2a48416b10a88641359a169d99e935ff037",
+      "repo": "gravitational/teleport",
+      "repoLanguage": "go"
+    },
+    {
+      "taskId": "instance_navidrome__navidrome-7073d18b54da7e53274d11c9e2baef1242e8769e",
+      "repo": "navidrome/navidrome",
+      "repoLanguage": "go"
+    },
+    {
+      "taskId": "instance_element-hq__element-web-33e8edb3d508d6eefb354819ca693b7accc695e7",
+      "repo": "element-hq/element-web",
+      "repoLanguage": "js"
+    },
+    {
+      "taskId": "instance_future-architect__vuls-407407d306e9431d6aa0ab566baa6e44e5ba2904",
+      "repo": "future-architect/vuls",
+      "repoLanguage": "go"
+    },
+    {
+      "taskId": "instance_flipt-io__flipt-e42da21a07a5ae35835ec54f74004ebd58713874",
+      "repo": "flipt-io/flipt",
+      "repoLanguage": "go"
+    },
+    {
+      "taskId": "instance_protonmail__webclients-2c3559cad02d1090985dba7e8eb5a129144d9811",
+      "repo": "protonmail/webclients",
+      "repoLanguage": "js"
+    }
+  ]
+}
--- /dev/null
+++ package/docs/evals/pilots/swe-bench-pro-public-pilot.json
@@ -0,0 +1,111 @@
+{
+  "version": 1,
+  "id": "swe-bench-pro-public-pilot",
+  "benchmarkId": "swe-bench-pro",
+  "title": "SWE-bench Pro Public Pilot",
+  "split": "public",
+  "sampleStrategy": "fixed-stratified-public-slice",
+  "sampleSource": "First 100 public rows from the Hugging Face dataset viewer, stratified to two tasks per selected repository where available.",
+  "tasks": [
+    {
+      "taskId": "instance_NodeBB__NodeBB-04998908ba6721d64eba79ae3b65a351dcfbc5b5-vnan",
+      "repo": "NodeBB/NodeBB",
+      "repoLanguage": "js"
+    },
+    {
+      "taskId": "instance_NodeBB__NodeBB-51d8f3b195bddb13a13ddc0de110722774d9bb1b-vf2cf3cbd463b7ad942381f1c6d077626485a1e9e",
+      "repo": "NodeBB/NodeBB",
+      "repoLanguage": "js"
+    },
+    {
+      "taskId": "instance_qutebrowser__qutebrowser-f91ace96223cac8161c16dd061907e138fe85111-v059c6fdc75567943479b23ebca7c07b5e9a7f34c",
+      "repo": "qutebrowser/qutebrowser",
+      "repoLanguage": "python"
+    },
+    {
+      "taskId": "instance_qutebrowser__qutebrowser-c580ebf0801e5a3ecabc54f327498bb753c6d5f2-v2ef375ac784985212b1805e1d0431dc8f1b3c171",
+      "repo": "qutebrowser/qutebrowser",
+      "repoLanguage": "python"
+    },
+    {
+      "taskId": "instance_ansible__ansible-f327e65d11bb905ed9f15996024f857a95592629-vba6da65a0f3baefda7a058ebbd0a8dcafb8512f5",
+      "repo": "ansible/ansible",
+      "repoLanguage": "python"
+    },
+    {
+      "taskId": "instance_ansible__ansible-a26c325bd8f6e2822d9d7e62f77a424c1db4fbf6-v0f01c69f1e2528b935359cfe578530722bca2c59",
+      "repo": "ansible/ansible",
+      "repoLanguage": "python"
+    },
+    {
+      "taskId": "instance_internetarchive__openlibrary-4a5d2a7d24c9e4c11d3069220c0685b736d5ecde-v13642507b4fc1f8d234172bf8129942da2c2ca26",
+      "repo": "internetarchive/openlibrary",
+      "repoLanguage": "python"
+    },
+    {
+      "taskId": "instance_internetarchive__openlibrary-dbbd9d539c6d4fd45d5be9662aa19b6d664b5137-v08d8e8889ec945ab821fb156c04c7d2e2810debb",
+      "repo": "internetarchive/openlibrary",
+      "repoLanguage": "python"
+    },
+    {
+      "taskId": "instance_gravitational__teleport-3fa6904377c006497169945428e8197158667910-v626ec2a48416b10a88641359a169d99e935ff037",
+      "repo": "gravitational/teleport",
+      "repoLanguage": "go"
+    },
+    {
+      "taskId": "instance_gravitational__teleport-c782838c3a174fdff80cafd8cd3b1aa4dae8beb2",
+      "repo": "gravitational/teleport",
+      "repoLanguage": "go"
+    },
+    {
+      "taskId": "instance_navidrome__navidrome-7073d18b54da7e53274d11c9e2baef1242e8769e",
+      "repo": "navidrome/navidrome",
+      "repoLanguage": "go"
+    },
+    {
+      "taskId": "instance_navidrome__navidrome-b65e76293a917ee2dfc5d4b373b1c62e054d0dca",
+      "repo": "navidrome/navidrome",
+      "repoLanguage": "go"
+    },
+    {
+      "taskId": "instance_element-hq__element-web-33e8edb3d508d6eefb354819ca693b7accc695e7",
+      "repo": "element-hq/element-web",
+      "repoLanguage": "js"
+    },
+    {
+      "taskId": "instance_element-hq__element-web-5dfde12c1c1c0b6e48f17e3405468593e39d9492-vnan",
+      "repo": "element-hq/element-web",
+      "repoLanguage": "js"
+    },
+    {
+      "taskId": "instance_future-architect__vuls-407407d306e9431d6aa0ab566baa6e44e5ba2904",
+      "repo": "future-architect/vuls",
+      "repoLanguage": "go"
+    },
+    {
+      "taskId": "instance_future-architect__vuls-e6c0da61324a0c04026ffd1c031436ee2be9503a",
+      "repo": "future-architect/vuls",
+      "repoLanguage": "go"
+    },
+    {
+      "taskId": "instance_flipt-io__flipt-e42da21a07a5ae35835ec54f74004ebd58713874",
+      "repo": "flipt-io/flipt",
+      "repoLanguage": "go"
+    },
+    {
+      "taskId": "instance_flipt-io__flipt-3b2c25ee8a3ac247c3fad13ad8d64ace34ec8ee7",
+      "repo": "flipt-io/flipt",
+      "repoLanguage": "go"
+    },
+    {
+      "taskId": "instance_protonmail__webclients-2c3559cad02d1090985dba7e8eb5a129144d9811",
+      "repo": "protonmail/webclients",
+      "repoLanguage": "js"
+    },
+    {
+      "taskId": "instance_protonmail__webclients-6dcf0d0b0f7965ad94be3f84971afeb437f25b02",
+      "repo": "protonmail/webclients",
+      "repoLanguage": "js"
+    }
+  ]
+}
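The review-10 manifest is derived from the 20-task pilot by a deterministic subset rule: keep the first listed task for each repository. A sketch of that rule, assuming task objects shaped like the manifest JSON above (illustrative only; the frozen derivative manifest file remains the source of truth):

```javascript
// Deterministic subset rule sketch: keep the first listed task per repo,
// preserving the parent manifest's ordering. Illustrative only.
function firstTaskPerRepo(tasks) {
  const seen = new Set();
  return tasks.filter((task) => {
    if (seen.has(task.repo)) return false;
    seen.add(task.repo);
    return true;
  });
}
```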
--- /dev/null
+++ package/docs/evals/wave-benchmark-program.md
@@ -0,0 +1,302 @@
+---
+title: "Wave Benchmark Program"
+summary: "Locked benchmark spec for Wave-native coordination evaluations, baseline arms, scoring rules, and external benchmark positioning."
+---
+
+# Wave Benchmark Program
+
+This document is the implementation-side contract for Wave benchmarking.
+
+It complements:
+
+- `docs/evals/benchmark-catalog.json` for benchmark vocabulary
+- `docs/evals/cases/` for the deterministic local corpus
+- `docs/evals/external-benchmarks.json` for external adapters and positioning
+- `scripts/wave-orchestrator/benchmark.mjs` for execution and reporting
+
+## First Public Claim
+
+The first claim this benchmark program is designed to support is:
+
+> Under equal executor assumptions, the full Wave orchestration surface improves distributed-state reconstruction, inbox targeting, routing quality, and premature-closure resistance relative to stripped-down baselines.
+
+This is intentionally narrower than "Wave is better than all coding agents."
+
+## Benchmark Arms
+
+The benchmark runner supports these arms:
+
+- `single-agent`
+  One primary owner operates from a local view of records they authored. No inbound targeted coordination is compiled into that arm: there is no compiled shared summary, no targeted inboxes, no capability routing, and no explicit closure-guard simulation.
+- `multi-agent-minimal`
+  Multiple agents exist, but they share only a minimal global summary. There is no targeted inbox routing and no benchmark-aware closure discipline.
+- `full-wave`
+  The current Wave projection and routing surfaces are used: canonical coordination state, compiled shared summary, targeted inboxes, request assignments, and closure-guard simulation.
+- `full-wave-plus-improvement`
+  Reserved for later benchmark-improvement loops after a baseline is established. The runner supports the arm id, but the initial local corpus focuses on the first three arms.
+
+## Shipped Native Families
+
+The first shipped deterministic corpus covers one case in each of the core coordination families:
+
+- `hidden-profile-pooling`
+- `silo-escape`
+- `blackboard-fidelity`
+- `contradiction-recovery`
+- `simultaneous-coordination`
+- `expertise-leverage`
+
+It also includes a cross-cutting premature-closure guard case under `hidden-profile-pooling / premature-consensus-guard`.
+
+## Scoring Rules
+
+Each benchmark case defines:
+
+- `familyId`
+- `benchmarkId`
+- `supportedArms`
+- `fixture`
+- `expectations`
+- `scoring.kind`
+- `scoring.primaryMetric`
+- `scoring.thresholds`
+
+The runner computes case-level metrics from deterministic coordination fixtures using current Wave machinery where possible:
+
+- `compileSharedSummary()`
+- `compileAgentInbox()`
+- `buildRequestAssignments()`
+- `openClarificationLinkedRequests()`
+
+The primary metric determines case pass/fail. Directionality comes from the benchmark catalog, not from the case file.
+
+For reporting above the case level, the runner also computes a direction-aligned score:
+
+- `higher-is-better` metrics keep their raw score
+- `lower-is-better` metrics are flipped to `100 - rawScore`
+
+That rule applies to:
+
+- family `meanScore`
+- overall and family `meanDelta`
+- the `statisticallyConfident` comparison flag
+
+This keeps a positive delta semantically stable: positive always means "better than baseline" even when a case's raw primary metric is lower-is-better.
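The alignment rule above can be expressed as a one-liner. This is an illustrative sketch, not the shipped `benchmark.mjs` code:

```javascript
// Direction-aligned score sketch: flip lower-is-better metrics so that a
// higher aligned score always means "better", on the corpus's 0-100 scale.
function alignScore(rawScore, direction) {
  return direction === "lower-is-better" ? 100 - rawScore : rawScore;
}
```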
+
+## Significance And Comparative Reporting
+
+Comparative reporting uses:
+
+- mean score delta versus the `single-agent` baseline
+- bootstrap confidence intervals over case deltas
+- a confidence rule: only report a statistically confident win when the lower bound of the confidence interval is above zero
+
+The initial implementation reports the practical delta directly and leaves final publication thresholds to operator judgment. The runner still records the per-case practical win threshold in the case definition so later work can harden claim logic without changing the corpus format.
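The bootstrap confidence rule can be sketched as follows. This is an assumption-laden illustration (percentile interval, uniform resampling), not the runner's actual implementation:

```javascript
// Bootstrap sketch for the confidence rule above: resample per-case deltas
// with replacement, compute the mean of each resample, and take a
// percentile interval. A "statistically confident" win requires the
// interval's lower bound to sit above zero. Illustrative only.
function bootstrapMeanDeltaCI(deltas, iterations = 2000, alpha = 0.05) {
  const means = [];
  for (let i = 0; i < iterations; i++) {
    let sum = 0;
    for (let j = 0; j < deltas.length; j++) {
      sum += deltas[Math.floor(Math.random() * deltas.length)];
    }
    means.push(sum / deltas.length);
  }
  means.sort((a, b) => a - b);
  const lower = means[Math.floor((alpha / 2) * iterations)];
  const upper = means[Math.floor((1 - alpha / 2) * iterations)];
  return { lower, upper, confidentWin: lower > 0 };
}
```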
+
+## Corpus Design Rules
+
+The local case corpus follows these constraints:
+
+- deterministic and file-backed
+- cheap enough to run in ordinary repo CI or local development
+- focused on Wave-native surfaces, not generic model capability
+- auditable by inspecting the case JSON, generated summaries, inboxes, and assignments
+- extensible to live-run and trace-backed variants later
+
+The first corpus deliberately exercises projection, routing, and closure logic before attempting expensive live multi-executor runs.
+
+## Native Benchmarking Mode
+
+`wave benchmark run` is the native deterministic benchmarking mode.
+
+This mode is intentionally narrow:
+
+- it tests the Wave substrate, not generic model capability
+- it holds the coordination fixture constant and varies only the arm behavior
+- it uses current Wave machinery to compile summaries, inboxes, assignments, and closure guards
+- it is cheap and reproducible enough to run in local development and CI
+
+What it is meant to prove:
+
+- the blackboard projections preserve decision-changing state
+- targeted inboxes reduce silos instead of creating them
+- capability routing sends the right work to the right owner
+- contradiction handling becomes explicit repair work
+- closure guards resist premature PASS
+
+What it does not prove by itself:
+
+- raw coding ability on live repos
+- leaderboard-ready external benchmark performance
+- runtime-specific agent behavior under real tool pressure
+
+That separation is intentional. Native mode is the first honest proof layer for a MAS tool whose core claim is about shared state, routing, synthesis, and closure discipline.
+
+## Native Metric Contract
+
+For each case and arm, the native runner records:
+
+- `score`
+  The case's primary metric value.
+- `alignedScore`
+  The direction-aligned case score used for family summaries and deltas.
+- `passed`
+  Whether the primary metric satisfied the case threshold.
+- `direction`
+  Whether the metric is `higher-is-better` or `lower-is-better`.
+- `threshold`
+  The configured case threshold for the primary metric.
+- `metrics`
+  The full metric map computed from the deterministic fixture.
+- `details`
+  Supporting breakdowns such as matched global facts, summary facts, targeted inbox recall, assignment precision, distinct assigned agents, and whether the blocking guard tripped.
+- `artifacts`
+  The generated `sharedSummary`, `inboxes`, `assignments`, and `blockingGuard` state used to score the arm.
+
+The runner also records:
+
+- `familySummary`
+  Direction-aligned mean score and pass rate per family and arm.
+- `comparisons`
+  Direction-aligned mean delta versus `single-agent`, bootstrap confidence intervals, and a conservative `statisticallyConfident` flag.
+
+When `waveControl` reporting is enabled, native runs also publish:
+
+- `benchmark_run`
+  Suite-level metadata, selected arms, family summary, and comparison summary.
+- `benchmark_item`
+  Full per-case, per-arm payloads including `score`, `alignedScore`, `passed`, `metrics`, `details`, and generated artifacts.
+
+Native mode does **not** emit `verification` or `review` events, because there is no external verifier and no benchmark-validity split to interpret. Those are reserved for `wave benchmark external-run`.
+
+## Native Metric Set
+
+The current deterministic runner logs the following metrics:
+
+| Metric | Native signal used today | Why it matters for the MAS claim |
+| --- | --- | --- |
+| `distributed-info-accuracy` | Percent of expected global facts visible in the integration-visible state: shared summary, integration-owner view when present, and structured assignment artifacts | Proves the team pooled distributed evidence rather than leaving it siloed |
+| `latent-asymmetry-surfacing-rate` | Clarification recall by explicit record id when a case expects missing-fact surfacing, otherwise targeted inbox recall | Proves the system notices that important evidence is still missing before closure |
+| `premature-convergence-rate` | `100` when a case required a blocking guard and the arm failed to keep it active, else `0` | Proves whether closure discipline resists converging on incomplete state |
+| `global-state-reconstruction-rate` | Percent of required cross-agent facts reconstructed in the integration-visible state rather than only in owner-private inboxes | Proves communication turned into a correct shared picture, not only message traffic |
+| `summary-fact-retention-rate` | Percent of required summary facts preserved in the shared summary | Proves summary compression is trustworthy enough to support downstream synthesis |
+| `communication-reasoning-gap` | `100 - global-state-reconstruction-rate` | Makes failure explicit when agents talk but still fail to integrate correctly |
+| `projection-consistency-rate` | Same summary-fidelity signal, framed for projection integrity | Proves the blackboard projections remain semantically aligned with canonical state |
+| `targeted-inbox-recall` | Percent of expected owner-specific facts present in the right inboxes | Proves targeted context actually reaches the agents who own the work |
+| `integration-coherence-rate` | Global-fact recall used as a proxy for integration fidelity in the deterministic corpus | Proves the synthesis layer reflects the underlying coordination state |
+| `contradiction-detection-rate` | Targeted-fact recall on contradiction-oriented fixtures | Proves conflicting claims become visible instead of being smoothed away |
+| `repair-closure-rate` | Assignment precision for required repair or follow-up work | Proves contradictions and blockers turn into owner-bound resolution work |
+| `false-consensus-rate` | `100` when a contradiction/premature-close guard should have held and did not, else `0` | Proves whether the system is narrating consensus where the state is still unresolved |
+| `deadlock-rate` | `100` when the arm failed to reach the required number of distinct owners in simultaneous-coordination cases, else `0` | Proves whether the team collapses under concurrent coordination pressure |
+| `contention-resolution-rate` | Assignment precision in concurrent blocker cases | Proves simultaneous work can resolve rather than stall |
+| `symmetry-breaking-rate` | Percent of the required distinct owners/choices achieved | Proves the team can break lockstep and avoid same-plan collapse |
+| `expert-preservation-rate` | Targeted-fact recall used on expert-preservation fixtures | Proves the strongest specialist signal survives into the visible decision path |
+| `capability-routing-precision` | Correct assignment rate for capability-routed requests | Proves the routing layer is steering work to the intended owner |
+| `expert-performance-gap` | `100 - expert-preservation-rate` | Makes expert-signal dilution explicit as a failure measure rather than an anecdote |
+
+Several of these metrics intentionally reuse the same deterministic signals under different benchmark families. That is not accidental. The goal is not to create an unnecessarily large metric vocabulary; it is to ask the same core question from multiple MAS failure angles:
+
+- did the right facts reach shared state
+- did the right owners receive the right context
+- did conflicts become explicit repair work
+- did closure wait for integrated proof
+
+The important constraint is that "shared state" here does **not** mean "the union of every owner inbox." The native runner scores global reconstruction from the integration-visible artifacts, so facts that remain split across private owner views do not count as reconstructed.
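The two derived gap metrics in the table are simple complements of recall-style rates on the 0-100 scale. A sketch, for illustration only:

```javascript
// Derived gap metric sketch: each gap is the complement of a recall-style
// rate on the 0-100 scale, so 0 means "no gap" and 100 means total failure.
const communicationReasoningGap = (globalStateReconstructionRate) =>
  100 - globalStateReconstructionRate;
const expertPerformanceGap = (expertPreservationRate) =>
  100 - expertPreservationRate;
```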
|
|
205
|
+
|
|
206
|
+
## Why These Metrics Matter
|
|
207
|
+
|
|
208
|
+
The first public claim is not "Wave is a better model." It is that Wave is a better multi-agent coordination substrate.
|
|
209
|
+
|
|
210
|
+
That means the most valuable native metrics are the ones that expose the failure cases from the README:

- distributed-evidence metrics matter because a MAS that cannot pool private facts has no credible shared-state claim
- summary and inbox metrics matter because a blackboard is only useful if the projections stay faithful and owner-relevant
- routing metrics matter because specialist structure only helps if work actually lands on the right owner
- contradiction and repair metrics matter because visible disagreement without repair is still coordination failure
- premature-closure metrics matter because a MAS that can always narrate PASS is not proving anything
- simultaneous-coordination metrics matter because many systems look fine in serial but collapse under concurrent blockers

In other words, these metrics matter because they test the *coordination mechanism itself*, which is the actual product claim of Wave.

## External Benchmark Positioning

The external benchmark registry is split into two modes:

- `direct`
  The benchmark is treated as a runnable external suite with a command template or adapter recipe. The current direct target is `SWE-bench Pro`.
- `adapted`
  The benchmark is treated as a design reference whose failure mode should be mirrored with repo-local Wave cases. Current adapted targets are `SkillsBench`, `EvoClaw`, `HiddenBench`, `Silo-Bench`, and `DPBench`.

This keeps the first milestone honest:

- prove the Wave-specific substrate first
- then layer in broader external reality checks

## Current Direct Benchmark

The current direct external benchmark is:

- `SWE-bench Pro`

Why this benchmark now:

- it is contamination-resistant relative to older SWE-bench variants
- it has a public executable harness
- it exercises real repository bug-fix work without changing the Wave coordination claim into a generic terminal benchmark claim

The second direct benchmark slot is intentionally deferred until a later `CooperBench` pass.

The first direct comparison should compare only:

- `single-agent`
- `full-wave`

And both arms must keep the following fixed:

- model id
- executor id and command
- tool permissions
- temperature and reasoning settings
- wall-clock budget
- turn budget
- retry limit
- verification harness
- dataset version or task manifest

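A cheap guard for that invariant is to diff the two arm configs before launching. The file contents and key names below are invented placeholders, not the real `docs/evals/arm-templates/` schema:

```shell
workdir=$(mktemp -d) && cd "$workdir"

# hypothetical arm configs; key names are placeholders, not the real template schema
cat > single-agent.json <<'EOF'
{"model": "model-x", "temperature": 0, "turnBudget": 40, "retryLimit": 2}
EOF
cat > full-wave.json <<'EOF'
{"model": "model-x", "temperature": 0, "turnBudget": 40, "retryLimit": 2}
EOF

# any difference between the arms invalidates the comparison
if diff single-agent.json full-wave.json >/dev/null; then
  echo "arms match on pinned parameters"
else
  echo "arm mismatch: comparison is not valid" >&2
  exit 1
fi
```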
Execution should be driven through explicit command templates for the official benchmark harnesses rather than ad hoc shell invocation. The config shape lives at `docs/evals/external-command-config.sample.json`, and the local SWE-bench Pro harness is wired through `docs/evals/external-command-config.swe-bench-pro.json`.

## Review-Only External Subsets

After the canonical SWE-bench Pro pilot is frozen, narrower review batches may be derived for diagnostic work such as a `full-wave`-only sweep.

Those runs are allowed only when they:

- derive from an already-frozen pilot manifest instead of re-sampling freely
- keep the review scope explicit in the manifest and report
- avoid presenting the result as a matched `single-agent` versus `full-wave` claim

Example:

- `docs/evals/pilots/swe-bench-pro-public-full-wave-review-10.json`
  is a 10-task diagnostic subset derived from the frozen 20-task SWE-bench Pro pilot.
  It is suitable for multi-agent review work before a later pairwise rerun, but it does
  not replace the canonical direct comparison.

## Output Contract

`wave benchmark run` writes results under `.tmp/wave-benchmarks/latest/` by default:

- `results.json`
- `results.md`

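A sanity check of that contract might look like the sketch below, which creates empty stand-in artifacts rather than invoking a real benchmark run:

```shell
# stand-in for the default output directory of `wave benchmark run`
out=.tmp/wave-benchmarks/latest
mkdir -p "$out"
: > "$out/results.json"   # empty placeholders; a real run writes these itself
: > "$out/results.md"

# the contract: both artifacts exist after a run
for f in results.json results.md; do
  [ -f "$out/$f" ] && echo "present: $f"
done
```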
`wave benchmark external-run` writes the same pair in its selected output directory plus:

- `failure-review.json`
- `failure-review.md`

The failure review is the first artifact to inspect for review-only subsets because it
separates verifier invalidation, setup or harness failures, dry-run planning output, and
trustworthy patch-quality failures.

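A crude triage pass over that artifact could look like this sketch; the JSON shape and category strings are assumptions for illustration, not the runner's actual taxonomy:

```shell
workdir=$(mktemp -d) && cd "$workdir"

# invented failure-review shape; the real field and category names may differ
cat > failure-review.json <<'EOF'
[
  {"task": "t1", "category": "verifier-invalidation"},
  {"task": "t2", "category": "harness-failure"},
  {"task": "t3", "category": "patch-quality"}
]
EOF

# only patch-quality entries count as trustworthy failure signal
real=$(grep -c '"category": "patch-quality"' failure-review.json)
echo "trustworthy patch-quality failures: ${real}"
```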
These artifacts are local and reproducible. They are not intended to be committed as run history.
package/docs/guides/planner.md
CHANGED
@@ -10,15 +10,38 @@ It reduces repeated setup questions, stores project defaults, and generates wave
 
 - `wave project setup`
 - `wave project show`
-- `wave draft
+- interactive `wave draft --wave <n>`
+- agentic `wave draft --agentic --task "..."`
+- planner run review via `wave draft --show-run <run-id>`
+- explicit materialization via `wave draft --apply-run <run-id>`
 - persistent project memory in `.wave/project-profile.json`
+- transient planner packets in `.wave/planner/runs/<run-id>/`
+- planner-run Context7 injection via `planner.agentic.context7Bundle`
 - JSON specs in `docs/plans/waves/specs/wave-<n>.json`
 - rendered markdown waves in `docs/plans/waves/wave-<n>.md`
-- component matrix updates
+- candidate matrix previews plus canonical component matrix updates on apply
+
+## Upgrading Adopted 0.6.x Repos
+
+`wave upgrade` updates the installed runtime and records `.wave/install-state.json`, but it does not copy newer planner starter files into an already-adopted repo.
+
+If `pnpm exec wave doctor` starts failing after a `0.7.x` upgrade, sync these repo-owned planner surfaces from the packaged release:
+
+- `docs/agents/wave-planner-role.md`
+- `skills/role-planner/`
+- `docs/context7/planner-agent/`
+- `docs/reference/wave-planning-lessons.md`
+- the `planner-agentic` bundle entry in `docs/context7/bundles.json`
+
+Then rerun:
+
+```bash
+pnpm exec wave doctor
+pnpm exec wave launch --lane main --dry-run --no-dashboard
+```
 
 ## What The Planner Does Not Yet Ship
 
-- ad hoc transient runs
 - forward replanning of later waves
 - separate runtime enforcement for oversight vs dark-factory
 
@@ -46,12 +69,25 @@ This lets later drafts inherit repo-specific defaults instead of asking the same
 
 ## Drafting A Wave
 
-
+Interactive draft:
 
 ```bash
 pnpm exec wave draft --wave 1 --template implementation
 ```
 
+Agentic draft:
+
+```bash
+pnpm exec wave draft --agentic --task "Add X according to the current architecture" --from-wave 3 --max-waves 2
+pnpm exec wave draft --show-run <run-id> --json
+pnpm exec wave draft --apply-run <run-id> --waves all
+```
+
+The planner agent reads repo-local planning sources directly, and it can also
+prefetch a planner-specific Context7 bundle when
+`planner.agentic.context7Bundle` points at a published library. The tracked
+source corpus for that library lives under `docs/context7/planner-agent/`.
+
 Supported templates today:
 
 - `implementation`
@@ -59,12 +95,25 @@ Supported templates today:
 - `infra`
 - `release`
 
-
+Interactive draft writes canonical waves immediately:
 
 - `docs/plans/waves/specs/wave-<n>.json`
 - `docs/plans/waves/wave-<n>.md`
 
-
+Agentic draft writes a transient review packet first:
+
+- `.wave/planner/runs/<run-id>/request.json`
+- `.wave/planner/runs/<run-id>/sources.json`
+- `.wave/planner/runs/<run-id>/plan.json`
+- `.wave/planner/runs/<run-id>/verification.json`
+- `.wave/planner/runs/<run-id>/candidate/specs/wave-<n>.json`
+- `.wave/planner/runs/<run-id>/candidate/waves/wave-<n>.md`
+
+Canonical `docs/plans/waves/` files are only written by `--apply-run`.
+
+The transient packet also includes the exact planner prompt and the resolved
+planner Context7 selection, so review can see which external planning corpus was
+attached before the planner drafted waves.
 
 ## What The Planner Asks For
 
@@ -88,6 +137,10 @@ That gives you a wave that is much closer to launch-ready than a blank markdown
 
 The planner does not auto-discover every possible skill bundle yet, but it supports explicit per-agent `### Skills` in the rendered output.
 
+Interactive `wave draft --wave <n>` now resolves bundle names from
+`docs/context7/bundles.json`, so wave-level and per-agent Context7 selections
+can use any configured bundle instead of only `none`.
+
 The more important interaction is indirect:
 
 - project profile remembers deploy environments
@@ -99,11 +152,12 @@ So planner structure and skill resolution already reinforce each other.
 ## Recommended Workflow
 
 1. Run `pnpm exec wave project setup` once for the repo.
-2. Use `pnpm exec wave draft --wave <n> --template <template>`.
-3. Review the generated JSON spec
-4. Adjust repo-specific prompts, file ownership, skills, and validation commands.
-5.
-6.
+2. Use either `pnpm exec wave draft --wave <n> --template <template>` or `pnpm exec wave draft --agentic --task "..." --from-wave <n>`.
+3. Review the generated JSON spec, markdown wave, or agentic run packet.
+4. Adjust repo-specific prompts, file ownership, deliverables, proof artifacts, skills, and validation commands.
+5. If you used agentic draft, materialize only the accepted waves with `pnpm exec wave draft --apply-run <run-id>`.
+6. Run `pnpm exec wave launch --lane <lane> --start-wave <n> --end-wave <n> --dry-run --no-dashboard`.
+7. Only launch live once the dry-run artifacts look correct.
 
 If you want concrete authored examples after the planner baseline, see [docs/reference/sample-waves.md](../reference/sample-waves.md).
 
@@ -112,6 +166,8 @@ If you want concrete authored examples after the planner baseline, see [docs/ref
 - Treat the generated draft as a strong starting point, not untouchable output.
 - Tighten validation commands before launch.
 - Keep file ownership narrow and explicit.
+- Treat `### Deliverables` and `### Proof artifacts` as part of the plan contract, not optional polish.
+- Keep `docs/context7/planner-agent/` in sync with the selected planning cache slice before publishing the planner bundle to Context7.
 - Add explicit `### Skills` only when the lane, role, runtime, and deploy-kind defaults are not enough.
 - Use the component matrix as a planning contract, not just a reporting surface.
 - Prefer updating the project profile when the same answers recur across waves.