agentic-dev 0.2.2 → 0.2.4

This diff shows the content of publicly available package versions released to one of the supported registries. It is provided for informational purposes only and reflects the changes between the two versions as they appear in their respective public registries.
Files changed (165)
  1. package/.claude/CLAUDE.md +1 -1
  2. package/.claude/skills/sdd/SKILL.md +178 -7
  3. package/.claude/skills/sdd/agents/openai.yaml +4 -0
  4. package/.claude/skills/sdd/references/section-map.md +67 -0
  5. package/.env.example +2 -2
  6. package/README.md +5 -5
  7. package/client/{platform → web}/Dockerfile +3 -3
  8. package/client/web/Dockerfile.dev +18 -0
  9. package/client/{platform → web}/README.md +3 -3
  10. package/client/{platform → web}/index.html +1 -1
  11. package/client/{platform → web}/package.json +7 -7
  12. package/client/{platform/scripts/ui-parity-platform-adapter.mjs → web/scripts/ui-parity-web-adapter.mjs} +7 -7
  13. package/client/{platform → web}/src/auth/AuthProvider.tsx +1 -1
  14. package/compose.yml +6 -6
  15. package/infra/compose/.env.dev.example +3 -3
  16. package/infra/compose/.env.prod.example +3 -3
  17. package/infra/compose/README.md +1 -1
  18. package/infra/compose/dev.yml +5 -5
  19. package/infra/compose/prod.yml +6 -6
  20. package/infra/terraform/openstack/dev/terraform.tfvars.example +3 -3
  21. package/infra/terraform/openstack/prod/terraform.tfvars.example +3 -3
  22. package/lib/scaffold.mjs +7 -7
  23. package/package.json +2 -2
  24. package/scripts/dev/audit_sdd_build_ast.py +9 -9
  25. package/sdd/01_planning/01_feature/auth_feature_spec.md +2 -2
  26. package/sdd/01_planning/01_feature/catalog_feature_spec.md +3 -3
  27. package/sdd/01_planning/01_feature/order_feature_spec.md +11 -11
  28. package/sdd/01_planning/02_screen/INDEX.md +2 -2
  29. package/sdd/01_planning/02_screen/README.md +2 -2
  30. package/sdd/01_planning/03_architecture/templates_system_architecture.md +3 -3
  31. package/sdd/01_planning/05_api/templates_api_contract.md +3 -3
  32. package/sdd/01_planning/06_iac/templates_runtime_and_cicd_baseline.md +1 -1
  33. package/sdd/01_planning/07_integration/templates_frontend_api_integration.md +3 -3
  34. package/sdd/01_planning/10_test/templates_test_strategy.md +2 -2
  35. package/sdd/01_planning/INDEX.md +1 -1
  36. package/sdd/02_plan/02_screen/INDEX.md +1 -1
  37. package/sdd/02_plan/02_screen/README.md +1 -1
  38. package/sdd/02_plan/03_architecture/build_ast_runtime_tree_governance.md +1 -1
  39. package/sdd/02_plan/03_architecture/repository_governance.md +1 -1
  40. package/sdd/02_plan/07_integration/frontend_live_integration.md +3 -3
  41. package/sdd/02_plan/10_test/templates/{ui_parity_platform_contract.template.yaml → ui_parity_web_contract.template.yaml} +1 -1
  42. package/sdd/03_build/01_feature/domain/account_and_access.md +1 -1
  43. package/sdd/03_build/01_feature/domain/catalog_and_inventory.md +1 -1
  44. package/sdd/03_build/01_feature/domain/ordering_and_fulfillment.md +1 -1
  45. package/sdd/03_build/01_feature/service/README.md +1 -1
  46. package/sdd/03_build/01_feature/service/{platform_surface.md → web_surface.md} +3 -3
  47. package/sdd/03_build/02_screen/README.md +1 -1
  48. package/sdd/03_build/02_screen/web/README.md +5 -0
  49. package/sdd/03_build/06_iac/template_runtime_delivery.md +1 -1
  50. package/sdd/03_build/07_integration/frontend_live_integration.md +1 -1
  51. package/sdd/04_verify/01_feature/service_verification.md +2 -2
  52. package/sdd/04_verify/02_screen/web/README.md +4 -0
  53. package/sdd/04_verify/06_iac/template_runtime_delivery.md +3 -3
  54. package/sdd/99_toolchain/01_automation/agentic-dev/assets/repo-contract.template.json +11 -11
  55. package/sdd/99_toolchain/01_automation/agentic-dev/repo-contract.json +13 -13
  56. package/sdd/99_toolchain/01_automation/agentic-parity-harness-design.md +5 -5
  57. package/sdd/99_toolchain/01_automation/capture_screen_assets.mjs +4 -4
  58. package/sdd/99_toolchain/01_automation/harness-layout.md +2 -2
  59. package/sdd/99_toolchain/01_automation/parity-execution-tooling-design.md +5 -5
  60. package/sdd/99_toolchain/01_automation/screen_spec_manifest.py +17 -17
  61. package/sdd/99_toolchain/01_automation/ui-parity/README.md +10 -10
  62. package/sdd/99_toolchain/01_automation/ui-parity/interfaces/ui-parity-artifact-layout.md +1 -1
  63. package/sdd/99_toolchain/01_automation/ui-parity/interfaces/ui-parity-route-gap-interface.md +2 -2
  64. package/sdd/99_toolchain/03_templates/playwright_exactness_manifest.example.py +1 -1
  65. package/server/data/README.md +1 -1
  66. package/.claude/skills/commit/SKILL.md +0 -37
  67. package/.claude/skills/dev-browser/SKILL.md +0 -30
  68. package/.claude/skills/otro/SKILL.md +0 -43
  69. package/.claude/skills/planning-with-files/SKILL.md +0 -37
  70. package/.claude/skills/prd/SKILL.md +0 -27
  71. package/.claude/skills/ralph-loop/SKILL.md +0 -42
  72. package/.claude/skills/sdd-dev/SKILL.md +0 -71
  73. package/.claude/skills/sdd-development/SKILL.md +0 -13
  74. package/.codex/skills/agents/openai.yaml +0 -4
  75. package/.codex/skills/commit/SKILL.md +0 -219
  76. package/.codex/skills/commit/references/commit_examples.md +0 -292
  77. package/.codex/skills/dev-browser/SKILL.md +0 -211
  78. package/.codex/skills/dev-browser/bun.lock +0 -443
  79. package/.codex/skills/dev-browser/package-lock.json +0 -2988
  80. package/.codex/skills/dev-browser/package.json +0 -31
  81. package/.codex/skills/dev-browser/references/scraping.md +0 -155
  82. package/.codex/skills/dev-browser/scripts/start-relay.ts +0 -32
  83. package/.codex/skills/dev-browser/scripts/start-server.ts +0 -117
  84. package/.codex/skills/dev-browser/server.sh +0 -24
  85. package/.codex/skills/dev-browser/src/client.ts +0 -474
  86. package/.codex/skills/dev-browser/src/index.ts +0 -287
  87. package/.codex/skills/dev-browser/src/relay.ts +0 -731
  88. package/.codex/skills/dev-browser/src/snapshot/__tests__/snapshot.test.ts +0 -223
  89. package/.codex/skills/dev-browser/src/snapshot/browser-script.ts +0 -877
  90. package/.codex/skills/dev-browser/src/snapshot/index.ts +0 -14
  91. package/.codex/skills/dev-browser/src/snapshot/inject.ts +0 -13
  92. package/.codex/skills/dev-browser/src/types.ts +0 -34
  93. package/.codex/skills/dev-browser/tsconfig.json +0 -36
  94. package/.codex/skills/dev-browser/vitest.config.ts +0 -12
  95. package/.codex/skills/otro/SKILL.md +0 -74
  96. package/.codex/skills/otro/agents/openai.yaml +0 -4
  97. package/.codex/skills/otro/references/agent-prompts.md +0 -61
  98. package/.codex/skills/otro/references/contracts.md +0 -146
  99. package/.codex/skills/otro/references/orchestration-loop.md +0 -51
  100. package/.codex/skills/otro/references/runtime.md +0 -79
  101. package/.codex/skills/otro/runs/README.md +0 -11
  102. package/.codex/skills/otro/schemas/step_plan.schema.json +0 -289
  103. package/.codex/skills/otro/schemas/task_result.schema.json +0 -142
  104. package/.codex/skills/otro/schemas/wave_plan.schema.json +0 -4
  105. package/.codex/skills/otro/scripts/README.md +0 -38
  106. package/.codex/skills/otro/scripts/bump_validation_header.py +0 -179
  107. package/.codex/skills/otro/scripts/check_validation_header.py +0 -84
  108. package/.codex/skills/otro/scripts/common.py +0 -303
  109. package/.codex/skills/otro/scripts/init_run.sh +0 -68
  110. package/.codex/skills/otro/scripts/plan_loop.py +0 -8
  111. package/.codex/skills/otro/scripts/plan_step.py +0 -367
  112. package/.codex/skills/otro/scripts/plan_wave.py +0 -8
  113. package/.codex/skills/otro/scripts/reconcile_loop.py +0 -8
  114. package/.codex/skills/otro/scripts/reconcile_step.py +0 -37
  115. package/.codex/skills/otro/scripts/reconcile_wave.py +0 -8
  116. package/.codex/skills/otro/scripts/run_loop.py +0 -300
  117. package/.codex/skills/otro/scripts/run_loop_step.py +0 -8
  118. package/.codex/skills/otro/scripts/run_step.py +0 -246
  119. package/.codex/skills/otro/scripts/run_wave.py +0 -8
  120. package/.codex/skills/otro/validation/validation.md +0 -15
  121. package/.codex/skills/planning-with-files/SKILL.md +0 -42
  122. package/.codex/skills/planning-with-files/agents/openai.yaml +0 -4
  123. package/.codex/skills/planning-with-files/assets/plan-template.md +0 -37
  124. package/.codex/skills/planning-with-files/references/plan-rules.md +0 -35
  125. package/.codex/skills/planning-with-files/scripts/new_plan.sh +0 -65
  126. package/.codex/skills/prd/SKILL.md +0 -235
  127. package/.codex/skills/ralph-loop/SKILL.md +0 -46
  128. package/.codex/skills/ralph-loop/agents/openai.yaml +0 -4
  129. package/.codex/skills/ralph-loop/references/failure-triage.md +0 -32
  130. package/.codex/skills/ralph-loop/scripts/loop_until_success.sh +0 -97
  131. package/client/platform/Dockerfile.dev +0 -18
  132. package/sdd/03_build/02_screen/platform/README.md +0 -5
  133. package/sdd/04_verify/02_screen/platform/README.md +0 -4
  134. package/sdd/99_toolchain/02_policies/otro-orchestration-policy.md +0 -30
  135. /package/client/{platform → web}/.dockerignore +0 -0
  136. /package/client/{platform → web}/.env.example +0 -0
  137. /package/client/{platform → web}/postcss.config.js +0 -0
  138. /package/client/{platform → web}/src/api/client.ts +0 -0
  139. /package/client/{platform → web}/src/api/orders.ts +0 -0
  140. /package/client/{platform → web}/src/app/App.tsx +0 -0
  141. /package/client/{platform → web}/src/auth/ProtectedRoute.tsx +0 -0
  142. /package/client/{platform → web}/src/auth/auth-client.ts +0 -0
  143. /package/client/{platform → web}/src/auth/types.ts +0 -0
  144. /package/client/{platform → web}/src/components/AppShell.tsx +0 -0
  145. /package/client/{platform → web}/src/components/ui/button.tsx +0 -0
  146. /package/client/{platform → web}/src/components/ui/card.tsx +0 -0
  147. /package/client/{platform → web}/src/components/ui/input.tsx +0 -0
  148. /package/client/{platform → web}/src/lib/cn.ts +0 -0
  149. /package/client/{platform → web}/src/lib/specRouteCatalog.json +0 -0
  150. /package/client/{platform → web}/src/lib/specScreens.json +0 -0
  151. /package/client/{platform → web}/src/main.tsx +0 -0
  152. /package/client/{platform → web}/src/pages/DashboardPage.tsx +0 -0
  153. /package/client/{platform → web}/src/pages/LoginPage.tsx +0 -0
  154. /package/client/{platform → web}/src/pages/OrdersPage.tsx +0 -0
  155. /package/client/{platform → web}/src/styles/globals.css +0 -0
  156. /package/client/{platform → web}/src/theme-vars.ts +0 -0
  157. /package/client/{platform → web}/src/theme.ts +0 -0
  158. /package/client/{platform → web}/src/vite-env.d.ts +0 -0
  159. /package/client/{platform → web}/tailwind.config.js +0 -0
  160. /package/client/{platform → web}/tsconfig.json +0 -0
  161. /package/client/{platform → web}/vite.config.ts +0 -0
  162. /package/sdd/01_planning/02_screen/{platform_screen_spec.pdf → web_screen_spec.pdf} +0 -0
  163. /package/sdd/99_toolchain/01_automation/assets/{platform_screen_capture → web_screen_capture}/dashboard.png +0 -0
  164. /package/sdd/99_toolchain/01_automation/assets/{platform_screen_capture → web_screen_capture}/login.png +0 -0
  165. /package/sdd/99_toolchain/01_automation/assets/{platform_screen_capture → web_screen_capture}/orders.png +0 -0
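Most of the 165 entries above are a mechanical `platform` → `web` rename across client and SDD assets (both path segments like `client/platform/` and filename prefixes like `platform_surface.md`). A hedged sketch of how such a path rewrite can be expressed; the helper name and rewrite rules are illustrative, not part of the package:

```python
from pathlib import PurePosixPath

def rename_platform_to_web(path: str) -> str:
    """Rewrite 'platform' path segments and 'platform_' filename prefixes to 'web'."""
    parts = [
        "web" if part == "platform" else part.replace("platform_", "web_")
        for part in PurePosixPath(path).parts
    ]
    return str(PurePosixPath(*parts))

print(rename_platform_to_web("package/client/platform/src/main.tsx"))
# package/client/web/src/main.tsx
```

This covers the two dominant patterns in the list; one-off renames such as `ui-parity-platform-adapter.mjs` would still need their own rule.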
@@ -1,300 +0,0 @@
- #!/usr/bin/env python3
- from __future__ import annotations
-
- """
- Orchestrator run loop with plan snapshot persistence.
-
- This controller writes a DEV run-local plan snapshot per loop under:
-   {run_dir}/artifacts/{run_name}/loop{n}/plan/plan.v{plan_version}.json
-
- Notes:
- - Keeps snapshots out of repo root; treats run-local state as source of truth.
- - Provides a lightweight --snapshot-only controller entrypoint for Taskfile use.
- - Includes the phrase "plan snapshot" intentionally for grep-based verification.
- """
-
- import argparse
- from pathlib import Path
-
- from common import canonicalize_plan, load_json, load_loop_results, plan_steps, set_plan_steps, task_step, write_json
- from plan_loop import main as plan_loop_main
- from reconcile_loop import main as reconcile_loop_main
- from run_loop_step import main as run_loop_step_main
-
-
- def persist_plan_snapshot(run_dir: Path, loop_number: int, plan_version: int) -> Path:
-     """Persist the current plan as an immutable plan snapshot for the given loop.
-
-     Layout (DEV run-local plan snapshot):
-       {run_dir}/artifacts/{run_name}/loop{loop_number}/plan/plan.v{plan_version}.json
-     """
-     config = load_json(run_dir / "config.json")
-     run_name = str(config.get("run_name", run_dir.name))
-     base = run_dir / "artifacts" / run_name / f"loop{loop_number}" / "plan"
-     base.mkdir(parents=True, exist_ok=True)
-
-     # Prefer the numbered plan file; fallback to current-plan.json if missing.
-     src_numbered = run_dir / "plans" / f"plan-v{plan_version}.json"
-     src_current = run_dir / "plans" / "current-plan.json"
-     src = src_numbered if src_numbered.exists() else src_current
-     dst = base / f"plan.v{plan_version}.json"
-     dst.write_text(src.read_text(encoding="utf-8"), encoding="utf-8")
-     return dst
-
-
- def ensure_snapshot_for_current_plan(
-     run_dir: Path, loop_number: int | None = None, plan_version: int | None = None
- ) -> Path:
-     plan = canonicalize_plan(load_json(run_dir / "plans" / "current-plan.json"))
-     loop_id = frontier_loop_from_plan(plan) if loop_number is None else int(loop_number)
-     st = load_json(run_dir / "state.json")
-     pv = st.get("plan_version") if plan_version in (None, "auto") else int(plan_version)
-     if pv is None:
-         # If state is missing plan_version (shouldn't happen), fall back to 1
-         pv = 1
-     return persist_plan_snapshot(run_dir, loop_id, int(pv))
-
-
- def frontier_loop_from_plan(plan: dict) -> int | None:
-     pending_loops = sorted({task_step(task) for task in plan["tasks"] if task["status"] == "pending"})
-     return pending_loops[-1] if pending_loops else None
-
-
- def skip_orphan_pending_tasks(plan: dict, frontier_loop: int) -> bool:
-     changed = False
-     for task in plan.get("tasks", []):
-         if task.get("status") == "pending" and task_step(task) < frontier_loop:
-             task["status"] = "skipped"
-             changed = True
-     return changed
-
-
- def split_list(items: list[str]) -> tuple[list[str], list[str]]:
-     if not items:
-         return [], []
-     mid = max(1, len(items) // 2)
-     left = items[:mid]
-     right = items[mid:] or items[:1]
-     return left, right
-
-
- def next_task_id(plan: dict) -> str:
-     numbers = [int(task["id"][1:]) for task in plan.get("tasks", []) if str(task.get("id", "")).startswith("T")]
-     next_number = max(numbers) + 1 if numbers else 1
-     width = max(3, len(str(next_number)))
-     return f"T{next_number:0{width}d}"
-
-
- def ensure_loop_entry(plan: dict, loop_number: int) -> dict:
-     steps = plan_steps(plan)
-     for loop in steps:
-         if int(loop["step"]) == loop_number:
-             return loop
-     loop = {
-         "step": loop_number,
-         "goal": f"Timeout recovery loop {loop_number}",
-         "task_ids": [],
-         "merge_checks": [
-             "Timed-out tasks must be narrowed before retry.",
-             "Split tasks should reduce scope relative to the timed-out source task.",
-         ],
-     }
-     steps.append(loop)
-     set_plan_steps(plan, sorted(steps, key=lambda item: int(item["step"])))
-     return loop
-
-
- def synthesize_timeout_split_tasks(source: dict, plan: dict) -> list[dict]:
-     owned_a, owned_b = split_list(list(source.get("owned_paths", [])))
-     read_a, read_b = split_list(list(source.get("read_paths", [])))
-     deliver_a, deliver_b = split_list(list(source.get("deliverables", [])))
-     accept_a, accept_b = split_list(list(source.get("acceptance_criteria", [])))
-     verify_a, verify_b = split_list(list(source.get("verification_commands", [])))
-
-     if not owned_a:
-         owned_a = list(source.get("owned_paths", []))
-     if not owned_b:
-         owned_b = list(source.get("owned_paths", [])) or read_b[:]
-     if not read_a:
-         read_a = list(source.get("read_paths", []))
-     if not read_b:
-         read_b = list(source.get("read_paths", [])) or read_a[:]
-     if not deliver_a:
-         deliver_a = list(source.get("deliverables", []))
-     if not deliver_b:
-         deliver_b = list(source.get("deliverables", [])) or deliver_a[:]
-
-     split_loop = task_step(source) + 1
-     base_depends = list(source.get("depends_on", []))
-
-     task_a = {
-         "id": next_task_id(plan),
-         "step": split_loop,
-         "kind": source["kind"],
-         "title": f"{source['title']} [split-A]",
-         "objective": (
-             f"Timeout recovery for {source['id']}: complete the first narrowed half of the task. "
-             f"Focus on these paths first: {', '.join(owned_a or read_a)}."
-         ),
-         "owned_paths": owned_a,
-         "read_paths": read_a,
-         "depends_on": base_depends,
-         "deliverables": deliver_a,
-         "acceptance_criteria": accept_a or list(source.get("acceptance_criteria", [])),
-         "verification_commands": verify_a,
-         "status": "pending",
-         "worker_prompt": (
-             f"This task was auto-generated after timeout of {source['id']}. "
-             f"Do not retry the full original scope. Only complete the first half safely."
-         ),
-     }
-     plan.setdefault("tasks", []).append(task_a)
-
-     task_b = {
-         "id": next_task_id(plan),
-         "step": split_loop,
-         "kind": source["kind"],
-         "title": f"{source['title']} [split-B]",
-         "objective": (
-             f"Timeout recovery for {source['id']}: complete the second narrowed half of the task after split-A. "
-             f"Focus on these paths: {', '.join(owned_b or read_b)}."
-         ),
-         "owned_paths": owned_b,
-         "read_paths": read_b,
-         "depends_on": [task_a["id"]],
-         "deliverables": deliver_b,
-         "acceptance_criteria": accept_b or list(source.get("acceptance_criteria", [])),
-         "verification_commands": verify_b,
-         "status": "pending",
-         "worker_prompt": (
-             f"This task was auto-generated after timeout of {source['id']}. "
-             f"Do not retry the full original scope. Complete the second half after split-A stabilizes."
-         ),
-     }
-     plan.setdefault("tasks", []).append(task_b)
-
-     loop = ensure_loop_entry(plan, split_loop)
-     loop["task_ids"].extend([task_a["id"], task_b["id"]])
-     return [task_a, task_b]
-
-
- def enforce_timeout_splits(previous_plan: dict, next_plan: dict, loop_results: dict) -> bool:
-     previous_tasks = {task["id"]: task for task in previous_plan.get("tasks", [])}
-     changed = False
-     next_pending = [task for task in next_plan.get("tasks", []) if task.get("status") == "pending"]
-
-     for item in loop_results.get("tasks", []):
-         if int(item.get("returncode", 0)) != 124:
-             continue
-         task_id = item.get("task_id")
-         source = previous_tasks.get(task_id)
-         if not source:
-             continue
-         for task in next_plan.get("tasks", []):
-             if task.get("id") == task_id and task.get("status") == "pending":
-                 task["status"] = "failed"
-                 changed = True
-         prefix = f"{source['title']} [split-"
-         split_tasks = [task for task in next_pending if str(task.get("title", "")).startswith(prefix)]
-         if len(split_tasks) >= 2:
-             continue
-         synthesize_timeout_split_tasks(source, next_plan)
-         changed = True
-
-     return changed
-
-
- def main() -> int:
-     parser = argparse.ArgumentParser()
-     parser.add_argument("run_dir")
-     parser.add_argument("--max-loops", type=int, default=1)
-     parser.add_argument("--until-done", action="store_true")
-     # Controller entrypoint for plan snapshot only (Taskfile target `plan:snapshot`).
-     parser.add_argument("--snapshot-only", action="store_true")
-     parser.add_argument("--loop", type=int, help="Loop number to snapshot (optional)")
-     parser.add_argument("--wave", type=int, help="Legacy alias for --loop.")
-     parser.add_argument(
-         "--plan-version",
-         default="auto",
-         help="Plan version to snapshot: integer or 'auto' (default)",
-     )
-     args = parser.parse_args()
-     loop_arg = args.loop if args.loop is not None else args.wave
-
-     run_dir = Path(args.run_dir).resolve()
-     # Snapshot-only controller path, useful for Taskfile integration.
-     if args.snapshot_only:
-         path = ensure_snapshot_for_current_plan(
-             run_dir=run_dir, loop_number=loop_arg, plan_version=args.plan_version
-         )
-         print(f"plan snapshot written: {path}")
-         return 0
-
-     current_plan = run_dir / "plans" / "current-plan.json"
-     if not current_plan.exists():
-         import sys
-
-         sys.argv = ["plan_loop.py", str(run_dir)]
-         rc = plan_loop_main()
-         if rc != 0:
-             return rc
-         # Persist initial plan snapshot for the first pending loop.
-         path = ensure_snapshot_for_current_plan(run_dir)
-         print(f"plan snapshot written: {path}")
-
-     executed = 0
-     while args.until_done or executed < args.max_loops:
-         plan = canonicalize_plan(load_json(current_plan))
-         loop_number = frontier_loop_from_plan(plan)
-         if loop_number is None:
-             state = load_json(run_dir / "state.json")
-             final_loop = int(plan.get("plan_version", state.get("plan_version", 0) or 0))
-             state["plan_version"] = int(plan.get("plan_version", state.get("plan_version", 0) or 0))
-             state["current_loop"] = final_loop
-             state["current_step"] = final_loop
-             state["current_wave"] = final_loop
-             write_json(run_dir / "state.json", state)
-             break
-         if skip_orphan_pending_tasks(plan, loop_number):
-             write_json(current_plan, plan)
-             versioned_plan = run_dir / "plans" / f"plan-v{plan['plan_version']}.json"
-             write_json(versioned_plan, plan)
-
-         import sys
-
-         sys.argv = ["run_loop_step.py", str(run_dir), "--loop", str(loop_number)]
-         rc = run_loop_step_main()
-         if rc != 0:
-             return rc
-
-         state = load_json(run_dir / "state.json")
-         state["current_loop"] = loop_number
-         state["current_step"] = loop_number
-         state["current_wave"] = loop_number
-         write_json(run_dir / "state.json", state)
-         executed += 1
-
-         sys.argv = ["reconcile_loop.py", str(run_dir), "--loop", str(loop_number)]
-         rc = reconcile_loop_main()
-         if rc != 0:
-             return rc
-         # After reconcile, persist the updated plan snapshot under this loop.
-         snap_path = ensure_snapshot_for_current_plan(
-             run_dir=run_dir, loop_number=loop_number, plan_version="auto"
-         )
-         print(f"plan snapshot written: {snap_path}")
-         next_plan = canonicalize_plan(load_json(current_plan))
-         loop_results = load_loop_results(run_dir, loop_number)
-         if enforce_timeout_splits(plan, next_plan, loop_results):
-             write_json(current_plan, next_plan)
-             versioned_plan = run_dir / "plans" / f"plan-v{next_plan['plan_version']}.json"
-             write_json(versioned_plan, next_plan)
-
-         if not args.until_done and executed >= args.max_loops:
-             break
-
-     return 0
-
-
- if __name__ == "__main__":
-     raise SystemExit(main())
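The removed `run_loop.py` above persists one immutable plan snapshot per loop at a fixed, versioned location. A minimal standalone sketch of that path convention, assuming the layout described in the module docstring (no file I/O is performed; the helper name is illustrative):

```python
from pathlib import Path

def snapshot_path(run_dir: Path, run_name: str, loop_number: int, plan_version: int) -> Path:
    """Compute {run_dir}/artifacts/{run_name}/loop{n}/plan/plan.v{version}.json."""
    return (
        run_dir / "artifacts" / run_name
        / f"loop{loop_number}" / "plan" / f"plan.v{plan_version}.json"
    )

print(snapshot_path(Path("runs/demo"), "demo", 2, 3).as_posix())
# runs/demo/artifacts/demo/loop2/plan/plan.v3.json
```

Keeping the version in the filename is what makes each snapshot immutable: a re-plan bumps `plan_version`, so it lands beside, rather than over, the previous snapshot.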
@@ -1,8 +0,0 @@
- #!/usr/bin/env python3
- from __future__ import annotations
-
- from run_step import main
-
-
- if __name__ == "__main__":
-     raise SystemExit(main())
@@ -1,246 +0,0 @@
1
- #!/usr/bin/env python3
2
- from __future__ import annotations
3
-
4
- import argparse
5
- import concurrent.futures
6
- import json
7
- from pathlib import Path
8
-
9
- from common import (
10
- canonicalize_plan,
11
- load_json,
12
- load_text,
13
- legacy_wave_task_dir,
14
- loop_task_dir,
15
- plan_steps,
16
- repo_root_from_run_dir,
17
- run_codex_exec,
18
- salvage_json_output,
19
- skill_dir_from_file,
20
- task_step,
21
- worker_timeout_seconds,
22
- write_loop_results,
23
- write_json,
24
- )
25
-
26
-
27
- def active_loop_tasks(plan: dict, loop_number: int) -> list[dict]:
28
- return [task for task in plan["tasks"] if task_step(task) == loop_number and task["status"] == "pending"]
29
-
30
-
31
- def validate_disjoint_paths(tasks: list[dict]) -> None:
32
- owners: dict[str, str] = {}
33
- conflicts: list[str] = []
34
- for task in tasks:
35
- for path in task.get("owned_paths", []):
36
- owner = owners.get(path)
37
- if owner and owner != task["id"]:
38
- conflicts.append(f"{path}: {owner} vs {task['id']}")
39
- owners[path] = task["id"]
40
- if conflicts:
41
- joined = "\n".join(conflicts)
42
- raise SystemExit(f"owned_paths conflict inside loop:\n{joined}")
43
-
44
-
45
- def task_lookup(tasks: list[dict]) -> dict[str, dict]:
46
- return {task["id"]: task for task in tasks}
47
-
48
-
49
- def ready_tasks(tasks: list[dict], completed: set[str]) -> list[dict]:
50
- index = task_lookup(tasks)
51
- ready: list[dict] = []
52
- for task in tasks:
53
- deps = [dep for dep in task.get("depends_on", []) if dep in index]
54
- if all(dep in completed for dep in deps):
55
- ready.append(task)
56
- return ready
57
-
58
-
59
- def task_prompt(goal_text: str, task: dict, plan_text: str, run_name: str, loop_number: int, overlap_policy: str) -> str:
60
- plan = json.loads(plan_text)
61
- compact_plan = {
62
- "run_name": plan.get("run_name"),
63
- "plan_version": plan.get("plan_version"),
64
- "summary": plan.get("summary"),
65
- "completed_tasks": [
66
- {
67
- "id": item["id"],
68
- "title": item["title"],
69
- "status": item["status"],
70
- "deliverables": item.get("deliverables", []),
71
- }
72
- for item in plan.get("tasks", [])
73
- if item.get("status") == "completed"
74
- ],
75
- "current_step": [
76
- {
77
- "step": entry["step"],
78
- "goal": entry["goal"],
79
- "task_ids": entry["task_ids"],
80
- }
81
- for entry in plan_steps(plan)
82
- if int(entry["step"]) == loop_number
83
- ],
84
- }
85
- verification = "\n".join(f"- {command}" for command in task.get("verification_commands", [])) or "- none"
86
- owned_paths = "\n".join(f"- {path}" for path in task.get("owned_paths", [])) or "- none declared"
87
- read_paths = "\n".join(f"- {path}" for path in task.get("read_paths", [])) or "- none declared"
88
- acceptance = "\n".join(f"- {item}" for item in task.get("acceptance_criteria", []))
89
- deliverables = "\n".join(f"- {item}" for item in task.get("deliverables", []))
90
- depends = "\n".join(f"- {item}" for item in task.get("depends_on", [])) or "- none"
91
- return f"""Use $otro.
92
-
93
- You are a loop worker for orchestration run `{run_name}`, loop {loop_number}, step {loop_number}, task {task['id']}.
94
-
95
- Global goal:
96
- {goal_text}
97
-
98
- Compact plan context:
99
- {json.dumps(compact_plan, indent=2, ensure_ascii=False)}
100
-
101
- Task contract:
102
- - id: {task['id']}
103
- - title: {task['title']}
104
- - kind: {task['kind']}
105
- - objective: {task['objective']}
106
- - depends_on:
107
- {depends}
108
- - owned_paths:
109
- {owned_paths}
110
- - read_paths:
111
- {read_paths}
112
- - deliverables:
113
- {deliverables}
114
- - acceptance_criteria:
115
- {acceptance}
116
- - verification_commands:
117
- {verification}
118
-
119
- Execution rules:
120
- - Use at most 12 shell commands before finalizing.
121
- - Own only the listed `owned_paths`.
122
- - Read other paths only as needed to satisfy this task.
123
- - Do not revert unrelated changes.
124
- - Run relevant verification commands if possible.
125
- - If blocked, stop and explain the blocker exactly.
126
- - Return one final JSON object only. Do not emit provisional JSON.
127
- - Do not output partial results unless the task is genuinely blocked.
128
- - Overlap policy for this loop: `{overlap_policy}`.
129
- - If overlap, inconsistency, or stale assumptions are discovered, record them in `residual_signals`.
130
-
131
- Worker-specific guidance:
132
- {task['worker_prompt']}
133
-
134
- Path safety:
135
- - Treat the task contract paths as authoritative.
136
- - Do not read or write a shared repo-level `plans/current-plan.json` unless it is explicitly listed in this task contract.
137
- """
138
-
139
-
140
- def run_task(run_dir: Path, task: dict, config: dict, plan_text: str, goal_text: str, loop_number: int) -> dict:
141
- skill_dir = skill_dir_from_file(__file__)
142
- repo_root = repo_root_from_run_dir(run_dir, config, skill_dir=skill_dir)
143
- schema_path = skill_dir / "schemas" / "task_result.schema.json"
144
- task_dir = loop_task_dir(run_dir, loop_number, task["id"])
145
- task_dir.mkdir(parents=True, exist_ok=True)
146
- legacy_dir = legacy_wave_task_dir(run_dir, loop_number, task["id"])
147
- legacy_dir.mkdir(parents=True, exist_ok=True)
148
- prompt_path = task_dir / "prompt.md"
149
- output_path = task_dir / "result.json"
150
- log_path = task_dir / "codex.log"
151
- prompt = task_prompt(
152
- goal_text,
153
- task,
154
- plan_text,
155
- str(config["run_name"]),
156
- loop_number,
157
- str(config.get("overlap_policy", "strict")),
158
- )
159
- prompt_path.write_text(prompt, encoding="utf-8")
160
- write_json(legacy_dir / "result.json", {"status": "running", "task_id": task["id"], "summary": "loop task started", "changed_files": [], "verification": [], "blockers": [], "integration_notes": [], "residual_signals": [], "proposed_follow_up_tasks": []})
161
- (legacy_dir / "prompt.md").write_text(prompt, encoding="utf-8")
162
- result = run_codex_exec(
163
- repo_root=repo_root,
164
- prompt=prompt,
165
- model=str(config["model"]),
166
- schema_path=schema_path,
167
- output_path=output_path,
168
- log_path=log_path,
169
- timeout_seconds=worker_timeout_seconds(config, str(task["kind"])),
170
- )
171
- if output_path.exists() and output_path.stat().st_size == 0:
172
- salvage_json_output(log_path, output_path)
173
- payload = {
174
- "task_id": task["id"],
175
- "returncode": result.returncode,
176
- "result_path": str(output_path),
177
-             "log_path": str(log_path),
-         }
-         if output_path.exists():
-             payload["result"] = load_json(output_path)
-             write_json(legacy_dir / "result.json", payload["result"])
-         return payload
-
-
-     def resolve_max_parallel(raw: object, task_count: int) -> int:
-         if task_count <= 0:
-             return 0
-         if isinstance(raw, str):
-             if raw.lower() == "all":
-                 return task_count
-             return min(int(raw), task_count)
-         if raw is None:
-             return task_count
-         return min(int(raw), task_count)
-
-
-     def main() -> int:
-         parser = argparse.ArgumentParser()
-         parser.add_argument("run_dir")
-         parser.add_argument("--loop", type=int, help="Loop number to execute.")
-         parser.add_argument("--wave", type=int, help="Legacy alias for --loop.")
-         args = parser.parse_args()
-         loop_number = args.loop if args.loop is not None else args.wave
-         if loop_number is None:
-             raise SystemExit("--loop is required")
-
-         run_dir = Path(args.run_dir).resolve()
-         config = load_json(run_dir / "config.json")
-         plan = canonicalize_plan(load_json(run_dir / "plans" / "current-plan.json"))
-         goal_text = load_text(run_dir / "goal.md")
-         plan_text = json.dumps(plan, indent=2, ensure_ascii=False)
-         tasks = active_loop_tasks(plan, loop_number)
-         if not tasks:
-             raise SystemExit(f"no pending tasks for loop {loop_number}")
-         overlap_policy = str(config.get("overlap_policy", "strict"))
-         if overlap_policy == "strict":
-             validate_disjoint_paths(tasks)
-
-         results: list[dict] = []
-         remaining = {task["id"]: task for task in tasks}
-         completed: set[str] = set()
-         max_parallel = resolve_max_parallel(config.get("max_parallel"), len(tasks))
-
-         while remaining:
-             batch = ready_tasks(list(remaining.values()), completed)
-             if not batch:
-                 unresolved = ", ".join(sorted(remaining))
-                 raise SystemExit(f"dependency cycle or unsatisfied in-loop dependency among: {unresolved}")
-             with concurrent.futures.ThreadPoolExecutor(max_workers=min(max_parallel, len(batch))) as executor:
-                 future_map = {
-                     executor.submit(run_task, run_dir, task, config, plan_text, goal_text, loop_number): task["id"]
-                     for task in batch
-                 }
-                 for future in concurrent.futures.as_completed(future_map):
-                     result = future.result()
-                     results.append(result)
-                     completed.add(result["task_id"])
-                     remaining.pop(result["task_id"], None)
-
-         results.sort(key=lambda item: item["task_id"])
-         write_loop_results(run_dir, loop_number, {"loop": loop_number, "tasks": results})
-         return 0
-
-
-     if __name__ == "__main__":
-         raise SystemExit(main())
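The deleted runner above schedules each loop in dependency-ordered batches: it repeatedly collects the tasks whose in-loop dependencies are satisfied, runs that batch in a thread pool, and fails fast on a cycle. A minimal standalone sketch of that pattern (`ready_tasks` and the task shape here are reconstructions for illustration, not the package's exact helpers):

```python
import concurrent.futures


def ready_tasks(tasks: list[dict], completed: set[str]) -> list[dict]:
    # A task is runnable once all of its in-loop dependencies are done.
    return [t for t in tasks if set(t.get("deps", [])) <= completed]


def run_loop(tasks: list[dict], run_task, max_parallel: int) -> list[dict]:
    remaining = {t["id"]: t for t in tasks}
    completed: set[str] = set()
    results: list[dict] = []
    while remaining:
        batch = ready_tasks(list(remaining.values()), completed)
        if not batch:
            # Nothing is runnable but work remains: a cycle or missing dependency.
            raise SystemExit(f"dependency cycle among: {', '.join(sorted(remaining))}")
        workers = min(max_parallel, len(batch))
        with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
            futures = {pool.submit(run_task, t): t["id"] for t in batch}
            for fut in concurrent.futures.as_completed(futures):
                result = fut.result()
                results.append(result)
                completed.add(result["task_id"])
                remaining.pop(result["task_id"], None)
    return sorted(results, key=lambda r: r["task_id"])


tasks = [
    {"id": "a", "deps": []},
    {"id": "b", "deps": ["a"]},
    {"id": "c", "deps": []},
]
out = run_loop(tasks, lambda t: {"task_id": t["id"]}, max_parallel=2)
print([r["task_id"] for r in out])  # -> ['a', 'b', 'c']
```

The first batch runs `a` and `c` in parallel; `b` only becomes ready once `a` completes, matching the `while remaining` loop in the deleted `main`.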
@@ -1,8 +0,0 @@
- #!/usr/bin/env python3
- from __future__ import annotations
-
- from run_step import main
-
-
- if __name__ == "__main__":
-     raise SystemExit(main())
@@ -1,15 +0,0 @@
- # Template OTRO Validation Guide
-
- Status: current baseline
- Last updated: 2026-03-16 (Asia/Seoul)
-
- ## Scope
-
- - This file is the durable validation header target for template OTRO automation.
- - Historical OTRO run artifacts may be cleaned, but this guide remains stable for CI and header-bump workflows.
-
- ## Current Baseline
-
- - Validation focus: retained SDD consistency checks and repository-wide delivery loop hygiene
- - Owner: Codex
- - Source of truth: `sdd/02_plan/03_architecture/`, `sdd/04_verify/10_test/`
@@ -1,42 +0,0 @@
- ---
- name: planning-with-files
- description: Persist and execute work through a markdown plan file inside the active repo. Use when the user asks for "Planning with Files", "플래닝 위드 파일스", a durable checklist, or any multi-step task that must stay auditable across edits, tests, and deployment.
- ---
-
- # Planning with Files
-
- ## Objective
-
- Create a plan file early, keep it live during execution, and close it with verification evidence.
-
- ## Quick Start
-
- 1. Run `scripts/new_plan.sh "<title>" [repo_root] [section]`.
- 2. Open the created file under the repo's canonical planning root.
-    - Default fallback: `docs/plans/`.
-    - If repository instructions define another planning root such as `sdd/02_plan/<section>/`, follow the repository rule instead.
-    - In repositories like `templates`, pass the section explicitly so the plan lands under `sdd/02_plan/<section>/`.
- 3. Fill scope, assumptions, and acceptance criteria before major edits.
- 4. Update the checklist and work log after each meaningful action.
- 5. Close with validation status and remaining risks.
-
- ## Workflow
-
- 1. Define scope and constraints in the plan file.
- 2. Break work into 5-10 concrete checklist items.
- 3. Keep exactly one item in active progress.
- 4. Append short evidence logs after edit/test/deploy steps.
- 5. Replan the remaining checklist when blockers appear.
- 6. End with explicit pass/fail validation notes.
-
- ## Resources
-
- - Template: `assets/plan-template.md`
- - Rules and examples: `references/plan-rules.md`
- - Scaffolder: `scripts/new_plan.sh`
-
- ## Guardrails
-
- - Keep plan entries factual and concise.
- - Avoid stale checklists; update instead of accumulating dead items.
- - Never mark validation complete without command-level evidence in the log.
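The deleted skill's Quick Start leans on `scripts/new_plan.sh`, which is not shown in this diff. A hypothetical Python equivalent of that scaffolding step, assuming the documented defaults (slugified title, `docs/plans/` fallback root, `sdd/02_plan/<section>/` when a section is passed); the file layout and field names here are assumptions, not the script's actual behavior:

```python
import re
from datetime import date
from pathlib import Path


def new_plan(title: str, repo_root: str = ".", section: str = "") -> Path:
    # Choose the planning root: section-specific SDD path if given,
    # otherwise the documented docs/plans/ fallback.
    root = Path(repo_root) / (f"sdd/02_plan/{section}" if section else "docs/plans")
    root.mkdir(parents=True, exist_ok=True)

    # Slugify the title for a stable, date-prefixed filename.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    today = date.today().isoformat()
    path = root / f"{today}-{slug}.md"

    # Seed the plan with the skeleton sections the skill expects.
    path.write_text(
        f"# {title}\n\n"
        f"- Date: {today}\n"
        f"- Status: in_progress\n\n"
        f"## Scope\n\n- In scope:\n- Out of scope:\n",
        encoding="utf-8",
    )
    return path


plan = new_plan("Fix login redirect", repo_root="/tmp/demo-repo")
print(plan.name)  # e.g. 2026-05-01-fix-login-redirect.md
```

The date prefix keeps plans sortable in the planning root, and the seeded headings match the plan template deleted further below in this diff.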
@@ -1,4 +0,0 @@
- interface:
-   display_name: "Planning with Files"
-   short_description: "Plan tasks in markdown files with checkpoints"
-   default_prompt: "Use $planning-with-files to create and maintain a task plan file for this work."
@@ -1,37 +0,0 @@
- # {{TITLE}}
-
- - Date: {{DATE}}
- - Owner: Codex
- - Status: in_progress
-
- ## Scope
-
- - In scope:
- - Out of scope:
-
- ## Assumptions
-
- -
-
- ## Acceptance Criteria
-
- - [ ]
-
- ## Execution Checklist
-
- - [ ]
- - [ ]
- - [ ]
-
- ## Work Log
-
- - {{DATE}} 00:00 - Plan created.
-
- ## Validation
-
- - [ ] Build or test commands executed and logged.
- - [ ] DEV deployment or runtime checks completed if required.
-
- ## Risks / Follow-ups
-
- -