open-research-protocol 0.4.26 → 0.4.27

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -12,6 +12,11 @@ read:
 
  - Read `llms.txt`.
  - Run `orp about --json`.
+ - Run `orp hygiene --json` before long delegation, after material writeback,
+   before API/remote/paid compute, and whenever dirty state grows unexpectedly.
+   If it reports `dirty_unclassified`, stop long-running expansion and classify
+   the paths, refresh generated surfaces, canonicalize useful scratch, or write a
+   blocker before continuing.
 - If the task benefits from fresh concepting, tasteful interface work, or
   exploratory reframing, run:
   - `orp mode nudge sleek-minimal-progressive --json`
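The hygiene gate added above can be sketched as a small pre-flight check. This is illustrative only: the protocol text names the `dirty_unclassified` report key, but the rest of the JSON shape shown here is an assumption, not the CLI's documented output.

```python
import json
import subprocess


def should_pause_expansion(report: dict) -> bool:
    """Return True when the hygiene report shows unowned dirty state.

    Only the `dirty_unclassified` key comes from the protocol text;
    treating it as a list of paths is an assumption.
    """
    return bool(report.get("dirty_unclassified"))


def hygiene_gate() -> bool:
    """Run `orp hygiene --json` and decide whether to continue expansion."""
    result = subprocess.run(
        ["orp", "hygiene", "--json"], capture_output=True, text=True
    )
    if result.returncode != 0:
        return False  # treat a failed hygiene check as a stop signal
    return not should_pause_expansion(json.loads(result.stdout))
```

An agent loop would call `hygiene_gate()` before long delegation and stop to classify paths when it returns `False`.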
@@ -32,6 +37,10 @@ read:
 ## 2. Select Work
 
 - Identify the target profile and canonical artifact paths.
+ - Treat worktree hygiene as a default-on self-healing loop. Dirty state is okay
+   when it is owned; invisible or unclassified dirt is not. The command is
+   non-destructive, so never reset, checkout, or delete files merely to make the
+   report look clean.
 - If the task depends on the current highest-leverage action slice, refresh ORP's agenda first:
   - `orp agenda refresh --json`
   - `orp agenda refresh-status --json`
@@ -28,6 +28,52 @@ The built-in `openai-council` profile defines three OpenAI API lanes:
 
 This follows OpenAI's current model guidance: `gpt-5.4` is the default for general-purpose, coding, reasoning, and agentic workflows; web search is enabled through the Responses API `tools` array when current information is needed; and Deep Research is available through the Responses endpoint with `o3-deep-research-2025-06-26`.
 
+ ## Staged Deep Research Template
+
+ ORP also includes a built-in `deep-think-web-think-deep` profile for a strict sequence:
+
+ ```text
+ Deep Research -> think -> think/web search -> think -> Deep Research
+ ```
+
+ Inspect the template before using it:
+
+ ```bash
+ orp research profile show deep-think-web-think-deep --json
+ ```
+
+ Run a dry plan with form-like fields filled by a human or an agent:
+
+ ```bash
+ orp research ask "Should this product use a staged research loop?" \
+   --profile deep-think-web-think-deep \
+   --field goal="Decide whether to adopt the staged loop" \
+   --field audience="Platform team" \
+   --field decision_to_support="Choose the default research workflow" \
+   --field project_context="ORP owns durable artifacts and secret resolution" \
+   --field constraints="Use one OpenAI API key first" \
+   --field deliverable_format="Decision memo with risks and next steps" \
+   --json
+ ```
+
+ The prompt form is intentionally general. It gives an agent a reusable customization surface, similar to filling out a product intake form for a company:
+
+ - `goal`
+ - `audience`
+ - `decision_to_support`
+ - `project_context`
+ - `constraints`
+ - `known_inputs`
+ - `source_preferences`
+ - `recency_requirements`
+ - `excluded_assumptions`
+ - `success_criteria`
+ - `deliverable_format`
+
+ Dry runs persist the generated prompt for every lane under `orp/research/<run_id>/lanes/`. Later lanes include earlier lane outputs when those outputs exist, whether they came from live calls or `--lane-fixture` files. This makes the sequence inspectable before spending live provider calls. In live mode, later staged lanes skip their provider call if an earlier required lane did not complete with text.
+
+ The staged profile keeps Deep Research foreground by default so the next lane can receive actual output. Deep Research can take a long time; pass a larger `--timeout-sec` for live runs or provide a custom profile file that sets `background=true` if you want asynchronous Deep Research behavior.
+
 ## API Call Moments
 
 ORP records when API keys are intended to be used:
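The skip rule described in the staged-template text above (later lanes run only while every earlier required lane has produced text) can be sketched as a short walk over an ordered lane list. The `name`, `required`, and `output_text` field names are illustrative assumptions, not the profile's actual schema.

```python
def runnable_lanes(lanes: list[dict]) -> list[str]:
    """Walk lanes in order; once a required lane yields no text output,
    every later lane is skipped. Field names here are assumptions."""
    runnable: list[str] = []
    blocked = False
    for lane in lanes:
        if blocked:
            break  # a required upstream lane produced no text
        runnable.append(lane["name"])
        if lane.get("required") and not lane.get("output_text"):
            blocked = True
    return runnable
```

Under this sketch, a dry run whose second required lane produced no text would attempt that lane but skip everything after it.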
@@ -117,6 +163,8 @@ command = "/path/to/orp/scripts/orp-mcp"
 It exposes:
 
 - `orp_research_ask`
+ - `orp_research_profile_list`
+ - `orp_research_profile_show`
 - `orp_research_status`
 - `orp_research_show`
 
@@ -739,6 +739,7 @@ The guiding rule is simple:
 
 - recover the saved workspace
 - inspect repo safety
+ - classify dirty state before expansion
 - resolve the right secret
 - inspect the current frontier
 - do the next honest move
@@ -756,16 +757,20 @@ If you only want the irreducible ORP loop, it is this:
 ```bash
 orp status --json
 ```
- 3. resolve the right secret
+ 3. classify dirty worktree state before expansion
+ ```bash
+ orp hygiene --json
+ ```
+ 4. resolve the right secret
 ```bash
 orp secrets ensure --alias <alias> --provider <provider> --current-project --json
 ```
- 4. inspect the current frontier
+ 5. inspect the current frontier
 ```bash
 orp frontier state --json
 ```
- 5. do the next honest move
- 6. checkpoint it
+ 6. do the next honest move
+ 7. checkpoint it
 ```bash
 orp checkpoint create -m "describe completed unit" --json
 ```
@@ -774,6 +779,7 @@ That is the shortest version of the protocol:
 
 - recover continuity
 - inspect safety
+ - classify dirty state
 - resolve access
 - inspect context
 - do the work
@@ -786,6 +792,7 @@ If you want the agent to stay aligned with ORP, the default check sequence shoul
 ```bash
 orp workspace tabs main
 orp status --json
+ orp hygiene --json
 orp secrets ensure --alias <alias> --provider <provider> --current-project --json
 orp frontier state --json
 ```
@@ -800,17 +807,18 @@ That is the practical ORP loop:
 
 1. recover the workspace ledger
 2. inspect repo safety
- 3. resolve the right key
- 4. inspect the current frontier
- 5. do the work
- 6. checkpoint it honestly
+ 3. classify dirty state and stop if anything is unowned
+ 4. resolve the right key
+ 5. inspect the current frontier
+ 6. do the work
+ 7. checkpoint it honestly
 
 The key point is that ORP should become the lens:
 
 - not "something we remember to use sometimes"
 - but the operating frame the agent checks before and after meaningful work
 
- ## If You Only Remember 8 Commands
+ ## If You Only Remember 9 Commands
 
 ```bash
 orp home
@@ -818,6 +826,7 @@ orp init
 orp workspace create main-cody-1
 orp workspace tabs main
 orp workspace add-tab main --path /absolute/path/to/project --resume-command "codex resume <id>"
+ orp hygiene --json
 orp secrets ensure --alias openai-primary --provider openai --current-project --json
 orp status --json
 orp checkpoint create -m "capture loop state" --json
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "open-research-protocol",
-  "version": "0.4.26",
+  "version": "0.4.27",
   "description": "ORP CLI (Open Research Protocol): workspace ledgers, secrets, scheduling, governed execution, and agent-friendly research workflows.",
   "license": "MIT",
   "author": "Fractal Research Group <cody@frg.earth>",
@@ -1,9 +1,55 @@
+import { spawnSync } from "node:child_process";
+import path from "node:path";
+import { fileURLToPath } from "node:url";
+
 import { runWorkspaceAddTab, runWorkspaceCreate, runWorkspaceRemoveTab } from "./ledger.js";
 import { runWorkspaceList } from "./list.js";
 import { runWorkspaceSlot } from "./slot.js";
 import { runWorkspaceSync } from "./sync.js";
 import { runWorkspaceTabs } from "./tabs.js";
 
+const __dirname = path.dirname(fileURLToPath(import.meta.url));
+const pythonCliPath = path.resolve(__dirname, "..", "..", "..", "cli", "orp.py");
+
+function pythonCandidates() {
+  const candidates = [];
+  if (process.env.ORP_PYTHON && process.env.ORP_PYTHON.trim() !== "") {
+    candidates.push(process.env.ORP_PYTHON.trim());
+  }
+  if (process.platform === "win32") {
+    candidates.push("py");
+  }
+  candidates.push("python3", "python");
+  return candidates;
+}
+
+function runWorkspaceHygiene(args = []) {
+  let lastErr = null;
+  for (const py of pythonCandidates()) {
+    const pyArgs = py === "py" ? ["-3", pythonCliPath, "hygiene", ...args] : [pythonCliPath, "hygiene", ...args];
+    const result = spawnSync(py, pyArgs, { encoding: "utf8" });
+    if (!result.error) {
+      if (result.stdout) {
+        process.stdout.write(result.stdout);
+      }
+      if (result.stderr) {
+        process.stderr.write(result.stderr);
+      }
+      return result.status == null ? 1 : result.status;
+    }
+    if (result.error && result.error.code === "ENOENT") {
+      continue;
+    }
+    lastErr = result.error;
+  }
+  process.stderr.write("ORP workspace hygiene requires Python 3 on PATH.\n");
+  process.stderr.write("Tried: " + pythonCandidates().join(", ") + "\n");
+  if (lastErr) {
+    process.stderr.write(String(lastErr) + "\n");
+  }
+  return 1;
+}
+
 function printWorkspaceHelp() {
   console.log(`ORP workspace
 
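The interpreter-fallback pattern in the hunk above (an explicit `ORP_PYTHON` override, then the Windows `py` launcher, then `python3`/`python` from PATH) is language-neutral. The sketch below re-expresses just the candidate-ordering logic in Python; it mirrors the `pythonCandidates` function in the diff but is not part of the package.

```python
def python_candidates(env: dict, platform: str) -> list[str]:
    """Mirror the candidate order used by the workspace hygiene wrapper:
    explicit ORP_PYTHON override first, the `py` launcher on Windows,
    then python3/python resolved from PATH."""
    candidates: list[str] = []
    override = env.get("ORP_PYTHON", "").strip()
    if override:
        candidates.append(override)
    if platform == "win32":
        candidates.append("py")
    candidates.extend(["python3", "python"])
    return candidates
```

A caller tries each candidate in order and moves on only when spawning fails with "command not found", which is what the wrapper's `ENOENT` check does.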
@@ -18,6 +64,7 @@ Usage:
  orp workspace remove-tab <name-or-id> (--index <n> | --path <absolute-path> | --title <title> | --resume-session-id <id> | --resume-command <text>) [--all] [--json]
  orp workspace slot <list|set|clear> ...
  orp workspace sync <name-or-id> [--workspace-file <path> | --notes-file <path>] [--dry-run] [--json]
+ orp workspace hygiene [--json]
  orp workspace ledger <name-or-id> [--json]
  orp workspace ledger add <name-or-id> (--path <absolute-path> | --here) [--title <title>] [--remote-url <git-url>] [--remote-branch <branch>] [--bootstrap-command <text>] [--resume-command <text> | --resume-tool <codex|claude> --resume-session-id <id> | --current-codex] [--append] [--json]
  orp workspace ledger remove <name-or-id> (--index <n> | --path <absolute-path> | --title <title> | --resume-session-id <id> | --resume-command <text>) [--all] [--json]
@@ -31,6 +78,7 @@ Commands:
  remove-tab  Remove one or more saved tabs from the workspace ledger directly
  slot        Assign and inspect named workspace slots like main and offhand
  sync        Post a CLI-authored workspace manifest back to the hosted ORP idea
+ hygiene     Classify dirty worktree paths before long agent expansion
  ledger      Compatibility alias for the same tabs/add/remove ledger flow
 
 Notes:
@@ -41,6 +89,7 @@ Notes:
 - Use \`orp workspace add-tab ... --remote-url ... --bootstrap-command ...\` when you want ORP to remember how to recreate the repo on another machine.
 - Use \`orp workspace add-tab ...\` and \`orp workspace remove-tab ...\` when you want to edit the saved workspace ledger explicitly from Terminal.app or any other shell.
 - If you prefer the older ledger-prefixed wording, \`orp workspace ledger\`, \`orp workspace ledger add\`, and \`orp workspace ledger remove\` stay available as aliases.
+ - \`orp workspace hygiene --json\` is a wrapper alias for \`orp hygiene --json\`; it reports dirty path categories without resetting, checking out, or deleting files.
 - \`main\` and \`offhand\` are reserved slot selectors; use \`orp workspace slot set ...\` to assign them.
 - Syncing or editing a hosted workspace writes a managed local cache on this Mac.
 - \`<name-or-id>\` can be a saved workspace title, workspace id, idea id, or local tracked workspace title/id.
@@ -51,6 +100,7 @@ Examples:
  orp workspace create mac-main --machine-label "Mac Studio" --path /absolute/path/to/orp --remote-url git@github.com:SproutSeeds/orp.git --bootstrap-command "npm install"
  orp workspace list
  orp workspace tabs main-cody-1
+ orp workspace hygiene --json
  orp workspace add-tab main --here --current-codex
  orp workspace add-tab main --path /absolute/path/to/new-project --resume-command "codex resume 019d..."
  orp workspace add-tab main --path /absolute/path/to/new-project --remote-url git@github.com:org/new-project.git --bootstrap-command "npm install"
@@ -126,5 +176,9 @@ export async function runOrpWorkspaceCommand(argv = []) {
     return runWorkspaceSync(rest);
   }
 
+  if (subcommand === "hygiene") {
+    return runWorkspaceHygiene(rest);
+  }
+
   throw new Error(`unknown workspace subcommand: ${subcommand}`);
 }
@@ -38,6 +38,7 @@ test("runOrpWorkspaceCommand shows the ledger-first help surface", async () => {
   assert.match(stdout, /orp workspace ledger add <name-or-id>/);
   assert.match(stdout, /orp workspace ledger remove <name-or-id>/);
   assert.match(stdout, /orp workspace tabs <name-or-id>/);
+  assert.match(stdout, /orp workspace hygiene \[--json\]/);
   assert.match(stdout, /Compatibility alias for the same tabs\/add\/remove ledger flow/);
 });
 
package/scripts/orp-mcp CHANGED
@@ -25,6 +25,11 @@ TOOLS: list[dict[str, Any]] = [
                 "run_id": {"type": "string"},
                 "profile": {"type": "string", "default": "openai-council"},
                 "profile_file": {"type": "string"},
+                "fields": {
+                    "type": "object",
+                    "description": "Prompt-template fields passed as key/value pairs.",
+                    "additionalProperties": {"type": "string"},
+                },
                 "execute": {"type": "boolean", "default": False},
                 "lane_fixtures": {
                     "type": "object",
@@ -32,7 +37,28 @@ TOOLS: list[dict[str, Any]] = [
                     "additionalProperties": {"type": "string"},
                 },
                 "chimera_bin": {"type": "string", "default": "chimera"},
-                "timeout_sec": {"type": "integer", "minimum": 1, "default": 120},
+                "timeout_sec": {"type": "integer", "minimum": 1, "description": "Override the profile per-lane timeout policy."},
+            },
+        },
+    },
+    {
+        "name": "orp_research_profile_list",
+        "description": "List built-in ORP research profiles and prompt-template surfaces.",
+        "inputSchema": {
+            "type": "object",
+            "properties": {
+                "repo_root": {"type": "string", "description": "Repository root. Defaults to current directory."},
+            },
+        },
+    },
+    {
+        "name": "orp_research_profile_show",
+        "description": "Show a built-in ORP research profile, including its lane sequence and prompt form.",
+        "inputSchema": {
+            "type": "object",
+            "properties": {
+                "repo_root": {"type": "string", "description": "Repository root. Defaults to current directory."},
+                "profile_id": {"type": "string", "default": "openai-council"},
             },
         },
     },
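In the new `fields` schema above, `"additionalProperties": {"type": "string"}` constrains every value in the object to be a string. A minimal standalone check of that constraint, as a sketch rather than the MCP server's actual validator, looks like:

```python
def valid_fields(fields: object) -> bool:
    """Accept only a JSON object whose values are all strings, matching
    additionalProperties: {"type": "string"}. A sketch, not the server's
    real validation path."""
    return isinstance(fields, dict) and all(
        isinstance(key, str) and isinstance(value, str)
        for key, value in fields.items()
    )
```

A client that sends a number or list as a field value would fail this check even though the key itself is unconstrained.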
@@ -111,6 +137,10 @@ def _call_research_ask(args: dict[str, Any]) -> dict[str, Any]:
     run_id = str(args.get("run_id", "") or "").strip()
     if run_id:
         cmd.extend(["--run-id", run_id])
+    fields = args.get("fields")
+    if isinstance(fields, dict):
+        for key, value in fields.items():
+            cmd.extend(["--field", f"{key}={value}"])
     if bool(args.get("execute", False)):
         cmd.append("--execute")
     lane_fixtures = args.get("lane_fixtures")
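The `fields` loop added above expands a JSON object into repeated `--field key=value` CLI arguments. Isolated into its own helper (the helper name is ours; the mapping itself mirrors the loop in `_call_research_ask`):

```python
def fields_to_argv(fields: dict) -> list[str]:
    """Expand {"goal": "Decide"} into ["--field", "goal=Decide", ...],
    mirroring the fields loop in _call_research_ask."""
    argv: list[str] = []
    for key, value in fields.items():
        argv.extend(["--field", f"{key}={value}"])
    return argv
```

Because dicts preserve insertion order, the generated `--field` arguments appear in the order the MCP client supplied them.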
@@ -130,6 +160,23 @@ def _call_research_ask(args: dict[str, Any]) -> dict[str, Any]:
     return _text_result(stdout.strip())
 
 
+def _call_research_profile_list(args: dict[str, Any]) -> dict[str, Any]:
+    cmd = ["--repo-root", _repo_root(args), "research", "profile", "list", "--json"]
+    code, stdout, stderr = _run_orp(cmd)
+    if code != 0:
+        return _text_result((stderr or stdout or f"orp exited {code}").strip(), is_error=True)
+    return _text_result(stdout.strip())
+
+
+def _call_research_profile_show(args: dict[str, Any]) -> dict[str, Any]:
+    profile_id = str(args.get("profile_id", "") or "openai-council").strip() or "openai-council"
+    cmd = ["--repo-root", _repo_root(args), "research", "profile", "show", profile_id, "--json"]
+    code, stdout, stderr = _run_orp(cmd)
+    if code != 0:
+        return _text_result((stderr or stdout or f"orp exited {code}").strip(), is_error=True)
+    return _text_result(stdout.strip())
+
+
 def _call_research_status(args: dict[str, Any]) -> dict[str, Any]:
     run_id = str(args.get("run_id", "") or "latest").strip() or "latest"
     cmd = ["--repo-root", _repo_root(args), "research", "status", run_id, "--json"]
@@ -174,6 +221,10 @@ def _handle(request: dict[str, Any]) -> dict[str, Any] | None:
         arguments = {}
     if name == "orp_research_ask":
         return _json_response(request_id, _call_research_ask(arguments))
+    if name == "orp_research_profile_list":
+        return _json_response(request_id, _call_research_profile_list(arguments))
+    if name == "orp_research_profile_show":
+        return _json_response(request_id, _call_research_profile_show(arguments))
     if name == "orp_research_status":
         return _json_response(request_id, _call_research_status(arguments))
     if name == "orp_research_show":