open-research-protocol 0.4.25 → 0.4.27

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -12,6 +12,11 @@ read:
 
 - Read `llms.txt`.
 - Run `orp about --json`.
+- Run `orp hygiene --json` before long delegation, after material writeback,
+  before API/remote/paid compute, and whenever dirty state grows unexpectedly.
+  If it reports `dirty_unclassified`, stop long-running expansion and classify
+  the paths, refresh generated surfaces, canonicalize useful scratch, or write a
+  blocker before continuing.
 - If the task benefits from fresh concepting, tasteful interface work, or
   exploratory reframing, run:
   - `orp mode nudge sleek-minimal-progressive --json`
@@ -32,6 +37,10 @@ read:
 ## 2. Select Work
 
 - Identify the target profile and canonical artifact paths.
+- Treat worktree hygiene as a default-on self-healing loop. Dirty state is okay
+  when it is owned; invisible or unclassified dirt is not. The command is
+  non-destructive, so never reset, checkout, or delete files merely to make the
+  report look clean.
 - If the task depends on the current highest-leverage action slice, refresh ORP's agenda first:
   - `orp agenda refresh --json`
   - `orp agenda refresh-status --json`
@@ -0,0 +1,171 @@
+# ORP Research Council
+
+An ORP research council run turns one question into a durable, tool-callable research artifact. The default profile is OpenAI-only right now, so one saved ORP key can power the full loop:
+
+```bash
+orp research ask "Where should this system live?" --json
+```
+
+By default this is a dry run. ORP writes the decomposition, profile, lane plan, lane JSON files, synthesized planning answer, and summary under:
+
+```text
+orp/research/<run_id>/
+```
+
+Live provider calls require an explicit flag:
+
+```bash
+orp research ask "Where should this system live?" --execute --json
+```
+
+## Lanes
+
+The built-in `openai-council` profile defines three OpenAI API lanes:
+
+- `openai_reasoning_high`: `gpt-5.4` with `reasoning.effort=high` for the deliberate thinking pass.
+- `openai_web_synthesis`: `gpt-5.4` with high reasoning plus Responses API web search for current public evidence and citations.
+- `openai_deep_research`: `o3-deep-research-2025-06-26` with background execution and web search preview for Pro/Deep Research style investigation.
+
+This follows OpenAI's current model guidance: `gpt-5.4` is the default for general-purpose, coding, reasoning, and agentic workflows; web search is enabled through the Responses API `tools` array when current information is needed; and Deep Research is available through the Responses endpoint with `o3-deep-research-2025-06-26`.
+## Staged Deep Research Template
+
+ORP also includes a built-in `deep-think-web-think-deep` profile for a strict sequence:
+
+```text
+Deep Research -> think -> think/web search -> think -> Deep Research
+```
+
+Inspect the template before using it:
+
+```bash
+orp research profile show deep-think-web-think-deep --json
+```
+
+Run a dry plan with form-like fields filled by a human or an agent:
+
+```bash
+orp research ask "Should this product use a staged research loop?" \
+  --profile deep-think-web-think-deep \
+  --field goal="Decide whether to adopt the staged loop" \
+  --field audience="Platform team" \
+  --field decision_to_support="Choose the default research workflow" \
+  --field project_context="ORP owns durable artifacts and secret resolution" \
+  --field constraints="Use one OpenAI API key first" \
+  --field deliverable_format="Decision memo with risks and next steps" \
+  --json
+```
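The repeated `--field key=value` flags above are mechanical to generate; the MCP wrapper later in this diff builds them the same way. A minimal sketch:

```python
# Turn a dict of form answers into the repeated --field key=value flags
# accepted by `orp research ask`. Mirrors the argument building done by the
# MCP wrapper's _call_research_ask in this same diff.
def field_args(fields: dict[str, str]) -> list[str]:
    args: list[str] = []
    for key, value in fields.items():
        args.extend(["--field", f"{key}={value}"])
    return args

print(field_args({"goal": "Decide whether to adopt the staged loop"}))
```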
+The prompt form is intentionally general. It gives an agent a reusable customization surface, similar to filling out a product intake form for a company:
+
+- `goal`
+- `audience`
+- `decision_to_support`
+- `project_context`
+- `constraints`
+- `known_inputs`
+- `source_preferences`
+- `recency_requirements`
+- `excluded_assumptions`
+- `success_criteria`
+- `deliverable_format`
+
+Dry runs persist the generated prompt for every lane under `orp/research/<run_id>/lanes/`. Later lanes include earlier lane outputs when those outputs exist, whether they came from live calls or `--lane-fixture` files. This makes the sequence inspectable before spending live provider calls. In live mode, later staged lanes skip their provider call if an earlier required lane did not complete with text.
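The chaining rule above can be sketched as follows; this is an illustrative stand-in, not ORP's actual implementation:

```python
# A later lane's prompt folds in whichever earlier lane outputs exist; when no
# earlier lane produced text, the lane just receives its own template.
def build_lane_prompt(template: str, earlier_outputs: dict[str, str]) -> str:
    context = "\n\n".join(
        f"[{lane_id}]\n{text}" for lane_id, text in earlier_outputs.items() if text
    )
    return f"{context}\n\n{template}" if context else template

print(build_lane_prompt("Synthesize a decision memo.",
                        {"openai_reasoning_high": "Lean toward option A."}))
```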
+
+The staged profile keeps Deep Research foreground by default so the next lane can receive actual output. Deep Research can take a long time; pass a larger `--timeout-sec` for live runs, or provide a custom profile file that sets `background=true` if you want asynchronous Deep Research behavior.
+
+## API Call Moments
+
+ORP records when API keys are intended to be used:
+
+- `plan`: local decomposition only. No API key is resolved.
+- `thinking_reasoning_high`: resolve `openai-primary` immediately before the `openai_reasoning_high` lane.
+- `web_synthesis`: resolve `openai-primary` immediately before the `openai_web_synthesis` lane.
+- `pro_deep_research`: resolve `openai-primary` immediately before the `openai_deep_research` lane.
+
+Dry runs write every lane with `api_call.called=false`. Live runs require `--execute`; even then, secret values are read only at the lane call moment and are not written to artifacts.
+
+Secret values are read from environment variables first. If an env var is missing and a matching ORP Keychain secret is available, ORP can use it at execution time. Secret values are not persisted in artifacts.
+
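The resolution order above can be sketched like this; `keychain_lookup` is a hypothetical stand-in for the ORP Keychain read, which only happens at the call moment:

```python
import os

# Env var wins; the keychain fallback is only consulted when actually needed,
# and the value is never written anywhere.
def resolve_secret(env_var: str, keychain_lookup):
    value = os.environ.get(env_var, "").strip()
    if value:
        return value          # environment variable takes precedence
    return keychain_lookup()  # deferred fallback at the call moment

os.environ["DEMO_API_KEY"] = "from-env"
print(resolve_secret("DEMO_API_KEY", lambda: "from-keychain"))
```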
+The default live profile expects this ORP secret alias or env var:
+
+- `openai-primary` / `OPENAI_API_KEY`
+
+Store a local machine copy (without using the hosted secret API) like this:
+
+```bash
+printf '%s' '<openai-key>' | orp secrets keychain-add \
+  --alias openai-primary \
+  --label "OpenAI Primary" \
+  --provider openai \
+  --value-stdin \
+  --json
+```
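The `printf '%s' ... --value-stdin` pipe above keeps the key out of argv, where other processes could see it in a process listing. A stand-alone sketch of the same stdin pattern, using a dummy value:

```python
import subprocess
import sys

# Feed a dummy secret to a child process over stdin instead of argv; the child
# here just reports how many characters it received.
child = subprocess.run(
    [sys.executable, "-c", "import sys; print(len(sys.stdin.read()))"],
    input="dummy-secret",  # placeholder, never a real key
    capture_output=True,
    text=True,
)
print(child.stdout.strip())
```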
+
+## Fixtures
+
+Provider outputs can be attached without spending live calls:
+
+```bash
+orp research ask "Where should this live?" \
+  --lane-fixture openai_reasoning_high=reports/reasoning.json \
+  --lane-fixture openai_web_synthesis=reports/web.txt \
+  --json
+```
+
+Fixtures are useful when an OpenAI run happened outside ORP, when you are comparing model settings manually, or when tests need deterministic lane outputs.
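The fixture behavior can be sketched as a simple precedence rule; this is an assumption about the shape of the logic, not ORP's code:

```python
import tempfile
from pathlib import Path

# A fixture file mapped for a lane replaces the live provider call entirely,
# which is what makes lane outputs deterministic for tests.
def lane_output(lane_id, fixtures, live_call=None):
    path = fixtures.get(lane_id)
    if path:
        return Path(path).read_text()
    return live_call() if live_call else None

with tempfile.TemporaryDirectory() as tmp:
    fixture = Path(tmp) / "reasoning.json"
    fixture.write_text('{"answer": "use fixtures"}')
    print(lane_output("openai_reasoning_high",
                      {"openai_reasoning_high": str(fixture)}))
```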
+
+## OpenAI API Notes
+
+ORP uses the Responses API for these lanes. Useful knobs in profile JSON:
+
+- `model`: for example `gpt-5.4` or `o3-deep-research-2025-06-26`.
+- `call_moment`: the named research-loop moment when this lane may resolve a key.
+- `reasoning_effort`: `none`, `low`, `medium`, `high`, or `xhigh` for supported models.
+- `reasoning_summary`: `auto` or `detailed` for Deep Research reasoning summaries.
+- `text_verbosity`: `low`, `medium`, or `high`.
+- `web_search`: `true` to add the Responses API web-search tool.
+- `search_context_size`: `low`, `medium`, or `high` for web search.
+- `background`: `true` for long-running Deep Research calls.
+- `max_output_tokens`: hard cap for a lane response.
+
+The default profile deliberately avoids Anthropic, xAI, and local-model lanes so a single OpenAI key is enough.
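The knobs above combine per lane. The sketch below assumes a flat lane object; the surrounding profile-file schema is not shown in this diff, so treat it as an illustration of one web-synthesis lane, not a verbatim ORP profile:

```python
import json

# One hypothetical web-synthesis lane assembled from the documented knobs.
lane = {
    "id": "openai_web_synthesis",
    "model": "gpt-5.4",
    "call_moment": "web_synthesis",
    "reasoning_effort": "high",
    "text_verbosity": "medium",
    "web_search": True,
    "search_context_size": "medium",
    "background": False,
    "max_output_tokens": 4096,
}
print(json.dumps(lane, indent=2))
```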
+
+## Project Context Timing
+
+`orp init` creates `orp/project.json`, a process-only project context lens for the current directory. It records the authority surfaces ORP can see, the directory signals it should route on, and the default research timing policy:
+
+- decompose locally first
+- use high-reasoning API calls when a decision gate or ambiguous next action needs outside reasoning
+- use web synthesis when current public facts, docs, papers, project status, or citations matter
+- use Deep Research only after reasoning/web lanes expose a research-heavy gap, disagreement, source-quality issue, or literature-scale synthesis need
+
+Run `orp project refresh --json` after adding or changing roadmap, spec, agent-guidance, docs, manifest, or command-surface files. Refreshing project context does not call a provider; live provider calls remain explicit through `orp research ask --execute`.
+
+## Follow-Up Commands
+
+```bash
+orp project show --json
+orp project refresh --json
+orp research status latest --json
+orp research show latest --json
+```
+
+## Codex MCP Tool
+
+ORP also ships a tiny stdio MCP wrapper for the research commands:
+
+```toml
+[mcp_servers.orp-research]
+command = "/path/to/orp/scripts/orp-mcp"
+```
+
+It exposes:
+
+- `orp_research_ask`
+- `orp_research_profile_list`
+- `orp_research_profile_show`
+- `orp_research_status`
+- `orp_research_show`
+
+Research council files are ORP process artifacts. They record decomposition, provider lane outputs, and synthesis. Canonical evidence still belongs in source repositories, linked reports, cited URLs, datasets, papers, or other primary artifacts.
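The wrapper speaks newline-delimited JSON-RPC over stdio (one object per line), as its `main()` loop later in this diff shows. A minimal sketch of building such a request line; the server itself is not started here:

```python
import json

# A tools/call request for orp_research_status, framed the way the wrapper
# expects: one compact JSON object followed by a newline.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "orp_research_status", "arguments": {"run_id": "latest"}},
}
line = json.dumps(request, separators=(",", ":")) + "\n"
print(line, end="")
```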
@@ -43,6 +43,7 @@ orp home
 orp agents root set /absolute/path/to/projects
 orp init
 orp agents audit
+orp project show --json
 orp workspace create main-cody-1
 orp workspace tabs main
 orp agenda refresh --json
@@ -64,6 +65,7 @@ That gets you:
 - an optional umbrella projects root for parent/child agent guidance
 - repo governance initialized
 - repo-level AGENTS.md and CLAUDE.md scaffolded or refreshed
+- `orp/project.json` created as the local project context lens
 - a local workspace ledger
 - the main recovery surface
 - a local operating agenda
@@ -73,6 +75,8 @@ That gets you:
 - a clean repo-governance read
 - a first intentional checkpoint
 
+`orp/project.json` records the current directory's authority surfaces, directory signals, and default research call timing. It is refreshed by `orp init` and can evolve with the directory through `orp project refresh --json`, especially after adding or changing roadmap, spec, agent-guidance, docs, manifest, or command-surface files.
+
 ## Beginner Flow
 
 This is the zero-assumption path for a new user.
@@ -735,6 +739,7 @@ The guiding rule is simple:
 
 - recover the saved workspace
 - inspect repo safety
+- classify dirty state before expansion
 - resolve the right secret
 - inspect the current frontier
 - do the next honest move
@@ -752,16 +757,20 @@ If you only want the irreducible ORP loop, it is this:
    ```bash
    orp status --json
    ```
-3. resolve the right secret
+3. classify dirty worktree state before expansion
+   ```bash
+   orp hygiene --json
+   ```
+4. resolve the right secret
    ```bash
    orp secrets ensure --alias <alias> --provider <provider> --current-project --json
    ```
-4. inspect the current frontier
+5. inspect the current frontier
    ```bash
    orp frontier state --json
    ```
-5. do the next honest move
-6. checkpoint it
+6. do the next honest move
+7. checkpoint it
    ```bash
    orp checkpoint create -m "describe completed unit" --json
    ```
@@ -770,6 +779,7 @@ That is the shortest version of the protocol:
 
 - recover continuity
 - inspect safety
+- classify dirty state
 - resolve access
 - inspect context
 - do the work
@@ -782,6 +792,7 @@ If you want the agent to stay aligned with ORP, the default check sequence shoul
 ```bash
 orp workspace tabs main
 orp status --json
+orp hygiene --json
 orp secrets ensure --alias <alias> --provider <provider> --current-project --json
 orp frontier state --json
 ```
@@ -796,17 +807,18 @@ That is the practical ORP loop:
 
 1. recover the workspace ledger
 2. inspect repo safety
-3. resolve the right key
-4. inspect the current frontier
-5. do the work
-6. checkpoint it honestly
+3. classify dirty state and stop if anything is unowned
+4. resolve the right key
+5. inspect the current frontier
+6. do the work
+7. checkpoint it honestly
 
 The key point is that ORP should become the lens:
 
 - not "something we remember to use sometimes"
 - but the operating frame the agent checks before and after meaningful work
 
-## If You Only Remember 8 Commands
+## If You Only Remember 9 Commands
 
 ```bash
 orp home
@@ -814,6 +826,7 @@ orp init
 orp workspace create main-cody-1
 orp workspace tabs main
 orp workspace add-tab main --path /absolute/path/to/project --resume-command "codex resume <id>"
+orp hygiene --json
 orp secrets ensure --alias openai-primary --provider openai --current-project --json
 orp status --json
 orp checkpoint create -m "capture loop state" --json
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "open-research-protocol",
-  "version": "0.4.25",
+  "version": "0.4.27",
   "description": "ORP CLI (Open Research Protocol): workspace ledgers, secrets, scheduling, governed execution, and agent-friendly research workflows.",
   "license": "MIT",
   "author": "Fractal Research Group <cody@frg.earth>",
@@ -1,9 +1,55 @@
+import { spawnSync } from "node:child_process";
+import path from "node:path";
+import { fileURLToPath } from "node:url";
+
 import { runWorkspaceAddTab, runWorkspaceCreate, runWorkspaceRemoveTab } from "./ledger.js";
 import { runWorkspaceList } from "./list.js";
 import { runWorkspaceSlot } from "./slot.js";
 import { runWorkspaceSync } from "./sync.js";
 import { runWorkspaceTabs } from "./tabs.js";
 
+const __dirname = path.dirname(fileURLToPath(import.meta.url));
+const pythonCliPath = path.resolve(__dirname, "..", "..", "..", "cli", "orp.py");
+
+function pythonCandidates() {
+  const candidates = [];
+  if (process.env.ORP_PYTHON && process.env.ORP_PYTHON.trim() !== "") {
+    candidates.push(process.env.ORP_PYTHON.trim());
+  }
+  if (process.platform === "win32") {
+    candidates.push("py");
+  }
+  candidates.push("python3", "python");
+  return candidates;
+}
+
+function runWorkspaceHygiene(args = []) {
+  let lastErr = null;
+  for (const py of pythonCandidates()) {
+    const pyArgs = py === "py" ? ["-3", pythonCliPath, "hygiene", ...args] : [pythonCliPath, "hygiene", ...args];
+    const result = spawnSync(py, pyArgs, { encoding: "utf8" });
+    if (!result.error) {
+      if (result.stdout) {
+        process.stdout.write(result.stdout);
+      }
+      if (result.stderr) {
+        process.stderr.write(result.stderr);
+      }
+      return result.status == null ? 1 : result.status;
+    }
+    if (result.error && result.error.code === "ENOENT") {
+      continue;
+    }
+    lastErr = result.error;
+  }
+  process.stderr.write("ORP workspace hygiene requires Python 3 on PATH.\n");
+  process.stderr.write("Tried: " + pythonCandidates().join(", ") + "\n");
+  if (lastErr) {
+    process.stderr.write(String(lastErr) + "\n");
+  }
+  return 1;
+}
+
 function printWorkspaceHelp() {
   console.log(`ORP workspace
 
@@ -18,6 +64,7 @@ Usage:
   orp workspace remove-tab <name-or-id> (--index <n> | --path <absolute-path> | --title <title> | --resume-session-id <id> | --resume-command <text>) [--all] [--json]
   orp workspace slot <list|set|clear> ...
   orp workspace sync <name-or-id> [--workspace-file <path> | --notes-file <path>] [--dry-run] [--json]
+  orp workspace hygiene [--json]
   orp workspace ledger <name-or-id> [--json]
   orp workspace ledger add <name-or-id> (--path <absolute-path> | --here) [--title <title>] [--remote-url <git-url>] [--remote-branch <branch>] [--bootstrap-command <text>] [--resume-command <text> | --resume-tool <codex|claude> --resume-session-id <id> | --current-codex] [--append] [--json]
   orp workspace ledger remove <name-or-id> (--index <n> | --path <absolute-path> | --title <title> | --resume-session-id <id> | --resume-command <text>) [--all] [--json]
@@ -31,6 +78,7 @@ Commands:
   remove-tab  Remove one or more saved tabs from the workspace ledger directly
   slot        Assign and inspect named workspace slots like main and offhand
   sync        Post a CLI-authored workspace manifest back to the hosted ORP idea
+  hygiene     Classify dirty worktree paths before long agent expansion
   ledger      Compatibility alias for the same tabs/add/remove ledger flow
 
 Notes:
@@ -41,6 +89,7 @@ Notes:
 - Use \`orp workspace add-tab ... --remote-url ... --bootstrap-command ...\` when you want ORP to remember how to recreate the repo on another machine.
 - Use \`orp workspace add-tab ...\` and \`orp workspace remove-tab ...\` when you want to edit the saved workspace ledger explicitly from Terminal.app or any other shell.
 - If you prefer the older ledger-prefixed wording, \`orp workspace ledger\`, \`orp workspace ledger add\`, and \`orp workspace ledger remove\` stay available as aliases.
+- \`orp workspace hygiene --json\` is a wrapper alias for \`orp hygiene --json\`; it reports dirty path categories without resetting, checking out, or deleting files.
 - \`main\` and \`offhand\` are reserved slot selectors; use \`orp workspace slot set ...\` to assign them.
 - Syncing or editing a hosted workspace writes a managed local cache on this Mac.
 - \`<name-or-id>\` can be a saved workspace title, workspace id, idea id, or local tracked workspace title/id.
@@ -51,6 +100,7 @@ Examples:
 orp workspace create mac-main --machine-label "Mac Studio" --path /absolute/path/to/orp --remote-url git@github.com:SproutSeeds/orp.git --bootstrap-command "npm install"
 orp workspace list
 orp workspace tabs main-cody-1
+orp workspace hygiene --json
 orp workspace add-tab main --here --current-codex
 orp workspace add-tab main --path /absolute/path/to/new-project --resume-command "codex resume 019d..."
 orp workspace add-tab main --path /absolute/path/to/new-project --remote-url git@github.com:org/new-project.git --bootstrap-command "npm install"
@@ -126,5 +176,9 @@ export async function runOrpWorkspaceCommand(argv = []) {
     return runWorkspaceSync(rest);
   }
 
+  if (subcommand === "hygiene") {
+    return runWorkspaceHygiene(rest);
+  }
+
   throw new Error(`unknown workspace subcommand: ${subcommand}`);
 }
@@ -38,6 +38,7 @@ test("runOrpWorkspaceCommand shows the ledger-first help surface", async () => {
   assert.match(stdout, /orp workspace ledger add <name-or-id>/);
   assert.match(stdout, /orp workspace ledger remove <name-or-id>/);
   assert.match(stdout, /orp workspace tabs <name-or-id>/);
+  assert.match(stdout, /orp workspace hygiene \[--json\]/);
   assert.match(stdout, /Compatibility alias for the same tabs\/add\/remove ledger flow/);
 });
 
@@ -0,0 +1,256 @@
+#!/usr/bin/env python3
+from __future__ import annotations
+
+import json
+from pathlib import Path
+import subprocess
+import sys
+from typing import Any
+
+
+ROOT = Path(__file__).resolve().parents[1]
+CLI = ROOT / "cli" / "orp.py"
+
+
+TOOLS: list[dict[str, Any]] = [
+    {
+        "name": "orp_research_ask",
+        "description": "Create an ORP OpenAI research-loop run with high-reasoning, web-synthesis, and deep-research call moments. Dry-run by default; set execute=true for live provider lanes.",
+        "inputSchema": {
+            "type": "object",
+            "required": ["question"],
+            "properties": {
+                "question": {"type": "string", "minLength": 1},
+                "repo_root": {"type": "string", "description": "Repository root. Defaults to current directory."},
+                "run_id": {"type": "string"},
+                "profile": {"type": "string", "default": "openai-council"},
+                "profile_file": {"type": "string"},
+                "fields": {
+                    "type": "object",
+                    "description": "Prompt-template fields passed as key/value pairs.",
+                    "additionalProperties": {"type": "string"},
+                },
+                "execute": {"type": "boolean", "default": False},
+                "lane_fixtures": {
+                    "type": "object",
+                    "description": "Mapping of lane_id to fixture path.",
+                    "additionalProperties": {"type": "string"},
+                },
+                "chimera_bin": {"type": "string", "default": "chimera"},
+                "timeout_sec": {"type": "integer", "minimum": 1, "description": "Override the profile per-lane timeout policy."},
+            },
+        },
+    },
+    {
+        "name": "orp_research_profile_list",
+        "description": "List built-in ORP research profiles and prompt-template surfaces.",
+        "inputSchema": {
+            "type": "object",
+            "properties": {
+                "repo_root": {"type": "string", "description": "Repository root. Defaults to current directory."},
+            },
+        },
+    },
+    {
+        "name": "orp_research_profile_show",
+        "description": "Show a built-in ORP research profile, including its lane sequence and prompt form.",
+        "inputSchema": {
+            "type": "object",
+            "properties": {
+                "repo_root": {"type": "string", "description": "Repository root. Defaults to current directory."},
+                "profile_id": {"type": "string", "default": "openai-council"},
+            },
+        },
+    },
+    {
+        "name": "orp_research_status",
+        "description": "Show status and lane summary for an ORP research council run.",
+        "inputSchema": {
+            "type": "object",
+            "properties": {
+                "repo_root": {"type": "string", "description": "Repository root. Defaults to current directory."},
+                "run_id": {"type": "string", "default": "latest"},
+            },
+        },
+    },
+    {
+        "name": "orp_research_show",
+        "description": "Show an ORP research council answer payload.",
+        "inputSchema": {
+            "type": "object",
+            "properties": {
+                "repo_root": {"type": "string", "description": "Repository root. Defaults to current directory."},
+                "run_id": {"type": "string", "default": "latest"},
+            },
+        },
+    },
+]
+
+
+def _json_response(request_id: Any, result: Any | None = None, error: Any | None = None) -> dict[str, Any]:
+    response: dict[str, Any] = {"jsonrpc": "2.0", "id": request_id}
+    if error is not None:
+        response["error"] = error
+    else:
+        response["result"] = result if result is not None else {}
+    return response
+
+
+def _write(payload: dict[str, Any]) -> None:
+    sys.stdout.write(json.dumps(payload, separators=(",", ":")) + "\n")
+    sys.stdout.flush()
+
+
+def _text_result(text: str, *, is_error: bool = False) -> dict[str, Any]:
+    return {
+        "content": [{"type": "text", "text": text}],
+        "isError": is_error,
+    }
+
+
+def _repo_root(args: dict[str, Any]) -> str:
+    raw = str(args.get("repo_root", "") or "").strip()
+    return raw or "."
+
+
+def _run_orp(args: list[str]) -> tuple[int, str, str]:
+    proc = subprocess.run(
+        [sys.executable, str(CLI), *args],
+        capture_output=True,
+        text=True,
+        cwd=str(ROOT),
+    )
+    return proc.returncode, proc.stdout, proc.stderr
+
+
+def _call_research_ask(args: dict[str, Any]) -> dict[str, Any]:
+    question = str(args.get("question", "") or "").strip()
+    if not question:
+        return _text_result("question is required", is_error=True)
+    cmd = ["--repo-root", _repo_root(args), "research", "ask", question]
+    profile = str(args.get("profile", "") or "").strip()
+    if profile:
+        cmd.extend(["--profile", profile])
+    profile_file = str(args.get("profile_file", "") or "").strip()
+    if profile_file:
+        cmd.extend(["--profile-file", profile_file])
+    run_id = str(args.get("run_id", "") or "").strip()
+    if run_id:
+        cmd.extend(["--run-id", run_id])
+    fields = args.get("fields")
+    if isinstance(fields, dict):
+        for key, value in fields.items():
+            cmd.extend(["--field", f"{key}={value}"])
+    if bool(args.get("execute", False)):
+        cmd.append("--execute")
+    lane_fixtures = args.get("lane_fixtures")
+    if isinstance(lane_fixtures, dict):
+        for lane_id, path in lane_fixtures.items():
+            cmd.extend(["--lane-fixture", f"{lane_id}={path}"])
+    chimera_bin = str(args.get("chimera_bin", "") or "").strip()
+    if chimera_bin:
+        cmd.extend(["--chimera-bin", chimera_bin])
+    timeout_sec = args.get("timeout_sec")
+    if timeout_sec is not None:
+        cmd.extend(["--timeout-sec", str(timeout_sec)])
+    cmd.append("--json")
+    code, stdout, stderr = _run_orp(cmd)
+    if code != 0:
+        return _text_result((stderr or stdout or f"orp exited {code}").strip(), is_error=True)
+    return _text_result(stdout.strip())
+
+
+def _call_research_profile_list(args: dict[str, Any]) -> dict[str, Any]:
+    cmd = ["--repo-root", _repo_root(args), "research", "profile", "list", "--json"]
+    code, stdout, stderr = _run_orp(cmd)
+    if code != 0:
+        return _text_result((stderr or stdout or f"orp exited {code}").strip(), is_error=True)
+    return _text_result(stdout.strip())
+
+
+def _call_research_profile_show(args: dict[str, Any]) -> dict[str, Any]:
+    profile_id = str(args.get("profile_id", "") or "openai-council").strip() or "openai-council"
+    cmd = ["--repo-root", _repo_root(args), "research", "profile", "show", profile_id, "--json"]
+    code, stdout, stderr = _run_orp(cmd)
+    if code != 0:
+        return _text_result((stderr or stdout or f"orp exited {code}").strip(), is_error=True)
+    return _text_result(stdout.strip())
+
+
+def _call_research_status(args: dict[str, Any]) -> dict[str, Any]:
+    run_id = str(args.get("run_id", "") or "latest").strip() or "latest"
+    cmd = ["--repo-root", _repo_root(args), "research", "status", run_id, "--json"]
+    code, stdout, stderr = _run_orp(cmd)
+    if code != 0:
+        return _text_result((stderr or stdout or f"orp exited {code}").strip(), is_error=True)
+    return _text_result(stdout.strip())
+
+
+def _call_research_show(args: dict[str, Any]) -> dict[str, Any]:
+    run_id = str(args.get("run_id", "") or "latest").strip() or "latest"
+    cmd = ["--repo-root", _repo_root(args), "research", "show", run_id, "--json"]
+    code, stdout, stderr = _run_orp(cmd)
+    if code != 0:
+        return _text_result((stderr or stdout or f"orp exited {code}").strip(), is_error=True)
+    return _text_result(stdout.strip())
+
+
+def _handle(request: dict[str, Any]) -> dict[str, Any] | None:
+    request_id = request.get("id")
+    method = str(request.get("method", "") or "")
+    if method.startswith("notifications/"):
+        return None
+    if method == "initialize":
+        return _json_response(
+            request_id,
+            {
+                "protocolVersion": str(request.get("params", {}).get("protocolVersion", "2024-11-05")),
+                "capabilities": {"tools": {}},
+                "serverInfo": {"name": "orp-research", "version": "0.1.0"},
+            },
+        )
+    if method == "tools/list":
+        return _json_response(request_id, {"tools": TOOLS})
+    if method == "tools/call":
+        params = request.get("params")
+        if not isinstance(params, dict):
+            return _json_response(request_id, error={"code": -32602, "message": "params must be an object"})
+        name = str(params.get("name", "") or "")
+        arguments = params.get("arguments")
+        if not isinstance(arguments, dict):
+            arguments = {}
+        if name == "orp_research_ask":
+            return _json_response(request_id, _call_research_ask(arguments))
+        if name == "orp_research_profile_list":
+            return _json_response(request_id, _call_research_profile_list(arguments))
+        if name == "orp_research_profile_show":
+            return _json_response(request_id, _call_research_profile_show(arguments))
+        if name == "orp_research_status":
+            return _json_response(request_id, _call_research_status(arguments))
+        if name == "orp_research_show":
+            return _json_response(request_id, _call_research_show(arguments))
+        return _json_response(request_id, error={"code": -32601, "message": f"unknown tool: {name}"})
+    return _json_response(request_id, error={"code": -32601, "message": f"unknown method: {method}"})
+
+
+def main() -> int:
+    for line in sys.stdin:
+        text = line.strip()
+        if not text:
+            continue
+        try:
+            request = json.loads(text)
+        except Exception as exc:
+            _write(_json_response(None, error={"code": -32700, "message": str(exc)}))
+            continue
+        if not isinstance(request, dict):
+            _write(_json_response(None, error={"code": -32600, "message": "request must be an object"}))
+            continue
+        response = _handle(request)
+        if response is not None:
+            _write(response)
+    return 0
+
+
+if __name__ == "__main__":
+    raise SystemExit(main())