open-research-protocol 0.4.25 → 0.4.26
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +30 -0
- package/README.md +30 -13
- package/cli/orp.py +2150 -120
- package/docs/RESEARCH_COUNCIL.md +123 -0
- package/docs/START_HERE.md +4 -0
- package/package.json +1 -1
- package/scripts/orp-mcp +205 -0
- package/spec/v1/project-context.schema.json +223 -0
- package/spec/v1/research-run.schema.json +245 -0
package/docs/RESEARCH_COUNCIL.md
ADDED

@@ -0,0 +1,123 @@
# ORP Research Council

ORP research council runs turn one question into a durable, tool-callable research artifact. The default profile is OpenAI-only right now, so one saved ORP key can power the full loop:

```bash
orp research ask "Where should this system live?" --json
```

By default this is a dry run. ORP writes the decomposition, profile, lane plan, lane JSON files, synthesized planning answer, and summary under:

```text
orp/research/<run_id>/
```

Live provider calls require an explicit flag:

```bash
orp research ask "Where should this system live?" --execute --json
```
## Lanes

The built-in `openai-council` profile defines three OpenAI API lanes:

- `openai_reasoning_high`: `gpt-5.4` with `reasoning.effort=high` for the deliberate thinking pass.
- `openai_web_synthesis`: `gpt-5.4` with high reasoning plus Responses API web search for current public evidence and citations.
- `openai_deep_research`: `o3-deep-research-2025-06-26` with background execution and web search preview for Pro/Deep Research style investigation.

This follows OpenAI's current model guidance: `gpt-5.4` is the default for general-purpose, coding, reasoning, and agentic workflows; web search is enabled through the Responses API `tools` array when current information is needed; and Deep Research is available through the Responses endpoint with `o3-deep-research-2025-06-26`.
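As a rough sketch, the web-synthesis lane corresponds to a Responses API request body shaped like this. This only builds the payload dict and makes no network call; the exact fields ORP sends are defined by its profiles, so treat the shape as illustrative:

```python
# Illustrative request body for the `openai_web_synthesis` lane.
# Field names follow the Responses API conventions described above;
# `build_web_synthesis_payload` is a hypothetical helper, not ORP code.

def build_web_synthesis_payload(question: str) -> dict:
    return {
        "model": "gpt-5.4",
        "reasoning": {"effort": "high"},          # high-reasoning pass
        "tools": [{"type": "web_search"}],        # Responses API web-search tool
        "input": question,
    }

payload = build_web_synthesis_payload("Where should this system live?")
```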
## API Call Moments

ORP records when API keys are intended to be used:

- `plan`: local decomposition only. No API key is resolved.
- `thinking_reasoning_high`: resolve `openai-primary` immediately before the `openai_reasoning_high` lane.
- `web_synthesis`: resolve `openai-primary` immediately before the `openai_web_synthesis` lane.
- `pro_deep_research`: resolve `openai-primary` immediately before the `openai_deep_research` lane.

Dry runs write every lane with `api_call.called=false`. Live runs require `--execute`; even then, secret values are read only at the lane call moment and are not written to artifacts.
Secret values are read from environment variables first. If an env var is missing and a matching ORP Keychain secret is available, ORP can use it at execution time. Secret values are not persisted in artifacts.
The default live profile expects this ORP secret alias or env var:

- `openai-primary` / `OPENAI_API_KEY`

Store a local machine copy without the hosted secret API like this:

```bash
printf '%s' '<openai-key>' | orp secrets keychain-add \
  --alias openai-primary \
  --label "OpenAI Primary" \
  --provider openai \
  --value-stdin \
  --json
```
## Fixtures

Provider outputs can be attached without spending live calls:

```bash
orp research ask "Where should this live?" \
  --lane-fixture openai_reasoning_high=reports/reasoning.json \
  --lane-fixture openai_web_synthesis=reports/web.txt \
  --json
```

Fixtures are useful when an OpenAI run happened outside ORP, when you are comparing model settings manually, or when tests need deterministic lane outputs.
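For the deterministic-tests case, a fixture is just a file written ahead of time at the path the command points at. The JSON shape below (a `lane`/`answer` object) is an assumption for illustration, not ORP's documented fixture schema:

```python
import json
from pathlib import Path

# Write a deterministic fixture for the reasoning lane before running
# `orp research ask ... --lane-fixture openai_reasoning_high=reports/reasoning.json`.
reports = Path("reports")
reports.mkdir(exist_ok=True)
fixture = {"lane": "openai_reasoning_high", "answer": "Keep it local-first."}
(reports / "reasoning.json").write_text(json.dumps(fixture, indent=2))
```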
## OpenAI API Notes

ORP uses the Responses API for these lanes. Useful knobs in profile JSON:

- `model`: for example `gpt-5.4` or `o3-deep-research-2025-06-26`.
- `call_moment`: the named research-loop moment when this lane may resolve a key.
- `reasoning_effort`: `none`, `low`, `medium`, `high`, or `xhigh` for supported models.
- `reasoning_summary`: `auto` or `detailed` for Deep Research reasoning summaries.
- `text_verbosity`: `low`, `medium`, or `high`.
- `web_search`: `true` to add the Responses API web-search tool.
- `search_context_size`: `low`, `medium`, or `high` for web search.
- `background`: `true` for long-running Deep Research calls.
- `max_output_tokens`: hard cap for a lane response.
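Put together, a single lane entry in profile JSON might look like this. The knob names come from the list above; `lane_id` and the overall layout are assumptions for illustration, not ORP's documented profile format:

```json
{
  "lane_id": "openai_web_synthesis",
  "model": "gpt-5.4",
  "call_moment": "web_synthesis",
  "reasoning_effort": "high",
  "text_verbosity": "medium",
  "web_search": true,
  "search_context_size": "medium",
  "max_output_tokens": 8000
}
```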
The default profile deliberately avoids Anthropic, xAI, and local-model lanes so a single OpenAI key is enough.
## Project Context Timing

`orp init` creates `orp/project.json`, a process-only project context lens for the current directory. It records the authority surfaces ORP can see, the directory signals it should route on, and the default research timing policy:

- decompose locally first
- use high-reasoning API calls when a decision gate or ambiguous next action needs outside reasoning
- use web synthesis when current public facts, docs, papers, project status, or citations matter
- use Deep Research only after reasoning/web lanes expose a research-heavy gap, disagreement, source-quality issue, or literature-scale synthesis need
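That timing policy lands in `orp/project.json` as a `research_policy` block. A minimal sketch consistent with the project-context schema shipped in this release (all values are illustrative, not real ORP output):

```json
{
  "research_policy": {
    "default_timing": "decompose_locally_first",
    "provider_calls_are_explicit": true,
    "live_calls_require_execute": true,
    "call_moments": [
      {"moment_id": "plan", "calls_api": false, "when": "always"},
      {
        "moment_id": "web_synthesis",
        "calls_api": true,
        "lane": "openai_web_synthesis",
        "when": "current public facts, docs, or citations matter"
      }
    ]
  }
}
```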
Run `orp project refresh --json` after adding or changing roadmap, spec, agent-guidance, docs, manifest, or command-surface files. Refreshing project context does not call a provider; live provider calls remain explicit through `orp research ask --execute`.
## Follow-Up Commands

```bash
orp project show --json
orp project refresh --json
orp research status latest --json
orp research show latest --json
```
## Codex MCP Tool

ORP also ships a tiny stdio MCP wrapper for the research commands:

```toml
[mcp_servers.orp-research]
command = "/path/to/orp/scripts/orp-mcp"
```

It exposes:

- `orp_research_ask`
- `orp_research_status`
- `orp_research_show`
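An MCP client drives the wrapper over stdio with newline-delimited JSON-RPC. As a sketch, a `tools/call` request for `orp_research_ask` is built like this (request construction only; nothing is sent, and the argument names match the wrapper's input schema):

```python
import json

# One JSON-RPC request per line, as the stdio wrapper expects.
# Omitting "execute" keeps the run a dry run.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "orp_research_ask",
        "arguments": {"question": "Where should this system live?"},
    },
}
line = json.dumps(request, separators=(",", ":")) + "\n"
```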
Research council files are ORP process artifacts. They record decomposition, provider lane outputs, and synthesis. Canonical evidence still belongs in source repositories, linked reports, cited URLs, datasets, papers, or other primary artifacts.
package/docs/START_HERE.md
CHANGED

@@ -43,6 +43,7 @@ orp home
 orp agents root set /absolute/path/to/projects
 orp init
 orp agents audit
+orp project show --json
 orp workspace create main-cody-1
 orp workspace tabs main
 orp agenda refresh --json

@@ -64,6 +65,7 @@ That gets you:
 - an optional umbrella projects root for parent/child agent guidance
 - repo governance initialized
 - repo-level AGENTS.md and CLAUDE.md scaffolded or refreshed
+- `orp/project.json` created as the local project context lens
 - a local workspace ledger
 - the main recovery surface
 - a local operating agenda

@@ -73,6 +75,8 @@ That gets you:
 - a clean repo-governance read
 - a first intentional checkpoint

+`orp/project.json` records the current directory's authority surfaces, directory signals, and default research call timing. It is refreshed by `orp init` and can evolve with the directory through `orp project refresh --json`, especially after adding or changing roadmap, spec, agent-guidance, docs, manifest, or command-surface files.
+
 ## Beginner Flow

 This is the zero-assumption path for a new user.
package/package.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "open-research-protocol",
-  "version": "0.4.25",
+  "version": "0.4.26",
   "description": "ORP CLI (Open Research Protocol): workspace ledgers, secrets, scheduling, governed execution, and agent-friendly research workflows.",
   "license": "MIT",
   "author": "Fractal Research Group <cody@frg.earth>",
package/scripts/orp-mcp
ADDED

@@ -0,0 +1,205 @@
#!/usr/bin/env python3
from __future__ import annotations

import json
from pathlib import Path
import subprocess
import sys
from typing import Any


ROOT = Path(__file__).resolve().parents[1]
CLI = ROOT / "cli" / "orp.py"


TOOLS: list[dict[str, Any]] = [
    {
        "name": "orp_research_ask",
        "description": "Create an ORP OpenAI research-loop run with high-reasoning, web-synthesis, and deep-research call moments. Dry-run by default; set execute=true for live provider lanes.",
        "inputSchema": {
            "type": "object",
            "required": ["question"],
            "properties": {
                "question": {"type": "string", "minLength": 1},
                "repo_root": {"type": "string", "description": "Repository root. Defaults to current directory."},
                "run_id": {"type": "string"},
                "profile": {"type": "string", "default": "openai-council"},
                "profile_file": {"type": "string"},
                "execute": {"type": "boolean", "default": False},
                "lane_fixtures": {
                    "type": "object",
                    "description": "Mapping of lane_id to fixture path.",
                    "additionalProperties": {"type": "string"},
                },
                "chimera_bin": {"type": "string", "default": "chimera"},
                "timeout_sec": {"type": "integer", "minimum": 1, "default": 120},
            },
        },
    },
    {
        "name": "orp_research_status",
        "description": "Show status and lane summary for an ORP research council run.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "repo_root": {"type": "string", "description": "Repository root. Defaults to current directory."},
                "run_id": {"type": "string", "default": "latest"},
            },
        },
    },
    {
        "name": "orp_research_show",
        "description": "Show an ORP research council answer payload.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "repo_root": {"type": "string", "description": "Repository root. Defaults to current directory."},
                "run_id": {"type": "string", "default": "latest"},
            },
        },
    },
]


def _json_response(request_id: Any, result: Any | None = None, error: Any | None = None) -> dict[str, Any]:
    response: dict[str, Any] = {"jsonrpc": "2.0", "id": request_id}
    if error is not None:
        response["error"] = error
    else:
        response["result"] = result if result is not None else {}
    return response


def _write(payload: dict[str, Any]) -> None:
    sys.stdout.write(json.dumps(payload, separators=(",", ":")) + "\n")
    sys.stdout.flush()


def _text_result(text: str, *, is_error: bool = False) -> dict[str, Any]:
    return {
        "content": [{"type": "text", "text": text}],
        "isError": is_error,
    }


def _repo_root(args: dict[str, Any]) -> str:
    raw = str(args.get("repo_root", "") or "").strip()
    return raw or "."


def _run_orp(args: list[str]) -> tuple[int, str, str]:
    proc = subprocess.run(
        [sys.executable, str(CLI), *args],
        capture_output=True,
        text=True,
        cwd=str(ROOT),
    )
    return proc.returncode, proc.stdout, proc.stderr


def _call_research_ask(args: dict[str, Any]) -> dict[str, Any]:
    question = str(args.get("question", "") or "").strip()
    if not question:
        return _text_result("question is required", is_error=True)
    cmd = ["--repo-root", _repo_root(args), "research", "ask", question]
    profile = str(args.get("profile", "") or "").strip()
    if profile:
        cmd.extend(["--profile", profile])
    profile_file = str(args.get("profile_file", "") or "").strip()
    if profile_file:
        cmd.extend(["--profile-file", profile_file])
    run_id = str(args.get("run_id", "") or "").strip()
    if run_id:
        cmd.extend(["--run-id", run_id])
    if bool(args.get("execute", False)):
        cmd.append("--execute")
    lane_fixtures = args.get("lane_fixtures")
    if isinstance(lane_fixtures, dict):
        for lane_id, path in lane_fixtures.items():
            cmd.extend(["--lane-fixture", f"{lane_id}={path}"])
    chimera_bin = str(args.get("chimera_bin", "") or "").strip()
    if chimera_bin:
        cmd.extend(["--chimera-bin", chimera_bin])
    timeout_sec = args.get("timeout_sec")
    if timeout_sec is not None:
        cmd.extend(["--timeout-sec", str(timeout_sec)])
    cmd.append("--json")
    code, stdout, stderr = _run_orp(cmd)
    if code != 0:
        return _text_result((stderr or stdout or f"orp exited {code}").strip(), is_error=True)
    return _text_result(stdout.strip())


def _call_research_status(args: dict[str, Any]) -> dict[str, Any]:
    run_id = str(args.get("run_id", "") or "latest").strip() or "latest"
    cmd = ["--repo-root", _repo_root(args), "research", "status", run_id, "--json"]
    code, stdout, stderr = _run_orp(cmd)
    if code != 0:
        return _text_result((stderr or stdout or f"orp exited {code}").strip(), is_error=True)
    return _text_result(stdout.strip())


def _call_research_show(args: dict[str, Any]) -> dict[str, Any]:
    run_id = str(args.get("run_id", "") or "latest").strip() or "latest"
    cmd = ["--repo-root", _repo_root(args), "research", "show", run_id, "--json"]
    code, stdout, stderr = _run_orp(cmd)
    if code != 0:
        return _text_result((stderr or stdout or f"orp exited {code}").strip(), is_error=True)
    return _text_result(stdout.strip())


def _handle(request: dict[str, Any]) -> dict[str, Any] | None:
    request_id = request.get("id")
    method = str(request.get("method", "") or "")
    if method.startswith("notifications/"):
        return None
    if method == "initialize":
        return _json_response(
            request_id,
            {
                "protocolVersion": str(request.get("params", {}).get("protocolVersion", "2024-11-05")),
                "capabilities": {"tools": {}},
                "serverInfo": {"name": "orp-research", "version": "0.1.0"},
            },
        )
    if method == "tools/list":
        return _json_response(request_id, {"tools": TOOLS})
    if method == "tools/call":
        params = request.get("params")
        if not isinstance(params, dict):
            return _json_response(request_id, error={"code": -32602, "message": "params must be an object"})
        name = str(params.get("name", "") or "")
        arguments = params.get("arguments")
        if not isinstance(arguments, dict):
            arguments = {}
        if name == "orp_research_ask":
            return _json_response(request_id, _call_research_ask(arguments))
        if name == "orp_research_status":
            return _json_response(request_id, _call_research_status(arguments))
        if name == "orp_research_show":
            return _json_response(request_id, _call_research_show(arguments))
        return _json_response(request_id, error={"code": -32601, "message": f"unknown tool: {name}"})
    return _json_response(request_id, error={"code": -32601, "message": f"unknown method: {method}"})


def main() -> int:
    for line in sys.stdin:
        text = line.strip()
        if not text:
            continue
        try:
            request = json.loads(text)
        except Exception as exc:
            _write(_json_response(None, error={"code": -32700, "message": str(exc)}))
            continue
        if not isinstance(request, dict):
            _write(_json_response(None, error={"code": -32600, "message": "request must be an object"}))
            continue
        response = _handle(request)
        if response is not None:
            _write(response)
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
package/spec/v1/project-context.schema.json
ADDED

@@ -0,0 +1,223 @@
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "$id": "https://openresearchprotocol.com/spec/v1/project-context.schema.json",
  "title": "ORP Project Context",
  "description": "Machine-readable ORP process artifact describing a local directory's authority surfaces, directory signals, research timing policy, and evolution policy.",
  "type": "object",
  "additionalProperties": true,
  "required": [
    "schema_version",
    "kind",
    "project",
    "initialized_at_utc",
    "refreshed_at_utc",
    "refresh_source",
    "authority_surfaces",
    "directory_signals",
    "research_policy",
    "evolution_policy",
    "next_actions",
    "notes"
  ],
  "properties": {
    "schema_version": {
      "const": "1.0.0"
    },
    "kind": {
      "const": "orp_project_context"
    },
    "project": {
      "type": "object",
      "required": [
        "name",
        "root"
      ],
      "properties": {
        "name": {
          "type": "string",
          "minLength": 1
        },
        "root": {
          "type": "string",
          "minLength": 1
        }
      }
    },
    "initialized_at_utc": {
      "type": "string",
      "minLength": 1
    },
    "refreshed_at_utc": {
      "type": "string",
      "minLength": 1
    },
    "refresh_source": {
      "type": "string",
      "minLength": 1
    },
    "authority_surfaces": {
      "type": "array",
      "items": {
        "type": "object",
        "required": [
          "path",
          "kind",
          "role",
          "exists",
          "size_bytes"
        ],
        "properties": {
          "path": {
            "type": "string",
            "minLength": 1
          },
          "kind": {
            "type": "string",
            "minLength": 1
          },
          "role": {
            "type": "string",
            "minLength": 1
          },
          "exists": {
            "type": "boolean"
          },
          "size_bytes": {
            "type": "integer",
            "minimum": 0
          }
        }
      }
    },
    "directory_signals": {
      "type": "object",
      "required": [
        "source_dirs",
        "languages_or_stacks",
        "has_tests",
        "has_docs",
        "has_orp_config",
        "authority_surface_count"
      ],
      "properties": {
        "source_dirs": {
          "type": "array",
          "items": {
            "type": "string"
          }
        },
        "languages_or_stacks": {
          "type": "array",
          "items": {
            "type": "string"
          }
        },
        "has_tests": {
          "type": "boolean"
        },
        "has_docs": {
          "type": "boolean"
        },
        "has_orp_config": {
          "type": "boolean"
        },
        "authority_surface_count": {
          "type": "integer",
          "minimum": 0
        }
      }
    },
    "research_policy": {
      "type": "object",
      "required": [
        "default_timing",
        "provider_calls_are_explicit",
        "live_calls_require_execute",
        "call_moments"
      ],
      "properties": {
        "default_timing": {
          "type": "string",
          "minLength": 1
        },
        "provider_calls_are_explicit": {
          "type": "boolean"
        },
        "live_calls_require_execute": {
          "type": "boolean"
        },
        "secret_alias": {
          "type": "string"
        },
        "env_var": {
          "type": "string"
        },
        "call_moments": {
          "type": "array",
          "items": {
            "type": "object",
            "required": [
              "moment_id",
              "calls_api",
              "when"
            ],
            "properties": {
              "moment_id": {
                "type": "string",
                "minLength": 1
              },
              "calls_api": {
                "type": "boolean"
              },
              "lane": {
                "type": "string"
              },
              "model": {
                "type": "string"
              },
              "when": {
                "type": "string",
                "minLength": 1
              },
              "capability_note": {
                "type": "string"
              }
            }
          }
        },
        "skip_research_when": {
          "type": "array",
          "items": {
            "type": "string"
          }
        },
        "escalate_to_deep_research_when": {
          "type": "array",
          "items": {
            "type": "string"
          }
        }
      }
    },
    "evolution_policy": {
      "type": "object",
      "required": [
        "refresh_surfaces",
        "evolution_loop",
        "boundary"
      ]
    },
    "next_actions": {
      "type": "array",
      "items": {
        "type": "string"
      }
    },
    "notes": {
      "type": "array",
      "items": {
        "type": "string"
      }
    }
  }
}
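As a quick smoke test of the schema's top-level contract, the required keys can be checked with the standard library alone. This only verifies key presence, not full JSON Schema validation, and the instance values are illustrative placeholders rather than real ORP output:

```python
import json

# Top-level required keys, copied from the schema's "required" list.
REQUIRED = [
    "schema_version", "kind", "project", "initialized_at_utc",
    "refreshed_at_utc", "refresh_source", "authority_surfaces",
    "directory_signals", "research_policy", "evolution_policy",
    "next_actions", "notes",
]

# Minimal illustrative instance (placeholder values, not ORP output).
instance = json.loads("""{
  "schema_version": "1.0.0",
  "kind": "orp_project_context",
  "project": {"name": "demo", "root": "/tmp/demo"},
  "initialized_at_utc": "2025-01-01T00:00:00Z",
  "refreshed_at_utc": "2025-01-01T00:00:00Z",
  "refresh_source": "orp init",
  "authority_surfaces": [],
  "directory_signals": {},
  "research_policy": {},
  "evolution_policy": {},
  "next_actions": [],
  "notes": []
}""")

missing = [key for key in REQUIRED if key not in instance]
```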