stable-harness 0.0.4 → 0.0.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,32 +1,220 @@
  # stable-harness

+ [![npm](https://img.shields.io/npm/v/stable-harness)](https://www.npmjs.com/package/stable-harness)
+ [![license](https://img.shields.io/npm/l/stable-harness)](https://www.npmjs.com/package/stable-harness)
+
  Stable runtime and operator control plane for agent applications.

- `stable-harness` is not an agent execution framework. It loads a YAML-defined
- workspace, maps that workspace onto an upstream agent framework, and owns the
- production runtime surfaces around execution: requests, sessions, approvals,
- events, artifacts, memory lifecycle, recovery, governance, and protocol access.
+ `stable-harness` lets a team keep its chosen agent framework while adding the
+ production surfaces that real workspaces need: YAML inventory, runtime requests,
+ sessions, event traces, artifacts, memory lifecycle, governance hooks, recovery,
+ tool repair, and protocol access.
+
+ It is not another agent execution framework. Upstream frameworks own execution
+ semantics. Stable Harness owns the runtime boundary around them.
+
+ ## Why Use It
+
+ Agent frameworks are good at deciding what an agent should do next. Production
+ applications also need a stable layer that can be inspected, governed, resumed,
+ replayed, and called through predictable APIs.
+
+ Stable Harness gives you that layer without rewriting the backend:
+
+ - define agents, tools, models, memory, workflows, and protocol exposure in YAML
+ - run the same workspace through CLI, SDK, HTTP, and OpenAI-compatible clients
+ - keep DeepAgents and LangGraph behavior upstream-native through thin adapters
+ - validate and repair tool calls at the runtime gateway before execution
+ - observe upstream tool, planning, delegation, progress, memory, and artifact events
+ - keep downstream product logic in the workspace, not inside the framework
+
+ ## Install
+
+ ```bash
+ npm install stable-harness
+ ```
+
+ Stable Harness currently targets Node.js `>=24 <25`.
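To have npm enforce that range in an application that embeds Stable Harness, an `engines` field along these lines can go in the app's own `package.json` (a sketch based on the range stated above, not copied from the published manifest):

```json
{
  "engines": {
    "node": ">=24 <25"
  }
}
```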
+
+ Create a workspace without cloning this repo:

- The first backend target is DeepAgents. The adapter contract is intentionally
- thin: upstream frameworks own execution semantics; stable-harness owns runtime
- assembly and lifecycle.
+ ```bash
+ npx stable-harness init ./my-agent-app
+ stable-harness -w ./my-agent-app
+ stable-harness -w ./my-agent-app --agent orchestra --tool echo_tool --tool-args-json '{"value":"hello"}'
+ ```

  ## First Run

+ Clone the repo when developing the framework itself:
+
  ```bash
+ git clone git@github.com:botbotgo/stable-harness.git
+ cd stable-harness
  npm install
  npm run build
  npm run check:rules
- npm run test
+ npm test
  npm run example:minimal
  ```

+ Run an existing Stable Harness workspace:
+
+ ```bash
+ stable-harness -w ./examples/minimal-deepagents "hello stable harness"
+ ```
+
+ Inspect the workspace without running an agent:
+
+ ```bash
+ stable-harness -w ./examples/minimal-deepagents
+ stable-harness agent render orchestra -w ./examples/minimal-deepagents
+ stable-harness workflow render review-shell -w ./examples/minimal-deepagents
+ ```
+
+ Start the OpenAI-compatible facade:
+
+ ```bash
+ stable-harness start -w ./examples/minimal-deepagents --port 8642
+ ```
+
+ Then point compatible clients at:
+
+ ```text
+ http://127.0.0.1:8642/v1
+ ```
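Any client that speaks the OpenAI Chat Completions shape can target that base URL. As a minimal sketch (the request-building helper below is my own, and the model name is a placeholder; consult the facade docs for the fields it actually honors):

```typescript
// Build a Chat Completions-style request against an OpenAI-compatible base
// URL. Helper and model name are illustrative, not part of stable-harness.
export function buildChatCompletionsRequest(
  baseUrl: string,
  model: string,
  prompt: string,
): { url: string; method: string; headers: Record<string, string>; body: string } {
  return {
    url: `${baseUrl.replace(/\/+$/, "")}/chat/completions`,
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
    }),
  };
}

const request = buildChatCompletionsRequest(
  "http://127.0.0.1:8642/v1",
  "orchestra",
  "hello stable harness",
);
console.log(request.url); // http://127.0.0.1:8642/v1/chat/completions
```

Pass `url` and the rest of the record to `fetch` once the facade from the previous step is running.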
+
+ ## Embed In An App
+
+ ```ts
+ import { createStableHarnessRuntime } from "stable-harness";
+
+ const runtime = await createStableHarnessRuntime("/path/to/workspace");
+
+ const response = await runtime.request({
+   input: "Review the current release evidence.",
+   agentId: "orchestra",
+ });
+
+ console.log(response.output);
+ ```
+
+ The runtime also exposes `subscribe`, `inspect`, `getRun`, `listRequests`,
+ `listSessions`, `inspectRequest`, `cancel`, and `stop` so applications can build
+ operator workflows around the same execution surface.
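The shape of such an operator loop can be sketched against a structural slice of that surface (the `InMemoryRuntime` stand-in below is purely illustrative so the sketch runs without a workspace; only the method names come from the list above):

```typescript
type RuntimeEvent = { requestId: string; label: string };

// Structural slice of the surface listed above; illustrative only.
interface OperatorSurface {
  subscribe(listener: (event: RuntimeEvent) => void): void;
  request(input: { input: string; agentId?: string }): Promise<{ requestId: string; output: string }>;
  listRequests(): string[];
}

// Toy stand-in so the operator loop is runnable without a real workspace.
class InMemoryRuntime implements OperatorSurface {
  private listeners: Array<(event: RuntimeEvent) => void> = [];
  private requestIds: string[] = [];

  subscribe(listener: (event: RuntimeEvent) => void): void {
    this.listeners.push(listener);
  }

  async request(input: { input: string; agentId?: string }): Promise<{ requestId: string; output: string }> {
    const requestId = `req-${this.requestIds.length + 1}`;
    this.requestIds.push(requestId);
    for (const listener of this.listeners) listener({ requestId, label: "completed" });
    return { requestId, output: `echo: ${input.input}` };
  }

  listRequests(): string[] {
    return [...this.requestIds];
  }
}

async function main(): Promise<void> {
  const runtime: OperatorSurface = new InMemoryRuntime();
  const events: RuntimeEvent[] = [];
  runtime.subscribe((event) => events.push(event)); // operator-side event feed

  const response = await runtime.request({ input: "hello", agentId: "orchestra" });
  console.log(response.requestId, runtime.listRequests().length, events.length);
  // req-1 1 1
}
main();
```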
+
+ ## Workspace Shape
+
+ A workspace is a folder with Kubernetes-style YAML documents:
+
+ ```text
+ config/
+   runtime/workspace.yaml
+   agents/orchestra.yaml
+   catalogs/models.yaml
+   catalogs/tools.yaml
+   workflows/review-shell.yaml
+ resources/
+   tools/
+   skills/
+ ```
+
+ Minimal runtime:
+
+ ```yaml
+ apiVersion: stable-harness.dev/v1
+ kind: Runtime
+ metadata:
+   name: app-runtime
+ spec:
+   routing:
+     defaultAgentId: orchestra
+   protocols:
+     inProcess: true
+     openaiCompatible:
+       bearerToken: ${env:STABLE_HARNESS_API_KEY}
+ ```
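The `${env:NAME}` placeholder above resolves from the process environment when the workspace loads. A sketch of that substitution rule (my own helper, not the loader's actual implementation; the `:-default` fallback form is an assumption based on the model catalog that `stable-harness init` scaffolds):

```typescript
// Resolve ${env:NAME} and ${env:NAME:-default} placeholders in a config
// string. Illustrative helper only, not the loader's implementation.
export function resolveEnvPlaceholders(
  value: string,
  env: Record<string, string | undefined> = process.env,
): string {
  return value.replace(
    /\$\{env:([A-Z0-9_]+)(?::-([^}]*))?\}/g,
    (whole, name: string, fallback: string | undefined) =>
      env[name] ?? fallback ?? whole, // unresolved placeholders are kept as-is
  );
}

console.log(
  resolveEnvPlaceholders("${env:STABLE_HARNESS_MODEL:-gpt-4.1-mini}", {}),
); // gpt-4.1-mini
```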
+
+ Minimal agent:
+
+ ```yaml
+ apiVersion: stable-harness.dev/v1
+ kind: Agent
+ metadata:
+   name: orchestra
+ spec:
+   backend: deepagents
+   modelRef: local-dev
+   systemPrompt: You are a concise workspace agent.
+   tools:
+     - shell
+   subagents:
+     - reviewer
+ ```
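Entries under `tools:` resolve to modules in `resources/tools/`. Their module shape matches what `stable-harness init` scaffolds; the following is reproduced (lightly typed for illustration) from the published `echo_tool.mjs` scaffold:

```typescript
// Shape of a workspace tool module, mirroring the init scaffold's echo_tool:
// a description, a JSON-schema-style args schema, and an async invoke().
export const echo_tool = {
  description: "Echo input through the Stable Harness tool gateway.",
  schema: {
    type: "object",
    properties: { value: { type: "string" } },
    required: ["value"],
  },
  async invoke(args: { value: string }): Promise<string> {
    return JSON.stringify({ echoed: args.value });
  },
};

echo_tool.invoke({ value: "hello" }).then((out) => console.log(out));
// {"echoed":"hello"}
```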
+
+ ## Runtime Boundary
+
+ ```mermaid
+ flowchart LR
+   Client["CLI / SDK / HTTP / OpenAI-compatible client"]
+   Runtime["Stable Harness runtime"]
+   Inventory["YAML workspace inventory"]
+   Gateway["Tool gateway + repair policy"]
+   Adapter["Thin backend adapter"]
+   Backend["DeepAgents / LangGraph / future backend"]
+   Ops["Events / runs / memory / approvals / artifacts"]
+
+   Client --> Runtime
+   Inventory --> Runtime
+   Runtime --> Gateway
+   Runtime --> Adapter
+   Adapter --> Backend
+   Runtime --> Ops
+ ```
+
+ Stable Harness owns lifecycle, governance, observability, persistence, recovery,
+ protocol access, and tool-gateway policy. It does not infer routing from user
+ keywords, synthesize upstream planning calls, or rebuild backend-native agent
+ semantics.
+
+ ## Current Backends
+
+ | Backend | Status | Boundary |
+ | --- | --- | --- |
+ | DeepAgents | Primary adapter | Upstream execution, skills, planning, delegation, and built-ins are passed through; Stable Harness observes and governs the runtime edge. |
+ | LangGraph | Runtime and workflow adapter | Stable Harness can compile explicit workflow topology and expose LangGraph-compatible protocol surfaces. |
+ | Custom adapters | Supported through SDK | Implement `RuntimeAdapter` and declare the backend in workspace YAML. |
+
+ ## Tool Reliability
+
+ Stable Harness uses `@botbotgo/better-call` at the tool-gateway boundary. The
+ default CLI path configures repair mode for registered tools, so malformed or
+ near-miss tool calls can be repaired before execution while agent inventory,
+ schema validation, semantic validators, and governance policy still define what
+ is allowed.
+
+ This is constrained repair, not silent magic:
+
+ - unknown or unauthorized tools stay blocked
+ - semantic validators remain authoritative
+ - upstream built-ins stay upstream-owned
+ - repaired calls are observable through runtime events and traces
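The blocked-versus-repaired distinction can be illustrated with a small toy (my own sketch, not `@botbotgo/better-call`'s actual behavior): a near-miss tool name is normalized against the registered inventory, while an unknown tool is rejected rather than guessed at.

```typescript
type ToolCall = { toolId: string; args: unknown };
type GatewayDecision =
  | { kind: "execute"; toolId: string; repaired: boolean }
  | { kind: "blocked"; reason: string };

// Toy constrained-repair gate: only exact or case/separator-normalized
// matches against the registered inventory pass; everything else is blocked.
export function gateToolCall(
  registered: string[],
  call: ToolCall,
): GatewayDecision {
  if (registered.includes(call.toolId)) {
    return { kind: "execute", toolId: call.toolId, repaired: false };
  }
  const normalize = (id: string) => id.toLowerCase().replace(/[-\s]/g, "_");
  const match = registered.find((id) => normalize(id) === normalize(call.toolId));
  if (match) {
    // Repaired calls stay observable: callers can log `repaired: true`.
    return { kind: "execute", toolId: match, repaired: true };
  }
  return { kind: "blocked", reason: `unknown tool: ${call.toolId}` };
}

console.log(gateToolCall(["echo_tool"], { toolId: "Echo-Tool", args: {} }));
// executes echo_tool with repaired: true
```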
+
+ ## Protocols
+
+ - OpenAI-compatible facade: [docs/protocols/openai-compatible.md](docs/protocols/openai-compatible.md)
+ - LangGraph-compatible facade: [docs/protocols/langgraph-compatible.md](docs/protocols/langgraph-compatible.md)
+ - HTTP runtime protocol: [docs/protocols/http-runtime.md](docs/protocols/http-runtime.md)
+
  ## Product Boundary

- See:
+ Read these before adding public runtime behavior:

  - [Product boundary](docs/product-boundary.md)
  - [Compatibility matrix](docs/compatibility-matrix.md)
  - [Implementation blueprint](docs/implementation-blueprint.md)
  - [Engineering rules](docs/engineering-rules.md)
  - [Adapter contract](docs/adapter-contract.md)
+
+ The short rule: pass through upstream execution semantics first, then add only
+ small, typed, replaceable runtime capabilities around them.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "stable-harness",
-   "version": "0.0.4",
+   "version": "0.0.6",
    "type": "module",
    "description": "Stable application runtime and operator control plane for agent workspaces.",
    "license": "MIT",
@@ -1,6 +1,6 @@
  export type CliArgs = {
    workspaceRoot: string;
-   command: "request" | "start" | "stop";
+   command: "request" | "start" | "stop" | "init";
    workflowRenderId?: string;
    workflowInspectId?: string;
    workflowRunId?: string;
@@ -1 +1 @@
- export function parseArgs(e){const r=function createDefaultArgs(){return{workspaceRoot:process.cwd(),command:"request",toolArgs:void 0,trace:!1,traceJson:!1,progress:!1,serveOpenAi:!1,host:process.env.STABLE_HARNESS_OPENAI_HOST,port:process.env.STABLE_HARNESS_OPENAI_PORT?Number(process.env.STABLE_HARNESS_OPENAI_PORT):void 0,apiKey:process.env.STABLE_HARNESS_OPENAI_API_KEY,timeoutMs:Number(process.env.STABLE_HARNESS_CLI_TIMEOUT_MS??3e5),help:!1,prompt:"",shouldRunRequest:!1}}(),t=[];for(let o=0;o<e.length;o+=1)o=parseOneArg(e,o,r,t);return{...r,prompt:t.join(" "),shouldRunRequest:Boolean(r.toolId||r.workflowRunId||t.length>0)}}function parseOneArg(e,r,t,o){const n=function readNextArg(e,r){return{index:r+1,value:e[r+1]}}(e,r);if(0===o.length&&function parseTopLevelCommand(e,r,t){return"start"===e[r]?(t.command="start",t.serveOpenAi=!0,!0):"stop"===e[r]?(t.command="stop",!0):"workflow"!==e[r]||"render"!==e[r+1]&&"inspect"!==e[r+1]?"agent"===e[r]&&"render"===e[r+1]&&(Object.assign(t,function parseAgentCommand(e,r){if("render"===e[r+1])return{index:r+2,agentRenderId:e[r+2]};throw new Error("Usage: stable-harness agent render <agent-id>")}(e,r)),!0):(Object.assign(t,function parseWorkflowCommand(e,r){if("render"===e[r+1])return{index:r+2,workflowRenderId:e[r+2],workflowInspectId:void 0};if("inspect"===e[r+1])return{index:r+2,workflowRenderId:void 0,workflowInspectId:e[r+2]};throw new Error("Usage: stable-harness workflow <render|inspect> <workflow-id>")}(e,r)),!0)}(e,r,t))return function stateIndex(e,r){return"workflow"===e[r]||"agent"===e[r]?r+2:r}(e,r);if("-h"===e[r]||"--help"===e[r])t.help=!0;else if("start"===t.command&&function isProtocolName(e){return"openai"===e||"openai-compatible"===e}(e[r]))t.serveOpenAi=!0;else{if("-w"===e[r]||"--workspace"===e[r])return setString(n,t,"workspaceRoot");if("--agent"===e[r])return setString(n,t,"agentId");if("--workflow"===e[r])return setString(n,t,"workflowRunId");if("--session-id"===e[r])return 
setString(n,t,"sessionId");if("--tool"===e[r])return setString(n,t,"toolId");if("--tool-args-json"===e[r])return t.toolArgs=function parseJsonArg(e){try{return JSON.parse(e)}catch(e){const r=e instanceof Error?e.message:String(e);throw new Error(`Invalid --tool-args-json value: ${r}`)}}(n.value??"{}"),n.index;if("--trace"===e[r])t.trace=!0;else if("--trace-json"===e[r])t.traceJson=!0;else if("--progress"===e[r])t.progress=!0;else if("--serve-openai"===e[r])t.serveOpenAi=!0;else{if("--host"===e[r])return setString(n,t,"host");if("--port"===e[r])return t.port=Number(n.value??t.port),n.index;if("--api-key"===e[r])return setString(n,t,"apiKey");if("--timeout-ms"===e[r])return t.timeoutMs=Number(n.value??t.timeoutMs),n.index;o.push(e[r])}}return r}function setString(e,r,t){return"string"==typeof e.value&&Object.assign(r,{[t]:e.value}),e.index}export function helpText(){return["Usage:"," stable-harness -w <workspace> [--agent <id>] [prompt]"," stable-harness workflow render <workflow-id> -w <workspace>"," stable-harness workflow inspect <workflow-id> -w <workspace>"," stable-harness agent render <agent-id> -w <workspace>"," stable-harness start -w <workspace>"," stable-harness stop -w <workspace>","","Options:"," -w, --workspace <path> Workspace root."," --serve-openai Legacy alias for start."," --agent <id> Select an agent for a request."," --workflow <id> Run a configured workflow."," --session-id <id> Attach the request to an existing runtime session."," --tool <id> Invoke an explicit registered tool."," --tool-args-json <json> Tool arguments for --tool."," --trace Print trace lines."," --trace-json Print trace JSON."," --progress Legacy alias; CLI events are controlled by runtime.cli.events."," --timeout-ms <ms> Request timeout."," -h, --help Show this help.",""].join("\n")}
+ export function parseArgs(e){const r=function createDefaultArgs(){return{workspaceRoot:process.cwd(),command:"request",toolArgs:void 0,trace:!1,traceJson:!1,progress:!1,serveOpenAi:!1,host:process.env.STABLE_HARNESS_OPENAI_HOST,port:process.env.STABLE_HARNESS_OPENAI_PORT?Number(process.env.STABLE_HARNESS_OPENAI_PORT):void 0,apiKey:process.env.STABLE_HARNESS_OPENAI_API_KEY,timeoutMs:Number(process.env.STABLE_HARNESS_CLI_TIMEOUT_MS??3e5),help:!1,prompt:"",shouldRunRequest:!1}}(),t=[];for(let n=0;n<e.length;n+=1)n=parseOneArg(e,n,r,t);return{...r,prompt:t.join(" "),shouldRunRequest:Boolean(r.toolId||r.workflowRunId||t.length>0)}}function parseOneArg(e,r,t,n){const o=function readNextArg(e,r){return{index:r+1,value:e[r+1]}}(e,r);if(0===n.length&&function parseTopLevelCommand(e,r,t){return"start"===e[r]?(t.command="start",t.serveOpenAi=!0,!0):"stop"===e[r]?(t.command="stop",!0):"init"===e[r]?(t.command="init",!0):"workflow"!==e[r]||"render"!==e[r+1]&&"inspect"!==e[r+1]?"agent"===e[r]&&"render"===e[r+1]&&(Object.assign(t,function parseAgentCommand(e,r){if("render"===e[r+1])return{index:r+2,agentRenderId:e[r+2]};throw new Error("Usage: stable-harness agent render <agent-id>")}(e,r)),!0):(Object.assign(t,function parseWorkflowCommand(e,r){if("render"===e[r+1])return{index:r+2,workflowRenderId:e[r+2],workflowInspectId:void 0};if("inspect"===e[r+1])return{index:r+2,workflowRenderId:void 0,workflowInspectId:e[r+2]};throw new Error("Usage: stable-harness workflow <render|inspect> <workflow-id>")}(e,r)),!0)}(e,r,t))return function stateIndex(e,r){return"workflow"===e[r]||"agent"===e[r]?r+2:r}(e,r);if("-h"===e[r]||"--help"===e[r])t.help=!0;else if("start"===t.command&&function isProtocolName(e){return"openai"===e||"openai-compatible"===e}(e[r]))t.serveOpenAi=!0;else{if("-w"===e[r]||"--workspace"===e[r])return setString(o,t,"workspaceRoot");if("--agent"===e[r])return setString(o,t,"agentId");if("--workflow"===e[r])return 
setString(o,t,"workflowRunId");if("--session-id"===e[r])return setString(o,t,"sessionId");if("--tool"===e[r])return setString(o,t,"toolId");if("--tool-args-json"===e[r])return t.toolArgs=function parseJsonArg(e){try{return JSON.parse(e)}catch(e){const r=e instanceof Error?e.message:String(e);throw new Error(`Invalid --tool-args-json value: ${r}`)}}(o.value??"{}"),o.index;if("--trace"===e[r])t.trace=!0;else if("--trace-json"===e[r])t.traceJson=!0;else if("--progress"===e[r])t.progress=!0;else if("--serve-openai"===e[r])t.serveOpenAi=!0;else{if("--host"===e[r])return setString(o,t,"host");if("--port"===e[r])return t.port=Number(o.value??t.port),o.index;if("--api-key"===e[r])return setString(o,t,"apiKey");if("--timeout-ms"===e[r])return t.timeoutMs=Number(o.value??t.timeoutMs),o.index;n.push(e[r])}}return r}function setString(e,r,t){return"string"==typeof e.value&&Object.assign(r,{[t]:e.value}),e.index}export function helpText(){return["Usage:"," stable-harness -w <workspace> [--agent <id>] [prompt]"," stable-harness workflow render <workflow-id> -w <workspace>"," stable-harness workflow inspect <workflow-id> -w <workspace>"," stable-harness agent render <agent-id> -w <workspace>"," stable-harness init [workspace]"," stable-harness start -w <workspace>"," stable-harness stop -w <workspace>","","Options:"," -w, --workspace <path> Workspace root."," --serve-openai Legacy alias for start."," --agent <id> Select an agent for a request."," --workflow <id> Run a configured workflow."," --session-id <id> Attach the request to an existing runtime session."," --tool <id> Invoke an explicit registered tool."," --tool-args-json <json> Tool arguments for --tool."," --trace Print trace lines."," --trace-json Print trace JSON."," --progress Legacy alias; CLI events are controlled by runtime.cli.events."," --timeout-ms <ms> Request timeout."," -h, --help Show this help.",""].join("\n")}
@@ -1,2 +1,2 @@
  #!/usr/bin/env node
- import{realpathSync as e}from"node:fs";import{fileURLToPath as t}from"node:url";import{createDeepAgentsAdapter as r}from"@stable-harness/adapter-deepagents";import{createLangGraphWorkflowAdapter as o}from"@stable-harness/adapter-langgraph";import{createStableHarnessRuntime as s}from"@stable-harness/core";import{projectEvent as a,projectRuntimeTrace as i}from"@stable-harness/core";import{createModuleToolGateway as n}from"@stable-harness/tool-gateway";import{loadWorkspaceFromYaml as u}from"@stable-harness/workspace-yaml";import{helpText as l,parseArgs as d}from"./args.js";import{formatCliRuntimeEvent as p,readCliEventViewConfig as c,shouldEnableCliProgressNarration as w}from"./event-view.js";import{ensureCliMemoryServices as f}from"./memory/lifecycle.js";import{createCliMemoryProviders as m}from"./memory/providers.js";import{formatDetail as g,inspectWorkflow as I,renderAgent as v,renderWorkflow as y,workspaceStatus as k}from"./output.js";import{serveProtocol as h,stopProtocol as b}from"./server.js";export async function runCli(e=process.argv.slice(2)){const t=d(e);if(t.help)return void process.stdout.write(l());const o=setTimeout(()=>{process.stderr.write(`stable-harness request timed out after ${t.timeoutMs}ms\n`),process.exit(124)},t.timeoutMs),q=t.workspaceRoot;try{const e=await u(q);if(t.workflowRenderId)return void process.stdout.write(y(e,t.workflowRenderId));if(t.workflowInspectId)return void process.stdout.write(I(e,t.workflowInspectId));if(t.agentRenderId)return void process.stdout.write(v(e,t.agentRenderId));if("stop"===t.command)return clearTimeout(o),void await b(e,t);const l=await n({tools:e.tools.values(),options:{betterCall:{mode:"repair"}}});await f(e);const d=m(e),R=c(e.runtime);let C;if(C=s({workspace:e,toolGateway:l,memoryProviders:d,adapters:[r()],workflowAdapters:[createCliWorkflowAdapter(l,()=>C)],progressNarration:w(R,e.runtime)?{enabled:!0,style:"cli"}:void 0}),t.serveOpenAi)return clearTimeout(o),void await 
h(C,t);if(!t.shouldRunRequest)return void process.stdout.write(k(e,q));t.trace&&C.subscribe(e=>{const t=a(e);t&&process.stdout.write(`trace:${t.agentId}:${t.label}${g(t.detail)}\n`)}),C.subscribe(e=>{const t=p(e,R);t&&process.stdout.write(`${t}\n`)});const $=await C.request({input:t.prompt,agentId:t.agentId,sessionId:t.sessionId,toolCall:t.toolId?{toolId:t.toolId,args:t.toolArgs}:void 0,workflow:t.workflowRunId?{workflowId:t.workflowRunId,input:t.prompt}:void 0});if(t.trace||t.traceJson){const e=C.getRun($.requestId),r=e?i(e):[];t.traceJson&&process.stdout.write(`${JSON.stringify({trace:r})}\n`)}process.stdout.write(`${$.output}\n`)}finally{clearTimeout(o)}}function createCliWorkflowAdapter(e,t){return o({nodeResolvers:{tools:async({id:t,node:r,request:o,requestId:s,sessionId:a,state:i,workspace:n})=>{return(await e.invoke({toolId:t,args:(u=r.config,l=o.input,d=i.outputs,!0===u?.inputFromState?{...u,requestInput:l,outputs:d}:u&&"requiredInput"in u?u.requiredInput:u&&("args"in u||"cwd"in u||"timeoutMs"in u)?u:"object"==typeof l&&null!==l?l:{}),context:{workspaceRoot:n.root,requestId:s,sessionId:a,agentId:`workflow:${r.id}`}})).output;var u,l,d},agents:async({id:e,node:r,request:o,sessionId:s,state:a})=>{var i,n,u,l;return(await t().request({input:(i=e,n=o.input,u=a.outputs,l=r.config,[`Workflow node agents.${i}: synthesize the workflow evidence into the requested final output.`,`Original request: ${"string"==typeof n?n:JSON.stringify(n)}`,"Requirements:","- Produce the final answer now; do not ask follow-up questions.","- Match the original request language unless workflow config explicitly says otherwise.","- Use only the workflow outputs as evidence; call out uncertainty directly.",...l?[`Workflow node config: ${JSON.stringify(l)}`]:[],"Prior workflow outputs:",JSON.stringify(u)].join("\n")),agentId:e,sessionId:s,metadata:o.metadata})).output}}})}(function isCliEntrypoint(){const r=process.argv[1];if(!r)return!1;try{return 
e(t(import.meta.url))===e(r)}catch{return!1}})()&&runCli().catch(e=>{process.stderr.write(`${e instanceof Error?e.message:String(e)}\n`),process.exitCode=1});
+ import{realpathSync as e}from"node:fs";import{fileURLToPath as t}from"node:url";import{createDeepAgentsAdapter as r}from"@stable-harness/adapter-deepagents";import{createLangGraphWorkflowAdapter as o}from"@stable-harness/adapter-langgraph";import{createStableHarnessRuntime as s}from"@stable-harness/core";import{projectEvent as a,projectRuntimeTrace as i}from"@stable-harness/core";import{createModuleToolGateway as n}from"@stable-harness/tool-gateway";import{loadWorkspaceFromYaml as u}from"@stable-harness/workspace-yaml";import{helpText as l,parseArgs as d}from"./args.js";import{formatCliRuntimeEvent as p,readCliEventViewConfig as c,shouldEnableCliProgressNarration as m}from"./event-view.js";import{initWorkspace as w}from"./init.js";import{ensureCliMemoryServices as f}from"./memory/lifecycle.js";import{createCliMemoryProviders as g}from"./memory/providers.js";import{formatDetail as v,inspectWorkflow as I,renderAgent as k,renderWorkflow as y,workspaceStatus as h}from"./output.js";import{serveProtocol as b,stopProtocol as q}from"./server.js";export async function runCli(e=process.argv.slice(2)){const t=d(e);if(t.help)return void process.stdout.write(l());const o=setTimeout(()=>{process.stderr.write(`stable-harness request timed out after ${t.timeoutMs}ms\n`),process.exit(124)},t.timeoutMs),R=t.workspaceRoot;try{if("init"===t.command)return void process.stdout.write(await w(t.prompt||R));const e=await u(R);if(t.workflowRenderId)return void process.stdout.write(y(e,t.workflowRenderId));if(t.workflowInspectId)return void process.stdout.write(I(e,t.workflowInspectId));if(t.agentRenderId)return void process.stdout.write(k(e,t.agentRenderId));if("stop"===t.command)return clearTimeout(o),void await q(e,t);const l=await n({tools:e.tools.values(),options:{betterCall:{mode:"repair"}}});await f(e);const d=g(e),C=c(e.runtime);let 
$;if($=s({workspace:e,toolGateway:l,memoryProviders:d,adapters:[r()],workflowAdapters:[createCliWorkflowAdapter(l,()=>$)],progressNarration:m(C,e.runtime)?{enabled:!0,style:"cli"}:void 0}),t.serveOpenAi)return clearTimeout(o),void await b($,t);if(!t.shouldRunRequest)return void process.stdout.write(h(e,R));t.trace&&$.subscribe(e=>{const t=a(e);t&&process.stdout.write(`trace:${t.agentId}:${t.label}${v(t.detail)}\n`)}),$.subscribe(e=>{const t=p(e,C);t&&process.stdout.write(`${t}\n`)});const j=await $.request({input:t.prompt,agentId:t.agentId,sessionId:t.sessionId,toolCall:t.toolId?{toolId:t.toolId,args:t.toolArgs}:void 0,workflow:t.workflowRunId?{workflowId:t.workflowRunId,input:t.prompt}:void 0});if(t.trace||t.traceJson){const e=$.getRun(j.requestId),r=e?i(e):[];t.traceJson&&process.stdout.write(`${JSON.stringify({trace:r})}\n`)}process.stdout.write(`${j.output}\n`)}finally{clearTimeout(o)}}function createCliWorkflowAdapter(e,t){return o({nodeResolvers:{tools:async({id:t,node:r,request:o,requestId:s,sessionId:a,state:i,workspace:n})=>{return(await e.invoke({toolId:t,args:(u=r.config,l=o.input,d=i.outputs,!0===u?.inputFromState?{...u,requestInput:l,outputs:d}:u&&"requiredInput"in u?u.requiredInput:u&&("args"in u||"cwd"in u||"timeoutMs"in u)?u:"object"==typeof l&&null!==l?l:{}),context:{workspaceRoot:n.root,requestId:s,sessionId:a,agentId:`workflow:${r.id}`}})).output;var u,l,d},agents:async({id:e,node:r,request:o,sessionId:s,state:a})=>{var i,n,u,l;return(await t().request({input:(i=e,n=o.input,u=a.outputs,l=r.config,[`Workflow node agents.${i}: synthesize the workflow evidence into the requested final output.`,`Original request: ${"string"==typeof n?n:JSON.stringify(n)}`,"Requirements:","- Produce the final answer now; do not ask follow-up questions.","- Match the original request language unless workflow config explicitly says otherwise.","- Use only the workflow outputs as evidence; call out uncertainty directly.",...l?[`Workflow node config: 
${JSON.stringify(l)}`]:[],"Prior workflow outputs:",JSON.stringify(u)].join("\n")),agentId:e,sessionId:s,metadata:o.metadata})).output}}})}(function isCliEntrypoint(){const r=process.argv[1];if(!r)return!1;try{return e(t(import.meta.url))===e(r)}catch{return!1}})()&&runCli().catch(e=>{process.stderr.write(`${e instanceof Error?e.message:String(e)}\n`),process.exitCode=1});
@@ -0,0 +1 @@
+ export declare function initWorkspace(targetRoot: string): Promise<string>;
@@ -0,0 +1 @@
+ import{mkdir as e,writeFile as o}from"node:fs/promises";import a from"node:path";export async function initWorkspace(o){const t=a.resolve(o||".");return await e(a.join(t,"config","agents"),{recursive:!0}),await e(a.join(t,"config","catalogs"),{recursive:!0}),await e(a.join(t,"config","runtime"),{recursive:!0}),await e(a.join(t,"resources","tools"),{recursive:!0}),await Promise.all([writeScaffoldFile(a.join(t,"config","runtime","workspace.yaml"),["apiVersion: stable-harness.dev/v1","kind: Runtime","metadata:"," name: app-runtime","spec:"," routing:"," defaultAgentId: orchestra"," protocols:"," inProcess: true"," openaiCompatible:"," host: 127.0.0.1"," port: 8642",""].join("\n")),writeScaffoldFile(a.join(t,"config","agents","orchestra.yaml"),["apiVersion: stable-harness.dev/v1","kind: Agent","metadata:"," name: orchestra","spec:"," backend: deepagents"," modelRef: local-dev"," systemPrompt: You are a concise workspace agent."," tools:"," - echo_tool"," subagents: []",""].join("\n")),writeScaffoldFile(a.join(t,"config","catalogs","models.yaml"),["apiVersion: stable-harness.dev/v1","kind: Model","metadata:"," name: local-dev","spec:"," provider: openai-compatible"," model: ${env:STABLE_HARNESS_MODEL:-gpt-4.1-mini}"," baseUrl: ${env:STABLE_HARNESS_OPENAI_BASE_URL:-https://api.openai.com/v1}"," apiKey: ${env:OPENAI_API_KEY}",""].join("\n")),writeScaffoldFile(a.join(t,"resources","tools","echo_tool.mjs"),["export const echo_tool = {"," description: 'Echo input through the Stable Harness tool gateway.',"," schema: {"," type: 'object',"," properties: { value: { type: 'string' } },"," required: ['value'],"," },"," async invoke(args) {"," return JSON.stringify({ echoed: args.value });"," },","};",""].join("\n"))]),[`Initialized Stable Harness workspace at ${t}`,"","Try:",` stable-harness -w ${shellPath(t)}`,` stable-harness -w ${shellPath(t)} --agent orchestra --tool echo_tool --tool-args-json '{"value":"hello"}'`,""].join("\n")}async function 
writeScaffoldFile(e,a){try{await o(e,a,{flag:"wx"})}catch(o){if(function isFileExists(e){return e instanceof Error&&"code"in e&&"EEXIST"===e.code}(o))throw new Error(`Refusing to overwrite existing scaffold file: ${e}`);throw o}}function shellPath(e){return JSON.stringify(e)}