@ryanfw/prompt-orchestration-pipeline 0.0.1

package/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2025 Ryan Mahoney
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,290 @@
+ # Pipeline Orchestrator (Prompt‑Orchestration Pipeline)
+
+ A **prompt‑orchestration pipeline (POP)** is a framework for building, running, and experimenting with complex chains of LLM tasks.
+
+ Instead of relying on a single mega‑prompt, a pipeline decomposes work into stages, applies targeted transformations, validates outputs, and composes multiple model calls into a repeatable workflow.
+
+ This repository provides a reference implementation of a prompt‑orchestration pipeline that can be consumed as an npm package by other Node.js projects. It is intentionally lightweight: just enough orchestration to run complex pipelines, inspect intermediate artifacts, and evolve new strategies.
+
+ ---
+
+ ## Why it matters
+
+ Single‑prompt strategies are fragile:
+
+ - Inputs must fit within a single context window.
+ - Instructions and examples compete for limited space.
+ - Quality control is all‑or‑nothing.
+
+ A prompt‑orchestration pipeline changes the game:
+
+ - **Chained reasoning** – break down complex problems into sequential tasks.
+ - **Context compression & stacking** – condense outputs into artifacts that feed the next stage.
+ - **Multi‑model strategies** – route subtasks to the most appropriate model (fast vs. large, cheap vs. accurate).
+ - **Validation loops** – enforce structure, apply quality checks, and retry when needed.
+ - **Experimentation** – swap tasks in and out to try new ideas without rewriting the whole system.
+
+ The result: workflows that are **more robust, interpretable, and capable** than any single prompt.
+
+ ---
+
+ ## Architecture (conceptual)
+
+ A prompt‑orchestration pipeline has **two layers**:
+
+ ### 1) Pipeline orchestration (outer layer)
+
+ The outer pipeline manages runs, state, and isolation. It is responsible for:
+
+ - Assigning a pipeline run ID to each new submission.
+ - Creating predictable directories for pending seeds, active runs, and completed runs.
+ - Spawning isolated processes for each task (so one failure doesn’t crash others).
+ - Tracking progress in a run‑scoped status file.
+ - Promoting completed runs into a repository of results with audit metadata.
+
+ **Runtime directories (in the consuming project):**
+
+ ```
+ my-project/
+ └── pipeline-data/
+     ├── pending/    # queue seeds here (e.g., *.json)
+     ├── current/    # active run state (auto‑managed)
+     └── complete/   # archived runs (auto‑managed)
+ ```
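The directory layout above mirrors the package's internal `createPaths` helper (in `src/api/index.js`), which derives every runtime location from a root directory. A minimal reproduction of that derivation:

```javascript
import path from "node:path";

// Mirrors the createPaths helper in src/api/index.js: every runtime
// location is joined from rootDir plus two configurable directory names.
const createPaths = ({
  rootDir,
  dataDir = "pipeline-data",
  configDir = "pipeline-config",
}) => ({
  pending: path.join(rootDir, dataDir, "pending"),
  current: path.join(rootDir, dataDir, "current"),
  complete: path.join(rootDir, dataDir, "complete"),
  pipeline: path.join(rootDir, configDir, "pipeline.json"),
  tasks: path.join(rootDir, configDir, "tasks"),
});

console.log(createPaths({ rootDir: "my-project" }));
```

Because only `rootDir` is required, a consumer can relocate `pipeline-data/` and `pipeline-config/` without touching any other configuration.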
+
+ **High‑level flow**
+
+ ```mermaid
+ flowchart TD
+   A["pipeline-data/pending/*-seed.json"] --> B[Orchestrator]
+   B --> C["create pipeline-data/current/<id>/seed.json"]
+   B --> D["init pipeline-data/current/<id>/tasks-status.json"]
+   B --> E[Read pipeline-config/pipeline.json]
+   E --> F[Spawn task runner]
+   F --> G["write tasks/<task>/letter.json"]
+   G --> H[Run task inner pipeline]
+   H --> I["write tasks/<task>/output.json"]
+   I --> J[Update tasks-status.json]
+   J --> K{More tasks?}
+   K -->|yes| F
+   K -->|no| L[Promote to complete]
+   L --> M["pipeline-data/complete/<id>/**"]
+   L --> N["append pipeline-data/complete/runs.jsonl"]
+ ```
+
+ ### 2) Task orchestration (inner layer)
+
+ Each pipeline step runs through a **task runner** that executes canonical sub‑steps:
+
+ 1. **Ingestion** – retrieve existing data or context.
+ 2. **Pre‑processing** – compress or transform input to fit model constraints.
+ 3. **Prompt templating** – assemble the instruction.
+ 4. **Inference** – run the model call(s).
+ 5. **Parsing** – normalize outputs into structured form.
+ 6. **Validation** – check schema, quality, and semantic correctness.
+ 7. **Critique & refinement** – generate hints, re‑prompt, and retry if needed.
+ 8. **Finalization** – confirm valid output and persist artifacts.
+
+ ```mermaid
+ flowchart TD
+   S[Start task] --> I1[Ingestion]
+   I1 --> P1[Pre‑processing]
+   P1 --> T1[Prompt templating]
+   T1 --> INF[Inference]
+   INF --> PAR[Parsing]
+   PAR --> VS[Validate structure]
+   VS -->|ok| VQ[Validate quality]
+   VS -->|fail| ERR[Fail task and log]
+   VQ -->|ok| FIN[Finalize & persist]
+   VQ -->|fail| HINTS[Critique & hints]
+   HINTS --> T1
+   FIN --> DONE[Done]
+   ERR --> DONE
+ ```
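The validate → critique → retry loop in the diagram above can be sketched as a short control function. The hook names (`runInference`, `validate`, `critique`) are hypothetical stand‑ins, not the package's actual API:

```javascript
// Hypothetical sketch of the inner validation loop. Hook names are
// illustrative; the real task runner lives in src/core/task-runner.js.
async function runWithRetries({ runInference, validate, critique, maxAttempts = 3 }) {
  let hints = null;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const output = await runInference(hints); // re-prompt with hints on retries
    const result = validate(output);          // structure + quality checks
    if (result.ok) return output;             // finalize & persist
    hints = critique(output, result);         // feed hints back into templating
  }
  throw new Error("Task failed validation after max attempts");
}
```

The key property is that critique output flows back into prompt templating, so each retry is informed rather than blind.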
+
+ ---
+
+ ## Section A — Library (this package)
+
+ ### Repository layout
+
+ ```
+ @ryanfw/prompt-orchestration-pipeline/
+ ├── src/
+ │   ├── core/
+ │   │   ├── task-runner.js       # Core pipeline execution
+ │   │   ├── pipeline-runner.js   # Pipeline management
+ │   │   └── orchestrator.js      # Workflow orchestration
+ │   ├── cli/
+ │   │   └── index.js             # CLI entry point
+ │   ├── api/
+ │   │   └── index.js             # Programmatic API
+ │   └── ui/
+ │       └── server.js            # Optional UI server
+ ├── bin/
+ │   └── pipeline-orchestrator    # CLI executable
+ ├── package.json
+ └── README.md
+ ```
+
+ ### Package exports & CLI
+
+ ```json
+ {
+   "name": "@ryanfw/prompt-orchestration-pipeline",
+   "version": "1.0.0",
+   "type": "module",
+   "exports": {
+     ".": "./src/api/index.js",
+     "./cli": "./src/cli/index.js",
+     "./runner": "./src/core/task-runner.js"
+   },
+   "bin": {
+     "pipeline-orchestrator": "./bin/pipeline-orchestrator"
+   },
+   "dependencies": {
+     "chokidar": "^3.5.3",
+     "commander": "^11.0.0",
+     "express": "^4.18.0"
+   }
+ }
+ ```
+
+ - **CLI name:** `pipeline-orchestrator`
+ - **Programmatic API:** import from `@ryanfw/prompt-orchestration-pipeline` (see `src/api/index.js`).
+ - **Task runner (advanced):** `@ryanfw/prompt-orchestration-pipeline/runner`.
+
+ ---
+
+ ## Section B — Consuming project usage
+
+ ### Expected layout in a consumer project
+
+ ```
+ my-project/
+ ├── pipeline-config/
+ │   ├── pipeline.json        # Pipeline definition (ordered list of task IDs)
+ │   └── tasks/               # Task implementations
+ │       ├── index.js         # Task registry (maps task IDs → modules)
+ │       ├── task-a/
+ │       │   └── index.js
+ │       └── task-b/
+ │           └── index.js
+ ├── pipeline-data/           # Runtime directories (auto‑created/managed)
+ │   ├── pending/
+ │   ├── current/
+ │   └── complete/
+ ├── package.json
+ └── .pipelinerc.json         # Optional CLI config
+ ```
+
+ **`pipeline.json` (example)**
+
+ ```json
+ {
+   "tasks": ["task-a", "task-b"]
+ }
+ ```
+
+ **`pipeline-config/tasks/index.js` (example registry)**
+
+ ```js
+ // ESM registry mapping task IDs to loader functions or modules
+ export default {
+   "task-a": () => import("./task-a/index.js"),
+   "task-b": () => import("./task-b/index.js"),
+ };
+ ```
+
+ > The orchestrator resolves task IDs from `pipeline.json` using this registry.
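A task module referenced by the registry can export one function per canonical sub‑step. A minimal sketch, assuming the runner invokes named exports such as `ingestion` and `inference` (the hook names used by the `pipeline-orchestrator init` scaffold); the shape of `context` is illustrative:

```javascript
// pipeline-config/tasks/task-a/index.js — minimal task sketch.
// Hook names follow the example-task generated by `pipeline-orchestrator init`.
export async function ingestion(context) {
  // Pull whatever the seed provided; the `seed.input` field is hypothetical.
  return { data: context.seed?.input ?? "" };
}

export async function inference(context) {
  // Stand-in for a real model call.
  return { output: `processed: ${context.data}` };
}
```

Unimplemented sub‑steps (pre‑processing, parsing, validation, etc.) are simply omitted; each export receives the accumulated context from earlier stages.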
+
+ ### Install & scripts
+
+ Add the package and scripts to your consumer project:
+
+ ```json
+ {
+   "scripts": {
+     "pipeline": "pipeline-orchestrator start",
+     "pipeline:ui": "pipeline-orchestrator start --ui",
+     "pipeline:init": "pipeline-orchestrator init",
+     "pipeline:submit": "pipeline-orchestrator submit"
+   },
+   "dependencies": {
+     "@ryanfw/prompt-orchestration-pipeline": "^1.0.0"
+   }
+ }
+ ```
+
+ ### CLI overview
+
+ - **`pipeline-orchestrator init`** – scaffolds `pipeline-config/` and `pipeline-data/` if missing.
+ - **`pipeline-orchestrator start`** – starts the orchestrator; watches `pipeline-data/pending/` for new seeds and processes them according to `pipeline-config/pipeline.json`.
+ - **`pipeline-orchestrator start --ui`** – starts the orchestrator and the optional UI server.
+ - **`pipeline-orchestrator submit [path]`** – submits a seed into `pipeline-data/pending/` (the path can point to a JSON file).
+
+ > Run `pipeline-orchestrator --help` in your project for the most current flags.
+
+ ### Optional configuration: `.pipelinerc.json`
+
+ If present in the project root, this file can provide defaults for the CLI (e.g., custom locations). A minimal example:
+
+ ```json
+ {
+   "configDir": "./pipeline-config",
+   "dataDir": "./pipeline-data"
+ }
+ ```
+
+ _(Keys and defaults may vary by version; prefer `--help` for authoritative options.)_
+
+ ### Example flow in a consumer project
+
+ 1. **Initialize**: `npm run pipeline:init` to ensure folders exist.
+ 2. **Define**: Edit `pipeline-config/pipeline.json` and implement tasks under `pipeline-config/tasks/`.
+ 3. **Run**: `npm run pipeline` (or `npm run pipeline:ui` for the UI).
+ 4. **Submit**: Add a seed JSON to `pipeline-data/pending/` or run `npm run pipeline:submit -- ./path/to/seed.json`.
+ 5. **Inspect**: Watch `pipeline-data/current/<runId>` for in‑progress artifacts and `pipeline-data/complete/<runId>` for results.
249
+ ---
250
+
251
+ ## Concepts & conventions (carry‑overs)
252
+
253
+ - **Determinism** – each task persists its inputs/outputs; you can re‑run or debug any stage.
254
+ - **Isolation** – tasks run in separate processes when appropriate.
255
+ - **Artifacts** – tasks write structured artifacts (e.g., `letter.json`, `output.json`) to their run directory.
256
+ - **Status** – a `tasks-status.json` file tracks progress and outcomes across the pipeline.
257
+
258
+ ---
259
+
260
+ ## Quick troubleshooting
261
+
262
+ - **Nothing happens when I submit a seed** → Ensure the orchestrator is running and watching `pipeline-data/pending/`.
263
+ - **Task not found** → Confirm the task ID exists in `pipeline-config/tasks/index.js` and matches `pipeline.json`.
264
+ - **UI doesn’t load** → Try `pipeline-orchestrator start --ui` and check for port conflicts.
265
+
266
+ ---
267
+
268
+ ## Getting started (TL;DR)
269
+
270
+ ```bash
271
+ # 1) Install
272
+ npm i -S @ryan-fw/prompt-orchestration-pipeline
273
+
274
+ # 2) Initialize scaffold
275
+ npm run pipeline:init
276
+
277
+ # 3) Start orchestrator (optionally with UI)
278
+ npm run pipeline
279
+ # or
280
+ npm run pipeline:ui
281
+
282
+ # 4) Submit a seed (JSON file)
283
+ npm run pipeline:submit -- ./seeds/example-seed.json
284
+ ```
285
+
286
+ ---
287
+
288
+ ## Status
289
+
290
+ This is an **experimental framework**. The goal is to explore and evolve best practices for orchestrating prompts, models, and validations into reliable workflows. Feedback, issues, and contributions are welcome.
package/package.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "name": "@ryanfw/prompt-orchestration-pipeline",
+   "version": "0.0.1",
+   "description": "A prompt-orchestration pipeline (POP) is a framework for building, running, and experimenting with complex chains of LLM tasks.",
+   "type": "module",
+   "main": "src/ui/server.js",
+   "files": [
+     "src",
+     "README.md",
+     "LICENSE"
+   ],
+   "repository": {
+     "type": "git",
+     "url": "https://github.com/ryan-mahoney/prompt-orchestration-pipeline.git"
+   },
+   "publishConfig": {
+     "access": "public"
+   },
+   "scripts": {
+     "test": "vitest run",
+     "ui": "nodemon src/ui/server.js",
+     "ui:prod": "node src/ui/server.js"
+   },
+   "dependencies": {
+     "ajv": "^8.17.1",
+     "chokidar": "^3.5.3",
+     "dotenv": "^17.2.2",
+     "openai": "^5.23.1"
+   },
+   "devDependencies": {
+     "@vitest/coverage-v8": "^3.2.4",
+     "nodemon": "^3.0.2",
+     "prettier": "^3.0.0",
+     "vitest": "^3.2.4"
+   },
+   "engines": {
+     "node": ">=20.0.0"
+   },
+   "keywords": [
+     "llm",
+     "prompt-engineering",
+     "pipeline",
+     "orchestration",
+     "chatgpt",
+     "ai",
+     "workflow",
+     "automation"
+   ],
+   "author": "Ryan Mahoney",
+   "license": "MIT"
+ }
package/src/api/index.js ADDED
@@ -0,0 +1,220 @@
+ import { Orchestrator } from "../core/orchestrator.js";
+ import path from "node:path";
+ import fs from "node:fs/promises";
+ import { validateSeedOrThrow } from "../core/validation.js";
+
+ // Pure functional utilities
+ const createPaths = (config) => {
+   const {
+     rootDir,
+     dataDir = "pipeline-data",
+     configDir = "pipeline-config",
+   } = config;
+   return {
+     pending: path.join(rootDir, dataDir, "pending"),
+     current: path.join(rootDir, dataDir, "current"),
+     complete: path.join(rootDir, dataDir, "complete"),
+     pipeline: path.join(rootDir, configDir, "pipeline.json"),
+     tasks: path.join(rootDir, configDir, "tasks"),
+   };
+ };
+
+ const validateConfig = (options = {}) => ({
+   rootDir: options.rootDir || process.cwd(),
+   dataDir: options.dataDir || "pipeline-data",
+   configDir: options.configDir || "pipeline-config",
+   autoStart: options.autoStart ?? true,
+   ui: options.ui ?? false,
+   uiPort: options.uiPort || 3000,
+   ...options,
+ });
+
+ const ensureDirectories = async (paths) => {
+   for (const dir of Object.values(paths)) {
+     if (dir.endsWith(".json")) continue;
+     await fs.mkdir(dir, { recursive: true });
+   }
+ };
+
+ const loadPipelineDefinition = async (pipelinePath) => {
+   try {
+     const content = await fs.readFile(pipelinePath, "utf8");
+     const definition = JSON.parse(content);
+     definition.__path = pipelinePath;
+     return definition;
+   } catch (error) {
+     if (error.code === "ENOENT") {
+       throw new Error(`Pipeline definition not found at ${pipelinePath}`);
+     }
+     throw error;
+   }
+ };
+
+ const createOrchestrator = (paths, pipelineDefinition) =>
+   new Orchestrator({ paths, pipelineDefinition });
+
+ // Main API functions
+ export const createPipelineOrchestrator = async (options = {}) => {
+   const config = validateConfig(options);
+   const paths = createPaths(config);
+
+   await ensureDirectories(paths);
+   const pipelineDefinition = await loadPipelineDefinition(paths.pipeline);
+   const orchestrator = createOrchestrator(paths, pipelineDefinition);
+
+   let uiServer = null;
+
+   const state = {
+     config,
+     paths,
+     pipelineDefinition,
+     orchestrator,
+     uiServer,
+   };
+
+   // Auto-start if configured
+   if (config.autoStart) {
+     await orchestrator.start();
+   }
+
+   // Start UI if configured
+   if (config.ui) {
+     const { createUIServer } = await import("../ui/server.js");
+
+     // Create API object with state injection for UI server
+     const uiApi = {
+       submitJob: (seed) => submitJob(state, seed),
+       getStatus: (jobName) => getStatus(state, jobName),
+       listJobs: (status) => listJobs(state, status),
+     };
+
+     uiServer = createUIServer(uiApi);
+     uiServer.listen(config.uiPort, () => {
+       console.log(`Pipeline UI available at http://localhost:${config.uiPort}`);
+     });
+     state.uiServer = uiServer;
+   }
+
+   return state;
+ };
+
+ // Job management functions
+ export const submitJob = async (state, seed) => {
+   // Validate seed structure before submitting
+   validateSeedOrThrow(seed);
+
+   const name = seed.name;
+   const seedPath = path.join(state.paths.pending, `${name}-seed.json`);
+   await fs.writeFile(seedPath, JSON.stringify(seed, null, 2));
+   return { name, seedPath };
+ };
+
+ export const getStatus = async (state, jobName) => {
+   try {
+     const statusPath = path.join(
+       state.paths.current,
+       jobName,
+       "tasks-status.json"
+     );
+     return JSON.parse(await fs.readFile(statusPath, "utf8"));
+   } catch {}
+   try {
+     const statusPath = path.join(
+       state.paths.complete,
+       jobName,
+       "tasks-status.json"
+     );
+     return JSON.parse(await fs.readFile(statusPath, "utf8"));
+   } catch {}
+   return null;
+ };
+
+ export const listJobs = async (state, status = "all") => {
+   const jobs = [];
+
+   const listDirectory = async (dir, suffix = "") => {
+     try {
+       const entries = await fs.readdir(dir);
+       if (suffix) {
+         return entries
+           .filter((e) => e.endsWith(suffix))
+           .map((e) => e.replace(suffix, ""));
+       }
+       return entries;
+     } catch {
+       return [];
+     }
+   };
+
+   if (status === "all" || status === "pending") {
+     const pending = await listDirectory(state.paths.pending, "-seed.json");
+     jobs.push(...pending.map((name) => ({ name, status: "pending" })));
+   }
+
+   if (status === "all" || status === "current") {
+     const current = await listDirectory(state.paths.current);
+     jobs.push(...current.map((name) => ({ name, status: "current" })));
+   }
+
+   if (status === "all" || status === "complete") {
+     const complete = await listDirectory(state.paths.complete);
+     jobs.push(...complete.map((name) => ({ name, status: "complete" })));
+   }
+
+   return jobs;
+ };
+
+ // Control functions
+ export const start = async (state) => {
+   await state.orchestrator.start();
+   return state;
+ };
+
+ export const stop = async (state) => {
+   if (state.uiServer) {
+     await new Promise((resolve) => state.uiServer.close(resolve));
+   }
+   await state.orchestrator.stop();
+   return state;
+ };
+
+ // Backward compatibility - class-like API for easy migration
+ export const PipelineOrchestrator = {
+   async create(options = {}) {
+     const state = await createPipelineOrchestrator(options);
+
+     // Return an object with methods that maintain the original API
+     return {
+       config: state.config,
+       paths: state.paths,
+
+       async start() {
+         await start(state);
+         return this;
+       },
+
+       async stop() {
+         await stop(state);
+         return this;
+       },
+
+       async submitJob(seed) {
+         return submitJob(state, seed);
+       },
+
+       async getStatus(jobName) {
+         return getStatus(state, jobName);
+       },
+
+       async listJobs(status = "all") {
+         return listJobs(state, status);
+       },
+     };
+   },
+ };
+
+ // Export the original functions for direct functional usage
+ export { runPipeline } from "../core/task-runner.js";
+ export { selectModel } from "../core/task-runner.js";
+
+ export default PipelineOrchestrator;
package/src/cli/index.js ADDED
@@ -0,0 +1,70 @@
+ #!/usr/bin/env node
+ import { Command } from "commander";
+ import { PipelineOrchestrator } from "../api/index.js";
+ import fs from "node:fs/promises";
+
+ const program = new Command();
+
+ program
+   .name("pipeline-orchestrator")
+   .description("Pipeline orchestration system")
+   .version("0.0.1");
+
+ program
+   .command("init")
+   .description("Initialize pipeline configuration")
+   .action(async () => {
+     const template = {
+       pipeline: { name: "my-pipeline", version: "1.0.0", tasks: ["example-task"] },
+       tasks: {
+         "example-task": {
+           ingestion: `export async function ingestion(context) { return { data: "example" }; }`,
+           inference: `export async function inference(context) { return { output: context.data }; }`,
+         },
+       },
+     };
+     await fs.mkdir("pipeline-config/tasks/example-task", { recursive: true });
+     await fs.writeFile("pipeline-config/pipeline.json", JSON.stringify(template.pipeline, null, 2));
+     await fs.writeFile("pipeline-config/tasks/index.js", `export default {\n  'example-task': './example-task/index.js'\n};`);
+     await fs.writeFile("pipeline-config/tasks/example-task/index.js", `${template.tasks["example-task"].ingestion}\n\n${template.tasks["example-task"].inference}\n`);
+     console.log("Pipeline configuration initialized");
+   });
+
+ program
+   .command("start")
+   .description("Start the pipeline orchestrator")
+   .option("-u, --ui", "Start with UI server")
+   .option("-p, --port <port>", "UI server port", "3000")
+   .action(async (options) => {
+     // create() initializes and, by default, auto-starts the orchestrator
+     const orchestrator = await PipelineOrchestrator.create({ ui: options.ui, uiPort: parseInt(options.port, 10) });
+     console.log("Pipeline orchestrator started");
+     process.on("SIGINT", async () => { await orchestrator.stop(); process.exit(0); });
+   });
+
+ program
+   .command("submit <seed-file>")
+   .description("Submit a new job")
+   .action(async (seedFile) => {
+     const seed = JSON.parse(await fs.readFile(seedFile, "utf8"));
+     const orchestrator = await PipelineOrchestrator.create({ autoStart: false });
+     const job = await orchestrator.submitJob(seed);
+     console.log(`Job submitted: ${job.name}`);
+   });
+
+ program
+   .command("status [job-name]")
+   .description("Get job status")
+   .action(async (jobName) => {
+     const orchestrator = await PipelineOrchestrator.create({ autoStart: false });
+     if (jobName) {
+       const status = await orchestrator.getStatus(jobName);
+       console.log(JSON.stringify(status, null, 2));
+     } else {
+       const jobs = await orchestrator.listJobs();
+       console.table(jobs);
+     }
+   });
+
+ program.parse();