keystone-cli 0.1.0 β†’ 0.1.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +289 -59
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -1,136 +1,366 @@
+ <p align="center">
+   <img src="logo.png" width="250" alt="Keystone CLI Logo">
+ </p>
+
  # πŸ›οΈ Keystone CLI
 
  [![Bun](https://img.shields.io/badge/Bun-%23000000.svg?style=flat&logo=bun&logoColor=white)](https://bun.sh)
- [![NPM Version](https://img.shields.io/npm/v/keystone-cli.svg?style=flat)](https://www.npmjs.com/package/keystone-cli)
+ [![npm version](https://img.shields.io/npm/v/keystone-cli.svg?style=flat)](https://www.npmjs.com/package/keystone-cli)
  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
 
- **Keystone** is a local-first, declarative, agentic workflow orchestrator built on **Bun**.
+ A local-first, declarative, agentic workflow orchestrator built on **Bun**.
 
- It allows you to define complex automation workflows using a simple YAML syntax, featuring first-class support for LLM agents, persistent state management via SQLite, and high-concurrency execution with built-in resilience.
+ Keystone allows you to define complex automation workflows using a simple YAML syntax, with first-class support for LLM agents, state persistence, and parallel execution.
 
  ---
 
- ## ✨ Key Features
+ ## ✨ Features
 
- - ⚑ **Local-First & Fast:** Powered by Bun with a local SQLite database. No external "cloud state" requiredβ€”your data and workflow history stay on your machine.
- - 🧩 **Declarative Workflows:** Define logic in YAML. Keystone automatically calculates the execution graph (DAG) and detects dependencies from your expressions.
- - πŸ€– **Agentic by Design:** Seamlessly integrate LLM agents defined in Markdown. Agents can use tools, which are just other workflow steps.
- - πŸ”Œ **Built-in MCP Server:** Expose your workflows as tools to other AI assistants (like Claude Desktop) using the Model Context Protocol.
- - πŸ”„ **Resilient Execution:** Built-in retries, exponential backoff, and timeouts. Interrupted workflows can be resumed exactly where they stopped.
- - πŸ§‘β€πŸ’» **Human-in-the-Loop:** Support for manual approval and text input steps for sensitive or creative operations.
- - πŸ“Š **Interactive TUI:** A beautiful terminal dashboard to monitor concurrent runs and history.
- - πŸ›‘οΈ **Security-First:** Automatic secret redaction from logs/database and AST-based safe expression evaluation.
+ - ⚑ **Local-First:** Built on Bun with a local SQLite database for state management.
+ - 🧩 **Declarative:** Define workflows in YAML with automatic dependency tracking (DAG).
+ - πŸ€– **Agentic:** First-class support for LLM agents defined in Markdown with YAML frontmatter.
+ - πŸ§‘β€πŸ’» **Human-in-the-Loop:** Support for manual approval and text input steps.
+ - πŸ”„ **Resilient:** Built-in retries, timeouts, and state persistence. Resume failed or paused runs exactly where they left off.
+ - πŸ“Š **TUI Dashboard:** Built-in interactive dashboard for monitoring and managing runs.
+ - πŸ› οΈ **Extensible:** Support for shell, file, HTTP request, LLM, and sub-workflow steps.
+ - πŸ”Œ **MCP Support:** Integrated Model Context Protocol server.
+ - πŸ›‘οΈ **Secret Redaction:** Automatically redacts environment variables and secrets from logs and outputs.
 
  ---
 
  ## πŸš€ Installation
 
- Ensure you have [Bun](https://bun.sh) installed (v1.0.0 or higher).
+ Ensure you have [Bun](https://bun.sh) installed.
 
+ ### Global Install (Recommended)
  ```bash
- # Install globally via Bun
- bun add -g keystone-cli
+ bun install -g keystone-cli
+ ```
+
+ ### From Source
+ ```bash
+ # Clone the repository
+ git clone https://github.com/mhingston/keystone-cli.git
+ cd keystone-cli
+
+ # Install dependencies
+ bun install
 
- # Or via NPM
- npm install -g keystone-cli
+ # Link CLI globally
+ bun link
  ```
 
  ### Shell Completion
 
- To enable tab completion for workflow names and commands:
+ To enable tab completion for your shell, add the following to your `.zshrc` or `.bashrc`:
 
- **Zsh:** Add `source <(keystone completion zsh)` to your `.zshrc`
- **Bash:** Add `source <(keystone completion bash)` to your `.bashrc`
+ **Zsh:**
+ ```bash
+ source <(keystone completion zsh)
+ ```
+
+ **Bash:**
+ ```bash
+ source <(keystone completion bash)
+ ```
 
  ---
 
- ## πŸš₯ Quick Start
+ ## 🚦 Quick Start
 
  ### 1. Initialize a Project
  ```bash
  keystone init
  ```
- This creates a `.keystone/` directory for configuration and a `workflows/` directory for your files.
+ This creates the `.keystone/` directory for configuration and `.keystone/workflows/` for your automation files.
 
- ### 2. Configure Environment
+ ### 2. Configure Your Environment
  Add your API keys to the generated `.env` file:
  ```env
  OPENAI_API_KEY=sk-...
  ANTHROPIC_API_KEY=sk-ant-...
  ```
 
- ### 3. Run Your First Workflow
+ ### 3. Run a Workflow
  ```bash
  keystone run basic-shell
  ```
+ Keystone automatically looks in `.keystone/workflows/` (locally and in your home directory) for `.yaml` or `.yml` files.
+
+ ### 4. Monitor with the Dashboard
+ ```bash
+ keystone ui
+ ```
+
+ ---
+
+ ## βš™οΈ Configuration
+
+ Keystone uses a local configuration file at `.keystone/config.yaml` to manage model providers and model mappings.
+
+ ```yaml
+ default_provider: openai
+
+ providers:
+   openai:
+     type: openai
+     base_url: https://api.openai.com/v1
+     api_key_env: OPENAI_API_KEY
+     default_model: gpt-4o
+   anthropic:
+     type: anthropic
+     base_url: https://api.anthropic.com/v1
+     api_key_env: ANTHROPIC_API_KEY
+     default_model: claude-3-5-sonnet-20240620
+   groq:
+     type: openai
+     base_url: https://api.groq.com/openai/v1
+     api_key_env: GROQ_API_KEY
+     default_model: llama-3.3-70b-versatile
+
+ model_mappings:
+   "gpt-*": openai
+   "claude-*": anthropic
+   "o1-*": openai
+   "llama-*": groq
+
+ mcp_servers:
+   filesystem:
+     command: npx
+     args: ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/directory"]
+   github:
+     command: npx
+     args: ["-y", "@modelcontextprotocol/server-github"]
+     env:
+       GITHUB_PERSONAL_ACCESS_TOKEN: "${GITHUB_TOKEN}"
+
+ storage:
+   retention_days: 30
+ ```
+
+ ### Model & Provider Resolution
+
+ Keystone resolves which provider to use for a model in the following order:
+
+ 1. **Explicit Provider:** Use the `provider` field in an agent or step definition.
+ 2. **Provider Prefix:** Use the `provider:model` syntax (e.g., `model: copilot:gpt-4o`).
+ 3. **Model Mappings:** Match the model name against the `model_mappings` in your config (a trailing `*` matches any suffix, e.g. `gpt-*` matches `gpt-4o`).
+ 4. **Default Provider:** Fall back to the `default_provider` defined in your config.
+
+ #### Example: Explicit Provider in Agent
+ **`.keystone/workflows/agents/summarizer.md`**
+ ```markdown
+ ---
+ name: summarizer
+ provider: anthropic
+ model: claude-3-5-sonnet-latest
+ ---
+ ```
+
+ #### Example: Provider Prefix in Step
+ ```yaml
+ - id: notify
+   type: llm
+   agent: summarizer
+   model: copilot:gpt-4o
+   prompt: ...
+ ```
+
+ ### OpenAI-Compatible Providers
+ You can add any OpenAI-compatible provider (Groq, Together AI, Perplexity, local Ollama, etc.) by setting the `type` to `openai` and providing the `base_url` and `api_key_env`.
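For instance, a local Ollama instance exposes an OpenAI-compatible endpoint, so a provider entry for it might look like the sketch below. The `base_url`, env-var name, and model name are illustrative assumptions, not values confirmed by this package:

```yaml
providers:
  ollama:
    type: openai
    base_url: http://localhost:11434/v1   # Ollama's default OpenAI-compatible endpoint
    api_key_env: OLLAMA_API_KEY           # illustrative; a local server may ignore the key
    default_model: llama3.1               # illustrative model name
```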
+
+ ### GitHub Copilot Support
+ Keystone supports using your GitHub Copilot subscription directly. To authenticate:
+ ```bash
+ keystone auth login
+ ```
+ Then, you can use Copilot in your configuration:
+ ```yaml
+ providers:
+   copilot:
+     type: copilot
+     default_model: gpt-4o
+ ```
+ API keys are handled automatically after login.
+
+ API keys for other providers should be stored in a `.env` file in your project root:
+ - `OPENAI_API_KEY`
+ - `ANTHROPIC_API_KEY`
 
  ---
 
- ## βš™οΈ How it Works
+ ## πŸ“ Workflow Example
 
- ### Workflows (.yaml)
- Workflows are defined by steps. Steps run in **parallel** by default unless a dependency is defined via `needs` or detected in an expression like `${{ steps.previous_step.output }}`.
+ Workflows are defined in YAML. Dependencies are automatically resolved based on the `needs` field, and **Keystone also automatically detects implicit dependencies** from your `${{ }}` expressions.
 
  ```yaml
- name: analyze-repo
+ name: build-and-notify
+ description: Build the project and notify the team
+
+ inputs:
+   branch:
+     type: string
+     default: main
+
  steps:
-   - id: list_files
+   - id: checkout
+     type: shell
+     run: git checkout ${{ inputs.branch }}
+
+   - id: install
+     type: shell
+     # Implicit dependency on 'checkout' detected from the expression below
+     if: ${{ steps.checkout.status == 'success' }}
+     run: bun install
+
+   - id: build
      type: shell
-     run: ls -R
-     transform: stdout.split('\n')
+     needs: [install] # Explicit dependency
+     run: bun run build
+     retry:
+       count: 3
+       backoff: exponential
 
-   - id: analyze
+   - id: notify
      type: llm
-     foreach: ${{ steps.list_files.output }}
-     concurrency: 5
-     agent: code-reviewer
-     prompt: "Analyze this file: ${{ item }}"
+     # Implicit dependency on 'build' detected from the expression below
+     agent: summarizer
+     prompt: |
+       The build for branch "${{ inputs.branch }}" was successful.
+       Result: ${{ steps.build.output }}
+       Please write a concise 1-sentence summary for Slack.
+
+ outputs:
+   slack_message: ${{ steps.notify.output }}
  ```
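A workflow like this could itself be triggered from another workflow via the `workflow` step type. The field names below (`workflow`, `inputs`) are illustrative assumptions about the sub-workflow step's shape, not confirmed API:

```yaml
steps:
  - id: release
    type: workflow
    workflow: build-and-notify   # hypothetical field naming the target workflow
    inputs:
      branch: release            # hypothetical input pass-through
```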
 
- ### Agents (.md)
- Agents are defined in Markdown with YAML frontmatter. This keeps the "personality" and tools of the agent together in a human-readable format.
+ ---
+
+ ## πŸ—οΈ Step Types
 
+ Keystone supports several specialized step types:
+
+ - `shell`: Run arbitrary shell commands.
+ - `llm`: Prompt an agent and get structured or unstructured responses. Supports `schema` (JSON Schema) for structured output.
+ - `request`: Make HTTP requests (GET, POST, etc.).
+ - `file`: Read, write, or append to files.
+ - `human`: Pause execution for manual confirmation or text input.
+   - `inputType: confirm`: Simple Enter-to-continue prompt.
+   - `inputType: text`: Prompt for a string input, available via `${{ steps.id.output }}`.
+ - `workflow`: Trigger another workflow as a sub-step.
+ - `sleep`: Pause execution for a specified duration.
+
+ All steps support common features like `needs` (dependencies), `if` (conditionals), `retry`, `timeout`, `foreach` (parallel iteration), and `transform` (post-process output using expressions).
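As a sketch of `foreach` and `transform` together (adapted from the example in the previous version of this README, which also used a `concurrency` field to cap parallel iterations):

```yaml
steps:
  - id: list_files
    type: shell
    run: ls -R
    transform: stdout.split('\n')   # post-process raw stdout into a list

  - id: analyze
    type: llm
    foreach: ${{ steps.list_files.output }}  # one iteration per list item
    concurrency: 5                           # at most 5 iterations in parallel
    agent: code-reviewer
    prompt: "Analyze this file: ${{ item }}"
```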
+
+ ---
+
+ ## πŸ€– Agent Definitions
+
+ Agents are defined in Markdown files with YAML frontmatter, making them easy to read and version control.
+
+ **`.keystone/workflows/agents/summarizer.md`**
  ```markdown
  ---
- name: code-reviewer
- model: claude-3-5-sonnet-latest
+ name: summarizer
+ provider: openai
+ model: gpt-4o
+ description: Summarizes technical logs into human-readable messages
+ ---
+
+ You are a technical communications expert. Your goal is to take technical output
+ (like build logs or test results) and provide a concise, professional summary.
+ ```
+
+ ### Agent Tools
+
+ Agents can be equipped with tools, which are essentially workflow steps they can choose to execute. You can define tools in the agent definition, or directly in an LLM step within a workflow.
+
+ **`.keystone/workflows/agents/developer.md`**
+ ```markdown
+ ---
+ name: developer
  tools:
-   - name: read_file
+   - name: list_files
+     description: List files in the current directory
      execution:
-       type: file
-       op: read
-       path: "${{ args.path }}"
+       id: list-files-tool
+       type: shell
+       run: ls -F
  ---
- You are an expert security researcher. Review the provided code for vulnerabilities.
+ You are a software developer. You can use tools to explore the codebase.
+ ```
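The sentence above says tools can also be declared directly in an LLM step; a hedged sketch of that inline placement, reusing the same `tools` shape (the inline position is inferred, and the tool itself is hypothetical):

```yaml
- id: explore
  type: llm
  agent: developer
  tools:
    - name: disk_usage
      description: Report disk usage for the working directory
      execution:
        id: disk-usage-tool
        type: shell
        run: du -sh .
  prompt: "How large is this project on disk?"
```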
+
+ ### Model Context Protocol (MCP)
+
+ Keystone supports connecting to external MCP servers to give agents access to a wide range of pre-built tools and resources. You can configure MCP servers globally or directly in an LLM step.
+
+ #### Global MCP Servers
+ Define shared MCP servers in `.keystone/config.yaml` to reuse them across different workflows. Keystone ensures that multiple steps using the same global server will share a single running process.
+
+ ```yaml
+ mcp_servers:
+   filesystem:
+     command: npx
+     args: ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/directory"]
  ```
 
+ #### Using MCP in Steps
+ You can use global servers, define local ones, or include all global servers at once.
+
+ ```yaml
+ - id: analyze_code
+   type: llm
+   agent: developer
+   mcpServers:
+     # Option 1: explicitly include a global server by name
+     - filesystem
+     # Option 2: define a local one-off server (standard object syntax)
+     - name: custom-tool
+       command: node
+       args: ["./scripts/custom-mcp.js"]
+
+   # Option 3: automatically include ALL global servers
+   useGlobalMcp: true
+
+   prompt: "Analyze the architecture of this project."
+ ```
+
+ In these examples, the agent will have access to all tools provided by the MCP servers (like `list_directory`, `read_file`, etc.) in addition to any tools defined in the agent or the step itself.
+
 
  ---
 
- ## πŸ› οΈ CLI Reference
+ ## πŸ› οΈ CLI Commands
 
  | Command | Description |
  | :--- | :--- |
  | `init` | Initialize a new Keystone project |
- | `run <workflow>` | Execute a workflow (supports `-i key=val` for inputs) |
- | `resume <run_id>` | Resume a paused or failed workflow run |
+ | `run <workflow>` | Execute a workflow by name or path |
+ | `resume <run_id>` | Resume a failed or paused workflow |
+ | `validate [path]` | Check workflow files (defaults to `.keystone/workflows/` or matches a workflow name) |
+ | `workflows` | List available workflows |
+ | `history` | Show recent workflow runs |
+ | `logs <run_id>` | View logs and step status for a specific run |
+ | `graph <workflow>` | Generate a Mermaid diagram of the workflow by name or path |
+ | `config` | Show current configuration and provider settings |
+ | `auth <login/status/logout>` | Manage GitHub Copilot authentication |
  | `ui` | Open the interactive TUI dashboard |
- | `mcp` | Start the MCP server to use workflows in other tools |
- | `graph <workflow>` | Visualize the DAG as an ASCII or Mermaid diagram |
- | `history` | List recent runs and their status |
- | `auth login` | Authenticate with GitHub for Copilot support |
- | `validate` | Check workflow files for schema and logic errors |
+ | `mcp` | Start the Model Context Protocol server |
+ | `completion [shell]` | Generate shell completion script (zsh, bash) |
+ | `prune` | Clean up old run data from the database (also automated via `storage.retention_days`) |
 
  ---
 
- ## πŸ”’ Security & Privacy
+ ## πŸ“‚ Project Structure
 
- 1. **Local State:** All run history, logs, and outputs are stored in a local SQLite database (`.keystone/state.db`).
- 2. **Redaction:** Keystone automatically scans for your environment variables and masks them in all logs and database entries.
- 3. **AST Evaluation:** Expressions are parsed into an Abstract Syntax Tree and executed in a sandbox, preventing arbitrary code execution within `${{ }}` blocks.
- 4. **Shell Safety:** Use the built-in `escape()` function when passing user input to shell commands to prevent injection.
+ - `src/db/`: SQLite persistence layer.
+ - `src/runner/`: The core execution engine; handles parallelization and retries.
+ - `src/parser/`: Zod-powered validation for workflows and agents.
+ - `src/expression/`: `${{ }}` expression evaluator.
+ - `src/ui/`: Ink-powered TUI dashboard.
+ - `.keystone/workflows/`: Your YAML workflow definitions.
 
  ---
 
  ## πŸ“„ License
 
- MIT Β© [Mark Hingston](https://github.com/mhingston)
+ MIT
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "keystone-cli",
-   "version": "0.1.0",
+   "version": "0.1.1",
    "description": "A local-first, declarative, agentic workflow orchestrator built on Bun",
    "type": "module",
    "bin": {