keystone-cli 1.3.0 → 2.0.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +127 -140
- package/package.json +6 -3
- package/src/cli.ts +54 -369
- package/src/commands/init.ts +15 -29
- package/src/db/memory-db.test.ts +45 -0
- package/src/db/memory-db.ts +47 -21
- package/src/db/sqlite-setup.ts +26 -3
- package/src/db/workflow-db.ts +12 -5
- package/src/parser/config-schema.ts +17 -13
- package/src/parser/schema.ts +4 -2
- package/src/runner/__test__/llm-mock-setup.ts +173 -0
- package/src/runner/__test__/llm-test-setup.ts +271 -0
- package/src/runner/engine-executor.test.ts +25 -18
- package/src/runner/executors/blueprint-executor.ts +0 -1
- package/src/runner/executors/dynamic-executor.ts +11 -6
- package/src/runner/executors/engine-executor.ts +5 -1
- package/src/runner/executors/llm-executor.ts +502 -1033
- package/src/runner/executors/memory-executor.ts +35 -19
- package/src/runner/executors/plan-executor.ts +0 -1
- package/src/runner/executors/types.ts +4 -4
- package/src/runner/llm-adapter.integration.test.ts +151 -0
- package/src/runner/llm-adapter.ts +270 -1398
- package/src/runner/llm-clarification.test.ts +91 -106
- package/src/runner/llm-executor.test.ts +217 -1181
- package/src/runner/memoization.test.ts +0 -1
- package/src/runner/recovery-security.test.ts +51 -20
- package/src/runner/reflexion.test.ts +55 -18
- package/src/runner/standard-tools-integration.test.ts +137 -87
- package/src/runner/step-executor.test.ts +36 -80
- package/src/runner/step-executor.ts +0 -2
- package/src/runner/test-harness.ts +3 -29
- package/src/runner/tool-integration.test.ts +122 -73
- package/src/runner/workflow-runner.ts +110 -49
- package/src/runner/workflow-scheduler.ts +11 -1
- package/src/runner/workflow-summary.ts +144 -0
- package/src/utils/auth-manager.test.ts +10 -520
- package/src/utils/auth-manager.ts +3 -756
- package/src/utils/config-loader.ts +12 -0
- package/src/utils/constants.ts +0 -17
- package/src/utils/process-sandbox.ts +15 -3
- package/src/runner/llm-adapter-runtime.test.ts +0 -209
- package/src/runner/llm-adapter.test.ts +0 -1012
package/README.md
CHANGED
@@ -34,7 +34,7 @@ Keystone allows you to define complex automation workflows using a simple YAML s
 
 ---
 
-## <a id="features"
+## <a id="features">✨ Features</a>
 
 - ⚡ **Local-First:** Built on Bun with a local SQLite database for state management.
 - 🧩 **Declarative:** Define workflows in YAML with automatic dependency tracking (DAG).
@@ -52,7 +52,7 @@ Keystone allows you to define complex automation workflows using a simple YAML s
 
 ---
 
-## <a id="installation"
+## <a id="installation">🚀 Installation</a>
 
 Ensure you have [Bun](https://bun.sh) installed.
 
@@ -90,7 +90,7 @@ source <(keystone completion bash)
 
 ---
 
-## <a id="quick-start"
+## <a id="quick-start">🚦 Quick Start</a>
 
 ### 1. Initialize a Project
 ```bash
@@ -98,53 +98,64 @@ keystone init
 ```
 This creates the `.keystone/` directory for configuration and seeds `.keystone/workflows/` plus `.keystone/workflows/agents/` with bundled workflows and agents (see "Bundled Workflows" below).
 
-### 2.
-
+### 2. Install AI SDK Providers
+Keystone uses the **Vercel AI SDK**. Install the provider packages you need:
+```bash
+npm install @ai-sdk/openai @ai-sdk/anthropic
+# Or use other AI SDK providers like @ai-sdk/google, @ai-sdk/mistral, etc.
+```
+
+### 3. Configure Providers
+Edit `.keystone/config.yaml` to configure your providers:
+```yaml
+default_provider: openai
+
+providers:
+  openai:
+    package: "@ai-sdk/openai"
+    api_key_env: OPENAI_API_KEY
+    default_model: gpt-4o
+
+  anthropic:
+    package: "@ai-sdk/anthropic"
+    api_key_env: ANTHROPIC_API_KEY
+    default_model: claude-3-5-sonnet-20240620
+
+model_mappings:
+  "gpt-*": openai
+  "claude-*": anthropic
+```
+
+Then add your API keys to `.env`:
 ```env
 OPENAI_API_KEY=sk-...
 ANTHROPIC_API_KEY=sk-ant-...
 ```
-Alternatively, you can use the built-in authentication management:
-```bash
-keystone auth login openai
-keystone auth login anthropic
-keystone auth login anthropic-claude
-keystone auth login openai-chatgpt
-keystone auth login gemini
-keystone auth login github
-```
-Use `anthropic-claude` for Claude Pro/Max subscriptions (OAuth) instead of an API key.
-Use `openai-chatgpt` for ChatGPT Plus/Pro subscriptions (OAuth) instead of an API key.
-Use `gemini` (alias `google-gemini`) for Google Gemini subscriptions (OAuth) instead of an API key.
-Use `github` to authenticate GitHub Copilot via the GitHub device flow.
 
-
+See the [Configuration](#configuration) section for more details on BYOP (Bring Your Own Provider).
+
+### 4. Run a Workflow
 ```bash
 keystone run scaffold-feature
 ```
 Keystone automatically looks in `.keystone/workflows/` (locally and in your home directory) for `.yaml` or `.yml` files.
 
-###
+### 5. Monitor with the Dashboard
 ```bash
 keystone ui
 ```
 
 ---
 
-## <a id="bundled-workflows"
+## <a id="bundled-workflows">🧰 Bundled Workflows</a>
 
 `keystone init` seeds these workflows under `.keystone/workflows/` (and the agents they rely on under `.keystone/workflows/agents/`):
 
-Top-level workflows (seeded in `.keystone/workflows/`):
+Top-level utility workflows (seeded in `.keystone/workflows/`):
 - `scaffold-feature.yaml`: Interactive workflow scaffolder. Prompts for requirements, plans files, generates content, and writes them.
 - `decompose-problem.yaml`: Decomposes a problem into research/implementation/review tasks, waits for approval, runs sub-workflows, and summarizes.
 - `dev.yaml`: Self-bootstrapping DevMode workflow for an interactive plan/implement/verify loop.
-- `
-- `full-feature-demo.yaml`: A comprehensive workflow demonstrating multiple step types (shell, file, request, etc.).
-- `script-example.yaml`: Demonstrates sandboxed JavaScript execution.
-- `artifact-example.yaml`: Demonstrates artifact upload and download between steps.
-- `idempotency-example.yaml`: Demonstrates safe retries for side-effecting steps.
-- `dynamic-demo.yaml`: Demonstrates LLM-driven dynamic workflow orchestration where steps are generated at runtime.
+- `dynamic-decompose.yaml`: Dynamic version of decompose-problem using LLM-driven orchestration.
 
 Sub-workflows (seeded in `.keystone/workflows/`):
 - `scaffold-plan.yaml`: Generates a file plan from `requirements` input.
@@ -158,15 +169,14 @@ Example runs:
 ```bash
 keystone run scaffold-feature
 keystone run decompose-problem -i problem="Add caching to the API" -i context="Node/Bun service"
-keystone run
-keystone run dynamic-demo -i task="Set up a Node.js project with TypeScript"
+keystone run dev "Improve the user profile UI"
 ```
 
 Sub-workflows are used by the top-level workflows, but can be run directly if you want just one phase.
 
 ---
 
-## <a id="configuration"
+## <a id="configuration">⚙️ Configuration</a>
 
 Keystone loads configuration from project `.keystone/config.yaml` (and user-level config; see `keystone config show` for search order) to manage model providers and model mappings.
 
@@ -181,42 +191,27 @@ State is stored at `.keystone/state.db` by default (project-local).
 default_provider: openai
 
 providers:
+  # Example: Using a standard AI SDK provider package (Bring Your Own Provider)
   openai:
-
+    package: "@ai-sdk/openai"
     base_url: https://api.openai.com/v1
     api_key_env: OPENAI_API_KEY
     default_model: gpt-4o
-
-
-    base_url: https://api.openai.com/v1
-    default_model: gpt-5-codex
+
+  # Example: Using another provider
   anthropic:
-
-    base_url: https://api.anthropic.com/v1
+    package: "@ai-sdk/anthropic"
     api_key_env: ANTHROPIC_API_KEY
     default_model: claude-3-5-sonnet-20240620
-
-
-
-
-
-    type: google-gemini
-    base_url: https://cloudcode-pa.googleapis.com
-    default_model: gemini-1.5-pro
-  groq:
-    type: openai
-    base_url: https://api.groq.com/openai/v1
-    api_key_env: GROQ_API_KEY
-    default_model: llama-3.3-70b-versatile
+
+  # Example: Using a custom provider script
+  # my-custom-provider:
+  #   script: "./providers/my-provider.ts"
+  #   default_model: my-special-model
 
 model_mappings:
-  "gpt-5*": openai-chatgpt
   "gpt-*": openai
-  "claude-4*": anthropic-claude
   "claude-*": anthropic
-  "gemini-*": google-gemini
-  "o1-*": openai
-  "llama-*": groq
 
 mcp_servers:
   filesystem:
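The `model_mappings` block above routes model names to providers with a trailing `*` for prefix matching. As a rough illustration of that matching rule (this is a hypothetical sketch, not Keystone's actual implementation; the function name is invented):

```typescript
// Hypothetical sketch of prefix-wildcard model mapping as described in the
// README ("supports suffix `*` for prefix matching"). Not Keystone's real code.
type ModelMappings = Record<string, string>;

function matchProvider(model: string, mappings: ModelMappings): string | undefined {
  for (const [pattern, provider] of Object.entries(mappings)) {
    if (pattern.endsWith("*")) {
      // "gpt-*" matches any model name starting with "gpt-"
      if (model.startsWith(pattern.slice(0, -1))) return provider;
    } else if (model === pattern) {
      return provider;
    }
  }
  return undefined; // unmapped: caller falls back to default_provider
}

// Mirrors the mappings shown in the config example above.
const mappings: ModelMappings = { "gpt-*": "openai", "claude-*": "anthropic" };
console.log(matchProvider("gpt-4o", mappings)); // openai
```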
@@ -242,39 +237,49 @@ storage:
 
 expression:
   strict: false
+
+logging:
+  suppress_security_warning: false
+  suppress_ai_sdk_warnings: false
 ```
 
-
+### Storage Configuration
 
-
+The `storage` section controls data retention and security for workflow runs:
 
-
+- **`retention_days`**: Sets the default window used by `keystone maintenance` / `keystone prune` commands to clean up old run data.
+- **`redact_secrets_at_rest`**: Controls whether secret inputs and known secrets are redacted before storing run data (default `true`).
 
-
-features:
-  context_injection:
-    enabled: true # Opt-in feature (default: false)
-    search_depth: 3 # How many directories up to search (default: 3)
-    sources: # Which context sources to include
-      - readme # README.md files
-      - agents_md # AGENTS.md files
-      - cursor_rules # .cursor/rules or .claude/rules
-```
+### Logging Configuration
 
-
-1. Search from the workflow directory up to the project root
-2. Find the nearest `README.md` and `AGENTS.md` files
-3. Parse rules from `.cursor/rules` or `.claude/rules` directories
-4. Prepend this context to the LLM system prompt
+The `logging` section allows you to suppress warnings:
 
-
+- **`suppress_security_warning`**: Silences the "Security Warning" about running workflows from untrusted sources (default `false`).
+- **`suppress_ai_sdk_warnings`**: Silences internal warnings from the Vercel AI SDK, such as compatibility mode messages (default `false`).
+
+### Bring Your Own Provider (BYOP)
+
+Keystone uses the **Vercel AI SDK**, allowing you to use any compatible provider. You must install the provider package (e.g., `@ai-sdk/openai`, `ai-sdk-provider-gemini-cli`) so Keystone can resolve it.
+
+Keystone searches for provider packages in:
+1. **Local `node_modules`**: The project where you run `keystone`.
+2. **Global `node_modules`**: Your system-wide npm/bun/yarn directory.
+
+To install a provider globally:
+```bash
+bun install -g ai-sdk-provider-gemini-cli
+# or
+npm install -g @ai-sdk/openai
+```
+
+Then configure it in `.keystone/config.yaml` using the `package` field.
 
 ### Model & Provider Resolution
 
 Keystone resolves which provider to use for a model in the following order:
 
 1. **Explicit Provider:** Use the `provider` field in an agent or step definition.
-2. **Provider Prefix:** Use the `provider:model` syntax (e.g., `model:
+2. **Provider Prefix:** Use the `provider:model` syntax (e.g., `model: anthropic:claude-3-5-sonnet-latest`).
 3. **Model Mappings:** Matches the model name against the `model_mappings` in your config (supports suffix `*` for prefix matching).
 4. **Default Provider:** Falls back to the `default_provider` defined in your config.
 
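The four-step resolution order documented above can be sketched as a simple fallback chain. This is illustrative only, under the assumption that each rule is tried in sequence; the names and shapes here are hypothetical and do not reproduce the package's resolver:

```typescript
// Illustrative fallback chain for the documented resolution order:
// explicit provider -> "provider:model" prefix -> model_mappings -> default_provider.
interface Config {
  default_provider: string;
  model_mappings: Record<string, string>;
}

function resolveProvider(
  model: string,
  config: Config,
  explicit?: string,
): { provider: string; model: string } {
  // 1. Explicit `provider` field on the agent/step wins.
  if (explicit) return { provider: explicit, model };
  // 2. "provider:model" syntax, e.g. "anthropic:claude-3-5-sonnet-latest".
  const idx = model.indexOf(":");
  if (idx > 0) return { provider: model.slice(0, idx), model: model.slice(idx + 1) };
  // 3. model_mappings, with trailing-`*` treated as a prefix match.
  for (const [pattern, provider] of Object.entries(config.model_mappings)) {
    const isPrefix = pattern.endsWith("*");
    if (isPrefix ? model.startsWith(pattern.slice(0, -1)) : model === pattern) {
      return { provider, model };
    }
  }
  // 4. Fall back to the configured default provider.
  return { provider: config.default_provider, model };
}

const cfg: Config = { default_provider: "openai", model_mappings: { "claude-*": "anthropic" } };
console.log(resolveProvider("claude-3-5-sonnet-latest", cfg).provider); // anthropic
```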
@@ -293,75 +298,55 @@ model: claude-3-5-sonnet-latest
 - id: notify
   type: llm
   agent: summarizer
-  model:
+  model: anthropic:claude-3-5-sonnet-latest
   prompt: ...
 ```
 
 ### OpenAI Compatible Providers
-You can add any OpenAI-compatible provider (Together AI, Perplexity, Local Ollama, etc.) by
-
-### GitHub Copilot Support
-
-Keystone supports using your GitHub Copilot subscription directly. To authenticate (using the GitHub Device Flow):
-
-```bash
-keystone auth login github
-```
-
-Then, you can use Copilot in your configuration:
+You can add any OpenAI-compatible provider (Together AI, Perplexity, Local Ollama, etc.) by using the `@ai-sdk/openai` package and providing the `base_url` and `api_key_env`.
 
 ```yaml
 providers:
-
-
-
+  ollama:
+    package: "@ai-sdk/openai"
+    base_url: http://localhost:11434/v1
+    api_key_env: OLLAMA_API_KEY # Can be any value for local Ollama
+    default_model: llama3.2
 ```
 
-
-
-### OpenAI ChatGPT Plus/Pro (OAuth)
-
-Keystone supports using your ChatGPT Plus/Pro subscription (OAuth) instead of an API key:
-
-```bash
-keystone auth login openai-chatgpt
-```
-
-Then map models to the `openai-chatgpt` provider in your config.
-
-### Anthropic Claude Pro/Max (OAuth)
-
-Keystone supports using your Claude Pro/Max subscription (OAuth) instead of an API key:
-
-```bash
-keystone auth login anthropic-claude
-```
+### API Key Management
 
-
+For other providers, store API keys in a `.env` file in your project root:
+- `OPENAI_API_KEY`
+- `ANTHROPIC_API_KEY`
 
-###
+### Context Injection (Opt-in)
 
-Keystone
+Keystone can automatically inject project context files (`README.md`, `AGENTS.md`, `.cursor/rules`, `.claude/rules`) into LLM system prompts. This helps agents understand your project's conventions and guidelines.
 
-```
-
+```yaml
+features:
+  context_injection:
+    enabled: true # Opt-in feature (default: false)
+    search_depth: 3 # How many directories up to search (default: 3)
+    sources: # Which context sources to include
+      - readme # README.md files
+      - agents_md # AGENTS.md files
+      - cursor_rules # .cursor/rules or .claude/rules
 ```
 
-
-
-
+When enabled, Keystone will:
+1. Search from the workflow directory up to the project root
+2. Find the nearest `README.md` and `AGENTS.md` files
+3. Parse rules from `.cursor/rules` or `.claude/rules` directories
+4. Prepend this context to the LLM system prompt
 
-
-- `OPENAI_API_KEY`
-- `ANTHROPIC_API_KEY`
+Context is cached for 1 minute to avoid redundant file reads.
 
-Or use the `keystone auth login` command to securely store them in your local machine's configuration:
-- `keystone auth login openai`
-- `keystone auth login anthropic`
 
 ---
 
-## <a id="workflow-example"
+## <a id="workflow-example">📝 Workflow Example</a>
 
 Workflows are defined in YAML. Dependencies are automatically resolved based on the `needs` field, and **Keystone also automatically detects implicit dependencies** from your `${{ }}` expressions.
 
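The new context-injection docs say the gathered context "is cached for 1 minute to avoid redundant file reads." A minimal time-based cache sketch of that idea follows; the factory name and the injectable clock are hypothetical, and the package's own cache may well differ:

```typescript
// Minimal TTL-cache sketch for the "cached for 1 minute" behavior described
// above. `load` stands in for re-reading context files from disk.
function makeTtlCache<T>(ttlMs: number, now: () => number = Date.now) {
  const entries = new Map<string, { value: T; at: number }>();
  return {
    get(key: string, load: () => T): T {
      const hit = entries.get(key);
      if (hit && now() - hit.at < ttlMs) return hit.value; // fresh: skip reload
      const value = load(); // stale or missing: reload
      entries.set(key, { value, at: now() });
      return value;
    },
  };
}

// A fake clock shows the 60s expiry without waiting.
let t = 0;
let reads = 0;
const cache = makeTtlCache<string>(60_000, () => t);
cache.get("ctx", () => { reads++; return "README contents"; });
t = 30_000; cache.get("ctx", () => { reads++; return "README contents"; }); // hit
t = 61_000; cache.get("ctx", () => { reads++; return "README contents"; }); // expired -> reload
console.log(reads); // 2
```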
@@ -444,7 +429,7 @@ expression:
 
 ---
 
-## <a id="step-types"
+## <a id="step-types">🏗️ Step Types</a>
 
 Keystone supports several specialized step types:
 
@@ -777,7 +762,7 @@ Until `strategy.matrix` is wired end-to-end, use explicit `foreach` with an arra
 
 ---
 
-## <a id="advanced-features"
+## <a id="advanced-features">🔧 Advanced Features</a>
 
 ### Idempotency Keys
 
@@ -990,7 +975,7 @@ You can also define a workflow-level `compensate` step to handle overall cleanup
 
 ---
 
-## <a id="agent-definitions"
+## <a id="agent-definitions">🤖 Agent Definitions</a>
 
 Agents are defined in Markdown files with YAML frontmatter, making them easy to read and version control.
 
@@ -1174,7 +1159,7 @@ In these examples, the agent will have access to all tools provided by the MCP s
 
 ---
 
-## <a id="cli-commands"
+## <a id="cli-commands">🛠️ CLI Commands</a>
 
 | Command | Description |
 | :--- | :--- |
@@ -1197,9 +1182,6 @@ In these examples, the agent will have access to all tools provided by the MCP s
 | `dev <task>` | Run the self-bootstrapping DevMode workflow |
 | `manifest` | Show embedded assets manifest |
 | `config show` | Show current configuration and discovery paths (alias: `list`) |
-| `auth status [provider]` | Show authentication status |
-| `auth login [provider]` | Login to an authentication provider (github, openai, anthropic, openai-chatgpt, anthropic-claude, gemini/google-gemini) |
-| `auth logout [provider]` | Logout and clear authentication tokens |
 | `ui` | Open the interactive TUI dashboard |
 | `mcp start` | Start the Keystone MCP server |
 | `mcp login <server>` | Login to a remote MCP server |
@@ -1238,19 +1220,22 @@ Input keys passed via `-i key=val` must be alphanumeric/underscore and cannot be
 ### Dry Run
 `keystone run --dry-run` prints shell commands without executing them and skips non-shell steps (including human prompts). Outputs from skipped steps are empty, so conditional branches may differ from a real run.
 
-## <a id="security"
+## <a id="security">🛡️ Security</a>
 
 ### Shell Execution
 Keystone blocks shell commands that match common injection/destructive patterns (like `rm -rf /` or pipes to shells). To run them, set `allowInsecure: true` on the step. Prefer `${{ escape(...) }}` when interpolating user input.
 
-You can bypass this check if you trust the command:
-```yaml
 - id: deploy
   type: shell
   run: ./deploy.sh ${{ inputs.env }}
   allowInsecure: true
 ```
 
+#### Troubleshooting Security Errors
+If you see a `Security Error: Evaluated command contains shell metacharacters`, it means your command contains characters like `\n`, `|`, or `&` that were not explicitly escaped or are not in the safe whitelist.
+- **Fix 1**: Use `${{ escape(steps.id.output) }}` for any dynamic values.
+- **Fix 2**: Set `allowInsecure: true` if the command naturally uses special characters (like `echo "line1\nline2"`).
+
 ### Expression Safety
 Expressions `${{ }}` are evaluated using a safe AST parser (`jsep`) which:
 - Prevents arbitrary code execution (no `eval` or `Function`).
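The `${{ escape(...) }}` guidance above relies on shell quoting. One common quoting scheme for this purpose is POSIX single-quote escaping, sketched below; this is an assumption for illustration and is not a reproduction of Keystone's actual `escape()` implementation:

```typescript
// POSIX-style single-quote escaping: wrap the value in single quotes and
// replace each embedded ' with '\'' so the shell treats the whole value as
// literal text. Illustrative only; not Keystone's actual escape().
function shellEscape(value: string): string {
  return "'" + value.replace(/'/g, `'\\''`) + "'";
}

console.log(shellEscape("hello world"));       // 'hello world'
console.log(shellEscape("rm -rf /; echo hi")); // metacharacters become literal text
console.log(shellEscape("it's"));              // 'it'\''s'
```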
@@ -1266,12 +1251,14 @@ Request steps enforce SSRF protections and require HTTPS by default. Cross-origi
 
 ---
 
-## <a id="architecture"
+## <a id="architecture">🏗️ Architecture</a>
 
 ```mermaid
 graph TD
 CLI[CLI Entry Point] --> WR[WorkflowRunner]
 CLI --> MCPServer[MCP Server]
+Config[ConfigLoader] --> WR
+Config --> Adapter
 
 subgraph "Core Orchestration"
 WR --> Scheduler[WorkflowScheduler]
@@ -1302,12 +1289,12 @@ graph TD
 EX --> Join[Join Step]
 EX --> Blueprint[Blueprint Step]
 
-LLM -->
-
+LLM --> Adapter[LLM Adapter (AI SDK)]
+Adapter --> Providers[OpenAI, Anthropic, Gemini, Copilot, etc.]
 LLM --> MCPClient[MCP Client]
 ```
 
-## <a id="project-structure"
+## <a id="project-structure">📂 Project Structure</a>
 
 - `src/cli.ts`: CLI entry point.
 - `src/db/`: SQLite persistence layer.
@@ -1322,6 +1309,6 @@ graph TD
 
 ---
 
-## <a id="license"
+## <a id="license">📄 License</a>
 
 MIT
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "keystone-cli",
-  "version": "
+  "version": "2.0.1",
   "description": "A local-first, declarative, agentic workflow orchestrator built on Bun",
   "type": "module",
   "bin": {
@@ -8,7 +8,9 @@
   },
   "scripts": {
     "dev": "bun run src/cli.ts",
-    "test": "bun test",
+    "test": "bun test --timeout 60000",
+    "test:adapter": "SKIP_LLM_MOCK=1 bun test ./src/runner/llm-adapter.integration.test.ts --timeout 60000",
+    "test:unit": "bun test --timeout 60000 --filter '!llm-adapter.integration.test.ts'",
     "lint": "biome check .",
     "lint:fix": "biome check --write .",
     "format": "biome format --write .",
@@ -30,6 +32,7 @@
     "@jsep-plugin/object": "^1.2.2",
     "@types/react": "^19.0.0",
     "@xenova/transformers": "^2.17.2",
+    "ai": "^6.0.3",
     "ajv": "^8.12.0",
     "commander": "^12.1.0",
     "dagre": "^0.8.5",
@@ -41,7 +44,7 @@
     "jsep": "^1.4.0",
     "react": "^19.0.0",
     "sqlite-vec": "0.1.6",
-    "zod": "^3.
+    "zod": "^3.25.76",
     "zod-to-json-schema": "^3.25.1"
   },
   "optionalDependencies": {
|