keystone-cli 1.3.0 → 2.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (42)
  1. package/README.md +114 -140
  2. package/package.json +6 -3
  3. package/src/cli.ts +54 -369
  4. package/src/commands/init.ts +15 -29
  5. package/src/db/memory-db.test.ts +45 -0
  6. package/src/db/memory-db.ts +47 -21
  7. package/src/db/sqlite-setup.ts +26 -3
  8. package/src/db/workflow-db.ts +12 -5
  9. package/src/parser/config-schema.ts +11 -13
  10. package/src/parser/schema.ts +4 -2
  11. package/src/runner/__test__/llm-mock-setup.ts +173 -0
  12. package/src/runner/__test__/llm-test-setup.ts +271 -0
  13. package/src/runner/engine-executor.test.ts +25 -18
  14. package/src/runner/executors/blueprint-executor.ts +0 -1
  15. package/src/runner/executors/dynamic-executor.ts +11 -6
  16. package/src/runner/executors/engine-executor.ts +5 -1
  17. package/src/runner/executors/llm-executor.ts +502 -1033
  18. package/src/runner/executors/memory-executor.ts +35 -19
  19. package/src/runner/executors/plan-executor.ts +0 -1
  20. package/src/runner/executors/types.ts +4 -4
  21. package/src/runner/llm-adapter.integration.test.ts +151 -0
  22. package/src/runner/llm-adapter.ts +263 -1401
  23. package/src/runner/llm-clarification.test.ts +91 -106
  24. package/src/runner/llm-executor.test.ts +217 -1181
  25. package/src/runner/memoization.test.ts +0 -1
  26. package/src/runner/recovery-security.test.ts +51 -20
  27. package/src/runner/reflexion.test.ts +55 -18
  28. package/src/runner/standard-tools-integration.test.ts +137 -87
  29. package/src/runner/step-executor.test.ts +36 -80
  30. package/src/runner/step-executor.ts +0 -2
  31. package/src/runner/test-harness.ts +3 -29
  32. package/src/runner/tool-integration.test.ts +122 -73
  33. package/src/runner/workflow-runner.ts +92 -35
  34. package/src/runner/workflow-scheduler.ts +11 -1
  35. package/src/runner/workflow-summary.ts +144 -0
  36. package/src/utils/auth-manager.test.ts +10 -520
  37. package/src/utils/auth-manager.ts +3 -756
  38. package/src/utils/config-loader.ts +12 -0
  39. package/src/utils/constants.ts +0 -17
  40. package/src/utils/process-sandbox.ts +15 -3
  41. package/src/runner/llm-adapter-runtime.test.ts +0 -209
  42. package/src/runner/llm-adapter.test.ts +0 -1012
package/README.md CHANGED
@@ -34,7 +34,7 @@ Keystone allows you to define complex automation workflows using a simple YAML s

  ---

- ## <a id="features"></a>✨ Features
+ ## <a id="features">✨ Features</a>

  - ⚡ **Local-First:** Built on Bun with a local SQLite database for state management.
  - 🧩 **Declarative:** Define workflows in YAML with automatic dependency tracking (DAG).
@@ -52,7 +52,7 @@ Keystone allows you to define complex automation workflows using a simple YAML s

  ---

- ## <a id="installation"></a>🚀 Installation
+ ## <a id="installation">🚀 Installation</a>

  Ensure you have [Bun](https://bun.sh) installed.

@@ -90,7 +90,7 @@ source <(keystone completion bash)

  ---

- ## <a id="quick-start"></a>🚦 Quick Start
+ ## <a id="quick-start">🚦 Quick Start</a>

  ### 1. Initialize a Project
  ```bash
@@ -98,53 +98,64 @@ keystone init
  ```
  This creates the `.keystone/` directory for configuration and seeds `.keystone/workflows/` plus `.keystone/workflows/agents/` with bundled workflows and agents (see "Bundled Workflows" below).

- ### 2. Configure your Environment
- Add your API keys to the generated `.env` file:
+ ### 2. Install AI SDK Providers
+ Keystone uses the **Vercel AI SDK**. Install the provider packages you need:
+ ```bash
+ npm install @ai-sdk/openai @ai-sdk/anthropic
+ # Or use other AI SDK providers like @ai-sdk/google, @ai-sdk/mistral, etc.
+ ```
+
+ ### 3. Configure Providers
+ Edit `.keystone/config.yaml` to configure your providers:
+ ```yaml
+ default_provider: openai
+
+ providers:
+   openai:
+     package: "@ai-sdk/openai"
+     api_key_env: OPENAI_API_KEY
+     default_model: gpt-4o
+
+   anthropic:
+     package: "@ai-sdk/anthropic"
+     api_key_env: ANTHROPIC_API_KEY
+     default_model: claude-3-5-sonnet-20240620
+
+ model_mappings:
+   "gpt-*": openai
+   "claude-*": anthropic
+ ```
+
+ Then add your API keys to `.env`:
  ```env
  OPENAI_API_KEY=sk-...
  ANTHROPIC_API_KEY=sk-ant-...
  ```
- Alternatively, you can use the built-in authentication management:
- ```bash
- keystone auth login openai
- keystone auth login anthropic
- keystone auth login anthropic-claude
- keystone auth login openai-chatgpt
- keystone auth login gemini
- keystone auth login github
- ```
- Use `anthropic-claude` for Claude Pro/Max subscriptions (OAuth) instead of an API key.
- Use `openai-chatgpt` for ChatGPT Plus/Pro subscriptions (OAuth) instead of an API key.
- Use `gemini` (alias `google-gemini`) for Google Gemini subscriptions (OAuth) instead of an API key.
- Use `github` to authenticate GitHub Copilot via the GitHub device flow.

- ### 3. Run a Workflow
+ See the [Configuration](#configuration) section for more details on BYOP (Bring Your Own Provider).
+
+ ### 4. Run a Workflow
  ```bash
  keystone run scaffold-feature
  ```
  Keystone automatically looks in `.keystone/workflows/` (locally and in your home directory) for `.yaml` or `.yml` files.

- ### 4. Monitor with the Dashboard
+ ### 5. Monitor with the Dashboard
  ```bash
  keystone ui
  ```

  ---

- ## <a id="bundled-workflows"></a>🧰 Bundled Workflows
+ ## <a id="bundled-workflows">🧰 Bundled Workflows</a>

  `keystone init` seeds these workflows under `.keystone/workflows/` (and the agents they rely on under `.keystone/workflows/agents/`):

- Top-level workflows (seeded in `.keystone/workflows/`):
+ Top-level utility workflows (seeded in `.keystone/workflows/`):
  - `scaffold-feature.yaml`: Interactive workflow scaffolder. Prompts for requirements, plans files, generates content, and writes them.
  - `decompose-problem.yaml`: Decomposes a problem into research/implementation/review tasks, waits for approval, runs sub-workflows, and summarizes.
  - `dev.yaml`: Self-bootstrapping DevMode workflow for an interactive plan/implement/verify loop.
- - `agent-handoff.yaml`: Demonstrates agent handoffs and tool-driven context updates.
- - `full-feature-demo.yaml`: A comprehensive workflow demonstrating multiple step types (shell, file, request, etc.).
- - `script-example.yaml`: Demonstrates sandboxed JavaScript execution.
- - `artifact-example.yaml`: Demonstrates artifact upload and download between steps.
- - `idempotency-example.yaml`: Demonstrates safe retries for side-effecting steps.
- - `dynamic-demo.yaml`: Demonstrates LLM-driven dynamic workflow orchestration where steps are generated at runtime.
+ - `dynamic-decompose.yaml`: Dynamic version of decompose-problem using LLM-driven orchestration.

  Sub-workflows (seeded in `.keystone/workflows/`):
  - `scaffold-plan.yaml`: Generates a file plan from `requirements` input.
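
The Quick Start above now installs AI SDK provider packages. A minimal sketch of what those packages expose (the `ai` core and `@ai-sdk/openai` APIs are real; the prompt is illustrative, and this is not Keystone's internal code):

```typescript
// A provider package turns a model id into a model handle, and the core
// `ai` package's generateText() drives it.
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const { text } = await generateText({
  model: openai("gpt-4o"), // same model id as `default_model` in the config above
  prompt: "Say hello from Keystone.", // illustrative prompt
});
console.log(text);
```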
@@ -158,15 +169,14 @@ Example runs:
  ```bash
  keystone run scaffold-feature
  keystone run decompose-problem -i problem="Add caching to the API" -i context="Node/Bun service"
- keystone run agent-handoff -i topic="billing" -i user="Ada"
- keystone run dynamic-demo -i task="Set up a Node.js project with TypeScript"
+ keystone run dev "Improve the user profile UI"
  ```

  Sub-workflows are used by the top-level workflows, but can be run directly if you want just one phase.

  ---

- ## <a id="configuration"></a>⚙️ Configuration
+ ## <a id="configuration">⚙️ Configuration</a>

  Keystone loads configuration from project `.keystone/config.yaml` (and user-level config; see `keystone config show` for search order) to manage model providers and model mappings.

@@ -181,42 +191,27 @@ State is stored at `.keystone/state.db` by default (project-local).
  default_provider: openai

  providers:
+   # Example: Using a standard AI SDK provider package (Bring Your Own Provider)
    openai:
-     type: openai
+     package: "@ai-sdk/openai"
      base_url: https://api.openai.com/v1
      api_key_env: OPENAI_API_KEY
      default_model: gpt-4o
-   openai-chatgpt:
-     type: openai-chatgpt
-     base_url: https://api.openai.com/v1
-     default_model: gpt-5-codex
+
+   # Example: Using another provider
    anthropic:
-     type: anthropic
-     base_url: https://api.anthropic.com/v1
+     package: "@ai-sdk/anthropic"
      api_key_env: ANTHROPIC_API_KEY
      default_model: claude-3-5-sonnet-20240620
-   anthropic-claude:
-     type: anthropic-claude
-     base_url: https://api.anthropic.com/v1
-     default_model: claude-3-5-sonnet-20240620
-   google-gemini:
-     type: google-gemini
-     base_url: https://cloudcode-pa.googleapis.com
-     default_model: gemini-1.5-pro
-   groq:
-     type: openai
-     base_url: https://api.groq.com/openai/v1
-     api_key_env: GROQ_API_KEY
-     default_model: llama-3.3-70b-versatile
+
+   # Example: Using a custom provider script
+   # my-custom-provider:
+   #   script: "./providers/my-provider.ts"
+   #   default_model: my-special-model

  model_mappings:
-   "gpt-5*": openai-chatgpt
    "gpt-*": openai
-   "claude-4*": anthropic-claude
    "claude-*": anthropic
-   "gemini-*": google-gemini
-   "o1-*": openai
-   "llama-*": groq

  mcp_servers:
    filesystem:
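
The commented-out `my-custom-provider` entry above uses a `script` field. A hypothetical sketch of how a loader might resolve either kind of entry via dynamic import (all names here are illustrative, not Keystone's internals):

```typescript
// Hypothetical loader: a `package` entry resolves from node_modules,
// a `script` entry from a path on disk.
type ProviderEntry = {
  package?: string; // e.g. "@ai-sdk/openai"
  script?: string; // e.g. "./providers/my-provider.ts"
  default_model?: string;
};

async function loadProviderModule(entry: ProviderEntry): Promise<unknown> {
  const specifier = entry.package ?? entry.script;
  if (!specifier) {
    throw new Error("provider entry needs a `package` or `script` field");
  }
  // import() resolves from local node_modules first; per the README,
  // a global install directory is probed as a fallback.
  return import(specifier);
}
```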
@@ -244,37 +239,36 @@ expression:
    strict: false
  ```

- `storage.retention_days` sets the default window used by `keystone maintenance` / `keystone prune`. `storage.redact_secrets_at_rest` controls whether secret inputs and known secrets are redacted before storing run data (default `true`).
+ ### Storage Configuration

- ### Context Injection (Opt-in)
+ The `storage` section controls data retention and security for workflow runs:

- Keystone can automatically inject project context files (`README.md`, `AGENTS.md`, `.cursor/rules`, `.claude/rules`) into LLM system prompts. This helps agents understand your project's conventions and guidelines.
+ - **`retention_days`**: Sets the default window used by `keystone maintenance` / `keystone prune` commands to clean up old run data.
+ - **`redact_secrets_at_rest`**: Controls whether secret inputs and known secrets are redacted before storing run data (default `true`).

- ```yaml
- features:
-   context_injection:
-     enabled: true # Opt-in feature (default: false)
-     search_depth: 3 # How many directories up to search (default: 3)
-     sources: # Which context sources to include
-       - readme # README.md files
-       - agents_md # AGENTS.md files
-       - cursor_rules # .cursor/rules or .claude/rules
- ```
+ ### Bring Your Own Provider (BYOP)

- When enabled, Keystone will:
- 1. Search from the workflow directory up to the project root
- 2. Find the nearest `README.md` and `AGENTS.md` files
- 3. Parse rules from `.cursor/rules` or `.claude/rules` directories
- 4. Prepend this context to the LLM system prompt
+ Keystone uses the **Vercel AI SDK**, allowing you to use any compatible provider. You must install the provider package (e.g., `@ai-sdk/openai`, `ai-sdk-provider-gemini-cli`) so Keystone can resolve it.

- Context is cached for 1 minute to avoid redundant file reads.
+ Keystone searches for provider packages in:
+ 1. **Local `node_modules`**: The project where you run `keystone`.
+ 2. **Global `node_modules`**: Your system-wide npm/bun/yarn directory.
+
+ To install a provider globally:
+ ```bash
+ bun install -g ai-sdk-provider-gemini-cli
+ # or
+ npm install -g @ai-sdk/openai
+ ```
+
+ Then configure it in `.keystone/config.yaml` using the `package` field.

  ### Model & Provider Resolution

  Keystone resolves which provider to use for a model in the following order:

  1. **Explicit Provider:** Use the `provider` field in an agent or step definition.
- 2. **Provider Prefix:** Use the `provider:model` syntax (e.g., `model: copilot:gpt-4o`).
+ 2. **Provider Prefix:** Use the `provider:model` syntax (e.g., `model: anthropic:claude-3-5-sonnet-latest`).
  3. **Model Mappings:** Matches the model name against the `model_mappings` in your config (supports suffix `*` for prefix matching).
  4. **Default Provider:** Falls back to the `default_provider` defined in your config.
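
A minimal sketch of the four-step resolution order above, including the suffix-`*` prefix matching; the function name and config shape are assumptions for illustration:

```typescript
// Sketch of the documented resolution order; not Keystone's actual code.
interface KeystoneConfig {
  default_provider: string;
  model_mappings: Record<string, string>; // e.g. { "gpt-*": "openai" }
}

function resolveProvider(
  model: string,
  explicitProvider: string | undefined,
  config: KeystoneConfig,
): string {
  if (explicitProvider) return explicitProvider; // 1. explicit `provider` field
  const prefixed = /^([\w-]+):(.+)$/.exec(model);
  if (prefixed) return prefixed[1]; // 2. `provider:model` syntax
  for (const [pattern, provider] of Object.entries(config.model_mappings)) {
    const hit = pattern.endsWith("*")
      ? model.startsWith(pattern.slice(0, -1)) // 3. suffix `*` = prefix match
      : model === pattern;
    if (hit) return provider;
  }
  return config.default_provider; // 4. fallback
}

// "claude-3-5-sonnet-latest" resolves to "anthropic" under the mappings above.
```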
@@ -293,75 +287,55 @@ model: claude-3-5-sonnet-latest
  - id: notify
    type: llm
    agent: summarizer
-   model: copilot:gpt-4o
+   model: anthropic:claude-3-5-sonnet-latest
    prompt: ...
  ```

  ### OpenAI Compatible Providers
- You can add any OpenAI-compatible provider (Together AI, Perplexity, Local Ollama, etc.) by setting the `type` to `openai` and providing the `base_url` and `api_key_env`.
-
- ### GitHub Copilot Support
-
- Keystone supports using your GitHub Copilot subscription directly. To authenticate (using the GitHub Device Flow):
-
- ```bash
- keystone auth login github
- ```
-
- Then, you can use Copilot in your configuration:
+ You can add any OpenAI-compatible provider (Together AI, Perplexity, Local Ollama, etc.) by using the `@ai-sdk/openai` package and providing the `base_url` and `api_key_env`.

  ```yaml
  providers:
-   copilot:
-     type: copilot
-     default_model: gpt-4o
- ```
-
- Authentication tokens for Copilot are managed automatically after the initial login.
-
- ### OpenAI ChatGPT Plus/Pro (OAuth)
-
- Keystone supports using your ChatGPT Plus/Pro subscription (OAuth) instead of an API key:
-
- ```bash
- keystone auth login openai-chatgpt
+   ollama:
+     package: "@ai-sdk/openai"
+     base_url: http://localhost:11434/v1
+     api_key_env: OLLAMA_API_KEY # Can be any value for local Ollama
+     default_model: llama3.2
  ```
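
The `base_url`/`api_key_env` pair above plausibly maps onto the AI SDK's `createOpenAI` factory for OpenAI-compatible endpoints. A sketch under that assumption (`createOpenAI` is the real `@ai-sdk/openai` API; the mapping itself is an inference):

```typescript
import { createOpenAI } from "@ai-sdk/openai";

// OpenAI-compatible endpoint pointed at local Ollama.
const ollama = createOpenAI({
  baseURL: "http://localhost:11434/v1", // ← base_url
  apiKey: process.env.OLLAMA_API_KEY ?? "ollama", // ← api_key_env; any value works locally
});

const model = ollama("llama3.2"); // ← default_model, usable with generateText()
```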

- Then map models to the `openai-chatgpt` provider in your config.
-
- ### Anthropic Claude Pro/Max (OAuth)
-
- Keystone supports using your Claude Pro/Max subscription (OAuth) instead of an API key:
-
- ```bash
- keystone auth login anthropic-claude
- ```
+ ### API Key Management

- Then map models to the `anthropic-claude` provider in your config. This flow uses the Claude web auth code and refreshes tokens automatically.
+ For other providers, store API keys in a `.env` file in your project root:
+ - `OPENAI_API_KEY`
+ - `ANTHROPIC_API_KEY`

- ### Google Gemini (OAuth)
+ ### Context Injection (Opt-in)

- Keystone supports using your Google Gemini subscription (OAuth) instead of an API key:
+ Keystone can automatically inject project context files (`README.md`, `AGENTS.md`, `.cursor/rules`, `.claude/rules`) into LLM system prompts. This helps agents understand your project's conventions and guidelines.

- ```bash
- keystone auth login gemini
+ ```yaml
+ features:
+   context_injection:
+     enabled: true # Opt-in feature (default: false)
+     search_depth: 3 # How many directories up to search (default: 3)
+     sources: # Which context sources to include
+       - readme # README.md files
+       - agents_md # AGENTS.md files
+       - cursor_rules # .cursor/rules or .claude/rules
  ```

- Then map models to the `google-gemini` provider in your config.
-
- ### API Key Management
+ When enabled, Keystone will:
+ 1. Search from the workflow directory up to the project root
+ 2. Find the nearest `README.md` and `AGENTS.md` files
+ 3. Parse rules from `.cursor/rules` or `.claude/rules` directories
+ 4. Prepend this context to the LLM system prompt

- For other providers, you can either store API keys in a `.env` file in your project root:
- - `OPENAI_API_KEY`
- - `ANTHROPIC_API_KEY`
+ Context is cached for 1 minute to avoid redundant file reads.

- Or use the `keystone auth login` command to securely store them in your local machine's configuration:
- - `keystone auth login openai`
- - `keystone auth login anthropic`

  ---

- ## <a id="workflow-example"></a>📝 Workflow Example
+ ## <a id="workflow-example">📝 Workflow Example</a>

  Workflows are defined in YAML. Dependencies are automatically resolved based on the `needs` field, and **Keystone also automatically detects implicit dependencies** from your `${{ }}` expressions.

@@ -444,7 +418,7 @@ expression:

  ---

- ## <a id="step-types"></a>🏗️ Step Types
+ ## <a id="step-types">🏗️ Step Types</a>

  Keystone supports several specialized step types:

@@ -777,7 +751,7 @@ Until `strategy.matrix` is wired end-to-end, use explicit `foreach` with an arra

  ---

- ## <a id="advanced-features"></a>🔧 Advanced Features
+ ## <a id="advanced-features">🔧 Advanced Features</a>

  ### Idempotency Keys

@@ -990,7 +964,7 @@ You can also define a workflow-level `compensate` step to handle overall cleanup

  ---

- ## <a id="agent-definitions"></a>🤖 Agent Definitions
+ ## <a id="agent-definitions">🤖 Agent Definitions</a>

  Agents are defined in Markdown files with YAML frontmatter, making them easy to read and version control.

@@ -1174,7 +1148,7 @@ In these examples, the agent will have access to all tools provided by the MCP s

  ---

- ## <a id="cli-commands"></a>🛠️ CLI Commands
+ ## <a id="cli-commands">🛠️ CLI Commands</a>

  | Command | Description |
  | :--- | :--- |
@@ -1197,9 +1171,6 @@ In these examples, the agent will have access to all tools provided by the MCP s
  | `dev <task>` | Run the self-bootstrapping DevMode workflow |
  | `manifest` | Show embedded assets manifest |
  | `config show` | Show current configuration and discovery paths (alias: `list`) |
- | `auth status [provider]` | Show authentication status |
- | `auth login [provider]` | Login to an authentication provider (github, openai, anthropic, openai-chatgpt, anthropic-claude, gemini/google-gemini) |
- | `auth logout [provider]` | Logout and clear authentication tokens |
  | `ui` | Open the interactive TUI dashboard |
  | `mcp start` | Start the Keystone MCP server |
  | `mcp login <server>` | Login to a remote MCP server |
@@ -1238,19 +1209,22 @@ Input keys passed via `-i key=val` must be alphanumeric/underscore and cannot be
  ### Dry Run
  `keystone run --dry-run` prints shell commands without executing them and skips non-shell steps (including human prompts). Outputs from skipped steps are empty, so conditional branches may differ from a real run.

- ## <a id="security"></a>🛡️ Security
+ ## <a id="security">🛡️ Security</a>

  ### Shell Execution
  Keystone blocks shell commands that match common injection/destructive patterns (like `rm -rf /` or pipes to shells). To run them, set `allowInsecure: true` on the step. Prefer `${{ escape(...) }}` when interpolating user input.

- You can bypass this check if you trust the command:
- ```yaml
  - id: deploy
    type: shell
    run: ./deploy.sh ${{ inputs.env }}
    allowInsecure: true
  ```

+ #### Troubleshooting Security Errors
+ If you see a `Security Error: Evaluated command contains shell metacharacters`, it means your command contains characters like `\n`, `|`, or `&` that were not explicitly escaped or are not in the safe whitelist.
+ - **Fix 1**: Use `${{ escape(steps.id.output) }}` for any dynamic values.
+ - **Fix 2**: Set `allowInsecure: true` if the command naturally uses special characters (like `echo "line1\nline2"`).
+

  ### Expression Safety
  Expressions `${{ }}` are evaluated using a safe AST parser (`jsep`) which:
@@ -1266,7 +1240,7 @@ Request steps enforce SSRF protections and require HTTPS by default. Cross-origi

  ---

- ## <a id="architecture"></a>🏗️ Architecture
+ ## <a id="architecture">🏗️ Architecture</a>

  ```mermaid
  graph TD
@@ -1302,12 +1276,12 @@ graph TD
      EX --> Join[Join Step]
      EX --> Blueprint[Blueprint Step]

-     LLM --> Adapters[LLM Adapters]
-     Adapters --> Providers[OpenAI, Anthropic, Gemini, Copilot, etc.]
+     LLM --> Adapter[LLM Adapter (AI SDK)]
+     Adapter --> Providers[OpenAI, Anthropic, Gemini, Copilot, etc.]
      LLM --> MCPClient[MCP Client]
  ```

- ## <a id="project-structure"></a>📂 Project Structure
+ ## <a id="project-structure">📂 Project Structure</a>

  - `src/cli.ts`: CLI entry point.
  - `src/db/`: SQLite persistence layer.
@@ -1322,6 +1296,6 @@ graph TD

  ---

- ## <a id="license"></a>📄 License
+ ## <a id="license">📄 License</a>

  MIT
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "keystone-cli",
-   "version": "1.3.0",
+   "version": "2.0.0",
    "description": "A local-first, declarative, agentic workflow orchestrator built on Bun",
    "type": "module",
    "bin": {
@@ -8,7 +8,9 @@
    },
    "scripts": {
      "dev": "bun run src/cli.ts",
-     "test": "bun test",
+     "test": "bun test --timeout 60000",
+     "test:adapter": "SKIP_LLM_MOCK=1 bun test ./src/runner/llm-adapter.integration.test.ts --timeout 60000",
+     "test:unit": "bun test --timeout 60000 --filter '!llm-adapter.integration.test.ts'",
      "lint": "biome check .",
      "lint:fix": "biome check --write .",
      "format": "biome format --write .",
@@ -30,6 +32,7 @@
      "@jsep-plugin/object": "^1.2.2",
      "@types/react": "^19.0.0",
      "@xenova/transformers": "^2.17.2",
+     "ai": "^6.0.3",
      "ajv": "^8.12.0",
      "commander": "^12.1.0",
      "dagre": "^0.8.5",
@@ -41,7 +44,7 @@
      "jsep": "^1.4.0",
      "react": "^19.0.0",
      "sqlite-vec": "0.1.6",
-     "zod": "^3.23.8",
+     "zod": "^3.25.76",
      "zod-to-json-schema": "^3.25.1"
    },
    "optionalDependencies": {