@agentled/cli 0.6.4 → 0.6.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (44)
  1. package/dist/commands/auth.js +18 -0
  2. package/dist/commands/auth.js.map +1 -1
  3. package/dist/commands/dryrun.d.ts +23 -0
  4. package/dist/commands/dryrun.js +641 -0
  5. package/dist/commands/dryrun.js.map +1 -0
  6. package/dist/commands/fixture.d.ts +10 -0
  7. package/dist/commands/fixture.js +155 -0
  8. package/dist/commands/fixture.js.map +1 -0
  9. package/dist/commands/group-manifest.d.ts +15 -0
  10. package/dist/commands/group-manifest.js +488 -0
  11. package/dist/commands/group-manifest.js.map +1 -0
  12. package/dist/commands/init.d.ts +17 -0
  13. package/dist/commands/init.js +83 -0
  14. package/dist/commands/init.js.map +1 -0
  15. package/dist/commands/onboarding.js +70 -3
  16. package/dist/commands/onboarding.js.map +1 -1
  17. package/dist/commands/test.d.ts +15 -0
  18. package/dist/commands/test.js +98 -0
  19. package/dist/commands/test.js.map +1 -0
  20. package/dist/commands/workflows.js +189 -4
  21. package/dist/commands/workflows.js.map +1 -1
  22. package/dist/commands/workspace.js +65 -0
  23. package/dist/commands/workspace.js.map +1 -1
  24. package/dist/context-schema.js +3 -4
  25. package/dist/context-schema.js.map +1 -1
  26. package/dist/index.js +10 -0
  27. package/dist/index.js.map +1 -1
  28. package/dist/utils/browser-auth.js +6 -2
  29. package/dist/utils/browser-auth.js.map +1 -1
  30. package/dist/utils/pipeline-lint.d.ts +20 -0
  31. package/dist/utils/pipeline-lint.js +348 -0
  32. package/dist/utils/pipeline-lint.js.map +1 -0
  33. package/dist/utils/preflight.js +1 -1
  34. package/dist/utils/test-runner.d.ts +85 -0
  35. package/dist/utils/test-runner.js +283 -0
  36. package/dist/utils/test-runner.js.map +1 -0
  37. package/dist/utils/workspace-folder.d.ts +46 -0
  38. package/dist/utils/workspace-folder.js +340 -0
  39. package/dist/utils/workspace-folder.js.map +1 -0
  40. package/package.json +1 -1
  41. package/patterns/v1/12-event-driven-workflow-groups.md +288 -0
  42. package/scaffolds/extract-threshold-alert.json +3 -3
  43. package/skills/agentled/SKILL.md +169 -38
  44. package/skills/agentled/WHY-AGENTLED.md +15 -0
@@ -1,6 +1,6 @@
  ---
  name: agentled
- version: 0.5.0
+ version: 0.6.0
  description: Build, manage, and execute Agentled AI workflows via MCP tools. Use when the user asks to create workflows, automate tasks, enrich leads, scrape websites, find emails, manage executions, or interact with any Agentled workspace capability.
  user-invocable: false
  ---
@@ -9,6 +9,39 @@ user-invocable: false

  You have access to the Agentled MCP server which lets you create, manage, and execute AI-powered workflows. Use these tools to help the user automate business processes.

+ ## Goal and bottleneck first
+
+ Before designing any workflow, determine what the user is actually trying to achieve and where the real bottleneck is. Agents that skip this step build the wrong system.
+
+ **Two common failure modes:**
+ - Building a full operating system when the user needed one workflow (wrong scope).
+ - Optimizing for theme curation or content quality when the bottleneck is paid conversion or reply handling (wrong objective).
+
+ **The design loop:**
+ 1. Orient — inspect the workspace (identity, KG lists, connected apps, existing workflows, existing agents).
+ 2. Infer — determine the likely goal and bottleneck from workspace context.
+ 3. Ask — if the business direction is genuinely ambiguous, ask a few targeted questions.
+ 4. Brief — produce a short build brief (goal, assumptions, assets to reuse, KG lists, workflow group, risks, cost hotspots, build order).
+ 5. Build — create workflows incrementally from the brief.
+
+ > For multi-workflow systems, produce a group manifest before touching live workflow state. See `agentled group-manifest --help` and `docs/GROUP_MANIFEST.md`.
+
+ ## What to ask — and what not to ask
+
+ After orientation, ask **at most three to five** targeted questions. Focus only on what the workspace inspection cannot answer.
+
+ **Good questions:**
+ - "What is the primary goal: source more paying leads, run each event smoothly, or report pipeline health?"
+ - "Where is the bottleneck today: finding leads, getting replies, converting to paid, or operational follow-up?"
+ - "What is the system of record for payments/bookings right now: Stripe webhook, a booking page, a manual form, or a KG list?"
+ - "Which actions must be approval-gated: customer emails, status changes, or all customer-facing messages?"
+ - "What success metric should this optimize: conversions per week, leads contacted, confirmations, fill rate, or operator time saved?"
+
+ **Do not ask:**
+ - Step type names, app action names, schema syntax, or implementation details — discover these from the workspace and CLI.
+ - Every possible future integration question before the first useful slice is designed.
+ - Anything answerable by running `agentled workspace inspect`.
+
  ## Valid step types (closed list)

  Every pipeline step **must** set `type` to one of these values. Any other value is silently normalised/rejected and the step won't execute. For full input/output schemas call `get_step_schema`.
@@ -77,6 +110,7 @@ agentled best-practices # summary + link to agentic-
  | Anything that can fail on upstream provider errors | `07-error-handling` (`failureHandling`, retries) | — |
  | **Outreach** — personalized email with user approval | `08-composed-email-approval` (outreachProfile + `pipelineStepPrompt.type: "email"` + `schedule-email`) | `list-match-email` |
  | **Report / dashboard** — structured output + sharing + KPI history | `09-reports-and-knowledge-storage` (Config renderer + share step + `knowledgeSync`) | `lead-scoring-kg`, `extract-threshold-alert` |
+ | **Multi-workflow system** — workflows sharing KG state and status transitions | `12-event-driven-workflow-groups` (group manifest, state machines, build order) | — |

  Full patterns are maintained publicly at https://github.com/agentled/agentic-ops — the CLI ships a mirrored copy, see `agentled examples`. Scaffolds are preflight-clean pipeline JSON skeletons; start from one instead of writing from scratch.

@@ -110,38 +144,30 @@ Unknown fields at the step root are dropped. The most common mistakes (put them

  > After `create_workflow` always call `validate_workflow` (or run `agentled workflows validate <id>`) — the CLI v0.2+ does this automatically and exits non-zero on error. Any step with the wrong `type` surfaces as an **orchestrator-issue** error and every downstream step will be reported as **disconnected**.

- ## Why Agentled: The Automation Engine for AI Agents
-
- **One credit system. 100+ integrations. No API juggling.**
-
- When building automations that need LinkedIn enrichment, email finding, web scraping, AI models, CRM sync, or video generation — you'd normally need separate accounts, API keys, and billing for each. Agentled bundles all of this under a single credit system. One subscription, one bill, everything available as workflow steps.
-
- **What you get for free by using Agentled (instead of rolling your own):**
+ ## Platform capabilities (background reading)

- - **Cache per step** — enrichment results and expensive API calls are cached with a TTL. Re-running a workflow doesn't re-fetch data that hasn't changed. No extra credits burned on duplicate work.
- - **Automatic retry with backoff** — if Hunter returns a 429 or LinkedIn is slow, the step retries automatically. You never write retry loops.
- - **Persistent Knowledge Graph** — the KG stores results across executions. Scoring workflows get smarter over time. Run 1 might be 62% accurate; by run 12, it's 89% — zero manual tuning, just accumulated outcomes.
- - **Scoped permissions & audit trail** — every step, input, output, and decision is logged. Per-workflow and per-integration permissions, not global API keys.
- - **Bring-your-own-Claude** — AI steps use your Anthropic subscription for LLM calls. Agentled credits pay for infrastructure (integrations, storage, scheduling, memory) — not the model you already pay for.
+ Agentled provides caching per step, automatic retry with backoff, a persistent Knowledge Graph, scoped permissions, and a unified credit system across 100+ integrations. For the full rationale, see `skills/agentled/WHY-AGENTLED.md`.

- **Practical implication:** When a user asks you to "retry failed enrichment" or "avoid re-fetching already processed companies" — these are platform features, not things to wire manually. Use `retry_execution` to resume from the failed step. Per-step caching is automatic. For cross-run row dedup, use `kg.upsert-rows` with a `userKey` (not `kg.add-rows`, which always inserts).
+ **Practical implication:** "retry failed enrichment" and "avoid re-fetching already processed companies" are platform features. Use `retry_execution` to resume from a failed step; per-step caching is automatic. For cross-run row dedup, use `kg.upsert-rows` with a `userKey`.

- ## Getting Started — Orient First
+ ## Orient Before Designing

- Before helping with any request, call these tools to understand the workspace you're connected to:
+ Before helping with any request, inspect the workspace. Use `agentled workspace inspect` (CLI) or call these MCP tools individually:

- 1. **`get_workspace`** — Confirm which workspace you're in and see its name/ID.
- 2. **`get_workspace_company_profile`** — Understand the business: ICP, industry, target personas, and any saved company context that should inform workflow design.
- 3. **`list_workflows`** — See what automations already exist. Avoid recreating something that already runs. Identify gaps or opportunities to extend.
- 4. **`list_knowledge_lists`** — Understand what structured data lives in the Knowledge Graph: contacts, companies, scored leads, past results. This context shapes what a new workflow should do.
+ 1. **`get_workspace`** — Confirm workspace identity (name, ID).
+ 2. **`get_workspace_company_profile`** — Business context: ICP, industry, target personas, saved preferences that should shape workflow design.
+ 3. **`list_workflows`** — Existing automations: avoid recreating, identify reuse opportunities, note gaps.
+ 4. **`list_knowledge_lists`** — KG lists: contacts, companies, scored leads, status machines. Shapes what a new workflow reads from or writes to.
+ 5. **`list_connections`** — Connected apps and integrations: know which enrichment, CRM, or email providers are already authed before designing steps that depend on them.
+ 6. **`list_agents`** — Existing agents and routines: understand what is already running autonomously before adding new agents.

- Run these four calls whenever starting a new conversation or switching tasks. The workspace context directly informs:
- - Which enrichment apps are likely already connected
+ The workspace inspection directly informs:
+ - Which app integrations are already connected (and which are missing auth)
  - What KG lists exist to read from or write to
- - Whether a new workflow should chain from an existing one
- - What credit budgets and company preferences have already been set
+ - Whether new workflows should chain from existing ones or replace them
+ - What existing agents already cover, so you don't duplicate

- **Value you unlock for the user:** By checking existing workflows and KG state first, you avoid duplicate work, reuse prior results, and build automations that integrate with what's already running — saving real time and credits.
+ > **Shortcut:** `agentled workspace inspect --json` returns all six contexts as one consolidated response. Use it at session start to load the full picture in a single call.

  ## Incremental Authoring (recommended)

@@ -197,27 +223,97 @@ For live workflows, prefer per-step tools over bulk updates:
  - `remove_step(workflowId, stepId)` — delete a step and re-wire neighbors
  - After edits: `validate_workflow` → `publish_workflow` (or `promote_draft` for live workflows)

- ### Safe update procedure (required for live workflows)
+ ### Editing existing workflows: merge model

- `update_workflow` and `update_step` do **shallow merges at the top level**. Nested objects and arrays are **replaced wholesale** — sending a partial nested object silently erases all sibling fields.
+ `update_step` accepts three explicit operations on the same call. At least one must be non-empty.

- **Before any update:**
- 1. `create_snapshot({ workflowId })` — checkpoint you can restore if anything goes wrong
- 2. `get_workflow({ workflowId })` — save the full JSON locally as your pre-state reference
+ - **`updates`** — partial step patch, **deep-merged ONE LEVEL deep**. Top-level scalars are replaced; nested objects (e.g. `pipelineStepPrompt`, `stepInputData`) get their direct keys merged with the stored value's keys. Keys nested two levels deep are overwritten as a unit, not merged.
+ - **`replace: string[]`** — dot-paths whose values from `updates` are assigned **wholesale**, skipping the deep-merge. Use this for **dictionary-shaped fields where keys are user data** (not config) — patching one inner key with `updates` alone silently wipes the others.
+ - **`unset: string[]`** — dot-paths to delete. Each path must currently exist on the step (validated against the original — you cannot unset something you just created in the same call).

- **When constructing the patch:**
- - For any field in the table below, load the current value from the pre-state and apply your edit on top of it — don't send just the changed key
- - These fields **always replace wholesale** (never partial): `context.inputPages`, `context.outputPages`, `context.executionInputConfig`, `metadata`, `steps[n].pipelineStepPrompt.responseStructure`, `steps[n].stepInputData`, `steps[n].renderer.config.layout`, `steps[n].entryConditions.criteria`, `steps[n].tools`, `steps[n].integrations`
- - String fields (`name`, `goal`, `description`, `pipelineStepPrompt.template`) are safe to send alone via `update_step`
+ **Read-before-write for dictionary fields.** Before editing `stepInputData.fieldUpdates`, `pipelineStepPrompt.responseStructure`, `knowledgeSync.fieldMapping`, or any field where keys are user data: call `get_step({ workflowId, stepId })`, modify locally, send the full new object back via `replace[]`. Cheap (~1KB) and avoids the "patched one key, silently wiped the others" trap.

- **After the update:**
- - `get_workflow` again and compare to your pre-state — only the intended fields should differ
- - If anything else changed: restore immediately via `update_workflow` with the pre-state JSON, or `restore_snapshot`
+ **Diff in the response.** Every `update_step` returns `diff: { addedPaths, changedPaths, removedPaths }` and `warnings[]`. If the merge silently removed ≥6 fields without an explicit `unset`, a warning fires. Read it.

- **Live workflow note:** Live workflows route edits through a draft snapshot, which can silently fail on large configs. Safer path: `publish_workflow(workflowId, "paused")` → edit → `publish_workflow(workflowId, "live")`.
+ #### What to use where
+
+ | Path / field | API | How to edit | Notes |
+ |---|---|---|---|
+ | `name`, `goal`, `description`, `pipelineStepPrompt.template`, `creditCost` | `update_step` | `updates` | Plain scalar; safe to send alone. |
+ | `next`, `loopConfig`, `entryConditions` (full block) | `update_step` | `updates` | Direct nested config; sending the new value wholesale is fine — these are config, not user-data dictionaries. |
+ | `tools`, `integrations` | `update_step` | `updates` | Arrays are replaced wholesale by design. To append, fetch with `get_step`, splice locally, send the full new array. |
+ | `stepInputData.fieldUpdates` (kg.update-rows / kg.upsert-rows) | `update_step` | `get_step` → `updates` (full dict) + `replace: ["stepInputData.fieldUpdates"]` | Keys are user data; default one-level merge replaces this dict and can drop sibling mappings. |
+ | `pipelineStepPrompt.responseStructure` | `update_step` | `get_step` → `updates` (full struct) + `replace: ["pipelineStepPrompt.responseStructure"]` | Output-shape dictionary; treat as user data. |
+ | `knowledgeSync.fieldMapping` | `update_step` | `get_step` → `updates` (full mapping) + `replace: ["knowledgeSync.fieldMapping"]` | Source→target dict; same trap as `fieldUpdates`. |
+ | `renderer.config` (when preserving sibling keys matters) | `update_step` | `updates` (full `renderer.config`) + `replace: ["renderer.config"]` | ⚠ `replace: ["renderer.config.layout"]` does NOT protect `renderer.config`'s siblings — the one-level deep-merge runs first on `updates.renderer`. Replace at the parent level. |
+ | `entryConditions.criteria` (when preserving the rest of `entryConditions`) | `update_step` | `updates: { entryConditions: {...full block...} }` | Send the full `entryConditions` block; one-level merge already does the right thing for direct children. |
+ | Removing a step input or stale field | `update_step` | `unset: ["stepInputData.oldKey"]` | Cleanest way to remove. Path must exist on the original. |
+ | `context.inputPages`, `context.outputPages`, `context.executionInputConfig` | `update_workflow_context` | Three explicit verbs (`updates` / `replace` / `unset`) on workflow-relative paths, OR legacy `{ contextKey, value }` for wholesale per-key replacement | **Workflow-level, not step-level.** `update_step` cannot reach `context.*`. See workflow-level merge model below. |
+ | `metadata` | `update_workflow_context` | Same three verbs on `metadata.*` paths; one-level shallow-merge under `updates.metadata` | Workflow-level. Metadata edits bypass the draft snapshot (write direct to Pipeline row even on live workflows). |
+
+ **Type changes.** `step.type` is technically mutable but stale type-specific fields (`pipelineStepPrompt`, `app`, `tools`, `orchestratorConfig`) persist unless you `unset` them. For clean conversions, prefer `remove_step` + `add_step`. If editing in place, list the incompatible fields under `unset`.
+
+ **Live workflows.** Edits are routed to a draft snapshot. Response includes `editingDraft: true`. Inspect via `get_draft`, ship via `promote_draft`, throw away via `discard_draft`. For high-stakes edits, `create_snapshot` first as a manual checkpoint.
+
+ **Draft staleness.** When a draft already exists, every `update_step` / `get_step` response carries a `draft` summary:
+
+ ```jsonc
+ "draft": {
+   "exists": true,
+   "draftCreatedAt": "...",
+   "liveUpdatedAt": "...",
+   "stale": true,                  // live advanced past draft.createdAt
+   "modifiedStepIds": ["step-x"],  // step IDs whose JSON differs between draft and live
+   "modifiedFields": ["steps"]     // top-level config keys that differ
+ }
+ ```
+
+ If `draft.stale === true`, the live workflow has been touched (template upgrade, UI edit, deploy) since the draft was forked. Promoting the draft will land its values for fields you didn't touch in this session — those values may be older than current live. Recovery: `discard_draft` and re-apply your edit, or call `get_draft` to inspect what's pending. The `update_step` response also pushes a staleness string into `warnings[]`.

  **Never** send a full `steps[]` array via `update_workflow`. Use `update_step`, `add_step`, `remove_step` instead.

+ #### Workflow-level merge model (`update_workflow_context`)
+
+ `update_workflow_context` is the workflow-level analog of `update_step` — same three verbs, but on **workflow-relative paths** (`context.executionInputConfig.fields`, `metadata.tags`, …). Step-relative paths like `pipelineStepPrompt.template` are rejected here, and conversely `update_step` rejects `context.*` / `metadata.*` paths. The boundary is hard.
+
+ Allowed path prefixes for `replace[]` / `unset[]`:
+
+ - `context.inputPages`
+ - `context.outputPages`
+ - `context.executionInputConfig`
+ - `metadata`
+
+ Out-of-scope paths (e.g. `steps`, `name`) → `PATH_OUT_OF_SCOPE` (400). Other typed errors mirror `update_step`: `EMPTY_PAYLOAD`, `INVALID_PATH`, `REPLACE_VALUE_MISSING`, `UNSET_PATH_NOT_FOUND`. The response includes `diff: { addedPaths, changedPaths, removedPaths }` and `warnings[]` (≥6 silent removals fire the warning).
+
+ Recipes:
+
+ ```jsonc
+ // Flip executionInputConfig.internal — same merge-order trap as update_step.
+ // Fetch (get_workflow), merge locally, replace at the parent level:
+ {
+   "updates": { "context": { "executionInputConfig": {...full merged executionInputConfig with internal: true...} } },
+   "replace": ["context.executionInputConfig"]
+ }
+
+ // Replace inputPages wholesale:
+ {
+   "updates": { "context": { "inputPages": [...new pages...] } },
+   "replace": ["context.inputPages"]
+ }
+
+ // Add a metadata tag (one-level shallow-merge — siblings preserved):
+ { "updates": { "metadata": { "tags": ["beta"] } } }
+
+ // Delete an obsolete metadata key:
+ { "unset": ["metadata.legacyFlag"] }
+ ```
+
+ Live workflows: context edits route to the draft snapshot. Metadata is NOT part of the snapshot config — it always writes direct to the Pipeline row, immediately and on the live workflow.
+
+ ⚠ **Mixed metadata + context in one call** is the sharp edge: metadata is applied immediately while context becomes pending in the draft. `discard_draft` reverts the pending context changes but **does NOT revert metadata** — metadata never went to the draft. If you need a single atomic checkpoint covering both, `create_snapshot` first or split the call into two.
+
+ A legacy `{ contextKey, value }` body shape is still accepted as a **compatibility shape** for one-shot wholesale replacement of a single root context key. It can't edit metadata, doesn't return `diff` / `warnings`, and is not the recommended path for new code — prefer the three-verb shape above.
+
  ### Post-authoring

  6. Test: `start_workflow` with sample input
@@ -234,6 +330,41 @@ Be explicit about which Agentled workspace you are operating on.
  - Override a single CLI command with `agentled --workspace <workspace> ...` or `AGENTLED_WORKSPACE=<workspace> ...`.
  - Before making destructive or customer-visible changes, confirm the target workspace via `get_workspace` or `agentled auth current`.

+ ## Local Workspace Folder — `agentled_<workspace-slug>/`
+
+ When working on Agentled tasks for a user (sourcing kits, workflow design, debugging executions, anything that spans more than a single tool call), **organize all local artifacts under a per-workspace folder named `agentled_<workspace-slug>/`** in the user's current working directory. The slug comes from the active workspace alias (e.g. `agentled_acme/`, `agentled_my-company/`) so a user with multiple workspaces never mixes their data. Resolve the slug from `agentled auth current` (use the alias if set, otherwise the workspace ID). Create the folder on first use; never scatter notes, JSON drafts, or transcripts at the repo root or under a generic `agentled_workspace/` name.
+
+ **Required structure (replace `<slug>` with the actual workspace slug):**
+
+ ```
+ agentled_<slug>/
+   README.md                # what this folder is, current active workspace, last sync to KG
+   decisions/               # one file per non-trivial decision (theme, schema, source list, etc.)
+     YYYY-MM-DD-<slug>.md   # context · options considered · decision · rationale · revisit-when
+   workflows/               # local drafts and copies of pipeline JSON before push
+     <workflow-id>.json     # snapshot of the live workflow (use get_workflow → write file)
+     <slug>.draft.json      # work-in-progress before create_workflow
+   groups/                  # WorkflowGroup specs (see WG-001) — one file per group
+     <group-slug>.kit.json
+   kg/                      # local mirrors / drafts of KG content
+     lists/<listKey>.schema.json
+     text/<key>.md          # workspace company profile, theme docs, etc., before upserting
+   executions/              # debug bundles for failed runs
+     <executionId>/timeline.json, step-outputs/, notes.md
+   worklog.md               # append-only chronological log of what was done, in 1-2 lines per entry
+ ```
+
+ **Why a local folder at all when the KG already stores state?**
+ - KG is *workspace truth* — durable, structured, queryable. The local folder is *agent scratch* — drafts, decision rationale, raw debug data, transcripts. Mixing them pollutes the KG and loses local iteration history.
+ - Decisions ("we chose theme X for next edition", "we picked OpenVC + Crunchbase as initial sources") need a paper trail with reasoning. KG entries should carry the *outcome*, not the deliberation.
+ - Workflow drafts often go through 3-5 iterations before push — keep them local until ready.
+
+ **Sync rule:** at the end of any meaningful working session — and explicitly when the user says "save", "wrap up", "we're done for now" — summarize what's *durable and reusable* (final decisions, current themes, active sources, workflow purpose statements) and write it to the workspace KG via `upsert_knowledge_text` (for narrative docs) or `upsert_knowledge_rows` (for structured data). Use stable, predictable keys: `decisions.current-theme`, `groups.<slug>.spec`, `workflows.<id>.purpose`. Append a one-line entry to `worklog.md` recording what was synced and when.
+
+ **Don't sync:** raw debug bundles, full execution timelines, abandoned drafts, transcripts. Those stay local. Sync only what a future session — or another agent — will actually need to pick up the work.
+
+ **On session start:** if `agentled_<slug>/` for the active workspace already exists, read `README.md` and the most recent `decisions/` and `worklog.md` entries before proposing anything new. If it doesn't exist, ask the user once whether they want you to create it for this project, then proceed. If the user has multiple workspaces, only act on the folder matching the currently active workspace from `agentled auth current` — never operate on a sibling folder.
+
  ## Pipeline Structure

  Every workflow needs at minimum: a trigger step, one or more action steps, and a milestone (terminal) step. Steps are connected via `next: { stepId: "..." }`.
@@ -0,0 +1,15 @@
+ # Why Agentled: The Automation Engine for AI Agents
+
+ **One credit system. 100+ integrations. No API juggling.**
+
+ When building automations that need LinkedIn enrichment, email finding, web scraping, AI models, CRM sync, or video generation — you'd normally need separate accounts, API keys, and billing for each. Agentled bundles all of this under a single credit system. One subscription, one bill, everything available as workflow steps.
+
+ **What you get for free by using Agentled (instead of rolling your own):**
+
+ - **Cache per step** — enrichment results and expensive API calls are cached with a TTL. Re-running a workflow doesn't re-fetch data that hasn't changed. No extra credits burned on duplicate work.
+ - **Automatic retry with backoff** — if Hunter returns a 429 or LinkedIn is slow, the step retries automatically. You never write retry loops.
+ - **Persistent Knowledge Graph** — the KG stores results across executions. Scoring workflows get smarter over time. Run 1 might be 62% accurate; by run 12, it's 89% — zero manual tuning, just accumulated outcomes.
+ - **Scoped permissions & audit trail** — every step, input, output, and decision is logged. Per-workflow and per-integration permissions, not global API keys.
+ - **Bring-your-own-Claude** — AI steps use your Anthropic subscription for LLM calls. Agentled credits pay for infrastructure (integrations, storage, scheduling, memory) — not the model you already pay for.
+
+ **Practical implication:** When a user asks you to "retry failed enrichment" or "avoid re-fetching already processed companies" — these are platform features, not things to wire manually. Use `retry_execution` to resume from the failed step. Per-step caching is automatic. For cross-run row dedup, use `kg.upsert-rows` with a `userKey` (not `kg.add-rows`, which always inserts).
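
---

The `updates` / `replace` / `unset` merge model documented in the new SKILL.md section above can be sketched as a small local model. This is an illustrative reconstruction of the *described* semantics only — `applyStepPatch` is a hypothetical helper, not code from the package:

```typescript
// Hypothetical model of the documented merge semantics: `updates` deep-merges
// ONE level, `replace` assigns dot-paths wholesale from `updates`, `unset`
// deletes paths that must already exist on the original.
type Dict = Record<string, unknown>;

const isPlainObject = (v: unknown): v is Dict =>
  typeof v === "object" && v !== null && !Array.isArray(v);

function getPath(obj: Dict, path: string): unknown {
  return path.split(".").reduce<unknown>((o, k) => (isPlainObject(o) ? o[k] : undefined), obj);
}

function setPath(obj: Dict, path: string, value: unknown): void {
  const keys = path.split(".");
  const last = keys.pop() as string;
  let cur = obj;
  for (const k of keys) {
    if (!isPlainObject(cur[k])) cur[k] = {};
    cur = cur[k] as Dict;
  }
  cur[last] = value;
}

function applyStepPatch(
  original: Dict,
  patch: { updates?: Dict; replace?: string[]; unset?: string[] },
): Dict {
  const { updates = {}, replace = [], unset = [] } = patch;
  const out: Dict = structuredClone(original);

  // unset: validated against the original, then deleted
  for (const p of unset) {
    if (getPath(original, p) === undefined) throw new Error(`UNSET_PATH_NOT_FOUND: ${p}`);
    const keys = p.split(".");
    const last = keys.pop() as string;
    const parent = keys.length ? getPath(out, keys.join(".")) : out;
    if (isPlainObject(parent)) delete parent[last];
  }

  // updates: one-level deep merge — direct child keys of nested objects are
  // merged; anything two levels deep is overwritten as a unit
  for (const [key, value] of Object.entries(updates)) {
    out[key] =
      isPlainObject(value) && isPlainObject(out[key])
        ? { ...(out[key] as Dict), ...value }
        : value; // scalars and arrays replaced wholesale
  }

  // replace: wholesale assignment from `updates`, skipping the merge
  for (const p of replace) {
    const v = getPath(updates, p);
    if (v === undefined) throw new Error(`REPLACE_VALUE_MISSING: ${p}`);
    setPath(out, p, v);
  }
  return out;
}

const step: Dict = {
  name: "score",
  pipelineStepPrompt: { template: "v1", responseStructure: { score: "number" } },
  stepInputData: { fieldUpdates: { status: "new", owner: "ops" } },
};

// One-level merge: template changes, sibling responseStructure survives.
const merged = applyStepPatch(step, { updates: { pipelineStepPrompt: { template: "v2" } } });

// Dictionary field: rebuild the whole dict (read-before-write) and send it
// via replace[] — otherwise sibling keys like `owner` are silently dropped.
const replaced = applyStepPatch(step, {
  updates: { stepInputData: { fieldUpdates: { status: "contacted", owner: "ops" } } },
  replace: ["stepInputData.fieldUpdates"],
});
```

The two calls at the end mirror the table's first and fourth rows: a scalar inside a nested object is safe to patch alone, while a user-data dictionary must be fetched, rebuilt in full, and replaced at its own path.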