@agentled/cli 0.1.6 → 0.5.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (53)
  1. package/README.md +136 -0
  2. package/dist/builtin-tools-catalog.d.ts +37 -0
  3. package/dist/builtin-tools-catalog.js +96 -0
  4. package/dist/builtin-tools-catalog.js.map +1 -0
  5. package/dist/commands/auth.js +30 -0
  6. package/dist/commands/auth.js.map +1 -1
  7. package/dist/commands/examples.d.ts +15 -0
  8. package/dist/commands/examples.js +100 -0
  9. package/dist/commands/examples.js.map +1 -0
  10. package/dist/commands/scaffold.d.ts +14 -0
  11. package/dist/commands/scaffold.js +103 -0
  12. package/dist/commands/scaffold.js.map +1 -0
  13. package/dist/commands/schema.d.ts +10 -0
  14. package/dist/commands/schema.js +107 -0
  15. package/dist/commands/schema.js.map +1 -0
  16. package/dist/commands/skills.d.ts +9 -0
  17. package/dist/commands/skills.js +94 -0
  18. package/dist/commands/skills.js.map +1 -0
  19. package/dist/commands/tools.d.ts +10 -0
  20. package/dist/commands/tools.js +53 -0
  21. package/dist/commands/tools.js.map +1 -0
  22. package/dist/commands/workflows.js +227 -9
  23. package/dist/commands/workflows.js.map +1 -1
  24. package/dist/context-schema.d.ts +37 -0
  25. package/dist/context-schema.js +108 -0
  26. package/dist/context-schema.js.map +1 -0
  27. package/dist/index.js +8 -0
  28. package/dist/index.js.map +1 -1
  29. package/dist/utils/preflight.d.ts +25 -0
  30. package/dist/utils/preflight.js +293 -0
  31. package/dist/utils/preflight.js.map +1 -0
  32. package/dist/utils/skills.d.ts +49 -0
  33. package/dist/utils/skills.js +214 -0
  34. package/dist/utils/skills.js.map +1 -0
  35. package/package.json +4 -1
  36. package/patterns/v1/00-why-agentic-ops.md +107 -0
  37. package/patterns/v1/01-trigger-design.md +107 -0
  38. package/patterns/v1/02-dedup-gates.md +135 -0
  39. package/patterns/v1/03-credit-efficiency.md +130 -0
  40. package/patterns/v1/04-loop-patterns.md +147 -0
  41. package/patterns/v1/05-child-workflow-contracts.md +151 -0
  42. package/patterns/v1/06-conditional-routing.md +151 -0
  43. package/patterns/v1/07-error-handling.md +157 -0
  44. package/patterns/v1/08-composed-email-approval.md +130 -0
  45. package/patterns/v1/09-reports-and-knowledge-storage.md +166 -0
  46. package/scaffolds/README.md +62 -0
  47. package/scaffolds/ai-with-tools.json +49 -0
  48. package/scaffolds/email-polling-dedup.json +71 -0
  49. package/scaffolds/extract-threshold-alert.json +131 -0
  50. package/scaffolds/lead-scoring-kg.json +84 -0
  51. package/scaffolds/list-match-email.json +131 -0
  52. package/scaffolds/minimal.json +20 -0
  53. package/skills/agentled/SKILL.md +573 -0
package/patterns/v1/03-credit-efficiency.md
@@ -0,0 +1,130 @@
+ # 03 — Credit efficiency: not burning money while building
+
+ **Problem**: Developers restart full workflow executions to debug a single failed step, burning credits on work that was already done correctly.
+
+ **Why it fails silently**: The restarted execution appears to succeed. The wasted spend accumulates in the background — 3–5× expected credit usage during development — until the invoice arrives.
+
+ ---
+
+ ## The core discipline: fix → retry → verify
+
+ Every debugging cycle should follow exactly this sequence:
+
+ 1. **Identify** the failed step and its error
+ 2. **Fix** the configuration, prompt, or code
+ 3. **Retry from the failed step** — not from the beginning
+ 4. **Verify** the step output
+
+ Starting a new full execution to debug a failed step is the most expensive habit in agentic development. It re-runs every step that already succeeded: the enrichment API call, the LLM prompt, the database read. All paid again. None of them changed.
+
+ ---
+
+ ## Anti-pattern
+
+ ```
+ Execution fails at step 5 (AI scoring)
+   → Developer reads the error
+   → Fixes the prompt
+   → Starts a NEW execution from step 1
+   → Steps 1-4 run again: enrichment (5 credits), profile fetch (2 credits), web scrape (0 credits), data parse (0 credits)
+   → Step 5 runs with the fixed prompt
+   → Total wasted: 7 credits × every debug cycle
+ ```
+
+ In a workflow with 3 debug cycles per feature: 21 wasted credits before it works.
+
+ ---
+
+ ## Correct pattern
+
+ ```
+ Execution fails at step 5 (AI scoring)
+   → Developer reads the error
+   → Fixes the prompt in the workflow config
+   → Retries from step 5 — the platform reuses outputs from steps 1-4
+   → Step 5 runs with the fixed prompt
+   → Total wasted: 0 credits
+ ```
+
+ Most workflow platforms expose a "retry from this step" action on failed executions. Use it every time.
+
+ ---
+
+ ## Test steps in isolation before wiring them
+
+ Before adding a step to a live workflow, test it standalone with representative input data:
+
+ ```javascript
+ // Test an AI step with real input — no execution, no credits for upstream steps
+ // (object-argument form; the exact tool signature may differ on your platform)
+ test_ai_action({
+   template: "Analyze this company: {{input.company}}. Score fit 0-100.",
+   responseStructure: { score: "number", reasoning: "string" },
+   input: { company: { name: "Stripe", industry: "fintech", employees: 4000 } }
+ })
+
+ // Test a code step in the same sandbox as production
+ test_code_action({
+   code: "return input.items.filter(i => i.score > 70)",
+   input: { items: [{ name: "A", score: 85 }, { name: "B", score: 60 }] }
+ })
+ ```
+
+ This catches errors before they ever reach a running execution. Zero credits for upstream steps.
+
+ ---
+
+ ## Mock downstream steps with prior output
+
+ When you need to test a downstream step (step 6) but don't want to re-run expensive upstream steps (steps 1-5):
+
+ 1. Find a prior execution where steps 1-5 succeeded
+ 2. Copy the output of step 5 from that execution
+ 3. Use it as mock input to `test_ai_action` or `test_code_action` for step 6
+
+ ```javascript
+ // Prior execution step 5 output (saved from execution abc-123):
+ const priorOutput = {
+   company: { name: "Stripe", score: 85, signals: ["YC", "series B"] }
+ };
+
+ // Test step 6 in isolation using that output
+ // (object-argument form, as above; the exact tool signature may differ)
+ test_ai_action({
+   template: "Based on this profile, draft a 3-sentence outreach: {{input.company}}",
+   input: priorOutput
+ })
+ ```
+
+ No re-enrichment. No re-fetching. No wasted credits.
+
+ ---
+
+ ## One execution at a time
+
+ Don't start a new execution while one is in flight for the same workflow. Reasons:
+ - Parallel executions on the same data produce duplicate writes
+ - While debugging execution A, you can't trust its outputs if execution B is also writing to the same records
+ - If both fail, you now have two half-processed states to reconcile
+
+ The discipline: start → observe → retry or fix → verify. Sequential, not parallel.
+
+ ---
+
+ ## Credit cost by step type (reference)
+
+ | Step type | Typical cost | Notes |
+ |---|---|---|
+ | AI action (standard model) | 5–15 credits | Varies by model tier and output length |
+ | Data enrichment (LinkedIn, Hunter) | 2–5 credits | Per-record cost |
+ | Web scrape | 0 credits | Free |
+ | HTTP request | 0 credits | Free |
+ | Code step | 0 credits | Free |
+ | Knowledge graph read/write | 1 credit | Flat |
+ | Browser automation | 10–15 credits | Per task |
+
+ Expensive steps are AI and enrichment. These are the ones you never want to re-run unnecessarily.
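+
+ As a back-of-the-envelope check, you can estimate what full restarts cost versus step retries. A sketch using typical costs from the table above; the step list, numbers, and helper are illustrative, not platform API:
+
+ ```javascript
+ // Illustrative only: credits wasted by restarting from step 1 vs retrying the failed step.
+ const steps = [
+   { name: "enrichment", credits: 5 },
+   { name: "profile fetch", credits: 2 },
+   { name: "web scrape", credits: 0 },
+   { name: "data parse", credits: 0 },
+   { name: "AI scoring", credits: 10 }, // the step being debugged
+ ];
+
+ // A full restart re-pays every step before the failed one; retry-from-step re-pays none.
+ function wastedPerFullRestart(failedIndex) {
+   return steps.slice(0, failedIndex).reduce((sum, s) => sum + s.credits, 0);
+ }
+
+ const debugCycles = 3;
+ console.log(wastedPerFullRestart(4) * debugCycles); // 7 credits per cycle × 3 cycles = 21
+ ```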
+
+ ---
+
+ ## One-line rule
+
+ > When a step fails, fix it and retry from that step — never start a new execution; use isolated step testing to catch errors before they're in a running workflow.
package/patterns/v1/04-loop-patterns.md
@@ -0,0 +1,147 @@
+ # 04 — Loop patterns: iterating without N+1 or data loss
+
+ **Problem**: Loops in agentic workflows silently drop items, produce N+1 API calls, or pass incomplete results to downstream steps because the loop hasn't finished yet.
+
+ **Why it fails silently**: A loop that processes 10 items looks the same in logs as one that processes 9 — the missing item has no error, just an absence. Downstream steps that read loop output before completion get partial data with no warning.
+
+ ---
+
+ ## The loop completion trap
+
+ The most common loop mistake: a downstream step reads loop results before the loop has finished.
+
+ ```yaml
+ # Wrong: downstream step starts before loop finishes
+ steps:
+   - id: enrich-companies
+     type: loop
+     over: "{{input.companies}}"
+     step: enrich-each
+
+   - id: generate-report   # starts immediately, reads partial results
+     type: ai-action
+     input: "{{steps.enrich-companies.results}}"
+ ```
+
+ In async execution, `generate-report` may start with 3 of 10 companies enriched. The report is incomplete. No error is raised.
+
+ ---
+
+ ## Anti-pattern
+
+ ```yaml
+ # Wrong: no loop completion gate
+ - id: process-items
+   type: loop
+   over: "{{steps.fetch.items}}"
+   step: process-each
+
+ - id: summarize   # may run with 0 items if loop is still in flight
+   type: ai-action
+   prompt: "Summarize these results: {{steps.process-items.outputs}}"
+ ```
+
+ ---
+
+ ## Correct pattern
+
+ Add a `loop_completion` entry condition on every step that consumes loop output:
+
+ ```yaml
+ - id: process-items
+   type: loop
+   over: "{{steps.fetch.items}}"
+   step: process-each
+
+ - id: summarize
+   type: ai-action
+   entryConditions:
+     onCriteriaFail: "wait"   # block until condition is met
+     conditionText: "Wait for all processing to complete"
+     criteria:
+       - type: loop_completion
+         stepId: process-items   # which loop to wait for
+         operator: "=="
+         value: true
+   prompt: "Summarize these results: {{steps.process-items.outputs}}"
+ ```
+
+ `onCriteriaFail: "wait"` blocks this step until all loop iterations finish. The step then runs once with the complete output.
+
+ ---
+
+ ## Pairing loop results back to source records
+
+ After a loop that calls an external API or runs an AI step per item, you often need to pair each result back to the original record for a KG or CRM write.
+
+ The problem: loop outputs are indexed by iteration order, not by the original record's ID.
+
+ ```javascript
+ // Code step: pair loop outputs with source records
+ const sourceItems = input.sourceItems; // original array
+ const loopOutputs = input.loopOutputs; // same-length array of results
+
+ return sourceItems.map((item, index) => ({
+   ...item,                // original fields
+   ...loopOutputs[index],  // enriched fields
+   sourceId: item.id,      // explicit ID link
+ }));
+ ```
+
+ Place this code step after the loop completion gate, before the write step.
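+
+ Index-based pairing mislabels every record downstream of a single dropped iteration. A defensive length check is cheap; a sketch, assuming the platform preserves iteration order in loop outputs:
+
+ ```javascript
+ // Guard: refuse to pair by index if the loop dropped any items.
+ const sourceItems = input.sourceItems;
+ const loopOutputs = input.loopOutputs;
+
+ if (loopOutputs.length !== sourceItems.length) {
+   // Fail loudly: a silent length mismatch is exactly the failure mode loops hide.
+   throw new Error(
+     `Loop returned ${loopOutputs.length} outputs for ${sourceItems.length} inputs; ` +
+     "an iteration was dropped, so index pairing would mislabel records."
+   );
+ }
+
+ return sourceItems.map((item, index) => ({ ...item, ...loopOutputs[index], sourceId: item.id }));
+ ```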
+
+ ---
+
+ ## N+1: when to loop vs when to batch
+
+ A loop that calls an LLM or enrichment API once per item is an N+1 pattern. For 100 items: 100 API calls, 100 credit charges, 100× the latency.
+
+ **Ask: does the API support batch input?**
+
+ ```yaml
+ # Wrong (N+1): one LLM call per item
+ - id: classify-each
+   type: loop
+   over: "{{input.emails}}"
+   step:
+     type: ai-action
+     prompt: "Classify this email: {{currentItem.body}}"
+
+ # Correct (batch): one LLM call for all items
+ - id: classify-all
+   type: ai-action
+   prompt: |
+     Classify each of these emails. Return a JSON array in the same order.
+     Emails: {{input.emails}}
+   responseStructure:
+     classifications: "array of { id: string, category: string, priority: string }"
+ ```
+
+ Not every step supports batching — enrichment APIs often don't. But AI steps almost always do. Default to batch for AI classification, extraction, and scoring over lists.
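+
+ When the list is too large for a single prompt, chunked batching is the middle ground: one call per chunk instead of one per item. A sketch for a code step; the chunk size is an assumption to tune against the model's context window:
+
+ ```javascript
+ // Chunked batching: 500 emails become 10 LLM calls instead of 500.
+ const CHUNK_SIZE = 50; // assumption: a chunk fits comfortably in the model's context
+
+ function chunk(items, size) {
+   const out = [];
+   for (let i = 0; i < items.length; i += size) {
+     out.push(items.slice(i, i + size));
+   }
+   return out;
+ }
+
+ // Each batch then feeds one classify-all style AI step.
+ return chunk(input.emails, CHUNK_SIZE);
+ ```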
+
+ ---
+
+ ## Fire-and-forget anti-pattern
+
+ ```yaml
+ # Wrong: loop dispatches child workflows with no completion tracking
+ - id: dispatch-scoring
+   type: loop
+   over: "{{input.candidates}}"
+   step:
+     type: call-workflow
+     workflowId: score-candidate
+     input: "{{currentItem}}"
+
+ - id: aggregate-scores   # starts immediately — child workflows haven't finished
+   type: ai-action
+   prompt: "Aggregate these scores: {{steps.dispatch-scoring.outputs}}"
+ ```
+
+ When the loop calls child workflows, completion tracking is especially important — child workflow execution time varies. Always add a `loop_completion` gate before aggregating.
+
+ ---
+
+ ## One-line rule
+
+ > Always gate the step that consumes loop output on `loop_completion` with `onCriteriaFail: "wait"` — loops run asynchronously and downstream steps will read partial data without it.
package/patterns/v1/05-child-workflow-contracts.md
@@ -0,0 +1,151 @@
+ # 05 — Child workflow contracts: composable workflows with typed returns
+
+ **Problem**: Monolithic workflows become unmaintainable, and child workflows called from orchestrators fail silently because their return contracts aren't defined — the calling workflow gets `undefined` for every field it tries to read.
+
+ **Why it fails silently**: A child workflow that ends with a `milestone` step instead of a `return` step completes successfully from the platform's perspective. The calling workflow receives no data and no error. Every field reference like `{{steps.call-child.score}}` resolves to an empty string.
+
+ ---
+
+ ## The milestone vs return mistake
+
+ ```yaml
+ # Wrong: child workflow ends with milestone
+ - id: score-company
+   type: ai-action
+   prompt: "Score this company 0-100..."
+   responseStructure:
+     score: "number"
+     decision: "string"
+
+ - id: done   # ← milestone = terminal, no data returned to caller
+   type: milestone
+   name: "Complete"
+ ```
+
+ The calling orchestrator runs `call-workflow` and gets back nothing. `{{steps.call-child.score}}` is empty. No error is raised.
+
+ ---
+
+ ## Anti-pattern
+
+ ```yaml
+ # Wrong: child workflow
+ steps:
+   - id: enrich
+     ...
+   - id: score
+     ...
+   - id: done   # milestone doesn't return data
+     type: milestone
+
+ # Calling orchestrator:
+ - id: call-child
+   type: call-workflow
+   input: { companyUrl: "{{input.url}}" }
+
+ - id: use-result
+   type: ai-action
+   prompt: "Based on score {{steps.call-child.score}}..."
+   # ^ always empty — milestone returned nothing
+ ```
+
+ ---
+
+ ## Correct pattern
+
+ Child workflows must end with a `return` step that explicitly declares what they return:
+
+ ```yaml
+ # Correct: child workflow
+ steps:
+   - id: enrich
+     ...
+   - id: score
+     type: ai-action
+     responseStructure:
+       score: "number 0-100"
+       decision: "invest | pass | monitor"
+       summary: "string"
+
+   - id: return-results   # ← return step, not milestone
+     type: return
+     returnConfig:
+       fields:
+         - name: score      # the name the caller uses
+           stepId: score    # which step produced it
+           field: score     # which field from that step
+         - name: decision
+           stepId: score
+           field: decision
+         - name: summary
+           stepId: score
+           field: summary
+
+ # Calling orchestrator:
+ - id: call-child
+   type: call-workflow
+   input: { companyUrl: "{{input.url}}" }
+
+ - id: use-result
+   type: ai-action
+   prompt: "Based on score {{steps.call-child.score}}, decision: {{steps.call-child.decision}}..."
+   # ^ now populated correctly
+ ```
+
+ ---
+
+ ## Designing a return contract
+
+ A return contract is the interface between the child workflow and its callers. Treat it like a function signature:
+
+ 1. **Be explicit**: list every field the caller might need — don't assume they'll dig into nested objects
+ 2. **Use flat field names**: `score` not `scoringCard.total_score` — callers reference these as template variables
+ 3. **Match names to caller expectations**: if the orchestrator uses `{{steps.call-child.decision}}`, the return contract must export a field named `decision`
+ 4. **Version changes carefully**: adding fields is safe; renaming or removing fields breaks every orchestrator that calls this child
+
+ ```yaml
+ # Comprehensive return contract
+ returnConfig:
+   fields:
+     - { name: score, stepId: score-step, field: total_score }
+     - { name: decision, stepId: score-step, field: decision }
+     - { name: summary, stepId: score-step, field: executive_summary }
+     - { name: teamEvaluations, stepId: eval-team, field: evaluations }
+     - { name: rawData, stepId: enrich, field: companyProfile }
+ ```
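+
+ On the caller's side, the silent-undefined failure can be caught at the boundary with a guard code step that asserts the child returned every field this orchestrator reads. A sketch; the field list and input wiring are assumptions for illustration:
+
+ ```javascript
+ // Orchestrator-side guard: fail loudly when a child returns no data,
+ // instead of letting template references resolve to empty strings.
+ const EXPECTED_FIELDS = ["score", "decision", "summary"]; // what this caller reads
+
+ const result = input.childResult; // output of the call-workflow step
+ const missing = EXPECTED_FIELDS.filter(
+   (f) => result == null || result[f] === undefined || result[f] === ""
+ );
+
+ if (missing.length > 0) {
+   throw new Error(
+     `Child workflow returned no data for: ${missing.join(", ")}. ` +
+     "Check that it ends with a return step, not a milestone."
+   );
+ }
+
+ return result;
+ ```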
+
+ ---
+
+ ## The god-workflow anti-pattern
+
+ A single workflow with 25+ steps that handles intake, enrichment, scoring, routing, outreach, and CRM sync:
+
+ ```
+ trigger → fetch → enrich → score → route → draft-email → approve → send →
+ update-crm → update-kg → notify-slack → generate-report → archive → done
+ ```
+
+ Problems:
+ - A failure in step 12 requires re-running steps 1-11
+ - You can't reuse the scoring logic in another context
+ - Testing requires running the entire pipeline end-to-end
+ - A single developer change can break the entire flow
+
+ **Break it into composable child workflows:**
+
+ ```
+ Orchestrator:
+   trigger → call: enrich-workflow → call: score-workflow → call: route-workflow → done
+
+ enrich-workflow: fetch → enrich → return { profile }
+ score-workflow:  receive profile → score → return { score, decision }
+ route-workflow:  receive decision → route → draft → approve → send → return { sent }
+ ```
+
+ Each child workflow can be tested independently, retried independently, and reused by other orchestrators.
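+
+ The decomposition mirrors ordinary function composition. As a mental model only (plain async functions standing in for child workflows; none of this is platform API):
+
+ ```javascript
+ // Each child workflow behaves like a function with an explicit return contract.
+ async function enrichWorkflow({ companyUrl }) {
+   return { profile: { companyUrl, industry: "fintech" } };  // return { profile }
+ }
+ async function scoreWorkflow({ profile }) {
+   const score = profile.industry === "fintech" ? 85 : 50;   // stand-in scoring rule
+   return { score, decision: score > 70 ? "invest" : "pass" }; // return { score, decision }
+ }
+ async function routeWorkflow({ decision }) {
+   return { sent: decision === "invest" };                   // return { sent }
+ }
+
+ // The orchestrator composes the contracts; each call is independently testable.
+ async function orchestrate(input) {
+   const { profile } = await enrichWorkflow(input);
+   const { decision } = await scoreWorkflow({ profile });
+   return routeWorkflow({ decision });
+ }
+ ```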
+
+ ---
+
+ ## One-line rule
+
+ > Child workflows must end with a `return` step (not `milestone`) with an explicit `returnConfig.fields` list — milestone completes silently with no data returned to the caller.
package/patterns/v1/06-conditional-routing.md
@@ -0,0 +1,151 @@
+ # 06 — Conditional routing: conditions that actually fire
+
+ **Problem**: Entry conditions on workflow steps silently fail to apply — steps run unconditionally, or skip unconditionally — because the wrong field names are used to configure them.
+
+ **Why it fails silently**: The wrong field names (`conditions` instead of `criteria`, `field` instead of `variable`) are not validation errors. The platform silently ignores unrecognized fields and applies no condition at all. The step runs every time, or skips every time, with no visible indication that the condition isn't being evaluated.
+
+ ---
+
+ ## The field name trap
+
+ This is the most subtle silent failure in workflow configuration. Two pairs of field names look interchangeable but aren't:
+
+ | Wrong | Correct |
+ |---|---|
+ | `conditions` | `criteria` |
+ | `field` | `variable` |
+
+ ```yaml
+ # Wrong: unrecognized field names — condition is silently ignored
+ entryConditions:
+   onCriteriaFail: "skip"
+   conditions:                    # ← wrong: should be "criteria"
+     - field: "{{input.score}}"   # ← wrong: should be "variable"
+       operator: ">"
+       value: 70
+
+ # Correct: recognized field names — condition is evaluated
+ entryConditions:
+   onCriteriaFail: "skip"
+   criteria:                        # ← correct
+     - variable: "{{input.score}}"  # ← correct
+       operator: ">"
+       value: 70
+ ```
+
+ The wrong version doesn't error. The step runs unconditionally, as if no condition existed.
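+
+ Why typos vanish: lenient config parsers read the keys they know and drop the rest. A toy model of the mechanism (illustrative only; this is not the platform's actual parser):
+
+ ```javascript
+ // Toy model: unknown keys are silently dropped, not rejected.
+ function parseEntryConditions(raw) {
+   return {
+     onCriteriaFail: raw.onCriteriaFail ?? "skip",
+     criteria: raw.criteria ?? [], // a typo'd "conditions" key is simply never read
+   };
+ }
+
+ const parsed = parseEntryConditions({
+   onCriteriaFail: "skip",
+   conditions: [{ field: "{{input.score}}", operator: ">", value: 70 }], // typo'd keys
+ });
+ console.log(parsed.criteria); // [] -> no criteria, so the step runs unconditionally
+ ```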
+
+ ---
+
+ ## Anti-pattern
+
+ ```yaml
+ # Wrong: routes HOT leads to Slack, but the condition is silently ignored
+ - id: notify-hot-lead
+   type: app-action
+   app: slack
+   entryConditions:
+     onCriteriaFail: "skip"
+     conditions:                             # ← wrong field name
+       - field: "{{steps.score.category}}"   # ← wrong field name
+         operator: "=="
+         value: "HOT"
+ # Result: every lead triggers a Slack notification, including COLD and WARM
+ ```
+
+ ---
+
+ ## Correct pattern
+
+ ```yaml
+ # Correct: HOT leads only
+ - id: notify-hot-lead
+   type: app-action
+   app: slack
+   entryConditions:
+     onCriteriaFail: "skip"
+     conditionText: "Only notify for HOT leads"
+     criteria:                                  # ← correct
+       - variable: "{{steps.score.category}}"   # ← correct
+         operator: "=="
+         value: "HOT"
+ ```
+
+ ---
+
+ ## How to verify a condition is actually firing
+
+ When a condition behaves unexpectedly (step always runs, or always skips), check:
+
+ 1. **Field names**: is it `criteria`/`variable`? Not `conditions`/`field`?
+ 2. **Variable path**: does `{{steps.score.category}}` actually resolve? Check the referenced step's output in execution logs.
+ 3. **Value type**: comparing a string `"70"` to a number `70` with `>` may not work as expected — check that types match (see the sketch after the example below).
+ 4. **Null safety**: `isNotNull` checks should come before value comparisons to avoid null reference issues.
+
+ ```yaml
+ # Pattern: null-safe condition chain
+ criteria:
+   - variable: "{{steps.enrich.company}}"
+     operator: "isNotNull"   # check existence first
+   - variable: "{{steps.score.total}}"
+     operator: ">"
+     value: 70               # then compare value
+ ```
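+
+ On point 3, the hazard is that stringly-typed values compare lexicographically, not numerically. Shown here in JavaScript terms; the platform's own comparison semantics may differ, so treat this as a cautionary sketch:
+
+ ```javascript
+ // String comparison is character-by-character, so stringly-typed scores mislead.
+ console.log("9" > "70");        // true: "9" sorts after "7" lexicographically
+ console.log(9 > 70);            // false: numeric comparison behaves as expected
+ console.log(Number("70") > 60); // true: coerce explicitly before comparing
+ ```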
+
+ ---
+
+ ## The LLM-as-router anti-pattern
+
+ Using an AI step to make a binary routing decision that a simple condition can handle:
+
+ ```yaml
+ # Wrong: burning 10 credits to decide yes/no
+ - id: decide-routing
+   type: ai-action
+   creditCost: 10
+   prompt: |
+     Given this lead's score of {{steps.score.value}}, should we send a Slack alert?
+     Score above 80 means yes. Below 80 means no.
+     Return: { shouldAlert: boolean }
+
+ - id: send-alert
+   entryConditions:
+     criteria:
+       - variable: "{{steps.decide-routing.shouldAlert}}"
+         operator: "=="
+         value: true
+ ```
+
+ ```yaml
+ # Correct: free condition, no LLM call needed
+ - id: send-alert
+   entryConditions:
+     onCriteriaFail: "skip"
+     conditionText: "Only alert for high-score leads"
+     criteria:
+       - variable: "{{steps.score.value}}"
+         operator: ">"
+         value: 80
+ ```
+
+ Use AI for decisions that require reasoning, judgment, or context. Use conditions for decisions that follow a deterministic rule.
+
+ ---
+
+ ## `onCriteriaFail` reference
+
+ | Value | Behavior |
+ |---|---|
+ | `"skip"` | Skip this step, continue to the next |
+ | `"stop"` | Stop the entire execution |
+ | `"wait"` | Block this step until criteria are met (used for loop/group completion) |
+
+ Common mistakes:
+ - Using `"stop"` when you mean `"skip"` — stops the whole workflow instead of just bypassing one step
+ - Using `"skip"` when you mean `"wait"` — the step runs immediately with incomplete data instead of waiting for a loop to finish
+
+ ---
+
+ ## One-line rule
+
+ > Always use `criteria` (not `conditions`) and `variable` (not `field`) in entry conditions — wrong field names are silently ignored and the condition never evaluates.