@pgflow/core 0.6.0 → 0.7.0

package/README.md CHANGED
@@ -94,6 +94,106 @@ The SQL Core handles the workflow lifecycle through these key operations:
 
  <a href="./assets/flow-lifecycle.svg"><img src="./assets/flow-lifecycle.svg" alt="Flow Lifecycle" width="25%" height="25%"></a>
 
+ ## Step Types
+
+ pgflow supports two fundamental step types that control how tasks are created and executed:
+
+ ### Single Steps (Default)
+
+ Single steps are the default type: each step creates exactly one task when started. These steps process their input as a whole and return a single output value.
+
+ ```sql
+ -- Regular single step definition
+ SELECT pgflow.add_step('my_flow', 'process_data');
+ ```
+
+ ### Map Steps
+
+ Map steps enable parallel processing of arrays by automatically creating multiple tasks - one for each array element. The system handles task distribution, parallel execution, and output aggregation transparently.
+
+ ```sql
+ -- Map step definition (step_type => 'map')
+ SELECT pgflow.add_step(
+   flow_slug => 'my_flow',
+   step_slug => 'process_items',
+   deps_slugs => ARRAY['fetch_items'],
+   step_type => 'map'
+ );
+ ```
+
+ #### Key Characteristics
+
+ - **Multiple Task Creation**: The SQL core creates N tasks for a map step (one per array element), unlike single steps, which create one task
+ - **Element Distribution**: The SQL core distributes individual array elements to tasks based on `task_index`
+ - **Output Aggregation**: The SQL core aggregates task outputs back into an array for dependent steps
+ - **Constraint**: Map steps can have at most one dependency (which must return an array) or zero dependencies (in which case the flow input must be an array)
+
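+ As an illustration of the dependency constraint (hedged: `add_step` validates this, but the exact error wording below is an assumption, and `bad_map`/`fetch_users` are hypothetical names):
+
+ ```sql
+ -- Hypothetical: a map step with two dependencies is rejected
+ SELECT pgflow.add_step(
+   flow_slug => 'my_flow',
+   step_slug => 'bad_map',
+   deps_slugs => ARRAY['fetch_items', 'fetch_users'],  -- two deps: not allowed for maps
+   step_type => 'map'
+ );
+ -- ERROR: map steps can have at most one dependency (wording may differ)
+ ```
+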
+ #### Map Step Execution Flow
+
+ 1. **Array Input Validation**: The SQL core validates that the input is an array
+ 2. **Task Creation**: The SQL core creates N tasks with indices 0 to N-1
+ 3. **Element Distribution**: The SQL core assigns `array[task_index]` as input to each task (see the sketch after this list)
+ 4. **Parallel Execution**: Edge workers execute tasks independently in parallel
+ 5. **Output Collection**: The SQL core aggregates outputs, preserving array order
+ 6. **Dependent Activation**: The SQL core passes the aggregated array to dependent steps
+
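+ For intuition, element distribution follows plain jsonb array indexing (illustrative only - this is not the engine's actual internal query):
+
+ ```sql
+ -- The task with task_index = 1 receives the second element of the array input
+ SELECT ('["user123", "user456", "user789"]'::jsonb) -> 1;  -- returns "user456"
+ ```
+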
+ #### Root Map vs Dependent Map
+
+ **Root Map Steps** process the flow's input array directly:
+ ```sql
+ -- Root map: no dependencies, processes flow input
+ SELECT pgflow.add_step(
+   flow_slug => 'batch_processor',
+   step_slug => 'process_each',
+   step_type => 'map'
+ );
+
+ -- Starting the flow with array input
+ SELECT pgflow.start_flow(
+   flow_slug => 'batch_processor',
+   input => '[1, 2, 3, 4, 5]'::jsonb
+ );
+ ```
+
+ **Dependent Map Steps** process another step's array output:
+ ```sql
+ -- Dependent map: processes the array from 'fetch_items'
+ SELECT pgflow.add_step(
+   flow_slug => 'data_pipeline',
+   step_slug => 'transform_each',
+   deps_slugs => ARRAY['fetch_items'],
+   step_type => 'map'
+ );
+ ```
+
+ #### Edge Cases and Special Behaviors
+
+ 1. **Empty Array Cascade**: When a map step receives an empty array (`[]`):
+    - The SQL core completes it immediately without creating tasks
+    - The completed map step outputs an empty array
+    - Any dependent map steps also receive empty arrays and complete immediately
+    - This cascades through the entire chain of map steps in a single transaction
+    - Example: `[] → map1 → [] → map2 → [] → map3 → []` all complete together (see the sketch after this list)
+
+ 2. **NULL Values**: NULL array elements are preserved and distributed to their respective tasks
+
+ 3. **Non-Array Input**: The SQL core fails the step when its input is not an array
+
+ 4. **Type Violations**: When a single step outputs non-array data to a map step, the SQL core fails the entire run (stores the invalid output for debugging, archives all queued messages, and prevents orphaned tasks)
+
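+ A hedged sketch of the empty-array case, reusing the `batch_processor` flow from above (the call uses the documented `start_flow` signature; the immediate-completion behavior is the cascade rule from item 1):
+
+ ```sql
+ -- The root map receives []; per the cascade rule, the step completes
+ -- immediately with [] as its output and no tasks are created.
+ SELECT pgflow.start_flow(
+   flow_slug => 'batch_processor',
+   input => '[]'::jsonb
+ );
+ ```
+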
+ #### Implementation Details
+
+ Map steps use several database fields for state management:
+ - `initial_tasks`: Number of tasks to create (NULL until the array size is known)
+ - `remaining_tasks`: Tracks incomplete tasks for the step
+ - `task_index`: Identifies which array element each task processes
+ - `step_type`: A column value of 'map' triggers map behavior
+
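+ A hypothetical read-only query over these fields (the schema qualification and join keys are assumptions inferred from the changelog's schema notes, not a documented interface):
+
+ ```sql
+ -- Inspect planned vs. remaining task counts for one run
+ SELECT ss.step_slug, s.step_type, ss.initial_tasks, ss.remaining_tasks
+ FROM pgflow.step_states ss
+ JOIN pgflow.steps s USING (flow_slug, step_slug)
+ WHERE ss.run_id = '00000000-0000-0000-0000-000000000000'::uuid;  -- your run's id
+ ```
+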
+ The aggregation process ensures:
+ - **Order Preservation**: Task outputs maintain array element ordering
+ - **NULL Handling**: NULL outputs are included in the aggregated array
+ - **Atomicity**: Aggregation occurs within the same transaction as task completion
+
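+ Order preservation can be pictured with plain jsonb aggregation (illustrative only - not the engine's actual query):
+
+ ```sql
+ -- Aggregating task outputs by task_index keeps element order stable
+ SELECT jsonb_agg(output ORDER BY task_index) AS aggregated
+ FROM (VALUES (0, '{"name": "Alice"}'::jsonb), (1, '{"name": "Bob"}'::jsonb))
+   AS tasks(task_index, output);
+ -- => [{"name": "Alice"}, {"name": "Bob"}]
+ ```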
 
  ## Example flow and its life
 
  Let's walk through creating and running a workflow that fetches a website,
@@ -231,10 +331,16 @@ The system handles failures by:
     - Preventing processing until the visibility timeout expires
  3. When retries are exhausted:
     - Marking the task as 'failed'
+    - Storing the task output (even for failed tasks)
     - Marking the step as 'failed'
     - Marking the run as 'failed'
     - Archiving the message in PGMQ
-    - Notifying workers to abort pending tasks (future feature)
+    - **Archiving all queued messages for the failed run** (preventing orphaned messages; see the sketch after this list)
+ 4. Additional failure handling:
+    - **No retries on already-failed runs** - tasks are immediately marked as failed
+    - **Graceful type constraint violations** - handled without exceptions when single steps feed map steps
+    - **Stores invalid output on type violations** - captures the output that caused the violation for debugging
+    - **Performance-optimized message archiving** using indexed queries
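+
+ A heavily hedged illustration of the archiving behavior (assumptions: pgmq's batch `archive(queue_name, msg_ids)` signature, a `message_id` and `status` column on `step_tasks`, and a queue named after the flow - this is not pgflow's actual code):
+
+ ```sql
+ -- Archive every still-queued message belonging to a failed run
+ SELECT pgmq.archive(
+   'my_flow',
+   (SELECT array_agg(message_id)
+    FROM pgflow.step_tasks
+    WHERE run_id = '00000000-0000-0000-0000-000000000000'::uuid  -- the failed run
+      AND status = 'queued')
+ );
+ ```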
 
  #### Retries and Timeouts
 
@@ -268,81 +374,17 @@ delay = base_delay * (2 ^ attempts_count)
 
  Timeouts are enforced by setting the message visibility timeout to the step's timeout value plus a small buffer. If a worker doesn't acknowledge completion or failure within this period, the task becomes visible again and can be retried.
 
- ## TypeScript Flow DSL
+ ## Workflow Definition with TypeScript DSL
 
- > [!NOTE]
- > TypeScript Flow DSL is a Work In Progress and is not ready yet!
-
- ### Overview
-
- While the SQL Core engine handles workflow definitions and state management, the primary way to define and work with your workflow logic is via the Flow DSL in TypeScript. This DSL offers a fluent API that makes it straightforward to outline the steps in your flow with full type safety.
-
- ### Type Inference System
-
- The most powerful feature of the Flow DSL is its **automatic type inference system**:
-
- 1. You only need to annotate the initial Flow input type
- 2. The return type of each step is automatically inferred from your handler function
- 3. These return types become available in the payload of dependent steps
- 4. The TypeScript compiler builds a complete type graph matching your workflow DAG
-
- This means you get full IDE autocompletion and type checking throughout your workflow without manual type annotations.
-
- ### Basic Example
-
- Here's an example that matches our website analysis workflow:
-
- ```ts
- // Provide a type for the input of the Flow
- type Input = {
-   url: string;
- };
-
- const AnalyzeWebsite = new Flow<Input>({
-   slug: 'analyze_website',
-   maxAttempts: 3,
-   baseDelay: 5,
-   timeout: 10,
- })
-   .step(
-     { slug: 'website' },
-     async (input) => await scrapeWebsite(input.run.url)
-   )
-   .step(
-     { slug: 'sentiment', dependsOn: ['website'], timeout: 30, maxAttempts: 5 },
-     async (input) => await analyzeSentiment(input.website.content)
-   )
-   .step(
-     { slug: 'summary', dependsOn: ['website'] },
-     async (input) => await summarizeWithAI(input.website.content)
-   )
-   .step(
-     { slug: 'saveToDb', dependsOn: ['sentiment', 'summary'] },
-     async (input) =>
-       await saveToDb({
-         websiteUrl: input.run.url,
-         sentiment: input.sentiment.score,
-         summary: input.summary,
-       }).status
-   );
- ```
-
- ### How Payload Types Are Built
-
- The payload object for each step is constructed dynamically based on:
+ The SQL Core is the DAG orchestration engine that handles dependency resolution, step state management, and task spawning. However, workflows are defined using the TypeScript Flow DSL, which compiles user intent into the SQL primitives that populate the definition tables (`flows`, `steps`, `deps`).
 
- 1. **The `run` property**: Always contains the original workflow input
- 2. **Dependency outputs**: Each dependency's output is available under a key matching the dependency's ID
- 3. **DAG structure**: Only outputs from direct dependencies are included in the payload
+ See the [@pgflow/dsl package](../dsl/README.md) for complete documentation on:
+ - Expressing workflows with type-safe method chaining
+ - Step types (`.step()`, `.array()`, `.map()`)
+ - Compilation to SQL migrations
+ - Type inference and handler context
 
- This means your step handlers receive exactly the data they need, properly typed, without any manual type declarations beyond the initial Flow input type.
-
- ### Benefits of Automatic Type Inference
-
- - **Refactoring safety**: Change a step's output, and TypeScript will flag all dependent steps that need updates
- - **Discoverability**: IDE autocompletion shows exactly what data is available in each step
- - **Error prevention**: Catch typos and type mismatches at compile time, not runtime
- - **Documentation**: The types themselves serve as living documentation of your workflow's data flow
+ The SQL Core executes these compiled definitions, managing when steps are ready, how many tasks to create (1 for single steps, N for map steps), and how to aggregate results.
 
  ## Data Flow
 
@@ -379,6 +421,46 @@ The `saveToDb` step depends on both `sentiment` and `summary`:
  }
  ```
 
+ ### Map Step Handler Inputs
+
+ Map step tasks receive a fundamentally different input structure than single step tasks. Instead of receiving an object with `run` and dependency keys, **map tasks receive only their assigned array element**:
+
+ #### Example: Processing user IDs
+
+ ```json
+ // Flow input (for root map) or dependency output:
+ ["user123", "user456", "user789"]
+
+ // What each map task receives:
+ // Task 0: "user123"
+ // Task 1: "user456"
+ // Task 2: "user789"
+
+ // NOT this:
+ // { "run": {...}, "dependency": [...] }
+ ```
+
+ This means:
+ - Map handlers process individual elements in isolation
+ - Map handlers cannot access the original flow input (`run`)
+ - Map handlers cannot access other dependencies
+ - Map handlers focus solely on transforming their assigned element
+
+ #### Map Step Outputs Become Arrays
+
+ When a step depends on a map step, it receives the aggregated array output:
+
+ ```json
+ // If 'process_users' is a map step that processed ["user1", "user2"]
+ // and output [{"name": "Alice"}, {"name": "Bob"}]
+
+ // A step depending on 'process_users' receives:
+ {
+   "run": { /* original flow input */ },
+   "process_users": [{"name": "Alice"}, {"name": "Bob"}]  // Full array
+ }
+ ```
+
  ### Run Completion
 
  When all steps in a run are completed, the run status is automatically updated to 'completed' and its output is set. The output is an aggregation of all the outputs from final steps (steps that have no dependents):
package/dist/CHANGELOG.md CHANGED
@@ -1,5 +1,98 @@
  # @pgflow/core
 
+ ## 0.7.0
+
+ ### Minor Changes
+
+ - 524db03: Add map step type infrastructure in SQL core
+
+ ⚠️ **This migration includes automatic data migration**
+
+ The migration will automatically update existing `step_states` rows to satisfy the new constraints. This should complete without issues, thanks to the strict check constraints enforced in previous versions.
+
+ 💡 **Recommended: Verify before deploying to production**
+
+ If you have existing production data and want to verify that the migration will succeed cleanly, run this **read-only check query** (it does not modify data) in **Supabase Studio** against your **production database**:
+
+ 1. Open Supabase Studio → SQL Editor
+ 2. Copy the contents of `pkgs/core/queries/PRE_MIGRATION_CHECK_20251006073122.sql`
+ 3. Execute against your production database (not local dev!)
+ 4. Review the results
+
+ **Expected output for a successful migration:**
+
+ ```
+ type                    | identifier                | details
+ ------------------------|---------------------------|--------------------------------------------
+ DATA_BACKFILL_STARTED   | run=def67890 step=process | initial_tasks will be set to 1 (...)
+ DATA_BACKFILL_COMPLETED | Found 100 completed steps | initial_tasks will be set to 1 (...)
+ INFO_SUMMARY            | total_step_states=114     | created=0 started=1 completed=113 failed=0
+ ```
+
+ **Interpretation:**
+
+ - ✅ Only `DATA_BACKFILL_*` and `INFO_SUMMARY` rows? **Safe to migrate**
+ - ⚠️ These are expected data migrations handled automatically by the migration
+ - 🆘 Unexpected rows or errors? Copy the output and share it on Discord for help
+
+ 📝 **Note:** This check identifies data that needs migration but does not modify anything. It is only useful for production databases with existing runs.
+
+ **Automatic data updates:**
+
+ - Sets `initial_tasks = 1` for all existing steps (correct for the pre-map-step schema)
+ - Sets `remaining_tasks = NULL` for steps in 'created' status (new semantics)
+
+ No manual intervention is required.
+
+ ***
+
+ ## Changes
+
+ This release introduces the foundation for map step functionality in the SQL core layer:
+
+ ### Schema Changes
+
+ - Added `step_type` column to the `steps` table, with a constraint allowing 'single' or 'map' values
+ - Added `initial_tasks` column to the `step_states` table (defaults to 1, stores the planned task count)
+ - Made the `remaining_tasks` column nullable (NULL = not started, >0 = active countdown)
+ - Added the `remaining_tasks_state_consistency` constraint to ensure `remaining_tasks` is only set once a step has started (see the sketch after this list)
+ - Removed the `only_single_task_per_step` constraint from the `step_tasks` table to allow multiple tasks per step
+
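+ A hedged sketch of what `remaining_tasks_state_consistency` might look like (the `status` column name and exact predicate are assumptions; the actual migration may differ):
+
+ ```sql
+ -- Hypothetical shape: remaining_tasks stays NULL until the step leaves 'created'
+ ALTER TABLE pgflow.step_states
+   ADD CONSTRAINT remaining_tasks_state_consistency
+   CHECK (remaining_tasks IS NULL OR status <> 'created');
+ ```
+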
+ ### Function Updates
+
+ - **`add_step()`**: Now accepts a `step_type` parameter (defaults to 'single') and validates that map steps have at most 1 dependency
+ - **`start_flow()`**: Sets `initial_tasks = 1` for all steps (map step array handling will come in future phases)
+ - **`start_ready_steps()`**: Copies `initial_tasks` to `remaining_tasks` when starting a step, maintaining proper task-counting semantics (see the sketch after this list)
+
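+ A hypothetical illustration of the `start_ready_steps()` counting semantics (not the actual function body; the 'created'/'started' statuses appear in the check query output above, but the column and table details are assumptions):
+
+ ```sql
+ -- When a step starts, its planned task count becomes the live countdown
+ UPDATE pgflow.step_states
+ SET status = 'started',
+     remaining_tasks = initial_tasks
+ WHERE run_id = '00000000-0000-0000-0000-000000000000'::uuid  -- the run being processed
+   AND status = 'created';
+ ```
+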
+ ### Testing
+
+ - Added comprehensive test coverage for map step creation and validation
+ - All existing tests pass with the new schema changes
+ - Tests validate the new step_type parameter and the dependency constraints for map steps
+
+ This is Phase 2a of the map step implementation, establishing the SQL infrastructure needed for parallel task execution in future phases.
+
+ ### Patch Changes
+
+ - 524db03: Improve failure handling and prevent orphaned messages in the queue
+
+   - Archive all queued messages when a run fails to prevent resource waste
+   - Handle type constraint violations gracefully without exceptions
+   - Store output on failed tasks (including type violations) for debugging
+   - Add a performance index for efficient message archiving
+   - Prevent retries on already-failed runs
+   - Update the table constraint to allow output storage on failed tasks
+
+ - Updated dependencies [524db03]
+ - Updated dependencies [524db03]
+   - @pgflow/dsl@0.7.0
+
+ ## 0.6.1
+
+ ### Patch Changes
+
+ - @pgflow/dsl@0.6.1
+
  ## 0.6.0
 
  ### Patch Changes
@@ -33,7 +33,7 @@ export class PgflowSqlClient {
       SELECT pgflow.complete_task(
         run_id => ${stepTask.run_id}::uuid,
         step_slug => ${stepTask.step_slug}::text,
-        task_index => ${0}::int,
+        task_index => ${stepTask.task_index}::int,
         output => ${this.sql.json(output || null)}::jsonb
       );
     `;
@@ -48,7 +48,7 @@ export class PgflowSqlClient {
       SELECT pgflow.fail_task(
         run_id => ${stepTask.run_id}::uuid,
         step_slug => ${stepTask.step_slug}::text,
-        task_index => ${0}::int,
+        task_index => ${stepTask.task_index}::int,
         error_message => ${errorString}::text
       );
     `;
package/dist/README.md CHANGED
(identical to the package/README.md diff above)