@pgflow/core 0.0.0-test-snapshot-releases2-8d5d9bc1-20250922101158 → 0.0.0-update-supabase-868977e5-20251119071021

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (31)
  1. package/README.md +177 -73
  2. package/dist/ATLAS.md +32 -0
  3. package/dist/CHANGELOG.md +796 -0
  4. package/dist/PgflowSqlClient.d.ts +17 -0
  5. package/dist/PgflowSqlClient.d.ts.map +1 -0
  6. package/dist/PgflowSqlClient.js +70 -0
  7. package/dist/README.md +497 -0
  8. package/dist/database-types.d.ts +1007 -0
  9. package/dist/database-types.d.ts.map +1 -0
  10. package/dist/database-types.js +8 -0
  11. package/dist/index.d.ts +4 -0
  12. package/dist/index.d.ts.map +1 -0
  13. package/dist/index.js +2 -0
  14. package/dist/package.json +32 -0
  15. package/dist/supabase/migrations/20250429164909_pgflow_initial.sql +579 -0
  16. package/dist/supabase/migrations/20250517072017_pgflow_fix_poll_for_tasks_to_use_separate_statement_for_polling.sql +101 -0
  17. package/dist/supabase/migrations/20250609105135_pgflow_add_start_tasks_and_started_status.sql +371 -0
  18. package/dist/supabase/migrations/20250610180554_pgflow_add_set_vt_batch_and_use_it_in_start_tasks.sql +127 -0
  19. package/dist/supabase/migrations/20250614124241_pgflow_add_realtime.sql +501 -0
  20. package/dist/supabase/migrations/20250619195327_pgflow_fix_fail_task_missing_realtime_event.sql +185 -0
  21. package/dist/supabase/migrations/20250627090700_pgflow_fix_function_search_paths.sql +6 -0
  22. package/dist/supabase/migrations/20250707210212_pgflow_add_opt_start_delay.sql +103 -0
  23. package/dist/supabase/migrations/20250719205006_pgflow_worker_deprecation.sql +2 -0
  24. package/dist/supabase/migrations/20251006073122_pgflow_add_map_step_type.sql +1244 -0
  25. package/dist/supabase/migrations/20251103222045_pgflow_fix_broadcast_order_and_timestamp_handling.sql +622 -0
  26. package/dist/supabase/migrations/20251104080523_pgflow_upgrade_pgmq_1_5_1.sql +93 -0
  27. package/dist/tsconfig.lib.tsbuildinfo +1 -0
  28. package/dist/types.d.ts +93 -0
  29. package/dist/types.d.ts.map +1 -0
  30. package/dist/types.js +1 -0
  31. package/package.json +4 -4
package/README.md CHANGED
@@ -47,6 +47,17 @@ This package focuses on:
 
  The actual execution of workflow tasks is handled by the [Edge Worker](../edge-worker/README.md), which calls back to the SQL Core to acknowledge task completion or failure.
 
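+ As a rough sketch of that acknowledgment path (the function names `pgflow.complete_task` and `pgflow.fail_task` and their exact parameters are assumptions here; consult the SQL API for the authoritative signatures):
+
+ ```sql
+ -- Worker reports a successful task, handing its output back to the SQL Core
+ SELECT pgflow.complete_task(
+   run_id => '11111111-1111-1111-1111-111111111111'::uuid,  -- hypothetical run id
+   step_slug => 'website',
+   task_index => 0,
+   output => '{"content": "..."}'::jsonb
+ );
+
+ -- Worker reports a failed attempt; the SQL Core schedules a retry or fails the run
+ SELECT pgflow.fail_task(
+   run_id => '11111111-1111-1111-1111-111111111111'::uuid,
+   step_slug => 'website',
+   task_index => 0,
+   error_message => 'fetch failed'
+ );
+ ```
+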
+ ## Requirements
+
+ > [!IMPORTANT]
+ > **pgmq Version Requirement** (since v0.8.0)
+ >
+ > pgflow v0.8.0 and later requires **pgmq 1.5.0 or higher**. This version of pgflow will NOT work with pgmq 1.4.x or earlier.
+ >
+ > - **Supabase Cloud**: Recent versions include pgmq 1.5.0+ by default
+ > - **Self-hosted**: You must upgrade pgmq to version 1.5.0+ before upgrading pgflow
+ > - **Version Check**: Run `SELECT extversion FROM pg_extension WHERE extname = 'pgmq';` to verify your pgmq version
+
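+ For self-hosted setups, a minimal check-and-upgrade sketch (this assumes the pgmq 1.5.x extension files are already installed on the database server; the exact target version may differ):
+
+ ```sql
+ -- Check the currently installed pgmq version
+ SELECT extversion FROM pg_extension WHERE extname = 'pgmq';
+
+ -- Upgrade the extension in place (requires the new version's files on disk)
+ ALTER EXTENSION pgmq UPDATE TO '1.5.1';
+ ```
+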
  ## Key Features
 
  - **Declarative Workflows**: Define flows and steps via SQL tables
@@ -94,6 +105,111 @@ The SQL Core handles the workflow lifecycle through these key operations:
 
  <a href="./assets/flow-lifecycle.svg"><img src="./assets/flow-lifecycle.svg" alt="Flow Lifecycle" width="25%" height="25%"></a>
 
+ ## Step Types
+
+ pgflow supports two fundamental step types that control how tasks are created and executed:
+
+ ### Single Steps (Default)
+
+ Single steps are the standard step type: each step creates exactly one task when started. These steps process their input as a whole and return a single output value.
+
+ ```sql
+ -- Regular single step definition
+ SELECT pgflow.add_step('my_flow', 'process_data');
+ ```
+
+ ### Map Steps
+
+ Map steps enable parallel processing of arrays by automatically creating multiple tasks - one for each array element. The system handles task distribution, parallel execution, and output aggregation transparently.
+
+ ```sql
+ -- Map step definition (step_type => 'map')
+ SELECT pgflow.add_step(
+   flow_slug => 'my_flow',
+   step_slug => 'process_items',
+   deps_slugs => ARRAY['fetch_items'],
+   step_type => 'map'
+ );
+ ```
+
+ #### Key Characteristics
+
+ - **Multiple Task Creation**: The SQL core creates N tasks for a map step (one per array element), unlike single steps, which create one task
+ - **Element Distribution**: The SQL core distributes individual array elements to tasks based on `task_index`
+ - **Output Aggregation**: The SQL core aggregates task outputs back into an array for dependent steps
+ - **Constraint**: Map steps can have at most one dependency (which must return an array) or zero dependencies (in which case the flow input must be an array)
+
+ #### Map Step Execution Flow
+
+ 1. **Array Input Validation**: The SQL core validates that the input is an array
+ 2. **Task Creation**: The SQL core creates N tasks with indices 0 to N-1
+ 3. **Element Distribution**: The SQL core assigns `array[task_index]` as input to each task (see the sketch after this list)
+ 4. **Parallel Execution**: Edge workers execute tasks independently in parallel
+ 5. **Output Collection**: The SQL core aggregates outputs, preserving array order
+ 6. **Dependent Activation**: The SQL core passes the aggregated array to dependent steps
+
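+ To make the distribution step concrete, here is an illustrative query (not the actual pgflow internals) showing how a JSONB array can be split into per-task inputs keyed by a zero-based `task_index`, using only standard Postgres functions:
+
+ ```sql
+ -- Hypothetical illustration: each array element becomes one task input
+ SELECT
+   idx - 1 AS task_index,   -- WITH ORDINALITY is 1-based, tasks are 0-based
+   elem    AS task_input    -- the single element handed to that task
+ FROM jsonb_array_elements('["user123", "user456", "user789"]'::jsonb)
+      WITH ORDINALITY AS t(elem, idx);
+ -- task_index | task_input
+ -- -----------+------------
+ --          0 | "user123"
+ --          1 | "user456"
+ --          2 | "user789"
+ ```
+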
+ #### Root Map vs Dependent Map
+
+ **Root Map Steps** process the flow's input array directly:
+
+ ```sql
+ -- Root map: no dependencies, processes flow input
+ SELECT pgflow.add_step(
+   flow_slug => 'batch_processor',
+   step_slug => 'process_each',
+   step_type => 'map'
+ );
+
+ -- Starting the flow with array input
+ SELECT pgflow.start_flow(
+   flow_slug => 'batch_processor',
+   input => '[1, 2, 3, 4, 5]'::jsonb
+ );
+ ```
+
+ **Dependent Map Steps** process another step's array output:
+
+ ```sql
+ -- Dependent map: processes the array from 'fetch_items'
+ SELECT pgflow.add_step(
+   flow_slug => 'data_pipeline',
+   step_slug => 'transform_each',
+   deps_slugs => ARRAY['fetch_items'],
+   step_type => 'map'
+ );
+ ```
+
+ #### Edge Cases and Special Behaviors
+
+ 1. **Empty Array Cascade**: When a map step receives an empty array (`[]`):
+
+    - The SQL core completes it immediately without creating tasks
+    - The completed map step outputs an empty array
+    - Any dependent map steps also receive empty arrays and complete immediately
+    - This cascades through the entire chain of map steps in a single transaction
+    - Example: `[] → map1 → [] → map2 → [] → map3 → []` all complete together
+
+ 2. **NULL Values**: NULL array elements are preserved and distributed to their respective tasks
+
+ 3. **Non-Array Input**: The SQL core fails the step when input is not an array
+
+ 4. **Type Violations**: When a single step outputs non-array data to a map step, the SQL core fails the entire run (stores the invalid output for debugging, archives all queued messages, prevents orphaned tasks)
+
+ #### Implementation Details
+
+ Map steps utilize several database fields for state management:
+
+ - `initial_tasks`: Number of tasks to create (NULL until array size is known)
+ - `remaining_tasks`: Tracks incomplete tasks for the step
+ - `task_index`: Identifies which array element each task processes
+ - `step_type`: Column value 'map' triggers map behavior
+
+ The aggregation process ensures (see the sketch below):
+
+ - **Order Preservation**: Task outputs maintain array element ordering
+ - **NULL Handling**: NULL outputs are included in the aggregated array
+ - **Atomicity**: Aggregation occurs within the same transaction as task completion
+
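+ A hedged illustration of the first two guarantees using only built-in Postgres aggregation (the real aggregation query lives inside pgflow's SQL functions and may differ):
+
+ ```sql
+ -- jsonb_agg keeps NULL inputs as JSON null and honors ORDER BY,
+ -- matching the order-preserving, NULL-including behavior described above
+ SELECT jsonb_agg(task_output ORDER BY task_index) AS aggregated_output
+ FROM (VALUES
+   (0, '{"name": "Alice"}'::jsonb),
+   (1, NULL::jsonb),
+   (2, '{"name": "Bob"}'::jsonb)
+ ) AS outputs(task_index, task_output);
+ -- => [{"name": "Alice"}, null, {"name": "Bob"}]
+ ```
+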
  ## Example flow and its life
 
  Let's walk through creating and running a workflow that fetches a website,
@@ -162,8 +278,9 @@ When a workflow starts:
 
  The Edge Worker uses a two-phase approach to retrieve and start tasks:
 
  **Phase 1 - Reserve Messages:**
+
  ```sql
- SELECT * FROM pgflow.read_with_poll(
+ SELECT * FROM pgmq.read_with_poll(
    queue_name => 'analyze_website',
    vt => 60, -- visibility timeout in seconds
    qty => 5  -- maximum number of messages to fetch
@@ -171,6 +288,7 @@ SELECT * FROM pgflow.read_with_poll(
  ```
 
  **Phase 2 - Start Tasks:**
+
  ```sql
  SELECT * FROM pgflow.start_tasks(
    flow_slug => 'analyze_website',
@@ -231,10 +349,16 @@ The system handles failures by:
     - Preventing processing until the visibility timeout expires
  3. When retries are exhausted:
     - Marking the task as 'failed'
+    - Storing the task output (even for failed tasks)
     - Marking the step as 'failed'
     - Marking the run as 'failed'
     - Archiving the message in PGMQ
-    - Notifying workers to abort pending tasks (future feature)
+    - **Archiving all queued messages for the failed run** (preventing orphaned messages; see the sketch below)
+ 4. Additional failure handling:
+    - **No retries on already-failed runs**: tasks are immediately marked as failed
+    - **Graceful type constraint violations**: handled without exceptions when single steps feed map steps
+    - **Stores invalid output on type violations**: captures the output that caused the violation for debugging
+    - **Performance-optimized message archiving** using indexed queries
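+
+ To inspect what was archived, pgmq keeps archived messages in a per-queue archive table. A hedged example for the `analyze_website` queue from the walkthrough (the table naming follows pgmq's `a_<queue_name>` convention; adjust to your queue):
+
+ ```sql
+ -- Messages archived after task completion or run failure on this queue
+ SELECT msg_id, read_ct, enqueued_at, archived_at, message
+ FROM pgmq.a_analyze_website
+ ORDER BY archived_at DESC
+ LIMIT 10;
+ ```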
 
  #### Retries and Timeouts
 
@@ -268,81 +392,18 @@ delay = base_delay * (2 ^ attempts_count)
 
  Timeouts are enforced by setting the message visibility timeout to the step's timeout value plus a small buffer. If a worker doesn't acknowledge completion or failure within this period, the task becomes visible again and can be retried.
 
- ## TypeScript Flow DSL
+ ## Workflow Definition with TypeScript DSL
 
- > [!NOTE]
- > TypeScript Flow DSL is a Work In Progress and is not ready yet!
-
- ### Overview
-
- While the SQL Core engine handles workflow definitions and state management, the primary way to define and work with your workflow logic is via the Flow DSL in TypeScript. This DSL offers a fluent API that makes it straightforward to outline the steps in your flow with full type safety.
-
- ### Type Inference System
-
- The most powerful feature of the Flow DSL is its **automatic type inference system**:
-
- 1. You only need to annotate the initial Flow input type
- 2. The return type of each step is automatically inferred from your handler function
- 3. These return types become available in the payload of dependent steps
- 4. The TypeScript compiler builds a complete type graph matching your workflow DAG
-
- This means you get full IDE autocompletion and type checking throughout your workflow without manual type annotations.
-
- ### Basic Example
-
- Here's an example that matches our website analysis workflow:
-
- ```ts
- // Provide a type for the input of the Flow
- type Input = {
-   url: string;
- };
-
- const AnalyzeWebsite = new Flow<Input>({
-   slug: 'analyze_website',
-   maxAttempts: 3,
-   baseDelay: 5,
-   timeout: 10,
- })
-   .step(
-     { slug: 'website' },
-     async (input) => await scrapeWebsite(input.run.url)
-   )
-   .step(
-     { slug: 'sentiment', dependsOn: ['website'], timeout: 30, maxAttempts: 5 },
-     async (input) => await analyzeSentiment(input.website.content)
-   )
-   .step(
-     { slug: 'summary', dependsOn: ['website'] },
-     async (input) => await summarizeWithAI(input.website.content)
-   )
-   .step(
-     { slug: 'saveToDb', dependsOn: ['sentiment', 'summary'] },
-     async (input) =>
-       await saveToDb({
-         websiteUrl: input.run.url,
-         sentiment: input.sentiment.score,
-         summary: input.summary,
-       }).status
-   );
- ```
+ The SQL Core is the DAG orchestration engine that handles dependency resolution, step state management, and task spawning. However, workflows are defined using the TypeScript Flow DSL, which compiles user intent into the SQL primitives that populate the definition tables (`flows`, `steps`, `deps`).
 
- ### How Payload Types Are Built
+ See the [@pgflow/dsl package](../dsl/README.md) for complete documentation on:
 
- The payload object for each step is constructed dynamically based on:
+ - Expressing workflows with type-safe method chaining
+ - Step types (`.step()`, `.array()`, `.map()`)
+ - Compilation to SQL migrations
+ - Type inference and handler context
 
- 1. **The `run` property**: Always contains the original workflow input
- 2. **Dependency outputs**: Each dependency's output is available under a key matching the dependency's ID
- 3. **DAG structure**: Only outputs from direct dependencies are included in the payload
-
- This means your step handlers receive exactly the data they need, properly typed, without any manual type declarations beyond the initial Flow input type.
-
- ### Benefits of Automatic Type Inference
-
- - **Refactoring safety**: Change a step's output, and TypeScript will flag all dependent steps that need updates
- - **Discoverability**: IDE autocompletion shows exactly what data is available in each step
- - **Error prevention**: Catch typos and type mismatches at compile time, not runtime
- - **Documentation**: The types themselves serve as living documentation of your workflow's data flow
+ The SQL Core executes these compiled definitions, managing when steps are ready, how many tasks to create (1 for single steps, N for map steps), and how to aggregate results.
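+
+ As a rough, hypothetical sketch of what the DSL compiles down to, the definition tables are populated with plain SQL calls like the ones used throughout this README (`pgflow.create_flow` and its option names are assumed here to mirror the DSL's flow options; see the generated migrations for the exact calls):
+
+ ```sql
+ -- Flow-level options correspond to the DSL's maxAttempts / baseDelay / timeout
+ SELECT pgflow.create_flow('analyze_website', max_attempts => 3, base_delay => 5, timeout => 10);
+
+ -- One add_step call per DSL step, with dependencies given as slugs
+ SELECT pgflow.add_step('analyze_website', 'website');
+ SELECT pgflow.add_step('analyze_website', 'sentiment', deps_slugs => ARRAY['website']);
+ SELECT pgflow.add_step('analyze_website', 'summary', deps_slugs => ARRAY['website']);
+ SELECT pgflow.add_step('analyze_website', 'saveToDb', deps_slugs => ARRAY['sentiment', 'summary']);
+ ```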
 
  ## Data Flow
 
@@ -379,6 +440,49 @@ The `saveToDb` step depends on both `sentiment` and `summary`:
  }
  ```
 
+ ### Map Step Handler Inputs
+
+ Map step tasks receive a fundamentally different input structure than single step tasks. Instead of receiving an object with `run` and dependency keys, **map tasks receive only their assigned array element**:
+
+ #### Example: Processing user IDs
+
+ ```json
+ // Flow input (for root map) or dependency output:
+ ["user123", "user456", "user789"]
+
+ // What each map task receives:
+ // Task 0: "user123"
+ // Task 1: "user456"
+ // Task 2: "user789"
+
+ // NOT this:
+ // { "run": {...}, "dependency": [...] }
+ ```
+
+ This means:
+
+ - Map handlers process individual elements in isolation
+ - Map handlers cannot access the original flow input (`run`)
+ - Map handlers cannot access other dependencies
+ - Map handlers focus solely on transforming their assigned element
+
+ #### Map Step Outputs Become Arrays
+
+ When a step depends on a map step, it receives the aggregated array output:
+
+ ```json
+ // If 'process_users' is a map step that processed ["user1", "user2"]
+ // and output [{"name": "Alice"}, {"name": "Bob"}]
+
+ // A step depending on 'process_users' receives:
+ {
+   "run": {
+     /* original flow input */
+   },
+   "process_users": [{ "name": "Alice" }, { "name": "Bob" }] // Full array
+ }
+ ```
+
  ### Run Completion
 
  When all steps in a run are completed, the run status is automatically updated to 'completed' and its output is set. The output is an aggregation of all the outputs from final steps (steps that have no dependents):
package/dist/ATLAS.md ADDED
@@ -0,0 +1,32 @@
+ # Atlas setup
+
+ We use [Atlas](https://atlasgo.io/docs) to generate migrations from the declarative schemas stored in the `./schemas/` folder.
+
+ ## Configuration
+
+ The setup is configured in `atlas.hcl`.
+
+ It is set to compare `schemas/` against what is in `supabase/migrations/`.
+
+ ### Docker dev image
+
+ Atlas requires a dev database to be available for computing diffs.
+ The database must be empty, but it must contain everything the schemas need in order to apply.
+
+ We need a configured [PGMQ](https://github.com/tembo-io/pgmq) extension, which Atlas's stock dev images do not provide.
+
+ That's why this setup relies on a custom-built image, `jumski/postgres-17-pgmq:latest`.
+
+ Inspect `Dockerfile.atlas` to see how it is built.
+
+ See also the `./scripts/build-atlas-postgres-image` and `./scripts/push-atlas-postgres-image` scripts for building and pushing the image.
+
+ ## Workflow
+
+ 1. Make sure you start with a clean database (`pnpm supabase db reset`).
+ 2. Modify the schemas in `schemas/` to the desired state.
+ 3. Run `./scripts/atlas-migrate-diff <migration-name>` to create a new migration based on the diff.
+ 4. Run `pnpm supabase migration up` to apply the migration.
+ 5. In case of any errors, remove the generated migration file, make changes in `schemas/`, and repeat the process.
+ 6. After the migration is applied, verify it does not break the tests with `nx test:pgtap`.