@pgflow/core 0.0.0-array-map-steps-cd94242a-20251008042921 → 0.0.0-control-plane-a947cb71-20251121164755

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -47,6 +47,17 @@ This package focuses on:
 
  The actual execution of workflow tasks is handled by the [Edge Worker](../edge-worker/README.md), which calls back to the SQL Core to acknowledge task completion or failure.
 
+ ## Requirements
+
+ > [!IMPORTANT]
+ > **pgmq Version Requirement** (since v0.8.0)
+ >
+ > pgflow v0.8.0 and later requires **pgmq 1.5.0 or higher**. This version of pgflow will NOT work with pgmq 1.4.x or earlier.
+ >
+ > - **Supabase Cloud**: Recent versions include pgmq 1.5.0+ by default
+ > - **Self-hosted**: You must upgrade pgmq to version 1.5.0+ before upgrading pgflow
+ > - **Version Check**: Run `SELECT extversion FROM pg_extension WHERE extname = 'pgmq';` to verify your pgmq version
+
  ## Key Features
 
  - **Declarative Workflows**: Define flows and steps via SQL tables
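The Requirements block added above gives the version-check query; on self-hosted Postgres the upgrade itself is a standard extension update. A minimal sketch, assuming an update script for a 1.5.x release is available on the server (`'1.5.1'` is only an example target):

```sql
-- Verify the installed pgmq version (query from the Requirements note above)
SELECT extversion FROM pg_extension WHERE extname = 'pgmq';

-- Self-hosted only: upgrade the extension before applying pgflow 0.8.0 migrations.
-- '1.5.1' is an example target; use whichever 1.5.0+ version your server provides.
ALTER EXTENSION pgmq UPDATE TO '1.5.1';
```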
@@ -140,6 +151,7 @@ SELECT pgflow.add_step(
  #### Root Map vs Dependent Map
 
  **Root Map Steps** process the flow's input array directly:
+
  ```sql
  -- Root map: no dependencies, processes flow input
  SELECT pgflow.add_step(
@@ -156,6 +168,7 @@ SELECT pgflow.start_flow(
  ```
 
  **Dependent Map Steps** process another step's array output:
+
  ```sql
  -- Dependent map: processes the array from 'fetch_items'
  SELECT pgflow.add_step(
@@ -169,6 +182,7 @@ SELECT pgflow.add_step(
  #### Edge Cases and Special Behaviors
 
  1. **Empty Array Cascade**: When a map step receives an empty array (`[]`):
+
  - The SQL core completes it immediately without creating tasks
  - The completed map step outputs an empty array
  - Any dependent map steps also receive empty arrays and complete immediately
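A minimal sketch of the empty-array cascade described above, assuming the named-parameter `start_flow(flow_slug, input)` call style referenced elsewhere in this README; the flow slug is hypothetical:

```sql
-- Hypothetical flow with a root map step: starting it with '[]' creates no tasks;
-- the map step completes immediately with an empty array output, and any dependent
-- map steps cascade to completed as well.
SELECT * FROM pgflow.start_flow(
  flow_slug => 'my_map_flow',   -- hypothetical slug
  input     => '[]'::jsonb
);
```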
@@ -184,12 +198,14 @@ SELECT pgflow.add_step(
  #### Implementation Details
 
  Map steps utilize several database fields for state management:
+
  - `initial_tasks`: Number of tasks to create (NULL until array size is known)
  - `remaining_tasks`: Tracks incomplete tasks for the step
  - `task_index`: Identifies which array element each task processes
  - `step_type`: Column value 'map' triggers map behavior
 
  The aggregation process ensures:
+
  - **Order Preservation**: Task outputs maintain array element ordering
  - **NULL Handling**: NULL outputs are included in the aggregated array
  - **Atomicity**: Aggregation occurs within the same transaction as task completion
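A conceptual sketch of order-preserving aggregation using the fields listed above; this is not pgflow's actual implementation, and the `step_tasks`/`output` names are assumptions made for illustration:

```sql
-- Conceptual sketch only; not pgflow's actual aggregation code.
-- Assumed names: a step_tasks table with output (jsonb) and task_index columns.
-- jsonb_agg keeps NULL outputs as JSON null, and ORDER BY preserves element order.
SELECT jsonb_agg(output ORDER BY task_index) AS aggregated_output
FROM pgflow.step_tasks
WHERE run_id = '00000000-0000-0000-0000-000000000000'::uuid  -- example run id
  AND step_slug = 'process_users';
```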
@@ -262,8 +278,9 @@ When a workflow starts:
  The Edge Worker uses a two-phase approach to retrieve and start tasks:
 
  **Phase 1 - Reserve Messages:**
+
  ```sql
- SELECT * FROM pgflow.read_with_poll(
+ SELECT * FROM pgmq.read_with_poll(
  queue_name => 'analyze_website',
  vt => 60, -- visibility timeout in seconds
  qty => 5 -- maximum number of messages to fetch
@@ -271,6 +288,7 @@ SELECT * FROM pgflow.read_with_poll(
  ```
 
  **Phase 2 - Start Tasks:**
+
  ```sql
  SELECT * FROM pgflow.start_tasks(
  flow_slug => 'analyze_website',
@@ -379,6 +397,7 @@ Timeouts are enforced by setting the message visibility timeout to the step's ti
  The SQL Core is the DAG orchestration engine that handles dependency resolution, step state management, and task spawning. However, workflows are defined using the TypeScript Flow DSL, which compiles user intent into the SQL primitives that populate the definition tables (`flows`, `steps`, `deps`).
 
  See the [@pgflow/dsl package](../dsl/README.md) for complete documentation on:
+
  - Expressing workflows with type-safe method chaining
  - Step types (`.step()`, `.array()`, `.map()`)
  - Compilation to SQL migrations
@@ -441,6 +460,7 @@ Map step tasks receive a fundamentally different input structure than single ste
  ```
 
  This means:
+
  - Map handlers process individual elements in isolation
  - Map handlers cannot access the original flow input (`run`)
  - Map handlers cannot access other dependencies
@@ -456,8 +476,10 @@ When a step depends on a map step, it receives the aggregated array output:
 
  // A step depending on 'process_users' receives:
  {
- "run": { /* original flow input */ },
- "process_users": [{"name": "Alice"}, {"name": "Bob"}] // Full array
+ "run": {
+ /* original flow input */
+ },
+ "process_users": [{ "name": "Alice" }, { "name": "Bob" }] // Full array
  }
  ```
 
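For illustration, the same aggregated shape assembled as JSON in SQL; the `run` value is an invented example and the keys mirror the snippet above:

```sql
-- Illustrative only: the input a step depending on 'process_users' would see.
-- The 'run' value here is an invented example flow input.
SELECT jsonb_build_object(
  'run', '{"user_ids": [1, 2]}'::jsonb,
  'process_users', '[{"name": "Alice"}, {"name": "Bob"}]'::jsonb
) AS dependent_step_input;
```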
package/dist/ATLAS.md CHANGED
@@ -16,7 +16,7 @@ The database must be empty, but contain everything needed for the schemas to app
  We need a configured [PGMQ](https://github.com/tembo-io/pgmq) extension, which Atlas does not support
  in their dev images.
 
- That's why this setup relies on a custom built image `jumski/postgres-15-pgmq:latest`.
+ That's why this setup relies on a custom built image `jumski/postgres-17-pgmq:latest`.
 
  Inspect `Dockerfile.atlas` to see how it is built.
 
package/dist/CHANGELOG.md CHANGED
@@ -1,29 +1,118 @@
  # @pgflow/core
 
- ## 0.0.0-array-map-steps-cd94242a-20251008042921
+ ## 0.0.0-control-plane-a947cb71-20251121164755
+
+ ### Patch Changes
+
+ - @pgflow/dsl@0.0.0-control-plane-a947cb71-20251121164755
+
+ ## 0.8.0
+
+ ### Minor Changes
+
+ - 7380237: BREAKING CHANGE: pgflow 0.8.0 requires pgmq 1.5.0+, PostgreSQL 17, and Supabase CLI 2.34.3+
+
+ This version modernizes infrastructure dependencies and will NOT work with pgmq 1.4.x or earlier. The migration includes a compatibility check that aborts with a clear error message if requirements are not met.
+
+ **Requirements:**
+
+ - pgmq 1.5.0 or higher (previously supported 1.4.x)
+ - PostgreSQL 17 (from 15)
+ - Supabase CLI 2.34.3 or higher (includes pgmq 1.5.0+)
+
+ **For Supabase users:** Upgrade your Supabase CLI to 2.34.3+ which includes pgmq 1.5.0 by default.
+
+ **For self-hosted users:** Upgrade pgmq to 1.5.0+ and PostgreSQL to 17 before upgrading pgflow.
+
+ **If you cannot upgrade immediately:** Stay on pgflow 0.7.x until your infrastructure is ready. The migration safety check ensures you cannot accidentally upgrade to an incompatible version.
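A conceptual sketch of the kind of guard this entry describes ("a compatibility check that aborts with a clear error message"); this is not the actual pgflow 0.8.0 migration code:

```sql
-- Conceptual sketch only; NOT the actual pgflow 0.8.0 migration guard.
DO $$
DECLARE
  v text;
BEGIN
  SELECT extversion INTO v FROM pg_extension WHERE extname = 'pgmq';
  IF v IS NULL OR string_to_array(v, '.')::int[] < ARRAY[1, 5, 0] THEN
    RAISE EXCEPTION 'pgflow 0.8.0 requires pgmq >= 1.5.0 (found %)', coalesce(v, 'not installed');
  END IF;
END $$;
```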
+
+ ### Patch Changes
+
+ - @pgflow/dsl@0.8.0
+
+ ## 0.7.3
+
+ ### Patch Changes
+
+ - @pgflow/dsl@0.7.3
+
+ ## 0.7.2
+
+ ### Patch Changes
+
+ - c22a1e5: Fix missing realtime broadcasts for step:started and step:completed events
+
+ **Critical bug fix:** Clients were not receiving `step:started` events when steps transitioned to Started status, and `step:completed` events for empty map steps and cascade completions were also missing.
+
+ **Root cause:** PostgreSQL query optimizer was eliminating CTEs containing `realtime.send()` calls because they were not referenced by subsequent operations or the final RETURN statement.
+
+ **Solution:** Moved `realtime.send()` calls directly into RETURNING clauses of UPDATE statements, ensuring they execute atomically with state changes and cannot be optimized away.
+
+ **Changes:**
+
+ - `start_ready_steps()`: Broadcasts step:started and step:completed events in RETURNING clauses
+ - `cascade_complete_taskless_steps()`: Broadcasts step:completed events atomically with cascade completion
+ - `complete_task()`: Added PERFORM statements for run:failed and step:failed broadcasts
+ - Client: Added `applySnapshot()` methods to FlowRun and FlowStep for proper initial state hydration without event emission
+ - @pgflow/dsl@0.7.2
+
+ ## 0.7.1
+
+ ### Patch Changes
+
+ - a71b371: Fix installation failures on new Supabase projects by removing pgmq version pin.
+
+ Supabase upgraded to pgmq 1.5.1 in Postgres 17.6.1.016+ (https://github.com/supabase/postgres/pull/1668), but pgflow was pinned to 1.4.4, causing "extension has no installation script" errors on fresh instances.
+
+ Only affects new projects - existing installations are unaffected and require no action.
+
+ Thanks to @kallebysantos for reporting this issue!
+
+ - @pgflow/dsl@0.7.1
+
+ ## 0.7.0
 
  ### Minor Changes
 
  - 524db03: Add map step type infrastructure in SQL core
 
- ## 🚨🚨🚨 CRITICAL MIGRATION WARNING 🚨🚨🚨
+ ⚠️ **This migration includes automatic data migration**
+
+ The migration will automatically update existing `step_states` rows to satisfy new constraints. This should complete without issues due to strict check constraints enforced in previous versions.
 
- **THIS MIGRATION REQUIRES MANUAL DATA UPDATE BEFORE DEPLOYMENT!**
+ 💡 **Recommended: Verify before deploying to production**
 
- The migration adds a new constraint `remaining_tasks_state_consistency` that will **FAIL ON EXISTING DATA** if not handled properly.
+ If you have existing production data and want to verify the migration will succeed cleanly, run this **read-only check query** (does not modify data) in **Supabase Studio** against your **production database**:
 
- ### Required Data Migration:
+ 1. Open Supabase Studio → SQL Editor
+ 2. Copy contents of `pkgs/core/queries/PRE_MIGRATION_CHECK_20251006073122.sql`
+ 3. Execute against your production database (not local dev!)
+ 4. Review results
 
- Before applying this migration to any environment with existing data, you MUST include:
+ **Expected output for successful migration:**
 
- ```sql
- -- CRITICAL: Update existing step_states to satisfy new constraint
- UPDATE pgflow.step_states
- SET remaining_tasks = NULL
- WHERE status = 'created';
  ```
+ type | identifier | details
+ ---------------------------|---------------------------|------------------------------------------
+ DATA_BACKFILL_STARTED | run=def67890 step=process | initial_tasks will be set to 1 (...)
+ DATA_BACKFILL_COMPLETED | Found 100 completed steps | initial_tasks will be set to 1 (...)
+ INFO_SUMMARY | total_step_states=114 | created=0 started=1 completed=113 failed=0
+ ```
+
+ **Interpretation:**
+
+ - ✅ Only `DATA_BACKFILL_*` and `INFO_SUMMARY` rows? **Safe to migrate**
+ - ⚠️ These are expected data migrations handled automatically by the migration
+ - 🆘 Unexpected rows or errors? Copy output and share on Discord for help
+
+ 📝 **Note:** This check identifies data that needs migration but does not modify anything. Only useful for production databases with existing runs.
+
+ **Automatic data updates:**
+
+ - Sets `initial_tasks = 1` for all existing steps (correct for pre-map-step schema)
+ - Sets `remaining_tasks = NULL` for 'created' status steps (new semantics)
 
- **Without this update, the migration WILL FAIL in production!** The new constraint requires that `remaining_tasks` can only be set when `status != 'created'`.
+ No manual intervention required.
 
  ***
 
@@ -66,7 +155,7 @@
 
  - Updated dependencies [524db03]
  - Updated dependencies [524db03]
- - @pgflow/dsl@0.0.0-array-map-steps-cd94242a-20251008042921
+ - @pgflow/dsl@0.7.0
 
  ## 0.6.1
 
@@ -9,7 +9,7 @@ export class PgflowSqlClient {
  async readMessages(queueName, visibilityTimeout, batchSize, maxPollSeconds = 5, pollIntervalMs = 200) {
  return await this.sql `
  SELECT *
- FROM pgflow.read_with_poll(
+ FROM pgmq.read_with_poll(
  queue_name => ${queueName},
  vt => ${visibilityTimeout},
  qty => ${batchSize},
package/dist/README.md CHANGED