@pgflow/dsl 0.0.5 → 0.0.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (71)
  1. package/package.json +4 -1
  2. package/CHANGELOG.md +0 -7
  3. package/__tests__/runtime/flow.test.ts +0 -121
  4. package/__tests__/runtime/steps.test.ts +0 -183
  5. package/__tests__/runtime/utils.test.ts +0 -149
  6. package/__tests__/types/dsl-types.test-d.ts +0 -103
  7. package/__tests__/types/example-flow.test-d.ts +0 -76
  8. package/__tests__/types/extract-flow-input.test-d.ts +0 -71
  9. package/__tests__/types/extract-flow-steps.test-d.ts +0 -74
  10. package/__tests__/types/getStepDefinition.test-d.ts +0 -65
  11. package/__tests__/types/step-input.test-d.ts +0 -212
  12. package/__tests__/types/step-output.test-d.ts +0 -55
  13. package/brainstorming/condition/condition-alternatives.md +0 -219
  14. package/brainstorming/condition/condition-with-flexibility.md +0 -303
  15. package/brainstorming/condition/condition.md +0 -139
  16. package/brainstorming/condition/implementation-plan.md +0 -372
  17. package/brainstorming/dsl/cli-json-schema.md +0 -225
  18. package/brainstorming/dsl/cli.md +0 -179
  19. package/brainstorming/dsl/create-compilator.md +0 -25
  20. package/brainstorming/dsl/dsl-analysis-2.md +0 -166
  21. package/brainstorming/dsl/dsl-analysis.md +0 -512
  22. package/brainstorming/dsl/dsl-critique.md +0 -41
  23. package/brainstorming/fanouts/fanout-subflows-flattened-vs-subruns.md +0 -213
  24. package/brainstorming/fanouts/fanouts-task-index.md +0 -150
  25. package/brainstorming/fanouts/fanouts-with-conditions-and-subflows.md +0 -239
  26. package/brainstorming/subflows/branching.ts.md +0 -38
  27. package/brainstorming/subflows/subflows-callbacks.ts.md +0 -124
  28. package/brainstorming/subflows/subflows-classes.ts.md +0 -83
  29. package/brainstorming/subflows/subflows-flattening-versioned.md +0 -119
  30. package/brainstorming/subflows/subflows-flattening.md +0 -138
  31. package/brainstorming/subflows/subflows.md +0 -118
  32. package/brainstorming/subflows/subruns-table.md +0 -282
  33. package/brainstorming/subflows/subruns.md +0 -315
  34. package/brainstorming/versioning/breaking-and-non-breaking-flow-changes.md +0 -259
  35. package/docs/refactor-edge-worker.md +0 -146
  36. package/docs/versioning.md +0 -19
  37. package/eslint.config.cjs +0 -22
  38. package/out-tsc/vitest/__tests__/runtime/flow.test.d.ts +0 -2
  39. package/out-tsc/vitest/__tests__/runtime/flow.test.d.ts.map +0 -1
  40. package/out-tsc/vitest/__tests__/runtime/steps.test.d.ts +0 -2
  41. package/out-tsc/vitest/__tests__/runtime/steps.test.d.ts.map +0 -1
  42. package/out-tsc/vitest/__tests__/runtime/utils.test.d.ts +0 -2
  43. package/out-tsc/vitest/__tests__/runtime/utils.test.d.ts.map +0 -1
  44. package/out-tsc/vitest/__tests__/types/dsl-types.test-d.d.ts +0 -2
  45. package/out-tsc/vitest/__tests__/types/dsl-types.test-d.d.ts.map +0 -1
  46. package/out-tsc/vitest/__tests__/types/example-flow.test-d.d.ts +0 -2
  47. package/out-tsc/vitest/__tests__/types/example-flow.test-d.d.ts.map +0 -1
  48. package/out-tsc/vitest/__tests__/types/extract-flow-input.test-d.d.ts +0 -2
  49. package/out-tsc/vitest/__tests__/types/extract-flow-input.test-d.d.ts.map +0 -1
  50. package/out-tsc/vitest/__tests__/types/extract-flow-steps.test-d.d.ts +0 -2
  51. package/out-tsc/vitest/__tests__/types/extract-flow-steps.test-d.d.ts.map +0 -1
  52. package/out-tsc/vitest/__tests__/types/getStepDefinition.test-d.d.ts +0 -2
  53. package/out-tsc/vitest/__tests__/types/getStepDefinition.test-d.d.ts.map +0 -1
  54. package/out-tsc/vitest/__tests__/types/step-input.test-d.d.ts +0 -2
  55. package/out-tsc/vitest/__tests__/types/step-input.test-d.d.ts.map +0 -1
  56. package/out-tsc/vitest/__tests__/types/step-output.test-d.d.ts +0 -2
  57. package/out-tsc/vitest/__tests__/types/step-output.test-d.d.ts.map +0 -1
  58. package/out-tsc/vitest/tsconfig.spec.tsbuildinfo +0 -1
  59. package/out-tsc/vitest/vite.config.d.ts +0 -3
  60. package/out-tsc/vitest/vite.config.d.ts.map +0 -1
  61. package/project.json +0 -28
  62. package/prompts/edge-worker-refactor.md +0 -105
  63. package/src/dsl.ts +0 -318
  64. package/src/example-flow.ts +0 -67
  65. package/src/index.ts +0 -1
  66. package/src/utils.ts +0 -84
  67. package/tsconfig.json +0 -13
  68. package/tsconfig.lib.json +0 -26
  69. package/tsconfig.spec.json +0 -35
  70. package/typecheck.log +0 -120
  71. package/vite.config.ts +0 -57
package/brainstorming/dsl/dsl-analysis.md
@@ -1,512 +0,0 @@
# Comprehensive Critique of a Database-Centric, DSL-Driven Flow Approach

Below is a detailed analysis of using a TypeScript-based DSL to declaratively define and manage workflows (DAGs) directly in PostgreSQL (for Supabase or similar). The approach is often referred to as "DSL-in-the-DB."

We'll first outline **10 common pitfalls and problems** you might face. Each pitfall has a short code snippet or example that illustrates the issue. Next, we'll discuss **10 positive use cases** where this approach really shines.

---

## 1. Potential Pitfalls and Challenges

### 1.1 Complex Conditional Flows

**Description**
When you define flows with many conditional branches (e.g., short-circuiting certain steps if a condition is met), concurrency and runIf conditions can make your definitions complex. Developers may struggle with the logic or misuse the condition checks.

**Example**

```ts
// "runIf" or "runUnless" might cause confusion if combined incorrectly:
const SomeComplexFlow = new Flow<{ userIsVIP: boolean }>()
  .step(
    {
      slug: 'special_vip_step',
      dependsOn: ['check_vip'],
      runIf: { run: { userIsVIP: true } }, // runs only for VIP
    },
    async () => {
      // ...
      return { msg: 'Completed VIP step.' };
    }
  )
  .step(
    {
      slug: 'normal_step',
      dependsOn: ['check_vip'],
      runUnless: { run: { userIsVIP: true } }, // runs only for non-VIP
    },
    async () => {
      // ...
      return { msg: 'Completed normal step.' };
    }
  );

// If a developer forgets to handle unexpected logic (like userIsVIP = undefined),
// the runIf conditions can fail or skip steps unintentionally.
```

**Key Issue**

- Hard-to-debug mixing of `runIf` and `runUnless` conditions, potentially leading to partial or skipped steps.
- In some cases, you might need nested conditional flows, which become verbose.

---

### 1.2 Over-Reliance on Database Transactions

**Description**
Everything runs within database constructs. Handling long-running tasks entirely inside transactions or relying too heavily on transactional guarantees might lead to performance or locking issues.

**Example**

```sql
-- Suppose we have a step that triggers a heavy operation:
SELECT pgflow.add_step(
  flow_slug => 'heavy_computation_flow',
  step_slug => 'ml_training',
  deps => ARRAY['data_fetch']
);

-- Worker picks up 'ml_training' but it runs for hours:
-- If we rely on immediate DB transaction, we risk timeouts or locks.
```

**Key Issue**

- When tasks run too long, the worker might exceed database or row-lock timeouts.

---

### 1.3 Incorrect or Missing Step Ordering

**Description**
The DSL typically enforces topological ordering, but you can still forget dependencies or incorrectly define them, causing cycles or orphan steps.

**Example**

```sql
-- Mistakenly adding a step that points back to an ancestor (cycle):
SELECT pgflow.add_step('some_flow', 'step_a');
SELECT pgflow.add_step('some_flow', 'step_b', deps => ARRAY['step_a']);
SELECT pgflow.add_step('some_flow', 'step_a', deps => ARRAY['step_b']);
-- This re-adds 'step_a' with a dependency on 'step_b', creating a cycle
-- that the system may refuse or raise an error for.
```

**Key Issue**

- Cycles or misordered "root steps" cause the entire flow to fail at definition time.
- Breaking big flows into smaller subflows might help, but a large monolithic flow can create confusion.

---

### 1.4 Data Bloating in JSON Fields

**Description**
Each step's output is stored in JSON (or JSONB). Large outputs can bloat database storage, degrade performance, or hamper indexing.

**Example**

```ts
// Step that might store large data:
.step(
  { slug: 'fetch_big_report' },
  async (input) => {
    // Potentially returns massive JSON
    return { bigReport: "..." };
  }
)
```

**Key Issue**

- Over time, you accumulate large JSON payloads in the DB.
- Hard to index or query partial fields from large JSON unless carefully planned (e.g., domain-specific partial storage).

---

### 1.5 Versioning Flows in Production

**Description**
Making changes or versioning existing flows while runs are in progress can cause confusion. Some "in-flight" runs rely on old definitions; new runs rely on updated definitions.

**Example**

```sql
-- Attempting to rename a step slug in production:
UPDATE steps
SET slug = 'fetch_data_v2'
WHERE slug = 'fetch_data_v1'
  AND flow_slug = 'my_flow';
-- Could break or orphan old runs that reference the old slug.
```

**Key Issue**

- If you rename or remove steps in the DB definition while older runs still reference them, you can end up with incomplete or stuck runs.

---

### 1.6 Mismatch Between TypeScript DSL and SQL State

**Description**
The TypeScript DSL might produce a shape that is out of sync with the actual SQL definitions. If you forget a migration step or run local code that doesn't match the DB schema, you'll see inconsistent runs.

**Example**

```ts
// TypeScript code defines "step_C" dependsOn: ["step_B"],
// but in the DB, "step_C" might not have that dependency due to a missed migration script.
.step(
  { slug: 'step_C', dependsOn: ['step_B'] },
  async (input) => { /* ... */ }
);
```

**Key Issue**

- Accidental drift between the DSL and the actual DB.
- Forces you to maintain either an automated migration step or a manual alignment process.

---

### 1.7 Potential Overhead for Simple Flows

**Description**
If your flows are simple "do step A, then step B," the overhead of designing them as DAG definitions, SQL objects, and DSL code might be too heavy.

**Example**

```ts
// Overkill for a simple 2-step sequence:
const SimpleFlow = new Flow<{ name: string }>()
  .step({ slug: 'stepA' }, async (input) => `Hello, ${input.run.name}`)
  .step({ slug: 'stepB', dependsOn: ['stepA'] }, async (input) =>
    console.log(input.stepA)
  );

// Could just do it in a direct function call with fewer abstractions.
```

**Key Issue**

- Overhead of setting up the DSL, storing definitions, polling tasks, etc., might not justify the complexity for trivial flows.

---

### 1.8 Debugging Failed Steps with Partial Data

**Description**
When steps fail, partial data might remain. If you rely on the DSL to pass data from step to step, you might not see intermediate debugging logs in your usual console.

**Example**

```ts
// A step that frequently fails, but only partial data is in DB logs
.step(
  { slug: 'fragile_api_call' },
  async (input) => {
    // If this fails mid-call, you might not see intermediate states
    // unless you explicitly log them or store them in partial steps.
  }
);
```

**Key Issue**

- Observability might require a more robust approach than simply storing JSON outputs.
- Hard to step through (like a debugger) inside a distributed worker scenario.

---

### 1.9 Handling Secrets and Sensitive Data

**Description**
When outputs or inputs contain sensitive info (API tokens, user data, secrets), storing them as plain JSON in the DB can be risky or require special encryption logic.

**Example**

```ts
// Suppose input has tokens:
type Input = { userToken: string; url: string };
const SecureFlow = new Flow<Input>().step(
  { slug: 'use_token' },
  async (payload) => {
    // We might inadvertently store userToken in final logs or outputs
    return { usedToken: payload.run.userToken };
  }
);
```

**Key Issue**

- If logs or DB entries are compromised, secrets might leak.
- Encryption at rest or application-level encryption of sensitive fields might help, but is not always built-in.

---

### 1.10 Managing Parallelism vs. Resource Limits

**Description**
The DSL can trigger multiple parallel steps. On a small Supabase plan, you might overrun resource or concurrency limits if you scale flows too aggressively.

**Example**

```ts
// Each root step can spawn 20 parallel sub-steps:
const BigParallelFlow = new Flow<{ items: number[] }>()
  .step({ slug: 'split' }, async (payload) => payload.run.items)
  // ...
  .step(
    { slug: 'fan_out_step_19', dependsOn: ['split'] },
    async () => { /* ... */ }
  )
  .step(
    { slug: 'fan_out_step_20', dependsOn: ['split'] },
    async () => { /* ... */ }
  );
```

**Key Issue**

- The database queue might produce 20 parallel tasks; each worker might open heavy connections or saturate your CPU.
- On a constrained setup, this can lead to unexpected slowdowns, partial failures, or growing queue backlogs.

---

## 2. Positive and Promising Use Cases

Despite the above pitfalls, there are numerous scenarios where this DSL approach is very powerful.

### 2.1 Clear Separation of Concerns

**Description**
You keep the "what to run and in what order" in the database, while your actual logic is in edge functions or TypeScript. This separation clarifies "flow structure" vs. "execution code."

**Example**

```ts
// Flow definition is purely describing steps:
const MyFlow = new Flow<{ userId: string }>()
  .step({ slug: 'get_user_data' }, async (p) =>
    fetchUserProfileFromDb(p.run.userId)
  )
  .step({ slug: 'process_data', dependsOn: ['get_user_data'] }, async (p) =>
    doSomeComputation(p.get_user_data)
  );

// The DB has a stable definition: flow = MyFlow.
```

**Key Benefit**

- DB queries remain straightforward; step orchestration logic is separate.
- A single source of truth in the DB for which steps exist and how they connect.

---

### 2.2 Automatic Retry and Exactly-Once Semantics

**Description**
Combining the queue (e.g., pgmq) with step state transitions ensures at-most-once or exactly-once semantics. The DSL automatically handles retries, skipping steps if already completed, etc.

**Example**

```ts
const RetryFlow = new Flow<{ value: number }>().step(
  { slug: 'division_step', maxAttempts: 3, baseDelay: 2 },
  async (p) => {
    if (p.run.value === 0) throw new Error("Can't divide by zero!");
    return { result: 100 / p.run.value };
  }
);
```

**Key Benefit**

- Automatic exponential backoff after failures reduces developer overhead.
- The system handles concurrency, so you only write your logic.

---

### 2.3 Parallel Steps for Faster Execution

**Description**
DAG shape means you can define steps that run in parallel, drastically speeding up certain pipelines. The DSL approach makes parallel definitions simple.

**Example**

```ts
// Summarize and sentiment-analyze in parallel:
const AnalyzeParallel = new Flow<{ text: string }>()
  .step({ slug: 'sentiment' }, async (p) => analyzeSentiment(p.run.text))
  .step({ slug: 'summary' }, async (p) => summarizeWithLLM(p.run.text))
  .step({ slug: 'combine', dependsOn: ['sentiment', 'summary'] }, async (p) => {
    return {
      combined: `Summary: ${p.summary}, Sentiment: ${p.sentiment}`,
    };
  });
```

**Key Benefit**

- You define concurrency simply by listing dependencies.
- No extra code to manage parallel tasks or threads; the worker pool does the heavy lifting.

---

### 2.4 Observability with Step-Level State

**Description**
Each step's status and output are tracked in PostgreSQL. This makes it easy to query run history, see which steps failed, and retrieve the partial outputs.

**Example**

```sql
-- Querying the DB table "step_states" for a given run:
SELECT step_slug, status, output
FROM step_states
WHERE run_id = '<some_run_id>';
```

**Key Benefit**

- Simple SQL queries to see current or historical status.
- Easy to integrate with dashboards or a BI tool (on Supabase).

---

### 2.5 Transactional Consistency

**Description**
If a step depends on multiple prior steps, you can be sure the outputs from those steps are fully consistent in the DB. The system handles commits atomically.

**Example**

```sql
-- When the edge function calls complete_task, it either commits everything or rolls back:
SELECT pgflow.complete_task(
  run_id => 'some_run_id',
  step_slug => 'fetch_data',
  output => '{"data":"..."}'
);
/*
  This ensures the next steps only see the output if the transaction committed.
  Otherwise, the task is retried.
*/
```

**Key Benefit**

- Minimizes risk of partial writes or half-updated states in concurrency scenarios.
- Straightforward approach to ensuring data integrity.

---

### 2.6 Scalable Worker Model

**Description**
The worker poll mechanism can scale horizontally: multiple edge function instances can poll the same queue. The flow engine keeps track of tasks in flight using PG concurrency primitives.

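As an illustration of why this scales, here is a rough sketch of the claim query a worker might run. The `step_tasks` table and its columns are hypothetical stand-ins, not pgflow's actual schema; the important part is `FOR UPDATE SKIP LOCKED`, the Postgres primitive that lets any number of workers poll the same table without ever claiming the same task twice:

```ts
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Claim up to `limit` queued tasks for this worker instance.
// FOR UPDATE SKIP LOCKED skips rows already locked by other workers,
// so horizontally scaled workers never grab the same task.
async function claimTasks(limit = 5) {
  const { rows } = await pool.query(
    `UPDATE step_tasks
     SET status = 'started', started_at = now()
     WHERE id IN (
       SELECT id FROM step_tasks
       WHERE status = 'queued'
       ORDER BY created_at
       LIMIT $1
       FOR UPDATE SKIP LOCKED
     )
     RETURNING id, run_id, step_slug, input`,
    [limit]
  );
  return rows;
}
```

Each worker instance simply loops over `claimTasks`, executes the matching handlers, and reports completions back to the database, so throughput scales by running more workers.
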
**Key Benefit**

- Works well in serverless environments (e.g., multiple Supabase edge functions).
- Good for "burst" scenarios: you can scale up workers quickly if many tasks come in at once.

---

### 2.7 Automatic Type Inference and IntelliSense

**Description**
The TypeScript DSL can infer the shape of inputs, outputs, and step dependencies, giving you better auto-completion in your IDE.

**Example**

```ts
const InferredFlow = new Flow<{ name: string }>()
  .step({ slug: 'greet' }, (p) => `Hello, ${p.run.name}`)
  .step({ slug: 'uppercase', dependsOn: ['greet'] }, (p) =>
    p.greet.toUpperCase()
  );

// 'p.greet' is inferred as string from the previous step.
// No manual type definitions for 'uppercase' needed other than the initial flow input.
```

**Key Benefit**

- Less boilerplate.
- TypeScript ensures you don't access data that wasn't provided by a dependency.

---

### 2.8 Flexible "Fan-In" or "Fan-Out"

**Description**
Declarative steps let you do fan-out to process multiple items in parallel and then fan-in to combine outputs. This is especially handy for data transformations or chunked tasks.

**Example**

```ts
// Suppose you have an array of items:
.step({ slug: 'split_items' }, async (p) => p.run.items)
.step(
  { slug: 'process_each_item', dependsOn: ['split_items'], fanOut: true },
  async (p, chunkIndex) => handleSingleItem(p.split_items[chunkIndex])
)
.step(
  { slug: 'combine_results', dependsOn: ['process_each_item'] },
  async (p) => aggregateAll(p.process_each_item)
);
```

**Key Benefit**

- DSL approach can automatically generate multiple sub-tasks for each item.
- You get aggregated results at the "fan-in" step.

---

### 2.9 Persistent State for Long-Lived Processes

**Description**
Workflows can remain "running" for days or weeks, especially if you have delayed steps or wait states. The DB acts as a single source of truth.

**Example**

```sql
-- A step that triggers a follow-up after 7 days:
SELECT pgflow.add_step(
  flow_slug => 'user_onboarding',
  step_slug => 'follow_up_email',
  deps => ARRAY['initial_welcome'],
  delay_seconds => 604800
);
```

**Key Benefit**

- No need for a separate "cron" to wait for 7 days. The engine can hold the step until the delay passes.
- You can pick up the flow run exactly where it left off after downtime/deployments.

---

### 2.10 Single Platform (Supabase + Postgres + Edge Functions)

**Description**
You don't need Kubernetes, external queues, or heavy infra. You run SQL plus minimal worker code in Edge Functions. This lowers the operational overhead.

**Key Benefit**

- Ideal for small/medium apps that want a serverless or single cloud.
- Easy to manage in the Supabase ecosystem (auth, db, functions).

---

## Concluding Thoughts

Using a database-centric DSL to manage flows has powerful advantages—particularly the robust state tracking, type inference, and parallel step orchestration. However, it also introduces potential complexities:

- **Workflow versioning** can be tough.
- **Data bloat** and **sensitive data** might need special handling.
- **Excessive overhead** for simple use cases.

For teams building multi-step, parallelizable, or long-running processes in a single Postgres-based environment (e.g., Supabase), this approach can unify data and orchestration tracking under one roof. It's a compelling solution, as long as you plan carefully around potential pitfalls such as concurrency, conditional branching complexity, and output storage.
package/brainstorming/dsl/dsl-critique.md
@@ -1,41 +0,0 @@
# Critique of Current Flow DSL Implementation

Below are several key points where the current Flow DSL could be improved or clarified. These focus on non-trivial issues that aren't already covered by TypeScript's type system or foreign key constraints enforcing topological ordering.

## 1. Step Name Collisions
Currently, when adding steps using the `step()` method, there is no mechanism for detecting or preventing duplicate step slugs within the same Flow instance. Accidentally reusing an existing slug would silently overwrite the corresponding `stepDefinitions` entry, leading to unexpected behavior.

### Possible Improvements
- Throw an error if a step is defined more than once with the same slug (see the sketch below).
- Provide a helper or check to ensure slugs remain unique in a given Flow.

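As an illustration only (the real `Flow` implementation differs, e.g. it returns a new instance from each `.step()` call), a duplicate-slug guard could look roughly like this:

```ts
type StepOptions = { slug: string; dependsOn?: string[] };
type StepHandler = (input: unknown) => unknown | Promise<unknown>;

class Flow<TInput> {
  // Mirrors the `stepDefinitions` store mentioned above; the real shape differs.
  private stepDefinitions = new Map<string, { opts: StepOptions; handler: StepHandler }>();

  step(opts: StepOptions, handler: StepHandler): this {
    // Fail fast instead of silently overwriting an existing definition.
    if (this.stepDefinitions.has(opts.slug)) {
      throw new Error(`Duplicate step slug "${opts.slug}": slugs must be unique within a Flow`);
    }
    this.stepDefinitions.set(opts.slug, { opts, handler });
    return this;
  }
}
```
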
## 2. Cross-Flow Step References
There is no hard check to ensure that a `dependsOn` array only references steps within the same Flow. If a user mistakenly references a slug that belongs to a different Flow or that doesn't exist in the current Flow, they'd only discover the error at execution time (e.g., missing output in a payload).

### Possible Improvements
- Validate each `dependsOn` slug to ensure it exists in the current `stepDefinitions` (see the sketch below).
- Fail early (i.e., throw) at call time when an invalid dependency is specified.

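A sketch of that validation, written as a standalone helper that `.step()` could call before registering a step (the `stepDefinitions` map is the same assumption as in the previous sketch):

```ts
// Throws at definition time if a declared dependency is unknown to this Flow,
// so a typo or a slug from another Flow never survives to execution time.
function assertKnownDeps(
  stepDefinitions: Map<string, unknown>,
  slug: string,
  dependsOn: string[] = []
): void {
  for (const dep of dependsOn) {
    if (!stepDefinitions.has(dep)) {
      throw new Error(`Step "${slug}" depends on unknown step "${dep}"`);
    }
  }
}

// e.g. inside step(): assertKnownDeps(this.stepDefinitions, opts.slug, opts.dependsOn);
```
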
## 3. Missing Flow-Level Default Fallback
Although each step can define overrides for `maxAttempts`, `baseDelay`, and `timeout`, there is no fallback logic at runtime if a step-level option is undefined. For instance, if a user sets a `baseDelay` at the flow level only, a step requiring that delay but not providing its own override should inherit the flow's value rather than defaulting to `undefined`.

### Possible Improvements
- When building `stepOptionsStore`, merge step-level overrides with flow-level defaults.
- Expose a utility method (e.g., `getEffectiveOptions(stepSlug)`) that returns the merged set of options for each step, as sketched below.

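A sketch of what such a merge could look like; the option names come from the critique above, while the fallback object and its numeric values are placeholders for illustration:

```ts
type RuntimeOptions = { maxAttempts?: number; baseDelay?: number; timeout?: number };

// Hypothetical engine-wide fallbacks; real defaults would live in the runtime.
const HARD_FALLBACKS: Required<RuntimeOptions> = { maxAttempts: 3, baseDelay: 1, timeout: 60 };

// Step-level overrides win over flow-level defaults, which win over hard fallbacks.
function getEffectiveOptions(
  flowDefaults: RuntimeOptions,
  stepOverrides: RuntimeOptions = {}
): Required<RuntimeOptions> {
  return { ...HARD_FALLBACKS, ...flowDefaults, ...stepOverrides };
}

// getEffectiveOptions({ baseDelay: 5 }, { maxAttempts: 10 })
// -> { maxAttempts: 10, baseDelay: 5, timeout: 60 }
```
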
## 4. Runtime Validation of Handler Output Shape
Since handlers return arbitrary JSON, there is little reassurance at runtime if a handler's actual returned shape mismatches the inferred type. The TypeScript compiler can help, but cases of dynamic data or incorrectly typed library calls could pass the compiler but break at runtime.

### Possible Improvements
- Provide an optional debug or development mode that checks a handler's returned data against the inferred type structure (e.g., via JSON schema or a similar approach), as sketched below.

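One possible shape for that check, purely as a sketch: the validator callback and dev-mode flag are not part of the current DSL, and a JSON-schema or Zod validator could stand in for the simple predicate used here.

```ts
type OutputValidator = (value: unknown) => boolean;

const DEV_MODE = process.env.NODE_ENV !== 'production';

// Wraps a handler so its returned data is checked in development
// and passed through untouched in production.
function withOutputCheck<TIn, TOut>(
  slug: string,
  handler: (input: TIn) => TOut | Promise<TOut>,
  isValid: OutputValidator
): (input: TIn) => Promise<TOut> {
  return async (input) => {
    const output = await handler(input);
    if (DEV_MODE && !isValid(output)) {
      throw new Error(`Step "${slug}" returned data that does not match its declared shape`);
    }
    return output;
  };
}

// Usage sketch:
// .step({ slug: 'greet' }, withOutputCheck('greet', greetHandler, (o) =>
//   typeof (o as { msg?: unknown }).msg === 'string'
// ));
```
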
## 5. Handling of Late-Added Steps
The current structure re-creates a new `Flow` each time `.step()` is called. If a user tries to add new steps after certain operations or references have been made (such as storing an existing Flow instance for execution), it may cause confusion or partial definitions. The final shape of the Flow is only fully determined after all `.step()` calls, but nothing prevents referencing incomplete or out-of-order definitions in the meantime.

### Possible Improvements
- Enforce a "building phase" vs. "execution phase" distinction—only allow `.step()` declarations before you attempt to retrieve or execute the Flow definitions.
- Consider a final "freeze" step to lock the DSL so no further definitions can be added (see the sketch below).

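A sketch of the "freeze" idea, using a mutable store for brevity rather than the real Flow's immutable-style chaining:

```ts
class Flow<TInput> {
  private frozen = false;
  private stepDefinitions = new Map<string, unknown>();

  step(opts: { slug: string; dependsOn?: string[] }, handler: (input: unknown) => unknown): this {
    if (this.frozen) {
      throw new Error('Flow is frozen: declare all steps before the flow is read or executed');
    }
    this.stepDefinitions.set(opts.slug, { opts, handler });
    return this;
  }

  // Any read of the definitions ends the building phase.
  getStepDefinition(slug: string): unknown {
    this.frozen = true;
    return this.stepDefinitions.get(slug);
  }
}
```
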
---

By addressing these points, the Flow DSL will be more robust, more fault-tolerant, and less prone to subtle errors that can arise in real-world usage.