@dojocho/effect-ts 0.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (149)
  1. package/DOJO.md +22 -0
  2. package/dojo.json +50 -0
  3. package/katas/001-hello-effect/SENSEI.md +72 -0
  4. package/katas/001-hello-effect/solution.test.ts +35 -0
  5. package/katas/001-hello-effect/solution.ts +16 -0
  6. package/katas/002-transform-with-map/SENSEI.md +72 -0
  7. package/katas/002-transform-with-map/solution.test.ts +33 -0
  8. package/katas/002-transform-with-map/solution.ts +16 -0
  9. package/katas/003-generator-pipelines/SENSEI.md +72 -0
  10. package/katas/003-generator-pipelines/solution.test.ts +40 -0
  11. package/katas/003-generator-pipelines/solution.ts +29 -0
  12. package/katas/004-flatmap-and-chaining/SENSEI.md +80 -0
  13. package/katas/004-flatmap-and-chaining/solution.test.ts +34 -0
  14. package/katas/004-flatmap-and-chaining/solution.ts +18 -0
  15. package/katas/005-pipe-composition/SENSEI.md +81 -0
  16. package/katas/005-pipe-composition/solution.test.ts +41 -0
  17. package/katas/005-pipe-composition/solution.ts +19 -0
  18. package/katas/006-handle-errors/SENSEI.md +86 -0
  19. package/katas/006-handle-errors/solution.test.ts +53 -0
  20. package/katas/006-handle-errors/solution.ts +30 -0
  21. package/katas/007-tagged-errors/SENSEI.md +79 -0
  22. package/katas/007-tagged-errors/solution.test.ts +82 -0
  23. package/katas/007-tagged-errors/solution.ts +37 -0
  24. package/katas/008-error-patterns/SENSEI.md +89 -0
  25. package/katas/008-error-patterns/solution.test.ts +41 -0
  26. package/katas/008-error-patterns/solution.ts +38 -0
  27. package/katas/009-option-type/SENSEI.md +96 -0
  28. package/katas/009-option-type/solution.test.ts +49 -0
  29. package/katas/009-option-type/solution.ts +26 -0
  30. package/katas/010-either-and-exit/SENSEI.md +86 -0
  31. package/katas/010-either-and-exit/solution.test.ts +33 -0
  32. package/katas/010-either-and-exit/solution.ts +17 -0
  33. package/katas/011-services-and-context/SENSEI.md +82 -0
  34. package/katas/011-services-and-context/solution.test.ts +23 -0
  35. package/katas/011-services-and-context/solution.ts +17 -0
  36. package/katas/012-layers/SENSEI.md +73 -0
  37. package/katas/012-layers/solution.test.ts +23 -0
  38. package/katas/012-layers/solution.ts +26 -0
  39. package/katas/013-testing-effects/SENSEI.md +88 -0
  40. package/katas/013-testing-effects/solution.test.ts +41 -0
  41. package/katas/013-testing-effects/solution.ts +20 -0
  42. package/katas/014-schema-basics/SENSEI.md +81 -0
  43. package/katas/014-schema-basics/solution.test.ts +35 -0
  44. package/katas/014-schema-basics/solution.ts +25 -0
  45. package/katas/015-domain-modeling/SENSEI.md +85 -0
  46. package/katas/015-domain-modeling/solution.test.ts +46 -0
  47. package/katas/015-domain-modeling/solution.ts +42 -0
  48. package/katas/016-retry-and-schedule/SENSEI.md +72 -0
  49. package/katas/016-retry-and-schedule/solution.test.ts +26 -0
  50. package/katas/016-retry-and-schedule/solution.ts +23 -0
  51. package/katas/017-parallel-effects/SENSEI.md +70 -0
  52. package/katas/017-parallel-effects/solution.test.ts +33 -0
  53. package/katas/017-parallel-effects/solution.ts +17 -0
  54. package/katas/018-race-and-timeout/SENSEI.md +75 -0
  55. package/katas/018-race-and-timeout/solution.test.ts +30 -0
  56. package/katas/018-race-and-timeout/solution.ts +27 -0
  57. package/katas/019-ref-and-state/SENSEI.md +72 -0
  58. package/katas/019-ref-and-state/solution.test.ts +29 -0
  59. package/katas/019-ref-and-state/solution.ts +16 -0
  60. package/katas/020-fibers/SENSEI.md +80 -0
  61. package/katas/020-fibers/solution.test.ts +23 -0
  62. package/katas/020-fibers/solution.ts +23 -0
  63. package/katas/021-acquire-release/SENSEI.md +57 -0
  64. package/katas/021-acquire-release/solution.test.ts +23 -0
  65. package/katas/021-acquire-release/solution.ts +22 -0
  66. package/katas/022-scoped-layers/SENSEI.md +52 -0
  67. package/katas/022-scoped-layers/solution.test.ts +35 -0
  68. package/katas/022-scoped-layers/solution.ts +19 -0
  69. package/katas/023-resource-patterns/SENSEI.md +52 -0
  70. package/katas/023-resource-patterns/solution.test.ts +20 -0
  71. package/katas/023-resource-patterns/solution.ts +13 -0
  72. package/katas/024-streams-basics/SENSEI.md +61 -0
  73. package/katas/024-streams-basics/solution.test.ts +30 -0
  74. package/katas/024-streams-basics/solution.ts +16 -0
  75. package/katas/025-stream-operations/SENSEI.md +59 -0
  76. package/katas/025-stream-operations/solution.test.ts +26 -0
  77. package/katas/025-stream-operations/solution.ts +17 -0
  78. package/katas/026-combining-streams/SENSEI.md +54 -0
  79. package/katas/026-combining-streams/solution.test.ts +20 -0
  80. package/katas/026-combining-streams/solution.ts +16 -0
  81. package/katas/027-data-pipelines/SENSEI.md +58 -0
  82. package/katas/027-data-pipelines/solution.test.ts +22 -0
  83. package/katas/027-data-pipelines/solution.ts +16 -0
  84. package/katas/028-logging-and-spans/SENSEI.md +58 -0
  85. package/katas/028-logging-and-spans/solution.test.ts +50 -0
  86. package/katas/028-logging-and-spans/solution.ts +20 -0
  87. package/katas/029-http-client/SENSEI.md +59 -0
  88. package/katas/029-http-client/solution.test.ts +49 -0
  89. package/katas/029-http-client/solution.ts +24 -0
  90. package/katas/030-capstone/SENSEI.md +63 -0
  91. package/katas/030-capstone/solution.test.ts +67 -0
  92. package/katas/030-capstone/solution.ts +55 -0
  93. package/katas/031-config-and-environment/SENSEI.md +77 -0
  94. package/katas/031-config-and-environment/solution.test.ts +38 -0
  95. package/katas/031-config-and-environment/solution.ts +11 -0
  96. package/katas/032-cause-and-defects/SENSEI.md +90 -0
  97. package/katas/032-cause-and-defects/solution.test.ts +50 -0
  98. package/katas/032-cause-and-defects/solution.ts +23 -0
  99. package/katas/033-pattern-matching/SENSEI.md +86 -0
  100. package/katas/033-pattern-matching/solution.test.ts +36 -0
  101. package/katas/033-pattern-matching/solution.ts +28 -0
  102. package/katas/034-deferred-and-coordination/SENSEI.md +85 -0
  103. package/katas/034-deferred-and-coordination/solution.test.ts +25 -0
  104. package/katas/034-deferred-and-coordination/solution.ts +24 -0
  105. package/katas/035-queue-and-backpressure/SENSEI.md +100 -0
  106. package/katas/035-queue-and-backpressure/solution.test.ts +25 -0
  107. package/katas/035-queue-and-backpressure/solution.ts +21 -0
  108. package/katas/036-schema-advanced/SENSEI.md +81 -0
  109. package/katas/036-schema-advanced/solution.test.ts +55 -0
  110. package/katas/036-schema-advanced/solution.ts +19 -0
  111. package/katas/037-cache-and-memoization/SENSEI.md +73 -0
  112. package/katas/037-cache-and-memoization/solution.test.ts +47 -0
  113. package/katas/037-cache-and-memoization/solution.ts +24 -0
  114. package/katas/038-metrics/SENSEI.md +91 -0
  115. package/katas/038-metrics/solution.test.ts +39 -0
  116. package/katas/038-metrics/solution.ts +23 -0
  117. package/katas/039-managed-runtime/SENSEI.md +75 -0
  118. package/katas/039-managed-runtime/solution.test.ts +29 -0
  119. package/katas/039-managed-runtime/solution.ts +19 -0
  120. package/katas/040-request-batching/SENSEI.md +87 -0
  121. package/katas/040-request-batching/solution.test.ts +56 -0
  122. package/katas/040-request-batching/solution.ts +32 -0
  123. package/package.json +22 -0
  124. package/skills/effect-patterns-building-apis/SKILL.md +2393 -0
  125. package/skills/effect-patterns-building-data-pipelines/SKILL.md +1876 -0
  126. package/skills/effect-patterns-concurrency/SKILL.md +2999 -0
  127. package/skills/effect-patterns-concurrency-getting-started/SKILL.md +351 -0
  128. package/skills/effect-patterns-core-concepts/SKILL.md +3199 -0
  129. package/skills/effect-patterns-domain-modeling/SKILL.md +1385 -0
  130. package/skills/effect-patterns-error-handling/SKILL.md +1212 -0
  131. package/skills/effect-patterns-error-handling-resilience/SKILL.md +179 -0
  132. package/skills/effect-patterns-error-management/SKILL.md +1668 -0
  133. package/skills/effect-patterns-getting-started/SKILL.md +237 -0
  134. package/skills/effect-patterns-making-http-requests/SKILL.md +1756 -0
  135. package/skills/effect-patterns-observability/SKILL.md +1586 -0
  136. package/skills/effect-patterns-platform/SKILL.md +1195 -0
  137. package/skills/effect-patterns-platform-getting-started/SKILL.md +179 -0
  138. package/skills/effect-patterns-project-setup--execution/SKILL.md +233 -0
  139. package/skills/effect-patterns-resource-management/SKILL.md +827 -0
  140. package/skills/effect-patterns-scheduling/SKILL.md +451 -0
  141. package/skills/effect-patterns-scheduling-periodic-tasks/SKILL.md +763 -0
  142. package/skills/effect-patterns-streams/SKILL.md +2052 -0
  143. package/skills/effect-patterns-streams-getting-started/SKILL.md +421 -0
  144. package/skills/effect-patterns-streams-sinks/SKILL.md +1181 -0
  145. package/skills/effect-patterns-testing/SKILL.md +1632 -0
  146. package/skills/effect-patterns-tooling-and-debugging/SKILL.md +1125 -0
  147. package/skills/effect-patterns-value-handling/SKILL.md +676 -0
  148. package/tsconfig.json +20 -0
  149. package/vitest.config.ts +3 -0
@@ -0,0 +1,2999 @@
+ ---
+ name: effect-patterns-concurrency
+ description: Effect-TS patterns for Concurrency. Use when working with concurrency in Effect-TS applications.
+ ---
+ # Effect-TS Patterns: Concurrency
+ This skill provides 20 curated Effect-TS patterns for concurrency.
+ Use this skill when working on tasks related to:
+ - concurrency
+ - Best practices in Effect-TS applications
+ - Real-world patterns and solutions
+
+ ---
+
+ ## 🟡 Intermediate Patterns
+
+ ### Race Concurrent Effects for the Fastest Result
+
+ **Rule:** Use Effect.race to get the result from the first of several effects to succeed, automatically interrupting the losers.
+
+ **Good Example:**
+
+ A classic use case is checking a fast cache before falling back to a slower database. We can race the cache lookup against the database query (in this simulation the cache is deliberately the slower source, so the database wins).
+
+ ```typescript
+ import { Effect, Option } from "effect";
+
+ type User = { id: number; name: string };
+
+ // Simulate a slower cache lookup that might find nothing (None)
+ const checkCache: Effect.Effect<Option.Option<User>> = Effect.succeed(
+   Option.none()
+ ).pipe(
+   Effect.delay("200 millis") // Made slower so the database wins
+ );
+
+ // Simulate a faster database query that will always find the data
+ const queryDatabase: Effect.Effect<Option.Option<User>> = Effect.succeed(
+   Option.some({ id: 1, name: "Paul" })
+ ).pipe(
+   Effect.delay("50 millis") // Made faster so it wins the race
+ );
+
+ // Race them. The database should win and return the user data.
+ const program = Effect.race(checkCache, queryDatabase).pipe(
+   // The result of the race is an Option, so we can handle it.
+   Effect.flatMap((result: Option.Option<User>) =>
+     Option.match(result, {
+       onNone: () => Effect.fail("User not found anywhere."),
+       onSome: (user) => Effect.succeed(user),
+     })
+   )
+ );
+
+ // Run the program and log the outcome. Failures travel through the
+ // error channel, so we handle them with catchAll rather than try/catch.
+ const programWithLogging = program.pipe(
+   Effect.tap((user) => Effect.log(`User found: ${JSON.stringify(user)}`)),
+   Effect.catchAll((error) =>
+     Effect.logError(`Handled error: ${error}`).pipe(Effect.as(null))
+   )
+ );
+
+ Effect.runPromise(programWithLogging);
+ ```
+
+ ---
+
+ **Anti-Pattern:**
+
+ Don't use `Effect.race` if you need the results of _all_ the effects. That is the job of `Effect.all`. Using `race` in this scenario will cause you to lose data, as all but one of the effects will be interrupted and their results discarded.
+
+ ```typescript
+ import { Effect } from "effect";
+
+ const fetchProfile = Effect.succeed({ name: "Paul" });
+ const fetchPermissions = Effect.succeed(["admin", "editor"]);
+
+ // ❌ WRONG: This will only return either the profile OR the permissions,
+ // whichever resolves first. You will lose the other piece of data.
+ const incompleteData = Effect.race(fetchProfile, fetchPermissions);
+
+ // ✅ CORRECT: Use Effect.all when you need all the results.
+ const completeData = Effect.all([fetchProfile, fetchPermissions]);
+ ```
+
+ **Rationale:**
+
+ When you have multiple effects that can produce the same type of result, and you only care about the one that finishes first, use `Effect.race(effectA, effectB)`.
+
+ ---
+
+
+ `Effect.race` is a powerful concurrency primitive for performance and resilience. It starts all provided effects in parallel. The moment one of them succeeds, `Effect.race` immediately interrupts all the other "losing" effects and returns the winning result. If one of the effects fails before any have succeeded, the race is not over; the remaining effects continue to run. The entire race only fails if _all_ participating effects fail.
+
+ This is commonly used for:
+
+ - **Performance:** Querying multiple redundant data sources (e.g., two API replicas) and taking the response from whichever is faster.
+ - **Implementing Timeouts:** Racing a primary effect against a delayed failure, effectively creating a timeout mechanism (see the sketch below). Because plain `Effect.race` ignores an early failure, this formulation uses `Effect.raceFirst`, which settles with whichever effect finishes first.
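+
+ A minimal sketch of that timeout idea (the names are illustrative; Effect ships this behavior ready-made as `Effect.timeout`, shown later in this skill):
+
+ ```typescript
+ import { Effect } from "effect";
+
+ class TimeoutError {
+   readonly _tag = "TimeoutError";
+ }
+
+ const primary = Effect.succeed("data").pipe(Effect.delay("2 seconds"));
+
+ // raceFirst settles with whichever effect finishes first (success or
+ // failure) and interrupts the other, so the delayed fail acts as a deadline.
+ const withTimeout = Effect.raceFirst(
+   primary,
+   Effect.fail(new TimeoutError()).pipe(Effect.delay("500 millis"))
+ );
+ ```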
+
+ ---
+
+ ---
+
+ ### Concurrency Pattern 2: Rate Limit Concurrent Access with Semaphore
+
+ **Rule:** Use a semaphore (`Effect.makeSemaphore`) to limit concurrent access to resources, preventing overload and enabling fair resource distribution.
+
+ **Good Example:**
+
+ This example demonstrates limiting concurrent database connections using a semaphore, preventing connection pool exhaustion.
+
+ ```typescript
+ import { Effect } from "effect";
+
+ interface QueryResult {
+   readonly id: number;
+   readonly result: string;
+   readonly duration: number;
+ }
+
+ // Simulate a database query that holds a connection
+ const executeQuery = (
+   queryId: number,
+   connectionId: number,
+   durationMs: number
+ ): Effect.Effect<QueryResult> =>
+   Effect.gen(function* () {
+     const startTime = Date.now();
+
+     yield* Effect.log(
+       `[Query ${queryId}] Using connection ${connectionId}, duration: ${durationMs}ms`
+     );
+
+     // Simulate query execution
+     yield* Effect.sleep(`${durationMs} millis`);
+
+     const duration = Date.now() - startTime;
+
+     return {
+       id: queryId,
+       result: `Result from query ${queryId}`,
+       duration,
+     };
+   });
+
+ // Pool configuration
+ interface ConnectionPoolConfig {
+   readonly maxConnections: number;
+   readonly queryTimeout?: number;
+ }
+
+ // Create a rate-limited query executor
+ const createRateLimitedQueryExecutor = (
+   config: ConnectionPoolConfig
+ ): Effect.Effect<
+   (queryId: number, durationMs: number) => Effect.Effect<QueryResult>
+ > =>
+   Effect.gen(function* () {
+     const semaphore = yield* Effect.makeSemaphore(config.maxConnections);
+     let connectionCounter = 0;
+
+     return (queryId: number, durationMs: number) =>
+       // withPermits(1) acquires a permit before the effect runs and
+       // releases it afterwards, even on failure or interruption.
+       semaphore.withPermits(1)(
+         Effect.gen(function* () {
+           const connectionId = ++connectionCounter;
+           const result = yield* executeQuery(queryId, connectionId, durationMs);
+           yield* Effect.log(
+             `[Query ${queryId}] Finished on connection ${connectionId}`
+           );
+           return result;
+         })
+       );
+   });
+
+ // Simulate multiple queries arriving
+ const program = Effect.gen(function* () {
+   const executor = yield* createRateLimitedQueryExecutor({
+     maxConnections: 3, // Only 3 concurrent connections
+   });
+
+   // Generate 10 queries with varying durations
+   const queries = Array.from({ length: 10 }, (_, i) => ({
+     id: i + 1,
+     duration: 500 + Math.random() * 1500, // 500-2000ms
+   }));
+
+   console.log(`\n[POOL] Starting with max 3 concurrent connections\n`);
+
+   // Start all queries at once; the semaphore caps concurrency at 3
+   const results = yield* Effect.all(
+     queries.map((q) => executor(q.id, Math.round(q.duration))),
+     { concurrency: "unbounded" }
+   );
+
+   console.log(`\n[POOL] All queries completed\n`);
+
+   // Summary
+   const totalDuration = results.reduce((sum, r) => sum + r.duration, 0);
+   const avgDuration = totalDuration / results.length;
+
+   console.log(`[SUMMARY]`);
+   console.log(`  Total queries: ${results.length}`);
+   console.log(`  Avg duration: ${Math.round(avgDuration)}ms`);
+   console.log(`  Longest query: ${Math.max(...results.map((r) => r.duration))}ms`);
+ });
+
+ Effect.runPromise(program);
+ ```
+
+ This pattern:
+
+ 1. **Creates a semaphore** with a fixed permit count
+ 2. **Acquires a permit** before using a connection, waiting if none are available
+ 3. **Executes the operation** while holding the permit
+ 4. **Releases the permit** automatically when the effect completes (`withPermits` guarantees release, even on failure or interruption)
+ 5. **Queues waiting queries** fairly
+
+ ---
+
+ **Rationale:**
+
+ When you need to limit how many operations can run concurrently (e.g., max 10 database connections, max 5 API calls per second), use a semaphore. A semaphore tracks a pool of permits; operations acquire a permit before proceeding and release it when done. Waiting operations are queued fairly.
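+
+ For reference, a minimal sketch of the primitive on its own (assuming Effect v3's `Effect.makeSemaphore`): a single semaphore can also weight operations by acquiring more than one permit.
+
+ ```typescript
+ import { Effect } from "effect";
+
+ const program = Effect.gen(function* () {
+   const semaphore = yield* Effect.makeSemaphore(4);
+
+   // Cheap reads hold 1 of the 4 permits
+   const read = (id: number) =>
+     semaphore.withPermits(1)(Effect.log(`read ${id}`));
+
+   // Heavy writes hold 2 permits each, leaving less room for other work
+   const heavyWrite = (id: number) =>
+     semaphore.withPermits(2)(Effect.log(`write ${id}`));
+
+   yield* Effect.all(
+     [read(1), read(2), heavyWrite(3), read(4), heavyWrite(5)],
+     { concurrency: "unbounded" }
+   );
+ });
+
+ Effect.runPromise(program);
+ ```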
+
+ ---
+
+
+ Resource constraints require limiting concurrency:
+
+ - **Connection pools**: Database limited to N connections
+ - **API rate limits**: Service allows only M requests per second
+ - **Memory limits**: Large operations can't all run simultaneously
+ - **CPU constraints**: Too many threads waste cycles on context switching
+ - **Backpressure**: Prevent downstream from being overwhelmed
+
+ Without a semaphore:
+
+ - All operations run simultaneously, exhausting resources
+ - Connection pool overflows, requests fail
+ - Memory pressure causes garbage collection pauses
+ - No orderly queuing of waiting work
+
+ With a semaphore:
+
+ - Fixed concurrency limit
+ - Fair queuing of waiting operations
+ - Backpressure naturally flows upstream
+ - Clear ownership of permits
+
+ ---
+
+ ---
+
+ ### Manage Shared State Safely with Ref
+
+ **Rule:** Use Ref to manage shared, mutable state concurrently, ensuring atomicity.
+
+ **Good Example:**
+
+ This program simulates 1,000 concurrent fibers all trying to increment a shared counter. Because we use `Ref.update`, every single increment is applied atomically, and the final result is always correct.
+
+ ```typescript
+ import { Effect, Ref } from "effect";
+
+ const program = Effect.gen(function* () {
+   // Create a new Ref with an initial value of 0
+   const ref = yield* Ref.make(0);
+
+   // Define an effect that increments the counter by 1
+   const increment = Ref.update(ref, (n) => n + 1);
+
+   // Create an array of 1,000 increment effects
+   const tasks = Array.from({ length: 1000 }, () => increment);
+
+   // Run all 1,000 effects concurrently
+   yield* Effect.all(tasks, { concurrency: "unbounded" });
+
+   // Get the final value of the counter
+   return yield* Ref.get(ref);
+ });
+
+ // The result will always be 1000
+ const programWithLogging = Effect.gen(function* () {
+   const result = yield* program;
+   yield* Effect.log(`Final counter value: ${result}`);
+   return result;
+ });
+
+ Effect.runPromise(programWithLogging);
+ ```
+
+ ---
+
+ **Anti-Pattern:**
+
+ The anti-pattern is using a standard JavaScript variable for shared state. The following example is not guaranteed to produce the correct result.
+
+ ```typescript
+ import { Effect } from "effect";
+
+ // ❌ WRONG: This is a classic race condition.
+ const programWithRaceCondition = Effect.gen(function* () {
+   let count = 0; // A plain, mutable variable
+
+   // An effect that reads, increments, and writes the variable
+   const increment = Effect.sync(() => {
+     const current = count;
+     // Another fiber could run between this read and the write below!
+     count = current + 1;
+   });
+
+   const tasks = Array.from({ length: 1000 }, () => increment);
+
+   yield* Effect.all(tasks, { concurrency: "unbounded" });
+
+   return count;
+ });
+
+ // The result is unpredictable and will likely be less than 1000.
+ Effect.runPromise(programWithRaceCondition).then(console.log);
+ ```
+
+ **Rationale:**
+
+ When you need to share mutable state between different concurrent fibers, create a `Ref<A>`. Use `Ref.get` to read the value and `Ref.update` or `Ref.set` to modify it. All operations on a `Ref` are atomic.
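+
+ Beyond `get`/`update`/`set`, `Ref.modify` performs an atomic read-and-update that also returns a derived value. A small sketch (the ticket-counter framing is illustrative):
+
+ ```typescript
+ import { Effect, Ref } from "effect";
+
+ const program = Effect.gen(function* () {
+   const counter = yield* Ref.make(0);
+
+   // Atomically return the current value and store the incremented one
+   const nextTicket = Ref.modify(counter, (n) => [n, n + 1] as const);
+
+   const tickets = yield* Effect.all(
+     Array.from({ length: 5 }, () => nextTicket),
+     { concurrency: "unbounded" }
+   );
+
+   // Five distinct tickets, no duplicates, regardless of interleaving
+   yield* Effect.log(`Tickets: ${tickets.join(", ")}`);
+ });
+
+ Effect.runPromise(program);
+ ```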
+
+ ---
+
+
+ Directly using a mutable variable (e.g., `let myState = ...`) in a concurrent system is dangerous. Multiple fibers could try to read and write to it at the same time, leading to race conditions and unpredictable results.
+
+ `Ref` solves this by wrapping the state in a fiber-safe container. It's like a synchronized, in-memory cell. All operations on a `Ref` are atomic effects, guaranteeing that updates are applied correctly without being interrupted or interleaved with other updates. This eliminates race conditions and ensures data integrity.
+
+ ---
+
+ ---
+
+ ### Run Independent Effects in Parallel with Effect.all
+
+ **Rule:** Use Effect.all to execute a collection of independent effects concurrently.
+
+ **Good Example:**
+
+ Imagine fetching a user's profile and their latest posts from two different API endpoints. These are independent operations and can be run in parallel to save time.
+
+ ```typescript
+ import { Effect } from "effect";
+
+ // Simulate fetching a user, takes 1 second
+ const fetchUser = Effect.succeed({ id: 1, name: "Paul" }).pipe(
+   Effect.delay("1 second")
+ );
+
+ // Simulate fetching posts, takes 1.5 seconds
+ const fetchPosts = Effect.succeed([{ title: "Effect is great" }]).pipe(
+   Effect.delay("1.5 seconds")
+ );
+
+ // Run both effects concurrently - must specify the concurrency option!
+ const program = Effect.all([fetchUser, fetchPosts], {
+   concurrency: "unbounded",
+ });
+
+ // The resulting effect will succeed with a tuple: [{id, name}, [{title}]]
+ // Total execution time will be ~1.5 seconds (the duration of the longest task).
+ const programWithLogging = Effect.gen(function* () {
+   const results = yield* program;
+   yield* Effect.log(`Results: ${JSON.stringify(results)}`);
+   return results;
+ });
+
+ Effect.runPromise(programWithLogging);
+ ```
+
+ ---
+
+ **Anti-Pattern:**
+
+ The anti-pattern is running independent tasks sequentially using `Effect.gen`. This is inefficient and unnecessarily slows down your application.
+
+ ```typescript
+ import { Effect } from "effect";
+ import { fetchUser, fetchPosts } from "./somewhere"; // From previous example
+
+ // ❌ WRONG: This is inefficient.
+ const program = Effect.gen(function* () {
+   // fetchUser runs and completes...
+   const user = yield* fetchUser;
+   // ...only then does fetchPosts begin.
+   const posts = yield* fetchPosts;
+   return [user, posts];
+ });
+
+ // Total execution time will be ~2.5 seconds (1s + 1.5s),
+ // which is a full second slower than the parallel version.
+ Effect.runPromise(program).then(console.log);
+ ```
+
+ **Rationale:**
+
+ When you have multiple `Effect`s that do not depend on each other's results, run them concurrently using `Effect.all`. This will execute all effects at the same time and return a new `Effect` that succeeds with a tuple containing all the results.
+
+ ---
+
+
+ Running tasks sequentially when they could be done in parallel is a common source of performance bottlenecks. `Effect.all` is the solution. It's the Effect ecosystem's counterpart to `Promise.all`, with one caveat: effects are lazy, and `Effect.all` runs them sequentially unless you pass a `concurrency` option.
+
+ Instead of waiting for Task A to finish before starting Task B, `Effect.all` starts all tasks simultaneously. The total time to complete is determined by the duration of the _longest_ running effect, not the sum of all durations. If any single effect in the collection fails, the entire `Effect.all` will fail immediately.
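+
+ When you want every result even if some effects fail, this fail-fast behavior can be relaxed; a sketch assuming Effect v3's `mode` option:
+
+ ```typescript
+ import { Effect } from "effect";
+
+ const tasks = [Effect.succeed(1), Effect.fail("boom"), Effect.succeed(3)];
+
+ // mode: "either" wraps each outcome in an Either instead of failing fast,
+ // so the program succeeds with [Right(1), Left("boom"), Right(3)]
+ const collected = Effect.all(tasks, {
+   concurrency: "unbounded",
+   mode: "either",
+ });
+
+ Effect.runPromise(collected).then(console.log);
+ ```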
+
+ ---
+
+ ---
+
+ ### Concurrency Pattern 3: Coordinate Multiple Fibers with Latch
+
+ **Rule:** Use a latch (`Effect.makeLatch`) gated by a countdown `Ref` to coordinate multiple fibers awaiting a common completion signal, enabling fan-out/fan-in and barrier synchronization patterns.
+
+ **Good Example:**
+
+ This example demonstrates a fan-out/fan-in pattern: spawn 5 worker fibers that process tasks in parallel, and coordinate through a latch to know when all are complete.
+
+ ```typescript
+ import { Effect, Fiber, Ref } from "effect";
+
+ interface WorkResult {
+   readonly workerId: number;
+   readonly taskId: number;
+   readonly result: string;
+   readonly duration: number;
+ }
+
+ // Simulate a long-running task
+ const processTask = (
+   workerId: number,
+   taskId: number
+ ): Effect.Effect<WorkResult> =>
+   Effect.gen(function* () {
+     const startTime = Date.now();
+     const duration = 100 + Math.random() * 400; // 100-500ms
+
+     yield* Effect.log(
+       `[Worker ${workerId}] Starting task ${taskId} (duration: ${Math.round(duration)}ms)`
+     );
+
+     yield* Effect.sleep(`${Math.round(duration)} millis`);
+
+     const elapsed = Date.now() - startTime;
+
+     yield* Effect.log(
+       `[Worker ${workerId}] ✓ Completed task ${taskId} in ${elapsed}ms`
+     );
+
+     return {
+       workerId,
+       taskId,
+       result: `Result from worker ${workerId} on task ${taskId}`,
+       duration: elapsed,
+     };
+   });
+
+ // Fan-out/Fan-in with a latch gated by a countdown Ref
+ const fanOutFanIn = Effect.gen(function* () {
+   const numWorkers = 5;
+   const tasksPerWorker = 3;
+
+   // A closed gate that the last worker opens when the countdown hits zero
+   const allWorkersDone = yield* Effect.makeLatch(false);
+   const remaining = yield* Ref.make(numWorkers);
+
+   // Each worker runs this exactly once; the final decrement opens the gate
+   const countDown = Ref.updateAndGet(remaining, (n) => n - 1).pipe(
+     Effect.flatMap((n) => (n === 0 ? allWorkersDone.open : Effect.void))
+   );
+
+   // Track results from all workers
+   const results = yield* Ref.make<WorkResult[]>([]);
+
+   // Worker fiber that processes tasks sequentially
+   const createWorker = (workerId: number) =>
+     Effect.gen(function* () {
+       yield* Effect.log(`[Worker ${workerId}] ▶ Starting`);
+
+       // Process multiple tasks
+       for (let i = 1; i <= tasksPerWorker; i++) {
+         const result = yield* processTask(workerId, i);
+         yield* Ref.update(results, (rs) => [...rs, result]);
+       }
+
+       yield* Effect.log(`[Worker ${workerId}] ✓ All tasks completed`);
+     }).pipe(
+       // Ensure the countdown runs even if a task fails
+       Effect.ensuring(
+         countDown.pipe(
+           Effect.zipRight(Effect.log(`[Worker ${workerId}] Signaled latch`))
+         )
+       )
+     );
+
+   // Spawn all workers as background fibers
+   console.log(`\n[COORDINATOR] Spawning ${numWorkers} workers...\n`);
+
+   const workerFibers = yield* Effect.all(
+     Array.from({ length: numWorkers }, (_, i) =>
+       createWorker(i + 1).pipe(Effect.fork)
+     )
+   );
+
+   // Wait for all workers to complete
+   console.log(`\n[COORDINATOR] Waiting for all workers to finish...\n`);
+
+   yield* allWorkersDone.await;
+
+   console.log(`\n[COORDINATOR] All workers completed!\n`);
+
+   // Join all fibers to ensure cleanup
+   yield* Effect.all(workerFibers.map((fiber) => Fiber.join(fiber)));
+
+   // Aggregate results
+   const allResults = yield* Ref.get(results);
+
+   console.log(`[SUMMARY]`);
+   console.log(`  Total workers: ${numWorkers}`);
+   console.log(`  Tasks per worker: ${tasksPerWorker}`);
+   console.log(`  Total tasks: ${allResults.length}`);
+   console.log(
+     `  Avg task duration: ${Math.round(
+       allResults.reduce((sum, r) => sum + r.duration, 0) / allResults.length
+     )}ms`
+   );
+ });
+
+ Effect.runPromise(fanOutFanIn);
+ ```
+
+ This pattern:
+
+ 1. **Creates a closed latch** plus a countdown `Ref` set to the number of workers
+ 2. **Spawns worker fibers** as background tasks
+ 3. **Each worker processes tasks** independently
+ 4. **Decrements the countdown** when its work completes (inside `Effect.ensuring`); the last worker opens the latch
+ 5. **Coordinator awaits** the latch until all workers have signaled
+ 6. **Aggregates results** from all workers
+
+ ---
+
+ **Rationale:**
+
+ When you need multiple fibers to coordinate on a shared completion condition, use a latch. Effect's `Latch` is an open/close gate: fibers suspend on `await` while the gate is closed and are all released when it opens. Pairing it with a countdown `Ref` initialized to N gives CountDownLatch-style semantics: each fiber decrements once, and the gate opens when the count reaches zero. This enables fan-out/fan-in patterns and barrier synchronization.
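+
+ For reference, a minimal sketch of the gate on its own (assuming `Effect.makeLatch`, available in Effect v3.8+):
+
+ ```typescript
+ import { Effect, Fiber } from "effect";
+
+ const program = Effect.gen(function* () {
+   const gate = yield* Effect.makeLatch(false); // starts closed
+
+   const waiter = (id: number) =>
+     gate.await.pipe(Effect.zipRight(Effect.log(`[Waiter ${id}] released`)));
+
+   // Both fibers suspend on the closed gate
+   const f1 = yield* Effect.fork(waiter(1));
+   const f2 = yield* Effect.fork(waiter(2));
+
+   yield* Effect.sleep("100 millis");
+   yield* Effect.log("[Main] opening gate");
+   yield* gate.open; // releases every waiting fiber at once
+
+   yield* Fiber.join(f1);
+   yield* Fiber.join(f2);
+ });
+
+ Effect.runPromise(program);
+ ```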
+
+ ---
+
+
+ Multi-fiber coordination requires synchronization:
+
+ - **Parallel initialization**: Wait for all services to start before proceeding
+ - **Fan-out/fan-in**: Spawn multiple workers, collect results when all done
+ - **Barrier synchronization**: All fibers wait at a checkpoint before proceeding
+ - **Graceful shutdown**: Wait for all active fibers to complete
+ - **Aggregation patterns**: Process streams in parallel, combine when ready
+
+ Unlike `Deferred` (one producer signals once), the latch-plus-countdown:
+
+ - Supports multiple signalers (each fiber decrements once)
+ - Works with a known count of participants (countdown from N to 0)
+ - Enables barrier patterns (all wait for all)
+ - Releases every waiting fiber together when the gate opens
+
+ ---
+
+ ---
+
+ ### Concurrency Pattern 5: Broadcast Events with PubSub
+
+ **Rule:** Use PubSub to broadcast events to multiple subscribers, enabling event-driven architectures where publishers and subscribers are loosely coupled.
+
+ **Good Example:**
+
+ This example demonstrates a multi-subscriber event broadcast system with independent handlers.
+
+ ```typescript
+ import { Effect, Fiber, PubSub, Queue, Ref } from "effect";
+
+ interface StateChangeEvent {
+   readonly id: string;
+   readonly oldValue: string;
+   readonly newValue: string;
+   readonly timestamp: number;
+ }
+
+ // Create subscribers that react to events
+ const createSubscriber = (
+   name: string,
+   pubsub: PubSub.PubSub<StateChangeEvent>,
+   events: Ref.Ref<StateChangeEvent[]>
+ ): Effect.Effect<void> =>
+   // subscribe needs a Scope; scoping the loop ties the subscription's
+   // lifetime to this fiber
+   Effect.scoped(
+     Effect.gen(function* () {
+       // Get a subscription: a Dequeue that receives every published event
+       const subscription = yield* PubSub.subscribe(pubsub);
+
+       yield* Effect.log(`[${name}] ✓ Subscribed`);
+
+       // Listen for events until interrupted
+       while (true) {
+         const event = yield* Queue.take(subscription);
+
+         yield* Effect.log(
+           `[${name}] Received event: ${event.oldValue} → ${event.newValue}`
+         );
+
+         // Simulate processing
+         yield* Effect.sleep("50 millis");
+
+         // Store event (example action)
+         yield* Ref.update(events, (es) => [...es, event]);
+
+         yield* Effect.log(`[${name}] ✓ Processed event`);
+       }
+     })
+   );
+
+ // Publisher that broadcasts events
+ const publisher = (
+   pubsub: PubSub.PubSub<StateChangeEvent>,
+   eventCount: number
+ ): Effect.Effect<void> =>
+   Effect.gen(function* () {
+     yield* Effect.log(`[PUBLISHER] Starting, publishing ${eventCount} events`);
+
+     for (let i = 1; i <= eventCount; i++) {
+       const event: StateChangeEvent = {
+         id: `event-${i}`,
+         oldValue: `state-${i - 1}`,
+         newValue: `state-${i}`,
+         timestamp: Date.now(),
+       };
+
+       // Publish to every current subscriber
+       yield* PubSub.publish(pubsub, event);
+
+       yield* Effect.log(`[PUBLISHER] Published ${event.id}`);
+
+       // Simulate delay between events
+       yield* Effect.sleep("200 millis");
+     }
+
+     yield* Effect.log(`[PUBLISHER] ✓ All events published`);
+   });
+
+ // Main: coordinate publisher and multiple subscribers
+ const program = Effect.gen(function* () {
+   // Create PubSub with bounded capacity
+   const pubsub = yield* PubSub.bounded<StateChangeEvent>(5);
+
+   // Create storage for each subscriber's events
+   const subscriber1Events = yield* Ref.make<StateChangeEvent[]>([]);
+   const subscriber2Events = yield* Ref.make<StateChangeEvent[]>([]);
+   const subscriber3Events = yield* Ref.make<StateChangeEvent[]>([]);
+
+   console.log(`\n[MAIN] Starting PubSub event broadcast system\n`);
+
+   // Subscribe 3 independent subscribers
+   const sub1Fiber = yield* createSubscriber(
+     "SUBSCRIBER-1",
+     pubsub,
+     subscriber1Events
+   ).pipe(Effect.fork);
+
+   const sub2Fiber = yield* createSubscriber(
+     "SUBSCRIBER-2",
+     pubsub,
+     subscriber2Events
+   ).pipe(Effect.fork);
+
+   const sub3Fiber = yield* createSubscriber(
+     "SUBSCRIBER-3",
+     pubsub,
+     subscriber3Events
+   ).pipe(Effect.fork);
+
+   // Wait for subscriptions to establish
+   yield* Effect.sleep("100 millis");
+
+   // Start publisher
+   const publisherFiber = yield* publisher(pubsub, 5).pipe(Effect.fork);
+
+   // Wait for publisher to finish
+   yield* Fiber.join(publisherFiber);
+
+   // Wait a bit for subscribers to process the last events
+   yield* Effect.sleep("1 second");
+
+   // Shut down the PubSub and stop the (infinite) subscriber loops
+   yield* PubSub.shutdown(pubsub);
+   yield* Fiber.interrupt(sub1Fiber);
+   yield* Fiber.interrupt(sub2Fiber);
+   yield* Fiber.interrupt(sub3Fiber);
+
+   // Print summary
+   const events1 = yield* Ref.get(subscriber1Events);
+   const events2 = yield* Ref.get(subscriber2Events);
+   const events3 = yield* Ref.get(subscriber3Events);
+
+   console.log(`\n[SUMMARY]`);
+   console.log(`  Subscriber 1 received: ${events1.length} events`);
+   console.log(`  Subscriber 2 received: ${events2.length} events`);
+   console.log(`  Subscriber 3 received: ${events3.length} events`);
+ });
+
+ Effect.runPromise(program);
+ ```
+
+ This pattern:
+
+ 1. **Creates PubSub** for event distribution
+ 2. **Multiple subscribers** listen independently
+ 3. **Publisher broadcasts** events to all
+ 4. **Each subscriber** processes at its own pace
+
+ ---
+
+ **Rationale:**
+
+ When multiple fibers need to react to the same events, use `PubSub`:
+
+ - **Publisher** sends events once
+ - **Subscribers** each receive a copy
+ - **Decoupled**: Publisher doesn't know about subscribers
+ - **Fan-out**: One event → multiple independent handlers
+
+ PubSub variants: `bounded` (backpressure), `unbounded`, `sliding`.
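+
+ A sketch of those constructors (assuming Effect v3, which also offers a `dropping` variant):
+
+ ```typescript
+ import { Effect, PubSub } from "effect";
+
+ const program = Effect.gen(function* () {
+   // Publishers suspend when 16 events are buffered (backpressure)
+   const bounded = yield* PubSub.bounded<string>(16);
+
+   // Publishers never suspend; buffer growth is unbounded
+   const unbounded = yield* PubSub.unbounded<string>();
+
+   // Publishing always succeeds; the oldest buffered event is evicted when full
+   const sliding = yield* PubSub.sliding<string>(16);
+
+   yield* PubSub.publish(bounded, "hello");
+ });
+
+ Effect.runPromise(program);
+ ```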
+
+ ---
+
+
+ Event distribution without PubSub creates coupling:
+
+ - **Direct references**: Publisher calls subscribers directly (tight coupling)
+ - **Ordering issues**: Publisher blocks on slowest subscriber
+ - **Scalability**: Adding subscribers slows down publisher
+ - **Testing**: Hard to mock multiple subscribers
+
+ PubSub enables:
+
+ - **Loose coupling**: Publishers emit, subscribers listen independently
+ - **Parallel delivery**: All subscribers notified simultaneously
+ - **Scalability**: Add subscribers without affecting publisher
+ - **Testing**: Mock single PubSub rather than all subscribers
+
+ Real-world example: System state changes
+ - **Direct**: StateManager calls UserNotifier, AuditLogger, MetricsCollector (tight coupling)
+ - **PubSub**: StateManager publishes `StateChanged` event; subscribers listen independently
+
+ ---
+
+ ---
+
+ ### Process a Collection in Parallel with Effect.forEach
+
+ **Rule:** Use Effect.forEach with the `concurrency` option to process a collection in parallel with a fixed limit.
+
+ **Good Example:**
+
+ Imagine you have a list of user IDs and you need to fetch the data for each one. `Effect.forEach` with a concurrency of 5 will process them in controlled parallel batches.
+
+ ```typescript
+ import { Clock, Effect } from "effect";
+
+ // Mock function to simulate fetching a user by ID
+ const fetchUserById = (id: number) =>
+   Effect.gen(function* () {
+     yield* Effect.logInfo(`Fetching user ${id}...`);
+     yield* Effect.sleep("1 second"); // Simulate network delay
+     return { id, name: `User ${id}`, email: `user${id}@example.com` };
+   });
+
+ const userIds = Array.from({ length: 10 }, (_, i) => i + 1);
+
+ // Process the entire array, but only run 5 fetches at a time.
+ const program = Effect.gen(function* () {
+   yield* Effect.logInfo("Starting parallel processing...");
+
+   const startTime = yield* Clock.currentTimeMillis;
+   const users = yield* Effect.forEach(userIds, fetchUserById, {
+     concurrency: 5, // Limit to 5 concurrent operations
+   });
+   const endTime = yield* Clock.currentTimeMillis;
+
+   yield* Effect.logInfo(
+     `Processed ${users.length} users in ${endTime - startTime}ms`
+   );
+   yield* Effect.logInfo(
+     `First few users: ${JSON.stringify(users.slice(0, 3), null, 2)}`
+   );
+
+   return users;
+ });
+
+ // The result will be an array of all user objects.
+ // The total time will be much less than running them sequentially.
+ Effect.runPromise(program);
+ ```
+
+ ---
+
+ **Anti-Pattern:**
+
+ The anti-pattern is using `Effect.all` with unbounded concurrency to process a large or dynamically-sized collection. This can lead to unpredictable and potentially catastrophic resource consumption.
+
+ ```typescript
+ import { Effect } from "effect";
+ import { userIds, fetchUserById } from "./somewhere"; // From previous example
+
+ // ❌ DANGEROUS: This will attempt to start a fiber for every element at once.
+ // If userIds had 10,000 items, this could crash your application or get you
+ // blocked by an API.
+ const program = Effect.all(userIds.map(fetchUserById), {
+   concurrency: "unbounded",
+ });
+ ```
+
+ **Rationale:**
+
+ To process an iterable (like an array) of items concurrently, use `Effect.forEach`. To avoid overwhelming systems, always specify the `{ concurrency: number }` option to limit how many effects run at the same time.
+
+ ---
+
+
+ Running `Effect.all` with unbounded concurrency on a large array of tasks is dangerous. If you have 1,000 items, it will try to start 1,000 concurrent fibers at once, which can exhaust memory, overwhelm your CPU, or hit API rate limits.
+
+ `Effect.forEach` with a concurrency limit solves this problem elegantly. It acts as a concurrent processing pool. It will start processing items up to your specified limit (e.g., 10 at a time). As soon as one task finishes, it will pick up the next available item from the list, ensuring that no more than 10 tasks are ever running simultaneously. This provides massive performance gains over sequential processing while maintaining stability and control.
+
+ ---
+
+ ---
+
+ ### Concurrency Pattern 6: Race and Timeout Competing Effects
+
+ **Rule:** Use race to compete effects and timeout to enforce deadlines, enabling cancellation when operations exceed time limits or complete.
+
+ **Good Example:**
+
+ This example demonstrates racing competing effects and handling timeouts.
+
+ ```typescript
+ import { Effect } from "effect";
+
+ interface DataSource {
+   readonly name: string;
+   readonly latencyMs: number;
+ }
+
+ // Simulate fetching from different sources
+ const fetchFromSource = (source: DataSource): Effect.Effect<string> =>
+   Effect.gen(function* () {
+     yield* Effect.log(
+       `[${source.name}] Starting fetch (latency: ${source.latencyMs}ms)`
+     );
+
+     yield* Effect.sleep(`${source.latencyMs} millis`);
+
+     const result = `Data from ${source.name}`;
+
+     yield* Effect.log(`[${source.name}] ✓ Completed`);
+
+     return result;
+   });
+
+ // Main: demonstrate race patterns
+ const program = Effect.gen(function* () {
+   console.log(`\n[RACE] Competing effects with race and timeout\n`);
+
+   // Example 1: Simple race (fastest wins)
+   console.log(`[1] Racing 3 data sources:\n`);
+
+   const sources: DataSource[] = [
+     { name: "Primary DC", latencyMs: 200 },
+     { name: "Backup DC", latencyMs: 150 },
+     { name: "Cache", latencyMs: 50 },
+   ];
+
+   const raceResult = yield* Effect.race(
+     fetchFromSource(sources[0]),
+     Effect.race(fetchFromSource(sources[1]), fetchFromSource(sources[2]))
+   );
+
+   console.log(`\nWinner: ${raceResult}\n`);
+
+   // Example 2: Timeout - succeed within deadline
+   console.log(`[2] Timeout with fast operation:\n`);
+
+   const fastOp = fetchFromSource({ name: "Fast Op", latencyMs: 100 }).pipe(
+     Effect.timeout("500 millis")
+   );
+
+   const fastResult = yield* fastOp;
+
+   console.log(`✓ Completed within timeout: ${fastResult}\n`);
+
+   // Example 3: Timeout - exceed deadline
+   console.log(`[3] Timeout with slow operation:\n`);
+
+   const slowOp = fetchFromSource({ name: "Slow Op", latencyMs: 2000 }).pipe(
+     Effect.timeout("500 millis"),
+     Effect.either
+   );
+
+   const timeoutResult = yield* slowOp;
+
+   if (timeoutResult._tag === "Left") {
+     console.log(`✗ Operation timed out after 500ms\n`);
+   }
+
+   // Example 4: Timeout with a fallback
+   console.log(`[4] Fallback on timeout:\n`);
+
+   const primary = fetchFromSource({ name: "Primary", latencyMs: 300 });
+
+   const fallback = fetchFromSource({ name: "Fallback", latencyMs: 100 });
+
+   const raceWithFallback = primary.pipe(
+     Effect.timeout("150 millis"),
+     // catchAll must return an Effect, so sequence the log before the fallback
+     Effect.catchAll(() =>
+       Effect.log(`[PRIMARY] Timed out, using fallback`).pipe(
+         Effect.zipRight(fallback)
+       )
+     )
+   );
+
+   const fallbackResult = yield* raceWithFallback;
+
+   console.log(`Result: ${fallbackResult}\n`);
+
+   // Example 5: Race all - the first of many sources to succeed wins
+   console.log(`[5] Race all - multiple sources:\n`);
+
+   const raceAllResult = yield* Effect.raceAll(
+     sources.map((s) =>
+       fetchFromSource(s).pipe(Effect.map((data) => ({ source: s.name, data })))
+     )
+   );
+
+   console.log(`First to complete: ${raceAllResult.source}\n`);
+ });
+
+ Effect.runPromise(program);
+ ```
+
+ ---
+
+ **Rationale:**
+
+ Race and timeout coordinate competing effects:
+
+ - **race**: Multiple effects compete, first to succeed wins
+ - **timeout**: Effect fails if not completed in time
+ - **raceAll**: Race a whole collection of effects; the first to succeed wins
+ - **timeoutFail**: Fail with a specific error on timeout
+
+ Pattern: `Effect.race(effect1, effect2)` or `effect.pipe(Effect.timeout(duration))`
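+
+ A sketch of deadline enforcement with a domain-specific error (assuming Effect v3's `Effect.timeoutFail`; the error class is illustrative):
+
+ ```typescript
+ import { Data, Effect } from "effect";
+
+ class RequestTimedOut extends Data.TaggedError("RequestTimedOut")<{
+   readonly ms: number;
+ }> {}
+
+ const slowLookup = Effect.succeed("payload").pipe(Effect.delay("2 seconds"));
+
+ // Fails with RequestTimedOut instead of the generic TimeoutException
+ const withDeadline = slowLookup.pipe(
+   Effect.timeoutFail({
+     duration: "500 millis",
+     onTimeout: () => new RequestTimedOut({ ms: 500 }),
+   })
+ );
+
+ Effect.runPromise(withDeadline).catch((e) => console.log(String(e)));
+ ```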
+
+ ---
+
+
+ Without race/timeout, competing effects create issues:
+
+ - **Deadlocks**: Waiting for all to complete unnecessarily
+ - **Hanging requests**: No deadline enforcement
+ - **Wasted resources**: Slow operations continue indefinitely
+ - **No fallback**: Can't switch to an alternative on timeout
+
+ Race/timeout enable:
+
+ - **Fastest-wins**: Take the first success
+ - **Deadline enforcement**: Fail after a time limit
+ - **Resource cleanup**: Cancel slower operations
+ - **Fallback patterns**: Use an alternative if the primary times out
+
+ Real-world example: Multi-datacenter request
+ - **Without race**: Wait for the slowest response
+ - **With race**: Get the response from the fastest datacenter
+
+ ---
+
+ ---
+
+ ### Concurrency Pattern 1: Coordinate Async Operations with Deferred
+
+ **Rule:** Use Deferred for one-time async coordination between fibers, enabling multiple consumers to wait for a single producer's result.
+
+ **Good Example:**
+
+ This example demonstrates a service startup pattern where multiple workers wait for initialization to complete before starting processing.
+
+ ```typescript
+ import { Deferred, Effect, Fiber } from "effect";
+
+ interface ServiceConfig {
+   readonly name: string;
+   readonly port: number;
+ }
+
+ interface Service {
+   readonly name: string;
+   readonly isReady: Deferred.Deferred<void>;
+ }
+
+ // Simulate a service that takes time to initialize
+ const createService = (config: ServiceConfig): Effect.Effect<Service> =>
+   Effect.gen(function* () {
+     const isReady = yield* Deferred.make<void>();
+
+     return { name: config.name, isReady };
+   });
+
+ // Initialize the service (runs in background)
+ const initializeService = (service: Service): Effect.Effect<void> =>
+   Effect.gen(function* () {
+     yield* Effect.log(`[${service.name}] Starting initialization...`);
+
+     // Simulate initialization work
+     yield* Effect.sleep("1 second");
+
+     yield* Effect.log(`[${service.name}] Initialization complete`);
+
+     // Signal that the service is ready
+     yield* Deferred.succeed(service.isReady, undefined);
+   });
+
+ // A worker that waits for every service to be ready before starting
+ const createWorker = (
+   id: number,
+   services: Service[]
+ ): Effect.Effect<void> =>
+   Effect.gen(function* () {
+     yield* Effect.log(`[Worker ${id}] Starting, waiting for services...`);
+
+     // Wait for all services to be ready; await suspends until succeed
+     yield* Effect.all(
+       services.map((service) => Deferred.await(service.isReady))
+     );
+
+     yield* Effect.log(`[Worker ${id}] All services ready, starting work`);
+
+     // Simulate worker processing
+     for (let i = 0; i < 3; i++) {
+       yield* Effect.sleep("500 millis");
+       yield* Effect.log(`[Worker ${id}] Processing task ${i + 1}`);
+     }
+
+     yield* Effect.log(`[Worker ${id}] Complete`);
+   });
+
+ // Main program
+ const program = Effect.gen(function* () {
+   // Create services
+   const apiService = yield* createService({ name: "API", port: 3000 });
+   const dbService = yield* createService({ name: "Database", port: 5432 });
+   const cacheService = yield* createService({ name: "Cache", port: 6379 });
+
+   const services = [apiService, dbService, cacheService];
+
+   // Start initializing services in the background
+   const initFibers = yield* Effect.all(
+     services.map((service) => initializeService(service).pipe(Effect.fork))
+   );
+
+   // Start workers that wait for services
+   const workerFibers = yield* Effect.all(
+     [1, 2, 3].map((id) => createWorker(id, services).pipe(Effect.fork))
+   );
+
+   // Wait for all workers to complete
+   yield* Effect.all(workerFibers.map((fiber) => Fiber.join(fiber)));
+
+   // Cancel initialization fibers (they're done anyway)
+   yield* Effect.all(initFibers.map((fiber) => Fiber.interrupt(fiber)));
+
+   yield* Effect.log(`\n[MAIN] All workers completed`);
+ });
+
+ Effect.runPromise(program);
+ ```
+
+ This pattern:
+
+ 1. **Creates Deferred instances** for each service's readiness
+ 2. **Starts initialization** in background fibers
+ 3. **Workers wait** for all services via `Deferred.await`
+ 4. **Service signals completion** via `Deferred.succeed`
+ 5. **Workers resume** when all dependencies are ready
+
+ ---
+
+ **Rationale:**
+
+ When you need multiple fibers to wait for a single async event (e.g., service initialization, data availability, external signal), use `Deferred`. A Deferred is a one-shot promise that exactly one fiber completes, and many fibers can wait for. This avoids polling and provides clean async signaling.
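+
+ A minimal sketch of that one-producer/many-consumers shape:
+
+ ```typescript
+ import { Deferred, Effect, Fiber } from "effect";
+
+ const program = Effect.gen(function* () {
+   const signal = yield* Deferred.make<string>();
+
+   // Each consumer suspends until the Deferred is completed
+   const consumer = (id: number) =>
+     Deferred.await(signal).pipe(
+       Effect.flatMap((value) => Effect.log(`[Consumer ${id}] got: ${value}`))
+     );
+
+   const c1 = yield* Effect.fork(consumer(1));
+   const c2 = yield* Effect.fork(consumer(2));
+
+   yield* Effect.sleep("100 millis");
+   yield* Deferred.succeed(signal, "ready"); // wakes every waiter exactly once
+
+   yield* Fiber.join(c1);
+   yield* Fiber.join(c2);
+ });
+
+ Effect.runPromise(program);
+ ```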
1151
+
1152
+ ---
1153
+
1154
+
1155
+ Many concurrent systems need to coordinate on events:
1156
+
1157
+ - **Service initialization**: Wait for all services to start before accepting requests
1158
+ - **Data availability**: Wait for initial data load before processing
1159
+ - **External events**: Wait for webhook, signal, or message
1160
+ - **Startup gates**: All workers wait for leader to signal start
1161
+
1162
+ Without Deferred:
1163
+
1164
+ - Polling wastes CPU (check repeatedly)
1165
+ - Callbacks become complex (multiple consumers)
1166
+ - No clean semantics for "wait for this one thing"
1167
+ - Error propagation unclear
1168
+
1169
+ With Deferred:
1170
+
1171
+ - Non-blocking wait (fiber suspends)
1172
+ - One fiber produces, many consume
1173
+ - Clear completion or failure
1174
+ - Efficient wakeup when ready
1175
+
1176
+ ---
1177
+
1178
+ ---
1179
+
1180
+ ### Concurrency Pattern 4: Distribute Work with Queue
1181
+
1182
+ **Rule:** Use Queue to distribute work between producers and consumers with built-in backpressure, enabling flexible pipeline coordination.
1183
+
1184
+ **Good Example:**
1185
+
1186
+ This example demonstrates a producer-consumer pipeline with a bounded queue for buffering work items.
1187
+
1188
+ ```typescript
1189
+ import { Effect, Queue, Fiber, Ref } from "effect";
1190
+
1191
+ interface WorkItem {
1192
+ readonly id: number;
1193
+ readonly data: string;
1194
+ readonly timestamp: number;
1195
+ }
1196
+
1197
+ interface WorkResult {
1198
+ readonly itemId: number;
1199
+ readonly processed: string;
1200
+ readonly duration: number;
1201
+ }
1202
+
1203
+ // Producer: generates work items
1204
+ const producer = (
1205
+ queue: Queue.Enqueue<WorkItem>,
1206
+ count: number
1207
+ ): Effect.Effect<void> =>
1208
+ Effect.gen(function* () {
1209
+ yield* Effect.log(`[PRODUCER] Starting, generating ${count} items`);
1210
+
1211
+ for (let i = 1; i <= count; i++) {
1212
+ const item: WorkItem = {
1213
+ id: i,
1214
+ data: `Item ${i}`,
1215
+ timestamp: Date.now(),
1216
+ };
1217
+
1218
+ const start = Date.now();
1219
+
1220
+ // Enqueue - will block if queue is full (backpressure)
1221
+ yield* Queue.offer(queue, item);
1222
+
1223
+ const delay = Date.now() - start;
1224
+
1225
+ if (delay > 0) {
1226
+ yield* Effect.log(
1227
+ `[PRODUCER] Item ${i} enqueued (waited ${delay}ms due to backpressure)`
1228
+ );
1229
+ } else {
1230
+ yield* Effect.log(`[PRODUCER] Item ${i} enqueued`);
1231
+ }
1232
+
1233
+ // Simulate work
1234
+ yield* Effect.sleep("50 millis");
1235
+ }
1236
+
1237
+ yield* Effect.log(`[PRODUCER] ✓ All items enqueued`);
1238
+ });
1239
+
1240
+ // Consumer: processes work items
1241
+ const consumer = (
1242
+ queue: Queue.Dequeue<WorkItem>,
1243
+ consumerId: number,
1244
+ results: Ref.Ref<WorkResult[]>
1245
+ ): Effect.Effect<void> =>
1246
+ Effect.gen(function* () {
1247
+ yield* Effect.log(`[CONSUMER ${consumerId}] Starting`);
1248
+
1249
+ while (true) {
1250
+ // Dequeue - will block if queue is empty
1251
+ const item = yield* Queue.take(queue).pipe(Effect.either);
1252
+
1253
+ if (item._tag === "Left") {
1254
+ yield* Effect.log(`[CONSUMER ${consumerId}] Queue closed, stopping`);
1255
+ return;
1256
+ }
1257
+
1258
+ const workItem = item.right;
1259
+ const startTime = Date.now();
1260
+
1261
+ yield* Effect.log(
1262
+ `[CONSUMER ${consumerId}] Processing ${workItem.data}`
1263
+ );
1264
+
1265
+ // Simulate processing
1266
+ yield* Effect.sleep("150 millis");
1267
+
1268
+ const duration = Date.now() - startTime;
1269
+ const result: WorkResult = {
1270
+ itemId: workItem.id,
1271
+ processed: `${workItem.data} [processed by consumer ${consumerId}]`,
1272
+ duration,
1273
+ };
1274
+
1275
+ yield* Ref.update(results, (rs) => [...rs, result]);
1276
+
1277
+ yield* Effect.log(
1278
+ `[CONSUMER ${consumerId}] ✓ Completed ${workItem.data} in ${duration}ms`
1279
+ );
1280
+ }
1281
+ });
1282
+
1283
+ // Main: coordinate producer and consumers
1284
+ const program = Effect.gen(function* () {
1285
+ // Create bounded queue with capacity 3
1286
+ const queue = yield* Queue.bounded<WorkItem>(3);
1287
+ const results = yield* Ref.make<WorkResult[]>([]);
1288
+
1289
+ console.log(`\n[MAIN] Starting producer-consumer pipeline with queue size 3\n`);
1290
+
1291
+ // Spawn producer
1292
+ const producerFiber = yield* producer(queue, 10).pipe(Effect.fork);
1293
+
1294
+ // Spawn 2 consumers
1295
+ const consumer1 = yield* consumer(queue, 1, results).pipe(Effect.fork);
1296
+ const consumer2 = yield* consumer(queue, 2, results).pipe(Effect.fork);
1297
+
1298
+ // Wait for producer to finish
1299
+ yield* Fiber.join(producerFiber);
1300
+
1301
+ // Give consumers time to finish
1302
+ yield* Effect.sleep("3 seconds");
1303
+
1304
+ // Close queue and wait for consumers
1305
+ yield* Queue.shutdown(queue);
1306
+ yield* Fiber.join(consumer1);
1307
+ yield* Fiber.join(consumer2);
1308
+
1309
+ // Summary
1310
+ const allResults = yield* Ref.get(results);
1311
+ const totalDuration = allResults.reduce((sum, r) => sum + r.duration, 0);
1312
+
1313
+ console.log(`\n[SUMMARY]`);
1314
+ console.log(` Items processed: ${allResults.length}`);
1315
+ console.log(
1316
+ ` Avg processing time: ${Math.round(totalDuration / allResults.length)}ms`
1317
+ );
1318
+ });
1319
+
1320
+ Effect.runPromise(program);
1321
+ ```
1322
+
1323
+ This pattern:
1324
+
1325
+ 1. **Creates bounded queue** with capacity (backpressure point)
1326
+ 2. **Producer enqueues** items (blocks if full)
1327
+ 3. **Consumers dequeue** and process (each at own pace)
1328
+ 4. **Queue coordinates** flow automatically
1329
+
1330
+ ---
1331
+
1332
+ **Rationale:**
1333
+
1334
+ When multiple fibers need to coordinate work asynchronously, use `Queue`:
1335
+
1336
+ - **Producers** add items (enqueue)
1337
+ - **Consumers** remove and process items (dequeue)
1338
+ - **Backpressure** built-in: producers wait if queue is full
1339
+ - **Decoupling**: Producers don't block on consumer speed
1340
+
1341
+  Queue variants: `bounded` (suspends producers when full), `unbounded` (no capacity limit), `dropping` (discards new items when full), and `sliding` (evicts the oldest item); see the sketch below.
1342
+
1343
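+  A minimal sketch of the `dropping` variant, assuming only the `effect` package — once capacity is reached, `Queue.offer` returns `false` and the item is discarded:
+
+  ```typescript
+  import { Effect, Queue } from "effect";
+
+  const program = Effect.gen(function* () {
+    const queue = yield* Queue.dropping<number>(2);
+
+    const a = yield* Queue.offer(queue, 1); // true - accepted
+    const b = yield* Queue.offer(queue, 2); // true - accepted
+    const c = yield* Queue.offer(queue, 3); // false - full, item dropped
+
+    const size = yield* Queue.size(queue);
+    yield* Effect.log(`accepted: ${a}, ${b}, ${c} | size: ${size}`); // size: 2
+  });
+
+  Effect.runPromise(program);
+  ```
+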
+ ---
1344
+
1345
+
1346
+ Direct producer-consumer coordination creates problems:
1347
+
1348
+ - **Blocking**: Producer waits for consumer to finish
1349
+ - **Tight coupling**: Producer depends on consumer speed
1350
+  - **Memory pressure**: A fast producer floods memory with unprocessed work
1351
+ - **No backpressure**: Downstream overload propagates upstream
1352
+
1353
+ Queue solves these:
1354
+
1355
+ - **Asynchronous**: Producer enqueues and continues
1356
+ - **Decoupled**: Producer/consumer independent
1357
+ - **Backpressure**: Producer waits when queue full (natural flow control)
1358
+ - **Throughput**: Consumer processes at own pace
1359
+
1360
+  Real-world example: an API request handler paired with a database writer (sketched below)
1361
+ - **Direct**: Handler waits for DB write (blocking, slow requests)
1362
+ - **Queue**: Handler enqueues write and returns immediately (responsive)
1363
+
1364
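+  A minimal sketch of that write-behind idea, with hypothetical names (`writes`, `handleRequest`) — the handler enqueues and responds while a forked writer drains the queue:
+
+  ```typescript
+  import { Effect, Fiber, Queue } from "effect";
+
+  const program = Effect.gen(function* () {
+    const writes = yield* Queue.bounded<string>(100);
+
+    // Background writer: drains the queue one pending write at a time.
+    const writer = yield* Effect.fork(
+      Queue.take(writes).pipe(
+        Effect.tap((row) => Effect.log(`writing ${row} to the database`)),
+        Effect.forever
+      )
+    );
+
+    // "Handler": enqueue the write and respond immediately.
+    const handleRequest = (payload: string) =>
+      Queue.offer(writes, payload).pipe(Effect.as("202 Accepted"));
+
+    const response = yield* handleRequest("user-42");
+    yield* Effect.log(response);
+
+    yield* Effect.sleep("100 millis"); // let the writer run, then stop it
+    yield* Fiber.interrupt(writer);
+  });
+
+  Effect.runPromise(program);
+  ```
+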
+ ---
1365
+
1366
+ ---
1367
+
1368
+
1369
+ ## 🟠 Advanced Patterns
1370
+
1371
+ ### Add Caching by Wrapping a Layer
1372
+
1373
+ **Rule:** Use a wrapping Layer to add cross-cutting concerns like caching to a service without altering its original implementation.
1374
+
1375
+ **Good Example:**
1376
+
1377
+ We have a `WeatherService` that makes slow API calls. We create a `WeatherService.cached` wrapper layer that adds an in-memory cache using a `Ref` and a `Map`.
1378
+
1379
+ ```typescript
1380
+ import { Effect, Layer, Ref } from "effect";
1381
+
1382
+ // 1. Define the service interface
1383
+ class WeatherService extends Effect.Service<WeatherService>()(
1384
+ "WeatherService",
1385
+ {
1386
+ sync: () => ({
1387
+ getForecast: (city: string) => Effect.succeed(`Sunny in ${city}`),
1388
+ }),
1389
+ }
1390
+ ) {}
1391
+
1392
+ // 2. The "Live" implementation that is slow
1393
+ const WeatherServiceLive = Layer.succeed(
1394
+ WeatherService,
1395
+ WeatherService.of({
1396
+ _tag: "WeatherService",
1397
+ getForecast: (city) =>
1398
+ Effect.succeed(`Sunny in ${city}`).pipe(
1399
+ Effect.delay("2 seconds"),
1400
+ Effect.tap(() => Effect.log(`Fetched live forecast for ${city}`))
1401
+ ),
1402
+ })
1403
+ );
1404
+
1405
+ // 3. The Caching Wrapper Layer
1406
+ const WeatherServiceCached = Layer.effect(
1407
+ WeatherService,
1408
+ Effect.gen(function* () {
1409
+ // It REQUIRES the original WeatherService
1410
+ const underlyingService = yield* WeatherService;
1411
+ const cache = yield* Ref.make(new Map<string, string>());
1412
+
1413
+ return WeatherService.of({
1414
+ _tag: "WeatherService",
1415
+ getForecast: (city) =>
1416
+ Ref.get(cache).pipe(
1417
+ Effect.flatMap((map) =>
1418
+ map.has(city)
1419
+ ? Effect.log(`Cache HIT for ${city}`).pipe(
1420
+ Effect.as(map.get(city)!)
1421
+ )
1422
+ : Effect.log(`Cache MISS for ${city}`).pipe(
1423
+ Effect.flatMap(() => underlyingService.getForecast(city)),
1424
+ Effect.tap((forecast) =>
1425
+ Ref.update(cache, (map) => map.set(city, forecast))
1426
+ )
1427
+ )
1428
+ )
1429
+ ),
1430
+ });
1431
+ })
1432
+ );
1433
+
1434
+ // 4. Compose the final layer. The wrapper is provided with the live implementation.
1435
+ const AppLayer = Layer.provide(WeatherServiceCached, WeatherServiceLive);
1436
+
1437
+ // 5. The application logic
1438
+ const program = Effect.gen(function* () {
1439
+ const weather = yield* WeatherService;
1440
+ yield* weather.getForecast("London"); // First call is slow (MISS)
1441
+ yield* weather.getForecast("London"); // Second call is instant (HIT)
1442
+ });
1443
+
1444
+ Effect.runPromise(Effect.provide(program, AppLayer));
1445
+ ```
1446
+
1447
+ ---
1448
+
1449
+ **Anti-Pattern:**
1450
+
1451
+ Modifying the original service implementation to include caching logic directly. This violates the Single Responsibility Principle by mixing the core logic of fetching weather with the cross-cutting concern of caching.
1452
+
1453
+ ```typescript
1454
+ // ❌ WRONG: The service is now responsible for both its logic and its caching strategy.
1455
+ const WeatherServiceWithInlineCache = Layer.effect(
1456
+ WeatherService,
1457
+ Effect.gen(function* () {
1458
+ const cache = yield* Ref.make(new Map<string, string>());
1459
+ return WeatherService.of({
1460
+ getForecast: (city) => {
1461
+ // ...caching logic mixed directly with fetching logic...
1462
+ return Effect.succeed("...");
1463
+ },
1464
+ });
1465
+ })
1466
+ );
1467
+ ```
1468
+
1469
+ **Rationale:**
1470
+
1471
+ To add cross-cutting concerns like caching to a service, create a "wrapper" `Layer`. This is a layer that takes the original service's `Layer` as input (as a dependency) and returns a new `Layer`. The new layer provides the same service interface but wraps the original methods with additional logic (e.g., checking a cache before calling the original method).
1472
+
1473
+ ---
1474
+
1475
+
1476
+ You often want to add functionality like caching, logging, or metrics to a service without polluting its core business logic. The wrapper layer pattern is a clean way to achieve this.
1477
+
1478
+ By creating a layer that _requires_ the original service, you can get an instance of it from the context, and then provide a _new_ implementation of that same service that calls the original.
1479
+
1480
+ This approach is powerful because:
1481
+
1482
+  - **It's Non-Invasive:** The original service (`WeatherServiceLive`) remains completely unchanged.
1483
+  - **It's Composable:** You can apply multiple wrappers. You could wrap a database layer with a caching layer, then wrap that with a metrics layer (see the sketch below).
1484
+ - **It's Explicit:** The composition is clearly defined at the application's top level where you build your final `AppLayer`.
1485
+
1486
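+  A minimal sketch of that stacking, reusing the layers from the example above plus a hypothetical `WeatherServiceMetrics` wrapper (assumed to be built the same way as `WeatherServiceCached`):
+
+  ```typescript
+  // Each Layer.provide feeds the lower layer into the wrapper above it.
+  const CachedLive = Layer.provide(WeatherServiceCached, WeatherServiceLive);
+  const AppLayer = Layer.provide(WeatherServiceMetrics, CachedLive); // WeatherServiceMetrics is assumed
+  ```
+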
+ ---
1487
+
1488
+ ---
1489
+
1490
+ ### State Management Pattern 1: Synchronized Reference with SynchronizedRef
1491
+
1492
+ **Rule:** Use SynchronizedRef for thread-safe mutable state that must be updated consistently across concurrent operations, with atomic modifications.
1493
+
1494
+ **Good Example:**
1495
+
1496
+  This example demonstrates atomic state updates with `Ref`. The same operations exist on `SynchronizedRef`, which additionally lets the update function itself be effectful (see the sketch after the rationale).
1497
+
1498
+ ```typescript
1499
+  import { Effect, Ref } from "effect";
1500
+
1501
+ interface Counter {
1502
+ readonly value: number;
1503
+ readonly updates: number;
1504
+ }
1505
+
1506
+ interface Account {
1507
+ readonly balance: number;
1508
+ readonly transactions: string[];
1509
+ }
1510
+
1511
+ const program = Effect.gen(function* () {
1512
+ console.log(
1513
+ `\n[SYNCHRONIZED REFERENCES] Concurrent state management\n`
1514
+ );
1515
+
1516
+ // Example 1: Basic counter with atomic updates
1517
+ console.log(`[1] Atomic counter increments:\n`);
1518
+
1519
+ const counter = yield* Ref.make<Counter>({
1520
+ value: 0,
1521
+ updates: 0,
1522
+ });
1523
+
1524
+ // Simulate 5 concurrent increments
1525
+ const incrementTasks = Array.from({ length: 5 }, (_, i) =>
1526
+ Effect.gen(function* () {
1527
+ for (let j = 0; j < 20; j++) {
1528
+ yield* Ref.modify(counter, (current) => [
1529
+ undefined,
1530
+ {
1531
+ value: current.value + 1,
1532
+ updates: current.updates + 1,
1533
+ },
1534
+ ]);
1535
+
1536
+ if (j === 0 || j === 19) {
1537
+ yield* Effect.log(
1538
+ `[FIBER ${i}] Increment ${j === 0 ? "start" : "end"}`
1539
+ );
1540
+ }
1541
+ }
1542
+ })
1543
+ );
1544
+
1545
+ // Run concurrently
1546
+ yield* Effect.all(incrementTasks, { concurrency: "unbounded" });
1547
+
1548
+ const finalCounter = yield* Ref.get(counter);
1549
+
1550
+ yield* Effect.log(
1551
+ `[RESULT] Counter: ${finalCounter.value} (expected 100)`
1552
+ );
1553
+ yield* Effect.log(
1554
+ `[RESULT] Updates: ${finalCounter.updates} (expected 100)\n`
1555
+ );
1556
+
1557
+ // Example 2: Bank account with transaction isolation
1558
+ console.log(`[2] Account with atomic transfers:\n`);
1559
+
1560
+ const account = yield* Ref.make<Account>({
1561
+ balance: 1000,
1562
+ transactions: [],
1563
+ });
1564
+
1565
+ const transfer = (amount: number, description: string) =>
1566
+ Ref.modify(account, (current) => {
1567
+ if (current.balance < amount) {
1568
+ // Insufficient funds, don't modify
1569
+ return [
1570
+ { success: false, reason: "insufficient-funds" },
1571
+ current, // Unchanged
1572
+ ];
1573
+ }
1574
+
1575
+ // Atomic: deduct + record transaction
1576
+ return [
1577
+ { success: true, reason: "transferred" },
1578
+ {
1579
+ balance: current.balance - amount,
1580
+ transactions: [
1581
+ ...current.transactions,
1582
+ `${description}: -$${amount}`,
1583
+ ],
1584
+ },
1585
+ ];
1586
+ });
1587
+
1588
+ // Test transfer
1589
+ const t1 = yield* transfer(100, "Coffee");
1590
+
1591
+ yield* Effect.log(`[TRANSFER 1] ${t1.success ? "✓" : "✗"} ${t1.reason}`);
1592
+
1593
+ const t2 = yield* transfer(2000, "Electronics");
1594
+
1595
+ yield* Effect.log(`[TRANSFER 2] ${t2.success ? "✓" : "✗"} ${t2.reason}`);
1596
+
1597
+ const t3 = yield* transfer(200, "Groceries");
1598
+
1599
+ yield* Effect.log(`[TRANSFER 3] ${t3.success ? "✓" : "✗"} ${t3.reason}\n`);
1600
+
1601
+ // Example 3: Concurrent reads don't block writes
1602
+ console.log(`[3] Concurrent reads and writes:\n`);
1603
+
1604
+ const state = yield* Ref.make({ value: 0, readers: 0 });
1605
+
1606
+ const read = Effect.gen(function* () {
1607
+ const snapshot = yield* Ref.get(state);
1608
+
1609
+ yield* Effect.log(
1610
+ `[READ] Got value: ${snapshot.value}`
1611
+ );
1612
+
1613
+ return snapshot.value;
1614
+ });
1615
+
1616
+ const write = (newValue: number) =>
1617
+ Ref.set(state, { value: newValue, readers: 0 });
1618
+
1619
+ // Concurrent operations
1620
+ const mixed = Effect.all(
1621
+ [
1622
+ read,
1623
+ write(10),
1624
+ read,
1625
+ write(20),
1626
+ read,
1627
+ ],
1628
+ { concurrency: "unbounded" }
1629
+ );
1630
+
1631
+ yield* mixed;
1632
+
1633
+ // Example 4: Compare-and-set pattern (retry on failure)
1634
+ console.log(`\n[4] Compare-and-set (optimistic updates):\n`);
1635
+
1636
+ const versionedState = yield* Ref.make({ version: 0, data: "initial" });
1637
+
1638
+ const updateWithVersion = (newData: string) =>
1639
+ Effect.gen(function* () {
1640
+ let retries = 0;
1641
+
1642
+ while (retries < 3) {
1643
+ const current = yield* Ref.get(versionedState);
1644
+
1645
+ // Try to update (check-and-set)
1646
+ const result = yield* Ref.modify(versionedState, (s) => {
1647
+ if (s.version === current.version) {
1648
+ // No concurrent update, proceed
1649
+ return [
1650
+ { success: true },
1651
+ {
1652
+ version: s.version + 1,
1653
+ data: newData,
1654
+ },
1655
+ ];
1656
+ }
1657
+
1658
+ // Version changed, conflict
1659
+ return [{ success: false }, s];
1660
+ });
1661
+
1662
+ if (result.success) {
1663
+ yield* Effect.log(
1664
+ `[CAS] Updated on attempt ${retries + 1}`
1665
+ );
1666
+
1667
+ return true;
1668
+ }
1669
+
1670
+ retries++;
1671
+
1672
+ yield* Effect.log(
1673
+ `[CAS] Conflict detected, retrying (attempt ${retries + 1})`
1674
+ );
1675
+ }
1676
+
1677
+ return false;
1678
+ });
1679
+
1680
+ const casResult = yield* updateWithVersion("updated-data");
1681
+
1682
+ yield* Effect.log(`[CAS] Success: ${casResult}\n`);
1683
+
1684
+ // Example 5: State with subscriptions (notify on change)
1685
+ console.log(`[5] State changes with notification:\n`);
1686
+
1687
+ interface Notification {
1688
+ oldValue: unknown;
1689
+ newValue: unknown;
1690
+ timestamp: Date;
1691
+ }
1692
+
1693
+ const observedState = yield* Ref.make<{ value: number; lastChange: Date }>({
1694
+ value: 0,
1695
+ lastChange: new Date(),
1696
+ });
1697
+
1698
+  const updateAndNotify = (newValue: number) =>
+    Ref.modify(observedState, (current) => {
+      // The modify callback must stay pure (no yield* here), so the
+      // notification is computed atomically and logged afterwards.
+      const notification: Notification = {
+        oldValue: current.value,
+        newValue,
+        timestamp: new Date(),
+      };
+
+      return [
+        notification,
+        {
+          value: newValue,
+          lastChange: notification.timestamp,
+        },
+      ] as const;
+    }).pipe(
+      Effect.tap((n) =>
+        Effect.log(
+          `[NOTIFY] ${n.oldValue} → ${n.newValue} at ${n.timestamp.toISOString()}`
+        )
+      )
+    );
1718
+
1719
+ // Trigger changes
1720
+ for (const val of [5, 10, 15]) {
1721
+ yield* updateAndNotify(val);
1722
+ }
1723
+
1724
+ // Example 6: Atomic batch updates
1725
+ console.log(`\n[6] Batch atomic updates:\n`);
1726
+
1727
+ interface BatchState {
1728
+ items: string[];
1729
+ locked: boolean;
1730
+ version: number;
1731
+ }
1732
+
1733
+ const batchState = yield* Ref.make<BatchState>({
1734
+ items: [],
1735
+ locked: false,
1736
+ version: 0,
1737
+ });
1738
+
1739
+ const addItems = (newItems: string[]) =>
1740
+ Ref.modify(batchState, (current) => {
1741
+ // All items added atomically
1742
+ return [
1743
+ { added: newItems.length },
1744
+ {
1745
+ items: [...current.items, ...newItems],
1746
+ locked: false,
1747
+ version: current.version + 1,
1748
+ },
1749
+ ];
1750
+ });
1751
+
1752
+ const batch1 = yield* addItems(["item1", "item2", "item3"]);
1753
+
1754
+ yield* Effect.log(
1755
+ `[BATCH 1] Added ${batch1.added} items`
1756
+ );
1757
+
1758
+ const batch2 = yield* addItems(["item4", "item5"]);
1759
+
1760
+ yield* Effect.log(
1761
+ `[BATCH 2] Added ${batch2.added} items`
1762
+ );
1763
+
1764
+ const finalBatch = yield* Ref.get(batchState);
1765
+
1766
+ yield* Effect.log(
1767
+ `[RESULT] Total items: ${finalBatch.items.length}, Version: ${finalBatch.version}`
1768
+ );
1769
+ });
1770
+
1771
+ Effect.runPromise(program);
1772
+ ```
1773
+
1774
+ ---
1775
+
1776
+ **Rationale:**
1777
+
1778
+ Synchronized references manage shared state safely:
1779
+
1780
+ - **Atomic updates**: All-or-nothing modifications
1781
+ - **Consistent reads**: Snapshot consistency
1782
+ - **Lock-free optimism**: Try updates, retry on failure
1783
+ - **Compare-and-set**: Atomic check-and-update
1784
+ - **Transaction safety**: Multiple operations as one
1785
+
1786
+  Pattern: `Ref.make()`, `Ref.modify()`, `Ref.set()`, `Ref.get()`; for updates that must run an effect atomically, use `SynchronizedRef.updateEffect()` (sketched below).
1787
+
1788
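+  A minimal sketch of `SynchronizedRef`, whose distinguishing feature is that the update function may return an Effect — concurrent updates wait for each other instead of interleaving:
+
+  ```typescript
+  import { Effect, SynchronizedRef } from "effect";
+
+  const program = Effect.gen(function* () {
+    const ref = yield* SynchronizedRef.make(0);
+
+    // Each update runs an effect; updates are serialized, so none are lost.
+    const slowIncrement = SynchronizedRef.updateEffect(ref, (n) =>
+      Effect.sleep("10 millis").pipe(Effect.as(n + 1))
+    );
+
+    yield* Effect.all([slowIncrement, slowIncrement, slowIncrement], {
+      concurrency: "unbounded",
+    });
+
+    const final = yield* SynchronizedRef.get(ref);
+    yield* Effect.log(`final: ${final}`); // 3 - no lost updates
+  });
+
+  Effect.runPromise(program);
+  ```
+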
+ ---
1789
+
1790
+
1791
+ Shared mutable state without synchronization causes problems:
1792
+
1793
+ **Problem 1: Data races**
1794
+ - Fiber A reads counter (value: 5)
1795
+ - Fiber B reads counter (value: 5)
1796
+ - Fiber A writes counter + 1 (value: 6)
1797
+ - Fiber B writes counter + 1 (value: 6)
1798
+ - Expected: 7, Got: 6 (lost update)
1799
+
1800
+ **Problem 2: Inconsistent snapshots**
1801
+ - Transaction reads user.balance (100)
1802
+ - User spent money elsewhere
1803
+ - Transaction reads user.balance again (90)
1804
+ - Now inconsistent within same transaction
1805
+
1806
+ **Problem 3: Race conditions**
1807
+ - Check inventory (10 items)
1808
+ - Check passes
1809
+ - Before purchase, inventory goes to 0 (race)
1810
+ - Purchase fails, user frustrated
1811
+
1812
+ **Problem 4: Deadlocks**
1813
+ - Fiber A locks state, tries to acquire another
1814
+ - Fiber B holds that state, tries to acquire first
1815
+ - Both stuck forever
1816
+
1817
+ Solutions:
1818
+
1819
+ **Atomic operations**:
1820
+ - Read and modify as single operation
1821
+ - No intermediate states visible
1822
+ - No race window
1823
+ - Guaranteed consistency
1824
+
1825
+ **Compare-and-set**:
1826
+ - "If value is X, change to Y" (atomic)
1827
+ - Fails if another fiber changed it
1828
+ - Retry automatically
1829
+ - No locks needed
1830
+
1831
+ **Snapshot isolation**:
1832
+ - Read complete snapshot
1833
+ - All operations see consistent view
1834
+ - Modifications build on snapshot
1835
+ - Merge changes safely
1836
+
1837
+ ---
1838
+
1839
+ ---
1840
+
1841
+ ### Manage Resource Lifecycles with Scope
1842
+
1843
+ **Rule:** Use Scope for fine-grained, manual control over resource lifecycles and cleanup guarantees.
1844
+
1845
+ **Good Example:**
1846
+
1847
+ This example shows how to acquire a resource (like a file handle), use it, and have `Scope` guarantee its release.
1848
+
1849
+ ```typescript
1850
+  import { Effect } from "effect";
1851
+
1852
+ // Simulate acquiring and releasing a resource
1853
+ const acquireFile = Effect.log("File opened").pipe(
1854
+ Effect.as({ write: (data: string) => Effect.log(`Wrote: ${data}`) })
1855
+ );
1856
+ const releaseFile = Effect.log("File closed.");
1857
+
1858
+ // Create a "scoped" effect. This effect, when used, will acquire the
1859
+ // resource and register its release action with the current scope.
1860
+ const scopedFile = Effect.acquireRelease(acquireFile, () => releaseFile);
1861
+
1862
+ // The main program that uses the scoped resource
1863
+  // Effect.scoped "uses" the resource: it opens a scope, runs the acquire
+  // effect, and guarantees the release effect runs when the scope closes.
+  // Note that it wraps the *whole* usage - scoping only the acquireRelease
+  // itself would close the file before the writes run.
+  const program = Effect.scoped(
+    Effect.gen(function* () {
+      const file = yield* scopedFile;
+
+      yield* file.write("hello");
+      yield* file.write("world");
+
+      // The scope closes here, so the file is automatically closed.
+    })
+  );
1874
+
1875
+ Effect.runPromise(program);
1876
+ /*
1877
+ Output:
1878
+ File opened
1879
+ Wrote: hello
1880
+ Wrote: world
1881
File closed.
1882
+ */
1883
+ ```
1884
+
1885
+ ---
1886
+
1887
+ **Anti-Pattern:**
1888
+
1889
+ Manual resource management without the guarantees of `Scope`. This is brittle because if an error occurs after the resource is acquired but before it's released, the release logic is never executed.
1890
+
1891
+ ```typescript
1892
+ import { Effect } from "effect";
1893
+ import { acquireFile, releaseFile } from "./somewhere"; // From previous example
1894
+
1895
+ // ❌ WRONG: This will leak the resource if an error happens.
1896
+ const program = Effect.gen(function* () {
1897
+ const file = yield* acquireFile;
1898
+
1899
+ // If this operation fails...
1900
+ yield* Effect.fail("Something went wrong!");
1901
+
1902
+ // ...this line will never be reached, and the file will never be closed.
1903
+ yield* releaseFile;
1904
+ });
1905
+ ```
1906
+
1907
+ **Rationale:**
1908
+
1909
+ A `Scope` is a context that collects finalizers (cleanup effects). When you need fine-grained control over resource lifecycles, you can work with `Scope` directly. The most common pattern is to create a resource within a scope using `Effect.acquireRelease` and then use it via `Effect.scoped`.
1910
+
1911
+ ---
1912
+
1913
+
1914
+ `Scope` is the fundamental building block for all resource management in Effect. While higher-level APIs like `Layer.scoped` and `Stream` are often sufficient, understanding `Scope` is key to advanced use cases.
1915
+
1916
+ A `Scope` guarantees that any finalizers added to it will be executed when the scope is closed, regardless of whether the associated computation succeeds, fails, or is interrupted. This provides a rock-solid guarantee against resource leaks.
1917
+
1918
+ This is especially critical in concurrent applications. When a parent fiber is interrupted, it closes its scope, which in turn automatically interrupts all its child fibers and runs all their finalizers in a structured, predictable order.
1919
+
1920
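+  A minimal sketch of that guarantee, assuming nothing beyond the `effect` package — finalizers run in reverse acquisition order even when the body fails:
+
+  ```typescript
+  import { Effect } from "effect";
+
+  const program = Effect.scoped(
+    Effect.gen(function* () {
+      yield* Effect.addFinalizer(() => Effect.log("release A"));
+      yield* Effect.addFinalizer(() => Effect.log("release B"));
+      yield* Effect.fail(new Error("boom"));
+    })
+  );
+
+  // Logs "release B" then "release A", then the failure propagates.
+  Effect.runPromiseExit(program);
+  ```
+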
+ ---
1921
+
1922
+ ---
1923
+
1924
+ ### Run Background Tasks with Effect.fork
1925
+
1926
+ **Rule:** Use Effect.fork to start a non-blocking background process and manage its lifecycle via its Fiber.
1927
+
1928
+ **Good Example:**
1929
+
1930
+ This program forks a background process that logs a "tick" every second. The main process does its own work for 5 seconds and then explicitly interrupts the background logger before exiting.
1931
+
1932
+ ```typescript
1933
+ import { Effect, Fiber } from "effect";
1934
+
1935
+ // A long-running effect that logs a message every second, forever
1936
+ // Effect.forever creates an infinite loop that repeats the effect
1937
+ // This simulates a background service like a health check or monitoring task
1938
+ const tickingClock = Effect.log("tick").pipe(
1939
+ Effect.delay("1 second"), // Wait 1 second between ticks
1940
+ Effect.forever // Repeat indefinitely - this creates an infinite effect
1941
+ );
1942
+
1943
+ const program = Effect.gen(function* () {
1944
+ yield* Effect.log("Forking the ticking clock into the background.");
1945
+
1946
+ // Start the clock, but don't wait for it.
1947
+ // Effect.fork creates a new fiber that runs concurrently with the main program
1948
+ // The main fiber continues immediately without waiting for the background task
1949
+ // This is essential for non-blocking background operations
1950
+ const clockFiber = yield* Effect.fork(tickingClock);
1951
+
1952
+ // At this point, we have two fibers running:
1953
+ // 1. The main fiber (this program)
1954
+ // 2. The background clock fiber (ticking every second)
1955
+
1956
+ yield* Effect.log("Main process is now doing other work for 5 seconds...");
1957
+
1958
+ // Simulate the main application doing work
1959
+ // While this sleep happens, the background clock continues ticking
1960
+ // This demonstrates true concurrency - both fibers run simultaneously
1961
+ yield* Effect.sleep("5 seconds");
1962
+
1963
+ yield* Effect.log("Main process is done. Interrupting the clock fiber.");
1964
+
1965
+ // Stop the background process.
1966
+ // Fiber.interrupt sends an interruption signal to the fiber
1967
+ // This allows the fiber to perform cleanup operations before terminating
1968
+ // Without this, the background task would continue running indefinitely
1969
+ yield* Fiber.interrupt(clockFiber);
1970
+
1971
+ // Important: Always clean up background fibers to prevent resource leaks
1972
+ // In a real application, you might want to:
1973
+ // 1. Use Fiber.join instead of interrupt to wait for graceful completion
1974
+ // 2. Handle interruption signals within the background task
1975
+ // 3. Implement proper shutdown procedures
1976
+
1977
+ yield* Effect.log("Program finished.");
1978
+
1979
+ // Key concepts demonstrated:
1980
+ // 1. Fork creates concurrent fibers without blocking
1981
+ // 2. Background tasks run independently of the main program
1982
+ // 3. Fiber interruption provides controlled shutdown
1983
+ // 4. Multiple fibers can run simultaneously on the same thread pool
1984
+ });
1985
+
1986
+ // This example shows how to:
1987
+ // - Run background tasks that don't block the main program
1988
+ // - Manage fiber lifecycles (create, run, interrupt)
1989
+ // - Coordinate between multiple concurrent operations
1990
+ // - Properly clean up resources when shutting down
1991
+ Effect.runPromise(program);
1992
+ ```
1993
+
1994
+ ---
1995
+
1996
+ **Anti-Pattern:**
1997
+
1998
+ The anti-pattern is using `Effect.fork` when you immediately need the result of the computation. This is an overly complicated and less readable way of just running the effect directly.
1999
+
2000
+ ```typescript
2001
+ import { Effect, Fiber } from "effect";
2002
+
2003
+ const someEffect = Effect.succeed(42);
2004
+
2005
+ // ❌ WRONG: This is unnecessarily complex.
2006
+ const program = Effect.gen(function* () {
2007
+ const fiber = yield* Effect.fork(someEffect);
2008
+ // You immediately wait for the result, defeating the purpose of forking.
2009
+ const result = yield* Fiber.join(fiber);
2010
+ return result;
2011
+ });
2012
+
2013
+ // ✅ CORRECT: Just run the effect directly if you need its result right away.
2014
+ const simplerProgram = Effect.gen(function* () {
2015
+ const result = yield* someEffect;
2016
+ return result;
2017
+ });
2018
+ ```
2019
+
2020
+ **Rationale:**
2021
+
2022
+ To start an `Effect` in the background without blocking the current execution flow, use `Effect.fork`. This immediately returns a `Fiber`, which is a handle to the running computation that you can use to manage its lifecycle (e.g., interrupt it or wait for its result).
2023
+
2024
+ ---
2025
+
2026
+
2027
+ Unlike `Effect.all` or a direct `yield*`, which wait for the computation to complete, `Effect.fork` is a "fire and forget" operation. It starts the effect on a new, concurrent fiber and immediately returns control to the parent fiber.
2028
+
2029
+ This is essential for managing long-running background tasks like:
2030
+
2031
+ - A web server listener.
2032
+ - A message queue consumer.
2033
+ - A periodic cache cleanup job.
2034
+
2035
+ The returned `Fiber` object is your remote control for the background task. You can use `Fiber.interrupt` to safely stop it (ensuring all its finalizers are run) or `Fiber.join` to wait for it to complete at some later point.
2036
+
2037
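+  One detail worth knowing (a sketch, not part of the example above): `Fiber.join` re-raises the fiber's failure into the joining fiber, while `Fiber.await` hands back an `Exit` you can inspect safely:
+
+  ```typescript
+  import { Effect, Exit, Fiber } from "effect";
+
+  const program = Effect.gen(function* () {
+    const fiber = yield* Effect.fork(Effect.fail("boom" as const));
+
+    // Fiber.join(fiber) here would fail this fiber with "boom".
+    const exit = yield* Fiber.await(fiber);
+    yield* Effect.log(Exit.isFailure(exit) ? "background task failed" : "ok");
+  });
+
+  Effect.runPromise(program);
+  ```
+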
+ ---
2038
+
2039
+ ---
2040
+
2041
+ ### Execute Long-Running Apps with Effect.runFork
2042
+
2043
+ **Rule:** Use Effect.runFork to launch a long-running application as a manageable, detached fiber.
2044
+
2045
+ **Good Example:**
2046
+
2047
+ This example starts a simple "server" that runs forever. We use `runFork` to launch it and then use the returned `Fiber` to shut it down gracefully after 5 seconds.
2048
+
2049
+ ```typescript
2050
+ import { Effect, Fiber } from "effect";
2051
+
2052
+ // A server that listens for requests forever
2053
+ const server = Effect.log("Server received a request.").pipe(
2054
+ Effect.delay("1 second"),
2055
+ Effect.forever
2056
+ );
2057
+
2058
+ Effect.runSync(Effect.log("Starting server..."));
2059
+
2060
+ // Launch the server as a detached, top-level fiber
2061
+ const appFiber = Effect.runFork(server);
2062
+
2063
+ // In a real app, you would listen for OS signals.
2064
+ // Here, we simulate a shutdown signal after 5 seconds.
2065
+ setTimeout(() => {
2066
+ const shutdownProgram = Effect.gen(function* () {
2067
+ yield* Effect.log("Shutdown signal received. Interrupting server fiber...");
2068
+ // This ensures all cleanup logic within the server effect would run.
2069
+ yield* Fiber.interrupt(appFiber);
2070
+ });
2071
+ Effect.runPromise(shutdownProgram);
2072
+ }, 5000);
2073
+ ```
2074
+
2075
+ ---
2076
+
2077
+ **Anti-Pattern:**
2078
+
2079
+ Using `runFork` when you immediately need the result of the effect. If you call `runFork` and then immediately call `Fiber.join` on the result, you have simply implemented a more complex and less direct version of `runPromise`.
2080
+
2081
+ ```typescript
2082
+ import { Effect, Fiber } from "effect";
2083
+
2084
+ const someEffect = Effect.succeed(42);
2085
+
2086
+ // ❌ WRONG: This is just a complicated way to write `Effect.runPromise(someEffect)`
2087
+ const resultPromise = Effect.runFork(someEffect).pipe(
2088
+ Fiber.join,
2089
+ Effect.runPromise
2090
+ );
2091
+ ```
2092
+
2093
+ **Rationale:**
2094
+
2095
+ To launch a long-running application (like a server or daemon) as a non-blocking, top-level process, use `Effect.runFork`. It immediately returns a `Fiber` representing your running application, which you can use to manage its lifecycle.
2096
+
2097
+ ---
2098
+
2099
+
2100
+ Unlike `Effect.runPromise`, which waits for the effect to complete, `Effect.runFork` starts the effect and immediately returns a `Fiber`. This is the ideal way to run an application that is meant to run forever, because it gives you a handle to the process.
2101
+
2102
+ The most critical use case for this is enabling graceful shutdown. You can start your application with `runFork`, and then set up listeners for OS signals (like `SIGINT` for Ctrl+C). When a shutdown signal is received, you call `Fiber.interrupt` on the application fiber, which guarantees that all finalizers (like closing database connections) are run before the process exits.
2103
+
2104
+ ---
2105
+
2106
+ ---
2107
+
2108
+ ### State Management Pattern 2: Observable State with SubscriptionRef
2109
+
2110
+ **Rule:** Combine Ref with PubSub to create observable state where changes trigger notifications, enabling reactive state management.
2111
+
2112
+ **Good Example:**
2113
+
2114
+ This example demonstrates observable state patterns.
2115
+
2116
+ ```typescript
2117
+ import { Effect, Ref, PubSub, Stream } from "effect";
2118
+
2119
+ interface StateChange<T> {
2120
+ readonly previous: T;
2121
+ readonly current: T;
2122
+ readonly timestamp: Date;
2123
+ readonly reason: string;
2124
+ }
2125
+
2126
+ interface Observable<T> {
2127
+ readonly get: () => Effect.Effect<T>;
2128
+ readonly set: (value: T, reason: string) => Effect.Effect<void>;
2129
+ readonly subscribe: () => Stream.Stream<StateChange<T>>;
2130
+ readonly modify: (f: (current: T) => T, reason: string) => Effect.Effect<void>;
2131
+ }
2132
+
2133
+ const program = Effect.gen(function* () {
2134
+ console.log(
2135
+ `\n[OBSERVABLE STATE] Reactive state management\n`
2136
+ );
2137
+
2138
+ // Create observable
2139
+ const createObservable = <T,>(initialValue: T): Effect.Effect<Observable<T>> =>
2140
+ Effect.gen(function* () {
2141
+ const state = yield* Ref.make(initialValue);
2142
+ const changeStream = yield* PubSub.unbounded<StateChange<T>>();
2143
+
2144
+ return {
2145
+ get: () => Ref.get(state),
2146
+
2147
+ set: (value: T, reason: string) =>
2148
+ Effect.gen(function* () {
2149
+ const previous = yield* Ref.get(state);
2150
+
2151
+ if (previous === value) {
2152
+ return; // No change
2153
+ }
2154
+
2155
+ yield* Ref.set(state, value);
2156
+
2157
+ const change: StateChange<T> = {
2158
+ previous,
2159
+ current: value,
2160
+ timestamp: new Date(),
2161
+ reason,
2162
+ };
2163
+
2164
+ yield* PubSub.publish(changeStream, change);
2165
+ }),
2166
+
2167
+ subscribe: () =>
2168
+          Stream.fromPubSub(changeStream),
2169
+
2170
+ modify: (f: (current: T) => T, reason: string) =>
2171
+ Effect.gen(function* () {
2172
+ const previous = yield* Ref.get(state);
2173
+ const updated = f(previous);
2174
+
2175
+ if (previous === updated) {
2176
+ return; // No change
2177
+ }
2178
+
2179
+ yield* Ref.set(state, updated);
2180
+
2181
+ const change: StateChange<T> = {
2182
+ previous,
2183
+ current: updated,
2184
+ timestamp: new Date(),
2185
+ reason,
2186
+ };
2187
+
2188
+ yield* PubSub.publish(changeStream, change);
2189
+ }),
2190
+ };
2191
+ });
2192
+
2193
+ // Example 1: Basic observable counter
2194
+ console.log(`[1] Observable counter:\n`);
2195
+
2196
+ const counter = yield* createObservable(0);
2197
+
2198
+ // Subscribe to changes
2199
+  const printChanges = yield* Effect.fork(counter.subscribe().pipe(
2200
+ Stream.tap((change) =>
2201
+ Effect.log(
2202
+ `[CHANGE] ${change.previous} → ${change.current} (${change.reason})`
2203
+ )
2204
+ ),
2205
+ Stream.take(5), // Limit to 5 changes for demo
2206
+ Stream.runDrain
2207
+  ));
2208
+
2209
+ // Make changes
2210
+ yield* counter.set(1, "increment");
2211
+ yield* counter.set(2, "increment");
2212
+ yield* counter.set(5, "reset");
2213
+
2214
+ // Wait for changes to be processed
2215
+ yield* Effect.sleep("100 millis");
2216
+
2217
+ // Example 2: Derived state (computed values)
2218
+ console.log(`\n[2] Derived state (total from items):\n`);
2219
+
2220
+ interface ShoppingCart {
2221
+ readonly items: Array<{ id: string; price: number }>;
2222
+ readonly discount: number;
2223
+ }
2224
+
2225
+ const cart = yield* createObservable<ShoppingCart>({
2226
+ items: [],
2227
+ discount: 0,
2228
+ });
2229
+
2230
+ const computeTotal = (state: ShoppingCart): number => {
2231
+ const subtotal = state.items.reduce((sum, item) => sum + item.price, 0);
2232
+ return subtotal * (1 - state.discount);
2233
+ };
2234
+
2235
+ // Create derived observable
2236
+ const total = yield* createObservable(computeTotal(yield* cart.get()));
2237
+
2238
+ // Subscribe to cart changes, update total
2239
+  const updateTotalOnCartChange = yield* Effect.fork(cart.subscribe().pipe(
2240
+ Stream.tap((change) =>
2241
+ Effect.gen(function* () {
2242
+ const newTotal = computeTotal(change.current);
2243
+
2244
+ yield* total.set(newTotal, "recalculated-from-cart");
2245
+
2246
+ yield* Effect.log(
2247
+ `[TOTAL] Recalculated: $${newTotal.toFixed(2)}`
2248
+ );
2249
+ })
2250
+ ),
2251
+ Stream.take(10),
2252
+ Stream.runDrain
2253
+  ));
2254
+
2255
+ // Make cart changes
2256
+ yield* cart.modify(
2257
+ (state) => ({
2258
+ ...state,
2259
+ items: [
2260
+ ...state.items,
2261
+ { id: "item1", price: 19.99 },
2262
+ ],
2263
+ }),
2264
+ "add-item"
2265
+ );
2266
+
2267
+ yield* cart.modify(
2268
+ (state) => ({
2269
+ ...state,
2270
+ items: [
2271
+ ...state.items,
2272
+ { id: "item2", price: 29.99 },
2273
+ ],
2274
+ }),
2275
+ "add-item"
2276
+ );
2277
+
2278
+ yield* cart.modify(
2279
+ (state) => ({
2280
+ ...state,
2281
+ discount: 0.1,
2282
+ }),
2283
+ "apply-discount"
2284
+ );
2285
+
2286
+ yield* Effect.sleep("200 millis");
2287
+
2288
+ // Example 3: Effect triggering on state change
2289
+ console.log(`\n[3] Effects triggered by state changes:\n`);
2290
+
2291
+ type AppStatus = "idle" | "loading" | "ready" | "error";
2292
+
2293
+ const appStatus = yield* createObservable<AppStatus>("idle");
2294
+
2295
+ // Define effects for each status
2296
+  const handleStatusChange = yield* Effect.fork(appStatus.subscribe().pipe(
2297
+ Stream.tap((change) =>
2298
+ Effect.gen(function* () {
2299
+ yield* Effect.log(
2300
+ `[STATUS] ${change.previous} → ${change.current}`
2301
+ );
2302
+
2303
+ switch (change.current) {
2304
+ case "loading":
2305
+ yield* Effect.log(`[EFFECT] Starting loading animation`);
2306
+ break;
2307
+
2308
+ case "ready":
2309
+ yield* Effect.log(`[EFFECT] Hiding spinner, showing content`);
2310
+ break;
2311
+
2312
+ case "error":
2313
+ yield* Effect.log(`[EFFECT] Showing error message`);
2314
+ yield* Effect.log(`[TELEMETRY] Logging error event`);
2315
+ break;
2316
+
2317
+ default:
2318
+ yield* Effect.log(`[EFFECT] Resetting UI`);
2319
+ }
2320
+ })
2321
+ ),
2322
+ Stream.take(6),
2323
+ Stream.runDrain
2324
+  ));
2325
+
2326
+ // Trigger status changes
2327
+ yield* appStatus.set("loading", "user-clicked");
2328
+ yield* appStatus.set("ready", "data-loaded");
2329
+ yield* appStatus.set("loading", "user-refreshed");
2330
+ yield* appStatus.set("error", "api-failed");
2331
+
2332
+ yield* Effect.sleep("200 millis");
2333
+
2334
+ // Example 4: Multi-level state aggregation
2335
+ console.log(`\n[4] Aggregated state from multiple sources:\n`);
2336
+
2337
+ interface UserProfile {
2338
+ name: string;
2339
+ email: string;
2340
+ role: string;
2341
+ }
2342
+
2343
+ interface AppState {
2344
+ user: UserProfile | null;
2345
+ notifications: number;
2346
+ theme: "light" | "dark";
2347
+ }
2348
+
2349
+ const appState = yield* createObservable<AppState>({
2350
+ user: null,
2351
+ notifications: 0,
2352
+ theme: "light",
2353
+ });
2354
+
2355
+ // Subscribe to track changes
2356
+  const trackChanges = yield* Effect.fork(appState.subscribe().pipe(
2357
+ Stream.tap((change) => {
2358
+ if (change.current.user && !change.previous.user) {
2359
+ return Effect.log(`[EVENT] User logged in: ${change.current.user.name}`);
2360
+ }
2361
+
2362
+ if (!change.current.user && change.previous.user) {
2363
+ return Effect.log(`[EVENT] User logged out`);
2364
+ }
2365
+
2366
+ if (change.current.notifications !== change.previous.notifications) {
2367
+ return Effect.log(
2368
+ `[NOTIFY] ${change.current.notifications} notifications`
2369
+ );
2370
+ }
2371
+
2372
+ if (change.current.theme !== change.previous.theme) {
2373
+ return Effect.log(`[THEME] Switched to ${change.current.theme}`);
2374
+ }
2375
+
2376
+ return Effect.succeed(undefined);
2377
+ }),
2378
+ Stream.take(10),
2379
+ Stream.runDrain
2380
+  ));
2381
+
2382
+ // Make changes
2383
+ yield* appState.modify(
2384
+ (state) => ({
2385
+ ...state,
2386
+ user: { name: "Alice", email: "alice@example.com", role: "admin" },
2387
+ }),
2388
+ "user-login"
2389
+ );
2390
+
2391
+ yield* appState.modify(
2392
+ (state) => ({
2393
+ ...state,
2394
+ notifications: 5,
2395
+ }),
2396
+ "new-notifications"
2397
+ );
2398
+
2399
+ yield* appState.modify(
2400
+ (state) => ({
2401
+ ...state,
2402
+ theme: "dark",
2403
+ }),
2404
+ "user-preference"
2405
+ );
2406
+
2407
+ yield* Effect.sleep("200 millis");
2408
+
2409
+ // Example 5: State snapshot and history
2410
+ console.log(`\n[5] State history tracking:\n`);
2411
+
2412
+ interface HistoryEntry<T> {
2413
+ value: T;
2414
+ timestamp: Date;
2415
+ reason: string;
2416
+ }
2417
+
2418
+ const history = yield* Ref.make<HistoryEntry<number>[]>([]);
2419
+
2420
+ const trackedCounter = yield* createObservable(0);
2421
+
2422
+  const trackHistory = yield* Effect.fork(trackedCounter.subscribe().pipe(
2423
+ Stream.tap((change) =>
2424
+ Effect.gen(function* () {
2425
+ yield* Ref.modify(history, (h) => [
2426
+ undefined,
2427
+ [
2428
+ ...h,
2429
+ {
2430
+ value: change.current,
2431
+ timestamp: change.timestamp,
2432
+ reason: change.reason,
2433
+ },
2434
+ ],
2435
+ ]);
2436
+
2437
+ yield* Effect.log(
2438
+ `[HISTORY] Recorded: ${change.current} (${change.reason})`
2439
+ );
2440
+ })
2441
+ ),
2442
+ Stream.take(5),
2443
+ Stream.runDrain
2444
+  ));
2445
+
2446
+ // Make changes
2447
+ for (let i = 1; i <= 4; i++) {
2448
+ yield* trackedCounter.set(i, `step-${i}`);
2449
+ }
2450
+
2451
+ yield* Effect.sleep("200 millis");
2452
+
2453
+ // Print history
2454
+ const hist = yield* Ref.get(history);
2455
+
2456
+ yield* Effect.log(`\n[HISTORY] ${hist.length} entries:`);
2457
+
2458
+ for (const entry of hist) {
2459
+ yield* Effect.log(
2460
+ ` - ${entry.value} (${entry.reason})`
2461
+ );
2462
+ }
2463
+ });
2464
+
2465
+ Effect.runPromise(program);
2466
+ ```
2467
+
2468
+ ---
2469
+
2470
+ **Rationale:**
2471
+
2472
+ Observable state enables reactive patterns:
2473
+
2474
+ - **State binding**: UI binds to state, auto-updates on change
2475
+ - **Subscribers**: Multiple handlers notified on change
2476
+ - **Event streams**: Changes become event streams
2477
+ - **Derived state**: Compute values from state changes
2478
+ - **Effect triggering**: Changes trigger side effects
2479
+
2480
+  Pattern: Combine `Ref` + `PubSub` (or use the built-in `SubscriptionRef`, sketched below)
2481
+
2482
+ ---
2483
+
2484
+
2485
+ Passive state causes problems:
2486
+
2487
+ **Problem 1: Stale UI**
2488
+ - State changes in backend
2489
+ - UI doesn't know
2490
+ - User sees old data
2491
+ - Manual refresh required
2492
+
2493
+ **Problem 2: Cascading updates**
2494
+ - User changes form field
2495
+ - Need to update 5 other fields
2496
+ - Manual imperative code
2497
+ - Fragile, easy to miss one
2498
+
2499
+ **Problem 3: Derived state**
2500
+ - Total = sum of items
2501
+ - Manual update on each item change
2502
+ - Duplicate code everywhere
2503
+ - Bug: total not updated when items change
2504
+
2505
+ **Problem 4: Side effects**
2506
+ - User enables feature
2507
+ - Multiple things must happen
2508
+ - Analytics, notifications, API calls
2509
+ - All imperative, hard to maintain
2510
+
2511
+ Solutions:
2512
+
2513
+ **Observable state**:
2514
+ - State change = event
2515
+ - Subscribers notified
2516
+ - UI binds directly
2517
+ - Auto-updates
2518
+
2519
+ **Reactive flows**:
2520
+ - Define how state flows
2521
+ - `newTotal = items.sum()`
2522
+ - Automatic recalculation
2523
+ - No manual updates
2524
+
2525
+ **Side effect chaining**:
2526
+ - When state changes to "complete"
2527
+ - Send notification
2528
+ - Log event
2529
+ - Trigger cleanup
2530
+ - All declaratively
2531
+
2532
+ ---
2533
+
2534
+ ---
2535
+
2536
+ ### Implement Graceful Shutdown for Your Application
2537
+
2538
+ **Rule:** Use Effect.runFork and OS signal listeners to implement graceful shutdown for long-running applications.
2539
+
2540
+ **Good Example:**
2541
+
2542
+  This example creates a server with a "scoped" database connection. For demonstration it simply shuts itself down after two seconds; in a real deployment you would launch it with `runFork` and interrupt the fiber from a `SIGINT` handler (see the sketch after the rationale), which guarantees the database finalizer is called.
2543
+
2544
+ ```typescript
2545
+  import { Effect } from "effect";
2546
+ import * as http from "http";
2547
+
2548
+ // 1. A service with a finalizer for cleanup
2549
+ class Database extends Effect.Service<Database>()("Database", {
2550
+    // `scoped` lets the service register a finalizer for cleanup.
+    scoped: Effect.gen(function* () {
+      yield* Effect.log("Acquiring DB connection");
+      yield* Effect.addFinalizer(() => Effect.log("Closing DB connection"));
+      return {
+        query: () => Effect.succeed("data"),
+      };
+    }),
2556
+ }) {}
2557
+
2558
+ // 2. The main server logic
2559
+ const server = Effect.gen(function* () {
2560
+ const db = yield* Database;
2561
+
2562
+ // Create server with proper error handling
2563
+ const httpServer = yield* Effect.sync(() => {
2564
+ const server = http.createServer((_req, res) => {
2565
+ Effect.runFork(
2566
+ Effect.provide(
2567
+ db.query().pipe(Effect.map((data) => res.end(data))),
2568
+ Database.Default
2569
+ )
2570
+ );
2571
+ });
2572
+ return server;
2573
+ });
2574
+
2575
+ // Add a finalizer to close the server
2576
+ yield* Effect.addFinalizer(() =>
2577
+ Effect.gen(function* () {
2578
+ httpServer.close();
2579
+ yield* Effect.log("Server closed");
2580
+ })
2581
+ );
2582
+
2583
+ // Start server with error handling
2584
+ yield* Effect.async<void, Error>((resume) => {
2585
+ httpServer.once("error", (err: Error) => {
2586
+ resume(Effect.fail(new Error(`Failed to start server: ${err.message}`)));
2587
+ });
2588
+
2589
+ httpServer.listen(3456, () => {
2590
+ resume(Effect.succeed(void 0));
2591
+ });
2592
+ });
2593
+
2594
+ yield* Effect.log("Server started on port 3456. Press Ctrl+C to exit.");
2595
+
2596
+ // For testing purposes, we'll run for a short time instead of forever
2597
+ yield* Effect.sleep("2 seconds");
2598
+ yield* Effect.log("Shutting down gracefully...");
2599
+ });
2600
+
2601
+ // 3. Provide the layer and launch with runFork
2602
+ const app = Effect.provide(server.pipe(Effect.scoped), Database.Default);
2603
+
2604
+ // 4. Run the app and handle shutdown
2605
+ Effect.runPromise(app).catch((error) => {
2606
+ Effect.runSync(Effect.logError("Application error: " + error));
2607
+ process.exit(1);
2608
+ });
2609
+ ```
2610
+
2611
+ ---
2612
+
2613
+ **Anti-Pattern:**
2614
+
2615
+ Letting the Node.js process exit without proper cleanup. If you run a long-running effect with `Effect.runPromise` or don't handle OS signals, pressing Ctrl+C will terminate the process abruptly, and none of your `Effect` finalizers will have a chance to run.
2616
+
2617
+ ```typescript
2618
+ import { Effect } from "effect";
2619
+ import { app } from "./somewhere"; // From previous example
2620
+
2621
+ // ❌ WRONG: This will run the server, but Ctrl+C will kill it instantly.
2622
+ // The database connection finalizer will NOT be called.
2623
+ Effect.runPromise(app);
2624
+ ```
2625
+
2626
+ **Rationale:**
2627
+
2628
+ To enable graceful shutdown for a long-running application:
2629
+
2630
+ 1. Define services with cleanup logic in `scoped` `Layer`s using `Effect.addFinalizer` or `Effect.acquireRelease`.
2631
+ 2. Launch your main application `Effect` using `Effect.runFork` to get a `Fiber` handle to the running process.
2632
+ 3. Set up listeners for process signals like `SIGINT` (Ctrl+C) and `SIGTERM`.
2633
+  4. In the signal handler, call `Fiber.interrupt` on your application's fiber, as sketched below.
2634
+
2635
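+  A minimal sketch of steps 2-4, with a stand-in `app` effect:
+
+  ```typescript
+  import { Effect, Fiber } from "effect";
+
+  const app = Effect.log("working...").pipe(
+    Effect.delay("1 second"),
+    Effect.forever,
+    Effect.ensuring(Effect.log("finalizers ran, exiting cleanly"))
+  );
+
+  // Step 2: launch the app as a detached fiber.
+  const appFiber = Effect.runFork(app);
+
+  // Steps 3-4: interrupt the top-level fiber on shutdown signals.
+  const shutdown = () => Effect.runFork(Fiber.interrupt(appFiber));
+  process.once("SIGINT", shutdown);
+  process.once("SIGTERM", shutdown);
+  ```
+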
+ ---
2636
+
2637
+
2638
+ When a server process is terminated, you need to ensure that it cleans up properly. This includes closing database connections, finishing in-flight requests, and releasing file handles. Failing to do so can lead to resource leaks or data corruption.
2639
+
2640
+ Effect's structured concurrency makes this robust and easy. When a fiber is interrupted, Effect guarantees that it will run all finalizers registered within that fiber's scope, in the reverse order they were acquired.
2641
+
2642
+ By launching your app with `runFork`, you get a `Fiber` that represents the entire application. Triggering `Fiber.interrupt` on this top-level fiber initiates a clean, orderly shutdown sequence for all its resources.
2643
+
2644
+ ---
2645
+
2646
+ ---
2647
+
2648
+ ### Decouple Fibers with Queues and PubSub
2649
+
2650
+ **Rule:** Use Queue for point-to-point work distribution and PubSub for broadcast messaging between fibers.
2651
+
2652
+ **Good Example:**
2653
+
2654
+ A producer fiber adds jobs to a `Queue`, and a worker fiber takes jobs off the queue to process them.
2655
+
2656
+ ```typescript
2657
+ import { Effect, Queue, Fiber } from "effect";
2658
+
2659
+ const program = Effect.gen(function* () {
2660
+ yield* Effect.logInfo("Starting queue demo...");
2661
+
2662
+ // Create a bounded queue that can hold a maximum of 10 items.
2663
+ // This prevents memory issues by applying backpressure when the queue is full.
2664
+ // If a producer tries to add to a full queue, it will suspend until space is available.
2665
+ const queue = yield* Queue.bounded<string>(10);
2666
+ yield* Effect.logInfo("Created bounded queue");
2667
+
2668
+ // Producer Fiber: Add a job to the queue every second.
2669
+ // This fiber runs independently and continuously produces work items.
2670
+ // The producer-consumer pattern decouples work generation from work processing.
2671
+ const producer = yield* Effect.gen(function* () {
2672
+ let i = 0;
2673
+ while (true) {
2674
+ const job = `job-${i++}`;
2675
+ yield* Effect.logInfo(`Producing ${job}...`);
2676
+
2677
+ // Queue.offer adds an item to the queue. If the queue is full,
2678
+ // this operation will suspend the fiber until space becomes available.
2679
+ // This provides natural backpressure control.
2680
+ yield* Queue.offer(queue, job);
2681
+
2682
+ // Sleep for 500ms between job creation. This controls the production rate.
2683
+ // Producer is faster than consumer (500ms vs 1000ms) to demonstrate queue buffering.
2684
+ yield* Effect.sleep("500 millis");
2685
+ }
2686
+ }).pipe(Effect.fork); // Fork creates a new fiber that runs concurrently
2687
+
2688
+ yield* Effect.logInfo("Started producer fiber");
2689
+
2690
+ // Worker Fiber: Take a job from the queue and process it.
2691
+ // This fiber runs independently and processes work items as they become available.
2692
+ // Multiple workers could be created to scale processing capacity.
2693
+ const worker = yield* Effect.gen(function* () {
2694
+ while (true) {
2695
+ // Queue.take removes and returns an item from the queue.
2696
+ // If the queue is empty, this operation will suspend the fiber
2697
+ // until an item becomes available. This prevents busy-waiting.
2698
+ const job = yield* Queue.take(queue);
2699
+ yield* Effect.logInfo(`Processing ${job}...`);
2700
+
2701
+ // Simulate work by sleeping for 1 second.
2702
+ // This makes the worker slower than the producer, causing queue buildup.
2703
+ yield* Effect.sleep("1 second");
2704
+ yield* Effect.logInfo(`Completed ${job}`);
2705
+ }
2706
+ }).pipe(Effect.fork); // Fork creates another independent fiber
2707
+
2708
+ yield* Effect.logInfo("Started worker fiber");
2709
+
2710
+ // Let them run for a while...
2711
+ // The main fiber sleeps while the producer and worker fibers run concurrently.
2712
+ // During this time, you'll see the queue acting as a buffer between
2713
+ // the fast producer and slow worker.
2714
+ yield* Effect.logInfo("Running for 10 seconds...");
2715
+ yield* Effect.sleep("10 seconds");
2716
+ yield* Effect.logInfo("Done!");
2717
+
2718
+ // Interrupt both fibers to clean up resources.
2719
+ // Fiber.interrupt sends an interruption signal to the fiber,
2720
+ // allowing it to perform cleanup operations before terminating.
2721
+ // This is safer than forcefully killing fibers.
2722
+ yield* Fiber.interrupt(producer);
2723
+ yield* Fiber.interrupt(worker);
2724
+
2725
+ // Note: In a real application, you might want to:
2726
+ // 1. Drain the queue before interrupting workers
2727
+ // 2. Use Fiber.join to wait for graceful shutdown
2728
+ // 3. Handle interruption signals in the fiber loops
2729
+ });
2730
+
2731
+ // Run the program
2732
+ // This demonstrates the producer-consumer pattern with Effect fibers:
2733
+ // - Fibers are lightweight threads that can be created in large numbers
2734
+ // - Queues provide safe communication between fibers
2735
+ // - Backpressure prevents resource exhaustion
2736
+ // - Interruption allows for graceful shutdown
2737
+ Effect.runPromise(program);
2738
+ ```
2739
+
2740
+
2741
+ A publisher sends an event, and multiple subscribers react to it independently.
2742
+
2743
+ ```typescript
2744
+  import { Effect, PubSub, Queue } from "effect";
2745
+
2746
+ const program = Effect.gen(function* () {
2747
+ const pubsub = yield* PubSub.bounded<string>(10);
2748
+
2749
+ // Subscriber 1: The "Audit" service
2750
+  const auditSub = yield* PubSub.subscribe(pubsub).pipe(
2751
+ Effect.flatMap((subscription) =>
2752
+ Effect.gen(function* () {
2753
+ while (true) {
2754
+ const event = yield* Queue.take(subscription);
2755
+ yield* Effect.log(`AUDIT: Received event: ${event}`);
2756
+ }
2757
+ })
2758
+ ),
2759
+ Effect.fork
2760
+ );
2761
+
2762
+ // Subscriber 2: The "Notifier" service
2763
+  const notifierSub = yield* PubSub.subscribe(pubsub).pipe(
2764
+ Effect.flatMap((subscription) =>
2765
+ Effect.gen(function* () {
2766
+ while (true) {
2767
+ const event = yield* Queue.take(subscription);
2768
+ yield* Effect.log(`NOTIFIER: Sending notification for: ${event}`);
2769
+ }
2770
+ })
2771
+ ),
2772
+ Effect.fork
2773
+ );
2774
+
2775
+ // Give subscribers time to start
2776
+ yield* Effect.sleep("1 second");
2777
+
2778
+ // Publisher: Publish an event that both subscribers will receive.
2779
+  yield* PubSub.publish(pubsub, "user_logged_in");
+
+  // Give the subscribers a moment to react before the scope closes
+  // (closing the scope interrupts the forked subscriber fibers).
+  yield* Effect.sleep("100 millis");
+});
+
+// PubSub.subscribe is scoped, so run the program inside a scope.
+Effect.runPromise(Effect.scoped(program));
2781
+ ```
2782
+
2783
+ ---
2784
+
2785
+ **Anti-Pattern:**
2786
+
2787
+ Simulating a queue with a simple `Ref<A[]>`. This approach is inefficient due to polling and is not safe from race conditions without manual, complex locking mechanisms. It also lacks critical features like back-pressure.
2788
+
2789
+ ```typescript
2790
+ import { Effect, Ref } from "effect";
2791
+
2792
+ // ❌ WRONG: This is inefficient and prone to race conditions.
2793
+ const program = Effect.gen(function* () {
2794
+ const queueRef = yield* Ref.make<string[]>([]);
2795
+
2796
+ // Producer adds to the array
2797
+ const producer = Ref.update(queueRef, (q) => [...q, "new_item"]);
2798
+
2799
+ // Consumer has to constantly poll the array to see if it's empty.
2800
+ const consumer = Ref.get(queueRef).pipe(
2801
+ Effect.flatMap(
2802
+ (q) =>
2803
+ q.length > 0
2804
+ ? Ref.set(queueRef, q.slice(1)).pipe(Effect.as(q[0]))
2805
+ : Effect.sleep("1 second").pipe(Effect.flatMap(() => consumer)) // Inefficient polling
2806
+ )
2807
+ );
2808
+ });
2809
+ ```
2810
+
2811
+ **Rationale:**
2812
+
2813
+ To enable communication between independent, concurrent fibers, use one of Effect's specialized data structures:
2814
+
2815
+ - **`Queue<A>`**: For distributing work items. Each item put on the queue is taken and processed by only **one** consumer.
2816
+ - **`PubSub<A>`**: For broadcasting events. Each message published is delivered to **every** subscriber.
2817
+
2818
+ ---
2819
+
2820
+
2821
+ Directly calling functions between different logical parts of a concurrent application creates tight coupling, making the system brittle and hard to scale. `Queue` and `PubSub` solve this by acting as asynchronous, fiber-safe message brokers.
2822
+
2823
+ This decouples the **producer** of a message from its **consumer(s)**. The producer doesn't need to know who is listening, or how many listeners there are. This allows you to build resilient, scalable systems where you can add or remove workers/listeners without changing the producer's code.
2824
+
2825
+ Furthermore, bounded `Queue`s and `PubSub`s provide automatic **back-pressure**. If consumers can't keep up, the producer will automatically pause before adding new items, preventing your system from becoming overloaded.
2826
+
2827
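+  A minimal sketch of that back-pressure, assuming only the `effect` package — with capacity 1, a second `offer` suspends until a consumer takes:
+
+  ```typescript
+  import { Effect, Fiber, Queue } from "effect";
+
+  const program = Effect.gen(function* () {
+    const queue = yield* Queue.bounded<number>(1);
+    yield* Queue.offer(queue, 1);
+
+    // The queue is full, so this offer suspends inside its own fiber.
+    const pending = yield* Effect.fork(
+      Queue.offer(queue, 2).pipe(
+        Effect.tap(() => Effect.log("second offer accepted"))
+      )
+    );
+
+    yield* Effect.sleep("100 millis");
+    const taken = yield* Queue.take(queue); // frees a slot...
+    yield* Effect.log(`took ${taken}`);
+    yield* Fiber.join(pending); // ...so the suspended offer now completes
+  });
+
+  Effect.runPromise(program);
+  ```
+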
+ ---
2828
+
2829
+ ---
2830
+
2831
+ ### Poll for Status Until a Task Completes
2832
+
2833
+ **Rule:** Use Effect.race to run a repeating polling task that is automatically interrupted when a main task completes.
2834
+
2835
+ **Good Example:**
2836
+
2837
+ This program simulates a long-running data processing job. While it's running, a separate effect polls for its status every 2 seconds. When the main job finishes after 10 seconds, the polling automatically stops.
2838
+
2839
+ ```typescript
2840
+ import { Effect, Schedule, Duration } from "effect";
2841
+
2842
+ // The main task that takes a long time to complete
2843
+ const longRunningJob = Effect.log("Data processing complete!").pipe(
2844
+ Effect.delay(Duration.seconds(10))
2845
+ );
2846
+
2847
+ // The polling task that checks the status
2848
+ const pollStatus = Effect.log("Polling for job status: In Progress...");
2849
+
2850
+ // A schedule that repeats the polling task every 2 seconds, forever
2851
+ const pollingSchedule = Schedule.fixed(Duration.seconds(2));
2852
+
2853
+ // The complete polling effect that will run indefinitely until interrupted
2854
+ const repeatingPoller = pollStatus.pipe(Effect.repeat(pollingSchedule));
2855
+
2856
+ // Race the main job against the poller.
2857
+ // The longRunningJob will win after 10 seconds, interrupting the poller.
2858
+ const program = Effect.race(longRunningJob, repeatingPoller);
2859
+
2860
+ Effect.runPromise(program);
2861
+ /*
2862
+ Output:
2863
+ Polling for job status: In Progress...
2864
+ Polling for job status: In Progress...
2865
+ Polling for job status: In Progress...
2866
+ Polling for job status: In Progress...
2867
+ Polling for job status: In Progress...
2868
+ Data processing complete!
2869
+ */
2870
+ ```
2871
+
2872
+ ---
2873
+
2874
+ **Anti-Pattern:**
2875
+
2876
+ Manually managing the lifecycle of the polling fiber. This is more verbose, imperative, and error-prone. You have to remember to interrupt the polling fiber in all possible exit paths (success, failure, etc.), which `Effect.race` does for you automatically.
2877
+
2878
+ ```typescript
2879
+ import { Effect, Fiber } from "effect";
2880
+ import { longRunningJob, repeatingPoller } from "./somewhere";
2881
+
2882
+ // ❌ WRONG: Manual fiber management is complex.
2883
+ const program = Effect.gen(function* () {
2884
+ // Manually fork the poller into the background
2885
+ const pollerFiber = yield* Effect.fork(repeatingPoller);
2886
+
2887
+ try {
2888
+ // Run the main job
2889
+ const result = yield* longRunningJob;
2890
+ return result;
2891
+ } finally {
2892
+ // You MUST remember to interrupt the poller when you're done.
2893
+ yield* Fiber.interrupt(pollerFiber);
2894
+ }
2895
+ });
2896
+ ```
2897
+
2898
+ **Rationale:**
2899
+
2900
+ To run a periodic task (a "poller") that should only run for the duration of another main task, combine them using `Effect.race`. The main task will "win" the race upon completion, which automatically interrupts and cleans up the repeating polling effect.
2901
+
2902
+ ---
2903
+
2904
+
2905
+ This pattern elegantly solves the problem of coordinating a long-running job with a status-checking mechanism. Instead of manually managing fibers with `fork` and `interrupt`, you can declare this relationship with `Effect.race`.
2906
+
2907
+ The key is that the polling effect is set up to repeat on a schedule that runs indefinitely (or for a very long time). Because it never completes on its own, it can never "win" the race. The main task is the only one that can complete successfully. When it does, it wins the race, and Effect's structured concurrency guarantees that the losing effect (the poller) is safely interrupted.
2908
+
2909
+ This creates a self-contained, declarative, and leak-free unit of work.
2910
+
2911
+ ---
2912
+
2913
+ ---
2914
+
2915
+ ### Understand Fibers as Lightweight Threads
2916
+
2917
+ **Rule:** Understand that a Fiber is a lightweight, virtual thread managed by the Effect runtime for massive concurrency.
2918
+
2919
+ **Good Example:**
2920
+
2921
+ This program demonstrates the efficiency of fibers by forking 100,000 of them. Each fiber does a small amount of work (sleeping for 1 second). Trying to do this with 100,000 OS threads would instantly crash any system.
2922
+
2923
+ ```typescript
2924
+ import { Effect, Fiber } from "effect";
2925
+
2926
+ const program = Effect.gen(function* () {
2927
+ // Demonstrate the lightweight nature of fibers by creating 100,000 of them
2928
+ // This would be impossible with OS threads due to memory and context switching overhead
2929
+ const fiberCount = 100_000;
2930
+ yield* Effect.log(`Forking ${fiberCount} fibers...`);
2931
+
2932
+ // Create an array of 100,000 simple effects
2933
+ // Each effect sleeps for 1 second and then returns its index
2934
+ // This simulates lightweight concurrent tasks
2935
+ const tasks = Array.from({ length: fiberCount }, (_, i) =>
2936
+ Effect.sleep("1 second").pipe(Effect.as(i))
2937
+ );
2938
+
2939
+ // Fork all of them into background fibers
2940
+ // Effect.fork creates a new fiber for each task without blocking
2941
+ // This demonstrates fiber creation scalability - 100k fibers created almost instantly
2942
+ // Each fiber is much lighter than an OS thread (typically ~1KB vs ~8MB per thread)
2943
+ const fibers = yield* Effect.forEach(tasks, Effect.fork);
2944
+
2945
+ yield* Effect.log(
2946
+ "All fibers have been forked. Now waiting for them to complete..."
2947
+ );
2948
+
2949
+ // Wait for all fibers to finish their work
2950
+ // Fiber.joinAll waits for all fibers to complete and collects their results
2951
+ // This demonstrates fiber coordination - managing thousands of concurrent operations
2952
+ // The runtime cooperatively schedules these fibers on the JavaScript event loop
2953
+ const results = yield* Fiber.joinAll(fibers);
2954
+
2955
+ yield* Effect.log(`All ${results.length} fibers have completed.`);
2956
+
2957
+ // Key insights from this example:
2958
+ // 1. Fibers are extremely lightweight - 100k fibers use minimal memory
2959
+ // 2. Fiber creation is fast - no expensive OS thread allocation
2960
+ // 3. In Node.js, the Effect runtime cooperatively interleaves fibers on a single OS thread
2961
+ // 4. Fibers can be suspended and resumed without blocking OS threads
2962
+ // 5. This enables massive concurrency for I/O-bound operations
2963
+ });
2964
+
2965
+ // This program runs successfully, demonstrating the low overhead of fibers.
2966
+ // Try running this with OS threads - you'd likely hit system limits around 1000-10000 threads
2967
+ // With fibers, 100k+ concurrent operations are easily achievable
2968
+ Effect.runPromise(program);
2969
+ ```
2970
+
2971
+ ---
2972
+
2973
+ **Anti-Pattern:**
2974
+
2975
+ The anti-pattern is thinking that a `Fiber` is the same as an OS thread. This can lead to incorrect assumptions about performance and behavior.
2976
+
2977
+ - **Don't assume parallelism on CPU-bound tasks:** In a standard Node.js environment, all fibers run on a single OS thread. If you run 10 CPU-intensive tasks on 10 fibers, they will not run in parallel on 10 different CPU cores. They will share time on the single main thread. Fibers provide massive concurrency for I/O-bound tasks (like network requests), not CPU-bound parallelism. The sketch after this list makes this concrete.
2978
+ - **Don't worry about blocking:** A `Fiber` that is "sleeping" or waiting for I/O (like `Effect.sleep` or a `fetch` request) does not block the underlying OS thread. The Effect runtime simply puts it aside and uses the thread to run other ready fibers.
2979
+
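+ A minimal, contrived sketch of the first point (the busy-loop is a hypothetical stand-in for CPU-bound work):
+
+ ```typescript
+ import { Effect } from "effect";
+
+ // A contrived CPU-bound task: busy-loop for roughly 50 milliseconds.
+ const busyWork = Effect.sync(() => {
+   const end = Date.now() + 50;
+   while (Date.now() < end) {
+     // burning CPU on the single JavaScript thread
+   }
+ });
+
+ // Even with unbounded concurrency, the two loops cannot run in parallel:
+ // they share one OS thread, so the total time is still roughly 100ms.
+ const notActuallyParallel = Effect.all([busyWork, busyWork], {
+   concurrency: "unbounded",
+ });
+ ```
+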
2980
+ **Rationale:**
2981
+
2982
+ Think of a `Fiber` as a "virtual thread" or a "green thread." It is the fundamental unit of concurrency in Effect. Every `Effect` you run is executed on a `Fiber`. Unlike OS threads, which are heavy and limited, you can create hundreds of thousands or even millions of fibers without issue.
2983
+
2984
+ ---
2985
+
2986
+
2987
+ In traditional multi-threaded programming, each thread is managed by the operating system, consumes significant memory (for its stack), and involves expensive context switching. This limits the number of concurrent threads you can realistically create.
2988
+
2989
+ Effect's fibers are different. They are managed entirely by the Effect runtime, not the OS. They are incredibly lightweight data structures that don't have their own OS thread stack. The Effect runtime uses a cooperative scheduling mechanism to run many fibers on a small pool of OS threads (often just one in Node.js).
2990
+
2991
+ This model, known as M:N threading (M fibers on N OS threads), allows for a massive level of concurrency that is impossible with traditional threads. It's what makes Effect so powerful for building highly concurrent applications like servers, data pipelines, and real-time systems.
2992
+
2993
+ When you use operators like `Effect.fork` or `Effect.all`, you are creating new fibers.
2994
+
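+ For example, a minimal sketch (timings are approximate):
+
+ ```typescript
+ import { Effect } from "effect";
+
+ const tasks = [1, 2, 3].map((n) =>
+   Effect.sleep("100 millis").pipe(Effect.as(n))
+ );
+
+ // Sequential by default: roughly 300ms total.
+ const oneByOne = Effect.all(tasks);
+
+ // With a concurrency option, each task gets its own fiber: roughly 100ms total.
+ const concurrent = Effect.all(tasks, { concurrency: "unbounded" });
+
+ Effect.runPromise(concurrent).then(console.log); // [1, 2, 3]
+ ```
+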
2995
+ ---
2996
+
2997
+ ---
2998
+
2999
+