@bluelibs/runner 6.3.0 → 6.3.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,37 @@
+ ---
+ name: runner
+ description: Main skill for building applications with BlueLibs Runner. Use when Agent needs help modeling resources, tasks, events, hooks, middleware, tags, errors, runtime lifecycle, validation, observability, or testing in apps built with Runner, and when it should navigate Runner documentation from the compact guide into the correct in-depth guide chapter. Can be used for architecting and developing Runner as well.
+ ---
+
+ # Runner
+
+ Start with `./references/COMPACT_GUIDE.md`.
+ It is the fast path for Runner's core mental model, public API shape, and common contracts.
+
+ When the task needs deeper documentation, open the matching chapter from `./references/guide-units/`:
+
+ - `02-resources.md` for resources, app composition, ownership, boundaries, exports, subtree policy, and overrides
+ - `02b-tasks.md` for tasks, schemas, dependencies, result validation, and execution patterns
+ - `02c-events-and-hooks.md` for events, hooks, event payload contracts, and decoupled flow design
+ - `02d-middleware.md` for task/resource middleware and cross-cutting behavior
+ - `02e-tags.md` for tags, discovery, and policy-style metadata
+ - `02f-errors.md` for typed Runner errors and `.throws(...)`
+ - `03-runtime-lifecycle.md` for `run(...)`, startup, shutdown, pause/resume, and run options
+ - `04-features.md` for advanced built-in features such as HTTP shutdown patterns, execution context and signal propagation, cron scheduling, semaphores, and queue/semaphore utilities
+ - `04b-serialization-validation.md` for serialization, validation, DTO boundaries, and trust-boundary parsing
+ - `04c-security.md` for identity-aware execution, partitioning, and security patterns
+ - `05-observability.md` for logs, metrics, traces, and health strategy
+ - `06-meta-and-internals.md` for `meta(...)`, canonical ids and namespacing, and built-in internal services such as `resources.runtime`, `resources.store`, and `resources.taskRunner`
+ - `08-testing.md` for unit, focused integration, and full integration testing
+
+ When the task is about documentation authoring or guide composition itself, also read:
+
+ - `DOCS_STYLE_GUIDE.md`
+ - `INDEX_GUIDE.md`
+ - `INDEX_README.md`
+
+ Use the more specialized skills when the task leaves general Runner app usage:
+
+ - use `runner-remote-lanes-specialist` for Remote Lanes work
+ - use `runner-durable-workflow-specialist` for Durable Workflows work
+ - use `runner-architect` for Runner framework internals, public architecture, or design changes
@@ -0,0 +1,117 @@
+ ---
+ name: runner-durable-workflow-specialist
+ description: Specialized guidance for using Runner Durable Workflows in applications. Use when Codex needs to design replay-safe workflows, choose durable resources or backends, model `step`/`sleep`/`waitForSignal` flows, wire workflow start and signal entry points, handle recovery, scheduling, rollback, or audit inspection, or debug replay and persistence behavior in Runner Durable Workflows.
+ ---
+
+ # Runner Durable Workflow Specialist
+
+ Use this skill for application-level Durable Workflow work.
+ This is about building with durable workflows, not changing the workflow engine internals.
+
+ ## Start Here
+
+ Read in this order:
+
+ - `./references/DURABLE_WORKFLOWS.md` for the main guide and canonical examples.
+ - `./references/DURABLE_WORKFLOWS_AI.md` for the shorter token-friendly field guide.
+ - `../../../readmes/MULTI_PLATFORM.md` when platform assumptions matter, because durable workflows are Node-only.
+
+ Keep the core mental model front and center:
+
+ - a workflow does not resume the instruction pointer
+ - it re-runs from the top and reuses persisted step results
+ - side effects belong inside `durableContext.step(...)`
+
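The replay model above can be sketched in a few lines of plain TypeScript. This is an illustration of the semantics only — `runStep`, the `Journal` type, and the in-memory map are invented here and are not the Runner API, where checkpoints go through `durableContext.step(...)` against a durable backend:

```typescript
// Minimal illustration of replay semantics (NOT the Runner API):
// completed step results persist in a journal keyed by stable step id;
// on replay the body is skipped and the stored result is reused.
type Journal = Map<string, unknown>;

async function runStep<T>(
  journal: Journal,
  stepId: string,
  body: () => Promise<T>,
): Promise<T> {
  if (journal.has(stepId)) {
    return journal.get(stepId) as T; // replay: reuse the persisted result
  }
  const result = await body(); // first run: execute the side effect
  journal.set(stepId, result);
  return result;
}

// The workflow always runs from the top; the journal makes it idempotent.
async function workflow(journal: Journal, sideEffects: string[]) {
  const charge = await runStep(journal, "charge-card", async () => {
    sideEffects.push("charged");
    return { chargeId: "ch_1" };
  });
  const receipt = await runStep(journal, "send-receipt", async () => {
    sideEffects.push("emailed");
    return { sent: true };
  });
  return { charge, receipt };
}
```

Replaying `workflow` against the same journal performs each side effect exactly once, which is also why renaming a step id is a breaking change: the renamed step would re-execute its effect.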
+ ## Design The Workflow From Checkpoints
+
+ Start from the workflow contract before writing code:
+
+ 1. Identify the long-running business flow.
+ 2. Decide which moments must survive restarts, deploys, or worker moves.
+ 3. Turn those moments into durable checkpoints:
+    - `step(...)`
+    - `sleep(...)`
+    - `waitForSignal(...)`
+ 4. Decide how the workflow starts:
+    - `start(...)`
+    - `startAndWait(...)`
+ 5. Decide how outside systems interact later:
+    - signals
+    - schedules
+    - operator/store inspection
+
+ Stable step ids are part of the workflow contract.
+ If a task includes side effects outside durable steps, treat that as a bug until proven otherwise.
+
+ ## Wire Durable Resources Correctly
+
+ Build from the supported Runner pattern:
+
+ - register `resources.durable` once for tags and durable events
+ - fork a concrete backend such as `resources.memoryWorkflow` or `resources.redisWorkflow`
+ - inject the durable resource and call `durable.use()` inside the workflow task
+ - tag workflow tasks with `tags.durableWorkflow`
+
+ Prefer:
+
+ - `memoryWorkflow` for local development and tests
+ - `redisWorkflow` when the workflow must survive real process boundaries and production restarts
+
+ Keep starter tasks, routes, or handlers separate from the durable workflow task itself.
+ Start workflows explicitly through the durable service instead of exposing the workflow body as an ordinary remote task entry point.
+
+ ## Keep Replay Safety Honest
+
+ When implementing or reviewing durable logic:
+
+ - put external side effects inside `durableContext.step(...)`
+ - keep step ids stable and descriptive
+ - use explicit `stepId` options for production-facing sleeps, emits, and signal waits when needed
+ - use `durableContext.switch(...)` for replay-safe branching when the flow shape matters
+ - model compensation deliberately with `up(...)`, `down(...)`, and `rollback()`
+
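Compensation ordering can be illustrated the same way. This sketch assumes the usual saga convention — compensators of completed steps run in reverse order when a later step fails; `CompStep` and `runWithRollback` are invented names for illustration, not Runner's `up(...)`/`down(...)`/`rollback()` API:

```typescript
// Invented saga-style sketch (NOT the Runner API): each step pairs a
// forward action (up) with a compensator (down); on failure, the
// compensators of every completed step run in reverse order.
type CompStep = {
  id: string;
  up: () => Promise<void>;
  down: () => Promise<void>;
};

async function runWithRollback(steps: CompStep[], log: string[]) {
  const completed: CompStep[] = [];
  try {
    for (const step of steps) {
      await step.up();
      log.push(`up:${step.id}`);
      completed.push(step);
    }
  } catch (err) {
    for (const step of completed.reverse()) {
      await step.down();
      log.push(`down:${step.id}`);
    }
    throw err; // surface the original failure after compensating
  }
}
```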
+ Common smell:
+
+ - relying on mutable local state and control flow as if the instruction pointer resumed where it left off
+
+ ## Signals, Scheduling, And Recovery
+
+ Be explicit about how the workflow moves through time:
+
+ - use signals for external approvals or domain events
+ - use sleep for time-based waiting
+ - use schedules for delayed or recurring durable starts
+ - use recovery on startup when incomplete executions must resume
+
+ When a task asks for observability or support tooling, prefer:
+
+ - `store.getExecution(...)`
+ - durable operator helpers when available
+ - audit entries and durable events for timeline visibility
+
+ ## Local Development And Testing
+
+ In tests:
+
+ - build the smallest app that expresses the workflow contract
+ - assert replay-safe behavior, not only happy-path completion
+ - test signal delivery, timeout behavior, recovery, and rollback when relevant
+ - prefer local durable backends first, then broader integration only when the transport or persistence layer is the actual subject
+
+ When debugging, check these first:
+
+ - missing `tags.durableWorkflow`
+ - workflow started through the wrong surface
+ - side effects outside `step(...)`
+ - unstable or changed step ids
+ - missing polling or recovery expectations
+ - confusion between workflow result state and current execution detail in the store
+
+ ## Finish Cleanly
+
+ Before finishing:
+
+ - confirm every side effect sits behind a durable checkpoint
+ - confirm workflow entry and signal paths are explicit
+ - confirm the selected backend matches the runtime environment
+ - run focused tests first, then `npm run qa`
@@ -0,0 +1,114 @@
+ ---
+ name: runner-remote-lanes-specialist
+ description: Specialized guidance for using Runner Remote Lanes in applications. Use when Codex needs to choose between Event Lanes and RPC Lanes, design lane topology with profiles and bindings, configure transport modes (`network`, `transparent`, `local-simulated`), wire HTTP exposure or communicator resources, debug lane auth or serializer-boundary issues, or test distributed routing with Runner Remote Lanes.
+ ---
+
+ # Runner Remote Lanes Specialist
+
+ Use this skill for application-level Remote Lanes work.
+ This is about building with Remote Lanes, not changing Runner internals.
+
+ ## Start Here
+
+ Read in this order:
+
+ - `./references/REMOTE_LANES.md` for the main guide and canonical examples.
+ - `./references/REMOTE_LANES_AI.md` for the compact AI field guide when you need a faster refresher.
+ - `../../../readmes/REMOTE_LANES_HTTP_POLICY.md` only when the task is specifically about HTTP transport policy.
+
+ Treat Remote Lanes as a routing layer that should preserve your domain definitions.
+ Tasks and events stay normal Runner definitions; topology decides where they run.
+
+ ## Choose The Right Lane Model
+
+ Make the first decision explicit:
+
+ - Use Event Lanes for async fire-and-forget event propagation.
+ - Use RPC Lanes for synchronous task or event RPC.
+ - Use both only when the architecture genuinely needs request/response plus downstream propagation.
+
+ If the user is undecided, recommend:
+
+ - Event Lanes for decoupled propagation and queue semantics.
+ - RPC Lanes for request/response behavior with remote results.
+
+ ## Build The Topology Deliberately
+
+ Design Remote Lanes from three moving parts:
+
+ - `lane`: the named routing boundary
+ - `profile`: which runtime role is active in this process
+ - `binding`: which queue or communicator backs that lane
+
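As a concept-only sketch (invented types and names, not the Remote Lanes API), the three parts relate like this, and a small check over them catches one of the debugging heuristics listed later: a served or consumed lane with no queue or communicator binding:

```typescript
// Invented illustration of lane/profile/binding (NOT the Runner API).
type Profile = { serve: string[]; consume: string[] };
type Topology = {
  profiles: Record<string, Profile>;
  bindings: Record<string, string>; // lane id -> queue/communicator id
};

// Every lane the active profile serves or consumes needs a binding;
// a missing entry here is the "missing queue or communicator binding" smell.
function missingBindings(t: Topology, profileName: string): string[] {
  const profile = t.profiles[profileName];
  if (!profile) throw new Error(`unknown profile: ${profileName}`);
  const touched = [...profile.serve, ...profile.consume];
  return touched.filter((lane) => !(lane in t.bindings));
}
```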
+ When implementing or reviewing a lane setup:
+
+ 1. Identify which tasks or events should be lane-assigned.
+ 2. Choose Event Lanes or RPC Lanes.
+ 3. Define lanes first.
+ 4. Define topology with explicit `profiles` and `bindings`.
+ 5. Register the correct Node resource:
+    - `eventLanesResource`
+    - `rpcLanesResource`
+ 6. Choose the right mode:
+    - `network` for real transport
+    - `transparent` for fastest local smoke tests
+    - `local-simulated` for serializer-boundary and auth-path simulation
+
+ Prefer explicit lane assignment via lane builders or Runner tags.
+
+ ## Keep Boundaries Honest
+
+ Remote Lanes are Node-only on the server side.
+
+ - Import server resources from `@bluelibs/runner/node`.
+ - Keep domain logic outside lane transport config.
+ - Use topology to decide where work runs, not whether the business action should exist.
+ - Keep hook/task business rules in application code, not in routing assumptions.
+ - Remember that auth for HTTP exposure and auth for lane execution are separate concerns.
+
+ For RPC Lanes:
+
+ - Distinguish `exposure.http.auth` from lane `binding.auth`.
+ - Ensure every served or assigned lane has a communicator binding.
+
+ For Event Lanes:
+
+ - Model queue ownership and retry policy intentionally.
+ - Keep in mind that dead-letter behavior belongs to queue or broker policy.
+
+ ## Local Development And Testing
+
+ Use the lightest mode that proves the contract:
+
+ - `transparent` when you only need call-site shape and local behavior.
+ - `local-simulated` when you need serialization, async-context filtering, or auth-path simulation.
+ - `network` when you need real integration with queues or remote communicators.
+
+ In tests:
+
+ - Start with the smallest app/resource graph that expresses the lane contract.
+ - Assert through emitted events, task results, queue effects, or exposure behavior.
+ - Add focused tests for auth readiness, missing bindings, and mode-specific behavior when those are part of the task.
+
+ ## Debugging Heuristics
+
+ When Remote Lanes misbehave, check these first:
+
+ - wrong lane type chosen for the use case
+ - lane not assigned at all
+ - profile does not `consume` or `serve` the target lane
+ - missing queue or communicator binding
+ - missing signer or verifier material
+ - `transparent` or `local-simulated` mode hiding a network-only expectation
+ - serializer boundary mismatch or async-context policy mismatch
+
+ If the problem mentions HTTP transport details, multipart uploads, or exposure policy, read `../../../readmes/REMOTE_LANES_HTTP_POLICY.md` before changing code.
+
+ ## Finish Cleanly
+
+ Before finishing:
+
+ - confirm the lane choice still matches the latency and coupling model
+ - confirm topology is explicit and profile-driven
+ - confirm Node-only assumptions stay in Node surfaces
+ - run focused tests first, then `npm run qa`
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@bluelibs/runner",
-   "version": "6.3.0",
+   "version": "6.3.1",
    "description": "BlueLibs Runner",
    "sideEffects": false,
    "main": "dist/universal/index.cjs",
@@ -178,16 +178,6 @@
    "typescript": {
      "definition": "dist/types/index.d.ts"
    },
-   "files": [
-     "dist",
-     "node.js",
-     "node.d.ts",
-     "index.d.ts",
-     "node",
-     "README.md",
-     "readmes/COMPACT_GUIDE.md",
-     "LICENSE.md"
-   ],
    "license": "MIT",
    "dependencies": {
      "cron-parser": "^5.4.0",
@@ -0,0 +1,131 @@
+ # Benchmark System
+
+ ← [Back to main README](../README.md)
+
+ This project includes a comprehensive benchmark system to track performance regressions over time.
+
+ ## Overview
+
+ The benchmark system has been designed to be **statistically reliable** and **CI-friendly**, addressing the common issues with micro-benchmarks:
+
+ - Multiple runs with statistical analysis (median, percentiles)
+ - Proper warmup phases to stabilize JIT compilation
+ - Environment-aware thresholds (higher tolerance in CI)
+ - Severity classification (major vs minor regressions)
+ - Trend monitoring with warnings
+
+ ## Running Benchmarks
+
+ ### Full Benchmark Suite
+
+ ```bash
+ # Run all benchmarks (takes ~2-3 minutes)
+ npx jest --config=config/jest/jest.bench.config.js
+
+ # Run with output to file
+ BENCHMARK_OUTPUT=results.json npx jest --config=config/jest/jest.bench.config.js
+ ```
+
+ ### Single Benchmark
+
+ ```bash
+ npx jest --config=config/jest/jest.bench.config.js --testNamePattern="basic task execution"
+ ```
+
+ ## Benchmark Configuration
+
+ Configuration is stored in `config/benchmarks/benchmarks.config.json`:
+
+ ```json
+ {
+   "threshold": 0.3, // 30% tolerance for local runs
+   "ciThreshold": 0.4, // 40% tolerance for CI runs
+   "metricThresholds": {
+     // Per-metric overrides
+     "cacheMiddleware.speedupFactor": 0.2
+   }
+ }
+ ```
+
+ ## Comparing Results
+
+ ```bash
+ # Compare current results against baseline
+ node scripts/compare-benchmarks.mjs config/benchmarks/baseline.json config/benchmarks/benchmark-results.json config/benchmarks/benchmarks.config.json
+ ```
+
+ The comparison script provides:
+
+ - **Environment detection** (CI vs Local)
+ - **Severity classification** (Major vs Minor regressions)
+ - **Trend warnings** for concerning changes within thresholds
+ - **Statistical context** showing actual vs expected values
+
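The threshold semantics can be sketched as follows. This is an illustration of the classification idea, not the actual `scripts/compare-benchmarks.mjs` logic; the 0.3/0.4 tolerances come from the config above, and the 0.6 major cutoff from the CI Integration section:

```typescript
// Sketch of regression classification (not the real comparison script).
// For lower-is-better metrics, a result regresses when it exceeds
// baseline * (1 + threshold); past the major cutoff it fails the build.
type Verdict = "ok" | "minor-regression" | "major-regression";

function classify(
  baselineMs: number,
  currentMs: number,
  threshold = 0.3, // 30% local tolerance (0.4 in CI)
  majorThreshold = 0.6, // major regressions fail the build
): Verdict {
  const change = (currentMs - baselineMs) / baselineMs;
  if (change > majorThreshold) return "major-regression";
  if (change > threshold) return "minor-regression";
  return "ok";
}
```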
+ ## Updating Baselines
+
+ When performance characteristics legitimately change (new features, architectural changes):
+
+ ```bash
+ # Update baseline with current environment
+ ./scripts/update-baseline.sh
+ ```
+
+ **Important:** Only update baselines when:
+
+ - You've made intentional performance changes
+ - The current environment is representative
+ - Changes have been reviewed and approved
+
+ ## Statistical Approach
+
+ Each benchmark runs multiple times (3-5 runs) and reports:
+
+ - **Median** - Primary comparison metric (robust against outliers)
+ - **25th/75th percentiles** - Spread indication
+ - **Min/Max** - Full range
+ - **All values** - Complete transparency
+
+ This approach provides much more reliable results than single-run measurements.
+
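A minimal helper showing how such statistics can be computed from a benchmark's raw runs (illustrative; the suite's own aggregation may differ, for example in its percentile method):

```typescript
// Median and quartiles over a benchmark's runs (3-5 samples).
// Uses nearest-rank percentiles, which is adequate at these sample sizes.
function summarize(runsMs: number[]) {
  const sorted = [...runsMs].sort((a, b) => a - b);
  const n = sorted.length;
  const at = (p: number) =>
    sorted[Math.min(n - 1, Math.ceil((p / 100) * n) - 1)];
  const median =
    n % 2 === 1 ? sorted[(n - 1) / 2] : (sorted[n / 2 - 1] + sorted[n / 2]) / 2;
  return {
    median,
    p25: at(25),
    p75: at(75),
    min: sorted[0],
    max: sorted[n - 1],
    all: sorted, // complete transparency: every raw value
  };
}
```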
+ ## CI Integration
+
+ The system automatically:
+
+ - Detects CI environments and uses relaxed thresholds
+ - Only fails builds on **major regressions** (>60% by default)
+ - Shows **minor regressions** as warnings
+ - Provides context about environment differences
+
+ ## Troubleshooting
+
+ ### "Screaming CI" (False Positives)
+
+ If CI frequently fails with minor performance differences:
+
+ 1. Increase `ciThreshold` in config (try 0.5-0.6)
+ 2. Check if the baseline was generated in a similar environment
+ 3. Consider updating the baseline if the environment has changed
+
+ ### Inconsistent Results
+
+ If results vary wildly between runs:
+
+ 1. Check for background processes during benchmarks
+ 2. Ensure sufficient warmup iterations
+ 3. Consider running fewer concurrent jobs in CI
+
+ ### Major Regressions
+
+ If you see legitimate major regressions:
+
+ 1. Identify the change that caused it
+ 2. Determine if it's intentional (new feature trade-off)
+ 3. Optimize the regression or update the baseline if acceptable
+
+ ## Best Practices
+
+ 1. **Run benchmarks in consistent environments**
+ 2. **Update baselines sparingly** - only when necessary
+ 3. **Review benchmark changes** like any other code
+ 4. **Monitor trends** - small consistent changes may indicate gradual regression
+ 5. **Don't over-optimize** - focus on real-world performance impact
@@ -0,0 +1,233 @@
+ # Framework Comparison
+
+ > Detailed side-by-side comparison of Runner with NestJS, Effect, and DI-only containers. For the quick matrix, see the [Getting Started](#how-does-it-compare) section.
+
+ ---
+
+ ## Side-by-Side: The Same Feature in Both Frameworks
+
+ Let's compare implementing the same user service in Runner and NestJS:
+
+ <table>
+ <tr>
+ <td width="50%" valign="top">
+
+ **NestJS Approach** (~45 lines)
+
+ ```typescript
+ // user.dto.ts
+ import { IsString, IsEmail } from "class-validator";
+
+ export class CreateUserDto {
+   @IsString()
+   name: string;
+
+   @IsEmail()
+   email: string;
+ }
+
+ // user.service.ts
+ import { Injectable } from "@nestjs/common";
+ import { InjectRepository } from "@nestjs/typeorm";
+ import { Repository } from "typeorm";
+
+ @Injectable()
+ export class UserService {
+   constructor(
+     @InjectRepository(User)
+     private userRepo: Repository<User>,
+     private readonly mailer: MailerService,
+     private readonly logger: LoggerService,
+   ) {}
+
+   async createUser(dto: CreateUserDto) {
+     const user = await this.userRepo.save(dto);
+     await this.mailer.sendWelcome(user.email);
+     this.logger.log(`Created user ${user.id}`);
+     return user;
+   }
+ }
+
+ // user.module.ts
+ @Module({
+   imports: [TypeOrmModule.forFeature([User])],
+   providers: [UserService, MailerService],
+   controllers: [UserController],
+ })
+ export class UserModule {}
+ ```
+
+ </td>
+ <td width="50%" valign="top">
+
+ **Runner Approach** (~25 lines)
+
+ ```typescript
+ // users.ts
+ import { r, resources } from "@bluelibs/runner";
+ import { z } from "zod";
+
+ const createUser = r
+   .task("users.create")
+   .dependencies({
+     db,
+     mailer,
+     logger: resources.logger,
+   })
+   .inputSchema(
+     z.object({
+       name: z.string(),
+       email: z.string().email(),
+     }),
+   )
+   .run(async (input, { db, mailer, logger }) => {
+     const user = await db.users.insert(input);
+     await mailer.sendWelcome(user.email);
+     await logger.info(`Created user ${user.id}`);
+     return user;
+   })
+   .build();
+
+ // Register in app
+ const app = r.resource("app").register([db, mailer, createUser]).build();
+ ```
+
+ </td>
+ </tr>
+ <tr>
+ <td>
+
+ **Unit Testing in NestJS:**
+
+ ```typescript
+ describe("UserService", () => {
+   it("creates user", async () => {
+     // Direct instantiation - no module needed
+     const service = new UserService(mockRepo, mockMailer, mockLogger);
+     const result = await service.createUser({
+       name: "Ada",
+       email: "ada@test.com",
+     });
+     expect(result.id).toBeDefined();
+   });
+ });
+ ```
+
+ **Integration Testing in NestJS:**
+
+ ```typescript
+ describe("UserService (integration)", () => {
+   let service: UserService;
+
+   beforeEach(async () => {
+     const module = await Test.createTestingModule({
+       providers: [
+         UserService,
+         { provide: getRepositoryToken(User), useFactory: mockRepository },
+         { provide: MailerService, useValue: mockMailer },
+         { provide: LoggerService, useValue: mockLogger },
+       ],
+     }).compile();
+
+     service = module.get(UserService);
+   });
+
+   it("creates user through DI", async () => {
+     const result = await service.createUser({
+       name: "Ada",
+       email: "ada@test.com",
+     });
+     expect(result.id).toBeDefined();
+   });
+ });
+ ```
+
+ </td>
+ <td valign="top">
+
+ **Unit Testing in Runner:**
+
+ ```typescript
+ describe("createUser", () => {
+   it("creates user", async () => {
+     // Direct call - bypasses middleware
+     const result = await createUser.run(
+       { name: "Ada", email: "ada@test.com" },
+       {
+         db: mockDb,
+         mailer: mockMailer,
+         logger: mockLogger,
+       },
+     );
+     expect(result.id).toBeDefined();
+   });
+ });
+ ```
+
+ **Integration Testing in Runner:**
+
+ ```typescript
+ describe("createUser (integration)", () => {
+   it("creates user through full pipeline", async () => {
+     // r.override(base, fn) builds replacement definitions
+     // .overrides([...]) applies them in this test container
+     const testApp = r
+       .resource("test")
+       .register([app])
+       .overrides([
+         r.override(db, async () => mockDb),
+         r.override(mailer, async () => mockMailer),
+       ])
+       .build();
+
+     const { runTask, dispose } = await run(testApp);
+     const result = await runTask(createUser, {
+       name: "Ada",
+       email: "ada@test.com",
+     });
+     expect(result.id).toBeDefined();
+     await dispose();
+   });
+ });
+ ```
+
+ </td>
+ </tr>
+ </table>
+
+ ---
+
+ ## Detailed Capability Comparison
+
+ | Capability | NestJS | Runner |
+ | --------------- | -------------------------------------------- | ---------------------------------------------------------------------- |
+ | **Reliability** | Add external libs (e.g., `nestjs-retry`) | Built-in: retry, circuit breaker, rate limit, cache, timeout, fallback |
+ | **Type Safety** | Manual typing for DI tokens | Full inference from `.dependencies()` and `.with()` |
+ | **Test Setup** | `Test.createTestingModule()` boilerplate | `task.run(input, mocks)` -- one line |
+ | **Scope** | Web framework (HTTP-centric) | Application toolkit (any TypeScript app) |
+ | **Middleware** | Guards, interceptors, pipes (HTTP lifecycle) | Composable, type-safe, with journal introspection |
+ | **Concurrency** | Bring your own | Built-in Semaphore and Queue primitives |
+ | **Bundle Size** | Large (full framework) | Tree-shakable (import what you use) |
+
+ > **TL;DR:** NestJS gives you a structured web framework with conventions. Runner gives you a composable toolkit with **production-ready reliability built in** -- you bring the structure that fits your app.
+
+ ---
+
+ ## Runner vs Effect (TypeScript)
+
+ Both Runner and Effect are functional-first and offer full type inference. They solve different problems:
+
+ | Aspect | Runner | Effect |
+ | --------------------- | --------------------------------------------- | ---------------------------------------------- |
+ | **Core abstraction** | Tasks (plain async functions) + Resources | `Effect<A, E, R>` (algebraic effect wrapper) |
+ | **Code style** | Standard async/await | Generators or pipe-based combinators |
+ | **Error model** | `r.error()` typed helpers, `throws` contracts | Typed error channel (`E` in `Effect<A, E, R>`) |
+ | **DI** | `.dependencies()` with full inference | Layers and Services |
+ | **Lifecycle** | `init` / `dispose` on resources | Layer acquisition / release |
+ | **Middleware** | First-class, composable, with journal | Aspect-oriented via Layer composition |
+ | **Durable Workflows** | Built-in (Node) | Not built-in |
+ | **HTTP Remote Lanes** | Built-in (Node server, any fetch client) | Not built-in |
+ | **Adoption path** | Incremental -- wrap existing async functions | Pervasive -- all code wrapped in `Effect` |
+ | **Learning curve** | Gentle (familiar async/await) | Steep (FP concepts: fibers, layers, schemas) |
+
+ **Choose Runner** when you want production reliability primitives, familiar async/await, and incremental adoption. **Choose Effect** when you want algebraic effects, structured concurrency, and full compile-time error tracking at the cost of a steeper learning curve and pervasive wrapper types.