mongodash 2.1.0 → 2.1.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (68)
  1. package/README.md +70 -5
  2. package/dist/dashboard/index.html +8 -8
  3. package/dist/lib/playground/server.js +1 -8
  4. package/dist/lib/src/ConcurrentRunner.js +1 -8
  5. package/dist/lib/src/OnError.js +1 -8
  6. package/dist/lib/src/OnInfo.js +1 -8
  7. package/dist/lib/src/createContinuousLock.js +3 -8
  8. package/dist/lib/src/createContinuousLock.js.map +1 -1
  9. package/dist/lib/src/cronTasks.js +1 -8
  10. package/dist/lib/src/getCollection.js +1 -8
  11. package/dist/lib/src/getMongoClient.js +1 -8
  12. package/dist/lib/src/globalsCollection.js +1 -8
  13. package/dist/lib/src/index.js +1 -8
  14. package/dist/lib/src/initPromise.js +1 -8
  15. package/dist/lib/src/mongoCompatibility.js +1 -8
  16. package/dist/lib/src/parseInterval.js +1 -8
  17. package/dist/lib/src/prefixFilterKeys.js +1 -8
  18. package/dist/lib/src/processInBatches.js +1 -8
  19. package/dist/lib/src/reactiveTasks/LeaderElector.js +1 -8
  20. package/dist/lib/src/reactiveTasks/MetricsCollector.js +1 -8
  21. package/dist/lib/src/reactiveTasks/MetricsCollector.js.map +1 -1
  22. package/dist/lib/src/reactiveTasks/ReactiveTaskManager.js +1 -8
  23. package/dist/lib/src/reactiveTasks/ReactiveTaskOps.js +1 -8
  24. package/dist/lib/src/reactiveTasks/ReactiveTaskPlanner.js +1 -8
  25. package/dist/lib/src/reactiveTasks/ReactiveTaskReconciler.js +1 -8
  26. package/dist/lib/src/reactiveTasks/ReactiveTaskRegistry.js +1 -8
  27. package/dist/lib/src/reactiveTasks/ReactiveTaskRepository.js +1 -8
  28. package/dist/lib/src/reactiveTasks/ReactiveTaskRetryStrategy.js +1 -8
  29. package/dist/lib/src/reactiveTasks/ReactiveTaskTypes.js +1 -8
  30. package/dist/lib/src/reactiveTasks/ReactiveTaskWorker.js +1 -8
  31. package/dist/lib/src/reactiveTasks/compileWatchProjection.js +1 -8
  32. package/dist/lib/src/reactiveTasks/index.js +1 -8
  33. package/dist/lib/src/reactiveTasks/queryToExpression.js +1 -8
  34. package/dist/lib/src/reactiveTasks/validateTaskFilter.js +1 -8
  35. package/dist/lib/src/task-management/OperationalTaskController.js +1 -8
  36. package/dist/lib/src/task-management/index.js +1 -8
  37. package/dist/lib/src/task-management/serveDashboard.js +1 -9
  38. package/dist/lib/src/task-management/serveDashboard.js.map +1 -1
  39. package/dist/lib/src/task-management/types.js +1 -8
  40. package/dist/lib/src/withLock.js +1 -8
  41. package/dist/lib/src/withTransaction.js +1 -8
  42. package/dist/lib/tools/check-db-connection.js +1 -8
  43. package/dist/lib/tools/clean-testing-databases.js +1 -8
  44. package/dist/lib/tools/prepare-republish.js +1 -8
  45. package/dist/lib/tools/test-matrix-local.js +1 -8
  46. package/dist/lib/tools/testingDatabase.js +1 -8
  47. package/docs/.vitepress/cache/deps/_metadata.json +6 -6
  48. package/docs/.vitepress/config.mts +20 -1
  49. package/docs/.vitepress/theme/style.css +5 -0
  50. package/docs/getting-started.md +75 -9
  51. package/docs/index.md +4 -1
  52. package/docs/initialization.md +1 -1
  53. package/docs/public/logo-backgroundless.png +0 -0
  54. package/docs/public/logo.png +0 -0
  55. package/docs/reactive-tasks/configuration.md +89 -0
  56. package/docs/reactive-tasks/core-concepts.md +56 -0
  57. package/docs/reactive-tasks/evolution.md +62 -0
  58. package/docs/reactive-tasks/examples.md +66 -0
  59. package/docs/reactive-tasks/getting-started.md +54 -0
  60. package/docs/reactive-tasks/guides.md +237 -0
  61. package/docs/reactive-tasks/index.md +44 -0
  62. package/docs/reactive-tasks/management.md +86 -0
  63. package/docs/reactive-tasks/monitoring.md +76 -0
  64. package/docs/reactive-tasks/policy-cleanup.md +70 -0
  65. package/docs/reactive-tasks/policy-retry.md +60 -0
  66. package/docs/reactive-tasks/reconciliation.md +40 -0
  67. package/package.json +11 -10
  68. package/docs/reactive-tasks.md +0 -914
@@ -1,914 +0,0 @@
- # Reactive Tasks
-
- A powerful, distributed task execution system built on top of [MongoDB Change Streams](https://www.mongodb.com/docs/manual/changeStreams/).
-
- Reactive Tasks allow you to define background jobs that trigger automatically when your data changes. This enables **Model Data-Driven Flows**, where business logic is triggered by state changes (e.g., `status: 'paid'`) rather than explicit calls. The system handles **concurrency**, **retries**, **deduplication**, and **monitoring** out of the box.
-
- ## Features
-
- - **Reactive**: Tasks triggered instantly (near-real-time) by database changes (insert/update).
- - **Distributed**: Safe to run on multiple instances (Kubernetes/Serverless). Only one instance processes a specific task for a specific document at a time.
- - **Efficient Listener**: Regardless of the number of application instances, **only one instance (the leader)** listens to the MongoDB Change Stream. This minimizes database load significantly (O(1) connections), though it implies that the total ingestion throughput is limited by the single leader instance.
- - **Reliable**: Built-in retry mechanisms (exponential backoff) and "Dead Letter Queue" logic.
- - **Efficient**: Uses MongoDB Driver for low-latency updates and avoids polling where possible.
- - **Memory Efficiency**: The system is designed to handle large datasets. During live scheduling (Change Streams), reconciliation, and periodic cleanup, the library only loads the `_id`'s of the source documents into memory, keeping the footprint low regardless of the collection size. Note that task *storage* size depends on your `watchProjection` configuration—see [Storage Optimization](#change-detection--storage-optimization).
- - **Observability**: First-class Prometheus metrics support.
- - **Dashboard**: A visual [Dashboard](./dashboard.md) to monitor, retry, and debug tasks.
-
- ## Architecture & Scalability
-
- The system uses a **Leader-Worker** architecture to balance efficiency and scalability.
-
- ### 1. The Leader (Planner)
- - **Role**: A single instance is elected as the **Leader**.
- - **Responsibility**: It listens to the MongoDB Change Stream, calculates the necessary tasks (based on `watchProjection`), and persists them into the `_tasks` collection. To minimize memory usage, it only fetches the document `_id` from the Change Stream event.
- > [!NOTE]
- > **Database Resolution**: The Change Stream is established on the database of the **first registered reactive task**.
- - **Resilience**: Leadership is maintained via a distributed lock with a heartbeat. If the leader crashes, another instance automatically takes over (Failover).
-
- ### 2. The Workers (Executors)
- - **Role**: *Every* instance (including the leader) runs a set of **Worker** threads (managed by the event loop).
- - **Responsibility**: Workers poll the `_tasks` collection for `pending` jobs, lock them, and execute the `handler`.
- - **Adaptive Polling**: Workers use an **adaptive polling** mechanism.
- - **Idle**: If no tasks are found, the polling frequency automatically lowers (saves CPU/IO).
- - **Busy**: If tasks are found (or the **local** Leader signals new work), the frequency speeds up immediately to process the queue as fast as possible. Workers on other instances will speed up once they independently find a task during their regular polling.
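
The adaptive-polling behavior described above can be pictured as a small pure function. This is an illustration only, not mongodash internals; the interval bounds and backoff factor (`minMs`, `maxMs`, `backoffFactor`) are assumed values chosen for the sketch:

```typescript
// Illustration of adaptive polling (not mongodash internals).
// When a poll finds work, drop back to the fastest interval immediately;
// when idle, back off gradually up to a maximum.
function nextPollIntervalMs(
    foundTask: boolean,
    currentMs: number,
    minMs: number = 100,       // assumed fastest interval
    maxMs: number = 5_000,     // assumed idle ceiling
    backoffFactor: number = 2, // assumed slowdown factor
): number {
    if (foundTask) return minMs;                       // busy: speed up
    return Math.min(currentMs * backoffFactor, maxMs); // idle: slow down
}
```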
-
- ## Reactive vs Scheduled Tasks
-
- It is important to distinguish between Reactive Tasks and standard schedulers (like Agenda or BullMQ).
-
- - **Reactive Tasks (Reactors)**: Triggered by **state changes** (data). "When Order is Paid, send email". This guarantees consistency with data.
- - **Schedulers**: Triggered by **time**. "Send email at 2:00 PM".
-
- Reactive Tasks support time-based operations via `debounceMs` (e.g., "Wait 1m after data change to settle") and `deferCurrent` (e.g., "Retry in 5m"), but they are fundamentally event-driven. If you need purely time-based jobs (e.g., "Daily Report" without any data change trigger), you can trigger them via a [Cron job](./cron-tasks.md), although you can model them as "Run on insert to 'daily_reports' collection".
-
- ## Advantages over Standard Messaging
-
- Using Reactive Tasks instead of a traditional message broker (RabbitMQ, Kafka) provides distinct architectural benefits:
-
- 1. **Lean Stack & Simplified DevOps**:
- - Eliminates the need to manage, scale, and secure external message brokers.
- - **Zero-Config Development**: Local testing requires only the database connection—no extra Docker containers or infrastructure to spin up.
-
- 2. **Transactional Consistency (Solving the "Dual Write" Problem)**:
- - *The Problem*: In standard architectures, writing to the database and publishing an event are two separate operations. If the database write succeeds but the message publish fails (network error, crash), your system enters an inconsistent state.
- *The Solution*: With Reactive Tasks, the "event" **is** the database write. The task is triggered directly by the MongoDB Oplog. This guarantees that **if and only if** data is persisted, the corresponding task will be scheduled—ensuring 100% data consistency without distributed transactions.
-
- 3. **Inspectable State**:
- - The task queue is stored in a standard MongoDB collection (`[collection]_tasks`), not in a hidden broker queue.
- - You can use standard tools (MongoDB Compass, Atlas Data Explorer, simple queries) to inspect pending jobs, debug failures, and analyze queue distribution without needing specialized queue management interfaces.
-
-
- ## Getting Started
-
- ### 1. Initialization
-
- Ensure `mongodash` is initialized with a global `uri` or `mongoClient`.
-
- ```typescript
- import mongodash from 'mongodash';
-
- await mongodash.init({
- uri: 'mongodb://localhost:27017/my-app'
- });
- ```
-
- ### 2. Define a Task
-
- Use the `reactiveTask` function to register a task. You define *what* to watch and *how* to process it.
-
- ```typescript
- import { reactiveTask } from 'mongodash';
-
- // Define a task that sends an email when a User is created or updated
- // multiple tasks can listen to the same collection!
- await reactiveTask({
- task: 'send-welcome-email', // Unique Job ID
- collection: 'users', // Collection to watch
-
- // Optional: Only trigger if specific fields change
- watchProjection: { status: 1, email: 1 },
-
- // Optional: Filter documents (Standard Query or Aggregation Expression)
- filter: { status: 'active' },
-
- // The logic to execute
- handler: async (context) => {
- // Fetch the document (verifies filter & optimistic locking)
- const userDoc = await context.getDocument();
-
- console.log(`Processing user: ${userDoc._id}`);
- await sendEmail(userDoc.email, 'Welcome!');
- }
- });
- ```
-
- ### 3. Start the System
-
- After registering all tasks, start the scheduler. This will assume leadership (if possible) and start processing.
-
- ```typescript
- import { startReactiveTasks } from 'mongodash';
-
- await startReactiveTasks();
- ```
-
- ### 4. Common Use Cases
-
- Reactive Tasks are versatile. Here are a few patterns you can implement:
-
- #### A. Webhook Delivery & Data Sync
- Perfect for reliable delivery of data to external systems. If the external API is down, Mongodash will automatically retry with exponential backoff.
-
- ```typescript
- await reactiveTask({
- task: 'sync-order-to-erp',
- collection: 'orders',
- filter: { status: 'paid' }, // Only sync when paid
- watchProjection: { status: 1 }, // Only check when status changes
-
- handler: async (context) => {
- const order = await context.getDocument();
- await axios.post('https://erp-system.com/api/orders', order);
- }
- });
- ```
-
- #### B. Async Statistics Recalculation
- Offload heavy calculations from the main request path. When a raw document changes, update the aggregated view in the background.
-
- ```typescript
- await reactiveTask({
- task: 'recalc-product-rating',
- collection: 'reviews',
- debounce: '5s', // Re-calc at most once every 5 seconds per product
-
- handler: async (context) => {
- // The document ID is enough here; the aggregation recomputes the stats itself:
- const { docId } = context;
-
- // Calculate new average
- const stats = await calculateAverageRating(docId);
-
- // Update product document
- await db.collection('products').updateOne(
- { _id: docId },
- { $set: { rating: stats.rating, reviewCount: stats.count } }
- );
- }
- });
- ```
-
- #### C. Pub-Sub (Event Bus)
- Use Reactive Tasks as a distributed Event Bus. By creating an events collection and watching only the `_id`, you effectively create a listener that triggers **only on new insertions**.
-
- ```typescript
- await reactiveTask({
- task: 'send-welcome-sequence',
- collection: 'app_events',
-
- // TRICK: _id never changes.
- // This config ensures the handler ONLY runs when a new document is inserted.
- watchProjection: { _id: 1 },
- filter: { type: 'user-registered' },
-
- handler: async (context) => {
- const event = await context.getDocument();
- await emailService.sendWelcome(event.payload.email);
- }
- });
- ```
-
- ### 5. Advanced Initialization
-
- You can customize the scheduler behavior via `mongodash.init`:
-
- ```typescript
- await mongodash.init({
- uri: '...',
-
- // Instance ID: Unique identifier for this scheduler instance.
- // Used for leader election, metrics aggregation, and debugging.
- // If not provided, a random ObjectId hex string will be generated.
- instanceId: 'my-app-worker-1',
-
- // Concurrency: Number of parallel workers on the *current instance* (default: 5)
- // Total system concurrency = (reactiveTaskConcurrency * number of instances)
- reactiveTaskConcurrency: 10,
-
- // Globals Collection: Used for coordination and leadership (default: '_mongodash_globals')
- globalsCollection: 'my_custom_globals',
-
- // Filter: Run ONLY specific tasks on this instance (e.g. for scaling or time-based windows).
- // This function is called regularly (every poll cycle) for every pending task.
- // Example: Time-Based Filtering (e.g. only run 'nightly-job' during night hours)
- reactiveTaskFilter: ({ task }) => {
- if (task === 'nightly-job') {
- const hour = new Date().getHours();
- return hour >= 0 && hour < 6; // Only process between 00:00 - 06:00
- }
- return true; // Process all other tasks normally
- },
-
- // Caller: Wrap execution to add context, logging, or error handling.
- // Example: Generating a Correlation ID for distributed tracing
- reactiveTaskCaller: async (taskFn) => {
- const correlationId = crypto.randomUUID();
- return AsyncContext.run({ correlationId }, async () => {
- console.log(`[${correlationId}] Starting task...`);
- try {
- await taskFn();
- console.log(`[${correlationId}] Task finished.`);
- } catch (e) {
- console.error(`[${correlationId}] Task failed:`, e);
- throw e;
- }
- });
- },
-
- monitoring: {
- enabled: true,
- pushIntervalMs: 60000
- },
-
- // Cleanup Interval: How often to run periodic cleanup of orphaned tasks.
- // Accepts duration strings ('24h'), milliseconds, or cron expressions.
- // Default: '24h'
- reactiveTaskCleanupInterval: '24h'
- });
- ```
-
- ### Task Options
-
- | Option | Type | Description |
- | :--- | :--- | :--- |
- | `task` | `string` | **Required**. Unique identifier for the task type. |
- | `collection` | `string` | **Required**. Name of the MongoDB collection to watch. |
- | `handler` | `(context) => Promise<void>` | **Required**. Async function to process the task. Use `context.getDocument()` to get the document. |
- | `filter` | `Document` | Standard Query (e.g., `{ status: 'pending' }`) OR Aggregation Expression (e.g., `{ $eq: ['$status', 'pending'] }`). Aggregation syntax unlocks powerful features like using `$$NOW` for time-based filtering. |
- | `watchProjection` | `Document` | MongoDB Projection. Task only triggers if the projected result changes. Supports inclusion `{ a: 1 }` and computed fields. |
- | `debounce` | `number \| string` | Debounce window (ms or duration string). Default: `1000`. Useful to group rapid updates. |
- | `retryPolicy` | `RetryPolicy` | Configuration for retries on failure. |
- | `cleanupPolicy` | `CleanupPolicy` | Configuration for automatic cleanup of orphaned task records. See [Cleanup Policy](#cleanup-policy). |
- | `executionHistoryLimit` | `number` | Number of past execution entries to keep in `_tasks` doc. Default: `5`. |
- | `evolution` | `EvolutionConfig` | Configuration for handling task logic updates (versioning, reconciliation policies). |
-
- ### Change Detection & Storage Optimization
-
- To ensure reliability and efficiency, the system needs to determine *when* to trigger a task.
-
- **How it works:**
- 1. **State Persistence**: For every source document, a corresponding "task document" is stored in the `[collection]_tasks` collection.
- 2. **Snapshotting**: This task document holds a snapshot of the source document's fields (specifically, the result of `watchProjection`).
- 3. **Diffing**: When an event occurs (or during reconciliation), the system compares the current state of the document against the stored snapshot (`lastObservedValues`).
- 4. **No-Op**: If the watched fields haven't changed, **no task is triggered**. This guarantees reliability and prevents redundant processing.
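
The diffing step above amounts to a deep comparison of the projected snapshot against the stored one. A minimal sketch of that decision (illustration only, not mongodash internals; `shouldTrigger` is a hypothetical helper name):

```typescript
import { deepStrictEqual } from "node:assert";

// Illustration of the diffing step (not mongodash internals).
// A task fires only when the projected snapshot differs from the
// previously stored one (`lastObservedValues`).
function shouldTrigger(lastObservedValues: unknown, currentProjected: unknown): boolean {
    try {
        deepStrictEqual(lastObservedValues, currentProjected);
        return false; // no-op: watched fields unchanged
    } catch {
        return true;  // snapshot differs: schedule the task
    }
}
```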
-
- **Storage Implications:**
- - **Task Persistence**: The task document remains in the `_tasks` collection as long as the source document exists. It is only removed when the source document is deleted.
- - **Optimization**: If `watchProjection` is **not defined**, the system copies the **entire source document** into the task document.
- - **Recommendation**: For collections with **large documents** or **large datasets**, always define `watchProjection`. This significantly reduces storage usage and improves performance by copying only the necessary data subset.
- - **Tip**: If you want to trigger the task on *any* change but avoid storing the full document, watch a versioning field like `updatedAt`, `lastModifiedAt`, or `_version`.
- ```typescript
- // Triggers on any update (assuming your app updates 'updatedAt'),
- // but stores ONLY the 'updatedAt' date in the tasks collection.
- watchProjection: { updatedAt: 1 }
- ```
-
- ### Late-Binding Filter Check & `getDocument`
-
- Critically, the library performs a **runtime check** when you call `await context.getDocument()` inside your handler.
-
- 1. **Lock Task**: The worker locks the task.
- 2. **Fetch & Verify**: When you call `await context.getDocument()`, it performs an atomic fetch that ensures:
- * **Filter Match**: The document still matches your `filter` configuration.
- * **Data Consistency**: The watched fields (`watchProjection`) have NOT changed since the task was triggered (Optimistic Locking).
- * **Existence**: The document still exists.
-
- If any of these conditions fail, `getDocument` throws a `TaskConditionFailedError`. The worker catches this error, effectively **skipping** the task and marking it as 'completed'.
-
- **Why is this important?**
- * **Race Conditions**: Imagine a "Back-In-Stock" task triggered when `inventory > 0`. If the item sells out immediately (`inventory` returns to `0`) *while* the task is waiting in the queue, this check prevents sending a false notification.
- * **Optimistic Concurrency**: If the data changed significantly (e.g. `status` changed from `paid` to `refunded`) between trigger and execution, the task is skipped to effectively "cancel" the stale operation. A new task for the new state (`refunded`) will likely be in the queue anyway.
-
- #### Advanced Usage: Options & Transactions
-
- The `getDocument(options)` method accepts standard MongoDB `FindOptions`, allowing you to optimize performance or ensure consistency.
-
- **1. Projections (Partial Fetch)**
- If your source document is large but you only need a few fields, use `projection`.
-
- ```typescript
- const user = await context.getDocument({
- projection: { email: 1, firstName: 1 }
- });
- ```
-
- **2. Transactions (`session`)**
- To ensure atomic updates across multiple collections, pass a `session` to `getDocument`. This ensures that the document fetch and your subsequent writes happen within the same transaction snapshot.
-
- ```typescript
- import { withTransaction } from 'mongodash';
-
- handler: async (context) => {
- await withTransaction(async (session) => {
- // Pass session to getDocument to participate in the transaction
- const doc = await context.getDocument({ session });
-
- // Perform other operations in the same transaction
- await otherCollection.updateOne({ _id: doc.refId }, { $set: { ... } }, { session });
- });
- }
- ```
-
- **3. Locking Resources (`withLock`)**
- While the *task itself* is locked (ensuring only one worker processes this specific task instance), you might need to lock shared resources if your handler accesses data outside the source document.
-
- You can use `context.watchedValues` to get IDs needed for locking *before* you fetch the document.
-
- ```typescript
- import { withLock } from 'mongodash';
-
- handler: async (context) => {
- // Use watchedValues to get the ID for locking
- const accountId = context.watchedValues.accountId;
-
- // Lock a shared resource
- await withLock(`account-update-${accountId}`, async () => {
- // Now it is safe to fetch and process
- const doc = await context.getDocument();
- // ... safe exclusive access to the account ...
- });
- }
- ```
-
- ### Cleanup Policy
-
- The Cleanup Policy controls automatic deletion of orphaned task records — tasks whose source documents have been deleted or no longer match the configured filter.
-
- #### Configuration
-
- ```typescript
- cleanupPolicy?: {
- deleteWhen?: 'sourceDocumentDeleted' | 'sourceDocumentDeletedOrNoLongerMatching' | 'never';
- keepFor?: string | number;
- }
- ```
-
- | Property | Type | Default | Description |
- |----------|------|---------|-------------|
- | `deleteWhen` | `string` | `'sourceDocumentDeleted'` | When to trigger task deletion |
- | `keepFor` | `string \| number` | `'24h'` | Grace period before deletion (e.g., `'1h'`, `'7d'`, or `86400000` ms) |
-
- #### Deletion Strategies (`deleteWhen`)
-
- | Strategy | Behavior |
- |----------|----------|
- | `sourceDocumentDeleted` | **Default.** Task deleted only when its source document is deleted from the database. Filter mismatches are ignored. |
- | `sourceDocumentDeletedOrNoLongerMatching` | Task deleted when source document is deleted **OR** when it no longer matches the task's `filter`. Useful when the document's change is permanent and it is not expected to match the filter again in the future (which would otherwise retrigger the task). Also useful for `$$NOW`-based or dynamic filters. |
- | `never` | Tasks are never automatically deleted. Use for audit trails or manual cleanup scenarios. |
-
- #### Grace Period Calculation
-
- The `keepFor` grace period is measured from `MAX(updatedAt, lastFinalizedAt)`:
-
- - **`updatedAt`**: When the source document's watched fields (`watchProjection`) last changed
- - **`lastFinalizedAt`**: When a worker last completed or failed the task
-
- This ensures tasks are protected if either:
- 1. The source data changed recently, OR
- 2. A worker processed the task recently
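
The rule above can be sketched as a tiny helper (illustration only, not a mongodash API; the function name and `keepForMs` parameter are hypothetical):

```typescript
// Illustration only (not part of mongodash). A task record becomes
// eligible for cleanup once the `keepFor` grace period has elapsed
// since the last activity, i.e. MAX(updatedAt, lastFinalizedAt).
function isEligibleForCleanup(
    updatedAt: Date,       // watched fields last changed
    lastFinalizedAt: Date, // worker last completed or failed the task
    keepForMs: number,     // grace period in milliseconds
    now: Date,
): boolean {
    const lastActivity = Math.max(updatedAt.getTime(), lastFinalizedAt.getTime());
    return now.getTime() - lastActivity > keepForMs;
}
```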
-
- #### Example: Dynamic Filter Cleanup
-
- ```typescript
- await reactiveTask({
- task: 'remind-pending-order',
- collection: 'orders',
- // Match orders pending for more than 24 hours
- filter: { $expr: { $gt: ['$$NOW', { $add: ['$createdAt', 24 * 60 * 60 * 1000] }] } },
-
- cleanupPolicy: {
- deleteWhen: 'sourceDocumentDeletedOrNoLongerMatching',
- keepFor: '1h', // Keep it at least 1 hour after last scheduled matching or finalization
- },
-
- handler: async (context) => { /* Send reminder email */ }
- });
- ```
-
- #### Scheduler-Level Configuration
-
- Control how often the cleanup runs using `reactiveTaskCleanupInterval` in scheduler options. Cleanup is performed in **batches** (default 1000 items) to ensure stability on large datasets.
-
- ```typescript
- scheduler.configure({
- reactiveTaskCleanupInterval: '12h', // Run cleanup every 12 hours (default: '24h')
- });
- ```
-
- Supported formats:
- - Duration string: `'1h'`, `'24h'`, `'7d'`
- - Milliseconds: `86400000`
- - Cron expression: `'CRON 0 3 * * *'` (e.g., daily at 3 AM)
-
- ### Filter Evolution & Reconciliation
-
- Reactive Tasks are designed to evolve with your application. As you deploy new versions of your code, you might change the `filter`, `watchProjection`, or the `handler` logic itself. The system automatically detects these changes and adapts the task state accordingly.
-
- You can control this behavior using the optional `evolution` configuration:
-
- ```typescript
- await reactiveTask({
- task: 'process-order',
- collection: 'orders',
- filter: { status: 'paid', amount: { $gt: 100 } },
-
- // Logic Versioning
- evolution: {
- // Increment this when you change the handler code and want to re-process tasks
- handlerVersion: 2,
-
- // What to do when version increments?
- // - 'none': Do nothing (default).
- // - 'reprocess_failed': Reset all 'failed' tasks to 'pending' to retry with new code.
- // - 'reprocess_all': Reset ALL tasks (even completed ones) to 'pending'.
- onHandlerVersionChange: 'reprocess_failed',
-
- // If 'filter' or 'watchProjection' changes, should we run reconciliation?
- // Default: true
- reconcileOnTriggerChange: true
- },
-
- handler: async (context) => { /* ... */ }
- });
- ```
-
- #### 1. Trigger Evolution (Filter / Projection)
-
- When the scheduler starts, it compares the current `filter` and `watchProjection` with the stored configuration from the previous deployment.
-
- * **Narrowing the Filter** (e.g., `amount > 50` → `amount > 100`):
- * **Pending Tasks**: Workers will pick up pending tasks. Before execution, they perform a "Late-Binding Check". If the document no longer matches the new filter (e.g. amount is 75), the task is **skipped** (completed) without running the handler.
- * **Existing Tasks**: Tasks for documents that no longer match are **not deleted** immediately; they remain as history but won't satisfy the filter for future updates. See the cleanup policies for more details.
-
- * **Widening the Filter** (e.g., `amount > 100` → `amount > 50`):
- * **Reconciliation**: The system detects the filter change and automatically triggers a **Reconciliation** scan for this specific task.
- * **Backfilling**: It scans the collection for documents that *now* match the new filter (e.g. amount 75) but don't have a task yet. It schedules new tasks for them immediately.
- * *Note*: This ensures newly-matched documents get processed without needing a manual migration script.
-
- > [!WARNING]
- > **Dynamic Filters (e.g., `$$NOW`)**: If your filter uses time-based expressions to "widen" the range automatically over time (e.g. `{ $expr: { $lte: ['$releaseDate', '$$NOW'] } }`), this does **NOT** trigger reconciliation. The scheduler only detects changes to the *filter definition object*. Documents that match purely because time has passed (without a data change) will **not** be picked up. For time-based triggers, use a [Cron Task](./cron-tasks.md).
-
- #### 2. Logic Evolution (Handler Versioning)
-
- Sometimes you fix a bug in your handler and want to retry failed tasks, or you implement a new feature (e.g. generic data migration) and want to re-run the task for *every* document.
-
- * **Versioning**: Increment `evolution.handlerVersion` (integer, default 1).
- * **Policies (`onHandlerVersionChange`)**:
- * `'none'`: The system acknowledges the new version but doesn't touch existing task states. New executions will use the new code.
- * `'reprocess_failed'`: Finds all tasks currently in `failed` status and resets them to `pending` (resetting attempts count). Useful for bug fixes.
- * `'reprocess_all'`: Resets **ALL** tasks (failed, completed) to `pending`. Useful for migrations or re-calculating data for the entire dataset.
-
- > [!TIP]
- > Use `reprocess_failed` for bug fixes and `reprocess_all` sparingly for data migrations. The system automatically handles the "reset" operation efficiently using database-side updates.
-
- ### Retry Policy
-
- You can configure the retry behavior using the `retryPolicy` option.
-
- **General Options**
-
- | Option | Type | Default | Description |
- | :--- | :--- | :--- | :--- |
- | `type` | `string` | **Required** | `'fixed'`, `'linear'`, `'exponential'`, `'series'`, or `'cron'` |
- | `maxAttempts` | `number` | `5`* | Maximum total attempts (use `-1` for unlimited). |
- | `maxDuration` | `string \| number` | `undefined` | Stop retrying if elapsed time exceeds this value. |
- | `resetRetriesOnDataChange` | `boolean` | `true` | Reset attempt count if the source document changes. |
-
- *\* If `maxDuration` is specified, `maxAttempts` defaults to unlimited.*
-
- #### Policy Specific Settings
-
- | Policy | Property | Default | Description |
- | :--- | :--- | :--- | :--- |
- | **`fixed`** | `interval` | - | Delay between retries (e.g., `'10s'`). |
- | **`linear`** | `interval` | - | Base delay multiplied by `attempt` number. |
- | **`exponential`** | `min` | `'10s'` | Initial delay for the first retry. |
- | **`exponential`** | `max` | `'1d'` | Maximum delay cap for backoff. |
- | **`exponential`** | `factor` | `2` | Multiplication factor per attempt. |
- | **`series`** | `intervals` | - | Array of fixed delays (e.g., `['1m', '5m', '15m']`). |
- | **`cron`** | `expression` | - | Standard cron string for scheduling retries. |
-
- ### Examples
-
- ```typescript
- // 1. Give up after 24 hours (infinite attempts within that window)
- retryPolicy: {
- maxDuration: '24h',
- type: 'exponential',
- min: '10s',
- max: '1h'
- }
-
- // 2. Exact retry ladder (try after 1m, then 5m, then 15m, then fail)
- retryPolicy: {
- maxAttempts: 4, // 1st run + 3 retries
- type: 'series',
- intervals: ['1m', '5m', '15m']
- }
-
- // 3. Series with last interval reuse
- // Sequence: 1m, 5m, 5m, 5m ... (last one repeats)
- retryPolicy: {
- maxAttempts: 10,
- type: 'series',
- intervals: ['1m', '5m']
- }
-
- // 4. Permanent retries every hour
- retryPolicy: {
- maxAttempts: -1,
- type: 'fixed',
- interval: '1h'
- }
- ```
540
### Flow Control (Defer / Throttle)

Sometimes you need dynamic control over task execution speed based on external factors (e.g., rate limits of a 3rd-party API) or business logic.

The `handler` receives a `context` object that exposes flow control methods.

#### 1. Deferral (`deferCurrent`)

Delays the **current** task execution. The task is put back into the queue for this specific document and will not be picked up again until the specified time.

This is useful for:
* **Rate Limits**: "The API returned 429, try again in 30 seconds."
* **Business Waits**: "Customer created, but wait 1 hour before sending the first email."

```typescript
import { reactiveTask } from 'mongodash';

await reactiveTask({
  task: 'send-webhook',
  collection: 'events',
  handler: async (context) => {
    const doc = await context.getDocument();
    try {
      await sendWebhook(doc);
    } catch (err) {
      if (err.status === 429) {
        const retryAfter = err.headers['retry-after'] || 30; // seconds

        // Defer THIS task only.
        // It resets the status to 'pending' and schedules it for the future.
        // It does NOT increment the attempt count (a deferral is not a failure).
        context.deferCurrent(retryAfter * 1000);
        return;
      }
      throw err; // Use the standard retry policy for other errors
    }
  }
});
```

#### 2. Throttling (`throttleAll`)

Pauses all **future** tasks of this type for a specified duration. This serves as a "circuit breaker" when an external system (e.g., CRM, payment gateway) is unresponsive or returns overload errors (503, 429).

```typescript
context.throttleAll(60 * 1000); // Pause this task type for 1 minute
```

> [!IMPORTANT]
> **Cluster Behavior (Instance-Local)**
> `throttleAll` operates only in the memory of the current instance (worker).
> In a distributed environment (e.g., Kubernetes with multiple pods), other instances will not know about the issue immediately. They will continue processing until they independently encounter the error and trigger their own `throttleAll`.
>
> **Result**: The load on the external service will not drop to zero immediately but will decrease gradually as individual instances hit the "circuit breaker".

> [!NOTE]
> **Current Task**
> `throttleAll` does not affect the currently running task. If you want to postpone the current task (so it stays `pending` and retries after the pause), you must explicitly call `deferCurrent()`.

**Example (Service Down):**

```typescript
import { reactiveTask } from 'mongodash';

await reactiveTask({
  task: 'sync-to-crm',
  collection: 'users',
  handler: async (context) => {
    // Note: You can throttle even before fetching the doc if you know the service is down!
    try {
      const doc = await context.getDocument();
      await crmApi.update(doc);
    } catch (err) {
      // If the service is unavailable (503) or the circuit breaker is open
      if (err.status === 503 || err.isCircuitBreakerOpen) {
        console.warn('CRM is down, pausing tasks for 1 minute.');

        // 1. Stop processing future tasks of this type on this instance
        context.throttleAll(60 * 1000);

        // 2. Defer the CURRENT task so it retries after the pause
        context.deferCurrent(60 * 1000);
        return;
      }
      throw err; // Standard retry policy for other errors
    }
  }
});
```

### Reconciliation & Reliability

The system includes a self-healing mechanism called **Reconciliation**.

**What is it?**
It is a "full scan" process that ensures the state of your tasks matches the actual data in your collections. It iterates through your source collections (efficiently, fetching only `_id`) and ensures every document has the correct corresponding tasks planned.

**When does it run?**
1. **On Startup (Partial)**: When `startReactiveTasks()` is called, the leader performs a reconciliation only for tasks that have **never been reconciled before**. This ensures that newly added tasks catch up with existing data.
2. **On History Loss**: If the MongoDB Change Stream history (Oplog) is exhausted and events are lost (error code 280, `ChangeStreamHistoryLost`), the system automatically triggers a full reconciliation to restore consistency.

Reconciliation is **persistent and resilient**.
- **Checkpoints**: The system periodically saves its progress (`lastId`) to the database (`_mongodash_planner_meta`).
- **Resumable**: If the process is interrupted (e.g., by a deployment or crash), it **resumes** from the last checkpoint upon restart, preventing re-processing of already reconciled documents.
- **Invalidation**: If the set of tasks being reconciled changes (e.g., you deploy a version with a NEW task definition for the same collection), the system detects this change, invalidates the checkpoint, and restarts reconciliation from the beginning to ensure the new task is applied to the entire collection.

**What to expect?**
- **No Data Loss**: Even if Oplog history is lost, the system will eventually process every document.
- **Performance**: The scan is optimized (it uses batching and projects only `_id`), but it is still a **full collection scan**. On huge collections (millions of documents), this increases database load during startup or recovery.
- **Batch Processing**: Both reconciliation and periodic cleanup process documents in batches to avoid overwhelming the database and the application's memory.

- **Configuration Matters**: Reconciliation respects your `filter` and `watchProjection`.
  - If a document doesn't match the `filter`, no task is planned.
  - If the `watchProjection` hasn't changed since the last run (comparing `lastObservedValues`), the task is **not** re-triggered.
- **Recommendation**: Carefully configure `filter` and `watchProjection` to minimize unnecessary processing during reconciliation.

> [!CAUTION]
> **Limitations of `$$NOW` in filters**
> MongoDB Change Streams only trigger when a document is physically updated. If your `filter` depends on time passing (e.g., `dueAt: { $lte: '$$NOW' }`), the task **will not** trigger automatically just because time passed. It will only be picked up during:
> 1. A physical update to the source document.
> 2. The next system restart, if reconciliation runs.
> 3. Manual re-triggers via `retryReactiveTasks()`.
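The checkpoint/resume behavior described above can be sketched as follows. This is an illustrative model only, not the library's code: a sorted in-memory array stands in for the `_id`-only cursor, and `checkpoint.lastId` stands in for the progress document persisted in `_mongodash_planner_meta`:

```typescript
// Illustrative model of a checkpoint-resumable scan (NOT the actual
// implementation): process ids in batches strictly after the last
// checkpointed id, so an interrupted run can resume instead of restarting.
interface Checkpoint {
  lastId: string | null; // last successfully processed _id ("persisted")
}

// Processes one batch after the checkpoint; returns true once the scan is done.
function scanNextBatch(
  sortedIds: string[],
  checkpoint: Checkpoint,
  batchSize: number,
  planTasks: (batch: string[]) => void,
): boolean {
  const last = checkpoint.lastId;
  const start = last === null ? 0 : sortedIds.findIndex((id) => id > last);
  if (start === -1) return true; // nothing left after the checkpoint
  const batch = sortedIds.slice(start, start + batchSize);
  if (batch.length === 0) return true;
  planTasks(batch);
  checkpoint.lastId = batch[batch.length - 1]; // "persist" progress
  return start + batch.length >= sortedIds.length;
}
```

A crash between batches simply leaves `lastId` at the last completed batch, so a restarted run continues where the previous one stopped; invalidating the checkpoint (resetting `lastId` to `null`) restarts the scan from the beginning.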

## Idempotency & Re-execution

The system is designed with an **at-least-once** execution guarantee. This is a fundamental property of distributed systems that value reliability over "exactly-once" semantics.

While the system strives to execute your handler exactly once per event, there are specific scenarios where it might execute multiple times for the same document state. Therefore, **your `handler` must be idempotent**.

### Common Re-execution Scenarios

1. **Transient Failures (Retries)**: If a worker crashes or loses network connectivity during execution (before marking the task `completed`), the lock will expire. Another worker will pick up the task and retry it.
2. **Reconciliation Recovery**: If task records are deleted (e.g., by manual cleanup) but the source documents remain, the next reconciliation run recreates them as `pending`.
3. **Filter Re-matching**: If a document stops matching the task filter, the task is deleted (under the **sourceDocumentDeletedOrNoLongerMatching** cleanup policy). If the document later changes to match the filter again, the task is recreated as `pending`.
4. **Explicit Reprocessing**: You might trigger re-execution manually (via `retryReactiveTasks`) or through schema evolution policies (`reprocess_all`).

### Designing Idempotent Handlers

Ensure your handler allows multiple executions without adverse side effects.

**Example**:
```typescript
handler: async (context) => {
  // 1. Fetch the document (with verification)
  const order = await context.getDocument();

  // 2. Check if the work is already done
  if (order.emailSent) return;

  // 3. Perform the side effect
  await sendEmail(order.userId, "Order Received");

  // 4. Mark as done (using an atomic update)
  await db.collection('orders').updateOne(
    { _id: order._id },
    { $set: { emailSent: true } }
  );
}
```

## Task States & Lifecycle

Every task record in the `_tasks` collection follows a specific lifecycle:

| Status | Description |
| :--- | :--- |
| `pending` | The task is waiting to be processed by a worker. This is the initial state after scheduling or a re-trigger. |
| `processing` | The task is currently locked and being worked on by an active worker instance. |
| `processing_dirty` | **Concurrency guard.** New data was detected while the worker was already processing the previous state. The task is reset to `pending` immediately after the current run finishes, so no updates are missed. |
| `completed` | The task was processed successfully, or the document did not match the filter during the last attempt. |
| `failed` | The task permanently failed after exhausting all retries or exceeding the `maxDuration` window. |
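The transitions implied by the table above can be sketched as a small state function. This is a hypothetical model for illustration (the exact transitions on failure are an assumption based on the descriptions above, not the library's code):

```typescript
// Hypothetical lifecycle model (NOT the library's internals), derived from
// the status table above.
type TaskStatus = 'pending' | 'processing' | 'processing_dirty' | 'completed' | 'failed';
type TaskEvent = 'lock' | 'source_changed' | 'finish_ok' | 'finish_error' | 'retrigger';

function nextStatus(status: TaskStatus, event: TaskEvent, retriesLeft: boolean): TaskStatus {
  switch (event) {
    case 'lock':
      // a worker picks up a waiting task
      return status === 'pending' ? 'processing' : status;
    case 'source_changed':
      // concurrency guard: new data arrived while the worker was running
      return status === 'processing' ? 'processing_dirty' : status;
    case 'finish_ok':
      // a dirty task is re-queued so the newer data is not missed
      return status === 'processing_dirty' ? 'pending' : 'completed';
    case 'finish_error':
      if (status === 'processing_dirty') return 'pending';
      return retriesLeft ? 'pending' : 'failed';
    case 'retrigger':
      return 'pending'; // e.g. retryReactiveTasks()
  }
}
```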

---

## Monitoring

Mongodash provides built-in Prometheus metrics to monitor your reactive tasks.

> [!NOTE]
> **Dependency Required**: You must install `prom-client` yourself to use this feature. It is an optional peer dependency.
> ```bash
> npm install prom-client
> ```

### Configuration

Monitoring is configured in the initialization options under the `monitoring` key:

```typescript
await mongodash.init({
  // ...
  monitoring: {
    enabled: true, // Default: true
    pushIntervalMs: 60000, // How often instances synchronize metrics (default: 1m)
    scrapeMode: 'cluster', // 'cluster' (default) or 'local'
    readPreference: 'secondaryPreferred' // 'primary', 'secondaryPreferred', etc.
  }
});
```

- **scrapeMode**:
  - `'cluster'` (default): Returns aggregated, system-wide metrics. Any instance can respond to the scrape by fetching the shared state from the database and aggregating metrics from all active instances. (Recommended for load balancers / Heroku.)
  - `'local'`: Returns metrics for THIS instance only. If this instance is the leader, it ALSO includes the global system metrics (queue depth, lag) so they are reported exactly once in the cluster. (Recommended for K8s pod monitors.)

### Retrieving Metrics

Expose the metrics endpoint (e.g., in Express):

```typescript
import { getPrometheusMetrics } from 'mongodash';

app.get('/metrics', async (req, res) => {
  const registry = await getPrometheusMetrics();

  if (registry) {
    res.set('Content-Type', registry.contentType);
    return res.end(await registry.metrics());
  }

  res.status(503).send('Monitoring disabled');
});
```

### Available Metrics

The system exposes the following metrics with standardized labels:

| Metric Name | Type | Labels | Description |
| :--- | :--- | :--- | :--- |
| `reactive_tasks_duration_seconds` | Histogram | `task_name`, `status` | Distribution of task processing time (success/failure). |
| `reactive_tasks_retries_total` | Counter | `task_name` | Total number of retries attempted. |
| `reactive_tasks_queue_depth` | Gauge | `task_name`, `status` | Current number of tasks in the queue, grouped by status (`pending`, `processing`, `processing_dirty`, `failed`). |
| `reactive_tasks_global_lag_seconds` | Gauge | `task_name` | Age of the oldest `pending` task, measured from `initialScheduledAt` (or `scheduledAt` if not deferred). This ensures deferred tasks still reflect their true waiting time. |
| `reactive_tasks_change_stream_lag_seconds` | Gauge | *none* | Time difference between now and the last processed Change Stream event. |
| `reactive_tasks_last_reconciliation_timestamp_seconds` | Gauge | *none* | Timestamp when the last full reconciliation (recovery) finished. |
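The lag calculation described for `reactive_tasks_global_lag_seconds` can be sketched for a single task as follows. This is a hypothetical helper for illustration; the field names mirror the table above, but the shape of the task record is an assumption:

```typescript
// Hypothetical sketch of the per-task lag behind
// `reactive_tasks_global_lag_seconds`: deferred tasks are measured from
// their ORIGINAL schedule time, so deferrals cannot hide real waiting time.
interface PendingTaskTimes {
  scheduledAt: Date;          // when the task is next due to run
  initialScheduledAt?: Date;  // original schedule time, set before any deferrals
}

function lagSeconds(task: PendingTaskTimes, now: Date): number {
  const since = task.initialScheduledAt ?? task.scheduledAt;
  return Math.max(0, (now.getTime() - since.getTime()) / 1000);
}
```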

### Grafana Dashboard

A comprehensive **Grafana Dashboard** ("Reactive Tasks - System Overview") is included with the package.

It provides real-time visibility into:
- System Health & Global Lag
- Throughput & Latency Heatmaps
- Queue Depth & Composition
- Error Rates & Retries

You can find the dashboard JSON file at:
`node_modules/mongodash/grafana/reactive_tasks.json`

Import this file directly into Grafana to get started.

## Task Management & DLQ

You can programmatically manage tasks, investigate failures, and handle Dead Letter Queues (DLQ) using the exported management API.

These functions allow you to build custom admin UIs or automated recovery workflows.

### Listing Tasks

Use `getReactiveTasks` to inspect the queue. You can filter by task name, status, error message, or properties of the **source document**.

```typescript
import { getReactiveTasks } from 'mongodash';

// List currently failed tasks
const failedTasks = await getReactiveTasks({
  task: 'send-welcome-email',
  status: 'failed'
});

// List with pagination
const page1 = await getReactiveTasks(
  { task: 'send-welcome-email' },
  { limit: 50, skip: 0, sort: { scheduledAt: -1 } }
);

// Advanced: find a task by properties of the SOURCE document.
// This is powerful: "Find the task associated with Order #123"
const orderTasks = await getReactiveTasks({
  task: 'sync-order',
  sourceDocFilter: { _id: 'order-123' }
});

// Advanced: find tasks whose source document matches a complex filter.
// "Find sync tasks for all VIP users"
const vipTasks = await getReactiveTasks({
  task: 'sync-order',
  sourceDocFilter: { isVip: true }
});
```

### Counting Tasks

Use `countReactiveTasks` for metrics or UI badges.

```typescript
import { countReactiveTasks } from 'mongodash';

const dlqSize = await countReactiveTasks({
  task: 'send-welcome-email',
  status: 'failed'
});
```

### Retrying Tasks

Use `retryReactiveTasks` to manually re-trigger tasks. This is useful for DLQ recovery after fixing a bug.

This operation is **concurrency-safe**. If a task is currently `processing`, it will be marked to re-run immediately after the current execution finishes (`processing_dirty`), ensuring no race conditions.

```typescript
import { retryReactiveTasks } from 'mongodash';

// Retry ALL failed tasks for a specific job
const result = await retryReactiveTasks({
  task: 'send-welcome-email',
  status: 'failed'
});
console.log(`Retried ${result.modifiedCount} tasks.`);

// Retry a specific task by source document ID
await retryReactiveTasks({
  task: 'sync-order',
  sourceDocFilter: { _id: 'order-123' }
});

// Bulk retry: retry all tasks for "VIP" orders.
// This efficiently finds matching tasks and schedules them for execution.
await retryReactiveTasks({
  task: 'sync-order',
  sourceDocFilter: { isVip: true }
});
```

## Graceful Shutdown

When shutting down your application, call `stopReactiveTasks()` in your termination signal handlers to ensure in-progress tasks complete and resources are released cleanly.

**Recommended Pattern:**

```typescript
import { stopReactiveTasks } from 'mongodash';

const gracefulShutdown = async (signal: string) => {
  console.log(`${signal} received, shutting down...`);

  // Set a timeout to force exit if shutdown hangs
  const timeout = setTimeout(() => {
    console.error('Shutdown timeout, forcing exit');
    process.exit(1);
  }, 30000);

  try {
    await stopReactiveTasks(); // Stop tasks BEFORE closing the DB
    await server.close();      // Close your HTTP server
    await db.disconnect();     // Close the database connection

    clearTimeout(timeout);
    process.exit(0);
  } catch (err) {
    console.error('Shutdown error:', err);
    process.exit(1);
  }
};

process.on('SIGTERM', () => gracefulShutdown('SIGTERM')); // Docker, K8s
process.on('SIGINT', () => gracefulShutdown('SIGINT'));   // Ctrl+C
```

> [!IMPORTANT]
> Always call `stopReactiveTasks()` **before** closing database connections, as the stop process needs to communicate with MongoDB.

> [!NOTE]
> **Self-Healing Design**: While a graceful shutdown is the recommended best practice, the system is designed to be resilient. If your application crashes or is forcefully terminated, task locks automatically expire after a timeout (default: 1 minute), allowing other instances to pick up and process the unfinished tasks. Similarly, leadership locks expire, ensuring another instance takes over. This guarantees eventual task processing even in failure scenarios.