@rotorsoft/act 0.6.33 → 0.8.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,199 +1,252 @@
1
- # @rotorsoft/act [![NPM Version](https://img.shields.io/npm/v/@rotorsoft/act.svg)](https://www.npmjs.com/package/@rotorsoft/act)
1
+ # @rotorsoft/act
2
2
 
3
- [Act](../../README.md) core library
3
+ [![NPM Version](https://img.shields.io/npm/v/@rotorsoft/act.svg)](https://www.npmjs.com/package/@rotorsoft/act)
4
+ [![NPM Downloads](https://img.shields.io/npm/dm/@rotorsoft/act.svg)](https://www.npmjs.com/package/@rotorsoft/act)
5
+ [![Build Status](https://github.com/rotorsoft/act-root/actions/workflows/ci-cd.yml/badge.svg?branch=master)](https://github.com/rotorsoft/act-root/actions/workflows/ci-cd.yml)
6
+ [![Coverage Status](https://coveralls.io/repos/github/Rotorsoft/act-root/badge.svg?branch=master)](https://coveralls.io/github/Rotorsoft/act-root?branch=master)
7
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
8
+
9
+ [Act](../../README.md) core library - Event Sourcing + CQRS + Actor Model framework for TypeScript.
10
+
11
+ ## Installation
12
+
13
+ ```sh
14
+ npm install @rotorsoft/act
15
+ # or
16
+ pnpm add @rotorsoft/act
17
+ ```
18
+
19
+ **Requirements:** Node.js >= 22.18.0
20
+
21
+ ## Quick Start
22
+
23
+ ```typescript
24
+ import { act, state } from "@rotorsoft/act";
25
+ import { z } from "zod";
26
+
27
+ const Counter = state("Counter", z.object({ count: z.number() }))
28
+ .init(() => ({ count: 0 }))
29
+ .emits({ Incremented: z.object({ amount: z.number() }) })
30
+ .patch({
31
+ Incremented: (event, state) => ({ count: state.count + event.data.amount }),
32
+ })
33
+ .on("increment", z.object({ by: z.number() }))
34
+ .emit((action) => ["Incremented", { amount: action.by }])
35
+ .build();
36
+
37
+ const app = act().with(Counter).build();
38
+
39
+ await app.do("increment", { stream: "counter1", actor: { id: "1", name: "User" } }, { by: 5 });
40
+ const snapshot = await app.load(Counter, "counter1");
41
+ console.log(snapshot.state.count); // 5
42
+ ```
43
+
44
+ ## Related
45
+
46
+ - [@rotorsoft/act-pg](https://www.npmjs.com/package/@rotorsoft/act-pg) - PostgreSQL adapter for production deployments
47
+ - [Full Documentation](https://rotorsoft.github.io/act-root/)
48
+ - [API Reference](https://rotorsoft.github.io/act-root/docs/api/)
49
+ - [Examples](https://github.com/rotorsoft/act-root/tree/master/packages)
50
+
51
+ ---
4
52
 
5
53
  ## Event Store
6
54
 
7
- The event store in this architecture serves as the single source of truth for system state, persisting all changes as immutable events. It acts as both a storage mechanism and a queryable event history, enabling efficient replayability, debugging, and distributed event-driven processing.
55
+ The event store serves as the single source of truth for system state, persisting all changes as immutable events. It provides both durable storage and a queryable event history, enabling replayability, debugging, and distributed event-driven processing.
8
56
 
9
57
  ### Append-Only, Immutable Event Log
10
58
 
11
- Unlike traditional databases that update records in place, the event store follows an append-only model, meaning:
59
+ Unlike traditional databases that update records in place, the event store follows an append-only model:
12
60
 
13
- - All state changes are recorded as new events, never modifying past data.
14
- - Events are immutable, ensuring a complete historical record of all changes.
15
- - Each event is time-stamped and versioned, allowing precise state reconstruction at any point in time.
61
+ - All state changes are recorded as new events — past data is never modified.
62
+ - Events are immutable, providing a complete historical record.
63
+ - Each event is time-stamped and versioned, allowing state reconstruction at any point in time.
16
64
 
17
- This immutability is critical for auditability, debugging, and ensuring consistent state reconstruction across distributed systems.
65
+ This immutability is critical for auditability, debugging, and consistent state reconstruction across distributed systems.
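The append-only model can be illustrated with a framework-agnostic sketch (plain TypeScript, no library — the event shape here is assumed for illustration, not the Act API): state is never stored directly, only derived by folding over the immutable log.

```typescript
// Minimal append-only event log (illustrative sketch, not the Act API).
type StoredEvent = { name: string; data: Record<string, unknown>; version: number; at: Date };

const log: StoredEvent[] = [];

// Appending never mutates past entries; each event gets the next version.
function append(name: string, data: Record<string, unknown>): StoredEvent {
  const event = { name, data, version: log.length, at: new Date() };
  log.push(event);
  return event;
}

// Current state is derived by replaying the immutable log from the start.
function replay(): { count: number } {
  return log.reduce(
    (state, e) =>
      e.name === "Incremented" ? { count: state.count + (e.data.amount as number) } : state,
    { count: 0 }
  );
}

append("Incremented", { amount: 2 });
append("Incremented", { amount: 3 });
// replay() folds both events: count === 5
```

Because the log is only ever appended to, any past state can be recovered by replaying a prefix of it.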
18
66
 
19
- ### Event Streams for State Aggregation
67
+ ### Event Streams
20
68
 
21
- Events are not stored in a single, monolithic table but are instead grouped into event streams, each representing a unique entity or domain process.
69
+ Events are grouped into streams, each representing a unique entity or domain process:
22
70
 
23
71
  - Each entity instance (e.g., a user, order, or transaction) has its own stream.
24
- - Events within a stream maintain a strict order, ensuring that state is replayed correctly.
25
- - Streams can be dynamically created and partitioned, allowing for horizontal scalability.
72
+ - Events within a stream maintain strict ordering for correct state replay.
73
+ - Streams are created dynamically as new entities appear.
26
74
 
27
75
  For example, an Order aggregate might have a stream containing:
28
76
 
29
- 1. OrderCreated
30
- 2. OrderItemAdded
31
- 3. OrderItemRemoved
32
- 4. OrderShipped
77
+ 1. `OrderCreated`
78
+ 2. `OrderItemAdded`
79
+ 3. `OrderItemRemoved`
80
+ 4. `OrderShipped`
81
+
82
+ Reconstructing the order's state means replaying these events in sequence, producing a deterministic result.
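A sketch of that deterministic fold (the event names come from the list above; the payload shapes are assumed for illustration):

```typescript
// Hypothetical Order events — names from the text, payload shapes assumed.
type OrderEvent =
  | { name: "OrderCreated"; data: { id: string } }
  | { name: "OrderItemAdded"; data: { sku: string } }
  | { name: "OrderItemRemoved"; data: { sku: string } }
  | { name: "OrderShipped"; data: {} };

type OrderState = { id?: string; items: string[]; shipped: boolean };

// Replaying the stream in order always yields the same state (deterministic fold).
function replayOrder(stream: OrderEvent[]): OrderState {
  return stream.reduce<OrderState>(
    (s, e) => {
      switch (e.name) {
        case "OrderCreated":
          return { ...s, id: e.data.id };
        case "OrderItemAdded":
          return { ...s, items: [...s.items, e.data.sku] };
        case "OrderItemRemoved":
          return { ...s, items: s.items.filter((sku) => sku !== e.data.sku) };
        case "OrderShipped":
          return { ...s, shipped: true };
        default:
          return s;
      }
    },
    { items: [], shipped: false }
  );
}
```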
33
83
 
34
- A consumer reconstructing the order’s state would replay these events in order, rather than relying on a snapshot-based approach.
84
+ ### Optimistic Concurrency
35
85
 
36
- ### Optimistic Concurrency and Versioning
86
+ Each event stream maintains a version number for conflict detection:
37
87
 
38
- Each event stream supports optimistic concurrency control by maintaining a version number per stream.
88
+ - When committing events, the system verifies the stream's version matches the expected version.
89
+ - If another process has written events in the meantime, a `ConcurrencyError` is thrown.
90
+ - The caller can retry with the latest stream state, preventing lost updates.
39
91
 
40
- - When appending an event, the system verifies that the stream’s version matches the expected version.
41
- - If another process has written an event in the meantime, the append operation is rejected to prevent race conditions.
42
- - Consumers can retry with the latest stream state, preventing lost updates.
92
+ This ensures strong consistency without heavyweight locks.
43
93
 
44
- This ensures strong consistency in distributed systems without requiring heavyweight locks.
94
+ ```typescript
95
+ // Version is tracked automatically — concurrent writes to the same stream are detected
96
+ await app.do("increment", { stream: "counter1", actor }, { by: 1 });
97
+ ```
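Under the hood, the version check behaves like a compare-and-swap. A framework-agnostic sketch of the mechanism (the `ConcurrencyError` name comes from the text above; the store shape and retry helper are assumptions for illustration, not the Act `Store` interface):

```typescript
// Illustrative optimistic-concurrency commit — not the Act Store interface.
class ConcurrencyError extends Error {}

const stream: { name: string }[] = [];

// Commit succeeds only if the caller's expected version matches the stream head.
function commit(events: { name: string }[], expectedVersion: number): number {
  if (stream.length !== expectedVersion) throw new ConcurrencyError("version mismatch");
  stream.push(...events);
  return stream.length;
}

// Callers retry by re-reading the latest version, so no update is silently lost.
function commitWithRetry(events: { name: string }[], retries = 3): number {
  let expected = stream.length;
  for (let attempt = 0; ; attempt++) {
    try {
      return commit(events, expected);
    } catch (e) {
      if (!(e instanceof ConcurrencyError) || attempt >= retries) throw e;
      expected = stream.length; // reload the latest version and retry
    }
  }
}
```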
45
98
 
46
99
  ### Querying
47
100
 
48
- Events in the store can be retrieved via two primary methods:
101
+ Events can be retrieved in two ways:
102
+
103
+ - **Load** — Fetch and replay all events for a given stream, reconstructing its current state:
104
+ ```typescript
105
+ const snapshot = await app.load(Counter, "counter1");
106
+ ```
107
+ - **Query** — Filter events by stream, name, time range, correlation ID, or position, with support for forward and backward traversal:
108
+ ```typescript
109
+ const events = await app.query_array({ stream: "counter1", names: ["Incremented"], limit: 10 });
110
+ ```
111
+
112
+ ### Snapshots
49
113
 
50
- - Stream-based retrieval (load): Fetching all events for a given stream in order.
51
- - Query: Provides multiple ways to filter and sort events, enabling efficient state reconstruction.
114
+ Replaying all events from the beginning for every request can be expensive for long-lived streams. Act supports configurable snapshotting:
52
115
 
53
- This enables both on-demand querying for state reconstruction and real-time processing for event-driven architectures.
116
+ ```typescript
117
+ const Account = state("Account", schema)
118
+ // ...
119
+ .snap((snap) => snap.patchCount >= 10) // snapshot every 10 events
120
+ .build();
121
+ ```
54
122
 
55
- ### Snapshots for Efficient State Reconstruction
123
+ When loading state, the system first loads the latest snapshot and replays only the events that came after it. For example, instead of replaying 1,000 events for an account balance, the system loads a snapshot and applies only the last few transactions.
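The snapshot-accelerated load can be sketched in plain TypeScript (the snapshot and event shapes here are assumptions for illustration, not the Act internals):

```typescript
// Illustrative snapshot-accelerated load — shapes assumed, not the Act internals.
type Evt = { amount: number };
type Snap = { state: { balance: number }; version: number }; // computed state + events covered

const events: Evt[] = Array.from({ length: 1000 }, () => ({ amount: 1 }));
const snapshot: Snap = { state: { balance: 997 }, version: 997 }; // covers the first 997 events

function load(): { balance: number } {
  // Start from the snapshot and replay only the events recorded after it.
  return events
    .slice(snapshot.version)
    .reduce((s, e) => ({ balance: s.balance + e.amount }), snapshot.state);
}
// load() replays 3 events instead of 1000
```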
56
124
 
57
- Replaying all events from the beginning for every request can be inefficient. To optimize state reconstruction:
125
+ ### Storage Backends
58
126
 
59
- - Snapshots are periodically stored, capturing the computed state of an entity.
60
- - When retrieving an entity’s state, the system first loads the latest snapshot and replays only newer events.
61
- - This reduces query time while maintaining full event traceability.
127
+ The event store uses a port/adapter pattern, making it easy to swap implementations:
62
128
 
63
- For example, instead of replaying 1,000 events for an account balance, the system might load a snapshot with the latest balance and only apply the last few transactions.
129
+ - **InMemoryStore** (included) — Fast, ephemeral storage for development and testing.
130
+ - **[PostgresStore](https://www.npmjs.com/package/@rotorsoft/act-pg)** — Production-ready with ACID guarantees, connection pooling, and distributed processing.
64
131
 
65
- ### Event Storage Backend
132
+ ```typescript
133
+ import { store } from "@rotorsoft/act";
134
+ import { PostgresStore } from "@rotorsoft/act-pg";
66
135
 
67
- The event store can be implemented using different storage solutions, depending on system requirements:
136
+ // Development: in-memory (default)
137
+ const s = store();
68
138
 
69
- - Relational Databases (PostgreSQL, MySQL): Storing events in an append-only table with indexing for fast retrieval.
70
- - NoSQL Databases (Cassandra, DynamoDB, MongoDB): Using key-value or document stores to manage streams efficiently.
71
- - Event-Specific Databases (EventStoreDB, Kafka, Pulsar): Purpose-built for high-performance event sourcing with built-in subscriptions and replication.
139
+ // Production: inject PostgreSQL
140
+ store(new PostgresStore({ host: "localhost", database: "myapp", user: "postgres", password: "secret" }));
141
+ ```
72
142
 
73
- ### Indexing and Retrieval Optimization
143
+ Custom store implementations must fulfill the `Store` interface contract (see [CLAUDE.md](../../CLAUDE.md) or the source for details).
74
144
 
75
- To ensure high performance when querying events:
145
+ ### Performance Considerations
76
146
 
77
- - Events are indexed by stream ID and timestamp for fast lookups.
78
- - Materialized views can be used for common queries (e.g., the latest event per stream).
79
- - Partitioning strategies help distribute event streams across multiple nodes, improving scalability.
147
+ - Events are indexed by stream and version for fast lookups, with additional indexes on timestamps and correlation IDs.
148
+ - Use snapshots for states with long event histories to avoid full replay on every load.
149
+ - The PostgreSQL adapter supports connection pooling and partitioning for high-volume deployments.
150
+ - Active event streams remain in fast storage; consider archival strategies for very large datasets.
80
151
 
81
- ### Retention and Archival
152
+ ## Event-Driven Processing
82
153
 
83
- Since event data grows indefinitely, a retention policy is needed:
154
+ Act handles event-driven workflows through stream leasing and correlation, ensuring ordered, non-duplicated event processing without external message queues. The event store itself acts as the message backbone — events are written once and consumed by multiple independent reaction handlers.
84
155
 
85
- - Active event streams remain in fast storage for quick access.
86
- - Older events are archived in cold storage while keeping snapshots for quick recovery.
87
- - Event compression techniques can be used to reduce storage overhead without losing historical data.
156
+ ### Reactions
88
157
 
89
- ## Event-Driven Processing with Stream Leasing and Correlation
158
+ Reactions are asynchronous handlers triggered by events. They can update other state streams, trigger external integrations, or drive cross-aggregate workflows:
90
159
 
91
- This architecture is designed to handle event-driven workflows efficiently while ensuring ordered and non-duplicated event processing. Instead of a queueing system, it dynamically processes events from an event store and correlates them with specific event streams. The approach improves scalability, fault tolerance, and event visibility while maintaining strong guarantees for event processing.
160
+ ```typescript
161
+ const app = act()
162
+ .with(Account)
163
+ .with(AuditLog)
164
+ .on("Deposited")
165
+ .do((event) => [{ name: "LogEntry", data: { message: `Deposit: ${event.data.amount}` } }])
166
+ .to((event) => `audit-${event.stream}`) // resolver determines target stream
167
+ .build();
168
+ ```
92
169
 
93
- ### Event-Centric Processing Instead of Queues
170
+ Resolvers dynamically determine which stream a reaction targets, enabling flexible event routing without hardcoded dependencies. They can include source regex patterns to limit which streams trigger the reaction.
94
171
 
95
- Rather than storing messages in a queue and tracking explicit positions, this architecture treats the event store as the single source of truth. Events are written once and can be consumed by multiple independent consumers. This decoupling allows:
172
+ ### Stream Leasing
96
173
 
97
- - Independent consumers that can process the same event stream in different ways.
98
- - Efficient event querying without maintaining redundant queue states.
99
- - Flexible event correlation, where consumers can derive dependencies dynamically rather than following a strict order.
174
+ Rather than processing events immediately, Act uses a leasing mechanism to coordinate distributed consumers. The application fetches events and pushes them to reaction handlers by leasing correlated streams:
100
175
 
101
- ### Stream Leasing for Ordered Event Processing
176
+ - **Per-stream ordering** — Events within a stream are processed sequentially.
177
+ - **Temporary ownership** — Leases expire after a configurable duration, allowing re-processing if a consumer fails.
178
+ - **Backpressure** — Only a limited number of leases can be active at a time, preventing consumer overload.
102
179
 
103
- Each consumer does not simply fetch and process events immediately; instead, events are fetched by the application and pushed to consumers by leasing the events of each correlated stream. Leasing prevents multiple consumers from processing the same event concurrently, ensuring:
180
+ If a lease expires due to failure, the stream is automatically re-leased to another consumer, ensuring no event is permanently lost.
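Leasing amounts to temporary, expiring ownership of a stream. A framework-agnostic sketch (names, TTL, and table shape are assumptions for illustration):

```typescript
// Illustrative stream-lease table — names and shapes assumed for this sketch.
type Lease = { consumer: string; expiresAt: number };
const leases = new Map<string, Lease>();

// A consumer acquires a stream only if it is unleased or the lease has expired.
function tryLease(stream: string, consumer: string, now: number, ttlMs = 30_000): boolean {
  const current = leases.get(stream);
  if (current && current.expiresAt > now) return false; // still owned elsewhere
  leases.set(stream, { consumer, expiresAt: now + ttlMs });
  return true;
}

// Acknowledging releases the lease once the stream's events are processed.
function ack(stream: string, consumer: string): void {
  if (leases.get(stream)?.consumer === consumer) leases.delete(stream);
}
```

An expired lease simply stops blocking other consumers, which is what makes automatic re-leasing after a failure possible.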
104
181
 
105
- - Per-stream ordering, where events related to a specific stream are processed sequentially.
106
- - Temporary ownership of events, allowing retries if a lease expires before acknowledgment.
107
- - Backpressure control, as only a limited number of leases can be active at a time, preventing overwhelming consumers.
182
+ ### Event Correlation
108
183
 
109
- If a lease expires due to failure or unresponsiveness, the event can be re-leased to another consumer, ensuring no event is permanently lost.
184
+ Act tracks causation chains across actions and reactions using correlation metadata:
110
185
 
111
- ### Event Correlation and Dynamic Stream Resolution
186
+ - Each action/event carries a `correlation` ID (request trace) and `causation` ID (what triggered it).
187
+ - Reactions can discover new streams to process by querying uncommitted events with matching correlation IDs.
188
+ - This enables full workflow tracing — from the initial user action through every downstream reaction.
112
189
 
113
- A key challenge in event-driven systems is understanding which stream an event belongs to and how it should be processed. Instead of hardcoding event routing logic, this system enables:
190
+ ```typescript
191
+ // Correlate events to discover new streams for processing
192
+ await app.correlate();
114
193
 
115
- - Dynamic correlation, where events are linked to streams based on resolver functions.
116
- - Multi-stream dependency tracking, allowing one event to trigger multiple related processes.
117
- - Implicit event grouping, ensuring that related events are processed in the correct sequence.
194
+ // Or run periodic background correlation
195
+ app.start_correlations();
196
+ ```
118
197
 
119
- For example, if an event pertains to a transaction across multiple users, the system can determine which user streams should handle it dynamically.
198
+ ### Parallel Execution with Retry and Blocking
120
199
 
121
- ### Parallel Execution with Retry and Blocking Strategies
200
+ While events within a stream are processed in order, multiple streams can be processed concurrently:
122
201
 
123
- While events are processed in an ordered fashion within a stream, multiple streams can be processed concurrently. The architecture includes:
202
+ - **Parallel handling** — Multiple streams are drained simultaneously for throughput.
203
+ - **Retry with backoff** — Transient failures trigger retries before escalation.
204
+ - **Stream blocking** — After exhausting retries, a stream is blocked to prevent cascading errors. Blocked streams can be inspected and unblocked manually.
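The retry-then-block policy above can be sketched as follows (the thresholds, backoff base, and helper names are assumptions for illustration, not Act's configuration API):

```typescript
// Illustrative retry-then-block policy per stream — thresholds assumed.
const retryCounts = new Map<string, number>();
const blocked = new Set<string>();

// Returns the delay before the next attempt, or blocks the stream after maxRetries.
function onFailure(stream: string, maxRetries = 3, baseMs = 100): number | "blocked" {
  const attempt = (retryCounts.get(stream) ?? 0) + 1;
  retryCounts.set(stream, attempt);
  if (attempt > maxRetries) {
    blocked.add(stream); // stop draining this stream until manually unblocked
    return "blocked";
  }
  return baseMs * 2 ** (attempt - 1); // exponential backoff: 100, 200, 400...
}

function unblock(stream: string): void {
  blocked.delete(stream);
  retryCounts.delete(stream);
}
```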
124
205
 
125
- - Parallel event handling, improving throughput by distributing processing load.
126
- - Retry mechanisms with exponential backoff, ensuring transient failures do not cause data loss.
127
- - Blocking strategies, where streams with consistent failures can be temporarily halted to prevent cascading errors.
206
+ ### Draining
128
207
 
129
- A stream is only blocked after exhausting a configurable number of retries, reducing the risk of infinite failure loops.
208
+ The `drain` method processes pending reactions across all subscribed streams:
130
209
 
131
- ### Draining and Acknowledgment for Fault Tolerance
210
+ ```typescript
211
+ // Process pending reactions
212
+ await app.drain({ streamLimit: 100, eventLimit: 1000 });
213
+ ```
132
214
 
133
- Once an event has been successfully processed, it is acknowledged to release its lease.
134
- This design ensures:
215
+ Drain cycles continue until all reactions have caught up to the latest events. Consumers only process new work — acknowledged events are skipped, and failed events are re-leased automatically.
135
216
 
136
- - Consumers only process new work, reducing idle resource usage.
137
- - Failure recovery without manual intervention, as failed events can be re-leased automatically.
138
- - Clear event lifecycle management, with visibility into pending, processing, and completed events.
217
+ ### Real-Time Notifications
139
218
 
140
- ### Persistent Event Store with Optimized Querying
219
+ When using the PostgreSQL backend, the store emits `NOTIFY` events on each commit, enabling consumers to react immediately via `LISTEN` rather than polling. This reduces latency and unnecessary database queries in production deployments.
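The push model can be sketched without a database (this mimics the LISTEN/NOTIFY handshake in plain TypeScript; the function names are assumptions for illustration):

```typescript
// Illustrative push-based notification — mimics Postgres LISTEN/NOTIFY, no polling.
type Listener = (stream: string) => void;
const listeners: Listener[] = [];

// LISTEN: register interest in commit notifications.
function listen(fn: Listener): void {
  listeners.push(fn);
}

// NOTIFY: after persisting events, wake every listener immediately.
function commitWithNotify(stream: string): void {
  // ...persist events transactionally, then notify...
  for (const fn of listeners) fn(stream);
}
```

Consumers woken this way can start a drain cycle right away instead of polling on a timer.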
141
220
 
142
- Since events are stored persistently rather than transiently queued, the system must efficiently query and retrieve relevant events. The event store supports:
221
+ ## Dual-Frontier Drain
143
222
 
144
- - Efficient filtering, allowing consumers to retrieve only the events relevant to them.
145
- - Indexing strategies for fast lookups, optimizing performance for high-volume event processing.
146
- - Retention policies, ensuring historical event data is accessible for audits without overloading the system.
223
+ In event-sourced systems, consumers often subscribe to multiple event streams that advance at different rates: some produce bursts of events, while others stay idle for long periods. New streams can also be discovered while processing events from existing streams.
147
224
 
148
- ### Real-Time Notifications and Asynchronous Processing
225
+ Naive approaches have fundamental trade-offs:
149
226
 
150
- To reduce polling overhead, the system can utilize real-time event notifications via database triggers or a pub-sub mechanism. This allows consumers to:
227
+ - Strictly serial processing across all streams blocks fast streams behind slow ones.
228
+ - Fully independent processing risks inconsistent cross-stream states.
229
+ - Prioritizing new streams over existing ones risks missing important events.
151
230
 
152
- - React to new events immediately, improving responsiveness.
153
- - Reduce unnecessary database queries, optimizing system performance.
154
- - Enable distributed event processing, where multiple instances can coordinate workload distribution.
231
+ Act addresses this with the **Dual-Frontier Drain** strategy.
155
232
 
156
- ### Scalable Consumer Management
233
+ ### How It Works
157
234
 
158
- As the system scales, multiple consumer instances may need to process events in parallel. The architecture ensures that:
235
+ Each drain cycle divides streams into two sets:
159
236
 
160
- - Each consumer instance handles an exclusive subset of events, avoiding conflicts.
161
- - Leases distribute events evenly across consumers, preventing hotspots.
162
- - Idle consumers are dynamically assigned new workloads, ensuring efficient resource utilization.
237
+ - **Leading frontier** — Streams already near the latest known event (the global frontier). These continue processing without waiting.
238
+ - **Lagging frontier** — Streams that are behind or newly discovered. These are advanced quickly to catch up.
163
239
 
164
- ## Dual-Frontier Drain
240
+ **Fast-forwarding:** If a lagging stream has no matching events in the current window, its watermark is advanced using the leading frontier's position. This prevents stale streams from blocking global convergence.
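The frontier split and fast-forward step can be sketched as watermark bookkeeping (the slack threshold and function names are assumptions for illustration, not Act's internals):

```typescript
// Illustrative dual-frontier split and fast-forward — thresholds assumed.
type Watermarks = Map<string, number>; // per-stream position (watermark)

// Split streams into leading (near the global frontier) and lagging sets.
function splitFrontiers(watermarks: Watermarks, globalFrontier: number, slack = 10) {
  const leading: string[] = [];
  const lagging: string[] = [];
  for (const [stream, at] of watermarks) {
    (globalFrontier - at <= slack ? leading : lagging).push(stream);
  }
  return { leading, lagging };
}

// A lagging stream with no matching events in the window is fast-forwarded
// to the leading position so it cannot block global convergence.
function fastForward(watermarks: Watermarks, stream: string, leadingAt: number, hasMatches: boolean) {
  if (!hasMatches) watermarks.set(stream, leadingAt);
}
```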
165
241
 
166
- In event-sourced systems, consumers often subscribe to multiple event streams.
167
- These streams advance at different rates: some produce bursts of events, while others may stay idle for long periods.
168
- New streams can also be discovered while processing events from existing streams.
242
+ **Dynamic correlation:** Event resolvers dynamically discover and add new streams as events arrive. Resolvers can include source regex patterns to limit which streams are matched. When a new matching stream is discovered, it joins the drain immediately.
169
243
 
170
- The following issues arise:
244
+ ### Why It Matters
171
245
 
172
- - Strictly serial processing across all streams would block fast streams.
173
- - Fully independent processing risks inconsistent states.
174
- - Prioritizing new streams over existing ones risks missing important events.
246
+ - **Fast recovery** — Newly discovered or previously idle streams catch up quickly.
247
+ - **No global blocking** — Fast streams are never paused to wait for slower ones.
248
+ - **Eventual convergence** — All reactions end up aligned on the same global event position.
249
+
250
+ ## License
175
251
 
176
- Act addresses this with the Dual-Frontier Drain strategy.
177
-
178
- ### Key features
179
-
180
- - Dynamic correlation
181
- - Event resolvers dynamically correlate streams as new events arrive.
182
- - Resolvers can include a source regex to limit matched streams by name.
183
- - When a new stream matching the resolver is discovered, it is added immediately to the drain process.
184
- - Dual frontiers
185
- - Each drain cycle calculates two sets of streams:
186
- - Leading frontier – streams already near the latest known event (the global frontier).
187
- - Lagging frontier – streams behind or newly discovered.
188
- - Fast-forwarding lagging streams
189
- - Lagging streams are advanced quickly. If they have no matching events in the current window, their watermarks are advanced using the leading watermarks.
190
- - This prevents stale streams from blocking global convergence.
191
- - Parallel processing
192
- - While lagging streams catch up, leading streams continue processing without waiting.
193
- - All reactions eventually converge on the global frontier.
194
-
195
- ### Why it matters
196
-
197
- - Fast recovery: Newly discovered or previously idle streams catch up quickly.
198
- - No global blocking: Fast streams are never paused to wait for slower ones.
199
- - Consistent state: All reactions end up aligned on the same event position.
252
+ [MIT](https://github.com/rotorsoft/act-root/blob/master/LICENSE)