@rotorsoft/act 0.6.33 → 0.7.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +179 -126
- package/dist/.tsbuildinfo +1 -1
- package/dist/@types/act-builder.d.ts +25 -1
- package/dist/@types/act-builder.d.ts.map +1 -1
- package/dist/@types/act.d.ts +3 -1
- package/dist/@types/act.d.ts.map +1 -1
- package/dist/@types/state-builder.d.ts +1 -1
- package/dist/@types/state-builder.d.ts.map +1 -1
- package/dist/index.cjs +41 -7
- package/dist/index.cjs.map +1 -1
- package/dist/index.js +41 -7
- package/dist/index.js.map +1 -1
- package/package.json +1 -1
package/README.md
CHANGED
# @rotorsoft/act

[](https://www.npmjs.com/package/@rotorsoft/act)
[](https://www.npmjs.com/package/@rotorsoft/act)
[](https://github.com/rotorsoft/act-root/actions/workflows/ci-cd.yml)
[](https://coveralls.io/github/Rotorsoft/act-root?branch=master)
[](https://opensource.org/licenses/MIT)

[Act](../../README.md) core library - Event Sourcing + CQRS + Actor Model framework for TypeScript.

## Installation

```sh
npm install @rotorsoft/act
# or
pnpm add @rotorsoft/act
```

**Requirements:** Node.js >= 22.18.0

## Quick Start

```typescript
import { act, state } from "@rotorsoft/act";
import { z } from "zod";

const Counter = state("Counter", z.object({ count: z.number() }))
  .init(() => ({ count: 0 }))
  .emits({ Incremented: z.object({ amount: z.number() }) })
  .patch({
    Incremented: (event, state) => ({ count: state.count + event.data.amount }),
  })
  .on("increment", z.object({ by: z.number() }))
  .emit((action) => ["Incremented", { amount: action.by }])
  .build();

const app = act().with(Counter).build();

await app.do("increment", { stream: "counter1", actor: { id: "1", name: "User" } }, { by: 5 });
const snapshot = await app.load(Counter, "counter1");
console.log(snapshot.state.count); // 5
```

## Related

- [@rotorsoft/act-pg](https://www.npmjs.com/package/@rotorsoft/act-pg) - PostgreSQL adapter for production deployments
- [Full Documentation](https://rotorsoft.github.io/act-root/)
- [API Reference](https://rotorsoft.github.io/act-root/docs/api/)
- [Examples](https://github.com/rotorsoft/act-root/tree/master/packages)

---

## Event Store

The event store serves as the single source of truth for system state, persisting all changes as immutable events. It provides both durable storage and a queryable event history, enabling replayability, debugging, and distributed event-driven processing.

### Append-Only, Immutable Event Log

Unlike traditional databases that update records in place, the event store follows an append-only model:

- All state changes are recorded as new events — past data is never modified.
- Events are immutable, providing a complete historical record.
- Each event is time-stamped and versioned, allowing state reconstruction at any point in time.

This immutability is critical for auditability, debugging, and consistent state reconstruction across distributed systems.

### Event Streams

Events are grouped into streams, each representing a unique entity or domain process:

- Each entity instance (e.g., a user, order, or transaction) has its own stream.
- Events within a stream maintain strict ordering for correct state replay.
- Streams are created dynamically as new entities appear.

For example, an Order aggregate might have a stream containing:

1. `OrderCreated`
2. `OrderItemAdded`
3. `OrderItemRemoved`
4. `OrderShipped`

Reconstructing the order's state means replaying these events in sequence, producing a deterministic result.
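Deterministic replay can be sketched without the library as a fold over the ordered event list. The event and state shapes below are illustrative, not Act's actual types; only the event names mirror the list above.

```typescript
// Library-agnostic sketch of deterministic replay: fold the ordered event
// list over an empty initial state.
type OrderEvent =
  | { name: "OrderCreated"; data: { id: string } }
  | { name: "OrderItemAdded"; data: { sku: string } }
  | { name: "OrderItemRemoved"; data: { sku: string } }
  | { name: "OrderShipped"; data: Record<string, never> };

type OrderState = { id: string; items: string[]; shipped: boolean };

function replay(events: OrderEvent[]): OrderState {
  return events.reduce<OrderState>(
    (s, e) => {
      switch (e.name) {
        case "OrderCreated":
          return { ...s, id: e.data.id };
        case "OrderItemAdded":
          return { ...s, items: [...s.items, e.data.sku] };
        case "OrderItemRemoved":
          return { ...s, items: s.items.filter((sku) => sku !== e.data.sku) };
        case "OrderShipped":
          return { ...s, shipped: true };
      }
    },
    { id: "", items: [], shipped: false }
  );
}

const state = replay([
  { name: "OrderCreated", data: { id: "order1" } },
  { name: "OrderItemAdded", data: { sku: "A" } },
  { name: "OrderItemRemoved", data: { sku: "A" } },
  { name: "OrderShipped", data: {} },
]);
// state: { id: "order1", items: [], shipped: true }
```

Replaying the same events always yields the same state, which is why ordering within a stream matters.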
### Optimistic Concurrency

Each event stream maintains a version number for conflict detection:

- When committing events, the system verifies the stream's version matches the expected version.
- If another process has written events in the meantime, a `ConcurrencyError` is thrown.
- The caller can retry with the latest stream state, preventing lost updates.

This ensures strong consistency without heavyweight locks.

```typescript
// Version is tracked automatically — concurrent writes to the same stream are detected
await app.do("increment", { stream: "counter1", actor }, { by: 1 });
```
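The retry suggested by the bullets above can be wrapped in a small helper. This is an illustrative sketch, not part of the library's API; the `ConcurrencyError` class below is a stand-in that only mirrors the framework's error by name.

```typescript
// Illustrative retry helper for version conflicts (assumed names, not the
// library's API).
class ConcurrencyError extends Error {} // stand-in for the framework's error

async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  for (let i = 0; ; i++) {
    try {
      return await fn();
    } catch (err) {
      // Retry only on version conflicts, and only up to `attempts` tries.
      if (!(err instanceof ConcurrencyError) || i + 1 >= attempts) throw err;
    }
  }
}
```

A caller would wrap the command, e.g. `withRetry(() => app.do("increment", target, { by: 1 }))`, re-reading the latest state between attempts if the action depends on it.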
### Querying

Events can be retrieved in two ways:

- **Load** — Fetch and replay all events for a given stream, reconstructing its current state:

  ```typescript
  const snapshot = await app.load(Counter, "counter1");
  ```

- **Query** — Filter events by stream, name, time range, correlation ID, or position, with support for forward and backward traversal:

  ```typescript
  const events = await app.query_array({ stream: "counter1", names: ["Incremented"], limit: 10 });
  ```

### Snapshots

Replaying all events from the beginning for every request can be expensive for long-lived streams. Act supports configurable snapshotting:

```typescript
const Account = state("Account", schema)
  // ...
  .snap((snap) => snap.patchCount >= 10) // snapshot every 10 events
  .build();
```

When loading state, the system first loads the latest snapshot and replays only the events that came after it. For example, instead of replaying 1,000 events for an account balance, the system loads a snapshot and applies only the last few transactions.
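The snapshot-then-tail replay can be illustrated generically. The shapes below are simplified stand-ins, not Act's actual snapshot types.

```typescript
// Library-agnostic sketch of snapshot-accelerated loading: restore the latest
// snapshot, then replay only events recorded after its version.
type Snapshot<S> = { state: S; version: number };
type Stored<S> = { version: number; apply: (s: S) => S };

function loadWithSnapshot<S>(
  snapshot: Snapshot<S> | undefined,
  events: Stored<S>[],
  initial: S
): S {
  const fromVersion = snapshot ? snapshot.version : -1;
  let state = snapshot ? snapshot.state : initial;
  for (const e of events) {
    if (e.version > fromVersion) state = e.apply(state); // replay only the tail
  }
  return state;
}
```

Without a snapshot the whole history is folded; with one, only the events past the snapshot's version are applied.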
### Storage Backends

The event store uses a port/adapter pattern, making it easy to swap implementations:

- **InMemoryStore** (included) — Fast, ephemeral storage for development and testing.
- **[PostgresStore](https://www.npmjs.com/package/@rotorsoft/act-pg)** — Production-ready with ACID guarantees, connection pooling, and distributed processing.

```typescript
import { store } from "@rotorsoft/act";
import { PostgresStore } from "@rotorsoft/act-pg";

// Development: in-memory (default)
const s = store();

// Production: inject PostgreSQL
store(new PostgresStore({ host: "localhost", database: "myapp", user: "postgres", password: "secret" }));
```

Custom store implementations must fulfill the `Store` interface contract (see [CLAUDE.md](../../CLAUDE.md) or the source for details).

### Performance Considerations

- Events are indexed by stream and version for fast lookups, with additional indexes on timestamps and correlation IDs.
- Use snapshots for states with long event histories to avoid full replay on every load.
- The PostgreSQL adapter supports connection pooling and partitioning for high-volume deployments.
- Active event streams remain in fast storage; consider archival strategies for very large datasets.

## Event-Driven Processing

Act handles event-driven workflows through stream leasing and correlation, ensuring ordered, non-duplicated event processing without external message queues. The event store itself acts as the message backbone — events are written once and consumed by multiple independent reaction handlers.

### Reactions

Reactions are asynchronous handlers triggered by events. They can update other state streams, trigger external integrations, or drive cross-aggregate workflows:

```typescript
const app = act()
  .with(Account)
  .with(AuditLog)
  .on("Deposited")
  .do((event) => [{ name: "LogEntry", data: { message: `Deposit: ${event.data.amount}` } }])
  .to((event) => `audit-${event.stream}`) // resolver determines target stream
  .build();
```

Resolvers dynamically determine which stream a reaction targets, enabling flexible event routing without hardcoded dependencies. They can include source regex patterns to limit which streams trigger the reaction.

### Stream Leasing

Rather than processing events immediately, Act uses a leasing mechanism to coordinate distributed consumers. The application fetches events and pushes them to reaction handlers by leasing correlated streams:

- **Per-stream ordering** — Events within a stream are processed sequentially.
- **Temporary ownership** — Leases expire after a configurable duration, allowing re-processing if a consumer fails.
- **Backpressure** — Only a limited number of leases can be active at a time, preventing consumer overload.

If a lease expires due to failure, the stream is automatically re-leased to another consumer, ensuring no event is permanently lost.
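The leasing idea can be sketched minimally. The `Lease` shape and `LeaseTable` below are illustrative, not the library's actual types: a stream is temporarily owned by one consumer and becomes available again once its lease expires.

```typescript
// Minimal sketch of stream leasing with expiry (assumed shapes).
type Lease = { stream: string; owner: string; expiresAt: number };

class LeaseTable {
  private leases = new Map<string, Lease>();

  // Grant the lease if the stream is free or its current lease has expired.
  tryLease(stream: string, owner: string, ttlMs: number, now = Date.now()): boolean {
    const current = this.leases.get(stream);
    if (current && current.expiresAt > now) return false; // still owned elsewhere
    this.leases.set(stream, { stream, owner, expiresAt: now + ttlMs });
    return true;
  }

  // Acknowledge completion and free the stream for the next consumer.
  release(stream: string, owner: string): void {
    if (this.leases.get(stream)?.owner === owner) this.leases.delete(stream);
  }
}
```

Because a lease that is never released simply times out, a crashed consumer cannot strand its stream.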
### Event Correlation

Act tracks causation chains across actions and reactions using correlation metadata:

- Each action/event carries a `correlation` ID (request trace) and `causation` ID (what triggered it).
- Reactions can discover new streams to process by querying uncommitted events with matching correlation IDs.
- This enables full workflow tracing — from the initial user action through every downstream reaction.

```typescript
// Correlate events to discover new streams for processing
await app.correlate();

// Or run periodic background correlation
app.start_correlations();
```

### Parallel Execution with Retry and Blocking

While events within a stream are processed in order, multiple streams can be processed concurrently:

- **Parallel handling** — Multiple streams are drained simultaneously for throughput.
- **Retry with backoff** — Transient failures trigger retries before escalation.
- **Stream blocking** — After exhausting retries, a stream is blocked to prevent cascading errors. Blocked streams can be inspected and unblocked manually.
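The retry-then-block bookkeeping can be sketched per stream. Names and limits below are assumptions, not the library's API: transient failures are retried (with a growing delay chosen by the caller), and a stream that keeps failing is blocked so healthy streams keep flowing.

```typescript
// Illustrative per-stream failure bookkeeping (assumed names and limits).
const MAX_RETRIES = 3;
const blocked = new Set<string>();
const retries = new Map<string, number>();

function onStreamFailure(stream: string): "retry" | "blocked" {
  const n = (retries.get(stream) ?? 0) + 1;
  retries.set(stream, n);
  if (n >= MAX_RETRIES) {
    blocked.add(stream); // stop leasing this stream until manually unblocked
    return "blocked";
  }
  return "retry"; // caller backs off, e.g. 2 ** n * 100 ms, before re-leasing
}
```

Blocking is per stream, so one poisoned stream never halts the others.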
### Draining

The `drain` method processes pending reactions across all subscribed streams:

```typescript
// Process pending reactions
await app.drain({ streamLimit: 100, eventLimit: 1000 });
```

Drain cycles continue until all reactions have caught up to the latest events. Consumers only process new work — acknowledged events are skipped, and failed events are re-leased automatically.

### Real-Time Notifications

When using the PostgreSQL backend, the store emits `NOTIFY` events on each commit, enabling consumers to react immediately via `LISTEN` rather than polling. This reduces latency and unnecessary database queries in production deployments.

## Dual-Frontier Drain

In event-sourced systems, consumers often subscribe to multiple event streams that advance at different rates: some produce bursts of events, while others stay idle for long periods. New streams can also be discovered while processing events from existing streams.

Naive approaches have fundamental trade-offs:

- Strictly serial processing across all streams blocks fast streams behind slow ones.
- Fully independent processing risks inconsistent cross-stream states.
- Prioritizing new streams over existing ones risks missing important events.

Act addresses this with the **Dual-Frontier Drain** strategy.

### How It Works

Each drain cycle divides streams into two sets:

- **Leading frontier** — Streams already near the latest known event (the global frontier). These continue processing without waiting.
- **Lagging frontier** — Streams that are behind or newly discovered. These are advanced quickly to catch up.

**Fast-forwarding:** If a lagging stream has no matching events in the current window, its watermark is advanced using the leading frontier's position. This prevents stale streams from blocking global convergence.

**Dynamic correlation:** Event resolvers dynamically discover and add new streams as events arrive. Resolvers can include source regex patterns to limit which streams are matched. When a new matching stream is discovered, it joins the drain immediately.
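The partitioning and fast-forwarding steps can be sketched conceptually. The watermark map and the `nearness` threshold below are illustrative assumptions, not Act's internals; each stream's watermark is the global event position it has processed up to.

```typescript
// Conceptual sketch of one dual-frontier drain cycle (assumed shapes).
function partition(
  watermarks: Map<string, number>,
  globalFrontier: number,
  nearness: number
): { leading: string[]; lagging: string[] } {
  const leading: string[] = [];
  const lagging: string[] = [];
  for (const [stream, at] of watermarks) {
    // Streams close to the global frontier keep processing normally;
    // the rest are caught up aggressively.
    if (globalFrontier - at <= nearness) leading.push(stream);
    else lagging.push(stream);
  }
  return { leading, lagging };
}

// Fast-forwarding: a lagging stream with no matching events in the current
// window jumps its watermark to the leading frontier's position instead of
// crawling event by event.
function fastForward(
  watermarks: Map<string, number>,
  stream: string,
  leadingFrontier: number
): void {
  const at = watermarks.get(stream) ?? -1;
  if (leadingFrontier > at) watermarks.set(stream, leadingFrontier);
}
```

Repeating cycles of partition, process, and fast-forward is what drives every stream toward the same global position.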
### Why It Matters

- **Fast recovery** — Newly discovered or previously idle streams catch up quickly.
- **No global blocking** — Fast streams are never paused to wait for slower ones.
- **Eventual convergence** — All reactions end up aligned on the same global event position.

## License

[MIT](https://github.com/rotorsoft/act-root/blob/master/LICENSE)