@rotorsoft/act 0.19.1 → 0.21.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +44 -13
- package/dist/.tsbuildinfo +1 -1
- package/dist/@types/act.d.ts +2 -2
- package/dist/@types/act.d.ts.map +1 -1
- package/dist/@types/adapters/InMemoryCache.d.ts +29 -0
- package/dist/@types/adapters/InMemoryCache.d.ts.map +1 -0
- package/dist/@types/adapters/InMemoryStore.d.ts +15 -15
- package/dist/@types/adapters/InMemoryStore.d.ts.map +1 -1
- package/dist/@types/adapters/index.d.ts +3 -0
- package/dist/@types/adapters/index.d.ts.map +1 -0
- package/dist/@types/event-sourcing.d.ts +4 -0
- package/dist/@types/event-sourcing.d.ts.map +1 -1
- package/dist/@types/index.d.ts +1 -0
- package/dist/@types/index.d.ts.map +1 -1
- package/dist/@types/ports.d.ts +18 -2
- package/dist/@types/ports.d.ts.map +1 -1
- package/dist/@types/types/ports.d.ts +75 -48
- package/dist/@types/types/ports.d.ts.map +1 -1
- package/dist/@types/types/reaction.d.ts +0 -13
- package/dist/@types/types/reaction.d.ts.map +1 -1
- package/dist/index.cjs +200 -121
- package/dist/index.cjs.map +1 -1
- package/dist/index.js +197 -121
- package/dist/index.js.map +1 -1
- package/package.json +1 -1
package/README.md
CHANGED
@@ -169,16 +169,46 @@ store(new PostgresStore({ host: "localhost", database: "myapp", user: "postgres"
 
 Custom store implementations must fulfill the `Store` interface contract (see [CLAUDE.md](../../CLAUDE.md) or the source for details).
 
+### Cache
+
+Cache is always-on with `InMemoryCache` as the default. It avoids full event replay on every `load()` by storing the latest state checkpoint in memory. On `load()`, the cache is checked first — only events committed after the cached position are replayed from the store. Actions update the cache automatically after each successful commit and invalidate on concurrency errors.
+
+```typescript
+import { cache } from "@rotorsoft/act";
+
+// Cache is active by default (InMemoryCache, LRU, maxSize 1000)
+// load() and action() use it transparently — no setup needed
+
+// Replace with a custom adapter (e.g., Redis) for distributed caching:
+cache(new RedisCache({ url: "redis://localhost:6379" }));
+```
+
+The `Cache` interface is async, so you can implement adapters backed by Redis or other external caches. `InMemoryCache` is included as a fast, in-process LRU implementation.
+
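The exact `Cache` interface is not shown in this diff, but the shape of a custom adapter can be sketched. The following is a hypothetical illustration only — `Checkpoint`, `CacheLike`, and the `get`/`set`/`del` method names are assumptions, not act's actual types:

```typescript
// Hypothetical sketch — the real `Cache` interface in @rotorsoft/act may differ.
// Assumes async get/set/del keyed by stream name, holding a state checkpoint.

type Checkpoint<S> = { state: S; position: number };

interface CacheLike<S> {
  get(stream: string): Promise<Checkpoint<S> | undefined>;
  set(stream: string, snap: Checkpoint<S>): Promise<void>;
  del(stream: string): Promise<void>;
}

// Minimal in-process adapter; a Redis-backed one would serialize checkpoints
// to JSON and issue GET/SET/DEL against the same method shape.
class MapCache<S> implements CacheLike<S> {
  private map = new Map<string, Checkpoint<S>>();
  async get(stream: string) {
    return this.map.get(stream);
  }
  async set(stream: string, snap: Checkpoint<S>) {
    this.map.set(stream, snap);
  }
  async del(stream: string) {
    // invalidation on ERR_CONCURRENCY would call this
    this.map.delete(stream);
  }
}
```

Because every method is async, swapping the in-memory map for a network-backed store changes no call sites.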
+#### Snapshots vs Cache
+
+Cache and snapshots are the same checkpoint pattern at different layers:
+
+- **Cache** (in-memory) — checked first on every `load()`. Eliminates store round-trips entirely on warm hits.
+- **Snapshots** (in-store) — written to the event store as `__snapshot__` events. Used as a fallback on cache miss (cold start, eviction, process restart) to avoid replaying the entire event stream.
+
+On cache hit, snapshot events in the store are skipped (`with_snaps: false`). On cache miss, the store is queried with `with_snaps: true` to find the latest snapshot and replay only events after it.
+
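The checkpoint-plus-partial-replay idea behind both layers can be shown independently of the library. This is an illustrative sketch with made-up names (`loadFrom`, a toy counter state) — not act's actual reducer API:

```typescript
// Illustration of checkpoint + partial replay — not act's actual API.
type Event = { version: number; count: number };
type State = { total: number };

// Fold events onto a state, starting from an optional checkpoint:
// only events with version > checkpoint.version are applied.
function loadFrom(
  checkpoint: { state: State; version: number } | undefined,
  events: Event[]
): State {
  let state = checkpoint?.state ?? { total: 0 };
  const after = checkpoint?.version ?? -1;
  for (const e of events)
    if (e.version > after) state = { total: state.total + e.count };
  return state;
}
```

A cold load folds the whole stream; a warm load (cache hit or snapshot found) folds only the tail after the checkpoint, which is where the speedup comes from.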
 ### Performance Considerations
 
+- **Cache is always-on** — warm reads skip the store entirely, delivering consistent throughput (7-46x faster than uncached). No configuration needed.
+- **Use snapshots for cold-start resilience** — on process restart or LRU eviction, snaps limit how much of the event stream must be replayed. Set `.snap((s) => s.patches >= 50)` for most use cases.
+- **Cache invalidation is automatic** — concurrency errors (`ERR_CONCURRENCY`) invalidate the stale cache entry, forcing a fresh load from the store on the next access.
+- **Snap writes are fire-and-forget** — `snap()` commits to the store asynchronously after `action()` returns. The cache is updated synchronously within `action()`, so subsequent reads see the post-snap state immediately without waiting for the store write.
+- **Atomic claim eliminates poll→lease overhead** — `claim()` fuses discovery and locking into a single SQL transaction using `FOR UPDATE SKIP LOCKED`, saving one round-trip per drain cycle and eliminating contention between workers.
 - Events are indexed by stream and version for fast lookups, with additional indexes on timestamps and correlation IDs.
-- Use snapshots for states with long event histories to avoid full replay on every load.
 - The PostgreSQL adapter supports connection pooling and partitioning for high-volume deployments.
-
+
+For detailed benchmark data and performance evolution history, see [PERFORMANCE.md](PERFORMANCE.md).
 
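The `.snap()` predicate referenced above is just a function over snapshot stats. A standalone sketch, where the `patches` field name is taken from the README's own example and its meaning (events applied since the last snapshot) is an assumption:

```typescript
// Sketch of a snap predicate like `.snap((s) => s.patches >= 50)`.
// `patches` is assumed to count events applied since the last snapshot.
type SnapStats = { patches: number };

const shouldSnap = (s: SnapStats): boolean => s.patches >= 50;
```

Tuning the threshold trades snapshot write volume against cold-start replay length.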
 ## Event-Driven Processing
 
-Act handles event-driven workflows through stream
+Act handles event-driven workflows through atomic stream claiming and correlation, ensuring ordered, non-duplicated event processing without external message queues. The event store itself acts as the message backbone — events are written once and consumed by multiple independent reaction handlers.
 
 ### Reactions
 
@@ -196,27 +226,28 @@ const app = act()
 
 Resolvers dynamically determine which stream a reaction targets, enabling flexible event routing without hardcoded dependencies. They can include source regex patterns to limit which streams trigger the reaction.
 
-### Stream
+### Stream Claiming
 
-Rather than processing events immediately, Act uses
+Rather than processing events immediately, Act uses an atomic claim mechanism to coordinate distributed consumers. The `claim()` method atomically discovers and locks streams in a single operation using PostgreSQL's `FOR UPDATE SKIP LOCKED` pattern — competing consumers never block each other, and locked rows are silently skipped. This is the same pattern used by pgBoss, Graphile Worker, and other production job queues.
 
 - **Per-stream ordering** — Events within a stream are processed sequentially.
-- **Temporary ownership** —
-- **
+- **Temporary ownership** — Claims expire after a configurable duration, allowing re-processing if a consumer fails.
+- **Zero-contention** — `FOR UPDATE SKIP LOCKED` means workers never block each other; locked rows are silently skipped.
+- **Backpressure** — Only a limited number of claims can be active at a time, preventing consumer overload.
 
-If a
+If a claim expires due to failure, the stream is automatically re-claimed by another consumer, ensuring no event is permanently lost.
 
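The claim semantics above (skip-locked discovery, temporary ownership, backpressure) can be simulated in memory. This is an illustrative model, not act's implementation — the real thing does all of it in one SQL transaction with `FOR UPDATE SKIP LOCKED`:

```typescript
// In-memory simulation of the claim semantics described above — not act's code.
type Lease = { owner: string; until: number };
const leases = new Map<string, Lease>();

// Atomically claim up to `limit` unleased (or expired) streams for `owner`.
function claim(
  owner: string,
  streams: string[],
  limit: number,
  now: number,
  ttl: number
): string[] {
  const claimed: string[] = [];
  for (const s of streams) {
    if (claimed.length >= limit) break; // backpressure: cap active claims
    const lease = leases.get(s);
    if (lease && lease.until > now) continue; // "SKIP LOCKED": never block, just skip
    leases.set(s, { owner, until: now + ttl }); // temporary ownership with expiry
    claimed.push(s);
  }
  return claimed;
}
```

Two workers calling `claim()` concurrently never receive the same stream, and a crashed worker's streams become claimable again once its leases expire.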
 ### Event Correlation
 
 Act tracks causation chains across actions and reactions using correlation metadata:
 
 - Each action/event carries a `correlation` ID (request trace) and `causation` ID (what triggered it).
--
+- `app.correlate()` scans events, discovers new target streams via reaction resolvers, and registers them with `subscribe()`. It returns `{ subscribed, last_id }` where `subscribed` is the count of newly registered streams.
 - This enables full workflow tracing — from the initial user action through every downstream reaction.
 
 ```typescript
-// Correlate events to discover new streams for processing
-await app.correlate();
+// Correlate events to discover and subscribe new streams for processing
+const { subscribed, last_id } = await app.correlate();
 
 // Or run periodic background correlation
 app.start_correlations();
@@ -243,12 +274,12 @@ app.settle();
 
 // Subscribe to the "settled" lifecycle event
 app.on("settled", (drain) => {
-  // drain has { fetched,
+  // drain has { fetched, claimed, acked, blocked }
   // notify SSE clients, update caches, etc.
 });
 ```
 
-Drain cycles continue until all reactions have caught up to the latest events. Consumers only process new work — acknowledged events are skipped, and failed
+Drain cycles continue until all reactions have caught up to the latest events. Consumers only process new work — acknowledged events are skipped, and failed streams are re-claimed automatically.
 
 The `settle()` method is the recommended production pattern — it debounces rapid commits (10ms default), runs correlate→drain in a loop until the system is consistent, and emits a `"settled"` event when done.
 
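The `settle()` pattern just described — debounce rapid commits, then loop correlate→drain until nothing is left, then emit `"settled"` — can be sketched as a standalone loop. The function shape here (`settleLoop` and its callbacks) is hypothetical, not act's API:

```typescript
// Sketch of the settle() pattern — hypothetical shape, not act's API.
async function settleLoop(
  correlate: () => Promise<number>, // returns count of newly subscribed streams
  drain: () => Promise<number>, // returns count of events processed
  onSettled: () => void
): Promise<void> {
  // debounce window so rapid commits coalesce into one cycle (10ms default)
  await new Promise((r) => setTimeout(r, 10));
  for (;;) {
    const subscribed = await correlate();
    const drained = await drain();
    // consistent: no new streams discovered and no events left to process
    if (subscribed === 0 && drained === 0) break;
  }
  onSettled(); // emit the "settled" lifecycle event
}
```

The loop matters because draining can commit reaction events that in turn need correlation, so a single correlate→drain pass is not enough for consistency.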