odac 1.4.6 → 1.4.8
- package/CHANGELOG.md +37 -0
- package/client/odac.js +1 -1
- package/docs/ai/README.md +1 -1
- package/docs/ai/skills/SKILL.md +1 -1
- package/docs/ai/skills/backend/database.md +103 -12
- package/docs/ai/skills/backend/ipc.md +71 -12
- package/docs/ai/skills/backend/views.md +6 -1
- package/docs/backend/00-getting-started/01-quick-start.md +77 -0
- package/docs/backend/07-views/03-template-syntax.md +5 -0
- package/docs/backend/07-views/04-request-data.md +13 -0
- package/docs/backend/08-database/05-write-behind-cache.md +230 -0
- package/docs/backend/13-utilities/02-ipc.md +117 -0
- package/docs/index.json +10 -0
- package/package.json +1 -1
- package/src/Database/WriteBuffer.js +605 -0
- package/src/Database.js +32 -1
- package/src/Ipc.js +343 -81
- package/src/Odac.js +2 -1
- package/src/Storage.js +4 -2
- package/src/View.js +33 -18
- package/test/Database/WriteBuffer/_recoverFromCheckpoint.test.js +207 -0
- package/test/Database/WriteBuffer/buffer.test.js +143 -0
- package/test/Database/WriteBuffer/flush.test.js +192 -0
- package/test/Database/WriteBuffer/get.test.js +72 -0
- package/test/Database/WriteBuffer/increment.test.js +118 -0
- package/test/Database/WriteBuffer/update.test.js +178 -0
- package/test/Ipc/hset.test.js +59 -0
- package/test/Ipc/incrBy.test.js +65 -0
- package/test/Ipc/lock.test.js +62 -0
- package/test/Ipc/rpush.test.js +68 -0
- package/test/Ipc/sadd.test.js +68 -0
- package/test/View/addNavigateAttribute.test.js +53 -0
- package/test/View/print.test.js +45 -1
- package/test/View/tags.test.js +132 -0
package/docs/backend/08-database/05-write-behind-cache.md (new file)
@@ -0,0 +1,230 @@
# Write-Behind Cache

At high traffic, individual database writes for common operations — like incrementing a page view counter or stamping a user's last-active date — quickly saturate your connection pool. One million page views = one million `UPDATE` queries.

ODAC's **Write-Behind Cache** solves this by buffering writes in memory and flushing them to the database in efficient batches. The only change to your code is adding `.buffer` to the chain.

```javascript
// Without buffer — 1 DB write per request
await Odac.DB.posts.where(postId).update({views: Odac.DB.raw('views + 1')})

// With buffer — 1 DB write per flush interval, for all requests combined
await Odac.DB.posts.buffer.where(postId).increment('views')
```

---

## How It Works

**Architecture: Ipc-Backed, Driver-Agnostic**

All buffered state is held in `Odac.Ipc`. The active IPC driver determines the scaling model:

| Driver | Scope | When to use |
|---|---|---|
| `memory` (default) | Single machine — cluster workers share state via the Primary process | Single-server deployments |
| `redis` | Multi-machine — all servers share state in Redis | Horizontal scaling behind a load balancer |

```
// Memory driver (default)
Worker 1 ─┐
Worker 2 ─┼──→ Primary (Ipc memory store) ──→ DB (batch flush every 5s)
Worker N ─┘

// Redis driver
Server A ─┐
Server B ─┼──→ Redis (Ipc state) ──→ DB (flush — distributed lock prevents duplicate writes)
Server C ─┘
```

A **distributed lock** (`Ipc.lock`) guarantees that only one process or server flushes at a time, even across multiple machines.
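
The lock's role can be pictured with a small, self-contained sketch. This is not ODAC's implementation, just a plain-JavaScript illustration of TTL-lock semantics in a single process, where the hypothetical `tryLock` stands in for `Ipc.lock`:

```javascript
// Minimal sketch (not ODAC's internals): a TTL-based lock that lets
// exactly one caller proceed until the lock expires or is released.
const locks = new Map() // key → expiry timestamp in ms

function tryLock(key, ttlSeconds) {
  const now = Date.now()
  const expiry = locks.get(key)
  if (expiry !== undefined && expiry > now) return false // still held by someone else
  locks.set(key, now + ttlSeconds * 1000) // acquire with a TTL so a crash can't deadlock
  return true
}

console.log(tryLock('buffer:flush', 30)) // → true (first flusher wins)
console.log(tryLock('buffer:flush', 30)) // → false (concurrent flusher is rejected)
```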

**Crash Safety via LMDB Checkpoint** *(memory driver only)*

Every 30 seconds, pending buffer data is written to the local LMDB store. On a crash and restart, ODAC recovers this checkpoint and flushes it to the database before accepting any traffic. When using the Redis driver, Redis itself provides durability — LMDB checkpoints are skipped.

---

## Three Operations

### 1. Counter Increment

Accumulates numeric deltas. Multiple increments to the same column merge into a single `UPDATE col = col + delta` at flush time.

```javascript
// Increment by 1 (default)
await Odac.DB.posts.buffer.where(postId).increment('views')

// Increment by a custom amount
await Odac.DB.posts.buffer.where(postId).increment('likes', 5)
await Odac.DB.downloads.buffer.where(fileId).increment('count', 3)
```
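
The delta-merging described above can be sketched in a few lines. This is not ODAC's actual buffer code, only an illustration of how increments to the same row and column collapse into one statement:

```javascript
// Minimal sketch (not ODAC's internals): accumulate deltas per
// table/row/column, then emit one UPDATE per key at flush time.
const pending = new Map() // `${table}:${pk}:${column}` → accumulated delta

function bufferIncrement(table, pk, column, amount = 1) {
  const key = `${table}:${pk}:${column}`
  pending.set(key, (pending.get(key) ?? 0) + amount)
}

bufferIncrement('posts', 42, 'views')
bufferIncrement('posts', 42, 'views')
bufferIncrement('posts', 42, 'views', 5)

// Three buffered calls become a single statement:
for (const [key, delta] of pending) {
  const [table, pk, column] = key.split(':')
  console.log(`UPDATE ${table} SET ${column} = ${column} + ${delta} WHERE id = ${pk}`)
}
// → UPDATE posts SET views = views + 7 WHERE id = 42
```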

**Read the current value** — returns `DB base + pending delta`, always accurate:

```javascript
const currentViews = await Odac.DB.posts.buffer.where(postId).get('views')
// → 4527 (e.g., 4500 in DB + 27 buffered, not yet flushed)
```

**Composite primary key:**

```javascript
await Odac.DB.post_stats.buffer
  .where({post_id: 123, date: '2026-04-01'})
  .increment('views')
```

---

### 2. Last-Write-Wins Update

Buffers column SET operations for a row. If the same row is updated multiple times before a flush, the values are merged — the latest value for each column wins. The entire pending set for a row is written in a single `UPDATE` at flush.

```javascript
// 50 requests update the same user → 1 UPDATE at flush
await Odac.DB.users.buffer.where(userId).update({active_date: new Date()})
await Odac.DB.users.buffer.where(userId).update({last_ip: req.ip})
// → UPDATE users SET active_date = ?, last_ip = ? WHERE id = ? (one query for all 50 requests)
```
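
The column-level merge is easy to picture with a sketch. Again, this is not ODAC's code, just the last-write-wins merge rule in plain JavaScript:

```javascript
// Minimal sketch (not ODAC's internals): merge buffered update() calls
// per row; the latest value for each column wins.
const pendingSets = new Map() // row identity → merged column map

function bufferUpdate(rowKey, columns) {
  const merged = pendingSets.get(rowKey) ?? {}
  Object.assign(merged, columns) // later calls overwrite earlier ones, column by column
  pendingSets.set(rowKey, merged)
}

bufferUpdate('users:7', {active_date: '2026-04-01', last_ip: '10.0.0.1'})
bufferUpdate('users:7', {last_ip: '10.0.0.2'})

console.log(pendingSets.get('users:7'))
// → { active_date: '2026-04-01', last_ip: '10.0.0.2' }
```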

**Composite primary key:**

```javascript
await Odac.DB.user_prefs.buffer
  .where({user_id: 1, pref_key: 'theme'})
  .update({pref_value: 'dark'})
```

**Combine with increment** — both flush in the same cycle:

```javascript
await Odac.DB.users.buffer.where(userId).increment('login_count')
await Odac.DB.users.buffer.where(userId).update({active_date: new Date(), last_ip: req.ip})
```

---

### 3. Batch Insert

Queues rows in memory and inserts them in chunks of 1,000 at flush time. Ideal for audit logs, analytics events, and activity streams where individual inserts are wasteful.

```javascript
await Odac.DB.activity_log.buffer.insert({
  user_id: userId,
  action: 'page_view',
  meta: req.url,
  created_at: Date.now()
})
```

The queue auto-flushes immediately if it exceeds `maxQueueSize` (default: 10,000 rows).
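
The chunked flush is simple to sketch. This is not ODAC's flush code, only an illustration of splitting a queued-insert buffer into batches of 1,000 rows:

```javascript
// Minimal sketch (not ODAC's internals): split queued rows into
// batches of up to 1,000 for multi-row INSERTs.
function chunk(rows, size = 1000) {
  const batches = []
  for (let i = 0; i < rows.length; i += size) batches.push(rows.slice(i, i + size))
  return batches
}

const queued = Array.from({length: 2500}, (_, i) => ({user_id: i, action: 'page_view'}))
const batches = chunk(queued)
console.log(batches.length)    // → 3 (1000 + 1000 + 500 rows)
console.log(batches[2].length) // → 500
```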

---

## Manual Flush

Force an immediate flush for a specific table or for all buffered tables:

```javascript
// Flush a single table
await Odac.DB.posts.buffer.flush()

// Flush all buffered tables across all connections
await Odac.DB.buffer.flush()
```

> Graceful shutdown (`SIGTERM`/`SIGINT`) triggers a final flush automatically before the DB connections are closed. You do not need to call `flush()` in your shutdown handlers.

---

## Configuration

Add a `buffer` section to your `odac.json`:

```json
{
  "buffer": {
    "flushInterval": 5000,
    "checkpointInterval": 30000,
    "maxQueueSize": 10000,
    "primaryKey": "id"
  }
}
```

| Option | Default | Description |
|---|---|---|
| `flushInterval` | `5000` | How often (ms) to flush pending data to the database |
| `checkpointInterval` | `30000` | How often (ms) to write a crash-recovery checkpoint to LMDB *(memory driver only)* |
| `maxQueueSize` | `10000` | Auto-flush the insert queue when it reaches this many rows |
| `primaryKey` | `"id"` | Default primary key column name for scalar `where()` values |
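
The `primaryKey` option can be illustrated with a sketch of how a scalar `where()` value might be normalised. The `normalizeWhere` helper here is hypothetical, not part of ODAC's API; it only shows the documented behaviour that a scalar is treated as the configured primary-key column:

```javascript
// Minimal sketch (hypothetical helper, not ODAC's API): a scalar
// passed to where() becomes {<primaryKey>: value}; objects pass through.
function normalizeWhere(where, primaryKey = 'id') {
  if (typeof where === 'object' && where !== null) return where // composite key, unchanged
  return {[primaryKey]: where}
}

console.log(normalizeWhere(42))                           // → { id: 42 }
console.log(normalizeWhere({post_id: 1, date: '2026-04-01'})) // → composite key, unchanged
```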

---

## Horizontal Scaling

To share buffer state across multiple servers, switch the `ipc` driver to `redis`:

```json
{
  "ipc": {
    "driver": "redis",
    "redis": "default"
  }
}
```

With the Redis driver active:
- All `increment`, `update`, and `insert` operations go to Redis atomically.
- Any server can trigger a `flush()` — the distributed lock ensures no server writes twice.
- LMDB checkpoints are skipped (Redis persistence provides the durability guarantee).

No code changes are required in your application. The `.buffer` API is identical regardless of driver.

---

## Named Database Connections

The buffer respects your multi-connection configuration. Access it via the named connection, then the table:

```javascript
// Default connection
await Odac.DB.posts.buffer.where(postId).increment('views')

// Named connection: 'analytics'
await Odac.DB.analytics.events.buffer.insert({type: 'click', target: '#cta'})
```

---

## Guarantees

| Scenario | Behaviour |
|---|---|
| Worker crash | No data loss — all state is in the Primary process (memory) or Redis |
| Primary crash | Pending data recovered from LMDB checkpoint on next startup *(memory driver)* |
| Server crash (Redis) | Pending data is durable in Redis — recovered on next flush cycle |
| DB flush error | Data is retained in Ipc and retried on the next flush cycle |
| Graceful shutdown | Automatic final flush before connections close |
| `get()` after `increment()` | Returns base + buffered delta — always accurate, no extra DB read |
| Concurrent workers | Primary serializes all writes (memory) or Redis atomic ops prevent races |
| Multiple servers | Distributed lock guarantees exactly one flush at a time |

---

## When to Use (and Not Use)

**Use Write-Behind Cache for:**
- Page/post view counters
- Download counters, like/upvote counts
- User last-active timestamps, last IP
- Activity logs, analytics events, audit trails
- Any write that is not immediately safety-critical and occurs on every request

**Do not use for:**
- Operations where the write must be visible to the *same* request that triggered it
- Inserts that return generated IDs you need immediately (use direct `insert()`)

> [!WARNING]
> **Never use Write-Behind Cache for financial or safety-critical operations** — payment records, order confirmations, balance changes, inventory decrements, or any write where data loss or a delayed flush would have real-world consequences. The buffer does not guarantee that data reaches the database before a crash. Use direct database transactions for anything that matters immediately.

package/docs/backend/13-utilities/02-ipc.md
@@ -71,3 +71,120 @@ await Odac.Ipc.publish('chat:global', { user: 'Emre', text: 'Hello World' });

> [!TIP]
> When using the `memory` driver, the subscription listener is registered in the current worker. When a message is published, it goes to the Main process and is then broadcast to all subscribed workers.

### Atomic Counters

Use `incrBy` / `decrBy` to atomically increment or decrement a numeric key. These are safe to call from multiple workers simultaneously — no read-then-write race conditions.

```javascript
// Increment by 1 — returns new value
await Odac.Ipc.incrBy('page:views', 1) // → 1
await Odac.Ipc.incrBy('page:views', 5) // → 6

// Decrement
await Odac.Ipc.decrBy('page:views', 2) // → 4

// Read the result
const views = await Odac.Ipc.get('page:views') // → 4
```

> [!NOTE]
> Keys that don't exist yet are initialised to `0` before the operation.

---

### Hash Maps

Store and retrieve structured per-key data. Fields are merged on every `hset` call — existing fields not mentioned in the call are preserved.

```javascript
// Set fields (merged, not overwritten)
await Odac.Ipc.hset('user:42', {active_date: new Date(), last_ip: '1.2.3.4'})
await Odac.Ipc.hset('user:42', {score: 100})

// Retrieve all fields
const data = await Odac.Ipc.hgetall('user:42')
// → {active_date: ..., last_ip: '1.2.3.4', score: 100}
```

---

### Lists

Append items to a shared list and read them back in order. Useful for queues and event streams.

```javascript
// Append items to the right — returns new list length
await Odac.Ipc.rpush('jobs', {type: 'email', to: 'a@b.com'})
await Odac.Ipc.rpush('jobs', {type: 'sms'}, {type: 'push'}) // → 3

// Read a range (0-indexed, -1 = last item)
const pending = await Odac.Ipc.lrange('jobs', 0, -1)
```
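
The range semantics can be sketched locally. This is not the IPC driver's code; it assumes `lrange` follows the Redis-style convention the comment above describes, where negative indices count from the end of the list:

```javascript
// Minimal sketch (not the driver's code): resolve lrange-style
// start/stop indices against a plain array, -1 meaning the last item.
function lrangeLocal(list, start, stop) {
  const n = list.length
  const s = start < 0 ? Math.max(n + start, 0) : start
  const e = stop < 0 ? n + stop : Math.min(stop, n - 1)
  return list.slice(s, e + 1) // stop index is inclusive
}

console.log(lrangeLocal(['a', 'b', 'c', 'd'], 0, -1)) // → ['a', 'b', 'c', 'd']
console.log(lrangeLocal(['a', 'b', 'c', 'd'], 1, 2))  // → ['b', 'c']
```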

---

### Sets

Maintain a collection of unique string members.

```javascript
// Add members
await Odac.Ipc.sadd('online', 'user:1', 'user:2', 'user:3')

// List all members
const online = await Odac.Ipc.smembers('online') // → ['user:1', 'user:2', 'user:3']

// Remove members — returns number of members actually removed
await Odac.Ipc.srem('online', 'user:2')
```

---

### Distributed Locks

Acquire a mutex across all workers and servers before entering a critical section. The TTL prevents deadlocks if a process crashes while holding the lock.

```javascript
// Attempt to acquire the lock (TTL in seconds)
const acquired = await Odac.Ipc.lock('report:generate', 30)

if (!acquired) {
  // Another process is already running this — skip
  return
}

try {
  // Critical section — only one process runs this at a time
  await generateReport()
} finally {
  // Always release, even on error
  await Odac.Ipc.unlock('report:generate')
}
```

> [!TIP]
> With the `redis` driver, locks work across multiple servers — making them true distributed locks.

---

## Method Reference

| Method | Description |
|---|---|
| `set(key, value, ttl?)` | Store a value, with optional TTL in seconds |
| `get(key)` | Retrieve a value |
| `del(key)` | Delete a key |
| `incrBy(key, delta)` | Atomically increment a numeric key |
| `decrBy(key, delta)` | Atomically decrement a numeric key |
| `hset(key, fields)` | Merge fields into a hash map |
| `hgetall(key)` | Retrieve all fields of a hash map |
| `rpush(key, ...items)` | Append items to a list |
| `lrange(key, start, stop)` | Read a range of list items |
| `sadd(key, ...members)` | Add members to a set |
| `smembers(key)` | Get all members of a set |
| `srem(key, ...members)` | Remove members from a set |
| `lock(key, ttl)` | Acquire a mutex lock |
| `unlock(key)` | Release a mutex lock |
| `subscribe(channel, handler)` | Subscribe to a Pub/Sub channel |
| `publish(channel, message)` | Publish a message to a channel |