resplite 1.2.4 → 1.2.8
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +179 -274
- package/package.json +1 -6
- package/scripts/create-interface-smoke.js +32 -0
- package/skills/README.md +22 -0
- package/skills/resplite-command-vertical-slice/SKILL.md +134 -0
- package/skills/resplite-ft-search-workbench/SKILL.md +138 -0
- package/skills/resplite-migration-cutover-assistant/SKILL.md +138 -0
- package/spec/00-INDEX.md +37 -0
- package/spec/01-overview-and-goals.md +125 -0
- package/spec/02-protocol-and-commands.md +174 -0
- package/spec/03-data-model-ttl-transactions.md +157 -0
- package/spec/04-cache-architecture.md +171 -0
- package/spec/05-scan-admin-implementation.md +379 -0
- package/spec/06-migration-strategy-core.md +79 -0
- package/spec/07-type-lists.md +202 -0
- package/spec/08-type-sorted-sets.md +220 -0
- package/spec/{SPEC_D.md → 09-search-ft-commands.md} +3 -1
- package/spec/{SPEC_E.md → 10-blocking-commands.md} +3 -1
- package/spec/{SPEC_F.md → 11-migration-dirty-registry.md} +61 -147
- package/src/commands/object.js +17 -0
- package/src/commands/registry.js +2 -0
- package/src/engine/engine.js +11 -0
- package/src/migration/apply-dirty.js +8 -1
- package/src/migration/index.js +48 -4
- package/src/migration/migrate-search.js +25 -6
- package/src/migration/tracker.js +23 -0
- package/test/integration/migration-dirty-tracker.test.js +9 -4
- package/test/integration/object-idletime.test.js +51 -0
- package/test/unit/migrate-search.test.js +50 -2
- package/spec/SPEC_A.md +0 -1171
- package/spec/SPEC_B.md +0 -426
- package/src/cli/import-from-redis.js +0 -194
- package/src/cli/resplite-dirty-tracker.js +0 -92
- package/src/cli/resplite-import.js +0 -296
- package/test/contract/import-from-redis.test.js +0 -83
package/README.md
CHANGED
@@ -21,45 +21,19 @@ Building this project surfaced a clear finding: **Redis running inside Docker**
 
 The strongest use case is **migrating a non-replicated Redis instance that has grown large** (tens of GB). You don't need to manage replicas, AOF, or RDB. Once migrated, you get a single SQLite file and latency that is good enough for most workloads. The built-in migration tooling (see [Migration from Redis](#migration-from-redis)) handles datasets of that size with minimal downtime.
 
-
-
-A typical comparison is **Redis (e.g. in Docker)** on one side and **RESPLite locally** on the other. In that setup, RESPLite often shows **better latency** because it avoids Docker networking and runs in the same process/host. The benchmark below uses RESPLite with the **default** PRAGMA template only.
-
-**Example results (Redis vs RESPLite, default pragma, 10k iterations):**
-
-| Suite | Redis (Docker) | RESPLite (default) |
-|-----------------|----------------|--------------------|
-| PING | 8.79K/s | 37.36K/s |
-| SET+GET | 4.68K/s | 11.96K/s |
-| MSET+MGET(10) | 4.41K/s | 5.81K/s |
-| INCR | 9.54K/s | 18.97K/s |
-| HSET+HGET | 4.40K/s | 11.91K/s |
-| HGETALL(50) | 8.39K/s | 11.01K/s |
-| HLEN(50) | 9.36K/s | 31.21K/s |
-| SADD+SMEMBERS | 9.27K/s | 17.37K/s |
-| LPUSH+LRANGE | 8.34K/s | 14.27K/s |
-| LREM | 4.37K/s | 6.08K/s |
-| ZADD+ZRANGE | 7.80K/s | 17.12K/s |
-| SET+DEL | 4.39K/s | 9.57K/s |
-| FT.SEARCH | 8.36K/s | 8.22K/s |
-
-*Run `npm run benchmark -- --template default` to reproduce. Numbers depend on host and whether Redis is native or in Docker.*
+### Benchmark snapshot
 
-
-
-```bash
-# Terminal 1: Redis on 6379 (e.g. docker run -p 6379:6379 redis). Terminal 2: RESPLite on 6380
-RESPLITE_PORT=6380 npm start
+Representative results against Redis in Docker on the same host:
 
-
-
+| Suite | Redis (Docker) | RESPLite (default) |
+|---------------|----------------|--------------------|
+| PING | 8.79K/s | 37.36K/s |
+| SET+GET | 4.68K/s | 11.96K/s |
+| HSET+HGET | 4.40K/s | 11.91K/s |
+| ZADD+ZRANGE | 7.80K/s | 17.12K/s |
+| FT.SEARCH | 8.36K/s | 8.22K/s |
 
-
-npm run benchmark -- --template default
-
-# Custom iterations and ports
-npm run benchmark -- --iterations 10000 --redis-port 6379 --resplite-port 6380
-```
+The full benchmark table is available later in [Benchmark](#benchmark-redis-vs-resplite).
 
 ## Install
 
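The snapshot above reports throughput in operations per second. As a rough illustration of how such figures are produced (a sketch only; `measureOps` and the no-op workload are hypothetical, not the package's benchmark harness):

```javascript
// Hypothetical throughput-measurement sketch, not RESPLite's benchmark code.
function measureOps(iterations, fn) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn(i);
  const elapsedNs = Number(process.hrtime.bigint() - start);
  return iterations / (elapsedNs / 1e9); // operations per second
}

const opsPerSec = measureOps(10000, () => Math.sqrt(2));
console.log(`no-op suite: ${(opsPerSec / 1000).toFixed(2)}K/s`);
```

A real run would issue a command per iteration against each server and await the reply, which is why numbers depend so heavily on the network path (Docker vs local socket).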
@@ -67,124 +41,60 @@ npm run benchmark -- --iterations 10000 --redis-port 6379 --resplite-port 6380
 npm install resplite
 ```
 
-##
-
-```bash
-npm start
-```
-
-By default the server listens on port **6379** and stores data in `data.db` in the current directory.
-
-```bash
-redis-cli -p 6379
-> PING
-PONG
-> SET foo bar
-OK
-> GET foo
-"bar"
-```
-
-### Standalone server script (fixed port)
-
-Run this as a persistent background process (`node server.js`). RESPLite will listen on port 6380 and stay up until the process receives SIGINT (Ctrl+C) or SIGTERM; then it closes the server and exits cleanly. If you kill the process (e.g. SIGKILL or force quit), all client connections are closed as well — with the default configuration the server runs in the same process, so when the process exits the TCP server and its connections are torn down.
-
-```javascript
-// server.js
-import { createRESPlite } from 'resplite/embed';
-
-const srv = await createRESPlite({ port: 6380, db: './data.db' });
-console.log(`RESPLite listening on ${srv.host}:${srv.port}`);
-
-```
-
-Then connect from any other script or process:
+## AI Skill
 
 ```bash
-
+npx skills add https://github.com/clasen/RESPLite
 ```
 
-
-
-| Variable | Default | Description |
-|---|---|---|
-| `RESPLITE_PORT` | `6379` | Server port |
-| `RESPLITE_DB` | `./data.db` | SQLite database file |
-| `RESPLITE_PRAGMA_TEMPLATE` | `default` | SQLite PRAGMA preset (see below) |
-
-### PRAGMA templates
-
-| Template | Description | Key settings |
-|---|---|---|
-| `default` | Balanced durability and speed (recommended) | WAL, synchronous=NORMAL, 20 MB cache |
-| `performance` | Maximum throughput, reduced crash safety | WAL, synchronous=OFF, 64 MB cache, 512 MB mmap, exclusive locking |
-| `safety` | Crash-safe writes at the cost of speed | WAL, synchronous=FULL, 20 MB cache |
-| `minimal` | Only WAL + foreign keys | WAL, foreign_keys=ON |
-| `none` | No pragmas applied — pure SQLite defaults | — |
-
-## Programmatic usage (embedded)
-
-RESPLite can be started and consumed entirely within a single Node.js script — no separate process needed. This is exactly how the test suite works.
-
-### Minimal example
+## JavaScript quick start
 
-
-import { createClient } from 'redis';
-import { createRESPlite } from 'resplite/embed';
+The recommended way to use RESPLite is from your own Node.js script, creating the server with the options and observability hooks your app needs. If you prefer a standalone server or terminal workflow, see [CLI and standalone server reference](#cli-and-standalone-server-reference) below.
 
-
-const client = createClient({ socket: { port: srv.port, host: '127.0.0.1' } });
-await client.connect();
+### Recommended server script
 
-
-console.log(await client.get('hello')); // → "world"
-
-await client.quit();
-await srv.close();
-```
-
-### Observability (event hooks)
-
-When embedding RESPLite you can pass optional hooks to log unknown commands, command errors, or socket errors (e.g. for `warn`/`error` in your logger). The client still receives the same RESP responses; hooks are for observability only.
+In a typical app, you start RESPLite from your own process and attach hooks for observability. The client still receives the same RESP responses; hooks are for logging and monitoring only.
 
 ```javascript
-import
-const log =
+import LemonLog from 'lemonlog';
+const log = new LemonLog('RESPlite');
 
 const srv = await createRESPlite({
   port: 6380,
   db: './data.db',
   hooks: {
     onUnknownCommand({ command, argsCount, clientAddress }) {
-      log.warn({ command, argsCount, clientAddress }, '
+      log.warn({ command, argsCount, clientAddress }, 'unsupported command');
     },
     onCommandError({ command, error, clientAddress }) {
-      log.warn({ command, error, clientAddress }, '
+      log.warn({ command, error, clientAddress }, 'command error');
     },
     onSocketError({ error, clientAddress }) {
-      log.error({ err: error, clientAddress }, '
+      log.error({ err: error, clientAddress }, 'connection error');
     },
   },
 });
 ```
 
-
-|------|--------------------|
-| `onUnknownCommand` | Client sent a command not implemented by RESPLite (e.g. `SUBSCRIBE`, `PUBLISH`). |
-| `onCommandError` | A command failed (wrong type, invalid args, or handler threw). |
-| `onSocketError` | The connection socket emitted an error (e.g. `ECONNRESET`). |
+Available hooks:
 
-
+- `onUnknownCommand`: client sent a command not implemented by RESPLite, such as `SUBSCRIBE` or `PUBLISH`.
+- `onCommandError`: a command failed because of wrong type, invalid args, or a handler error.
+- `onSocketError`: the connection socket emitted an error, for example `ECONNRESET`.
 
-
+If you want a tiny in-process smoke test that starts RESPLite and connects with the `redis` client in the same script, see [Minimal embedded example](#minimal-embedded-example) below.
 
-Migration
+## Migration from Redis
+
+RESPLite is a good fit for migrating **non-replicated Redis** instances that have **grown large** (e.g. tens of GB) and where RESPLite's latency is acceptable. The recommended path is to drive the migration from a Node.js script via `resplite/migration`, keeping preflight, dirty tracking, bulk import, cutover, and verification in one place.
 
-###
+### Recommended migration script
 
-
+The full flow can run from a single script: inspect Redis, enable keyspace notifications, track dirty keys in-process, bulk import with checkpoints, apply dirty keys during cutover, verify, and disconnect cleanly.
 
 ```javascript
+import { stdin, stdout } from 'node:process';
+import { createInterface } from 'node:readline/promises';
 import { createMigration } from 'resplite/migration';
 
 const m = createMigration({
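The hooks shown in the hunk above are plain callbacks fired around command handling. As a sketch of when each one would be invoked (a hypothetical dispatcher, not RESPLite's internals; `createDispatcher` and `handlers` are invented names):

```javascript
// Hypothetical dispatcher sketch showing when each hook fires.
function createDispatcher(handlers, hooks = {}) {
  return function dispatch(command, args, clientAddress) {
    const handler = handlers[command.toUpperCase()];
    if (!handler) {
      // Command not implemented: notify the hook, still answer the client.
      hooks.onUnknownCommand?.({ command, argsCount: args.length, clientAddress });
      return { error: 'ERR command not supported yet' };
    }
    try {
      return { value: handler(...args) };
    } catch (error) {
      // Handler threw: observability hook, error reply to the client.
      hooks.onCommandError?.({ command, error, clientAddress });
      return { error: `ERR ${error.message}` };
    }
  };
}

const events = [];
const dispatch = createDispatcher(
  { PING: () => 'PONG' },
  { onUnknownCommand: (e) => events.push(e) },
);
console.log(dispatch('PING', [], '127.0.0.1').value); // → PONG
dispatch('SUBSCRIBE', ['news'], '127.0.0.1');
console.log(events[0].command); // → SUBSCRIBE
```

The key property, matching the README's wording, is that hooks never change the RESP reply; they only observe it.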
@@ -192,12 +102,11 @@ const m = createMigration({
   to: './resplite.db', // destination SQLite DB path (required)
   runId: 'my-migration-1', // unique run ID (required for bulk/status/applyDirty)
 
-  // optional
-  scanCount:
-  batchKeys:
+  // optional
+  scanCount: 5000,
+  batchKeys: 1000,
   batchBytes: 64 * 1024 * 1024, // 64 MB
   maxRps: 0, // 0 = unlimited
-  pragmaTemplate: 'default',
 
   // If your Redis deployment renamed CONFIG for security:
   // configCommand: 'MYCONFIG',
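`batchKeys` and `batchBytes` in the hunk above are flush thresholds: a batch is written out when either limit is reached. A minimal sketch of that accumulation logic (hypothetical `createBatcher`, not the package's importer):

```javascript
// Hypothetical batcher: flush when batchKeys or batchBytes is exceeded.
function createBatcher({ batchKeys, batchBytes }, flush) {
  let items = [];
  let bytes = 0;
  return {
    add(key, value) {
      items.push([key, value]);
      bytes += Buffer.byteLength(key) + Buffer.byteLength(value);
      if (items.length >= batchKeys || bytes >= batchBytes) this.end();
    },
    end() {
      if (items.length > 0) flush(items); // e.g. one SQLite transaction per batch
      items = [];
      bytes = 0;
    },
  };
}

const batches = [];
const b = createBatcher({ batchKeys: 2, batchBytes: 64 * 1024 * 1024 }, (x) => batches.push(x.length));
b.add('a', '1');
b.add('b', '2'); // second key hits batchKeys=2, so this flushes
b.add('c', '3');
b.end();
console.log(batches); // → [ 2, 1 ]
```

The byte limit matters for value-heavy datasets where a key-count limit alone would produce huge transactions.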
@@ -217,10 +126,19 @@ const ks = await m.enableKeyspaceNotifications();
 // → { ok: true, previous: '', applied: 'KEA' }
 // If CONFIG is renamed and configCommand was not set, ok=false and error explains how to fix it.
 
+// Step 0c — Start dirty tracking (in-process, same script)
+await m.startDirtyTracker({
+  onProgress: (p) => {
+    // one callback per keyspace event tracked during bulk/cutover
+    console.log(`[dirty ${p.totalEvents}] event=${p.event} key=${p.key}`);
+  },
+});
+
 // Step 1 — Bulk import (checkpointed, resumable). Same script to start or continue.
 // Use keyCountEstimate from preflight to show progress % (estimate; actual count may change).
 const total = info.keyCountEstimate || 1;
 await m.bulk({
+  resume: true,
   onProgress: (r) => {
     const pct = total ? ((r.scanned_keys / total) * 100).toFixed(1) : '—';
     console.log(
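Because `keyCountEstimate` is only an estimate (the hunk above says so explicitly), the computed percentage can drift past 100 while keys are still being scanned. Clamping keeps the display sane; a small sketch (hypothetical `formatPct` helper):

```javascript
// Hypothetical helper: estimate-based progress can exceed 100%, so clamp it.
function formatPct(scannedKeys, keyCountEstimate) {
  if (!keyCountEstimate) return '—'; // no estimate available
  const pct = Math.min(100, (scannedKeys / keyCountEstimate) * 100);
  return `${pct.toFixed(1)}%`;
}

console.log(formatPct(500, 1000));  // → 50.0%
console.log(formatPct(1200, 1000)); // → 100.0%
console.log(formatPct(10, 0));      // → —
```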
@@ -233,10 +151,27 @@ await m.bulk({
 const { run, dirty } = m.status();
 console.log('bulk status:', run.status, '— dirty counts:', dirty);
 
-// Step 2 —
-
+// Step 2 — Pause for cutover:
+// stop the app that is still writing to Redis, then press Enter.
+const rl = createInterface({ input: stdin, output: stdout });
+await rl.question('Stop app traffic to Redis, then press Enter to apply the final dirty set...');
+rl.close();
+
+// Step 3 — Apply dirty keys that changed in Redis during bulk
+await m.applyDirty({ onProgress: console.log });
 
-// Step
+// Step 3b — Stop tracker after cutover
+await m.stopDirtyTracker();
+
+// If the source also uses FT.*, this is where you would run m.migrateSearch().
+// Step 3c — Migrate RediSearch indices after writes are frozen
+await m.migrateSearch({
+  onProgress: (r) => {
+    console.log(`[search ${r.name}] docs=${r.docsImported} skipped=${r.docsSkipped} warnings=${r.warnings.length}`);
+  },
+});
+
+// Step 4 — Verify a sample of keys match between Redis and the destination
 const result = await m.verify({ samplePct: 0.5, maxSample: 10000 });
 console.log(`verified ${result.sampled} keys — mismatches: ${result.mismatches.length}`);
 
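Conceptually, the dirty tracker in the steps above only needs the set of keys that changed during bulk: keyspace events mark keys dirty, and applying the dirty set re-reads each key from the source once at cutover. A toy model of that idea (invented names, not the `resplite/migration` internals):

```javascript
// Toy model: keyspace events mark keys dirty; apply re-reads each key once.
function createDirtyTracker() {
  const dirty = new Set();
  return {
    track(event, key) { dirty.add(key); }, // duplicate events collapse
    apply(readFromSource, writeToDest) {
      for (const key of dirty) {
        writeToDest(key, readFromSource(key)); // a null read would mean: delete key
      }
      const applied = dirty.size;
      dirty.clear();
      return applied;
    },
  };
}

const source = new Map([['user:1', 'v2']]);
const dest = new Map();
const t = createDirtyTracker();
t.track('set', 'user:1');
t.track('set', 'user:1'); // second event for the same key adds nothing
const applied = t.apply((k) => source.get(k) ?? null, (k, v) => dest.set(k, v));
console.log(applied, dest.get('user:1')); // → 1 v2
```

This is why the cutover pause matters: once app writes stop, one pass over the dirty set converges the destination.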
@@ -244,13 +179,17 @@ console.log(`verified ${result.sampled} keys — mismatches: ${result.mismatches
 await m.close();
 ```
 
-**Automatic resume (default)**
+**Bulk: Automatic resume (default)**
 `resume` defaults to `true`. It doesn't matter whether it's the first run or a resume: the same script works for both starting and continuing. The first run starts from cursor 0; if the process is interrupted (Ctrl+C, crash, etc.), running the script again continues from the last checkpoint. You don't need to pass `resume: false` on the first run or change anything to resume.
 
 **Graceful shutdown**
 On SIGINT (Ctrl+C) or SIGTERM, the bulk importer checkpoints progress, sets the run status to `aborted`, closes the SQLite database cleanly (so WAL is checkpointed and the file is not left open), then exits. You can safely interrupt a long-running bulk and resume later.
 
-The
+The JS API can run the dirty-key tracker in-process via `m.startDirtyTracker()` / `m.stopDirtyTracker()`, so the full flow stays inside a single script.
+
+For a real cutover, the simplest flow is: let bulk finish, stop the app that still writes to Redis, press Enter to apply the final dirty set, run `migrateSearch()` if you use `FT.*`, and then switch traffic to RESPLite.
+
+The KV bulk flow imports strings, hashes, sets, lists, and zsets. If your source also uses `FT.*` indices, see [Migrating RediSearch indices](#migrating-redisearch-indices).
 
 #### Renamed CONFIG command
 
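The resume behavior described above amounts to persisting the SCAN cursor after every batch and starting from the saved cursor on the next run. A compact sketch of that loop (hypothetical `checkpoint` store and `scanOnce`, not the bulk importer itself):

```javascript
// Hypothetical checkpointed scan loop: resume picks up from the saved cursor.
function runBulk(checkpoint, scanOnce) {
  let cursor = checkpoint.get() ?? 0; // first run starts from cursor 0
  do {
    const { nextCursor, keys } = scanOnce(cursor);
    // ... import `keys` into SQLite here ...
    cursor = nextCursor;
    checkpoint.set(cursor); // persisted after every batch
  } while (cursor !== 0); // Redis SCAN terminates when the cursor returns to 0
}

// Tiny in-memory stand-ins for the checkpoint store and SCAN.
const store = { value: undefined, get() { return this.value; }, set(v) { this.value = v; } };
const pages = { 0: { nextCursor: 7, keys: ['a'] }, 7: { nextCursor: 0, keys: ['b'] } };
runBulk(store, (c) => pages[c]);
console.log(store.get()); // → 0
```

Because the checkpoint is written after each batch, a crash loses at most one batch of progress, which the next run redoes idempotently.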
@@ -272,122 +211,38 @@ const info = await m.preflight();
 const result = await m.enableKeyspaceNotifications({ value: 'KEA' });
 ```
 
-The same
-
-```bash
-npx resplite-dirty-tracker start --run-id run_001 --to ./resplite.db \
-  --from redis://10.0.0.10:6379 --config-command MYCONFIG
-```
-### Simple one-shot import
-
-For small datasets or when downtime is acceptable:
-
-```bash
-# Default: redis://127.0.0.1:6379 → ./data.db
-npm run import-from-redis -- --db ./migrated.db
-
-# Custom Redis URL
-npm run import-from-redis -- --db ./migrated.db --redis-url redis://127.0.0.1:6379
-
-# Or host/port
-npm run import-from-redis -- --db ./migrated.db --host 127.0.0.1 --port 6379
-
-# Optional: PRAGMA template for the target DB
-npm run import-from-redis -- --db ./migrated.db --pragma-template performance
-```
-
-### Redis with authentication
-
-Migration supports Redis instances protected by a password. Use a Redis URL that includes the password (or username and password for Redis 6+ ACL):
-
-- **Password only:** `redis://:PASSWORD@host:port`
-- **Username and password:** `redis://username:PASSWORD@host:port`
-
-Examples:
-
-```bash
-# One-shot import from authenticated Redis
-npm run import-from-redis -- --db ./migrated.db --redis-url "redis://:mysecret@127.0.0.1:6379"
-
-# flow: use --from with the full URL (or set RESPLITE_IMPORT_FROM)
-npx resplite-import preflight --from "redis://:mysecret@10.0.0.10:6379" --to ./resplite.db
-npx resplite-dirty-tracker start --run-id run_001 --from "redis://:mysecret@10.0.0.10:6379" --to ./resplite.db
-```
-
-For one-shot import, authentication is only available when using `--redis-url`; the `--host` / `--port` options do not support a password.
-
-**Search indices (FT.\*)**
-The KV bulk migration imports only the Redis keyspace (strings, hashes, sets, lists, zsets). RediSearch index schemas and documents are migrated separately with the `migrate-search` step — see [Migrating RediSearch indices](#migrating-redisearch-indices) below.
-
-### Minimal-downtime migration
+The same `configCommand` override is used by `preflight()` and `enableKeyspaceNotifications()` in the programmatic flow.
 
-
-
-**Enable keyspace notifications in Redis** (required for the dirty-key tracker). Either run at runtime:
-
-```bash
-redis-cli CONFIG SET notify-keyspace-events KEA
-```
+#### Low-level re-exports
 
-
+If you need more control, the individual functions and registry helpers are also exported:
 
+```javascript
+import {
+  runPreflight, runBulkImport, runApplyDirty, runVerify,
+  getRun, getDirtyCounts, createRun, setRunStatus, logError,
+} from 'resplite/migration';
 ```
-notify-keyspace-events KEA
-```
-
-(`K` = keyspace prefix, `E` = keyevent prefix, `A` = all event types — lets the tracker see every key change and expiration.)
-
-> **Renamed CONFIG command?** Some Redis deployments rename `CONFIG` for security. Pass `--config-command <name>` to the CLI tools, or the `configCommand` option to the JS API — see below.
-
-1. **Preflight** – Check Redis, key count, type distribution, and that keyspace notifications are enabled:
-```bash
-npx resplite-import preflight --from redis://10.0.0.10:6379 --to ./resplite.db
-```
 
-
-```bash
-npx resplite-dirty-tracker start --run-id run_001 --from redis://10.0.0.10:6379 --to ./resplite.db
-# If CONFIG was renamed:
-npx resplite-dirty-tracker start --run-id run_001 --from redis://10.0.0.10:6379 --to ./resplite.db --config-command MYCONFIG
-```
+## JavaScript examples
 
-
-```bash
-npx resplite-import bulk --run-id run_001 --from redis://10.0.0.10:6379 --to ./resplite.db \
-  --scan-count 1000 --max-rps 2000 --batch-keys 200 --batch-bytes 64MB
-```
+Once connected through the `redis` client, you can use RESPLite with the usual Redis-style API.
 
-
-```bash
-npx resplite-import status --run-id run_001 --to ./resplite.db
-```
+### Minimal embedded example
 
-
-
-
-```
-
-6. **Stop tracker and switch** – Stop the tracker and point clients to RespLite:
-```bash
-npx resplite-dirty-tracker stop --run-id run_001 --to ./resplite.db
-```
-
-7. **Verify** – Optional sampling check between Redis and destination:
-```bash
-npx resplite-import verify --run-id run_001 --from redis://10.0.0.10:6379 --to ./resplite.db --sample 0.5%
-```
-
-Then start RespLite with the migrated DB: `RESPLITE_DB=./resplite.db npm start`.
+```javascript
+import { createClient } from 'redis';
+import { createRESPlite } from 'resplite/embed';
 
-
+const srv = await createRESPlite({ db: './my-app.db' });
+const client = createClient({ socket: { port: srv.port, host: '127.0.0.1' } });
+await client.connect();
 
-
+await client.set('hello', 'world');
+console.log(await client.get('hello')); // → "world"
 
-
-
-  runPreflight, runBulkImport, runApplyDirty, runVerify,
-  getRun, getDirtyCounts, createRun, setRunStatus, logError,
-} from 'resplite/migration';
+await client.quit();
+await srv.close();
 ```
 
 ### Strings, TTL, and key operations
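The removed authentication docs above lean on standard URL userinfo syntax (`redis://:PASSWORD@host:port`). Node's WHATWG `URL` class parses `redis://` URLs the same way, which is presumably how a `--redis-url`-style option would extract credentials (a sketch; not the package's actual parsing code):

```javascript
// Parsing redis:// URLs with credentials via Node's built-in WHATWG URL.
const u = new URL('redis://myuser:mysecret@127.0.0.1:6379');
console.log(u.username); // → myuser
console.log(u.password); // → mysecret
console.log(u.hostname, u.port); // → 127.0.0.1 6379

// Password-only form: redis://:PASSWORD@host:port leaves username empty.
const p = new URL('redis://:mysecret@10.0.0.10:6379');
console.log(p.username === '' && p.password === 'mysecret'); // → true
```

Quoting the URL in the shell, as the removed examples did, keeps characters like `@` and `:` from being mangled before they reach the parser.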
@@ -561,33 +416,9 @@ await c2.quit();
 await srv2.close();
 ```
 
-
-
-If your Redis source uses **RediSearch** (Redis Stack or the `redis/search` module), run `migrate-search` after (or during) the KV bulk import. It reads index schemas with `FT.INFO`, creates them in RespLite, and imports documents by scanning the matching hash keys.
+## Migrating RediSearch indices
 
-**
-
-```bash
-# Migrate all indices
-npx resplite-import migrate-search \
-  --from redis://10.0.0.10:6379 \
-  --to ./resplite.db
-
-# Migrate specific indices only
-npx resplite-import migrate-search \
-  --from redis://10.0.0.10:6379 \
-  --to ./resplite.db \
-  --index products \
-  --index articles
-
-# Options
-# --scan-count N        SCAN COUNT hint (default 500)
-# --max-rps N           throttle Redis reads
-# --batch-docs N        docs per SQLite transaction (default 200)
-# --max-suggestions N   cap for suggestion import (default 10000)
-# --no-skip             overwrite if the index already exists in RespLite
-# --no-suggestions      skip suggestion import
-```
+If your Redis source uses **RediSearch** (Redis Stack or the `redis/search` module), the best moment to run `migrateSearch()` is after the final KV cutover, once writes to Redis are already frozen. It reads index schemas with `FT.INFO`, creates them in RESPLite, and imports documents by scanning the matching hash keys.
 
 **Programmatic API:**
 
@@ -598,7 +429,7 @@ const result = await m.migrateSearch({
   onlyIndices: ['products', 'articles'], // omit to migrate all
   batchDocs: 200,
   maxSuggestions: 10000,
-  skipExisting: true, //
+  skipExisting: true, // reuse existing destination index if already created
   withSuggestions: true, // default
   onProgress: (r) => console.log(r.name, r.docsImported, r.warnings),
 });
@@ -608,7 +439,7 @@ const result = await m.migrateSearch({
 
 **What gets migrated:**
 
-| RediSearch type |
+| RediSearch type | RESPLite | Notes |
 |---|---|---|
 | TEXT | TEXT | Direct |
 | TAG | TEXT | Values preserved; TAG filtering lost |
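The mapping table above reads as a small lookup applied while recreating each index schema in the destination, with a warning recorded for the lossy TAG case. A sketch of that conversion (hypothetical `mapFieldType`, covering only the two rows visible in this hunk; not the `migrateSearch` implementation):

```javascript
// Hypothetical schema-type mapping per the table above.
function mapFieldType(rediSearchType, warnings) {
  switch (rediSearchType) {
    case 'TEXT':
      return 'TEXT'; // direct mapping
    case 'TAG':
      warnings.push('TAG field stored as TEXT; TAG filtering is lost');
      return 'TEXT';
    default:
      warnings.push(`field type ${rediSearchType} not handled in this sketch`);
      return 'TEXT';
  }
}

const warnings = [];
console.log(mapFieldType('TEXT', warnings)); // → TEXT
console.log(mapFieldType('TAG', warnings));  // → TEXT
console.log(warnings.length); // → 1
```

Collecting the warnings per index matches the `r.warnings` array surfaced by the `onProgress` callback shown earlier.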
@@ -620,6 +451,88 @@ const result = await m.migrateSearch({
 - Suggestions are imported via `FT.SUGGET "" MAX n WITHSCORES` (no cursor; capped at `maxSuggestions`).
 - Graceful shutdown: Ctrl+C finishes the current document, closes SQLite cleanly, and exits with a non-zero code.
 
+## CLI and standalone server reference
+
+If you prefer operating RESPLite from the terminal, or want separate long-running processes, use the commands below.
+
+### Run as a standalone server
+
+```bash
+npm start
+```
+
+By default the server listens on port **6379** and stores data in `data.db` in the current directory.
+
+```bash
+redis-cli -p 6379
+> PING
+PONG
+> SET foo bar
+OK
+> GET foo
+"bar"
+```
+
+### Standalone server script (fixed port)
+
+Run this as a persistent background process (`node server.js`). RESPLite will listen on port 6380 and stay up until the process receives SIGINT (Ctrl+C) or SIGTERM; then it closes the server and exits cleanly. If you kill the process (for example, SIGKILL or force quit), all client connections are closed as well.
+
+```javascript
+// server.js
+import { createRESPlite } from 'resplite/embed';
+
+const srv = await createRESPlite({ port: 6380, db: './data.db' });
+console.log(`RESPLite listening on ${srv.host}:${srv.port}`);
+```
+
+Then connect from any other script or process:
+
+```bash
+redis-cli -p 6380 PING
+```
+
+### Environment variables
+
+| Variable | Default | Description |
+|---|---|---|
+| `RESPLITE_PORT` | `6379` | Server port |
+| `RESPLITE_DB` | `./data.db` | SQLite database file |
+| `RESPLITE_PRAGMA_TEMPLATE` | `default` | SQLite PRAGMA preset (see below) |
+
+### PRAGMA templates
+
+| Template | Description | Key settings |
+|---|---|---|
+| `default` | Balanced durability and speed (recommended) | WAL, synchronous=NORMAL, 20 MB cache |
+| `performance` | Maximum throughput, reduced crash safety | WAL, synchronous=OFF, 64 MB cache, 512 MB mmap, exclusive locking |
+| `safety` | Crash-safe writes at the cost of speed | WAL, synchronous=FULL, 20 MB cache |
+| `minimal` | Only WAL + foreign keys | WAL, foreign_keys=ON |
+| `none` | No pragmas applied, pure SQLite defaults | - |
+
+## Benchmark (Redis vs RESPLite)
+
+A typical comparison is **Redis (for example, in Docker)** on one side and **RESPLite locally** on the other. In that setup, RESPLite often shows **better latency** because it avoids Docker networking and runs in the same process or host. The benchmark below uses RESPLite with the **default** PRAGMA template only.
+
+**Example results (Redis vs RESPLite, default pragma, 10k iterations):**
+
+| Suite | Redis (Docker) | RESPLite (default) |
+|-----------------|----------------|--------------------|
+| PING | 8.79K/s | 37.36K/s |
+| SET+GET | 4.68K/s | 11.96K/s |
+| MSET+MGET(10) | 4.41K/s | 5.81K/s |
+| INCR | 9.54K/s | 18.97K/s |
+| HSET+HGET | 4.40K/s | 11.91K/s |
+| HGETALL(50) | 8.39K/s | 11.01K/s |
+| HLEN(50) | 9.36K/s | 31.21K/s |
+| SADD+SMEMBERS | 9.27K/s | 17.37K/s |
+| LPUSH+LRANGE | 8.34K/s | 14.27K/s |
+| LREM | 4.37K/s | 6.08K/s |
+| ZADD+ZRANGE | 7.80K/s | 17.12K/s |
+| SET+DEL | 4.39K/s | 9.57K/s |
+| FT.SEARCH | 8.36K/s | 8.22K/s |
+
+To reproduce the benchmark, run `npm run benchmark -- --template default`. Numbers depend on host and whether Redis is native or in Docker.
+
 ## Compatibility matrix
 
 ### Supported (v1)
@@ -634,9 +547,8 @@ const result = await m.migrateSearch({
 | **Lists** | LPUSH, RPUSH, LLEN, LRANGE, LINDEX, LPOP, RPOP, BLPOP, BRPOP |
 | **Sorted sets** | ZADD, ZREM, ZCARD, ZSCORE, ZRANGE, ZRANGEBYSCORE |
 | **Search (FT.\*)** | FT.CREATE, FT.INFO, FT.ADD, FT.DEL, FT.SEARCH, FT.SUGADD, FT.SUGGET, FT.SUGDEL |
-| **Introspection** | TYPE, SCAN, KEYS, MONITOR |
+| **Introspection** | TYPE, OBJECT IDLETIME, SCAN, KEYS, MONITOR |
 | **Admin** | SQLITE.INFO, CACHE.INFO, MEMORY.INFO |
-| **Tooling** | Redis import CLI (see Migration from Redis) |
 
 ### Not supported (v1)
 
@@ -660,10 +572,3 @@ Unsupported commands return: `ERR command not supported yet`.
 | `npm run test:contract` | Contract tests (redis client) |
 | `npm run test:stress` | Stress tests |
 | `npm run benchmark` | Comparative benchmark Redis vs RESPLite |
-| `npm run import-from-redis` | One-shot import from Redis into a SQLite DB |
-| `npx resplite-import` (preflight, bulk, status, apply-dirty, verify) | Migration CLI (minimal-downtime flow) |
-| `npx resplite-dirty-tracker <start\|stop>` | Dirty-key tracker for migration cutover |
-
-## Specification
-
-See [SPEC.md](SPEC.md) for the full specification.
package/package.json
CHANGED
@@ -1,13 +1,9 @@
 {
   "name": "resplite",
-  "version": "1.2.4",
+  "version": "1.2.8",
   "description": "A RESP2 server with practical Redis compatibility, backed by SQLite",
   "type": "module",
   "main": "src/index.js",
-  "bin": {
-    "resplite-import": "src/cli/resplite-import.js",
-    "resplite-dirty-tracker": "src/cli/resplite-dirty-tracker.js"
-  },
   "exports": {
     ".": "./src/index.js",
     "./embed": "./src/embed.js",
@@ -15,7 +11,6 @@
   },
   "scripts": {
     "start": "node src/index.js",
-    "import-from-redis": "node src/cli/import-from-redis.js",
     "test": "node --test",
     "test:unit": "node --test 'test/unit/*.test.js'",
     "test:integration": "node --test 'test/integration/*.test.js'",
package/scripts/create-interface-smoke.js
ADDED
@@ -0,0 +1,32 @@
+#!/usr/bin/env node
+/**
+ * Quick smoke test for `node:readline/promises` `createInterface()`.
+ *
+ * Usage:
+ *   node scripts/create-interface-smoke.js
+ *
+ * You can also pipe answers:
+ *   printf 'Martin\n\n' | node scripts/create-interface-smoke.js
+ */
+
+import { stdin, stdout } from 'node:process';
+import { createInterface } from 'node:readline/promises';
+
+async function main() {
+  const rl = createInterface({ input: stdin, output: stdout });
+
+  try {
+    const name = await rl.question('Type your name and press Enter: ');
+    console.log(`Hello${name ? `, ${name}` : ''}.`);
+
+    await rl.question('Press Enter to simulate the final cutover...');
+    console.log('Continuing after Enter.');
+  } finally {
+    rl.close();
+  }
+}
+
+main().catch((error) => {
+  console.error('createInterface smoke test failed:', error);
+  process.exitCode = 1;
+});