resplite 1.2.6 → 1.2.10
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +168 -275
- package/package.json +1 -6
- package/scripts/create-interface-smoke.js +32 -0
- package/skills/README.md +22 -0
- package/skills/resplite-command-vertical-slice/SKILL.md +134 -0
- package/skills/resplite-ft-search-workbench/SKILL.md +138 -0
- package/skills/resplite-migration-cutover-assistant/SKILL.md +138 -0
- package/spec/00-INDEX.md +37 -0
- package/spec/01-overview-and-goals.md +125 -0
- package/spec/02-protocol-and-commands.md +174 -0
- package/spec/03-data-model-ttl-transactions.md +157 -0
- package/spec/04-cache-architecture.md +171 -0
- package/spec/05-scan-admin-implementation.md +379 -0
- package/spec/06-migration-strategy-core.md +79 -0
- package/spec/07-type-lists.md +202 -0
- package/spec/08-type-sorted-sets.md +220 -0
- package/spec/{SPEC_D.md → 09-search-ft-commands.md} +3 -1
- package/spec/{SPEC_E.md → 10-blocking-commands.md} +3 -1
- package/spec/{SPEC_F.md → 11-migration-dirty-registry.md} +61 -147
- package/src/commands/object.js +17 -0
- package/src/commands/registry.js +4 -0
- package/src/commands/zrevrange.js +27 -0
- package/src/engine/engine.js +19 -0
- package/src/migration/apply-dirty.js +8 -1
- package/src/migration/index.js +5 -4
- package/src/migration/migrate-search.js +25 -6
- package/src/storage/sqlite/zsets.js +34 -0
- package/test/integration/object-idletime.test.js +51 -0
- package/test/integration/zsets.test.js +18 -0
- package/test/unit/migrate-search.test.js +50 -2
- package/spec/SPEC_A.md +0 -1171
- package/spec/SPEC_B.md +0 -426
- package/src/cli/import-from-redis.js +0 -194
- package/src/cli/resplite-dirty-tracker.js +0 -92
- package/src/cli/resplite-import.js +0 -296
- package/test/contract/import-from-redis.test.js +0 -83
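The change list above adds `package/src/commands/zrevrange.js` and a zsets storage helper. As a rough illustration of the ZREVRANGE semantics that command implements (highest score first, negative indices counting from the end), here is a standalone sketch; it is not resplite's code, and the `entries` array of `{ member, score }` pairs is just a stand-in data shape:

```javascript
// Sketch of ZREVRANGE over an in-memory sorted set. NOT resplite's
// implementation; for illustration only.
function zrevrange(entries, start, stop, withScores = false) {
  // Redis orders ZREVRANGE by score descending; ties by member descending.
  const desc = [...entries].sort(
    (a, b) => b.score - a.score || (a.member < b.member ? 1 : a.member > b.member ? -1 : 0)
  );
  const n = desc.length;
  const from = start < 0 ? Math.max(n + start, 0) : start;
  const to = Math.min(stop < 0 ? n + stop : stop, n - 1);
  const slice = from <= to ? desc.slice(from, to + 1) : [];
  return withScores
    ? slice.flatMap((e) => [e.member, String(e.score)])
    : slice.map((e) => e.member);
}

const board = [
  { member: 'alice', score: 10 },
  { member: 'bob', score: 30 },
  { member: 'carol', score: 20 },
];
console.log(zrevrange(board, 0, -1)); // → [ 'bob', 'carol', 'alice' ]
console.log(zrevrange(board, 0, 0, true)); // → [ 'bob', '30' ]
```

The negative-index handling mirrors ZRANGE: `-1` is the last element, and an empty array is returned when the window is out of range.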
package/README.md
CHANGED

@@ -21,45 +21,19 @@ Building this project surfaced a clear finding: **Redis running inside Docker**
 
 The strongest use case is **migrating a non-replicated Redis instance that has grown large** (tens of GB). You don't need to manage replicas, AOF, or RDB. Once migrated, you get a single SQLite file and latency that is good enough for most workloads. The built-in migration tooling (see [Migration from Redis](#migration-from-redis)) handles datasets of that size with minimal downtime.
 
-
-
-A typical comparison is **Redis (e.g. in Docker)** on one side and **RESPLite locally** on the other. In that setup, RESPLite often shows **better latency** because it avoids Docker networking and runs in the same process/host. The benchmark below uses RESPLite with the **default** PRAGMA template only.
-
-**Example results (Redis vs RESPLite, default pragma, 10k iterations):**
-
-| Suite | Redis (Docker) | RESPLite (default) |
-|-----------------|----------------|--------------------|
-| PING | 8.79K/s | 37.36K/s |
-| SET+GET | 4.68K/s | 11.96K/s |
-| MSET+MGET(10) | 4.41K/s | 5.81K/s |
-| INCR | 9.54K/s | 18.97K/s |
-| HSET+HGET | 4.40K/s | 11.91K/s |
-| HGETALL(50) | 8.39K/s | 11.01K/s |
-| HLEN(50) | 9.36K/s | 31.21K/s |
-| SADD+SMEMBERS | 9.27K/s | 17.37K/s |
-| LPUSH+LRANGE | 8.34K/s | 14.27K/s |
-| LREM | 4.37K/s | 6.08K/s |
-| ZADD+ZRANGE | 7.80K/s | 17.12K/s |
-| SET+DEL | 4.39K/s | 9.57K/s |
-| FT.SEARCH | 8.36K/s | 8.22K/s |
-
-*Run `npm run benchmark -- --template default` to reproduce. Numbers depend on host and whether Redis is native or in Docker.*
+### Benchmark snapshot
 
-
-
-```bash
-# Terminal 1: Redis on 6379 (e.g. docker run -p 6379:6379 redis). Terminal 2: RESPLite on 6380
-RESPLITE_PORT=6380 npm start
+Representative results against Redis in Docker on the same host:
 
-
-
+| Suite | Redis (Docker) | RESPLite (default) |
+|---------------|----------------|--------------------|
+| PING | 8.79K/s | 37.36K/s |
+| SET+GET | 4.68K/s | 11.96K/s |
+| HSET+HGET | 4.40K/s | 11.91K/s |
+| ZADD+ZRANGE | 7.80K/s | 17.12K/s |
+| FT.SEARCH | 8.36K/s | 8.22K/s |
 
-
-npm run benchmark -- --template default
-
-# Custom iterations and ports
-npm run benchmark -- --iterations 10000 --redis-port 6379 --resplite-port 6380
-```
+The full benchmark table is available later in [Benchmark](#benchmark-redis-vs-resplite).
 
 ## Install
 
@@ -67,124 +41,60 @@ npm run benchmark -- --iterations 10000 --redis-port 6379 --resplite-port 6380
 npm install resplite
 ```
 
-##
+## AI Skill
 
 ```bash
-
+npx skills add https://github.com/clasen/RESPLite
 ```
 
-
-
-```bash
-redis-cli -p 6379
-> PING
-PONG
-> SET foo bar
-OK
-> GET foo
-"bar"
-```
-
-### Standalone server script (fixed port)
-
-Run this as a persistent background process (`node server.js`). RESPLite will listen on port 6380 and stay up until the process receives SIGINT (Ctrl+C) or SIGTERM; then it closes the server and exits cleanly. If you kill the process (e.g. SIGKILL or force quit), all client connections are closed as well — with the default configuration the server runs in the same process, so when the process exits the TCP server and its connections are torn down.
-
-```javascript
-// server.js
-import { createRESPlite } from 'resplite/embed';
-
-const srv = await createRESPlite({ port: 6380, db: './data.db' });
-console.log(`RESPLite listening on ${srv.host}:${srv.port}`);
-
-```
-
-Then connect from any other script or process:
-
-```bash
-redis-cli -p 6380 PING
-```
-
-### Environment variables
-
-| Variable | Default | Description |
-|---|---|---|
-| `RESPLITE_PORT` | `6379` | Server port |
-| `RESPLITE_DB` | `./data.db` | SQLite database file |
-| `RESPLITE_PRAGMA_TEMPLATE` | `default` | SQLite PRAGMA preset (see below) |
-
-### PRAGMA templates
-
-| Template | Description | Key settings |
-|---|---|---|
-| `default` | Balanced durability and speed (recommended) | WAL, synchronous=NORMAL, 20 MB cache |
-| `performance` | Maximum throughput, reduced crash safety | WAL, synchronous=OFF, 64 MB cache, 512 MB mmap, exclusive locking |
-| `safety` | Crash-safe writes at the cost of speed | WAL, synchronous=FULL, 20 MB cache |
-| `minimal` | Only WAL + foreign keys | WAL, foreign_keys=ON |
-| `none` | No pragmas applied — pure SQLite defaults | — |
-
-## Programmatic usage (embedded)
-
-RESPLite can be started and consumed entirely within a single Node.js script — no separate process needed. This is exactly how the test suite works.
+## JavaScript quick start
 
-
-
-```javascript
-import { createClient } from 'redis';
-import { createRESPlite } from 'resplite/embed';
-
-const srv = await createRESPlite({ db: './my-app.db' });
-const client = createClient({ socket: { port: srv.port, host: '127.0.0.1' } });
-await client.connect();
-
-await client.set('hello', 'world');
-console.log(await client.get('hello')); // → "world"
-
-await client.quit();
-await srv.close();
-```
+The recommended way to use RESPLite is from your own Node.js script, creating the server with the options and observability hooks your app needs. If you prefer a standalone server or terminal workflow, see [CLI and standalone server reference](#cli-and-standalone-server-reference) below.
 
-###
+### Recommended server script
 
-
+In a typical app, you start RESPLite from your own process and attach hooks for observability. The client still receives the same RESP responses; hooks are for logging and monitoring only.
 
 ```javascript
-import
-const log =
+import LemonLog from 'lemonlog';
+const log = new LemonLog('RESPlite');
 
 const srv = await createRESPlite({
   port: 6380,
   db: './data.db',
   hooks: {
     onUnknownCommand({ command, argsCount, clientAddress }) {
-      log.warn({ command, argsCount, clientAddress }, '
+      log.warn({ command, argsCount, clientAddress }, 'unsupported command');
     },
     onCommandError({ command, error, clientAddress }) {
-      log.warn({ command, error, clientAddress }, '
+      log.warn({ command, error, clientAddress }, 'command error');
     },
     onSocketError({ error, clientAddress }) {
-      log.error({ err: error, clientAddress }, '
+      log.error({ err: error, clientAddress }, 'connection error');
     },
   },
 });
 ```
 
-
-|------|--------------------|
-| `onUnknownCommand` | Client sent a command not implemented by RESPLite (e.g. `SUBSCRIBE`, `PUBLISH`). |
-| `onCommandError` | A command failed (wrong type, invalid args, or handler threw). |
-| `onSocketError` | The connection socket emitted an error (e.g. `ECONNRESET`). |
+Available hooks:
 
-
+- `onUnknownCommand`: client sent a command not implemented by RESPLite, such as `SUBSCRIBE` or `PUBLISH`.
+- `onCommandError`: a command failed because of wrong type, invalid args, or a handler error.
+- `onSocketError`: the connection socket emitted an error, for example `ECONNRESET`.
+
+If you want a tiny in-process smoke test that starts RESPLite and connects with the `redis` client in the same script, see [Minimal embedded example](#minimal-embedded-example) below.
 
-
+## Migration from Redis
 
-
+RESPLite is a good fit for migrating **non-replicated Redis** instances that have **grown large** (e.g. tens of GB) and where RESPLite's latency is acceptable. The recommended path is to drive the migration from a Node.js script via `resplite/migration`, keeping preflight, dirty tracking, bulk import, cutover, and verification in one place.
 
-###
+### Recommended migration script
 
-
+The full flow can run from a single script: inspect Redis, enable keyspace notifications, track dirty keys in-process, bulk import with checkpoints, apply dirty keys during cutover, verify, and disconnect cleanly.
 
 ```javascript
+import { stdin, stdout } from 'node:process';
+import { createInterface } from 'node:readline/promises';
 import { createMigration } from 'resplite/migration';
 
 const m = createMigration({
@@ -192,12 +102,11 @@ const m = createMigration({
   to: './resplite.db', // destination SQLite DB path (required)
   runId: 'my-migration-1', // unique run ID (required for bulk/status/applyDirty)
 
-  // optional
-  scanCount:
-  batchKeys:
+  // optional
+  scanCount: 5000,
+  batchKeys: 1000,
   batchBytes: 64 * 1024 * 1024, // 64 MB
   maxRps: 0, // 0 = unlimited
-  pragmaTemplate: 'default',
 
   // If your Redis deployment renamed CONFIG for security:
   // configCommand: 'MYCONFIG',
@@ -242,13 +151,27 @@ await m.bulk({
 const { run, dirty } = m.status();
 console.log('bulk status:', run.status, '— dirty counts:', dirty);
 
-// Step 2 —
-
+// Step 2 — Pause for cutover:
+// stop the app that is still writing to Redis, then press Enter.
+const rl = createInterface({ input: stdin, output: stdout });
+await rl.question('Stop app traffic to Redis, then press Enter to apply the final dirty set...');
+rl.close();
+
+// Step 3 — Apply dirty keys that changed in Redis during bulk
+await m.applyDirty({ onProgress: console.log });
 
-// Step
+// Step 3b — Stop tracker after cutover
 await m.stopDirtyTracker();
 
-//
+// If the source also uses FT.*, this is where you would run m.migrateSearch().
+// Step 3c — Migrate RediSearch indices after writes are frozen
+await m.migrateSearch({
+  onProgress: (r) => {
+    console.log(`[search ${r.name}] docs=${r.docsImported} skipped=${r.docsSkipped} warnings=${r.warnings.length}`);
+  },
+});
+
+// Step 4 — Verify a sample of keys match between Redis and the destination
 const result = await m.verify({ samplePct: 0.5, maxSample: 10000 });
 console.log(`verified ${result.sampled} keys — mismatches: ${result.mismatches.length}`);
 
@@ -256,13 +179,17 @@ console.log(`verified ${result.sampled} keys — mismatches: ${result.mismatches
 await m.close();
 ```
 
-**
+**Bulk: Automatic resume (default)**
 `resume` defaults to `true`. It doesn't matter whether it's the first run or a resume: the same script works for both starting and continuing. The first run starts from cursor 0; if the process is interrupted (Ctrl+C, crash, etc.), running the script again continues from the last checkpoint. You don't need to pass `resume: false` on the first run or change anything to resume.
 
 **Graceful shutdown**
 On SIGINT (Ctrl+C) or SIGTERM, the bulk importer checkpoints progress, sets the run status to `aborted`, closes the SQLite database cleanly (so WAL is checkpointed and the file is not left open), then exits. You can safely interrupt a long-running bulk and resume later.
 
-The JS API can run the dirty-key tracker in-process via `m.startDirtyTracker()` / `m.stopDirtyTracker()`, so the full flow
+The JS API can run the dirty-key tracker in-process via `m.startDirtyTracker()` / `m.stopDirtyTracker()`, so the full flow stays inside a single script.
+
+For a real cutover, the simplest flow is: let bulk finish, stop the app that still writes to Redis, press Enter to apply the final dirty set, run `migrateSearch()` if you use `FT.*`, and then switch traffic to RESPLite.
+
+The KV bulk flow imports strings, hashes, sets, lists, and zsets. If your source also uses `FT.*` indices, see [Migrating RediSearch indices](#migrating-redisearch-indices).
 
 #### Renamed CONFIG command
 
@@ -284,122 +211,38 @@ const info = await m.preflight();
 const result = await m.enableKeyspaceNotifications({ value: 'KEA' });
 ```
 
-The same
-
-```bash
-npx resplite-dirty-tracker start --run-id run_001 --to ./resplite.db \
-  --from redis://10.0.0.10:6379 --config-command MYCONFIG
-```
-### Simple one-shot import
-
-For small datasets or when downtime is acceptable:
-
-```bash
-# Default: redis://127.0.0.1:6379 → ./data.db
-npm run import-from-redis -- --db ./migrated.db
-
-# Custom Redis URL
-npm run import-from-redis -- --db ./migrated.db --redis-url redis://127.0.0.1:6379
-
-# Or host/port
-npm run import-from-redis -- --db ./migrated.db --host 127.0.0.1 --port 6379
-
-# Optional: PRAGMA template for the target DB
-npm run import-from-redis -- --db ./migrated.db --pragma-template performance
-```
-
-### Redis with authentication
-
-Migration supports Redis instances protected by a password. Use a Redis URL that includes the password (or username and password for Redis 6+ ACL):
-
-- **Password only:** `redis://:PASSWORD@host:port`
-- **Username and password:** `redis://username:PASSWORD@host:port`
-
-Examples:
-
-```bash
-# One-shot import from authenticated Redis
-npm run import-from-redis -- --db ./migrated.db --redis-url "redis://:mysecret@127.0.0.1:6379"
-
-# flow: use --from with the full URL (or set RESPLITE_IMPORT_FROM)
-npx resplite-import preflight --from "redis://:mysecret@10.0.0.10:6379" --to ./resplite.db
-npx resplite-dirty-tracker start --run-id run_001 --from "redis://:mysecret@10.0.0.10:6379" --to ./resplite.db
-```
-
-For one-shot import, authentication is only available when using `--redis-url`; the `--host` / `--port` options do not support a password.
+The same `configCommand` override is used by `preflight()` and `enableKeyspaceNotifications()` in the programmatic flow.
 
-
-The KV bulk migration imports only the Redis keyspace (strings, hashes, sets, lists, zsets). RediSearch index schemas and documents are migrated separately with the `migrate-search` step — see [Migrating RediSearch indices](#migrating-redisearch-indices) below.
-
-### Minimal-downtime migration
-
-For large datasets (~30 GB), use the Dirty Key Registry flow so the bulk of the migration runs online and only a short cutover is needed.
-
-**Enable keyspace notifications in Redis** (required for the dirty-key tracker). Either run at runtime:
-
-```bash
-redis-cli CONFIG SET notify-keyspace-events KEA
-```
+#### Low-level re-exports
 
-
+If you need more control, the individual functions and registry helpers are also exported:
 
+```javascript
+import {
+  runPreflight, runBulkImport, runApplyDirty, runVerify,
+  getRun, getDirtyCounts, createRun, setRunStatus, logError,
+} from 'resplite/migration';
 ```
-notify-keyspace-events KEA
-```
-
-(`K` = keyspace prefix, `E` = keyevent prefix, `A` = all event types — lets the tracker see every key change and expiration.)
 
-
+## JavaScript examples
 
-
-```bash
-npx resplite-import preflight --from redis://10.0.0.10:6379 --to ./resplite.db
-```
+Once connected through the `redis` client, you can use RESPLite with the usual Redis-style API.
 
-
-```bash
-npx resplite-dirty-tracker start --run-id run_001 --from redis://10.0.0.10:6379 --to ./resplite.db
-# If CONFIG was renamed:
-npx resplite-dirty-tracker start --run-id run_001 --from redis://10.0.0.10:6379 --to ./resplite.db --config-command MYCONFIG
-```
+### Minimal embedded example
 
-
-
-
---scan-count 1000 --max-rps 2000 --batch-keys 200 --batch-bytes 64MB
-```
-
-4. **Monitor** – Check run and dirty-key counts:
-```bash
-npx resplite-import status --run-id run_001 --to ./resplite.db
-```
-
-5. **Cutover** – Freeze app writes to Redis, then apply remaining dirty keys:
-```bash
-npx resplite-import apply-dirty --run-id run_001 --from redis://10.0.0.10:6379 --to ./resplite.db
-```
-
-6. **Stop tracker and switch** – Stop the tracker and point clients to RespLite:
-```bash
-npx resplite-dirty-tracker stop --run-id run_001 --to ./resplite.db
-```
-
-7. **Verify** – Optional sampling check between Redis and destination:
-```bash
-npx resplite-import verify --run-id run_001 --from redis://10.0.0.10:6379 --to ./resplite.db --sample 0.5%
-```
-
-Then start RespLite with the migrated DB: `RESPLITE_DB=./resplite.db npm start`.
+```javascript
+import { createClient } from 'redis';
+import { createRESPlite } from 'resplite/embed';
 
-
+const srv = await createRESPlite({ db: './my-app.db' });
+const client = createClient({ socket: { port: srv.port, host: '127.0.0.1' } });
+await client.connect();
 
-
+await client.set('hello', 'world');
+console.log(await client.get('hello')); // → "world"
 
-
-
-  runPreflight, runBulkImport, runApplyDirty, runVerify,
-  getRun, getDirtyCounts, createRun, setRunStatus, logError,
-} from 'resplite/migration';
+await client.quit();
+await srv.close();
 ```
 
 ### Strings, TTL, and key operations
@@ -573,33 +416,9 @@ await c2.quit();
 await srv2.close();
 ```
 
-
-
-If your Redis source uses **RediSearch** (Redis Stack or the `redis/search` module), run `migrate-search` after (or during) the KV bulk import. It reads index schemas with `FT.INFO`, creates them in RespLite, and imports documents by scanning the matching hash keys.
+## Migrating RediSearch indices
 
-**
-
-```bash
-# Migrate all indices
-npx resplite-import migrate-search \
-  --from redis://10.0.0.10:6379 \
-  --to ./resplite.db
-
-# Migrate specific indices only
-npx resplite-import migrate-search \
-  --from redis://10.0.0.10:6379 \
-  --to ./resplite.db \
-  --index products \
-  --index articles
-
-# Options
-# --scan-count N        SCAN COUNT hint (default 500)
-# --max-rps N           throttle Redis reads
-# --batch-docs N        docs per SQLite transaction (default 200)
-# --max-suggestions N   cap for suggestion import (default 10000)
-# --no-skip             overwrite if the index already exists in RespLite
-# --no-suggestions      skip suggestion import
-```
+If your Redis source uses **RediSearch** (Redis Stack or the `redis/search` module), the best moment to run `migrateSearch()` is after the final KV cutover, once writes to Redis are already frozen. It reads index schemas with `FT.INFO`, creates them in RESPLite, and imports documents by scanning the matching hash keys.
 
 **Programmatic API:**
 
@@ -610,7 +429,7 @@ const result = await m.migrateSearch({
   onlyIndices: ['products', 'articles'], // omit to migrate all
   batchDocs: 200,
   maxSuggestions: 10000,
-  skipExisting: true, //
+  skipExisting: true, // reuse existing destination index if already created
   withSuggestions: true, // default
   onProgress: (r) => console.log(r.name, r.docsImported, r.warnings),
 });
@@ -620,7 +439,7 @@ const result = await m.migrateSearch({
 
 **What gets migrated:**
 
-| RediSearch type |
+| RediSearch type | RESPLite | Notes |
 |---|---|---|
 | TEXT | TEXT | Direct |
 | TAG | TEXT | Values preserved; TAG filtering lost |
@@ -632,6 +451,88 @@ const result = await m.migrateSearch({
 - Suggestions are imported via `FT.SUGGET "" MAX n WITHSCORES` (no cursor; capped at `maxSuggestions`).
 - Graceful shutdown: Ctrl+C finishes the current document, closes SQLite cleanly, and exits with a non-zero code.
 
+## CLI and standalone server reference
+
+If you prefer operating RESPLite from the terminal, or want separate long-running processes, use the commands below.
+
+### Run as a standalone server
+
+```bash
+npm start
+```
+
+By default the server listens on port **6379** and stores data in `data.db` in the current directory.
+
+```bash
+redis-cli -p 6379
+> PING
+PONG
+> SET foo bar
+OK
+> GET foo
+"bar"
+```
+
+### Standalone server script (fixed port)
+
+Run this as a persistent background process (`node server.js`). RESPLite will listen on port 6380 and stay up until the process receives SIGINT (Ctrl+C) or SIGTERM; then it closes the server and exits cleanly. If you kill the process (for example, SIGKILL or force quit), all client connections are closed as well.
+
+```javascript
+// server.js
+import { createRESPlite } from 'resplite/embed';
+
+const srv = await createRESPlite({ port: 6380, db: './data.db' });
+console.log(`RESPLite listening on ${srv.host}:${srv.port}`);
+```
+
+Then connect from any other script or process:
+
+```bash
+redis-cli -p 6380 PING
+```
+
+### Environment variables
+
+| Variable | Default | Description |
+|---|---|---|
+| `RESPLITE_PORT` | `6379` | Server port |
+| `RESPLITE_DB` | `./data.db` | SQLite database file |
+| `RESPLITE_PRAGMA_TEMPLATE` | `default` | SQLite PRAGMA preset (see below) |
+
+### PRAGMA templates
+
+| Template | Description | Key settings |
+|---|---|---|
+| `default` | Balanced durability and speed (recommended) | WAL, synchronous=NORMAL, 20 MB cache |
+| `performance` | Maximum throughput, reduced crash safety | WAL, synchronous=OFF, 64 MB cache, 512 MB mmap, exclusive locking |
+| `safety` | Crash-safe writes at the cost of speed | WAL, synchronous=FULL, 20 MB cache |
+| `minimal` | Only WAL + foreign keys | WAL, foreign_keys=ON |
+| `none` | No pragmas applied, pure SQLite defaults | - |
+
+## Benchmark (Redis vs RESPLite)
+
+A typical comparison is **Redis (for example, in Docker)** on one side and **RESPLite locally** on the other. In that setup, RESPLite often shows **better latency** because it avoids Docker networking and runs in the same process or host. The benchmark below uses RESPLite with the **default** PRAGMA template only.
+
+**Example results (Redis vs RESPLite, default pragma, 10k iterations):**
+
+| Suite | Redis (Docker) | RESPLite (default) |
+|-----------------|----------------|--------------------|
+| PING | 8.79K/s | 37.36K/s |
+| SET+GET | 4.68K/s | 11.96K/s |
+| MSET+MGET(10) | 4.41K/s | 5.81K/s |
+| INCR | 9.54K/s | 18.97K/s |
+| HSET+HGET | 4.40K/s | 11.91K/s |
+| HGETALL(50) | 8.39K/s | 11.01K/s |
+| HLEN(50) | 9.36K/s | 31.21K/s |
+| SADD+SMEMBERS | 9.27K/s | 17.37K/s |
+| LPUSH+LRANGE | 8.34K/s | 14.27K/s |
+| LREM | 4.37K/s | 6.08K/s |
+| ZADD+ZRANGE | 7.80K/s | 17.12K/s |
+| SET+DEL | 4.39K/s | 9.57K/s |
+| FT.SEARCH | 8.36K/s | 8.22K/s |
+
+To reproduce the benchmark, run `npm run benchmark -- --template default`. Numbers depend on host and whether Redis is native or in Docker.
+
 ## Compatibility matrix
 
 ### Supported (v1)
@@ -646,9 +547,8 @@ const result = await m.migrateSearch({
 | **Lists** | LPUSH, RPUSH, LLEN, LRANGE, LINDEX, LPOP, RPOP, BLPOP, BRPOP |
 | **Sorted sets** | ZADD, ZREM, ZCARD, ZSCORE, ZRANGE, ZRANGEBYSCORE |
 | **Search (FT.\*)** | FT.CREATE, FT.INFO, FT.ADD, FT.DEL, FT.SEARCH, FT.SUGADD, FT.SUGGET, FT.SUGDEL |
-| **Introspection** | TYPE, SCAN, KEYS, MONITOR |
+| **Introspection** | TYPE, OBJECT IDLETIME, SCAN, KEYS, MONITOR |
 | **Admin** | SQLITE.INFO, CACHE.INFO, MEMORY.INFO |
-| **Tooling** | Redis import CLI (see Migration from Redis) |
 
 ### Not supported (v1)
 
@@ -672,10 +572,3 @@ Unsupported commands return: `ERR command not supported yet`.
 | `npm run test:contract` | Contract tests (redis client) |
 | `npm run test:stress` | Stress tests |
 | `npm run benchmark` | Comparative benchmark Redis vs RESPLite |
-| `npm run import-from-redis` | One-shot import from Redis into a SQLite DB |
-| `npx resplite-import` (preflight, bulk, status, apply-dirty, verify) | Migration CLI (minimal-downtime flow) |
-| `npx resplite-dirty-tracker <start\|stop>` | Dirty-key tracker for migration cutover |
-
-## Specification
-
-See [SPEC.md](SPEC.md) for the full specification.
package/package.json
CHANGED

@@ -1,13 +1,9 @@
 {
   "name": "resplite",
-  "version": "1.2.
+  "version": "1.2.10",
   "description": "A RESP2 server with practical Redis compatibility, backed by SQLite",
   "type": "module",
   "main": "src/index.js",
-  "bin": {
-    "resplite-import": "src/cli/resplite-import.js",
-    "resplite-dirty-tracker": "src/cli/resplite-dirty-tracker.js"
-  },
   "exports": {
     ".": "./src/index.js",
     "./embed": "./src/embed.js",
@@ -15,7 +11,6 @@
   },
   "scripts": {
     "start": "node src/index.js",
-    "import-from-redis": "node src/cli/import-from-redis.js",
     "test": "node --test",
     "test:unit": "node --test 'test/unit/*.test.js'",
     "test:integration": "node --test 'test/integration/*.test.js'",
package/scripts/create-interface-smoke.js
ADDED

@@ -0,0 +1,32 @@
+#!/usr/bin/env node
+/**
+ * Quick smoke test for `node:readline/promises` `createInterface()`.
+ *
+ * Usage:
+ *   node scripts/create-interface-smoke.js
+ *
+ * You can also pipe answers:
+ *   printf 'Martin\n\n' | node scripts/create-interface-smoke.js
+ */
+
+import { stdin, stdout } from 'node:process';
+import { createInterface } from 'node:readline/promises';
+
+async function main() {
+  const rl = createInterface({ input: stdin, output: stdout });
+
+  try {
+    const name = await rl.question('Type your name and press Enter: ');
+    console.log(`Hello${name ? `, ${name}` : ''}.`);
+
+    await rl.question('Press Enter to simulate the final cutover...');
+    console.log('Continuing after Enter.');
+  } finally {
+    rl.close();
+  }
+}
+
+main().catch((error) => {
+  console.error('createInterface smoke test failed:', error);
+  process.exitCode = 1;
+});
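The smoke script above drives `createInterface()` from a TTY. The same readline/promises pattern can be exercised without a terminal by feeding input from an in-memory stream, which is one way the migration script's cutover pause could be unit-tested. In this sketch, `pauseForCutover` is a hypothetical helper, not part of resplite:

```javascript
// Non-interactive exercise of the readline/promises cutover-pause pattern.
// `pauseForCutover` is hypothetical; input comes from an in-memory stream.
import { Readable, Writable } from 'node:stream';
import { createInterface } from 'node:readline/promises';

async function pauseForCutover(input, output) {
  const rl = createInterface({ input, output });
  try {
    await rl.question('Stop app traffic to Redis, then press Enter... ');
    return 'cutover-confirmed';
  } finally {
    rl.close();
  }
}

// Discard the prompt text; simulate the operator pressing Enter.
const sink = new Writable({ write(_chunk, _enc, cb) { cb(); } });
const answer = await pauseForCutover(Readable.from(['\n']), sink);
console.log(answer); // → cutover-confirmed
```

Because `createInterface` accepts any readable stream, the interactive and piped (`printf '\n' | ...`) paths share the same code.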
package/skills/README.md
ADDED

@@ -0,0 +1,22 @@
+# RESPLite Agent Skills
+
+This folder contains portable skills for recurring RESPLite workflows.
+
+## Skills
+
+- `resplite-command-vertical-slice`: implement or extend Redis-like command support end to end.
+- `resplite-migration-cutover-assistant`: work on Redis to RESPLite migration flows, dirty tracking, cutover, and verification.
+- `resplite-ft-search-workbench`: work on `FT.*`, SQLite FTS5 behavior, and RediSearch migration mapping.
+
+## Design intent
+
+These skills are scoped by workflow, not by file type. Each one tells the agent:
+
+- when the skill should trigger,
+- which RESPLite files and specs matter first,
+- how to keep scope aligned with the project's practical compatibility goals,
+- how to verify the change before calling it done.
+
+## Packaging
+
+Each skill folder is portable and can be installed independently in a skills directory or zipped for distribution.