reddb-cli 0.1.2-next.29 → 0.1.2-next.30
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +162 -641
- package/package.json +1 -1
package/README.md
CHANGED
@@ -1,156 +1,111 @@
 # RedDB
 
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-##
-
-
-
-
-<tr>
-<td width="33%">
-<strong>Operational tables</strong><br><br>
-
-- rows
-- tables
-- typed values and constraints
-- point/range scans
-- predicates and joins
-- explicit delete/update flows
-
-</td>
-<td width="33%">
-<strong>Payload-first entities</strong><br><br>
-
-- documents / JSON payloads
-- metadata
-- binary blobs
-- key-value fields
-- snapshots and exports
-
-</td>
-<td width="33%">
-<strong>Connected + semantic retrieval</strong><br><br>
-
-- graph nodes
-- graph edges
-- paths
-- vectors
-- embeddings
-- hybrid search
-- ANN index-backed search readiness
-
-</td>
-</tr>
-</table>
-
-What this means in practice:
-
-- store operational state, documents, relationships, and embeddings in one catalog
-- evolve from row-driven models to graph and vector workflows without migration hell
-- run the same data model in embedded and server/runtime modes
-- keep manifests, native state, snapshots, and exports in the same control plane
-
----
-
-## Quick start
-
-### Install
+RedDB is a unified multi-model database engine for teams that do not want to split operational data, documents, graph relationships, vector embeddings, and key-value state across different systems.
+
+It gives you one engine, one persistence layer, and one operational surface for:
+
+- tables and rows
+- JSON-like documents
+- graph nodes and edges
+- vector embeddings and similarity search
+- key-value records
+
+## What RedDB does
+
+RedDB lets one application work with different data shapes in the same database file or server runtime.
+
+Typical use cases:
+
+- operational application state with SQL-style querying
+- graph-aware products that also need regular tables
+- semantic retrieval and vector search next to first-party data
+- local-first or edge deployments that want an embedded database
+- AI/agent workflows that need MCP, HTTP, gRPC, or in-process access
+
+## How RedDB works
+
+RedDB uses the same core engine across three practical modes:
+
+| Mode | When to use it | How you access it |
+|:-----|:---------------|:------------------|
+| Embedded | Your app should own the database directly, like SQLite | Rust API (`RedDB` or `RedDBRuntime`) |
+| Server | Multiple clients or services need to connect | HTTP or gRPC |
+| Agent / tooling | You want CLI or MCP integration on top of the same engine | `red` CLI or MCP server |
+
+That means the storage model stays the same whether you:
+
+- open a local `.rdb` file inside your Rust process
+- run `red server --http`
+- run `red server --grpc`
+- expose the same database to AI agents through MCP
+
+## Install
+
+### GitHub releases
+
+The recommended install path is the release installer, which pulls the correct asset from GitHub Releases:
 
 ```bash
 curl -fsSL https://raw.githubusercontent.com/forattini-dev/reddb/main/install.sh | bash
 ```
 
-
+Pin a version:
 
 ```bash
-
-curl -fsSL https://raw.githubusercontent.com/forattini-dev/reddb/main/install.sh | bash -s -- --channel next
+curl -fsSL https://raw.githubusercontent.com/forattini-dev/reddb/main/install.sh | bash -s -- --version v0.1.2
 ```
 
-
+Use the prerelease channel:
 
 ```bash
-
+curl -fsSL https://raw.githubusercontent.com/forattini-dev/reddb/main/install.sh | bash -s -- --channel next
 ```
 
-
+If you prefer manual installation, download the asset for your platform from GitHub Releases and place the `red` binary somewhere in your `PATH`.
+
+Release page:
+
+`https://github.com/forattini-dev/reddb/releases`
+
+### npx
+
+`reddb` is also published as an npm wrapper that resolves and runs the real `red` binary for you.
+
+Install the managed binary:
 
 ```bash
-
-  -p 8080:8080 \
-  -v $(pwd)/data:/data \
-  --name reddb-http \
-  reddb red server --http --path /data/reddb.rdb --bind 0.0.0.0:8080
+npx reddb --install
 ```
 
-Run
+Run RedDB through `npx` with auto-download when needed:
 
 ```bash
-
-  -p 50051:50051 \
-  -v $(pwd)/data:/data \
-  --name reddb-grpc \
-  reddb red server --grpc --path /data/reddb.rdb --bind 0.0.0.0:50051
+npx reddb --auto-download -- server --http --path ./data/reddb.rdb --bind 127.0.0.1:8080
 ```
 
-
+Wrapper help:
 
 ```bash
-
+npx reddb --sdk-help
+```
 
-
-
-
-
-
+### Build from source
+
+```bash
+cargo build --release --bin red
+./target/release/red version
 ```
 
-
+## Run a server
+
+### HTTP
 
 ```bash
 mkdir -p ./data
 red server --http --path ./data/reddb.rdb --bind 127.0.0.1:8080
 ```
 
-
+Create data:
 
 ```bash
 curl -X POST http://127.0.0.1:8080/collections/hosts/rows \
@@ -162,583 +117,149 @@ curl -X POST http://127.0.0.1:8080/collections/hosts/rows \
     "critical": true
   }
 }'
-
-curl -X POST http://127.0.0.1:8080/collections/graph/nodes \
-  -H 'content-type: application/json' \
-  -d '{
-    "label": "Host",
-    "node_type": "host",
-    "properties": {
-      "ip": "10.0.0.1"
-    }
-  }'
-
-curl -X POST http://127.0.0.1:8080/collections/embeddings/vectors \
-  -H 'content-type: application/json' \
-  -d '{
-    "dense": [0.12, 0.91, 0.44],
-    "content": "host 10.0.0.1 running ssh",
-    "metadata": {
-      "kind": "host_embedding"
-    }
-  }'
 ```
 
-
+Query it:
 
 ```bash
 curl -X POST http://127.0.0.1:8080/query \
   -H 'content-type: application/json' \
-  -d '{
-    "query": "FROM ANY ORDER BY _score DESC LIMIT 10"
-  }'
+  -d '{"query":"SELECT * FROM hosts WHERE critical = true"}'
 ```
 
-
+Health check:
 
 ```bash
 curl -s http://127.0.0.1:8080/health
-curl -s http://127.0.0.1:8080/ready
-curl -s http://127.0.0.1:8080/stats
 ```
 
-###
+### gRPC
 
 ```bash
+mkdir -p ./data
 red server --grpc --path ./data/reddb.rdb --bind 127.0.0.1:50051
 ```
 
-
+## Connect to RedDB
 
-
+There are two main connection paths:
 
-
-
-sudo install -m 0755 target/release/red /usr/local/bin/red
-```
-
-Install and enable a `systemd` unit that auto-starts on boot:
+- HTTP clients call the REST endpoints directly.
+- `red connect` opens a gRPC session to a running RedDB server.
 
-
-sudo ./scripts/install-systemd-service.sh \
-  --binary /usr/local/bin/red \
-  --grpc \
-  --path /var/lib/reddb/data.rdb \
-  --bind 0.0.0.0:50051
-```
-
-That unit is configured with `Restart=always` and `systemctl enable`, so it comes back after reboot.
+### Connect over HTTP
 
 ```bash
-
-  -plaintext \
-  -d '{"query":"FROM ANY ORDER BY _score DESC LIMIT 5"}' \
-  127.0.0.1:50051 \
-  reddb.v1.RedDb/Query
-```
-
-```bash
-grpcurl \
-  -plaintext \
-  -d '{
-    "collection": "hosts",
-    "payloadJson": "{\"fields\":{\"ip\":\"10.0.0.3\",\"os\":\"linux\"}}"
-  }' \
-  127.0.0.1:50051 \
-  reddb.v1.RedDb/CreateRow
-```
-
-### Embedded
-
-Use `RedDB` directly inside your Rust process.
-
-#### 1. Create a database handle
-
-```rust
-use reddb::{RedDB, Value};
-
-fn main() -> Result<(), Box<dyn std::error::Error>> {
-    let db = RedDB::new();
-
-    let host_id = db
-        .row(
-            "hosts",
-            vec![
-                ("ip", Value::Text("10.0.0.1".into())),
-                ("os", Value::Text("linux".into())),
-                ("critical", Value::Boolean(true)),
-            ],
-        )
-        .save()?;
-
-    let node_id = db
-        .node("graph", "Host")
-        .node_type("host")
-        .property("ip", "10.0.0.1")
-        .save()?;
-
-    let vector_id = db
-        .vector("embeddings")
-        .dense(vec![0.12, 0.91, 0.44])
-        .content("host 10.0.0.1 running ssh")
-        .save()?;
+curl -s http://127.0.0.1:8080/health
 
-
-
-}
+curl -X POST http://127.0.0.1:8080/query \
+  -H 'content-type: application/json' \
+  -d '{"query":"FROM ANY ORDER BY _score DESC LIMIT 10"}'
 ```
 
-
-
-In a few lines, the same database stored:
-
-- a table row
-- a graph node
-- a vector embedding
-
-No extra services. No separate graph store. No separate vector engine.
-
----
-
-## A real multi-structure flow
-
-This is the shape `RedDB` is built for:
-
-| Need | Store it as |
-| --- | --- |
-| operational facts | rows |
-| rich object payloads | documents / JSON-like values |
-| raw files or opaque bytes | binary payloads |
-| linked entities | graph nodes and edges |
-| semantic retrieval | vectors and embeddings |
-| search context | metadata |
-| operational durability state | manifests, roots, snapshots and exports |
-
-And this is the point:
-
-- one application can write a row
-- link it to a graph node
-- attach one or more embeddings
-- run structured queries
-- run graph traversal
-- run vector or hybrid retrieval
-- export and snapshot the same dataset
-
-without changing databases halfway through the system design.
-
----
-
-## What is `RedDB`?
-
-`RedDB` is a standalone Rust database engine for multi-structure workloads.
-
-It is not trying to be “just SQL”, “just document”, or “just vector”.
-It is a **multi-structure database core** with one persistence layer and one operational surface for:
-
-- structured rows and scans
-- semi-structured documents
-- graph nodes, edges, traversals and analytics
-- dense vector search, IVF and hybrid retrieval
-- physical metadata, manifests, snapshots and exports
-
-`RedDB` is designed to feel like one coherent system:
-
-- one engine
-- one runtime
-- one operational surface
-- multiple native data shapes
-
----
-
-## Why `RedDB`
-
-Most storage stacks get awkward the moment your application needs more than one structure.
-
-You start with rows.
-Then you need metadata-heavy docs.
-Then graph relationships.
-Then embeddings.
-Then hybrid search.
-Then operational metadata.
-Then exports, scans, health and online maintenance.
-
-`RedDB` is built so all of that belongs to the same system from day one.
-
-- rows, docs, graph and vectors live in one engine
-- one transaction boundary can touch multiple structures
-- one runtime exposes scans, queries, analytics and operations
-- one physical metadata story tracks snapshots, roots, manifests and exports
-
----
-
-## What makes it special
+### Connect with the CLI REPL
 
-
+Start a gRPC server first:
 
-
-
-- graph entities and traversals
-- vector retrieval and hybrid ranking
-
-All of these are first-class.
-
-### Embedded-first, server-capable
-
-Use `RedDB` directly as a Rust crate inside your process, or run it as a server.
-
-- low-latency local access
-- no mandatory network hop
-- clean server surface when you do want remote access
-
-### Operational by default
-
-- health endpoints
-- runtime stats
-- manifests
-- collection roots
-- snapshots
-- exports
-- retention controls
-- maintenance and checkpointing
-
-### Search that crosses structures
-
-- text search
-- vector search
-- IVF search
-- hybrid search
-- graph-aware traversal and analytics
-
-### Analytics built into the graph layer
-
-- shortest path
-- traversals
-- components
-- centrality
-- communities
-- clustering
-- cycles
-- topological sort
-
----
-
-## Current capabilities
-
-### Core engine
-
-- unified entity model
-- persistence for rows, graph entities and vectors
-- paged backend support
-- physical metadata sidecar
-- manifest trail and collection roots
-- snapshots and named exports
-- retention policy for snapshots and exports
-- health diagnostics and runtime stats
-
-### Query/runtime
-
-- embedded runtime with connection pool
-- HTTP server surface
-- gRPC server surface
-- collection scans
-- table query execution in `/query`
-- join execution in `/query`
-- graph query execution in `/query`
-- path query execution in `/query`
-- vector query execution in `/query`
-- hybrid query execution in `/query`
-
-### Vector
-
-- similarity search
-- IVF search
-- k-means-backed IVF training on demand
-- hybrid search
-- text/doc search API
-- vector metadata filtering in runtime query path
-
-### Graph
-
-- neighborhood expansion
-- BFS / DFS traversal
-- shortest path
-- connected / weak / strong components
-- degree / closeness / betweenness / eigenvector centrality
-- PageRank and personalized PageRank
-- HITS
-- Louvain and label propagation
-- clustering coefficient
-- cycle discovery
-- topological sort
-- named graph projections
-- persisted analytics job metadata
-
-### Operations
-
-- `GET /health`
-- `GET /ready`
-- `GET /stats`
-- `GET /catalog`
-- `GET /manifest`
-- `GET /roots`
-- `GET /snapshots`
-- `GET /exports`
-- `GET /indexes`
-- `GET /graph/projections`
-- `GET /graph/jobs`
-- `POST /collections/{name}/rows`
-- `POST /collections/{name}/nodes`
-- `POST /collections/{name}/edges`
-- `POST /collections/{name}/vectors`
-- `POST /collections/{name}/bulk/rows`
-- `POST /collections/{name}/bulk/nodes`
-- `POST /collections/{name}/bulk/edges`
-- `POST /collections/{name}/bulk/vectors`
-- `PATCH /collections/{name}/entities/{id}`
-- `DELETE /collections/{name}/entities/{id}`
-- gRPC `Health`
-- gRPC `Ready`
-- gRPC `Stats`
-- gRPC `Collections`
-- gRPC `Scan`
-- gRPC `Query`
-- gRPC `CreateRow`
-- gRPC `CreateNode`
-- gRPC `CreateEdge`
-- gRPC `CreateVector`
-- gRPC `BulkCreateRows`
-- gRPC `BulkCreateNodes`
-- gRPC `BulkCreateEdges`
-- gRPC `BulkCreateVectors`
-- gRPC `PatchEntity`
-- gRPC `DeleteEntity`
-- gRPC `Checkpoint`
-
----
-
-## Feature matrix
-
-| Area | What `RedDB` already exposes |
-| --- | --- |
-| Storage | rows, graph entities, vectors, paged persistence, metadata sidecar |
-| Query | table, join, graph, path, vector and hybrid execution |
-| Search | text, similarity, IVF, hybrid |
-| Graph | traversals, pathfinding, centrality, communities, clustering, cycles |
-| Operations | health, stats, manifest, roots, snapshots, exports, retention, CRUD, bulk ingest |
-| Runtime | embedded runtime, connection pool, HTTP server, gRPC server |
-
----
-
-## Architecture direction
-
-`RedDB` is being shaped as a layered database engine:
-
-1. **Physical layer**
-   - durable file layout
-   - metadata manifest
-   - snapshots
-   - exports
-   - collection roots
-
-2. **Logical catalog**
-   - collections
-   - schema manifests
-   - index descriptors
-   - graph projections
-   - analytics jobs
-
-3. **Execution layer**
-   - scans
-   - table filters
-   - joins
-   - graph traversal
-   - vector retrieval
-   - hybrid ranking
-
-4. **Operational surface**
-   - embedded runtime
-   - HTTP API
-   - gRPC API
-   - health
-   - stats
-   - maintenance
-   - checkpointing
-
-The physical side is still evolving toward a tighter root-publication model. The repo already persists operational metadata, roots, manifests, snapshots and exports, but the final publication path is still being hardened.
-
----
-
-## Repo status
-
-This repository is already beyond “scaffold”, but it is **not at 1.0 shape yet**.
-
-What is already strong:
-
-- storage extraction is complete
-- runtime/API surface is broad
-- graph and vector capabilities are real
-- operational metadata exists and is queryable
-
-What still needs to harden:
-
-- final physical publication model
-- persistent binary index formats
-- stronger SQL/table planner and executor depth
-- replication and log shipping
-
----
-
-## Philosophy
-
-`RedDB` is built around one principle:
-
-- one storage engine
-- one runtime
-- one operational story
-- multiple native data shapes
-
-Rows, docs, graphs and vectors should feel like different faces of the same database.
-
-That is the bar.
-
----
-
-## Crate
-
-`Cargo.toml`
-
-```toml
-[package]
-name = "reddb"
-version = "0.1.0"
-edition = "2021"
+```bash
+red server --grpc --path ./data/reddb.rdb --bind 127.0.0.1:50051
 ```
 
-
-
-- `query-vector`
-- `query-graph`
-- `query-fulltext`
-- `encryption`
-
-## Publishing to crates.io
-
-### 1) Validate the package before publishing
+Then connect:
 
 ```bash
-
+red connect 127.0.0.1:50051
 ```
 
-
+One-shot query:
 
 ```bash
-
+red connect --query "SELECT * FROM hosts" 127.0.0.1:50051
 ```
 
-
+If auth is enabled:
 
 ```bash
-
-cargo login
-
-# or configure temporarily
-export CARGO_REGISTRY_TOKEN=<SEU_TOKEN>
-
-# Publish the crate
-make publish
+red connect --token "$REDDB_TOKEN" 127.0.0.1:50051
 ```
 
-
+## Embedded like SQLite
 
-
+If you want RedDB inside your process, open the database directly from Rust and work against the same engine without a separate server.
 
-
-the `stable` channel
-- manual `workflow_dispatch` with `channel: stable` and `version: X.Y.Z`
+### Fluent embedded API
 
-
-
-
-
-- Run local validations (check/build/release) before the release.
-- Keep `Cargo.toml`/`Cargo.lock` consistent.
-- Make sure the `CARGO_REGISTRY_TOKEN` secret is present in the GitHub repository.
-
----
+```rust
+use reddb::RedDB;
+use reddb::storage::schema::Value;
 
-
+fn main() -> Result<(), Box<dyn std::error::Error>> {
+    let db = RedDB::open("./data/reddb.rdb")?;
 
-
+    let _user_id = db.row("users", vec![
+        ("name", Value::Text("Alice".into())),
+        ("active", Value::Boolean(true)),
+    ]).save()?;
 
-
-
-
-
-    Building --> Ready : materialization complete
-    Building --> Failed : build error
-    Ready --> Stale : underlying data changed
-    Ready --> Disabled : operator disables
-    Stale --> RequiresRebuild : TTL exceeded
-    Failed --> RequiresRebuild : operator requests rebuild
-    RequiresRebuild --> Building : rebuild triggered
-    Disabled --> Ready : operator re-enables (if materialized)
-    Disabled --> Declared : operator re-enables (if not materialized)
+    let _node_id = db.node("identity", "user")
+        .node_type("account")
+        .property("name", "Alice")
+        .save()?;
 
-
-
-
+    let results = db.query()
+        .collection("users")
+        .where_prop("active", true)
+        .limit(10)
+        .execute()?;
 
-
-    note right of Failed : Needs manual intervention
-```
+    println!("matched {}", results.len());
 
-
-
-
-- **can_rebuild()**: `declared`, `stale`, `failed`, `requires_rebuild`.
-- **needs_attention()**: `failed`, `stale`, `requires_rebuild`.
-
----
-
-## Query Execution Flow
-
-```mermaid
-flowchart TD
-    A[Query Input] --> B{Detect Mode}
-    B -->|SQL| C[Parser]
-    B -->|Gremlin| C
-    B -->|SPARQL| C
-    B -->|Natural| C
-    C --> D[Query AST]
-    D --> E[Planner / Optimizer]
-    E --> F{Source Type}
-    F -->|FROM table| G[Table Scan / Index Seek]
-    F -->|FROM any / universal| H[Entity Scan - all collections]
-    F -->|MATCH graph| I[Graph Pattern Match]
-    F -->|JOIN| J[Nested Loop Join]
-    G --> K[Filter + Sort + Paginate]
-    H --> K
-    I --> K
-    J --> K
-    K --> L[Universal Envelope]
-    L --> M[Response: _entity_id, _collection, _kind, _entity_type, _capabilities, _score]
+    db.flush()?;
+    Ok(())
+}
 ```
 
-
+### Embedded runtime with SQL-style queries
 
-
+If you want embedded execution with the runtime/use-case layer, use `RedDBRuntime`. This is the closest path to using RedDB "like SQLite", but with the project's multi-model runtime.
 
-
-
-
-
-| Advanced RBAC | Not supported | Token-based auth only |
-| Cross-entity transactions | Not supported | Per-collection atomicity |
-| Distributed query planner | Not supported | Local cost-based planner |
-| ACID guarantees | WAL-based | Best-effort durability |
+```rust
+use reddb::application::{CreateRowInput, ExecuteQueryInput};
+use reddb::storage::schema::Value;
+use reddb::{EntityUseCases, QueryUseCases, RedDBOptions, RedDBRuntime};
 
-
+fn main() -> Result<(), Box<dyn std::error::Error>> {
+    let rt = RedDBRuntime::with_options(
+        RedDBOptions::persistent("./data/reddb.rdb")
+    )?;
+
+    EntityUseCases::new(&rt).create_row(CreateRowInput {
+        collection: "users".into(),
+        fields: vec![
+            ("name".into(), Value::Text("Alice".into())),
+            ("age".into(), Value::Integer(30)),
+        ],
+        metadata: vec![],
+        node_links: vec![],
+        vector_links: vec![],
+    })?;
+
+    let result = QueryUseCases::new(&rt).execute(ExecuteQueryInput {
+        query: "SELECT * FROM users".into(),
+    })?;
+
+    println!("rows = {}", result.result.records.len());
+    rt.checkpoint()?;
+    Ok(())
+}
+```
 
-##
+## Documentation
 
-
+- Docs home: [docs/README.md](/home/cyber/Work/FF/reddb/docs/README.md)
+- Installation: [docs/getting-started/installation.md](/home/cyber/Work/FF/reddb/docs/getting-started/installation.md)
+- Quick start: [docs/getting-started/quick-start.md](/home/cyber/Work/FF/reddb/docs/getting-started/quick-start.md)
+- Connection guide: [docs/getting-started/connect.md](/home/cyber/Work/FF/reddb/docs/getting-started/connect.md)
+- Embedded guide: [docs/api/embedded.md](/home/cyber/Work/FF/reddb/docs/api/embedded.md)
+- HTTP API: [docs/api/http.md](/home/cyber/Work/FF/reddb/docs/api/http.md)
+- CLI reference: [docs/api/cli.md](/home/cyber/Work/FF/reddb/docs/api/cli.md)