@rip-lang/db 1.3.5 → 1.3.7
- package/README.md +184 -62
- package/db.rip +9 -6
- package/lib/duckdb.mjs +15 -15
- package/package.json +1 -1
package/README.md
CHANGED
@@ -2,13 +2,12 @@
 
 # Rip DB - @rip-lang/db
 
-> **A lightweight DuckDB HTTP server with the official DuckDB UI built in**
+> **A lightweight DuckDB HTTP server with bulk inserts, an ActiveRecord-style client, and the official DuckDB UI built in**
 
 Rip DB turns any DuckDB database into a full-featured HTTP server — complete
-with the official DuckDB UI for interactive queries,
-
-
-to power the UI with native-speed data transfer.
+with the official DuckDB UI for interactive queries, bulk insert via DuckDB's
+Appender API (~200K rows/sec), and a clean Model interface that picks the
+optimal strategy automatically. Pure Bun FFI, zero npm dependencies for DuckDB.
 
 ## Quick Start
 
@@ -18,13 +17,32 @@ brew install duckdb # macOS (or see duckdb.org for Linux)
 bun add -g @rip-lang/db # Installs rip-db command
 
 # Start the server
-rip-db #
-rip-db mydata.duckdb #
+rip-db                           # Auto-detects *.duckdb file, or :memory:
+rip-db mydata.duckdb             # Explicit file
 rip-db mydata.duckdb --port 8080
 ```
 
+```
+rip-db: DuckDB v1.4.4
+rip-db: rip-db v1.3.6
+rip-db: source mydata.duckdb
+rip-db: server http://localhost:4213
+```
+
 Open **http://localhost:4213** for the official DuckDB UI.
 
+### Source Selection
+
+When no filename is given, `rip-db` looks for exactly one `*.duckdb` file in
+the current directory and uses it automatically. If zero or multiple are found,
+it falls back to `:memory:`. This means `cd my-project && rip-db` just works
+when there's a single database file present.
+
+The `source` line shows the active data source — today that's a local DuckDB
+file or `:memory:`, but the architecture supports any source DuckDB can attach:
+S3 buckets, PostgreSQL, MySQL, SQLite, Parquet files, CSV, and more via
+DuckDB's extension system.
+
 ## What It Does
 
 Rip DB sits between your clients and DuckDB, providing two interfaces:
@@ -47,77 +65,119 @@ Rip DB sits between your clients and DuckDB, providing two interfaces:
 **DuckDB UI** — The official DuckDB notebook interface loads instantly in your
 browser. Rip DB proxies the UI assets from ui.duckdb.org and implements the
 full binary serialization protocol that the UI uses to communicate with DuckDB.
-This includes query execution, SQL tokenization for syntax highlighting, and
-Server-Sent Events for real-time catalog updates.
 
 **JSON API** — Any HTTP client can execute SQL queries and receive JSON
-responses.
-
+responses. Three execution strategies are selected automatically based on the
+request shape — the caller never needs to think about it.
 
 ## Features
 
 - **Official DuckDB UI** — Interactive notebooks, syntax highlighting, data exploration
+- **Bulk insert via Appender API** — ~200K rows/sec, bypasses SQL parsing entirely
+- **Batch prepared statements** — Prepare once, execute N times with different params
+- **ActiveRecord-style Model** — `User.find!`, `User.insert!`, `User.where(...).all!`
+- **Smart dispatch** — `Model.insert!` picks Appender for arrays, prepared statements for singles
 - **Full binary protocol** — Native DuckDB UI serialization implemented in Rip
 - **Pure Bun FFI** — Direct calls to DuckDB's C API using the modern chunk-based interface
 - **Zero npm dependencies for DuckDB** — Uses the system-installed DuckDB library
 - **Parameterized queries** — Prepared statements with type-safe parameter binding
 - **Complete type support** — All DuckDB types handled natively, including UUID, DECIMAL, TIMESTAMP, LIST, STRUCT, MAP
-- **DECIMAL precision preserved** — Exact string representation, never converted to floating point
-- **Timestamps as UTC** — All timestamps returned as JavaScript Date objects (UTC)
-- **Powered by @rip-lang/api** — Fast, lightweight HTTP server framework
 - **Single binary** — One `rip-db` command, one process, one database
 
-##
+## Database Client
 
-
+The real power of Rip DB is its client library. Import it from
+`@rip-lang/db/client` — it talks to a running `rip-db` server over HTTP.
 
-
+```coffee
+import { query, findOne, findAll, Model } from '@rip-lang/db/client'
+```
 
-
+### The Balance: Model vs Raw SQL
 
-
-
--H "Content-Type: application/json" \
--d '{"sql": "SELECT * FROM users WHERE id = $1", "params": [1]}'
-```
+Not every query needs an ORM, and not every query benefits from raw SQL.
+Rip DB gives you both and lets you choose the right tool:
 
-
+**Use the Model** for simple and medium queries — CRUD, where clauses,
+counts, upserts. The Model is shorter, safer, and handles parameterization
+automatically:
 
-
+```coffee
+User = Model 'users'
 
-
-
+# These are cleaner than raw SQL
+user = User.find! 42
+count = User.count!
+active = User.where(active: true).order('name').limit(10).all!
+created = User.insert! { name: 'Alice', email: 'alice@example.com' }
+User.upsert! { email: 'alice@example.com', name: 'Alice' }, on: 'email'
 ```
 
-
+**Use raw SQL** for complex queries — JOINs, GROUP BY, aggregates, subqueries.
+SQL is the most direct, readable expression for these. No ORM improves on it:
 
-```
-
-
-
-
-
-
+```coffee
+users = findAll! """
+  SELECT u.id, u.name, count(o.id) as order_count
+  FROM users u
+  LEFT JOIN orders o ON o.user_id = u.id
+  WHERE u.active = true
+  GROUP BY u.id, u.name
+  ORDER BY order_count DESC
+"""
 ```
 
-
+This isn't a compromise — it's the optimal approach. Simple queries get
+shorter with the Model. Complex queries stay clear with SQL. You never
+fight the abstraction.
 
-
-|----------|--------|-------------|
-| `/health` | GET | Health check |
-| `/tables` | GET | List all tables |
-| `/schema/:table` | GET | Table schema |
+### Bulk Insert (Appender API)
 
-
+Pass an array to `Model.insert!` and it automatically uses DuckDB's Appender
+API — the fastest possible insert path (~200K rows/sec). The Appender bypasses
+SQL parsing entirely, writing directly to DuckDB's columnar storage.
 
-
-
-
+```coffee
+# Single insert — uses prepared statement, returns the row
+user = User.insert! { name: 'Alice', email: 'alice@example.com' }
+
+# Bulk insert — uses Appender API, fastest path
+User.insert! [
+  { name: 'Alice', email: 'alice@example.com' }
+  { name: 'Bob', email: 'bob@example.com' }
+  { name: 'Charlie', email: 'charlie@example.com' }
+]
+```
+
+The caller writes the same `insert!` — the Model detects the array and
+picks the optimal strategy. Column subsets work too; missing columns get
+their default values.
+
+### Bulk Upsert (Multi-Row VALUES)
+
+For upserts (INSERT ... ON CONFLICT), the Appender can't be used. The Model
+builds a multi-row VALUES statement with proper parameterization:
 
 ```coffee
-
+# Single upsert
+Response.upsert! { email: 'alice@example.com', name: 'Alice' }, on: 'email'
 
-
+# Bulk upsert — one SQL statement with N value tuples
+Response.upsert! responses, on: 'email'
+```
+
+### Batch Queries (Prepared Statement Reuse)
+
+Pass an array of param arrays to `query!` and it reuses one prepared
+statement for all executions — one prepare, N bind-and-execute cycles:
+
+```coffee
+# Execute the same UPDATE 3 times with different params
+query! "UPDATE reviews SET completed_at = $1 WHERE id = $2", [
+  [now, 1]
+  [now, 2]
+  [now, 3]
+]
 ```
 
 ### Low-Level Queries
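The bulk-upsert path in this hunk (one INSERT with N value tuples and numbered placeholders) can be sketched as a plain function. `buildBulkUpsert` is a hypothetical illustration, not the package's actual code, and the real client may generate different SQL.

```javascript
// Hypothetical sketch: build one parameterized multi-row upsert statement,
// as the README describes for Model.upsert! with an array.
function buildBulkUpsert(table, rows, conflictKey) {
  const columns = Object.keys(rows[0]);
  const params = [];
  // One "($n, $n+1, ...)" tuple per row, numbering placeholders globally.
  const tuples = rows.map((row) => {
    const placeholders = columns.map((col) => {
      params.push(row[col]);
      return `$${params.length}`;
    });
    return `(${placeholders.join(', ')})`;
  });
  // Non-conflict columns are overwritten from the incoming row on conflict.
  const updates = columns
    .filter((c) => c !== conflictKey)
    .map((c) => `"${c}" = excluded."${c}"`)
    .join(', ');
  const sql =
    `INSERT INTO "${table}" (${columns.map((c) => `"${c}"`).join(', ')}) ` +
    `VALUES ${tuples.join(', ')} ` +
    `ON CONFLICT ("${conflictKey}") DO UPDATE SET ${updates}`;
  return { sql, params };
}
```

Run on two rows keyed on `email`, this yields a single statement containing `VALUES ($1, $2), ($3, $4)` and an `ON CONFLICT ("email") DO UPDATE` clause.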
@@ -177,11 +237,9 @@ User.where('age > $1', [21]).all!
 
 # OR conditions
 User.where(active: true).or(role: 'admin').all!
-User.where(active: true).or('role = $1', ['admin']).all!
 
 # NOT conditions
 User.where(active: true).not(role: 'banned').all!
-User.not(deleted_at: null).all! # WHERE "deleted_at" IS NOT NULL
 ```
 
 #### Group & Having
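A minimal sketch of how chained `where`/`or`/`not` conditions can compose into a parameterized WHERE clause, including the `IS (NOT) NULL` handling the hunk mentions. The helper below is hypothetical; the client's real builder internals are not shown in this diff.

```javascript
// Hypothetical sketch of where/or/not composition with numbered placeholders.
// clauses: [{ kind: 'where' | 'or' | 'not', conditions: object }]
function buildWhere(clauses) {
  const params = [];
  const parts = [];
  for (const { kind, conditions } of clauses) {
    const frag = Object.entries(conditions)
      .map(([col, val]) => {
        if (val === null) {
          // NULL compares via IS (NOT) NULL, never "= $n"
          return kind === 'not' ? `"${col}" IS NOT NULL` : `"${col}" IS NULL`;
        }
        params.push(val);
        const cmp = `"${col}" = $${params.length}`;
        return kind === 'not' ? `NOT (${cmp})` : cmp;
      })
      .join(' AND ');
    // First clause stands alone; later ones join with OR or AND.
    parts.push(parts.length === 0 ? frag : `${kind === 'or' ? 'OR' : 'AND'} ${frag}`);
  }
  return { where: `WHERE ${parts.join(' ')}`, params };
}
```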
@@ -196,13 +254,16 @@ User.group('role').having('count(*) > $1', [5]).select('role, count(*) as n').al
 All mutations return the affected row(s) via `RETURNING *`.
 
 ```coffee
-# Insert — returns the new record
+# Insert single — returns the new record
 user = User.insert! { first_name: 'Alice', email: 'alice@example.com' }
 
+# Insert bulk — uses Appender API (~200K rows/sec)
+User.insert! rows
+
 # Update by id — returns the updated record
 user = User.update! 42, { email: 'newemail@example.com' }
 
-# Upsert — insert or update on conflict
+# Upsert — insert or update on conflict (single or bulk)
 user = User.upsert! { email: 'alice@example.com', name: 'Alice' }, on: 'email'
 
 # Destroy by id — returns the deleted record
@@ -238,6 +299,20 @@ Archive = Model 'orders', 'archive_db'
 order = Archive.find! 99 # SELECT * FROM "archive_db"."orders" WHERE id = $1
 ```
 
+### Execution Strategy Summary
+
+The client picks the optimal execution path automatically:
+
+| Caller writes | Strategy | Speed |
+|---------------|----------|-------|
+| `Model.insert!(object)` | Prepared statement | Fast |
+| `Model.insert!(array)` | DuckDB Appender API | ~200K rows/sec |
+| `Model.upsert!(object)` | Prepared statement | Fast |
+| `Model.upsert!(array)` | Multi-row VALUES SQL | Fast (batch) |
+| `query!(sql, params)` | Prepared statement | Fast |
+| `query!(sql, [params...])` | Prepared stmt reuse | Fast (batch) |
+| `findOne!(sql)` / `findAll!(sql)` | Direct execution | Fast |
+
 ### Query Builder Reference
 
 | Method | Description |
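The dispatch in the strategy table can be illustrated with a small payload builder: an object becomes a parameterized INSERT, an array becomes the `{ table, columns, rows }` shape that the server routes to the Appender. `insertPayload` is a hypothetical helper, shown only to make the array-versus-object rule concrete.

```javascript
// Hypothetical sketch: turn Model.insert! input into the HTTP payload shape
// the server dispatches on, per the strategy table.
function insertPayload(table, data) {
  if (Array.isArray(data)) {
    // Bulk path: column names from the first row, missing values as null.
    const columns = Object.keys(data[0]);
    return {
      table,
      columns,
      rows: data.map((row) => columns.map((col) => row[col] ?? null)),
    };
  }
  // Single path: one parameterized INSERT returning the created row.
  const columns = Object.keys(data);
  const placeholders = columns.map((_, i) => `$${i + 1}`).join(', ');
  return {
    sql:
      `INSERT INTO "${table}" (${columns.map((c) => `"${c}"`).join(', ')}) ` +
      `VALUES (${placeholders}) RETURNING *`,
    params: columns.map((c) => data[c]),
  };
}
```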
@@ -274,12 +349,60 @@ order = Archive.find! 99 # SELECT * FROM "archive_db"."orders" WHERE id = $1
 | `Model.order(...)` | Start a chain with ORDER BY |
 | `Model.group(...)` | Start a chain with GROUP BY |
 | `Model.limit(n)` | Start a chain with LIMIT |
-| `Model.insert!(data)` | Insert
+| `Model.insert!(data)` | Insert single object or bulk array |
 | `Model.update!(id, data)` | Update by id and return row |
-| `Model.upsert!(data, on:)` | Insert or update on conflict |
+| `Model.upsert!(data, on:)` | Insert or update on conflict (single or bulk) |
 | `Model.destroy!(id)` | Delete by id and return row |
 | `Model.query!(sql, params)` | Raw parameterized query |
 
+## JSON API
+
+For programmatic access from any HTTP client.
+
+### POST /sql
+
+The `/sql` endpoint accepts four shapes and dispatches automatically:
+
+```bash
+# Standard query
+curl -X POST http://localhost:4213/sql \
+  -H "Content-Type: application/json" \
+  -d '{"sql": "SELECT * FROM users WHERE id = $1", "params": [1]}'
+
+# Bulk insert (Appender API)
+curl -X POST http://localhost:4213/sql \
+  -H "Content-Type: application/json" \
+  -d '{"table": "users", "columns": ["name", "email"], "rows": [["Alice", "a@b.com"], ["Bob", "b@b.com"]]}'
+
+# Batch prepared statement
+curl -X POST http://localhost:4213/sql \
+  -H "Content-Type: application/json" \
+  -d '{"sql": "INSERT INTO t (a, b) VALUES ($1, $2)", "params": [[1, "x"], [2, "y"]]}'
+```
+
+| Shape | Dispatches to |
+|-------|---------------|
+| `{ sql }` | Raw execution |
+| `{ sql, params: [...] }` | Prepared statement |
+| `{ sql, params: [[...], ...] }` | Batch prepared (reuse stmt) |
+| `{ table, columns, rows }` | Appender API (fastest insert) |
+
+### POST /
+
+Execute raw SQL (body is the query):
+
+```bash
+curl -X POST http://localhost:4213/ -d "SELECT 42 as answer"
+```
+
+### Other Endpoints
+
+| Endpoint | Method | Description |
+|----------|--------|-------------|
+| `/health` | GET | Health check |
+| `/tables` | GET | List all tables |
+| `/schema/:table` | GET | Table schema |
+
 ## DuckDB UI
 
 The official DuckDB UI is available at the root URL. It provides:
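The four-shape dispatch documented for `POST /sql` can be expressed as a small classifier. This is a sketch of the routing rule as stated in the shape table, not the server's actual implementation.

```javascript
// Hypothetical sketch: classify a POST /sql body into the four strategies
// from the shape table.
function classify(body) {
  if (body.table && body.columns && body.rows) return 'appender';
  if (body.sql && Array.isArray(body.params)) {
    // An array of arrays means batch prepared-statement reuse.
    return Array.isArray(body.params[0]) ? 'batch-prepared' : 'prepared';
  }
  if (body.sql) return 'raw';
  throw new Error('unrecognized request shape');
}
```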
@@ -299,19 +422,18 @@ Rip DB is built from three files:
 
 | File | Lines | Role |
 |------|-------|------|
-| `db.rip` | ~
-| `lib/duckdb.mjs` | ~
+| `db.rip` | ~430 | HTTP server — routes, middleware, UI proxy, bulk dispatch |
+| `lib/duckdb.mjs` | ~960 | FFI driver — chunk-based API, Appender, batch prepared |
 | `lib/duckdb-binary.rip` | ~550 | Binary serializer — DuckDB UI protocol |
+| `client.rip` | ~320 | HTTP client — Model factory, query builder, bulk insert |
 
 The FFI driver uses DuckDB's modern chunk-based API (`duckdb_fetch_chunk`,
 `duckdb_vector_get_data`) to read query results directly from columnar memory.
-
-
-
-
-
-UI extension uses. It handles all DuckDB types including native 16-byte UUID
-serialization, uint64-aligned validity bitmaps, and proper timestamp encoding.
+For bulk inserts, it uses the Appender API (`duckdb_appender_create`,
+`duckdb_append_*`) which writes directly to DuckDB's storage engine, bypassing
+SQL parsing for maximum throughput. Prepared statement reuse
+(`duckdb_prepare` once, `duckdb_execute_prepared` N times) handles batch
+operations efficiently.
 
 ## Requirements
 
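The Appender flow the architecture notes describe (create, append each value, end the row, flush, destroy) can be sketched against a stub of the FFI surface. The function names mirror DuckDB's C API; the stub only records call order, since the real bindings go through Bun's `dlopen`.

```javascript
// Sketch of the Appender call sequence. "ffi" stands in for the dlopen'd
// symbol table; here a Proxy records which functions get called, in order.
function appendRows(ffi, appender, rows) {
  for (const row of rows) {
    for (const value of row) {
      if (value === null) ffi.duckdb_append_null(appender);
      else if (typeof value === 'number') ffi.duckdb_append_double(appender, value);
      else ffi.duckdb_append_varchar(appender, String(value));
    }
    ffi.duckdb_appender_end_row(appender); // commit one row
  }
  ffi.duckdb_appender_flush(appender);   // push buffered rows to storage
  ffi.duckdb_appender_destroy(appender); // release the appender
}

// Stub that records call order instead of touching a real database.
const calls = [];
const stub = new Proxy({}, {
  get: (_, name) => (...args) => { calls.push(name); return 0; },
});
appendRows(stub, 'fake-appender', [['Alice', 30], [null, 31]]);
```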
package/db.rip
CHANGED
@@ -89,7 +89,10 @@ if '--version' in args or '-v' in args
 process.exit(0)
 
 # Database and port configuration
-path = process.env.DB_PATH or args.find((a) -> not a.startsWith('-')) or
+path = process.env.DB_PATH or args.find((a) -> not a.startsWith('-')) or do ->
+  glob = new Bun.Glob("*.duckdb")
+  files = Array.from(glob.scanSync('.'))
+  if files.length is 1 then files[0] else ':memory:'
 
 # Support both --port=N and --port N
 portArg = do ->
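The auto-detection added in this hunk can be mirrored in plain JavaScript. `pickSource` is a hypothetical helper that takes a directory listing instead of calling `Bun.Glob`, so it shows the selection rule rather than the actual API used.

```javascript
// Sketch of the source-selection rule: exactly one *.duckdb file wins;
// zero or many fall back to the in-memory database.
function pickSource(fileNames) {
  const candidates = fileNames.filter((f) => f.endsWith('.duckdb'));
  return candidates.length === 1 ? candidates[0] : ':memory:';
}
```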
@@ -102,8 +105,9 @@ port = parseInt(process.env.DB_PORT or portArg) or 4213
 # Open database and create persistent connection
 db = open(path)
 conn = db.connect()
-console.log "rip-db:
-console.log "rip-db:
+console.log "rip-db: DuckDB #{duckdbVersion()}"
+console.log "rip-db: rip-db v#{VERSION}"
+console.log "rip-db: source #{path}"
 
 # ==============================================================================
 # Helpers
@@ -415,7 +419,6 @@ get '/*' ->
 # Start Server
 # ==============================================================================
 
-
+console.log "rip-db: server http://localhost:#{port}"
 
-
-console.log "rip-db: DuckDB UI available at http://localhost:#{port}/"
+start port: port, silent: true
package/lib/duckdb.mjs
CHANGED
@@ -92,25 +92,25 @@ const lib = dlopen(libPath, {
   duckdb_clear_bindings: { args: ['ptr'], returns: 'i32' },
 
   // Appender API
-  duckdb_appender_create:
-  duckdb_appender_error:
-  duckdb_appender_flush:
-  duckdb_appender_close:
-  duckdb_appender_destroy:
-  duckdb_appender_end_row:
-  duckdb_append_bool:
-  duckdb_append_int32:
-  duckdb_append_int64:
-  duckdb_append_double:
-  duckdb_append_varchar:
-  duckdb_append_null:
+  duckdb_appender_create: { args: ['ptr', 'ptr', 'ptr', 'ptr'], returns: 'i32' },
+  duckdb_appender_error: { args: ['ptr'], returns: 'ptr' },
+  duckdb_appender_flush: { args: ['ptr'], returns: 'i32' },
+  duckdb_appender_close: { args: ['ptr'], returns: 'i32' },
+  duckdb_appender_destroy: { args: ['ptr'], returns: 'i32' },
+  duckdb_appender_end_row: { args: ['ptr'], returns: 'i32' },
+  duckdb_append_bool: { args: ['ptr', 'bool'], returns: 'i32' },
+  duckdb_append_int32: { args: ['ptr', 'i32'], returns: 'i32' },
+  duckdb_append_int64: { args: ['ptr', 'i64'], returns: 'i32' },
+  duckdb_append_double: { args: ['ptr', 'f64'], returns: 'i32' },
+  duckdb_append_varchar: { args: ['ptr', 'ptr'], returns: 'i32' },
+  duckdb_append_null: { args: ['ptr'], returns: 'i32' },
   duckdb_appender_add_column: { args: ['ptr', 'ptr'], returns: 'i32' },
   duckdb_appender_clear_columns: { args: ['ptr'], returns: 'i32' },
 
   // Result inspection
-  duckdb_column_count:
-  duckdb_column_name:
-  duckdb_column_type:
+  duckdb_column_count: { args: ['ptr'], returns: 'u64' },
+  duckdb_column_name: { args: ['ptr', 'u64'], returns: 'ptr' },
+  duckdb_column_type: { args: ['ptr', 'u64'], returns: 'i32' },
   duckdb_result_error: { args: ['ptr'], returns: 'ptr' },
 
   // Modern chunk-based API (non-deprecated)