hypercore 10.0.0-alpha.9 → 10.2.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +154 -36
- package/index.js +579 -213
- package/lib/bitfield.js +110 -42
- package/lib/block-encryption.js +3 -2
- package/lib/block-store.js +10 -5
- package/lib/caps.js +32 -0
- package/lib/core.js +189 -47
- package/lib/download.js +22 -0
- package/lib/errors.js +50 -0
- package/lib/info.js +24 -0
- package/lib/merkle-tree.js +182 -106
- package/lib/messages.js +249 -168
- package/lib/oplog.js +6 -5
- package/lib/remote-bitfield.js +28 -7
- package/lib/replicator.js +1415 -624
- package/lib/streams.js +56 -0
- package/package.json +23 -16
- package/.github/workflows/test-node.yml +0 -23
- package/CHANGELOG.md +0 -37
- package/UPGRADE.md +0 -9
- package/examples/announce.js +0 -19
- package/examples/basic.js +0 -10
- package/examples/http.js +0 -123
- package/examples/lookup.js +0 -20
- package/lib/extensions.js +0 -76
- package/lib/protocol.js +0 -524
- package/lib/random-iterator.js +0 -46
- package/test/basic.js +0 -90
- package/test/bitfield.js +0 -71
- package/test/core.js +0 -290
- package/test/encodings.js +0 -18
- package/test/encryption.js +0 -121
- package/test/extension.js +0 -71
- package/test/helpers/index.js +0 -23
- package/test/merkle-tree.js +0 -518
- package/test/mutex.js +0 -137
- package/test/oplog.js +0 -399
- package/test/preload.js +0 -72
- package/test/replicate.js +0 -372
- package/test/sessions.js +0 -173
- package/test/user-data.js +0 -47
package/README.md CHANGED
@@ -1,21 +1,24 @@
-# Hypercore
+# Hypercore
 
-
+Hypercore is a secure, distributed append-only log.
 
-
+Built for sharing large datasets and streams of real time data
 
-
-* Fork recovery
-* Promises
-* Simplications and performance/scaling improvements
-* Internal oplog design
+## Features
 
-
+* **Sparse replication.** Only download the data you are interested in.
+* **Realtime.** Get the latest updates to the log fast and securely.
+* **Performant.** Uses a simple flat file structure to maximize I/O performance.
+* **Secure.** Uses signed merkle trees to verify log integrity in real time.
+* **Modular.** Hypercore aims to do one thing and one thing well - distributing a stream of data.
+
+Note that the latest release is Hypercore 10, which adds support for truncate and many other things.
+Version 10 is not compatible with earlier versions (9 and earlier), but is considered LTS, meaning the storage format and wire protocol is forward compatible with future versions.
 
-Install
+## Install
 
 ```sh
-npm install hypercore
+npm install hypercore
 ```
 
 ## API
@@ -33,13 +36,13 @@ const core = new Hypercore('./directory') // store data in ./directory
 
 Alternatively you can pass a function instead that is called with every filename Hypercore needs to function and return your own [abstract-random-access](https://github.com/random-access-storage/abstract-random-access) instance that is used to store the data.
 
 ``` js
-const
+const RAM = require('random-access-memory')
 const core = new Hypercore((filename) => {
   // filename will be one of: data, bitfield, tree, signatures, key, secret_key
   // the data file will contain all your data concatenated.
 
   // just store all files in ram by returning a random-access-memory instance
-  return
+  return new RAM()
 })
 ```
 
@@ -62,35 +65,126 @@ Note that `tree`, `data`, and `bitfield` are normally heavily sparse files.
 {
   createIfMissing: true, // create a new Hypercore key pair if none was present in storage
   overwrite: false, // overwrite any old Hypercore that might already exist
+  sparse: true, // enable sparse mode, counting unavailable blocks towards core.length and core.byteLength
   valueEncoding: 'json' | 'utf-8' | 'binary', // defaults to binary
+  encodeBatch: batch => { ... }, // optionally apply an encoding to complete batches
   keyPair: kp, // optionally pass the public key and secret key as a key pair
-  encryptionKey: k // optionally pass an encryption key to enable block encryption
+  encryptionKey: k, // optionally pass an encryption key to enable block encryption
+  onwait: () => {} // hook that is called if gets are waiting for download
 }
 ```
 
 You can also set valueEncoding to any [abstract-encoding](https://github.com/mafintosh/abstract-encoding) or [compact-encoding](https://github.com/compact-encoding) instance.
 
-
+valueEncodings will be applied to individual blocks, even if you append batches. If you want to control encoding at the batch level, you can use the `encodeBatch` option, which is a function that takes a batch and returns a binary-encoded batch. If you provide a custom valueEncoding, it will not be applied prior to `encodeBatch`.
+
+#### `const { length, byteLength } = await core.append(block)`
 
 Append a block of data (or an array of blocks) to the core.
-Returns the
+Returns the new length and byte length of the core.
+
+``` js
+// simply call append with a new block of data
+await core.append(Buffer.from('I am a block of data'))
+
+// pass an array to append multiple blocks as a batch
+await core.append([Buffer.from('batch block 1'), Buffer.from('batch block 2')])
+```
 
 #### `const block = await core.get(index, [options])`
 
 Get a block of data.
 If the data is not available locally this method will prioritize and wait for the data to be downloaded.
 
-
+``` js
+// get block #42
+const block = await core.get(42)
+
+// get block #43, but only wait 5s
+const blockIfFast = await core.get(43, { timeout: 5000 })
+
+// get block #44, but only if we have it locally
+const blockLocal = await core.get(44, { wait: false })
+```
+
+Additional options include
 
 ``` js
 {
-  wait: true, // wait for
+  wait: true, // wait for block to be downloaded
   onwait: () => {}, // hook that is called if the get is waiting for download
   timeout: 0, // wait at max some milliseconds (0 means no timeout)
   valueEncoding: 'json' | 'utf-8' | 'binary' // defaults to the core's valueEncoding
 }
 ```
 
+#### `const updated = await core.update()`
+
+Wait for the core to try and find a signed update to its length.
+Does not download any data from peers except for a proof of the new core length.
+
+``` js
+const updated = await core.update()
+
+console.log('core was updated?', updated, 'length is', core.length)
+```
+
+#### `const [index, relativeOffset] = await core.seek(byteOffset)`
+
+Seek to a byte offset.
+
+Returns `[index, relativeOffset]`, where `index` is the data block the byteOffset is contained in and `relativeOffset` is
+the relative byte offset in the data block.
+
+``` js
+await core.append([Buffer.from('abc'), Buffer.from('d'), Buffer.from('efg')])
+
+const first = await core.seek(1) // returns [0, 1]
+const second = await core.seek(3) // returns [1, 0]
+const third = await core.seek(5) // returns [2, 1]
+```
+
+#### `const stream = core.createReadStream([options])`
+
+Make a read stream to read a range of data out at once.
+
+``` js
+// read the full core
+const fullStream = core.createReadStream()
+
+// read from block 10-15
+const partialStream = core.createReadStream({ start: 10, end: 15 })
+
+// pipe the stream somewhere using the .pipe method on Node.js or consume it as
+// an async iterator
+
+for await (const data of fullStream) {
+  console.log('data:', data)
+}
+```
+
+Additional options include:
+
+``` js
+{
+  start: 0,
+  end: core.length,
+  live: false,
+  snapshot: true // auto set end to core.length on open or update it on every read
+}
+```
+
+#### `await core.clear(start, [end])`
+
+Clear stored blocks between `start` and `end`, reclaiming storage when possible.
+
+``` js
+await core.clear(4) // clear block 4 from your local cache
+await core.clear(0, 10) // clear blocks 0-10 from your local cache
+```
+
+The core will also gossip to peers it is connected to that it no longer has these blocks.
+
 #### `await core.truncate(newLength, [forkId])`
 
 Truncate the core to a smaller length.
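The `seek` example in the hunk above resolves a byte offset to `[index, relativeOffset]`. The underlying arithmetic can be sketched as a cumulative sum over block byte lengths; this is a standalone illustration only, not hypercore's actual implementation (which seeks over the Merkle tree without needing all blocks locally):

```js
// Resolve a byte offset to [index, relativeOffset] over an in-memory
// list of blocks (hypothetical helper, for illustration only).
function seek (blocks, byteOffset) {
  let start = 0
  for (let index = 0; index < blocks.length; index++) {
    const end = start + blocks[index].length
    if (byteOffset < end) return [index, byteOffset - start]
    start = end
  }
  return null // offset is past the end of the core
}

const blocks = [Buffer.from('abc'), Buffer.from('d'), Buffer.from('efg')]
console.log(seek(blocks, 1)) // [ 0, 1 ]
console.log(seek(blocks, 3)) // [ 1, 0 ]
console.log(seek(blocks, 5)) // [ 2, 1 ]
```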
@@ -98,6 +192,10 @@ Truncate the core to a smaller length.
 By default this will update the fork id of the core to `+ 1`, but you can set the fork id you prefer with the option.
 Note that the fork id should be monotonically increasing.
 
+#### `const hash = await core.treeHash([length])`
+
+Get the Merkle Tree hash of the core at a given length, defaulting to the current length of the core.
+
 #### `const range = core.download([range])`
 
 Download a range of data.
@@ -105,7 +203,7 @@ Download a range of data.
 You can await when the range has been fully downloaded by doing:
 
 ```js
-await range.
+await range.done()
 ```
 
 A range can have the following properties:
@@ -130,7 +228,7 @@ core.download({ start: 0, end: -1 })
 To download a discrete range of blocks, pass a list of indices.
 
 ```js
-core.download({ blocks: [4, 9, 7] })
+core.download({ blocks: [4, 9, 7] })
 ```
 
 To cancel downloading a range simply destroy the range instance.
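The range forms shown above ({ start, end }, end: -1 for "everything", and a discrete blocks list) can be summarized with a small helper. The helper name and behavior are illustrative assumptions, not part of hypercore's API; in particular, a live range (`end: -1`) keeps downloading as the core grows, while this sketch only expands up to the current length:

```js
// Expand a download-range spec into the concrete block indices it currently
// covers, given the core length (hypothetical helper, illustration only).
function blocksInRange (range, length) {
  if (range.blocks) return [...range.blocks] // discrete list of indices
  const start = range.start || 0
  const end = range.end === undefined || range.end === -1 ? length : range.end
  const out = []
  for (let i = start; i < Math.min(end, length); i++) out.push(i)
  return out
}

console.log(blocksInRange({ start: 0, end: -1 }, 4)) // [ 0, 1, 2, 3 ]
console.log(blocksInRange({ blocks: [4, 9, 7] }, 100)) // [ 4, 9, 7 ]
```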
@@ -140,21 +238,22 @@ To cancel downloading a range simply destroy the range instance.
 range.destroy()
 ```
 
-#### `const
-
-Seek to a byte offset.
-
-Returns `(index, relativeOffset)`, where `index` is the data block the byteOffset is contained in and `relativeOffset` is
-the relative byte offset in the data block.
+#### `const info = await core.info()`
 
-
+Get information about this core, such as its total size in bytes.
 
-
-Does not download any data from peers except for a proof of the new core length.
+The object will look like this:
 
-```
-
-
+```js
+Info {
+  key: Buffer(...),
+  discoveryKey: Buffer(...),
+  length: 18,
+  contiguousLength: 16,
+  byteLength: 742,
+  fork: 0,
+  padding: 8
+}
 ```
 
 #### `await core.close()`
@@ -190,12 +289,24 @@ Can we read from this core? After closing the core this will be false.
 
 Populated after `ready` has been emitted. Will be `false` before the event.
 
+#### `core.id`
+
+String containing the id (z-base-32 of the public key) identifying this core.
+
+Populated after `ready` has been emitted. Will be `null` before the event.
+
 #### `core.key`
 
 Buffer containing the public key identifying this core.
 
 Populated after `ready` has been emitted. Will be `null` before the event.
 
+#### `core.keyPair`
+
+Object containing buffers of the core's public and secret keys.
+
+Populated after `ready` has been emitted. Will be `null` before the event.
+
 #### `core.discoveryKey`
 
 Buffer containing a key derived from the core's public key.
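`core.id` above is described as the z-base-32 encoding of the 32-byte public key. A minimal standalone encoder sketch, assuming the standard z-base-32 alphabet (hypercore's own id encoding lives in a support module and may differ in details):

```js
// z-base-32 alphabet ("human-oriented base-32")
const ALPHABET = 'ybndrfg8ejkmcpqxot1uwisza345h759'

function zbase32Encode (buf) {
  let out = ''
  let acc = 0
  let bits = 0
  for (const byte of buf) {
    acc = (acc << 8) | byte
    bits += 8
    while (bits >= 5) {
      out += ALPHABET[(acc >>> (bits - 5)) & 31] // emit the top 5 pending bits
      bits -= 5
    }
    acc &= (1 << bits) - 1 // keep only the unconsumed low bits
  }
  if (bits > 0) out += ALPHABET[(acc << (5 - bits)) & 31] // pad the final group
  return out
}

// A 32-byte key (256 bits) always encodes to 52 characters
console.log(zbase32Encode(Buffer.alloc(32)).length) // 52
```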
@@ -209,13 +320,13 @@ Buffer containing the optional block encryption key of this core. Will be `null`
 
 #### `core.length`
 
-How many blocks of data are available on this core?
+How many blocks of data are available on this core? If `sparse: false`, this will equal `core.contiguousLength`.
 
 Populated after `ready` has been emitted. Will be `0` before the event.
 
-#### `core.
+#### `core.contiguousLength`
 
-How
+How many blocks are contiguously available starting from the first block of this core?
 
 Populated after `ready` has been emitted. Will be `0` before the event.
 
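The distinction above between `core.length` and `core.contiguousLength` can be sketched over a have-bitfield. The plain-array representation is an illustrative assumption; hypercore stores this in a compact on-disk bitfield:

```js
// Given which blocks we hold locally, contiguousLength counts from
// block 0 until the first gap (illustration only).
function contiguousLength (have) {
  let n = 0
  while (n < have.length && have[n]) n++
  return n
}

const have = [true, true, true, false, true] // blocks 0-2 and 4 are local
console.log(contiguousLength(have)) // 3 (block 3 is the first gap)
```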
@@ -254,10 +365,17 @@ const socket = net.connect(...)
 socket.pipe(localCore.replicate(true)).pipe(socket)
 ```
 
+#### `const done = core.findingPeers()`
+
+Create a hook that tells Hypercore you are finding peers for this core in the background. Call `done` when your current discovery iteration is done.
+If you're using Hyperswarm, you'd normally call this after a `swarm.flush()` finishes.
+
+This allows `core.update` to wait for either the `findingPeers` hook to finish or one peer to appear before deciding whether it should wait for a merkle tree update before returning.
+
 #### `core.on('append')`
 
 Emitted when the core has been appended to (i.e. has a new length / byteLength), either locally or remotely.
 
-#### `core.on('truncate')`
+#### `core.on('truncate', ancestors, forkId)`
 
 Emitted when the core has been truncated, either locally or remotely.