cborg 3.0.0 → 4.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -44,17 +44,6 @@ jobs:
  - name: Build
  run: |
  npm run build
- - name: Install plugins
- run: |
- npm install \
- @semantic-release/commit-analyzer \
- conventional-changelog-conventionalcommits \
- @semantic-release/release-notes-generator \
- @semantic-release/npm \
- @semantic-release/github \
- @semantic-release/git \
- @semantic-release/changelog \
- --no-progress --no-package-lock --no-save
  - name: Release
  env:
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
package/CHANGELOG.md CHANGED
@@ -1,3 +1,27 @@
+ ## [4.0.1](https://github.com/rvagg/cborg/compare/v4.0.0...v4.0.1) (2023-09-12)
+
+ ## [4.0.0](https://github.com/rvagg/cborg/compare/v3.0.0...v4.0.0) (2023-09-12)
+
+
+ ### ⚠ BREAKING CHANGES
+
+ * decodeFirst(), & require DecodeTokenizer to implement pos()
+
+ ### Features
+
+ * add decodeFirst to json decoder ([a1bd349](https://github.com/rvagg/cborg/commit/a1bd349a6eb382737716ec1fc6c1061e710358d1))
+ * decodeFirst(), & require DecodeTokenizer to implement pos() ([1b35871](https://github.com/rvagg/cborg/commit/1b358712d51bfd941382b6baab333722dafdada5))
+
+
+ ### Bug Fixes
+
+ * make semantic-release work again ([7ab7a4e](https://github.com/rvagg/cborg/commit/7ab7a4eac21bfd3c98bf8836c3dccca9c56a41e4))
+
+
+ ### Trivial Changes
+
+ * recompile types ([06133db](https://github.com/rvagg/cborg/commit/06133dbf73025b3e1af8065702d7d65ef03be407))
+
  ## [2.0.5](https://github.com/rvagg/cborg/compare/v2.0.4...v2.0.5) (2023-08-25)


package/README.md CHANGED
@@ -27,6 +27,7 @@
  * [Options](#options)
  * [`decode(data[, options])`](#decodedata-options)
  * [Options](#options-1)
+ * [`decodeFirst(data[, options])`](#decodefirstdata-options)
  * [`encodedLength(data[, options])`](#encodedlengthdata-options)
  * [Type encoders](#type-encoders)
  * [Tag decoders](#tag-decoders)
@@ -196,10 +197,6 @@ $ cborg json2hex '["a", "b", 1, "😀"]'
  import { encode } from 'cborg'
  ```

- ```js
- const { encode } = require('cborg')
- ```
-
  Encode a JavaScript object and return a `Uint8Array` with the CBOR byte representation.

  * Objects containing circular references will be rejected.
@@ -226,10 +223,6 @@ Encode a JavaScript object and return a `Uint8Array` with the CBOR byte represen
  import { decode } from 'cborg'
  ```

- ```js
- const { decode } = require('cborg')
- ```
-
  Decode valid CBOR bytes from a `Uint8Array` (or `Buffer`) and return a JavaScript object.

  * Integers (major 0 and 1) that are outside of the safe integer range will be converted to a `BigInt`.
@@ -252,14 +245,47 @@ Decode valid CBOR bytes from a `Uint8Array` (or `Buffer`) and return a JavaScrip
  * `tags` (array): a mapping of tag number to tag decoder function. By default no tags are supported. See [Tag decoders](#tag-decoders).
  * `tokenizer` (object): an object with two methods, `next()` which returns a `Token` and `done()` which returns a `boolean`. Can be used to implement custom input decoding. See the source code for examples.

- ### `encodedLength(data[, options])`
+ ### `decodeFirst(data[, options])`

  ```js
- import { encodedLength } from 'cborg/length'
+ import { decodeFirst } from 'cborg'
  ```

+ Decode valid CBOR bytes from a `Uint8Array` (or `Buffer`) and return a JavaScript object ***and*** the remainder of the original byte array that was not consumed by the decode. This can be useful for decoding concatenated CBOR objects, which is often used in streaming modes of CBOR.
+
+ The returned remainder `Uint8Array` is a subarray of the original input `Uint8Array` and will share the same underlying buffer. This means that there are no new allocations performed by this function and it is as efficient to use as `decode` but without the additional byte-consumption check.
+
+ The options for `decodeFirst` are the same as for [`decode()`](#decodedata-options), but the return type is different and `decodeFirst()` will not error if a decode operation doesn't consume all of the input bytes.
+
+ The return value is an array with two elements:
+
+ * `value`: the decoded JavaScript object
+ * `remainder`: a `Uint8Array` containing the bytes that were not consumed by the decode operation
+
  ```js
- const { encodedLength } = require('cborg/length')
+ import { decodeFirst } from 'cborg'
+
+ let buf = Buffer.from('a16474686973a26269736543424f522163796179f564746869736269736543424f522163796179f5', 'hex')
+ while (buf.length) {
+ const [value, remainder] = decodeFirst(buf)
+ console.log('decoded:', value)
+ buf = remainder
+ }
+ ```
+
+ ```
+ decoded: { this: { is: 'CBOR!', yay: true } }
+ decoded: this
+ decoded: is
+ decoded: CBOR!
+ decoded: yay
+ decoded: true
+ ```
+
+ ### `encodedLength(data[, options])`
+
+ ```js
+ import { encodedLength } from 'cborg/length'
  ```

  Calculate the byte length of the given data when encoded as CBOR with the options provided. The options are the same as for an `encode()` call. This calculation will be accurate if the same options are used as when performing a normal `encode()`. Some encode options can change the encoding output length.
@@ -400,7 +426,7 @@ There are a number of forms where an object will not round-trip precisely, if th

  **cborg** can also encode and decode JSON using the same pipeline and many of the same settings. For most (but not all) cases it will be faster to use `JSON.parse()` and `JSON.stringify()`, however **cborg** provides much more control over the process to handle determinism and be more restrictive in allowable forms. It also operates natively with Uint8Arrays rather than strings which may also offer some minor efficiency or usability gains in some circumstances.

- Use `import { encode, decode } from 'cborg/json'` or `const { encode, decode } = require('cborg/json')` to access the JSON handling encoder and decoder.
+ Use `import { encode, decode, decodeFirst } from 'cborg/json'` to access the JSON handling encoder and decoder.

  Many of the same encode and decode options available for CBOR can be used to manage JSON handling. These include strictness requirements for decode and custom tag encoders for encode. Tag encoders can't create new tags as there are no tags in JSON, but they can replace JavaScript object forms with custom JSON forms (e.g. convert a `Uint8Array` to a valid JSON form rather than having the encoder throw an error). The inverse is also possible, turning specific JSON forms into JavaScript forms, by using a custom tokenizer on decode.

package/cborg.js CHANGED
@@ -1,5 +1,5 @@
  import { encode } from './lib/encode.js'
- import { decode } from './lib/decode.js'
+ import { decode, decodeFirst } from './lib/decode.js'
  import { Token, Type } from './lib/token.js'

  /**
@@ -13,6 +13,7 @@ import { Token, Type } from './lib/token.js'

  export {
  decode,
+ decodeFirst,
  encode,
  Token,
  Type
package/interface.ts CHANGED
@@ -26,7 +26,8 @@ export type QuickEncodeToken = (token: Token) => Uint8Array | undefined

  export interface DecodeTokenizer {
  done(): boolean,
- next(): Token
+ next(): Token,
+ pos(): number,
  }

  export type TagDecoder = (inner: any) => any
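
This `pos()` addition is the breaking change called out in the changelog: any custom `DecodeTokenizer` must now report how many input bytes it has consumed, which is what lets `decodeFirst` compute the unconsumed remainder. A standalone sketch of the shape of that contract, with the interface inlined here rather than imported from cborg, and a hypothetical one-byte-per-token `next()` standing in for real token decoding:

```javascript
// Toy class matching the DecodeTokenizer shape (done/next/pos). The real
// contract lives in cborg's interface.ts; next() here is purely illustrative.
class ByteTokenizer {
  constructor (data) {
    this.data = data
    this._pos = 0
  }

  pos () { // required as of cborg 4: bytes consumed so far
    return this._pos
  }

  done () {
    return this._pos >= this.data.length
  }

  next () { // toy "decoder": consumes exactly one byte per token
    return { value: this.data[this._pos++] }
  }
}

const t = new ByteTokenizer(new Uint8Array([0x01, 0x02, 0x03]))
t.next()
console.log(t.pos(), t.done()) // 1 false
```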
package/lib/byte-utils.js CHANGED
@@ -275,101 +275,44 @@ export function compare (b1, b2) {
  return 0
  }

- // The below code is mostly taken from https://github.com/feross/buffer
- // Licensed MIT. Copyright (c) Feross Aboukhadijeh
+ // The below code is taken from https://github.com/google/closure-library/blob/8598d87242af59aac233270742c8984e2b2bdbe0/closure/goog/crypt/crypt.js#L117-L143
+ // Licensed Apache-2.0.

  /**
- * @param {string} string
- * @param {number} [units]
+ * @param {string} str
  * @returns {number[]}
  */
- function utf8ToBytes (string, units = Infinity) {
- let codePoint
- const length = string.length
- let leadSurrogate = null
- const bytes = []
-
- for (let i = 0; i < length; ++i) {
- codePoint = string.charCodeAt(i)
-
- // is surrogate component
- if (codePoint > 0xd7ff && codePoint < 0xe000) {
- // last char was a lead
- if (!leadSurrogate) {
- // no lead yet
- /* c8 ignore next 9 */
- if (codePoint > 0xdbff) {
- // unexpected trail
- if ((units -= 3) > -1) bytes.push(0xef, 0xbf, 0xbd)
- continue
- } else if (i + 1 === length) {
- // unpaired lead
- if ((units -= 3) > -1) bytes.push(0xef, 0xbf, 0xbd)
- continue
- }
-
- // valid lead
- leadSurrogate = codePoint
-
- continue
- }
-
- // 2 leads in a row
- /* c8 ignore next 5 */
- if (codePoint < 0xdc00) {
- if ((units -= 3) > -1) bytes.push(0xef, 0xbf, 0xbd)
- leadSurrogate = codePoint
- continue
- }
-
- // valid surrogate pair
- codePoint = (leadSurrogate - 0xd800 << 10 | codePoint - 0xdc00) + 0x10000
- /* c8 ignore next 4 */
- } else if (leadSurrogate) {
- // valid bmp char, but last char was a lead
- if ((units -= 3) > -1) bytes.push(0xef, 0xbf, 0xbd)
- }
-
- leadSurrogate = null
-
- // encode utf8
- if (codePoint < 0x80) {
- /* c8 ignore next 1 */
- if ((units -= 1) < 0) break
- bytes.push(codePoint)
- } else if (codePoint < 0x800) {
- /* c8 ignore next 1 */
- if ((units -= 2) < 0) break
- bytes.push(
- codePoint >> 0x6 | 0xc0,
- codePoint & 0x3f | 0x80
- )
- } else if (codePoint < 0x10000) {
- /* c8 ignore next 1 */
- if ((units -= 3) < 0) break
- bytes.push(
- codePoint >> 0xc | 0xe0,
- codePoint >> 0x6 & 0x3f | 0x80,
- codePoint & 0x3f | 0x80
- )
- /* c8 ignore next 9 */
- } else if (codePoint < 0x110000) {
- if ((units -= 4) < 0) break
- bytes.push(
- codePoint >> 0x12 | 0xf0,
- codePoint >> 0xc & 0x3f | 0x80,
- codePoint >> 0x6 & 0x3f | 0x80,
- codePoint & 0x3f | 0x80
- )
+ function utf8ToBytes (str) {
+ const out = []
+ let p = 0
+ for (let i = 0; i < str.length; i++) {
+ let c = str.charCodeAt(i)
+ if (c < 128) {
+ out[p++] = c
+ } else if (c < 2048) {
+ out[p++] = (c >> 6) | 192
+ out[p++] = (c & 63) | 128
+ } else if (
+ ((c & 0xFC00) === 0xD800) && (i + 1) < str.length &&
+ ((str.charCodeAt(i + 1) & 0xFC00) === 0xDC00)) {
+ // Surrogate Pair
+ c = 0x10000 + ((c & 0x03FF) << 10) + (str.charCodeAt(++i) & 0x03FF)
+ out[p++] = (c >> 18) | 240
+ out[p++] = ((c >> 12) & 63) | 128
+ out[p++] = ((c >> 6) & 63) | 128
+ out[p++] = (c & 63) | 128
  } else {
- /* c8 ignore next 2 */
- throw new Error('Invalid code point')
+ out[p++] = (c >> 12) | 224
+ out[p++] = ((c >> 6) & 63) | 128
+ out[p++] = (c & 63) | 128
  }
  }
-
- return bytes
+ return out
  }

+ // The below code is mostly taken from https://github.com/feross/buffer
+ // Licensed MIT. Copyright (c) Feross Aboukhadijeh
+
  /**
  * @param {Uint8Array} buf
  * @param {number} offset
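
The replacement `utf8ToBytes` added above is self-contained, so it can be lifted straight out of the hunk and sanity-checked against `TextEncoder` (the two agree for well-formed strings; lone surrogates take the non-pair branch rather than being replaced with U+FFFD):

```javascript
// The closure-library-derived UTF-8 encoder now used by cborg's byte-utils,
// copied verbatim from the diff above so it can be run standalone.
function utf8ToBytes (str) {
  const out = []
  let p = 0
  for (let i = 0; i < str.length; i++) {
    let c = str.charCodeAt(i)
    if (c < 128) {
      out[p++] = c
    } else if (c < 2048) {
      out[p++] = (c >> 6) | 192
      out[p++] = (c & 63) | 128
    } else if (
      ((c & 0xFC00) === 0xD800) && (i + 1) < str.length &&
      ((str.charCodeAt(i + 1) & 0xFC00) === 0xDC00)) {
      // surrogate pair -> one 4-byte sequence
      c = 0x10000 + ((c & 0x03FF) << 10) + (str.charCodeAt(++i) & 0x03FF)
      out[p++] = (c >> 18) | 240
      out[p++] = ((c >> 12) & 63) | 128
      out[p++] = ((c >> 6) & 63) | 128
      out[p++] = (c & 63) | 128
    } else {
      out[p++] = (c >> 12) | 224
      out[p++] = ((c >> 6) & 63) | 128
      out[p++] = (c & 63) | 128
    }
  }
  return out
}

// matches TextEncoder for well-formed input, e.g. 'a' + U+1F600:
console.log(utf8ToBytes('a😀')) // [ 97, 240, 159, 152, 128 ]
```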
package/lib/decode.js CHANGED
@@ -24,17 +24,21 @@ class Tokeniser {
  * @param {DecodeOptions} options
  */
  constructor (data, options = {}) {
- this.pos = 0
+ this._pos = 0
  this.data = data
  this.options = options
  }

+ pos () {
+ return this._pos
+ }
+
  done () {
- return this.pos >= this.data.length
+ return this._pos >= this.data.length
  }

  next () {
- const byt = this.data[this.pos]
+ const byt = this.data[this._pos]
  let token = quick[byt]
  if (token === undefined) {
  const decoder = jump[byt]
@@ -44,10 +48,10 @@ class Tokeniser {
  throw new Error(`${decodeErrPrefix} no decoder for major type ${byt >>> 5} (byte 0x${byt.toString(16).padStart(2, '0')})`)
  }
  const minor = byt & 31
- token = decoder(this.data, this.pos, minor, this.options)
+ token = decoder(this.data, this._pos, minor, this.options)
  }
  // @ts-ignore we get to assume encodedLength is set (crossing fingers slightly)
- this.pos += token.encodedLength
+ this._pos += token.encodedLength
  return token
  }
  }
@@ -171,9 +175,9 @@ function tokensToObject (tokeniser, options) {
  /**
  * @param {Uint8Array} data
  * @param {DecodeOptions} [options]
- * @returns {any}
+ * @returns {[any, Uint8Array]}
  */
- function decode (data, options) {
+ function decodeFirst (data, options) {
  if (!(data instanceof Uint8Array)) {
  throw new Error(`${decodeErrPrefix} data to decode must be a Uint8Array`)
  }
@@ -186,10 +190,20 @@ function decode (data, options) {
  if (decoded === BREAK) {
  throw new Error(`${decodeErrPrefix} got unexpected break`)
  }
- if (!tokeniser.done()) {
+ return [decoded, data.subarray(tokeniser.pos())]
+ }
+
+ /**
+ * @param {Uint8Array} data
+ * @param {DecodeOptions} [options]
+ * @returns {any}
+ */
+ function decode (data, options) {
+ const [decoded, remainder] = decodeFirst(data, options)
+ if (remainder.length > 0) {
  throw new Error(`${decodeErrPrefix} too many terminals, data makes no sense`)
  }
  return decoded
  }

- export { Tokeniser, tokensToObject, decode }
+ export { Tokeniser, tokensToObject, decode, decodeFirst }
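
This refactor makes `decode` a thin wrapper over `decodeFirst`, and the remainder comes from `data.subarray(tokeniser.pos())`. Since `subarray` returns a view over the same `ArrayBuffer` rather than a copy, the README's zero-allocation claim for the remainder follows directly. A standalone illustration of that behaviour, where `consumed = 2` is a hypothetical stand-in for a real tokenizer position:

```javascript
// Stand-in for what decodeFirst does with the tokenizer position: the
// remainder is a view over the original bytes, not a copy.
const data = new Uint8Array([0xa0, 0x01, 0x02, 0x03])
const consumed = 2 // hypothetical tokeniser.pos() after one decode
const remainder = data.subarray(consumed)

console.log(remainder.length) // 2
console.log(remainder.buffer === data.buffer) // true: shared buffer
console.log(remainder.byteOffset) // 2
```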
@@ -1,4 +1,4 @@
- import { decode as _decode } from '../decode.js'
+ import { decode as _decode, decodeFirst as _decodeFirst } from '../decode.js'
  import { Token, Type } from '../token.js'
  import { decodeCodePointsArray } from '../byte-utils.js'
  import { decodeErrPrefix } from '../common.js'
@@ -17,7 +17,7 @@ class Tokenizer {
  * @param {DecodeOptions} options
  */
  constructor (data, options = {}) {
- this.pos = 0
+ this._pos = 0
  this.data = data
  this.options = options
  /** @type {string[]} */
@@ -25,18 +25,22 @@ class Tokenizer {
  this.lastToken = ''
  }

+ pos () {
+ return this._pos
+ }
+
  /**
  * @returns {boolean}
  */
  done () {
- return this.pos >= this.data.length
+ return this._pos >= this.data.length
  }

  /**
  * @returns {number}
  */
  ch () {
- return this.data[this.pos]
+ return this.data[this._pos]
  }

  /**
@@ -50,7 +54,7 @@ class Tokenizer {
  let c = this.ch()
  // @ts-ignore
  while (c === 32 /* ' ' */ || c === 9 /* '\t' */ || c === 13 /* '\r' */ || c === 10 /* '\n' */) {
- c = this.data[++this.pos]
+ c = this.data[++this._pos]
  }
  }

@@ -58,18 +62,18 @@ class Tokenizer {
  * @param {number[]} str
  */
  expect (str) {
- if (this.data.length - this.pos < str.length) {
- throw new Error(`${decodeErrPrefix} unexpected end of input at position ${this.pos}`)
+ if (this.data.length - this._pos < str.length) {
+ throw new Error(`${decodeErrPrefix} unexpected end of input at position ${this._pos}`)
  }
  for (let i = 0; i < str.length; i++) {
- if (this.data[this.pos++] !== str[i]) {
- throw new Error(`${decodeErrPrefix} unexpected token at position ${this.pos}, expected to find '${String.fromCharCode(...str)}'`)
+ if (this.data[this._pos++] !== str[i]) {
+ throw new Error(`${decodeErrPrefix} unexpected token at position ${this._pos}, expected to find '${String.fromCharCode(...str)}'`)
  }
  }
  }

  parseNumber () {
- const startPos = this.pos
+ const startPos = this._pos
  let negative = false
  let float = false

@@ -80,7 +84,7 @@ class Tokenizer {
  while (!this.done()) {
  const ch = this.ch()
  if (chars.includes(ch)) {
- this.pos++
+ this._pos++
  } else {
  break
  }
@@ -90,47 +94,47 @@ class Tokenizer {
  // lead
  if (this.ch() === 45) { // '-'
  negative = true
- this.pos++
+ this._pos++
  }
  if (this.ch() === 48) { // '0'
- this.pos++
+ this._pos++
  if (this.ch() === 46) { // '.'
- this.pos++
+ this._pos++
  float = true
  } else {
- return new Token(Type.uint, 0, this.pos - startPos)
+ return new Token(Type.uint, 0, this._pos - startPos)
  }
  }
  swallow([48, 49, 50, 51, 52, 53, 54, 55, 56, 57]) // DIGIT
- if (negative && this.pos === startPos + 1) {
- throw new Error(`${decodeErrPrefix} unexpected token at position ${this.pos}`)
+ if (negative && this._pos === startPos + 1) {
+ throw new Error(`${decodeErrPrefix} unexpected token at position ${this._pos}`)
  }
  if (!this.done() && this.ch() === 46) { // '.'
  if (float) {
- throw new Error(`${decodeErrPrefix} unexpected token at position ${this.pos}`)
+ throw new Error(`${decodeErrPrefix} unexpected token at position ${this._pos}`)
  }
  float = true
- this.pos++
+ this._pos++
  swallow([48, 49, 50, 51, 52, 53, 54, 55, 56, 57]) // DIGIT
  }
  if (!this.done() && (this.ch() === 101 || this.ch() === 69)) { // '[eE]'
  float = true
- this.pos++
+ this._pos++
  if (!this.done() && (this.ch() === 43 || this.ch() === 45)) { // '+', '-'
- this.pos++
+ this._pos++
  }
  swallow([48, 49, 50, 51, 52, 53, 54, 55, 56, 57]) // DIGIT
  }
  // @ts-ignore
- const numStr = String.fromCharCode.apply(null, this.data.subarray(startPos, this.pos))
+ const numStr = String.fromCharCode.apply(null, this.data.subarray(startPos, this._pos))
  const num = parseFloat(numStr)
  if (float) {
- return new Token(Type.float, num, this.pos - startPos)
+ return new Token(Type.float, num, this._pos - startPos)
  }
  if (this.options.allowBigInt !== true || Number.isSafeInteger(num)) {
- return new Token(num >= 0 ? Type.uint : Type.negint, num, this.pos - startPos)
+ return new Token(num >= 0 ? Type.uint : Type.negint, num, this._pos - startPos)
  }
- return new Token(num >= 0 ? Type.uint : Type.negint, BigInt(numStr), this.pos - startPos)
+ return new Token(num >= 0 ? Type.uint : Type.negint, BigInt(numStr), this._pos - startPos)
  }

  /**
@@ -140,31 +144,31 @@ class Tokenizer {
  /* c8 ignore next 4 */
  if (this.ch() !== 34) { // '"'
  // this would be a programming error
- throw new Error(`${decodeErrPrefix} unexpected character at position ${this.pos}; this shouldn't happen`)
+ throw new Error(`${decodeErrPrefix} unexpected character at position ${this._pos}; this shouldn't happen`)
  }
- this.pos++
+ this._pos++

  // check for simple fast-path, all printable ascii, no escapes
  // >0x10000 elements may fail fn.apply() (http://stackoverflow.com/a/22747272/680742)
- for (let i = this.pos, l = 0; i < this.data.length && l < 0x10000; i++, l++) {
+ for (let i = this._pos, l = 0; i < this.data.length && l < 0x10000; i++, l++) {
  const ch = this.data[i]
  if (ch === 92 || ch < 32 || ch >= 128) { // '\', ' ', control-chars or non-trivial
  break
  }
  if (ch === 34) { // '"'
  // @ts-ignore
- const str = String.fromCharCode.apply(null, this.data.subarray(this.pos, i))
- this.pos = i + 1
+ const str = String.fromCharCode.apply(null, this.data.subarray(this._pos, i))
+ this._pos = i + 1
  return new Token(Type.string, str, l)
  }
  }

- const startPos = this.pos
+ const startPos = this._pos
  const chars = []

  const readu4 = () => {
- if (this.pos + 4 >= this.data.length) {
- throw new Error(`${decodeErrPrefix} unexpected end of unicode escape sequence at position ${this.pos}`)
+ if (this._pos + 4 >= this.data.length) {
+ throw new Error(`${decodeErrPrefix} unexpected end of unicode escape sequence at position ${this._pos}`)
  }
  let u4 = 0
  for (let i = 0; i < 4; i++) {
@@ -176,10 +180,10 @@ class Tokenizer {
  } else if (ch >= 65 && ch <= 70) { // 'A' && 'F'
  ch = ch - 65 + 10
  } else {
- throw new Error(`${decodeErrPrefix} unexpected unicode escape character at position ${this.pos}`)
+ throw new Error(`${decodeErrPrefix} unexpected unicode escape character at position ${this._pos}`)
  }
  u4 = u4 * 16 + ch
- this.pos++
+ this._pos++
  }
  return u4
  }
@@ -191,8 +195,8 @@ class Tokenizer {
  /* c8 ignore next 1 */
  let bytesPerSequence = (firstByte > 0xef) ? 4 : (firstByte > 0xdf) ? 3 : (firstByte > 0xbf) ? 2 : 1

- if (this.pos + bytesPerSequence > this.data.length) {
- throw new Error(`${decodeErrPrefix} unexpected unicode sequence at position ${this.pos}`)
+ if (this._pos + bytesPerSequence > this.data.length) {
+ throw new Error(`${decodeErrPrefix} unexpected unicode sequence at position ${this._pos}`)
  }

  let secondByte, thirdByte, fourthByte, tempCodePoint
@@ -206,7 +210,7 @@ class Tokenizer {
  }
  break
  case 2:
- secondByte = this.data[this.pos + 1]
+ secondByte = this.data[this._pos + 1]
  if ((secondByte & 0xc0) === 0x80) {
  tempCodePoint = (firstByte & 0x1f) << 0x6 | (secondByte & 0x3f)
  if (tempCodePoint > 0x7f) {
@@ -215,8 +219,8 @@ class Tokenizer {
  }
  break
  case 3:
- secondByte = this.data[this.pos + 1]
- thirdByte = this.data[this.pos + 2]
+ secondByte = this.data[this._pos + 1]
+ thirdByte = this.data[this._pos + 2]
  if ((secondByte & 0xc0) === 0x80 && (thirdByte & 0xc0) === 0x80) {
  tempCodePoint = (firstByte & 0xf) << 0xc | (secondByte & 0x3f) << 0x6 | (thirdByte & 0x3f)
  /* c8 ignore next 3 */
@@ -226,9 +230,9 @@ class Tokenizer {
  }
  break
  case 4:
- secondByte = this.data[this.pos + 1]
- thirdByte = this.data[this.pos + 2]
- fourthByte = this.data[this.pos + 3]
+ secondByte = this.data[this._pos + 1]
+ thirdByte = this.data[this._pos + 2]
+ fourthByte = this.data[this._pos + 3]
  if ((secondByte & 0xc0) === 0x80 && (thirdByte & 0xc0) === 0x80 && (fourthByte & 0xc0) === 0x80) {
  tempCodePoint = (firstByte & 0xf) << 0x12 | (secondByte & 0x3f) << 0xc | (thirdByte & 0x3f) << 0x6 | (fourthByte & 0x3f)
  if (tempCodePoint > 0xffff && tempCodePoint < 0x110000) {
@@ -251,7 +255,7 @@ class Tokenizer {
  }

  chars.push(codePoint)
- this.pos += bytesPerSequence
+ this._pos += bytesPerSequence
  }

  // TODO: could take the approach of a quick first scan for special chars like encoding/json/decode.go#unquoteBytes
@@ -261,12 +265,12 @@ class Tokenizer {
  let ch1
  switch (ch) {
  case 92: // '\'
- this.pos++
+ this._pos++
  if (this.done()) {
- throw new Error(`${decodeErrPrefix} unexpected string termination at position ${this.pos}`)
+ throw new Error(`${decodeErrPrefix} unexpected string termination at position ${this._pos}`)
  }
  ch1 = this.ch()
- this.pos++
+ this._pos++
  switch (ch1) {
  case 34: // '"'
  case 39: // '\''
@@ -293,25 +297,25 @@ class Tokenizer {
  chars.push(readu4())
  break
  default:
- throw new Error(`${decodeErrPrefix} unexpected string escape character at position ${this.pos}`)
+ throw new Error(`${decodeErrPrefix} unexpected string escape character at position ${this._pos}`)
  }
  break
  case 34: // '"'
- this.pos++
- return new Token(Type.string, decodeCodePointsArray(chars), this.pos - startPos)
+ this._pos++
+ return new Token(Type.string, decodeCodePointsArray(chars), this._pos - startPos)
  default:
  if (ch < 32) { // ' '
- throw new Error(`${decodeErrPrefix} invalid control character at position ${this.pos}`)
+ throw new Error(`${decodeErrPrefix} invalid control character at position ${this._pos}`)
  } else if (ch < 0x80) {
  chars.push(ch)
- this.pos++
+ this._pos++
  } else {
  readUtf8Char()
  }
  }
  }

- throw new Error(`${decodeErrPrefix} unexpected end of string at position ${this.pos}`)
+ throw new Error(`${decodeErrPrefix} unexpected end of string at position ${this._pos}`)
  }

  /**
@@ -321,11 +325,11 @@ class Tokenizer {
  switch (this.ch()) {
  case 123: // '{'
  this.modeStack.push('obj-start')
- this.pos++
+ this._pos++
  return new Token(Type.map, Infinity, 1)
  case 91: // '['
  this.modeStack.push('array-start')
- this.pos++
+ this._pos++
  return new Token(Type.array, Infinity, 1)
  case 34: { // '"'
  return this.parseString()
@@ -352,7 +356,7 @@ class Tokenizer {
  case 57: // '9'
  return this.parseNumber()
  default:
- throw new Error(`${decodeErrPrefix} unexpected character at position ${this.pos}`)
+ throw new Error(`${decodeErrPrefix} unexpected character at position ${this._pos}`)
  }
  }

@@ -368,14 +372,14 @@ class Tokenizer {
  case 'array-value': {
  this.modeStack.pop()
  if (this.ch() === 93) { // ']'
- this.pos++
+ this._pos++
  this.skipWhitespace()
  return new Token(Type.break, undefined, 1)
  }
  if (this.ch() !== 44) { // ','
- throw new Error(`${decodeErrPrefix} unexpected character at position ${this.pos}, was expecting array delimiter but found '${String.fromCharCode(this.ch())}'`)
+ throw new Error(`${decodeErrPrefix} unexpected character at position ${this._pos}, was expecting array delimiter but found '${String.fromCharCode(this.ch())}'`)
  }
- this.pos++
+ this._pos++
  this.modeStack.push('array-value')
  this.skipWhitespace()
  return this.parseValue()
@@ -383,7 +387,7 @@ class Tokenizer {
  case 'array-start': {
  this.modeStack.pop()
  if (this.ch() === 93) { // ']'
- this.pos++
+ this._pos++
  this.skipWhitespace()
  return new Token(Type.break, undefined, 1)
  }
@@ -395,28 +399,28 @@ class Tokenizer {
  case 'obj-key':
  if (this.ch() === 125) { // '}'
  this.modeStack.pop()
- this.pos++
+ this._pos++
  this.skipWhitespace()
  return new Token(Type.break, undefined, 1)
  }
  if (this.ch() !== 44) { // ','
- throw new Error(`${decodeErrPrefix} unexpected character at position ${this.pos}, was expecting object delimiter but found '${String.fromCharCode(this.ch())}'`)
+ throw new Error(`${decodeErrPrefix} unexpected character at position ${this._pos}, was expecting object delimiter but found '${String.fromCharCode(this.ch())}'`)
  }
- this.pos++
+ this._pos++
  this.skipWhitespace()
  case 'obj-start': { // eslint-disable-line no-fallthrough
  this.modeStack.pop()
  if (this.ch() === 125) { // '}'
- this.pos++
+ this._pos++
  this.skipWhitespace()
  return new Token(Type.break, undefined, 1)
  }
  const token = this.parseString()
  this.skipWhitespace()
  if (this.ch() !== 58) { // ':'
- throw new Error(`${decodeErrPrefix} unexpected character at position ${this.pos}, was expecting key/value delimiter ':' but found '${String.fromCharCode(this.ch())}'`)
+ throw new Error(`${decodeErrPrefix} unexpected character at position ${this._pos}, was expecting key/value delimiter ':' but found '${String.fromCharCode(this.ch())}'`)
  }
- this.pos++
+ this._pos++
  this.modeStack.push('obj-value')
  return token
  }
@@ -428,7 +432,7 @@ class Tokenizer {
  }
  /* c8 ignore next 2 */
  default:
- throw new Error(`${decodeErrPrefix} unexpected parse state at position ${this.pos}; this shouldn't happen`)
+ throw new Error(`${decodeErrPrefix} unexpected parse state at position ${this._pos}; this shouldn't happen`)
  }
  }
  }
@@ -443,4 +447,14 @@ function decode (data, options) {
  return _decode(data, options)
  }

- export { decode, Tokenizer }
+ /**
+ * @param {Uint8Array} data
+ * @param {DecodeOptions} [options]
+ * @returns {[any, Uint8Array]}
+ */
+ function decodeFirst (data, options) {
+ options = Object.assign({ tokenizer: new Tokenizer(data, options) }, options)
+ return _decodeFirst(data, options)
+ }
+
+ export { decode, decodeFirst, Tokenizer }
package/lib/json/json.js CHANGED
@@ -1,4 +1,4 @@
  import { encode } from './encode.js'
- import { decode, Tokenizer } from './decode.js'
+ import { decode, decodeFirst, Tokenizer } from './decode.js'

- export { encode, decode, Tokenizer }
+ export { encode, decode, decodeFirst, Tokenizer }
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "cborg",
- "version": "3.0.0",
+ "version": "4.0.1",
  "description": "Fast CBOR with a focus on strictness",
  "main": "cborg.js",
  "type": "module",
@@ -20,7 +20,7 @@
  },
  "repository": {
  "type": "git",
- "url": "https://github.com/rvagg/cborg.git"
+ "url": "git@github.com:rvagg/cborg.git"
  },
  "keywords": [
  "cbor"
@@ -28,14 +28,22 @@
  "author": "Rod <rod@vagg.org> (http://r.va.gg/)",
  "license": "Apache-2.0",
  "devDependencies": {
+ "@semantic-release/changelog": "^6.0.3",
+ "@semantic-release/commit-analyzer": "^10.0.4",
+ "@semantic-release/git": "^10.0.1",
+ "@semantic-release/github": "^9.0.5",
+ "@semantic-release/npm": "^10.0.5",
+ "@semantic-release/release-notes-generator": "^11.0.7",
  "@types/chai": "^4.3.6",
  "@types/mocha": "^10.0.1",
  "@types/node": "^20.5.9",
  "c8": "^8.0.1",
  "chai": "^4.3.8",
+ "conventional-changelog-conventionalcommits": "^6.1.0",
  "ipld-garbage": "^5.0.0",
  "mocha": "^10.2.0",
  "polendina": "^3.2.1",
+ "semantic-release": "^21.1.1",
  "standard": "^17.1.0",
  "typescript": "^5.2.2"
  },
package/test/test-json.js CHANGED
@@ -1,7 +1,7 @@
  /* eslint-env mocha,es2020 */

  import { assert } from 'chai'
- import { decode, encode } from 'cborg/json'
+ import { decode, decodeFirst, encode } from 'cborg/json'

  const toBytes = (str) => new TextEncoder().encode(str)

@@ -188,4 +188,32 @@ describe('json basics', () => {
  assert.deepStrictEqual(decode(toBytes('{"foo":1,"foo":2}')), { foo: 2 })
  assert.throws(() => decode(toBytes('{"foo":1,"foo":2}'), { rejectDuplicateMapKeys: true }), /CBOR decode error: found repeat map key "foo"/)
  })
+
+ it('decodeFirst', () => {
+ /*
+ const encoded = new TextDecoder().decode(encode(obj, sorting === false ? { mapSorter: null } : undefined))
+ const json = JSON.stringify(obj)
+ assert.strictEqual(encoded, json)
+ const decoded = decode(toBytes(JSON.stringify(obj)))
+ assert.deepStrictEqual(decoded, obj)
+ */
+ let buf = new TextEncoder().encode('{"foo":1,"bar":2}1"ping"2null3[1,2,3]""[[],[],{"boop":true}]')
+ const expected = [
+ { foo: 1, bar: 2 },
+ 1,
+ 'ping',
+ 2,
+ null,
+ 3,
+ [1, 2, 3],
+ '',
+ [[], [], { boop: true }]
+ ]
+ for (const exp of expected) {
+ const [obj, rest] = decodeFirst(buf)
+ assert.deepStrictEqual(exp, obj)
+ buf = rest
+ }
+ assert.strictEqual(buf.length, 0)
+ })
  })
@@ -0,0 +1,111 @@
+ /* eslint-env mocha */
+
+ import chai from 'chai'
+ import { garbage } from 'ipld-garbage'
+ import { uintBoundaries } from '../lib/0uint.js'
+ import { encode, decodeFirst } from '../cborg.js'
+ import { dateDecoder, dateEncoder } from './common.js'
+
+ const { assert } = chai
+
+ function verifyPartial (objects, options) {
+   const encoded = []
+   const lengths = []
+   let length = 0
+   for (const object of Array.isArray(objects) ? objects : [objects]) {
+     encoded.push(encode(object, options))
+     const l = encoded[encoded.length - 1].length
+     length += l
+     lengths.push(l)
+   }
+   const buf = new Uint8Array(length)
+   let offset = 0
+   for (const enc of encoded) {
+     buf.set(enc, offset)
+     offset += enc.length
+   }
+   let partial = buf
+   for (let ii = 0; ii < encoded.length; ii++) {
+     const [decoded, remainder] = decodeFirst(partial, options)
+     assert.deepEqual(decoded, objects[ii])
+     assert.equal(remainder.length, partial.length - lengths[ii])
+     partial = remainder
+   }
+   assert.equal(partial.length, 0) // just to be sure
+ }
+
+ describe('decodePartial', () => {
+   describe('multiple', () => {
+     it('simple', () => {
+       verifyPartial([1, 2, 3])
+       verifyPartial([8.940696716308594e-08, 1])
+       verifyPartial([
+         [],
+         [1, 2, { obj: 1.5 }, null, new Uint8Array([1, 2, 3])],
+         { boop: true, bop: 1 },
+         'nope',
+         { o: 'nope' },
+         new Uint8Array([1, 2, 3]),
+         true,
+         null
+       ])
+     })
+
+     it('options', () => {
+       const m = new Map()
+       m.set('a', 1)
+       m.set('b', null)
+       m.set('c', 'grok')
+       m.set('date', new Date('2013-03-21T20:04:00Z'))
+       verifyPartial(
+         [8.940696716308594e-08, 1, null, 'grok', new Date('2013-03-21T20:04:00Z'),
+           [8.940696716308594e-08, 1, null, 'grok', new Date('2013-03-21T20:04:00Z')],
+           m
+         ],
+         { typeEncoders: { Date: dateEncoder }, useMaps: true, tags: { 0: dateDecoder } })
+     })
+
+     it('garbage', function () {
+       this.timeout(10000)
+       for (let ii = 0; ii < 10; ii++) {
+         const gbg = []
+         for (let ii = 0; ii < 100; ii++) {
+           gbg.push(garbage(1 << 6, { weights: { CID: 0 } }))
+         }
+         verifyPartial(gbg)
+       }
+     })
+   })
+
+   it('singular', () => {
+     it('int boundaries', () => {
+       for (let ii = 0; ii < 4; ii++) {
+         verifyPartial(uintBoundaries[ii])
+         verifyPartial(uintBoundaries[ii] - 1)
+         verifyPartial(uintBoundaries[ii] + 1)
+         verifyPartial(-1 * uintBoundaries[ii])
+         verifyPartial(-1 * uintBoundaries[ii] - 1)
+         verifyPartial(-1 * uintBoundaries[ii] + 1)
+       }
+     })
+
+     it('tags', () => {
+       verifyPartial({ date: new Date('2013-03-21T20:04:00Z') }, { typeEncoders: { Date: dateEncoder } })
+     })
+
+     it('floats', () => {
+       verifyPartial(0.5)
+       verifyPartial(0.5, { float64: true })
+       verifyPartial(8.940696716308594e-08)
+       verifyPartial(8.940696716308594e-08, { float64: true })
+     })
+
+     it('small garbage', function () {
+       this.timeout(10000)
+       for (let ii = 0; ii < 1000; ii++) {
+         const gbg = garbage(1 << 6, { weights: { CID: 0 } })
+         verifyPartial(gbg)
+       }
+     })
+   })
+ })
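The `verifyPartial` helper in this new test file builds one contiguous buffer from several independently encoded values before peeling them back off with `decodeFirst`. Its concatenation step — a length pass, a single allocation, then `Uint8Array.prototype.set` at a running offset — is worth isolating (`concatBytes` is our name for it, not part of cborg):

```javascript
// Concatenate byte chunks the way verifyPartial does: total the lengths,
// allocate once, then copy each chunk in at a running offset.
function concatBytes (chunks) {
  let length = 0
  for (const c of chunks) length += c.length
  const buf = new Uint8Array(length)
  let offset = 0
  for (const c of chunks) {
    buf.set(c, offset)   // copies chunk c starting at byte `offset`
    offset += c.length
  }
  return buf
}

const joined = concatBytes([new Uint8Array([1, 2]), new Uint8Array([]), new Uint8Array([3])])
console.log(Array.from(joined)) // → [ 1, 2, 3 ]
```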
package/types/cborg.d.ts CHANGED
@@ -15,8 +15,9 @@ export type DecodeOptions = import('./interface').DecodeOptions;
   */
  export type EncodeOptions = import('./interface').EncodeOptions;
  import { decode } from './lib/decode.js';
+ import { decodeFirst } from './lib/decode.js';
  import { encode } from './lib/encode.js';
  import { Token } from './lib/token.js';
  import { Type } from './lib/token.js';
- export { decode, encode, Token, Type };
+ export { decode, decodeFirst, encode, Token, Type };
  //# sourceMappingURL=cborg.d.ts.map
@@ -1 +1 @@
- {"version":3,"file":"cborg.d.ts","sourceRoot":"","sources":["../cborg.js"],"names":[],"mappings":";;;yBAMa,OAAO,aAAa,EAAE,UAAU;;;;0BAEhC,OAAO,aAAa,EAAE,mBAAmB;;;;4BACzC,OAAO,aAAa,EAAE,aAAa;;;;4BACnC,OAAO,aAAa,EAAE,aAAa;uBATzB,iBAAiB;uBADjB,iBAAiB;sBAEZ,gBAAgB;qBAAhB,gBAAgB"}
+ {"version":3,"file":"cborg.d.ts","sourceRoot":"","sources":["../cborg.js"],"names":[],"mappings":";;;yBAMa,OAAO,aAAa,EAAE,UAAU;;;;0BAEhC,OAAO,aAAa,EAAE,mBAAmB;;;;4BACzC,OAAO,aAAa,EAAE,aAAa;;;;4BACnC,OAAO,aAAa,EAAE,aAAa;uBATZ,iBAAiB;4BAAjB,iBAAiB;uBAD9B,iBAAiB;sBAEZ,gBAAgB;qBAAhB,gBAAgB"}
@@ -18,6 +18,7 @@ export type QuickEncodeToken = (token: Token) => Uint8Array | undefined;
  export interface DecodeTokenizer {
    done(): boolean;
    next(): Token;
+   pos(): number;
  }
  export type TagDecoder = (inner: any) => any;
  export interface DecodeOptions {
@@ -1 +1 @@
- {"version":3,"file":"interface.d.ts","sourceRoot":"","sources":["../interface.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,KAAK,EAAE,MAAM,aAAa,CAAA;AACnC,OAAO,EAAE,EAAE,EAAE,MAAM,UAAU,CAAA;AAE7B,MAAM,MAAM,mBAAmB,GAAG,KAAK,GAAG,KAAK,EAAE,GAAG,mBAAmB,EAAE,CAAA;AAEzE,MAAM,WAAW,SAAS;IACxB,MAAM,EAAE,SAAS,GAAG,SAAS,CAAA;IAC7B,GAAG,EAAE,MAAM,GAAG,GAAG,EAAE,CAAA;IACnB,QAAQ,CAAC,GAAG,EAAE,MAAM,GAAG,GAAG,EAAE,GAAG,OAAO,CAAA;CACvC;AAED,MAAM,MAAM,mBAAmB,GAAG,CAAC,IAAI,EAAE,GAAG,EAAE,GAAG,EAAE,MAAM,EAAE,OAAO,EAAE,aAAa,EAAE,QAAQ,CAAC,EAAE,SAAS,KAAK,mBAAmB,GAAG,IAAI,CAAA;AAEtI,MAAM,MAAM,iBAAiB,GAAG,CAAC,IAAI,EAAE,GAAG,EAAE,GAAG,EAAE,MAAM,EAAE,OAAO,EAAE,aAAa,EAAE,QAAQ,CAAC,EAAE,SAAS,KAAK,mBAAmB,CAAA;AAE7H,MAAM,MAAM,gBAAgB,GAAG;IAC7B,CAAC,GAAG,EAAE,EAAE,EAAE,KAAK,EAAE,KAAK,EAAE,OAAO,CAAC,EAAE,aAAa,GAAG,IAAI,CAAC;IACvD,aAAa,CAAC,EAAE,EAAE,KAAK,EAAE,EAAE,EAAE,KAAK,GAAG,MAAM,CAAC;IAE5C,WAAW,CAAC,CAAC,KAAK,EAAE,KAAK,EAAE,OAAO,CAAC,EAAE,aAAa,GAAG,MAAM,CAAC;CAC7D,CAAA;AAED,MAAM,MAAM,SAAS,GAAG,CAAC,EAAE,EAAE,CAAC,KAAK,GAAG,KAAK,EAAE,CAAC,EAAE,EAAE,EAAE,EAAE,CAAC,KAAK,GAAG,KAAK,EAAE,CAAC,EAAE,KAAK,MAAM,CAAA;AAEpF,MAAM,MAAM,gBAAgB,GAAG,CAAC,KAAK,EAAE,KAAK,KAAK,UAAU,GAAG,SAAS,CAAA;AAEvE,MAAM,WAAW,eAAe;IAC9B,IAAI,IAAI,OAAO,CAAC;IAChB,IAAI,IAAI,KAAK,CAAA;CACd;AAED,MAAM,MAAM,UAAU,GAAG,CAAC,KAAK,EAAE,GAAG,KAAK,GAAG,CAAA;AAE5C,MAAM,WAAW,aAAa;IAC5B,eAAe,CAAC,EAAE,OAAO,CAAA;IACzB,cAAc,CAAC,EAAE,OAAO,CAAA;IACxB,qBAAqB,CAAC,EAAE,OAAO,CAAA;IAC/B,aAAa,CAAC,EAAE,OAAO,CAAA;IACvB,QAAQ,CAAC,EAAE,OAAO,CAAA;IAClB,WAAW,CAAC,EAAE,OAAO,CAAA;IACrB,MAAM,CAAC,EAAE,OAAO,CAAA;IAChB,OAAO,CAAC,EAAE,OAAO,CAAA;IACjB,sBAAsB,CAAC,EAAE,OAAO,CAAA;IAChC,iBAAiB,CAAC,EAAE,OAAO,CAAA;IAC3B,IAAI,CAAC,EAAE,UAAU,EAAE,CAAC;IACpB,SAAS,CAAC,EAAE,eAAe,CAAA;CAC5B;AAED,MAAM,WAAW,aAAa;IAC5B,OAAO,CAAC,EAAE,OAAO,CAAC;IAClB,cAAc,CAAC,EAAE,OAAO,CAAC;IACzB,SAAS,CAAC,EAAE,SAAS,CAAC;IACtB,gBAAgB,CAAC,EAAE,gBAAgB,CAAC;IACpC,YAAY,CAAC,EAAE;QAAE,CAAC,QAAQ,EAAE,MAAM,GAAG,mBAAmB,CAAA;KAAE,CAAA;CAC3D"}
+ {"version":3,"file":"interface.d.ts","sourceRoot":"","sources":["../interface.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,KAAK,EAAE,MAAM,aAAa,CAAA;AACnC,OAAO,EAAE,EAAE,EAAE,MAAM,UAAU,CAAA;AAE7B,MAAM,MAAM,mBAAmB,GAAG,KAAK,GAAG,KAAK,EAAE,GAAG,mBAAmB,EAAE,CAAA;AAEzE,MAAM,WAAW,SAAS;IACxB,MAAM,EAAE,SAAS,GAAG,SAAS,CAAA;IAC7B,GAAG,EAAE,MAAM,GAAG,GAAG,EAAE,CAAA;IACnB,QAAQ,CAAC,GAAG,EAAE,MAAM,GAAG,GAAG,EAAE,GAAG,OAAO,CAAA;CACvC;AAED,MAAM,MAAM,mBAAmB,GAAG,CAAC,IAAI,EAAE,GAAG,EAAE,GAAG,EAAE,MAAM,EAAE,OAAO,EAAE,aAAa,EAAE,QAAQ,CAAC,EAAE,SAAS,KAAK,mBAAmB,GAAG,IAAI,CAAA;AAEtI,MAAM,MAAM,iBAAiB,GAAG,CAAC,IAAI,EAAE,GAAG,EAAE,GAAG,EAAE,MAAM,EAAE,OAAO,EAAE,aAAa,EAAE,QAAQ,CAAC,EAAE,SAAS,KAAK,mBAAmB,CAAA;AAE7H,MAAM,MAAM,gBAAgB,GAAG;IAC7B,CAAC,GAAG,EAAE,EAAE,EAAE,KAAK,EAAE,KAAK,EAAE,OAAO,CAAC,EAAE,aAAa,GAAG,IAAI,CAAC;IACvD,aAAa,CAAC,EAAE,EAAE,KAAK,EAAE,EAAE,EAAE,KAAK,GAAG,MAAM,CAAC;IAE5C,WAAW,CAAC,CAAC,KAAK,EAAE,KAAK,EAAE,OAAO,CAAC,EAAE,aAAa,GAAG,MAAM,CAAC;CAC7D,CAAA;AAED,MAAM,MAAM,SAAS,GAAG,CAAC,EAAE,EAAE,CAAC,KAAK,GAAG,KAAK,EAAE,CAAC,EAAE,EAAE,EAAE,EAAE,CAAC,KAAK,GAAG,KAAK,EAAE,CAAC,EAAE,KAAK,MAAM,CAAA;AAEpF,MAAM,MAAM,gBAAgB,GAAG,CAAC,KAAK,EAAE,KAAK,KAAK,UAAU,GAAG,SAAS,CAAA;AAEvE,MAAM,WAAW,eAAe;IAC9B,IAAI,IAAI,OAAO,CAAC;IAChB,IAAI,IAAI,KAAK,CAAC;IACd,GAAG,IAAI,MAAM,CAAC;CACf;AAED,MAAM,MAAM,UAAU,GAAG,CAAC,KAAK,EAAE,GAAG,KAAK,GAAG,CAAA;AAE5C,MAAM,WAAW,aAAa;IAC5B,eAAe,CAAC,EAAE,OAAO,CAAA;IACzB,cAAc,CAAC,EAAE,OAAO,CAAA;IACxB,qBAAqB,CAAC,EAAE,OAAO,CAAA;IAC/B,aAAa,CAAC,EAAE,OAAO,CAAA;IACvB,QAAQ,CAAC,EAAE,OAAO,CAAA;IAClB,WAAW,CAAC,EAAE,OAAO,CAAA;IACrB,MAAM,CAAC,EAAE,OAAO,CAAA;IAChB,OAAO,CAAC,EAAE,OAAO,CAAA;IACjB,sBAAsB,CAAC,EAAE,OAAO,CAAA;IAChC,iBAAiB,CAAC,EAAE,OAAO,CAAA;IAC3B,IAAI,CAAC,EAAE,UAAU,EAAE,CAAC;IACpB,SAAS,CAAC,EAAE,eAAe,CAAA;CAC5B;AAED,MAAM,WAAW,aAAa;IAC5B,OAAO,CAAC,EAAE,OAAO,CAAC;IAClB,cAAc,CAAC,EAAE,OAAO,CAAC;IACzB,SAAS,CAAC,EAAE,SAAS,CAAC;IACtB,gBAAgB,CAAC,EAAE,gBAAgB,CAAC;IACpC,YAAY,CAAC,EAAE;QAAE,CAAC,QAAQ,EAAE,MAAM,GAAG,mBAAmB,CAAA;KAAE,CAAA;CAC3D"}
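The breaking half of 4.0.0 is visible here: `DecodeTokenizer` gains a required `pos()` method, so any custom tokenizer supplied via `options.tokenizer` must now report its byte offset — that is how `decodeFirst` knows where the first value ended and what to return as the remainder. A hypothetical skeleton of a conforming tokenizer follows; `CountingTokenizer` is our name, and `next()` is stubbed out since producing real CBOR `Token`s is beyond a sketch:

```javascript
// Skeleton of a DecodeTokenizer conforming to the 4.0.0 interface:
// done() and next() as before, plus the newly required pos().
class CountingTokenizer {
  constructor (data) {
    this.data = data
    this._pos = 0
  }

  pos () { // NEW in 4.0.0: current byte offset into `data`
    return this._pos
  }

  done () {
    return this._pos >= this.data.length
  }

  next () {
    // A real implementation decodes one Token here and advances _pos;
    // see cborg's own Tokeniser for the reference behaviour.
    throw new Error('not implemented in this sketch')
  }
}
```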
@@ -10,9 +10,10 @@ export class Tokeniser implements DecodeTokenizer {
   * @param {DecodeOptions} options
   */
  constructor(data: Uint8Array, options?: DecodeOptions);
- pos: number;
+ _pos: number;
  data: Uint8Array;
  options: import("../interface").DecodeOptions;
+ pos(): number;
  done(): boolean;
  next(): import("./token.js").Token;
  }
@@ -28,6 +29,12 @@ export function tokensToObject(tokeniser: DecodeTokenizer, options: DecodeOption
   * @returns {any}
   */
  export function decode(data: Uint8Array, options?: import("../interface").DecodeOptions | undefined): any;
+ /**
+  * @param {Uint8Array} data
+  * @param {DecodeOptions} [options]
+  * @returns {[any, Uint8Array]}
+  */
+ export function decodeFirst(data: Uint8Array, options?: import("../interface").DecodeOptions | undefined): [any, Uint8Array];
  declare const BREAK: unique symbol;
  declare const DONE: unique symbol;
  export {};
@@ -1 +1 @@
- {"version":3,"file":"decode.d.ts","sourceRoot":"","sources":["../../lib/decode.js"],"names":[],"mappings":"oBAKa,OAAO,YAAY,EAAE,KAAK;4BAC1B,OAAO,cAAc,EAAE,aAAa;8BACpC,OAAO,cAAc,EAAE,eAAe;AAUnD;;GAEG;AACH;IACE;;;OAGG;IACH,kBAHW,UAAU,YACV,aAAa,EAMvB;IAHC,YAAY;IACZ,iBAAgB;IAChB,8CAAsB;IAGxB,gBAEC;IAED,mCAgBC;CACF;AA6ED;;;;GAIG;AACH,0CAJW,eAAe,WACf,aAAa,GACX,GAAG,6BAAW,CAoC1B;AAED;;;;GAIG;AACH,6BAJW,UAAU,+DAER,GAAG,CAmBf;AAzID,mCAAiC;AADjC,kCAA+B"}
+ {"version":3,"file":"decode.d.ts","sourceRoot":"","sources":["../../lib/decode.js"],"names":[],"mappings":"oBAKa,OAAO,YAAY,EAAE,KAAK;4BAC1B,OAAO,cAAc,EAAE,aAAa;8BACpC,OAAO,cAAc,EAAE,eAAe;AAUnD;;GAEG;AACH;IACE;;;OAGG;IACH,kBAHW,UAAU,YACV,aAAa,EAMvB;IAHC,aAAa;IACb,iBAAgB;IAChB,8CAAsB;IAGxB,cAEC;IAED,gBAEC;IAED,mCAgBC;CACF;AA6ED;;;;GAIG;AACH,0CAJW,eAAe,WACf,aAAa,GACX,GAAG,6BAAW,CAoC1B;AAuBD;;;;GAIG;AACH,6BAJW,UAAU,+DAER,GAAG,CAQf;AAhCD;;;;GAIG;AACH,kCAJW,UAAU,+DAER,CAAC,GAAG,EAAE,UAAU,CAAC,CAgB7B;AAtID,mCAAiC;AADjC,kCAA+B"}
@@ -6,6 +6,12 @@ export type DecodeTokenizer = import('../../interface').DecodeTokenizer;
   * @returns {any}
   */
  export function decode(data: Uint8Array, options?: import("../../interface").DecodeOptions | undefined): any;
+ /**
+  * @param {Uint8Array} data
+  * @param {DecodeOptions} [options]
+  * @returns {[any, Uint8Array]}
+  */
+ export function decodeFirst(data: Uint8Array, options?: import("../../interface").DecodeOptions | undefined): [any, Uint8Array];
  /**
   * @typedef {import('../../interface').DecodeOptions} DecodeOptions
   * @typedef {import('../../interface').DecodeTokenizer} DecodeTokenizer
@@ -19,12 +25,13 @@ export class Tokenizer implements DecodeTokenizer {
   * @param {DecodeOptions} options
   */
  constructor(data: Uint8Array, options?: DecodeOptions);
- pos: number;
+ _pos: number;
  data: Uint8Array;
  options: import("../../interface").DecodeOptions;
  /** @type {string[]} */
  modeStack: string[];
  lastToken: string;
+ pos(): number;
  /**
   * @returns {boolean}
   */
@@ -1 +1 @@
- {"version":3,"file":"decode.d.ts","sourceRoot":"","sources":["../../../lib/json/decode.js"],"names":[],"mappings":"4BAMa,OAAO,iBAAiB,EAAE,aAAa;8BACvC,OAAO,iBAAiB,EAAE,eAAe;AA4atD;;;;GAIG;AACH,6BAJW,UAAU,kEAER,GAAG,CAKf;AAtbD;;;GAGG;AAEH;;GAEG;AACH;IACE;;;OAGG;IACH,kBAHW,UAAU,YACV,aAAa,EASvB;IANC,YAAY;IACZ,iBAAgB;IAChB,iDAAsB;IACtB,uBAAuB;IACvB,WADW,MAAM,EAAE,CACO;IAC1B,kBAAmB;IAGrB;;OAEG;IACH,QAFa,OAAO,CAInB;IAED;;OAEG;IACH,MAFa,MAAM,CAIlB;IAED;;OAEG;IACH,eAFa,MAAM,CAIlB;IAED,uBAMC;IAED;;OAEG;IACH,YAFW,MAAM,EAAE,QAWlB;IAED,qBA+DC;IAED;;OAEG;IACH,eAFa,KAAK,CAkLjB;IAED;;OAEG;IACH,cAFa,KAAK,CAuCjB;IAED;;OAEG;IACH,QAFa,KAAK,CAyEjB;CACF;sBAhb2B,aAAa"}
+ {"version":3,"file":"decode.d.ts","sourceRoot":"","sources":["../../../lib/json/decode.js"],"names":[],"mappings":"4BAMa,OAAO,iBAAiB,EAAE,aAAa;8BACvC,OAAO,iBAAiB,EAAE,eAAe;AAgbtD;;;;GAIG;AACH,6BAJW,UAAU,kEAER,GAAG,CAKf;AAED;;;;GAIG;AACH,kCAJW,UAAU,kEAER,CAAC,GAAG,EAAE,UAAU,CAAC,CAK7B;AApcD;;;GAGG;AAEH;;GAEG;AACH;IACE;;;OAGG;IACH,kBAHW,UAAU,YACV,aAAa,EASvB;IANC,aAAa;IACb,iBAAgB;IAChB,iDAAsB;IACtB,uBAAuB;IACvB,WADW,MAAM,EAAE,CACO;IAC1B,kBAAmB;IAGrB,cAEC;IAED;;OAEG;IACH,QAFa,OAAO,CAInB;IAED;;OAEG;IACH,MAFa,MAAM,CAIlB;IAED;;OAEG;IACH,eAFa,MAAM,CAIlB;IAED,uBAMC;IAED;;OAEG;IACH,YAFW,MAAM,EAAE,QAWlB;IAED,qBA+DC;IAED;;OAEG;IACH,eAFa,KAAK,CAkLjB;IAED;;OAEG;IACH,cAFa,KAAK,CAuCjB;IAED;;OAEG;IACH,QAFa,KAAK,CAyEjB;CACF;sBApb2B,aAAa"}
@@ -1,5 +1,6 @@
  import { encode } from './encode.js';
  import { decode } from './decode.js';
+ import { decodeFirst } from './decode.js';
  import { Tokenizer } from './decode.js';
- export { encode, decode, Tokenizer };
+ export { encode, decode, decodeFirst, Tokenizer };
  //# sourceMappingURL=json.d.ts.map
@@ -1 +1 @@
- {"version":3,"file":"json.d.ts","sourceRoot":"","sources":["../../../lib/json/json.js"],"names":[],"mappings":"uBAAuB,aAAa;uBACF,aAAa;0BAAb,aAAa"}
+ {"version":3,"file":"json.d.ts","sourceRoot":"","sources":["../../../lib/json/json.js"],"names":[],"mappings":"uBAAuB,aAAa;uBACW,aAAa;4BAAb,aAAa;0BAAb,aAAa"}