node-pkware 0.6.0 → 1.0.0--beta.8

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,61 +1,228 @@
1
- # node-pkware
2
-
3
- nodejs implementation of StormLib compression/decompression algorithm
4
-
5
- it was the de-facto compression for games from around Y2K, like Arx Fatalis
6
-
7
- ## installation / update existing version
8
-
9
- `npm i -g node-pkware`
10
-
11
- recommended node version: 8.5+
12
-
13
- tested in node version 12.7.0
14
-
15
- ## command line interface
16
-
17
- `implode <filename> --output=<filename> --ascii|--binary --level=1|2|3` - compresses file.
18
- if `--output` is omitted, then output will be placed next to input and names as `<filename>.compressed`.
19
- optionally you can specify an offset from which the compressed data starts with the `--offset=<int|hex>`,
20
- which is useful for mixed files, such as the fts files of Arx Fatalis
21
-
22
- `explode <filename> --output=<filename>` - decompresses file. if `--output` is omitted, then
23
- output will be placed next to input and names as `<filename>.decompressed`. optionally you can
24
- specify an offset from which the compressed data starts with the `--offset=<int|hex>`, which is useful
25
- for mixed files, such as the fts files of Arx Fatalis
26
-
27
- The `--drop-before-offset` flag tells node-pkware to drop the portion before `--offset`, otherwise
28
- it will keep it untouched and attach it to the output file.
29
-
30
- ## examples
31
-
32
- `explode test/files/fast.fts --output=C:/fast.fts.decompressed --offset=1816 --keep-header`
33
-
34
- `explode test/files/fast.fts --output=C:/fast.fts.decompressed --offset=0x718 --keep-header`
35
-
36
- `implode test/files/fast.fts.unpacked --output=C:/fast.fts --binary --level=3 --offset=1816 --keep-header`
37
-
38
- ### piping also works
39
-
40
- `cat c:/arx/level8.llf | explode > c:/arx/level8.llf.unpacked --keep-header`
41
-
42
- `explode c:/arx/level8.llf > c:/arx/level8.llf.unpacked --keep-header`
43
-
44
- `cat c:/arx/level8.llf | explode --output=c:/arx/level8.llf.unpacked --keep-header`
45
-
46
-
47
- `cat e:/piping/level8.llf.unpacked | implode --binary --level=3 > e:/piping/level8.llf.comp2 --keep-header`
48
-
49
- `implode e:/piping/level8.llf.unpacked --binary --level=3 > e:/piping/level8.llf.comp --keep-header`
50
-
51
- `cat e:/piping/level8.llf.unpacked | implode --binary --level=3 --output="e:/piping/level8.llf.comp2" --keep-header`
52
-
53
- ## sources:
54
-
55
- * https://github.com/ladislav-zezula/StormLib/tree/master/src/pklib
56
- * https://github.com/ShieldBattery/implode-decoder
57
-
58
- ### helpful links:
59
-
60
- * https://stackoverflow.com/questions/2094666/pointers-in-c-when-to-use-the-ampersand-and-the-asterisk
61
- * https://stackoverflow.com/a/49394095/1806628
1
+ # node-pkware
2
+
3
+ Node.js implementation of StormLib's PKWARE compression/decompression algorithm
4
+
5
+ It was the de facto compression format for games from around Y2K, such as Arx Fatalis
6
+
7
+ ## installation / update existing version
8
+
9
+ `npm i -g node-pkware`
10
+
11
+ recommended node version: 8.5+
12
+
13
+ development and testing should be done in node 12.3+, because the tests rely on `Readable.from()` - source: https://stackoverflow.com/a/59638132/1806628
14
+
15
+ tested in node version 14.9.0
16
+
17
+ ## command line interface
18
+
19
+ `implode <filename> --output=<filename> --ascii|-a|--binary|-b --small|-s|--medium|-m|--large|-l` - compresses a file. If `--output` is omitted, the output will be placed next to the input and named `<filename>.compressed`. Optionally you can specify the offset at which the compressed data starts with `--offset=<int|hex>`, which is useful for mixed files, such as the fts files of Arx Fatalis
20
+
21
+ `explode <filename> --output=<filename>` - decompresses a file. If `--output` is omitted, the output will be placed next to the input and named `<filename>.decompressed`. Optionally you can specify the offset at which the compressed data starts with `--offset=<int|hex>`, which is useful for mixed files, such as the fts files of Arx Fatalis
22
+
23
+ The `--drop-before-offset` flag tells node-pkware to drop the portion before `--offset`; otherwise that portion is kept untouched and prepended to the output file.
24
+
25
+ **currently disabled, WIP** There is an `--auto-detect` flag, which searches for the first PKWARE header starting from the beginning of the file. If `--offset` is defined, the search starts from that point instead.
26
+
27
+ Calling either explode or implode with the `-v` or `--version` flag will display the package's version
28
+
29
+ ## examples
30
+
31
+ `explode test/files/fast.fts --output=C:/fast.fts.decompressed --offset=1816`
32
+
33
+ `explode test/files/fast.fts --output=C:/fast.fts.decompressed --offset=0x718`
34
+
35
+ `implode test/files/fast.fts.unpacked --output=C:/fast.fts --binary --large --offset=1816`
36
+
37
+ `explode test/files/fast.fts --auto-detect --debug --output=E:/fast.fts.unpacked`
38
+
39
+ `explode test/files/fast.fts --auto-detect --debug --output=E:/fast.fts.unpacked --offset=2000`
40
+
41
+ ### piping also works
42
+
43
+ **Don't use --debug when piping, because the debug messages will be written to the same output as the decompressed data!**
44
+
45
+ `cat c:/arx/level8.llf | explode > c:/arx/level8.llf.unpacked`
46
+
47
+ `explode c:/arx/level8.llf > c:/arx/level8.llf.unpacked`
48
+
49
+ `cat c:/arx/level8.llf | explode --output=c:/arx/level8.llf.unpacked`
50
+
51
+ `cat e:/piping/level8.llf.unpacked | implode --binary --large > e:/piping/level8.llf`
52
+
53
+ `implode e:/piping/level8.llf.unpacked --binary --large > e:/piping/level8.llf.comp`
54
+
55
+ `cat e:/piping/level8.llf.unpacked | implode --binary --large --output="e:/piping/level8.llf"`
56
+
57
+ ## using as a library
58
+
59
+ ### API (named imports of node-pkware)
60
+
61
+ `explode(config: object): transform._transform` - decompresses stream
62
+
63
+ Returns a function that you can use as a [transform.\_transform](https://nodejs.org/api/stream.html#stream_transform_transform_chunk_encoding_callback) method. The returned function has the `(chunk: Buffer, encoding: string, callback: function)` parameter signature.
64
+
65
+ Takes an optional config object, which has the following properties:
66
+
67
+ ```js
68
+ {
69
+ debug: boolean, // whether the code should display debug messages on the console or not (default = false)
70
+ inputBufferSize: int, // the starting size of the input buffer; it may expand later as needed. Avoiding expansion can improve performance (default = 0)
71
+ outputBufferSize: int // same as inputBufferSize, but for the output buffer (default = 0)
72
+ }
73
+ ```
74
+
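A minimal sketch of passing such a config when creating the transform (the buffer sizes here are illustrative; they match the defaults used by the package's CLI scripts):

```javascript
const { explode, stream } = require('node-pkware')
const { through } = stream

// pre-allocating larger buffers avoids repeated expansion while decompressing
const decompressor = through(explode({ debug: false, inputBufferSize: 0x10000, outputBufferSize: 0x40000 }))
```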
75
+ `decompress(config: object): transform._transform` - alias for explode
76
+
77
+ `implode(compressionType: int, dictionarySize: int, config: object): transform._transform` - compresses stream
78
+
79
+ Takes an optional config object, which has the following properties:
80
+
81
+ ```js
82
+ {
83
+ debug: boolean, // whether the code should display debug messages on the console or not (default = false)
84
+ inputBufferSize: int, // the starting size of the input buffer; it may expand later as needed. Avoiding expansion can improve performance (default = 0)
85
+ outputBufferSize: int // same as inputBufferSize, but for the output buffer (default = 0)
86
+ }
87
+ ```
88
+
89
+ Returns a function that you can use as a [transform.\_transform](https://nodejs.org/api/stream.html#stream_transform_transform_chunk_encoding_callback) method. The returned function has the `(chunk: Buffer, encoding: string, callback: function)` parameter signature.
90
+
91
+ `compress(compressionType: int, dictionarySize: int, config: object): transform._transform` - alias for implode
92
+
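A minimal sketch of creating a compressing transform function (compression type and dictionary size come from the `constants` documented below):

```javascript
const { implode, stream, constants } = require('node-pkware')
const { through } = stream
const { COMPRESSION_BINARY, DICTIONARY_SIZE_LARGE } = constants

// a Transform stream that compresses whatever gets piped through it
const compressor = through(implode(COMPRESSION_BINARY, DICTIONARY_SIZE_LARGE, { debug: false }))
```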
93
+ `stream` - an object of helper functions for channeling streams to and from explode/implode
94
+
95
+ `stream.through(transformer: function): Transform` - a function, which takes a `transform._transform` type function and turns it into a Transform stream instance
96
+
97
+ `stream.transformEmpty(chunk: Buffer, encoding: string, callback: function)` - a `transform._transform` type function, which for every input chunk will output an empty buffer
98
+
99
+ `stream.transformIdentity(chunk: Buffer, encoding: string, callback: function)` - a `transform._transform` type function, which lets the input chunks through without any change
100
+
101
+ `stream.splitAt(index: int): function` - creates a **"predicate"** function that expects Buffers, keeps an internal counter of the bytes it has received and splits the appropriate buffer at the given index. Splitting is done by returning an array of `[left: Buffer, right: Buffer, isLeftDone: bool]`. If you want to split data at the 100th byte and you keep feeding 60 byte long buffers to the function returned by `splitAt(100)`, then it will return arrays in the following manner:
102
+
103
+ ```
104
+ 1) [inputBuffer, emptyBuffer, false]
105
+ 2) [inputBuffer.slice(0, 40), inputBuffer.slice(40, 60), true]
106
+ 3) [emptyBuffer, inputBuffer, true]
107
+ 4) [emptyBuffer, inputBuffer, true]
108
+ ... and so on
109
+ ```
110
+
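A minimal sketch of the behaviour described above (the predicate is called directly here for illustration; normally it is handed to `transformSplitBy`):

```javascript
const { stream } = require('node-pkware')
const { splitAt } = stream

const split = splitAt(100) // split the incoming data at the 100th byte
const chunk = Buffer.alloc(60) // keep feeding 60 byte long buffers

split(chunk) // -> [ 60 byte buffer, empty buffer, false ]
split(chunk) // -> [ 40 byte buffer, 20 byte buffer, true ]
split(chunk) // -> [ empty buffer, 60 byte buffer, true ]
```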
111
+ `stream.transformSplitBy(predicate: predicate, left: transform._transform, right: transform._transform): transform._transform` - higher-order function for introducing conditional logic to transform.\_transform functions. This is used internally to handle offsets for explode()
112
+
113
+ `stream.streamToBuffer(callback: function): writable._write` - data can be piped to the returned function from a stream and it will concatenate all chunks into a single buffer. Takes a callback function, which will receive the concatenated buffer as a parameter
114
+
115
+ `constants.COMPRESSION_BINARY` and `constants.COMPRESSION_ASCII` - compression types for implode
116
+
117
+ `constants.DICTIONARY_SIZE_SMALL`, `constants.DICTIONARY_SIZE_MEDIUM` and `constants.DICTIONARY_SIZE_LARGE` - dictionary sizes for implode; they determine how well the file gets compressed. A small dictionary size gives the algorithm less memory to look back for repetitions in the data, so compression is less effective and the file stays larger. A large dictionary size allows more lookback, so compression is more effective and produces smaller files.
118
+
119
+ `errors.InvalidDictionarySizeError` - thrown by implode when an invalid dictionary size is specified, or by explode when it encounters invalid data in the header section (the first 2 bytes of a compressed file)
120
+
121
+ `errors.InvalidCompressionTypeError` - thrown by implode when an invalid compression type is specified, or by explode when it encounters invalid data in the header section (the first 2 bytes of a compressed file)
122
+
123
+ `errors.InvalidDataError` - thrown by explode when the compressed data is less than 5 bytes long. PKWARE-compressed files have a 2-byte header followed by at least 2 bytes of data and an end literal.
124
+
125
+ `errors.AbortedError` - thrown by explode when the compressed data ends mid-decompression, without reaching the end literal
126
+
127
+ ### examples
128
+
129
+ #### decompressing file with no offset into a file
130
+
131
+ ```javascript
132
+ const fs = require('fs')
133
+ const { explode, stream } = require('node-pkware')
134
+ const { through } = stream
135
+
136
+ fs.createReadStream(`path-to-compressed-file`)
137
+ .pipe(through(explode()))
138
+ .pipe(fs.createWriteStream(`path-to-write-decompressed-data`))
139
+ ```
140
+
141
+ #### decompressing buffer with no offset into a buffer
142
+
143
+ ```javascript
144
+ const { Readable } = require('stream')
145
+ const { explode, stream } = require('node-pkware')
146
+ const { through, streamToBuffer } = stream
147
+
148
+ Readable.from(buffer) // buffer is of type Buffer with compressed data
149
+ .pipe(through(explode()))
150
+ .pipe(
151
+ streamToBuffer(decompressedData => {
152
+ // decompressedData holds the decompressed buffer
153
+ })
154
+ )
155
+ ```
156
+
157
+ #### decompressing file with offset into a file, keeping initial part intact
158
+
159
+ ```javascript
160
+ const fs = require('fs')
161
+ const { explode, stream } = require('node-pkware')
162
+ const { through, transformSplitBy, splitAt, transformIdentity } = stream
163
+
164
+ const offset = 150 // the first 150 bytes are passed through untouched and explode decompresses the data that comes afterwards
165
+
166
+ fs.createReadStream(`path-to-compressed-file`)
167
+ .pipe(through(transformSplitBy(splitAt(offset), transformIdentity(), explode())))
168
+ .pipe(fs.createWriteStream(`path-to-write-decompressed-data`))
169
+ ```
170
+
171
+ #### decompressing file with offset into a file, discarding initial part
172
+
173
+ ```javascript
174
+ const fs = require('fs')
175
+ const { explode, stream } = require('node-pkware')
176
+ const { through, transformSplitBy, splitAt, transformEmpty } = stream
177
+
178
+ const offset = 150 // the first 150 bytes are discarded and explode decompresses the data that comes afterwards
179
+
180
+ fs.createReadStream(`path-to-compressed-file`)
181
+ .pipe(through(transformSplitBy(splitAt(offset), transformEmpty(), explode())))
182
+ .pipe(fs.createWriteStream(`path-to-write-decompressed-data`))
183
+ ```
184
+
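Compression works the same way as the decompression examples above; a sketch of compressing a file with no offset, using the documented `constants` for compression type and dictionary size:

```javascript
const fs = require('fs')
const { implode, stream, constants } = require('node-pkware')
const { through } = stream
const { COMPRESSION_BINARY, DICTIONARY_SIZE_LARGE } = constants

fs.createReadStream(`path-to-uncompressed-file`)
  .pipe(through(implode(COMPRESSION_BINARY, DICTIONARY_SIZE_LARGE)))
  .pipe(fs.createWriteStream(`path-to-write-compressed-data`))
```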
185
+ ### Catching errors
186
+
187
+ ```javascript
188
+ const fs = require('fs')
189
+ const { explode, stream } = require('node-pkware')
190
+ const { through } = stream
191
+
192
+ fs.createReadStream(`path-to-compressed-file`)
193
+ .on('error', err => {
194
+ console.error('readstream error')
195
+ })
196
+ .pipe(
197
+ through(explode()).on('error', err => {
198
+ console.error('explode error')
199
+ })
200
+ )
201
+ .pipe(
202
+ fs.createWriteStream(`path-to-write-decompressed-data`).on('error', err => {
203
+ console.error('writestream error')
204
+ })
205
+ )
206
+ ```
207
+
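If you need to tell the failure modes apart, the error classes documented above can be checked with `instanceof`; a sketch of such a handler (the handler name is hypothetical, attach it via `.on('error', ...)` as in the example above):

```javascript
const { errors } = require('node-pkware')

const onExplodeError = err => {
  if (err instanceof errors.InvalidDataError) {
    console.error('compressed data is shorter than 5 bytes')
  } else if (err instanceof errors.AbortedError) {
    console.error('data ended before the end literal was reached')
  } else {
    console.error(`explode error: ${err.message}`)
  }
}
```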
208
+ ## Useful links
209
+
210
+ ### test files
211
+
212
+ https://github.com/meszaros-lajos-gyorgy/pkware-test-files
213
+
214
+ ### sources
215
+
216
+ - https://github.com/ladislav-zezula/StormLib/tree/master/src/pklib
217
+ - https://github.com/ShieldBattery/implode-decoder
218
+ - https://github.com/TheNitesWhoSay/lawine/blob/master/lawine/misc/implode.c - nice find, @alexpineda!
219
+ - https://github.com/arx/ArxLibertatis/blob/master/src/io/Blast.cpp
220
+ - https://github.com/arx/ArxLibertatis/blob/229d55b1c537c137ac50096221fa486df18ba0d2/src/io/Implode.cpp
221
+
222
+ Implode was removed from Arx Libertatis at this commit: https://github.com/arx/ArxLibertatis/commit/2db9f0dd023fdd5d4da6f06c08a92d932e218187
223
+
224
+ ### helpful info
225
+
226
+ - https://stackoverflow.com/questions/2094666/pointers-in-c-when-to-use-the-ampersand-and-the-asterisk
227
+ - https://stackoverflow.com/a/49394095/1806628
228
+ - https://nodejs.org/api/stream.html
package/bin/explode.js ADDED
@@ -0,0 +1,99 @@
1
+ #!/usr/bin/env node
2
+
3
+ const fs = require('fs')
4
+ const minimist = require('minimist-lite')
5
+ const { getPackageVersion, parseNumberString, fileExists } = require('../src/helpers/functions.js')
6
+ const { transformEmpty, transformIdentity, transformSplitBy, splitAt, through } = require('../src/helpers/stream.js')
7
+ const { explode } = require('../src/explode.js')
8
+ // const {
9
+ // COMPRESSION_BINARY,
10
+ // COMPRESSION_ASCII,
11
+ // DICTIONARY_SIZE_SMALL,
12
+ // DICTIONARY_SIZE_MEDIUM,
13
+ // DICTIONARY_SIZE_LARGE
14
+ // } = require('../src/constants.js')
15
+
16
+ const args = minimist(process.argv.slice(2), {
17
+ string: ['output', 'offset', 'input-buffer-size', 'output-buffer-size'],
18
+ boolean: ['version', 'drop-before-offset', 'debug' /*, 'auto-detect' */],
19
+ alias: {
20
+ v: 'version'
21
+ }
22
+ })
23
+
24
+ const decompress = (input, output, offset, /* autoDetect, */ keepHeader, config) => {
25
+ const leftHandler = keepHeader ? transformIdentity() : transformEmpty()
26
+ const rightHandler = explode(config)
27
+
28
+ let handler = rightHandler
29
+
30
+ // if (autoDetect) {
31
+ // const everyPkwareHeader = [
32
+ // Buffer.from([COMPRESSION_BINARY, DICTIONARY_SIZE_SMALL]),
33
+ // Buffer.from([COMPRESSION_BINARY, DICTIONARY_SIZE_MEDIUM]),
34
+ // Buffer.from([COMPRESSION_BINARY, DICTIONARY_SIZE_LARGE]),
35
+ // Buffer.from([COMPRESSION_ASCII, DICTIONARY_SIZE_SMALL]),
36
+ // Buffer.from([COMPRESSION_ASCII, DICTIONARY_SIZE_MEDIUM]),
37
+ // Buffer.from([COMPRESSION_ASCII, DICTIONARY_SIZE_LARGE])
38
+ // ]
39
+ // handler = transformSplitBy(splitAtMatch(everyPkwareHeader, offset, config.debug), leftHandler, rightHandler)
40
+ // } else if (offset > 0) {
41
+ handler = transformSplitBy(splitAt(offset), leftHandler, rightHandler)
42
+ // }
43
+
44
+ return new Promise((resolve, reject) => {
45
+ input.pipe(through(handler).on('error', reject)).pipe(output).on('finish', resolve).on('error', reject)
46
+ })
47
+ }
48
+
49
+ ;(async () => {
50
+ if (args.version) {
51
+ console.log(await getPackageVersion())
52
+ process.exit(0)
53
+ }
54
+
55
+ let input = args._[0] || args.input
56
+ let output = args.output
57
+
58
+ let hasErrors = false
59
+
60
+ if (input) {
61
+ if (await fileExists(input)) {
62
+ input = fs.createReadStream(input)
63
+ } else {
64
+ console.error('error: input file does not exist')
65
+ hasErrors = true
66
+ }
67
+ } else {
68
+ input = process.openStdin()
69
+ }
70
+
71
+ if (output) {
72
+ output = fs.createWriteStream(output)
73
+ } else {
74
+ output = process.stdout
75
+ }
76
+
77
+ if (hasErrors) {
78
+ process.exit(1)
79
+ }
80
+
81
+ const offset = parseNumberString(args.offset, 0)
82
+ // const autoDetect = args['auto-detect']
83
+
84
+ const keepHeader = !args['drop-before-offset']
85
+ const config = {
86
+ debug: args.debug,
87
+ inputBufferSize: parseNumberString(args['input-buffer-size'], 0x10000),
88
+ outputBufferSize: parseNumberString(args['output-buffer-size'], 0x40000)
89
+ }
90
+
91
+ decompress(input, output, offset, /* autoDetect, */ keepHeader, config)
92
+ .then(() => {
93
+ process.exit(0)
94
+ })
95
+ .catch(e => {
96
+ console.error(`error: ${e.message}`)
97
+ process.exit(1)
98
+ })
99
+ })()
package/bin/implode.js ADDED
@@ -0,0 +1,113 @@
1
+ #!/usr/bin/env node
2
+
3
+ const fs = require('fs')
4
+ const minimist = require('minimist-lite')
5
+ const {
6
+ COMPRESSION_BINARY,
7
+ COMPRESSION_ASCII,
8
+ DICTIONARY_SIZE_SMALL,
9
+ DICTIONARY_SIZE_MEDIUM,
10
+ DICTIONARY_SIZE_LARGE
11
+ } = require('../src/constants.js')
12
+ const { getPackageVersion, parseNumberString, fileExists } = require('../src/helpers/functions.js')
13
+ const { implode } = require('../src/implode.js')
14
+ const { transformEmpty, transformIdentity, transformSplitBy, splitAt, through } = require('../src/helpers/stream.js')
15
+
16
+ const compress = (input, output, offset, keepHeader, compressionType, dictionarySize, config) => {
17
+ const leftHandler = keepHeader ? transformIdentity() : transformEmpty()
18
+ const rightHandler = implode(compressionType, dictionarySize, config)
19
+
20
+ const handler = transformSplitBy(splitAt(offset), leftHandler, rightHandler)
21
+
22
+ return new Promise((resolve, reject) => {
23
+ input.pipe(through(handler).on('error', reject)).pipe(output).on('finish', resolve).on('error', reject)
24
+ })
25
+ }
26
+
27
+ const args = minimist(process.argv.slice(2), {
28
+ string: ['output', 'offset', 'input-buffer-size', 'output-buffer-size'],
29
+ boolean: ['version', 'binary', 'ascii', 'drop-before-offset', 'debug', 'small', 'medium', 'large'],
30
+ alias: {
31
+ a: 'ascii',
32
+ b: 'binary',
33
+ s: 'small',
34
+ m: 'medium',
35
+ l: 'large',
36
+ v: 'version'
37
+ }
38
+ })
39
+
40
+ ;(async () => {
41
+ if (args.version) {
42
+ console.log(await getPackageVersion())
43
+ process.exit(0)
44
+ }
45
+
46
+ let input = args._[0] || args.input
47
+ let output = args.output
48
+
49
+ let hasErrors = false
50
+
51
+ if (input) {
52
+ if (await fileExists(input)) {
53
+ input = fs.createReadStream(input)
54
+ } else {
55
+ console.error('error: given file does not exist')
56
+ hasErrors = true
57
+ }
58
+ } else {
59
+ input = process.openStdin()
60
+ }
61
+
62
+ if (args.ascii && args.binary) {
63
+ console.error('error: multiple compression types specified, can only work with one of --ascii and --binary')
64
+ hasErrors = true
65
+ } else if (!args.ascii && !args.binary) {
66
+ console.error('error: compression type missing, expected either --ascii or --binary')
67
+ hasErrors = true
68
+ }
69
+
70
+ const sizes = [args.small, args.medium, args.large].filter(x => x === true)
71
+ if (sizes.length > 1) {
72
+ console.error('error: multiple size types specified, can only work with one of --small, --medium and --large')
73
+ hasErrors = true
74
+ } else if (sizes.length === 0) {
75
+ console.error('error: size type missing, expected either --small, --medium or --large')
76
+ hasErrors = true
77
+ }
78
+
79
+ if (output) {
80
+ output = fs.createWriteStream(output)
81
+ } else {
82
+ output = process.stdout
83
+ }
84
+
85
+ if (hasErrors) {
86
+ process.exit(1)
87
+ }
88
+
89
+ const compressionType = args.ascii ? COMPRESSION_ASCII : COMPRESSION_BINARY
90
+ const dictionarySize = args.small
91
+ ? DICTIONARY_SIZE_SMALL
92
+ : args.medium
93
+ ? DICTIONARY_SIZE_MEDIUM
94
+ : DICTIONARY_SIZE_LARGE
95
+
96
+ const offset = parseNumberString(args.offset, 0)
97
+
98
+ const keepHeader = !args['drop-before-offset']
99
+ const config = {
100
+ debug: args.debug,
101
+ inputBufferSize: parseNumberString(args['input-buffer-size'], 0x10000),
102
+ outputBufferSize: parseNumberString(args['output-buffer-size'], 0x12000)
103
+ }
104
+
105
+ compress(input, output, offset, keepHeader, compressionType, dictionarySize, config)
106
+ .then(() => {
107
+ process.exit(0)
108
+ })
109
+ .catch(e => {
110
+ console.error(`error: ${e.message}`)
111
+ process.exit(1)
112
+ })
113
+ })()
package/package.json CHANGED
@@ -1,20 +1,21 @@
1
1
  {
2
2
  "name": "node-pkware",
3
- "version": "0.6.0",
3
+ "version": "1.0.0--beta.8",
4
4
  "description": "The nodejs implementation of StormLib's pkware compressor/de-compressor",
5
- "main": "src/index.mjs",
5
+ "main": "src/index.js",
6
+ "types": "types/index.d.ts",
6
7
  "bin": {
7
- "implode": "bin/implode.mjs",
8
- "explode": "bin/explode.mjs"
8
+ "explode": "bin/explode.js",
9
+ "implode": "bin/implode.js"
9
10
  },
10
11
  "scripts": {
11
- "test": "node --experimental-modules test/index.mjs",
12
- "test:watch": "nodemon --experimental-modules test/index.mjs --ext .mjs",
13
- "lint": "eslint \"src/**/*.+(js|mjs)\"",
12
+ "lint": "eslint \"src/**/*.js\"",
14
13
  "lint:fix": "npm run lint -- --fix",
15
- "unit": "mocha --experimental-modules test/**/*.spec.mjs --timeout 5000",
16
- "unit:watch": "nodemon --exec \"npm run unit\" --watch test --watch src",
17
- "lint:staged": "lint-staged"
14
+ "lint:staged": "lint-staged",
15
+ "test:unit": "set FORCE_COLOR=true && mocha test/**/*.spec.js --timeout 5000",
16
+ "test:unit:watch": "nodemon --exec \"npm run test:unit\" --watch test --watch src",
17
+ "test:files": "set FORCE_COLOR=true && mocha test/**/*.files.js --timeout 10000",
18
+ "test:files:watch": "nodemon --exec \"npm run test:files\" --watch test --watch src"
18
19
  },
19
20
  "repository": {
20
21
  "type": "git",
@@ -27,33 +28,36 @@
27
28
  "author": "Lajos Meszaros <m_lajos@hotmail.com>",
28
29
  "license": "GPL-3.0-or-later",
29
30
  "dependencies": {
30
- "minimist": "^1.2.5",
31
- "ramda": "^0.27.1"
31
+ "minimist-lite": "^2.0.0",
32
+ "ramda": "^0.27.1",
33
+ "ramda-adjunct": "^2.33.0"
32
34
  },
33
35
  "devDependencies": {
34
- "assert": "^2.0.0",
35
- "eslint": "^7.6.0",
36
- "eslint-config-prettier": "^6.11.0",
37
- "eslint-config-prettier-standard": "^3.0.1",
38
- "eslint-config-standard": "^14.1.1",
39
- "eslint-plugin-import": "^2.22.0",
36
+ "arx-header-size": "^0.4.4",
37
+ "binary-comparator": "^0.5.0",
38
+ "eslint": "^7.32.0",
39
+ "eslint-config-prettier": "^8.3.0",
40
+ "eslint-config-prettier-standard": "^4.0.1",
41
+ "eslint-config-standard": "^16.0.3",
42
+ "eslint-plugin-import": "^2.24.2",
40
43
  "eslint-plugin-node": "^11.1.0",
41
- "eslint-plugin-prettier": "^3.1.4",
42
- "eslint-plugin-promise": "^4.2.1",
44
+ "eslint-plugin-prettier": "^4.0.0",
45
+ "eslint-plugin-promise": "^5.1.0",
43
46
  "eslint-plugin-ramda": "^2.5.1",
44
- "eslint-plugin-standard": "^4.0.1",
45
- "lint-staged": "^10.2.11",
46
- "mocha": "^8.1.1",
47
- "nodemon": "^2.0.4",
47
+ "eslint-plugin-standard": "^4.1.0",
48
+ "lint-staged": "^11.1.2",
49
+ "mocha": "^9.1.1",
50
+ "nodemon": "^2.0.12",
48
51
  "pre-commit": "^1.2.2",
49
- "prettier": "^2.0.5",
50
- "prettier-config-standard": "^1.0.1"
52
+ "prettier": "^2.3.2",
53
+ "prettier-config-standard": "^4.0.0"
51
54
  },
52
55
  "pre-commit": [
53
56
  "lint:staged",
54
- "unit"
57
+ "test:unit",
58
+ "test:files"
55
59
  ],
56
60
  "lint-staged": {
57
- "*.{js,mjs}": "eslint --fix"
61
+ "*.js": "eslint --fix"
58
62
  }
59
63
  }