@nxtedition/rocksdb 5.2.26 → 5.2.27

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@nxtedition/rocksdb",
- "version": "5.2.26",
+ "version": "5.2.27",
  "description": "A low-level Node.js RocksDB binding",
  "license": "MIT",
  "main": "index.js",
@@ -26,7 +26,6 @@
  },
  "dependencies": {
  "abstract-level": "^1.0.2",
- "catering": "^2.1.1",
  "module-error": "^1.0.1",
  "napi-macros": "~2.0.0",
  "node-gyp-build": "^4.3.0"
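The only substantive change in this release is the removal of the `catering` dependency. `catering` is a small helper for writing APIs that accept an optional callback and otherwise return a promise; the sketch below illustrates that general pattern. Both `fromCallback` and `getValue` here are hypothetical illustrations of the pattern, not this package's actual code:

```javascript
// Hypothetical sketch of the optional-callback pattern that the
// `catering` dependency provided (not this package's actual code).
function fromCallback (callback) {
  if (callback !== undefined) {
    // Caller supplied a callback: use it directly, no promise.
    return { callback, promise: undefined }
  }
  // No callback supplied: back a generated callback with a promise.
  let resolve, reject
  const promise = new Promise((res, rej) => { resolve = res; reject = rej })
  callback = (err, result) => (err ? reject(err) : resolve(result))
  return { callback, promise }
}

// Illustrative callback-or-promise API surface.
function getValue (key, callback) {
  const { callback: cb, promise } = fromCallback(callback)
  process.nextTick(() => cb(null, 'value:' + key))
  return promise
}
```

Called as `getValue('k')` this returns a promise; called as `getValue('k', cb)` it invokes the callback and returns `undefined`.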
@@ -1,32 +0,0 @@
- ## RocksDB: A Persistent Key-Value Store for Flash and RAM Storage
-
- [![CircleCI Status](https://circleci.com/gh/facebook/rocksdb.svg?style=svg)](https://circleci.com/gh/facebook/rocksdb)
- [![TravisCI Status](https://api.travis-ci.com/facebook/rocksdb.svg?branch=main)](https://travis-ci.com/github/facebook/rocksdb)
- [![Appveyor Build status](https://ci.appveyor.com/api/projects/status/fbgfu0so3afcno78/branch/main?svg=true)](https://ci.appveyor.com/project/Facebook/rocksdb/branch/main)
- [![PPC64le Build Status](http://140-211-168-68-openstack.osuosl.org:8080/buildStatus/icon?job=rocksdb&style=plastic)](http://140-211-168-68-openstack.osuosl.org:8080/job/rocksdb)
-
- RocksDB is developed and maintained by the Facebook Database Engineering Team.
- It is built on earlier work on [LevelDB](https://github.com/google/leveldb) by Sanjay Ghemawat (sanjay@google.com)
- and Jeff Dean (jeff@google.com).
-
- This code is a library that forms the core building block for a fast
- key-value server, especially suited for storing data on flash drives.
- It has a Log-Structured-Merge-Database (LSM) design with flexible tradeoffs
- between Write-Amplification-Factor (WAF), Read-Amplification-Factor (RAF)
- and Space-Amplification-Factor (SAF). It has multi-threaded compactions,
- making it especially suitable for storing multiple terabytes of data in a
- single database.
-
- Start with example usage here: https://github.com/facebook/rocksdb/tree/main/examples
-
- See the [github wiki](https://github.com/facebook/rocksdb/wiki) for more explanation.
-
- The public interface is in `include/`. Callers should not include or
- rely on the details of any other header files in this package. Those
- internal APIs may be changed without warning.
-
- Questions and discussions are welcome on the [RocksDB Developers Public](https://www.facebook.com/groups/rocksdb.dev/) Facebook group and [email list](https://groups.google.com/g/rocksdb) on Google Groups.
-
- ## License
-
- RocksDB is dual-licensed under both the GPLv2 (found in the COPYING file in the root directory) and Apache 2.0 License (found in the LICENSE.Apache file in the root directory). You may select, at your option, one of the above-listed licenses.
@@ -1,60 +0,0 @@
- # RocksDB Micro-Benchmark
-
- ## Overview
-
- RocksDB micro-benchmarks are a set of tests for benchmarking a single component or simple DB operations. Each test artificially generates input data and repeatedly executes the same operation on it to collect and report performance metrics. Because a test focuses on a single, well-defined operation, its result is more precise and reproducible; the limitation is that it does not represent a real production use case. The test author needs to design the micro-benchmark carefully so that it reflects its true purpose.
-
- The tests are based on the [Google Benchmark](https://github.com/google/benchmark) library, which provides a standard framework for writing benchmarks.
-
- ## How to Run
- ### Prerequisite
- Install [Google Benchmark](https://github.com/google/benchmark) version `1.6.0` or above.
-
- *Note: Google Benchmark `1.6.x` is incompatible with previous versions like `1.5.x`; please make sure you're using the newer version.*
-
- ### Build and Run
- With `Makefile`:
- ```bash
- $ DEBUG_LEVEL=0 make run_microbench
- ```
- Or with CMake:
- ```bash
- $ mkdir build && cd build && cmake .. -DCMAKE_BUILD_TYPE=Release -DWITH_BENCHMARK=1
- $ make run_microbench
- ```
-
- *Note: Please run the benchmark code in a release build.*
- ### Run Single Test
- Example:
- ```bash
- $ make db_basic_bench
- $ ./db_basic_bench --benchmark_filter=<TEST_NAME>
- ```
-
- ## Best Practices
- #### * Use the Same Test Directory Setting as Unittest
- Most of the micro-benchmark tests use the same test directory setup as the unit tests, so it can be overridden with:
- ```bash
- $ TEST_TMPDIR=/mydata/tmp/ ./db_basic_bench --benchmark_filter=<TEST_NAME>
- ```
- Please follow the same convention when designing new tests.
-
- #### * Avoid Using Debug APIs
- Even though a micro-benchmark is a test, avoid internal debug APIs like TEST_WaitForRun(), which are designed for unit tests. Benchmarks are built for release builds, so none of those APIs should be used.
-
- #### * Pay Attention to Local Optimization
- As a micro-benchmark focuses on a single component or area, make sure that component is actually a key contributor to overall application performance.
-
- The compiler may optimize the isolated code differently than it would within the whole application, and if the test input is simple and small, it may fit entirely in the CPU caches, leading to misleading metrics. Take these effects into consideration when designing the tests.
-
- #### * Names of User-Defined Counters/Metrics Have to Match `[A-Za-z0-9_]`
- This is a restriction of the metrics collection and reporting system RocksDB uses internally. It also eases integration with other systems.
-
- #### * Minimize the Metrics Variation
- Try to reduce variation in the test results. One way to check it is to run the test multiple times and inspect the CV (coefficient of variation) reported by Google Benchmark:
- ```bash
- $ ./db_basic_bench --benchmark_filter=<TEST_NAME> --benchmark_repetitions=10
- ...
- <TEST_NAME>_cv 3.2%
- ```
- RocksDB has background compaction jobs which may cause the test results to vary a lot. Unless the micro-benchmark is purposely testing an operation while compaction is in progress, it should wait for compaction to finish (`db_impl->WaitForCompact()`) or disable auto-compaction.
@@ -1,43 +0,0 @@
- ## Building external plugins together with RocksDB
-
- RocksDB offers several plugin interfaces for developers to customize its behavior. One difficulty developers face is how to make their plugin available to end users. The approach discussed here involves building the external code together with the RocksDB code into a single binary. Note that another approach we plan to support involves loading plugins dynamically from shared libraries.
-
- ### Discovery
-
- We hope developers will mention their work in "PLUGINS.md" so users can easily discover and reuse solutions for customizing RocksDB.
-
- ### Directory organization
-
- External plugins will be linked according to their name into a subdirectory of "plugin/". For example, a plugin called "dedupfs" would be linked into "plugin/dedupfs/".
-
- ### Build standard
-
- Currently the only supported build systems are make and CMake.
-
- For make, files in the plugin directory ending in the .mk extension can define the following variables.
-
- * `$(PLUGIN_NAME)_SOURCES`: these files will be compiled and linked with RocksDB. They can access RocksDB public header files.
- * `$(PLUGIN_NAME)_HEADERS`: these files will be installed in the RocksDB header directory. Their paths will be prefixed by "rocksdb/plugin/$(PLUGIN_NAME)/".
- * `$(PLUGIN_NAME)_LDFLAGS`: these flags will be passed to the final link step. For example, library dependencies can be propagated here, or symbols can be forcibly included, e.g., for static registration.
- * `$(PLUGIN_NAME)_CXXFLAGS`: these flags will be passed to the compiler. For example, they can specify locations of header files in non-standard locations.
-
- Users will run the usual make commands from the RocksDB directory, specifying the plugins to include in a space-separated list in the variable `ROCKSDB_PLUGINS`.
-
- For CMake, the CMakeLists.txt file in the plugin directory can define the following variables.
-
- * `${PLUGIN_NAME}_SOURCES`: these files will be compiled and linked with RocksDB. They can access RocksDB public header files.
- * `${PLUGIN_NAME}_COMPILE_FLAGS`: these flags will be passed to the compiler. For example, they can specify locations of header files in non-standard locations.
- * `${PLUGIN_NAME}_INCLUDE_PATHS`: paths to directories to search for plugin-specific header files during compilation.
- * `${PLUGIN_NAME}_LIBS`: list of library names required to build the plugin, e.g. `dl`, `java`, `jvm`, `rados`, etc. CMake will generate the proper flags for linking.
- * `${PLUGIN_NAME}_LINK_PATHS`: list of paths for the linker to search for required libraries in addition to standard locations.
- * `${PLUGIN_NAME}_CMAKE_SHARED_LINKER_FLAGS`: additional linker flags used to generate shared libraries. For example, symbols can be forcibly included, e.g., for static registration.
- * `${PLUGIN_NAME}_CMAKE_EXE_LINKER_FLAGS`: additional linker flags used to generate executables. For example, symbols can be forcibly included, e.g., for static registration.
-
- Users will run the usual cmake commands, specifying the plugins to include in a space-separated list in the command-line variable `ROCKSDB_PLUGINS` when invoking cmake.
- ```
- cmake .. -DROCKSDB_PLUGINS="dedupfs hdfs rados"
- ```
-
- ### Example
-
- For a working example, see [Dedupfs](https://github.com/ajkr/dedupfs).
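To make the make variables above concrete, a hypothetical `plugin/dedupfs/dedupfs.mk` might look like the following sketch. File names, symbols, and flags here are invented for illustration, not taken from the actual Dedupfs project:

```
# plugin/dedupfs/dedupfs.mk (hypothetical example)
dedupfs_SOURCES = dedupfs.cc             # compiled and linked with RocksDB
dedupfs_HEADERS = dedupfs.h              # installed under rocksdb/plugin/dedupfs/
dedupfs_LDFLAGS = -u dedupfs_reg         # force-include a static registration symbol
dedupfs_CXXFLAGS = -I/usr/local/include  # headers in a non-standard location
```

A user would then build with something like `ROCKSDB_PLUGINS="dedupfs" make static_lib` from the RocksDB directory.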
@@ -1,10 +0,0 @@
- This directory contains interfaces and implementations that isolate the
- rest of the package from platform details.
-
- Code in the rest of the package includes "port.h" from this directory.
- "port.h" in turn includes a platform specific "port_<platform>.h" file
- that provides the platform specific implementation.
-
- See port_posix.h for an example of what must be provided in a platform
- specific header file.
-
@@ -1,13 +0,0 @@
- The files in this directory originally come from
- https://github.com/percona/PerconaFT/.
-
- This directory only includes the "locktree" part of PerconaFT and its
- dependencies.
-
- The following modifications were made:
- - Make locktree usable outside of the PerconaFT library
- - Add shared read-only lock support
-
- The files named *_subst.* are substitutes for PerconaFT's files; they
- contain replacements for PerconaFT's functionality.
-
@@ -1,149 +0,0 @@
- Snappy, a fast compressor/decompressor.
-
-
- Introduction
- ============
-
- Snappy is a compression/decompression library. It does not aim for maximum
- compression, or compatibility with any other compression library; instead,
- it aims for very high speeds and reasonable compression. For instance,
- compared to the fastest mode of zlib, Snappy is an order of magnitude faster
- for most inputs, but the resulting compressed files are anywhere from 20% to
- 100% bigger. (For more information, see "Performance", below.)
-
- Snappy has the following properties:
-
- * Fast: Compression speeds at 250 MB/sec and beyond, with no assembler code.
- See "Performance" below.
- * Stable: Over the last few years, Snappy has compressed and decompressed
- petabytes of data in Google's production environment. The Snappy bitstream
- format is stable and will not change between versions.
- * Robust: The Snappy decompressor is designed not to crash in the face of
- corrupted or malicious input.
- * Free and open source software: Snappy is licensed under a BSD-type license.
- For more information, see the included COPYING file.
-
- Snappy has previously been called "Zippy" in some Google presentations
- and the like.
-
-
- Performance
- ===========
-
- Snappy is intended to be fast. On a single core of a Core i7 processor
- in 64-bit mode, it compresses at about 250 MB/sec or more and decompresses at
- about 500 MB/sec or more. (These numbers are for the slowest inputs in our
- benchmark suite; others are much faster.) In our tests, Snappy usually
- is faster than algorithms in the same class (e.g. LZO, LZF, QuickLZ,
- etc.) while achieving comparable compression ratios.
-
- Typical compression ratios (based on the benchmark suite) are about 1.5-1.7x
- for plain text, about 2-4x for HTML, and of course 1.0x for JPEGs, PNGs and
- other already-compressed data. Similar numbers for zlib in its fastest mode
- are 2.6-2.8x, 3-7x and 1.0x, respectively. More sophisticated algorithms are
- capable of achieving yet higher compression rates, although usually at the
- expense of speed. Of course, compression ratio will vary significantly with
- the input.
-
- Although Snappy should be fairly portable, it is primarily optimized
- for 64-bit x86-compatible processors, and may run slower in other environments.
- In particular:
-
- - Snappy uses 64-bit operations in several places to process more data at
- once than would otherwise be possible.
- - Snappy assumes unaligned 32- and 64-bit loads and stores are cheap.
- On some platforms, these must be emulated with single-byte loads
- and stores, which is much slower.
- - Snappy assumes little-endian throughout, and needs to byte-swap data in
- several places if running on a big-endian platform.
-
- Experience has shown that even heavily tuned code can be improved.
- Performance optimizations, whether for 64-bit x86 or other platforms,
- are of course most welcome; see "Contact", below.
-
-
- Building
- ========
-
- CMake is supported and autotools will soon be deprecated.
- You need CMake 3.4 or above to build:
-
- mkdir build
- cd build && cmake ../ && make
-
-
- Usage
- =====
-
- Note that Snappy, both the implementation and the main interface,
- is written in C++. However, several third-party bindings to other languages
- are available; see the home page at http://google.github.io/snappy/
- for more information. Also, if you want to use Snappy from C code, you can
- use the included C bindings in snappy-c.h.
-
- To use Snappy from your own C++ program, include the file "snappy.h" from
- your calling file, and link against the compiled library.
-
- There are many ways to call Snappy, but the simplest possible is
-
- snappy::Compress(input.data(), input.size(), &output);
-
- and similarly
-
- snappy::Uncompress(input.data(), input.size(), &output);
-
- where "input" and "output" are both instances of std::string.
-
- There are other interfaces that are more flexible in various ways, including
- support for custom (non-array) input sources. See the header file for more
- information.
-
-
- Tests and benchmarks
- ====================
-
- When you compile Snappy, snappy_unittest is compiled in addition to the
- library itself. You do not need it to use the compressor from your own library,
- but it contains several useful components for Snappy development.
-
- First of all, it contains unit tests, verifying correctness on your machine in
- various scenarios. If you want to change or optimize Snappy, please run the
- tests to verify you have not broken anything. Note that if you have the
- Google Test library installed, unit test behavior (especially failures) will be
- significantly more user-friendly. You can find Google Test at
-
- http://github.com/google/googletest
-
- You probably also want the gflags library for handling of command-line flags;
- you can find it at
-
- http://gflags.github.io/gflags/
-
- In addition to the unit tests, snappy contains microbenchmarks used to
- tune compression and decompression performance. These are automatically run
- before the unit tests, but you can disable them using the flag
- --run_microbenchmarks=false if you have gflags installed (otherwise you will
- need to edit the source).
-
- Finally, snappy can benchmark Snappy against a few other compression libraries
- (zlib, LZO, LZF, and QuickLZ), if they were detected at configure time.
- To benchmark using a given file, give the compression algorithm you want to test
- Snappy against (e.g. --zlib) and then a list of one or more file names on the
- command line. The testdata/ directory contains the files used by the
- microbenchmark, which should provide a reasonably balanced starting point for
- benchmarking. (Note that baddata[1-3].snappy are not intended as benchmarks; they
- are used to verify correctness in the presence of corrupted data in the unit
- test.)
-
-
- Contact
- =======
-
- Snappy is distributed through GitHub. For the latest version, a bug tracker,
- and other information, see
-
- http://google.github.io/snappy/
-
- or the repository at
-
- https://github.com/google/snappy