extlz4 0.2.4.2

@@ -0,0 +1,114 @@
LZ4 - Extremely fast compression
================================

LZ4 is a lossless compression algorithm,
providing a compression speed of 400 MB/s per core,
scalable with multi-core CPUs.
It features an extremely fast decoder,
with speeds of multiple GB/s per core,
typically reaching RAM speed limits on multi-core systems.

Speed can be tuned dynamically, by selecting an "acceleration" factor
which trades compression ratio for more speed.
On the other end, a high-compression derivative, LZ4_HC, is also provided,
trading CPU time for an improved compression ratio.
All versions feature the same decompression speed.
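As a minimal sketch of this tuning knob (using only the public `lz4.h` API; buffer names and sizes are illustrative), a larger `acceleration` argument to `LZ4_compress_fast()` runs faster and compresses less:

```
#include <stdio.h>
#include "lz4.h"

int main(void)
{
    const char src[] = "LZ4 trades compression ratio for speed via 'acceleration'.";
    char dst[LZ4_COMPRESSBOUND(sizeof src)];   /* worst-case compressed size */
    int accel;

    /* acceleration=1 is the default; larger values favor speed over ratio */
    for (accel = 1; accel <= 8; accel *= 2) {
        int const csize = LZ4_compress_fast(src, dst, (int)sizeof src,
                                            (int)sizeof dst, accel);
        if (csize <= 0) return 1;   /* 0 means compression failed */
        printf("acceleration %d -> %d bytes\n", accel, csize);
    }
    return 0;
}
```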
The LZ4 library is provided as open-source software, using a BSD 2-Clause license.


|Branch      |Status   |
|------------|---------|
|master      | [![Build Status][travisMasterBadge]][travisLink] [![Build status][AppveyorMasterBadge]][AppveyorLink] [![coverity][coverBadge]][coverlink] |
|dev         | [![Build Status][travisDevBadge]][travisLink] [![Build status][AppveyorDevBadge]][AppveyorLink] |

[travisMasterBadge]: https://travis-ci.org/lz4/lz4.svg?branch=master "Continuous Integration test suite"
[travisDevBadge]: https://travis-ci.org/lz4/lz4.svg?branch=dev "Continuous Integration test suite"
[travisLink]: https://travis-ci.org/lz4/lz4
[AppveyorMasterBadge]: https://ci.appveyor.com/api/projects/status/github/lz4/lz4?branch=master&svg=true "Windows test suite"
[AppveyorDevBadge]: https://ci.appveyor.com/api/projects/status/github/lz4/lz4?branch=dev&svg=true "Windows test suite"
[AppveyorLink]: https://ci.appveyor.com/project/YannCollet/lz4-1lndh
[coverBadge]: https://scan.coverity.com/projects/4735/badge.svg "Static code analysis of Master branch"
[coverlink]: https://scan.coverity.com/projects/4735

> **Branch Policy:**
> - The "master" branch is considered stable at all times.
> - The "dev" branch is the one where all contributions must be merged
>   before being promoted to master.
>   + If you plan to propose a patch, please commit to the "dev" branch,
>     or its own feature branch.
>     Direct commits to "master" are not permitted.

Benchmarks
-------------------------

The benchmark uses [lzbench], from @inikep,
compiled with GCC v6.2.0 on 64-bit Linux.
The reference system uses a Core i7-3930K CPU @ 4.5GHz.
The benchmark evaluates compression of the reference [Silesia Corpus]
in single-thread mode.

[lzbench]: https://github.com/inikep/lzbench
[Silesia Corpus]: http://sun.aei.polsl.pl/~sdeor/index.php?page=silesia

| Compressor             | Ratio   | Compression | Decompression |
| ----------             | -----   | ----------- | ------------- |
| memcpy                 |  1.000  | 7300 MB/s   | 7300 MB/s     |
|**LZ4 fast 8 (v1.7.3)** |  1.799  |**911 MB/s** | **3360 MB/s** |
|**LZ4 default (v1.7.3)**|**2.101**|**625 MB/s** | **3220 MB/s** |
|  LZO 2.09              |  2.108  |  620 MB/s   |  845 MB/s     |
|  QuickLZ 1.5.0         |  2.238  |  510 MB/s   |  600 MB/s     |
|  Snappy 1.1.3          |  2.091  |  450 MB/s   | 1550 MB/s     |
|  LZF v3.6              |  2.073  |  365 MB/s   |  820 MB/s     |
| [Zstandard] 1.1.1 -1   |  2.876  |  330 MB/s   |  930 MB/s     |
| [Zstandard] 1.1.1 -3   |  3.164  |  200 MB/s   |  810 MB/s     |
| [zlib] deflate 1.2.8 -1|  2.730  |  100 MB/s   |  370 MB/s     |
|**LZ4 HC -9 (v1.7.3)**  |**2.720**|   34 MB/s   | **3240 MB/s** |
| [zlib] deflate 1.2.8 -6|  3.099  |   33 MB/s   |  390 MB/s     |

[zlib]: http://www.zlib.net/
[Zstandard]: http://www.zstd.net/
LZ4 is also compatible with, and well optimized for, x32 mode, where it provides an additional ~10% speed gain.


Installation
-------------------------

```
make
make install     # this command may require root access
```

LZ4's `Makefile` supports standard [Makefile conventions],
including [staged installs], [redirection], and [command redefinition].

[Makefile conventions]: https://www.gnu.org/prep/standards/html_node/Makefile-Conventions.html
[staged installs]: https://www.gnu.org/prep/standards/html_node/DESTDIR.html
[redirection]: https://www.gnu.org/prep/standards/html_node/Directory-Variables.html
[command redefinition]: https://www.gnu.org/prep/standards/html_node/Utilities-in-Makefiles.html

Documentation
-------------------------

The raw LZ4 block compression format is detailed within [lz4_Block_format].

To compress an arbitrarily long file or data stream, multiple blocks are required.
Organizing these blocks and providing a common header format to handle their content
is the purpose of the Frame format, defined in [lz4_Frame_format].
Interoperable versions of LZ4 must respect this frame format.

[lz4_Block_format]: doc/lz4_Block_format.md
[lz4_Frame_format]: doc/lz4_Frame_format.md
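As an illustrative sketch of the frame layer (assuming the one-shot `LZ4F_*` API from `lz4frame.h`; the helper name is hypothetical), producing such an interoperable frame takes one call plus a worst-case bound:

```
#include <stdlib.h>
#include "lz4frame.h"

/* Compress srcSize bytes into a standard LZ4 frame; returns a malloc'd buffer. */
static void* compress_to_frame(const void* src, size_t srcSize, size_t* frameSize)
{
    size_t const bound = LZ4F_compressFrameBound(srcSize, NULL);   /* NULL => default preferences */
    void* const frame = malloc(bound);
    if (frame == NULL) return NULL;

    size_t const result = LZ4F_compressFrame(frame, bound, src, srcSize, NULL);
    if (LZ4F_isError(result)) { free(frame); return NULL; }

    *frameSize = result;   /* number of bytes written into the frame buffer */
    return frame;
}
```

A frame produced this way is decodable by the `lz4` command-line utility and by any conforming implementation.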

Other source versions
-------------------------

Beyond the C reference source,
many contributors have created versions of lz4 in multiple languages
(Java, C#, Python, Perl, Ruby, etc.).
A list of known source ports is maintained on the [LZ4 Homepage].

[LZ4 Homepage]: http://www.lz4.org
@@ -0,0 +1,39 @@
dependencies:
  override:
    - sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test; sudo apt-get -y -qq update
    - sudo apt-get -y install qemu-system-ppc qemu-user-static gcc-powerpc-linux-gnu
    - sudo apt-get -y install qemu-system-arm gcc-arm-linux-gnueabi libc6-dev-armel-cross gcc-aarch64-linux-gnu libc6-dev-arm64-cross
    - sudo apt-get -y install libc6-dev-i386 clang gcc-5 gcc-5-multilib gcc-6 valgrind

test:
  override:
    # Tests compilers and C standards
    - clang -v; make clangtest && make clean
    - g++ -v; make gpptest && make clean
    - gcc -v; make c_standards && make clean
    - gcc-5 -v; make -C tests test-lz4 CC=gcc-5 MOREFLAGS=-Werror && make clean
    - gcc-5 -v; make -C tests test-lz4c32 CC=gcc-5 MOREFLAGS="-I/usr/include/x86_64-linux-gnu -Werror" && make clean
    - gcc-6 -v; make c_standards CC=gcc-6 && make clean
    - gcc-6 -v; make -C tests test-lz4 CC=gcc-6 MOREFLAGS=-Werror && make clean
    # Shorter tests
    - make cmake && make clean
    - make -C tests test-lz4
    - make -C tests test-lz4c
    - make -C tests test-fasttest
    - make -C tests test-frametest
    - make -C tests test-fullbench
    - make -C tests test-fuzzer && make clean
    - make -C lib all && make clean
    - pyenv global 3.4.4; CFLAGS=-I/usr/include/x86_64-linux-gnu make versionsTest && make clean
    - make travis-install && make clean
    # Longer tests
    - gcc -v; make -C tests test32 MOREFLAGS="-I/usr/include/x86_64-linux-gnu" && make clean
    - make usan && make clean
    - clang -v; make staticAnalyze && make clean
    # Valgrind tests
    - make -C tests test-mem && make clean
    # ARM, AArch64, PowerPC, PowerPC64 tests
    - make platformTest CC=powerpc-linux-gnu-gcc QEMU_SYS=qemu-ppc-static && make clean
    - make platformTest CC=powerpc-linux-gnu-gcc QEMU_SYS=qemu-ppc64-static MOREFLAGS=-m64 && make clean
    - make platformTest CC=arm-linux-gnueabi-gcc QEMU_SYS=qemu-arm-static && make clean
    - make platformTest CC=aarch64-linux-gnu-gcc QEMU_SYS=qemu-aarch64-static && make clean
@@ -0,0 +1,24 @@
LZ4 Library
Copyright (c) 2011-2016, Yann Collet
All rights reserved.

Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this
  list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice, this
  list of conditions and the following disclaimer in the documentation and/or
  other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
@@ -0,0 +1,73 @@
LZ4 - Library Files
================================

The directory contains many files, but depending on a project's objectives,
not all of them are necessary.

#### Minimal LZ4 build

The minimum required is **`lz4.c`** and **`lz4.h`**,
which provide the fast compression and decompression algorithm.

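As a minimal sketch of that two-file build (nothing beyond the public `lz4.h` API is assumed; buffer names are illustrative), a complete compress/decompress round-trip looks like this:

```
#include <stdio.h>
#include <string.h>
#include "lz4.h"

int main(void)
{
    const char src[] = "Only lz4.c and lz4.h are needed for this round-trip.";
    char compressed[LZ4_COMPRESSBOUND(sizeof src)];   /* worst-case size */
    char restored[sizeof src];

    int const csize = LZ4_compress_default(src, compressed,
                                           (int)sizeof src, (int)sizeof compressed);
    if (csize <= 0) return 1;   /* 0 means the destination buffer was too small */

    int const dsize = LZ4_decompress_safe(compressed, restored,
                                          csize, (int)sizeof restored);
    if (dsize < 0 || memcmp(src, restored, sizeof src) != 0) return 1;

    printf("%u bytes -> %d compressed -> %d restored\n",
           (unsigned)sizeof src, csize, dsize);
    return 0;
}
```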

#### The High Compression variant of LZ4

For more compression at the cost of compression speed,
the High Compression variant **lz4hc** is available.
It requires adding **`lz4hc.c`** and **`lz4hc.h`**.
The variant still depends on the regular `lz4` source files;
in particular, decompression is still provided by `lz4.c`.

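As a sketch of how little changes when switching to the HC variant (assuming only the documented `LZ4_compress_HC()` entry point from `lz4hc.h`; the wrapper name is hypothetical), compression gains a level parameter while decompression stays exactly as above:

```
#include "lz4.h"     /* LZ4_COMPRESSBOUND, LZ4_decompress_safe */
#include "lz4hc.h"   /* LZ4_compress_HC */

/* Same buffers as the fast variant; only the compression call differs. */
static int compress_hc_sketch(const char* src, int srcSize,
                              char* dst, int dstCapacity)
{
    /* level 9 is a typical high-compression setting; higher is slower */
    return LZ4_compress_HC(src, dst, srcSize, dstCapacity, 9);
}
```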

#### Compatibility issues

In order to produce files or streams compatible with the `lz4` command-line utility,
it's necessary to encode lz4-compressed blocks using the [official interoperable frame format].
This format is generated and decoded automatically by the **lz4frame** library.
In order to work properly, lz4frame needs lz4 and lz4hc, and also **xxhash**,
which provides error detection.
(_Advanced stuff_: it's possible to hide the xxhash symbols in a local namespace.
This is what `liblz4` does, to avoid symbol duplication
in case a user program links several libraries containing xxhash symbols.)


#### Advanced API

A more complex `lz4frame_static.h` is also provided.
It contains definitions which are not guaranteed to remain stable in future versions.
It must be used with static linking ***only***.


#### Using MinGW+MSYS to create a DLL

A DLL can be created using MinGW+MSYS with the `make liblz4` command.
This command creates `dll\liblz4.dll` and the import library `dll\liblz4.lib`.
The import library is only required with Visual C++.
The header files `lz4.h`, `lz4hc.h`, `lz4frame.h` and the dynamic library
`dll\liblz4.dll` are required to compile a project using gcc/MinGW.
The dynamic library has to be added to the linking options.
This means that if a project using LZ4 consists of a single `test-dll.c`
file, it should be linked with `dll\liblz4.dll`. For example:
```
gcc $(CFLAGS) -Iinclude/ test-dll.c -o test-dll dll\liblz4.dll
```
The compiled executable will require the LZ4 DLL, which is available at `dll\liblz4.dll`.


#### Miscellaneous

Other files present in the directory are not source code. They are:

 - LICENSE : contains the BSD license text
 - Makefile : script to compile or install the lz4 library (static or dynamic)
 - liblz4.pc.in : for pkg-config (used by `make install`)
 - README.md : this file

[official interoperable frame format]: ../doc/lz4_Frame_format.md


#### License

All source material within the __lib__ directory is BSD 2-Clause licensed.
See [LICENSE](LICENSE) for details.
The license is also repeated at the top of each source file.
@@ -0,0 +1,14 @@
# LZ4 - Fast LZ compression algorithm
# Copyright (C) 2011-2014, Yann Collet.
# BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)

prefix=@PREFIX@
libdir=@LIBDIR@
includedir=@INCLUDEDIR@

Name: lz4
Description: extremely fast lossless compression algorithm library
URL: http://www.lz4.org/
Version: @VERSION@
Libs: -L@LIBDIR@ -llz4
Cflags: -I@INCLUDEDIR@
@@ -0,0 +1,1478 @@
/*
   LZ4 - Fast LZ compression algorithm
   Copyright (C) 2011-2017, Yann Collet.

   BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)

   Redistribution and use in source and binary forms, with or without
   modification, are permitted provided that the following conditions are
   met:

       * Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.
       * Redistributions in binary form must reproduce the above
   copyright notice, this list of conditions and the following disclaimer
   in the documentation and/or other materials provided with the
   distribution.

   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

   You can contact the author at :
    - LZ4 homepage : http://www.lz4.org
    - LZ4 source repository : https://github.com/lz4/lz4
*/


/*-************************************
*  Tuning parameters
**************************************/
/*
 * LZ4_HEAPMODE :
 * Select how default compression functions will allocate memory for their hash table,
 * in memory stack (0:default, fastest), or in memory heap (1:requires malloc()).
 */
#ifndef LZ4_HEAPMODE
#  define LZ4_HEAPMODE 0
#endif

/*
 * ACCELERATION_DEFAULT :
 * Select "acceleration" for LZ4_compress_fast() when parameter value <= 0
 */
#define ACCELERATION_DEFAULT 1


/*-************************************
*  CPU Feature Detection
**************************************/
/* LZ4_FORCE_MEMORY_ACCESS
 * By default, access to unaligned memory is controlled by `memcpy()`, which is safe and portable.
 * Unfortunately, on some target/compiler combinations, the generated assembly is sub-optimal.
 * The switch below allows selecting a different access method, for improved performance.
 * Method 0 (default) : use `memcpy()`. Safe and portable.
 * Method 1 : `__packed` statement. It depends on compiler extensions (ie, not portable).
 *            This method is safe if your compiler supports it, and *generally* as fast or faster than `memcpy`.
 * Method 2 : direct access. This method is portable but violates the C standard.
 *            It can generate buggy code on targets whose assembly generation depends on alignment.
 *            But in some circumstances, it's the only known way to get the most performance (ie GCC + ARMv6).
 * See https://fastcompression.blogspot.fr/2015/08/accessing-unaligned-memory.html for details.
 * Prefer these methods in priority order (0 > 1 > 2)
 */
#ifndef LZ4_FORCE_MEMORY_ACCESS   /* can be defined externally */
#  if defined(__GNUC__) && ( defined(__ARM_ARCH_6__) || defined(__ARM_ARCH_6J__) || defined(__ARM_ARCH_6K__) || defined(__ARM_ARCH_6Z__) || defined(__ARM_ARCH_6ZK__) || defined(__ARM_ARCH_6T2__) )
#    define LZ4_FORCE_MEMORY_ACCESS 2
#  elif defined(__INTEL_COMPILER) || defined(__GNUC__)
#    define LZ4_FORCE_MEMORY_ACCESS 1
#  endif
#endif

/*
 * LZ4_FORCE_SW_BITCOUNT
 * Define this parameter if your target system or compiler does not support hardware bit count
 */
#if defined(_MSC_VER) && defined(_WIN32_WCE)   /* Visual Studio for Windows CE does not support Hardware bit count */
#  define LZ4_FORCE_SW_BITCOUNT
#endif


/*-************************************
*  Dependency
**************************************/
#include "lz4.h"
/* see also "memory routines" below */


/*-************************************
*  Compiler Options
**************************************/
#ifdef _MSC_VER    /* Visual Studio */
#  include <intrin.h>
#  pragma warning(disable : 4127)   /* disable: C4127: conditional expression is constant */
#  pragma warning(disable : 4293)   /* disable: C4293: too large shift (32-bits) */
#endif  /* _MSC_VER */

#ifndef FORCE_INLINE
#  ifdef _MSC_VER    /* Visual Studio */
#    define FORCE_INLINE static __forceinline
#  else
#    if defined (__cplusplus) || defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L   /* C99 */
#      ifdef __GNUC__
#        define FORCE_INLINE static inline __attribute__((always_inline))
#      else
#        define FORCE_INLINE static inline
#      endif
#    else
#      define FORCE_INLINE static
#    endif /* __STDC_VERSION__ */
#  endif  /* _MSC_VER */
#endif /* FORCE_INLINE */

#if (defined(__GNUC__) && (__GNUC__ >= 3)) || (defined(__INTEL_COMPILER) && (__INTEL_COMPILER >= 800)) || defined(__clang__)
#  define expect(expr,value)    (__builtin_expect ((expr),(value)) )
#else
#  define expect(expr,value)    (expr)
#endif

#define likely(expr)     expect((expr) != 0, 1)
#define unlikely(expr)   expect((expr) != 0, 0)


/*-************************************
*  Memory routines
**************************************/
#include <stdlib.h>   /* malloc, calloc, free */
#define ALLOCATOR(n,s) calloc(n,s)
#define FREEMEM        free
#include <string.h>   /* memset, memcpy */
#define MEM_INIT       memset


/*-************************************
*  Basic Types
**************************************/
#if defined(__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */)
#  include <stdint.h>
  typedef  uint8_t BYTE;
  typedef uint16_t U16;
  typedef uint32_t U32;
  typedef  int32_t S32;
  typedef uint64_t U64;
  typedef uintptr_t uptrval;
#else
  typedef unsigned char       BYTE;
  typedef unsigned short      U16;
  typedef unsigned int        U32;
  typedef   signed int        S32;
  typedef unsigned long long  U64;
  typedef size_t              uptrval;   /* generally true, except OpenVMS-64 */
#endif

#if defined(__x86_64__)
  typedef U64    reg_t;   /* 64-bits in x32 mode */
#else
  typedef size_t reg_t;   /* 32-bits in x32 mode */
#endif

/*-************************************
*  Reading and writing into memory
**************************************/
static unsigned LZ4_isLittleEndian(void)
{
    const union { U32 u; BYTE c[4]; } one = { 1 };   /* don't use static : performance detrimental */
    return one.c[0];
}


#if defined(LZ4_FORCE_MEMORY_ACCESS) && (LZ4_FORCE_MEMORY_ACCESS==2)
/* lie to the compiler about data alignment; use with caution */

static U16 LZ4_read16(const void* memPtr) { return *(const U16*) memPtr; }
static U32 LZ4_read32(const void* memPtr) { return *(const U32*) memPtr; }
static reg_t LZ4_read_ARCH(const void* memPtr) { return *(const reg_t*) memPtr; }

static void LZ4_write16(void* memPtr, U16 value) { *(U16*)memPtr = value; }
static void LZ4_write32(void* memPtr, U32 value) { *(U32*)memPtr = value; }

#elif defined(LZ4_FORCE_MEMORY_ACCESS) && (LZ4_FORCE_MEMORY_ACCESS==1)

/* __pack instructions are safer, but compiler specific, hence potentially problematic for some compilers */
/* currently only defined for gcc and icc */
typedef union { U16 u16; U32 u32; reg_t uArch; } __attribute__((packed)) unalign;

static U16 LZ4_read16(const void* ptr) { return ((const unalign*)ptr)->u16; }
static U32 LZ4_read32(const void* ptr) { return ((const unalign*)ptr)->u32; }
static reg_t LZ4_read_ARCH(const void* ptr) { return ((const unalign*)ptr)->uArch; }

static void LZ4_write16(void* memPtr, U16 value) { ((unalign*)memPtr)->u16 = value; }
static void LZ4_write32(void* memPtr, U32 value) { ((unalign*)memPtr)->u32 = value; }

#else  /* safe and portable access through memcpy() */

static U16 LZ4_read16(const void* memPtr)
{
    U16 val; memcpy(&val, memPtr, sizeof(val)); return val;
}

static U32 LZ4_read32(const void* memPtr)
{
    U32 val; memcpy(&val, memPtr, sizeof(val)); return val;
}

static reg_t LZ4_read_ARCH(const void* memPtr)
{
    reg_t val; memcpy(&val, memPtr, sizeof(val)); return val;
}

static void LZ4_write16(void* memPtr, U16 value)
{
    memcpy(memPtr, &value, sizeof(value));
}

static void LZ4_write32(void* memPtr, U32 value)
{
    memcpy(memPtr, &value, sizeof(value));
}

#endif /* LZ4_FORCE_MEMORY_ACCESS */


static U16 LZ4_readLE16(const void* memPtr)
{
    if (LZ4_isLittleEndian()) {
        return LZ4_read16(memPtr);
    } else {
        const BYTE* p = (const BYTE*)memPtr;
        return (U16)((U16)p[0] + (p[1]<<8));
    }
}

static void LZ4_writeLE16(void* memPtr, U16 value)
{
    if (LZ4_isLittleEndian()) {
        LZ4_write16(memPtr, value);
    } else {
        BYTE* p = (BYTE*)memPtr;
        p[0] = (BYTE) value;
        p[1] = (BYTE)(value>>8);
    }
}

static void LZ4_copy8(void* dst, const void* src)
{
    memcpy(dst,src,8);
}

/* customized variant of memcpy, which can overwrite up to 8 bytes beyond dstEnd */
static void LZ4_wildCopy(void* dstPtr, const void* srcPtr, void* dstEnd)
{
    BYTE* d = (BYTE*)dstPtr;
    const BYTE* s = (const BYTE*)srcPtr;
    BYTE* const e = (BYTE*)dstEnd;

    do { LZ4_copy8(d,s); d+=8; s+=8; } while (d<e);
}


/*-************************************
*  Common Constants
**************************************/
#define MINMATCH 4

#define WILDCOPYLENGTH 8
#define LASTLITERALS 5
#define MFLIMIT (WILDCOPYLENGTH+MINMATCH)
static const int LZ4_minLength = (MFLIMIT+1);

#define KB *(1 <<10)
#define MB *(1 <<20)
#define GB *(1U<<30)

#define MAXD_LOG 16
#define MAX_DISTANCE ((1 << MAXD_LOG) - 1)

#define ML_BITS  4
#define ML_MASK  ((1U<<ML_BITS)-1)
#define RUN_BITS (8-ML_BITS)
#define RUN_MASK ((1U<<RUN_BITS)-1)


/*-************************************
*  Error detection
**************************************/
#define LZ4_STATIC_ASSERT(c)   { enum { LZ4_static_assert = 1/(int)(!!(c)) }; }   /* use only *after* variable declarations */

#if defined(LZ4_DEBUG) && (LZ4_DEBUG>=2)
#  include <stdio.h>
#  define DEBUGLOG(l, ...) {                          \
                if (l<=LZ4_DEBUG) {                   \
                    fprintf(stderr, __FILE__ ": ");   \
                    fprintf(stderr, __VA_ARGS__);     \
                    fprintf(stderr, " \n");           \
            }   }
#else
#  define DEBUGLOG(l, ...)   {}   /* disabled */
#endif


/*-************************************
*  Common functions
**************************************/
static unsigned LZ4_NbCommonBytes (register reg_t val)
{
    if (LZ4_isLittleEndian()) {
        if (sizeof(val)==8) {
#       if defined(_MSC_VER) && defined(_WIN64) && !defined(LZ4_FORCE_SW_BITCOUNT)
            unsigned long r = 0;
            _BitScanForward64( &r, (U64)val );
            return (int)(r>>3);
#       elif (defined(__clang__) || (defined(__GNUC__) && (__GNUC__>=3))) && !defined(LZ4_FORCE_SW_BITCOUNT)
            return (__builtin_ctzll((U64)val) >> 3);
#       else
            static const int DeBruijnBytePos[64] = { 0, 0, 0, 0, 0, 1, 1, 2, 0, 3, 1, 3, 1, 4, 2, 7, 0, 2, 3, 6, 1, 5, 3, 5, 1, 3, 4, 4, 2, 5, 6, 7, 7, 0, 1, 2, 3, 3, 4, 6, 2, 6, 5, 5, 3, 4, 5, 6, 7, 1, 2, 4, 6, 4, 4, 5, 7, 2, 6, 5, 7, 6, 7, 7 };
            return DeBruijnBytePos[((U64)((val & -(long long)val) * 0x0218A392CDABBD3FULL)) >> 58];
#       endif
        } else /* 32 bits */ {
#       if defined(_MSC_VER) && !defined(LZ4_FORCE_SW_BITCOUNT)
            unsigned long r;
            _BitScanForward( &r, (U32)val );
            return (int)(r>>3);
#       elif (defined(__clang__) || (defined(__GNUC__) && (__GNUC__>=3))) && !defined(LZ4_FORCE_SW_BITCOUNT)
            return (__builtin_ctz((U32)val) >> 3);
#       else
            static const int DeBruijnBytePos[32] = { 0, 0, 3, 0, 3, 1, 3, 0, 3, 2, 2, 1, 3, 2, 0, 1, 3, 3, 1, 2, 2, 2, 2, 0, 3, 1, 2, 0, 1, 0, 1, 1 };
            return DeBruijnBytePos[((U32)((val & -(S32)val) * 0x077CB531U)) >> 27];
#       endif
        }
    } else   /* Big Endian CPU */ {
        if (sizeof(val)==8) {
#       if defined(_MSC_VER) && defined(_WIN64) && !defined(LZ4_FORCE_SW_BITCOUNT)
            unsigned long r = 0;
            _BitScanReverse64( &r, val );
            return (unsigned)(r>>3);
#       elif (defined(__clang__) || (defined(__GNUC__) && (__GNUC__>=3))) && !defined(LZ4_FORCE_SW_BITCOUNT)
            return (__builtin_clzll((U64)val) >> 3);
#       else
            unsigned r;
            if (!(val>>32)) { r=4; } else { r=0; val>>=32; }
            if (!(val>>16)) { r+=2; val>>=8; } else { val>>=24; }
            r += (!val);
            return r;
#       endif
        } else /* 32 bits */ {
#       if defined(_MSC_VER) && !defined(LZ4_FORCE_SW_BITCOUNT)
            unsigned long r = 0;
            _BitScanReverse( &r, (unsigned long)val );
            return (unsigned)(r>>3);
#       elif (defined(__clang__) || (defined(__GNUC__) && (__GNUC__>=3))) && !defined(LZ4_FORCE_SW_BITCOUNT)
            return (__builtin_clz((U32)val) >> 3);
#       else
            unsigned r;
            if (!(val>>16)) { r=2; val>>=8; } else { r=0; val>>=24; }
            r += (!val);
            return r;
#       endif
        }
    }
}

#define STEPSIZE sizeof(reg_t)
static unsigned LZ4_count(const BYTE* pIn, const BYTE* pMatch, const BYTE* pInLimit)
{
    const BYTE* const pStart = pIn;

    while (likely(pIn<pInLimit-(STEPSIZE-1))) {
        reg_t const diff = LZ4_read_ARCH(pMatch) ^ LZ4_read_ARCH(pIn);
        if (!diff) { pIn+=STEPSIZE; pMatch+=STEPSIZE; continue; }
        pIn += LZ4_NbCommonBytes(diff);
        return (unsigned)(pIn - pStart);
    }

    if ((STEPSIZE==8) && (pIn<(pInLimit-3)) && (LZ4_read32(pMatch) == LZ4_read32(pIn))) { pIn+=4; pMatch+=4; }
    if ((pIn<(pInLimit-1)) && (LZ4_read16(pMatch) == LZ4_read16(pIn))) { pIn+=2; pMatch+=2; }
    if ((pIn<pInLimit) && (*pMatch == *pIn)) pIn++;
    return (unsigned)(pIn - pStart);
}


#ifndef LZ4_COMMONDEFS_ONLY
/*-************************************
*  Local Constants
**************************************/
static const int LZ4_64Klimit = ((64 KB) + (MFLIMIT-1));
static const U32 LZ4_skipTrigger = 6;  /* Increase this value ==> compression run slower on incompressible data */


/*-************************************
*  Local Structures and types
**************************************/
typedef enum { notLimited = 0, limitedOutput = 1 } limitedOutput_directive;
typedef enum { byPtr, byU32, byU16 } tableType_t;

typedef enum { noDict = 0, withPrefix64k, usingExtDict } dict_directive;
typedef enum { noDictIssue = 0, dictSmall } dictIssue_directive;

typedef enum { endOnOutputSize = 0, endOnInputSize = 1 } endCondition_directive;
typedef enum { full = 0, partial = 1 } earlyEnd_directive;


/*-************************************
*  Local Utils
**************************************/
int LZ4_versionNumber (void) { return LZ4_VERSION_NUMBER; }
const char* LZ4_versionString(void) { return LZ4_VERSION_STRING; }
int LZ4_compressBound(int isize)  { return LZ4_COMPRESSBOUND(isize); }
int LZ4_sizeofState() { return LZ4_STREAMSIZE; }


/*-******************************
*  Compression functions
********************************/
static U32 LZ4_hash4(U32 sequence, tableType_t const tableType)
{
    if (tableType == byU16)
        return ((sequence * 2654435761U) >> ((MINMATCH*8)-(LZ4_HASHLOG+1)));
    else
        return ((sequence * 2654435761U) >> ((MINMATCH*8)-LZ4_HASHLOG));
}

static U32 LZ4_hash5(U64 sequence, tableType_t const tableType)
{
    static const U64 prime5bytes = 889523592379ULL;
    static const U64 prime8bytes = 11400714785074694791ULL;
    const U32 hashLog = (tableType == byU16) ? LZ4_HASHLOG+1 : LZ4_HASHLOG;
    if (LZ4_isLittleEndian())
        return (U32)(((sequence << 24) * prime5bytes) >> (64 - hashLog));
    else
        return (U32)(((sequence >> 24) * prime8bytes) >> (64 - hashLog));
}

FORCE_INLINE U32 LZ4_hashPosition(const void* const p, tableType_t const tableType)
{
    if ((sizeof(reg_t)==8) && (tableType != byU16)) return LZ4_hash5(LZ4_read_ARCH(p), tableType);
    return LZ4_hash4(LZ4_read32(p), tableType);
}

static void LZ4_putPositionOnHash(const BYTE* p, U32 h, void* tableBase, tableType_t const tableType, const BYTE* srcBase)
{
    switch (tableType)
    {
    case byPtr: { const BYTE** hashTable = (const BYTE**)tableBase; hashTable[h] = p; return; }
    case byU32: { U32* hashTable = (U32*) tableBase; hashTable[h] = (U32)(p-srcBase); return; }
    case byU16: { U16* hashTable = (U16*) tableBase; hashTable[h] = (U16)(p-srcBase); return; }
    }
}

FORCE_INLINE void LZ4_putPosition(const BYTE* p, void* tableBase, tableType_t tableType, const BYTE* srcBase)
{
    U32 const h = LZ4_hashPosition(p, tableType);
    LZ4_putPositionOnHash(p, h, tableBase, tableType, srcBase);
}

static const BYTE* LZ4_getPositionOnHash(U32 h, void* tableBase, tableType_t tableType, const BYTE* srcBase)
{
    if (tableType == byPtr) { const BYTE** hashTable = (const BYTE**) tableBase; return hashTable[h]; }
    if (tableType == byU32) { const U32* const hashTable = (U32*) tableBase; return hashTable[h] + srcBase; }
    { const U16* const hashTable = (U16*) tableBase; return hashTable[h] + srcBase; }   /* default, to ensure a return */
}

FORCE_INLINE const BYTE* LZ4_getPosition(const BYTE* p, void* tableBase, tableType_t tableType, const BYTE* srcBase)
{
    U32 const h = LZ4_hashPosition(p, tableType);
    return LZ4_getPositionOnHash(h, tableBase, tableType, srcBase);
}


/** LZ4_compress_generic() :
    inlined, to ensure branches are decided at compilation time */
FORCE_INLINE int LZ4_compress_generic(
                 LZ4_stream_t_internal* const cctx,
                 const char* const source,
                 char* const dest,
                 const int inputSize,
                 const int maxOutputSize,
                 const limitedOutput_directive outputLimited,
                 const tableType_t tableType,
                 const dict_directive dict,
                 const dictIssue_directive dictIssue,
                 const U32 acceleration)
{
    const BYTE* ip = (const BYTE*) source;
    const BYTE* base;
    const BYTE* lowLimit;
    const BYTE* const lowRefLimit = ip - cctx->dictSize;
    const BYTE* const dictionary = cctx->dictionary;
    const BYTE* const dictEnd = dictionary + cctx->dictSize;
    const ptrdiff_t dictDelta = dictEnd - (const BYTE*)source;
    const BYTE* anchor = (const BYTE*) source;
    const BYTE* const iend = ip + inputSize;
    const BYTE* const mflimit = iend - MFLIMIT;
    const BYTE* const matchlimit = iend - LASTLITERALS;

    BYTE* op = (BYTE*) dest;
    BYTE* const olimit = op + maxOutputSize;

    U32 forwardH;

    /* Init conditions */
    if ((U32)inputSize > (U32)LZ4_MAX_INPUT_SIZE) return 0;   /* Unsupported inputSize, too large (or negative) */
    switch(dict)
    {
    case noDict:
    default:
        base = (const BYTE*)source;
        lowLimit = (const BYTE*)source;
        break;
    case withPrefix64k:
        base = (const BYTE*)source - cctx->currentOffset;
        lowLimit = (const BYTE*)source - cctx->dictSize;
        break;
    case usingExtDict:
        base = (const BYTE*)source - cctx->currentOffset;
        lowLimit = (const BYTE*)source;
        break;
    }
    if ((tableType == byU16) && (inputSize>=LZ4_64Klimit)) return 0;   /* Size too large (not within 64K limit) */
    if (inputSize<LZ4_minLength) goto _last_literals;                  /* Input too small, no compression (all literals) */

    /* First Byte */
    LZ4_putPosition(ip, cctx->hashTable, tableType, base);
    ip++; forwardH = LZ4_hashPosition(ip, tableType);

    /* Main Loop */
    for ( ; ; ) {
        ptrdiff_t refDelta = 0;
        const BYTE* match;
        BYTE* token;

        /* Find a match */
        {   const BYTE* forwardIp = ip;
            unsigned step = 1;
            unsigned searchMatchNb = acceleration << LZ4_skipTrigger;
            do {
                U32 const h = forwardH;
                ip = forwardIp;
                forwardIp += step;
                step = (searchMatchNb++ >> LZ4_skipTrigger);

                if (unlikely(forwardIp > mflimit)) goto _last_literals;

                match = LZ4_getPositionOnHash(h, cctx->hashTable, tableType, base);
                if (dict==usingExtDict) {
                    if (match < (const BYTE*)source) {
                        refDelta = dictDelta;
                        lowLimit = dictionary;
                    } else {
                        refDelta = 0;
                        lowLimit = (const BYTE*)source;
                }   }
                forwardH = LZ4_hashPosition(forwardIp, tableType);
                LZ4_putPositionOnHash(ip, h, cctx->hashTable, tableType, base);

            } while ( ((dictIssue==dictSmall) ? (match < lowRefLimit) : 0)
                || ((tableType==byU16) ? 0 : (match + MAX_DISTANCE < ip))
                || (LZ4_read32(match+refDelta) != LZ4_read32(ip)) );
        }

        /* Catch up */
        while (((ip>anchor) & (match+refDelta > lowLimit)) && (unlikely(ip[-1]==match[refDelta-1]))) { ip--; match--; }

        /* Encode Literals */
        {   unsigned const litLength = (unsigned)(ip - anchor);
            token = op++;
            if ((outputLimited) &&  /* Check output buffer overflow */
                (unlikely(op + litLength + (2 + 1 + LASTLITERALS) + (litLength/255) > olimit)))
                return 0;
            if (litLength >= RUN_MASK) {
                int len = (int)litLength-RUN_MASK;
                *token = (RUN_MASK<<ML_BITS);
                for(; len >= 255 ; len-=255) *op++ = 255;
                *op++ = (BYTE)len;
            }
            else *token = (BYTE)(litLength<<ML_BITS);

            /* Copy Literals */
            LZ4_wildCopy(op, anchor, op+litLength);
            op+=litLength;
        }

_next_match:
        /* Encode Offset */
        LZ4_writeLE16(op, (U16)(ip-match)); op+=2;

        /* Encode MatchLength */
        {   unsigned matchCode;

            if ((dict==usingExtDict) && (lowLimit==dictionary)) {
                const BYTE* limit;
                match += refDelta;
                limit = ip + (dictEnd-match);
                if (limit > matchlimit) limit = matchlimit;
                matchCode = LZ4_count(ip+MINMATCH, match+MINMATCH, limit);
                ip += MINMATCH + matchCode;
                if (ip==limit) {
                    unsigned const more = LZ4_count(ip, (const BYTE*)source, matchlimit);
                    matchCode += more;
                    ip += more;
                }
            } else {
                matchCode = LZ4_count(ip+MINMATCH, match+MINMATCH, matchlimit);
                ip += MINMATCH + matchCode;
            }

            if ( outputLimited &&    /* Check output buffer overflow */
                (unlikely(op + (1 + LASTLITERALS) + (matchCode>>8) > olimit)) )
                return 0;
            if (matchCode >= ML_MASK) {
                *token += ML_MASK;
                matchCode -= ML_MASK;
                LZ4_write32(op, 0xFFFFFFFF);
                while (matchCode >= 4*255) op+=4, LZ4_write32(op, 0xFFFFFFFF), matchCode -= 4*255;
                op += matchCode / 255;
                *op++ = (BYTE)(matchCode % 255);
            } else
                *token += (BYTE)(matchCode);
        }

        anchor = ip;

        /* Test end of chunk */
        if (ip > mflimit) break;

        /* Fill table */
        LZ4_putPosition(ip-2, cctx->hashTable, tableType, base);

        /* Test next position */
        match = LZ4_getPosition(ip, cctx->hashTable, tableType, base);
        if (dict==usingExtDict) {
            if (match < (const BYTE*)source) {
                refDelta = dictDelta;
                lowLimit = dictionary;
            } else {
                refDelta = 0;
                lowLimit = (const BYTE*)source;
        }   }
        LZ4_putPosition(ip, cctx->hashTable, tableType, base);
        if ( ((dictIssue==dictSmall) ? (match>=lowRefLimit) : 1)
            && (match+MAX_DISTANCE>=ip)
            && (LZ4_read32(match+refDelta)==LZ4_read32(ip)) )
        { token=op++; *token=0; goto _next_match; }

        /* Prepare next loop */
        forwardH = LZ4_hashPosition(++ip, tableType);
    }

_last_literals:
    /* Encode Last Literals */
    {   size_t const lastRun = (size_t)(iend - anchor);
        if ( (outputLimited) &&  /* Check output buffer overflow */
            ((op - (BYTE*)dest) + lastRun + 1 + ((lastRun+255-RUN_MASK)/255) > (U32)maxOutputSize) )
            return 0;
        if (lastRun >= RUN_MASK) {
            size_t accumulator = lastRun - RUN_MASK;
            *op++ = RUN_MASK << ML_BITS;
            for(; accumulator >= 255 ; accumulator-=255) *op++ = 255;
            *op++ = (BYTE) accumulator;
        } else {
            *op++ = (BYTE)(lastRun<<ML_BITS);
        }
        memcpy(op, anchor, lastRun);
        op += lastRun;
    }

    /* End */
    return (int) (((char*)op)-dest);
}


int LZ4_compress_fast_extState(void* state, const char* source, char* dest, int inputSize, int maxOutputSize, int acceleration)
{
    LZ4_stream_t_internal* ctx = &((LZ4_stream_t*)state)->internal_donotuse;
    LZ4_resetStream((LZ4_stream_t*)state);
    if (acceleration < 1) acceleration = ACCELERATION_DEFAULT;

    if (maxOutputSize >= LZ4_compressBound(inputSize)) {
        if (inputSize < LZ4_64Klimit)
            return LZ4_compress_generic(ctx, source, dest, inputSize, 0, notLimited, byU16, noDict, noDictIssue, acceleration);
        else
            return LZ4_compress_generic(ctx, source, dest, inputSize, 0, notLimited, (sizeof(void*)==8) ? byU32 : byPtr, noDict, noDictIssue, acceleration);
    } else {
        if (inputSize < LZ4_64Klimit)
            return LZ4_compress_generic(ctx, source, dest, inputSize, maxOutputSize, limitedOutput, byU16, noDict, noDictIssue, acceleration);
        else
            return LZ4_compress_generic(ctx, source, dest, inputSize, maxOutputSize, limitedOutput, (sizeof(void*)==8) ? byU32 : byPtr, noDict, noDictIssue, acceleration);
    }
}


int LZ4_compress_fast(const char* source, char* dest, int inputSize, int maxOutputSize, int acceleration)
{
#if (LZ4_HEAPMODE)
    void* ctxPtr = ALLOCATOR(1, sizeof(LZ4_stream_t));   /* malloc-calloc always properly aligned */
#else
    LZ4_stream_t ctx;
    void* const ctxPtr = &ctx;
#endif

    int const result = LZ4_compress_fast_extState(ctxPtr, source, dest, inputSize, maxOutputSize, acceleration);

#if (LZ4_HEAPMODE)
    FREEMEM(ctxPtr);
#endif
    return result;
}


int LZ4_compress_default(const char* source, char* dest, int inputSize, int maxOutputSize)
{
    return LZ4_compress_fast(source, dest, inputSize, maxOutputSize, 1);
}


/* hidden debug function */
/* strangely enough, gcc generates faster code when this function is uncommented, even if unused */
int LZ4_compress_fast_force(const char* source, char* dest, int inputSize, int maxOutputSize, int acceleration)
{
    LZ4_stream_t ctx;
    LZ4_resetStream(&ctx);

    if (inputSize < LZ4_64Klimit)
        return LZ4_compress_generic(&ctx.internal_donotuse, source, dest, inputSize, maxOutputSize, limitedOutput, byU16, noDict, noDictIssue, acceleration);
    else
        return LZ4_compress_generic(&ctx.internal_donotuse, source, dest, inputSize, maxOutputSize, limitedOutput, sizeof(void*)==8 ? byU32 : byPtr, noDict, noDictIssue, acceleration);
}


/*-******************************
*  *_destSize() variant
********************************/

static int LZ4_compress_destSize_generic(
                       LZ4_stream_t_internal* const ctx,
                 const char* const src,
                       char* const dst,
                       int*  const srcSizePtr,
                 const int targetDstSize,
                 const tableType_t tableType)
{
    const BYTE* ip = (const BYTE*) src;
    const BYTE* base = (const BYTE*) src;
    const BYTE* lowLimit = (const BYTE*) src;
    const BYTE* anchor = ip;
    const BYTE* const iend = ip + *srcSizePtr;
    const BYTE* const mflimit = iend - MFLIMIT;
    const BYTE* const matchlimit = iend - LASTLITERALS;

    BYTE* op = (BYTE*) dst;
    BYTE* const oend = op + targetDstSize;
    BYTE* const oMaxLit = op + targetDstSize - 2 /* offset */ - 8 /* because 8+MINMATCH==MFLIMIT */ - 1 /* token */;
    BYTE* const oMaxMatch = op + targetDstSize - (LASTLITERALS + 1 /* token */);
    BYTE* const oMaxSeq = oMaxLit - 1 /* token */;

    U32 forwardH;


    /* Init conditions */
    if (targetDstSize < 1) return 0;                                     /* Impossible to store anything */
    if ((U32)*srcSizePtr > (U32)LZ4_MAX_INPUT_SIZE) return 0;            /* Unsupported input size, too large (or negative) */
    if ((tableType == byU16) && (*srcSizePtr>=LZ4_64Klimit)) return 0;   /* Size too large (not within 64K limit) */
    if (*srcSizePtr<LZ4_minLength) goto _last_literals;                  /* Input too small, no compression (all literals) */

    /* First Byte */
    *srcSizePtr = 0;
    LZ4_putPosition(ip, ctx->hashTable, tableType, base);
    ip++; forwardH = LZ4_hashPosition(ip, tableType);

    /* Main Loop */
    for ( ; ; ) {
        const BYTE* match;
        BYTE* token;

        /* Find a match */
        {   const BYTE* forwardIp = ip;
            unsigned step = 1;
            unsigned searchMatchNb = 1 << LZ4_skipTrigger;

            do {
                U32 h = forwardH;
                ip = forwardIp;
                forwardIp += step;
                step = (searchMatchNb++ >> LZ4_skipTrigger);

                if (unlikely(forwardIp > mflimit)) goto _last_literals;

                match = LZ4_getPositionOnHash(h, ctx->hashTable, tableType, base);
                forwardH = LZ4_hashPosition(forwardIp, tableType);
                LZ4_putPositionOnHash(ip, h, ctx->hashTable, tableType, base);

            } while ( ((tableType==byU16) ? 0 : (match + MAX_DISTANCE < ip))
                || (LZ4_read32(match) != LZ4_read32(ip)) );
        }

        /* Catch up */
        while ((ip>anchor) && (match > lowLimit) && (unlikely(ip[-1]==match[-1]))) { ip--; match--; }

        /* Encode Literal length */
        {   unsigned litLength = (unsigned)(ip - anchor);
            token = op++;
            if (op + ((litLength+240)/255) + litLength > oMaxLit) {
                /* Not enough space for a last match */
                op--;
                goto _last_literals;
            }
            if (litLength>=RUN_MASK) {
                unsigned len = litLength - RUN_MASK;
                *token=(RUN_MASK<<ML_BITS);
                for(; len >= 255 ; len-=255) *op++ = 255;
                *op++ = (BYTE)len;
            }
            else *token = (BYTE)(litLength<<ML_BITS);

            /* Copy Literals */
            LZ4_wildCopy(op, anchor, op+litLength);
            op += litLength;
        }

_next_match:
        /* Encode Offset */
        LZ4_writeLE16(op, (U16)(ip-match)); op+=2;

        /* Encode MatchLength */
        {   size_t matchLength = LZ4_count(ip+MINMATCH, match+MINMATCH, matchlimit);

            if (op + ((matchLength+240)/255) > oMaxMatch) {
                /* Match description too long : reduce it */
                matchLength = (15-1) + (oMaxMatch-op) * 255;
            }
            ip += MINMATCH + matchLength;

            if (matchLength>=ML_MASK) {
                *token += ML_MASK;
                matchLength -= ML_MASK;
                while (matchLength >= 255) { matchLength-=255; *op++ = 255; }
                *op++ = (BYTE)matchLength;
            }
            else *token += (BYTE)(matchLength);
        }

        anchor = ip;

        /* Test end of block */
        if (ip > mflimit) break;
        if (op > oMaxSeq) break;

        /* Fill table */
        LZ4_putPosition(ip-2, ctx->hashTable, tableType, base);

        /* Test next position */
        match = LZ4_getPosition(ip, ctx->hashTable, tableType, base);
        LZ4_putPosition(ip, ctx->hashTable, tableType, base);
        if ( (match+MAX_DISTANCE>=ip)
            && (LZ4_read32(match)==LZ4_read32(ip)) )
        { token=op++; *token=0; goto _next_match; }

        /* Prepare next loop */
        forwardH = LZ4_hashPosition(++ip, tableType);
    }

_last_literals:
    /* Encode Last Literals */
    {   size_t lastRunSize = (size_t)(iend - anchor);
        if (op + 1 /* token */ + ((lastRunSize+240)/255) /* litLength */ + lastRunSize /* literals */ > oend) {
            /* adapt lastRunSize to fill 'dst' */
            lastRunSize = (oend-op) - 1;
            lastRunSize -= (lastRunSize+240)/255;
        }
        ip = anchor + lastRunSize;

        if (lastRunSize >= RUN_MASK) {
            size_t accumulator = lastRunSize - RUN_MASK;
            *op++ = RUN_MASK << ML_BITS;
            for(; accumulator >= 255 ; accumulator-=255) *op++ = 255;
            *op++ = (BYTE) accumulator;
        } else {
            *op++ = (BYTE)(lastRunSize<<ML_BITS);
        }
        memcpy(op, anchor, lastRunSize);
        op += lastRunSize;
    }

    /* End */
    *srcSizePtr = (int) (((const char*)ip)-src);
    return (int) (((char*)op)-dst);
}


static int LZ4_compress_destSize_extState (LZ4_stream_t* state, const char* src, char* dst, int* srcSizePtr, int targetDstSize)
{
    LZ4_resetStream(state);

    if (targetDstSize >= LZ4_compressBound(*srcSizePtr)) {   /* compression success is guaranteed */
        return LZ4_compress_fast_extState(state, src, dst, *srcSizePtr, targetDstSize, 1);
    } else {
        if (*srcSizePtr < LZ4_64Klimit)
            return LZ4_compress_destSize_generic(&state->internal_donotuse, src, dst, srcSizePtr, targetDstSize, byU16);
        else
            return LZ4_compress_destSize_generic(&state->internal_donotuse, src, dst, srcSizePtr, targetDstSize, sizeof(void*)==8 ? byU32 : byPtr);
    }
}


int LZ4_compress_destSize(const char* src, char* dst, int* srcSizePtr, int targetDstSize)
{
#if (LZ4_HEAPMODE)
    LZ4_stream_t* ctx = (LZ4_stream_t*)ALLOCATOR(1, sizeof(LZ4_stream_t));   /* malloc-calloc always properly aligned */
#else
    LZ4_stream_t ctxBody;
    LZ4_stream_t* ctx = &ctxBody;
#endif

    int result = LZ4_compress_destSize_extState(ctx, src, dst, srcSizePtr, targetDstSize);

#if (LZ4_HEAPMODE)
    FREEMEM(ctx);
#endif
    return result;
}
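
/* Illustrative usage sketch (not part of the library) :
 * LZ4_compress_destSize() fills a fixed-size destination buffer as much as
 * possible, and reports back how many source bytes were actually consumed :
 *
 *     char dst[512];
 *     int srcSize = (int)srcLen;          // in : bytes available ; out : bytes consumed
 *     int const cSize = LZ4_compress_destSize(src, dst, &srcSize, (int)sizeof dst);
 *     // cSize bytes were written into dst, representing the first srcSize bytes of src
 */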



/*-******************************
*  Streaming functions
********************************/

LZ4_stream_t* LZ4_createStream(void)
{
    LZ4_stream_t* lz4s = (LZ4_stream_t*)ALLOCATOR(8, LZ4_STREAMSIZE_U64);
    LZ4_STATIC_ASSERT(LZ4_STREAMSIZE >= sizeof(LZ4_stream_t_internal));   /* A compilation error here means LZ4_STREAMSIZE is not large enough */
    LZ4_resetStream(lz4s);
    return lz4s;
}

void LZ4_resetStream (LZ4_stream_t* LZ4_stream)
{
    MEM_INIT(LZ4_stream, 0, sizeof(LZ4_stream_t));
}

int LZ4_freeStream (LZ4_stream_t* LZ4_stream)
{
    if (!LZ4_stream) return 0;   /* support free on NULL */
    FREEMEM(LZ4_stream);
    return (0);
}


#define HASH_UNIT sizeof(reg_t)
int LZ4_loadDict (LZ4_stream_t* LZ4_dict, const char* dictionary, int dictSize)
{
    LZ4_stream_t_internal* dict = &LZ4_dict->internal_donotuse;
    const BYTE* p = (const BYTE*)dictionary;
    const BYTE* const dictEnd = p + dictSize;
    const BYTE* base;

    if ((dict->initCheck) || (dict->currentOffset > 1 GB))   /* Uninitialized structure, or reuse overflow */
        LZ4_resetStream(LZ4_dict);

    if (dictSize < (int)HASH_UNIT) {
        dict->dictionary = NULL;
        dict->dictSize = 0;
        return 0;
    }

    if ((dictEnd - p) > 64 KB) p = dictEnd - 64 KB;
    dict->currentOffset += 64 KB;
    base = p - dict->currentOffset;
    dict->dictionary = p;
    dict->dictSize = (U32)(dictEnd - p);
    dict->currentOffset += dict->dictSize;

    while (p <= dictEnd-HASH_UNIT) {
        LZ4_putPosition(p, dict->hashTable, byU32, base);
        p+=3;
    }

    return dict->dictSize;
}


static void LZ4_renormDictT(LZ4_stream_t_internal* LZ4_dict, const BYTE* src)
{
    if ((LZ4_dict->currentOffset > 0x80000000) ||
        ((uptrval)LZ4_dict->currentOffset > (uptrval)src)) {   /* address space overflow */
        /* rescale hash table */
        U32 const delta = LZ4_dict->currentOffset - 64 KB;
        const BYTE* dictEnd = LZ4_dict->dictionary + LZ4_dict->dictSize;
        int i;
        for (i=0; i<LZ4_HASH_SIZE_U32; i++) {
            if (LZ4_dict->hashTable[i] < delta) LZ4_dict->hashTable[i]=0;
            else LZ4_dict->hashTable[i] -= delta;
        }
        LZ4_dict->currentOffset = 64 KB;
        if (LZ4_dict->dictSize > 64 KB) LZ4_dict->dictSize = 64 KB;
        LZ4_dict->dictionary = dictEnd - LZ4_dict->dictSize;
    }
}


int LZ4_compress_fast_continue (LZ4_stream_t* LZ4_stream, const char* source, char* dest, int inputSize, int maxOutputSize, int acceleration)
{
    LZ4_stream_t_internal* streamPtr = &LZ4_stream->internal_donotuse;
    const BYTE* const dictEnd = streamPtr->dictionary + streamPtr->dictSize;

    const BYTE* smallest = (const BYTE*) source;
    if (streamPtr->initCheck) return 0;   /* Uninitialized structure detected */
    if ((streamPtr->dictSize>0) && (smallest>dictEnd)) smallest = dictEnd;
    LZ4_renormDictT(streamPtr, smallest);
    if (acceleration < 1) acceleration = ACCELERATION_DEFAULT;

    /* Check overlapping input/dictionary space */
    {   const BYTE* sourceEnd = (const BYTE*) source + inputSize;
        if ((sourceEnd > streamPtr->dictionary) && (sourceEnd < dictEnd)) {
            streamPtr->dictSize = (U32)(dictEnd - sourceEnd);
            if (streamPtr->dictSize > 64 KB) streamPtr->dictSize = 64 KB;
            if (streamPtr->dictSize < 4) streamPtr->dictSize = 0;
            streamPtr->dictionary = dictEnd - streamPtr->dictSize;
        }
    }

    /* prefix mode : source data follows dictionary */
    if (dictEnd == (const BYTE*)source) {
        int result;
        if ((streamPtr->dictSize < 64 KB) && (streamPtr->dictSize < streamPtr->currentOffset))
            result = LZ4_compress_generic(streamPtr, source, dest, inputSize, maxOutputSize, limitedOutput, byU32, withPrefix64k, dictSmall, acceleration);
        else
            result = LZ4_compress_generic(streamPtr, source, dest, inputSize, maxOutputSize, limitedOutput, byU32, withPrefix64k, noDictIssue, acceleration);
        streamPtr->dictSize += (U32)inputSize;
        streamPtr->currentOffset += (U32)inputSize;
        return result;
    }

    /* external dictionary mode */
    {   int result;
        if ((streamPtr->dictSize < 64 KB) && (streamPtr->dictSize < streamPtr->currentOffset))
            result = LZ4_compress_generic(streamPtr, source, dest, inputSize, maxOutputSize, limitedOutput, byU32, usingExtDict, dictSmall, acceleration);
        else
            result = LZ4_compress_generic(streamPtr, source, dest, inputSize, maxOutputSize, limitedOutput, byU32, usingExtDict, noDictIssue, acceleration);
        streamPtr->dictionary = (const BYTE*)source;
        streamPtr->dictSize = (U32)inputSize;
        streamPtr->currentOffset += (U32)inputSize;
        return result;
    }
}


/* Hidden debug function, to force external dictionary mode */
int LZ4_compress_forceExtDict (LZ4_stream_t* LZ4_dict, const char* source, char* dest, int inputSize)
{
    LZ4_stream_t_internal* streamPtr = &LZ4_dict->internal_donotuse;
    int result;
    const BYTE* const dictEnd = streamPtr->dictionary + streamPtr->dictSize;

    const BYTE* smallest = dictEnd;
    if (smallest > (const BYTE*) source) smallest = (const BYTE*) source;
    LZ4_renormDictT(streamPtr, smallest);

    result = LZ4_compress_generic(streamPtr, source, dest, inputSize, 0, notLimited, byU32, usingExtDict, noDictIssue, 1);

    streamPtr->dictionary = (const BYTE*)source;
    streamPtr->dictSize = (U32)inputSize;
    streamPtr->currentOffset += (U32)inputSize;

    return result;
}


/*! LZ4_saveDict() :
 *  If previously compressed data block is not guaranteed to remain available at its memory location,
 *  save it into a safer place (char* safeBuffer).
 *  Note : you don't need to call LZ4_loadDict() afterwards,
 *         dictionary is immediately usable, you can therefore call LZ4_compress_fast_continue().
 *  Return : saved dictionary size in bytes (necessarily <= dictSize), or 0 if error.
 */
int LZ4_saveDict (LZ4_stream_t* LZ4_dict, char* safeBuffer, int dictSize)
{
    LZ4_stream_t_internal* const dict = &LZ4_dict->internal_donotuse;
    const BYTE* const previousDictEnd = dict->dictionary + dict->dictSize;

    if ((U32)dictSize > 64 KB) dictSize = 64 KB;   /* useless to define a dictionary > 64 KB */
    if ((U32)dictSize > dict->dictSize) dictSize = dict->dictSize;

    memmove(safeBuffer, previousDictEnd - dictSize, dictSize);

    dict->dictionary = (const BYTE*)safeBuffer;
    dict->dictSize = (U32)dictSize;

    return dictSize;
}
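
/* Illustrative streaming sketch (not part of the library) : chunks compressed
 * with LZ4_compress_fast_continue() share history across calls ; the window is
 * preserved between calls with LZ4_saveDict() :
 *
 *     LZ4_stream_t strm;
 *     char window[64 * 1024];
 *     LZ4_resetStream(&strm);
 *     while (next chunk available) {
 *         int const cSize = LZ4_compress_fast_continue(&strm, chunk, out,
 *                                                      chunkSize, outCapacity, 1);
 *         // ... emit cSize bytes ; blocks must be decoded in the same order ...
 *         LZ4_saveDict(&strm, window, (int)sizeof window);   // keep history valid
 *     }
 */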
1096
+
1097
+
1098
+
1099
+ /*-*****************************
1100
+ * Decompression functions
1101
+ *******************************/
1102
+ /*! LZ4_decompress_generic() :
1103
+ * This generic decompression function cover all use cases.
1104
+ * It shall be instantiated several times, using different sets of directives
1105
+ * Note that it is important this generic function is really inlined,
1106
+ * in order to remove useless branches during compilation optimization.
1107
+ */
1108
+ FORCE_INLINE int LZ4_decompress_generic(
1109
+ const char* const source,
1110
+ char* const dest,
1111
+ int inputSize,
1112
+ int outputSize, /* If endOnInput==endOnInputSize, this value is the max size of Output Buffer. */
1113
+
1114
+ int endOnInput, /* endOnOutputSize, endOnInputSize */
1115
+ int partialDecoding, /* full, partial */
1116
+ int targetOutputSize, /* only used if partialDecoding==partial */
1117
+ int dict, /* noDict, withPrefix64k, usingExtDict */
1118
+ const BYTE* const lowPrefix, /* == dest when no prefix */
1119
+ const BYTE* const dictStart, /* only if dict==usingExtDict */
1120
+ const size_t dictSize /* note : = 0 if noDict */
1121
+ )
1122
+ {
1123
+ /* Local Variables */
1124
+ const BYTE* ip = (const BYTE*) source;
1125
+ const BYTE* const iend = ip + inputSize;
1126
+
1127
+ BYTE* op = (BYTE*) dest;
1128
+ BYTE* const oend = op + outputSize;
1129
+ BYTE* cpy;
1130
+ BYTE* oexit = op + targetOutputSize;
1131
+ const BYTE* const lowLimit = lowPrefix - dictSize;
1132
+
1133
+ const BYTE* const dictEnd = (const BYTE*)dictStart + dictSize;
1134
+ const unsigned dec32table[] = {0, 1, 2, 1, 4, 4, 4, 4};
1135
+ const int dec64table[] = {0, 0, 0, -1, 0, 1, 2, 3};
1136
+
1137
+ const int safeDecode = (endOnInput==endOnInputSize);
1138
+ const int checkOffset = ((safeDecode) && (dictSize < (int)(64 KB)));
1139
+
1140
+
1141
+ /* Special cases */
1142
+ if ((partialDecoding) && (oexit > oend-MFLIMIT)) oexit = oend-MFLIMIT; /* targetOutputSize too high => decode everything */
1143
+ if ((endOnInput) && (unlikely(outputSize==0))) return ((inputSize==1) && (*ip==0)) ? 0 : -1; /* Empty output buffer */
1144
+ if ((!endOnInput) && (unlikely(outputSize==0))) return (*ip==0?1:-1);
1145
+
1146
+     /* Main Loop : decode sequences */
+     while (1) {
+         size_t length;
+         const BYTE* match;
+         size_t offset;
+
+         /* get literal length */
+         unsigned const token = *ip++;
+         if ((length=(token>>ML_BITS)) == RUN_MASK) {
+             unsigned s;
+             do {
+                 s = *ip++;
+                 length += s;
+             } while ( likely(endOnInput ? ip<iend-RUN_MASK : 1) & (s==255) );
+             if ((safeDecode) && unlikely((uptrval)(op)+length<(uptrval)(op))) goto _output_error;   /* overflow detection */
+             if ((safeDecode) && unlikely((uptrval)(ip)+length<(uptrval)(ip))) goto _output_error;   /* overflow detection */
+         }
+
+         /* copy literals */
+         cpy = op+length;
+         if ( ((endOnInput) && ((cpy>(partialDecoding?oexit:oend-MFLIMIT)) || (ip+length>iend-(2+1+LASTLITERALS))) )
+           || ((!endOnInput) && (cpy>oend-WILDCOPYLENGTH)) )
+         {
+             if (partialDecoding) {
+                 if (cpy > oend) goto _output_error;                           /* Error : write attempt beyond end of output buffer */
+                 if ((endOnInput) && (ip+length > iend)) goto _output_error;   /* Error : read attempt beyond end of input buffer */
+             } else {
+                 if ((!endOnInput) && (cpy != oend)) goto _output_error;       /* Error : block decoding must stop exactly there */
+                 if ((endOnInput) && ((ip+length != iend) || (cpy > oend))) goto _output_error;   /* Error : input must be consumed */
+             }
+             memcpy(op, ip, length);
+             ip += length;
+             op += length;
+             break;   /* Necessarily EOF, due to parsing restrictions */
+         }
+         LZ4_wildCopy(op, ip, cpy);
+         ip += length; op = cpy;
+
+         /* get offset */
+         offset = LZ4_readLE16(ip); ip+=2;
+         match = op - offset;
+         if ((checkOffset) && (unlikely(match < lowLimit))) goto _output_error;   /* Error : offset outside buffers */
+         LZ4_write32(op, (U32)offset);   /* costs ~1%; silence an msan warning when offset==0 */
+
+         /* get matchlength */
+         length = token & ML_MASK;
+         if (length == ML_MASK) {
+             unsigned s;
+             do {
+                 s = *ip++;
+                 if ((endOnInput) && (ip > iend-LASTLITERALS)) goto _output_error;
+                 length += s;
+             } while (s==255);
+             if ((safeDecode) && unlikely((uptrval)(op)+length<(uptrval)op)) goto _output_error;   /* overflow detection */
+         }
+         length += MINMATCH;
+
+         /* check external dictionary */
+         if ((dict==usingExtDict) && (match < lowPrefix)) {
+             if (unlikely(op+length > oend-LASTLITERALS)) goto _output_error;   /* doesn't respect parsing restriction */
+
+             if (length <= (size_t)(lowPrefix-match)) {
+                 /* match can be copied as a single segment from external dictionary */
+                 memmove(op, dictEnd - (lowPrefix-match), length);
+                 op += length;
+             } else {
+                 /* match spans both the external dictionary and the current block */
+                 size_t const copySize = (size_t)(lowPrefix-match);
+                 size_t const restSize = length - copySize;
+                 memcpy(op, dictEnd - copySize, copySize);
+                 op += copySize;
+                 if (restSize > (size_t)(op-lowPrefix)) {   /* overlap copy */
+                     BYTE* const endOfMatch = op + restSize;
+                     const BYTE* copyFrom = lowPrefix;
+                     while (op < endOfMatch) *op++ = *copyFrom++;
+                 } else {
+                     memcpy(op, lowPrefix, restSize);
+                     op += restSize;
+             }   }
+             continue;
+         }
+
+         /* copy match within block */
+         cpy = op + length;
+         if (unlikely(offset<8)) {
+             const int dec64 = dec64table[offset];
+             op[0] = match[0];
+             op[1] = match[1];
+             op[2] = match[2];
+             op[3] = match[3];
+             match += dec32table[offset];
+             memcpy(op+4, match, 4);
+             match -= dec64;
+         } else { LZ4_copy8(op, match); match+=8; }
+         op += 8;
+
+         if (unlikely(cpy>oend-12)) {
+             BYTE* const oCopyLimit = oend-(WILDCOPYLENGTH-1);
+             if (cpy > oend-LASTLITERALS) goto _output_error;   /* Error : last LASTLITERALS bytes must be literals (uncompressed) */
+             if (op < oCopyLimit) {
+                 LZ4_wildCopy(op, match, oCopyLimit);
+                 match += oCopyLimit - op;
+                 op = oCopyLimit;
+             }
+             while (op<cpy) *op++ = *match++;
+         } else {
+             LZ4_copy8(op, match);
+             if (length>16) LZ4_wildCopy(op+8, match+8, cpy);
+         }
+         op = cpy;   /* correction */
+     }
+
+     /* end of decoding */
+     if (endOnInput)
+         return (int) (((char*)op)-dest);           /* Nb of output bytes decoded */
+     else
+         return (int) (((const char*)ip)-source);   /* Nb of input bytes read */
+
+     /* Overflow error detected */
+ _output_error:
+     return (int) (-(((const char*)ip)-source))-1;
+ }
+
+
+ int LZ4_decompress_safe(const char* source, char* dest, int compressedSize, int maxDecompressedSize)
+ {
+     return LZ4_decompress_generic(source, dest, compressedSize, maxDecompressedSize, endOnInputSize, full, 0, noDict, (BYTE*)dest, NULL, 0);
+ }
+
+ int LZ4_decompress_safe_partial(const char* source, char* dest, int compressedSize, int targetOutputSize, int maxDecompressedSize)
+ {
+     return LZ4_decompress_generic(source, dest, compressedSize, maxDecompressedSize, endOnInputSize, partial, targetOutputSize, noDict, (BYTE*)dest, NULL, 0);
+ }
+
+ int LZ4_decompress_fast(const char* source, char* dest, int originalSize)
+ {
+     return LZ4_decompress_generic(source, dest, 0, originalSize, endOnOutputSize, full, 0, withPrefix64k, (BYTE*)(dest - 64 KB), NULL, 64 KB);
+ }
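+
+ /* Illustrative sketch (not part of upstream lz4.c) : a one-shot round trip
+  * through LZ4_decompress_safe() above. Buffer names are hypothetical and the
+  * block is compiled out (#if 0). */
+ #if 0
+ #include <string.h>
+ static int exampleRoundTrip(void)
+ {
+     const char src[] = "LZ4 round-trip example, LZ4 round-trip example";
+     char compressed[LZ4_COMPRESSBOUND(sizeof(src))];
+     char restored[sizeof(src)];
+     int const cSize = LZ4_compress_default(src, compressed, (int)sizeof(src), (int)sizeof(compressed));
+     if (cSize <= 0) return -1;
+     /* LZ4_decompress_safe() never writes beyond restored[sizeof(restored)-1]
+      * and returns the number of bytes regenerated, or a negative error code */
+     {   int const dSize = LZ4_decompress_safe(compressed, restored, cSize, (int)sizeof(restored));
+         return ((dSize == (int)sizeof(src)) && !memcmp(src, restored, sizeof(src))) ? 0 : -1;
+     }
+ }
+ #endif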
+
+
+ /*===== streaming decompression functions =====*/
+
+ LZ4_streamDecode_t* LZ4_createStreamDecode(void)
+ {
+     LZ4_streamDecode_t* lz4s = (LZ4_streamDecode_t*) ALLOCATOR(1, sizeof(LZ4_streamDecode_t));
+     return lz4s;
+ }
+
+ int LZ4_freeStreamDecode (LZ4_streamDecode_t* LZ4_stream)
+ {
+     if (!LZ4_stream) return 0;   /* support free on NULL */
+     FREEMEM(LZ4_stream);
+     return 0;
+ }
+
+ /*! LZ4_setStreamDecode() :
+  *  Use this function to tell decoding functions where to find the dictionary.
+  *  It is not necessary if previously decoded data is still available where it was decoded.
+  *  Loading a size of 0 is allowed (same effect as no dictionary).
+  *  Return : 1 if OK, 0 if error
+  */
+ int LZ4_setStreamDecode (LZ4_streamDecode_t* LZ4_streamDecode, const char* dictionary, int dictSize)
+ {
+     LZ4_streamDecode_t_internal* lz4sd = &LZ4_streamDecode->internal_donotuse;
+     lz4sd->prefixSize = (size_t) dictSize;
+     lz4sd->prefixEnd = (const BYTE*) dictionary + dictSize;
+     lz4sd->externalDict = NULL;
+     lz4sd->extDictSize = 0;
+     return 1;
+ }
+
+ /*
+  * *_continue() :
+  *  These decoding functions allow decompression of multiple blocks in "streaming" mode.
+  *  Previously decoded blocks must still be available at the memory position where they were decoded.
+  *  If that is not possible, save the relevant part of the decoded data into a safe buffer,
+  *  and indicate where it now stands using LZ4_setStreamDecode().
+  */
+ int LZ4_decompress_safe_continue (LZ4_streamDecode_t* LZ4_streamDecode, const char* source, char* dest, int compressedSize, int maxOutputSize)
+ {
+     LZ4_streamDecode_t_internal* lz4sd = &LZ4_streamDecode->internal_donotuse;
+     int result;
+
+     if (lz4sd->prefixEnd == (BYTE*)dest) {
+         result = LZ4_decompress_generic(source, dest, compressedSize, maxOutputSize,
+                                         endOnInputSize, full, 0,
+                                         usingExtDict, lz4sd->prefixEnd - lz4sd->prefixSize, lz4sd->externalDict, lz4sd->extDictSize);
+         if (result <= 0) return result;
+         lz4sd->prefixSize += result;
+         lz4sd->prefixEnd  += result;
+     } else {
+         lz4sd->extDictSize = lz4sd->prefixSize;
+         lz4sd->externalDict = lz4sd->prefixEnd - lz4sd->extDictSize;
+         result = LZ4_decompress_generic(source, dest, compressedSize, maxOutputSize,
+                                         endOnInputSize, full, 0,
+                                         usingExtDict, (BYTE*)dest, lz4sd->externalDict, lz4sd->extDictSize);
+         if (result <= 0) return result;
+         lz4sd->prefixSize = result;
+         lz4sd->prefixEnd  = (BYTE*)dest + result;
+     }
+
+     return result;
+ }
+
+ int LZ4_decompress_fast_continue (LZ4_streamDecode_t* LZ4_streamDecode, const char* source, char* dest, int originalSize)
+ {
+     LZ4_streamDecode_t_internal* lz4sd = &LZ4_streamDecode->internal_donotuse;
+     int result;
+
+     if (lz4sd->prefixEnd == (BYTE*)dest) {
+         result = LZ4_decompress_generic(source, dest, 0, originalSize,
+                                         endOnOutputSize, full, 0,
+                                         usingExtDict, lz4sd->prefixEnd - lz4sd->prefixSize, lz4sd->externalDict, lz4sd->extDictSize);
+         if (result <= 0) return result;
+         lz4sd->prefixSize += originalSize;
+         lz4sd->prefixEnd  += originalSize;
+     } else {
+         lz4sd->extDictSize = lz4sd->prefixSize;
+         lz4sd->externalDict = lz4sd->prefixEnd - lz4sd->extDictSize;
+         result = LZ4_decompress_generic(source, dest, 0, originalSize,
+                                         endOnOutputSize, full, 0,
+                                         usingExtDict, (BYTE*)dest, lz4sd->externalDict, lz4sd->extDictSize);
+         if (result <= 0) return result;
+         lz4sd->prefixSize = originalSize;
+         lz4sd->prefixEnd  = (BYTE*)dest + originalSize;
+     }
+
+     return result;
+ }
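+
+ /* Illustrative sketch (not part of upstream lz4.c) : decoding a sequence of
+  * blocks with LZ4_decompress_safe_continue(), alternating between two buffers
+  * so previously decoded data stays addressable and can serve as the dictionary.
+  * readBlock() and the buffer sizes are hypothetical; compiled out (#if 0). */
+ #if 0
+ extern int readBlock(char* buf, int capacity);   /* hypothetical input source */
+ static void exampleStreamDecompress(void)
+ {
+     LZ4_streamDecode_t ctx;
+     char decBuf[2][64 KB];                   /* double buffer : last output stays valid */
+     char cmpBuf[LZ4_COMPRESSBOUND(64 KB)];
+     int  idx = 0;
+     LZ4_setStreamDecode(&ctx, NULL, 0);      /* fresh stream, no initial dictionary */
+     for (;;) {
+         int const cSize = readBlock(cmpBuf, (int)sizeof(cmpBuf));
+         if (cSize <= 0) break;
+         {   int const dSize = LZ4_decompress_safe_continue(&ctx, cmpBuf, decBuf[idx], cSize, 64 KB);
+             if (dSize < 0) break;            /* corrupted input */
+             /* consume decBuf[idx][0..dSize-1] here; it must stay untouched while
+              * the next block is decoded, since it serves as the dictionary */
+         }
+         idx ^= 1;                            /* the decoder tracks positions by itself */
+     }
+ }
+ #endif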
+
+
+ /*
+  *  Advanced decoding functions :
+  *  *_usingDict() :
+  *      These decoding functions work the same as the "_continue" ones,
+  *      except that the dictionary must be explicitly provided as a parameter.
+  */
+
+ FORCE_INLINE int LZ4_decompress_usingDict_generic(const char* source, char* dest, int compressedSize, int maxOutputSize, int safe, const char* dictStart, int dictSize)
+ {
+     if (dictSize==0)
+         return LZ4_decompress_generic(source, dest, compressedSize, maxOutputSize, safe, full, 0, noDict, (BYTE*)dest, NULL, 0);
+     if (dictStart+dictSize == dest) {
+         if (dictSize >= (int)(64 KB - 1))
+             return LZ4_decompress_generic(source, dest, compressedSize, maxOutputSize, safe, full, 0, withPrefix64k, (BYTE*)dest-64 KB, NULL, 0);
+         return LZ4_decompress_generic(source, dest, compressedSize, maxOutputSize, safe, full, 0, noDict, (BYTE*)dest-dictSize, NULL, 0);
+     }
+     return LZ4_decompress_generic(source, dest, compressedSize, maxOutputSize, safe, full, 0, usingExtDict, (BYTE*)dest, (const BYTE*)dictStart, dictSize);
+ }
+
+ int LZ4_decompress_safe_usingDict(const char* source, char* dest, int compressedSize, int maxOutputSize, const char* dictStart, int dictSize)
+ {
+     return LZ4_decompress_usingDict_generic(source, dest, compressedSize, maxOutputSize, 1, dictStart, dictSize);
+ }
+
+ int LZ4_decompress_fast_usingDict(const char* source, char* dest, int originalSize, const char* dictStart, int dictSize)
+ {
+     return LZ4_decompress_usingDict_generic(source, dest, 0, originalSize, 0, dictStart, dictSize);
+ }
+
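+ /* Illustrative sketch (not part of upstream lz4.c) : one-shot decompression
+  * against an external dictionary. The dictionary bytes must be the same ones
+  * used at compression time; names are hypothetical and the block is compiled
+  * out (#if 0). */
+ #if 0
+ static int exampleUsingDict(const char* dict, int dictSize,
+                             const char* cmp, int cmpSize,
+                             char* out, int outCapacity)
+ {
+     /* dict may live anywhere in memory; it does not have to precede out */
+     return LZ4_decompress_safe_usingDict(cmp, out, cmpSize, outCapacity, dict, dictSize);
+ }
+ #endif
+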
+ /* debug function */
+ int LZ4_decompress_safe_forceExtDict(const char* source, char* dest, int compressedSize, int maxOutputSize, const char* dictStart, int dictSize)
+ {
+     return LZ4_decompress_generic(source, dest, compressedSize, maxOutputSize, endOnInputSize, full, 0, usingExtDict, (BYTE*)dest, (const BYTE*)dictStart, dictSize);
+ }
+
+
+ /*=*************************************************
+ *  Obsolete Functions
+ ***************************************************/
+ /* obsolete compression functions */
+ int LZ4_compress_limitedOutput(const char* source, char* dest, int inputSize, int maxOutputSize) { return LZ4_compress_default(source, dest, inputSize, maxOutputSize); }
+ int LZ4_compress(const char* source, char* dest, int inputSize) { return LZ4_compress_default(source, dest, inputSize, LZ4_compressBound(inputSize)); }
+ int LZ4_compress_limitedOutput_withState (void* state, const char* src, char* dst, int srcSize, int dstSize) { return LZ4_compress_fast_extState(state, src, dst, srcSize, dstSize, 1); }
+ int LZ4_compress_withState (void* state, const char* src, char* dst, int srcSize) { return LZ4_compress_fast_extState(state, src, dst, srcSize, LZ4_compressBound(srcSize), 1); }
+ int LZ4_compress_limitedOutput_continue (LZ4_stream_t* LZ4_stream, const char* src, char* dst, int srcSize, int maxDstSize) { return LZ4_compress_fast_continue(LZ4_stream, src, dst, srcSize, maxDstSize, 1); }
+ int LZ4_compress_continue (LZ4_stream_t* LZ4_stream, const char* source, char* dest, int inputSize) { return LZ4_compress_fast_continue(LZ4_stream, source, dest, inputSize, LZ4_compressBound(inputSize), 1); }
+
+ /*
+ These function names are deprecated and should no longer be used.
+ They are only provided here for compatibility with older user programs.
+ - LZ4_uncompress is totally equivalent to LZ4_decompress_fast
+ - LZ4_uncompress_unknownOutputSize is totally equivalent to LZ4_decompress_safe
+ */
+ int LZ4_uncompress (const char* source, char* dest, int outputSize) { return LZ4_decompress_fast(source, dest, outputSize); }
+ int LZ4_uncompress_unknownOutputSize (const char* source, char* dest, int isize, int maxOutputSize) { return LZ4_decompress_safe(source, dest, isize, maxOutputSize); }
+
+
+ /* Obsolete Streaming functions */
+
+ int LZ4_sizeofStreamState() { return LZ4_STREAMSIZE; }
+
+ static void LZ4_init(LZ4_stream_t* lz4ds, BYTE* base)
+ {
+     MEM_INIT(lz4ds, 0, sizeof(LZ4_stream_t));
+     lz4ds->internal_donotuse.bufferStart = base;
+ }
+
+ int LZ4_resetStreamState(void* state, char* inputBuffer)
+ {
+     if ((((uptrval)state) & 3) != 0) return 1;   /* Error : pointer is not aligned on 4-bytes boundary */
+     LZ4_init((LZ4_stream_t*)state, (BYTE*)inputBuffer);
+     return 0;
+ }
+
+ void* LZ4_create (char* inputBuffer)
+ {
+     LZ4_stream_t* lz4ds = (LZ4_stream_t*)ALLOCATOR(8, sizeof(LZ4_stream_t));
+     LZ4_init (lz4ds, (BYTE*)inputBuffer);
+     return lz4ds;
+ }
+
+ char* LZ4_slideInputBuffer (void* LZ4_Data)
+ {
+     LZ4_stream_t_internal* ctx = &((LZ4_stream_t*)LZ4_Data)->internal_donotuse;
+     int const dictSize = LZ4_saveDict((LZ4_stream_t*)LZ4_Data, (char*)ctx->bufferStart, 64 KB);
+     return (char*)(ctx->bufferStart + dictSize);
+ }
+
+ /* Obsolete streaming decompression functions */
+
+ int LZ4_decompress_safe_withPrefix64k(const char* source, char* dest, int compressedSize, int maxOutputSize)
+ {
+     return LZ4_decompress_generic(source, dest, compressedSize, maxOutputSize, endOnInputSize, full, 0, withPrefix64k, (BYTE*)dest - 64 KB, NULL, 64 KB);
+ }
+
+ int LZ4_decompress_fast_withPrefix64k(const char* source, char* dest, int originalSize)
+ {
+     return LZ4_decompress_generic(source, dest, 0, originalSize, endOnOutputSize, full, 0, withPrefix64k, (BYTE*)dest - 64 KB, NULL, 64 KB);
+ }
+
+ #endif   /* LZ4_COMMONDEFS_ONLY */