| author | thegeorg <thegeorg@yandex-team.ru> | 2022-02-10 16:45:12 +0300 |
|---|---|---|
| committer | Daniil Cherednik <dcherednik@yandex-team.ru> | 2022-02-10 16:45:12 +0300 |
| commit | 49116032d905455a7b1c994e4a696afc885c1e71 (patch) | |
| tree | be835aa92c6248212e705f25388ebafcf84bc7a1 | /contrib/libs/snappy |
| parent | 4e839db24a3bbc9f1c610c43d6faaaa99824dcca (diff) | |
| download | ydb-49116032d905455a7b1c994e4a696afc885c1e71.tar.gz | |

Restoring authorship annotation for <thegeorg@yandex-team.ru>. Commit 2 of 2.

Diffstat (limited to 'contrib/libs/snappy')

22 files changed, 2089 insertions(+), 2089 deletions(-)
diff --git a/contrib/libs/snappy/AUTHORS b/contrib/libs/snappy/AUTHORS index 72e817a668..4858b377c7 100644 --- a/contrib/libs/snappy/AUTHORS +++ b/contrib/libs/snappy/AUTHORS @@ -1 +1 @@ -opensource@google.com +opensource@google.com diff --git a/contrib/libs/snappy/CONTRIBUTING.md b/contrib/libs/snappy/CONTRIBUTING.md index 4cc16b100d..c7b84516c2 100644 --- a/contrib/libs/snappy/CONTRIBUTING.md +++ b/contrib/libs/snappy/CONTRIBUTING.md @@ -1,26 +1,26 @@ -# How to Contribute - -We'd love to accept your patches and contributions to this project. There are -just a few small guidelines you need to follow. - -## Contributor License Agreement - -Contributions to this project must be accompanied by a Contributor License -Agreement. You (or your employer) retain the copyright to your contribution, -this simply gives us permission to use and redistribute your contributions as -part of the project. Head over to <https://cla.developers.google.com/> to see -your current agreements on file or to sign a new one. - -You generally only need to submit a CLA once, so if you've already submitted one -(even if it was for a different project), you probably don't need to do it -again. - -## Code reviews - -All submissions, including submissions by project members, require review. We -use GitHub pull requests for this purpose. Consult -[GitHub Help](https://help.github.com/articles/about-pull-requests/) for more -information on using pull requests. - -Please make sure that all the automated checks (CLA, AppVeyor, Travis) pass for -your pull requests. Pull requests whose checks fail may be ignored. +# How to Contribute + +We'd love to accept your patches and contributions to this project. There are +just a few small guidelines you need to follow. + +## Contributor License Agreement + +Contributions to this project must be accompanied by a Contributor License +Agreement. You (or your employer) retain the copyright to your contribution, +this simply gives us permission to use and redistribute your contributions as +part of the project. Head over to <https://cla.developers.google.com/> to see +your current agreements on file or to sign a new one. + +You generally only need to submit a CLA once, so if you've already submitted one +(even if it was for a different project), you probably don't need to do it +again. + +## Code reviews + +All submissions, including submissions by project members, require review. We +use GitHub pull requests for this purpose. Consult +[GitHub Help](https://help.github.com/articles/about-pull-requests/) for more +information on using pull requests. + +Please make sure that all the automated checks (CLA, AppVeyor, Travis) pass for +your pull requests. Pull requests whose checks fail may be ignored. diff --git a/contrib/libs/snappy/COPYING b/contrib/libs/snappy/COPYING index 09dec7bc52..bd0e5971db 100644 --- a/contrib/libs/snappy/COPYING +++ b/contrib/libs/snappy/COPYING @@ -1,54 +1,54 @@ -Copyright 2011, Google Inc. -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - * Neither the name of Google Inc. 
nor the names of its -contributors may be used to endorse or promote products derived from -this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - -=== - -Some of the benchmark data in testdata/ is licensed differently: - - - fireworks.jpeg is Copyright 2013 Steinar H. Gunderson, and - is licensed under the Creative Commons Attribution 3.0 license - (CC-BY-3.0). See https://creativecommons.org/licenses/by/3.0/ - for more information. - - - kppkn.gtb is taken from the Gaviota chess tablebase set, and - is licensed under the MIT License. See - https://sites.google.com/site/gaviotachessengine/Home/endgame-tablebases-1 - for more information. - - - paper-100k.pdf is an excerpt (bytes 92160 to 194560) from the paper - “Combinatorial Modeling of Chromatin Features Quantitatively Predicts DNA - Replication Timing in _Drosophila_” by Federico Comoglio and Renato Paro, - which is licensed under the CC-BY license. See - http://www.ploscompbiol.org/static/license for more ifnormation. - - - alice29.txt, asyoulik.txt, plrabn12.txt and lcet10.txt are from Project - Gutenberg. The first three have expired copyrights and are in the public - domain; the latter does not have expired copyright, but is still in the - public domain according to the license information - (http://www.gutenberg.org/ebooks/53). +Copyright 2011, Google Inc. +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +=== + +Some of the benchmark data in testdata/ is licensed differently: + + - fireworks.jpeg is Copyright 2013 Steinar H. Gunderson, and + is licensed under the Creative Commons Attribution 3.0 license + (CC-BY-3.0). See https://creativecommons.org/licenses/by/3.0/ + for more information. + + - kppkn.gtb is taken from the Gaviota chess tablebase set, and + is licensed under the MIT License. See + https://sites.google.com/site/gaviotachessengine/Home/endgame-tablebases-1 + for more information. + + - paper-100k.pdf is an excerpt (bytes 92160 to 194560) from the paper + “Combinatorial Modeling of Chromatin Features Quantitatively Predicts DNA + Replication Timing in _Drosophila_” by Federico Comoglio and Renato Paro, + which is licensed under the CC-BY license. See + http://www.ploscompbiol.org/static/license for more ifnormation. + + - alice29.txt, asyoulik.txt, plrabn12.txt and lcet10.txt are from Project + Gutenberg. The first three have expired copyrights and are in the public + domain; the latter does not have expired copyright, but is still in the + public domain according to the license information + (http://www.gutenberg.org/ebooks/53). diff --git a/contrib/libs/snappy/NEWS b/contrib/libs/snappy/NEWS index 62a6a62e25..98048dbdd8 100644 --- a/contrib/libs/snappy/NEWS +++ b/contrib/libs/snappy/NEWS @@ -1,188 +1,188 @@ -Snappy v1.1.8, January 15th 2020: - - * Small performance improvements. - - * Removed snappy::string alias for std::string. - - * Improved CMake configuration. - -Snappy v1.1.7, August 24th 2017: - - * Improved CMake build support for 64-bit Linux distributions. - - * MSVC builds now use MSVC-specific intrinsics that map to clzll. - - * ARM64 (AArch64) builds use the code paths optimized for 64-bit processors. - -Snappy v1.1.6, July 12th 2017: - -This is a re-release of v1.1.5 with proper SONAME / SOVERSION values. - -Snappy v1.1.5, June 28th 2017: - -This release has broken SONAME / SOVERSION values. Users of snappy as a shared -library should avoid 1.1.5 and use 1.1.6 instead. SONAME / SOVERSION errors will -manifest as the dynamic library loader complaining that it cannot find snappy's -shared library file (libsnappy.so / libsnappy.dylib), or that the library it -found does not have the required version. 1.1.6 has the same code as 1.1.5, but -carries build configuration fixes for the issues above. - - * Add CMake build support. The autoconf build support is now deprecated, and - will be removed in the next release. - - * Add AppVeyor configuration, for Windows CI coverage. - - * Small performance improvement on little-endian PowerPC. - - * Small performance improvement on LLVM with position-independent executables. - - * Fix a few issues with various build environments. - -Snappy v1.1.4, January 25th 2017: - - * Fix a 1% performance regression when snappy is used in PIE executables. - - * Improve compression performance by 5%. - - * Improve decompression performance by 20%. 
- -Snappy v1.1.3, July 6th 2015: - -This is the first release to be done from GitHub, which means that -some minor things like the ChangeLog format has changed (git log -format instead of svn log). - - * Add support for Uncompress() from a Source to a Sink. - - * Various minor changes to improve MSVC support; in particular, - the unit tests now compile and run under MSVC. - - -Snappy v1.1.2, February 28th 2014: - -This is a maintenance release with no changes to the actual library -source code. - - * Stop distributing benchmark data files that have unclear - or unsuitable licensing. - - * Add support for padding chunks in the framing format. - - -Snappy v1.1.1, October 15th 2013: - - * Add support for uncompressing to iovecs (scatter I/O). - The bulk of this patch was contributed by Mohit Aron. - - * Speed up decompression by ~2%; much more so (~13-20%) on - a few benchmarks on given compilers and CPUs. - - * Fix a few issues with MSVC compilation. - - * Support truncated test data in the benchmark. - - -Snappy v1.1.0, January 18th 2013: - - * Snappy now uses 64 kB block size instead of 32 kB. On average, - this means it compresses about 3% denser (more so for some - inputs), at the same or better speeds. - - * libsnappy no longer depends on iostream. - - * Some small performance improvements in compression on x86 - (0.5–1%). - - * Various portability fixes for ARM-based platforms, for MSVC, - and for GNU/Hurd. - - -Snappy v1.0.5, February 24th 2012: - - * More speed improvements. Exactly how big will depend on - the architecture: - - - 3–10% faster decompression for the base case (x86-64). - - - ARMv7 and higher can now use unaligned accesses, - and will see about 30% faster decompression and - 20–40% faster compression. - - - 32-bit platforms (ARM and 32-bit x86) will see 2–5% - faster compression. - - These are all cumulative (e.g., ARM gets all three speedups). - - * Fixed an issue where the unit test would crash on system - with less than 256 MB address space available, - e.g. some embedded platforms. - - * Added a framing format description, for use over e.g. HTTP, - or for a command-line compressor. We do not have any - implementations of this at the current point, but there seems - to be enough of a general interest in the topic. - Also make the format description slightly clearer. - - * Remove some compile-time warnings in -Wall - (mostly signed/unsigned comparisons), for easier embedding - into projects that use -Wall -Werror. - - -Snappy v1.0.4, September 15th 2011: - - * Speeded up the decompressor somewhat; typically about 2–8% - for Core i7, in 64-bit mode (comparable for Opteron). - Somewhat more for some tests, almost no gain for others. - - * Make Snappy compile on certain platforms it didn't before - (Solaris with SunPro C++, HP-UX, AIX). - - * Correct some minor errors in the format description. - - -Snappy v1.0.3, June 2nd 2011: - - * Speeded up the decompressor somewhat; about 3-6% for Core 2, - 6-13% for Core i7, and 5-12% for Opteron (all in 64-bit mode). - - * Added compressed format documentation. This text is new, - but an earlier version from Zeev Tarantov was used as reference. - - * Only link snappy_unittest against -lz and other autodetected - libraries, not libsnappy.so (which doesn't need any such dependency). - - * Fixed some display issues in the microbenchmarks, one of which would - frequently make the test crash on GNU/Hurd. - - -Snappy v1.0.2, April 29th 2011: - - * Relicense to a BSD-type license. 
- - * Added C bindings, contributed by Martin Gieseking. - - * More Win32 fixes, in particular for MSVC. - - * Replace geo.protodata with a newer version. - - * Fix timing inaccuracies in the unit test when comparing Snappy - to other algorithms. - - -Snappy v1.0.1, March 25th 2011: - -This is a maintenance release, mostly containing minor fixes. -There is no new functionality. The most important fixes include: - - * The COPYING file and all licensing headers now correctly state that - Snappy is licensed under the Apache 2.0 license. - - * snappy_unittest should now compile natively under Windows, - as well as on embedded systems with no mmap(). - - * Various autotools nits have been fixed. - - -Snappy v1.0, March 17th 2011: - - * Initial version. +Snappy v1.1.8, January 15th 2020: + + * Small performance improvements. + + * Removed snappy::string alias for std::string. + + * Improved CMake configuration. + +Snappy v1.1.7, August 24th 2017: + + * Improved CMake build support for 64-bit Linux distributions. + + * MSVC builds now use MSVC-specific intrinsics that map to clzll. + + * ARM64 (AArch64) builds use the code paths optimized for 64-bit processors. + +Snappy v1.1.6, July 12th 2017: + +This is a re-release of v1.1.5 with proper SONAME / SOVERSION values. + +Snappy v1.1.5, June 28th 2017: + +This release has broken SONAME / SOVERSION values. Users of snappy as a shared +library should avoid 1.1.5 and use 1.1.6 instead. SONAME / SOVERSION errors will +manifest as the dynamic library loader complaining that it cannot find snappy's +shared library file (libsnappy.so / libsnappy.dylib), or that the library it +found does not have the required version. 1.1.6 has the same code as 1.1.5, but +carries build configuration fixes for the issues above. + + * Add CMake build support. The autoconf build support is now deprecated, and + will be removed in the next release. + + * Add AppVeyor configuration, for Windows CI coverage. + + * Small performance improvement on little-endian PowerPC. + + * Small performance improvement on LLVM with position-independent executables. + + * Fix a few issues with various build environments. + +Snappy v1.1.4, January 25th 2017: + + * Fix a 1% performance regression when snappy is used in PIE executables. + + * Improve compression performance by 5%. + + * Improve decompression performance by 20%. + +Snappy v1.1.3, July 6th 2015: + +This is the first release to be done from GitHub, which means that +some minor things like the ChangeLog format has changed (git log +format instead of svn log). + + * Add support for Uncompress() from a Source to a Sink. + + * Various minor changes to improve MSVC support; in particular, + the unit tests now compile and run under MSVC. + + +Snappy v1.1.2, February 28th 2014: + +This is a maintenance release with no changes to the actual library +source code. + + * Stop distributing benchmark data files that have unclear + or unsuitable licensing. + + * Add support for padding chunks in the framing format. + + +Snappy v1.1.1, October 15th 2013: + + * Add support for uncompressing to iovecs (scatter I/O). + The bulk of this patch was contributed by Mohit Aron. + + * Speed up decompression by ~2%; much more so (~13-20%) on + a few benchmarks on given compilers and CPUs. + + * Fix a few issues with MSVC compilation. + + * Support truncated test data in the benchmark. + + +Snappy v1.1.0, January 18th 2013: + + * Snappy now uses 64 kB block size instead of 32 kB. 
On average, + this means it compresses about 3% denser (more so for some + inputs), at the same or better speeds. + + * libsnappy no longer depends on iostream. + + * Some small performance improvements in compression on x86 + (0.5–1%). + + * Various portability fixes for ARM-based platforms, for MSVC, + and for GNU/Hurd. + + +Snappy v1.0.5, February 24th 2012: + + * More speed improvements. Exactly how big will depend on + the architecture: + + - 3–10% faster decompression for the base case (x86-64). + + - ARMv7 and higher can now use unaligned accesses, + and will see about 30% faster decompression and + 20–40% faster compression. + + - 32-bit platforms (ARM and 32-bit x86) will see 2–5% + faster compression. + + These are all cumulative (e.g., ARM gets all three speedups). + + * Fixed an issue where the unit test would crash on system + with less than 256 MB address space available, + e.g. some embedded platforms. + + * Added a framing format description, for use over e.g. HTTP, + or for a command-line compressor. We do not have any + implementations of this at the current point, but there seems + to be enough of a general interest in the topic. + Also make the format description slightly clearer. + + * Remove some compile-time warnings in -Wall + (mostly signed/unsigned comparisons), for easier embedding + into projects that use -Wall -Werror. + + +Snappy v1.0.4, September 15th 2011: + + * Speeded up the decompressor somewhat; typically about 2–8% + for Core i7, in 64-bit mode (comparable for Opteron). + Somewhat more for some tests, almost no gain for others. + + * Make Snappy compile on certain platforms it didn't before + (Solaris with SunPro C++, HP-UX, AIX). + + * Correct some minor errors in the format description. + + +Snappy v1.0.3, June 2nd 2011: + + * Speeded up the decompressor somewhat; about 3-6% for Core 2, + 6-13% for Core i7, and 5-12% for Opteron (all in 64-bit mode). + + * Added compressed format documentation. This text is new, + but an earlier version from Zeev Tarantov was used as reference. + + * Only link snappy_unittest against -lz and other autodetected + libraries, not libsnappy.so (which doesn't need any such dependency). + + * Fixed some display issues in the microbenchmarks, one of which would + frequently make the test crash on GNU/Hurd. + + +Snappy v1.0.2, April 29th 2011: + + * Relicense to a BSD-type license. + + * Added C bindings, contributed by Martin Gieseking. + + * More Win32 fixes, in particular for MSVC. + + * Replace geo.protodata with a newer version. + + * Fix timing inaccuracies in the unit test when comparing Snappy + to other algorithms. + + +Snappy v1.0.1, March 25th 2011: + +This is a maintenance release, mostly containing minor fixes. +There is no new functionality. The most important fixes include: + + * The COPYING file and all licensing headers now correctly state that + Snappy is licensed under the Apache 2.0 license. + + * snappy_unittest should now compile natively under Windows, + as well as on embedded systems with no mmap(). + + * Various autotools nits have been fixed. + + +Snappy v1.0, March 17th 2011: + + * Initial version. diff --git a/contrib/libs/snappy/README.md b/contrib/libs/snappy/README.md index c8bcbdf235..cef4017492 100644 --- a/contrib/libs/snappy/README.md +++ b/contrib/libs/snappy/README.md @@ -1,148 +1,148 @@ -Snappy, a fast compressor/decompressor. - - -Introduction -============ - -Snappy is a compression/decompression library. 
It does not aim for maximum -compression, or compatibility with any other compression library; instead, -it aims for very high speeds and reasonable compression. For instance, -compared to the fastest mode of zlib, Snappy is an order of magnitude faster -for most inputs, but the resulting compressed files are anywhere from 20% to -100% bigger. (For more information, see "Performance", below.) - -Snappy has the following properties: - - * Fast: Compression speeds at 250 MB/sec and beyond, with no assembler code. - See "Performance" below. - * Stable: Over the last few years, Snappy has compressed and decompressed - petabytes of data in Google's production environment. The Snappy bitstream - format is stable and will not change between versions. - * Robust: The Snappy decompressor is designed not to crash in the face of - corrupted or malicious input. - * Free and open source software: Snappy is licensed under a BSD-type license. - For more information, see the included COPYING file. - -Snappy has previously been called "Zippy" in some Google presentations -and the like. - - -Performance -=========== - -Snappy is intended to be fast. On a single core of a Core i7 processor -in 64-bit mode, it compresses at about 250 MB/sec or more and decompresses at -about 500 MB/sec or more. (These numbers are for the slowest inputs in our -benchmark suite; others are much faster.) In our tests, Snappy usually -is faster than algorithms in the same class (e.g. LZO, LZF, QuickLZ, -etc.) while achieving comparable compression ratios. - -Typical compression ratios (based on the benchmark suite) are about 1.5-1.7x -for plain text, about 2-4x for HTML, and of course 1.0x for JPEGs, PNGs and -other already-compressed data. Similar numbers for zlib in its fastest mode -are 2.6-2.8x, 3-7x and 1.0x, respectively. More sophisticated algorithms are -capable of achieving yet higher compression rates, although usually at the -expense of speed. Of course, compression ratio will vary significantly with -the input. - -Although Snappy should be fairly portable, it is primarily optimized -for 64-bit x86-compatible processors, and may run slower in other environments. -In particular: - - - Snappy uses 64-bit operations in several places to process more data at - once than would otherwise be possible. - - Snappy assumes unaligned 32 and 64-bit loads and stores are cheap. - On some platforms, these must be emulated with single-byte loads - and stores, which is much slower. - - Snappy assumes little-endian throughout, and needs to byte-swap data in - several places if running on a big-endian platform. - -Experience has shown that even heavily tuned code can be improved. -Performance optimizations, whether for 64-bit x86 or other platforms, -are of course most welcome; see "Contact", below. - - -Building -======== - -You need the CMake version specified in [CMakeLists.txt](./CMakeLists.txt) -or later to build: - -```bash -mkdir build -cd build && cmake ../ && make -``` - -Usage -===== - -Note that Snappy, both the implementation and the main interface, -is written in C++. However, several third-party bindings to other languages -are available; see the [home page](docs/README.md) for more information. -Also, if you want to use Snappy from C code, you can use the included C -bindings in snappy-c.h. - -To use Snappy from your own C++ program, include the file "snappy.h" from -your calling file, and link against the compiled library. 
- -There are many ways to call Snappy, but the simplest possible is - -```c++ -snappy::Compress(input.data(), input.size(), &output); -``` - -and similarly - -```c++ -snappy::Uncompress(input.data(), input.size(), &output); -``` - -where "input" and "output" are both instances of std::string. - -There are other interfaces that are more flexible in various ways, including -support for custom (non-array) input sources. See the header file for more -information. - - -Tests and benchmarks -==================== - -When you compile Snappy, snappy_unittest is compiled in addition to the -library itself. You do not need it to use the compressor from your own library, -but it contains several useful components for Snappy development. - -First of all, it contains unit tests, verifying correctness on your machine in -various scenarios. If you want to change or optimize Snappy, please run the -tests to verify you have not broken anything. Note that if you have the -Google Test library installed, unit test behavior (especially failures) will be -significantly more user-friendly. You can find Google Test at - - https://github.com/google/googletest - -You probably also want the gflags library for handling of command-line flags; -you can find it at - - https://gflags.github.io/gflags/ - -In addition to the unit tests, snappy contains microbenchmarks used to -tune compression and decompression performance. These are automatically run -before the unit tests, but you can disable them using the flag ---run_microbenchmarks=false if you have gflags installed (otherwise you will -need to edit the source). - -Finally, snappy can benchmark Snappy against a few other compression libraries -(zlib, LZO, LZF, and QuickLZ), if they were detected at configure time. -To benchmark using a given file, give the compression algorithm you want to test -Snappy against (e.g. --zlib) and then a list of one or more file names on the -command line. The testdata/ directory contains the files used by the -microbenchmark, which should provide a reasonably balanced starting point for -benchmarking. (Note that baddata[1-3].snappy are not intended as benchmarks; they -are used to verify correctness in the presence of corrupted data in the unit -test.) - - -Contact -======= - -Snappy is distributed through GitHub. For the latest version, a bug tracker, -and other information, see https://github.com/google/snappy. +Snappy, a fast compressor/decompressor. + + +Introduction +============ + +Snappy is a compression/decompression library. It does not aim for maximum +compression, or compatibility with any other compression library; instead, +it aims for very high speeds and reasonable compression. For instance, +compared to the fastest mode of zlib, Snappy is an order of magnitude faster +for most inputs, but the resulting compressed files are anywhere from 20% to +100% bigger. (For more information, see "Performance", below.) + +Snappy has the following properties: + + * Fast: Compression speeds at 250 MB/sec and beyond, with no assembler code. + See "Performance" below. + * Stable: Over the last few years, Snappy has compressed and decompressed + petabytes of data in Google's production environment. The Snappy bitstream + format is stable and will not change between versions. + * Robust: The Snappy decompressor is designed not to crash in the face of + corrupted or malicious input. + * Free and open source software: Snappy is licensed under a BSD-type license. + For more information, see the included COPYING file. 
+ +Snappy has previously been called "Zippy" in some Google presentations +and the like. + + +Performance +=========== + +Snappy is intended to be fast. On a single core of a Core i7 processor +in 64-bit mode, it compresses at about 250 MB/sec or more and decompresses at +about 500 MB/sec or more. (These numbers are for the slowest inputs in our +benchmark suite; others are much faster.) In our tests, Snappy usually +is faster than algorithms in the same class (e.g. LZO, LZF, QuickLZ, +etc.) while achieving comparable compression ratios. + +Typical compression ratios (based on the benchmark suite) are about 1.5-1.7x +for plain text, about 2-4x for HTML, and of course 1.0x for JPEGs, PNGs and +other already-compressed data. Similar numbers for zlib in its fastest mode +are 2.6-2.8x, 3-7x and 1.0x, respectively. More sophisticated algorithms are +capable of achieving yet higher compression rates, although usually at the +expense of speed. Of course, compression ratio will vary significantly with +the input. + +Although Snappy should be fairly portable, it is primarily optimized +for 64-bit x86-compatible processors, and may run slower in other environments. +In particular: + + - Snappy uses 64-bit operations in several places to process more data at + once than would otherwise be possible. + - Snappy assumes unaligned 32 and 64-bit loads and stores are cheap. + On some platforms, these must be emulated with single-byte loads + and stores, which is much slower. + - Snappy assumes little-endian throughout, and needs to byte-swap data in + several places if running on a big-endian platform. + +Experience has shown that even heavily tuned code can be improved. +Performance optimizations, whether for 64-bit x86 or other platforms, +are of course most welcome; see "Contact", below. + + +Building +======== + +You need the CMake version specified in [CMakeLists.txt](./CMakeLists.txt) +or later to build: + +```bash +mkdir build +cd build && cmake ../ && make +``` + +Usage +===== + +Note that Snappy, both the implementation and the main interface, +is written in C++. However, several third-party bindings to other languages +are available; see the [home page](docs/README.md) for more information. +Also, if you want to use Snappy from C code, you can use the included C +bindings in snappy-c.h. + +To use Snappy from your own C++ program, include the file "snappy.h" from +your calling file, and link against the compiled library. + +There are many ways to call Snappy, but the simplest possible is + +```c++ +snappy::Compress(input.data(), input.size(), &output); +``` + +and similarly + +```c++ +snappy::Uncompress(input.data(), input.size(), &output); +``` + +where "input" and "output" are both instances of std::string. + +There are other interfaces that are more flexible in various ways, including +support for custom (non-array) input sources. See the header file for more +information. + + +Tests and benchmarks +==================== + +When you compile Snappy, snappy_unittest is compiled in addition to the +library itself. You do not need it to use the compressor from your own library, +but it contains several useful components for Snappy development. + +First of all, it contains unit tests, verifying correctness on your machine in +various scenarios. If you want to change or optimize Snappy, please run the +tests to verify you have not broken anything. Note that if you have the +Google Test library installed, unit test behavior (especially failures) will be +significantly more user-friendly. 
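The README hunk above quotes `snappy::Compress` and `snappy::Uncompress` only in isolation. As a reading aid, here is a minimal, self-contained round-trip sketch built from the documented `snappy.h` std::string interface (`Compress`, `Uncompress`, and `IsValidCompressedBuffer`); the sample payload and error handling are illustrative assumptions, not part of the library.

```c++
#include <cassert>
#include <iostream>
#include <string>

#include "snappy.h"

int main() {
  const std::string input(10000, 'a');  // highly compressible sample payload

  // Compress into a growable std::string output buffer.
  std::string compressed;
  snappy::Compress(input.data(), input.size(), &compressed);

  // The decompressor is designed to reject corrupt input rather than crash,
  // so validating and checking the return value is the expected pattern.
  assert(snappy::IsValidCompressedBuffer(compressed.data(), compressed.size()));

  std::string output;
  if (!snappy::Uncompress(compressed.data(), compressed.size(), &output)) {
    std::cerr << "corrupt compressed stream\n";
    return 1;
  }
  assert(output == input);
  std::cout << input.size() << " -> " << compressed.size() << " bytes\n";
  return 0;
}
```

Note that `Uncompress` returns false on malformed input instead of crashing, matching the robustness property the README claims above.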
You can find Google Test at + + https://github.com/google/googletest + +You probably also want the gflags library for handling of command-line flags; +you can find it at + + https://gflags.github.io/gflags/ + +In addition to the unit tests, snappy contains microbenchmarks used to +tune compression and decompression performance. These are automatically run +before the unit tests, but you can disable them using the flag +--run_microbenchmarks=false if you have gflags installed (otherwise you will +need to edit the source). + +Finally, snappy can benchmark Snappy against a few other compression libraries +(zlib, LZO, LZF, and QuickLZ), if they were detected at configure time. +To benchmark using a given file, give the compression algorithm you want to test +Snappy against (e.g. --zlib) and then a list of one or more file names on the +command line. The testdata/ directory contains the files used by the +microbenchmark, which should provide a reasonably balanced starting point for +benchmarking. (Note that baddata[1-3].snappy are not intended as benchmarks; they +are used to verify correctness in the presence of corrupted data in the unit +test.) + + +Contact +======= + +Snappy is distributed through GitHub. For the latest version, a bug tracker, +and other information, see https://github.com/google/snappy. diff --git a/contrib/libs/snappy/config-linux.h b/contrib/libs/snappy/config-linux.h index 7474a63705..f1a066fb97 100644 --- a/contrib/libs/snappy/config-linux.h +++ b/contrib/libs/snappy/config-linux.h @@ -1,62 +1,62 @@ -#ifndef THIRD_PARTY_SNAPPY_OPENSOURCE_CMAKE_CONFIG_H_ -#define THIRD_PARTY_SNAPPY_OPENSOURCE_CMAKE_CONFIG_H_ - -/* Define to 1 if the compiler supports __builtin_ctz and friends. */ -#define HAVE_BUILTIN_CTZ 1 - -/* Define to 1 if the compiler supports __builtin_expect. */ -#define HAVE_BUILTIN_EXPECT 1 - -/* Define to 1 if you have the <byteswap.h> header file. */ -#define HAVE_BYTESWAP_H 1 - -/* Define to 1 if you have a definition for mmap() in <sys/mman.h>. */ -#define HAVE_FUNC_MMAP 1 - -/* Define to 1 if you have a definition for sysconf() in <unistd.h>. */ -#define HAVE_FUNC_SYSCONF 1 - -/* Define to 1 to use the gflags package for command-line parsing. */ -/* #undef HAVE_GFLAGS */ - -/* Define to 1 if you have Google Test. */ -/* #undef HAVE_GTEST */ - -/* Define to 1 if you have the `lzo2' library (-llzo2). */ -/* #undef HAVE_LIBLZO2 */ - -/* Define to 1 if you have the `z' library (-lz). */ -/* #undef HAVE_LIBZ */ - -/* Define to 1 if you have the <sys/endian.h> header file. */ -/* #undef HAVE_SYS_ENDIAN_H */ - -/* Define to 1 if you have the <sys/mman.h> header file. */ -#define HAVE_SYS_MMAN_H 1 - -/* Define to 1 if you have the <sys/resource.h> header file. */ -#define HAVE_SYS_RESOURCE_H 1 - -/* Define to 1 if you have the <sys/time.h> header file. */ -#define HAVE_SYS_TIME_H 1 - -/* Define to 1 if you have the <sys/uio.h> header file. */ -#define HAVE_SYS_UIO_H 1 - -/* Define to 1 if you have the <unistd.h> header file. */ -#define HAVE_UNISTD_H 1 - -/* Define to 1 if you have the <windows.h> header file. */ -/* #undef HAVE_WINDOWS_H */ - -/* Define to 1 if you target processors with SSSE3+ and have <tmmintrin.h>. */ -#define SNAPPY_HAVE_SSSE3 0 - -/* Define to 1 if you target processors with BMI2+ and have <bmi2intrin.h>. */ -#define SNAPPY_HAVE_BMI2 0 - -/* Define to 1 if your processor stores words with the most significant byte - first (like Motorola and SPARC, unlike Intel and VAX). 
*/ -/* #undef SNAPPY_IS_BIG_ENDIAN */ - -#endif // THIRD_PARTY_SNAPPY_OPENSOURCE_CMAKE_CONFIG_H_ +#ifndef THIRD_PARTY_SNAPPY_OPENSOURCE_CMAKE_CONFIG_H_ +#define THIRD_PARTY_SNAPPY_OPENSOURCE_CMAKE_CONFIG_H_ + +/* Define to 1 if the compiler supports __builtin_ctz and friends. */ +#define HAVE_BUILTIN_CTZ 1 + +/* Define to 1 if the compiler supports __builtin_expect. */ +#define HAVE_BUILTIN_EXPECT 1 + +/* Define to 1 if you have the <byteswap.h> header file. */ +#define HAVE_BYTESWAP_H 1 + +/* Define to 1 if you have a definition for mmap() in <sys/mman.h>. */ +#define HAVE_FUNC_MMAP 1 + +/* Define to 1 if you have a definition for sysconf() in <unistd.h>. */ +#define HAVE_FUNC_SYSCONF 1 + +/* Define to 1 to use the gflags package for command-line parsing. */ +/* #undef HAVE_GFLAGS */ + +/* Define to 1 if you have Google Test. */ +/* #undef HAVE_GTEST */ + +/* Define to 1 if you have the `lzo2' library (-llzo2). */ +/* #undef HAVE_LIBLZO2 */ + +/* Define to 1 if you have the `z' library (-lz). */ +/* #undef HAVE_LIBZ */ + +/* Define to 1 if you have the <sys/endian.h> header file. */ +/* #undef HAVE_SYS_ENDIAN_H */ + +/* Define to 1 if you have the <sys/mman.h> header file. */ +#define HAVE_SYS_MMAN_H 1 + +/* Define to 1 if you have the <sys/resource.h> header file. */ +#define HAVE_SYS_RESOURCE_H 1 + +/* Define to 1 if you have the <sys/time.h> header file. */ +#define HAVE_SYS_TIME_H 1 + +/* Define to 1 if you have the <sys/uio.h> header file. */ +#define HAVE_SYS_UIO_H 1 + +/* Define to 1 if you have the <unistd.h> header file. */ +#define HAVE_UNISTD_H 1 + +/* Define to 1 if you have the <windows.h> header file. */ +/* #undef HAVE_WINDOWS_H */ + +/* Define to 1 if you target processors with SSSE3+ and have <tmmintrin.h>. */ +#define SNAPPY_HAVE_SSSE3 0 + +/* Define to 1 if you target processors with BMI2+ and have <bmi2intrin.h>. */ +#define SNAPPY_HAVE_BMI2 0 + +/* Define to 1 if your processor stores words with the most significant byte + first (like Motorola and SPARC, unlike Intel and VAX). 
*/ +/* #undef SNAPPY_IS_BIG_ENDIAN */ + +#endif // THIRD_PARTY_SNAPPY_OPENSOURCE_CMAKE_CONFIG_H_ diff --git a/contrib/libs/snappy/config-win.h b/contrib/libs/snappy/config-win.h index b94935dd5e..58b8be4839 100644 --- a/contrib/libs/snappy/config-win.h +++ b/contrib/libs/snappy/config-win.h @@ -1,9 +1,9 @@ -#pragma once - -#include "config-linux.h" - -#undef HAVE_SYS_UIO_H -#undef HAVE_SYS_MMAN_H -#undef HAVE_UNISTD_H -#undef HAVE_BUILTIN_EXPECT -#undef HAVE_BUILTIN_CTZ +#pragma once + +#include "config-linux.h" + +#undef HAVE_SYS_UIO_H +#undef HAVE_SYS_MMAN_H +#undef HAVE_UNISTD_H +#undef HAVE_BUILTIN_EXPECT +#undef HAVE_BUILTIN_CTZ diff --git a/contrib/libs/snappy/config.h b/contrib/libs/snappy/config.h index b27f1cf733..5623f311fa 100644 --- a/contrib/libs/snappy/config.h +++ b/contrib/libs/snappy/config.h @@ -1,7 +1,7 @@ -#pragma once - -#if defined(_MSC_VER) -# include "config-win.h" -#else -# include "config-linux.h" -#endif +#pragma once + +#if defined(_MSC_VER) +# include "config-win.h" +#else +# include "config-linux.h" +#endif diff --git a/contrib/libs/snappy/include/snappy-c.h b/contrib/libs/snappy/include/snappy-c.h index 0d5391b4a8..2096f07db4 100644 --- a/contrib/libs/snappy/include/snappy-c.h +++ b/contrib/libs/snappy/include/snappy-c.h @@ -1 +1 @@ -#include "../snappy-c.h" /* inclink generated by yamaker */ +#include "../snappy-c.h" /* inclink generated by yamaker */ diff --git a/contrib/libs/snappy/include/snappy-sinksource.h b/contrib/libs/snappy/include/snappy-sinksource.h index 3be401dbed..7bf3eead7f 100644 --- a/contrib/libs/snappy/include/snappy-sinksource.h +++ b/contrib/libs/snappy/include/snappy-sinksource.h @@ -1 +1 @@ -#include "../snappy-sinksource.h" /* inclink generated by yamaker */ +#include "../snappy-sinksource.h" /* inclink generated by yamaker */ diff --git a/contrib/libs/snappy/include/snappy-stubs-public.h b/contrib/libs/snappy/include/snappy-stubs-public.h index d9942a40dd..b468f7b11c 100644 --- a/contrib/libs/snappy/include/snappy-stubs-public.h +++ b/contrib/libs/snappy/include/snappy-stubs-public.h @@ -1 +1 @@ -#include "../snappy-stubs-public.h" /* inclink generated by yamaker */ +#include "../snappy-stubs-public.h" /* inclink generated by yamaker */ diff --git a/contrib/libs/snappy/include/snappy.h b/contrib/libs/snappy/include/snappy.h index 4ac7282541..a3dc8633d6 100644 --- a/contrib/libs/snappy/include/snappy.h +++ b/contrib/libs/snappy/include/snappy.h @@ -1 +1 @@ -#include "../snappy.h" /* inclink generated by yamaker */ +#include "../snappy.h" /* inclink generated by yamaker */ diff --git a/contrib/libs/snappy/snappy-c.h b/contrib/libs/snappy/snappy-c.h index 86e1b87a22..32aa0c6b8b 100644 --- a/contrib/libs/snappy/snappy-c.h +++ b/contrib/libs/snappy/snappy-c.h @@ -30,8 +30,8 @@ * Plain C interface (a wrapper around the C++ implementation). 
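Since the snappy-c.h hunk here only shows the header's guards and the `snappy_status` enum, a hedged sketch of the usual call sequence for the plain C wrapper may help: size the output with `snappy_max_compressed_length`, then check each `snappy_status` result. The buffer handling and sample payload are illustrative assumptions; the functions themselves are the documented snappy-c.h entry points (callable from C or, as here, C++).

```c++
#include <cassert>
#include <cstring>
#include <vector>

#include "snappy-c.h"

int main() {
  const char input[] = "plain C interface example, plain C interface example";
  const size_t input_length = sizeof(input) - 1;

  // Worst-case output size for any input of this length.
  std::vector<char> compressed(snappy_max_compressed_length(input_length));
  size_t compressed_length = compressed.size();  // in: capacity, out: size
  if (snappy_compress(input, input_length,
                      compressed.data(), &compressed_length) != SNAPPY_OK) {
    return 1;
  }

  // Recover the exact uncompressed size, then round-trip.
  size_t uncompressed_length = 0;
  if (snappy_uncompressed_length(compressed.data(), compressed_length,
                                 &uncompressed_length) != SNAPPY_OK) {
    return 1;
  }
  std::vector<char> output(uncompressed_length);
  if (snappy_uncompress(compressed.data(), compressed_length,
                        output.data(), &uncompressed_length) != SNAPPY_OK) {
    return 1;
  }
  assert(uncompressed_length == input_length);
  assert(std::memcmp(output.data(), input, input_length) == 0);
  return 0;
}
```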
*/ -#ifndef THIRD_PARTY_SNAPPY_OPENSOURCE_SNAPPY_C_H_ -#define THIRD_PARTY_SNAPPY_OPENSOURCE_SNAPPY_C_H_ +#ifndef THIRD_PARTY_SNAPPY_OPENSOURCE_SNAPPY_C_H_ +#define THIRD_PARTY_SNAPPY_OPENSOURCE_SNAPPY_C_H_ #ifdef __cplusplus extern "C" { @@ -46,7 +46,7 @@ extern "C" { typedef enum { SNAPPY_OK = 0, SNAPPY_INVALID_INPUT = 1, - SNAPPY_BUFFER_TOO_SMALL = 2 + SNAPPY_BUFFER_TOO_SMALL = 2 } snappy_status; /* @@ -135,4 +135,4 @@ snappy_status snappy_validate_compressed_buffer(const char* compressed, } // extern "C" #endif -#endif /* THIRD_PARTY_SNAPPY_OPENSOURCE_SNAPPY_C_H_ */ +#endif /* THIRD_PARTY_SNAPPY_OPENSOURCE_SNAPPY_C_H_ */ diff --git a/contrib/libs/snappy/snappy-internal.h b/contrib/libs/snappy/snappy-internal.h index f5c39ce75c..1e1c307fef 100644 --- a/contrib/libs/snappy/snappy-internal.h +++ b/contrib/libs/snappy/snappy-internal.h @@ -28,38 +28,38 @@ // // Internals shared between the Snappy implementation and its unittest. -#ifndef THIRD_PARTY_SNAPPY_SNAPPY_INTERNAL_H_ -#define THIRD_PARTY_SNAPPY_SNAPPY_INTERNAL_H_ +#ifndef THIRD_PARTY_SNAPPY_SNAPPY_INTERNAL_H_ +#define THIRD_PARTY_SNAPPY_SNAPPY_INTERNAL_H_ #include "snappy-stubs-internal.h" namespace snappy { namespace internal { -// Working memory performs a single allocation to hold all scratch space -// required for compression. +// Working memory performs a single allocation to hold all scratch space +// required for compression. class WorkingMemory { public: - explicit WorkingMemory(size_t input_size); - ~WorkingMemory(); + explicit WorkingMemory(size_t input_size); + ~WorkingMemory(); // Allocates and clears a hash table using memory in "*this", // stores the number of buckets in "*table_size" and returns a pointer to // the base of the hash table. - uint16* GetHashTable(size_t fragment_size, int* table_size) const; - char* GetScratchInput() const { return input_; } - char* GetScratchOutput() const { return output_; } + uint16* GetHashTable(size_t fragment_size, int* table_size) const; + char* GetScratchInput() const { return input_; } + char* GetScratchOutput() const { return output_; } private: - char* mem_; // the allocated memory, never nullptr - size_t size_; // the size of the allocated memory, never 0 - uint16* table_; // the pointer to the hashtable - char* input_; // the pointer to the input scratch buffer - char* output_; // the pointer to the output scratch buffer - - // No copying - WorkingMemory(const WorkingMemory&); - void operator=(const WorkingMemory&); + char* mem_; // the allocated memory, never nullptr + size_t size_; // the size of the allocated memory, never 0 + uint16* table_; // the pointer to the hashtable + char* input_; // the pointer to the input scratch buffer + char* output_; // the pointer to the output scratch buffer + + // No copying + WorkingMemory(const WorkingMemory&); + void operator=(const WorkingMemory&); }; // Flat array compression that does not emit the "uncompressed length" @@ -79,74 +79,74 @@ char* CompressFragment(const char* input, uint16* table, const int table_size); -// Find the largest n such that +// Find the largest n such that // // s1[0,n-1] == s2[0,n-1] // and n <= (s2_limit - s2). // -// Return make_pair(n, n < 8). +// Return make_pair(n, n < 8). // Does not read *s2_limit or beyond. // Does not read *(s1 + (s2_limit - s2)) or beyond. // Requires that s2_limit >= s2. // -// Separate implementation for 64-bit, little-endian cpus. 
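Before the two optimized `FindMatchLength` implementations in the hunk below, here is a portable reference sketch of the contract the comment above states: the largest n with s1[0,n-1] == s2[0,n-1] and n <= (s2_limit - s2), returned together with the n < 8 flag. This byte-at-a-time version is a reading aid written for this review, not the library's code.

```c++
#include <cassert>
#include <cstddef>
#include <utility>

// Reference semantics of FindMatchLength: compare byte by byte, never
// reading *s2_limit or beyond, and report whether the match is short (< 8).
static std::pair<size_t, bool> FindMatchLengthReference(const char* s1,
                                                        const char* s2,
                                                        const char* s2_limit) {
  assert(s2_limit >= s2);
  size_t matched = 0;
  while (s2 + matched < s2_limit && s1[matched] == s2[matched]) {
    ++matched;
  }
  return std::make_pair(matched, matched < 8);
}
```

The 64-bit little-endian variant below computes the same result while comparing eight bytes at a time and using `FindLSBSetNonZero64` on the XOR of the first mismatching words.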
-#if !defined(SNAPPY_IS_BIG_ENDIAN) && \ - (defined(ARCH_K8) || defined(ARCH_PPC) || defined(ARCH_ARM)) -static inline std::pair<size_t, bool> FindMatchLength(const char* s1, - const char* s2, - const char* s2_limit) { - assert(s2_limit >= s2); - size_t matched = 0; - - // This block isn't necessary for correctness; we could just start looping - // immediately. As an optimization though, it is useful. It creates some not - // uncommon code paths that determine, without extra effort, whether the match - // length is less than 8. In short, we are hoping to avoid a conditional - // branch, and perhaps get better code layout from the C++ compiler. - if (SNAPPY_PREDICT_TRUE(s2 <= s2_limit - 8)) { - uint64 a1 = UNALIGNED_LOAD64(s1); - uint64 a2 = UNALIGNED_LOAD64(s2); - if (a1 != a2) { - return std::pair<size_t, bool>(Bits::FindLSBSetNonZero64(a1 ^ a2) >> 3, - true); - } else { - matched = 8; - s2 += 8; - } - } - +// Separate implementation for 64-bit, little-endian cpus. +#if !defined(SNAPPY_IS_BIG_ENDIAN) && \ + (defined(ARCH_K8) || defined(ARCH_PPC) || defined(ARCH_ARM)) +static inline std::pair<size_t, bool> FindMatchLength(const char* s1, + const char* s2, + const char* s2_limit) { + assert(s2_limit >= s2); + size_t matched = 0; + + // This block isn't necessary for correctness; we could just start looping + // immediately. As an optimization though, it is useful. It creates some not + // uncommon code paths that determine, without extra effort, whether the match + // length is less than 8. In short, we are hoping to avoid a conditional + // branch, and perhaps get better code layout from the C++ compiler. + if (SNAPPY_PREDICT_TRUE(s2 <= s2_limit - 8)) { + uint64 a1 = UNALIGNED_LOAD64(s1); + uint64 a2 = UNALIGNED_LOAD64(s2); + if (a1 != a2) { + return std::pair<size_t, bool>(Bits::FindLSBSetNonZero64(a1 ^ a2) >> 3, + true); + } else { + matched = 8; + s2 += 8; + } + } + // Find out how long the match is. We loop over the data 64 bits at a // time until we find a 64-bit block that doesn't match; then we find // the first non-matching bit and use that to calculate the total // length of the match. - while (SNAPPY_PREDICT_TRUE(s2 <= s2_limit - 8)) { - if (UNALIGNED_LOAD64(s2) == UNALIGNED_LOAD64(s1 + matched)) { + while (SNAPPY_PREDICT_TRUE(s2 <= s2_limit - 8)) { + if (UNALIGNED_LOAD64(s2) == UNALIGNED_LOAD64(s1 + matched)) { s2 += 8; matched += 8; } else { uint64 x = UNALIGNED_LOAD64(s2) ^ UNALIGNED_LOAD64(s1 + matched); int matching_bits = Bits::FindLSBSetNonZero64(x); matched += matching_bits >> 3; - assert(matched >= 8); - return std::pair<size_t, bool>(matched, false); + assert(matched >= 8); + return std::pair<size_t, bool>(matched, false); } } - while (SNAPPY_PREDICT_TRUE(s2 < s2_limit)) { - if (s1[matched] == *s2) { + while (SNAPPY_PREDICT_TRUE(s2 < s2_limit)) { + if (s1[matched] == *s2) { ++s2; ++matched; } else { - return std::pair<size_t, bool>(matched, matched < 8); + return std::pair<size_t, bool>(matched, matched < 8); } } - return std::pair<size_t, bool>(matched, matched < 8); + return std::pair<size_t, bool>(matched, matched < 8); } #else -static inline std::pair<size_t, bool> FindMatchLength(const char* s1, - const char* s2, - const char* s2_limit) { +static inline std::pair<size_t, bool> FindMatchLength(const char* s1, + const char* s2, + const char* s2_limit) { // Implementation based on the x86-64 version, above. 
- assert(s2_limit >= s2); + assert(s2_limit >= s2); int matched = 0; while (s2 <= s2_limit - 4 && @@ -164,68 +164,68 @@ static inline std::pair<size_t, bool> FindMatchLength(const char* s1, ++matched; } } - return std::pair<size_t, bool>(matched, matched < 8); + return std::pair<size_t, bool>(matched, matched < 8); } #endif -// Lookup tables for decompression code. Give --snappy_dump_decompression_table -// to the unit test to recompute char_table. - -enum { - LITERAL = 0, - COPY_1_BYTE_OFFSET = 1, // 3 bit length + 3 bits of offset in opcode - COPY_2_BYTE_OFFSET = 2, - COPY_4_BYTE_OFFSET = 3 -}; -static const int kMaximumTagLength = 5; // COPY_4_BYTE_OFFSET plus the actual offset. - -// Data stored per entry in lookup table: -// Range Bits-used Description -// ------------------------------------ -// 1..64 0..7 Literal/copy length encoded in opcode byte -// 0..7 8..10 Copy offset encoded in opcode byte / 256 -// 0..4 11..13 Extra bytes after opcode -// -// We use eight bits for the length even though 7 would have sufficed -// because of efficiency reasons: -// (1) Extracting a byte is faster than a bit-field -// (2) It properly aligns copy offset so we do not need a <<8 -static const uint16 char_table[256] = { - 0x0001, 0x0804, 0x1001, 0x2001, 0x0002, 0x0805, 0x1002, 0x2002, - 0x0003, 0x0806, 0x1003, 0x2003, 0x0004, 0x0807, 0x1004, 0x2004, - 0x0005, 0x0808, 0x1005, 0x2005, 0x0006, 0x0809, 0x1006, 0x2006, - 0x0007, 0x080a, 0x1007, 0x2007, 0x0008, 0x080b, 0x1008, 0x2008, - 0x0009, 0x0904, 0x1009, 0x2009, 0x000a, 0x0905, 0x100a, 0x200a, - 0x000b, 0x0906, 0x100b, 0x200b, 0x000c, 0x0907, 0x100c, 0x200c, - 0x000d, 0x0908, 0x100d, 0x200d, 0x000e, 0x0909, 0x100e, 0x200e, - 0x000f, 0x090a, 0x100f, 0x200f, 0x0010, 0x090b, 0x1010, 0x2010, - 0x0011, 0x0a04, 0x1011, 0x2011, 0x0012, 0x0a05, 0x1012, 0x2012, - 0x0013, 0x0a06, 0x1013, 0x2013, 0x0014, 0x0a07, 0x1014, 0x2014, - 0x0015, 0x0a08, 0x1015, 0x2015, 0x0016, 0x0a09, 0x1016, 0x2016, - 0x0017, 0x0a0a, 0x1017, 0x2017, 0x0018, 0x0a0b, 0x1018, 0x2018, - 0x0019, 0x0b04, 0x1019, 0x2019, 0x001a, 0x0b05, 0x101a, 0x201a, - 0x001b, 0x0b06, 0x101b, 0x201b, 0x001c, 0x0b07, 0x101c, 0x201c, - 0x001d, 0x0b08, 0x101d, 0x201d, 0x001e, 0x0b09, 0x101e, 0x201e, - 0x001f, 0x0b0a, 0x101f, 0x201f, 0x0020, 0x0b0b, 0x1020, 0x2020, - 0x0021, 0x0c04, 0x1021, 0x2021, 0x0022, 0x0c05, 0x1022, 0x2022, - 0x0023, 0x0c06, 0x1023, 0x2023, 0x0024, 0x0c07, 0x1024, 0x2024, - 0x0025, 0x0c08, 0x1025, 0x2025, 0x0026, 0x0c09, 0x1026, 0x2026, - 0x0027, 0x0c0a, 0x1027, 0x2027, 0x0028, 0x0c0b, 0x1028, 0x2028, - 0x0029, 0x0d04, 0x1029, 0x2029, 0x002a, 0x0d05, 0x102a, 0x202a, - 0x002b, 0x0d06, 0x102b, 0x202b, 0x002c, 0x0d07, 0x102c, 0x202c, - 0x002d, 0x0d08, 0x102d, 0x202d, 0x002e, 0x0d09, 0x102e, 0x202e, - 0x002f, 0x0d0a, 0x102f, 0x202f, 0x0030, 0x0d0b, 0x1030, 0x2030, - 0x0031, 0x0e04, 0x1031, 0x2031, 0x0032, 0x0e05, 0x1032, 0x2032, - 0x0033, 0x0e06, 0x1033, 0x2033, 0x0034, 0x0e07, 0x1034, 0x2034, - 0x0035, 0x0e08, 0x1035, 0x2035, 0x0036, 0x0e09, 0x1036, 0x2036, - 0x0037, 0x0e0a, 0x1037, 0x2037, 0x0038, 0x0e0b, 0x1038, 0x2038, - 0x0039, 0x0f04, 0x1039, 0x2039, 0x003a, 0x0f05, 0x103a, 0x203a, - 0x003b, 0x0f06, 0x103b, 0x203b, 0x003c, 0x0f07, 0x103c, 0x203c, - 0x0801, 0x0f08, 0x103d, 0x203d, 0x1001, 0x0f09, 0x103e, 0x203e, - 0x1801, 0x0f0a, 0x103f, 0x203f, 0x2001, 0x0f0b, 0x1040, 0x2040 -}; - +// Lookup tables for decompression code. Give --snappy_dump_decompression_table +// to the unit test to recompute char_table. 
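The per-entry bit layout documented in the char_table comment above (length in bits 0..7, high offset bits in 8..10, trailing extra bytes in 11..13) unpacks as in this hedged sketch; the struct and function names are illustrative, not the library's.

```c++
#include <cstdint>

// Unpack one char_table entry per the documented layout. Entry 0x0804, for
// example, decodes to length 4, offset-high 0, one extra byte after the tag.
struct TagInfo {
  uint32_t length;       // bits 0..7: literal/copy length encoded in opcode
  uint32_t offset_high;  // bits 8..10: copy offset encoded in opcode / 256
  uint32_t extra_bytes;  // bits 11..13: bytes following the opcode byte
};

inline TagInfo DecodeTagEntry(uint16_t entry) {
  return TagInfo{entry & 0xffu, (entry >> 8) & 0x7u, (entry >> 11) & 0x7u};
}
```

Keeping the length in a full byte rather than 7 bits is exactly the trade-off the comment explains: byte extraction is cheaper than a bit-field and keeps the offset field byte-aligned.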
+ +enum { + LITERAL = 0, + COPY_1_BYTE_OFFSET = 1, // 3 bit length + 3 bits of offset in opcode + COPY_2_BYTE_OFFSET = 2, + COPY_4_BYTE_OFFSET = 3 +}; +static const int kMaximumTagLength = 5; // COPY_4_BYTE_OFFSET plus the actual offset. + +// Data stored per entry in lookup table: +// Range Bits-used Description +// ------------------------------------ +// 1..64 0..7 Literal/copy length encoded in opcode byte +// 0..7 8..10 Copy offset encoded in opcode byte / 256 +// 0..4 11..13 Extra bytes after opcode +// +// We use eight bits for the length even though 7 would have sufficed +// because of efficiency reasons: +// (1) Extracting a byte is faster than a bit-field +// (2) It properly aligns copy offset so we do not need a <<8 +static const uint16 char_table[256] = { + 0x0001, 0x0804, 0x1001, 0x2001, 0x0002, 0x0805, 0x1002, 0x2002, + 0x0003, 0x0806, 0x1003, 0x2003, 0x0004, 0x0807, 0x1004, 0x2004, + 0x0005, 0x0808, 0x1005, 0x2005, 0x0006, 0x0809, 0x1006, 0x2006, + 0x0007, 0x080a, 0x1007, 0x2007, 0x0008, 0x080b, 0x1008, 0x2008, + 0x0009, 0x0904, 0x1009, 0x2009, 0x000a, 0x0905, 0x100a, 0x200a, + 0x000b, 0x0906, 0x100b, 0x200b, 0x000c, 0x0907, 0x100c, 0x200c, + 0x000d, 0x0908, 0x100d, 0x200d, 0x000e, 0x0909, 0x100e, 0x200e, + 0x000f, 0x090a, 0x100f, 0x200f, 0x0010, 0x090b, 0x1010, 0x2010, + 0x0011, 0x0a04, 0x1011, 0x2011, 0x0012, 0x0a05, 0x1012, 0x2012, + 0x0013, 0x0a06, 0x1013, 0x2013, 0x0014, 0x0a07, 0x1014, 0x2014, + 0x0015, 0x0a08, 0x1015, 0x2015, 0x0016, 0x0a09, 0x1016, 0x2016, + 0x0017, 0x0a0a, 0x1017, 0x2017, 0x0018, 0x0a0b, 0x1018, 0x2018, + 0x0019, 0x0b04, 0x1019, 0x2019, 0x001a, 0x0b05, 0x101a, 0x201a, + 0x001b, 0x0b06, 0x101b, 0x201b, 0x001c, 0x0b07, 0x101c, 0x201c, + 0x001d, 0x0b08, 0x101d, 0x201d, 0x001e, 0x0b09, 0x101e, 0x201e, + 0x001f, 0x0b0a, 0x101f, 0x201f, 0x0020, 0x0b0b, 0x1020, 0x2020, + 0x0021, 0x0c04, 0x1021, 0x2021, 0x0022, 0x0c05, 0x1022, 0x2022, + 0x0023, 0x0c06, 0x1023, 0x2023, 0x0024, 0x0c07, 0x1024, 0x2024, + 0x0025, 0x0c08, 0x1025, 0x2025, 0x0026, 0x0c09, 0x1026, 0x2026, + 0x0027, 0x0c0a, 0x1027, 0x2027, 0x0028, 0x0c0b, 0x1028, 0x2028, + 0x0029, 0x0d04, 0x1029, 0x2029, 0x002a, 0x0d05, 0x102a, 0x202a, + 0x002b, 0x0d06, 0x102b, 0x202b, 0x002c, 0x0d07, 0x102c, 0x202c, + 0x002d, 0x0d08, 0x102d, 0x202d, 0x002e, 0x0d09, 0x102e, 0x202e, + 0x002f, 0x0d0a, 0x102f, 0x202f, 0x0030, 0x0d0b, 0x1030, 0x2030, + 0x0031, 0x0e04, 0x1031, 0x2031, 0x0032, 0x0e05, 0x1032, 0x2032, + 0x0033, 0x0e06, 0x1033, 0x2033, 0x0034, 0x0e07, 0x1034, 0x2034, + 0x0035, 0x0e08, 0x1035, 0x2035, 0x0036, 0x0e09, 0x1036, 0x2036, + 0x0037, 0x0e0a, 0x1037, 0x2037, 0x0038, 0x0e0b, 0x1038, 0x2038, + 0x0039, 0x0f04, 0x1039, 0x2039, 0x003a, 0x0f05, 0x103a, 0x203a, + 0x003b, 0x0f06, 0x103b, 0x203b, 0x003c, 0x0f07, 0x103c, 0x203c, + 0x0801, 0x0f08, 0x103d, 0x203d, 0x1001, 0x0f09, 0x103e, 0x203e, + 0x1801, 0x0f0a, 0x103f, 0x203f, 0x2001, 0x0f0b, 0x1040, 0x2040 +}; + } // end namespace internal } // end namespace snappy -#endif // THIRD_PARTY_SNAPPY_SNAPPY_INTERNAL_H_ +#endif // THIRD_PARTY_SNAPPY_SNAPPY_INTERNAL_H_ diff --git a/contrib/libs/snappy/snappy-sinksource.cc b/contrib/libs/snappy/snappy-sinksource.cc index 35c20741a1..369a13215b 100644 --- a/contrib/libs/snappy/snappy-sinksource.cc +++ b/contrib/libs/snappy/snappy-sinksource.cc @@ -40,21 +40,21 @@ char* Sink::GetAppendBuffer(size_t length, char* scratch) { return scratch; } -char* Sink::GetAppendBufferVariable( - size_t min_size, size_t desired_size_hint, char* scratch, - size_t scratch_size, size_t* allocated_size) { - *allocated_size = scratch_size; - 
return scratch; -} - -void Sink::AppendAndTakeOwnership( - char* bytes, size_t n, - void (*deleter)(void*, const char*, size_t), - void *deleter_arg) { - Append(bytes, n); - (*deleter)(deleter_arg, bytes, n); -} - +char* Sink::GetAppendBufferVariable( + size_t min_size, size_t desired_size_hint, char* scratch, + size_t scratch_size, size_t* allocated_size) { + *allocated_size = scratch_size; + return scratch; +} + +void Sink::AppendAndTakeOwnership( + char* bytes, size_t n, + void (*deleter)(void*, const char*, size_t), + void *deleter_arg) { + Append(bytes, n); + (*deleter)(deleter_arg, bytes, n); +} + ByteArraySource::~ByteArraySource() { } size_t ByteArraySource::Available() const { return left_; } @@ -83,22 +83,22 @@ char* UncheckedByteArraySink::GetAppendBuffer(size_t len, char* scratch) { return dest_; } -void UncheckedByteArraySink::AppendAndTakeOwnership( - char* data, size_t n, - void (*deleter)(void*, const char*, size_t), - void *deleter_arg) { - if (data != dest_) { - memcpy(dest_, data, n); - (*deleter)(deleter_arg, data, n); - } - dest_ += n; +void UncheckedByteArraySink::AppendAndTakeOwnership( + char* data, size_t n, + void (*deleter)(void*, const char*, size_t), + void *deleter_arg) { + if (data != dest_) { + memcpy(dest_, data, n); + (*deleter)(deleter_arg, data, n); + } + dest_ += n; +} + +char* UncheckedByteArraySink::GetAppendBufferVariable( + size_t min_size, size_t desired_size_hint, char* scratch, + size_t scratch_size, size_t* allocated_size) { + *allocated_size = desired_size_hint; + return dest_; } - -char* UncheckedByteArraySink::GetAppendBufferVariable( - size_t min_size, size_t desired_size_hint, char* scratch, - size_t scratch_size, size_t* allocated_size) { - *allocated_size = desired_size_hint; - return dest_; -} - -} // namespace snappy + +} // namespace snappy diff --git a/contrib/libs/snappy/snappy-sinksource.h b/contrib/libs/snappy/snappy-sinksource.h index 9bfeecede6..8afcdaaa2c 100644 --- a/contrib/libs/snappy/snappy-sinksource.h +++ b/contrib/libs/snappy/snappy-sinksource.h @@ -26,8 +26,8 @@ // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -#ifndef THIRD_PARTY_SNAPPY_SNAPPY_SINKSOURCE_H_ -#define THIRD_PARTY_SNAPPY_SNAPPY_SINKSOURCE_H_ +#ifndef THIRD_PARTY_SNAPPY_SNAPPY_SINKSOURCE_H_ +#define THIRD_PARTY_SNAPPY_SNAPPY_SINKSOURCE_H_ #include <stddef.h> @@ -59,48 +59,48 @@ class Sink { // The default implementation always returns the scratch buffer. virtual char* GetAppendBuffer(size_t length, char* scratch); - // For higher performance, Sink implementations can provide custom - // AppendAndTakeOwnership() and GetAppendBufferVariable() methods. - // These methods can reduce the number of copies done during - // compression/decompression. - - // Append "bytes[0,n-1] to the sink. Takes ownership of "bytes" - // and calls the deleter function as (*deleter)(deleter_arg, bytes, n) - // to free the buffer. deleter function must be non NULL. - // - // The default implementation just calls Append and frees "bytes". - // Other implementations may avoid a copy while appending the buffer. - virtual void AppendAndTakeOwnership( - char* bytes, size_t n, void (*deleter)(void*, const char*, size_t), - void *deleter_arg); - - // Returns a writable buffer for appending and writes the buffer's capacity to - // *allocated_size. Guarantees *allocated_size >= min_size. - // May return a pointer to the caller-owned scratch buffer which must have - // scratch_size >= min_size. 
- // - // The returned buffer is only valid until the next operation - // on this ByteSink. - // - // After writing at most *allocated_size bytes, call Append() with the - // pointer returned from this function and the number of bytes written. - // Many Append() implementations will avoid copying bytes if this function - // returned an internal buffer. - // - // If the sink implementation allocates or reallocates an internal buffer, - // it should use the desired_size_hint if appropriate. If a caller cannot - // provide a reasonable guess at the desired capacity, it should set - // desired_size_hint = 0. - // - // If a non-scratch buffer is returned, the caller may only pass - // a prefix to it to Append(). That is, it is not correct to pass an - // interior pointer to Append(). - // - // The default implementation always returns the scratch buffer. - virtual char* GetAppendBufferVariable( - size_t min_size, size_t desired_size_hint, char* scratch, - size_t scratch_size, size_t* allocated_size); - + // For higher performance, Sink implementations can provide custom + // AppendAndTakeOwnership() and GetAppendBufferVariable() methods. + // These methods can reduce the number of copies done during + // compression/decompression. + + // Append "bytes[0,n-1] to the sink. Takes ownership of "bytes" + // and calls the deleter function as (*deleter)(deleter_arg, bytes, n) + // to free the buffer. deleter function must be non NULL. + // + // The default implementation just calls Append and frees "bytes". + // Other implementations may avoid a copy while appending the buffer. + virtual void AppendAndTakeOwnership( + char* bytes, size_t n, void (*deleter)(void*, const char*, size_t), + void *deleter_arg); + + // Returns a writable buffer for appending and writes the buffer's capacity to + // *allocated_size. Guarantees *allocated_size >= min_size. + // May return a pointer to the caller-owned scratch buffer which must have + // scratch_size >= min_size. + // + // The returned buffer is only valid until the next operation + // on this ByteSink. + // + // After writing at most *allocated_size bytes, call Append() with the + // pointer returned from this function and the number of bytes written. + // Many Append() implementations will avoid copying bytes if this function + // returned an internal buffer. + // + // If the sink implementation allocates or reallocates an internal buffer, + // it should use the desired_size_hint if appropriate. If a caller cannot + // provide a reasonable guess at the desired capacity, it should set + // desired_size_hint = 0. + // + // If a non-scratch buffer is returned, the caller may only pass + // a prefix to it to Append(). That is, it is not correct to pass an + // interior pointer to Append(). + // + // The default implementation always returns the scratch buffer. 
+ virtual char* GetAppendBufferVariable( + size_t min_size, size_t desired_size_hint, char* scratch, + size_t scratch_size, size_t* allocated_size); + private: // No copying Sink(const Sink&); @@ -162,12 +162,12 @@ class UncheckedByteArraySink : public Sink { virtual ~UncheckedByteArraySink(); virtual void Append(const char* data, size_t n); virtual char* GetAppendBuffer(size_t len, char* scratch); - virtual char* GetAppendBufferVariable( - size_t min_size, size_t desired_size_hint, char* scratch, - size_t scratch_size, size_t* allocated_size); - virtual void AppendAndTakeOwnership( - char* bytes, size_t n, void (*deleter)(void*, const char*, size_t), - void *deleter_arg); + virtual char* GetAppendBufferVariable( + size_t min_size, size_t desired_size_hint, char* scratch, + size_t scratch_size, size_t* allocated_size); + virtual void AppendAndTakeOwnership( + char* bytes, size_t n, void (*deleter)(void*, const char*, size_t), + void *deleter_arg); // Return the current output pointer so that a caller can see how // many bytes were produced. @@ -177,6 +177,6 @@ class UncheckedByteArraySink : public Sink { char* dest_; }; -} // namespace snappy +} // namespace snappy -#endif // THIRD_PARTY_SNAPPY_SNAPPY_SINKSOURCE_H_ +#endif // THIRD_PARTY_SNAPPY_SNAPPY_SINKSOURCE_H_ diff --git a/contrib/libs/snappy/snappy-stubs-internal.cc b/contrib/libs/snappy/snappy-stubs-internal.cc index 4ab6f453d6..66ed2e9039 100644 --- a/contrib/libs/snappy/snappy-stubs-internal.cc +++ b/contrib/libs/snappy/snappy-stubs-internal.cc @@ -27,16 +27,16 @@ // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. #include <algorithm> -#include <string> +#include <string> #include "snappy-stubs-internal.h" namespace snappy { -void Varint::Append32(std::string* s, uint32 value) { +void Varint::Append32(std::string* s, uint32 value) { char buf[Varint::kMax32]; const char* p = Varint::Encode32(buf, value); - s->append(buf, p - buf); + s->append(buf, p - buf); } } // namespace snappy diff --git a/contrib/libs/snappy/snappy-stubs-internal.h b/contrib/libs/snappy/snappy-stubs-internal.h index 128553b328..4854689d17 100644 --- a/contrib/libs/snappy/snappy-stubs-internal.h +++ b/contrib/libs/snappy/snappy-stubs-internal.h @@ -28,43 +28,43 @@ // // Various stubs for the open-source version of Snappy. 
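[Editor's aside: the Varint::Append32 hunk just above restores the standard little-endian base-128 varint append. A self-contained sketch of that encoding, my own illustration rather than the library's Encode32:]

```cpp
#include <cstdint>
#include <string>

// Little-endian base-128 ("varint") encoding as used by
// Varint::Encode32/Append32: seven payload bits per byte, high bit set
// on every byte except the last.
void AppendVarint32(std::string* s, uint32_t value) {
  while (value >= 0x80) {
    s->push_back(static_cast<char>((value & 0x7f) | 0x80));
    value >>= 7;
  }
  s->push_back(static_cast<char>(value));
}

// Example: 300 (binary 1'0010'1100) encodes as the two bytes 0xac 0x02.
```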
-#ifndef THIRD_PARTY_SNAPPY_OPENSOURCE_SNAPPY_STUBS_INTERNAL_H_ -#define THIRD_PARTY_SNAPPY_OPENSOURCE_SNAPPY_STUBS_INTERNAL_H_ +#ifndef THIRD_PARTY_SNAPPY_OPENSOURCE_SNAPPY_STUBS_INTERNAL_H_ +#define THIRD_PARTY_SNAPPY_OPENSOURCE_SNAPPY_STUBS_INTERNAL_H_ -#ifdef HAVE_CONFIG_H -#include "config.h" +#ifdef HAVE_CONFIG_H +#include "config.h" #endif -#include <string> +#include <string> #include <assert.h> #include <stdlib.h> #include <string.h> -#ifdef HAVE_SYS_MMAN_H -#include <sys/mman.h> -#endif - -#ifdef HAVE_UNISTD_H -#include <unistd.h> -#endif - -#if defined(_MSC_VER) -#include <intrin.h> -#endif // defined(_MSC_VER) - -#ifndef __has_feature -#define __has_feature(x) 0 -#endif - -#if __has_feature(memory_sanitizer) -#include <sanitizer/msan_interface.h> -#define SNAPPY_ANNOTATE_MEMORY_IS_INITIALIZED(address, size) \ - __msan_unpoison((address), (size)) -#else -#define SNAPPY_ANNOTATE_MEMORY_IS_INITIALIZED(address, size) /* empty */ -#endif // __has_feature(memory_sanitizer) - +#ifdef HAVE_SYS_MMAN_H +#include <sys/mman.h> +#endif + +#ifdef HAVE_UNISTD_H +#include <unistd.h> +#endif + +#if defined(_MSC_VER) +#include <intrin.h> +#endif // defined(_MSC_VER) + +#ifndef __has_feature +#define __has_feature(x) 0 +#endif + +#if __has_feature(memory_sanitizer) +#include <sanitizer/msan_interface.h> +#define SNAPPY_ANNOTATE_MEMORY_IS_INITIALIZED(address, size) \ + __msan_unpoison((address), (size)) +#else +#define SNAPPY_ANNOTATE_MEMORY_IS_INITIALIZED(address, size) /* empty */ +#endif // __has_feature(memory_sanitizer) + #include "snappy-stubs-public.h" #if defined(__x86_64__) @@ -72,14 +72,14 @@ // Enable 64-bit optimized versions of some routines. #define ARCH_K8 1 -#elif defined(__ppc64__) - -#define ARCH_PPC 1 - -#elif defined(__aarch64__) - -#define ARCH_ARM 1 - +#elif defined(__ppc64__) + +#define ARCH_PPC 1 + +#elif defined(__aarch64__) + +#define ARCH_ARM 1 + #endif // Needed by OS X, among others. @@ -95,14 +95,14 @@ #endif #define ARRAYSIZE(a) (sizeof(a) / sizeof(*(a))) -// Static prediction hints. -#ifdef HAVE_BUILTIN_EXPECT -#define SNAPPY_PREDICT_FALSE(x) (__builtin_expect(x, 0)) -#define SNAPPY_PREDICT_TRUE(x) (__builtin_expect(!!(x), 1)) -#else -#define SNAPPY_PREDICT_FALSE(x) x -#define SNAPPY_PREDICT_TRUE(x) x -#endif +// Static prediction hints. +#ifdef HAVE_BUILTIN_EXPECT +#define SNAPPY_PREDICT_FALSE(x) (__builtin_expect(x, 0)) +#define SNAPPY_PREDICT_TRUE(x) (__builtin_expect(!!(x), 1)) +#else +#define SNAPPY_PREDICT_FALSE(x) x +#define SNAPPY_PREDICT_TRUE(x) x +#endif // This is only used for recomputing the tag byte table used during // decompression; for simplicity we just remove it from the open-source @@ -120,10 +120,10 @@ static const int64 kint64max = static_cast<int64>(0x7FFFFFFFFFFFFFFFLL); // Potentially unaligned loads and stores. -// x86, PowerPC, and ARM64 can simply do these loads and stores native. +// x86, PowerPC, and ARM64 can simply do these loads and stores native. -#if defined(__i386__) || defined(__x86_64__) || defined(__powerpc__) || \ - defined(__aarch64__) +#if defined(__i386__) || defined(__x86_64__) || defined(__powerpc__) || \ + defined(__aarch64__) #define UNALIGNED_LOAD16(_p) (*reinterpret_cast<const uint16 *>(_p)) #define UNALIGNED_LOAD32(_p) (*reinterpret_cast<const uint32 *>(_p)) @@ -141,19 +141,19 @@ static const int64 kint64max = static_cast<int64>(0x7FFFFFFFFFFFFFFFLL); // sub-architectures. // // This is a mess, but there's not much we can do about it. 
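[Editor's aside on the UNALIGNED_LOAD/STORE macros restored here: on targets without native unaligned access, the portable idiom is a small memcpy through a local, which GCC and Clang lower to a single load where the hardware allows it. An illustrative standalone sketch:]

```cpp
#include <cstdint>
#include <cstring>

// Portable unaligned 32-bit load: copying through a local sidesteps the
// undefined behaviour of dereferencing a misaligned uint32_t*, and
// modern compilers reduce it to one mov on x86 or one ldr on ARM64.
inline uint32_t LoadU32(const void* p) {
  uint32_t v;
  std::memcpy(&v, p, sizeof(v));
  return v;
}
```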
-// -// To further complicate matters, only LDR instructions (single reads) are -// allowed to be unaligned, not LDRD (two reads) or LDM (many reads). Unless we -// explicitly tell the compiler that these accesses can be unaligned, it can and -// will combine accesses. On armcc, the way to signal this is done by accessing -// through the type (uint32 __packed *), but GCC has no such attribute -// (it ignores __attribute__((packed)) on individual variables). However, -// we can tell it that a _struct_ is unaligned, which has the same effect, -// so we do that. +// +// To further complicate matters, only LDR instructions (single reads) are +// allowed to be unaligned, not LDRD (two reads) or LDM (many reads). Unless we +// explicitly tell the compiler that these accesses can be unaligned, it can and +// will combine accesses. On armcc, the way to signal this is done by accessing +// through the type (uint32 __packed *), but GCC has no such attribute +// (it ignores __attribute__((packed)) on individual variables). However, +// we can tell it that a _struct_ is unaligned, which has the same effect, +// so we do that. #elif defined(__arm__) && \ - !defined(__ARM_ARCH_4__) && \ - !defined(__ARM_ARCH_4T__) && \ + !defined(__ARM_ARCH_4__) && \ + !defined(__ARM_ARCH_4T__) && \ !defined(__ARM_ARCH_5__) && \ !defined(__ARM_ARCH_5T__) && \ !defined(__ARM_ARCH_5TE__) && \ @@ -165,41 +165,41 @@ static const int64 kint64max = static_cast<int64>(0x7FFFFFFFFFFFFFFFLL); !defined(__ARM_ARCH_6ZK__) && \ !defined(__ARM_ARCH_6T2__) -#if __GNUC__ -#define ATTRIBUTE_PACKED __attribute__((__packed__)) -#else -#define ATTRIBUTE_PACKED -#endif - -namespace base { -namespace internal { - -struct Unaligned16Struct { - uint16 value; - uint8 dummy; // To make the size non-power-of-two. -} ATTRIBUTE_PACKED; - -struct Unaligned32Struct { - uint32 value; - uint8 dummy; // To make the size non-power-of-two. -} ATTRIBUTE_PACKED; - -} // namespace internal -} // namespace base - -#define UNALIGNED_LOAD16(_p) \ - ((reinterpret_cast<const ::snappy::base::internal::Unaligned16Struct *>(_p))->value) -#define UNALIGNED_LOAD32(_p) \ - ((reinterpret_cast<const ::snappy::base::internal::Unaligned32Struct *>(_p))->value) - -#define UNALIGNED_STORE16(_p, _val) \ - ((reinterpret_cast< ::snappy::base::internal::Unaligned16Struct *>(_p))->value = \ - (_val)) -#define UNALIGNED_STORE32(_p, _val) \ - ((reinterpret_cast< ::snappy::base::internal::Unaligned32Struct *>(_p))->value = \ - (_val)) - -// TODO: NEON supports unaligned 64-bit loads and stores. +#if __GNUC__ +#define ATTRIBUTE_PACKED __attribute__((__packed__)) +#else +#define ATTRIBUTE_PACKED +#endif + +namespace base { +namespace internal { + +struct Unaligned16Struct { + uint16 value; + uint8 dummy; // To make the size non-power-of-two. +} ATTRIBUTE_PACKED; + +struct Unaligned32Struct { + uint32 value; + uint8 dummy; // To make the size non-power-of-two. 
+} ATTRIBUTE_PACKED; + +} // namespace internal +} // namespace base + +#define UNALIGNED_LOAD16(_p) \ + ((reinterpret_cast<const ::snappy::base::internal::Unaligned16Struct *>(_p))->value) +#define UNALIGNED_LOAD32(_p) \ + ((reinterpret_cast<const ::snappy::base::internal::Unaligned32Struct *>(_p))->value) + +#define UNALIGNED_STORE16(_p, _val) \ + ((reinterpret_cast< ::snappy::base::internal::Unaligned16Struct *>(_p))->value = \ + (_val)) +#define UNALIGNED_STORE32(_p, _val) \ + ((reinterpret_cast< ::snappy::base::internal::Unaligned32Struct *>(_p))->value = \ + (_val)) + +// TODO: NEON supports unaligned 64-bit loads and stores. // See if that would be more efficient on platforms supporting it, // at least for copies. @@ -250,66 +250,66 @@ inline void UNALIGNED_STORE64(void *p, uint64 v) { #endif -// The following guarantees declaration of the byte swap functions. -#if defined(SNAPPY_IS_BIG_ENDIAN) - -#ifdef HAVE_SYS_BYTEORDER_H -#include <sys/byteorder.h> -#endif - -#ifdef HAVE_SYS_ENDIAN_H -#include <sys/endian.h> -#endif - -#ifdef _MSC_VER -#include <stdlib.h> -#define bswap_16(x) _byteswap_ushort(x) -#define bswap_32(x) _byteswap_ulong(x) -#define bswap_64(x) _byteswap_uint64(x) - -#elif defined(__APPLE__) -// Mac OS X / Darwin features -#include <libkern/OSByteOrder.h> -#define bswap_16(x) OSSwapInt16(x) -#define bswap_32(x) OSSwapInt32(x) -#define bswap_64(x) OSSwapInt64(x) - -#elif defined(HAVE_BYTESWAP_H) -#include <byteswap.h> - -#elif defined(bswap32) -// FreeBSD defines bswap{16,32,64} in <sys/endian.h> (already #included). -#define bswap_16(x) bswap16(x) -#define bswap_32(x) bswap32(x) -#define bswap_64(x) bswap64(x) - -#elif defined(BSWAP_64) -// Solaris 10 defines BSWAP_{16,32,64} in <sys/byteorder.h> (already #included). -#define bswap_16(x) BSWAP_16(x) -#define bswap_32(x) BSWAP_32(x) -#define bswap_64(x) BSWAP_64(x) - -#else - -inline uint16 bswap_16(uint16 x) { - return (x << 8) | (x >> 8); +// The following guarantees declaration of the byte swap functions. +#if defined(SNAPPY_IS_BIG_ENDIAN) + +#ifdef HAVE_SYS_BYTEORDER_H +#include <sys/byteorder.h> +#endif + +#ifdef HAVE_SYS_ENDIAN_H +#include <sys/endian.h> +#endif + +#ifdef _MSC_VER +#include <stdlib.h> +#define bswap_16(x) _byteswap_ushort(x) +#define bswap_32(x) _byteswap_ulong(x) +#define bswap_64(x) _byteswap_uint64(x) + +#elif defined(__APPLE__) +// Mac OS X / Darwin features +#include <libkern/OSByteOrder.h> +#define bswap_16(x) OSSwapInt16(x) +#define bswap_32(x) OSSwapInt32(x) +#define bswap_64(x) OSSwapInt64(x) + +#elif defined(HAVE_BYTESWAP_H) +#include <byteswap.h> + +#elif defined(bswap32) +// FreeBSD defines bswap{16,32,64} in <sys/endian.h> (already #included). +#define bswap_16(x) bswap16(x) +#define bswap_32(x) bswap32(x) +#define bswap_64(x) bswap64(x) + +#elif defined(BSWAP_64) +// Solaris 10 defines BSWAP_{16,32,64} in <sys/byteorder.h> (already #included). 
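[Editor's aside: the packed-struct accessors above work because GCC honors __attribute__((__packed__)) on a struct (though not on individual variables), which tells the compiler every field access may be misaligned. A minimal standalone version of the same trick, GCC/Clang-specific and for illustration only:]

```cpp
#include <cstdint>

// Reading through a __packed__ struct signals a possibly misaligned
// address, so the compiler emits an access pattern that is safe even on
// pre-ARMv7 cores where only single-word LDR may be unaligned.
struct Packed32 {
  uint32_t value;
  uint8_t dummy;  // keeps sizeof non-power-of-two, as in the original
} __attribute__((__packed__));

inline uint32_t LoadViaPackedStruct(const void* p) {
  return reinterpret_cast<const Packed32*>(p)->value;
}
```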
+#define bswap_16(x) BSWAP_16(x) +#define bswap_32(x) BSWAP_32(x) +#define bswap_64(x) BSWAP_64(x) + +#else + +inline uint16 bswap_16(uint16 x) { + return (x << 8) | (x >> 8); +} + +inline uint32 bswap_32(uint32 x) { + x = ((x & 0xff00ff00UL) >> 8) | ((x & 0x00ff00ffUL) << 8); + return (x >> 16) | (x << 16); +} + +inline uint64 bswap_64(uint64 x) { + x = ((x & 0xff00ff00ff00ff00ULL) >> 8) | ((x & 0x00ff00ff00ff00ffULL) << 8); + x = ((x & 0xffff0000ffff0000ULL) >> 16) | ((x & 0x0000ffff0000ffffULL) << 16); + return (x >> 32) | (x << 32); } -inline uint32 bswap_32(uint32 x) { - x = ((x & 0xff00ff00UL) >> 8) | ((x & 0x00ff00ffUL) << 8); - return (x >> 16) | (x << 16); -} - -inline uint64 bswap_64(uint64 x) { - x = ((x & 0xff00ff00ff00ff00ULL) >> 8) | ((x & 0x00ff00ff00ff00ffULL) << 8); - x = ((x & 0xffff0000ffff0000ULL) >> 16) | ((x & 0x0000ffff0000ffffULL) << 16); - return (x >> 32) | (x << 32); -} - -#endif - -#endif // defined(SNAPPY_IS_BIG_ENDIAN) - +#endif + +#endif // defined(SNAPPY_IS_BIG_ENDIAN) + // Convert to little-endian storage, opposite of network format. // Convert x from host to little endian: x = LittleEndian.FromHost(x); // convert x from little endian to host: x = LittleEndian.ToHost(x); @@ -322,28 +322,28 @@ inline uint64 bswap_64(uint64 x) { class LittleEndian { public: // Conversion functions. -#if defined(SNAPPY_IS_BIG_ENDIAN) +#if defined(SNAPPY_IS_BIG_ENDIAN) + + static uint16 FromHost16(uint16 x) { return bswap_16(x); } + static uint16 ToHost16(uint16 x) { return bswap_16(x); } - static uint16 FromHost16(uint16 x) { return bswap_16(x); } - static uint16 ToHost16(uint16 x) { return bswap_16(x); } + static uint32 FromHost32(uint32 x) { return bswap_32(x); } + static uint32 ToHost32(uint32 x) { return bswap_32(x); } - static uint32 FromHost32(uint32 x) { return bswap_32(x); } - static uint32 ToHost32(uint32 x) { return bswap_32(x); } - static bool IsLittleEndian() { return false; } -#else // !defined(SNAPPY_IS_BIG_ENDIAN) - - static uint16 FromHost16(uint16 x) { return x; } - static uint16 ToHost16(uint16 x) { return x; } - - static uint32 FromHost32(uint32 x) { return x; } - static uint32 ToHost32(uint32 x) { return x; } - - static bool IsLittleEndian() { return true; } - -#endif // !defined(SNAPPY_IS_BIG_ENDIAN) - +#else // !defined(SNAPPY_IS_BIG_ENDIAN) + + static uint16 FromHost16(uint16 x) { return x; } + static uint16 ToHost16(uint16 x) { return x; } + + static uint32 FromHost32(uint32 x) { return x; } + static uint32 ToHost32(uint32 x) { return x; } + + static bool IsLittleEndian() { return true; } + +#endif // !defined(SNAPPY_IS_BIG_ENDIAN) + // Functions to do unaligned loads and stores in little-endian order. static uint16 Load16(const void *p) { return ToHost16(UNALIGNED_LOAD16(p)); @@ -365,9 +365,9 @@ class LittleEndian { // Some bit-manipulation functions. class Bits { public: - // Return floor(log2(n)) for positive integer n. - static int Log2FloorNonZero(uint32 n); - + // Return floor(log2(n)) for positive integer n. + static int Log2FloorNonZero(uint32 n); + // Return floor(log2(n)) for positive integer n. Returns -1 iff n == 0. static int Log2Floor(uint32 n); @@ -375,85 +375,85 @@ class Bits { // undefined value if n == 0. FindLSBSetNonZero() is similar to ffs() except // that it's 0-indexed. 
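[Editor's aside: the portable bswap_32 fallback restored above swaps adjacent bytes and then the two 16-bit halves. A quick standalone sanity check of that implementation, illustrative and not part of the patch:]

```cpp
#include <cassert>
#include <cstdint>

// Mirror of the portable bswap_32 fallback above: swap adjacent bytes,
// then swap the two 16-bit halves.
inline uint32_t Bswap32(uint32_t x) {
  x = ((x & 0xff00ff00UL) >> 8) | ((x & 0x00ff00ffUL) << 8);
  return (x >> 16) | (x << 16);
}

int main() {
  assert(Bswap32(0x12345678u) == 0x78563412u);
  return 0;
}
```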
static int FindLSBSetNonZero(uint32 n); - -#if defined(ARCH_K8) || defined(ARCH_PPC) || defined(ARCH_ARM) + +#if defined(ARCH_K8) || defined(ARCH_PPC) || defined(ARCH_ARM) static int FindLSBSetNonZero64(uint64 n); -#endif // defined(ARCH_K8) || defined(ARCH_PPC) || defined(ARCH_ARM) +#endif // defined(ARCH_K8) || defined(ARCH_PPC) || defined(ARCH_ARM) private: - // No copying - Bits(const Bits&); - void operator=(const Bits&); + // No copying + Bits(const Bits&); + void operator=(const Bits&); }; #ifdef HAVE_BUILTIN_CTZ -inline int Bits::Log2FloorNonZero(uint32 n) { - assert(n != 0); - // (31 ^ x) is equivalent to (31 - x) for x in [0, 31]. An easy proof - // represents subtraction in base 2 and observes that there's no carry. - // - // GCC and Clang represent __builtin_clz on x86 as 31 ^ _bit_scan_reverse(x). - // Using "31 ^" here instead of "31 -" allows the optimizer to strip the - // function body down to _bit_scan_reverse(x). - return 31 ^ __builtin_clz(n); -} - +inline int Bits::Log2FloorNonZero(uint32 n) { + assert(n != 0); + // (31 ^ x) is equivalent to (31 - x) for x in [0, 31]. An easy proof + // represents subtraction in base 2 and observes that there's no carry. + // + // GCC and Clang represent __builtin_clz on x86 as 31 ^ _bit_scan_reverse(x). + // Using "31 ^" here instead of "31 -" allows the optimizer to strip the + // function body down to _bit_scan_reverse(x). + return 31 ^ __builtin_clz(n); +} + inline int Bits::Log2Floor(uint32 n) { - return (n == 0) ? -1 : Bits::Log2FloorNonZero(n); + return (n == 0) ? -1 : Bits::Log2FloorNonZero(n); } inline int Bits::FindLSBSetNonZero(uint32 n) { - assert(n != 0); + assert(n != 0); return __builtin_ctz(n); } -#if defined(ARCH_K8) || defined(ARCH_PPC) || defined(ARCH_ARM) +#if defined(ARCH_K8) || defined(ARCH_PPC) || defined(ARCH_ARM) inline int Bits::FindLSBSetNonZero64(uint64 n) { - assert(n != 0); + assert(n != 0); return __builtin_ctzll(n); } -#endif // defined(ARCH_K8) || defined(ARCH_PPC) || defined(ARCH_ARM) - -#elif defined(_MSC_VER) - -inline int Bits::Log2FloorNonZero(uint32 n) { - assert(n != 0); - unsigned long where; - _BitScanReverse(&where, n); - return static_cast<int>(where); -} - -inline int Bits::Log2Floor(uint32 n) { - unsigned long where; - if (_BitScanReverse(&where, n)) - return static_cast<int>(where); - return -1; -} - -inline int Bits::FindLSBSetNonZero(uint32 n) { - assert(n != 0); - unsigned long where; - if (_BitScanForward(&where, n)) - return static_cast<int>(where); - return 32; -} - -#if defined(ARCH_K8) || defined(ARCH_PPC) || defined(ARCH_ARM) -inline int Bits::FindLSBSetNonZero64(uint64 n) { - assert(n != 0); - unsigned long where; - if (_BitScanForward64(&where, n)) - return static_cast<int>(where); - return 64; -} -#endif // defined(ARCH_K8) || defined(ARCH_PPC) || defined(ARCH_ARM) - +#endif // defined(ARCH_K8) || defined(ARCH_PPC) || defined(ARCH_ARM) + +#elif defined(_MSC_VER) + +inline int Bits::Log2FloorNonZero(uint32 n) { + assert(n != 0); + unsigned long where; + _BitScanReverse(&where, n); + return static_cast<int>(where); +} + +inline int Bits::Log2Floor(uint32 n) { + unsigned long where; + if (_BitScanReverse(&where, n)) + return static_cast<int>(where); + return -1; +} + +inline int Bits::FindLSBSetNonZero(uint32 n) { + assert(n != 0); + unsigned long where; + if (_BitScanForward(&where, n)) + return static_cast<int>(where); + return 32; +} + +#if defined(ARCH_K8) || defined(ARCH_PPC) || defined(ARCH_ARM) +inline int Bits::FindLSBSetNonZero64(uint64 n) { + assert(n != 0); + unsigned 
long where; + if (_BitScanForward64(&where, n)) + return static_cast<int>(where); + return 64; +} +#endif // defined(ARCH_K8) || defined(ARCH_PPC) || defined(ARCH_ARM) + #else // Portable versions. -inline int Bits::Log2FloorNonZero(uint32 n) { - assert(n != 0); - +inline int Bits::Log2FloorNonZero(uint32 n) { + assert(n != 0); + int log = 0; uint32 value = n; for (int i = 4; i >= 0; --i) { @@ -468,13 +468,13 @@ inline int Bits::Log2FloorNonZero(uint32 n) { return log; } -inline int Bits::Log2Floor(uint32 n) { - return (n == 0) ? -1 : Bits::Log2FloorNonZero(n); -} - +inline int Bits::Log2Floor(uint32 n) { + return (n == 0) ? -1 : Bits::Log2FloorNonZero(n); +} + inline int Bits::FindLSBSetNonZero(uint32 n) { - assert(n != 0); - + assert(n != 0); + int rc = 31; for (int i = 4, shift = 1 << 4; i >= 0; --i) { const uint32 x = n << shift; @@ -487,11 +487,11 @@ inline int Bits::FindLSBSetNonZero(uint32 n) { return rc; } -#if defined(ARCH_K8) || defined(ARCH_PPC) || defined(ARCH_ARM) +#if defined(ARCH_K8) || defined(ARCH_PPC) || defined(ARCH_ARM) // FindLSBSetNonZero64() is defined in terms of FindLSBSetNonZero(). inline int Bits::FindLSBSetNonZero64(uint64 n) { - assert(n != 0); - + assert(n != 0); + const uint32 bottombits = static_cast<uint32>(n); if (bottombits == 0) { // Bottom bits are zero, so scan in top bits @@ -500,7 +500,7 @@ inline int Bits::FindLSBSetNonZero64(uint64 n) { return FindLSBSetNonZero(bottombits); } } -#endif // defined(ARCH_K8) || defined(ARCH_PPC) || defined(ARCH_ARM) +#endif // defined(ARCH_K8) || defined(ARCH_PPC) || defined(ARCH_ARM) #endif // End portable versions. @@ -524,7 +524,7 @@ class Varint { static char* Encode32(char* ptr, uint32 v); // EFFECTS Appends the varint representation of "value" to "*s". - static void Append32(std::string* s, uint32 value); + static void Append32(std::string* s, uint32 value); }; inline const char* Varint::Parse32WithLimit(const char* p, @@ -577,12 +577,12 @@ inline char* Varint::Encode32(char* sptr, uint32 v) { return reinterpret_cast<char*>(ptr); } -// If you know the internal layout of the std::string in use, you can +// If you know the internal layout of the std::string in use, you can // replace this function with one that resizes the string without // filling the new space with zeros (if applicable) -- // it will be non-portable but faster. -inline void STLStringResizeUninitialized(std::string* s, size_t new_size) { - s->resize(new_size); +inline void STLStringResizeUninitialized(std::string* s, size_t new_size) { + s->resize(new_size); } // Return a mutable char* pointing to a string's internal buffer, @@ -597,10 +597,10 @@ inline void STLStringResizeUninitialized(std::string* s, size_t new_size) { // (http://www.open-std.org/JTC1/SC22/WG21/docs/lwg-defects.html#530) // proposes this as the method. It will officially be part of the standard // for C++0x. This should already work on all current implementations. -inline char* string_as_array(std::string* str) { - return str->empty() ? NULL : &*str->begin(); +inline char* string_as_array(std::string* str) { + return str->empty() ? 
NULL : &*str->begin(); } } // namespace snappy -#endif // THIRD_PARTY_SNAPPY_OPENSOURCE_SNAPPY_STUBS_INTERNAL_H_ +#endif // THIRD_PARTY_SNAPPY_OPENSOURCE_SNAPPY_STUBS_INTERNAL_H_ diff --git a/contrib/libs/snappy/snappy-stubs-public.h b/contrib/libs/snappy/snappy-stubs-public.h index a6bc455dab..357c4b2e4b 100644 --- a/contrib/libs/snappy/snappy-stubs-public.h +++ b/contrib/libs/snappy/snappy-stubs-public.h @@ -32,45 +32,45 @@ // which is a public header. Instead, snappy-stubs-public.h is generated by // from snappy-stubs-public.h.in at configure time. -#ifndef THIRD_PARTY_SNAPPY_OPENSOURCE_SNAPPY_STUBS_PUBLIC_H_ -#define THIRD_PARTY_SNAPPY_OPENSOURCE_SNAPPY_STUBS_PUBLIC_H_ +#ifndef THIRD_PARTY_SNAPPY_OPENSOURCE_SNAPPY_STUBS_PUBLIC_H_ +#define THIRD_PARTY_SNAPPY_OPENSOURCE_SNAPPY_STUBS_PUBLIC_H_ -#include <cstddef> -#include <cstdint> -#include <string> +#include <cstddef> +#include <cstdint> +#include <string> -#include "config.h" - -#if defined(HAVE_SYS_UIO_H) -#include <sys/uio.h> -#endif // HAVE_SYS_UIO_H +#include "config.h" +#if defined(HAVE_SYS_UIO_H) +#include <sys/uio.h> +#endif // HAVE_SYS_UIO_H + #define SNAPPY_MAJOR 1 -#define SNAPPY_MINOR 1 -#define SNAPPY_PATCHLEVEL 8 +#define SNAPPY_MINOR 1 +#define SNAPPY_PATCHLEVEL 8 #define SNAPPY_VERSION \ ((SNAPPY_MAJOR << 16) | (SNAPPY_MINOR << 8) | SNAPPY_PATCHLEVEL) namespace snappy { -using int8 = std::int8_t; -using uint8 = std::uint8_t; -using int16 = std::int16_t; -using uint16 = std::uint16_t; -using int32 = std::int32_t; -using uint32 = std::uint32_t; -using int64 = std::int64_t; -using uint64 = std::uint64_t; +using int8 = std::int8_t; +using uint8 = std::uint8_t; +using int16 = std::int16_t; +using uint16 = std::uint16_t; +using int32 = std::int32_t; +using uint32 = std::uint32_t; +using int64 = std::int64_t; +using uint64 = std::uint64_t; -#if !defined(HAVE_SYS_UIO_H) -// Windows does not have an iovec type, yet the concept is universally useful. -// It is simple to define it ourselves, so we put it inside our own namespace. -struct iovec { - void* iov_base; - size_t iov_len; -}; -#endif // !HAVE_SYS_UIO_H +#if !defined(HAVE_SYS_UIO_H) +// Windows does not have an iovec type, yet the concept is universally useful. +// It is simple to define it ourselves, so we put it inside our own namespace. +struct iovec { + void* iov_base; + size_t iov_len; +}; +#endif // !HAVE_SYS_UIO_H } // namespace snappy -#endif // THIRD_PARTY_SNAPPY_OPENSOURCE_SNAPPY_STUBS_PUBLIC_H_ +#endif // THIRD_PARTY_SNAPPY_OPENSOURCE_SNAPPY_STUBS_PUBLIC_H_ diff --git a/contrib/libs/snappy/snappy.cc b/contrib/libs/snappy/snappy.cc index 4491be6871..9351b0f21e 100644 --- a/contrib/libs/snappy/snappy.cc +++ b/contrib/libs/snappy/snappy.cc @@ -30,59 +30,59 @@ #include "snappy-internal.h" #include "snappy-sinksource.h" -#if !defined(SNAPPY_HAVE_SSSE3) -// __SSSE3__ is defined by GCC and Clang. Visual Studio doesn't target SIMD -// support between SSE2 and AVX (so SSSE3 instructions require AVX support), and -// defines __AVX__ when AVX support is available. -#if defined(__SSSE3__) || defined(__AVX__) -#define SNAPPY_HAVE_SSSE3 1 -#else -#define SNAPPY_HAVE_SSSE3 0 -#endif -#endif // !defined(SNAPPY_HAVE_SSSE3) - -#if !defined(SNAPPY_HAVE_BMI2) -// __BMI2__ is defined by GCC and Clang. Visual Studio doesn't target BMI2 -// specifically, but it does define __AVX2__ when AVX2 support is available. -// Fortunately, AVX2 was introduced in Haswell, just like BMI2. -// -// BMI2 is not defined as a subset of AVX2 (unlike SSSE3 and AVX above). 
So, -// GCC and Clang can build code with AVX2 enabled but BMI2 disabled, in which -// case issuing BMI2 instructions results in a compiler error. -#if defined(__BMI2__) || (defined(_MSC_VER) && defined(__AVX2__)) -#define SNAPPY_HAVE_BMI2 1 -#else -#define SNAPPY_HAVE_BMI2 0 -#endif -#endif // !defined(SNAPPY_HAVE_BMI2) - -#if SNAPPY_HAVE_SSSE3 -// Please do not replace with <x86intrin.h>. or with headers that assume more -// advanced SSE versions without checking with all the OWNERS. -#include <tmmintrin.h> -#endif - -#if SNAPPY_HAVE_BMI2 -// Please do not replace with <x86intrin.h>. or with headers that assume more -// advanced SSE versions without checking with all the OWNERS. -#include <immintrin.h> -#endif - +#if !defined(SNAPPY_HAVE_SSSE3) +// __SSSE3__ is defined by GCC and Clang. Visual Studio doesn't target SIMD +// support between SSE2 and AVX (so SSSE3 instructions require AVX support), and +// defines __AVX__ when AVX support is available. +#if defined(__SSSE3__) || defined(__AVX__) +#define SNAPPY_HAVE_SSSE3 1 +#else +#define SNAPPY_HAVE_SSSE3 0 +#endif +#endif // !defined(SNAPPY_HAVE_SSSE3) + +#if !defined(SNAPPY_HAVE_BMI2) +// __BMI2__ is defined by GCC and Clang. Visual Studio doesn't target BMI2 +// specifically, but it does define __AVX2__ when AVX2 support is available. +// Fortunately, AVX2 was introduced in Haswell, just like BMI2. +// +// BMI2 is not defined as a subset of AVX2 (unlike SSSE3 and AVX above). So, +// GCC and Clang can build code with AVX2 enabled but BMI2 disabled, in which +// case issuing BMI2 instructions results in a compiler error. +#if defined(__BMI2__) || (defined(_MSC_VER) && defined(__AVX2__)) +#define SNAPPY_HAVE_BMI2 1 +#else +#define SNAPPY_HAVE_BMI2 0 +#endif +#endif // !defined(SNAPPY_HAVE_BMI2) + +#if SNAPPY_HAVE_SSSE3 +// Please do not replace with <x86intrin.h>. or with headers that assume more +// advanced SSE versions without checking with all the OWNERS. +#include <tmmintrin.h> +#endif + +#if SNAPPY_HAVE_BMI2 +// Please do not replace with <x86intrin.h>. or with headers that assume more +// advanced SSE versions without checking with all the OWNERS. +#include <immintrin.h> +#endif + #include <stdio.h> #include <algorithm> -#include <string> -#include <vector> -#include <util/generic/string.h> +#include <string> +#include <vector> +#include <util/generic/string.h> namespace snappy { -using internal::COPY_1_BYTE_OFFSET; -using internal::COPY_2_BYTE_OFFSET; -using internal::LITERAL; -using internal::char_table; -using internal::kMaximumTagLength; - +using internal::COPY_1_BYTE_OFFSET; +using internal::COPY_2_BYTE_OFFSET; +using internal::LITERAL; +using internal::char_table; +using internal::kMaximumTagLength; + // Any hash function will produce a valid compressed bitstream, but a good // hash function reduces the number of collisions and thus yields better // compression for compressible input, and more speed for incompressible @@ -120,311 +120,311 @@ size_t MaxCompressedLength(size_t source_len) { return 32 + source_len + source_len/6; } -namespace { - -void UnalignedCopy64(const void* src, void* dst) { - char tmp[8]; - memcpy(tmp, src, 8); - memcpy(dst, tmp, 8); -} - -void UnalignedCopy128(const void* src, void* dst) { - // memcpy gets vectorized when the appropriate compiler options are used. - // For example, x86 compilers targeting SSE2+ will optimize to an SSE2 load - // and store. - char tmp[16]; - memcpy(tmp, src, 16); - memcpy(dst, tmp, 16); -} - -// Copy [src, src+(op_limit-op)) to [op, (op_limit-op)) a byte at a time. 
Used -// for handling COPY operations where the input and output regions may overlap. -// For example, suppose: -// src == "ab" -// op == src + 2 -// op_limit == op + 20 -// After IncrementalCopySlow(src, op, op_limit), the result will have eleven -// copies of "ab" +namespace { + +void UnalignedCopy64(const void* src, void* dst) { + char tmp[8]; + memcpy(tmp, src, 8); + memcpy(dst, tmp, 8); +} + +void UnalignedCopy128(const void* src, void* dst) { + // memcpy gets vectorized when the appropriate compiler options are used. + // For example, x86 compilers targeting SSE2+ will optimize to an SSE2 load + // and store. + char tmp[16]; + memcpy(tmp, src, 16); + memcpy(dst, tmp, 16); +} + +// Copy [src, src+(op_limit-op)) to [op, (op_limit-op)) a byte at a time. Used +// for handling COPY operations where the input and output regions may overlap. +// For example, suppose: +// src == "ab" +// op == src + 2 +// op_limit == op + 20 +// After IncrementalCopySlow(src, op, op_limit), the result will have eleven +// copies of "ab" // ababababababababababab -// Note that this does not match the semantics of either memcpy() or memmove(). -inline char* IncrementalCopySlow(const char* src, char* op, - char* const op_limit) { - // TODO: Remove pragma when LLVM is aware this - // function is only called in cold regions and when cold regions don't get - // vectorized or unrolled. -#ifdef __clang__ -#pragma clang loop unroll(disable) -#endif - while (op < op_limit) { +// Note that this does not match the semantics of either memcpy() or memmove(). +inline char* IncrementalCopySlow(const char* src, char* op, + char* const op_limit) { + // TODO: Remove pragma when LLVM is aware this + // function is only called in cold regions and when cold regions don't get + // vectorized or unrolled. +#ifdef __clang__ +#pragma clang loop unroll(disable) +#endif + while (op < op_limit) { *op++ = *src++; - } - return op_limit; + } + return op_limit; } -#if SNAPPY_HAVE_SSSE3 - -// This is a table of shuffle control masks that can be used as the source -// operand for PSHUFB to permute the contents of the destination XMM register -// into a repeating byte pattern. -alignas(16) const char pshufb_fill_patterns[7][16] = { - {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, - {0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1}, - {0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2, 0}, - {0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3}, - {0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0}, - {0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3}, - {0, 1, 2, 3, 4, 5, 6, 0, 1, 2, 3, 4, 5, 6, 0, 1}, -}; - -#endif // SNAPPY_HAVE_SSSE3 - -// Copy [src, src+(op_limit-op)) to [op, (op_limit-op)) but faster than -// IncrementalCopySlow. buf_limit is the address past the end of the writable -// region of the buffer. -inline char* IncrementalCopy(const char* src, char* op, char* const op_limit, - char* const buf_limit) { - // Terminology: - // - // slop = buf_limit - op - // pat = op - src - // len = limit - op - assert(src < op); - assert(op <= op_limit); - assert(op_limit <= buf_limit); - // NOTE: The compressor always emits 4 <= len <= 64. It is ok to assume that - // to optimize this function but we have to also handle other cases in case - // the input does not satisfy these conditions. - - size_t pattern_size = op - src; - // The cases are split into different branches to allow the branch predictor, - // FDO, and static prediction hints to work better. For each input we list the - // ratio of invocations that match each condition. 
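[Editor's aside: the overlap semantics described in the comment above (src == "ab", op == src + 2, op_limit == op + 20 leaves eleven copies of "ab" in the buffer) are easy to reproduce with a standalone byte-at-a-time copy, sketched here for illustration:]

```cpp
#include <cstdio>

// Byte-at-a-time overlapping copy with the same semantics as
// IncrementalCopySlow above: each destination byte may read a byte the
// loop itself just wrote, which is how a two-byte pattern gets repeated.
char* CopySlow(const char* src, char* op, char* const op_limit) {
  while (op < op_limit) *op++ = *src++;
  return op_limit;
}

int main() {
  char buf[32] = "ab";           // the two-byte pattern is already in place
  CopySlow(buf, buf + 2, buf + 22);
  std::printf("%.22s\n", buf);   // ababababababababababab (eleven "ab"s)
}
```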
- // - // input slop < 16 pat < 8 len > 16 - // ------------------------------------------ - // html|html4|cp 0% 1.01% 27.73% - // urls 0% 0.88% 14.79% - // jpg 0% 64.29% 7.14% - // pdf 0% 2.56% 58.06% - // txt[1-4] 0% 0.23% 0.97% - // pb 0% 0.96% 13.88% - // bin 0.01% 22.27% 41.17% - // - // It is very rare that we don't have enough slop for doing block copies. It - // is also rare that we need to expand a pattern. Small patterns are common - // for incompressible formats and for those we are plenty fast already. - // Lengths are normally not greater than 16 but they vary depending on the - // input. In general if we always predict len <= 16 it would be an ok - // prediction. - // - // In order to be fast we want a pattern >= 8 bytes and an unrolled loop - // copying 2x 8 bytes at a time. - - // Handle the uncommon case where pattern is less than 8 bytes. - if (SNAPPY_PREDICT_FALSE(pattern_size < 8)) { -#if SNAPPY_HAVE_SSSE3 - // Load the first eight bytes into an 128-bit XMM register, then use PSHUFB - // to permute the register's contents in-place into a repeating sequence of - // the first "pattern_size" bytes. - // For example, suppose: - // src == "abc" - // op == op + 3 - // After _mm_shuffle_epi8(), "pattern" will have five copies of "abc" - // followed by one byte of slop: abcabcabcabcabca. - // - // The non-SSE fallback implementation suffers from store-forwarding stalls - // because its loads and stores partly overlap. By expanding the pattern - // in-place, we avoid the penalty. - if (SNAPPY_PREDICT_TRUE(op <= buf_limit - 16)) { - const __m128i shuffle_mask = _mm_load_si128( - reinterpret_cast<const __m128i*>(pshufb_fill_patterns) - + pattern_size - 1); - const __m128i pattern = _mm_shuffle_epi8( - _mm_loadl_epi64(reinterpret_cast<const __m128i*>(src)), shuffle_mask); - // Uninitialized bytes are masked out by the shuffle mask. - // TODO: remove annotation and macro defs once MSan is fixed. - SNAPPY_ANNOTATE_MEMORY_IS_INITIALIZED(&pattern, sizeof(pattern)); - pattern_size *= 16 / pattern_size; - char* op_end = std::min(op_limit, buf_limit - 15); - while (op < op_end) { - _mm_storeu_si128(reinterpret_cast<__m128i*>(op), pattern); - op += pattern_size; - } - if (SNAPPY_PREDICT_TRUE(op >= op_limit)) return op_limit; - } - return IncrementalCopySlow(src, op, op_limit); -#else // !SNAPPY_HAVE_SSSE3 - // If plenty of buffer space remains, expand the pattern to at least 8 - // bytes. The way the following loop is written, we need 8 bytes of buffer - // space if pattern_size >= 4, 11 bytes if pattern_size is 1 or 3, and 10 - // bytes if pattern_size is 2. Precisely encoding that is probably not - // worthwhile; instead, invoke the slow path if we cannot write 11 bytes - // (because 11 are required in the worst case). - if (SNAPPY_PREDICT_TRUE(op <= buf_limit - 11)) { - while (pattern_size < 8) { - UnalignedCopy64(src, op); - op += pattern_size; - pattern_size *= 2; - } - if (SNAPPY_PREDICT_TRUE(op >= op_limit)) return op_limit; - } else { - return IncrementalCopySlow(src, op, op_limit); - } -#endif // SNAPPY_HAVE_SSSE3 - } - assert(pattern_size >= 8); - - // Copy 2x 8 bytes at a time. Because op - src can be < 16, a single - // UnalignedCopy128 might overwrite data in op. UnalignedCopy64 is safe - // because expanding the pattern to at least 8 bytes guarantees that - // op - src >= 8. - // - // Typically, the op_limit is the gating factor so try to simplify the loop - // based on that. 
- if (SNAPPY_PREDICT_TRUE(op_limit <= buf_limit - 16)) { - // There is at least one, and at most four 16-byte blocks. Writing four - // conditionals instead of a loop allows FDO to layout the code with respect - // to the actual probabilities of each length. - // TODO: Replace with loop with trip count hint. +#if SNAPPY_HAVE_SSSE3 + +// This is a table of shuffle control masks that can be used as the source +// operand for PSHUFB to permute the contents of the destination XMM register +// into a repeating byte pattern. +alignas(16) const char pshufb_fill_patterns[7][16] = { + {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, + {0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1}, + {0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2, 0}, + {0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3}, + {0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0}, + {0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3}, + {0, 1, 2, 3, 4, 5, 6, 0, 1, 2, 3, 4, 5, 6, 0, 1}, +}; + +#endif // SNAPPY_HAVE_SSSE3 + +// Copy [src, src+(op_limit-op)) to [op, (op_limit-op)) but faster than +// IncrementalCopySlow. buf_limit is the address past the end of the writable +// region of the buffer. +inline char* IncrementalCopy(const char* src, char* op, char* const op_limit, + char* const buf_limit) { + // Terminology: + // + // slop = buf_limit - op + // pat = op - src + // len = limit - op + assert(src < op); + assert(op <= op_limit); + assert(op_limit <= buf_limit); + // NOTE: The compressor always emits 4 <= len <= 64. It is ok to assume that + // to optimize this function but we have to also handle other cases in case + // the input does not satisfy these conditions. + + size_t pattern_size = op - src; + // The cases are split into different branches to allow the branch predictor, + // FDO, and static prediction hints to work better. For each input we list the + // ratio of invocations that match each condition. + // + // input slop < 16 pat < 8 len > 16 + // ------------------------------------------ + // html|html4|cp 0% 1.01% 27.73% + // urls 0% 0.88% 14.79% + // jpg 0% 64.29% 7.14% + // pdf 0% 2.56% 58.06% + // txt[1-4] 0% 0.23% 0.97% + // pb 0% 0.96% 13.88% + // bin 0.01% 22.27% 41.17% + // + // It is very rare that we don't have enough slop for doing block copies. It + // is also rare that we need to expand a pattern. Small patterns are common + // for incompressible formats and for those we are plenty fast already. + // Lengths are normally not greater than 16 but they vary depending on the + // input. In general if we always predict len <= 16 it would be an ok + // prediction. + // + // In order to be fast we want a pattern >= 8 bytes and an unrolled loop + // copying 2x 8 bytes at a time. + + // Handle the uncommon case where pattern is less than 8 bytes. + if (SNAPPY_PREDICT_FALSE(pattern_size < 8)) { +#if SNAPPY_HAVE_SSSE3 + // Load the first eight bytes into an 128-bit XMM register, then use PSHUFB + // to permute the register's contents in-place into a repeating sequence of + // the first "pattern_size" bytes. + // For example, suppose: + // src == "abc" + // op == op + 3 + // After _mm_shuffle_epi8(), "pattern" will have five copies of "abc" + // followed by one byte of slop: abcabcabcabcabca. + // + // The non-SSE fallback implementation suffers from store-forwarding stalls + // because its loads and stores partly overlap. By expanding the pattern + // in-place, we avoid the penalty. 
+ if (SNAPPY_PREDICT_TRUE(op <= buf_limit - 16)) { + const __m128i shuffle_mask = _mm_load_si128( + reinterpret_cast<const __m128i*>(pshufb_fill_patterns) + + pattern_size - 1); + const __m128i pattern = _mm_shuffle_epi8( + _mm_loadl_epi64(reinterpret_cast<const __m128i*>(src)), shuffle_mask); + // Uninitialized bytes are masked out by the shuffle mask. + // TODO: remove annotation and macro defs once MSan is fixed. + SNAPPY_ANNOTATE_MEMORY_IS_INITIALIZED(&pattern, sizeof(pattern)); + pattern_size *= 16 / pattern_size; + char* op_end = std::min(op_limit, buf_limit - 15); + while (op < op_end) { + _mm_storeu_si128(reinterpret_cast<__m128i*>(op), pattern); + op += pattern_size; + } + if (SNAPPY_PREDICT_TRUE(op >= op_limit)) return op_limit; + } + return IncrementalCopySlow(src, op, op_limit); +#else // !SNAPPY_HAVE_SSSE3 + // If plenty of buffer space remains, expand the pattern to at least 8 + // bytes. The way the following loop is written, we need 8 bytes of buffer + // space if pattern_size >= 4, 11 bytes if pattern_size is 1 or 3, and 10 + // bytes if pattern_size is 2. Precisely encoding that is probably not + // worthwhile; instead, invoke the slow path if we cannot write 11 bytes + // (because 11 are required in the worst case). + if (SNAPPY_PREDICT_TRUE(op <= buf_limit - 11)) { + while (pattern_size < 8) { + UnalignedCopy64(src, op); + op += pattern_size; + pattern_size *= 2; + } + if (SNAPPY_PREDICT_TRUE(op >= op_limit)) return op_limit; + } else { + return IncrementalCopySlow(src, op, op_limit); + } +#endif // SNAPPY_HAVE_SSSE3 + } + assert(pattern_size >= 8); + + // Copy 2x 8 bytes at a time. Because op - src can be < 16, a single + // UnalignedCopy128 might overwrite data in op. UnalignedCopy64 is safe + // because expanding the pattern to at least 8 bytes guarantees that + // op - src >= 8. + // + // Typically, the op_limit is the gating factor so try to simplify the loop + // based on that. + if (SNAPPY_PREDICT_TRUE(op_limit <= buf_limit - 16)) { + // There is at least one, and at most four 16-byte blocks. Writing four + // conditionals instead of a loop allows FDO to layout the code with respect + // to the actual probabilities of each length. + // TODO: Replace with loop with trip count hint. + UnalignedCopy64(src, op); + UnalignedCopy64(src + 8, op + 8); + + if (op + 16 < op_limit) { + UnalignedCopy64(src + 16, op + 16); + UnalignedCopy64(src + 24, op + 24); + } + if (op + 32 < op_limit) { + UnalignedCopy64(src + 32, op + 32); + UnalignedCopy64(src + 40, op + 40); + } + if (op + 48 < op_limit) { + UnalignedCopy64(src + 48, op + 48); + UnalignedCopy64(src + 56, op + 56); + } + return op_limit; + } + + // Fall back to doing as much as we can with the available slop in the + // buffer. This code path is relatively cold however so we save code size by + // avoiding unrolling and vectorizing. + // + // TODO: Remove pragma when when cold regions don't get vectorized + // or unrolled. 
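[Editor's aside: the non-SSSE3 branch above grows a short pattern by doubling. Each 8-byte copy appends another run of the pattern and the valid pattern length doubles, so at most three iterations reach the 8-byte minimum. A scalar sketch of just that expansion step, with memmove standing in for UnalignedCopy64:]

```cpp
#include <cstdio>
#include <cstring>

// Pattern-doubling sketch: "abc" becomes "abcabc", then "abcabcabcabc",
// at which point pattern_size >= 8 and the unrolled 2x8-byte loop in
// IncrementalCopy can take over.
int main() {
  char buf[64] = "abc";
  const char* src = buf;        // pattern start
  char* op = buf + 3;           // first byte past the pattern
  size_t pattern_size = 3;
  while (pattern_size < 8) {
    std::memmove(op, src, 8);   // stands in for UnalignedCopy64
    op += pattern_size;         // advance by the *old* pattern length
    pattern_size *= 2;
  }
  std::printf("%.12s\n", buf);  // abcabcabcabc
}
```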
+#ifdef __clang__ +#pragma clang loop unroll(disable) +#endif + for (char *op_end = buf_limit - 16; op < op_end; op += 16, src += 16) { UnalignedCopy64(src, op); - UnalignedCopy64(src + 8, op + 8); - - if (op + 16 < op_limit) { - UnalignedCopy64(src + 16, op + 16); - UnalignedCopy64(src + 24, op + 24); - } - if (op + 32 < op_limit) { - UnalignedCopy64(src + 32, op + 32); - UnalignedCopy64(src + 40, op + 40); - } - if (op + 48 < op_limit) { - UnalignedCopy64(src + 48, op + 48); - UnalignedCopy64(src + 56, op + 56); - } - return op_limit; + UnalignedCopy64(src + 8, op + 8); } - - // Fall back to doing as much as we can with the available slop in the - // buffer. This code path is relatively cold however so we save code size by - // avoiding unrolling and vectorizing. - // - // TODO: Remove pragma when when cold regions don't get vectorized - // or unrolled. -#ifdef __clang__ -#pragma clang loop unroll(disable) -#endif - for (char *op_end = buf_limit - 16; op < op_end; op += 16, src += 16) { + if (op >= op_limit) + return op_limit; + + // We only take this branch if we didn't have enough slop and we can do a + // single 8 byte copy. + if (SNAPPY_PREDICT_FALSE(op <= buf_limit - 8)) { UnalignedCopy64(src, op); - UnalignedCopy64(src + 8, op + 8); - } - if (op >= op_limit) - return op_limit; - - // We only take this branch if we didn't have enough slop and we can do a - // single 8 byte copy. - if (SNAPPY_PREDICT_FALSE(op <= buf_limit - 8)) { - UnalignedCopy64(src, op); src += 8; op += 8; } - return IncrementalCopySlow(src, op, op_limit); + return IncrementalCopySlow(src, op, op_limit); } -} // namespace - -template <bool allow_fast_path> +} // namespace + +template <bool allow_fast_path> static inline char* EmitLiteral(char* op, const char* literal, - int len) { - // The vast majority of copies are below 16 bytes, for which a - // call to memcpy is overkill. This fast path can sometimes - // copy up to 15 bytes too much, but that is okay in the - // main loop, since we have a bit to go on for both sides: - // - // - The input will always have kInputMarginBytes = 15 extra - // available bytes, as long as we're in the main loop, and - // if not, allow_fast_path = false. - // - The output will always have 32 spare bytes (see - // MaxCompressedLength). - assert(len > 0); // Zero-length literals are disallowed - int n = len - 1; - if (allow_fast_path && len <= 16) { - // Fits in tag byte - *op++ = LITERAL | (n << 2); - - UnalignedCopy128(literal, op); - return op + len; - } - + int len) { + // The vast majority of copies are below 16 bytes, for which a + // call to memcpy is overkill. This fast path can sometimes + // copy up to 15 bytes too much, but that is okay in the + // main loop, since we have a bit to go on for both sides: + // + // - The input will always have kInputMarginBytes = 15 extra + // available bytes, as long as we're in the main loop, and + // if not, allow_fast_path = false. + // - The output will always have 32 spare bytes (see + // MaxCompressedLength). + assert(len > 0); // Zero-length literals are disallowed + int n = len - 1; + if (allow_fast_path && len <= 16) { + // Fits in tag byte + *op++ = LITERAL | (n << 2); + + UnalignedCopy128(literal, op); + return op + len; + } + if (n < 60) { // Fits in tag byte *op++ = LITERAL | (n << 2); } else { - int count = (Bits::Log2Floor(n) >> 3) + 1; + int count = (Bits::Log2Floor(n) >> 3) + 1; assert(count >= 1); assert(count <= 4); - *op++ = LITERAL | ((59 + count) << 2); - // Encode in upcoming bytes. 
- // Write 4 bytes, though we may care about only 1 of them. The output buffer - // is guaranteed to have at least 3 more spaces left as 'len >= 61' holds - // here and there is a memcpy of size 'len' below. - LittleEndian::Store32(op, n); - op += count; + *op++ = LITERAL | ((59 + count) << 2); + // Encode in upcoming bytes. + // Write 4 bytes, though we may care about only 1 of them. The output buffer + // is guaranteed to have at least 3 more spaces left as 'len >= 61' holds + // here and there is a memcpy of size 'len' below. + LittleEndian::Store32(op, n); + op += count; } memcpy(op, literal, len); return op + len; } -template <bool len_less_than_12> -static inline char* EmitCopyAtMost64(char* op, size_t offset, size_t len) { - assert(len <= 64); - assert(len >= 4); - assert(offset < 65536); - assert(len_less_than_12 == (len < 12)); - - if (len_less_than_12 && SNAPPY_PREDICT_TRUE(offset < 2048)) { - // offset fits in 11 bits. The 3 highest go in the top of the first byte, - // and the rest go in the second byte. - *op++ = COPY_1_BYTE_OFFSET + ((len - 4) << 2) + ((offset >> 3) & 0xe0); +template <bool len_less_than_12> +static inline char* EmitCopyAtMost64(char* op, size_t offset, size_t len) { + assert(len <= 64); + assert(len >= 4); + assert(offset < 65536); + assert(len_less_than_12 == (len < 12)); + + if (len_less_than_12 && SNAPPY_PREDICT_TRUE(offset < 2048)) { + // offset fits in 11 bits. The 3 highest go in the top of the first byte, + // and the rest go in the second byte. + *op++ = COPY_1_BYTE_OFFSET + ((len - 4) << 2) + ((offset >> 3) & 0xe0); *op++ = offset & 0xff; } else { - // Write 4 bytes, though we only care about 3 of them. The output buffer - // is required to have some slack, so the extra byte won't overrun it. - uint32 u = COPY_2_BYTE_OFFSET + ((len - 1) << 2) + (offset << 8); - LittleEndian::Store32(op, u); - op += 3; + // Write 4 bytes, though we only care about 3 of them. The output buffer + // is required to have some slack, so the extra byte won't overrun it. + uint32 u = COPY_2_BYTE_OFFSET + ((len - 1) << 2) + (offset << 8); + LittleEndian::Store32(op, u); + op += 3; } return op; } -template <bool len_less_than_12> -static inline char* EmitCopy(char* op, size_t offset, size_t len) { - assert(len_less_than_12 == (len < 12)); - if (len_less_than_12) { - return EmitCopyAtMost64</*len_less_than_12=*/true>(op, offset, len); - } else { - // A special case for len <= 64 might help, but so far measurements suggest - // it's in the noise. - - // Emit 64 byte copies but make sure to keep at least four bytes reserved. - while (SNAPPY_PREDICT_FALSE(len >= 68)) { - op = EmitCopyAtMost64</*len_less_than_12=*/false>(op, offset, 64); - len -= 64; - } - - // One or two copies will now finish the job. - if (len > 64) { - op = EmitCopyAtMost64</*len_less_than_12=*/false>(op, offset, 60); - len -= 60; - } - - // Emit remainder. - if (len < 12) { - op = EmitCopyAtMost64</*len_less_than_12=*/true>(op, offset, len); - } else { - op = EmitCopyAtMost64</*len_less_than_12=*/false>(op, offset, len); - } - return op; +template <bool len_less_than_12> +static inline char* EmitCopy(char* op, size_t offset, size_t len) { + assert(len_less_than_12 == (len < 12)); + if (len_less_than_12) { + return EmitCopyAtMost64</*len_less_than_12=*/true>(op, offset, len); + } else { + // A special case for len <= 64 might help, but so far measurements suggest + // it's in the noise. + + // Emit 64 byte copies but make sure to keep at least four bytes reserved. 
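[Editor's aside: EmitCopyAtMost64 above selects between the format's two short copy encodings, a 2-byte form when len < 12 and offset < 2048, otherwise a 3-byte form with a 16-bit little-endian offset. A standalone encoder sketch of the same bit layout, for illustration only:]

```cpp
#include <cstddef>

// Encoder for the two copy forms emitted by EmitCopyAtMost64 above;
// assumes 4 <= len <= 64 and offset < 65536, as the caller there
// guarantees. The low two tag bits hold the tag type (1 = one offset
// byte, 2 = two offset bytes).
char* EmitCopySketch(char* op, size_t offset, size_t len) {
  if (len < 12 && offset < 2048) {
    // COPY_1_BYTE_OFFSET: 3 bits of (len - 4) in bits 2..4, the top 3
    // bits of the 11-bit offset in bits 5..7, low 8 offset bits next.
    *op++ = static_cast<char>(1 | ((len - 4) << 2) | ((offset >> 3) & 0xe0));
    *op++ = static_cast<char>(offset & 0xff);
  } else {
    // COPY_2_BYTE_OFFSET: 6 bits of (len - 1) in bits 2..7, then a
    // little-endian 16-bit offset.
    *op++ = static_cast<char>(2 | ((len - 1) << 2));
    *op++ = static_cast<char>(offset & 0xff);
    *op++ = static_cast<char>(offset >> 8);
  }
  return op;
}
```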
+ while (SNAPPY_PREDICT_FALSE(len >= 68)) { + op = EmitCopyAtMost64</*len_less_than_12=*/false>(op, offset, 64); + len -= 64; + } + + // One or two copies will now finish the job. + if (len > 64) { + op = EmitCopyAtMost64</*len_less_than_12=*/false>(op, offset, 60); + len -= 60; + } + + // Emit remainder. + if (len < 12) { + op = EmitCopyAtMost64</*len_less_than_12=*/true>(op, offset, len); + } else { + op = EmitCopyAtMost64</*len_less_than_12=*/false>(op, offset, len); + } + return op; } } @@ -439,45 +439,45 @@ bool GetUncompressedLength(const char* start, size_t n, size_t* result) { } } -namespace { -uint32 CalculateTableSize(uint32 input_size) { - static_assert( - kMaxHashTableSize >= kMinHashTableSize, - "kMaxHashTableSize should be greater or equal to kMinHashTableSize."); - if (input_size > kMaxHashTableSize) { - return kMaxHashTableSize; +namespace { +uint32 CalculateTableSize(uint32 input_size) { + static_assert( + kMaxHashTableSize >= kMinHashTableSize, + "kMaxHashTableSize should be greater or equal to kMinHashTableSize."); + if (input_size > kMaxHashTableSize) { + return kMaxHashTableSize; } - if (input_size < kMinHashTableSize) { - return kMinHashTableSize; + if (input_size < kMinHashTableSize) { + return kMinHashTableSize; } - // This is equivalent to Log2Ceiling(input_size), assuming input_size > 1. - // 2 << Log2Floor(x - 1) is equivalent to 1 << (1 + Log2Floor(x - 1)). - return 2u << Bits::Log2Floor(input_size - 1); -} -} // namespace - -namespace internal { -WorkingMemory::WorkingMemory(size_t input_size) { - const size_t max_fragment_size = std::min(input_size, kBlockSize); - const size_t table_size = CalculateTableSize(max_fragment_size); - size_ = table_size * sizeof(*table_) + max_fragment_size + - MaxCompressedLength(max_fragment_size); - mem_ = std::allocator<char>().allocate(size_); - table_ = reinterpret_cast<uint16*>(mem_); - input_ = mem_ + table_size * sizeof(*table_); - output_ = input_ + max_fragment_size; -} - -WorkingMemory::~WorkingMemory() { - std::allocator<char>().deallocate(mem_, size_); -} - -uint16* WorkingMemory::GetHashTable(size_t fragment_size, - int* table_size) const { - const size_t htsize = CalculateTableSize(fragment_size); - memset(table_, 0, htsize * sizeof(*table_)); + // This is equivalent to Log2Ceiling(input_size), assuming input_size > 1. + // 2 << Log2Floor(x - 1) is equivalent to 1 << (1 + Log2Floor(x - 1)). 
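[Editor's aside: the identity this comment relies on, that for x > 1 the expression 2 << Log2Floor(x - 1) rounds x up to the next power of two, can be checked standalone. GCC/Clang only, since the sketch calls __builtin_clz directly; 31 - clz(n) equals Log2Floor(n) for nonzero n:]

```cpp
#include <cassert>
#include <cstdint>

// Round up to a power of two using the CalculateTableSize identity.
inline uint32_t RoundUpPow2(uint32_t x) {
  assert(x > 1);
  return 2u << (31 - __builtin_clz(x - 1));
}

int main() {
  assert(RoundUpPow2(256) == 256);    // powers of two map to themselves
  assert(RoundUpPow2(257) == 512);    // anything above rounds up
  assert(RoundUpPow2(1000) == 1024);
}
```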
+ return 2u << Bits::Log2Floor(input_size - 1); +} +} // namespace + +namespace internal { +WorkingMemory::WorkingMemory(size_t input_size) { + const size_t max_fragment_size = std::min(input_size, kBlockSize); + const size_t table_size = CalculateTableSize(max_fragment_size); + size_ = table_size * sizeof(*table_) + max_fragment_size + + MaxCompressedLength(max_fragment_size); + mem_ = std::allocator<char>().allocate(size_); + table_ = reinterpret_cast<uint16*>(mem_); + input_ = mem_ + table_size * sizeof(*table_); + output_ = input_ + max_fragment_size; +} + +WorkingMemory::~WorkingMemory() { + std::allocator<char>().deallocate(mem_, size_); +} + +uint16* WorkingMemory::GetHashTable(size_t fragment_size, + int* table_size) const { + const size_t htsize = CalculateTableSize(fragment_size); + memset(table_, 0, htsize * sizeof(*table_)); *table_size = htsize; - return table_; + return table_; } } // end namespace internal @@ -503,8 +503,8 @@ static inline EightBytesReference GetEightBytesAt(const char* ptr) { } static inline uint32 GetUint32AtOffset(uint64 v, int offset) { - assert(offset >= 0); - assert(offset <= 4); + assert(offset >= 0); + assert(offset <= 4); return v >> (LittleEndian::IsLittleEndian() ? 8 * offset : 32 - 8 * offset); } @@ -517,8 +517,8 @@ static inline EightBytesReference GetEightBytesAt(const char* ptr) { } static inline uint32 GetUint32AtOffset(const char* v, int offset) { - assert(offset >= 0); - assert(offset <= 4); + assert(offset >= 0); + assert(offset <= 4); return UNALIGNED_LOAD32(v + offset); } @@ -543,10 +543,10 @@ char* CompressFragment(const char* input, const int table_size) { // "ip" is the input pointer, and "op" is the output pointer. const char* ip = input; - assert(input_size <= kBlockSize); - assert((table_size & (table_size - 1)) == 0); // table must be power of two + assert(input_size <= kBlockSize); + assert((table_size & (table_size - 1)) == 0); // table must be power of two const int shift = 32 - Bits::Log2Floor(table_size); - assert(static_cast<int>(kuint32max >> shift) == table_size - 1); + assert(static_cast<int>(kuint32max >> shift) == table_size - 1); const char* ip_end = input + input_size; const char* base_ip = ip; // Bytes in [next_emit, ip) will be emitted as literal bytes. Or @@ -554,11 +554,11 @@ char* CompressFragment(const char* input, const char* next_emit = ip; const size_t kInputMarginBytes = 15; - if (SNAPPY_PREDICT_TRUE(input_size >= kInputMarginBytes)) { + if (SNAPPY_PREDICT_TRUE(input_size >= kInputMarginBytes)) { const char* ip_limit = input + input_size - kInputMarginBytes; for (uint32 next_hash = Hash(++ip, shift); ; ) { - assert(next_emit < ip); + assert(next_emit < ip); // The body of this loop calls EmitLiteral once and then EmitCopy one or // more times. (The exception is that when we're close to exhausting // the input we goto emit_remainder.) @@ -574,9 +574,9 @@ char* CompressFragment(const char* input, // // Heuristic match skipping: If 32 bytes are scanned with no matches // found, start looking only at every other byte. If 32 more bytes are - // scanned (or skipped), look at every third byte, etc.. When a match is - // found, immediately go back to looking at every byte. This is a small - // loss (~5% performance, ~0.1% density) for compressible data due to more + // scanned (or skipped), look at every third byte, etc.. When a match is + // found, immediately go back to looking at every byte. 
This is a small + // loss (~5% performance, ~0.1% density) for compressible data due to more // bookkeeping, but for non-compressible data (such as JPEG) it's a huge // win since the compressor quickly "realizes" the data is incompressible // and doesn't bother looking for matches everywhere. @@ -591,27 +591,27 @@ char* CompressFragment(const char* input, do { ip = next_ip; uint32 hash = next_hash; - assert(hash == Hash(ip, shift)); - uint32 bytes_between_hash_lookups = skip >> 5; - skip += bytes_between_hash_lookups; + assert(hash == Hash(ip, shift)); + uint32 bytes_between_hash_lookups = skip >> 5; + skip += bytes_between_hash_lookups; next_ip = ip + bytes_between_hash_lookups; - if (SNAPPY_PREDICT_FALSE(next_ip > ip_limit)) { + if (SNAPPY_PREDICT_FALSE(next_ip > ip_limit)) { goto emit_remainder; } next_hash = Hash(next_ip, shift); candidate = base_ip + table[hash]; - assert(candidate >= base_ip); - assert(candidate < ip); + assert(candidate >= base_ip); + assert(candidate < ip); table[hash] = ip - base_ip; - } while (SNAPPY_PREDICT_TRUE(UNALIGNED_LOAD32(ip) != - UNALIGNED_LOAD32(candidate))); + } while (SNAPPY_PREDICT_TRUE(UNALIGNED_LOAD32(ip) != + UNALIGNED_LOAD32(candidate))); // Step 2: A 4-byte match has been found. We'll later see if more // than 4 bytes match. But, prior to the match, input // bytes [next_emit, ip) are unmatched. Emit them as "literal bytes." - assert(next_emit + 16 <= ip_end); - op = EmitLiteral</*allow_fast_path=*/true>(op, next_emit, ip - next_emit); + assert(next_emit + 16 <= ip_end); + op = EmitLiteral</*allow_fast_path=*/true>(op, next_emit, ip - next_emit); // Step 3: Call EmitCopy, and then see if another EmitCopy could // be our next move. Repeat until we find no match for the @@ -628,25 +628,25 @@ char* CompressFragment(const char* input, // We have a 4-byte match at ip, and no need to emit any // "literal bytes" prior to ip. const char* base = ip; - std::pair<size_t, bool> p = - FindMatchLength(candidate + 4, ip + 4, ip_end); - size_t matched = 4 + p.first; + std::pair<size_t, bool> p = + FindMatchLength(candidate + 4, ip + 4, ip_end); + size_t matched = 4 + p.first; ip += matched; size_t offset = base - candidate; - assert(0 == memcmp(base, candidate, matched)); - if (p.second) { - op = EmitCopy</*len_less_than_12=*/true>(op, offset, matched); - } else { - op = EmitCopy</*len_less_than_12=*/false>(op, offset, matched); - } + assert(0 == memcmp(base, candidate, matched)); + if (p.second) { + op = EmitCopy</*len_less_than_12=*/true>(op, offset, matched); + } else { + op = EmitCopy</*len_less_than_12=*/false>(op, offset, matched); + } next_emit = ip; - if (SNAPPY_PREDICT_FALSE(ip >= ip_limit)) { + if (SNAPPY_PREDICT_FALSE(ip >= ip_limit)) { goto emit_remainder; } - // We are now looking for a 4-byte match again. We read - // table[Hash(ip, shift)] for that. To improve compression, - // we also update table[Hash(ip - 1, shift)] and table[Hash(ip, shift)]. - input_bytes = GetEightBytesAt(ip - 1); + // We are now looking for a 4-byte match again. We read + // table[Hash(ip, shift)] for that. To improve compression, + // we also update table[Hash(ip - 1, shift)] and table[Hash(ip, shift)]. 
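// [Editor's aside: the single 8-byte load below feeds two overlapping
//  4-byte windows -- GetUint32AtOffset(input_bytes, 0) covers position
//  ip - 1 and GetUint32AtOffset(input_bytes, 1) covers position ip -- so
//  both hash-table entries are refreshed with one memory access instead
//  of two separate loads.]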
+      input_bytes = GetEightBytesAt(ip - 1);
      uint32 prev_hash = HashBytes(GetUint32AtOffset(input_bytes, 0), shift);
      table[prev_hash] = ip - base_ip - 1;
      uint32 cur_hash = HashBytes(GetUint32AtOffset(input_bytes, 1), shift);
@@ -663,18 +663,18 @@ char* CompressFragment(const char* input,
 emit_remainder:
  // Emit the remaining bytes as a literal
  if (next_emit < ip_end) {
-    op = EmitLiteral</*allow_fast_path=*/false>(op, next_emit,
-                                                ip_end - next_emit);
+    op = EmitLiteral</*allow_fast_path=*/false>(op, next_emit,
+                                                ip_end - next_emit);
  }

  return op;
 }
}  // end namespace internal

-// Called back at every compression call to trace parameters and sizes.
-static inline void Report(const char *algorithm, size_t compressed_size,
-                          size_t uncompressed_size) {}
-
+// Called back at every compression call to trace parameters and sizes.
+static inline void Report(const char *algorithm, size_t compressed_size,
+                          size_t uncompressed_size) {}
+
 // Signature of output types needed by decompression code.
 // The decompression code is templatized on a type that obeys this
 // signature so that we do not pay virtual function call overhead in
@@ -692,50 +692,50 @@ static inline void Report(const char *algorithm, size_t compressed_size,
 //   bool Append(const char* ip, size_t length);
 //   bool AppendFromSelf(uint32 offset, size_t length);
 //
-//   // The rules for how TryFastAppend differs from Append are somewhat
-//   // convoluted:
+//   // The rules for how TryFastAppend differs from Append are somewhat
+//   // convoluted:
 //   //
-//   //  - TryFastAppend is allowed to decline (return false) at any
-//   //    time, for any reason -- just "return false" would be
-//   //    a perfectly legal implementation of TryFastAppend.
-//   //    The intention is for TryFastAppend to allow a fast path
-//   //    in the common case of a small append.
-//   //  - TryFastAppend is allowed to read up to <available> bytes
-//   //    from the input buffer, whereas Append is allowed to read
-//   //    <length>. However, if it returns true, it must leave
-//   //    at least five (kMaximumTagLength) bytes in the input buffer
-//   //    afterwards, so that there is always enough space to read the
-//   //    next tag without checking for a refill.
-//   //  - TryFastAppend must always decline (return false)
-//   //    if <length> is 61 or more, as in this case the literal length is not
-//   //    decoded fully. In practice, this should not be a big problem,
-//   //    as it is unlikely that one would implement a fast path accepting
-//   //    this much data.
+//   //  - TryFastAppend is allowed to decline (return false) at any
+//   //    time, for any reason -- just "return false" would be
+//   //    a perfectly legal implementation of TryFastAppend.
+//   //    The intention is for TryFastAppend to allow a fast path
+//   //    in the common case of a small append.
+//   //  - TryFastAppend is allowed to read up to <available> bytes
+//   //    from the input buffer, whereas Append is allowed to read
+//   //    <length>. However, if it returns true, it must leave
+//   //    at least five (kMaximumTagLength) bytes in the input buffer
+//   //    afterwards, so that there is always enough space to read the
+//   //    next tag without checking for a refill.
+//   //  - TryFastAppend must always decline (return false)
+//   //    if <length> is 61 or more, as in this case the literal length is not
+//   //    decoded fully. In practice, this should not be a big problem,
+//   //    as it is unlikely that one would implement a fast path accepting
+//   //    this much data.
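// [Editor's sketch: per the rules above, the minimal conforming
//  TryFastAppend simply declines every request,
//
//    bool TryFastAppend(const char* ip, size_t available, size_t length) {
//      return false;
//    }
//
//  and every literal then goes through Append(). Fast paths such as the
//  16-byte UnalignedCopy128 path in SnappyArrayWriter below are purely an
//  optimization on top of this contract.]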
// // // bool TryFastAppend(const char* ip, size_t available, size_t length); // }; -static inline uint32 ExtractLowBytes(uint32 v, int n) { - assert(n >= 0); - assert(n <= 4); -#if SNAPPY_HAVE_BMI2 - return _bzhi_u32(v, 8 * n); -#else - // This needs to be wider than uint32 otherwise `mask << 32` will be - // undefined. - uint64 mask = 0xffffffff; - return v & ~(mask << (8 * n)); -#endif +static inline uint32 ExtractLowBytes(uint32 v, int n) { + assert(n >= 0); + assert(n <= 4); +#if SNAPPY_HAVE_BMI2 + return _bzhi_u32(v, 8 * n); +#else + // This needs to be wider than uint32 otherwise `mask << 32` will be + // undefined. + uint64 mask = 0xffffffff; + return v & ~(mask << (8 * n)); +#endif } -static inline bool LeftShiftOverflows(uint8 value, uint32 shift) { - assert(shift < 32); - static const uint8 masks[] = { - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // - 0x00, 0x80, 0xc0, 0xe0, 0xf0, 0xf8, 0xfc, 0xfe}; - return (value & masks[shift]) != 0; +static inline bool LeftShiftOverflows(uint8 value, uint32 shift) { + assert(shift < 32); + static const uint8 masks[] = { + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // + 0x00, 0x80, 0xc0, 0xe0, 0xf0, 0xf8, 0xfc, 0xfe}; + return (value & masks[shift]) != 0; } // Helper class for decompression @@ -746,7 +746,7 @@ class SnappyDecompressor { const char* ip_limit_; // Points just past buffered bytes uint32 peeked_; // Bytes peeked from reader (need to skip) bool eof_; // Hit end of input without an error? - char scratch_[kMaximumTagLength]; // See RefillTag(). + char scratch_[kMaximumTagLength]; // See RefillTag(). // Ensure that all of the tag metadata for the next tag is available // in [ip_..ip_limit_-1]. Also ensures that [ip,ip+4] is readable even @@ -775,10 +775,10 @@ class SnappyDecompressor { } // Read the uncompressed length stored at the start of the compressed data. - // On success, stores the length in *result and returns true. + // On success, stores the length in *result and returns true. // On failure, returns false. bool ReadUncompressedLength(uint32* result) { - assert(ip_ == NULL); // Must not have read anything yet + assert(ip_ == NULL); // Must not have read anything yet // Length is encoded in 1..5 bytes *result = 0; uint32 shift = 0; @@ -789,9 +789,9 @@ class SnappyDecompressor { if (n == 0) return false; const unsigned char c = *(reinterpret_cast<const unsigned char*>(ip)); reader_->Skip(1); - uint32 val = c & 0x7f; - if (LeftShiftOverflows(static_cast<uint8>(val), shift)) return false; - *result |= val << shift; + uint32 val = c & 0x7f; + if (LeftShiftOverflows(static_cast<uint8>(val), shift)) return false; + *result |= val << shift; if (c < 128) { break; } @@ -803,33 +803,33 @@ class SnappyDecompressor { // Process the next item found in the input. // Returns true if successful, false on error or end of input. template <class Writer> -#if defined(__GNUC__) && defined(__x86_64__) - __attribute__((aligned(32))) -#endif +#if defined(__GNUC__) && defined(__x86_64__) + __attribute__((aligned(32))) +#endif void DecompressAllTags(Writer* writer) { - // In x86, pad the function body to start 16 bytes later. 
This function has - // a couple of hotspots that are highly sensitive to alignment: we have - // observed regressions by more than 20% in some metrics just by moving the - // exact same code to a different position in the benchmark binary. - // - // Putting this code on a 32-byte-aligned boundary + 16 bytes makes us hit - // the "lucky" case consistently. Unfortunately, this is a very brittle - // workaround, and future differences in code generation may reintroduce - // this regression. If you experience a big, difficult to explain, benchmark - // performance regression here, first try removing this hack. -#if defined(__GNUC__) && defined(__x86_64__) - // Two 8-byte "NOP DWORD ptr [EAX + EAX*1 + 00000000H]" instructions. - asm(".byte 0x0f, 0x1f, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00"); - asm(".byte 0x0f, 0x1f, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00"); -#endif - + // In x86, pad the function body to start 16 bytes later. This function has + // a couple of hotspots that are highly sensitive to alignment: we have + // observed regressions by more than 20% in some metrics just by moving the + // exact same code to a different position in the benchmark binary. + // + // Putting this code on a 32-byte-aligned boundary + 16 bytes makes us hit + // the "lucky" case consistently. Unfortunately, this is a very brittle + // workaround, and future differences in code generation may reintroduce + // this regression. If you experience a big, difficult to explain, benchmark + // performance regression here, first try removing this hack. +#if defined(__GNUC__) && defined(__x86_64__) + // Two 8-byte "NOP DWORD ptr [EAX + EAX*1 + 00000000H]" instructions. + asm(".byte 0x0f, 0x1f, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00"); + asm(".byte 0x0f, 0x1f, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00"); +#endif + const char* ip = ip_; // We could have put this refill fragment only at the beginning of the loop. // However, duplicating it at the end of each branch gives the compiler more // scope to optimize the <ip_limit_ - ip> expression based on the local // context, which overall increases speed. #define MAYBE_REFILL() \ - if (ip_limit_ - ip < kMaximumTagLength) { \ + if (ip_limit_ - ip < kMaximumTagLength) { \ ip_ = ip; \ if (!RefillTag()) return; \ ip = ip_; \ @@ -839,34 +839,34 @@ class SnappyDecompressor { for ( ;; ) { const unsigned char c = *(reinterpret_cast<const unsigned char*>(ip++)); - // Ratio of iterations that have LITERAL vs non-LITERAL for different - // inputs. - // - // input LITERAL NON_LITERAL - // ----------------------------------- - // html|html4|cp 23% 77% - // urls 36% 64% - // jpg 47% 53% - // pdf 19% 81% - // txt[1-4] 25% 75% - // pb 24% 76% - // bin 24% 76% - if (SNAPPY_PREDICT_FALSE((c & 0x3) == LITERAL)) { + // Ratio of iterations that have LITERAL vs non-LITERAL for different + // inputs. + // + // input LITERAL NON_LITERAL + // ----------------------------------- + // html|html4|cp 23% 77% + // urls 36% 64% + // jpg 47% 53% + // pdf 19% 81% + // txt[1-4] 25% 75% + // pb 24% 76% + // bin 24% 76% + if (SNAPPY_PREDICT_FALSE((c & 0x3) == LITERAL)) { size_t literal_length = (c >> 2) + 1u; if (writer->TryFastAppend(ip, ip_limit_ - ip, literal_length)) { - assert(literal_length < 61); + assert(literal_length < 61); ip += literal_length; - // NOTE: There is no MAYBE_REFILL() here, as TryFastAppend() - // will not return true unless there's already at least five spare - // bytes in addition to the literal. 
+ // NOTE: There is no MAYBE_REFILL() here, as TryFastAppend() + // will not return true unless there's already at least five spare + // bytes in addition to the literal. continue; } - if (SNAPPY_PREDICT_FALSE(literal_length >= 61)) { + if (SNAPPY_PREDICT_FALSE(literal_length >= 61)) { // Long literal. const size_t literal_length_length = literal_length - 60; literal_length = - ExtractLowBytes(LittleEndian::Load32(ip), literal_length_length) + - 1; + ExtractLowBytes(LittleEndian::Load32(ip), literal_length_length) + + 1; ip += literal_length_length; } @@ -888,16 +888,16 @@ class SnappyDecompressor { ip += literal_length; MAYBE_REFILL(); } else { - const size_t entry = char_table[c]; - const size_t trailer = - ExtractLowBytes(LittleEndian::Load32(ip), entry >> 11); - const size_t length = entry & 0xff; + const size_t entry = char_table[c]; + const size_t trailer = + ExtractLowBytes(LittleEndian::Load32(ip), entry >> 11); + const size_t length = entry & 0xff; ip += entry >> 11; // copy_offset/256 is encoded in bits 8..10. By just fetching // those bits, we get copy_offset (since the bit-field starts at // bit 8). - const size_t copy_offset = entry & 0x700; + const size_t copy_offset = entry & 0x700; if (!writer->AppendFromSelf(copy_offset + trailer, length)) { return; } @@ -917,17 +917,17 @@ bool SnappyDecompressor::RefillTag() { size_t n; ip = reader_->Peek(&n); peeked_ = n; - eof_ = (n == 0); - if (eof_) return false; + eof_ = (n == 0); + if (eof_) return false; ip_limit_ = ip + n; } // Read the tag character - assert(ip < ip_limit_); + assert(ip < ip_limit_); const unsigned char c = *(reinterpret_cast<const unsigned char*>(ip)); const uint32 entry = char_table[c]; const uint32 needed = (entry >> 11) + 1; // +1 byte for 'c' - assert(needed <= sizeof(scratch_)); + assert(needed <= sizeof(scratch_)); // Read more bytes from reader if needed uint32 nbuf = ip_limit_ - ip; @@ -943,15 +943,15 @@ bool SnappyDecompressor::RefillTag() { size_t length; const char* src = reader_->Peek(&length); if (length == 0) return false; - uint32 to_add = std::min<uint32>(needed - nbuf, length); + uint32 to_add = std::min<uint32>(needed - nbuf, length); memcpy(scratch_ + nbuf, src, to_add); nbuf += to_add; reader_->Skip(to_add); } - assert(nbuf == needed); + assert(nbuf == needed); ip_ = scratch_; ip_limit_ = scratch_ + needed; - } else if (nbuf < kMaximumTagLength) { + } else if (nbuf < kMaximumTagLength) { // Have enough bytes, but move into scratch_ so that we do not // read past end of input memmove(scratch_, ip, nbuf); @@ -967,28 +967,28 @@ bool SnappyDecompressor::RefillTag() { } template <typename Writer> -static bool InternalUncompress(Source* r, Writer* writer) { +static bool InternalUncompress(Source* r, Writer* writer) { // Read the uncompressed length from the front of the compressed input SnappyDecompressor decompressor(r); uint32 uncompressed_len = 0; if (!decompressor.ReadUncompressedLength(&uncompressed_len)) return false; - - return InternalUncompressAllTags(&decompressor, writer, r->Available(), - uncompressed_len); + + return InternalUncompressAllTags(&decompressor, writer, r->Available(), + uncompressed_len); } template <typename Writer> static bool InternalUncompressAllTags(SnappyDecompressor* decompressor, Writer* writer, - uint32 compressed_len, - uint32 uncompressed_len) { - Report("snappy_uncompress", compressed_len, uncompressed_len); + uint32 compressed_len, + uint32 uncompressed_len) { + Report("snappy_uncompress", compressed_len, uncompressed_len); 
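// [Editor's aside: the driver below is the entire decompression contract --
//  announce the expected size to the writer, replay every tag, flush, and
//  report success only if the input was fully consumed (eof()) *and* the
//  output length came out exact (CheckLength()).]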
writer->SetExpectedLength(uncompressed_len); // Process the entire input decompressor->DecompressAllTags(writer); - writer->Flush(); + writer->Flush(); return (decompressor->eof() && writer->CheckLength()); } @@ -1000,20 +1000,20 @@ bool GetUncompressedLength(Source* source, uint32* result) { size_t Compress(Source* reader, Sink* writer) { size_t written = 0; size_t N = reader->Available(); - const size_t uncompressed_size = N; + const size_t uncompressed_size = N; char ulength[Varint::kMax32]; char* p = Varint::Encode32(ulength, N); writer->Append(ulength, p-ulength); written += (p - ulength); - internal::WorkingMemory wmem(N); + internal::WorkingMemory wmem(N); while (N > 0) { // Get next block to compress (without copying if possible) size_t fragment_size; const char* fragment = reader->Peek(&fragment_size); - assert(fragment_size != 0); // premature end of input - const size_t num_to_read = std::min(N, kBlockSize); + assert(fragment_size != 0); // premature end of input + const size_t num_to_read = std::min(N, kBlockSize); size_t bytes_read = fragment_size; size_t pending_advance = 0; @@ -1022,22 +1022,22 @@ size_t Compress(Source* reader, Sink* writer) { pending_advance = num_to_read; fragment_size = num_to_read; } else { - char* scratch = wmem.GetScratchInput(); + char* scratch = wmem.GetScratchInput(); memcpy(scratch, fragment, bytes_read); reader->Skip(bytes_read); while (bytes_read < num_to_read) { fragment = reader->Peek(&fragment_size); - size_t n = std::min<size_t>(fragment_size, num_to_read - bytes_read); + size_t n = std::min<size_t>(fragment_size, num_to_read - bytes_read); memcpy(scratch + bytes_read, fragment, n); bytes_read += n; reader->Skip(n); } - assert(bytes_read == num_to_read); + assert(bytes_read == num_to_read); fragment = scratch; fragment_size = num_to_read; } - assert(fragment_size == num_to_read); + assert(fragment_size == num_to_read); // Get encoding table for compression int table_size; @@ -1048,13 +1048,13 @@ size_t Compress(Source* reader, Sink* writer) { // Need a scratch buffer for the output, in case the byte sink doesn't // have room for us directly. - - // Since we encode kBlockSize regions followed by a region - // which is <= kBlockSize in length, a previously allocated - // scratch_output[] region is big enough for this iteration. - char* dest = writer->GetAppendBuffer(max_output, wmem.GetScratchOutput()); - char* end = internal::CompressFragment(fragment, fragment_size, dest, table, - table_size); + + // Since we encode kBlockSize regions followed by a region + // which is <= kBlockSize in length, a previously allocated + // scratch_output[] region is big enough for this iteration. + char* dest = writer->GetAppendBuffer(max_output, wmem.GetScratchOutput()); + char* end = internal::CompressFragment(fragment, fragment_size, dest, table, + table_size); writer->Append(dest, end - dest); written += (end - dest); @@ -1062,204 +1062,204 @@ size_t Compress(Source* reader, Sink* writer) { reader->Skip(pending_advance); } - Report("snappy_compress", written, uncompressed_size); + Report("snappy_compress", written, uncompressed_size); return written; } // ----------------------------------------------------------------------- -// IOVec interfaces -// ----------------------------------------------------------------------- - -// A type that writes to an iovec. -// Note that this is not a "ByteSink", but a type that matches the -// Writer template argument to SnappyDecompressor::DecompressAllTags(). 
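// [Editor's usage sketch for the iovec interface that follows; the buffer
//  names and sizes are invented for illustration. Together the buffers
//  must hold at least GetUncompressedLength(compressed) bytes.]
//
//    char part1[4096], part2[4096];
//    struct iovec iov[2] = {{part1, sizeof(part1)}, {part2, sizeof(part2)}};
//    if (!snappy::RawUncompressToIOVec(compressed, compressed_length,
//                                      iov, 2)) {
//      // corrupt or truncated input
//    }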
-class SnappyIOVecWriter { - private: - // output_iov_end_ is set to iov + count and used to determine when - // the end of the iovs is reached. - const struct iovec* output_iov_end_; - -#if !defined(NDEBUG) - const struct iovec* output_iov_; -#endif // !defined(NDEBUG) - - // Current iov that is being written into. - const struct iovec* curr_iov_; - - // Pointer to current iov's write location. - char* curr_iov_output_; - - // Remaining bytes to write into curr_iov_output. - size_t curr_iov_remaining_; - - // Total bytes decompressed into output_iov_ so far. - size_t total_written_; - - // Maximum number of bytes that will be decompressed into output_iov_. - size_t output_limit_; - - static inline char* GetIOVecPointer(const struct iovec* iov, size_t offset) { - return reinterpret_cast<char*>(iov->iov_base) + offset; - } - - public: - // Does not take ownership of iov. iov must be valid during the - // entire lifetime of the SnappyIOVecWriter. - inline SnappyIOVecWriter(const struct iovec* iov, size_t iov_count) - : output_iov_end_(iov + iov_count), -#if !defined(NDEBUG) - output_iov_(iov), -#endif // !defined(NDEBUG) - curr_iov_(iov), - curr_iov_output_(iov_count ? reinterpret_cast<char*>(iov->iov_base) - : nullptr), - curr_iov_remaining_(iov_count ? iov->iov_len : 0), - total_written_(0), - output_limit_(-1) {} - - inline void SetExpectedLength(size_t len) { - output_limit_ = len; - } - - inline bool CheckLength() const { - return total_written_ == output_limit_; - } - - inline bool Append(const char* ip, size_t len) { - if (total_written_ + len > output_limit_) { - return false; - } - - return AppendNoCheck(ip, len); - } - - inline bool AppendNoCheck(const char* ip, size_t len) { - while (len > 0) { - if (curr_iov_remaining_ == 0) { - // This iovec is full. Go to the next one. - if (curr_iov_ + 1 >= output_iov_end_) { - return false; - } - ++curr_iov_; - curr_iov_output_ = reinterpret_cast<char*>(curr_iov_->iov_base); - curr_iov_remaining_ = curr_iov_->iov_len; - } - - const size_t to_write = std::min(len, curr_iov_remaining_); - memcpy(curr_iov_output_, ip, to_write); - curr_iov_output_ += to_write; - curr_iov_remaining_ -= to_write; - total_written_ += to_write; - ip += to_write; - len -= to_write; - } - - return true; - } - - inline bool TryFastAppend(const char* ip, size_t available, size_t len) { - const size_t space_left = output_limit_ - total_written_; - if (len <= 16 && available >= 16 + kMaximumTagLength && space_left >= 16 && - curr_iov_remaining_ >= 16) { - // Fast path, used for the majority (about 95%) of invocations. - UnalignedCopy128(ip, curr_iov_output_); - curr_iov_output_ += len; - curr_iov_remaining_ -= len; - total_written_ += len; - return true; - } - - return false; - } - - inline bool AppendFromSelf(size_t offset, size_t len) { - // See SnappyArrayWriter::AppendFromSelf for an explanation of - // the "offset - 1u" trick. - if (offset - 1u >= total_written_) { - return false; - } - const size_t space_left = output_limit_ - total_written_; - if (len > space_left) { - return false; - } - - // Locate the iovec from which we need to start the copy. 
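// [Editor's aside: the loop below walks backwards from the current write
//  position. For example, offset = 10 when only 4 bytes have been written
//  into curr_iov_ steps back one iovec (offset becomes 6) and starts the
//  copy 6 bytes before that previous iovec's end.]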
- const iovec* from_iov = curr_iov_; - size_t from_iov_offset = curr_iov_->iov_len - curr_iov_remaining_; - while (offset > 0) { - if (from_iov_offset >= offset) { - from_iov_offset -= offset; - break; - } - - offset -= from_iov_offset; - --from_iov; -#if !defined(NDEBUG) - assert(from_iov >= output_iov_); -#endif // !defined(NDEBUG) - from_iov_offset = from_iov->iov_len; - } - - // Copy <len> bytes starting from the iovec pointed to by from_iov_index to - // the current iovec. - while (len > 0) { - assert(from_iov <= curr_iov_); - if (from_iov != curr_iov_) { - const size_t to_copy = - std::min(from_iov->iov_len - from_iov_offset, len); - AppendNoCheck(GetIOVecPointer(from_iov, from_iov_offset), to_copy); - len -= to_copy; - if (len > 0) { - ++from_iov; - from_iov_offset = 0; - } - } else { - size_t to_copy = curr_iov_remaining_; - if (to_copy == 0) { - // This iovec is full. Go to the next one. - if (curr_iov_ + 1 >= output_iov_end_) { - return false; - } - ++curr_iov_; - curr_iov_output_ = reinterpret_cast<char*>(curr_iov_->iov_base); - curr_iov_remaining_ = curr_iov_->iov_len; - continue; - } - if (to_copy > len) { - to_copy = len; - } - - IncrementalCopy(GetIOVecPointer(from_iov, from_iov_offset), - curr_iov_output_, curr_iov_output_ + to_copy, - curr_iov_output_ + curr_iov_remaining_); - curr_iov_output_ += to_copy; - curr_iov_remaining_ -= to_copy; - from_iov_offset += to_copy; - total_written_ += to_copy; - len -= to_copy; - } - } - - return true; - } - - inline void Flush() {} -}; - -bool RawUncompressToIOVec(const char* compressed, size_t compressed_length, - const struct iovec* iov, size_t iov_cnt) { - ByteArraySource reader(compressed, compressed_length); - return RawUncompressToIOVec(&reader, iov, iov_cnt); -} - -bool RawUncompressToIOVec(Source* compressed, const struct iovec* iov, - size_t iov_cnt) { - SnappyIOVecWriter output(iov, iov_cnt); - return InternalUncompress(compressed, &output); -} - -// ----------------------------------------------------------------------- +// IOVec interfaces +// ----------------------------------------------------------------------- + +// A type that writes to an iovec. +// Note that this is not a "ByteSink", but a type that matches the +// Writer template argument to SnappyDecompressor::DecompressAllTags(). +class SnappyIOVecWriter { + private: + // output_iov_end_ is set to iov + count and used to determine when + // the end of the iovs is reached. + const struct iovec* output_iov_end_; + +#if !defined(NDEBUG) + const struct iovec* output_iov_; +#endif // !defined(NDEBUG) + + // Current iov that is being written into. + const struct iovec* curr_iov_; + + // Pointer to current iov's write location. + char* curr_iov_output_; + + // Remaining bytes to write into curr_iov_output. + size_t curr_iov_remaining_; + + // Total bytes decompressed into output_iov_ so far. + size_t total_written_; + + // Maximum number of bytes that will be decompressed into output_iov_. + size_t output_limit_; + + static inline char* GetIOVecPointer(const struct iovec* iov, size_t offset) { + return reinterpret_cast<char*>(iov->iov_base) + offset; + } + + public: + // Does not take ownership of iov. iov must be valid during the + // entire lifetime of the SnappyIOVecWriter. + inline SnappyIOVecWriter(const struct iovec* iov, size_t iov_count) + : output_iov_end_(iov + iov_count), +#if !defined(NDEBUG) + output_iov_(iov), +#endif // !defined(NDEBUG) + curr_iov_(iov), + curr_iov_output_(iov_count ? 
reinterpret_cast<char*>(iov->iov_base) + : nullptr), + curr_iov_remaining_(iov_count ? iov->iov_len : 0), + total_written_(0), + output_limit_(-1) {} + + inline void SetExpectedLength(size_t len) { + output_limit_ = len; + } + + inline bool CheckLength() const { + return total_written_ == output_limit_; + } + + inline bool Append(const char* ip, size_t len) { + if (total_written_ + len > output_limit_) { + return false; + } + + return AppendNoCheck(ip, len); + } + + inline bool AppendNoCheck(const char* ip, size_t len) { + while (len > 0) { + if (curr_iov_remaining_ == 0) { + // This iovec is full. Go to the next one. + if (curr_iov_ + 1 >= output_iov_end_) { + return false; + } + ++curr_iov_; + curr_iov_output_ = reinterpret_cast<char*>(curr_iov_->iov_base); + curr_iov_remaining_ = curr_iov_->iov_len; + } + + const size_t to_write = std::min(len, curr_iov_remaining_); + memcpy(curr_iov_output_, ip, to_write); + curr_iov_output_ += to_write; + curr_iov_remaining_ -= to_write; + total_written_ += to_write; + ip += to_write; + len -= to_write; + } + + return true; + } + + inline bool TryFastAppend(const char* ip, size_t available, size_t len) { + const size_t space_left = output_limit_ - total_written_; + if (len <= 16 && available >= 16 + kMaximumTagLength && space_left >= 16 && + curr_iov_remaining_ >= 16) { + // Fast path, used for the majority (about 95%) of invocations. + UnalignedCopy128(ip, curr_iov_output_); + curr_iov_output_ += len; + curr_iov_remaining_ -= len; + total_written_ += len; + return true; + } + + return false; + } + + inline bool AppendFromSelf(size_t offset, size_t len) { + // See SnappyArrayWriter::AppendFromSelf for an explanation of + // the "offset - 1u" trick. + if (offset - 1u >= total_written_) { + return false; + } + const size_t space_left = output_limit_ - total_written_; + if (len > space_left) { + return false; + } + + // Locate the iovec from which we need to start the copy. + const iovec* from_iov = curr_iov_; + size_t from_iov_offset = curr_iov_->iov_len - curr_iov_remaining_; + while (offset > 0) { + if (from_iov_offset >= offset) { + from_iov_offset -= offset; + break; + } + + offset -= from_iov_offset; + --from_iov; +#if !defined(NDEBUG) + assert(from_iov >= output_iov_); +#endif // !defined(NDEBUG) + from_iov_offset = from_iov->iov_len; + } + + // Copy <len> bytes starting from the iovec pointed to by from_iov_index to + // the current iovec. + while (len > 0) { + assert(from_iov <= curr_iov_); + if (from_iov != curr_iov_) { + const size_t to_copy = + std::min(from_iov->iov_len - from_iov_offset, len); + AppendNoCheck(GetIOVecPointer(from_iov, from_iov_offset), to_copy); + len -= to_copy; + if (len > 0) { + ++from_iov; + from_iov_offset = 0; + } + } else { + size_t to_copy = curr_iov_remaining_; + if (to_copy == 0) { + // This iovec is full. Go to the next one. 
+ if (curr_iov_ + 1 >= output_iov_end_) { + return false; + } + ++curr_iov_; + curr_iov_output_ = reinterpret_cast<char*>(curr_iov_->iov_base); + curr_iov_remaining_ = curr_iov_->iov_len; + continue; + } + if (to_copy > len) { + to_copy = len; + } + + IncrementalCopy(GetIOVecPointer(from_iov, from_iov_offset), + curr_iov_output_, curr_iov_output_ + to_copy, + curr_iov_output_ + curr_iov_remaining_); + curr_iov_output_ += to_copy; + curr_iov_remaining_ -= to_copy; + from_iov_offset += to_copy; + total_written_ += to_copy; + len -= to_copy; + } + } + + return true; + } + + inline void Flush() {} +}; + +bool RawUncompressToIOVec(const char* compressed, size_t compressed_length, + const struct iovec* iov, size_t iov_cnt) { + ByteArraySource reader(compressed, compressed_length); + return RawUncompressToIOVec(&reader, iov, iov_cnt); +} + +bool RawUncompressToIOVec(Source* compressed, const struct iovec* iov, + size_t iov_cnt) { + SnappyIOVecWriter output(iov, iov_cnt); + return InternalUncompress(compressed, &output); +} + +// ----------------------------------------------------------------------- // Flat array interfaces // ----------------------------------------------------------------------- @@ -1275,8 +1275,8 @@ class SnappyArrayWriter { public: inline explicit SnappyArrayWriter(char* dst) : base_(dst), - op_(dst), - op_limit_(dst) { + op_(dst), + op_limit_(dst) { } inline void SetExpectedLength(size_t len) { @@ -1301,9 +1301,9 @@ class SnappyArrayWriter { inline bool TryFastAppend(const char* ip, size_t available, size_t len) { char* op = op_; const size_t space_left = op_limit_ - op; - if (len <= 16 && available >= 16 + kMaximumTagLength && space_left >= 16) { + if (len <= 16 && available >= 16 + kMaximumTagLength && space_left >= 16) { // Fast path, used for the majority (about 95%) of invocations. - UnalignedCopy128(ip, op); + UnalignedCopy128(ip, op); op_ = op + len; return true; } else { @@ -1312,25 +1312,25 @@ class SnappyArrayWriter { } inline bool AppendFromSelf(size_t offset, size_t len) { - char* const op_end = op_ + len; - - // Check if we try to append from before the start of the buffer. - // Normally this would just be a check for "produced < offset", - // but "produced <= offset - 1u" is equivalent for every case - // except the one where offset==0, where the right side will wrap around - // to a very big number. This is convenient, as offset==0 is another - // invalid case that we also want to catch, so that we do not go - // into an infinite loop. - if (Produced() <= offset - 1u || op_end > op_limit_) return false; - op_ = IncrementalCopy(op_ - offset, op_, op_end, op_limit_); + char* const op_end = op_ + len; + + // Check if we try to append from before the start of the buffer. + // Normally this would just be a check for "produced < offset", + // but "produced <= offset - 1u" is equivalent for every case + // except the one where offset==0, where the right side will wrap around + // to a very big number. This is convenient, as offset==0 is another + // invalid case that we also want to catch, so that we do not go + // into an infinite loop. 
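// [Editor's aside: concretely, offset == 0 makes "offset - 1u" wrap to
//  SIZE_MAX, so the check below rejects it just like any offset pointing
//  before the start of the buffer -- no separate offset == 0 branch, and
//  no infinite self-copy loop.]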
+ if (Produced() <= offset - 1u || op_end > op_limit_) return false; + op_ = IncrementalCopy(op_ - offset, op_, op_end, op_limit_); return true; } - inline size_t Produced() const { - assert(op_ >= base_); - return op_ - base_; - } - inline void Flush() {} + inline size_t Produced() const { + assert(op_ >= base_); + return op_ - base_; + } + inline void Flush() {} }; bool RawUncompress(const char* compressed, size_t n, char* uncompressed) { @@ -1340,37 +1340,37 @@ bool RawUncompress(const char* compressed, size_t n, char* uncompressed) { bool RawUncompress(Source* compressed, char* uncompressed) { SnappyArrayWriter output(uncompressed); - return InternalUncompress(compressed, &output); + return InternalUncompress(compressed, &output); } -bool Uncompress(const char* compressed, size_t n, std::string* uncompressed) { +bool Uncompress(const char* compressed, size_t n, std::string* uncompressed) { size_t ulength; if (!GetUncompressedLength(compressed, n, &ulength)) { return false; } - // On 32-bit builds: max_size() < kuint32max. Check for that instead - // of crashing (e.g., consider externally specified compressed data). - if (ulength > uncompressed->max_size()) { + // On 32-bit builds: max_size() < kuint32max. Check for that instead + // of crashing (e.g., consider externally specified compressed data). + if (ulength > uncompressed->max_size()) { return false; } STLStringResizeUninitialized(uncompressed, ulength); return RawUncompress(compressed, n, string_as_array(uncompressed)); } -bool Uncompress(const char* compressed, size_t n, TString* uncompressed) { - size_t ulength; - if (!GetUncompressedLength(compressed, n, &ulength)) { - return false; - } - // On 32-bit builds: max_size() < kuint32max. Check for that instead - // of crashing (e.g., consider externally specified compressed data). - if (ulength > uncompressed->max_size()) { - return false; - } - uncompressed->ReserveAndResize(ulength); - return RawUncompress(compressed, n, uncompressed->begin()); -} - +bool Uncompress(const char* compressed, size_t n, TString* uncompressed) { + size_t ulength; + if (!GetUncompressedLength(compressed, n, &ulength)) { + return false; + } + // On 32-bit builds: max_size() < kuint32max. Check for that instead + // of crashing (e.g., consider externally specified compressed data). + if (ulength > uncompressed->max_size()) { + return false; + } + uncompressed->ReserveAndResize(ulength); + return RawUncompress(compressed, n, uncompressed->begin()); +} + // A Writer that drops everything on the floor and just does validation class SnappyDecompressionValidator { private: @@ -1378,7 +1378,7 @@ class SnappyDecompressionValidator { size_t produced_; public: - inline SnappyDecompressionValidator() : expected_(0), produced_(0) { } + inline SnappyDecompressionValidator() : expected_(0), produced_(0) { } inline void SetExpectedLength(size_t len) { expected_ = len; } @@ -1393,26 +1393,26 @@ class SnappyDecompressionValidator { return false; } inline bool AppendFromSelf(size_t offset, size_t len) { - // See SnappyArrayWriter::AppendFromSelf for an explanation of - // the "offset - 1u" trick. - if (produced_ <= offset - 1u) return false; + // See SnappyArrayWriter::AppendFromSelf for an explanation of + // the "offset - 1u" trick. 
+ if (produced_ <= offset - 1u) return false; produced_ += len; return produced_ <= expected_; } - inline void Flush() {} + inline void Flush() {} }; bool IsValidCompressedBuffer(const char* compressed, size_t n) { ByteArraySource reader(compressed, n); SnappyDecompressionValidator writer; - return InternalUncompress(&reader, &writer); + return InternalUncompress(&reader, &writer); +} + +bool IsValidCompressed(Source* compressed) { + SnappyDecompressionValidator writer; + return InternalUncompress(compressed, &writer); } -bool IsValidCompressed(Source* compressed) { - SnappyDecompressionValidator writer; - return InternalUncompress(compressed, &writer); -} - void RawCompress(const char* input, size_t input_length, char* compressed, @@ -1425,10 +1425,10 @@ void RawCompress(const char* input, *compressed_length = (writer.CurrentDestination() - compressed); } -size_t Compress(const char* input, size_t input_length, - std::string* compressed) { +size_t Compress(const char* input, size_t input_length, + std::string* compressed) { // Pre-grow the buffer to the max length of the compressed output - STLStringResizeUninitialized(compressed, MaxCompressedLength(input_length)); + STLStringResizeUninitialized(compressed, MaxCompressedLength(input_length)); size_t compressed_length; RawCompress(input, input_length, string_as_array(compressed), @@ -1437,252 +1437,252 @@ size_t Compress(const char* input, size_t input_length, return compressed_length; } -size_t Compress(const char* input, size_t input_length, - TString* compressed) { - // Pre-grow the buffer to the max length of the compressed output - compressed->ReserveAndResize(MaxCompressedLength(input_length)); - - size_t compressed_length; - RawCompress(input, input_length, compressed->begin(), - &compressed_length); - compressed->resize(compressed_length); - return compressed_length; -} - -// ----------------------------------------------------------------------- -// Sink interface -// ----------------------------------------------------------------------- - -// A type that decompresses into a Sink. The template parameter -// Allocator must export one method "char* Allocate(int size);", which -// allocates a buffer of "size" and appends that to the destination. -template <typename Allocator> -class SnappyScatteredWriter { - Allocator allocator_; - - // We need random access into the data generated so far. Therefore - // we keep track of all of the generated data as an array of blocks. - // All of the blocks except the last have length kBlockSize. 
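// [Editor's aside: because every full block is exactly kBlockSize =
//  1 << kBlockLog bytes, a global output offset src maps to block
//  blocks_[src >> kBlockLog], byte (src & (kBlockSize - 1)) -- which is
//  exactly how SlowAppendFromSelf below reads back earlier output.]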
- std::vector<char*> blocks_; - size_t expected_; - - // Total size of all fully generated blocks so far - size_t full_size_; - - // Pointer into current output block - char* op_base_; // Base of output block - char* op_ptr_; // Pointer to next unfilled byte in block - char* op_limit_; // Pointer just past block - - inline size_t Size() const { - return full_size_ + (op_ptr_ - op_base_); - } - - bool SlowAppend(const char* ip, size_t len); - bool SlowAppendFromSelf(size_t offset, size_t len); - - public: - inline explicit SnappyScatteredWriter(const Allocator& allocator) - : allocator_(allocator), - full_size_(0), - op_base_(NULL), - op_ptr_(NULL), - op_limit_(NULL) { - } - - inline void SetExpectedLength(size_t len) { - assert(blocks_.empty()); - expected_ = len; - } - - inline bool CheckLength() const { - return Size() == expected_; - } - - // Return the number of bytes actually uncompressed so far - inline size_t Produced() const { - return Size(); - } - - inline bool Append(const char* ip, size_t len) { - size_t avail = op_limit_ - op_ptr_; - if (len <= avail) { - // Fast path - memcpy(op_ptr_, ip, len); - op_ptr_ += len; - return true; - } else { - return SlowAppend(ip, len); - } - } - - inline bool TryFastAppend(const char* ip, size_t available, size_t length) { - char* op = op_ptr_; - const int space_left = op_limit_ - op; - if (length <= 16 && available >= 16 + kMaximumTagLength && - space_left >= 16) { - // Fast path, used for the majority (about 95%) of invocations. - UnalignedCopy128(ip, op); - op_ptr_ = op + length; - return true; - } else { - return false; - } - } - - inline bool AppendFromSelf(size_t offset, size_t len) { - char* const op_end = op_ptr_ + len; - // See SnappyArrayWriter::AppendFromSelf for an explanation of - // the "offset - 1u" trick. - if (SNAPPY_PREDICT_TRUE(offset - 1u < op_ptr_ - op_base_ && - op_end <= op_limit_)) { - // Fast path: src and dst in current block. - op_ptr_ = IncrementalCopy(op_ptr_ - offset, op_ptr_, op_end, op_limit_); - return true; - } - return SlowAppendFromSelf(offset, len); - } - - // Called at the end of the decompress. We ask the allocator - // write all blocks to the sink. - inline void Flush() { allocator_.Flush(Produced()); } -}; - -template<typename Allocator> -bool SnappyScatteredWriter<Allocator>::SlowAppend(const char* ip, size_t len) { - size_t avail = op_limit_ - op_ptr_; - while (len > avail) { - // Completely fill this block - memcpy(op_ptr_, ip, avail); - op_ptr_ += avail; - assert(op_limit_ - op_ptr_ == 0); - full_size_ += (op_ptr_ - op_base_); - len -= avail; - ip += avail; - - // Bounds check - if (full_size_ + len > expected_) { - return false; - } - - // Make new block - size_t bsize = std::min<size_t>(kBlockSize, expected_ - full_size_); - op_base_ = allocator_.Allocate(bsize); - op_ptr_ = op_base_; - op_limit_ = op_base_ + bsize; - blocks_.push_back(op_base_); - avail = bsize; - } - - memcpy(op_ptr_, ip, len); - op_ptr_ += len; - return true; -} - -template<typename Allocator> -bool SnappyScatteredWriter<Allocator>::SlowAppendFromSelf(size_t offset, - size_t len) { - // Overflow check - // See SnappyArrayWriter::AppendFromSelf for an explanation of - // the "offset - 1u" trick. - const size_t cur = Size(); - if (offset - 1u >= cur) return false; - if (expected_ - cur < len) return false; - - // Currently we shouldn't ever hit this path because Compress() chops the - // input into blocks and does not create cross-block copies. 
However, it is - // nice if we do not rely on that, since we can get better compression if we - // allow cross-block copies and thus might want to change the compressor in - // the future. - size_t src = cur - offset; - while (len-- > 0) { - char c = blocks_[src >> kBlockLog][src & (kBlockSize-1)]; - Append(&c, 1); - src++; - } - return true; -} - -class SnappySinkAllocator { - public: - explicit SnappySinkAllocator(Sink* dest): dest_(dest) {} - ~SnappySinkAllocator() {} - - char* Allocate(int size) { - Datablock block(new char[size], size); - blocks_.push_back(block); - return block.data; - } - - // We flush only at the end, because the writer wants - // random access to the blocks and once we hand the - // block over to the sink, we can't access it anymore. - // Also we don't write more than has been actually written - // to the blocks. - void Flush(size_t size) { - size_t size_written = 0; - size_t block_size; - for (int i = 0; i < blocks_.size(); ++i) { - block_size = std::min<size_t>(blocks_[i].size, size - size_written); - dest_->AppendAndTakeOwnership(blocks_[i].data, block_size, - &SnappySinkAllocator::Deleter, NULL); - size_written += block_size; - } - blocks_.clear(); - } - - private: - struct Datablock { - char* data; - size_t size; - Datablock(char* p, size_t s) : data(p), size(s) {} - }; - - static void Deleter(void* arg, const char* bytes, size_t size) { - delete[] bytes; - } - - Sink* dest_; - std::vector<Datablock> blocks_; - - // Note: copying this object is allowed -}; - -size_t UncompressAsMuchAsPossible(Source* compressed, Sink* uncompressed) { - SnappySinkAllocator allocator(uncompressed); - SnappyScatteredWriter<SnappySinkAllocator> writer(allocator); - InternalUncompress(compressed, &writer); - return writer.Produced(); -} - -bool Uncompress(Source* compressed, Sink* uncompressed) { - // Read the uncompressed length from the front of the compressed input - SnappyDecompressor decompressor(compressed); - uint32 uncompressed_len = 0; - if (!decompressor.ReadUncompressedLength(&uncompressed_len)) { - return false; - } - - char c; - size_t allocated_size; - char* buf = uncompressed->GetAppendBufferVariable( - 1, uncompressed_len, &c, 1, &allocated_size); - - const size_t compressed_len = compressed->Available(); - // If we can get a flat buffer, then use it, otherwise do block by block - // uncompression - if (allocated_size >= uncompressed_len) { - SnappyArrayWriter writer(buf); - bool result = InternalUncompressAllTags(&decompressor, &writer, - compressed_len, uncompressed_len); - uncompressed->Append(buf, writer.Produced()); - return result; - } else { - SnappySinkAllocator allocator(uncompressed); - SnappyScatteredWriter<SnappySinkAllocator> writer(allocator); - return InternalUncompressAllTags(&decompressor, &writer, compressed_len, - uncompressed_len); - } -} - -} // namespace snappy +size_t Compress(const char* input, size_t input_length, + TString* compressed) { + // Pre-grow the buffer to the max length of the compressed output + compressed->ReserveAndResize(MaxCompressedLength(input_length)); + + size_t compressed_length; + RawCompress(input, input_length, compressed->begin(), + &compressed_length); + compressed->resize(compressed_length); + return compressed_length; +} + +// ----------------------------------------------------------------------- +// Sink interface +// ----------------------------------------------------------------------- + +// A type that decompresses into a Sink. 
The template parameter +// Allocator must export one method "char* Allocate(int size);", which +// allocates a buffer of "size" and appends that to the destination. +template <typename Allocator> +class SnappyScatteredWriter { + Allocator allocator_; + + // We need random access into the data generated so far. Therefore + // we keep track of all of the generated data as an array of blocks. + // All of the blocks except the last have length kBlockSize. + std::vector<char*> blocks_; + size_t expected_; + + // Total size of all fully generated blocks so far + size_t full_size_; + + // Pointer into current output block + char* op_base_; // Base of output block + char* op_ptr_; // Pointer to next unfilled byte in block + char* op_limit_; // Pointer just past block + + inline size_t Size() const { + return full_size_ + (op_ptr_ - op_base_); + } + + bool SlowAppend(const char* ip, size_t len); + bool SlowAppendFromSelf(size_t offset, size_t len); + + public: + inline explicit SnappyScatteredWriter(const Allocator& allocator) + : allocator_(allocator), + full_size_(0), + op_base_(NULL), + op_ptr_(NULL), + op_limit_(NULL) { + } + + inline void SetExpectedLength(size_t len) { + assert(blocks_.empty()); + expected_ = len; + } + + inline bool CheckLength() const { + return Size() == expected_; + } + + // Return the number of bytes actually uncompressed so far + inline size_t Produced() const { + return Size(); + } + + inline bool Append(const char* ip, size_t len) { + size_t avail = op_limit_ - op_ptr_; + if (len <= avail) { + // Fast path + memcpy(op_ptr_, ip, len); + op_ptr_ += len; + return true; + } else { + return SlowAppend(ip, len); + } + } + + inline bool TryFastAppend(const char* ip, size_t available, size_t length) { + char* op = op_ptr_; + const int space_left = op_limit_ - op; + if (length <= 16 && available >= 16 + kMaximumTagLength && + space_left >= 16) { + // Fast path, used for the majority (about 95%) of invocations. + UnalignedCopy128(ip, op); + op_ptr_ = op + length; + return true; + } else { + return false; + } + } + + inline bool AppendFromSelf(size_t offset, size_t len) { + char* const op_end = op_ptr_ + len; + // See SnappyArrayWriter::AppendFromSelf for an explanation of + // the "offset - 1u" trick. + if (SNAPPY_PREDICT_TRUE(offset - 1u < op_ptr_ - op_base_ && + op_end <= op_limit_)) { + // Fast path: src and dst in current block. + op_ptr_ = IncrementalCopy(op_ptr_ - offset, op_ptr_, op_end, op_limit_); + return true; + } + return SlowAppendFromSelf(offset, len); + } + + // Called at the end of the decompress. We ask the allocator + // write all blocks to the sink. 
+ inline void Flush() { allocator_.Flush(Produced()); } +}; + +template<typename Allocator> +bool SnappyScatteredWriter<Allocator>::SlowAppend(const char* ip, size_t len) { + size_t avail = op_limit_ - op_ptr_; + while (len > avail) { + // Completely fill this block + memcpy(op_ptr_, ip, avail); + op_ptr_ += avail; + assert(op_limit_ - op_ptr_ == 0); + full_size_ += (op_ptr_ - op_base_); + len -= avail; + ip += avail; + + // Bounds check + if (full_size_ + len > expected_) { + return false; + } + + // Make new block + size_t bsize = std::min<size_t>(kBlockSize, expected_ - full_size_); + op_base_ = allocator_.Allocate(bsize); + op_ptr_ = op_base_; + op_limit_ = op_base_ + bsize; + blocks_.push_back(op_base_); + avail = bsize; + } + + memcpy(op_ptr_, ip, len); + op_ptr_ += len; + return true; +} + +template<typename Allocator> +bool SnappyScatteredWriter<Allocator>::SlowAppendFromSelf(size_t offset, + size_t len) { + // Overflow check + // See SnappyArrayWriter::AppendFromSelf for an explanation of + // the "offset - 1u" trick. + const size_t cur = Size(); + if (offset - 1u >= cur) return false; + if (expected_ - cur < len) return false; + + // Currently we shouldn't ever hit this path because Compress() chops the + // input into blocks and does not create cross-block copies. However, it is + // nice if we do not rely on that, since we can get better compression if we + // allow cross-block copies and thus might want to change the compressor in + // the future. + size_t src = cur - offset; + while (len-- > 0) { + char c = blocks_[src >> kBlockLog][src & (kBlockSize-1)]; + Append(&c, 1); + src++; + } + return true; +} + +class SnappySinkAllocator { + public: + explicit SnappySinkAllocator(Sink* dest): dest_(dest) {} + ~SnappySinkAllocator() {} + + char* Allocate(int size) { + Datablock block(new char[size], size); + blocks_.push_back(block); + return block.data; + } + + // We flush only at the end, because the writer wants + // random access to the blocks and once we hand the + // block over to the sink, we can't access it anymore. + // Also we don't write more than has been actually written + // to the blocks. 
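// [Editor's aside: AppendAndTakeOwnership below hands each block to the
//  Sink together with SnappySinkAllocator::Deleter as its deallocator;
//  once the loop finishes, the sink owns all the memory, which is why
//  blocks_.clear() is safe immediately afterwards.]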
+ void Flush(size_t size) { + size_t size_written = 0; + size_t block_size; + for (int i = 0; i < blocks_.size(); ++i) { + block_size = std::min<size_t>(blocks_[i].size, size - size_written); + dest_->AppendAndTakeOwnership(blocks_[i].data, block_size, + &SnappySinkAllocator::Deleter, NULL); + size_written += block_size; + } + blocks_.clear(); + } + + private: + struct Datablock { + char* data; + size_t size; + Datablock(char* p, size_t s) : data(p), size(s) {} + }; + + static void Deleter(void* arg, const char* bytes, size_t size) { + delete[] bytes; + } + + Sink* dest_; + std::vector<Datablock> blocks_; + + // Note: copying this object is allowed +}; + +size_t UncompressAsMuchAsPossible(Source* compressed, Sink* uncompressed) { + SnappySinkAllocator allocator(uncompressed); + SnappyScatteredWriter<SnappySinkAllocator> writer(allocator); + InternalUncompress(compressed, &writer); + return writer.Produced(); +} + +bool Uncompress(Source* compressed, Sink* uncompressed) { + // Read the uncompressed length from the front of the compressed input + SnappyDecompressor decompressor(compressed); + uint32 uncompressed_len = 0; + if (!decompressor.ReadUncompressedLength(&uncompressed_len)) { + return false; + } + + char c; + size_t allocated_size; + char* buf = uncompressed->GetAppendBufferVariable( + 1, uncompressed_len, &c, 1, &allocated_size); + + const size_t compressed_len = compressed->Available(); + // If we can get a flat buffer, then use it, otherwise do block by block + // uncompression + if (allocated_size >= uncompressed_len) { + SnappyArrayWriter writer(buf); + bool result = InternalUncompressAllTags(&decompressor, &writer, + compressed_len, uncompressed_len); + uncompressed->Append(buf, writer.Produced()); + return result; + } else { + SnappySinkAllocator allocator(uncompressed); + SnappyScatteredWriter<SnappySinkAllocator> writer(allocator); + return InternalUncompressAllTags(&decompressor, &writer, compressed_len, + uncompressed_len); + } +} + +} // namespace snappy diff --git a/contrib/libs/snappy/snappy.h b/contrib/libs/snappy/snappy.h index f8eaa1c60d..9a3bc3fa64 100644 --- a/contrib/libs/snappy/snappy.h +++ b/contrib/libs/snappy/snappy.h @@ -36,14 +36,14 @@ // using BMDiff and then compressing the output of BMDiff with // Snappy. -#ifndef THIRD_PARTY_SNAPPY_SNAPPY_H__ -#define THIRD_PARTY_SNAPPY_SNAPPY_H__ +#ifndef THIRD_PARTY_SNAPPY_SNAPPY_H__ +#define THIRD_PARTY_SNAPPY_SNAPPY_H__ -#include <cstddef> -#include <string> +#include <cstddef> +#include <string> + +#include <util/generic/fwd.h> -#include <util/generic/fwd.h> - #include "snappy-stubs-public.h" namespace snappy { @@ -58,27 +58,27 @@ namespace snappy { // number of bytes written. size_t Compress(Source* source, Sink* sink); - // Find the uncompressed length of the given stream, as given by the header. - // Note that the true length could deviate from this; the stream could e.g. - // be truncated. - // - // Also note that this leaves "*source" in a state that is unsuitable for - // further operations, such as RawUncompress(). You will need to rewind - // or recreate the source yourself before attempting any further calls. + // Find the uncompressed length of the given stream, as given by the header. + // Note that the true length could deviate from this; the stream could e.g. + // be truncated. + // + // Also note that this leaves "*source" in a state that is unsuitable for + // further operations, such as RawUncompress(). You will need to rewind + // or recreate the source yourself before attempting any further calls. 
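  // [Editor's usage sketch of the caveat above -- a hedged example in
  //  which "data", "n", and "output_buffer" are hypothetical names: after
  //  probing the length, rebuild the source before decompressing.]
  //
  //    snappy::ByteArraySource probe(data, n);
  //    uint32 len;
  //    if (snappy::GetUncompressedLength(&probe, &len)) {
  //      snappy::ByteArraySource src(data, n);  // probe is spent; start over
  //      snappy::RawUncompress(&src, output_buffer);
  //    }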
bool GetUncompressedLength(Source* source, uint32* result); // ------------------------------------------------------------------------ // Higher-level string based routines (should be sufficient for most users) // ------------------------------------------------------------------------ - // Sets "*compressed" to the compressed version of "input[0,input_length-1]". - // Original contents of *compressed are lost. + // Sets "*compressed" to the compressed version of "input[0,input_length-1]". + // Original contents of *compressed are lost. // - // REQUIRES: "input[]" is not an alias of "*compressed". - size_t Compress(const char* input, size_t input_length, - std::string* compressed); - size_t Compress(const char* input, size_t input_length, - TString* compressed); + // REQUIRES: "input[]" is not an alias of "*compressed". + size_t Compress(const char* input, size_t input_length, + std::string* compressed); + size_t Compress(const char* input, size_t input_length, + TString* compressed); // Decompresses "compressed[0,compressed_length-1]" to "*uncompressed". // Original contents of "*uncompressed" are lost. @@ -87,23 +87,23 @@ namespace snappy { // // returns false if the message is corrupted and could not be decompressed bool Uncompress(const char* compressed, size_t compressed_length, - std::string* uncompressed); - bool Uncompress(const char* compressed, size_t compressed_length, - TString* uncompressed); - - // Decompresses "compressed" to "*uncompressed". - // - // returns false if the message is corrupted and could not be decompressed - bool Uncompress(Source* compressed, Sink* uncompressed); - - // This routine uncompresses as much of the "compressed" as possible - // into sink. It returns the number of valid bytes added to sink - // (extra invalid bytes may have been added due to errors; the caller - // should ignore those). The emitted data typically has length - // GetUncompressedLength(), but may be shorter if an error is - // encountered. - size_t UncompressAsMuchAsPossible(Source* compressed, Sink* uncompressed); - + std::string* uncompressed); + bool Uncompress(const char* compressed, size_t compressed_length, + TString* uncompressed); + + // Decompresses "compressed" to "*uncompressed". + // + // returns false if the message is corrupted and could not be decompressed + bool Uncompress(Source* compressed, Sink* uncompressed); + + // This routine uncompresses as much of the "compressed" as possible + // into sink. It returns the number of valid bytes added to sink + // (extra invalid bytes may have been added due to errors; the caller + // should ignore those). The emitted data typically has length + // GetUncompressedLength(), but may be shorter if an error is + // encountered. + size_t UncompressAsMuchAsPossible(Source* compressed, Sink* uncompressed); + // ------------------------------------------------------------------------ // Lower-level character array based routines. May be useful for // efficiency reasons in certain circumstances. @@ -143,28 +143,28 @@ namespace snappy { // returns false if the message is corrupted and could not be decrypted bool RawUncompress(Source* compressed, char* uncompressed); - // Given data in "compressed[0..compressed_length-1]" generated by - // calling the Snappy::Compress routine, this routine - // stores the uncompressed data to the iovec "iov". The number of physical - // buffers in "iov" is given by iov_cnt and their cumulative size - // must be at least GetUncompressedLength(compressed). 
-  // in "iov" must not overlap with each other.
-  //
-  // returns false if the message is corrupted and could not be decompressed
-  bool RawUncompressToIOVec(const char* compressed, size_t compressed_length,
-                            const struct iovec* iov, size_t iov_cnt);
-
-  // Given data from the byte source 'compressed' generated by calling
-  // the Snappy::Compress routine, this routine stores the uncompressed
-  // data to the iovec "iov". The number of physical
-  // buffers in "iov" is given by iov_cnt and their cumulative size
-  // must be at least GetUncompressedLength(compressed). The individual buffers
-  // in "iov" must not overlap with each other.
-  //
-  // returns false if the message is corrupted and could not be decompressed
-  bool RawUncompressToIOVec(Source* compressed, const struct iovec* iov,
-                            size_t iov_cnt);
-
+  // Given data in "compressed[0..compressed_length-1]" generated by
+  // calling the Snappy::Compress routine, this routine
+  // stores the uncompressed data to the iovec "iov". The number of physical
+  // buffers in "iov" is given by iov_cnt and their cumulative size
+  // must be at least GetUncompressedLength(compressed). The individual buffers
+  // in "iov" must not overlap with each other.
+  //
+  // returns false if the message is corrupted and could not be decompressed
+  bool RawUncompressToIOVec(const char* compressed, size_t compressed_length,
+                            const struct iovec* iov, size_t iov_cnt);
+
+  // Given data from the byte source 'compressed' generated by calling
+  // the Snappy::Compress routine, this routine stores the uncompressed
+  // data to the iovec "iov". The number of physical
+  // buffers in "iov" is given by iov_cnt and their cumulative size
+  // must be at least GetUncompressedLength(compressed). The individual buffers
+  // in "iov" must not overlap with each other.
+  //
+  // returns false if the message is corrupted and could not be decompressed
+  bool RawUncompressToIOVec(Source* compressed, const struct iovec* iov,
+                            size_t iov_cnt);
+
   // Returns the maximum size of the compressed representation of
   // input data that is "source_bytes" bytes in length.
   size_t MaxCompressedLength(size_t source_bytes);
@@ -183,31 +183,31 @@ namespace snappy {
   bool IsValidCompressedBuffer(const char* compressed,
                                size_t compressed_length);
 
-  // Returns true iff the contents of "compressed" can be uncompressed
-  // successfully. Does not return the uncompressed data. Takes
-  // time proportional to *compressed length, but is usually at least
-  // a factor of four faster than actual decompression.
-  // On success, consumes all of *compressed. On failure, consumes an
-  // unspecified prefix of *compressed.
-  bool IsValidCompressed(Source* compressed);
-
-  // The size of a compression block. Note that many parts of the compression
-  // code assume that kBlockSize <= 65536; in particular, the hash table
-  // can only store 16-bit offsets, and EmitCopy() also assumes the offset
-  // is 65535 bytes or less. Note also that if you change this, it will
-  // affect the framing format (see framing_format.txt).
+  // Returns true iff the contents of "compressed" can be uncompressed
+  // successfully. Does not return the uncompressed data. Takes
+  // time proportional to *compressed length, but is usually at least
+  // a factor of four faster than actual decompression.
+  // On success, consumes all of *compressed. On failure, consumes an
+  // unspecified prefix of *compressed.
+  bool IsValidCompressed(Source* compressed);
+
+  // The size of a compression block. Note that many parts of the compression
+  // code assume that kBlockSize <= 65536; in particular, the hash table
+  // can only store 16-bit offsets, and EmitCopy() also assumes the offset
+  // is 65535 bytes or less. Note also that if you change this, it will
+  // affect the framing format (see framing_format.txt).
   //
-  // Note that there might be older data around that is compressed with larger
-  // block sizes, so the decompression code should not rely on the
-  // non-existence of long backreferences.
-  static constexpr int kBlockLog = 16;
-  static constexpr size_t kBlockSize = 1 << kBlockLog;
+  // Note that there might be older data around that is compressed with larger
+  // block sizes, so the decompression code should not rely on the
+  // non-existence of long backreferences.
+  static constexpr int kBlockLog = 16;
+  static constexpr size_t kBlockSize = 1 << kBlockLog;
 
-  static constexpr int kMinHashTableBits = 8;
-  static constexpr size_t kMinHashTableSize = 1 << kMinHashTableBits;
+  static constexpr int kMinHashTableBits = 8;
+  static constexpr size_t kMinHashTableSize = 1 << kMinHashTableBits;
 
-  static constexpr int kMaxHashTableBits = 14;
-  static constexpr size_t kMaxHashTableSize = 1 << kMaxHashTableBits;
+  static constexpr int kMaxHashTableBits = 14;
+  static constexpr size_t kMaxHashTableSize = 1 << kMaxHashTableBits;
 
 }  // end namespace snappy
 
-#endif  // THIRD_PARTY_SNAPPY_SNAPPY_H__
+#endif  // THIRD_PARTY_SNAPPY_SNAPPY_H__
diff --git a/contrib/libs/snappy/ya.make b/contrib/libs/snappy/ya.make
index 4d83009b19..472daa0c80 100644
--- a/contrib/libs/snappy/ya.make
+++ b/contrib/libs/snappy/ya.make
@@ -1,15 +1,15 @@
-# Generated by devtools/yamaker from nixpkgs 92c884dfd7140a6c3e6c717cf8990f7a78524331.
-
+# Generated by devtools/yamaker from nixpkgs 92c884dfd7140a6c3e6c717cf8990f7a78524331.
+
 LIBRARY()
 
-OWNER(g:cpp-contrib)
+OWNER(g:cpp-contrib)
 
-VERSION(1.1.8)
+VERSION(1.1.8)
 
 ORIGINAL_SOURCE(https://github.com/google/snappy/archive/1.1.8.tar.gz)
 
-LICENSE(BSD-3-Clause)
-
+LICENSE(BSD-3-Clause)
+
 LICENSE_TEXTS(.yandex_meta/licenses.list.txt)
 
 ADDINCL(
@@ -24,9 +24,9 @@ CFLAGS(
 SRCS(
     snappy-c.cc
-    snappy-sinksource.cc
+    snappy-sinksource.cc
     snappy-stubs-internal.cc
-    snappy.cc
+    snappy.cc
 )
 
 END()
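
For reference, the string-based routines declared in snappy.h above compose into a simple round trip. The sketch below is illustrative only: the file name and sample payload are invented, and the include path is an assumption that depends on how the library is vendored into a build. The four calls themselves (Compress, MaxCompressedLength, IsValidCompressedBuffer, Uncompress) are exactly the ones documented in the header diff.

// snappy_roundtrip_example.cc -- illustrative sketch, not part of this commit.
// Exercises the string-based API declared in snappy.h above.
#include <cassert>
#include <iostream>
#include <string>

#include "snappy.h"  // include path assumed; adjust to your build layout

int main() {
  // Invented sample payload; long runs compress well under Snappy.
  const std::string input(4096, 'a');

  // Compress() fills *compressed and returns the number of bytes written,
  // which is bounded by MaxCompressedLength(input.size()).
  std::string compressed;
  const size_t written =
      snappy::Compress(input.data(), input.size(), &compressed);
  assert(written == compressed.size());
  assert(compressed.size() <= snappy::MaxCompressedLength(input.size()));

  // Integrity check without producing any output; per the header comments,
  // validation is much cheaper than a full decompression.
  if (!snappy::IsValidCompressedBuffer(compressed.data(), compressed.size())) {
    std::cerr << "corrupted stream\n";
    return 1;
  }

  // Uncompress() returns false if the message is corrupted and could not
  // be decompressed.
  std::string restored;
  if (!snappy::Uncompress(compressed.data(), compressed.size(), &restored)) {
    std::cerr << "decompression failed\n";
    return 1;
  }
  assert(restored == input);
  std::cout << input.size() << " -> " << compressed.size() << " bytes\n";
  return 0;
}

Within this repository the TString overloads shown in the diff work the same way, taking Arcadia's TString* in place of std::string*.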