path: root/contrib/libs/t1ha
author: thegeorg <thegeorg@yandex-team.ru> 2022-02-10 16:45:12 +0300
committer: Daniil Cherednik <dcherednik@yandex-team.ru> 2022-02-10 16:45:12 +0300
commit: 49116032d905455a7b1c994e4a696afc885c1e71 (patch)
tree: be835aa92c6248212e705f25388ebafcf84bc7a1 /contrib/libs/t1ha
parent: 4e839db24a3bbc9f1c610c43d6faaaa99824dcca (diff)
download: ydb-49116032d905455a7b1c994e4a696afc885c1e71.tar.gz
Restoring authorship annotation for <thegeorg@yandex-team.ru>. Commit 2 of 2.
Diffstat (limited to 'contrib/libs/t1ha')
-rw-r--r--  contrib/libs/t1ha/LICENSE                     2
-rw-r--r--  contrib/libs/t1ha/README.md                 228
-rw-r--r--  contrib/libs/t1ha/src/t1ha0.c                18
-rw-r--r--  contrib/libs/t1ha/src/t1ha0_ia32aes_a.h       6
-rw-r--r--  contrib/libs/t1ha/src/t1ha0_ia32aes_b.h       6
-rw-r--r--  contrib/libs/t1ha/src/t1ha0_selfcheck.c       6
-rw-r--r--  contrib/libs/t1ha/src/t1ha1.c                 6
-rw-r--r--  contrib/libs/t1ha/src/t1ha1_selfcheck.c       6
-rw-r--r--  contrib/libs/t1ha/src/t1ha2.c               114
-rw-r--r--  contrib/libs/t1ha/src/t1ha2_selfcheck.c       6
-rw-r--r--  contrib/libs/t1ha/src/t1ha_bits.h           172
-rw-r--r--  contrib/libs/t1ha/src/t1ha_selfcheck.c        6
-rw-r--r--  contrib/libs/t1ha/src/t1ha_selfcheck.h        6
-rw-r--r--  contrib/libs/t1ha/src/t1ha_selfcheck_all.c    6
-rw-r--r--  contrib/libs/t1ha/t1ha.h                     38
-rw-r--r--  contrib/libs/t1ha/ya.make                    18
16 files changed, 322 insertions, 322 deletions
diff --git a/contrib/libs/t1ha/LICENSE b/contrib/libs/t1ha/LICENSE
index 90e82bda8d..c198acc89c 100644
--- a/contrib/libs/t1ha/LICENSE
+++ b/contrib/libs/t1ha/LICENSE
@@ -1,6 +1,6 @@
zlib License, see https://en.wikipedia.org/wiki/Zlib_License
- Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
+ Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
Fast Positive Hash.
Portions Copyright (c) 2010-2013 Leonid Yuriev <leo@yuriev.ru>,
diff --git a/contrib/libs/t1ha/README.md b/contrib/libs/t1ha/README.md
index cad869b925..13c3d82f6a 100644
--- a/contrib/libs/t1ha/README.md
+++ b/contrib/libs/t1ha/README.md
@@ -1,82 +1,82 @@
-<!-- Required extensions: pymdownx.betterem, pymdownx.tilde, pymdownx.emoji, pymdownx.tasklist, pymdownx.superfences -->
-
+<!-- Required extensions: pymdownx.betterem, pymdownx.tilde, pymdownx.emoji, pymdownx.tasklist, pymdownx.superfences -->
+
t1ha
-=====
+=====
Fast Positive Hash, aka "Позитивный Хэш"
by [Positive Technologies](https://www.ptsecurity.com).
Included in the [Awesome C](https://github.com/kozross/awesome-c) list of open source C software.
-*The Future will (be) [Positive](https://www.ptsecurity.com). Всё будет хорошо.*
-
+*The Future will (be) [Positive](https://www.ptsecurity.com). Всё будет хорошо.*
+
[![License: Zlib](https://img.shields.io/badge/License-Zlib-lightgrey.svg)](https://opensource.org/licenses/Zlib)
-[![Build Status](https://travis-ci.org/erthink/t1ha.svg?branch=master)](https://travis-ci.org/erthink/t1ha)
+[![Build Status](https://travis-ci.org/erthink/t1ha.svg?branch=master)](https://travis-ci.org/erthink/t1ha)
[![Build status](https://ci.appveyor.com/api/projects/status/ptug5fl2ouxdo68h/branch/master?svg=true)](https://ci.appveyor.com/project/leo-yuriev/t1ha/branch/master)
-[![CircleCI](https://circleci.com/gh/erthink/t1ha/tree/master.svg?style=svg)](https://circleci.com/gh/erthink/t1ha/tree/master)
+[![CircleCI](https://circleci.com/gh/erthink/t1ha/tree/master.svg?style=svg)](https://circleci.com/gh/erthink/t1ha/tree/master)
[![Coverity Scan Status](https://scan.coverity.com/projects/12918/badge.svg)](https://scan.coverity.com/projects/leo-yuriev-t1ha)
-## Briefly, it is a portable non-cryptographic 64-bit hash function:
-1. Intended for 64-bit little-endian platforms, predominantly for Elbrus and x86_64,
-but portable, so it can run without penalties on any 64-bit CPU.
-
-2. In most cases up to 15% faster than
-[xxHash](https://cyan4973.github.io/xxHash/),
-[StadtX](https://github.com/demerphq/BeagleHash/blob/master/stadtx_hash.h),
-[MUM](https://github.com/vnmakarov/mum-hash) and other portable
-hash functions (which do not use hardware-specific tricks).
-
- Currently [wyhash](https://github.com/wangyi-fudan/wyhash)
- outperforms _t1ha_ on `x86_64`. However, **the next version `t1ha3_atonce()` will be even
- faster** on all platforms, especially on
- [E2K](https://en.wikipedia.org/wiki/Elbrus_2000), architectures with
- [SIMD](https://en.wikipedia.org/wiki/SIMD) and most
- [RISC-V](https://en.wikipedia.org/wiki/RISC-V) implementations.
- In addition, it should be noted that _wyhash_ has a "blinding multiplication"
- flaw and can lose entropy (similarly to what is described below):
- for instance, when the data is correlated with, or equal to, the `seed ^ _wypN` values,
- or when one of the `_wymum()` multipliers becomes zero. As a result of such
- blinding, none of the preceding data influences the hash value.
-
-3. Licensed under [zlib License](https://en.wikipedia.org/wiki/Zlib_License).
-
+## Briefly, it is a portable non-cryptographic 64-bit hash function:
+1. Intended for 64-bit little-endian platforms, predominantly for Elbrus and x86_64,
+but portable, so it can run without penalties on any 64-bit CPU.
+
+2. In most cases up to 15% faster than
+[xxHash](https://cyan4973.github.io/xxHash/),
+[StadtX](https://github.com/demerphq/BeagleHash/blob/master/stadtx_hash.h),
+[MUM](https://github.com/vnmakarov/mum-hash) and other portable
+hash functions (which do not use hardware-specific tricks).
+
+ Currently [wyhash](https://github.com/wangyi-fudan/wyhash)
+ outperforms _t1ha_ on `x86_64`. However, **the next version `t1ha3_atonce()` will be even
+ faster** on all platforms, especially on
+ [E2K](https://en.wikipedia.org/wiki/Elbrus_2000), architectures with
+ [SIMD](https://en.wikipedia.org/wiki/SIMD) and most
+ [RISC-V](https://en.wikipedia.org/wiki/RISC-V) implementations.
+ In addition, it should be noted that _wyhash_ has a "blinding multiplication"
+ flaw and can lose entropy (similarly to what is described below):
+ for instance, when the data is correlated with, or equal to, the `seed ^ _wypN` values,
+ or when one of the `_wymum()` multipliers becomes zero. As a result of such
+ blinding, none of the preceding data influences the hash value.
+
+3. Licensed under [zlib License](https://en.wikipedia.org/wiki/Zlib_License).
+
Also pay attention to [Rust](https://github.com/flier/rust-t1ha),
-[Erlang](https://github.com/lemenkov/erlang-t1ha) and
-[Golang](https://github.com/dgryski/go-t1ha) implementations.
-
-### FAQ: Why doesn't _t1ha_ follow the [NH](https://en.wikipedia.org/wiki/UMAC) approach like [FARSH](https://github.com/Bulat-Ziganshin/FARSH), [XXH3](https://fastcompression.blogspot.com/2019/03/presenting-xxh3.html), HighwayHash and so on?
-
-Okay, just for clarity, we should distinguish function families:
-**_MMH_** (_Multilinear-Modular-Hashing_),
-[**_NMH_**](https://link.springer.com/content/pdf/10.1007/BFb0052345.pdf)
-(_Non-linear Modular-Hashing_) and the next simplification step UMAC's
-[**_NH_**](https://web.archive.org/web/20120310090322/http://www.cs.ucdavis.edu/~rogaway/papers/umac-full.pdf).
-
-Now take a look at the NH hash-function family definition: ![Wikipedia](
-https://wikimedia.org/api/rest_v1/media/math/render/svg/3cafee01ea2f26664503b6725fe859ed5f07b9a3)
-
-It is very SIMD-friendly, since SSE2's `_mm_add_epi32()` and
-`_mm_mul_epu32()` are enough for ![_W =
-32_](https://wikimedia.org/api/rest_v1/media/math/render/svg/8c609e2684eb709b260154fb505321e417037009).
-On the other hand, the result of the inner multiplication becomes zero
-when **_(m[2i] + k[2i]) mod 2^32 == 0_** or **_(m[2i+1] + k[2i+1]) mod
-2^32 == 0_**, in which case the opposite multiplier will not affect the
-result of hashing, i.e. NH function just ignores part of the input data.
-I called this a "blinding multiplication". That's all.
-More useful related information can be found by googling "[UMAC NH key
-recovery
-attack](https://www.google.com/search?q=umac+nh+key+recovery+attack)".
-
-The right NMH/NH code without entropy loss should look like this:
-```c
- uint64_t proper_NH_block(const uint32_t *M /* message data */,
-                          const uint64_t *K /* 64-bit primes */,
-                          size_t N_even, uint64_t optional_weak_seed) {
-   uint64_t H = optional_weak_seed;
-   for (size_t i = 0; i < N_even / 2; ++i)
-     H += ((uint64_t)M[i * 2] + K[i * 2]) *
-          ((uint64_t)M[i * 2 + 1] + K[i * 2 + 1]);
-   return H;
- }
-```
-
+[Erlang](https://github.com/lemenkov/erlang-t1ha) and
+[Golang](https://github.com/dgryski/go-t1ha) implementations.
+
+### FAQ: Why doesn't _t1ha_ follow the [NH](https://en.wikipedia.org/wiki/UMAC) approach like [FARSH](https://github.com/Bulat-Ziganshin/FARSH), [XXH3](https://fastcompression.blogspot.com/2019/03/presenting-xxh3.html), HighwayHash and so on?
+
+Okay, just for clarity, we should distinguish function families:
+**_MMH_** (_Multilinear-Modular-Hashing_),
+[**_NMH_**](https://link.springer.com/content/pdf/10.1007/BFb0052345.pdf)
+(_Non-linear Modular-Hashing_) and the next simplification step UMAC's
+[**_NH_**](https://web.archive.org/web/20120310090322/http://www.cs.ucdavis.edu/~rogaway/papers/umac-full.pdf).
+
+Now take a look at the NH hash-function family definition: ![Wikipedia](
+https://wikimedia.org/api/rest_v1/media/math/render/svg/3cafee01ea2f26664503b6725fe859ed5f07b9a3)
+
+It is very SIMD-friendly, since SSE2's `_mm_add_epi32()` and
+`_mm_mul_epu32()` are enough for ![_W =
+32_](https://wikimedia.org/api/rest_v1/media/math/render/svg/8c609e2684eb709b260154fb505321e417037009).
+On the other hand, the result of the inner multiplication becomes zero
+when **_(m[2i] + k[2i]) mod 2^32 == 0_** or **_(m[2i+1] + k[2i+1]) mod
+2^32 == 0_**, in which case the opposite multiplier will not affect the
+result of hashing, i.e. NH function just ignores part of the input data.
+I called this a "blinding multiplication". That's all.
+More useful related information can be found by googling "[UMAC NH key
+recovery
+attack](https://www.google.com/search?q=umac+nh+key+recovery+attack)".
+
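The SIMD-friendliness mentioned above can be sketched as a single SSE2 step of the (32-bit-wrapping) NH inner loop. This is my own illustration, not code from t1ha; the helper name `NH_step_sse2` is mine:

```c
#include <emmintrin.h> /* SSE2 intrinsics */
#include <stdint.h>

/* One SIMD step of the NH inner loop: add four key words to four message
 * words (mod 2^32), then multiply the even/odd pairs into two 64-bit
 * products and accumulate them. Note _mm_mul_epu32() multiplies only
 * lanes 0 and 2, so the odd lanes are first shuffled into even positions. */
static __m128i NH_step_sse2(__m128i acc, __m128i m, __m128i k) {
  const __m128i sum = _mm_add_epi32(m, k); /* (m[i] + k[i]) mod 2^32 */
  /* Broadcast lanes 1 and 3 into the even positions. */
  const __m128i odd = _mm_shuffle_epi32(sum, _MM_SHUFFLE(3, 3, 1, 1));
  return _mm_add_epi64(acc, _mm_mul_epu32(sum, odd));
}
```

Each call folds (m0+k0)·(m1+k1) and (m2+k2)·(m3+k3) into the two 64-bit halves of the accumulator.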
+The right NMH/NH code without entropy loss should look like this:
+```c
+ uint64_t proper_NH_block(const uint32_t *M /* message data */,
+                          const uint64_t *K /* 64-bit primes */,
+                          size_t N_even, uint64_t optional_weak_seed) {
+   uint64_t H = optional_weak_seed;
+   for (size_t i = 0; i < N_even / 2; ++i)
+     H += ((uint64_t)M[i * 2] + K[i * 2]) *
+          ((uint64_t)M[i * 2 + 1] + K[i * 2 + 1]);
+   return H;
+ }
+```
+
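To make the "blinding multiplication" concrete, here is a minimal sketch (function name and key values are mine, chosen for illustration) of a naive NH variant whose per-pair sums wrap at 32 bits, so one lane can erase its partner:

```c
#include <stdint.h>
#include <stdio.h>

/* Naive NH WITHOUT widening the additions: each (m + k) wraps at 32 bits,
 * so a pair whose sum is 0 mod 2^32 zeroes the whole product and "blinds"
 * the partner word, which then has no influence on the hash. */
static uint64_t naive_NH_block(const uint32_t *M, const uint32_t *K,
                               size_t N_even) {
  uint64_t H = 0;
  for (size_t i = 0; i < N_even / 2; ++i)
    H += (uint64_t)(uint32_t)(M[i * 2] + K[i * 2]) *
         (uint32_t)(M[i * 2 + 1] + K[i * 2 + 1]);
  return H;
}

static const uint32_t K[2] = {0xDEADBEEFu, 0xC0FFEE11u};
/* The first word of both messages is chosen so M[0] + K[0] == 0 mod 2^32. */
static const uint32_t A[2] = {0u - 0xDEADBEEFu, 0x11111111u};
static const uint32_t B[2] = {0u - 0xDEADBEEFu, 0x22222222u};
```

With these inputs `naive_NH_block(A, K, 2)` and `naive_NH_block(B, K, 2)` collide (both are zero), even though the second words differ; the 64-bit-widening `proper_NH_block` above avoids exactly this wrap.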
********************************************************************************
# Usage
@@ -204,12 +204,12 @@ for _The 1Hippeus project - zerocopy messaging in the spirit of Sparta!_
The current version of the t1ha library includes a tool for basic testing and benchmarking.
Just try `make check` from t1ha directory.
-For comparison, the benchmark also includes the `wyhash`, `xxHash`, `StadtX` and
-`HighwayHash` functions. For example, actual results for an `Intel(R)
-Core(TM) i7-4600U CPU`:
+For comparison, the benchmark also includes the `wyhash`, `xxHash`, `StadtX` and
+`HighwayHash` functions. For example, actual results for an `Intel(R)
+Core(TM) i7-4600U CPU`:
```
-$ make all && sudo make check
-Build by GNU C/C++ compiler 9.3 (self-check passed)
+$ make all && sudo make check
+Build by GNU C/C++ compiler 9.3 (self-check passed)
Testing t1ha2_atonce... Ok
Testing t1ha2_atonce128... Ok
Testing t1ha2_stream... Ok
@@ -226,49 +226,49 @@ Testing HighwayHash64_portable_cxx... Ok
Testing HighwayHash64_sse41... Ok
Testing HighwayHash64_avx2... Ok
Testing StadtX... Ok
-Testing wyhash_v7... Ok
+Testing wyhash_v7... Ok
Preparing to benchmarking...
- - running on CPU#0
- - use RDPMC_40000001 as clock source for benchmarking
+ - running on CPU#0
+ - use RDPMC_40000001 as clock source for benchmarking
- assume it cheap and stable
- - measure granularity and overhead: 54 cycles, 0.0185185 iteration/cycle
+ - measure granularity and overhead: 54 cycles, 0.0185185 iteration/cycle
Bench for tiny keys (7 bytes):
-t1ha2_atonce : 17.250 cycle/hash, 2.464 cycle/byte, 0.406 byte/cycle, 1.217 GiB/s @3GHz
-t1ha2_atonce128* : 33.281 cycle/hash, 4.754 cycle/byte, 0.210 byte/cycle, 0.631 GiB/s @3GHz
-t1ha2_stream* : 77.500 cycle/hash, 11.071 cycle/byte, 0.090 byte/cycle, 0.271 GiB/s @3GHz
-t1ha2_stream128* : 99.125 cycle/hash, 14.161 cycle/byte, 0.071 byte/cycle, 0.212 GiB/s @3GHz
-t1ha1_64le : 18.219 cycle/hash, 2.603 cycle/byte, 0.384 byte/cycle, 1.153 GiB/s @3GHz
-t1ha0 : 15.102 cycle/hash, 2.157 cycle/byte, 0.464 byte/cycle, 1.391 GiB/s @3GHz
-xxhash32 : 16.545 cycle/hash, 2.364 cycle/byte, 0.423 byte/cycle, 1.269 GiB/s @3GHz
-xxhash64 : 27.203 cycle/hash, 3.886 cycle/byte, 0.257 byte/cycle, 0.772 GiB/s @3GHz
-xxh3_64 : 15.102 cycle/hash, 2.157 cycle/byte, 0.464 byte/cycle, 1.391 GiB/s @3GHz
-xxh3_128 : 18.219 cycle/hash, 2.603 cycle/byte, 0.384 byte/cycle, 1.153 GiB/s @3GHz
-StadtX : 20.203 cycle/hash, 2.886 cycle/byte, 0.346 byte/cycle, 1.039 GiB/s @3GHz
-HighwayHash64_pure_c : 607.000 cycle/hash, 86.714 cycle/byte, 0.012 byte/cycle, 0.035 GiB/s @3GHz
-HighwayHash64_portable: 513.000 cycle/hash, 73.286 cycle/byte, 0.014 byte/cycle, 0.041 GiB/s @3GHz
-HighwayHash64_sse41 : 69.438 cycle/hash, 9.920 cycle/byte, 0.101 byte/cycle, 0.302 GiB/s @3GHz
-HighwayHash64_avx2 : 54.875 cycle/hash, 7.839 cycle/byte, 0.128 byte/cycle, 0.383 GiB/s @3GHz
-wyhash_v7 : 14.102 cycle/hash, 2.015 cycle/byte, 0.496 byte/cycle, 1.489 GiB/s @3GHz
+t1ha2_atonce : 17.250 cycle/hash, 2.464 cycle/byte, 0.406 byte/cycle, 1.217 GiB/s @3GHz
+t1ha2_atonce128* : 33.281 cycle/hash, 4.754 cycle/byte, 0.210 byte/cycle, 0.631 GiB/s @3GHz
+t1ha2_stream* : 77.500 cycle/hash, 11.071 cycle/byte, 0.090 byte/cycle, 0.271 GiB/s @3GHz
+t1ha2_stream128* : 99.125 cycle/hash, 14.161 cycle/byte, 0.071 byte/cycle, 0.212 GiB/s @3GHz
+t1ha1_64le : 18.219 cycle/hash, 2.603 cycle/byte, 0.384 byte/cycle, 1.153 GiB/s @3GHz
+t1ha0 : 15.102 cycle/hash, 2.157 cycle/byte, 0.464 byte/cycle, 1.391 GiB/s @3GHz
+xxhash32 : 16.545 cycle/hash, 2.364 cycle/byte, 0.423 byte/cycle, 1.269 GiB/s @3GHz
+xxhash64 : 27.203 cycle/hash, 3.886 cycle/byte, 0.257 byte/cycle, 0.772 GiB/s @3GHz
+xxh3_64 : 15.102 cycle/hash, 2.157 cycle/byte, 0.464 byte/cycle, 1.391 GiB/s @3GHz
+xxh3_128 : 18.219 cycle/hash, 2.603 cycle/byte, 0.384 byte/cycle, 1.153 GiB/s @3GHz
+StadtX : 20.203 cycle/hash, 2.886 cycle/byte, 0.346 byte/cycle, 1.039 GiB/s @3GHz
+HighwayHash64_pure_c : 607.000 cycle/hash, 86.714 cycle/byte, 0.012 byte/cycle, 0.035 GiB/s @3GHz
+HighwayHash64_portable: 513.000 cycle/hash, 73.286 cycle/byte, 0.014 byte/cycle, 0.041 GiB/s @3GHz
+HighwayHash64_sse41 : 69.438 cycle/hash, 9.920 cycle/byte, 0.101 byte/cycle, 0.302 GiB/s @3GHz
+HighwayHash64_avx2 : 54.875 cycle/hash, 7.839 cycle/byte, 0.128 byte/cycle, 0.383 GiB/s @3GHz
+wyhash_v7 : 14.102 cycle/hash, 2.015 cycle/byte, 0.496 byte/cycle, 1.489 GiB/s @3GHz
Bench for large keys (16384 bytes):
-t1ha2_atonce : 3493.000 cycle/hash, 0.213 cycle/byte, 4.691 byte/cycle, 14.072 GiB/s @3GHz
-t1ha2_atonce128* : 3664.000 cycle/hash, 0.224 cycle/byte, 4.472 byte/cycle, 13.415 GiB/s @3GHz
-t1ha2_stream* : 3684.000 cycle/hash, 0.225 cycle/byte, 4.447 byte/cycle, 13.342 GiB/s @3GHz
-t1ha2_stream128* : 3709.239 cycle/hash, 0.226 cycle/byte, 4.417 byte/cycle, 13.251 GiB/s @3GHz
-t1ha1_64le : 3644.000 cycle/hash, 0.222 cycle/byte, 4.496 byte/cycle, 13.488 GiB/s @3GHz
-t1ha0 : 1289.000 cycle/hash, 0.079 cycle/byte, 12.711 byte/cycle, 38.132 GiB/s @3GHz
-xxhash32 : 8198.000 cycle/hash, 0.500 cycle/byte, 1.999 byte/cycle, 5.996 GiB/s @3GHz
-xxhash64 : 4126.750 cycle/hash, 0.252 cycle/byte, 3.970 byte/cycle, 11.911 GiB/s @3GHz
-xxh3_64 : 4929.000 cycle/hash, 0.301 cycle/byte, 3.324 byte/cycle, 9.972 GiB/s @3GHz
-xxh3_128 : 4887.536 cycle/hash, 0.298 cycle/byte, 3.352 byte/cycle, 10.057 GiB/s @3GHz
-StadtX : 3667.000 cycle/hash, 0.224 cycle/byte, 4.468 byte/cycle, 13.404 GiB/s @3GHz
-HighwayHash64_pure_c : 55294.000 cycle/hash, 3.375 cycle/byte, 0.296 byte/cycle, 0.889 GiB/s @3GHz
-HighwayHash64_portable: 44982.321 cycle/hash, 2.746 cycle/byte, 0.364 byte/cycle, 1.093 GiB/s @3GHz
-HighwayHash64_sse41 : 7041.000 cycle/hash, 0.430 cycle/byte, 2.327 byte/cycle, 6.981 GiB/s @3GHz
-HighwayHash64_avx2 : 4542.000 cycle/hash, 0.277 cycle/byte, 3.607 byte/cycle, 10.822 GiB/s @3GHz
-wyhash_v7 : 3383.000 cycle/hash, 0.206 cycle/byte, 4.843 byte/cycle, 14.529 GiB/s @3GHz
+t1ha2_atonce : 3493.000 cycle/hash, 0.213 cycle/byte, 4.691 byte/cycle, 14.072 GiB/s @3GHz
+t1ha2_atonce128* : 3664.000 cycle/hash, 0.224 cycle/byte, 4.472 byte/cycle, 13.415 GiB/s @3GHz
+t1ha2_stream* : 3684.000 cycle/hash, 0.225 cycle/byte, 4.447 byte/cycle, 13.342 GiB/s @3GHz
+t1ha2_stream128* : 3709.239 cycle/hash, 0.226 cycle/byte, 4.417 byte/cycle, 13.251 GiB/s @3GHz
+t1ha1_64le : 3644.000 cycle/hash, 0.222 cycle/byte, 4.496 byte/cycle, 13.488 GiB/s @3GHz
+t1ha0 : 1289.000 cycle/hash, 0.079 cycle/byte, 12.711 byte/cycle, 38.132 GiB/s @3GHz
+xxhash32 : 8198.000 cycle/hash, 0.500 cycle/byte, 1.999 byte/cycle, 5.996 GiB/s @3GHz
+xxhash64 : 4126.750 cycle/hash, 0.252 cycle/byte, 3.970 byte/cycle, 11.911 GiB/s @3GHz
+xxh3_64 : 4929.000 cycle/hash, 0.301 cycle/byte, 3.324 byte/cycle, 9.972 GiB/s @3GHz
+xxh3_128 : 4887.536 cycle/hash, 0.298 cycle/byte, 3.352 byte/cycle, 10.057 GiB/s @3GHz
+StadtX : 3667.000 cycle/hash, 0.224 cycle/byte, 4.468 byte/cycle, 13.404 GiB/s @3GHz
+HighwayHash64_pure_c : 55294.000 cycle/hash, 3.375 cycle/byte, 0.296 byte/cycle, 0.889 GiB/s @3GHz
+HighwayHash64_portable: 44982.321 cycle/hash, 2.746 cycle/byte, 0.364 byte/cycle, 1.093 GiB/s @3GHz
+HighwayHash64_sse41 : 7041.000 cycle/hash, 0.430 cycle/byte, 2.327 byte/cycle, 6.981 GiB/s @3GHz
+HighwayHash64_avx2 : 4542.000 cycle/hash, 0.277 cycle/byte, 3.607 byte/cycle, 10.822 GiB/s @3GHz
+wyhash_v7 : 3383.000 cycle/hash, 0.206 cycle/byte, 4.843 byte/cycle, 14.529 GiB/s @3GHz
```
The `test` tool supports a set of command-line options for selecting the functions and key sizes to benchmark.
@@ -471,7 +471,7 @@ sha1_32a | 531.44 | 1222.44 |
MurmurOAAT | 465.12 | 107.61 | poor (collisions, 99.99% distrib)
md5_32a | 433.03 | 508.98 |
crc32 | 342.27 | 142.06 | poor (insecure, 8589.93x collisions, distrib)
-
------
-
-#### This is a mirror of the original repository, which was moved to [abf.io](https://abf.io/erthink/) because of discriminatory restrictions for Russian Crimea.
+
+-----
+
+#### This is a mirror of the original repository, which was moved to [abf.io](https://abf.io/erthink/) because of discriminatory restrictions for Russian Crimea.
diff --git a/contrib/libs/t1ha/src/t1ha0.c b/contrib/libs/t1ha/src/t1ha0.c
index 295057f828..bde71299cb 100644
--- a/contrib/libs/t1ha/src/t1ha0.c
+++ b/contrib/libs/t1ha/src/t1ha0.c
@@ -1,8 +1,8 @@
/*
- * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
+ * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
* Fast Positive Hash.
*
- * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
+ * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
* The 1Hippeus project (t1h).
*
* This software is provided 'as-is', without any express or implied
@@ -34,7 +34,7 @@
* hardware tricks).
* 3. Not suitable for cryptography.
*
- * The Future will (be) Positive. Всё будет хорошо.
+ * The Future will (be) Positive. Всё будет хорошо.
*
* ACKNOWLEDGEMENT:
* The t1ha was originally developed by Leonid Yuriev (Леонид Юрьев)
@@ -430,19 +430,19 @@ __cold t1ha0_function_t t1ha0_resolve(void) {
* For more info please see
* https://en.wikipedia.org/wiki/Executable_and_Linkable_Format
* and https://sourceware.org/glibc/wiki/GNU_IFUNC */
-#if __has_attribute(__ifunc__)
+#if __has_attribute(__ifunc__)
uint64_t t1ha0(const void *data, size_t len, uint64_t seed)
- __attribute__((__ifunc__("t1ha0_resolve")));
+ __attribute__((__ifunc__("t1ha0_resolve")));
#else
__asm("\t.globl\tt1ha0\n\t.type\tt1ha0, "
"%gnu_indirect_function\n\t.set\tt1ha0,t1ha0_resolve");
-#endif /* __has_attribute(__ifunc__) */
+#endif /* __has_attribute(__ifunc__) */
-#elif __GNUC_PREREQ(4, 0) || __has_attribute(__constructor__)
+#elif __GNUC_PREREQ(4, 0) || __has_attribute(__constructor__)
-uint64_t (*t1ha0_funcptr)(const void *, size_t, uint64_t);
+uint64_t (*t1ha0_funcptr)(const void *, size_t, uint64_t);
-static __cold void __attribute__((__constructor__)) t1ha0_init(void) {
+static __cold void __attribute__((__constructor__)) t1ha0_init(void) {
t1ha0_funcptr = t1ha0_resolve();
}
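The constructor-based fallback above can be sketched standalone as follows; this is a simplified illustration with a stand-in hash body (names and the trivial `seed ^ len` formula are mine, not t1ha's):

```c
#include <stddef.h>
#include <stdint.h>

static uint64_t hash_portable(const void *data, size_t len, uint64_t seed) {
  (void)data;
  return seed ^ len; /* stand-in for a real hash implementation */
}

/* When __ifunc__ is unavailable, dispatch goes through a function pointer
 * that a GCC/Clang __constructor__ fills in before main() runs. */
static uint64_t (*hash_funcptr)(const void *, size_t, uint64_t);

static void __attribute__((__constructor__)) hash_init(void) {
  hash_funcptr = hash_portable; /* a real resolver would check CPU features */
}
```

Callers then invoke `hash_funcptr(data, len, seed)` directly; the pointer is guaranteed to be set before any user code executes.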
diff --git a/contrib/libs/t1ha/src/t1ha0_ia32aes_a.h b/contrib/libs/t1ha/src/t1ha0_ia32aes_a.h
index f4f802679a..a2372d5201 100644
--- a/contrib/libs/t1ha/src/t1ha0_ia32aes_a.h
+++ b/contrib/libs/t1ha/src/t1ha0_ia32aes_a.h
@@ -1,8 +1,8 @@
/*
- * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
+ * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
* Fast Positive Hash.
*
- * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
+ * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
* The 1Hippeus project (t1h).
*
* This software is provided 'as-is', without any express or implied
@@ -34,7 +34,7 @@
* hardware tricks).
* 3. Not suitable for cryptography.
*
- * The Future will (be) Positive. Всё будет хорошо.
+ * The Future will (be) Positive. Всё будет хорошо.
*
* ACKNOWLEDGEMENT:
* The t1ha was originally developed by Leonid Yuriev (Леонид Юрьев)
diff --git a/contrib/libs/t1ha/src/t1ha0_ia32aes_b.h b/contrib/libs/t1ha/src/t1ha0_ia32aes_b.h
index c79a3a247d..f8759dde82 100644
--- a/contrib/libs/t1ha/src/t1ha0_ia32aes_b.h
+++ b/contrib/libs/t1ha/src/t1ha0_ia32aes_b.h
@@ -1,8 +1,8 @@
/*
- * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
+ * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
* Fast Positive Hash.
*
- * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
+ * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
* The 1Hippeus project (t1h).
*
* This software is provided 'as-is', without any express or implied
@@ -34,7 +34,7 @@
* hardware tricks).
* 3. Not suitable for cryptography.
*
- * The Future will (be) Positive. Всё будет хорошо.
+ * The Future will (be) Positive. Всё будет хорошо.
*
* ACKNOWLEDGEMENT:
* The t1ha was originally developed by Leonid Yuriev (Леонид Юрьев)
diff --git a/contrib/libs/t1ha/src/t1ha0_selfcheck.c b/contrib/libs/t1ha/src/t1ha0_selfcheck.c
index 7996bb7492..d3c8e9a3fd 100644
--- a/contrib/libs/t1ha/src/t1ha0_selfcheck.c
+++ b/contrib/libs/t1ha/src/t1ha0_selfcheck.c
@@ -1,8 +1,8 @@
/*
- * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
+ * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
* Fast Positive Hash.
*
- * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
+ * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
* The 1Hippeus project (t1h).
*
* This software is provided 'as-is', without any express or implied
@@ -34,7 +34,7 @@
* hardware tricks).
* 3. Not suitable for cryptography.
*
- * The Future will (be) Positive. Всё будет хорошо.
+ * The Future will (be) Positive. Всё будет хорошо.
*
* ACKNOWLEDGEMENT:
* The t1ha was originally developed by Leonid Yuriev (Леонид Юрьев)
diff --git a/contrib/libs/t1ha/src/t1ha1.c b/contrib/libs/t1ha/src/t1ha1.c
index c769151490..da6899c221 100644
--- a/contrib/libs/t1ha/src/t1ha1.c
+++ b/contrib/libs/t1ha/src/t1ha1.c
@@ -1,8 +1,8 @@
/*
- * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
+ * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
* Fast Positive Hash.
*
- * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
+ * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
* The 1Hippeus project (t1h).
*
* This software is provided 'as-is', without any express or implied
@@ -34,7 +34,7 @@
* hardware tricks).
* 3. Not suitable for cryptography.
*
- * The Future will (be) Positive. Всё будет хорошо.
+ * The Future will (be) Positive. Всё будет хорошо.
*
* ACKNOWLEDGEMENT:
* The t1ha was originally developed by Leonid Yuriev (Леонид Юрьев)
diff --git a/contrib/libs/t1ha/src/t1ha1_selfcheck.c b/contrib/libs/t1ha/src/t1ha1_selfcheck.c
index 10f4cb562c..5cf49632ed 100644
--- a/contrib/libs/t1ha/src/t1ha1_selfcheck.c
+++ b/contrib/libs/t1ha/src/t1ha1_selfcheck.c
@@ -1,8 +1,8 @@
/*
- * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
+ * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
* Fast Positive Hash.
*
- * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
+ * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
* The 1Hippeus project (t1h).
*
* This software is provided 'as-is', without any express or implied
@@ -34,7 +34,7 @@
* hardware tricks).
* 3. Not suitable for cryptography.
*
- * The Future will (be) Positive. Всё будет хорошо.
+ * The Future will (be) Positive. Всё будет хорошо.
*
* ACKNOWLEDGEMENT:
* The t1ha was originally developed by Leonid Yuriev (Леонид Юрьев)
diff --git a/contrib/libs/t1ha/src/t1ha2.c b/contrib/libs/t1ha/src/t1ha2.c
index 157ff89de7..009f922751 100644
--- a/contrib/libs/t1ha/src/t1ha2.c
+++ b/contrib/libs/t1ha/src/t1ha2.c
@@ -1,8 +1,8 @@
/*
- * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
+ * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
* Fast Positive Hash.
*
- * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
+ * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
* The 1Hippeus project (t1h).
*
* This software is provided 'as-is', without any express or implied
@@ -34,7 +34,7 @@
* hardware tricks).
* 3. Not suitable for cryptography.
*
- * The Future will (be) Positive. Всё будет хорошо.
+ * The Future will (be) Positive. Всё будет хорошо.
*
* ACKNOWLEDGEMENT:
* The t1ha was originally developed by Leonid Yuriev (Леонид Юрьев)
@@ -206,12 +206,12 @@ uint64_t t1ha2_atonce(const void *data, size_t length, uint64_t seed) {
#if T1HA_SYS_UNALIGNED_ACCESS == T1HA_UNALIGNED_ACCESS__EFFICIENT
if (unlikely(length > 32)) {
init_cd(&state, seed, length);
-#if defined(__LCC__) && __LCC__ > 123
-/* Forces combining pairs of arithmetic operations into two-stage (fused)
- * operations in the loop that immediately follows this directive, even if
- * the optimization heuristics say it is not worthwhile */
-#pragma comb_oper
-#endif /* E2K LCC > 1.23 */
+#if defined(__LCC__) && __LCC__ > 123
+/* Forces combining pairs of arithmetic operations into two-stage (fused)
+ * operations in the loop that immediately follows this directive, even if
+ * the optimization heuristics say it is not worthwhile */
+#pragma comb_oper
+#endif /* E2K LCC > 1.23 */
T1HA2_LOOP(le, unaligned, &state, data, length);
squash(&state);
length &= 31;
@@ -222,12 +222,12 @@ uint64_t t1ha2_atonce(const void *data, size_t length, uint64_t seed) {
if (misaligned) {
if (unlikely(length > 32)) {
init_cd(&state, seed, length);
-#if defined(__LCC__) && __LCC__ > 123
-/* Forces combining pairs of arithmetic operations into two-stage (fused)
- * operations in the loop that immediately follows this directive, even if
- * the optimization heuristics say it is not worthwhile */
-#pragma comb_oper
-#endif /* E2K LCC > 1.23 */
+#if defined(__LCC__) && __LCC__ > 123
+/* Forces combining pairs of arithmetic operations into two-stage (fused)
+ * operations in the loop that immediately follows this directive, even if
+ * the optimization heuristics say it is not worthwhile */
+#pragma comb_oper
+#endif /* E2K LCC > 1.23 */
T1HA2_LOOP(le, unaligned, &state, data, length);
squash(&state);
length &= 31;
@@ -236,12 +236,12 @@ uint64_t t1ha2_atonce(const void *data, size_t length, uint64_t seed) {
} else {
if (unlikely(length > 32)) {
init_cd(&state, seed, length);
-#if defined(__LCC__) && __LCC__ > 123
-/* Forces combining pairs of arithmetic operations into two-stage (fused)
- * operations in the loop that immediately follows this directive, even if
- * the optimization heuristics say it is not worthwhile */
-#pragma comb_oper
-#endif /* E2K LCC > 1.23 */
+#if defined(__LCC__) && __LCC__ > 123
+/* Forces combining pairs of arithmetic operations into two-stage (fused)
+ * operations in the loop that immediately follows this directive, even if
+ * the optimization heuristics say it is not worthwhile */
+#pragma comb_oper
+#endif /* E2K LCC > 1.23 */
T1HA2_LOOP(le, aligned, &state, data, length);
squash(&state);
length &= 31;
@@ -260,12 +260,12 @@ uint64_t t1ha2_atonce128(uint64_t *__restrict extra_result,
#if T1HA_SYS_UNALIGNED_ACCESS == T1HA_UNALIGNED_ACCESS__EFFICIENT
if (unlikely(length > 32)) {
-#if defined(__LCC__) && __LCC__ > 123
-/* Forces combining pairs of arithmetic operations into two-stage (fused)
- * operations in the loop that immediately follows this directive, even if
- * the optimization heuristics say it is not worthwhile */
-#pragma comb_oper
-#endif /* E2K LCC > 1.23 */
+#if defined(__LCC__) && __LCC__ > 123
+/* Forces combining pairs of arithmetic operations into two-stage (fused)
+ * operations in the loop that immediately follows this directive, even if
+ * the optimization heuristics say it is not worthwhile */
+#pragma comb_oper
+#endif /* E2K LCC > 1.23 */
T1HA2_LOOP(le, unaligned, &state, data, length);
length &= 31;
}
@@ -274,24 +274,24 @@ uint64_t t1ha2_atonce128(uint64_t *__restrict extra_result,
const bool misaligned = (((uintptr_t)data) & (ALIGNMENT_64 - 1)) != 0;
if (misaligned) {
if (unlikely(length > 32)) {
-#if defined(__LCC__) && __LCC__ > 123
-/* Forces combining pairs of arithmetic operations into two-stage (fused)
- * operations in the loop that immediately follows this directive, even if
- * the optimization heuristics say it is not worthwhile */
-#pragma comb_oper
-#endif /* E2K LCC > 1.23 */
+#if defined(__LCC__) && __LCC__ > 123
+/* Forces combining pairs of arithmetic operations into two-stage (fused)
+ * operations in the loop that immediately follows this directive, even if
+ * the optimization heuristics say it is not worthwhile */
+#pragma comb_oper
+#endif /* E2K LCC > 1.23 */
T1HA2_LOOP(le, unaligned, &state, data, length);
length &= 31;
}
T1HA2_TAIL_ABCD(le, unaligned, &state, data, length);
} else {
if (unlikely(length > 32)) {
-#if defined(__LCC__) && __LCC__ > 123
-/* Forces combining pairs of arithmetic operations into two-stage (fused)
- * operations in the loop that immediately follows this directive, even if
- * the optimization heuristics say it is not worthwhile */
-#pragma comb_oper
-#endif /* E2K LCC > 1.23 */
+#if defined(__LCC__) && __LCC__ > 123
+/* Forces combining pairs of arithmetic operations into two-stage (fused)
+ * operations in the loop that immediately follows this directive, even if
+ * the optimization heuristics say it is not worthwhile */
+#pragma comb_oper
+#endif /* E2K LCC > 1.23 */
T1HA2_LOOP(le, aligned, &state, data, length);
length &= 31;
}
@@ -330,30 +330,30 @@ void t1ha2_update(t1ha_context_t *__restrict ctx, const void *__restrict data,
if (length >= 32) {
#if T1HA_SYS_UNALIGNED_ACCESS == T1HA_UNALIGNED_ACCESS__EFFICIENT
-#if defined(__LCC__) && __LCC__ > 123
-/* Forces combining pairs of arithmetic operations into two-stage (fused)
- * operations in the loop that immediately follows this directive, even if
- * the optimization heuristics say it is not worthwhile */
-#pragma comb_oper
-#endif /* E2K LCC > 1.23 */
+#if defined(__LCC__) && __LCC__ > 123
+/* Forces combining pairs of arithmetic operations into two-stage (fused)
+ * operations in the loop that immediately follows this directive, even if
+ * the optimization heuristics say it is not worthwhile */
+#pragma comb_oper
+#endif /* E2K LCC > 1.23 */
T1HA2_LOOP(le, unaligned, &ctx->state, data, length);
#else
const bool misaligned = (((uintptr_t)data) & (ALIGNMENT_64 - 1)) != 0;
if (misaligned) {
-#if defined(__LCC__) && __LCC__ > 123
-/* Forces combining pairs of arithmetic operations into two-stage (fused)
- * operations in the loop that immediately follows this directive, even if
- * the optimization heuristics say it is not worthwhile */
-#pragma comb_oper
-#endif /* E2K LCC > 1.23 */
+#if defined(__LCC__) && __LCC__ > 123
+/* Форсирует комбинирование пар арифметических операций в двухэтажные операции
+ * в ближайшем после объявления директивы цикле, даже если эвристики оптимизации
+ * говорят, что это нецелесообразно */
+#pragma comb_oper
+#endif /* E2K LCC > 1.23 */
T1HA2_LOOP(le, unaligned, &ctx->state, data, length);
} else {
-#if defined(__LCC__) && __LCC__ > 123
-/* Forces combining pairs of arithmetic operations into fused two-stage
- * operations in the loop that immediately follows this directive, even if
- * the optimization heuristics deem it inexpedient */
-#pragma comb_oper
-#endif /* E2K LCC > 1.23 */
+#if defined(__LCC__) && __LCC__ > 123
+/* Forces combining pairs of arithmetic operations into fused two-stage
+ * operations in the loop that immediately follows this directive, even if
+ * the optimization heuristics deem it inexpedient */
+#pragma comb_oper
+#endif /* E2K LCC > 1.23 */
T1HA2_LOOP(le, aligned, &ctx->state, data, length);
}
#endif
diff --git a/contrib/libs/t1ha/src/t1ha2_selfcheck.c b/contrib/libs/t1ha/src/t1ha2_selfcheck.c
index f3f843f7d4..1a01f99512 100644
--- a/contrib/libs/t1ha/src/t1ha2_selfcheck.c
+++ b/contrib/libs/t1ha/src/t1ha2_selfcheck.c
@@ -1,8 +1,8 @@
/*
- * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
+ * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
* Fast Positive Hash.
*
- * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
+ * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
* The 1Hippeus project (t1h).
*
* This software is provided 'as-is', without any express or implied
@@ -34,7 +34,7 @@
* hardware tricks).
* 3. Not suitable for cryptography.
*
- * The Future will (be) Positive. Всё будет хорошо.
+ * The Future will (be) Positive. Всё будет хорошо.
*
* ACKNOWLEDGEMENT:
* The t1ha was originally developed by Leonid Yuriev (Леонид Юрьев)
diff --git a/contrib/libs/t1ha/src/t1ha_bits.h b/contrib/libs/t1ha/src/t1ha_bits.h
index d85e8ede95..93b6b51a54 100644
--- a/contrib/libs/t1ha/src/t1ha_bits.h
+++ b/contrib/libs/t1ha/src/t1ha_bits.h
@@ -1,8 +1,8 @@
/*
- * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
+ * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
* Fast Positive Hash.
*
- * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
+ * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
* The 1Hippeus project (t1h).
*
* This software is provided 'as-is', without any express or implied
@@ -34,7 +34,7 @@
* hardware tricks).
* 3. Not suitable for cryptography.
*
- * The Future will (be) Positive. Всё будет хорошо.
+ * The Future will (be) Positive. Всё будет хорошо.
*
* ACKNOWLEDGEMENT:
* The t1ha was originally developed by Leonid Yuriev (Леонид Юрьев)
@@ -123,10 +123,10 @@
#endif
#ifndef __optimize
-#if defined(__clang__) && !__has_attribute(__optimize__)
+#if defined(__clang__) && !__has_attribute(__optimize__)
#define __optimize(ops)
-#elif defined(__GNUC__) || __has_attribute(__optimize__)
-#define __optimize(ops) __attribute__((__optimize__(ops)))
+#elif defined(__GNUC__) || __has_attribute(__optimize__)
+#define __optimize(ops) __attribute__((__optimize__(ops)))
#else
#define __optimize(ops)
#endif
@@ -135,13 +135,13 @@
#ifndef __cold
#if defined(__OPTIMIZE__)
#if defined(__e2k__)
-#define __cold __optimize(1) __attribute__((__cold__))
-#elif defined(__clang__) && !__has_attribute(__cold__) && \
- __has_attribute(__section__)
+#define __cold __optimize(1) __attribute__((__cold__))
+#elif defined(__clang__) && !__has_attribute(__cold__) && \
+ __has_attribute(__section__)
/* just put infrequently used functions in separate section */
-#define __cold __attribute__((__section__("text.unlikely"))) __optimize("Os")
-#elif defined(__GNUC__) || __has_attribute(__cold__)
-#define __cold __attribute__((__cold__)) __optimize("Os")
+#define __cold __attribute__((__section__("text.unlikely"))) __optimize("Os")
+#elif defined(__GNUC__) || __has_attribute(__cold__)
+#define __cold __attribute__((__cold__)) __optimize("Os")
#else
#define __cold __optimize("Os")
#endif
@@ -161,7 +161,7 @@
#endif
#if defined(__e2k__)
-#include <e2kbuiltin.h>
+#include <e2kbuiltin.h>
#endif
#ifndef likely
@@ -182,14 +182,14 @@
#define bswap16(v) __builtin_bswap16(v)
#endif
-#if !defined(__maybe_unused) && \
- (__GNUC_PREREQ(4, 3) || __has_attribute(__unused__))
-#define __maybe_unused __attribute__((__unused__))
+#if !defined(__maybe_unused) && \
+ (__GNUC_PREREQ(4, 3) || __has_attribute(__unused__))
+#define __maybe_unused __attribute__((__unused__))
#endif
#if !defined(__always_inline) && \
- (__GNUC_PREREQ(3, 2) || __has_attribute(__always_inline__))
-#define __always_inline __inline __attribute__((__always_inline__))
+ (__GNUC_PREREQ(3, 2) || __has_attribute(__always_inline__))
+#define __always_inline __inline __attribute__((__always_inline__))
#endif
#if defined(__e2k__)
@@ -401,24 +401,24 @@ static __always_inline uint16_t bswap16(uint16_t v) { return v << 8 | v >> 8; }
#endif
#endif /* bswap16 */
-#if defined(__ia32__) || \
- T1HA_SYS_UNALIGNED_ACCESS == T1HA_UNALIGNED_ACCESS__EFFICIENT
-/* The __builtin_assume_aligned() leads gcc/clang to load values into the
- * registers, even when it is possible to directly use an operand from memory.
- * This can lead to a shortage of registers and a significant slowdown.
- * Therefore avoid unnecessary use of __builtin_assume_aligned() for x86. */
-#define read_unaligned(ptr, bits) (*(const uint##bits##_t *__restrict)(ptr))
-#define read_aligned(ptr, bits) (*(const uint##bits##_t *__restrict)(ptr))
-#endif /* __ia32__ */
-
+#if defined(__ia32__) || \
+ T1HA_SYS_UNALIGNED_ACCESS == T1HA_UNALIGNED_ACCESS__EFFICIENT
+/* The __builtin_assume_aligned() leads gcc/clang to load values into the
+ * registers, even when it is possible to directly use an operand from memory.
+ * This can lead to a shortage of registers and a significant slowdown.
+ * Therefore avoid unnecessary use of __builtin_assume_aligned() for x86. */
+#define read_unaligned(ptr, bits) (*(const uint##bits##_t *__restrict)(ptr))
+#define read_aligned(ptr, bits) (*(const uint##bits##_t *__restrict)(ptr))
+#endif /* __ia32__ */
+
#ifndef read_unaligned
-#if defined(__GNUC__) || __has_attribute(__packed__)
+#if defined(__GNUC__) || __has_attribute(__packed__)
typedef struct {
uint8_t unaligned_8;
uint16_t unaligned_16;
uint32_t unaligned_32;
uint64_t unaligned_64;
-} __attribute__((__packed__)) t1ha_unaligned_proxy;
+} __attribute__((__packed__)) t1ha_unaligned_proxy;
#define read_unaligned(ptr, bits) \
(((const t1ha_unaligned_proxy *)((const uint8_t *)(ptr)-offsetof( \
t1ha_unaligned_proxy, unaligned_##bits))) \
@@ -448,25 +448,25 @@ typedef struct {
#if __GNUC_PREREQ(4, 8) || __has_builtin(__builtin_assume_aligned)
#define read_aligned(ptr, bits) \
(*(const uint##bits##_t *)__builtin_assume_aligned(ptr, ALIGNMENT_##bits))
-#elif (__GNUC_PREREQ(3, 3) || __has_attribute(__aligned__)) && \
- !defined(__clang__)
+#elif (__GNUC_PREREQ(3, 3) || __has_attribute(__aligned__)) && \
+ !defined(__clang__)
#define read_aligned(ptr, bits) \
- (*(const uint##bits##_t \
- __attribute__((__aligned__(ALIGNMENT_##bits))) *)(ptr))
-#elif __has_attribute(__assume_aligned__)
+ (*(const uint##bits##_t \
+ __attribute__((__aligned__(ALIGNMENT_##bits))) *)(ptr))
+#elif __has_attribute(__assume_aligned__)
static __always_inline const
- uint16_t *__attribute__((__assume_aligned__(ALIGNMENT_16)))
+ uint16_t *__attribute__((__assume_aligned__(ALIGNMENT_16)))
cast_aligned_16(const void *ptr) {
return (const uint16_t *)ptr;
}
static __always_inline const
- uint32_t *__attribute__((__assume_aligned__(ALIGNMENT_32)))
+ uint32_t *__attribute__((__assume_aligned__(ALIGNMENT_32)))
cast_aligned_32(const void *ptr) {
return (const uint32_t *)ptr;
}
static __always_inline const
- uint64_t *__attribute__((__assume_aligned__(ALIGNMENT_64)))
+ uint64_t *__attribute__((__assume_aligned__(ALIGNMENT_64)))
cast_aligned_64(const void *ptr) {
return (const uint64_t *)ptr;
}
@@ -524,8 +524,8 @@ static __always_inline const
/*---------------------------------------------------------- Little Endian */
#ifndef fetch16_le_aligned
-static __maybe_unused __always_inline uint16_t
-fetch16_le_aligned(const void *v) {
+static __maybe_unused __always_inline uint16_t
+fetch16_le_aligned(const void *v) {
assert(((uintptr_t)v) % ALIGNMENT_16 == 0);
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
return read_aligned(v, 16);
@@ -536,8 +536,8 @@ fetch16_le_aligned(const void *v) {
#endif /* fetch16_le_aligned */
#ifndef fetch16_le_unaligned
-static __maybe_unused __always_inline uint16_t
-fetch16_le_unaligned(const void *v) {
+static __maybe_unused __always_inline uint16_t
+fetch16_le_unaligned(const void *v) {
#if T1HA_SYS_UNALIGNED_ACCESS == T1HA_UNALIGNED_ACCESS__UNABLE
const uint8_t *p = (const uint8_t *)v;
return p[0] | (uint16_t)p[1] << 8;
@@ -550,8 +550,8 @@ fetch16_le_unaligned(const void *v) {
#endif /* fetch16_le_unaligned */
#ifndef fetch32_le_aligned
-static __maybe_unused __always_inline uint32_t
-fetch32_le_aligned(const void *v) {
+static __maybe_unused __always_inline uint32_t
+fetch32_le_aligned(const void *v) {
assert(((uintptr_t)v) % ALIGNMENT_32 == 0);
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
return read_aligned(v, 32);
@@ -562,8 +562,8 @@ fetch32_le_aligned(const void *v) {
#endif /* fetch32_le_aligned */
#ifndef fetch32_le_unaligned
-static __maybe_unused __always_inline uint32_t
-fetch32_le_unaligned(const void *v) {
+static __maybe_unused __always_inline uint32_t
+fetch32_le_unaligned(const void *v) {
#if T1HA_SYS_UNALIGNED_ACCESS == T1HA_UNALIGNED_ACCESS__UNABLE
return fetch16_le_unaligned(v) |
(uint32_t)fetch16_le_unaligned((const uint8_t *)v + 2) << 16;
@@ -576,8 +576,8 @@ fetch32_le_unaligned(const void *v) {
#endif /* fetch32_le_unaligned */
#ifndef fetch64_le_aligned
-static __maybe_unused __always_inline uint64_t
-fetch64_le_aligned(const void *v) {
+static __maybe_unused __always_inline uint64_t
+fetch64_le_aligned(const void *v) {
assert(((uintptr_t)v) % ALIGNMENT_64 == 0);
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
return read_aligned(v, 64);
@@ -588,8 +588,8 @@ fetch64_le_aligned(const void *v) {
#endif /* fetch64_le_aligned */
#ifndef fetch64_le_unaligned
-static __maybe_unused __always_inline uint64_t
-fetch64_le_unaligned(const void *v) {
+static __maybe_unused __always_inline uint64_t
+fetch64_le_unaligned(const void *v) {
#if T1HA_SYS_UNALIGNED_ACCESS == T1HA_UNALIGNED_ACCESS__UNABLE
return fetch32_le_unaligned(v) |
(uint64_t)fetch32_le_unaligned((const uint8_t *)v + 4) << 32;
@@ -601,8 +601,8 @@ fetch64_le_unaligned(const void *v) {
}
#endif /* fetch64_le_unaligned */
-static __maybe_unused __always_inline uint64_t tail64_le_aligned(const void *v,
- size_t tail) {
+static __maybe_unused __always_inline uint64_t tail64_le_aligned(const void *v,
+ size_t tail) {
const uint8_t *const p = (const uint8_t *)v;
#if T1HA_USE_FAST_ONESHOT_READ && !defined(__SANITIZE_ADDRESS__)
/* We can perform a 'oneshot' read, which is little bit faster. */
@@ -680,8 +680,8 @@ static __maybe_unused __always_inline uint64_t tail64_le_aligned(const void *v,
(((PAGESIZE - (size)) & (uintptr_t)(ptr)) != 0)
#endif /* T1HA_USE_FAST_ONESHOT_READ */
-static __maybe_unused __always_inline uint64_t
-tail64_le_unaligned(const void *v, size_t tail) {
+static __maybe_unused __always_inline uint64_t
+tail64_le_unaligned(const void *v, size_t tail) {
const uint8_t *p = (const uint8_t *)v;
#if defined(can_read_underside) && \
(UINTPTR_MAX > 0xffffFFFFul || ULONG_MAX > 0xffffFFFFul)
@@ -980,14 +980,14 @@ tail64_be_unaligned(const void *v, size_t tail) {
/***************************************************************************/
#ifndef rot64
-static __maybe_unused __always_inline uint64_t rot64(uint64_t v, unsigned s) {
+static __maybe_unused __always_inline uint64_t rot64(uint64_t v, unsigned s) {
return (v >> s) | (v << (64 - s));
}
#endif /* rot64 */
#ifndef mul_32x32_64
-static __maybe_unused __always_inline uint64_t mul_32x32_64(uint32_t a,
- uint32_t b) {
+static __maybe_unused __always_inline uint64_t mul_32x32_64(uint32_t a,
+ uint32_t b) {
return a * (uint64_t)b;
}
#endif /* mul_32x32_64 */
@@ -1037,9 +1037,9 @@ add64carry_last(unsigned carry, uint64_t base, uint64_t addend, uint64_t *sum) {
static __maybe_unused __always_inline uint64_t mul_64x64_128(uint64_t a,
uint64_t b,
uint64_t *h) {
-#if (defined(__SIZEOF_INT128__) || \
- (defined(_INTEGRAL_MAX_BITS) && _INTEGRAL_MAX_BITS >= 128)) && \
- (!defined(__LCC__) || __LCC__ != 124)
+#if (defined(__SIZEOF_INT128__) || \
+ (defined(_INTEGRAL_MAX_BITS) && _INTEGRAL_MAX_BITS >= 128)) && \
+ (!defined(__LCC__) || __LCC__ != 124)
__uint128_t r = (__uint128_t)a * (__uint128_t)b;
/* modern GCC could nicely optimize this */
*h = (uint64_t)(r >> 64);
@@ -1094,15 +1094,15 @@ static __maybe_unused __always_inline uint64_t mux64(uint64_t v,
return l ^ h;
}
-static __maybe_unused __always_inline uint64_t final64(uint64_t a, uint64_t b) {
+static __maybe_unused __always_inline uint64_t final64(uint64_t a, uint64_t b) {
uint64_t x = (a + rot64(b, 41)) * prime_0;
uint64_t y = (rot64(a, 23) + b) * prime_6;
return mux64(x ^ y, prime_5);
}
-static __maybe_unused __always_inline void mixup64(uint64_t *__restrict a,
- uint64_t *__restrict b,
- uint64_t v, uint64_t prime) {
+static __maybe_unused __always_inline void mixup64(uint64_t *__restrict a,
+ uint64_t *__restrict b,
+ uint64_t v, uint64_t prime) {
uint64_t h;
*a ^= mul_64x64_128(*b + v, prime, &h);
*b += h;
@@ -1124,8 +1124,8 @@ typedef union t1ha_uint128 {
};
} t1ha_uint128_t;
-static __maybe_unused __always_inline t1ha_uint128_t
-not128(const t1ha_uint128_t v) {
+static __maybe_unused __always_inline t1ha_uint128_t
+not128(const t1ha_uint128_t v) {
t1ha_uint128_t r;
#if defined(__SIZEOF_INT128__) || \
(defined(_INTEGRAL_MAX_BITS) && _INTEGRAL_MAX_BITS >= 128)
@@ -1137,8 +1137,8 @@ not128(const t1ha_uint128_t v) {
return r;
}
-static __maybe_unused __always_inline t1ha_uint128_t
-left128(const t1ha_uint128_t v, unsigned s) {
+static __maybe_unused __always_inline t1ha_uint128_t
+left128(const t1ha_uint128_t v, unsigned s) {
t1ha_uint128_t r;
assert(s < 128);
#if defined(__SIZEOF_INT128__) || \
@@ -1151,8 +1151,8 @@ left128(const t1ha_uint128_t v, unsigned s) {
return r;
}
-static __maybe_unused __always_inline t1ha_uint128_t
-right128(const t1ha_uint128_t v, unsigned s) {
+static __maybe_unused __always_inline t1ha_uint128_t
+right128(const t1ha_uint128_t v, unsigned s) {
t1ha_uint128_t r;
assert(s < 128);
#if defined(__SIZEOF_INT128__) || \
@@ -1165,8 +1165,8 @@ right128(const t1ha_uint128_t v, unsigned s) {
return r;
}
-static __maybe_unused __always_inline t1ha_uint128_t or128(t1ha_uint128_t x,
- t1ha_uint128_t y) {
+static __maybe_unused __always_inline t1ha_uint128_t or128(t1ha_uint128_t x,
+ t1ha_uint128_t y) {
t1ha_uint128_t r;
#if defined(__SIZEOF_INT128__) || \
(defined(_INTEGRAL_MAX_BITS) && _INTEGRAL_MAX_BITS >= 128)
@@ -1178,8 +1178,8 @@ static __maybe_unused __always_inline t1ha_uint128_t or128(t1ha_uint128_t x,
return r;
}
-static __maybe_unused __always_inline t1ha_uint128_t xor128(t1ha_uint128_t x,
- t1ha_uint128_t y) {
+static __maybe_unused __always_inline t1ha_uint128_t xor128(t1ha_uint128_t x,
+ t1ha_uint128_t y) {
t1ha_uint128_t r;
#if defined(__SIZEOF_INT128__) || \
(defined(_INTEGRAL_MAX_BITS) && _INTEGRAL_MAX_BITS >= 128)
@@ -1191,8 +1191,8 @@ static __maybe_unused __always_inline t1ha_uint128_t xor128(t1ha_uint128_t x,
return r;
}
-static __maybe_unused __always_inline t1ha_uint128_t rot128(t1ha_uint128_t v,
- unsigned s) {
+static __maybe_unused __always_inline t1ha_uint128_t rot128(t1ha_uint128_t v,
+ unsigned s) {
s &= 127;
#if defined(__SIZEOF_INT128__) || \
(defined(_INTEGRAL_MAX_BITS) && _INTEGRAL_MAX_BITS >= 128)
@@ -1203,8 +1203,8 @@ static __maybe_unused __always_inline t1ha_uint128_t rot128(t1ha_uint128_t v,
#endif
}
-static __maybe_unused __always_inline t1ha_uint128_t add128(t1ha_uint128_t x,
- t1ha_uint128_t y) {
+static __maybe_unused __always_inline t1ha_uint128_t add128(t1ha_uint128_t x,
+ t1ha_uint128_t y) {
t1ha_uint128_t r;
#if defined(__SIZEOF_INT128__) || \
(defined(_INTEGRAL_MAX_BITS) && _INTEGRAL_MAX_BITS >= 128)
@@ -1215,8 +1215,8 @@ static __maybe_unused __always_inline t1ha_uint128_t add128(t1ha_uint128_t x,
return r;
}
-static __maybe_unused __always_inline t1ha_uint128_t mul128(t1ha_uint128_t x,
- t1ha_uint128_t y) {
+static __maybe_unused __always_inline t1ha_uint128_t mul128(t1ha_uint128_t x,
+ t1ha_uint128_t y) {
t1ha_uint128_t r;
#if defined(__SIZEOF_INT128__) || \
(defined(_INTEGRAL_MAX_BITS) && _INTEGRAL_MAX_BITS >= 128)
@@ -1233,20 +1233,20 @@ static __maybe_unused __always_inline t1ha_uint128_t mul128(t1ha_uint128_t x,
#if T1HA0_AESNI_AVAILABLE && defined(__ia32__)
uint64_t t1ha_ia32cpu_features(void);
-static __maybe_unused __always_inline bool
-t1ha_ia32_AESNI_avail(uint64_t ia32cpu_features) {
+static __maybe_unused __always_inline bool
+t1ha_ia32_AESNI_avail(uint64_t ia32cpu_features) {
/* check for AES-NI */
return (ia32cpu_features & UINT32_C(0x02000000)) != 0;
}
-static __maybe_unused __always_inline bool
-t1ha_ia32_AVX_avail(uint64_t ia32cpu_features) {
+static __maybe_unused __always_inline bool
+t1ha_ia32_AVX_avail(uint64_t ia32cpu_features) {
/* check for any AVX */
return (ia32cpu_features & UINT32_C(0x1A000000)) == UINT32_C(0x1A000000);
}
-static __maybe_unused __always_inline bool
-t1ha_ia32_AVX2_avail(uint64_t ia32cpu_features) {
+static __maybe_unused __always_inline bool
+t1ha_ia32_AVX2_avail(uint64_t ia32cpu_features) {
/* check for 'Advanced Vector Extensions 2' */
return ((ia32cpu_features >> 32) & 32) != 0;
}
diff --git a/contrib/libs/t1ha/src/t1ha_selfcheck.c b/contrib/libs/t1ha/src/t1ha_selfcheck.c
index 1c1a506b50..ee9394bf3b 100644
--- a/contrib/libs/t1ha/src/t1ha_selfcheck.c
+++ b/contrib/libs/t1ha/src/t1ha_selfcheck.c
@@ -1,8 +1,8 @@
/*
- * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
+ * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
* Fast Positive Hash.
*
- * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
+ * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
* The 1Hippeus project (t1h).
*
* This software is provided 'as-is', without any express or implied
@@ -34,7 +34,7 @@
* hardware tricks).
* 3. Not suitable for cryptography.
*
- * The Future will (be) Positive. Всё будет хорошо.
+ * The Future will (be) Positive. Всё будет хорошо.
*
* ACKNOWLEDGEMENT:
* The t1ha was originally developed by Leonid Yuriev (Леонид Юрьев)
diff --git a/contrib/libs/t1ha/src/t1ha_selfcheck.h b/contrib/libs/t1ha/src/t1ha_selfcheck.h
index f70e1c6188..e83cd2417d 100644
--- a/contrib/libs/t1ha/src/t1ha_selfcheck.h
+++ b/contrib/libs/t1ha/src/t1ha_selfcheck.h
@@ -1,8 +1,8 @@
/*
- * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
+ * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
* Fast Positive Hash.
*
- * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
+ * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
* The 1Hippeus project (t1h).
*
* This software is provided 'as-is', without any express or implied
@@ -34,7 +34,7 @@
* hardware tricks).
* 3. Not suitable for cryptography.
*
- * The Future will (be) Positive. Всё будет хорошо.
+ * The Future will (be) Positive. Всё будет хорошо.
*
* ACKNOWLEDGEMENT:
* The t1ha was originally developed by Leonid Yuriev (Леонид Юрьев)
diff --git a/contrib/libs/t1ha/src/t1ha_selfcheck_all.c b/contrib/libs/t1ha/src/t1ha_selfcheck_all.c
index 47766ef9b6..f916cef716 100644
--- a/contrib/libs/t1ha/src/t1ha_selfcheck_all.c
+++ b/contrib/libs/t1ha/src/t1ha_selfcheck_all.c
@@ -1,8 +1,8 @@
/*
- * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
+ * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
* Fast Positive Hash.
*
- * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
+ * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
* The 1Hippeus project (t1h).
*
* This software is provided 'as-is', without any express or implied
@@ -34,7 +34,7 @@
* hardware tricks).
* 3. Not suitable for cryptography.
*
- * The Future will (be) Positive. Всё будет хорошо.
+ * The Future will (be) Positive. Всё будет хорошо.
*
* ACKNOWLEDGEMENT:
* The t1ha was originally developed by Leonid Yuriev (Леонид Юрьев)
diff --git a/contrib/libs/t1ha/t1ha.h b/contrib/libs/t1ha/t1ha.h
index e72ce039f7..9bb8d74496 100644
--- a/contrib/libs/t1ha/t1ha.h
+++ b/contrib/libs/t1ha/t1ha.h
@@ -1,8 +1,8 @@
/*
- * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
+ * Copyright (c) 2016-2020 Positive Technologies, https://www.ptsecurity.com,
* Fast Positive Hash.
*
- * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
+ * Portions Copyright (c) 2010-2020 Leonid Yuriev <leo@yuriev.ru>,
* The 1Hippeus project (t1h).
*
* This software is provided 'as-is', without any express or implied
@@ -34,7 +34,7 @@
* hardware tricks).
* 3. Not suitable for cryptography.
*
- * The Future will (be) Positive. Всё будет хорошо.
+ * The Future will (be) Positive. Всё будет хорошо.
*
* ACKNOWLEDGEMENT:
* The t1ha was originally developed by Leonid Yuriev (Леонид Юрьев)
@@ -60,7 +60,7 @@
* // To disable unaligned access at all.
* #define T1HA_SYS_UNALIGNED_ACCESS 0
*
- * // To enable unaligned access, but indicate that it significantly slow.
+ * // To enable unaligned access, but indicate that it significantly slow.
* #define T1HA_SYS_UNALIGNED_ACCESS 1
*
* // To enable unaligned access, and indicate that it effecient.
@@ -323,8 +323,8 @@
#else
#define __dll_export __declspec(dllexport)
#endif
-#elif defined(__GNUC__) || __has_attribute(__visibility__)
-#define __dll_export __attribute__((__visibility__("default")))
+#elif defined(__GNUC__) || __has_attribute(__visibility__)
+#define __dll_export __attribute__((__visibility__("default")))
#else
#define __dll_export
#endif
@@ -337,8 +337,8 @@
#else
#define __dll_import __declspec(dllimport)
#endif
-#elif defined(__GNUC__) || __has_attribute(__visibility__)
-#define __dll_import __attribute__((__visibility__("default")))
+#elif defined(__GNUC__) || __has_attribute(__visibility__)
+#define __dll_import __attribute__((__visibility__("default")))
#else
#define __dll_import
#endif
@@ -347,8 +347,8 @@
#ifndef __force_inline
#ifdef _MSC_VER
#define __force_inline __forceinline
-#elif __GNUC_PREREQ(3, 2) || __has_attribute(__always_inline__)
-#define __force_inline __inline __attribute__((__always_inline__))
+#elif __GNUC_PREREQ(3, 2) || __has_attribute(__always_inline__)
+#define __force_inline __inline __attribute__((__always_inline__))
#else
#define __force_inline __inline
#endif
@@ -372,7 +372,7 @@
#if defined(__GNUC__) && defined(__ia32__)
#define T1HA_ALIGN_SUFFIX \
- __attribute__((__aligned__(32))) /* required only for SIMD */
+ __attribute__((__aligned__(32))) /* required only for SIMD */
#else
#define T1HA_ALIGN_SUFFIX
#endif /* GCC x86 */
@@ -383,14 +383,14 @@
/* GNU ELF indirect functions usage control. For more info please see
* https://en.wikipedia.org/wiki/Executable_and_Linkable_Format
* and https://sourceware.org/glibc/wiki/GNU_IFUNC */
-#if defined(__ELF__) && defined(__amd64__) && \
- (__has_attribute(__ifunc__) || \
- (!defined(__clang__) && defined(__GNUC__) && __GNUC__ >= 4 && \
- !defined(__SANITIZE_ADDRESS__) && !defined(__SSP_ALL__)))
-/* Enable gnu_indirect_function by default if :
- * - ELF AND x86_64
- * - attribute(__ifunc__) is available OR
- * GCC >= 4 WITHOUT -fsanitize=address NOR -fstack-protector-all */
+#if defined(__ELF__) && defined(__amd64__) && \
+ (__has_attribute(__ifunc__) || \
+ (!defined(__clang__) && defined(__GNUC__) && __GNUC__ >= 4 && \
+ !defined(__SANITIZE_ADDRESS__) && !defined(__SSP_ALL__)))
+/* Enable gnu_indirect_function by default if :
+ * - ELF AND x86_64
+ * - attribute(__ifunc__) is available OR
+ * GCC >= 4 WITHOUT -fsanitize=address NOR -fstack-protector-all */
#define T1HA_USE_INDIRECT_FUNCTIONS 1
#else
#define T1HA_USE_INDIRECT_FUNCTIONS 0
diff --git a/contrib/libs/t1ha/ya.make b/contrib/libs/t1ha/ya.make
index 107e7a48d3..6b0c94f9f3 100644
--- a/contrib/libs/t1ha/ya.make
+++ b/contrib/libs/t1ha/ya.make
@@ -1,13 +1,13 @@
-# Generated by devtools/yamaker from nixpkgs 8e778c6df06ab73862b9abc71f40489f9bbf6c40.
+# Generated by devtools/yamaker from nixpkgs 8e778c6df06ab73862b9abc71f40489f9bbf6c40.
-LIBRARY()
+LIBRARY()
OWNER(
va-kuznecov
g:cpp-contrib
)
-VERSION(2.1.4)
+VERSION(2.1.4)
ORIGINAL_SOURCE(https://github.com/PositiveTechnologies/t1ha/archive/v2.1.4.tar.gz)
@@ -15,10 +15,10 @@ LICENSE(Zlib)
LICENSE_TEXTS(.yandex_meta/licenses.list.txt)
-NO_COMPILER_WARNINGS()
-
-NO_RUNTIME()
-
+NO_COMPILER_WARNINGS()
+
+NO_RUNTIME()
+
SRCS(
src/t1ha0.c
src/t1ha0_ia32aes_avx.c
@@ -33,10 +33,10 @@ SRCS(
src/t1ha_selfcheck_all.c
)
-IF (ARCH_X86_64)
+IF (ARCH_X86_64)
CFLAGS(
-maes
)
-ENDIF()
+ENDIF()
END()