path: root/libavutil/x86
* avutil/x86/aes: remove a few branches (James Almer, 2025-04-10; 2 files, -20/+36)

  The rounds value is constant and can be one of three hardcoded values,
  so instead of checking it on every loop iteration, split the function
  into three implementations, one per value.

  Before:
    aes_decrypt_128_aesni: 93.8 (47.58x)
    aes_decrypt_192_aesni: 106.9 (49.30x)
    aes_decrypt_256_aesni: 109.8 (56.50x)
    aes_encrypt_128_aesni: 93.2 (47.70x)
    aes_encrypt_192_aesni: 111.1 (48.36x)
    aes_encrypt_256_aesni: 113.6 (56.27x)

  After:
    aes_decrypt_128_aesni: 71.5 (63.31x)
    aes_decrypt_192_aesni: 96.8 (55.64x)
    aes_decrypt_256_aesni: 106.1 (58.51x)
    aes_encrypt_128_aesni: 81.3 (55.92x)
    aes_encrypt_192_aesni: 91.2 (59.78x)
    aes_encrypt_256_aesni: 109.0 (58.26x)

  Signed-off-by: James Almer <jamrial@gmail.com>
* avutil/x86/aes: ignore the upper bits in count (James Almer, 2025-04-06; 1 file, -1/+1)

  The argument is an int.

  Signed-off-by: James Almer <jamrial@gmail.com>

* lavu/aes: add x86 AESNI optimizations (Rodger Combs, 2025-04-05; 3 files, -2/+135)

  crypto_bench comparison for AES-128-ECB:
    lavu_aesni AES-128-ECB size: 1048576 runs: 1024 time: 0.596 +- 0.081
    lavu_c     AES-128-ECB size: 1048576 runs: 1024 time: 17.007 +- 2.131
    crypto     AES-128-ECB size: 1048576 runs: 1024 time: 0.612 +- 1.857
    gcrypt     AES-128-ECB size: 1048576 runs: 1024 time: 1.123 +- 0.224
    tomcrypt   AES-128-ECB size: 1048576 runs: 1024 time: 9.038 +- 0.790

  Improved-By: Henrik Gramner <henrik@gramner.com>
  Signed-off-by: James Almer <jamrial@gmail.com>
* x86/tx_float: remove HAVE_AVX2_EXTERNAL checks (Lynne, 2024-10-06; 2 files, -5/+1)

  It'll always be enabled. Thanks, nasm.

* Revert "x86/tx_float: set all operands for shufps" (Lynne, 2024-10-06; 1 file, -2/+2)

  This reverts commit 74f5fb6db899dbc4fde9ccf77f37256ddcaaaab9.

* Revert "x86/tx_float: add missing check for AVX2" (Lynne, 2024-10-06; 1 file, -1/+1)

  This reverts commit f4097e4c1f1bb244cae78c363a69d5e84495b616.

* Revert "x86/tx_float: add missing preprocessor wrapper for AVX2 functions" (Lynne, 2024-10-06; 1 file, -1/+1)

  This reverts commit 750f378becf15c0552c45a66a66aca7cc506d490.

* Revert "x86/tx_float: change a condition in a preprocessor check" (Lynne, 2024-10-06; 1 file, -1/+1)

  This reverts commit 0d8f43c74d0b1039ba70aacb4c9c7768e8bebf9f.
* x86/intreadwrite: add SSE2 optimized AV_COPY128U (James Almer, 2024-07-29; 1 file, -0/+7)

  Signed-off-by: James Almer <jamrial@gmail.com>
* x86/intreadwrite: add missing casts to pointer arguments (James Almer, 2024-07-11; 1 file, -11/+4)

  Should make strict compilers happy. Also, make AV_COPY128 use integer
  operations while at it. Removing the inclusion of immintrin.h ensures
  far fewer intrinsic-related headers are included as well, which fixes
  a clash of defines with some Clang versions.

  Reviewed-by: Martin Storsjö <martin@martin.st>
  Signed-off-by: James Almer <jamrial@gmail.com>

* x86/intreadwrite: fix include of config.h (James Almer, 2024-07-10; 1 file, -1/+1)

  Should fix make checkheaders.

  Signed-off-by: James Almer <jamrial@gmail.com>

* x86/intreadwrite.h: add missing preprocessor checks (James Almer, 2024-07-10; 1 file, -6/+6)

  Removed by accident in the previous commits. This makes the code only
  run when compiled with GCC and Clang, like before. Support for other
  compilers like MSVC can be added later.

  Signed-off-by: James Almer <jamrial@gmail.com>

* x86/intreadwrite: use intrinsics instead of inline asm for AV_COPY128 (James Almer, 2024-07-10; 1 file, -13/+7)

  This has the benefit of removing any SSE -> AVX penalty that may
  happen when the compiler emits VEX encoded instructions.

  Signed-off-by: James Almer <jamrial@gmail.com>
* x86/intreadwrite: use intrinsics instead of inline asm for AV_ZERO128 (James Almer, 2024-07-10; 1 file, -8/+7)

  When called inside a loop, the inline asm version results in one pxor
  unnecessarily emitted per iteration, as the contents of the __asm__()
  block are opaque to the compiler's instruction scheduler. This is not
  the case with intrinsics, where pxor will be emitted once with any
  half decent compiler.

  This also has the benefit of removing any SSE -> AVX penalty that may
  happen when the compiler emits VEX encoded instructions.

  Signed-off-by: James Almer <jamrial@gmail.com>
* avutil/common: assert that bit position in av_zero_extend is valid (James Almer, 2024-06-13; 1 file, -1/+13)

  Signed-off-by: James Almer <jamrial@gmail.com>

* avutil: rename av_mod_uintp2 to av_zero_extend (James Almer, 2024-06-13; 1 file, -3/+3)

  It's more descriptive of what it does.

  Signed-off-by: James Almer <jamrial@gmail.com>
* lavu/x86: remove GCC 4.4- stuff (Rémi Denis-Courmont, 2024-06-13; 1 file, -11/+2)

  Since C11 support is required, those GCC versions can no longer be
  supported anyhow. (Clang pretends to be GCC 4.4, but it looks like
  the code was intended for old GCC specifically.)
* x86/float_dsp: add SSE2 and AVX versions of scalarproduct_double (James Almer, 2024-06-03; 2 files, -0/+57)

  Signed-off-by: James Almer <jamrial@gmail.com>
* avutil/common: Don't auto-include mem.h (Andreas Rheinhardt, 2024-03-31; 1 file, -0/+1)

  There are lots of files that don't need it: the number of object files
  that actually need it went down from 2011 to 884 here. Keep it for
  external users in order to not cause breakages. Also improve the other
  headers a bit while at it.

  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
* x86: Update x86inc.asm (Henrik Gramner, 2024-03-24; 1 file, -210/+462)

  Make things up-to-date with upstream:
  https://code.videolan.org/videolan/x86inc.asm

* avutil/x86util: Fix broken pre-SSE4.1 PMINSD emulation (Henrik Gramner, 2024-03-17; 1 file, -4/+0)

  Fixes yadif-16, which allows FATE to pass. Broken since
  2904db90458a1253e4aea6844ba9a59ac11923b6 (2017).

* configure: Remove av_restrict (Andreas Rheinhardt, 2024-03-15; 2 files, -7/+2)

  All versions of MSVC that support C11 (namely >= v19.27) also support
  the restrict keyword, therefore av_restrict is no longer necessary
  since 75697836b1db3e0f0a3b7061be6be28d00c675a0.

  Reviewed-by: Martin Storsjö <martin@martin.st>
  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
* x86: Remove inline MMX assembly that clobbers the FPU state (Martin Storsjö, 2024-02-09; 1 file, -36/+0)

  These inline implementations of AV_COPY64, AV_SWAP64 and AV_ZERO64 are
  known to clobber the FPU state, which has to be restored with the
  'emms' instruction afterwards.

  This was known and signaled with the FF_COPY_SWAP_ZERO_USES_MMX
  define, which calling code seems to have been supposed to check, in
  order to call emms_c() after using them. See
  0b1972d4096df5879038f0af776f87f41e90ebd4,
  29c4c0886d143790fcbeddbe40a23dfc6f56345c and
  df215e575850e41b19aeb1fd99e53372a6b3d537 for history on earlier fixes
  in the same area.

  However, new code can use these AV_*64() macros without knowing about
  the need to call emms_c(). Just get rid of these dangerous inline
  assembly snippets; this doesn't make any difference for 64 bit
  architectures anyway.

  Signed-off-by: Martin Storsjö <martin@martin.st>
* x86/tx_init: properly indicate the extended available transform sizes (Lynne, 2024-02-09; 1 file, -9/+9)

  Forgot to do this with the previous commit. This actually makes the
  assembly get used. Still the fastest FFT in the world, 15% faster than
  FFTW on the largest available size.
* x86/tx_float: enable SIMD for sizes over 131072 (Lynne, 2024-02-07; 1 file, -2/+6)

  The tables for the new sizes were added last year due to being
  required for SDR. However, the assembly was never updated to use them.
* x86inc: Add REPX macro to repeat instructions/operations (Henrik Gramner, 2023-11-08; 1 file, -0/+10)

  When operating on large blocks of data it's common to repeatedly use
  an instruction on multiple registers. Using the REPX macro makes it
  easy to quickly write dense code to achieve this without having to
  explicitly duplicate the same instruction over and over.

  For example,

      REPX {paddw x, m4}, m0, m1, m2, m3
      REPX {mova [r0+16*x], m5}, 0, 1, 2, 3

  will expand to

      paddw m0, m4
      paddw m1, m4
      paddw m2, m4
      paddw m3, m4
      mova [r0+16*0], m5
      mova [r0+16*1], m5
      mova [r0+16*2], m5
      mova [r0+16*3], m5

  Commit taken from x264:
  https://code.videolan.org/videolan/x264/-/commit/6d10612ab0007f8f60dd2399182efd696da3ffe4

  Signed-off-by: Frank Plowman <post@frankplowman.com>
  Signed-off-by: Anton Khirnov <anton@khirnov.net>
* avutil/x86/pixelutils: Empty MMX state in ff_pixelutils_sad_8x8_mmxext (Andreas Rheinhardt, 2023-11-04; 1 file, -0/+1)

  We currently mostly do not empty the MMX state in our MMX DSP
  functions; instead we only do so before code that might be using x87
  code. This is a violation of the System V i386 ABI (and maybe of other
  ABIs, too):

  "The CPU shall be in x87 mode upon entry to a function. Therefore,
  every function that uses the MMX registers is required to issue an
  emms or femms instruction after using MMX registers, before returning
  or calling another function." (See 2.2.1 in [1])

  This patch does not intend to change all these functions to abide by
  the ABI; it only does so for ff_pixelutils_sad_8x8_mmxext, as this
  function can be called by external users, because it is exported via
  the pixelutils API.

  Without this, the following fragment will assert (on x86/x64):

      uint8_t src1[8 * 8], src2[8 * 8];
      av_pixelutils_sad_fn fn = av_pixelutils_get_sad_fn(3, 3, 0, NULL);
      fn(src1, 8, src2, 8);
      av_assert0_fpu();

  [1]: https://raw.githubusercontent.com/wiki/hjl-tools/x86-psABI/intel386-psABI-1.1.pdf

  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
* avutil/internal: Don't auto-include emms.h (Andreas Rheinhardt, 2023-09-04; 1 file, -58/+0)

  Instead include emms.h wherever it is needed.

  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
* x86: replace explicit REP_RETs with RETs (Lynne, 2023-02-01; 2 files, -11/+11)

  From x86inc:

  > On AMD cpus <=K10, an ordinary ret is slow if it immediately follows
  > either a branch or a branch target. So switch to a 2-byte form of
  > ret in that case. We can automatically detect "follows a branch",
  > but not a branch target. (SSSE3 is a sufficient condition to know
  > that your cpu doesn't have this problem.)

  x86inc can automatically determine whether to use REP_RET rather than
  RET in most of these cases, so the impact is minimal. Additionally, a
  few REP_RETs were used unnecessarily, despite the return being nowhere
  near a branch.

  The only CPUs affected were AMD K10s, made between 2007 and 2011, 16
  years ago and 12 years ago, respectively.

  In the future, everyone involved with x86inc should consider dropping
  REP_RETs altogether.
* x86/tx_float: fix stray change in 15xM FFT and replace imul->lea (Lynne, 2022-11-28; 1 file, -2/+2)

  Thanks to rorgoroth for bisecting and kurosu for the lea suggestion.

* lavu/tx: refactor to explicitly track and convert lookup table order (Lynne, 2022-11-24; 1 file, -21/+25)

  Necessary for generalizing PFAs.

* x86/tx_float: implement striding in fft_15xM (Lynne, 2022-11-24; 1 file, -16/+29)

* x86/tx_float_init: properly specify the supported factors of 15xM FFTs (Lynne, 2022-11-24; 1 file, -3/+3)

  Only powers of two are currently supported.

* x86/tx_float: add a standalone 15-point AVX2 transform (Lynne, 2022-11-24; 2 files, -0/+117)

  Enables its use everywhere else in the framework.

* x86/tx_float: optimize and macro out FFT15 (Lynne, 2022-11-24; 1 file, -134/+143)
* lavu/fixed_dsp: add missing av_restrict qualifiers (Johannes Kauffmann, 2022-10-04; 1 file, -1/+1)

  The butterflies_fixed function pointer declaration specifies
  av_restrict for the first two pointer arguments, so the corresponding
  function definitions should honor this declaration. MSVC emits
  warning C4113 for this.

  Signed-off-by: Anton Khirnov <anton@khirnov.net>

* x86/tx_float: enable AVX-only split-radix FFT codelets (Lynne, 2022-09-24; 2 files, -0/+10)

  Sandy Bridge, Ivy Bridge and Bulldozer cores don't support FMA3.

* x86/tx_float: fix some symbol names (James Almer, 2022-09-23; 1 file, -3/+3)

  Should fix compilation on macOS.

  Signed-off-by: James Almer <jamrial@gmail.com>
* x86/tx_float: change a condition in a preprocessor check (James Almer, 2022-09-23; 1 file, -1/+1)

  Fixes compilation with yasm.

  Signed-off-by: James Almer <jamrial@gmail.com>

* x86/tx_float: add missing preprocessor wrapper for AVX2 functions (James Almer, 2022-09-23; 1 file, -1/+1)

  Fixes compilation with old assemblers.

  Signed-off-by: James Almer <jamrial@gmail.com>
* x86/tx_float: generalize iMDCT (Lynne, 2022-09-23; 2 files, -29/+40)

  To support non-aligned buffers during the post-transform step, just
  iterate backwards over the array. This allows using the 15xN-point
  FFT, which is 2.1 times faster than our old libavcodec implementation.

* x86/tx_float: add 15xN PFA FFT AVX SIMD (Lynne, 2022-09-23; 2 files, -0/+348)

  ~4x faster than the C version. The shuffles in the 15pt dim1 are
  seriously expensive. Not happy with it, but I'm content. Can be easily
  converted to pure AVX by removing all vpermpd/vpermps instructions.
* x86/tx_float: adjust internal ASM call ABI again (Lynne, 2022-09-23; 1 file, -20/+8)

  There are many ways to go about it, and this one seems optimal for
  both MDCTs and PFA FFTs without requiring excessive instructions or
  stack usage.

* x86/tx_float: add asm call versions of the 2pt and 4pt transforms (Lynne, 2022-09-19; 2 files, -3/+32)

  Verified to be working.

* x86/tx_float: fully support 128bit regs in LOAD64_LUT (Lynne, 2022-09-19; 1 file, -5/+5)

  The gather path didn't support 128bit registers. It's not faster on
  Zen 3, but it's here for completeness.

* x86/tx_float: simplify and describe the intra-asm call convention (Lynne, 2022-09-19; 1 file, -13/+30)

* x86/float_dsp: use three operand form for some instructions (James Almer, 2022-09-13; 1 file, -8/+8)

  Fixes compilation with old yasm.

  Signed-off-by: James Almer <jamrial@gmail.com>

* avutil/x86/float_dsp: add fma3 for scalarproduct (Paul B Mahol, 2022-09-13; 2 files, -0/+129)

* avutil/x86/intreadwrite: Add ability to detect whether MMX code is used (Andreas Rheinhardt, 2022-09-11; 1 file, -0/+2)

  It can be used to call emms_c() only when needed.

  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>

* x86/tx_float: add missing check for AVX2 (James Almer, 2022-09-06; 1 file, -1/+1)

  Fixes compilation with old yasm.

  Signed-off-by: James Almer <jamrial@gmail.com>