path: root/libavutil/x86
Commit log, newest first. Each entry: subject (author, date, files changed, lines -removed/+added)
* avutil/x86util: Fix broken pre-SSE4.1 PMINSD emulation (Henrik Gramner, 2024-03-17, 1 file, -4/+0)
  Fixes yadif-16, which allows FATE to pass.
  Broken since 2904db90458a1253e4aea6844ba9a59ac11923b6 (2017).
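  For context, the usual emulation idiom expressed with SSE2 intrinsics in C
  (an illustration of the technique, not the x86util macro itself): signed
  32-bit minimum via compare and bit-selection:

      #include <emmintrin.h>  /* SSE2 */

      /* min(a, b) for signed 32-bit lanes without SSE4.1's pminsd:
       * build a mask where a is the smaller lane, then blend. */
      static __m128i min_epi32_sse2(__m128i a, __m128i b)
      {
          __m128i mask = _mm_cmpgt_epi32(b, a);   /* ~0 where a < b */
          return _mm_or_si128(_mm_and_si128(mask, a),
                              _mm_andnot_si128(mask, b));
      }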
* configure: Remove av_restrict (Andreas Rheinhardt, 2024-03-15, 2 files, -7/+2)
  All versions of MSVC that support C11 (namely >= v19.27) also support the
  restrict keyword; therefore, av_restrict has been unnecessary since
  75697836b1db3e0f0a3b7061be6be28d00c675a0.
  Reviewed-by: Martin Storsjö <martin@martin.st>
  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
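  As a hedged illustration (not code from the commit), this is the kind of
  C11 construct that all supported MSVC versions can now compile without the
  av_restrict fallback:

      /* 'restrict' promises the compiler that dst and src never alias,
       * enabling vectorization without runtime overlap checks. */
      static void vector_add(float *restrict dst, const float *restrict src,
                             int len)
      {
          for (int i = 0; i < len; i++)
              dst[i] += src[i];
      }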
* x86: Remove inline MMX assembly that clobbers the FPU state (Martin Storsjö, 2024-02-09, 1 file, -36/+0)
  These inline implementations of AV_COPY64, AV_SWAP64 and AV_ZERO64 are
  known to clobber the FPU state, which has to be restored with the 'emms'
  instruction afterwards.

  This was known and signaled with the FF_COPY_SWAP_ZERO_USES_MMX define,
  which calling code seems to have been supposed to check in order to call
  emms_c() after using them. See 0b1972d4096df5879038f0af776f87f41e90ebd4,
  29c4c0886d143790fcbeddbe40a23dfc6f56345c and
  df215e575850e41b19aeb1fd99e53372a6b3d537 for history on earlier fixes in
  the same area.

  However, new code can use these AV_*64() macros without knowing about the
  need to call emms_c(). Just get rid of these dangerous inline assembly
  snippets; this doesn't make any difference for 64 bit architectures anyway.
  Signed-off-by: Martin Storsjö <martin@martin.st>
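  A minimal sketch (GCC-style inline assembly, not the removed FFmpeg code)
  of why such snippets are dangerous: a movq through an MMX register switches
  the x87 unit into MMX mode, which must be undone with emms:

      #include <stdint.h>

      static void copy64_mmx(uint64_t *dst, const uint64_t *src)
      {
          __asm__ volatile ("movq %1, %%mm0 \n\t"
                            "movq %%mm0, %0"
                            : "=m"(*dst)
                            : "m"(*src)
                            : "mm0");
          /* Without this, later x87 floating-point code misbehaves. */
          __asm__ volatile ("emms");
      }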
* x86/tx_init: properly indicate the extended available transform sizes (Lynne, 2024-02-09, 1 file, -9/+9)
  Forgot to do this with the previous commit. This actually makes the
  assembly be used.
  Still the fastest FFT in the world, 15% faster than FFTW on the largest
  available size.
* x86/tx_float: enable SIMD for sizes over 131072 (Lynne, 2024-02-07, 1 file, -2/+6)
  The tables for the new sizes were added last year, as they were required
  for SDR. However, the assembly was never updated to use them.
* x86inc: Add REPX macro to repeat instructions/operations (Henrik Gramner, 2023-11-08, 1 file, -0/+10)
  When operating on large blocks of data it's common to repeatedly use an
  instruction on multiple registers. Using the REPX macro makes it easy to
  quickly write dense code to achieve this without having to explicitly
  duplicate the same instruction over and over.

  For example,

      REPX {paddw x, m4}, m0, m1, m2, m3
      REPX {mova [r0+16*x], m5}, 0, 1, 2, 3

  will expand to

      paddw m0, m4
      paddw m1, m4
      paddw m2, m4
      paddw m3, m4
      mova [r0+16*0], m5
      mova [r0+16*1], m5
      mova [r0+16*2], m5
      mova [r0+16*3], m5

  Commit taken from x264:
  https://code.videolan.org/videolan/x264/-/commit/6d10612ab0007f8f60dd2399182efd696da3ffe4
  Signed-off-by: Frank Plowman <post@frankplowman.com>
  Signed-off-by: Anton Khirnov <anton@khirnov.net>
* avutil/x86/pixelutils: Empty MMX state in ff_pixelutils_sad_8x8_mmxext (Andreas Rheinhardt, 2023-11-04, 1 file, -0/+1)
  We currently mostly do not empty the MMX state in our MMX DSP functions;
  instead we only do so before code that might be using x87 code. This is a
  violation of the System V i386 ABI (and maybe of other ABIs, too):

  "The CPU shall be in x87 mode upon entry to a function. Therefore, every
  function that uses the MMX registers is required to issue an emms or femms
  instruction after using MMX registers, before returning or calling another
  function." (See 2.2.1 in [1])

  This patch does not intend to change all these functions to abide by the
  ABI; it only does so for ff_pixelutils_sad_8x8_mmxext, as this function
  can be called by external users, because it is exported via the
  pixelutils API.

  Without this, the following fragment will assert (on x86/x64):

      uint8_t src1[8 * 8], src2[8 * 8];
      av_pixelutils_sad_fn fn = av_pixelutils_get_sad_fn(3, 3, 0, NULL);
      fn(src1, 8, src2, 8);
      av_assert0_fpu();

  [1]: https://raw.githubusercontent.com/wiki/hjl-tools/x86-psABI/intel386-psABI-1.1.pdf
  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
* avutil/internal: Don't auto-include emms.h (Andreas Rheinhardt, 2023-09-04, 1 file, -58/+0)
  Instead include emms.h wherever it is needed.
  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
* x86: replace explicit REP_RETs with RETs (Lynne, 2023-02-01, 2 files, -11/+11)
  From x86inc:
  > On AMD cpus <=K10, an ordinary ret is slow if it immediately follows
  > either a branch or a branch target. So switch to a 2-byte form of ret
  > in that case. We can automatically detect "follows a branch", but not
  > a branch target. (SSSE3 is a sufficient condition to know that your
  > cpu doesn't have this problem.)

  x86inc can automatically determine whether to use REP_RET rather than RET
  in most of these cases, so the impact is minimal. Additionally, a few
  REP_RETs were used unnecessarily, despite the return being nowhere near a
  branch.

  The only CPUs affected were AMD K10s, made between 2007 and 2011, 16 years
  ago and 12 years ago, respectively.

  In the future, everyone involved with x86inc should consider dropping
  REP_RETs altogether.
* x86/tx_float: fix stray change in 15xM FFT and replace imul->lea (Lynne, 2022-11-28, 1 file, -2/+2)
  Thanks to rorgoroth for bisecting and kurosu for the lea suggestion.
* lavu/tx: refactor to explicitly track and convert lookup table order (Lynne, 2022-11-24, 1 file, -21/+25)
  Necessary for generalizing PFAs.
* x86/tx_float: implement striding in fft_15xM (Lynne, 2022-11-24, 1 file, -16/+29)
* x86/tx_float_init: properly specify the supported factors of 15xM FFTs (Lynne, 2022-11-24, 1 file, -3/+3)
  Only powers of two are currently supported.
* x86/tx_float: add a standalone 15-point AVX2 transform (Lynne, 2022-11-24, 2 files, -0/+117)
  Enables its use everywhere else in the framework.
* x86/tx_float: optimize and macro out FFT15 (Lynne, 2022-11-24, 1 file, -134/+143)
* lavu/fixed_dsp: add missing av_restrict qualifiers (Johannes Kauffmann, 2022-10-04, 1 file, -1/+1)
  The butterflies_fixed function pointer declaration specifies av_restrict
  for the first two pointer arguments, so the corresponding function
  definitions should honor this declaration. MSVC emits warning C4113 for
  this.
  Signed-off-by: Anton Khirnov <anton@khirnov.net>
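  A hedged sketch of the mismatch class involved, with hypothetical names
  modeled on the description above: the definition must repeat the restrict
  qualifiers from the pointer type, or MSVC warns on assignment:

      typedef void (*butterflies_fn)(int *restrict v1, int *restrict v2,
                                     int len);

      /* Matching 'restrict' here keeps the definition consistent with the
       * function-pointer declaration and silences MSVC warning C4113. */
      static void butterflies_c(int *restrict v1, int *restrict v2, int len)
      {
          for (int i = 0; i < len; i++) {
              int t  = v1[i] - v2[i];
              v1[i] += v2[i];
              v2[i]  = t;
          }
      }

      static butterflies_fn fn = butterflies_c;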
* x86/tx_float: enable AVX-only split-radix FFT codelets (Lynne, 2022-09-24, 2 files, -0/+10)
  Sandy Bridge, Ivy Bridge and Bulldozer cores don't support FMA3.
* x86/tx_float: fix some symbol names (James Almer, 2022-09-23, 1 file, -3/+3)
  Should fix compilation on macOS.
  Signed-off-by: James Almer <jamrial@gmail.com>
* x86/tx_float: change a condition in a preprocessor check (James Almer, 2022-09-23, 1 file, -1/+1)
  Fixes compilation with yasm.
  Signed-off-by: James Almer <jamrial@gmail.com>
* x86/tx_float: add missing preprocessor wrapper for AVX2 functions (James Almer, 2022-09-23, 1 file, -1/+1)
  Fixes compilation with old assemblers.
  Signed-off-by: James Almer <jamrial@gmail.com>
* x86/tx_float: generalize iMDCT (Lynne, 2022-09-23, 2 files, -29/+40)
  To support non-aligned buffers during the post-transform step, just
  iterate backwards over the array.
  This allows using the 15xN-point FFT, with which the speed is 2.1 times
  faster than our old libavcodec implementation.
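  An illustrative sketch of the general principle (hypothetical helper, not
  the actual assembly): when writes can land on elements that still need to
  be read, walking backwards keeps the pending inputs intact:

      /* Shift elements right by one, in place; iterating forwards would
       * overwrite v[i] before it is read as the source of v[i + 1]. */
      static void shift_right(float *v, int len)
      {
          for (int i = len - 1; i > 0; i--)
              v[i] = v[i - 1];
      }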
* x86/tx_float: add 15xN PFA FFT AVX SIMD (Lynne, 2022-09-23, 2 files, -0/+348)
  ~4x faster than the C version.
  The shuffles in the 15pt dim1 are seriously expensive. Not happy with it,
  but I'm content.
  Can be easily converted to pure AVX by removing all vpermpd/vpermps
  instructions.
* x86/tx_float: adjust internal ASM call ABI again (Lynne, 2022-09-23, 1 file, -20/+8)
  There are many ways to go about it, and this one seems optimal for both
  MDCTs and PFA FFTs without requiring excessive instructions or stack
  usage.
* x86/tx_float: add asm call versions of the 2pt and 4pt transforms (Lynne, 2022-09-19, 2 files, -3/+32)
  Verified to be working.
* x86/tx_float: fully support 128bit regs in LOAD64_LUT (Lynne, 2022-09-19, 1 file, -5/+5)
  The gather path didn't support 128bit registers. It's not faster on Zen 3,
  but it's here for completeness.
* x86/tx_float: simplify and describe the intra-asm call convention (Lynne, 2022-09-19, 1 file, -13/+30)
* x86/float_dsp: use three operand form for some instructions (James Almer, 2022-09-13, 1 file, -8/+8)
  Fixes compilation with old yasm.
  Signed-off-by: James Almer <jamrial@gmail.com>
* avutil/x86/float_dsp: add fma3 for scalarproduct (Paul B Mahol, 2022-09-13, 2 files, -0/+129)
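  A hedged C intrinsics sketch of the same computation (the FFmpeg version
  is hand-written assembly; this assumes len is a multiple of 8):

      #include <immintrin.h>

      static float scalarproduct_fma3(const float *v1, const float *v2,
                                      int len)
      {
          __m256 acc = _mm256_setzero_ps();
          for (int i = 0; i < len; i += 8)       /* fused multiply-add */
              acc = _mm256_fmadd_ps(_mm256_loadu_ps(v1 + i),
                                    _mm256_loadu_ps(v2 + i), acc);
          __m128 s = _mm_add_ps(_mm256_castps256_ps128(acc),
                                _mm256_extractf128_ps(acc, 1));
          s = _mm_hadd_ps(s, s);                 /* horizontal reduction */
          s = _mm_hadd_ps(s, s);
          return _mm_cvtss_f32(s);
      }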
* avutil/x86/intreadwrite: Add ability to detect whether MMX code is used (Andreas Rheinhardt, 2022-09-11, 1 file, -0/+2)
  It can be used to call emms_c() only when needed.
  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
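  A sketch of the intended usage, assuming the FF_COPY_SWAP_ZERO_USES_MMX
  define this commit adds; AV_COPY64() and emms_c() come from libavutil's
  intreadwrite/emms internals:

      static void copy64_checked(uint64_t *dst, const uint64_t *src)
      {
          AV_COPY64(dst, src);
      #if FF_COPY_SWAP_ZERO_USES_MMX
          emms_c();   /* only needed when the macro really used MMX */
      #endif
      }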
* x86/tx_float: add missing check for AVX2 (James Almer, 2022-09-06, 1 file, -1/+1)
  Fixes compilation with old yasm.
  Signed-off-by: James Almer <jamrial@gmail.com>
* x86/tx_float: set all operands for shufps (James Almer, 2022-09-06, 1 file, -2/+2)
  Fixes compilation with AVX2-enabled yasm.
  Signed-off-by: James Almer <jamrial@gmail.com>
* x86/tx_float: Fix building for platforms with a symbol prefix (Martin Storsjö, 2022-09-06, 1 file, -5/+5)
  This fixes building for x86 macOS (both i386 and x86_64) and i386 Windows.
  Signed-off-by: Martin Storsjö <martin@martin.st>
* x86/tx_float: implement inverse MDCT AVX2 assembly (Lynne, 2022-09-06, 2 files, -1/+216)
  This commit implements an iMDCT in pure assembly.

  This is capable of processing any mod-8 transforms, rather than just
  power-of-two ones, but since power-of-two transforms are all we have
  assembly for currently, that's what's supported.

  It would really benefit if we could somehow use the C code to decide which
  function to jump into, but exposing function labels from assembly into C
  is anything but easy.

  The post-transform loop could probably be improved.

  This was somewhat annoying to write, as we must support arbitrary strides
  during runtime. There's a fast branch for stride == 4 bytes and a slower
  one which uses vgatherdps.

  Zen 3 benchmarks for stride == 4, old (av_imdct_half) vs new (av_tx):

      128pt:
          2811 decicycles in av_tx (imdct), 16775916 runs, 1300 skips
          3082 decicycles in av_imdct_half, 16776751 runs, 465 skips

      256pt:
          4920 decicycles in av_tx (imdct), 16775820 runs, 1396 skips
          5378 decicycles in av_imdct_half, 16776411 runs, 805 skips

      512pt:
          9668 decicycles in av_tx (imdct), 16775774 runs, 1442 skips
          10626 decicycles in av_imdct_half, 16775647 runs, 1569 skips

      1024pt:
          19812 decicycles in av_tx (imdct), 16777144 runs, 72 skips
          23036 decicycles in av_imdct_half, 16777167 runs, 49 skips
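  A hedged scalar C sketch of the stride handling described above
  (hypothetical helper; the actual assembly vectorizes the stride == 4 fast
  path and uses vgatherdps otherwise):

      #include <stddef.h>
      #include <string.h>

      static void store_strided(float *dst, ptrdiff_t stride,
                                const float *src, int n)
      {
          if (stride == sizeof(float)) {          /* contiguous fast path */
              memcpy(dst, src, n * sizeof(*src));
          } else {                                /* arbitrary runtime stride */
              for (int i = 0; i < n; i++)
                  *(float *)((char *)dst + i * stride) = src[i];
          }
      }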
* x86/tx_float: add support for calling assembly functions from assembly (Lynne, 2022-09-06, 2 files, -47/+138)
  Needed for the next patch. We get this for the extremely small cost of a
  branch on _ns functions, which wouldn't be used anyway with assembly.
* x86/tx_float: save a branch during coefficient deinterleaving (Lynne, 2022-08-09, 1 file, -4/+1)
  Directly branch into the special 64-point deinterleave subroutine rather
  than going through the general deinterleave.

  64-point transform timings on Zen 3:
      Before: 1974 decicycles in av_tx (fft), 16776864 runs, 352 skips
      After:  1956 decicycles in av_tx (fft), 16775378 runs, 1838 skips
* avutil/x86/float_dsp: Remove obsolete 3dnowext function (Andreas Rheinhardt, 2022-06-22, 2 files, -29/+1)
  x64 always has MMX, MMXEXT, SSE and SSE2, which means that some functions
  for MMX, MMXEXT, SSE and 3dnow are always overridden by other functions
  (unless one e.g. explicitly disables SSE2). So, given that the only
  systems which benefit from ff_vector_fmul_window_3dnowext are truly
  ancient 32bit AMD x86s, it is removed.
  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
* avutil/x86/pixelutils: Remove obsolete MMX(EXT) functions (Andreas Rheinhardt, 2022-06-22, 2 files, -67/+0)
  x64 always has MMX, MMXEXT, SSE and SSE2, which means that some functions
  for MMX, MMXEXT, SSE and 3dnow are always overridden by other functions
  (unless one e.g. explicitly disables SSE2). So, given that the only
  systems which benefit from the 8x8 MMX (overridden by MMXEXT) or the
  16x16 MMXEXT (overridden by SSE2) functions are truly ancient 32bit x86s,
  they are removed.
  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
* x86/tx_float: replace fft_sr_avx with fft_sr_fma3 (Lynne, 2022-05-21, 2 files, -9/+9)
  When the SLOW_GATHER flag was added to the AVX2 version, this left the
  FMA3 features unused on Zen CPUs. As FMA3 adds 6-7% across all platforms
  that support it, in the interest of saving space, this commit removes the
  AVX version and replaces it with an FMA3 version.

  The only CPUs affected are Sandy Bridge and Bulldozer, which have AVX
  support but no FMA3 support.

  In the future, if there's a demand for it, a version of the function
  duplicated for AVX can be added.
* x86/tx_float: improve temporary register allocation for loads (Lynne, 2022-05-21, 1 file, -24/+24)
  On Zen 3:
      Before: 1484285 decicycles in av_tx (fft), 131072 runs, 0 skips
      After:  1415243 decicycles in av_tx (fft), 131072 runs, 0 skips
* x86/tx_float: add AV_CPU_FLAG_AVXSLOW/SLOW_GATHER flags where appropriate (Lynne, 2022-05-21, 1 file, -14/+21)
* Revert "x86/tx_float: remove vgatherdpd usage"Lynne2022-05-212-31/+43
| | | | | | | | This reverts commit 82a68a8771ca39564f6a74e0f875d6852e7a0c2a. Smarter slow ISA penalties makes gathers still useful. The intention is to use gathers with the final stage of non-ptwo iMDCTs, where they give benefit.
* x86/tx_float: remove vgatherdpd usage (Lynne, 2022-05-20, 2 files, -43/+31)
  Its performance loss ranges from being just as fast as individual loads
  (Skylake), to a few percent slower (Alderlake), to 8% slower (Zen 3), to
  completely disastrous (older/other CPUs). Sadly, gathers never panned out
  fast on x86, even with the benefit of time and implementation experience.

  This also saves a register, as there's no need to fill out an additional
  register mask.

  Zen 3 (16384-point transform):
      Before: 1561050 decicycles in av_tx (fft), 131072 runs, 0 skips
      After:  1449621 decicycles in av_tx (fft), 131072 runs, 0 skips

  Alderlake: 2% slower on big transforms (65536), to 1% (131072), to a few
  percent for smaller sizes.
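  For illustration, the two approaches at the intrinsics level (a sketch,
  not the FFmpeg code; vgatherdpd corresponds to _mm256_i32gather_pd):

      #include <immintrin.h>

      /* Hardware gather: one instruction, but slow on many CPUs. */
      static __m256d load_lut_gather(const double *tab, __m128i idx)
      {
          return _mm256_i32gather_pd(tab, idx, 8);    /* vgatherdpd */
      }

      /* Four individual loads: more instructions, often as fast or faster,
       * and no mask register to maintain. */
      static __m256d load_lut_loads(const double *tab, const int *idx)
      {
          return _mm256_set_pd(tab[idx[3]], tab[idx[2]],
                               tab[idx[1]], tab[idx[0]]);
      }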
* avutil/cpu: add AVX512 Icelake flag (Wu Jianhua, 2022-03-10, 3 files, -28/+34)
  Signed-off-by: Wu Jianhua <jianhua.wu@intel.com>
  Reviewed-by: Henrik Gramner <henrik@gramner.com>
  Signed-off-by: James Almer <jamrial@gmail.com>
* Remove unnecessary libavutil/(avutil|common|internal).h inclusions (Andreas Rheinhardt, 2022-02-24, 1 file, -2/+1)
  Some of these were made possible by moving several common macros to
  libavutil/macros.h. While at it, also improve the other headers a bit.
  Reviewed-by: Martin Storsjö <martin@martin.st>
  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
* avutil/x86/emms: Don't unnecessarily include lavu/cpu.h (Andreas Rheinhardt, 2022-02-21, 1 file, -1/+4)
  Only include it if it is needed, namely if __MMX__ is undefined.

  X86 is currently the only arch where lavu/cpu.h is basically automatically
  included (for internal development): #if ARCH_X86 is true, lavu/internal.h
  (which is basically included everywhere) includes lavu/x86/emms.h, which
  can mask missing inclusions of lavu/cpu.h if the developer works on
  x86/x64. This has happened in 8e825ec3ab09d877f12dcf05d76902a8bb9c8b11 and
  also earlier (see 6d2365882f281f9452b31b91edb2e6a2d4f5ff08).

  By including said header only if necessary, ordinary developer machines
  will behave like non-x86 arches, so that missing inclusions of cpu.h won't
  go unnoticed any more.
  Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
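  The guard described above boils down to this pattern (a simplified sketch,
  not the exact header contents):

      #ifndef __MMX__
      /* The compiler does not guarantee MMX, so emms_c() needs a runtime
       * CPU-flags check, which requires lavu/cpu.h. */
      #include "libavutil/cpu.h"
      #endif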
* libavutil: include assembly with full path from source root (Alexander Kanavin, 2022-02-08, 7 files, -7/+7)
  Otherwise nasm writes the full host-specific paths into .o output, which
  breaks binary reproducibility.
  Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
  Signed-off-by: Anton Khirnov <anton@khirnov.net>
* lavu/tx: refactor assembly codelet definition (Lynne, 2022-02-07, 1 file, -93/+47)
  This commit does some refactoring to make defining assembly codelets
  smaller, and fixes compiler redefinition warnings. It also allows for
  other assembly versions to reuse the same boilerplate code as x86.

  Finally, it also adds the out_of_place flag to all assembly codelets.
  This changes nothing, as out-of-place operation was assumed to be
  available anyway, but this makes it more explicit.
* x86/tx_float: avoid redefining macros (Lynne, 2022-02-02, 1 file, -6/+6)
  FFT16_FN was first used for fft8 and then redefined for fft16.
* x86/tx_float: mark AVX2 functions as AVXSLOW (Lynne, 2022-01-29, 1 file, -2/+2)
  Makes Bulldozer prefer the AVX functions rather than the AVX2 ones, which
  are 64% slower there:

      AVX:  117653 decicycles in av_tx (fft), 1048535 runs, 41 skips
      AVX2: 193385 decicycles in av_tx (fft), 1048561 runs, 15 skips

  The only difference between the two is that vgatherdpd is used in the
  AVX2 versions. We don't want to mark them with the new SLOW_GATHER flag,
  however, since gathers are still faster on Haswell/Zen 2/3 than plain
  loads.
* x86/tx_float: add missing FF_TX_OUT_OF_PLACE flag to functions (Lynne, 2022-01-27, 1 file, -2/+2)
  This caused dedicated transforms for smaller lengths not to be picked up.