author     James Almer <jamrial@gmail.com>    2022-11-14 02:32:33 -0300
committer  James Almer <jamrial@gmail.com>    2024-07-10 13:25:44 -0300
commit     4a04cca69af807ccf831da977a94350611967c4c (patch)
tree       827ec417bfa53c97317296f2808dfc8d1b410139 /libswresample/aarch64/resample.S
parent     34b4ca8696de64ca756e7aed7bdefa9ff6bb5fac (diff)
x86/intreadwrite: use intrinsics instead of inline asm for AV_ZERO128
When called inside a loop, the inline asm version results in one pxor
unnecessarily emitted per iteration, because the contents of the __asm__() block
are opaque to the compiler's instruction scheduler.
This is not the case with intrinsics, where any half-decent compiler will emit
pxor only once.
This also has the benefit of removing any SSE -> AVX transition penalty that may
occur when the compiler emits VEX-encoded instructions.
Signed-off-by: James Almer <jamrial@gmail.com>
Diffstat (limited to 'libswresample/aarch64/resample.S')
0 files changed, 0 insertions, 0 deletions
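
For illustration only, below is a minimal sketch of the kind of change the commit
message describes: replacing an inline-asm 128-bit zero store with SSE2 intrinsics.
This is not the actual FFmpeg patch; the helper names zero128_asm and zero128_intrin
are hypothetical, and only _mm_setzero_si128()/_mm_store_si128() from <emmintrin.h>
are real intrinsics.

    #include <emmintrin.h>
    #include <stdint.h>

    /* 16-byte memory operand type for the asm output constraint. */
    typedef struct { uint64_t v[2]; } xmm_mem;

    /* Inline-asm variant (hypothetical name): the asm block is opaque to the
     * compiler's scheduler, so the pxor cannot be hoisted out of a loop that
     * calls this repeatedly. */
    static inline void zero128_asm(void *d)
    {
        __asm__("pxor   %%xmm0, %%xmm0 \n\t"
                "movdqa %%xmm0, %0     \n\t"
                : "=m"(*(xmm_mem *)d)
                :
                : "xmm0");
    }

    /* Intrinsic variant (hypothetical name): the zero register is visible to
     * the optimizer as loop-invariant, so it is materialized once, and the
     * compiler is free to emit vpxor/vmovdqa when targeting AVX. */
    static inline void zero128_intrin(void *d)
    {
        _mm_store_si128((__m128i *)d, _mm_setzero_si128());
    }

    int main(void)
    {
        /* 16-byte aligned buffer: movdqa and _mm_store_si128 require alignment. */
        _Alignas(16) uint8_t buf[16 * 8];

        for (int i = 0; i < 8; i++)
            zero128_intrin(buf + 16 * i);   /* pxor emitted once, outside the loop */

        zero128_asm(buf);                   /* asm variant shown for comparison */
        return (int)buf[0];
    }

Comparing the generated assembly of the two loops (e.g. with gcc -O2 -S) should show
the intrinsic version hoisting the register-zeroing instruction out of the loop,
which is the behaviour the commit message attributes to the intrinsics.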