path: root/libavformat/oggparseopus.c
author    Christophe Gisquet <christophe.gisquet@gmail.com>  2015-10-12 19:37:49 +0200
committer Michael Niedermayer <michael@niedermayer.cc>       2015-10-13 15:34:32 +0200
commit    7ece8b50b19e140ace13eda6f1a9f45f868c2528 (patch)
tree      ee4e986a0913996db69ad2054cc03f464cd717d8 /libavformat/oggparseopus.c
parent    4369b9dc7b2b0da594223ce46615ba8e2b4cead6 (diff)
download  ffmpeg-7ece8b50b19e140ace13eda6f1a9f45f868c2528.tar.gz
x86: simple_idct: 12bits versions
On 12 frames of a 444p 12 bits DNxHR sequence, _put function:
C:   78902 decicycles in idct, 262071 runs, 73 skips
avx: 32478 decicycles in idct, 262045 runs, 99 skips

Difference between the 2:
stddev: 0.39  PSNR: 104.47  MAXDIFF: 2

This is unavoidable and due to the scale factors used in the x86 version,
which cannot match the C ones. In addition, the trick of adding an initial
bias to the input of a pass can overflow, as the input coefficients are
already 15 bits, which is the maximum this function can handle.

Overall, however, the omse on 12 bits samples goes from 0.16916 to 0.16883.
Reducing rowshift by 1 improves it to 0.0908, but causes overflows.

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
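As a side note on the overflow remark above, the sketch below illustrates (it is not the FFmpeg code) why folding a rounding bias into a 16-bit intermediate can wrap once the dequantized coefficients already occupy 15 bits; the names ROW_SHIFT, ROW_BIAS and the sample values are hypothetical, chosen only to show the effect of widening before adding the bias:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical constants for illustration; the real shift/bias values
     * live in the simple_idct templates and differ per bit depth. */
    #define ROW_SHIFT 11
    #define ROW_BIAS  (1 << (ROW_SHIFT - 1))

    int main(void)
    {
        int16_t coeff = 32700;   /* near the 15-bit coefficient limit */

        /* Keeping the intermediate in 16 bits, as a packed-word SIMD pass
         * would: on two's-complement targets adding the bias wraps the lane. */
        int16_t narrow = (int16_t)(coeff + ROW_BIAS);

        /* Widening to 32 bits before adding the bias preserves the value. */
        int32_t wide = (int32_t)coeff + ROW_BIAS;

        printf("16-bit intermediate: %d (wrapped)\n", narrow);
        printf("32-bit intermediate: %d\n", (int)wide);
        return 0;
    }

The C reference keeps wider intermediates, which is one reason its scale factors cannot be reproduced exactly in the 16-bit x86 path.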
Diffstat (limited to 'libavformat/oggparseopus.c')
0 files changed, 0 insertions, 0 deletions