author     Ben Avison <bavison@riscosopen.org>    2022-03-31 18:23:43 +0100
committer  Martin Storsjö <martin@martin.st>      2022-04-01 10:03:33 +0300
commit     2698bfdc93d456d304a38b570052e1a238d64c54 (patch)
tree       d24db8bfa9f0ba5974a122e5596ea82f0df13491
parent     20cb43ea8ba0471dcba442b8de8fa17ff41f6281 (diff)
download   ffmpeg-2698bfdc93d456d304a38b570052e1a238d64c54.tar.gz
checkasm: Add vc1dsp inverse transform tests
This test deliberately doesn't exercise the full range of inputs described in the committee draft VC-1 standard. It says:

    input coefficients in frequency domain, D, satisfy   -2048 <= D < 2047
    intermediate coefficients, E, satisfy                 -4096 <= E < 4095
    fully inverse-transformed coefficients, R, satisfy     -512 <= R < 511

For one thing, the inequalities look odd. Did they mean them to go the other way round? That would make more sense, because the equations generally both add and subtract coefficients multiplied by constants, including powers of 2. Requiring the most-negative values to be valid extends the number of bits needed to represent the intermediate values just for the sake of that one case!

For another thing, the extreme values don't appear to occur in real streams - both in my experience and as supported by the following comment in the AArch32 decoder:

    tNhalf is half of the value of tN (as described in vc1_inv_trans_8x8_c).
    This is done because sometimes files have input that causes tN + tM to
    overflow. To avoid this overflow, we compute tNhalf, then compute
    tNhalf + tM (which doesn't overflow), and then we use vhadd to compute
    (tNhalf + (tNhalf + tM)) >> 1, which does not overflow because it is
    one instruction.

My AArch64 decoder goes further than this. It calculates tNhalf and tM, then does an SRA (essentially a fused halve and add) to compute (tN + tM) >> 1 without ever having to hold (tNhalf + tM) in a 16-bit element, so that sum can never overflow. It only encounters difficulties if either tNhalf or tM overflows in isolation.

I haven't had sight of the final standard, so it's possible that these issues were dealt with during finalisation, which could explain why extreme inputs don't show up in real streams. Or a preponderance of decoders that only support 16-bit intermediate values in their inverse transforms might have caused encoders to steer clear of such cases.

I have effectively followed this approach in the test, and limited the scale of the coefficients sufficiently that both the existing AArch32 decoder and my new AArch64 decoder pass.

Signed-off-by: Ben Avison <bavison@riscosopen.org>
Signed-off-by: Martin Storsjö <martin@martin.st>
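As a rough illustration of the halving-add trick quoted above, here is a minimal C sketch; the int16_t modelling of the 16-bit SIMD lanes and the function names are assumptions made for illustration, not the actual NEON code:

    #include <stdint.h>

    /* Emulate NEON's vhadd.s16: (a + b) >> 1 with the sum taken at wider
     * precision, so the addition itself cannot overflow 16 bits. */
    static int16_t vhadd_s16(int16_t a, int16_t b)
    {
        return (int16_t)(((int32_t)a + b) >> 1);
    }

    /* Given tNhalf (= tN >> 1) and tM, compute roughly (tN + tM) >> 1 while
     * only ever holding 16-bit values: tNhalf, tM and tNhalf + tM are each
     * assumed to stay in range even when tN + tM itself would not.
     * If tN is odd, its least significant bit is lost. */
    static int16_t halved_sum(int16_t tNhalf, int16_t tM)
    {
        int16_t partial = (int16_t)(tNhalf + tM); /* assumed not to overflow */
        return vhadd_s16(tNhalf, partial);        /* (tNhalf + (tNhalf + tM)) >> 1 */
    }

The AArch64 approach described above presumably lets SRA accumulate tM >> 1 into tNhalf directly, so the 16-bit intermediate tNhalf + tM never needs to be materialised at all.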
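In the same spirit, a hypothetical fragment showing what limiting the scale of the random coefficients can look like in a checkasm-style test; randomize_coeffs() and the exact range are invented for illustration and are not taken from the real tests/checkasm/vc1dsp.c:

    #include <stdint.h>
    #include "checkasm.h"   /* rnd(): checkasm's random-number helper */

    /* Fill an 8x8 coefficient block with values kept well inside the
     * -2048 <= D < 2047 range quoted above, so that 16-bit intermediates in
     * both implementations under test stay in range. */
    static void randomize_coeffs(int16_t coeffs[64])
    {
        for (int i = 0; i < 64; i++)
            coeffs[i] = (int)(rnd() % 1024) - 512;
    }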