commit    1957095e808b80cbe607347a6f23207d6318a48d
tree      6179fb3cf42b9f9817e55055720c7437a689ee47
parent    e6ff5d0facabb016e51038227a8944e36d24f1af
author    Michael Niedermayer <michael@niedermayer.cc>  2021-01-22 00:39:19 +0100
committer Michael Niedermayer <michael@niedermayer.cc>  2021-03-08 22:08:49 +0100
avformat/swfdec: Check outlen before allocation
Fixes: Timeout (too long -> 241ms)
Fixes: 29083/clusterfuzz-testcase-minimized-ffmpeg_dem_SWF_fuzzer-6273684478230528
The source of the magic number is a very quick simulation of the best-case
compression ratio of "compress"; the code below is not nicely written, as I
did not expect that I or anyone else would ever see it again.
I would have preferred some nicer expression, of course, but this is what
the ratio seems to be asymptotically. For smaller amounts of data a tighter
bound is possible, but I saw no nice way to take that into account, and it
also seems overkill to do this more fine-grained for just this case.
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int64_t bits = 0;   /* output bits emitted so far */
    int bank    = 256;  /* next free dictionary code */
    int bitbank = 8;    /* current code width in bits */

    /* simulate roughly 4 GiB of input; the -100000 keeps i from wrapping */
    for (unsigned i = 0; i < 1024 * 1024 * 1024 * 4U - 100000;) {
        int word_size = bank - 255; /* best case: the emitted code stands for this many input bytes */
        i    += word_size;          /* input bytes consumed */
        bits += bitbank;            /* output bits spent on this code */
        if (!(bank & (bank - 1)))   /* dictionary crossed a power of two: codes get one bit wider */
            bitbank++;
        bank++;
        if (bitbank > 16) {         /* 16 bit code limit reached: print ratio and reset the dictionary */
            printf("BEST %f\n", 8.0 * i / bits); /* accumulated best-case ratio (input bits / output bits) */
            bank    = 256;
            bitbank = 8;
        }
    }
    return 0;
}
The above assumes I remembered correctly how the algorithm works, but the
value was close to what actual compression of zeros gave.
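For illustration only, here is a minimal sketch of how such a best-case ratio
can be used to check outlen before allocation. This is not the actual swfdec
patch; the function and parameter names (alloc_checked, inlen, outlen,
max_ratio) are hypothetical, and max_ratio stands in for the magic number
discussed above:

#include <stdint.h>
#include <stdlib.h>

/* Hypothetical sketch, not the actual swfdec change: refuse to allocate an
 * output buffer whose size exceeds the best-case expansion of the
 * compressed input. */
void *alloc_checked(uint64_t outlen, uint64_t inlen, uint64_t max_ratio)
{
    /* dividing instead of multiplying avoids overflow of inlen * max_ratio */
    if (!inlen || outlen / inlen > max_ratio)
        return NULL; /* outlen cannot come from a valid compressed stream */
    return malloc(outlen);
}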
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>