author | wm4 <nfxjfg@googlemail.com> | 2016-10-01 17:22:15 +0200
---|---|---
committer | wm4 <nfxjfg@googlemail.com> | 2016-10-01 17:22:22 +0200
commit | 8f6f2322285fc14f8f16377db50355864019a757 (patch) |
tree | e99e58fe15f1aecfc667e7d3694573a8dc00ad77 /libavcodec/vda.h |
parent | b2fea2fdee464edd736fc903ec3a4dc1e3a06e56 (diff) |
download | ffmpeg-8f6f2322285fc14f8f16377db50355864019a757.tar.gz |

ffmpeg: use new decode API

This is a bit messy, mainly due to timestamp handling.

decode_video() relied on the fact that it could set dts on a flush/drain
packet. This is not possible with the new API, and won't be. (I think
doing this was very questionable with the old API. Flush packets should
not contain any information; they just cause a FIFO to be emptied.) This
is replaced with checking the best_effort_timestamp for AV_NOPTS_VALUE,
and using the suggested DTS in the drain case.
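
For illustration, a minimal sketch of that pattern with the new
avcodec_send_packet()/avcodec_receive_frame() API. This is not the actual
ffmpeg.c code: decode_one() and suggested_dts are made-up names, and the
full send/receive loop (AVERROR(EAGAIN) handling, multiple frames per
packet) is elided.

```c
#include <libavcodec/avcodec.h>

/* Hypothetical helper, not ffmpeg.c: decode one frame with the new API,
 * falling back to a caller-suggested DTS when the decoder reports no
 * usable timestamp (the drain case described above). */
static int decode_one(AVCodecContext *avctx, AVFrame *frame,
                      const AVPacket *pkt, int64_t suggested_dts)
{
    /* pkt == NULL enters drain mode; a drain "packet" can no longer
     * carry a DTS, hence the fallback below. */
    int ret = avcodec_send_packet(avctx, pkt);
    if (ret < 0 && ret != AVERROR_EOF)
        return ret;

    ret = avcodec_receive_frame(avctx, frame);
    if (ret < 0)
        return ret; /* includes AVERROR(EAGAIN) and AVERROR_EOF */

    if (frame->best_effort_timestamp == AV_NOPTS_VALUE)
        frame->best_effort_timestamp = suggested_dts;
    return 0;
}
```
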
The modified tests (fate-cavs and others) still fail due to dropping
the last frame. This happens because the timestamp of the last frame
goes backwards (ffprobe -show_frames shows the same thing). I suspect
that this "worked" due to the best effort timestamp logic picking the
DTS over the decreasing PTS. Since this logic is in libavcodec (where
it probably shouldn't be), this can't be easily fixed. The timestamps
of the cavs samples are weird anyway, so I chose not to fix it.
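
The heuristic in question lived in guess_correct_pts() in libavcodec at
the time. A simplified paraphrase (not the real code; names shortened,
edge cases dropped) of why a decreasing PTS makes the DTS win:

```c
#include <stdint.h>
#include <libavutil/avutil.h> /* AV_NOPTS_VALUE */

/* Each time PTS or DTS fails to increase, count it as "faulty"; return
 * the PTS only while it has been faulty no more often than the DTS.
 * Counters start at 0, last_* at INT64_MIN. */
typedef struct {
    int64_t last_pts, last_dts;
    int num_faulty_pts, num_faulty_dts;
} PtsCorrection;

static int64_t best_effort_ts(PtsCorrection *c, int64_t pts, int64_t dts)
{
    if (dts != AV_NOPTS_VALUE) {
        c->num_faulty_dts += dts <= c->last_dts;
        c->last_dts = dts;
    }
    if (pts != AV_NOPTS_VALUE) {
        c->num_faulty_pts += pts <= c->last_pts;
        c->last_pts = pts;
    }
    if ((c->num_faulty_pts <= c->num_faulty_dts || dts == AV_NOPTS_VALUE) &&
        pts != AV_NOPTS_VALUE)
        return pts;
    return dts; /* a PTS that went backwards tips the choice to DTS */
}
```
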
Another strange thing is the timestamp handling in the video path of
process_input_packet() (after the decode_video() call). It looks like
the code that increases next_dts and next_pts should run every time a
frame is decoded, but it is needed even if output is skipped.
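
As a sketch of the bookkeeping being discussed (types and names only
loosely mirror ffmpeg.c's per-stream state, this is not the real
InputStream): after each decoded video frame, the predicted timestamps
advance by one frame duration, and the point above is that this must
happen even when the frame is not output.

```c
#include <libavutil/avutil.h>       /* AV_TIME_BASE_Q */
#include <libavutil/mathematics.h>  /* av_rescale_q */

/* Illustrative stand-in for ffmpeg.c's per-stream fields: predicted
 * timestamps of the next frame, in AV_TIME_BASE units. */
typedef struct {
    int64_t next_dts;
    int64_t next_pts;
} StreamClock;

/* Advance both predictions by one frame duration (given in the
 * stream's time base), regardless of whether the frame is output. */
static void advance_clock(StreamClock *c, int64_t duration,
                          AVRational time_base)
{
    int64_t d = av_rescale_q(duration, time_base, AV_TIME_BASE_Q);
    c->next_dts += d;
    c->next_pts += d;
}
```
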
Diffstat (limited to 'libavcodec/vda.h')
0 files changed, 0 insertions, 0 deletions