| Commit message | Author | Age | Files | Lines |
... | |
|
|
|
|
|
| |
Set InputStream.decoding_needed/discard/etc. only from the
ist_{filter,output}_add() functions. Reduces the knowledge of
InputStream internals in muxing/filtering code.
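Schematically, the intended pattern is for every consumer of an input stream to go through one of these entry points (a sketch; only the decoding_needed/discard fields come from the commit text, the rest is illustrative):

    /* Illustrative sketch, not the actual fftools code: the two entry
     * points below become the only places that modify these fields. */
    void ist_filter_add(InputStream *ist, InputFilter *ifilter)
    {
        ist->discard         = 0;
        ist->decoding_needed = 1;  /* feeding a filtergraph requires decoding */
        /* ... connect ifilter to ist ... */
    }

    void ist_output_add(InputStream *ist, OutputStream *ost)
    {
        ist->discard = 0;          /* streamcopy consumes packets, no decoding */
        /* ... connect ost to ist ... */
    }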
|
|
|
|
|
| |
Makes it easier to see where the input stream is modified from muxer
code.
|
| |
|
|
|
|
|
| |
Everything in it can be done immediately when creating the output
stream; there is no reason to postpone it.
|
|
|
|
|
|
|
|
|
|
|
|
| |
Creating a new output stream of a given type is currently done by
calling new_<type>_stream(), which all start by calling
new_output_stream() to allocate the stream and do common init, followed
by type-specific init.
Reverse this structure - the caller now calls the common function
ost_add() with the type as a parameter, which then calls the
type-specific function internally. This will allow adding common code
that runs after type-specific code in future commits.
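Schematically, the call structure is inverted as follows (names and signatures are illustrative, not the exact fftools code):

    /* Illustrative sketch: one common entry point dispatching to
     * type-specific init, with room for common code after it. */
    OutputStream *ost_add(Muxer *mux, const OptionsContext *o,
                          enum AVMediaType type)
    {
        OutputStream *ost = new_output_stream(mux, o, type); /* common init  */

        switch (type) {                                      /* per-type init */
        case AVMEDIA_TYPE_VIDEO:    new_stream_video   (mux, o, ost); break;
        case AVMEDIA_TYPE_AUDIO:    new_stream_audio   (mux, o, ost); break;
        case AVMEDIA_TYPE_SUBTITLE: new_stream_subtitle(mux, o, ost); break;
        default:                                                      break;
        }

        /* common code that must run after type-specific init goes here */
        return ost;
    }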
|
|
|
|
|
|
|
| |
Reduces the diff in the following commit.
Temporarily add a forward declaration for new_output_stream(); it will
be removed in the next commit.
|
|
|
|
| |
Fixes #10319 and #10309.
|
|
|
|
|
|
|
| |
Currently, the output streams to which an input stream is sent directly
(i.e. not through lavfi) are determined by iterating over ALL the output
streams and skipping the irrelevant ones. This is awkward and
inefficient.
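The pattern being replaced looks roughly like this (a sketch; the helper check is hypothetical):

    /* Illustrative only: scan every output stream of every output file
     * and skip the ones not fed directly by this input stream. */
    for (int i = 0; i < nb_output_files; i++)
        for (int j = 0; j < output_files[i]->nb_streams; j++) {
            OutputStream *ost = output_files[i]->streams[j];
            if (!ost_is_fed_directly_by(ost, ist)) /* hypothetical helper */
                continue;
            /* ... forward data to ost ... */
        }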
|
| |
|
|
|
|
|
| |
This value is associated with the filtergraph output rather than the
output stream, so this is a more appropriate place for it.
|
|
|
|
|
| |
It is audio/video encoding-only and does not need to be visible outside
of ffmpeg_enc.c.
|
|
|
|
|
| |
Start by moving OutputStream.last_frame to it. In the future it will
hold other encoder-internal state.
|
|
|
|
|
|
|
|
|
|
|
| |
The code currently uses lavfi for this, which creates a sort of
configuration dependency loop: the encoder should ideally be
initialized with information from the first audio frame, but to get this
frame one needs to first open the encoder to know the frame size. This
necessitates an awkward workaround, which causes audio handling to be
different from video.
With this change, audio encoder initialization is congruent with video.
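The underlying constraint is that most audio encoders consume fixed-size frames whose size is only known once the encoder is open. A minimal sketch of rechunking audio outside lavfi with AVAudioFifo (illustrative only; the actual fftools implementation differs):

    #include <libavcodec/avcodec.h>
    #include <libavutil/audio_fifo.h>
    #include <libavutil/frame.h>

    /* Sketch: enc is already open, so enc->frame_size is known; "out" is
     * assumed to have format/ch_layout/sample_rate set to match enc;
     * timestamp handling is omitted. */
    static int encode_audio_rechunk(AVCodecContext *enc, AVAudioFifo *fifo,
                                    const AVFrame *in, AVFrame *out)
    {
        int ret = av_audio_fifo_write(fifo, (void **)in->extended_data,
                                      in->nb_samples);
        if (ret < 0)
            return ret;

        /* drain complete encoder-sized frames from the FIFO */
        while (av_audio_fifo_size(fifo) >= enc->frame_size) {
            out->nb_samples = enc->frame_size;
            if ((ret = av_frame_get_buffer(out, 0)) < 0 ||
                (ret = av_audio_fifo_read(fifo, (void **)out->extended_data,
                                          enc->frame_size)) < 0)
                return ret;
            ret = avcodec_send_frame(enc, out);
            av_frame_unref(out);
            if (ret < 0)
                return ret;
        }
        return 0;
    }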
|
|
|
|
| |
It is equivalent to !InputStream.discard.
|
|
|
|
|
| |
Fixes regression from 3c7dd5ed37da6d2de06c4850de5a319ca9cdd47f.
Fixes ticket #10157.
|
|
|
|
| |
Fixes #10243
|
|
|
|
| |
Signed-off-by: James Almer <jamrial@gmail.com>
|
|
|
|
|
|
| |
Analogous to -enc_stats*, but happens right before muxing. Useful
because bitstream filters and the sync queue can modify packets after
encoding and before muxing. Also has access to the muxing timebase.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Splits the currently handled subtitle at random access point
packets that can be configured to follow a specific output stream.
Currently only subtitle streams which are directly mapped into the
same output in which the heartbeat stream resides are affected.
This way the subtitle, which is known to be shown at this time,
can be split and passed to the muxer before its full duration is
known. This is also a drawback, as it essentially outputs
multiple subtitles from a single input subtitle that continues
over multiple random access points. Thus, this feature should not
be utilized in cases where subtitle output latency does not matter.
Co-authored-by: Andrzej Nadachowski <andrzej.nadachowski@24i.com>
Co-authored-by: Bernard Boulay <bernard.boulay@24i.com>
Signed-off-by: Jan Ekström <jan.ekstrom@24i.com>
|
| |
|
|
|
|
|
| |
Use it for logging. This makes log messages related to this output
stream more consistent.
|
|
|
|
|
| |
Use it for logging. This makes log messages related to this output file
more consistent.
|
|
|
|
|
|
|
| |
Similar to -vstats, but more flexible:
- works for audio as well as video
- frame and/or packet information
- user-specifiable format
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Current code may, depending on the muxer, decide to use VSYNC_VFR tagged
with the specified framerate, without actually performing framerate
conversion. This is clearly wrong and against the documentation, which
states unambiguously that -r should produce CFR output for video
encoding.
FATE test changes:
* nuv-rtjpeg: replace -r with '-enc_time_base -1', which keeps the
original timebase. Output frames are now produced with proper
durations.
* filter-mpdecimate: just drop the -r option; it is unnecessary.
* filter-fps-r: remove; this test makes no sense and actually
produces broken VFR output (with incorrect frame durations).
|
|
|
|
|
| |
It is not needed after the spec is parsed. Also avoids ugly string
comparisons for each video frame.
|
|
|
|
| |
Allows removing the ugly of_get_chapters() wrapper.
|
|
|
|
|
|
| |
There are 8 of them and they are typically used together. Allows
passing just this struct to forced_kf_apply(), which makes it clear that
the rest of the OutputStream is not accessed there.
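Schematically (member names are illustrative, not the actual fftools definitions):

    /* Illustrative sketch: group the forced-keyframe options/state so that
     * forced_kf_apply() only needs this context, not the whole OutputStream. */
    typedef struct KeyframeForceCtx {
        char    *forced_keyframes;  /* the -force_key_frames argument        */
        int64_t *pts;               /* parsed keyframe times ...             */
        int      nb_pts, index;     /* ... and the current position          */
        AVExpr  *pexpr;             /* parsed expression, if one was given   */
        int64_t  ref_pts;
        int      dropped_keyframe;
    } KeyframeForceCtx;

    /* called per video frame, taking only the context it needs */
    int forced_kf_apply(KeyframeForceCtx *kf, AVRational tb, const AVFrame *frame);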
|
|
|
|
|
|
|
|
|
| |
Do it in set_dispositions() rather than during stream creation.
Since at this point all other stream information is known, this allows
setting disposition based on metadata, which implements #10015. This
also avoids an extra allocated string in OutputStream that was unused
after of_open().
|
|
|
|
|
|
|
|
|
| |
Replace it with an array of streams in each InputFile. This is a more
accurate reflection of the actual relationship between InputStream and
InputFile.
Analogous to what was previously done to output streams in
7ef7a22251b852faab9404c85399ba8ac5dfbdc3.
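With the change, walking an input file's streams follows the usual pattern (a sketch, not the exact fftools definitions):

    /* Illustrative: each InputFile owns its streams directly ... */
    typedef struct InputFile {
        AVFormatContext *ctx;
        InputStream    **streams;
        int              nb_streams;
        /* ... */
    } InputFile;

    /* ... so iteration no longer goes through a global list plus an index */
    static void for_each_stream(InputFile *ifile)
    {
        for (int i = 0; i < ifile->nb_streams; i++) {
            InputStream *ist = ifile->streams[i];
            /* ... */
        }
    }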
|
|
|
|
| |
Skip unusable streams early and do not compute any scores for them.
|
|
|
|
|
| |
This is simpler. The indirection via an index exists for historical
reasons that no longer apply.
|
|
|
|
| |
It cannot be true since 1959351aecf. Effectively reverts 6a3833e1411.
|
| |
|
| |
|
| |
|
|
|
|
| |
It no longer needs to be visible outside of the muxing code.
|
|
|
|
|
|
| |
Specifically, the of_add_attachments() call (which can add attachment
streams to the output) and the check whether the output file contains
any streams. They both logically belong in create_streams().
|
| |
|
|
|
|
|
| |
It should be input-only to this code. Will allow making it const in
future commits.
|
|
|
|
| |
It does the same thing as the block right below it.
|
|
|
|
|
|
|
|
|
|
|
| |
The current code will override the *_disable fields (set by -vn/-an
options) when creating output streams for unlabeled complex filtergraph
outputs, in order to disable automatic mapping for the corresponding
media type.
However, this will apply not only to automatic mappings, but to manual
ones as well, which should not happen. Avoid this by adding local
variables that are used only for automatic mappings.
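Roughly, the fix amounts to the following (a sketch; the helper and exact signatures are illustrative):

    /* Illustrative sketch: start from the user's -vn setting ...           */
    int auto_disable_v = o->video_disable;

    /* ... flip only the local copy when an unlabeled complex filtergraph
     * output of that type becomes an output stream ...                     */
    if (fg_has_unlabeled_output(fg, AVMEDIA_TYPE_VIDEO)) /* hypothetical */
        auto_disable_v = 1;

    /* ... so automatic mapping consults the local, while manual -map
     * arguments still see the original o->video_disable value.             */
    if (!auto_disable_v)
        map_auto_video(of, o);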
|
|
|
|
|
| |
Makes it easy to see where all the streams are created. Will also be
useful in the following commit.
|
|
|
|
|
|
| |
Specifically recording_time and stop_time - use local variables instead.
OptionsContext should be input-only to this code. Will allow making it
const in future commits.
|
| |
|
|
|
|
|
| |
Use a local variable instead. This will allow making OptionsContext
const in future commits.
|
|
|
|
|
| |
This code shares variables like OptionsContext.metadata_*_manual, so it
makes sense to group it together.
|
|
|
|
|
| |
libass defines a non-static read_file() symbol, which causes conflicts
with static linking.
|
|
|
|
|
|
|
|
| |
Now that we have proper options for defining display matrix
overrides, this should no longer be required.
fftools does not have its own versioning, so for now the define is
just set to 1; setting it to zero disables the functionality.
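The mechanism is a plain compile-time switch, roughly as below (the macro name is an assumption based on the description, not necessarily the one used):

    /* Illustrative: with no version scheme in fftools, the cutoff is a
     * hand-maintained macro rather than an FF_API_* deprecation guard. */
    #define FFMPEG_ROTATION_METADATA 1  /* set to 0 to drop the old behaviour */

    #if FFMPEG_ROTATION_METADATA
        /* legacy rotation-metadata handling kept while the define is nonzero */
    #endif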
|
|
|
|
| |
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
|
|
|
|
|
| |
They are private to the muxer and do not need to be visible outside of
it.
|