*************** FFMPEG soft VCR documentation *****************

0) Introduction
---------------

  FFmpeg is a very fast video and audio encoder. It can take its input
  from files or grab from a live audio/video source.

  The command line interface is designed to be intuitive, in the sense
  that ffmpeg tries to figure out all the parameters when
  possible. You usually only have to give the target bitrate you want.

  FFmpeg can also convert from any sample rate to any other, and
  resize video on the fly with a high quality polyphase filter.
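
  For example, a simple conversion that only specifies the target video
  bitrate could look like this (the file names are placeholders):

  ffmpeg -i /tmp/input.avi -b 300 /tmp/output.mpg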

1) Video and Audio grabbing
---------------------------

* FFmpeg can use a video4linux compatible video source and any Open
  Sound System audio source:

  ffmpeg /tmp/out.mpg 

  Note that you must activate the right video source and channel
  before launching ffmpeg. You can use any TV viewer, such as xawtv by
  Gerd Knorr, which I find very good. You must also set the audio
  recording levels correctly with a standard mixer.
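
  As a sketch (the output name and the values are placeholders), you
  can also set the frame size and limit the recording time while
  grabbing:

  ffmpeg -s 320x240 -t 60 /tmp/out.mpg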

2) Video and Audio file format conversion
-----------------------------------------

* ffmpeg can use any supported file format and protocol as input: 

Examples:

* You can input from YUV files:

  ffmpeg -i /tmp/test%d.Y /tmp/out.mpg 

  It will use the files: 
       /tmp/test0.Y, /tmp/test0.U, /tmp/test0.V,
       /tmp/test1.Y, /tmp/test1.U, /tmp/test1.V, etc...

  The Y files use twice the resolution of the U and V files. They are
  raw files, without header. They can be generated by all decent video
  decoders. You must specify the size of the image with the '-s' option
  if ffmpeg cannot guess it.

* You can input from a RAW YUV420P file:

  ffmpeg -i /tmp/test.yuv /tmp/out.avi

  A raw YUV420P file contains raw planar YUV data: for each frame, the
  Y plane comes first, followed by the U and V planes, which have half
  the vertical and horizontal resolution.

* You can output to a RAW YUV420P file:

  ffmpeg -i mydivx.avi hugefile.yuv

* You can set several input files and output files:

  ffmpeg -i /tmp/a.wav -s 640x480 -i /tmp/a.yuv /tmp/a.mpg

  Convert the audio file a.wav and the raw YUV video file a.yuv
  to the MPEG file a.mpg.

* You can also do audio and video conversions at the same time:

  ffmpeg -i /tmp/a.wav -ar 22050 /tmp/a.mp2

  Convert the sample rate of a.wav to 22050 Hz and encode it to MPEG audio.

* You can encode to several formats at the same time and define a
  mapping from input stream to output streams:

  ffmpeg -i /tmp/a.wav -ab 64 /tmp/a.mp2 -ab 128 /tmp/b.mp2 -map 0:0 -map 0:0

  Convert a.wav to a.mp2 at 64 kbit/s and to b.mp2 at 128 kbit/s.
  '-map file:index' specifies which input stream is used for each
  output stream, in the order of the definition of output streams.

* You can transcode decrypted VOBs:

  ffmpeg -i snatch_1.vob -f avi -vcodec mpeg4 -b 800 -g 300 -bf 2 \
         -acodec mp3 -ab 128 snatch.avi

  This is a typical DVD ripping example; input from a VOB file, output
  to an AVI file with MPEG-4 video and MP3 audio. Note that in this
  command we use B frames so the MPEG-4 stream is DivX5 compatible, and
  the GOP size is 300, which means one intra frame every 10 seconds for
  29.97 fps input video. Also, the audio stream is MP3 encoded, so you
  need LAME support, which is enabled using '--enable-mp3lame' when
  configuring. The mapping is particularly useful for DVD transcoding
  to get the desired audio language.

  NOTE: to see the supported input formats, use 'ffmpeg -formats'.

3) Invocation
-------------

* The generic syntax is:

  ffmpeg [[options][-i input_file]]... {[options] output_file}...

  If no input file is given, audio/video grabbing is done.

  As a general rule, options are applied to the next specified
  file. For example, if you give the '-b 64' option, it sets the video
  bitrate of the next file. The format option ('-f') may be needed for
  raw input files.

  By default, ffmpeg tries to convert as losslessly as possible: it
  uses the same audio and video parameters for the outputs as the ones
  specified for the inputs.
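
  For example, since options apply to the next specified file, the
  following sketch (placeholder file names, arbitrary bitrates) would
  produce two outputs at different video bitrates from a single input:

  ffmpeg -i /tmp/a.avi -b 1000 /tmp/high.mpg -b 200 /tmp/low.mpg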

* Main options are:

-L                  show license
-h                  show help
-formats            show available formats, codecs, protocols, ...
-f fmt              force format
-i filename         input file name
-y                  overwrite output files
-t duration         set the recording time
-title string       set the title
-author string      set the author
-copyright string   set the copyright
-comment string     set the comment
-b bitrate          set video bitrate (in kbit/s)
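
  As an illustration only (all names and values here are made up),
  several of these options can be combined, e.g. to set metadata, limit
  the duration and overwrite an existing output file:

  ffmpeg -y -i /tmp/in.avi -t 30 -title "Test" -author "Me" /tmp/out.mpg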

* Video Options are:

-s size             set frame size                       [160x128]
-r fps              set frame rate                       [25]
-b bitrate          set the video bitrate in kbit/s      [200]
-vn                 disable video recording              [no]
-bt tolerance       set video bitrate tolerance (in kbit/s)
-sameq              use same video quality as source (implies VBR)
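
  A possible combination (placeholder names, arbitrary values) to
  resize, change the frame rate and set the video bitrate:

  ffmpeg -i /tmp/in.avi -s 352x288 -r 15 -b 500 /tmp/out.mpg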

* Audio Options are:

-ar freq     set the audio sampling freq          [44100]
-ab bitrate  set the audio bitrate in kbit/s      [64]
-ac channels set the number of audio channels     [1]
-an          disable audio recording              [no]
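
  For instance (file names are placeholders), to encode stereo MPEG
  audio at 128 kbit/s:

  ffmpeg -i /tmp/in.wav -ab 128 -ac 2 /tmp/out.mp2

  To drop the audio entirely, add '-an' before the output file name.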

* Advanced options are:

-map file:stream    set input stream mapping
-g gop_size         set the group of picture size
-intra              use only intra frames
-qscale q           use fixed video quantiser scale (VBR)
-qmin q             min video quantiser scale (VBR)
-qmax q             max video quantiser scale (VBR)
-qdiff q            max difference between the quantiser scale (VBR)
-qblur blur         video quantiser scale blur (VBR)
-qcomp compression  video quantiser scale compression (VBR)
-vd device          set video device
-vcodec codec       force video codec
-me method          set motion estimation method
-bf frames          use 'frames' B frames (only MPEG-4)
-hq                 activate high quality settings
-4mv                use four motion vectors per macroblock (only MPEG-4)
-ad device          set audio device
-acodec codec       force audio codec
-deinterlace        deinterlace pictures
-benchmark          add timings for benchmarking
-hex                dump each input packet
-psnr               calculate PSNR of compressed frames
-vstats             dump video coding statistics to file
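
  As a rough example (file names are placeholders), some of these
  options can be combined, e.g. to deinterlace the input and force the
  MPEG-4 video codec:

  ffmpeg -i /tmp/interlaced.avi -deinterlace -vcodec mpeg4 /tmp/out.avi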

The output file can be "-" to output to a pipe (standard output). This
is only possible with the mpeg1 and h263 formats.
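
For example (a sketch only; the exact format names available in your
build can be checked with 'ffmpeg -formats'), video can be written to
standard output and redirected or piped to another program:

  ffmpeg -i /tmp/in.avi -an -f h263 - > /tmp/out.263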

4) Protocols
------------

 ffmpeg also handles many protocols specified with a URL syntax.

 Use 'ffmpeg -formats' to get a list of the supported protocols.

 The protocol 'http:' is currently used only to communicate with
 ffserver (see the ffserver documentation). When ffmpeg becomes a
 video player, it will also be used for streaming :-)

5) File formats and codecs
--------------------------

 Use 'ffmpeg -formats' to get a list of the supported output
 formats. Only some formats are handled as input, but this will
 improve in future versions.

6) Tips
-------

- For very low bit rate streaming applications, use a low frame rate
  and a small GOP size. This is especially true for RealVideo, where
  the Linux player does not seem to be very fast, so it can miss
  frames. An example is:

  ffmpeg -g 3 -r 3 -t 10 -b 50 -s qcif -f rv10 /tmp/b.rm

- The parameter 'q' which is displayed while encoding is the current
  quantizer. The value 1 indicates that a very good quality could be
  achieved. The value 31 indicates the worst quality. If q=31 appears
  too often, it means that the encoder cannot compress enough to meet
  your bit rate. You must either increase the bit rate, decrease the
  frame rate or decrease the frame size.

- If your computer is not fast enough, you can speed up the
  compression at the expense of the compression ratio. You can use
  '-me zero' to speed up motion estimation, and '-intra' to disable
  motion estimation completely (you have only I frames, which means it
  is about as good as JPEG compression).
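  For instance (file names are placeholders and the bitrate is
  arbitrary):

  ffmpeg -i /tmp/in.avi -me zero -b 500 /tmp/out.mpg
  ffmpeg -i /tmp/in.avi -intra -b 500 /tmp/out.mpg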

- To get very low audio bitrates, reduce the sampling frequency (down
  to 22050 Hz for MPEG audio, 22050 or 11025 Hz for AC-3).

- To get constant quality (but a variable bitrate), use the option
  '-qscale n', where 'n' is a value between 1 (excellent quality) and
  31 (worst quality).
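  For example, with placeholder file names:

  ffmpeg -i /tmp/in.avi -qscale 3 /tmp/out.mpg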

- When converting video files, you can use the '-sameq' option, which
  uses the same quality factor in the encoder as in the decoder. This
  allows almost lossless encoding.
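  A minimal sketch, again with placeholder file names:

  ffmpeg -i /tmp/in.avi -sameq /tmp/out.avi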