- 17 Oct, 2014 12 commits
-
-
Omer Osman authored
For streams which contain DRC metadata, the FDK decoder is able to control rendering of the decoded output. The rendering parameters are detailed in fdk_aac_dec_options[]. The default behavior is left up to the decoder. Signed-off-by: Martin Storsjö <martin@martin.st>
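A minimal sketch of setting one of these decoder-private options from application code, assuming the libfdk_aac decoder is built in; the option name "drc_boost" is only illustrative here, the authoritative list is fdk_aac_dec_options[]:

    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    /* Sketch: open the libfdk_aac decoder with a DRC-related private option.
     * "drc_boost" is an illustrative name; see fdk_aac_dec_options[] for the
     * actual option names and value ranges. */
    static AVCodecContext *open_fdk_aac_with_drc(void)
    {
        AVCodec *dec = avcodec_find_decoder_by_name("libfdk_aac");
        AVCodecContext *ctx;

        if (!dec)
            return NULL;
        ctx = avcodec_alloc_context3(dec);
        if (!ctx)
            return NULL;

        /* Codec-private options live on priv_data. */
        av_opt_set_int(ctx->priv_data, "drc_boost", 100, 0);

        if (avcodec_open2(ctx, dec, NULL) < 0) {
            avcodec_free_context(&ctx);
            return NULL;
        }
        return ctx;
    }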
-
Omer Osman authored
The FDK decoder can produce a mono or stereo downmix from multichannel streams, and such streams may contain metadata that controls the downmix process. The decoder requires an ancillary buffer in order to correctly apply the downmix when that metadata is present. Since the decoder has no API for signaling the presence of such metadata in the stream, the ancillary buffer is always allocated whenever a downmix is requested. When downmixing multichannel streams, the decoder requires the output buffer passed to aacDecoder_DecodeFrame to be sized for the actual number of channels contained in the stream. For example, for a 5.1ch to stereo downmix, the output buffer must be allocated for 6 channels even though the output is only two channels. Because of this requirement, the output buffer is allocated at the maximum output buffer size whenever a downmix is requested (and also during decoder init). When a downmix is requested, the buffer used for output during init is then used for the entire time the decoder is open; otherwise, the initial decoder output buffer is freed and the decoder decodes straight into the output AVFrame. Signed-off-by: Martin Storsjö <martin@martin.st>
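A rough sketch of requesting such a downmix from the application side, using the request_channel_layout field of that era's lavc API:

    #include <libavcodec/avcodec.h>
    #include <libavutil/channel_layout.h>

    /* Sketch: ask the libfdk_aac decoder for a stereo downmix of a
     * multichannel stream. The decoder then allocates its ancillary buffer
     * and a full-size output buffer internally, as described above. */
    static AVCodecContext *open_fdk_aac_stereo_downmix(void)
    {
        AVCodec *dec = avcodec_find_decoder_by_name("libfdk_aac");
        AVCodecContext *ctx;

        if (!dec)
            return NULL;
        ctx = avcodec_alloc_context3(dec);
        if (!ctx)
            return NULL;

        /* Hint that we want at most stereo output before opening. */
        ctx->request_channel_layout = AV_CH_LAYOUT_STEREO;

        if (avcodec_open2(ctx, dec, NULL) < 0) {
            avcodec_free_context(&ctx);
            return NULL;
        }
        return ctx;
    }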
-
Uwe L. Korn authored
For (non-live) streams with no metadata, the duration can be retrieved by calling the RTMP function getStreamLength with the playpath. The server will return a positive duration upon the request if the duration is known; otherwise either no response or a duration of 0 will be returned. Signed-off-by: Martin Storsjö <martin@martin.st>
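On the API side this simply surfaces as the demuxer's duration; a minimal sketch of checking it, assuming a reachable (non-live) RTMP URL:

    #include <stdio.h>
    #include <libavformat/avformat.h>

    /* Sketch: open an RTMP stream and print the duration the demuxer
     * obtained, e.g. from metadata or the getStreamLength call. */
    static void print_rtmp_duration(const char *url)
    {
        AVFormatContext *ic = NULL;

        av_register_all();
        avformat_network_init();

        if (avformat_open_input(&ic, url, NULL, NULL) < 0)
            return;
        if (avformat_find_stream_info(ic, NULL) >= 0 && ic->duration > 0)
            printf("duration: %.2f seconds\n",
                   ic->duration / (double)AV_TIME_BASE);

        avformat_close_input(&ic);
    }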
-
Uwe L. Korn authored
Packets that contain a number as the result of an RTMP function call are structured the same way (String, Number, Null, Number). This new method also includes more bounds checks to better handle packets that are not structured as expected. Signed-off-by: Martin Storsjö <martin@martin.st>
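As an illustration of that payload layout, here is a small self-contained AMF0 parsing sketch; it mirrors the described structure and bounds checking but is not the actual rtmpproto code:

    #include <stdint.h>
    #include <string.h>

    /* AMF0 markers: 0x00 = number, 0x02 = string, 0x05 = null. */

    static int amf_read_number(const uint8_t **p, const uint8_t *end, double *out)
    {
        uint64_t v = 0;
        int i;
        if (end - *p < 9 || **p != 0x00)
            return -1;
        (*p)++;
        for (i = 0; i < 8; i++)
            v = (v << 8) | *(*p)++;      /* big-endian IEEE-754 double */
        memcpy(out, &v, 8);
        return 0;
    }

    static int amf_skip_string(const uint8_t **p, const uint8_t *end)
    {
        unsigned len;
        if (end - *p < 3 || **p != 0x02)
            return -1;
        len = ((*p)[1] << 8) | (*p)[2];
        if (end - *p < 3 + (int)len)
            return -1;
        *p += 3 + len;
        return 0;
    }

    /* Parse the String, Number, Null, Number layout with bounds checks. */
    static int parse_number_result(const uint8_t *data, int size,
                                   double *tx_id, double *value)
    {
        const uint8_t *p = data, *end = data + size;
        if (amf_skip_string(&p, end) < 0)        return -1; /* "_result"  */
        if (amf_read_number(&p, end, tx_id) < 0) return -1; /* invoke id  */
        if (end - p < 1 || *p != 0x05)           return -1; /* null       */
        p++;
        if (amf_read_number(&p, end, value) < 0) return -1; /* the number */
        return 0;
    }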
-
Luca Barbato authored
The OptionDef arrays are terminated with a { NULL } element, not with NULL. CC: libav-stable@libav.org Bug-Id: CID 703769
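The distinction in sketch form, with a simplified stand-in for the real OptionDef type:

    #include <stdio.h>

    /* Simplified stand-in for OptionDef: the array ends with a zeroed
     * sentinel element, not with a NULL pointer, so the loop must test
     * the sentinel's name field rather than the element itself. */
    typedef struct OptionDef {
        const char *name;
        int flags;
    } OptionDef;

    static const OptionDef options[] = {
        { "i", 0 },
        { "f", 0 },
        { NULL },             /* sentinel element */
    };

    static void list_options(void)
    {
        const OptionDef *po;
        for (po = options; po->name; po++)   /* check the name field */
            printf("-%s\n", po->name);
    }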
-
Luca Barbato authored
Work like the other free()-like functions. Bug-Id: CID 1087081 CC: libav-stable@libav.org
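The convention being matched, as a generic sketch; the type and function names here are illustrative, not the actual function this commit fixes:

    #include <stdlib.h>

    /* Illustrative type; the commit adjusts a specific library function
     * flagged by Coverity. */
    typedef struct Widget { void *data; } Widget;

    /* Like free(): accept NULL, release everything owned by the object,
     * and clear the caller's pointer so repeated calls are harmless. */
    static void widget_free(Widget **pw)
    {
        if (!pw || !*pw)
            return;
        free((*pw)->data);
        free(*pw);
        *pw = NULL;
    }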
-
Vittorio Giovara authored
CC: libav-stable@libav.org Bug-Id: CID 1224275
-
Vittorio Giovara authored
CC: libav-stable@libav.org Bug-Id: CID 1005311
-
Luca Barbato authored
The element is always valid. CC: libav-stable@libav.org Bug-Id: CID 732276
-
Luca Barbato authored
CC: libav-stable@libav.org Bug-Id: CID 733793
-
Luca Barbato authored
CC: libav-stable@libav.org Bug-Id: CID 1238794
-
Janne Grunau authored
-
- 16 Oct, 2014 2 commits
-
-
Mika Raento authored
Signed-off-by: Martin Storsjö <martin@martin.st>
-
Michael Lynch authored
CC: libav-stable@libav.org Signed-off-by: Martin Storsjö <martin@martin.st>
-
- 15 Oct, 2014 13 commits
-
-
Martin Storsjö authored
Signed-off-by: Martin Storsjö <martin@martin.st>
-
Martin Storsjö authored
Signed-off-by: Martin Storsjö <martin@martin.st>
-
Martin Storsjö authored
Signed-off-by: Martin Storsjö <martin@martin.st>
-
Vittorio Giovara authored
Reported-by: Ruoyu <liangry@ucweb.com>
-
Vittorio Giovara authored
Reported-by: Ruoyu <liangry@ucweb.com>
-
Martin Storsjö authored
Signed-off-by: Martin Storsjö <martin@martin.st>
-
Martin Storsjö authored
These are assembled into extradata in the order vps/sps/pps/sei. Signed-off-by: Martin Storsjö <martin@martin.st>
-
Anton Khirnov authored
-
Anton Khirnov authored
-
Anton Khirnov authored
When decoding, this field holds the inverse of the framerate that can be written in the headers for some codecs. Using a field called 'time_base' for this is very misleading, as there are no timestamps associated with it. Furthermore, this field is used for a very different purpose during encoding. Add a new field, called 'framerate', to replace the use of time_base for decoding.
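A minimal sketch of consuming the new field, assuming a decoder that declares a frame rate in its headers:

    #include <stdio.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/rational.h>

    /* Sketch: after opening a decoder, the header-declared frame rate
     * (if any) is available as a rational, with no timestamp semantics
     * attached to it. */
    static void print_declared_framerate(const AVCodecContext *avctx)
    {
        if (avctx->framerate.num && avctx->framerate.den)
            printf("declared frame rate: %d/%d (%.3f fps)\n",
                   avctx->framerate.num, avctx->framerate.den,
                   av_q2d(avctx->framerate));
        else
            printf("no frame rate declared in the headers\n");
    }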
-
Rémi Denis-Courmont authored
Decoding acceleration may work even if the codec level is higher than the stated limit of the VDPAU driver, or the problem may be considered acceptable by the user. This flag allows skipping the codec level capability checks and proceeding with decoding. Applications should obviously not set this flag by default, but only if the user explicitly requested this behavior (and presumably knows how to turn it back off if it fails). Signed-off-by: Anton Khirnov <anton@khirnov.net>
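In application terms the flag is passed when binding the VDPAU device to the codec context; a rough sketch (device and get_proc_address setup omitted):

    #include <vdpau/vdpau.h>
    #include <libavcodec/avcodec.h>
    #include <libavcodec/vdpau.h>

    /* Sketch: bind a VDPAU device to the codec context while explicitly
     * opting out of the driver's codec level capability check. Only do
     * this when the user asked for it. */
    static int bind_vdpau_ignoring_level(AVCodecContext *avctx,
                                         VdpDevice device,
                                         VdpGetProcAddress *get_proc)
    {
        return av_vdpau_bind_context(avctx, device, get_proc,
                                     AV_HWACCEL_FLAG_IGNORE_LEVEL);
    }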
-
Rémi Denis-Courmont authored
Currently, no flags are supported. Signed-off-by: Anton Khirnov <anton@khirnov.net>
-
Rémi Denis-Courmont authored
Signed-off-by: Anton Khirnov <anton@khirnov.net>
-
- 14 Oct, 2014 2 commits
-
-
Martin Storsjö authored
Signed-off-by: Martin Storsjö <martin@martin.st>
-
Martin Storsjö authored
These allow getting the absolute start timestamp of a fragment without reading the preceding timestamps. This fixes sync between tracks when starting from fragments in different streams that don't align exactly. It is also a prerequisite for producing DASH content. Signed-off-by: Martin Storsjö <martin@martin.st>
-
- 13 Oct, 2014 4 commits
-
-
Anton Khirnov authored
-
Anton Khirnov authored
Currently, the amount of padding inserted at the beginning by some audio encoders is exported through AVCodecContext.delay. However:
- the term 'delay' is heavily overloaded and can have multiple different meanings even in the case of audio encoding;
- this field has entirely different meanings depending on whether the codec context is used for encoding or decoding (and has yet another different meaning for video), preventing generic handling of the codec context.
Therefore, add a new field -- AVCodecContext.initial_padding. It could conceivably be used for decoding as well at a later point.
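A minimal sketch of querying the new field after opening an audio encoder; the encoder name and stream parameters below are placeholders:

    #include <stdio.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/channel_layout.h>

    /* Sketch: report how many priming samples an audio encoder inserts,
     * using the new field instead of the overloaded 'delay'. */
    static void report_priming(const char *enc_name)
    {
        AVCodec *enc = avcodec_find_encoder_by_name(enc_name);
        AVCodecContext *ctx;

        if (!enc)
            return;
        ctx = avcodec_alloc_context3(enc);
        if (!ctx)
            return;

        /* Placeholder stream parameters for the sketch. */
        ctx->sample_rate    = 44100;
        ctx->channels       = 2;
        ctx->channel_layout = AV_CH_LAYOUT_STEREO;
        ctx->sample_fmt     = enc->sample_fmts ? enc->sample_fmts[0]
                                               : AV_SAMPLE_FMT_S16;

        if (avcodec_open2(ctx, enc, NULL) >= 0)
            printf("%s inserts %d priming samples\n",
                   enc_name, ctx->initial_padding);
        avcodec_free_context(&ctx);
    }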
-
Rémi Denis-Courmont authored
Fail safely if the pixel format changes.
-
Rémi Denis-Courmont authored
Bug-Id: 541
-
- 12 Oct, 2014 7 commits
-
-
Mark McGough authored
Icecast uses HTTP 1.0 while Libav uses HTTP 1.1 and enables chunked POST by default. Icecast actually forwards the HTTP chunk headers to the listener as part of the media stream (without the chunked-encoding HTTP headers), causing the players to lose sync. Disabling the option is enough to feed Icecast properly. Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
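A rough sketch of what that looks like from an application, using the HTTP protocol's chunked_post option; URL handling and error paths are simplified:

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    /* Sketch: open an HTTP output towards Icecast with chunked POST
     * disabled, so no chunk size markers end up in the media stream. */
    static int open_icecast_output(AVIOContext **pb, const char *url)
    {
        AVDictionary *opts = NULL;
        int ret;

        av_dict_set(&opts, "chunked_post", "0", 0);
        ret = avio_open2(pb, url, AVIO_FLAG_WRITE, NULL, &opts);
        av_dict_free(&opts);
        return ret;
    }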
-
Martin Storsjö authored
Signed-off-by: Martin Storsjö <martin@martin.st>
-
Martin Storsjö authored
Signed-off-by: Martin Storsjö <martin@martin.st>
-
Martin Storsjö authored
This is necessary to get the right timestamp offset for content that starts with dts != 0. This currently only helps when writing fragmented files with a non-empty moov atom. When writing an empty moov atom, we don't have any packets yet, so we don't know the starting dts for the tracks. Signed-off-by: Martin Storsjö <martin@martin.st>
-
Martin Storsjö authored
Signed-off-by: Martin Storsjö <martin@martin.st>
-
Michael Niedermayer authored
This makes sure that audio preroll for e.g. AAC is signaled correctly. Previously we only wrote the edit list correctly if we had negative dts but started with pts == 0 (e.g. for video with B-frames). Signed-off-by: Martin Storsjö <martin@martin.st>
-
Martin Storsjö authored
Signed-off-by: Martin Storsjö <martin@martin.st>
-