- 29 May, 2016 (2 commits)
-
Diego Biurrun authored
Split version files into one line per symbol/directive to allow compatibility with the Solaris linker without preprocessing, and eliminate '$' from version file templates to simplify the postprocessing shell command.
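As an illustration of the resulting layout (a hypothetical version script, not libav's actual template), each symbol and directive sits on its own line, which the Solaris linker can consume directly and which leaves no '$' placeholders for the shell to postprocess:

```
LIBFOO_1 {
    global:
        foo_init;
        foo_process;
    local:
        *;
};
```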
-
Diego Biurrun authored
-
- 28 May, 2016 (2 commits)
-
Diego Biurrun authored
These warnings conflict with system macros on Solaris, producing truckloads of warnings about macro redefinition.
-
Diego Biurrun authored
-
- 27 May, 2016 (5 commits)
-
Luca Barbato authored
CC: libav-stable@libav.org
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
-
Mark Thompson authored
Experimental; requires Skylake and VAAPI 0.39.1 (not yet released). Also increases the allowed range of the quality option - in low-power mode, the Intel driver supports levels 1-8 (and 0 meaning default).
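As a rough sketch of how such a quality level reaches the driver through libva (struct and enum names are from libva's public headers; the helper and its values are illustrative, not libav's actual code):

```c
#include <va/va.h>

/* Pass an encode quality level via a misc parameter buffer.  For the
 * Intel driver in low-power mode, 1-8 are valid and 0 selects the
 * driver default. */
static VAStatus set_quality_level(VADisplay dpy, VAContextID ctx,
                                  unsigned int level, VABufferID *buf)
{
    VAEncMiscParameterBuffer *misc;
    VAEncMiscParameterBufferQualityLevel *ql;
    VAStatus vas;

    vas = vaCreateBuffer(dpy, ctx, VAEncMiscParameterBufferType,
                         sizeof(*misc) + sizeof(*ql), 1, NULL, buf);
    if (vas != VA_STATUS_SUCCESS)
        return vas;
    vas = vaMapBuffer(dpy, *buf, (void **)&misc);
    if (vas != VA_STATUS_SUCCESS)
        return vas;
    misc->type = VAEncMiscParameterTypeQualityLevel;
    ql = (VAEncMiscParameterBufferQualityLevel *)misc->data;
    ql->quality_level = level;
    return vaUnmapBuffer(dpy, *buf);
}
```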
-
Mark Thompson authored
Non-reference frames (nal_ref_idc == 0) should be discardable, so frame_num does not advance after them. Before this change, a stream containing unreferenced B-frames would be rejected by the reference decoder.
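A minimal sketch of the rule being enforced (illustrative helper, not the encoder's actual code): frame_num advances, modulo MaxFrameNum, only after a picture that is used for reference.

```c
/* A non-reference picture (nal_ref_idc == 0) is discardable and must
 * leave frame_num unchanged; only reference pictures advance it. */
static int next_frame_num(int frame_num, int nal_ref_idc,
                          int max_frame_num)
{
    if (nal_ref_idc != 0)
        return (frame_num + 1) % max_frame_num;
    return frame_num;
}
```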
-
Mark Thompson authored
This prevents attempts to use unsupported modes, such as low-power H.264 mode on non-Skylake targets. Also fixes a crash on invalid configuration, when trying to destroy an invalid VA config/context.
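A sketch of the kind of up-front check this implies, using the public libva query API (the helper itself is hypothetical):

```c
#include <stdlib.h>
#include <va/va.h>

/* Return 1 if the driver advertises the wanted entrypoint for the
 * given profile, so vaCreateConfig() is only attempted for
 * combinations that can actually work. */
static int entrypoint_supported(VADisplay dpy, VAProfile profile,
                                VAEntrypoint wanted)
{
    int i, n = vaMaxNumEntrypoints(dpy);
    VAEntrypoint *ep = malloc(n * sizeof(*ep));
    int found = 0;

    if (!ep)
        return 0;
    if (vaQueryConfigEntrypoints(dpy, profile, ep, &n) ==
        VA_STATUS_SUCCESS) {
        for (i = 0; i < n; i++)
            found |= ep[i] == wanted;
    }
    free(ep);
    return found;
}
```

The crash fix is the complementary half: vaDestroyConfig()/vaDestroyContext() should only ever be called with IDs that were actually created.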
-
Diego Biurrun authored
-
- 26 May, 2016 (12 commits)
-
Anton Khirnov authored
This is a video test and there are no audio packets in the sample anyway.
-
Anton Khirnov authored
The current code modifies the user-supplied string, which is shared for the whole output file. So a bitstream filter specification applied to multiple streams would not work correctly.
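A sketch of the approach behind the fix (hypothetical helper; the comma-separated spec syntax is only illustrative): tokenize a private copy so the shared user string survives intact.

```c
#include "libavutil/avstring.h"
#include "libavutil/error.h"
#include "libavutil/mem.h"

static int parse_bsf_spec(const char *user_spec)
{
    char *spec = av_strdup(user_spec);   /* private, modifiable copy */
    char *state = NULL, *name;

    if (!spec)
        return AVERROR(ENOMEM);
    for (name = av_strtok(spec, ",", &state); name;
         name = av_strtok(NULL, ",", &state)) {
        /* look up and attach the bitstream filter named 'name' */
    }
    av_freep(&spec);
    return 0;
}
```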
-
Anton Khirnov authored
-
Anton Khirnov authored
-
Anton Khirnov authored
-
Anton Khirnov authored
-
Anton Khirnov authored
-
Anton Khirnov authored
-
Anton Khirnov authored
-
Anton Khirnov authored
-
Anton Khirnov authored
-
Andrey Turkin authored
avcodec_copy_context() didn't handle hw_frames_ctx references correctly, which could cause crashes.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
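The underlying idea, sketched (not the actual patch): the destination context must hold its own reference to the hardware frames context rather than a copied pointer.

```c
#include "libavcodec/avcodec.h"
#include "libavutil/buffer.h"

static int copy_hw_frames_ref(AVCodecContext *dst,
                              const AVCodecContext *src)
{
    av_buffer_unref(&dst->hw_frames_ctx);   /* drop any stale ref */
    if (src->hw_frames_ctx) {
        dst->hw_frames_ctx = av_buffer_ref(src->hw_frames_ctx);
        if (!dst->hw_frames_ctx)
            return AVERROR(ENOMEM);
    }
    return 0;
}
```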
-
- 25 May, 2016 (4 commits)
-
Diego Biurrun authored
-
Diego Biurrun authored
-
Diego Biurrun authored
-
Martin Storsjö authored
This is only used for logging a human-readable codec name for debugging.
Signed-off-by: Martin Storsjö <martin@martin.st>
-
- 24 May, 2016 (2 commits)
-
Diego Biurrun authored
-
Francois Cartegnie authored
Signed-off-by: Diego Biurrun <diego@biurrun.de>
-
- 23 May, 2016 (5 commits)
-
Anton Khirnov authored
-
Anton Khirnov authored
We cannot deprecate it until the new parser API is in place, because of the way libavformat works. But most users can already simply replace it with avcodec_free_context(), which will simplify the transition once it is finally deprecated.
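The replacement is mechanical; a minimal before/after sketch:

```c
#include "libavcodec/avcodec.h"

static void close_old(AVCodecContext **pavctx)
{
    /* old pattern: close the codec, then free the context manually */
    avcodec_close(*pavctx);
    av_freep(pavctx);
}

static void close_new(AVCodecContext **pavctx)
{
    /* new pattern: one call closes the codec (if open) and frees the
     * context; the pointer is reset to NULL */
    avcodec_free_context(pavctx);
}
```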
-
Anton Khirnov authored
This function is supposed to "reset" a codec context to a clean state so that it can be opened again. The only reason it exists is to allow using AVStream.codec as a decoding context (after it was already opened/used/closed by avformat_find_stream_info()). Since that behaviour is now deprecated, there is no reason for this function to exist anymore.
-
Anton Khirnov authored
Since AVCodecContext contains a lot of complex state, copying a codec context is not a well-defined operation. The purpose for which it is typically used (which is well-defined) is copying the stream parameters from one codec context to another. That is now possible through the AVCodecParameters API. Therefore, there is no reason for avcodec_copy_context() to exist.
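A sketch of the well-defined replacement (hypothetical helper built on the new API):

```c
#include "libavcodec/avcodec.h"

static int copy_stream_params(AVCodecContext *dst,
                              const AVCodecContext *src)
{
    AVCodecParameters *par = avcodec_parameters_alloc();
    int ret;

    if (!par)
        return AVERROR(ENOMEM);
    ret = avcodec_parameters_from_context(par, src);
    if (ret >= 0)
        ret = avcodec_parameters_to_context(dst, par);
    avcodec_parameters_free(&par);
    return ret;
}
```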
-
Anton Khirnov authored
Describe the new AVCodecParameters API.
-
- 22 May, 2016 (5 commits)
-
Luca Barbato authored
Initialize the bit buffer with the correct size (the number of bits that will be read) instead of relying on the bitstream reader overreading and still returning correct values.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Signed-off-by: Diego Biurrun <diego@biurrun.de>
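The pattern, as a sketch (get_bits.h is a private libavcodec header; the size computation is illustrative):

```c
#include "get_bits.h"

static int read_table(GetBitContext *gb, const uint8_t *buf,
                      int entries, int bits_per_entry)
{
    /* size the reader to exactly the bits that will be consumed,
     * rather than the whole buffer, so overreads are caught */
    int ret = init_get_bits(gb, buf, entries * bits_per_entry);
    if (ret < 0)
        return ret;
    /* ... one get_bits(gb, bits_per_entry) call per entry ... */
    return 0;
}
```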
-
Diego Biurrun authored
It will not be provided by the new bit reader anyway.
-
Diego Biurrun authored
-
Diego Biurrun authored
This fixes compilation with the libavcodec version bumped to 58.
-
Anton Khirnov authored
It is now only used by the av_parser_change() call during streamcopy, so allocate a special AVCodecContext instance for this case. This instance should go away when the new parser API is finished.
Signed-off-by: Diego Biurrun <diego@biurrun.de>
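A sketch of what such a dedicated instance can look like (hypothetical helper; it fills the context from the stream's codec parameters instead of reusing AVStream.codec):

```c
#include "libavcodec/avcodec.h"

/* A context used only to feed av_parser_change() during streamcopy. */
static AVCodecContext *alloc_parser_avctx(const AVCodecParameters *par)
{
    AVCodecContext *avctx = avcodec_alloc_context3(NULL);

    if (!avctx)
        return NULL;
    if (avcodec_parameters_to_context(avctx, par) < 0)
        avcodec_free_context(&avctx);   /* sets avctx back to NULL */
    return avctx;
}
```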
-
- 19 May, 2016 (3 commits)
-
Anton Khirnov authored
Based on a patch by Agatha Hu <ahu@nvidia.com>
-
Philip Langdale authored
For reasons we are not privy to, nvidia decided that the nvenc encoder should apply aspect ratio compensation to 'DVD like' content, assuming that the content is not BT.601 compliant but needs to be. In this context, that means they make the following questionable assumptions:

1) If the input dimensions are 720x480 or 720x576, assume the content has an active area of 704x480 or 704x576.

2) Assume that whatever the input sample aspect ratio is, it does not account for the difference between 'physical' and 'active' dimensions.

From these assumptions, they conclude that they can 'help' by adjusting the sample aspect ratio by a factor of 45/44. And indeed, if you wanted to display only the 704-wide active area with the same aspect ratio as the full 720-wide image, this would be the correct adjustment factor - but what if you don't? More importantly, what if you are used to lavc not making this kind of adjustment at encode time, because none of the other encoders do this? And what if you had already accounted for BT.601 and your input had the correct attributes? Well, the compensation is applied anyway.

So, if you take some content and feed it through nvenc repeatedly, it will keep scaling the aspect ratio every time, stretching your video out more and more.

Clearly, regardless of whether you want to apply BT.601 aspect ratio adjustments or not, this is not the way to do it. With any other lavc encoder, you would do it as part of defining your input parameters or do the adjustment at playback time, and there is no reason why nvenc should be any different. This change adds some logic to undo the compensation that nvenc would otherwise apply.

nvidia engineers have told us that they will work to make this compensation mechanism optional in a future release of the nvenc SDK. At that point, we can adapt accordingly.

Signed-off-by: Philip Langdale <philipl@overt.org>
Reviewed-by: Timo Rothenpieler <timo@rothenpieler.org>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
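The undo amounts to pre-scaling by the reciprocal factor; a sketch (helper name and triggering condition are illustrative):

```c
#include "libavutil/rational.h"

/* Pre-multiply the SAR by 44/45 for 'DVD like' dimensions so that the
 * driver's unconditional 45/44 adjustment cancels out. */
static AVRational undo_nvenc_sar(AVRational sar, int w, int h)
{
    if (w == 720 && (h == 480 || h == 576))
        return av_mul_q(sar, (AVRational){ 44, 45 });
    return sar;
}
```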
-
Anton Khirnov authored
-