- 23 Apr, 2020 11 commits
-
-
ManojGuptaBonda authored
Add support for building FFmpeg with HW-accelerated decode (nvdec) and encode on the aarch64 architecture. Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>
-
Gyan Doshi authored
-
Jun Zhao authored
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
-
Jun Zhao authored
Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
-
Jun Zhao authored
enable dvcC/dvvC box support from DOVI sidedata. Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
-
Jun Zhao authored
support DOVI sidedata. Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
-
Jun Zhao authored
dump DOVI side data. Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
-
vacingfang authored
support dvcC/dvvC boxes from the spec "Dolby Vision Streams Within the ISO Base Media File Format" Version 2.1.2 (https://www.dolby.com/in/en/technologies/dolby-vision/dolby-vision-bitstreams-within-the-iso-base-media-file-format-v2.1.2.pdf) and export the DOVI information to side data. Signed-off-by: vacingfang <vacingfang@tencent.com>
-
vacingfang authored
support the DOVI Video Stream Descriptor from the spec "Dolby Vision Streams Within the MPEG-2 Transport Stream Format" V1.2 (https://www.dolby.com/us/en/technologies/dolby-vision/dolby-vision-bitstreams-in-mpeg-2-transport-stream-multiplex-v1.2.pdf) and export the DOVI information as side data. Signed-off-by: vacingfang <vacingfang@tencent.com>
-
vacingfang authored
add a DOVI-related struct. Signed-off-by: vacingfang <vacingfang@tencent.com>
-
Jun Zhao authored
add a new sidedata type for DOVI. Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
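For context, a minimal consumer-side sketch of how the new struct and side-data type added by this series fit together, assuming they are exposed as AVDOVIDecoderConfigurationRecord (libavutil/dovi_meta.h) and AV_PKT_DATA_DOVI_CONF; the names and fields are assumptions, not part of these commit messages:

    /* Hedged sketch: print the DOVI configuration exported as stream side data. */
    #include <stdio.h>
    #include <libavformat/avformat.h>
    #include <libavutil/dovi_meta.h>

    static void dump_dovi_conf(const AVStream *st)
    {
        for (int i = 0; i < st->nb_side_data; i++) {
            const AVPacketSideData *sd = &st->side_data[i];
            if (sd->type != AV_PKT_DATA_DOVI_CONF ||
                sd->size < sizeof(AVDOVIDecoderConfigurationRecord))
                continue;
            const AVDOVIDecoderConfigurationRecord *dovi =
                (const AVDOVIDecoderConfigurationRecord *)sd->data;
            printf("Dolby Vision: version %d.%d, profile %d, level %d, rpu %d, el %d, bl %d\n",
                   dovi->dv_version_major, dovi->dv_version_minor,
                   dovi->dv_profile, dovi->dv_level,
                   dovi->rpu_present_flag, dovi->el_present_flag,
                   dovi->bl_present_flag);
        }
    }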
-
- 22 Apr, 2020 15 commits
-
-
Vitaly Buka authored
Also, the patch makes this code consistent with mpeg4videodec.c. Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
-
Gautam Ramakrishnan authored
This patch adds support for skipping the CRG marker, which is an informational marker. It allows samples such as p0_03.j2k to be decoded. Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
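For illustration, a hedged, self-contained sketch of the general idea rather than the decoder's actual code: a JPEG 2000 marker segment starts with a big-endian 16-bit length that counts the two length bytes themselves, so an informational marker such as CRG can be skipped by consuming the rest of the segment.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical helper; buf is assumed to point just past the 2-byte marker code.
     * Returns the number of bytes to skip (the whole segment), or -1 if truncated. */
    static int skip_marker_segment(const uint8_t *buf, size_t buf_size)
    {
        unsigned len;

        if (buf_size < 2)
            return -1;
        len = ((unsigned)buf[0] << 8) | buf[1];  /* segment length, includes these 2 bytes */
        if (len < 2 || len > buf_size)
            return -1;
        return (int)len;
    }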
-
Michael Niedermayer authored
Fixes: out of array read Fixes: 20796/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_IFF_ILBM_fuzzer-5111364702175232.fuzz Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
-
Michael Niedermayer authored
Reviewed-by: Peter Ross <pross@xvid.org> Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
-
Thilo Borgmann authored
-
Thilo Borgmann authored
-
Gyan Doshi authored
They can be demuxed by ffmpeg.
-
Guo, Yejun authored
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
-
Guo, Yejun authored
it can be tested with a model file generated with the Python script below:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
z1 = 2 / x
z2 = 1 / z1
z3 = z2 / 0.25 + 0.3
z4 = z3 - x * 1.5 - 0.3
y = tf.identity(z4, name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
-
Guo, Yejun authored
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
-
Guo, Yejun authored
it can be tested with a model file generated with the Python script below:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
z1 = 0.5 + 0.3 * x
z2 = z1 * 4
z3 = z2 - x - 2.0
y = tf.identity(z3, name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
-
Guo, Yejun authored
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
-
Guo, Yejun authored
It can be tested with a model file generated with the Python script below:

import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
z1 = 0.039 + x
z2 = x + 0.042
z3 = z1 + z2
z4 = z3 - 0.381
z5 = z4 - x
y = tf.math.maximum(z5, 0.0, name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
-
Jun Zhao authored
fix a resource leak in the mbedtls part. Fixes #8614. Signed-off-by: Jun Zhao <barryjzhao@tencent.com>
-
Andreas Rheinhardt authored
This bug was introduced in 3589b3f2. Fixes Coverity ID 1462425. Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
-
- 21 Apr, 2020 14 commits
-
-
Martin Storsjö authored
Signed-off-by: Martin Storsjö <martin@martin.st>
-
Marton Balint authored
Fixes ticket #2622. Signed-off-by: Marton Balint <cus@passwd.hu>
-
Marton Balint authored
Signed-off-by: Marton Balint <cus@passwd.hu>
-
Marton Balint authored
The standard does not allow more. Signed-off-by: Marton Balint <cus@passwd.hu>
-
Marton Balint authored
Signed-off-by: Marton Balint <cus@passwd.hu>
-
Błażej Szczygieł authored
fixes #8080 Signed-off-by: Błażej Szczygieł <spaz16@wp.pl>
-
Lynne authored
We derive the destination buffer stride from the input stride, which meant that if the image was flipped (i.e. had a negative stride), we would FFALIGN a negative number, which ends up huge, making the Vulkan buffer allocation and hence the whole image transfer fail. This was only discovered because OpenGL compositors can copy an entire flipped image with a single call rather than iterating over each line.
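A small, self-contained illustration of the failure mode, with FFALIGN/FFABS re-declared locally as in libavutil and the concrete values made up:

    #include <stdio.h>

    #define FFALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))
    #define FFABS(a)      ((a) >= 0 ? (a) : (-(a)))

    int main(void)
    {
        int stride = -1920 * 4;                        /* flipped image: negative stride */
        size_t bad  = (size_t)FFALIGN(stride, 64);     /* negative result becomes a huge size_t */
        size_t good = (size_t)FFALIGN(FFABS(stride), 64);
        printf("bad=%zu good=%zu\n", bad, good);
        return 0;
    }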
-
Andreas Rheinhardt authored
Reindentation, removal of { } around blocks containing only one statement, and moving the return statement to a line of its own in situations like "if (ret < 0) return ret;". Moreover, several overlong lines were shortened and a camelCase variable was renamed in line with our naming conventions. Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
-
Andreas Rheinhardt authored
Up until now, the Matroska muxer would mark a track as default if it had the AV_DISPOSITION_DEFAULT disposition or if there was no track with AV_DISPOSITION_DEFAULT set; in the latter case even more than one track of a kind (audio, video, subtitles) was marked as default, which is not sensible. This commit changes the logic used to mark tracks as default. There are now three modes for this:

a) In the "infer" mode, the first track of every type (audio, video, subtitles) with default disposition set is marked as default; if there is no such track for a given type, the first track of that type (if existing) is marked as default. This behaviour is inspired by mkvmerge. It ensures that the default flags are set in a sensible way even if the input comes from containers that lack the concept of default flags. This mode is the default mode.

b) The "infer_no_subs" mode is similar to "infer"; the difference is that if no subtitle track with default disposition exists, no subtitle track is marked as default at all.

c) In the "passthrough" mode, a track is marked as default if and only if the corresponding input stream had the default disposition.

This fixes ticket #8173 (the passthrough mode is ideal for this) as well as ticket #8416 (the "infer_no_subs" mode leads to the desired output). Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
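A hedged usage sketch, assuming the new behaviour is exposed as a Matroska muxer private option named "default_mode" taking the values listed above; ofmt_ctx is a hypothetical, already configured output context:

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    static int write_mkv_header_passthrough(AVFormatContext *ofmt_ctx)
    {
        AVDictionary *opts = NULL;
        int ret;

        /* Keep the input streams' default dispositions exactly as they were. */
        av_dict_set(&opts, "default_mode", "passthrough", 0);
        ret = avformat_write_header(ofmt_ctx, &opts);
        av_dict_free(&opts);
        return ret;
    }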
-
Andreas Rheinhardt authored
At the end of encoding, the FLAC encoder sends a packet whose side data contains updated extradata (e.g. a correct md5 checksum). The Matroska muxer uses this to update the CodecPrivate; in doing so, it copied the stream's codecpar. But given that writing a FLAC CodecPrivate does not modify the AVCodecParameters used at all, there is no need for the copy, and this commit stops doing so. Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
-
Andreas Rheinhardt authored
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
-
Andreas Rheinhardt authored
Several EBML Master elements for which a good upper bound of the final length was available were nevertheless written without giving an upper bound of the final length to start_ebml_master(), so that their length fields were eight bytes long. This has been changed. Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
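A hedged sketch of the underlying arithmetic, assuming the usual EBML coding in which an n-byte length field can represent sizes up to 2^(7n) - 2 (the all-ones bit pattern is reserved for "unknown size"); knowing an upper bound for an element's final size therefore lets the muxer reserve fewer than the maximal eight length bytes:

    #include <stdint.h>

    /* Smallest number of bytes an EBML length field needs to hold `size`. */
    static int ebml_length_bytes(uint64_t size)
    {
        int bytes = 1;

        while (bytes < 8 && size >= (UINT64_C(1) << (7 * bytes)) - 1)
            bytes++;
        return bytes;
    }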
-
Andreas Rheinhardt authored
The Matroska muxer does not write every stream as a Matroska track; some streams are written as AttachedFile. But should no stream be written as a Matroska track, the Matroska muxer would nevertheless write a Tracks element without a single TrackEntry. This is against the spec. This commit changes this and only writes a Tracks element if there is at least one Matroska track. Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
-
Andreas Rheinhardt authored
As WebM doesn't support Attachments, the Matroska muxer drops them when in WebM mode. Until now this happened silently; this commit adds a warning for it. Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
-