Commit bb258fb9 authored by Michael Niedermayer

Merge remote-tracking branch 'qatar/master'

* qatar/master:
  doc: Improve references to external URLs.
  h264: move decode_mb_skip() from h264.h to h264_mvpred.h
  ffplay: skip return value of avcodec_decode_video2 / avcodec_decode_subtitle2
  dnxhdenc: Replace a forward declaration by the proper #include.
  h264: move h264_mvpred.h include.
  pix_fmt: Fix number of bits per component in yuv444p9be
  lavf: deprecate AVFormatContext.timestamp
  ffmpeg: merge input_files_ts_scale into InputStream.
  ffmpeg: don't abuse a global for passing sample format from input to output
  ffmpeg: don't abuse a global for passing channel layout from input to output
  ffmpeg: factor common code from new_a/v/s/d_stream to new_output_stream()
  matroskaenc: make SSA default subtitle codec.
  oggdec: prevent heap corruption.

Conflicts:
	doc/developer.texi
	doc/faq.texi
	doc/general.texi
	ffmpeg.c
	ffplay.c
Merged-by: Michael Niedermayer <michaelni@gmx.at>
parents 896e5975 2cb6dec6
@@ -34,6 +34,7 @@ You can use libavcodec or libavformat in your commercial program, but
 @emph{any patch you make must be published}. The best way to proceed is
 to send your patches to the FFmpeg mailing list.
+
 @anchor{Coding Rules}
 @section Coding Rules
...
@@ -47,7 +47,7 @@ Likely reasons
 @item We are busy and haven't had time yet to read your report or
 investigate the issue.
 @item You didn't follow @url{http://ffmpeg.org/bugreports.html}.
-@item You didn't use git HEAD.
+@item You didn't use git master.
 @item You reported a segmentation fault without gdb output.
 @item You describe a problem but not how to reproduce it.
 @item It's unclear if you use ffmpeg as command line tool or use
@@ -123,7 +123,8 @@ problem and an NP-hard problem...
 @section ffmpeg does not work; what is wrong?
-Try a @code{make distclean} in the ffmpeg source directory before the build. If this does not help see
+Try a @code{make distclean} in the ffmpeg source directory before the build.
+If this does not help see
 (@url{http://ffmpeg.org/bugreports.html}).
 @section How do I encode single pictures into movies?
@@ -285,7 +286,8 @@ Just create an "input.avs" text file with this single line ...
 ffmpeg -i input.avs
 @end example
-For ANY other help on Avisynth, please visit @url{http://www.avisynth.org/}.
+For ANY other help on Avisynth, please visit the
+@uref{http://www.avisynth.org/, Avisynth homepage}.
 @section How can I join video files?
@@ -417,7 +419,7 @@ No. These tools are too bloated and they complicate the build.
 FFmpeg is already organized in a highly modular manner and does not need to
 be rewritten in a formal object language. Further, many of the developers
 favor straight C; it works for them. For more arguments on this matter,
-read "Programming Religion" at (@url{http://www.tux.org/lkml/#s15}).
+read @uref{http://www.tux.org/lkml/#s15, "Programming Religion"}.
 @section Why are the ffmpeg programs devoid of debugging symbols?
...
@@ -888,8 +888,8 @@ ffmpeg -f oss -i /dev/dsp -f video4linux2 -i /dev/video0 /tmp/out.mpg
 @end example
 Note that you must activate the right video source and channel before
-launching ffmpeg with any TV viewer such as xawtv
-(@url{http://linux.bytesex.org/xawtv/}) by Gerd Knorr. You also
+launching ffmpeg with any TV viewer such as
+@uref{http://linux.bytesex.org/xawtv/, xawtv} by Gerd Knorr. You also
 have to set the audio recording levels correctly with a
 standard mixer.
...
@@ -848,7 +848,7 @@ noticeable when running make for a second time (for example in
 @code{make install}).
 @item In order to compile FFplay, you must have the MinGW development library
-of SDL. Get it from @url{http://www.libsdl.org}.
+of @uref{http://www.libsdl.org/, SDL}.
 Edit the @file{bin/sdl-config} script so that it points to the correct prefix
 where SDL was installed. Verify that @file{sdl-config} can be launched from
 the MSYS command line.
@@ -1044,8 +1044,7 @@ Then configure FFmpeg with the following options:
 (you can change the cross-prefix according to the prefix chosen for the
 MinGW tools).
-Then you can easily test FFmpeg with Wine
-(@url{http://www.winehq.com/}).
+Then you can easily test FFmpeg with @uref{http://www.winehq.com/, Wine}.
 @subsection Compilation under Cygwin
@@ -1084,8 +1083,8 @@ If you want to build FFmpeg with additional libraries, download Cygwin
 libogg-devel, libvorbis-devel
 @end example
-These library packages are only available from Cygwin Ports
-(@url{http://sourceware.org/cygwinports/}) :
+These library packages are only available from
+@uref{http://sourceware.org/cygwinports/, Cygwin Ports}:
 @example
 yasm, libSDL-devel, libdirac-devel, libfaac-devel, libgsm-devel,
...
@@ -242,7 +242,7 @@ data transferred over RDT).
 The muxer can be used to send a stream using RTSP ANNOUNCE to a server
 supporting it (currently Darwin Streaming Server and Mischa Spiegelmock's
-RTSP server, @url{http://github.com/revmischa/rtsp-server}).
+@uref{http://github.com/revmischa/rtsp-server, RTSP server}).
 The required syntax for a RTSP url is:
 @example
...
@@ -1431,7 +1431,7 @@ static int queue_picture(VideoState *is, AVFrame *src_frame, double pts1, int64_
 static int get_video_frame(VideoState *is, AVFrame *frame, int64_t *pts, AVPacket *pkt)
 {
-    int len1 av_unused, got_picture, i;
+    int got_picture, i;
     if (packet_queue_get(&is->videoq, pkt, 1) < 0)
         return -1;
@@ -1458,9 +1458,7 @@ static int get_video_frame(VideoState *is, AVFrame *frame, int64_t *pts, AVPacke
         return 0;
     }
-    len1 = avcodec_decode_video2(is->video_st->codec,
-                                 frame, &got_picture,
-                                 pkt);
+    avcodec_decode_video2(is->video_st->codec, frame, &got_picture, pkt);
     if (got_picture) {
         if (decoder_reorder_pts == -1) {
@@ -1807,7 +1805,7 @@ static int subtitle_thread(void *arg)
     VideoState *is = arg;
     SubPicture *sp;
     AVPacket pkt1, *pkt = &pkt1;
-    int len1 av_unused, got_subtitle;
+    int got_subtitle;
     double pts;
     int i, j;
     int r, g, b, y, u, v, a;
@@ -1841,9 +1839,9 @@ static int subtitle_thread(void *arg)
         if (pkt->pts != AV_NOPTS_VALUE)
             pts = av_q2d(is->subtitle_st->time_base)*pkt->pts;
-        len1 = avcodec_decode_subtitle2(is->subtitle_st->codec,
-                                        &sp->sub, &got_subtitle,
-                                        pkt);
+        avcodec_decode_subtitle2(is->subtitle_st->codec, &sp->sub,
+                                 &got_subtitle, pkt);
         if (got_subtitle && sp->sub.format == 0) {
             sp->pts = pts;
...
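Editorial note on the ffplay hunks above: `avcodec_decode_video2()` returns the number of bytes consumed, or a negative error code, independently of whether a frame was produced, so a caller that feeds whole packets and only needs the frame can drop the return value, as ffplay now does. A minimal sketch of the old decode API for callers that do want it (the helper name is mine, not from the patch):

```c
#include <libavcodec/avcodec.h>

/* Decode one packet with the old API: errors come back through the
 * return value, frame availability through got_picture. */
static int decode_one(AVCodecContext *avctx, AVFrame *frame, AVPacket *pkt)
{
    int got_picture = 0;
    int len = avcodec_decode_video2(avctx, frame, &got_picture, pkt);

    if (len < 0)
        return len;         /* decoding error */
    return got_picture;     /* 1 if *frame is now valid, 0 otherwise */
}
```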
@@ -28,6 +28,7 @@
 #include "avcodec.h"
 #include "dsputil.h"
 #include "mpegvideo.h"
+#include "mpegvideo_common.h"
 #include "dnxhdenc.h"
 #define VE AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM
@@ -38,8 +39,6 @@ static const AVOption options[]={
 };
 static const AVClass class = { "dnxhd", av_default_item_name, options, LIBAVUTIL_VERSION_INT };
-
-int dct_quantize_c(MpegEncContext *s, DCTELEM *block, int n, int qscale, int *overflow);
 #define LAMBDA_FRAC_BITS 10
 static av_always_inline void dnxhd_get_pixels_8x4(DCTELEM *restrict block, const uint8_t *pixels, int line_size)
...
@@ -770,8 +770,6 @@ static av_always_inline int get_chroma_qp(H264Context *h, int t, int qscale){
     return h->pps.chroma_qp_table[t][qscale];
 }
-
-static av_always_inline void pred_pskip_motion(H264Context * const h);
 static void fill_decode_neighbors(H264Context *h, int mb_type){
     MpegEncContext * const s = &h->s;
     const int mb_xy= h->mb_xy;
@@ -1302,45 +1300,4 @@ static av_always_inline int get_dct8x8_allowed(H264Context *h){
     return !(AV_RN64A(h->sub_mb_type) & ((MB_TYPE_16x8|MB_TYPE_8x16|MB_TYPE_8x8|MB_TYPE_DIRECT2)*0x0001000100010001ULL));
 }
-/**
- * decodes a P_SKIP or B_SKIP macroblock
- */
-static void av_unused decode_mb_skip(H264Context *h){
-    MpegEncContext * const s = &h->s;
-    const int mb_xy= h->mb_xy;
-    int mb_type=0;
-
-    memset(h->non_zero_count[mb_xy], 0, 48);
-
-    if(MB_FIELD)
-        mb_type|= MB_TYPE_INTERLACED;
-
-    if( h->slice_type_nos == AV_PICTURE_TYPE_B )
-    {
-        // just for fill_caches. pred_direct_motion will set the real mb_type
-        mb_type|= MB_TYPE_L0L1|MB_TYPE_DIRECT2|MB_TYPE_SKIP;
-        if(h->direct_spatial_mv_pred){
-            fill_decode_neighbors(h, mb_type);
-            fill_decode_caches(h, mb_type); //FIXME check what is needed and what not ...
-        }
-        ff_h264_pred_direct_motion(h, &mb_type);
-        mb_type|= MB_TYPE_SKIP;
-    }
-    else
-    {
-        mb_type|= MB_TYPE_16x16|MB_TYPE_P0L0|MB_TYPE_P1L0|MB_TYPE_SKIP;
-
-        fill_decode_neighbors(h, mb_type);
-        pred_pskip_motion(h);
-    }
-
-    write_back_motion(h, mb_type);
-    s->current_picture.f.mb_type[mb_xy] = mb_type;
-    s->current_picture.f.qscale_table[mb_xy] = s->qscale;
-    h->slice_table[ mb_xy ]= h->slice_num;
-    h->prev_mb_skipped= 1;
-}
-
-#include "h264_mvpred.h" //For pred_pskip_motion()
-
 #endif /* AVCODEC_H264_H */
@@ -327,4 +327,43 @@ zeromv:
     return;
 }
+
+/**
+ * decodes a P_SKIP or B_SKIP macroblock
+ */
+static void av_unused decode_mb_skip(H264Context *h){
+    MpegEncContext * const s = &h->s;
+    const int mb_xy= h->mb_xy;
+    int mb_type=0;
+
+    memset(h->non_zero_count[mb_xy], 0, 48);
+
+    if(MB_FIELD)
+        mb_type|= MB_TYPE_INTERLACED;
+
+    if( h->slice_type_nos == AV_PICTURE_TYPE_B )
+    {
+        // just for fill_caches. pred_direct_motion will set the real mb_type
+        mb_type|= MB_TYPE_L0L1|MB_TYPE_DIRECT2|MB_TYPE_SKIP;
+        if(h->direct_spatial_mv_pred){
+            fill_decode_neighbors(h, mb_type);
+            fill_decode_caches(h, mb_type); //FIXME check what is needed and what not ...
+        }
+        ff_h264_pred_direct_motion(h, &mb_type);
+        mb_type|= MB_TYPE_SKIP;
+    }
+    else
+    {
+        mb_type|= MB_TYPE_16x16|MB_TYPE_P0L0|MB_TYPE_P1L0|MB_TYPE_SKIP;
+
+        fill_decode_neighbors(h, mb_type);
+        pred_pskip_motion(h);
+    }
+
+    write_back_motion(h, mb_type);
+    s->current_picture.f.mb_type[mb_xy] = mb_type;
+    s->current_picture.f.qscale_table[mb_xy] = s->qscale;
+    h->slice_table[ mb_xy ]= h->slice_num;
+    h->prev_mb_skipped= 1;
+}
+
 #endif /* AVCODEC_H264_MVPRED_H */
@@ -698,7 +698,12 @@ typedef struct AVFormatContext {
     AVStream **streams;
     char filename[1024]; /**< input or output filename */
     /* stream info */
-    int64_t timestamp;
+#if FF_API_TIMESTAMP
+    /**
+     * @deprecated use 'creation_time' metadata tag instead
+     */
+    attribute_deprecated int64_t timestamp;
+#endif
     int ctx_flags; /**< Format-specific flags, see AVFMTCTX_xx */
     /* private data for pts handling (do not modify directly). */
...
@@ -43,7 +43,7 @@ struct DVMuxContext {
     AVStream *ast[2]; /* stereo audio streams */
     AVFifoBuffer *audio_data[2]; /* FIFO for storing excessive amounts of PCM */
     int frames; /* current frame number */
-    time_t start_time; /* recording start time */
+    int64_t start_time; /* recording start time */
     int has_audio; /* frame under contruction has audio */
     int has_video; /* frame under contruction has video */
     uint8_t frame_buf[DV_MAX_FRAME_SIZE]; /* frame under contruction */
@@ -290,6 +290,7 @@ static DVMuxContext* dv_init_mux(AVFormatContext* s)
 {
     DVMuxContext *c = s->priv_data;
     AVStream *vst = NULL;
+    AVDictionaryEntry *t;
     int i;
     /* we support at most 1 video and 2 audio streams */
@@ -337,7 +338,16 @@ static DVMuxContext* dv_init_mux(AVFormatContext* s)
     c->frames = 0;
     c->has_audio = 0;
     c->has_video = 0;
-    c->start_time = (time_t)s->timestamp;
+#if FF_API_TIMESTAMP
+    if (s->timestamp)
+        c->start_time = s->timestamp;
+    else
+#endif
+    if (t = av_dict_get(s->metadata, "creation_time", NULL, 0)) {
+        struct tm time = {0};
+        strptime(t->value, "%Y-%m-%dT%T", &time);
+        c->start_time = mktime(&time);
+    }
     for (i=0; i < c->n_ast; i++) {
         if (c->ast[i] && !(c->audio_data[i]=av_fifo_alloc(100*AVCODEC_MAX_AUDIO_FRAME_SIZE))) {
...
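The `#if FF_API_TIMESTAMP` fallback above reappears almost verbatim in the gxfenc.c, movenc.c and mxfenc.c hunks below. A standalone sketch of the shared pattern (the helper name is hypothetical; `strptime()` is POSIX, hence the feature-test macro):

```c
#define _XOPEN_SOURCE 600   /* for strptime() */
#include <time.h>
#include <libavutil/dict.h>

/* Parse the "creation_time" metadata tag (ISO 8601 style,
 * e.g. "2011-06-10T12:00:00") into a Unix timestamp;
 * returns 0 if the tag is absent or unparsable. */
static int64_t parse_creation_time(AVDictionary *metadata)
{
    AVDictionaryEntry *t = av_dict_get(metadata, "creation_time", NULL, 0);
    struct tm time = { 0 };

    if (!t || !strptime(t->value, "%Y-%m-%dT%T", &time))
        return 0;
    return (int64_t)mktime(&time);
}
```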
@@ -394,6 +394,20 @@ static int gxf_write_umf_material_description(AVFormatContext *s)
     GXFContext *gxf = s->priv_data;
     AVIOContext *pb = s->pb;
     int timecode_base = gxf->time_base.den == 60000 ? 60 : 50;
+    int64_t timestamp = 0;
+    AVDictionaryEntry *t;
+
+#if FF_API_TIMESTAMP
+    if (s->timestamp)
+        timestamp = s->timestamp;
+    else
+#endif
+    if (t = av_dict_get(s->metadata, "creation_time", NULL, 0)) {
+        struct tm time = {0};
+        strptime(t->value, "%Y-%m-%dT%T", &time);
+        timestamp = mktime(&time);
+    }
+
     // XXX drop frame
     uint32_t timecode =
@@ -409,8 +423,8 @@ static int gxf_write_umf_material_description(AVFormatContext *s)
     avio_wl32(pb, gxf->nb_fields); /* mark out */
     avio_wl32(pb, 0); /* timecode mark in */
     avio_wl32(pb, timecode); /* timecode mark out */
-    avio_wl64(pb, s->timestamp); /* modification time */
-    avio_wl64(pb, s->timestamp); /* creation time */
+    avio_wl64(pb, timestamp); /* modification time */
+    avio_wl64(pb, timestamp); /* creation time */
     avio_wl16(pb, 0); /* reserved */
     avio_wl16(pb, 0); /* reserved */
     avio_wl16(pb, gxf->audio_tracks);
...
@@ -1212,7 +1212,7 @@ AVOutputFormat ff_matroska_muxer = {
     mkv_write_trailer,
     .flags = AVFMT_GLOBALHEADER | AVFMT_VARIABLE_FPS,
     .codec_tag = (const AVCodecTag* const []){ff_codec_bmp_tags, ff_codec_wav_tags, 0},
-    .subtitle_codec = CODEC_ID_TEXT,
+    .subtitle_codec = CODEC_ID_SSA,
 };
 #endif
...
@@ -2142,6 +2142,7 @@ static int mov_write_header(AVFormatContext *s)
 {
     AVIOContext *pb = s->pb;
     MOVMuxContext *mov = s->priv_data;
+    AVDictionaryEntry *t;
     int i, hint_track = 0;
     if (!s->pb->seekable) {
@@ -2272,7 +2273,18 @@ static int mov_write_header(AVFormatContext *s)
     }
     mov_write_mdat_tag(pb, mov);
-    mov->time = s->timestamp + 0x7C25B080; //1970 based -> 1904 based
+
+#if FF_API_TIMESTAMP
+    if (s->timestamp)
+        mov->time = s->timestamp;
+    else
+#endif
+    if (t = av_dict_get(s->metadata, "creation_time", NULL, 0)) {
+        struct tm time = {0};
+        strptime(t->value, "%Y-%m-%dT%T", &time);
+        mov->time = mktime(&time);
+    }
+    mov->time += 0x7C25B080; //1970 based -> 1904 based
     if (mov->chapter_track)
         mov_create_chapter_track(s, mov->chapter_track);
...
@@ -1407,6 +1407,8 @@ static int mxf_write_header(AVFormatContext *s)
     int i;
     uint8_t present[FF_ARRAY_ELEMS(mxf_essence_container_uls)] = {0};
     const int *samples_per_frame = NULL;
+    AVDictionaryEntry *t;
+    int64_t timestamp = 0;
     if (!s->nb_streams)
         return -1;
@@ -1512,8 +1514,18 @@ static int mxf_write_header(AVFormatContext *s)
         sc->order = AV_RB32(sc->track_essence_element_key+12);
     }
+#if FF_API_TIMESTAMP
     if (s->timestamp)
-        mxf->timestamp = mxf_parse_timestamp(s->timestamp);
+        timestamp = s->timestamp;
+    else
+#endif
+    if (t = av_dict_get(s->metadata, "creation_time", NULL, 0)) {
+        struct tm time = {0};
+        strptime(t->value, "%Y-%m-%dT%T", &time);
+        timestamp = mktime(&time);
+    }
+    if (timestamp)
+        mxf->timestamp = mxf_parse_timestamp(timestamp);
     mxf->duration = -1;
     mxf->timecode_track = av_mallocz(sizeof(*mxf->timecode_track));
...
@@ -605,15 +605,15 @@ static int64_t ogg_read_timestamp(AVFormatContext *s, int stream_index,
                                   int64_t *pos_arg, int64_t pos_limit)
 {
     struct ogg *ogg = s->priv_data;
-    struct ogg_stream *os = ogg->streams + stream_index;
     AVIOContext *bc = s->pb;
     int64_t pts = AV_NOPTS_VALUE;
-    int i;
+    int i = -1;
     avio_seek(bc, *pos_arg, SEEK_SET);
     ogg_reset(ogg);
     while (avio_tell(bc) < pos_limit && !ogg_packet(s, &i, NULL, NULL, pos_arg)) {
         if (i == stream_index) {
+            struct ogg_stream *os = ogg->streams + stream_index;
             pts = ogg_calc_pts(s, i, NULL);
             if (os->keyframe_seek && !(os->pflags & AV_PKT_FLAG_KEY))
                 pts = AV_NOPTS_VALUE;
@@ -639,6 +639,7 @@ static int ogg_read_seek(AVFormatContext *s, int stream_index,
     os->keyframe_seek = 1;
     ret = av_seek_frame_binary(s, stream_index, timestamp, flags);
+    os = ogg->streams + stream_index;
     if (ret < 0)
         os->keyframe_seek = 0;
     return ret;
...
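The oggdec hunks above are the "prevent heap corruption" fix from the commit list: `ogg->streams` can be reallocated while `ogg_packet()` or `av_seek_frame_binary()` discovers new streams, so a `struct ogg_stream *` computed before those calls may dangle; the fix re-derives the pointer afterwards. A generic sketch of the bug class (illustrative code, not from FFmpeg):

```c
#include <stdlib.h>

struct stream  { int id; };
struct demuxer { struct stream *streams; int nb_streams; };

/* Growing the array may move it; every held pointer into it goes stale. */
static int add_stream(struct demuxer *d)
{
    struct stream *p = realloc(d->streams, (d->nb_streams + 1) * sizeof(*p));
    if (!p)
        return -1;
    d->streams = p;
    d->streams[d->nb_streams].id = d->nb_streams;
    return d->nb_streams++;
}

static void use_stream(struct demuxer *d, int idx)
{
    struct stream *os = d->streams + idx;   /* stale after the next call */
    add_stream(d);
    os = d->streams + idx;                  /* fix: re-derive from the base */
    (void)os;
}
```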
@@ -86,5 +86,8 @@
 #ifndef FF_API_LOOP_OUTPUT
 #define FF_API_LOOP_OUTPUT (LIBAVFORMAT_VERSION_MAJOR < 54)
 #endif
+#ifndef FF_API_TIMESTAMP
+#define FF_API_TIMESTAMP (LIBAVFORMAT_VERSION_MAJOR < 54)
+#endif
 #endif /* AVFORMAT_VERSION_H */
@@ -918,9 +918,9 @@ const AVPixFmtDescriptor av_pix_fmt_descriptors[PIX_FMT_NB] = {
         .log2_chroma_w= 0,
         .log2_chroma_h= 0,
         .comp = {
-            {0,1,1,0,9}, /* Y */
-            {1,1,1,0,9}, /* U */
-            {2,1,1,0,9}, /* V */
+            {0,1,1,0,8}, /* Y */
+            {1,1,1,0,8}, /* U */
+            {2,1,1,0,8}, /* V */
         },
         .flags = PIX_FMT_BE,
     },
...
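The yuv444p9be change reads correctly once you know that the last field of each component descriptor stores the bit depth minus one, so a 9-bit component must be recorded as 8, not 9. A small sketch, assuming that era's AVComponentDescriptor layout (plane, step_minus1, offset_plus1, shift, depth_minus1):

```c
#include <libavutil/pixdesc.h>

/* Effective bit depth of the first component, e.g. 9 for
 * PIX_FMT_YUV444P9BE once the descriptor stores depth_minus1 = 8. */
static int bits_per_component(enum PixelFormat fmt)
{
    const AVPixFmtDescriptor *desc = &av_pix_fmt_descriptors[fmt];
    return desc->comp[0].depth_minus1 + 1;
}
```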