Andreas Rheinhardt authored
When flushing, MAX_FRAME_HEADER_SIZE bytes (always zero) are supposed to be written to the fifo buffer in order to be able to check the rest of the buffer for frame headers. The intention was to write a small dedicated buffer of size MAX_FRAME_HEADER_SIZE, but the way it was actually done ensured that this never happened: First, it is checked whether the size of the input buffer is zero; if so, buf_size is set to MAX_FRAME_HEADER_SIZE and read_end is set to indicate that MAX_FRAME_HEADER_SIZE bytes need to be written. Then it is made sure that there is enough space in the fifo for the data to be written, and afterwards the data is written. The check used at this point is whether buf_size is zero or not. But if buf_size was zero initially, it is MAX_FRAME_HEADER_SIZE now, so the designated flush buffer is not the one written; instead the padded input buffer (from the stack of av_parser_parse2()) is used. This only works because AV_INPUT_BUFFER_PADDING_SIZE >= MAX_FRAME_HEADER_SIZE. Later on, buf_size is set to zero again.

Given that since 7edbd536 the actual amount of data read is no longer automatically equal to buf_size, it is completely unnecessary to modify buf_size at all. Moreover, modifying it is dangerous: some allocations can fail, and because buf_size is never reset to zero in that codepath, the parser might return a value > 0 when flushing.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
87b30f8a
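The following is a minimal C sketch of the flawed flush pattern described above, not the actual flac_parser code. The names parse_buggy(), parse_fixed(), fifo_write() and the buffer sizes are hypothetical; only the control flow mirrors the bug: buf_size is overwritten on flush, so the later "buf_size ? ..." test no longer distinguishes the flush case and the wrong buffer is queued.

/* Sketch under assumed names; compiles and runs standalone. */
#include <stdio.h>
#include <string.h>

#define MAX_FRAME_HEADER_SIZE 16
#define PADDING_SIZE          64   /* stands in for AV_INPUT_BUFFER_PADDING_SIZE */

static unsigned char fifo[256];
static size_t fifo_len;

/* Hypothetical helper: append data to the fifo. */
static void fifo_write(const unsigned char *data, size_t size)
{
    memcpy(fifo + fifo_len, data, size);
    fifo_len += size;
}

/* Buggy variant: modifies buf_size, so the dedicated flush buffer is never used. */
static void parse_buggy(const unsigned char *buf, size_t buf_size)
{
    static const unsigned char flush_buf[MAX_FRAME_HEADER_SIZE]; /* all zero */
    size_t read_end = buf_size;

    if (!buf_size) {                      /* flush requested */
        buf_size = MAX_FRAME_HEADER_SIZE; /* <-- overwrites the flush marker */
        read_end = MAX_FRAME_HEADER_SIZE;
    }

    /* ... ensure fifo space ... */

    /* buf_size is now nonzero even on flush, so the caller's padded input
     * buffer is written instead of flush_buf. This only works because the
     * padding is at least MAX_FRAME_HEADER_SIZE bytes. */
    fifo_write(buf_size ? buf : flush_buf, read_end);
}

/* Fixed variant: leave buf_size untouched and key everything off it. */
static void parse_fixed(const unsigned char *buf, size_t buf_size)
{
    static const unsigned char flush_buf[MAX_FRAME_HEADER_SIZE];
    size_t read_end = buf_size ? buf_size : MAX_FRAME_HEADER_SIZE;

    fifo_write(buf_size ? buf : flush_buf, read_end);
}

int main(void)
{
    unsigned char input[PADDING_SIZE] = { 0xAB }; /* padded caller buffer */

    fifo_len = 0;
    parse_buggy(input, 0);   /* flush: writes from 'input', not flush_buf */
    printf("buggy flush wrote 0x%02X\n", fifo[0]);

    fifo_len = 0;
    parse_fixed(input, 0);   /* flush: writes zeros from flush_buf */
    printf("fixed flush wrote 0x%02X\n", fifo[0]);
    return 0;
}

Running the sketch prints 0xAB for the buggy variant (bytes taken from the padded caller buffer) and 0x00 for the fixed one (bytes taken from the zeroed flush buffer), which illustrates why leaving buf_size unmodified is the safer choice.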