1. 09 Apr, 2018 1 commit
  2. 07 Mar, 2018 1 commit
  3. 08 Dec, 2017 1 commit
• hevc: Add hevc_get_pixel_4/8/12/16/24/32/48/64 · 7993ec19
      Alexandra Hájková authored
      Checkasm timings:
      block size bitdepth  C       NEON
      4           8 bit:    146.7   48.7
                 10 bit:    146.7   52.7
      8           8 bit:    430.3   84.4
                 10 bit:    430.4  119.5
      12          8 bit:    812.8  141.0
                 10 bit:    812.8  195.0
      16          8 bit:   1499.1  268.0
                 10 bit:   1498.9  368.4
      24          8 bit:   4394.2  574.8
                 10 bit:   3696.3  804.8
      32          8 bit:   5108.6  568.9
                 10 bit:   4249.6  918.8
      48          8 bit:  16819.6 2304.9
                 10 bit:  13882.0 3178.5
      64          8 bit:  13490.8 1799.5
                 10 bit:  11018.5 2519.4
Signed-off-by: Martin Storsjö <martin@martin.st>
  4. 28 Mar, 2017 1 commit
  5. 27 Mar, 2017 1 commit
  6. 24 Jan, 2017 3 commits
• arm: Add NEON optimizations for 10 and 12 bit vp9 loop filter · 1e5d87ee
      Martin Storsjö authored
      This work is sponsored by, and copyright, Google.
      
This is pretty similar to the 8 bpp version, but in some senses
      simpler. All input pixels are 16 bits, and all intermediates also fit
      in 16 bits, so there's no lengthening/narrowing in the filter at all.
      
      For the full 16 pixel wide filter, we can only process 4 pixels at a time
      (using an implementation very much similar to the one for 8 bpp),
      but we can do 8 pixels at a time for the 4 and 8 pixel wide filters with
      a different implementation of the core filter.
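As a rough illustration of the all-16-bit arithmetic, here is a scalar C sketch of the weak ("filter4") path for 10 bit content. This is a simplified model, not the actual code: the hev and filter-mask handling is omitted and the function name is made up; the clamp range follows the usual bitdepth scaling of the 8-bit [-128, 127] range.

```c
#include <stdint.h>

/* Clamp a filter intermediate to the bitdepth-scaled signed range
   (the high-bitdepth analogue of clamping to [-128, 127]). */
static int16_t clamp_bd(int32_t v, int bd) {
    const int32_t hi = (128 << (bd - 8)) - 1, lo = -(128 << (bd - 8));
    return v < lo ? lo : v > hi ? hi : (int16_t)v;
}

static uint16_t clip_pixel(int32_t v, int bd) {
    const int32_t max = (1 << bd) - 1;
    return v < 0 ? 0 : v > max ? max : (uint16_t)v;
}

/* Simplified scalar model of the weak ("filter4") path for 10 bit,
   with the hev and filter masks omitted. Every intermediate is at
   most about 3 * 1023 + 511, so everything fits in 16 bits. */
void filter4_10bit(uint16_t *p1, uint16_t *p0, uint16_t *q0, uint16_t *q1)
{
    const int bd = 10;
    int16_t f  = clamp_bd(clamp_bd(*p1 - *q1, bd) + 3 * (*q0 - *p0), bd);
    int16_t f1 = clamp_bd(f + 4, bd) >> 3;
    int16_t f2 = clamp_bd(f + 3, bd) >> 3;
    *q0 = clip_pixel(*q0 - f1, bd);
    *p0 = clip_pixel(*p0 + f2, bd);
}
```

On a flat edge (all four samples equal) the filter value is zero and the pixels pass through unchanged.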
      
      Examples of relative speedup compared to the C version, from checkasm:
                                         Cortex    A7     A8     A9    A53
      vp9_loop_filter_h_4_8_10bpp_neon:          1.83   2.16   1.40   2.09
      vp9_loop_filter_h_8_8_10bpp_neon:          1.39   1.67   1.24   1.70
      vp9_loop_filter_h_16_8_10bpp_neon:         1.56   1.47   1.10   1.81
      vp9_loop_filter_h_16_16_10bpp_neon:        1.94   1.69   1.33   2.24
      vp9_loop_filter_mix2_h_44_16_10bpp_neon:   2.01   2.27   1.67   2.39
      vp9_loop_filter_mix2_h_48_16_10bpp_neon:   1.84   2.06   1.45   2.19
      vp9_loop_filter_mix2_h_84_16_10bpp_neon:   1.89   2.20   1.47   2.29
      vp9_loop_filter_mix2_h_88_16_10bpp_neon:   1.69   2.12   1.47   2.08
      vp9_loop_filter_mix2_v_44_16_10bpp_neon:   3.16   3.98   2.50   4.05
      vp9_loop_filter_mix2_v_48_16_10bpp_neon:   2.84   3.64   2.25   3.77
      vp9_loop_filter_mix2_v_84_16_10bpp_neon:   2.65   3.45   2.16   3.54
      vp9_loop_filter_mix2_v_88_16_10bpp_neon:   2.55   3.30   2.16   3.55
      vp9_loop_filter_v_4_8_10bpp_neon:          2.85   3.97   2.24   3.68
      vp9_loop_filter_v_8_8_10bpp_neon:          2.27   3.19   1.96   3.08
      vp9_loop_filter_v_16_8_10bpp_neon:         3.42   2.74   2.26   4.40
      vp9_loop_filter_v_16_16_10bpp_neon:        2.86   2.44   1.93   3.88
      
      The speedup vs C code measured in checkasm is around 1.1-4x.
      These numbers are quite inconclusive though, since the checkasm test
      runs multiple filterings on top of each other, so later rounds might
      end up with different codepaths (different decisions on which filter
      to apply, based on input pixel differences).
      
      Based on START_TIMER/STOP_TIMER wrapping around a few individual
      functions, the speedup vs C code is around 2-4x.
Signed-off-by: Martin Storsjö <martin@martin.st>
• arm: Add NEON optimizations for 10 and 12 bit vp9 itxfm · 2ed67eba
      Martin Storsjö authored
      This work is sponsored by, and copyright, Google.
      
      This is structured similarly to the 8 bit version. In the 8 bit
      version, the coefficients are 16 bits, and intermediates are 32 bits.
      
      Here, the coefficients are 32 bit. For the 4x4 transforms for 10 bit
      content, the intermediates also fit in 32 bits, but for all other
      transforms (4x4 for 12 bit content, and 8x8 and larger for both 10
      and 12 bit) the intermediates are 64 bit.
      
For the existing 8 bit case, the 8x8 transform fits all coefficients in
      registers; for 10/12 bit, when the coefficients are 32 bit, the 8x8
      transform also has to be done in slices of 4 pixels (just as 16x16 and
      32x32 for 8 bit).
      
      The slice width also shrinks from 4 elements to 2 elements in parallel
      for the 16x16 and 32x32 cases.
      
      The 16 bit coefficients from idct_coeffs and similar tables also need
to be lengthened to 32 bit in order to be used in multiplication with
      vectors with 32 bit elements. This leads to the fixed coefficient
      vectors needing more space, leading to more cases where they have to
      be reloaded within the transform (in iadst16).
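The need for widening can be shown with a scalar sketch; the constants below are plausible magnitudes for a trig coefficient and a 12-bit intermediate, not values taken from the actual tables, and the function name is made up.

```c
#include <stdint.h>
#include <limits.h>

/* Illustration: once the coefficients are widened to 32 bits, their
   products with 32-bit intermediates need a 64-bit result, as
   produced by vmull-style widening multiplies in the NEON code. */
int needs_64bit(void) {
    int32_t coef   = 15137;     /* typical trig coefficient, < 2^14      */
    int32_t interm = 1 << 20;   /* a large intermediate for 12 bit input */
    int64_t prod   = (int64_t)coef * interm;
    return prod > INT32_MAX;    /* 1: the product no longer fits in 32 bits */
}
```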
      
      This technically would need testing in checkasm for subpartitions
      in increments of 2, but that slows down normal checkasm runs
      excessively.
      
      Examples of relative speedup compared to the C version, from checkasm:
                                           Cortex    A7     A8     A9    A53
      vp9_inv_adst_adst_4x4_sub4_add_10_neon:      4.83  11.36   5.22   6.77
      vp9_inv_adst_adst_8x8_sub8_add_10_neon:      4.12   7.60   4.06   4.84
      vp9_inv_adst_adst_16x16_sub16_add_10_neon:   3.93   8.16   4.52   5.35
      vp9_inv_dct_dct_4x4_sub1_add_10_neon:        1.36   2.57   1.41   1.61
      vp9_inv_dct_dct_4x4_sub4_add_10_neon:        4.24   8.66   5.06   5.81
      vp9_inv_dct_dct_8x8_sub1_add_10_neon:        2.63   4.18   1.68   2.87
      vp9_inv_dct_dct_8x8_sub4_add_10_neon:        4.52   9.47   4.24   5.39
      vp9_inv_dct_dct_8x8_sub8_add_10_neon:        3.45   7.34   3.45   4.30
      vp9_inv_dct_dct_16x16_sub1_add_10_neon:      3.56   6.21   2.47   4.32
      vp9_inv_dct_dct_16x16_sub2_add_10_neon:      5.68  12.73   5.28   7.07
      vp9_inv_dct_dct_16x16_sub8_add_10_neon:      4.42   9.28   4.24   5.45
      vp9_inv_dct_dct_16x16_sub16_add_10_neon:     3.41   7.29   3.35   4.19
      vp9_inv_dct_dct_32x32_sub1_add_10_neon:      4.52   8.35   3.83   6.40
      vp9_inv_dct_dct_32x32_sub2_add_10_neon:      5.86  13.19   6.14   7.04
      vp9_inv_dct_dct_32x32_sub16_add_10_neon:     4.29   8.11   4.59   5.06
      vp9_inv_dct_dct_32x32_sub32_add_10_neon:     3.31   5.70   3.56   3.84
      vp9_inv_wht_wht_4x4_sub4_add_10_neon:        1.89   2.80   1.82   1.97
      
      The speedup compared to the C functions is around 1.3 to 7x for the
      full transforms, even higher for the smaller subpartitions.
Signed-off-by: Martin Storsjö <martin@martin.st>
• arm: Add NEON optimizations for 10 and 12 bit vp9 MC · a4d4bad7
      Martin Storsjö authored
      This work is sponsored by, and copyright, Google.
      
      The plain pixel put/copy functions are used from the 8 bit version,
      for the double size (e.g. put16 uses ff_vp9_copy32_neon), and a new
      copy128 is added.
      
      Compared with the 8 bit version, the filters can no longer use the
      trick to accumulate in 16 bit with only saturation at the end, but now
      the accumulators need to be 32 bit. This avoids the need to keep track
      of which filter index is the largest though, reducing the size of the
      executable code for these filters.
      
      For the horizontal filters, we only do 4 or 8 pixels wide in parallel
      (while doing two rows at a time), since we don't have enough register
      space to filter 16 pixels wide.
      
      For the vertical filters, we still do 4 and 8 pixels in parallel just
      as in the 8 bit case, but we need to store the output after every 2
      rows instead of after every 4 rows.
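A scalar model of one output pixel makes the accumulator requirement concrete. The coefficient set here is hypothetical (though normalized to sum to 128 like the real VP9 filters) and the function name is made up.

```c
#include <stdint.h>

#define FILTER_BITS 7   /* VP9 8-tap filters are normalized to sum to 128 */

/* Hypothetical, illustrative coefficient set summing to 128; the real
   subpel filter tables live in the decoder. */
static const int16_t taps[8] = { -1, 3, -10, 35, 114, -13, 1, -1 };

static uint16_t clip_pixel(int32_t v, int bpp) {
    const int32_t max = (1 << bpp) - 1;
    return v < 0 ? 0 : v > max ? max : (uint16_t)v;
}

/* One output pixel of an 8-tap filter on high-bitdepth input. With
   16-bit samples a single product (e.g. 114 * 1023) already exceeds
   the int16_t range, so the accumulator must be 32 bits -- there is
   no room for the 8-bit version's saturation trick. */
uint16_t filter8_highbd(const uint16_t *src, int bpp) {
    int32_t sum = 0;
    for (int i = 0; i < 8; i++)
        sum += taps[i] * src[i];
    return clip_pixel((sum + (1 << (FILTER_BITS - 1))) >> FILTER_BITS, bpp);
}
```

Since the taps sum to 128, a flat input comes back unchanged after the rounding shift.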
      
      Examples of relative speedup compared to the C version, from checkasm:
                                     Cortex    A7     A8     A9    A53
      vp9_avg4_10bpp_neon:                   2.25   2.44   3.05   2.16
      vp9_avg8_10bpp_neon:                   3.66   8.48   3.86   3.50
      vp9_avg16_10bpp_neon:                  3.39   8.26   3.37   2.72
      vp9_avg32_10bpp_neon:                  4.03  10.20   4.07   3.42
      vp9_avg64_10bpp_neon:                  4.15  10.01   4.13   3.70
      vp9_avg_8tap_smooth_4h_10bpp_neon:     3.38   6.22   3.41   4.75
      vp9_avg_8tap_smooth_4hv_10bpp_neon:    3.89   6.39   4.30   5.32
      vp9_avg_8tap_smooth_4v_10bpp_neon:     5.32   9.73   6.34   7.31
      vp9_avg_8tap_smooth_8h_10bpp_neon:     4.45   9.40   4.68   6.87
      vp9_avg_8tap_smooth_8hv_10bpp_neon:    4.64   8.91   5.44   6.47
      vp9_avg_8tap_smooth_8v_10bpp_neon:     6.44  13.42   8.68   8.79
      vp9_avg_8tap_smooth_64h_10bpp_neon:    4.66   9.02   4.84   7.71
      vp9_avg_8tap_smooth_64hv_10bpp_neon:   4.61   9.14   4.92   7.10
      vp9_avg_8tap_smooth_64v_10bpp_neon:    6.90  14.13   9.57  10.41
      vp9_put4_10bpp_neon:                   1.33   1.46   2.09   1.33
      vp9_put8_10bpp_neon:                   1.57   3.42   1.83   1.84
      vp9_put16_10bpp_neon:                  1.55   4.78   2.17   1.89
      vp9_put32_10bpp_neon:                  2.06   5.35   2.14   2.30
      vp9_put64_10bpp_neon:                  3.00   2.41   1.95   1.66
      vp9_put_8tap_smooth_4h_10bpp_neon:     3.19   5.81   3.31   4.63
      vp9_put_8tap_smooth_4hv_10bpp_neon:    3.86   6.22   4.32   5.21
      vp9_put_8tap_smooth_4v_10bpp_neon:     5.40   9.77   6.08   7.21
      vp9_put_8tap_smooth_8h_10bpp_neon:     4.22   8.41   4.46   6.63
      vp9_put_8tap_smooth_8hv_10bpp_neon:    4.56   8.51   5.39   6.25
      vp9_put_8tap_smooth_8v_10bpp_neon:     6.60  12.43   8.17   8.89
      vp9_put_8tap_smooth_64h_10bpp_neon:    4.41   8.59   4.54   7.49
      vp9_put_8tap_smooth_64hv_10bpp_neon:   4.43   8.58   5.34   6.63
      vp9_put_8tap_smooth_64v_10bpp_neon:    7.26  13.92   9.27  10.92
      
      For the larger 8tap filters, the speedup vs C code is around 4-14x.
Signed-off-by: Martin Storsjö <martin@martin.st>
  7. 15 Nov, 2016 3 commits
• arm: vp9: Add NEON loop filters · 6bec60a6
      Martin Storsjö authored
      This work is sponsored by, and copyright, Google.
      
      The implementation tries to have smart handling of cases
      where no pixels need the full filtering for the 8/16 width
      filters, skipping both calculation and writeback of the
      unmodified pixels in those cases. The actual effect of this
      is hard to test with checkasm though, since it tests the
      full filtering, and the benefit depends on how many filtered
      blocks use the shortcut.
      
      Examples of relative speedup compared to the C version, from checkasm:
                                Cortex       A7     A8     A9    A53
      vp9_loop_filter_h_4_8_neon:          2.72   2.68   1.78   3.15
      vp9_loop_filter_h_8_8_neon:          2.36   2.38   1.70   2.91
      vp9_loop_filter_h_16_8_neon:         1.80   1.89   1.45   2.01
      vp9_loop_filter_h_16_16_neon:        2.81   2.78   2.18   3.16
      vp9_loop_filter_mix2_h_44_16_neon:   2.65   2.67   1.93   3.05
      vp9_loop_filter_mix2_h_48_16_neon:   2.46   2.38   1.81   2.85
      vp9_loop_filter_mix2_h_84_16_neon:   2.50   2.41   1.73   2.85
      vp9_loop_filter_mix2_h_88_16_neon:   2.77   2.66   1.96   3.23
      vp9_loop_filter_mix2_v_44_16_neon:   4.28   4.46   3.22   5.70
      vp9_loop_filter_mix2_v_48_16_neon:   3.92   4.00   3.03   5.19
      vp9_loop_filter_mix2_v_84_16_neon:   3.97   4.31   2.98   5.33
      vp9_loop_filter_mix2_v_88_16_neon:   3.91   4.19   3.06   5.18
      vp9_loop_filter_v_4_8_neon:          4.53   4.47   3.31   6.05
      vp9_loop_filter_v_8_8_neon:          3.58   3.99   2.92   5.17
      vp9_loop_filter_v_16_8_neon:         3.40   3.50   2.81   4.68
      vp9_loop_filter_v_16_16_neon:        4.66   4.41   3.74   6.02
      
      The speedup vs C code is around 2-6x. The numbers are quite
      inconclusive though, since the checkasm test runs multiple filterings
      on top of each other, so later rounds might end up with different
      codepaths (different decisions on which filter to apply, based
      on input pixel differences). Disabling the early-exit in the asm
      doesn't give a fair comparison either though, since the C code
only does the necessary calculations for each row.
      
      Based on START_TIMER/STOP_TIMER wrapping around a few individual
      functions, the speedup vs C code is around 4-9x.
      
      This is pretty similar in runtime to the corresponding routines
      in libvpx. (This is comparing vpx_lpf_vertical_16_neon,
      vpx_lpf_horizontal_edge_8_neon and vpx_lpf_horizontal_edge_16_neon
      to vp9_loop_filter_h_16_8_neon, vp9_loop_filter_v_16_8_neon
and vp9_loop_filter_v_16_16_neon - note that the naming of horizontal
      and vertical is flipped between the libraries.)
      
      In order to have stable, comparable numbers, the early exits in both
      asm versions were disabled, forcing the full filtering codepath.
      
                                 Cortex           A7      A8      A9     A53
      vp9_loop_filter_h_16_8_neon:             597.2   472.0   482.4   415.0
      libvpx vpx_lpf_vertical_16_neon:         626.0   464.5   470.7   445.0
      vp9_loop_filter_v_16_8_neon:             500.2   422.5   429.7   295.0
      libvpx vpx_lpf_horizontal_edge_8_neon:   586.5   414.5   415.6   383.2
      vp9_loop_filter_v_16_16_neon:            905.0   784.7   791.5   546.0
      libvpx vpx_lpf_horizontal_edge_16_neon: 1060.2   751.7   743.5   685.2
      
Our version is consistently faster on A7 and A53, marginally slower on
      A8, and sometimes faster, sometimes slower on A9 (marginally slower in all
      three tests in this particular test run).
      
      This is an adapted cherry-pick from libav commit
      dd299a2d.
Signed-off-by: Ronald S. Bultje <rsbultje@gmail.com>
• arm: vp9: Add NEON itxfm routines · b4dc7c34
      Martin Storsjö authored
      This work is sponsored by, and copyright, Google.
      
      For the transforms up to 8x8, we can fit all the data (including
      temporaries) in registers and just do a straightforward transform
      of all the data. For 16x16, we do a transform of 4x16 pixels in
      4 slices, using a temporary buffer. For 32x32, we transform 4x32
      pixels at a time, in two steps of 4x16 pixels each.
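The two-pass slicing for 16x16 can be sketched in C, with a trivial butterfly standing in for the real 1-D iDCT/iADST (function names are made up): the first pass transforms columns in slices of 4, storing the transposed result in a temporary buffer, and the second pass then does the same over the rows.

```c
#include <stdint.h>

/* Trivial stand-in for the real 1-D inverse transform. */
static void butterfly16(int32_t v[16]) {
    for (int i = 0; i < 8; i++) {
        int32_t a = v[i], b = v[15 - i];
        v[i]      = a + b;
        v[15 - i] = a - b;
    }
}

/* One pass: transform each column, 4 slices of 4 columns each,
   storing the result transposed into out. */
static void pass(int32_t in[16][16], int32_t out[16][16]) {
    for (int slice = 0; slice < 4; slice++)
        for (int c = 4 * slice; c < 4 * slice + 4; c++) {
            int32_t col[16];
            for (int r = 0; r < 16; r++)
                col[r] = in[r][c];
            butterfly16(col);
            for (int r = 0; r < 16; r++)
                out[c][r] = col[r];   /* transposed store */
        }
}

void transform16x16(int32_t blk[16][16]) {
    int32_t tmp[16][16];
    pass(blk, tmp);   /* columns -> temporary buffer (transposed) */
    pass(tmp, blk);   /* rows, i.e. columns of the transposed data */
}
```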
      
      Examples of relative speedup compared to the C version, from checkasm:
                               Cortex       A7     A8     A9    A53
      vp9_inv_adst_adst_4x4_add_neon:     3.39   5.83   4.17   4.01
      vp9_inv_adst_adst_8x8_add_neon:     3.79   4.86   4.23   3.98
      vp9_inv_adst_adst_16x16_add_neon:   3.33   4.36   4.11   4.16
      vp9_inv_dct_dct_4x4_add_neon:       4.06   6.16   4.59   4.46
      vp9_inv_dct_dct_8x8_add_neon:       4.61   6.01   4.98   4.86
      vp9_inv_dct_dct_16x16_add_neon:     3.35   3.44   3.36   3.79
      vp9_inv_dct_dct_32x32_add_neon:     3.89   3.50   3.79   4.42
      vp9_inv_wht_wht_4x4_add_neon:       3.22   5.13   3.53   3.77
      
      Thus, the speedup vs C code is around 3-6x.
      
      This is mostly marginally faster than the corresponding routines
      in libvpx on most cores, tested with their 32x32 idct (compared to
vpx_idct32x32_1024_add_neon). These numbers are slightly in libvpx's
favour since their version doesn't clear the input buffer like ours
does (although the effect of that on the total runtime is probably
negligible).
      
                                 Cortex       A7       A8       A9      A53
      vp9_inv_dct_dct_32x32_add_neon:    18436.8  16874.1  14235.1  11988.9
      libvpx vpx_idct32x32_1024_add_neon 20789.0  13344.3  15049.9  13030.5
      
Only on the Cortex A8 is the libvpx function faster. On the other
cores, ours is slightly faster even though it has source block
clearing integrated.
      
      This is an adapted cherry-pick from libav commits
      a67ae670 and
      52d196fb.
Signed-off-by: Ronald S. Bultje <rsbultje@gmail.com>
• arm: vp9: Add NEON optimizations of VP9 MC functions · 68caef9d
      Martin Storsjö authored
      This work is sponsored by, and copyright, Google.
      
      The filter coefficients are signed values, where the product of the
      multiplication with one individual filter coefficient doesn't
      overflow a 16 bit signed value (the largest filter coefficient is
      127). But when the products are accumulated, the resulting sum can
      overflow the 16 bit signed range. Instead of accumulating in 32 bit,
      we accumulate the largest product (either index 3 or 4) last with a
      saturated addition.
      
      (The VP8 MC asm does something similar, but slightly simpler, by
      accumulating each half of the filter separately. In the VP9 MC
      filters, each half of the filter can also overflow though, so the
      largest component has to be handled individually.)
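A scalar model of the trick (hypothetical coefficients, normalized to 128 like the real filters; the function names are made up, and sat_add_s16 mirrors what vqadd.s16 does per lane):

```c
#include <stdint.h>

static int16_t sat_add_s16(int16_t a, int16_t b) {
    int32_t s = (int32_t)a + b;   /* saturate like vqadd.s16 */
    return s > INT16_MAX ? INT16_MAX : s < INT16_MIN ? INT16_MIN : (int16_t)s;
}

static uint8_t clip_u8(int v) { return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v; }

/* Hypothetical coefficient set summing to 128. */
static const int16_t taps[8] = { -1, 3, -10, 35, 114, -13, 1, -1 };

uint8_t filter8_sat(const uint8_t *src) {
    const int big = 4;            /* index of the largest coefficient */
    int16_t sum = 0;
    for (int i = 0; i < 8; i++)
        if (i != big)
            sum += taps[i] * src[i];   /* these stay within 16 bits */
    /* Add the largest product last, with saturation instead of
       widening the whole accumulation to 32 bits. */
    sum = sat_add_s16(sum, taps[big] * src[big]);
    return clip_u8((sum + 64) >> 7);
}
```

With all taps accumulated in plain 16-bit arithmetic, a bright input such as all-255 pixels would overflow; deferring the largest product to a single saturating add keeps the result correct after the rounding shift.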
      
      Examples of relative speedup compared to the C version, from checkasm:
                             Cortex      A7     A8     A9    A53
      vp9_avg4_neon:                   1.71   1.15   1.42   1.49
      vp9_avg8_neon:                   2.51   3.63   3.14   2.58
      vp9_avg16_neon:                  2.95   6.76   3.01   2.84
      vp9_avg32_neon:                  3.29   6.64   2.85   3.00
      vp9_avg64_neon:                  3.47   6.67   3.14   2.80
      vp9_avg_8tap_smooth_4h_neon:     3.22   4.73   2.76   4.67
      vp9_avg_8tap_smooth_4hv_neon:    3.67   4.76   3.28   4.71
      vp9_avg_8tap_smooth_4v_neon:     5.52   7.60   4.60   6.31
      vp9_avg_8tap_smooth_8h_neon:     6.22   9.04   5.12   9.32
      vp9_avg_8tap_smooth_8hv_neon:    6.38   8.21   5.72   8.17
      vp9_avg_8tap_smooth_8v_neon:     9.22  12.66   8.15  11.10
      vp9_avg_8tap_smooth_64h_neon:    7.02  10.23   5.54  11.58
      vp9_avg_8tap_smooth_64hv_neon:   6.76   9.46   5.93   9.40
      vp9_avg_8tap_smooth_64v_neon:   10.76  14.13   9.46  13.37
      vp9_put4_neon:                   1.11   1.47   1.00   1.21
      vp9_put8_neon:                   1.23   2.17   1.94   1.48
      vp9_put16_neon:                  1.63   4.02   1.73   1.97
      vp9_put32_neon:                  1.56   4.92   2.00   1.96
      vp9_put64_neon:                  2.10   5.28   2.03   2.35
      vp9_put_8tap_smooth_4h_neon:     3.11   4.35   2.63   4.35
      vp9_put_8tap_smooth_4hv_neon:    3.67   4.69   3.25   4.71
      vp9_put_8tap_smooth_4v_neon:     5.45   7.27   4.49   6.52
      vp9_put_8tap_smooth_8h_neon:     5.97   8.18   4.81   8.56
      vp9_put_8tap_smooth_8hv_neon:    6.39   7.90   5.64   8.15
      vp9_put_8tap_smooth_8v_neon:     9.03  11.84   8.07  11.51
      vp9_put_8tap_smooth_64h_neon:    6.78   9.48   4.88  10.89
      vp9_put_8tap_smooth_64hv_neon:   6.99   8.87   5.94   9.56
      vp9_put_8tap_smooth_64v_neon:   10.69  13.30   9.43  14.34
      
      For the larger 8tap filters, the speedup vs C code is around 5-14x.
      
      This is significantly faster than libvpx's implementation of the same
      functions, at least when comparing the put_8tap_smooth_64 functions
      (compared to vpx_convolve8_horiz_neon and vpx_convolve8_vert_neon from
      libvpx).
      
      Absolute runtimes from checkasm:
                                Cortex      A7        A8        A9       A53
      vp9_put_8tap_smooth_64h_neon:    20150.3   14489.4   19733.6   10863.7
      libvpx vpx_convolve8_horiz_neon: 52623.3   19736.4   21907.7   25027.7
      
      vp9_put_8tap_smooth_64v_neon:    14455.0   12303.9   13746.4    9628.9
      libvpx vpx_convolve8_vert_neon:  42090.0   17706.2   17659.9   16941.2
      
      Thus, on the A9, the horizontal filter is only marginally faster than
      libvpx, while our version is significantly faster on the other cores,
      and the vertical filter is significantly faster on all cores. The
      difference is especially large on the A7.
      
      The libvpx implementation does the accumulation in 32 bit, which
      probably explains most of the differences.
      
      This is an adapted cherry-pick from libav commits
      ffbd1d2b,
      392caa65,
      557c1675 and
      11623217.
Signed-off-by: Ronald S. Bultje <rsbultje@gmail.com>
  8. 11 Nov, 2016 2 commits
• arm: vp9: Add NEON loop filters · dd299a2d
      Martin Storsjö authored
      This work is sponsored by, and copyright, Google.
      
      The implementation tries to have smart handling of cases
      where no pixels need the full filtering for the 8/16 width
      filters, skipping both calculation and writeback of the
      unmodified pixels in those cases. The actual effect of this
      is hard to test with checkasm though, since it tests the
      full filtering, and the benefit depends on how many filtered
      blocks use the shortcut.
      
      Examples of relative speedup compared to the C version, from checkasm:
                                Cortex       A7     A8     A9    A53
      vp9_loop_filter_h_4_8_neon:          2.72   2.68   1.78   3.15
      vp9_loop_filter_h_8_8_neon:          2.36   2.38   1.70   2.91
      vp9_loop_filter_h_16_8_neon:         1.80   1.89   1.45   2.01
      vp9_loop_filter_h_16_16_neon:        2.81   2.78   2.18   3.16
      vp9_loop_filter_mix2_h_44_16_neon:   2.65   2.67   1.93   3.05
      vp9_loop_filter_mix2_h_48_16_neon:   2.46   2.38   1.81   2.85
      vp9_loop_filter_mix2_h_84_16_neon:   2.50   2.41   1.73   2.85
      vp9_loop_filter_mix2_h_88_16_neon:   2.77   2.66   1.96   3.23
      vp9_loop_filter_mix2_v_44_16_neon:   4.28   4.46   3.22   5.70
      vp9_loop_filter_mix2_v_48_16_neon:   3.92   4.00   3.03   5.19
      vp9_loop_filter_mix2_v_84_16_neon:   3.97   4.31   2.98   5.33
      vp9_loop_filter_mix2_v_88_16_neon:   3.91   4.19   3.06   5.18
      vp9_loop_filter_v_4_8_neon:          4.53   4.47   3.31   6.05
      vp9_loop_filter_v_8_8_neon:          3.58   3.99   2.92   5.17
      vp9_loop_filter_v_16_8_neon:         3.40   3.50   2.81   4.68
      vp9_loop_filter_v_16_16_neon:        4.66   4.41   3.74   6.02
      
      The speedup vs C code is around 2-6x. The numbers are quite
      inconclusive though, since the checkasm test runs multiple filterings
      on top of each other, so later rounds might end up with different
      codepaths (different decisions on which filter to apply, based
      on input pixel differences). Disabling the early-exit in the asm
      doesn't give a fair comparison either though, since the C code
only does the necessary calculations for each row.
      
      Based on START_TIMER/STOP_TIMER wrapping around a few individual
      functions, the speedup vs C code is around 4-9x.
      
      This is pretty similar in runtime to the corresponding routines
      in libvpx. (This is comparing vpx_lpf_vertical_16_neon,
      vpx_lpf_horizontal_edge_8_neon and vpx_lpf_horizontal_edge_16_neon
      to vp9_loop_filter_h_16_8_neon, vp9_loop_filter_v_16_8_neon
and vp9_loop_filter_v_16_16_neon - note that the naming of horizontal
      and vertical is flipped between the libraries.)
      
      In order to have stable, comparable numbers, the early exits in both
      asm versions were disabled, forcing the full filtering codepath.
      
                                 Cortex           A7      A8      A9     A53
      vp9_loop_filter_h_16_8_neon:             597.2   472.0   482.4   415.0
      libvpx vpx_lpf_vertical_16_neon:         626.0   464.5   470.7   445.0
      vp9_loop_filter_v_16_8_neon:             500.2   422.5   429.7   295.0
      libvpx vpx_lpf_horizontal_edge_8_neon:   586.5   414.5   415.6   383.2
      vp9_loop_filter_v_16_16_neon:            905.0   784.7   791.5   546.0
      libvpx vpx_lpf_horizontal_edge_16_neon: 1060.2   751.7   743.5   685.2
      
Our version is consistently faster on A7 and A53, marginally slower on
      A8, and sometimes faster, sometimes slower on A9 (marginally slower in all
      three tests in this particular test run).
Signed-off-by: Martin Storsjö <martin@martin.st>
• arm: vp9: Add NEON itxfm routines · a67ae670
      Martin Storsjö authored
      This work is sponsored by, and copyright, Google.
      
      For the transforms up to 8x8, we can fit all the data (including
      temporaries) in registers and just do a straightforward transform
      of all the data. For 16x16, we do a transform of 4x16 pixels in
      4 slices, using a temporary buffer. For 32x32, we transform 4x32
      pixels at a time, in two steps of 4x16 pixels each.
      
      Examples of relative speedup compared to the C version, from checkasm:
                               Cortex       A7     A8     A9    A53
      vp9_inv_adst_adst_4x4_add_neon:     3.39   5.83   4.17   4.01
      vp9_inv_adst_adst_8x8_add_neon:     3.79   4.86   4.23   3.98
      vp9_inv_adst_adst_16x16_add_neon:   3.33   4.36   4.11   4.16
      vp9_inv_dct_dct_4x4_add_neon:       4.06   6.16   4.59   4.46
      vp9_inv_dct_dct_8x8_add_neon:       4.61   6.01   4.98   4.86
      vp9_inv_dct_dct_16x16_add_neon:     3.35   3.44   3.36   3.79
      vp9_inv_dct_dct_32x32_add_neon:     3.89   3.50   3.79   4.42
      vp9_inv_wht_wht_4x4_add_neon:       3.22   5.13   3.53   3.77
      
      Thus, the speedup vs C code is around 3-6x.
      
      This is mostly marginally faster than the corresponding routines
      in libvpx on most cores, tested with their 32x32 idct (compared to
vpx_idct32x32_1024_add_neon). These numbers are slightly in libvpx's
favour since their version doesn't clear the input buffer like ours
does (although the effect of that on the total runtime is probably
negligible).
      
                                 Cortex       A7       A8       A9      A53
      vp9_inv_dct_dct_32x32_add_neon:    18436.8  16874.1  14235.1  11988.9
      libvpx vpx_idct32x32_1024_add_neon 20789.0  13344.3  15049.9  13030.5
      
Only on the Cortex A8 is the libvpx function faster. On the other
cores, ours is slightly faster even though it has source block
clearing integrated.
Signed-off-by: Martin Storsjö <martin@martin.st>
  9. 03 Nov, 2016 1 commit
• arm: vp9: Add NEON optimizations of VP9 MC functions · ffbd1d2b
      Martin Storsjö authored
      This work is sponsored by, and copyright, Google.
      
      The filter coefficients are signed values, where the product of the
      multiplication with one individual filter coefficient doesn't
      overflow a 16 bit signed value (the largest filter coefficient is
      127). But when the products are accumulated, the resulting sum can
      overflow the 16 bit signed range. Instead of accumulating in 32 bit,
      we accumulate the largest product (either index 3 or 4) last with a
      saturated addition.
      
      (The VP8 MC asm does something similar, but slightly simpler, by
      accumulating each half of the filter separately. In the VP9 MC
      filters, each half of the filter can also overflow though, so the
      largest component has to be handled individually.)
      
      Examples of relative speedup compared to the C version, from checkasm:
                             Cortex      A7     A8     A9    A53
      vp9_avg4_neon:                   1.71   1.15   1.42   1.49
      vp9_avg8_neon:                   2.51   3.63   3.14   2.58
      vp9_avg16_neon:                  2.95   6.76   3.01   2.84
      vp9_avg32_neon:                  3.29   6.64   2.85   3.00
      vp9_avg64_neon:                  3.47   6.67   3.14   2.80
      vp9_avg_8tap_smooth_4h_neon:     3.22   4.73   2.76   4.67
      vp9_avg_8tap_smooth_4hv_neon:    3.67   4.76   3.28   4.71
      vp9_avg_8tap_smooth_4v_neon:     5.52   7.60   4.60   6.31
      vp9_avg_8tap_smooth_8h_neon:     6.22   9.04   5.12   9.32
      vp9_avg_8tap_smooth_8hv_neon:    6.38   8.21   5.72   8.17
      vp9_avg_8tap_smooth_8v_neon:     9.22  12.66   8.15  11.10
      vp9_avg_8tap_smooth_64h_neon:    7.02  10.23   5.54  11.58
      vp9_avg_8tap_smooth_64hv_neon:   6.76   9.46   5.93   9.40
      vp9_avg_8tap_smooth_64v_neon:   10.76  14.13   9.46  13.37
      vp9_put4_neon:                   1.11   1.47   1.00   1.21
      vp9_put8_neon:                   1.23   2.17   1.94   1.48
      vp9_put16_neon:                  1.63   4.02   1.73   1.97
      vp9_put32_neon:                  1.56   4.92   2.00   1.96
      vp9_put64_neon:                  2.10   5.28   2.03   2.35
      vp9_put_8tap_smooth_4h_neon:     3.11   4.35   2.63   4.35
      vp9_put_8tap_smooth_4hv_neon:    3.67   4.69   3.25   4.71
      vp9_put_8tap_smooth_4v_neon:     5.45   7.27   4.49   6.52
      vp9_put_8tap_smooth_8h_neon:     5.97   8.18   4.81   8.56
      vp9_put_8tap_smooth_8hv_neon:    6.39   7.90   5.64   8.15
      vp9_put_8tap_smooth_8v_neon:     9.03  11.84   8.07  11.51
      vp9_put_8tap_smooth_64h_neon:    6.78   9.48   4.88  10.89
      vp9_put_8tap_smooth_64hv_neon:   6.99   8.87   5.94   9.56
      vp9_put_8tap_smooth_64v_neon:   10.69  13.30   9.43  14.34
      
      For the larger 8tap filters, the speedup vs C code is around 5-14x.
      
      This is significantly faster than libvpx's implementation of the same
      functions, at least when comparing the put_8tap_smooth_64 functions
      (compared to vpx_convolve8_horiz_neon and vpx_convolve8_vert_neon from
      libvpx).
      
      Absolute runtimes from checkasm:
                                Cortex      A7        A8        A9       A53
      vp9_put_8tap_smooth_64h_neon:    20150.3   14489.4   19733.6   10863.7
      libvpx vpx_convolve8_horiz_neon: 52623.3   19736.4   21907.7   25027.7
      
      vp9_put_8tap_smooth_64v_neon:    14455.0   12303.9   13746.4    9628.9
      libvpx vpx_convolve8_vert_neon:  42090.0   17706.2   17659.9   16941.2
      
      Thus, on the A9, the horizontal filter is only marginally faster than
      libvpx, while our version is significantly faster on the other cores,
      and the vertical filter is significantly faster on all cores. The
      difference is especially large on the A7.
      
      The libvpx implementation does the accumulation in 32 bit, which
      probably explains most of the differences.
Signed-off-by: Martin Storsjö <martin@martin.st>
  10. 07 Apr, 2016 1 commit
• build: miscellaneous cosmetics · 01621202
      Diego Biurrun authored
      Restore alphabetical order in lists, break overly long lines, do some
      prettyprinting, add some explanatory section comments, group parts
      together that belong together logically.
  11. 01 Mar, 2016 1 commit
  12. 26 Feb, 2016 1 commit
  13. 19 Feb, 2016 1 commit
  14. 31 Jan, 2016 2 commits
  15. 25 Jan, 2016 1 commit
  16. 17 Jul, 2015 4 commits
  17. 12 Mar, 2015 1 commit
  18. 28 Feb, 2015 2 commits
  19. 25 Feb, 2015 1 commit
  20. 15 Feb, 2015 1 commit
  21. 08 Feb, 2015 1 commit
  22. 05 Feb, 2015 1 commit
  23. 15 Aug, 2014 1 commit
  24. 04 Aug, 2014 1 commit
  25. 17 Jul, 2014 1 commit
  26. 16 Jul, 2014 1 commit
  27. 09 Jul, 2014 1 commit
  28. 06 Jul, 2014 1 commit
  29. 30 Jun, 2014 1 commit
  30. 22 Jun, 2014 1 commit