    dnn-layer-mathbinary-test: Fix tests for cases with extra intermediate precision · f4d8fad8
    Martin Storsjö authored
    This fixes tests on 32 bit x86 mingw with clang, which uses x87
    fpu by default.
    
    In this setup, while the get_expected function is declared to
    return float, the compiler is (especially given the optimization
    flags set) free to keep the intermediate values (in this case,
    the return value from the inlined function) in higher precision.
    
    As a result, 7.28 (which, as a float, is actually
    7.2800002098), multiplied by 100, is 728.000000 when truly
    rounded to a 32-bit float, but 728.000021 when kept at higher
    intermediate precision.
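    
    A minimal sketch of the effect (not the test code itself; the
    get_expected signature and the values below are assumed for
    illustration only):
    
        #include <stdio.h>
    
        static float get_expected(float input, float scale)
        {
            /* On x87, this product may stay in extended precision when
             * the function is inlined, instead of being rounded to a
             * 32-bit float at the return. */
            return input * scale;
        }
    
        int main(void)
        {
            volatile float forced = 7.28f * 100.0f;    /* stored as a float: 728.000000 */
            float kept = get_expected(7.28f, 100.0f);  /* may compare as 728.000021 */
            printf("%f vs %f\n", forced, kept);
            return 0;
        }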
    
    For the multiplication case, a more suitable epsilon would be
    e.g. 2*FLT_EPSILON*fabs(expected_output), but for now just
    increase the current hardcoded threshold.
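    
    A sketch of such a relative tolerance check (close_enough is a
    hypothetical helper for illustration, not a function in the test):
    
        #include <float.h>
        #include <math.h>
    
        /* Scale the allowed error to the magnitude of the expected value
         * instead of using a fixed absolute threshold. */
        static int close_enough(float expected, float actual)
        {
            return fabsf(expected - actual) <= 2 * FLT_EPSILON * fabsf(expected);
        }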
    Signed-off-by: Martin Storsjö <martin@martin.st>