[llvm-dev] Floating point variance in the test suite

Renato Golin via llvm-dev llvm-dev at lists.llvm.org
Thu Jun 24 10:05:51 PDT 2021


Hi Andrew,

Sorry I didn't see this before. My reply to bugzilla didn't take into
account the contents here, so it's probably moot.

On Thu, 24 Jun 2021 at 17:22, Kaylor, Andrew <andrew.kaylor at intel.com>
wrote:

> I don't agree that the result doesn't matter for benchmarks. It seems that
> the benchmarks are some of the best tests we have for exercising
> optimizations like this and if the result is wrong by a wide enough margin
> that could indicate a problem. But I understand Renato’s point that the
> performance measurement is the primary purpose of the benchmarks, and some
> numeric differences should be acceptable.
>
Yes, that's the point I was trying to make. You can't run a benchmark
without understanding what it does and what the results mean. Small
variations can be fine in one benchmark and totally unacceptable in others.
However, what we have in the test-suite are benchmarks-turned-tests and
tests-turned-benchmarks, where the exact output matters a lot less; what
matters is when it's totally different (ex. error messages, NaNs). My
comment applied only to the subset we have in the test-suite, not to
benchmarks in general.

If you truly want to benchmark LLVM, you should really be running specific
benchmarks in specific ways and looking very carefully at the results, not
relying on the test-suite.

In the previous discussion of this issue, Sebastian Pop proposed having the
> program run twice -- once with "precise" FP results, and once with the
> optimizations being tested. For the Blur test, the floating point results
> are only intermediate and the final (printed) results are a matrix of 8-bit
> integers. I’m not sure what would constitute an acceptable result for this
> program. For any given value, an off-by-one result seems acceptable, but if
> there are too many off-by-one values that would probably indicate a
> problem. In the Polybench tests, Sebastian modified the tests to do a
> comparison within the test itself. I don’t know if that’s practical for
> Blur or if it would be better to have two runs and use a custom comparison
> tool.
>
Given the point above about the difference between benchmarks and
test-suite benchmarks, I think having the comparisons inside the program
itself is probably the best way forward. I should have mentioned that in my
list, as I have done that, too, in the test-suite.

The main problem with that, for benchmarks, is that the comparisons can add
substantial runtime and change the profile of the test. But that can easily
be fixed by iterating a few more times on the kernel (from the ground
state).
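
To make that concrete, here is a minimal sketch of the shape I mean, with a
stand-in kernel rather than real test-suite code: the check runs once,
outside the hot loop, and the kernel gets a couple of extra iterations so
the cost of the check stays in the noise.

#include <stdio.h>

enum { N = 1 << 20, ITERATIONS = 12 };  /* a couple more than before */

static double data[N];

/* Stand-in for the real kernel under test. */
static void kernel(void) {
  for (int i = 0; i < N; ++i)
    data[i] = data[i] * 0.5 + 1.0;
}

int main(void) {
  /* Timed region: only the kernel runs here. */
  for (int i = 0; i < ITERATIONS; ++i)
    kernel();

  /* One cheap self-check at the end, outside the timed region. A real
     test would restart from the ground state and compare against a
     reference run instead of this trivial sanity check. */
  double sum = 0.0;
  for (int i = 0; i < N; ++i)
    sum += data[i];
  printf("%s\n", sum > 0.0 ? "PASS" : "FAIL");
  return 0;
}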

What we want is to make sure the program doesn't generate garbage, but
garbage means different things for different tests, and having an external
tool that knows what each of the tests considers garbage is not practical.

The way I see it, there are only three types of comparison:
 * Text comparison, for tests that must be identical on every platform.
 * Hash comparison, for those above where the output is too big.
 * FP-comparison, for those where the text and integers must be identical
but the FP numbers can vary a bit.
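
For the FP-comparison case, the rule I have in mind is roughly the
following. This is only an illustrative sketch, not the actual fpcmp code,
and the helper names and tolerance handling are made up:

#include <math.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* A token counts as floating-point only if it has a '.' or an exponent;
   hex, octal, binary and plain integers are deliberately left as text. */
static bool looks_like_fp(const char *tok) {
  if (strncmp(tok, "0x", 2) == 0 || strncmp(tok, "0X", 2) == 0)
    return false;
  return strpbrk(tok, ".eE") != NULL &&
         strspn(tok, "+-.0123456789eE") == strlen(tok);
}

/* FP tokens match within a relative tolerance; everything else (text,
   integers, hashes) must be byte-identical. */
static bool tokens_match(const char *a, const char *b, double rel_tol) {
  if (looks_like_fp(a) && looks_like_fp(b)) {
    double x = strtod(a, NULL), y = strtod(b, NULL);
    double mag = fmax(fabs(x), fabs(y));
    return fabs(x - y) <= rel_tol * fmax(mag, 1.0);
  }
  return strcmp(a, b) == 0;
}

int main(void) {
  /* "3.14159" vs "3.14160" match within 1e-4; hex strings must be equal. */
  printf("%d %d\n", tokens_match("3.14159", "3.14160", 1e-4),
                    tokens_match("0x1A", "0x1B", 1e-4));
  return 0;
}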

The weird behaviour of fpcmp looking at hashes and comparing the numbers in
them is a bug, IMO. As is comparing integers and allowing wiggle room.

Using fpcmp for comparing text is fine, because what it does with text and
integers should be exactly the same as diff, and if the text has FP output,
then that output can also change depending on precision, and it's mostly
fine if it does.

To me, the path forward is to fix the tests that break, using one of the
alternatives above, and to make sure fpcmp doesn't identify hex, octal,
binary or integer values as floating-point, but treats them all as text.

For the Blur test, a quick comparison between the two matrices inside the
program (with appropriate wiggle room) would suffice.
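
As a sketch of what that could look like (the function name and the
off-by-one threshold below are hypothetical, not something already in the
test): any difference larger than one fails outright, and too many
off-by-one values fails as well, matching the concern quoted above.

#include <stdio.h>
#include <stdlib.h>

/* Returns 0 if the optimised output matches the reference within the
   tolerance described above, 1 otherwise. */
static int check_images(const unsigned char *ref, const unsigned char *opt,
                        int n, int max_off_by_one) {
  int off_by_one = 0;
  for (int i = 0; i < n; ++i) {
    int diff = abs((int)ref[i] - (int)opt[i]);
    if (diff > 1)
      return 1;                  /* any larger difference is a hard failure */
    off_by_one += (diff == 1);
  }
  return off_by_one > max_off_by_one;  /* too many small diffs is also bad */
}

int main(void) {
  unsigned char ref[] = {10, 20, 30, 40};
  unsigned char opt[] = {10, 21, 30, 40};  /* one off-by-one value */
  printf("%s\n", check_images(ref, opt, 4, 1) ? "FAIL" : "PASS");
  return 0;
}

The program would then print only PASS/FAIL (or a hash of the reference
matrix), so the expected output stays byte-identical across platforms and
the existing text/hash comparison keeps working.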
