[llvm-dev] Floating point variance in the test suite

Renato Golin via llvm-dev llvm-dev at lists.llvm.org
Fri Jun 25 01:22:42 PDT 2021


On Thu, 24 Jun 2021 at 19:13, Kaylor, Andrew <andrew.kaylor at intel.com>
wrote:

> This gets at my questions about which benchmarks are important and who
> considers them to be important. I expect a lot of us have non-public
> testing going on for the benchmarks that we consider to be critical. I see
> the test suite benchmarks as more of a guard rail to catch changes that
> degrade performance early and in a way that is convenient for other
> community members to address. So, to me, the benchmarks don’t have to be
> perfect measures. On the other hand, if we just disable things like
> fast-math and FMA, the benchmarks won’t tell us anything at all about the
> impact of changes touching those optimizations.
>

Right, that's why we ended up with the complicated fp-contract situation.

Moreover, the benchmarks we have in the test-suite weren't carefully tuned
the way commercial ones are. Worse still, they have to "compare similar"
across a large number of platforms, some with potentially unstable output.

To make those benchmarks stable we'd need to:
 * Understand what the program is trying to generate
 * Only output relevant information (not a dump of a huge intermediate
matrix)
 * Make sure the output is stable under varying conditions on different
architectures

Only then will comparing outputs, even with fpcmp, be meaningful. Right
now, we're at the stage of "let's just hope the output is the same", which
makes discussions like this one a recurring theme.
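
For reference, the kind of check we're talking about is roughly the one
below. This is an illustrative sketch, not fpcmp's actual code, and the
thresholds would have to be chosen per test, not the made-up values here:

#include <math.h>
#include <stdbool.h>

/* Illustrative sketch of a tolerance-based numeric comparison, roughly
   the kind of check a tool like fpcmp performs on two outputs: accept
   a difference if it is within an absolute OR a relative tolerance.
   The thresholds are placeholders, not fpcmp's defaults. */
bool values_close(double expected, double actual,
                  double abs_tol, double rel_tol) {
  double diff = fabs(expected - actual);
  if (diff <= abs_tol)
    return true;
  double mag = fmax(fabs(expected), fabs(actual));
  return diff <= rel_tol * mag;
}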


>  So, I think we need a way for each test to indicate whether it can be run
> in value-unsafe modes, to set different tolerances for different modes, and
> to be built to run differently in different modes. For example, if I’m
> running the Blur test in a value safe mode, there’s no need to perform an
> internal comparison and a hashed output comparison can be used. If I’m
> running with fp-contract=on or fast-math, I’d want an internal value check
> but those modes might have different tolerances. Finally, I might want a
> way to run the test as a benchmark with either fp-contract=on or fast-math
> without any check of the results in order to get better performance data.
>

Yes, I think we may need different comparisons for different runs. For
example, on a test run the FP delta must be really small, but on a
benchmark run it can be larger, or we can even ignore some numbers that
we know don't change the overall result.

I believe this logic has to live in each benchmark's code, not in the
comparison tool, which has to be as dumb as possible.
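
To make that concrete, here's a rough sketch of a benchmark doing its own
check, with the tolerance picked at build time. The CHECK_* macro names
are invented for illustration; the test-suite doesn't define them today:

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical per-mode tolerances, selected by the build. */
#if defined(CHECK_FAST_MATH)
  #define RESULT_TOLERANCE 1e-3   /* value-unsafe build: be lenient */
#elif defined(CHECK_FP_CONTRACT)
  #define RESULT_TOLERANCE 1e-6   /* FMA can change rounding slightly */
#else
  #define RESULT_TOLERANCE 0.0    /* value-safe build: expect exact match */
#endif

/* The benchmark validates its own result instead of relying on the
   comparison tool to guess what an acceptable difference is. */
void check_result(double expected, double actual) {
  double rel = fabs(expected - actual) / fmax(fabs(expected), 1.0);
  if (rel > RESULT_TOLERANCE) {
    fprintf(stderr, "FAIL: expected %.17g, got %.17g\n", expected, actual);
    exit(1);
  }
}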

> As for updating the tests, I’m going to bring up test ownership again
> because I don’t know what constitutes acceptable variation for any given
> test. I could take a guess at it, but if I get it wrong, my wrong guess
> becomes semi-enshrined in the test suite and may not be noticed by people
> who would know better.
>

Unfortunately, the tests don't have owners like that.

There are people who know more about certain tests than others, but in the
past I have added tests and benchmarks to the test-suite without really
knowing a lot about them, and I believe many other people have done so, too.

There's no way to find out who knows more about a particular test other
than asking, so I think the easiest way forward is to send an RFC to the
list with the proposed change for each benchmark we want to touch. If no
one thinks it's a bad idea, we go with it. If someone downstream raises
issues, reverting a commit that changes a single test/benchmark is easier
than reverting one that touches a lot of different tests.

> That’s an accumulated result inside four nested loops. It looks like in
> practice the differently rounded results with FMA must be getting averaged
> out most of the time, which makes sense assuming a relatively consistent
> magnitude of values, but I’d have to study the algorithm to understand
> exactly what’s happening and how to check the results reliably for a range
> of inputs. I think that’s too much to expect from someone who is just
> making some optimization change that triggers a failure in the test.
>

Absolutely agreed.
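
For anyone following along, the effect being described is basically the
one below: with contraction, each multiply-add rounds once instead of
twice, so a long accumulation can drift by a few ULPs. This is
illustrative only, not the Blur kernel:

#include <math.h>
#include <stdio.h>

int main(void) {
  double plain = 0.0, fused = 0.0;
  for (int i = 1; i <= 1000000; ++i) {
    double a = 1.0 / i, b = 1.0 + 1.0 / (i + 1);
    plain = plain + a * b;      /* two roundings per step (assuming the
                                   compiler doesn't contract this itself,
                                   e.g. built with -ffp-contract=off) */
    fused = fma(a, b, fused);   /* one rounding per step */
  }
  /* The two sums usually differ in the last few bits. */
  printf("plain = %.17g\nfused = %.17g\ndiff = %.17g\n",
         plain, fused, plain - fused);
  return 0;
}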

The work that Melanie, Sebastian and many others have done on improving the
test-suite is important but thankless.

Unfortunately, no one has time to spend more than a month or so (at a
stretch) cleaning up the test-suite, so it gets some attention and then fades.

Many years ago I spent a good few months on it because running on Arm would
yield the wildest differences in output, and my goal was to make Arm a
first-class citizen in LLVM, so it had to run on Arm buildbots without
noise.

As you'd expect, I only touched the tests that broke on Arm (at the time,
too many!), because cleaning up the test-suite wasn't my primary goal. Since
then, many people have done the same for new targets, new optimisations,
etc., but always with the test-suite as a secondary goal, which brings us here.

There were notable exceptions, for example when the Benchmark mode was
added, when LNT had its interface revamped with good statistics,
Sebastian's fp-contract change, and now Melanie's work. But they were few
and far between.

We had a GSoC project to make the test-suite robust, but honestly, that's
not the sexiest GSoC project ever, so we're still waiting for some kind
soul to go through the painful task of understanding everything and
validating the output stability.


>  In the case that led me to start the discussion this week, Melanie was
> just making the behavior of clang match its documentation. She didn’t even
> change any optimizations. The failures that were exposed would always have
> happened if certain compilation options were used. Naturally, she just
> wanted to not turn any buildbots red. Then I started looking at the failing
> tests and ended up opening this can of worms.
>

I sincerely apologise. :D

I know we all have more important things to do (our jobs, for starters) than
to fix some spaghetti monster that should have been good enough from the
beginning. But the truth is, testing and benchmarking is a really hard job.

I think this is not just an important part of the project; I also think
less experienced developers should all go through an experience like this
at some point in their early careers.

The work is painful, but it is also interesting. Understanding the
benchmarks teaches you a lot about their fields (ray tracing, physics
simulation, Fourier transforms, etc) as well as the numeric techniques
used.

You also learn about output stability, good software engineering, integer
and floating-point arithmetic, and all the pitfalls around them. It may not
make for the best CV item, but you do become a better programmer after
working on those hairy issues.

So, while we do what we can when we must, what this really needs is someone
new, with fresh eyes, to look at it as if everything is wrong and come up
with a much better solution than what we have today.

cheers,
--renato


