<div dir="ltr"><div dir="ltr">On Thu, 24 Jun 2021 at 19:13, Kaylor, Andrew <<a href="mailto:andrew.kaylor@intel.com">andrew.kaylor@intel.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div lang="EN-US" style="overflow-wrap: break-word;">
<div class="gmail-m_-3722006870812179996WordSection1">
<p class="MsoNormal">This gets at my questions about which benchmarks are important and who considers them to be important. I expect a lot of us have non-public testing going on for the benchmarks that we consider to be critical. I see the test suite benchmarks
as more of a guard rail to catch changes that degrade performance early and in a way that is convenient for other community members to address. So, to me, the benchmarks don’t have to be perfect measures. On the other hand, if we just disable things like fast-math
and FMA, the benchmarks won’t tell us anything at all about the impact of changes touching those optimizations.<br></p></div></div></blockquote><div><br></div><div>Right, that's why we ended up with the complicated fp-contract situation. </div><div><br></div><div>Moreover, the benchmarks we have in the test-suite weren't tuned as carefully as commercial ones. Worse still, they have to "compare similar" across a large number of platforms, with some potentially unstable output.</div><div><br></div><div>To make those benchmarks stable we'd need to:</div><div> * Understand what the program is trying to generate</div><div> * Only output relevant information (not a dump of a huge intermediate matrix)</div><div> * Make sure the output is stable under varying conditions on different architectures</div><div><br></div><div>Only then will comparing outputs, even with fpcmp, be meaningful. Right now, we're at the "let's just hope the output is the same" stage, which makes discussions like this one a recurrent theme.</div>
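<div><br></div><div>As a sketch of what "only output relevant information" could mean in practice (the matrix name and size below are made up, not taken from any existing test), a benchmark could print a short, low-precision summary instead of dumping the whole matrix, so fpcmp has something small and stable to compare:</div><div><br></div><div><pre>/* Hypothetical sketch: summarise a large result matrix instead of
   dumping it, so the output stays small and reasonably stable
   across targets and FP settings. */
#include <math.h>
#include <stdio.h>

#define N 512
static double result[N][N];   /* the benchmark's real output */

static void print_summary(void) {
  double sum = 0.0, max = 0.0;
  for (int i = 0; i < N; ++i)
    for (int j = 0; j < N; ++j) {
      double v = fabs(result[i][j]);
      sum += v;
      if (v > max) max = v;
    }
  /* Limited precision tolerates last-bit rounding differences while
     still catching real miscompiles. */
  printf("checksum: %.6g  max: %.6g\n", sum, max);
}
</pre></div><div>Whether a summary like that is actually stable enough still depends on understanding what the program computes, which is the first bullet above.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div lang="EN-US" style="overflow-wrap: break-word;"><div class="gmail-m_-3722006870812179996WordSection1"><p class="MsoNormal"></p><p class="MsoNormal"><u></u></p>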
<p class="MsoNormal"><u></u> So, I think we need a way for each test to indicate whether it can be run in value-unsafe modes, to set different tolerances for different modes, and to be built to run differently in different modes. For example, if I’m running the Blur
test in a value safe mode, there’s no need to perform an internal comparison and a hashed output comparison can be used. If I’m running with fp-contract=on or fast-math, I’d want an internal value check but those modes might have different tolerances. Finally,
I might want a way to run the test as a benchmark with either fp-contract=on or fast-math without any check of the results in order to get better performance data.</p></div></div></blockquote><div><br></div><div>Yes, I think we may need different comparisons for different runs. For example, on a test run, the FP delta must be really small, but on a benchmark run it can be larger, or the check can even ignore some numbers we know don't change the overall result.</div><div><br></div><div>I believe this has to be in each benchmark's code, not in the comparison tool, which has to be as dumb as possible.</div>
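<div><br></div><div>A minimal sketch of what that could look like inside a benchmark, assuming the harness passed in a per-mode tolerance (the FP_TOLERANCE macro here is made up; nothing defines it today):</div><div><br></div><div><pre>/* Hypothetical in-benchmark check with a mode-dependent tolerance.
   The build system would define FP_TOLERANCE per configuration:
   tight for value-safe runs, looser for fp-contract=on / fast-math,
   and the whole check could be compiled out for pure benchmark runs. */
#include <math.h>
#include <stdio.h>

#ifndef FP_TOLERANCE
#define FP_TOLERANCE 1e-12   /* strict default for value-safe modes */
#endif

static int check(double got, double expected) {
  double rel = fabs(got - expected) / fmax(fabs(expected), 1.0);
  if (rel > FP_TOLERANCE) {
    fprintf(stderr, "mismatch: got %.17g expected %.17g (rel %.3g)\n",
            got, expected, rel);
    return 1;   /* caller turns this into a non-zero exit status */
  }
  return 0;
}
</pre></div><div>The comparison tool then only needs to look at the exit status (or a plain PASS line), and the knowledge of what "close enough" means lives with the benchmark itself.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div lang="EN-US" style="overflow-wrap: break-word;"><div class="gmail-m_-3722006870812179996WordSection1"><p class="MsoNormal"><u></u></p>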
<p class="MsoNormal">As for updating the tests, I’m going to bring up test ownership again because I don’t know what constitutes acceptable variation for any given test. I could take a guess at it, but if I get it wrong, my wrong guess becomes semi-enshrined
in the test suite and may not be noticed by people who would know better.</p></div></div></blockquote><div><br></div><div>Unfortunately, the tests have no owners in that sense.</div><div><br></div><div>There are people who know more about certain tests than others, but in the past I have added tests and benchmarks to the test-suite without really knowing a lot about them, and I believe many other people have, too.</div><div><br></div><div>There's no way to find out who knows more about a particular test other than asking, so I think the easiest way forward is to send an RFC to the list with the proposed change for each benchmark we want to touch. If no one thinks it's a bad idea, we go with it. If someone downstream raises issues, reverting a commit that touches one single test/benchmark is easier than reverting one that touches a lot of different tests.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div lang="EN-US" style="overflow-wrap: break-word;"><div class="gmail-m_-3722006870812179996WordSection1"><p class="MsoNormal"><u></u><u></u></p>
<p class="MsoNormal">That’s an accumulated result inside four nested loops. It looks like in practice the differently rounded results with FMA must be getting averaged out most of the time, which makes sense assuming a relatively consistent magnitude of values,
but I’d have to study the algorithm to understand exactly what’s happening and how to check the results reliably for a range of inputs. I think that’s too much to expect from someone who is just making some optimization change that triggers a failure in the
test.<br></p></div></div></blockquote><div><br></div><div>Absolutely agreed. </div><div><br></div><div>The work that Melanie, Sebastian, and many others have done on improving the test-suite is important but thankless.</div><div><br></div><div>Unfortunately, no one has time to spend more than a month (at a stretch) cleaning up the test-suite, so it gets some attention and then fades. </div><div><br></div><div>Many years ago I spent a good few months on it, because running on Arm would yield the wildest differences in output and my goal was to make Arm a first-class citizen in LLVM, so the test-suite had to run on Arm buildbots without noise.</div><div><br></div><div>As you'd expect, I only touched the tests that broke on Arm (at the time, too many!), but cleaning up the test-suite as a whole wasn't my primary goal. Since then, many people have done the same for new targets, new optimisations, etc., but always with the test-suite as a secondary goal, which brings us here.</div><div><br></div><div>There were notable exceptions: the addition of the Benchmark mode, the revamp of LNT's interface with good statistics, Sebastian's fp-contract change, and now Melanie's work. But they were few and far between.</div><div><br></div><div>We had a GSoC project to make the test-suite robust, but honestly, that's not the sexiest GSoC project ever, so we're still waiting for some kind soul to go through the painful task of understanding everything and validating output stability.</div>
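<div><br></div><div>Coming back to the Blur example above for a second: the root of those differences is just single versus double rounding under contraction. A tiny standalone illustration (not from the test-suite; the numbers are picked only to expose the last bit):</div><div><br></div><div><pre>/* fma(a, b, c) rounds once; a*b + c rounds twice, so the two can
   differ in the last bit. Summed over four nested loops, those
   differences sometimes cancel and sometimes don't.
   Build with -ffp-contract=off, otherwise the compiler may turn the
   separate expression into an fma as well. */
#include <math.h>
#include <stdio.h>

int main(void) {
  double a = 1.0 + 0x1p-27;          /* 1 + 2^-27 */
  double b = 1.0 + 0x1p-27;
  double c = -1.0;
  double contracted = fma(a, b, c);  /* one rounding  */
  double separate   = a * b + c;     /* two roundings */
  printf("fma: %a  mul+add: %a\n", contracted, separate);
  return 0;
}
</pre></div><div>Whether those last-bit differences stay harmless after the accumulation is exactly the part that needs someone to study the algorithm, as you say.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div lang="EN-US" style="overflow-wrap: break-word;"><div class="gmail-m_-3722006870812179996WordSection1"><p class="MsoNormal"></p><p class="MsoNormal"><u></u></p>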
<p class="MsoNormal"><u></u> In the case that led me to start the discussion this week, Melanie was just making the behavior of clang match its documentation. She didn’t even change any optimizations. The failures that were exposed would always have happened if certain
compilation options were used. Naturally, she just wanted to not turn any buildbots red. Then I started looking at the failing tests and ended up opening this can of worms.</p></div></div></blockquote><div><br></div><div>I sincerely apologise. :D</div><div><br></div><div>I know we all have more important things to do (our jobs, for starters) than to fix some spaghetti monster that should have been good enough from the beginning. But the truth is, testing and benchmarking are really hard work.</div><div><br></div><div>I think this is not just an important part of the project; I also think less experienced developers should all go through an experience like this at some point in their early careers.</div><div><br></div><div>The work is painful, but it is also interesting. Understanding the benchmarks teaches you a lot about their fields (ray tracing, physics simulation, Fourier transforms, etc.) as well as the numerical techniques they use. </div><div><br></div><div>You also learn about output stability, good software engineering, integer and floating-point arithmetic, and all the pitfalls around them. It may not make for the best CV item, but you do become a better programmer after working on those hairy issues.</div><div><br></div><div>So, while we do what we can when we must, this really needs someone new, with fresh eyes, to look at it as if everything were wrong and come up with a much better solution than what we have today.</div><div><br></div><div>cheers,</div><div>--renato</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div lang="EN-US" style="overflow-wrap: break-word;"><div class="gmail-m_-3722006870812179996WordSection1">
</div>
</div>
</blockquote></div></div>