[llvm-dev] [cfe-dev] RFC: End-to-end testing

Florian Hahn via llvm-dev llvm-dev at lists.llvm.org
Thu Oct 10 02:34:37 PDT 2019


Hi David,

Thanks for kicking off a discussion on this topic!


> On Oct 9, 2019, at 22:31, David Greene via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> 
> Mehdi AMINI via llvm-dev <llvm-dev at lists.llvm.org> writes:
> 
>>> I absolutely disagree about vectorization tests.  We have seen
>>> vectorization loss in clang even though related LLVM lit tests pass,
>>> because something else in the clang pipeline changed that caused the
>>> vectorizer to not do its job.
>> 
>> Of course, and as I mentioned I tried to add these tests (probably 4 or 5
>> years ago), but someone (I think Chandler?) was asking me at the time: does
>> it affect a benchmark performance? If so why isn't it tracked there? And if
>> not does it matter?
>> The benchmark was presented as the actual way to check this invariant
>> (because you're only vectorizing to get performance, not for the sake of it).
>> So I never pursued, even if I'm a bit puzzled that we don't have such tests.
> 
> Thanks for explaining.
> 
> Our experience is that relying solely on performance tests to uncover
> such issues is problematic for several reasons:
> 
> - Performance varies from implementation to implementation.  It is
>  difficult to keep tests up-to-date for all possible targets and
>  subtargets.

Could you expand a bit more on what you mean here? Are you concerned about having to run the performance tests on different kinds of hardware? In what way do the existing benchmarks need to be kept up to date?

With tests checking ASM, wouldn’t we end up with lots of checks for various targets/subtargets that we need to keep up to date? Just taking AArch64 as an example, people might want to check the ASM for different architecture versions and vector extensions, and different vendors might want to make sure that the ASM for their specific cores does not regress.
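
For illustration, here is a rough sketch of what such an end-to-end ASM test might look like, assuming the usual %clang lit substitution and FileCheck prefixes; the RUN lines and CHECK patterns are only meant to show the shape of such a test, not to be committed as-is. Every additional subtarget we care about adds another RUN line and another set of CHECK lines to keep up to date:

    // RUN: %clang -O3 -S --target=aarch64-linux-gnu -o - %s \
    // RUN:   | FileCheck %s --check-prefix=NEON
    // RUN: %clang -O3 -S --target=aarch64-linux-gnu -march=armv8-a+sve -o - %s \
    // RUN:   | FileCheck %s --check-prefix=SVE

    void saxpy(float *a, const float *b, int n) {
      for (int i = 0; i < n; ++i)
        a[i] += 2.5f * b[i];
    }

    // Illustrative patterns only; each subtarget needs its own set.
    // NEON: fmla v{{[0-9]+}}.4s
    // SVE: fmla z{{[0-9]+}}.s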

> 
> - Partially as a result, but also for other reasons, performance tests
>  tend to be complicated, either in code size or in the numerous code
>  paths tested.  This makes such tests hard to debug when there is a
>  regression.

I am not sure they have to be. Have you considered adding the small test functions/loops as micro-benchmarks using the existing Google Benchmark infrastructure in the test-suite?

I think that might address the points here reasonably well. The separate micro-benchmarks would be relatively small, and we should be able to track down regressions in much the same way as with a stand-alone file that we compile and whose ASM we analyze. Plus, we can easily run them and verify the performance on actual hardware.
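
For example, a micro-benchmark for a single vectorizable loop could be as small as the sketch below, using the Google Benchmark API that is already bundled with the test-suite (the kernel and names are made up for illustration):

    #include <benchmark/benchmark.h>
    #include <vector>

    // Hypothetical kernel whose vectorization we want to track.
    static void BM_SAXPY(benchmark::State &state) {
      std::vector<float> a(state.range(0), 1.0f), b(state.range(0), 2.0f);
      for (auto _ : state) {
        for (size_t i = 0; i < a.size(); ++i)
          a[i] += 2.5f * b[i];
        // Keep the compiler from eliminating the loop as dead code.
        benchmark::DoNotOptimize(a.data());
        benchmark::ClobberMemory();
      }
    }
    BENCHMARK(BM_SAXPY)->Arg(1 << 14);
    BENCHMARK_MAIN();

A regression in such a kernel should also be fairly easy to bisect, since the whole benchmark is only a handful of lines.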
 
> 
> - Performance tests don't focus on the why/how of vectorization.  They
>  just check, "did it run fast enough?"  Maybe the test ran fast enough
>  for some other reason but we still lost desired vectorization and
>  could have run even faster.
> 

If you added a new micro-benchmark, you could check that it produces the desired code at the time you add it. The runtime tracking should then cover cases where we lose the optimization later. I suppose that if the benchmarks are too big, additional optimizations in one part could hide lost optimizations somewhere else, but I would expect that to be relatively unlikely as long as the benchmarks are isolated.

Also, checking the assembly for vector code does not guarantee that the vector code will actually be executed. For example, by just checking the assembly for certain vector instructions, we might miss that we regressed performance because we messed up the runtime checks guarding the vector loop.
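
To illustrate with a made-up example: for a loop like the one below, the vectorizer typically has to emit a runtime check that a and b do not overlap, and it only branches to the vector body if that check passes, keeping a scalar fallback loop around. An ASM test that merely looks for vector instructions would still pass if we broke that check so that the scalar loop always runs.

    // The compiler cannot prove that a and b do not alias, so the
    // vectorized body is guarded by a runtime overlap check, with a
    // scalar fallback loop for the case where the check fails.
    void axpy(float *a, const float *b, int n) {
      for (int i = 0; i < n; ++i)
        a[i] += 2.0f * b[i];
    }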

Cheers,
Florian


