[lldb-dev] [cfe-dev] [llvm-dev] RFC: End-to-end testing

David Greene via lldb-dev lldb-dev at lists.llvm.org
Wed Oct 9 08:12:36 PDT 2019

Mehdi AMINI via cfe-dev <cfe-dev at lists.llvm.org> writes:

>> I have a bit of concern about this sort of thing - worrying it'll lead to
>> people being less cautious about writing the more isolated tests.
> I have the same concern. I really believe we need to be careful about
> testing at the right granularity to keep things both modular and the
> testing maintainable (for instance checking vectorized ASM from a C++
> source through clang has always been considered a bad FileCheck practice).
> (Not saying that there is no space for better integration testing in some
> areas).

I absolutely disagree about vectorization tests.  We have seen clang
lose vectorization even though the related LLVM lit tests passed,
because something else in the clang pipeline changed and kept the
vectorizer from doing its job.  We need both kinds of tests.  There are
many asm tests of value beyond vectorization, and they should include
component as well as end-to-end tests.
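For concreteness, such a test might look like an ordinary lit test that
drives the whole clang pipeline and FileChecks the asm.  This is only a
sketch; the triple, flags, and CHECK pattern are illustrative, not a
proposed test:

```c
// RUN: %clang -O2 -target x86_64-unknown-linux-gnu -S -o - %s | FileCheck %s

// A packed add in the output is evidence the loop was vectorized.
// The exact mnemonic depends on the target and subtarget features.
// CHECK-LABEL: sum:
// CHECK: addps
void sum(float *restrict a, float *restrict b, int n) {
  for (int i = 0; i < n; ++i)
    a[i] += b[i];
}
```

The point is that this goes through clang's pipeline construction, not
opt's, so it catches breakage the LLVM-level vectorizer tests cannot.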

> For instance I remember asking about implementing tests based on checking if
> some loops written in C source file were properly vectorized by the -O2 /
> -O3 pipeline and it was deemed like the kind of test that we don't want to
> maintain: instead I was pointed at the test-suite to add better benchmarks
> there for the end-to-end story. What is interesting is that the test-suite
> is not gonna be part of the monorepo!

And it shouldn't be.  It's much too big.  But there is a place for small
end-to-end tests that live alongside the code.

>>> We could, for example, create
>>> a top-level "test" directory and put end-to-end tests there.  Some of
>>> the things that could be tested include:
>>> - Pipeline execution (debug-pass=Executions)
>>> - Optimization warnings/messages
>>> - Specific asm code sequences out of clang (e.g. ensure certain loops
>>>   are vectorized)
>>> - Pragma effects (e.g. ensure loop optimizations are honored)
>>> - Complete end-to-end PGO (generate a profile and re-compile)
>>> - GPU/accelerator offloading
>>> - Debuggability of clang-generated code
>>> Each of these things is tested to some degree within their own
>>> subprojects, but AFAIK there are currently no dedicated tests ensuring
>>> such things work through the entire clang pipeline flow and with other
>>> tools that make use of the results (debuggers, etc.).  It is relatively
>>> easy to break the pipeline while the individual subproject tests
>>> continue to pass.
> I'm not sure I really see much in your list that isn't purely about testing
> clang itself here?

Debugging and PGO involve other components, no?  If we want to put clang
end-to-end tests in the clang subdirectory, that's fine with me.  But we
need a place for tests that cut across components.
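PGO is a good illustration of a flow that no single subproject test
exercises end to end.  A sketch of the steps such a test would script
(file names and the training workload are hypothetical; the flags are
the standard instrumentation-based PGO flags):

```shell
# Instrumented build, training run, profile merge, optimized rebuild.
clang -O2 -fprofile-instr-generate hot.c -o hot.inst
LLVM_PROFILE_FILE=hot.profraw ./hot.inst
llvm-profdata merge -output=hot.profdata hot.profraw
clang -O2 -fprofile-instr-use=hot.profdata hot.c -o hot.pgo
# The test could then FileCheck the asm or remarks from the PGO build.
```

That touches clang, compiler-rt, and llvm-profdata at once, which is
exactly the kind of cross-component coverage being proposed.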

I could also imagine llvm-mca end-to-end tests through clang.
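Something along the lines of (a sketch; the source file and CPU are
placeholders):

```shell
clang -O2 -S -o - kernel.c | llvm-mca -mcpu=skylake
```

A test could check llvm-mca's throughput report for the code clang
actually emits, rather than for hand-written asm.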

> Actually the first one seems more of a pure LLVM test.

Definitely not.  It would test the pipeline as constructed by clang,
which is quite different from the default pipeline constructed by
opt/llc.  The old and new pass managers also construct different
pipelines.  As various mailing list threads have shown, this surprises
users.  Best to document it and check it with testing.
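A sketch of what such a test could compare (the first flag spelling is
for the legacy pass manager, the second for the new one; the exact
output is illustrative):

```shell
# Pipeline as clang builds it, legacy PM:
clang -O2 -mllvm -debug-pass=Executions -c foo.c
# Pipeline as clang builds it, new PM:
clang -O2 -fdebug-pass-manager -c foo.c
# Default pipeline as opt builds it, for comparison:
opt -O2 -debug-pass=Executions foo.ll -disable-output
```

Checking the dumped pass lists against expectations would catch a clang
driver change that silently reorders or drops passes.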
