[llvm-dev] [Openmp-dev] [cfe-dev] RFC: End-to-end testing

David Greene via llvm-dev llvm-dev at lists.llvm.org
Thu Oct 17 10:09:59 PDT 2019


Renato Golin <rengolin at gmail.com> writes:

> On Wed, 16 Oct 2019 at 21:00, David Greene <greened at obbligato.org> wrote:
>> Can you elaborate?  I'm talking about very small tests targeted to
>> generate a specific instruction or small number of instructions.
>> Vectorization isn't the best example.  Something like verifying FMA
>> generation is a better example.
>
> To check that instructions are generated from source, a two-step test
> is the best approach:
>  - Verify that Clang emits different IR for different options, or the
> right IR for a new functionality
>  - Verify that the affected targets (or at least two of the main ones)
> can take that IR and generate the right asm

Yes, of course we have tests like that.  We have found they are not
always sufficient.
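To make the two-step scheme concrete, the IR-level half might look roughly like this (a hypothetical sketch in the style of a clang/test lit test; the function name and check lines are illustrative, not an actual test from the tree):

```c
// Verify only that Clang emits the fmuladd intrinsic for a
// contractible multiply-add; no backend needs to be built for this.
// RUN: %clang_cc1 -O2 -ffp-contract=fast -emit-llvm -o - %s | FileCheck %s

double mul_add(double a, double b, double c) {
  // CHECK-LABEL: define{{.*}}@mul_add
  // CHECK: call double @llvm.fmuladd.f64(
  return a * b + c;
}
```

A second test under llvm/test would then feed equivalent IR to llc and check the asm for each target of interest. The gap is that nothing checks the two halves compose.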

> If you want to do the test in Clang all the way to asm, you need to
> make sure the back-end is built. Clang is not always built with all
> back-ends, possibly even with none.

Right, which is why we have things like REQUIRES: x86-registered-target.
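Concretely, a guarded source-to-asm test of the kind I have in mind might look like this (flags and the exact FMA mnemonic pattern are illustrative assumptions, not a real test):

```c
// REQUIRES: x86-registered-target
// RUN: %clang -O2 -mfma -ffp-contract=fast -target x86_64-unknown-linux-gnu \
// RUN:   -S -o - %s | FileCheck %s

// Verify that FMA formation survives the whole pipeline,
// not just the Clang -> IR step.
double mul_add(double a, double b, double c) {
  // CHECK-LABEL: mul_add:
  // CHECK: vfmadd{{[0-9]+}}sd
  return a * b + c;
}
```

The REQUIRES line makes lit skip the test, rather than fail, when the x86 backend was not built.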

> To do that in the back-end, you'd have to rely on Clang being built,
> which is not always true.

Sure.

> Hacking our test infrastructure to test different things when a
> combination of components is built, especially as components merge
> now that they are in a monorepo, will complicate tests and increase
> the likelihood that some tests will never be run by CI and bit rot.

From other discussion, it sounds like at least some people are open to
asm tests under clang.  I think that should be fine.  But there are
probably other kinds of end-to-end tests that should not live under
clang.

> On the test-suite, you can guarantee that the whole toolchain is
> available: Front and back end of the compilers, assemblers (if
> necessary), linkers, libraries, etc.
>
> Writing a small source file per test, as you would in Clang/LLVM,
> running LIT and FileCheck, and *always* running it in the TS would be
> trivial.

How often would such tests be run as part of test-suite?

Honestly, it's not really clear to me exactly which bots cover what, how
often they run and so on.  Is there a document somewhere describing the
setup?

                     -David

