[llvm-dev] [RFC] Compiled regression tests.

Hal Finkel via llvm-dev <llvm-dev at lists.llvm.org>
Wed Jul 1 12:16:16 PDT 2020


On 7/1/20 1:48 PM, David Greene via llvm-dev wrote:
> "Robinson, Paul via llvm-dev" <llvm-dev at lists.llvm.org> writes:
>
>> We even have scripts that automatically generate such tests, used
>> primarily in codegen tests.  I devoutly hope that the people who
>> produce those tests responsibly eyeball all those cases.
> I don't think people do because they are so noisy.  Solving the problem
> was the primary motivation of my pile of test update script
> enhancements.
>
>> Hal’s suggestion is more to the point: If the output we’re generating
>> is not appropriate to the kinds of tests we want to perform, it can be
>> worthwhile to generate different kinds of output.  MIR is a case in
>> point; for a long time it was hard to introspect into the interval
>> between IR and final machine code, but now it’s a lot easier.
> +1.  I don't believe there's a silver bullet here.  Different tools are
> appropriate for different kinds of tests.  I happen to think Michael's
> proposal is another useful tool in the box, as is Hal's.


To follow up on this, I also think that Michael's proposal is a 
potentially-useful tool in our testing toolbox. Moreover, his motivation 
for proposing this points out a real issue that we have with our current 
set of tests.

Hindsight is 20/20, but the hardship associated with Michael's work 
updating the loop metadata tests probably would have been best addressed 
by having a good analysis output and checking that: a single bit of 
C++ code could then be shared across many tests. It seems to me that 
the use case Michael's proposal addresses is one where we have a large 
number of test cases, each requiring complex verification logic that is 
not easily phrased in terms of checking a generic textual analysis 
summary. I'm not yet sure that particular use case exists.
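
Roughly, I'm imagining something like the following (just a sketch of 
the shape of such a test; the print<loop-metadata> printer pass and its 
output format are hypothetical, nothing like it exists in-tree today):

  ; RUN: opt -passes='print<loop-metadata>' -disable-output %s 2>&1 \
  ; RUN:   | FileCheck %s

  ; The (hypothetical) printer emits one line per loop listing its
  ; llvm.loop.* attributes; the test only checks that summary.
  ; CHECK: loop %for.body: !"llvm.loop.unroll.disable"

  define void @test(i32 %n) {
  entry:
    br label %for.body

  for.body:
    %i = phi i32 [ 0, %entry ], [ %i.next, %for.body ]
    %i.next = add i32 %i, 1
    %cmp = icmp slt i32 %i.next, %n
    br i1 %cmp, label %for.body, label %exit, !llvm.loop !0

  exit:
    ret void
  }

  !0 = distinct !{!0, !1}
  !1 = !{!"llvm.loop.unroll.disable"}

All of the metadata-walking logic would live once in the printer; the 
test files themselves stay trivial to write and to update.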

His proposal also aims to do something else: make certain kinds of 
checks less fragile. As he points out, many useful concepts are easily 
expressed as CHECK-NOT, but because there's no binding between the text 
string in the check line and the code that might produce that text, it's 
easy for these to fall out of sync. His proposal, using C++ tests, 
provides better type checking in this regard. There may be other ways to 
address this concern (e.g., using FileCheck variables of some kind to 
positively match string name fragments before their use in CHECK-NOTs).
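
To sketch that last alternative (again, just an illustration, not 
something we've settled on): bind the attribute string with a positive 
match where it is expected to appear, and reuse the binding in the 
CHECK-NOT, e.g.

  ; CHECK: !{!"[[DISABLE:llvm\.loop\.unroll\.disable]]"}
  ; CHECK-NOT: [[DISABLE]]

If the attribute is ever renamed, the positive CHECK fails loudly 
instead of the CHECK-NOT passing vacuously, and the two lines cannot 
drift out of sync with each other; as written, the pair encodes "the 
string appears exactly once in the output."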

  -Hal


>
>               -David

-- 
Hal Finkel
Lead, Compiler Technology and Programming Languages
Leadership Computing Facility
Argonne National Laboratory
