[llvm-dev] [RFC] Compiled regression tests.

Chris Lattner via llvm-dev llvm-dev at lists.llvm.org
Wed Jul 1 13:01:17 PDT 2020



> On Jul 1, 2020, at 7:33 AM, Hal Finkel via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> 
>> 
>> Ideally the regression test would be robust and understandable, achievable with two asserts in a unittest:
>> 
>>     Loop &OuterLoop = **LI->begin();
>>     ASSERT_TRUE(OuterLoop.isAnnotatedParallel());
>>     Loop &InnerLoop = **OuterLoop.begin();
>>     ASSERT_TRUE(InnerLoop.isAnnotatedParallel());
> 
> I definitely agree that we should not be trying to do this kind of checking using textual metadata-node matching in FileCheck. The alternative already available is to add an analysis pass with some kind of verifier output; FileCheck then checks that output, not the raw metadata itself. We also need to check the verification code, but at least that's something we can keep in just one place.
> 
> For parallel annotations, we already have such a thing (we can run opt -loops -analyze; see, e.g., test/Analysis/LoopInfo/annotated-parallel-complex.ll). We do the same kind of thing for the cost model (by running with -cost-model -analyze). To what extent would making more-extensive use of this technique cover the use cases you're trying to address?
> 
> Maybe we should also just make more-extensive use of unit tests?
> 
Agreed. It is entirely reasonable to write an LLVM pass (analysis or otherwise) that inspects the IR and prints the properties under test in a structured output format that is easy to FileCheck. This achieves the goals I was striving for, including the design and curation of high-quality test logic.
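For concreteness, here is a minimal sketch of what such a printer pass could look like under the new pass manager (the pass name and the output format are invented for illustration, not an existing pass):

    #include "llvm/Analysis/LoopInfo.h"
    #include "llvm/IR/Function.h"
    #include "llvm/IR/PassManager.h"
    #include "llvm/Support/raw_ostream.h"

    using namespace llvm;

    // Hypothetical printer pass: emits one stable, FileCheck-friendly
    // line per loop instead of exposing raw metadata nodes.
    struct ParallelAnnotationPrinter
        : PassInfoMixin<ParallelAnnotationPrinter> {
      PreservedAnalyses run(Function &F, FunctionAnalysisManager &FAM) {
        auto &LI = FAM.getResult<LoopAnalysis>(F);
        for (const Loop *L : LI.getLoopsInPreorder())
          outs() << "loop depth=" << L->getLoopDepth() << " parallel="
                 << (L->isAnnotatedParallel() ? "yes" : "no") << "\n";
        return PreservedAnalyses::all();
      }
    };

A regression test then CHECKs the "loop depth=" lines rather than the metadata graph, so the matching stays readable even if the metadata encoding changes.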

On the MLIR side, the dependence-analysis tests (among others) are good examples of this approach.
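As for making more-extensive use of unit tests: the scaffolding needed around the two asserts quoted above is also small. A hedged sketch, with the test name invented and the IR body elided to a stub:

    #include "llvm/Analysis/LoopInfo.h"
    #include "llvm/AsmParser/Parser.h"
    #include "llvm/IR/Dominators.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Support/SourceMgr.h"
    #include "gtest/gtest.h"

    using namespace llvm;

    TEST(LoopInfoTest, AnnotatedParallelNest) {
      LLVMContext Ctx;
      SMDiagnostic Err;
      // The reduced loop nest with its llvm.loop metadata goes here.
      std::unique_ptr<Module> M = parseAssemblyString(
          "define void @f() { ret void }", Err, Ctx);
      ASSERT_TRUE(M);
      Function &F = *M->getFunction("f");
      DominatorTree DT(F);
      LoopInfo LI(DT);
      // With the real loop nest in place, the quoted asserts follow:
      //   Loop &OuterLoop = **LI.begin();
      //   ASSERT_TRUE(OuterLoop.isAnnotatedParallel());
    }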

-Chris


More information about the llvm-dev mailing list