[llvm-dev] how to verify completeness of the llvm backend

Quentin Colombet via llvm-dev llvm-dev at lists.llvm.org
Wed Feb 1 18:00:37 PST 2017


Hi Vivek,

Thanks for bringing that topic up.

We face exactly the same problem in bringing up GlobalISel, and we are trying to push LLVM forward to build the infrastructure to tackle it.
Concretely, alongside doing what Hal suggests, we are pushing to generate test cases from the target description. We are also working on pairing llvm-stress with fuzzing capabilities to achieve better ISel coverage.
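
For instance, a minimal sketch of that llvm-stress pairing (this is not the infrastructure we have in tree; the triple, seed range, and module size below are placeholders) could simply sweep seeds and record which generated modules the selector rejects:

    #!/usr/bin/env python3
    # Minimal sketch: drive llvm-stress over a range of seeds and feed each
    # generated module to llc, recording the seeds that fail instruction
    # selection. The triple, seed range, and module size are assumptions.
    import subprocess

    TRIPLE = "mytarget--"   # assumption: triple of the backend under test
    failing_seeds = []

    for seed in range(1000):
        # Generate a random but well-formed IR module for this seed.
        ir = subprocess.run(["llvm-stress", "-seed", str(seed), "-size", "200"],
                            capture_output=True, text=True, check=True).stdout
        # Ask the backend under test to select and emit it.
        llc = subprocess.run(["llc", "-mtriple=" + TRIPLE, "-o", "/dev/null"],
                             input=ir, capture_output=True, text=True)
        if llc.returncode != 0:
            failing_seeds.append(seed)

    print(len(failing_seeds), "of 1000 seeds failed, e.g. seeds", failing_seeds[:10])

Replaying a failing seed then gives a small reproducer that can be reduced and turned into a regression test.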

For now, the test generation we are pushing aims at validating that our TableGen backend does what we tell it to do. Longer term, we want to generate broader tests, and ultimately this framework would tell us what support we are missing and generate tests for it.

One problem that I don't see how to fix automatically is that the generated tests would match whatever target description was written: if the description is wrong, we are not testing the right thing, and if something is handled by C++ code we don't see it either. In particular, the interaction with the legalizer gets interesting!

Anyway, we are still far from that, but if you want to contribute to that goal, I encourage you to do so :).

Cheers,
-Quentin


> On Feb 1, 2017, at 5:35 PM, Hal Finkel via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> 
> Hi Vivek,
> 
> We don't have a fool-proof way to do this. I recommend that you look at the llvm-stress tool. Run through the regression tests in test/CodeGen/Generic. Also, if you can fully self-host (including libc++, etc.), that's a good sign.
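> 
> As a rough illustration (a sketch under stated assumptions, not a ready-made tool; the triple is a placeholder), you could point llc at the .ll files under test/CodeGen/Generic for the new target and list which ones it cannot compile at all, independent of the FileCheck lines those tests normally run through lit:
> 
>     #!/usr/bin/env python3
>     # Sketch: compile every IR file in test/CodeGen/Generic with the new
>     # backend and list the ones llc rejects. This deliberately ignores the
>     # tests' own RUN/FileCheck lines (normally driven by lit), and the
>     # triple below is a placeholder for the backend under test.
>     import glob
>     import subprocess
> 
>     TRIPLE = "mytarget--"   # assumption: the new backend's triple
>     failures = []
> 
>     for path in sorted(glob.glob("llvm/test/CodeGen/Generic/*.ll")):
>         res = subprocess.run(["llc", "-mtriple=" + TRIPLE, path, "-o", "/dev/null"],
>                              capture_output=True, text=True)
>         if res.returncode != 0:
>             failures.append(path)
>             print("FAIL", path)
> 
>     print(len(failures), "Generic tests do not compile for", TRIPLE)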
> 
>  -Hal
> 
> On 02/01/2017 10:58 AM, vivek pandya via llvm-dev wrote:
>> Hello LLVM Devs,
>> 
>> I have a question regarding porting a new target to the LLVM backend.
>> When we write an LLVM backend for a new architecture, how can we verify that every kind of instruction is being generated, i.e. that no particular pattern is missing or handled incorrectly, that every possible addressing mode is generated, etc.?
>> 
>> One way would be for the architecture's developer team to provide a set of benchmarks that covers the complete instruction set for a compiler other than LLVM, but have you ever heard of any tool that checks this? Or what is the industry-standard practice that you follow or know of?
>> 
>> For a tool, a very natural idea is to write a script that does string processing to find the unique instructions generated over a given benchmark. But would it be possible to write a tool generic enough to take LLVM's target description files as input and then perform certain checks over the generated assembly?
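>> 
>> For instance, a rough sketch of that script (assuming GAS-style .s files where '#' starts a comment, which varies per target) could just collect the first token of every instruction line and print the unique mnemonics that were emitted:
>> 
>>     #!/usr/bin/env python3
>>     # Sketch of the string-processing idea: scan the assembly produced for
>>     # a benchmark and report the unique instruction mnemonics that appear.
>>     # Assumes GAS-style syntax with '#' comments; directives (".foo") and
>>     # labels ("foo:") are skipped. Adjust per target as needed.
>>     import glob
>>     import sys
>> 
>>     pattern = sys.argv[1] if len(sys.argv) > 1 else "*.s"
>>     mnemonics = set()
>> 
>>     for path in glob.glob(pattern):
>>         with open(path) as asm:
>>             for line in asm:
>>                 line = line.split("#")[0].strip()   # drop comments
>>                 if not line or line.startswith(".") or line.endswith(":"):
>>                     continue                        # skip directives and labels
>>                 mnemonics.add(line.split()[0])      # first token is the mnemonic
>> 
>>     for name in sorted(mnemonics):
>>         print(name)
>>     print(len(mnemonics), "unique mnemonics emitted", file=sys.stderr)
>> 
>> Comparing that set against the mnemonics declared in the target's .td files would then show what was never exercised.
>> 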
>> 
>> Sincerely,
>> Vivek
>> 
>> 
> 
> -- 
> Hal Finkel
> Lead, Compiler Technology and Programming Languages
> Leadership Computing Facility
> Argonne National Laboratory
> _______________________________________________
> LLVM Developers mailing list
> llvm-dev at lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
