[PATCH] D39747: [globalisel][tablegen] Generate rule coverage and use it to identify untested rules

Kristof Beyls via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue Nov 7 23:50:39 PST 2017


kristof.beyls added a comment.

In https://reviews.llvm.org/D39747#918954, @rovka wrote:

> Awesome, I could use something like this. LGTM with a few nits.
>
> Is the long-term intention to try and drive this coverage to 100% via in-tree tests, or rather via the test-suite? I'm asking because I was actually considering removing some of the arm-instruction-selector.mir tests which cover the same kind of pattern - e.g. ADD, SUB, AND, OR etc are all derived from an AsI1_bin_irs, so it should be enough to test one of them. We already have tests for the TableGen emitter, so each backend should only have acceptance tests, to make sure TableGen does the right thing for each kind of pattern that it's interested in. Having one test for each rule would just explode the number of tests to the point where they can only be managed automatically, which would really reduce my confidence in them (mostly because TableGen is quirky and I would expect whatever edge cases are handled incorrectly in the emitter to also be handled incorrectly in whatever automatic test generator we'd derive with TableGen).


I'd definitely be interested in seeing what the statistics show today for coverage from the regression tests vs. the test-suite. My expectation is that we actually get higher coverage from the regression tests than from the test-suite, since the regression tests try harder to exercise the logic paths in the compiler.
My guess is that trying to reach 100% coverage via the test-suite is not going to work. If we need 100% coverage, I expect it will have to come from the regression tests. But as you say: if those tests are mostly generated automatically and never manually inspected, there is a lot of scope for them to be wrong.
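To illustrate the AsI1_bin_irs point with a deliberately simplified, self-contained sketch (the names below are hypothetical; the real multiclass in the ARM backend takes many more parameters and ties into actual encodings and selection patterns):

  // bin_ops.td -- hypothetical sketch; process with: llvm-tblgen bin_ops.td
  class Inst<string mnemonic> {
    string Mnemonic = mnemonic;
  }

  // One multiclass stamps out the register/register and register/immediate
  // forms of a binary operation, analogous in spirit to ARM's AsI1_bin_irs.
  multiclass BinOp<string mnemonic> {
    def rr : Inst<mnemonic # "rr">;  // e.g. ADDrr
    def ri : Inst<mnemonic # "ri">;  // e.g. ADDri
  }

  // ADD, SUB, AND and ORR all come from the same multiclass, so the emitter
  // sees structurally identical rules for each of them.
  defm ADD : BinOp<"add">;
  defm SUB : BinOp<"sub">;
  defm AND : BinOp<"and">;
  defm ORR : BinOp<"orr">;

Selecting, say, ADDrr in a test then exercises essentially the same emitter code path as SUBrr or ANDrr, which is the argument against writing one test per derived instruction.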


https://reviews.llvm.org/D39747