[llvm] r178018 - R600/SI: fix ELSE pseudo op handling

Christian König christian.koenig at amd.com
Wed Mar 27 03:30:35 PDT 2013


On 27.03.2013 10:19, Chandler Carruth wrote:
> On Wed, Mar 27, 2013 at 2:07 AM, Christian König 
> <christian.koenig at amd.com> wrote:
>
>     On 27.03.2013 07:46, Tobias Grosser wrote:
>
>         On 03/26/2013 03:11 PM, Sean Silva wrote:
>
>             Test?
>
>
>         Yes, very good point. I have the feeling the R600 commits are
>         missing test cases most of the time. Christian, was there a
>         specific reason that there is no test case? If there was, please
>         explain it next time in the commit message. Otherwise, can
>         you commit a test case?
>
>
>     For this specific case I really had problems extracting a
>     reasonable test case.
>
>     The bug was that PHI elimination placed a COPY directly beneath a
>     control flow pseudo opcode, and while expanding the COPY and
>     pseudo opcode they ended up in the wrong order. But to actually
>     force those conditions you need a fair amount of control
>     flow, and my example IR only generates exactly this pattern
>     because it isn't optimized properly. I just wanted to avoid
>     submitting a large amount of IR to test for this bug, which I
>     would need to remove again once I optimize this control flow pattern.
>
>
> This is a fairly well known challenge of testing backend bug fixes -- 
> don't get too discouraged here. =]
>
> Jakob and Andy have done some work recently to allow a more restricted 
> set of things to run in 'llc' to make testing a bit easier. Also, many 
> folks have long wanted the ability to have MI -> MI and MI -> assembly 
> regression testing so that no optimizations actually run except for 
> the buggy one, allowing you to construct these types of odd situations 
> in a stable way.
>

Oh yes, please! That would make life so much easier!

> I dunno what your priorities are, but if you (or others) have time 
> to work on this, it would be an *amazingly* useful addition to LLVM's 
> code generation testing.
>

Totally agree, but unfortunately I have my doubts that we'll get time to 
work on such general purpose things (i.e. things not directly related to 
the R600/SI backend). Maybe when we get reasonable OpenCL support for our 
new hardware.

>     Well, you guys have pretty much convinced me of the value of good
>     test cases, but we developed this backend for quite some time
>     outside of master, and honestly we would probably need to add a
>     couple of hundred test cases to cover all the stuff in it.
>
>     I think the only way out of this misery is that I promise to
>     either provide a test case or a very very good reason not to do so
>     for future patches.
>
>
> I agree with your strategy here. It's hard to bootstrap a good set of 
> tests. I think the idea of "work really hard to write the test, or 
> explain what went wrong so that debugging in the future can at least 
> refer to the commit log" for every patch is the right baseline strategy.
>
> The other thing I've found helpful to build up better test coverage of 
> a sizable chunk of LLVM over time is to essentially apply a bloom 
> filter to the above strategy. My process is:
>
> 1) find bug, debug, hack on crazy test case to debug, find the fix
Exactly here is the problem: I usually don't need to hack up a crazy test 
case to debug a bug, but instead already have LLVM IR code from piglit. 
As long as it only covers a single instruction, pattern or intrinsic it 
is pretty easy to reduce, but especially the bugs in the CFG handling 
have given me a really hard time (and not only since the backend went 
upstream). Any suggestion or idea on that topic is very welcome.
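For the single-pattern cases, the reduced tests end up looking roughly 
like this (the function body, intrinsic name and CHECK line below are 
only an illustration of the shape such a test takes, not taken from an 
actual run):

```llvm
; RUN: llc < %s -march=r600 -mcpu=SI | FileCheck %s

; Hypothetical reduced test: one small function exercising a single
; instruction selection pattern, with a CHECK line pinning the
; instruction we expect llc to emit for it.
; CHECK: V_ADD_F32

define void @fadd_test(float %a, float %b) {
entry:
  %sum = fadd float %a, %b
  call void @llvm.AMDGPU.store.output(float %sum, i32 0)
  ret void
}

declare void @llvm.AMDGPU.store.output(float, i32)
```

The hard part, as said above, is that the CFG bugs need a whole net of 
branches before the broken ordering is even reachable, so the reduced 
file stops being small.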

> 2) step back, look at the fix, the nature of it, and the input 
> triggering the bug and try to synthesize or reduce a nice minimal test 
> case to exercise the code.
> 3) commit at that stage -- there is a nice regression test for *this* 
> bug, and the bug is fixed
> 4) take a finite, bounded amount of time, and apply a bloom filter 
> like process to the code in question and the test case. write some 
> more test cases for the immediate surrounding logic, flesh out the 
> existing test case, add some obvious base cases, etc
> 5) as soon as the allotted time is up, set it aside, move on to other 
> bugs or features
>
> The idea is to make the impact on development both minimal and 
> predictable, while doing a bit better than just adding regression 
> tests. You've already stared at this particular code path and the test 
> inputs that trigger it. It is all in the front of your head, so you 
> can relatively cheaply write a few tests for the immediate area. 
> Eventually, this tends to grow into pretty reasonable test coverage.
>
> Hope it helps, and thanks for all the amazing work on R600!! =D It's 
> really great to see this work in the main tree, and I appreciate the 
> effort you're putting into getting the development process and testing 
> into such great shape.

Thanks for the flowers, but it's mostly Tom's effort that's gotten the 
R600 backend to where it is now.

Christian.