[llvm-commits] [llvm] r160273 - /llvm/trunk/test/CodeGen/R600/

Tom Stellard thomas.stellard at amd.com
Mon Jul 16 09:31:41 PDT 2012


On Mon, Jul 16, 2012 at 06:01:31PM +0200, Duncan Sands wrote:
> Hi Tom,
> 
> On 16/07/12 17:54, Tom Stellard wrote:
> >On Mon, Jul 16, 2012 at 04:39:37PM +0200, Duncan Sands wrote:
> >>Hi Tom, adding a bunch of test cases that diff against binary blobs seems
> >>like a really bad idea to me.  Suppose I make a change to codegen and one
> >>of these tests breaks.  How am I to tell whether my change just resulted in
> >>an unimportant difference, such as slightly different register allocation,
> >>or whether I really broke something?
> >>
> >
> >Hi Duncan,
> >
> >I can see how that would be a problem.  I think the main difficulty here
> >is that the R600 backend doesn't have an AsmPrinter,
> 
> can it get one?
>

We don't really have any plans to write an AsmPrinter, but if it is a
requirement for the tests, I can write one.
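
For example, once there is textual output to match against, I imagine
fadd.ll would end up looking something like this (a rough sketch; the
ADD mnemonic is just my guess at what the printer would emit):

;RUN: llc < %s -march=r600 -mcpu=redwood | FileCheck %s

;CHECK: ADD

define void @test() {
   %r0 = call float @llvm.R600.load.input(i32 0)
   %r1 = call float @llvm.R600.load.input(i32 1)
   %r2 = fadd float %r0, %r1
   call void @llvm.AMDGPU.store.output(float %r2, i32 0)
   ret void
}

declare float @llvm.R600.load.input(i32) readnone

declare void @llvm.AMDGPU.store.output(float, i32)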

-Tom

> >so there isn't really
> >another way to test the output (unless it's possible to parse the output
> >of MF.dump()).
> >
> >The other issue is that I wasn't quite sure exactly what should be
> >tested with these test cases.  I tried to make all the test cases I wrote
> >compile down to a single instruction, so that they should never change,
> >but ideally I would like to add more complex test cases as well.
> >However, my concern with the more complex tests is that they might be
> >"broken" by correct optimizations that change the output slightly,
> >which would confuse the developer implementing the optimization.  How is
> >this situation handled with other backends?
> 
> They output a textual representation and use FileCheck to check the output;
> cf. the tests for all the other backends in test/CodeGen.  Since FileCheck
> does string matching (possibly with patterns), it is possible, but not always
> easy, to check that the aspects you are looking for occurred while ignoring
> insignificant details.
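> 
> For instance (with a made-up mnemonic and register scheme), a check
> line like
> 
>   ;CHECK: ADD T{{[0-9]+}}.{{[XYZW]}}
> 
> insists that an ADD is emitted but matches any destination register,
> so insignificant allocation changes don't break the test.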
> 
> Ciao, Duncan.
> 
> >
> >Up to this point I've been regression testing the backend using Mesa's
> >open source test suite piglit[1], which provides really good coverage.
> >I would like to make better use of the LLVM testing infrastructure, but
> >as I mentioned, I'm not quite sure what kinds of tests I should be
> >writing.
> >
> >-Tom
> >
> >[1] http://cgit.freedesktop.org/piglit
> >
> >>Ciao, Duncan.
> >
> >>>--- llvm/trunk/test/CodeGen/R600/fadd.ll (added)
> >>>+++ llvm/trunk/test/CodeGen/R600/fadd.ll Mon Jul 16 09:17:19 2012
> >>>@@ -0,0 +1,15 @@
> >>>+;RUN: llc < %s -march=r600 -mcpu=redwood | diff %s.check -
> >>>+
> >>>+
> >>>+define void @test() {
> >>>+   %r0 = call float @llvm.R600.load.input(i32 0)
> >>>+   %r1 = call float @llvm.R600.load.input(i32 1)
> >>>+   %r2 = fadd float %r0, %r1
> >>>+   call void @llvm.AMDGPU.store.output(float %r2, i32 0)
> >>>+   ret void
> >>>+}
> >>>+
> >>>+declare float @llvm.R600.load.input(i32) readnone
> >>>+
> >>>+declare void @llvm.AMDGPU.store.output(float, i32)
> >>>+
> >>>
> >>>Added: llvm/trunk/test/CodeGen/R600/fadd.ll.check
> >>>URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/R600/fadd.ll.check?rev=160273&view=auto
> >>>==============================================================================
> >>>Binary files llvm/trunk/test/CodeGen/R600/fadd.ll.check (added) and llvm/trunk/test/CodeGen/R600/fadd.ll.check Mon Jul 16 09:17:19 2012 differ
> >>...