[LLVMdev] RFC: Improving our DWARF (and ELF) emission testing capabilities
David Blaikie
dblaikie at gmail.com
Tue Jan 22 18:33:11 PST 2013
On Tue, Jan 22, 2013 at 6:22 PM, Robinson, Paul
<Paul.Robinson at am.sony.com> wrote:
> David Blaikie [dblaikie at gmail.com] wrote:
>> On Tue, Jan 22, 2013 at 3:23 PM, Robinson, Paul
>> <Paul.Robinson at am.sony.com> wrote:
>>>>>> 2. Relying on assembly directive emission (e.g. .cfi_*), which is
>>>>>> cumbersome and misses a lot of things, like the actual DWARF encoding.
>>>>>
>>>>> I'm not sure what you mean by "actual DWARF encoding" here.
>>>>> (disclaimer: I've only recently started dabbling with debug info, so I
>>>>> may be missing obvious things)
>>>>
>>>> I mean that it doesn't test the whole way: there's quite a bit of
>>>> DWARF-related functionality in MC, so when a test relies on matching
>>>> directives in the asm output, all of that code goes unexercised.
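(To make that concrete, a test in the directive-matching style looks
roughly like this - an untested sketch; the function name and the
particular directives/offsets are made up:

  ; RUN: llc < %s | FileCheck %s
  ; Matches the directives the AsmPrinter prints; the bytes MC would
  ; actually encode from them are never checked.
  ; CHECK: foo:
  ; CHECK: .cfi_startproc
  ; CHECK: .cfi_def_cfa_offset 16
  ; CHECK: .cfi_endproc

Everything downstream of the directive parsing - the actual DWARF/CFI
encoding in MC - is invisible to such a test.)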
>>>
>>> Hmmm. "Proper" testing would exercise each component involved, as well
>>> as possibly longer paths that may not be exactly the sum of the parts.
>>> Debug info changes are quite likely to involve most or all of:
>>> - Clang's C/C++ to IR (metadata)
>>
>> Simple enough: changes to Clang require tests in Clang (OK, so there's
>> no mechanism in place to keep Clang tests from also exercising the
>> optimizer, for example, but we try to be good about it)
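To be concrete, a Clang-side debug info test is just IRGen plus
FileCheck over the emitted metadata. A minimal sketch (the flags and
check line are illustrative only):

  // RUN: %clang_cc1 -g -emit-llvm %s -o - | FileCheck %s
  int global;
  // Only the metadata Clang generates is checked here; LLVM's DWARF
  // emission is never exercised.
  // CHECK: !llvm.dbg.cu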
>>
>>> - LLVM's IR to assembler source
>>> - assembler source to object file
>>> - LLVM's IR to object file (which partly bypasses or can be different
>>> from the previous two paths)
>>
>> & these generally go in LLVM - yeah, they could be separate, but I'd
>> expect the assembler and object file emission to be tested separately
>> already - the marginal benefit of testing a particular IR->object path
>> on top of the matching IR->assembly test is probably not worthwhile.
>
> I cite PR13303/PR14524, where asm and direct-object output differ.
> This came up early in my LLVM career and has doubtless poisoned my
> outlook for life....
>
> In many cases I think the same test _source_ can be used to check both
> asm and object, with appropriate RUN lines, and whether you want to
> count that as the same or separate depends on how you like to game the
> counts. What matters to me is both paths get tested.
Sure - I'd just rather see these separated into object-emission/asm
tests where possible, rather than littering the other test cases with
two modes each. I assume the code is sufficiently factored that testing
this way would be fairly reliable (i.e. I hope we can hit the same code
paths that produce .byte from debug info as would produce it from
anywhere else in the backend, and just test that once directly).
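Where one test really does need to cover both paths, I'd picture
something like the following - an untested sketch, with illustrative
check prefixes and illustrative dwarfdump output:

  ; RUN: llc < %s | FileCheck %s -check-prefix=ASM
  ; RUN: llc < %s -filetype=obj -o %t.o
  ; RUN: llvm-dwarfdump %t.o | FileCheck %s -check-prefix=OBJ
  ;
  ; ASM checks the directives the AsmPrinter prints; OBJ checks what MC
  ; actually encoded into the object file's DWARF sections.
  ; ASM: .file 1 "test.c"
  ; OBJ: DW_TAG_compile_unit
  ; ... IR under test goes here ...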
>
>> If we could test against the precursor to those outputs then we might
>> get the advantage of only having the right tests fail for the right
>> reasons (debug info tests wouldn't fail when we break the
>> assembler/object emission).
>>
>>> Properly speaking they should each get their own tests.
>>> Not to mention a unit-test (or debuginfo-test) to exercise the complete
>>> Clang -> object (-> debugger) sequence.
>>
>> Just because it makes me twitch (though I admit debating test taxonomy
>> terminology verges on a religious topic): these tests are the
>> antithesis of unit tests. Taking source code, compiling it with
>> clang/LLVM, loading it in a debugger and interacting with the debugger
>> is a scenario test.
>>
>> A unit test would be API level, say building IR by calling Clang APIs
>> & then passing it into DebugInfo generation & watching the MI calls
>> that resulted (preferably stubbing them out in some way).
>
> Not sure why I said unit test there...
> I'd think of compiling .cpp->.o as a Clang/LLVM integration test, while
> I'd think of running the debugger on the object as a system test
> (because gdb is not part of what this community delivers).
> I also think I'd spectacularly fail the CSSP test. :-)
>
>>> I try to be good about this, but as a developer I find that sort of
>>> thing tedious. Which mostly proves that I suck at QA, and have to depend
>>> on reviewers to keep me on the straight and narrow. This works to the
>>> extent that those reviewers are willing to be critical of my efforts,
>>> and insist on adequate (instead of minimal) testing. But testing is
>>> an art unto itself, and most developers aren't good at it.
>>
>> We do what we can (because we must).
>
> Amen, brother.
> --paulr