[llvm-commits] [llvm] r169218 - in /llvm/trunk: include/llvm/ lib/CodeGen/AsmPrinter/ lib/VMCore/ test/CodeGen/ARM/ test/CodeGen/Thumb/ test/CodeGen/X86/ test/DebugInfo/ test/DebugInfo/X86/ test/JitListener/

Bill Wendling wendling at apple.com
Wed Dec 5 11:45:58 PST 2012


On Dec 5, 2012, at 11:40 AM, David Blaikie <dblaikie at gmail.com> wrote:

> On Wed, Dec 5, 2012 at 11:29 AM, Eric Christopher <echristo at gmail.com> wrote:
>> On Wed, Dec 5, 2012 at 11:25 AM, Bill Wendling <isanbard at gmail.com> wrote:
>>> 
>>> On Dec 5, 2012, at 11:07 AM, Eric Christopher <echristo at gmail.com> wrote:
>>> 
>>>>> Awesome. FWIW we're probably getting those cases wrong in debug info :)
>>>>>
>>>> We're definitely getting the negative lower bounds wrong, since the
>>>> value is retrieved as a uint64_t. And a lower bound of zero will be
>>>> (incorrectly) omitted for Ada and friends.
>>>> 
>>>> Agreed. If you'd like to fix it, that'd be awesome; if not, file a bug
>>>> and I'll get to it eventually.
>>>> 
>>> Okay. The fix is to emit the lower bound in all cases. I don't think this
>>> will be a problem for the debugger. It just adds extra information that
>>> it'll probably ignore in C/C++. :)
>> 
>> 
>> No, it's not. The correct fix is to omit it when the lower bound is the
>> same as the default lower bound, which I just gave in this thread :)
> 
> (I'm not sure I'm helping here, but I'm confused as to the degree of
> talking-past that's gone on with this issue... hopefully I'm doing
> more good than harm)
> 
> Frontends (Clang, DragonEgg, etc.) know which language they're emitting
> debug info for & what the default lower bound is. They emit a lower
> bound whenever the bound is not the language-specific default.
> The backend (LLVM) unconditionally emits whatever the frontend gave it.
> 
> & thus we get the minimum required debug info: values when they're not
> the default, relying on the default when it matches.
> 
Here's my problem. Say Ada uses an array that starts at '0'. Right now we treat a lower bound of '0' as the default for every language (because we're assuming C/C++) and don't emit it in the DWARF. That's wrong for Ada, whose default lower bound is '1'. Conversely, we *will* emit a lower bound for an Ada array that starts at '1', since we don't expect that to be the default, even though that attribute is redundant.
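
For what it's worth, DWARF 4 (Table 7.17) does define a default lower bound per source language: 0 for the C family, Java, and friends, and 1 for Ada, COBOL, Fortran, Modula-2, Pascal, and PL/I. So the check is mechanical once we know the CU's language. Here's a rough sketch of what such a lookup could look like, using the DW_LANG_* constants from llvm/Support/Dwarf.h (the helper name and the -1 "no known default" sentinel are just placeholders I made up):

  #include "llvm/Support/Dwarf.h"
  using namespace llvm;

  // Default lower bound per source language, per DWARF 4 Table 7.17.
  // Returns -1 when the language defines no default, meaning the bound
  // should always be emitted.
  static int64_t getDefaultLowerBound(unsigned Language) {
    switch (Language) {
    default:
      return -1; // Unknown language: always emit the bound.
    case dwarf::DW_LANG_C89:
    case dwarf::DW_LANG_C:
    case dwarf::DW_LANG_C_plus_plus:
    case dwarf::DW_LANG_C99:
    case dwarf::DW_LANG_Java:
    case dwarf::DW_LANG_ObjC:
    case dwarf::DW_LANG_ObjC_plus_plus:
    case dwarf::DW_LANG_UPC:
    case dwarf::DW_LANG_D:
    case dwarf::DW_LANG_Python:
      return 0;
    case dwarf::DW_LANG_Ada83:
    case dwarf::DW_LANG_Ada95:
    case dwarf::DW_LANG_Cobol74:
    case dwarf::DW_LANG_Cobol85:
    case dwarf::DW_LANG_Fortran77:
    case dwarf::DW_LANG_Fortran90:
    case dwarf::DW_LANG_Fortran95:
    case dwarf::DW_LANG_Modula2:
    case dwarf::DW_LANG_Pascal83:
    case dwarf::DW_LANG_PLI:
      return 1;
    }
  }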

I just need a way of determining whether the lower bound given to us is the default for the source language. If it isn't, we emit the DW_AT_lower_bound attribute; otherwise we don't.
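
The emission site could then read the bound as a signed value (fixing the uint64_t problem Eric mentioned) and only add the attribute when it differs from the language default. Sketch only; Subrange, DW_Subrange, and addSInt() here are stand-ins for the real descriptor, DIE, and helper:

  // (a) read the bound as int64_t, not uint64_t, so negative bounds survive;
  // (b) compare it against the language default before emitting anything.
  int64_t LowerBound = Subrange.getLo();
  int64_t DefaultLB  = getDefaultLowerBound(Language);
  if (DefaultLB == -1 || LowerBound != DefaultLB)
    addSInt(DW_Subrange, dwarf::DW_AT_lower_bound, dwarf::DW_FORM_sdata,
            LowerBound);

With that, an Ada array starting at '0' gets its DW_AT_lower_bound of 0 emitted, and a C array starting at '0' stays attribute-free.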

-bw
