[PATCH] D36562: [Bitfield] Make the bitfield a separate location if it has width of legal integer type and its bit offset is naturally aligned for the type

Chandler Carruth via llvm-commits llvm-commits at lists.llvm.org
Wed Aug 9 21:03:36 PDT 2017


Hal already answered much of this, just continuing this part of the
discussion...

On Wed, Aug 9, 2017 at 8:56 PM Xinliang David Li via llvm-commits <
llvm-commits at lists.llvm.org> wrote:

> On Wed, Aug 9, 2017 at 8:37 PM, Hal Finkel <hfinkel at anl.gov> wrote:
>
>>
>> On 08/09/2017 10:14 PM, Xinliang David Li via llvm-commits wrote:
>>
>> Can you elaborate here too? If there were missed optimizations that were
>> later fixed, there should be regression tests for them, right? And what
>> information is missing?
>>
>>
>> To make a general statement: if we load (a, i8) and (a+2, i16), for
>> example, and these came from some structure, we've lost the information
>> that a load of (a+1, i8) would have been legal (i.e., that byte is known
>> to be dereferenceable). This is not specific to bit fields, but the fact
>> that we lose information about the dereferenceable byte ranges around
>> memory accesses becomes a problem when we later can't legally widen.
>> There may be a better way to keep this information than producing wide
>> loads (which is an imperfect mechanism, especially the way we do it by
>> restricting to legal integer types),
>>
>
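
(To make Hal's example concrete -- the struct and byte offsets below are
hypothetical, chosen only to illustrate the information that gets lost:)

    struct T {
      char  x;   /* loaded as (a, i8)             */
      char  y;   /* the byte at a+1, never loaded */
      short z;   /* loaded as (a+2, i16)          */
    };

    /* Once lowered to an i8 load of x and an i16 load of z, the IR no
       longer records that the byte at a+1 (y) is also dereferenceable,
       so a later pass cannot prove it safe to widen the i8 load into a
       single i16 or i32 load covering both x and y. */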
I don't think we have such a restriction? Maybe I'm missing something. When
I originally added this logic, it definitely was not restricted to legal
integer types.

>> but at the moment, we don't have anything better.
>>
>
> Ok, as you mentioned, widening looks like a workaround to paper over a
> weakness in the IR's ability to annotate this information. More
> importantly, my question is whether this is just a theoretical concern.
>

I really disagree with this being a workaround.

I think it is fundamentally the correct model -- the semantics are that
this is a single, wide memory operation from which a narrow value is
extracted.
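
As a schematic example (a hypothetical layout, not taken from this patch),
a bitfield read under this model is one wide load of the storage unit
followed by a shift and mask:

    struct S {
      unsigned a : 3;
      unsigned b : 6;
      unsigned c : 23;
    };

    unsigned read_b(struct S *s) {
      /* One i32 load covering a, b, and c... */
      unsigned wide = *(unsigned *)s;
      /* ...from which the 6 bits of b at bit offset 3 are extracted. */
      return (wide >> 3) & 0x3f;
    }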

I think the thing that is seriously weak is the ability to aggressively
narrow these operations, and while I agree that this is important, it
doesn't actually seem to be common. We have a relatively small number of
(admittedly important) cases, but even our existing narrowing seems to be
very effective for the vast majority of cases.
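
(Schematically, and reusing the hypothetical struct S above, the existing
narrowing can shrink that wide load back down when only one field is
actually used:)

    unsigned read_a(struct S *s) {
      /* Codegen can narrow the i32 load of the storage unit to a single
         i8 load of its first byte, since a occupies bits [0, 3). */
      unsigned char lo = *(unsigned char *)s;
      return lo & 0x7;
    }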

I would be very hesitant to give up the information preserved by this
lowering and representation when we already optimize the majority of these
cases effectively during code generation, and have only just started
trying a much more aggressive approach. I suspect that the aggressive
approach can be made to work, even though it may be quite challenging.