[PATCH] Review for hoisting and sinking of equivalent memory instruction (Instruction Merge Pass)

Daniel Berlin dberlin at dberlin.org
Wed Jun 18 12:47:35 PDT 2014


FWIW: There is no easy way to do this in O(n) for stores in LLVM, due
to the lack of something like memory SSA (otherwise, you could sink
each store to the nearest common dominator of all of its immediate
uses, as we do in GCC). You can do it in O(n), or much closer to it,
for loads in LLVM, like this:

Assuming GVN and PRE have been run, all loads that can be determined
to be identical should look identical in their operands (if not, our
GVN is seriously busted :P) [1].

pending = hash table of <hoist block, load operands> -> list of load instructions

for each load in the diamond:
  calculate the hoist location as the nearest common dominator of:
     for each dependency according to memdep, the block of that dependency
     for each RHS operand, the defining block of that operand
  pending[<hoist block, load operands>].insert(load instruction)

for each entry in pending:
  if (list.size() > 1)
    perform the merge and hoist to the end of the common dominator block
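
To make that concrete, here is a rough C++ sketch of the bucketing,
specialized to a single diamond so that the hoist point is simply the
head block and the nearest-common-dominator computation drops out. The
function name, the key layout, and the exact legality checks are
simplifications of mine (not anything from the patch under review), and
it assumes roughly the current MemoryDependenceAnalysis / DominatorTree
interfaces:

// Rough sketch only -- not the patch under review.
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/MemoryDependenceAnalysis.h"
#include "llvm/IR/Dominators.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Merge loads of the same address that occur in both arms of a diamond
// (Head branches to Then and Else, which rejoin) into a single load at
// the end of Head.  One pass over the arms plus one pass over the hash
// table, so linear in the number of loads.
static bool mergeDiamondLoads(BasicBlock *Head, BasicBlock *Then,
                              BasicBlock *Else, DominatorTree &DT,
                              MemoryDependenceAnalysis &MD) {
  typedef std::pair<Value *, Type *> LoadKey; // <pointer operand, type>
  DenseMap<LoadKey, SmallVector<LoadInst *, 2>> Pending;

  BasicBlock *Arms[] = {Then, Else};
  for (BasicBlock *BB : Arms) {
    for (Instruction &I : *BB) {
      LoadInst *LI = dyn_cast<LoadInst>(&I);
      if (!LI || !LI->isSimple())
        continue;
      // The pointer must already be available at the end of Head.
      Value *Ptr = LI->getPointerOperand();
      if (Instruction *PtrInst = dyn_cast<Instruction>(Ptr))
        if (!DT.dominates(PtrInst, Head->getTerminator()))
          continue;
      // Nothing earlier in this arm may clobber the location: the local
      // memdep query must report the dependency as non-local.
      if (!MD.getDependency(LI).isNonLocal())
        continue;
      Pending[LoadKey(Ptr, LI->getType())].push_back(LI);
    }
  }

  bool Changed = false;
  for (auto &Entry : Pending) {
    SmallVectorImpl<LoadInst *> &Loads = Entry.second;
    // Only merge when both arms load the address; then every path
    // through the diamond already executes such a load, and hoisting
    // one copy to Head speculates nothing new.
    bool InThen = false, InElse = false;
    for (LoadInst *LI : Loads) {
      InThen |= LI->getParent() == Then;
      InElse |= LI->getParent() == Else;
    }
    if (!InThen || !InElse)
      continue;
    // Hoist the first load and make the others use it.  (Merging of
    // alignment and metadata is elided for brevity.)
    LoadInst *Leader = Loads[0];
    Leader->moveBefore(Head->getTerminator());
    MD.removeInstruction(Leader); // its cached deps are stale now
    for (unsigned i = 1, e = Loads.size(); i != e; ++i) {
      Loads[i]->replaceAllUsesWith(Leader);
      MD.removeInstruction(Loads[i]);
      Loads[i]->eraseFromParent();
    }
    Changed = true;
  }
  return Changed;
}

The hash table keyed on <pointer, type> is what keeps this linear: each
load is visited once, and merge candidates find each other by hashing
rather than by scanning the other arm.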


This would also be easier if GVN produced a value number or value
handle for each value, as GCC's does (then it wouldn't matter whether
the loads looked identical, only whether they compute the same value),
but c'est la vie.
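
For illustration only (LLVM's GVN keeps its value numbers internal, so
the ValueNumbering interface below is a made-up stand-in): with such a
handle, the table could key on the number rather than on the structural
operands, so loads that compute the same value in different forms would
still land in the same bucket:

// Purely hypothetical: ValueNumbering stands in for an interface that
// LLVM's GVN does not expose; equal numbers would mean "provably the
// same value".
#include <cstdint>
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

struct ValueNumbering {
  uint32_t lookup(const Value *V) const; // declaration only -- a sketch
};

// Key on <hoist block, value number of the load> instead of on the
// structural operands.
typedef std::pair<BasicBlock *, uint32_t> VNKey;

static void bucketByValueNumber(
    ArrayRef<LoadInst *> Candidates, BasicBlock *HoistBB,
    const ValueNumbering &VN,
    DenseMap<VNKey, SmallVector<LoadInst *, 2>> &Pending) {
  for (LoadInst *LI : Candidates)
    Pending[VNKey(HoistBB, VN.lookup(LI))].push_back(LI);
}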

[1] The only case where this wouldn't be true is when the load's
operands are defined inside the diamond, in which case you couldn't
hoist it out of the diamond anyway without a real load PRE determining
whether the operands could be moved/recomputed.


On Thu, Jun 12, 2014 at 5:21 PM, Gerolf Hoflehner <ghoflehner at apple.com> wrote:
> Theoretically the algorithm is quadratic in the number of loads and stores.
>
> The first thought was to integrate it with GVN, using a hash-table approach
> like the one Daniel is referring to. That seemed the natural place to decide
> when two instructions are equivalent. But the code quickly got unwieldy, and
> the resulting increase in GVN code size made it look unlikely to be accepted
> by the community. And you still have to search the hash table. It could give
> better compile-time results when there are a few loads and lots of other
> instructions (the good case of the space-time trade-off), but not when, e.g.,
> the blocks contain only loads. In the end it seemed cleanest to move the
> optimization into its own pass and make it conservative, so that the
> compile-time impact should be negligible.
>
> -Gerolf
>
>
> On Jun 11, 2014, at 9:04 PM, Daniel Berlin <dberlin at dberlin.org> wrote:
>
>> It's not possible to do a *complete* job of store sinking/load
>> hoisting in better time bounds.
>>
>> However,  it is possible to do *this* particular part of it in better
>> time bounds (through creative use of value numbering loads and a hash
>> table).  LLVM doesn't have the infrastructure right now to make this
>> easy, however.
>>
>>
>>
>>
>>
>>
>> On Tue, Jun 10, 2014 at 11:15 PM, Tobias Grosser <tobias at grosser.es> wrote:
>>> On 10/06/2014 22:43, Gerolf Hoflehner wrote:
>>>>
>>>> Hi chandlerc,
>>>>
>>>> This pass iteratively hoists two loads to the same address out of a
>>>> diamond (hammock) and merges them into a single load in the header.
>>>> Similarly, it sinks and merges two stores to the tail block. The
>>>> algorithm iterates over the instructions of one side of the diamond
>>>> and attempts to find a matching load/store on the other side. It
>>>> hoists / sinks when it thinks it is safe to do so. I tailored the
>>>> code to be as conservative as possible to catch the initial cases we
>>>> are interested in, which keeps code size and complexity in check.
>>>> The optimization helps hide load latencies and trigger if-conversion.
>>>>
>>>> http://reviews.llvm.org/D4096
>>>
>>>
>>> Hi Gerolf,
>>>
>>> just a high-level comment. This algorithm seems to have quadratic run time.
>>> Is this intended?
>>>
>>> Cheers,
>>> Tobias
>>> _______________________________________________
>>> llvm-commits mailing list
>>> llvm-commits at cs.uiuc.edu
>>> http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits
>



