[LLVMdev] IR Passes and TargetTransformInfo: Straw Man

Tobias Grosser tobias at grosser.es
Wed Jul 31 16:30:06 PDT 2013


On 07/30/2013 09:44 PM, Chris Lattner wrote:
>
> On Jul 30, 2013, at 10:19 AM, Shuxin Yang <shuxin.llvm at gmail.com> wrote:
>
>>   The pro for running LICM early is that it may move big redundant computations out of the loop nest;
>> you never know how big they are. If you are lucky, you can move a lot of code out of the
>> loop, the loop may become much smaller, and that enables lots of downstream optimizations. This sounds
>> like a big win for control-intensive programs, where loop-nest optimization is normally a big, expensive no-op.
>>
>>   The con side is that, as you said, the nest is no longer perfect. However, I would argue that LNO optimizations
>> should be able to tackle the cases where the imperfect part is simple enough (say, no calls, no control flow, etc.).
>> (FYI, Open64's LNO can handle imperfect nesting as long as the imperfect part is simple.) Or you just reverse
>> the LICM; that doesn't sound hard.
>
> FWIW, I completely agree with this.  The canonical form should be that loop invariants are hoisted.  Optimizations should not depend on perfect loops.  This concept really only makes sense for Source/AST level transformations anyway, which don't apply at the LLVM IR level.

Some comments from the perspective of an LNO such as Polly. In general, 
Polly, and probably many modern loop nest optimizers, do not care much 
whether a loop nest is perfectly or imperfectly nested; the 
transformations work either way.
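
For illustration (my example, not from the original discussion), the 
following nest is imperfectly nested, because the initialization of 
sum[i] sits between the two loop headers, yet a polyhedral tool can 
still model and transform it as long as the imperfect part stays simple:

   for (int i = 0; i < N; i++) {
     sum[i] = 0;      /* imperfect part: no calls, no control flow */
     for (int j = 0; j < M; j++)
       sum[i] += A[i][j];
   }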

LICM is problematic for another reason: it introduces new memory 
dependences. Here is a simple example.

Normal loop:

   for (int i = 0; i < N; i++)
     for (int j = 0; j < M; j++)
       sum[i] += A[i][j];

After LICM:

   for (int i = 0; i < N; i++) {
     float s = sum[i];
     for (int j = 0; j < M; j++)
       s += A[i][j];
     sum[i] = s;
   }


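To make the extra dependences concrete (my illustration, not part of 
the original mail), consider loop interchange. The first loop can be 
interchanged directly, because each sum[i] is only touched by 
iterations with the same i:

   for (int j = 0; j < M; j++)
     for (int i = 0; i < N; i++)
       sum[i] += A[i][j];

The LICM'd loop cannot be interchanged as easily: the scalar s is read 
and written in every iteration of the outer loop, so interleaving 
iterations of different i would clobber it.
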
Calculating precise dependences for the second loop yields many more 
dependences, which prevent possible transformations. An LNO can always 
remove those LICM-introduced dependences by expanding memory, but full 
memory expansion is impractical. Deriving the right amount of memory 
expansion (e.g., exactly the amount that reverts the LICM) is a 
difficult problem. From an LNO perspective, first deriving the possible 
transformations, then transforming the loop, and only as a last step 
applying LICM seems to be the better option.
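
As a sketch of what such an expansion looks like (again my example), 
the scalar s gets its own storage for every iteration of the outer 
loop, which removes the LICM-introduced dependence; forwarding s[i] 
back into sum[i] then recovers the original loop:

   for (int i = 0; i < N; i++) {
     s[i] = sum[i];     /* s expanded from a scalar to an array */
     for (int j = 0; j < M; j++)
       s[i] += A[i][j];
     sum[i] = s[i];
   }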

Having said that, if there are compelling reasons outside of LNO to keep 
LICM in the canonicalization passes, I can see us following Andrew's 
suggestion: disable LICM when an LNO is run and have the LNO schedule 
an additional set of cleanup passes later on.
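
A rough sketch of that option (my own, using the PassManagerBuilder 
extension-point mechanism; the LNO pass name is hypothetical and the 
exact headers and signatures vary between LLVM versions):

   #include "llvm/PassManager.h"
   #include "llvm/Transforms/IPO/PassManagerBuilder.h"
   #include "llvm/Transforms/Scalar.h"

   static void addLNOPasses(const llvm::PassManagerBuilder &,
                            llvm::PassManagerBase &PM) {
     PM.add(createLoopNestOptimizerPass());  // hypothetical LNO pass
     PM.add(llvm::createLICMPass());         // LICM scheduled afterwards as cleanup
   }

   static llvm::RegisterStandardPasses
   RegisterLNO(llvm::PassManagerBuilder::EP_LoopOptimizerEnd, addLNOPasses);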

Cheers,
Tobias
