[PATCH] D18069: Throttling LICM to reduce compile time of Value Propagation and Reg Splitting

Hal Finkel via llvm-commits llvm-commits at lists.llvm.org
Sat Mar 12 10:21:13 PST 2016


----- Original Message -----
> From: "Wei Mi" <wmi at google.com>
> To: "Philip Reames" <listmail at philipreames.com>
> Cc: reviews+D18069+public+889dfe1c8e777520 at reviews.llvm.org, "Andrew Trick" <atrick at apple.com>, "Hal Finkel"
> <hfinkel at anl.gov>, dberlin at dberlin.org, gberry at codeaurora.org, "David Li" <davidxl at google.com>, "llvm-commits"
> <llvm-commits at lists.llvm.org>
> Sent: Friday, March 11, 2016 7:48:50 PM
> Subject: Re: [PATCH] D18069: Throttling LICM to reduce compile time of Value Propagation and Reg Splitting
> 
> Note: Stretched vars affect not only the performance of Value
> Propagation but also register allocation (Reg Splitting). As Daniel
> said, the compile-time problem in Value Propagation could be solved
> in an eager way, but the stretched vars still hurt the compile time
> of reg alloc, for which I don't have an easy solution.
> 
> Actually, I am not convinced that doing LICM as aggressively as
> possible is indispensable to canonicalizing the IR.

Indispensable seems a bit strong, but hoisting invariant computations out of loops is often a necessary step toward later CSEing them, efficiently vectorizing the loops, estimating unrolling factors, enabling jump threading, and many other things.
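
As a rough illustration (a hypothetical C example of my own, not code from the patch under review), consider a loop whose body recomputes an invariant product:

    /* Before LICM, x * y is recomputed on every iteration; after
       hoisting, it becomes a single loop-invariant value that later
       passes can CSE with other uses of x * y, and the simpler loop
       body is easier for the vectorizer and unroller to cost-model. */
    void scale(float *a, int n, float x, float y) {
      for (int i = 0; i < n; ++i)
        a[i] *= x * y;  /* LICM hoists the multiply out of the loop */
    }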

It is also not clear that we can get a particularly accurate estimate of register pressure at the IR level. I'm not opposed to trying, but it would involve being able to figure out how many registers are needed for particular values, the latency and scheduling characteristics of different instructions, etc. If we had this, the vectorizer could do a better job of estimating interleaving factors. However, even then, it is not clear that we should use it during canonicalization unless we've exhausted all other options.

> As far as I know, GCC throttles LICM
> too, in its middle end. I cannot think of any optimization that
> depends on it to do a better job. If one exists, it would be
> interesting to know what is going on there.
> 
> And I don't think doing an optimization aggressively without a known
> benefit and then nullifying it is a good idea. For now, LLVM depends
> on register splitting and rematerialization to alleviate the register
> pressure caused by aggressive LICM, but register allocation does this
> far from perfectly...  That said, the major purpose of the patch is
> not to solve the register pressure brought on by LICM.

I think that we need to actually try to fix our rematerialization logic before deciding to work around it at the IR level. We currently rematerialize only single instructions (i.e., instructions marked as "trivially rematerializable"). However, many expressions require multiple instructions to compute. I don't want to make an overly general statement here, but at least on PowerPC, where I've looked at this in some detail, this is a significant issue. On PowerPC, even materializing basic constants often requires multiple instructions. Once we hoist these computations, which are trivial at the IR level, I don't think we ever even consider sinking and/or duplicating them again later.
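
As a sketch of what I mean (a made-up C example, not from any particular workload), take a large constant used inside a loop:

    /* On PowerPC, materializing this 64-bit constant takes a sequence
       of several instructions (e.g., lis/ori plus shifts), not one.
       The sequence is loop invariant and gets hoisted, but because
       rematerialization only considers single "trivially
       rematerializable" instructions, the allocator cannot recompute
       the whole sequence near its use under pressure; it keeps the
       value live (or spills it) across the entire loop instead. */
    void mask(unsigned long long *a, int n) {
      for (int i = 0; i < n; ++i)
        a[i] &= 0x123456789abcdef0ULL;
    }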

Thanks again,
Hal

> 
> Wei.
> 
> 
> 
> On Fri, Mar 11, 2016 at 5:20 PM, Philip Reames
> <listmail at philipreames.com> wrote:
> > David,
> >
> > I very deeply believe that limiting hoisting in the IR to reduce
> > register pressure is fundamentally the wrong approach.  The backend
> > is responsible for sinking if desired to reduce register pressure.
> > We do not model register pressure in the IR.  At all.  Period.
> >
> > Philip
> >
> >
> > On 03/11/2016 05:08 PM, David Li wrote:
> >>
> >> davidxl added a comment.
> >>
> >> I have not read the patch in detail, but it seems the main purpose
> >> of the patch is to throttle LICM to reduce register pressure, and
> >> the compile-time benefit due to reduced LVI queries is just a good
> >> side effect of the patch? Do we have any performance data (runtime)
> >> showing the usefulness of the throttling?  (Whether this is the
> >> right approach to solve the spill problem is also subject to
> >> discussion.)
> >>
> >>
> >> Repository:
> >>    rL LLVM
> >>
> >> http://reviews.llvm.org/D18069
> >>
> >>
> >>
> >
> 

-- 
Hal Finkel
Assistant Computational Scientist
Leadership Computing Facility
Argonne National Laboratory

