[PATCH] Add threshold for lowering lattice value 'overdefined' in LVI

Hal Finkel hfinkel at anl.gov
Fri Sep 19 06:22:32 PDT 2014


----- Original Message -----
> From: "Jiangning Liu" <liujiangning1 at gmail.com>
> To: "Hal Finkel" <hfinkel at anl.gov>
> Cc: reviews+D5322+public+26bcfcc2c04b0b02 at reviews.llvm.org, mcrosier at codeaurora.org, joerg at netbsd.org,
> "llvm-commits at cs.uiuc.edu for LLVM" <llvm-commits at cs.uiuc.edu>
> Sent: Tuesday, September 16, 2014 8:38:34 AM
> Subject: Re: [PATCH] Add threshold for lowering lattice value 'overdefined' in LVI
> 
> 
> Hi Hal,
> 
> 
> 
> 2014-09-16 13:18 GMT+08:00 Hal Finkel <hfinkel at anl.gov>:
> 
> 
> ----- Original Message -----
> > From: "Jiangning Liu" <liujiangning1 at gmail.com>
> > To: "Hal Finkel" <hfinkel at anl.gov>
> > Cc: reviews+D5322+public+26bcfcc2c04b0b02 at reviews.llvm.org,
> > mcrosier at codeaurora.org, joerg at netbsd.org,
> > "llvm-commits at cs.uiuc.edu for LLVM" <llvm-commits at cs.uiuc.edu>
> > Sent: Tuesday, September 16, 2014 12:07:21 AM
> > Subject: Re: [PATCH] Add threshold for lowering lattice value
> > 'overdefined' in LVI
> > 
> > 
> > Hi Hal,
> > 
> > 
> > I don't understand how putting the count inside LVILatticeVal would
> > help with the dilemma you mentioned. Can you explain more about
> > this? And why would it be more local that way?
> 
> Is the problem not that, in the worst case, we're doing too many
> expensive visits to some (LVILatticeVal, BB) pairs? With a global
> threshold on all (LVILatticeVal, BB) visits, we can do more
> optimization in some parts of the function than in others (because
> we'll visit some parts of the function before other parts). If we
> keep the count per value, then we limit the analysis done per value,
> and that limit is independent of where the value is in the function.
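
(To make what I was picturing concrete, it was roughly the sketch
below. This is purely illustrative; the type, field, and limit names
are made up and are not the actual LVILatticeVal definition.)

  // Illustrative sketch only: a per-value counter carried inside the
  // lattice value itself, so the limit applies to each value separately
  // rather than to the whole function. None of these names exist in LLVM.
  struct LatticeValSketch {
    enum Tag { Undefined, Known, Overdefined };
    Tag T = Undefined;
    unsigned OverdefinedCount = 0;                  // hypothetical counter
    static const unsigned MaxOverdefinedMarks = 16; // hypothetical limit

    // Mark the value overdefined; return true if we should give up on
    // trying to lower this particular value any further.
    bool markOverdefined() {
      T = Overdefined;
      return ++OverdefinedCount > MaxOverdefinedMarks;
    }
  };
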
> 
> 
> 
> The LazyValueInfo module provides interfaces like getConstant to its
> clients. This kind of interface asks for V's value in basic block BB,
> as specified by parameters like (Value *V, BasicBlock *BB, ...).
> Eventually, the algorithm tries to "solve()" the value by lowering
> the lattice value. Each time, before "solving" the value, the counter
> I added is cleared/reset to 0, so my solution is already on a
> per-value basis. To some extent, it is already independent of where
> the value is in the function, because we can ask to "solve" the value
> anywhere in the function.

Ah, okay, I missed that. Please add a comment on the counter noting
that it is reset on every solve; with that, the current solution LGTM
(please keep both thresholds only if having the second one you added
allows an increase in the first one). Thanks for clarifying!

 -Hal

> When "solving" a
> single value for a specific block, the lattice lowering algorithm
> could visit very large number of other basic blocks(predecessors).
> The threshold I added for BB is avoid visiting too many BBs, and it
> simply bails out lowering for this specific value only.
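
(For anyone else following the thread, my reading of the scheme
described above is roughly the following shape. This is only a sketch;
the class, counter, and threshold names are invented for illustration
and are not taken from the patch.)

  #include <vector>

  // Sketch only: a per-query visit budget. The counter is reset at the
  // start of every top-level solve, so the threshold applies to the one
  // value being queried, and lowering simply bails out for that value
  // once too many blocks have been visited.
  class LazyValueQuerySketch {
    unsigned BlocksVisited = 0;
    static const unsigned BlockVisitLimit = 500; // hypothetical threshold

    bool lowerInBlock(int /*BB*/) {
      if (++BlocksVisited > BlockVisitLimit)
        return false; // budget exhausted: give up on this value only
      // ... merge facts from this block's predecessors ...
      return true;
    }

  public:
    // One call per (Value, BasicBlock) query from interfaces such as
    // getConstant; resetting the counter here makes the limit per value.
    bool solve(const std::vector<int> &BlocksToVisit) {
      BlocksVisited = 0;
      for (int BB : BlocksToVisit)
        if (!lowerInBlock(BB))
          return false; // leave the value overdefined
      return true;
    }
  };
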
> 
> 
> 
> 
> > 
> > 
> > I think at present we have a unique ValueCacheEntryTy for each
> > value in a specific block. If we put a count inside LVILatticeVal,
> > what would the semantics of this count be?
> 
> My thought was that it would be the same as the count you originally
> proposed, but per value. So we'd count the number of times the value
> had been marked as overdefined. And once the limit is reached, we'd
> give up.
> 
> 
> 
> The two thresholds I added already cover both per-BB and per-value
> limits. If what you are proposing is a per-lattice-value count with a
> threshold on that count, I don't see how it could help solve the
> dilemma of optimizing only the early blocks of very big functions,
> because we have a unique lattice value for each (BB, Value) pair in
> the algorithm, and these are stack values returned by functions like
> getXXX().
> 
> 
> The number of times the value gets marked as overdefined in a single
> value-"solving" stage should be very small, because the algorithm
> merges values after visiting the value's predecessors. If the value
> is lowered to a fixed value, the algorithm will exit shortly. If the
> value after the merge is still overdefined, the algorithm should not
> visit it again by walking the stack (BlockValueStack). Maybe I'm
> wrong about the procedure, but this is my current understanding.
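
(Restating my reading of that flow as a sketch, with simplified
stand-in types rather than the real LVI code; every name below is
illustrative only.)

  #include <vector>

  // Simplified stand-in for the lattice value, not the real LVI type:
  // either nothing is known yet, a single integer constant is known, or
  // the value is overdefined (bottom of the lattice).
  struct SketchVal {
    enum Tag { Undefined, Constant, Overdefined } T = Undefined;
    int ConstVal = 0; // meaningful only when T == Constant

    void mergeIn(const SketchVal &Other) {
      if (Other.T == Undefined || T == Overdefined)
        return;                  // nothing new, or already at bottom
      if (T == Undefined) {
        *this = Other;           // adopt the first real fact
        return;
      }
      if (Other.T == Overdefined || Other.ConstVal != ConstVal)
        T = Overdefined;         // conflicting facts: give up precision
    }
  };

  // Merge the values flowing in from each predecessor. Once the result
  // is overdefined there is nothing more to learn, so the loop stops
  // early instead of queueing further work for this value.
  static SketchVal mergePredecessors(const std::vector<SketchVal> &Preds) {
    SketchVal Result;
    for (const SketchVal &PV : Preds) {
      Result.mergeIn(PV);
      if (Result.T == SketchVal::Overdefined)
        break;
    }
    return Result;
  }
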
> 
> 
> Thanks,
> -Jiangning
> 
> 
> 
> -Hal
> 
> 

-- 
Hal Finkel
Assistant Computational Scientist
Leadership Computing Facility
Argonne National Laboratory


