[llvm] r184105 - Switch spill weights from a basic loop depth estimation to BlockFrequencyInfo.

Benjamin Kramer benny.kra at gmail.com
Fri Jun 21 06:49:47 PDT 2013


On 18.06.2013, at 19:07, Jakob Stoklund Olesen <stoklund at 2pi.dk> wrote:

> 
> On Jun 18, 2013, at 6:35 AM, Benjamin Kramer <benny.kra at gmail.com> wrote:
> 
>> 
>> On 18.06.2013, at 00:14, Jakob Stoklund Olesen <stoklund at 2pi.dk> wrote:
>> 
>>> 
>>> On Jun 17, 2013, at 2:47 PM, Jakob Stoklund Olesen <stoklund at 2pi.dk> wrote:
>>> 
>>> In other words, I wanted to do something like this:
>>> 
>>> BlockFrequency BiasN; // Sum of blocks that prefer a spill.
>>> BlockFrequency BiasP; // Sum of blocks that prefer a register.
>>> 
>>> Then:
>>> 
>>> bool update(const Node nodes[]) {
>>> // Compute the weighted sum of inputs.
>>> BlockFrequency SumN = BiasN;
>>> BlockFrequency SumP = BiasP;
>>> for (LinkVector::iterator I = Links.begin(), E = Links.end(); I != E; ++I) {
>>>   if (nodes[I->second].Value == -1)
>>>     SumN += I->first;
>>>   if (nodes[I->second].Value == 1)
>>>     SumP += I->first;
>>> }
>>> 
>>> int OldValue = Value;
>>> Value = 0;
>>> if (SumN > SumP+Thres)
>>>   Value = -1;
>>> if (SumP > SumN+Thres)
>>>   Value = 1;
>>> return Value != OldValue; // report whether the preference changed
>>> }
>>> 
>>> This should safely and symmetrically deal with the saturating block frequency arithmetic, and the result is independent of the order of the link vector.
>>> 
>>> It’s possible a simpler solution is just as good.
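The symmetric update above can be sketched in isolation. This is a hypothetical, self-contained model: `Freq` is a clamped-uint64 stand-in for BlockFrequency's saturating arithmetic, and all names are illustrative, not LLVM's.

```cpp
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

// Stand-in for BlockFrequency: unsigned frequency with saturating
// addition, so overflowing sums clamp at the maximum instead of wrapping.
struct Freq {
  uint64_t F = 0;
  Freq() = default;
  Freq(uint64_t V) : F(V) {}
  Freq &operator+=(Freq RHS) {
    F = (F + RHS.F < F) ? UINT64_MAX : F + RHS.F; // saturate on overflow
    return *this;
  }
  bool operator>(Freq RHS) const { return F > RHS.F; }
};
Freq operator+(Freq A, Freq B) { A += B; return A; }

struct Node { int Value = 0; };

// Symmetric update as in the sketch above: sum the spill votes (SumN) and
// register votes (SumP) separately, then compare with a dead zone Thres.
// The result is independent of link order, and saturation affects both
// sums the same way.
int updateValue(Freq BiasN, Freq BiasP,
                const std::vector<std::pair<Freq, unsigned>> &Links,
                const Node Nodes[], Freq Thres) {
  Freq SumN = BiasN, SumP = BiasP;
  for (const auto &L : Links) {
    if (Nodes[L.second].Value == -1)
      SumN += L.first;
    if (Nodes[L.second].Value == 1)
      SumP += L.first;
  }
  if (SumN > SumP + Thres)
    return -1;
  if (SumP > SumN + Thres)
    return 1;
  return 0;
}
```

Because the negative and positive contributions are accumulated into separate sums, a saturated SumN cannot silently cancel against later positive links, which was the hazard with a single signed accumulator.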
>> 
>> Attached is a patch set that implements this.
>> 
>> <0001-SpillPlacement-Remove-scaling.-This-is-no-longer-nee.patch>
> 
> This patch basically looks good, except for two missing functions:
> 
>  /// mustSpill - Return True if this node is so biased that it must spill.
>  bool mustSpill() const {
>    // Actually, we must spill if Bias < sum(weights).
>    // It may be worth it to compute the weight sum here?
>    return Bias < -2.0f;
>  }
> 
> and:
> 
> /// activate - mark node n as active if it wasn't already.
> void SpillPlacement::activate(unsigned n) {
>  if (ActiveNodes->test(n))
>    return;
>  ActiveNodes->set(n);
>  nodes[n].clear();
> 
>  // Very large bundles usually come from big switches, indirect branches,
>  // landing pads, or loops with many 'continue' statements. It is difficult to
>  // allocate registers when so many different blocks are involved.
>  //
>  // Give a small negative bias to large bundles such that 1/32 of the
>  // connected blocks need to be interested before we consider expanding the
>  // region through the bundle. This helps compile time by limiting the number
>  // of blocks visited and the number of links in the Hopfield network.
>  if (bundles->getBlocks(n).size() > 100)
>    nodes[n].Bias = -0.0625f;
> }
> 
> These two functions compare Bias against an absolute number, and Bias has new units now.
> 
> The mustSpill() function identifies nodes that are doomed to spill no matter how many connected basic blocks prefer a register - it is predicting that update() will always compute Value = -1. It is used to prune the network so we don’t have to iterate over cold parts of the CFG. It’s a compile time optimization, but an important one.
> 
> It is also important that the function returns true for nodes with a MustSpill constraint.
> 
> The most accurate implementation is Bias < -sum(LinkWeights); the current version is faster because it exploits the knowledge that sum(LinkWeights) <= 2.0. With this patch, sum(LinkWeights) is a sum of block frequencies, which can’t be assumed to be less than 2.0.
> 
> You could simply store the accumulated link weights for each node, that is probably the simplest solution.

You mean something like
    return BiasN >= BlockFrequency::getMaxFrequency() ||
           BiasN < BiasP + SumLinkWeights;

and add the frequency to SumLinkWeights every time addLink is called? I tried this and it breaks tests. The attached patch is what I have so far. I'm not sure I'll have time to finish this, as I still don't fully understand how this code works.
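A rough sketch of the "store the accumulated link weights per node" idea, with hypothetical names and a saturating uint64 standing in for BlockFrequency arithmetic:

```cpp
#include <cassert>
#include <cstdint>

// Saturating add: clamp at the maximum instead of wrapping, mimicking
// BlockFrequency's saturating arithmetic.
static const uint64_t MaxFreq = UINT64_MAX;

uint64_t satAdd(uint64_t A, uint64_t B) {
  return (A + B < A) ? MaxFreq : A + B;
}

struct Node {
  uint64_t BiasN = 0;          // weight pulling toward spill
  uint64_t BiasP = 0;          // weight pulling toward register
  uint64_t SumLinkWeights = 0; // accumulated as links are added

  void addLink(uint64_t Weight) {
    SumLinkWeights = satAdd(SumLinkWeights, Weight);
  }

  void addMustSpill() { BiasN = MaxFreq; } // saturate: the constraint wins

  // Doomed to spill: even if every connected block votes for a register,
  // the positive side cannot outweigh BiasN. A saturated BiasN (from a
  // MustSpill constraint) must return true even if BiasP is saturated too.
  bool mustSpill() const {
    if (BiasN == MaxFreq)
      return true;
    return BiasN > satAdd(BiasP, SumLinkWeights);
  }
};
```

This keeps mustSpill() exact (no fixed 2.0 bound) at the cost of one extra saturating add per addLink call.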

> In the activate() function, you could simply scale the bias by sum_i(freq(bundles->getBlocks(n)[i])).

Won't that be really expensive since we only do this when bundles->getBlocks(n) is really large?
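For what it's worth, summing the frequencies would be a single linear pass over blocks the code is already touching, so a sketch like the following (hypothetical names, plain uint64_t frequencies) only adds O(#blocks) work in the already-large-bundle case:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical sketch: scale the large-bundle bias by the total frequency
// of the bundle's blocks, instead of using the old absolute -0.0625f,
// which no longer has the right units under BlockFrequency weights.
uint64_t largeBundleBias(const std::vector<uint64_t> &BlockFreqs) {
  if (BlockFreqs.size() <= 100)
    return 0; // small bundles get no bias, as before
  uint64_t Sum = 0;
  for (uint64_t F : BlockFreqs)
    Sum += F; // one pass over blocks that are visited anyway
  // Require roughly 1/16 of the total weight to be interested before the
  // region expands through this bundle (roughly mirrors the old -0.0625).
  return Sum / 16;
}
```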

- Ben

-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0001-SpillPlacement-Remove-floating-point-and-replace-it-.patch
Type: application/octet-stream
Size: 11311 bytes
Desc: not available
URL: <http://lists.llvm.org/pipermail/llvm-commits/attachments/20130621/ede19a42/attachment.obj>

>> <0002-SpillPlacement-Remove-floating-point-and-replace-it-.patch>
> 
>   bool mustSpill() const {
>     // Actually, we must spill if Bias < sum(weights).
>     // It may be worth it to compute the weight sum here?
> -    return Bias < -2.0f;
> +    return BiasN > BiasP + BlockFrequency::getEntryFrequency() * 2;
>   }
> 
> This is of course affected by my comments above, but also make sure that mustSpill() always returns true after a MustSpill constraint is added, even if BiasP is also saturated.
> 
> +    case MustSpill:
> +      BiasN = UINT64_MAX;
> 
> Could this constant be provided by the BlockFrequency class somehow? I don’t like the uint64_t being exposed.
> 
> Thanks,
> /jakob
> 


