[llvm] r184105 - Switch spill weights from a basic loop depth estimation to BlockFrequencyInfo.
Benjamin Kramer
benny.kra at gmail.com
Fri Jun 21 09:50:25 PDT 2013
On 21.06.2013, at 18:32, Jakob Stoklund Olesen <stoklund at 2pi.dk> wrote:
>
> On Jun 21, 2013, at 6:49 AM, Benjamin Kramer <benny.kra at gmail.com> wrote:
>
>>
>> On 18.06.2013, at 19:07, Jakob Stoklund Olesen <stoklund at 2pi.dk> wrote:
>>
>>> The mustSpill() function identifies nodes that are doomed to spill no matter how many connected basic blocks prefer a register - it is predicting that update() will always compute Value = -1. It is used to prune the network so we don’t have to iterate over cold parts of the CFG. It’s a compile time optimization, but an important one.
>>>
>>> It is also important that the function returns true for nodes with a MustSpill constraint.
>>>
>>> The most accurate implementation is Bias < -sum(LinkWeights), the current version is faster by exploiting the knowledge that sum(LinkWeights) <= 2.0. With this patch, sum(LinkWeights) is a sum of block frequencies which can’t be assumed to be less than 2.0.
>>>
>>> You could simply store the accumulated link weights for each node, that is probably the simplest solution.
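[Editor's note: a toy sketch of the pruning test Jakob describes above, with hypothetical names (this is not the SpillPlacement code itself). A node is doomed to spill when even full register preference on every link cannot make its value positive, i.e. when Bias < -sum(LinkWeights):

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// Toy model of mustSpill(): returns true when update() could never
// compute a positive value, so the solver may prune this node from
// the iteration over cold parts of the CFG.
bool mustSpill(double Bias, const std::vector<double> &LinkWeights) {
  double Sum = std::accumulate(LinkWeights.begin(), LinkWeights.end(), 0.0);
  // Even if every linked bundle votes "register" at full weight,
  // the best achievable value is Bias + Sum; it stays negative iff:
  return Bias < -Sum;
}
```

The pre-patch shortcut follows from this: when sum(LinkWeights) <= 2.0 is known, Bias < -2.0 is a cheap sufficient test. With block frequencies as link weights that bound no longer holds, hence the discussion below.]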
>>
>> You mean something like
>> return BiasN >= BlockFrequency::getMaxFrequency() ||
>> BiasN < BiasP + SumLinkWeights;
>>
>> and add the frequency to SumLinkWeights every time addLink is called?
>
> Yes, exactly.
>
>> I tried this and it breaks tests.
>
> I think it’s just a sign error. Try:
>
> return BiasN >= BiasP + SumLinkWeights;
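[Editor's note: a minimal, self-contained sketch of the corrected check, assuming unsigned saturating weights in the style of BlockFrequency; the names mirror the thread but the code is illustrative, not the committed patch:

```cpp
#include <cassert>
#include <cstdint>
#include <limits>

using Weight = uint64_t;
constexpr Weight MaxFrequency = std::numeric_limits<uint64_t>::max();

// Saturating addition, in the spirit of BlockFrequency arithmetic.
Weight satAdd(Weight A, Weight B) {
  return (A > MaxFrequency - B) ? MaxFrequency : A + B;
}

struct Node {
  Weight BiasP = 0;          // accumulated "prefer register" weight
  Weight BiasN = 0;          // accumulated "prefer spill" weight
  Weight SumLinkWeights = 0; // running sum of link (block) frequencies

  // Accumulate the link frequency each time a link is added,
  // as suggested above.
  void addLink(Weight Freq) { SumLinkWeights = satAdd(SumLinkWeights, Freq); }

  // Jakob's corrected condition. A saturated BiasN (a hard MustSpill
  // constraint) always satisfies this, since the right-hand side also
  // saturates at MaxFrequency.
  bool mustSpill() const { return BiasN >= satAdd(BiasP, SumLinkWeights); }
};
```
]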
That doesn't help. The test that fails is CodeGen/ARM/lsr-unfolded-offset.ll (one of those that I had to fix when doing the initial BlockFrequency spill weight change), so it might be a harmless change in behavior.
- Ben
>
>> The attached patch is what I have so far. Not sure if I have the time to finish this as I still don't fully understand how this code works.
>
> Thanks for working on it. What you have looks pretty close FWIW.
>
>>> In the activate() function, you could simply scale the bias by sum_i(freq(bundles->getBlocks(n)[i])).
>>
>> Won't that be really expensive since we only do this when bundles->getBlocks(n) is really large?
>
> It potentially could, yes. Bundles this large typically show up after really large switches or in loops with lots of continue statements. They are not very common.
>
> I suppose the total bundle frequency could be precomputed or cached for the large bundles.
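[Editor's note: a hypothetical sketch of the caching idea, so activate() need not re-sum freq over bundles->getBlocks(n) for the rare, very large bundles; all names here are illustrative, not LLVM's:

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>
#include <vector>

using Weight = uint64_t;

struct BundleFreqCache {
  // Per-block frequencies, indexed by block number.
  std::vector<Weight> BlockFreq;
  // Lazily computed totals, keyed by bundle id.
  std::unordered_map<unsigned, Weight> Total;

  // Total frequency of a bundle's blocks, computed once and cached.
  Weight total(unsigned Bundle, const std::vector<unsigned> &Blocks) {
    auto It = Total.find(Bundle);
    if (It != Total.end())
      return It->second; // cache hit: skip the expensive sum
    Weight Sum = 0;
    for (unsigned B : Blocks)
      Sum += BlockFreq[B]; // saturation omitted for brevity
    Total[Bundle] = Sum;
    return Sum;
  }
};
```
]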
>
> Thanks, Ben
> /jakob