<div dir="ltr">See:<br><br><a href="http://reviews.llvm.org/D9148">http://reviews.llvm.org/D9148</a><br><div><a href="http://reviews.llvm.org/D9149">http://reviews.llvm.org/D9149</a><br></div><div><br></div><div>Cheers,</div><div><br></div><div>James</div></div><br><div class="gmail_quote">On Tue, 21 Apr 2015 at 12:06 James Molloy <<a href="mailto:james@jamesmolloy.co.uk">james@jamesmolloy.co.uk</a>> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi Michael, Gerolf,<br><br><div>The main goal of this optimization was to reduce the number of multiply-adds emitted when complex GEPs are used. With regards to SPEC, this is particularly prevalent in sjeng and gobmk (although we don't just care about SPEC and there were plenty of other places we caught this idiom).</div><div><br></div><div>The pass is actively reducing the number of multiply-adds in gobmk:</div><div><br></div><div>$ llc engine.ll -O3 -mllvm -aarch64-gep-opt=false -o - |& grep madd | wc -l</div><div> 297</div><div><br></div><div>$ llc engine.ll -O3 -mllvm -aarch64-gep-opt=true -o - |& grep madd | wc -l</div><div> 158</div><div><br></div><div>So it demonstrably is making at least some code better.</div><div><br></div><div>I was able to reproduce your twofish example. Twofish, like most crypto algorithms, has a pattern repeated many times so the problem could just be down to one pattern producing one more instruction, which it turns out is exactly the case.</div><div><br></div><div>The problem is that with gep-opt enabled, we're associating shifts and constants the other way around.</div><div><br></div><div>With gep-opt disabled (synthetic code example):</div><div> add x0, x0, #72</div><div> ldr x0, [x1, x0, lsl #2]</div><div><br></div><div>With gep-opt enabled:</div><div> add x0, x1, x0, lsl #2</div><div> ldr x0, [x0, #72]</div><div><br></div><div>This is actually fine, until another expression reuses a different shift with the same offset:</div><div><br></div><div>With gep-opt disabled:</div><div> add x0, x0, #72</div><div> ldr x0, [x1, x0, lsl #2]</div><div> ldr x2, [x1, x0, lsl #3]</div><div><br></div><div>With gep-opt enabled:</div><div> add x0, x1, x0, lsl #2</div><div> add x2, x1, x0, lsl #3</div><div> ldr x0, [x0, #72]</div><div> ldr x2, [x0, #72]</div><div><br></div><div>Amusingly, I saw the opposite behaviour on AArch32 on a different testcase and came to the conclusion that expressions with correlated immediate offsets and identical shift amounts were more likely than expressions with correlated shift amounts and identical immediate offsets, but that's a different story.</div><div><br></div><div>Anyway, the reason this is happening is because we're not running reassociate after GEP expansion. If we run reassociate it all goes away, it's a one-line fix.</div><div><br></div><div>The two testcases that were disabled - well, they weren't really disabled, but the behaviour was changed for them. I put that down to a bug in the SeparateConstOffsetFromGEP pass, where it tries to separate a GEP with only one variadic offset:</div><div><br></div><div> GEP i8* %base, i32 %offset</div><div><br></div><div>Nothing beneficial can come of that, so I've got, again, a single line patch to fix this. I'll push both patches upstream just as soon as I verify them.</div><div><br></div><div>So in summary, I don't see any need to disable this pass. 
The two testcases that were disabled - well, they weren't really disabled, but their behaviour was changed. I put that down to a bug in the SeparateConstOffsetFromGEP pass, where it tries to separate a GEP that has only a single variable offset:

 GEP i8* %base, i32 %offset

Nothing beneficial can come of that, so I've got, again, a single-line patch to fix it. I'll push both patches upstream just as soon as I've verified them.
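As a hand-written illustration of that degenerate case (invented names; not taken from the pass or its tests): separating pays off when the GEP's index has a constant addend that can be peeled off into its own GEP, so the constant can end up in the addressing mode while the variable part stays reusable; a lone variable index has no constant part, so "separating" it can only add instructions.

 ; Useful case, roughly: &base[%i + 5] rewritten as &(&base[%i])[5].
 define i32 @worth_splitting(i32* %base, i64 %i) {
   %vpart = getelementptr inbounds i32, i32* %base, i64 %i   ; variable part
   %p     = getelementptr inbounds i32, i32* %vpart, i64 5   ; constant part
   %v     = load i32, i32* %p
   ret i32 %v
 }

 ; Degenerate case from above: a single plain variable offset,
 ;   GEP i8* %base, i32 %offset
 ; has no constant component, so there is nothing to separate.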
So in summary, I don't see any need to disable this pass. It's doing good work; it just has one or two bugs.

Cheers,

James

On Tue, 21 Apr 2015 at 05:27 Michael Zolotukhin <mzolotukhin@apple.com> wrote:

Hi Tim and others,

So the entire discussion revolves around the assumption that this pass is beneficial in some cases, but we don’t have data for that. Please note that I don’t want to say the assumption is wrong, but as far as I understand, only Hao has observed gains. Thus, until we hear from him, we have real slowdowns versus presumed gains.

As for looking into the code later: I’m not an expert in this area, and I don’t even have a motivating example for this optimization, so I’m hardly the right person for it. I’d be glad to assist with collecting additional data or running tests if that would help, though.

Thanks,
Michael

> On Apr 20, 2015, at 8:40 PM, Tim Northover <t.p.northover@gmail.com> wrote:
>
> On 20 April 2015 at 20:34, Gerolf Hoflehner <ghoflehner@apple.com> wrote:
>> It shouldn’t come across as ‘extreme’.
>
> The "extreme" part wasn't so much the tentative disabling as "Could
> you and/or Hao take this on after the code is disabled/backed out?". I
> realise it wasn't an ultimatum, but it could very easily turn into
> "we're washing our hands of this", which I'd object to.
>
> I assume the pass was committed with some kind of metric that it
> improved, and now we have a counter-example. It's down to everyone to
> decide what the best way to proceed is.
>
> Cheers.
>
> Tim.