<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div>----- Original Message -----<br>
> From: "Chandler Carruth" <<a href="mailto:chandlerc@google.com" target="_blank">chandlerc@google.com</a>><br>
> I don't have any problem teaching codegen to use very specific
> information to weigh the costs of an additional branch. But that
> needs to be done in codegen, as it is rare on x86 at this point that
> a well predicted branch is more expensive than anything, and given
> the quality of branch predictors on x86, it is also relatively rare
> that branches are so unpredictable.

I agree that it's rare, but it's common enough that it makes some of our very knowledgeable and perf-starved customers furious when LLVM unilaterally declares that it knows best and turns all bit-logic into branches.

At the least, I think this optimization needs a chicken switch, because there's no way we can know in advance that adding a branch will beat a compound predicate in all cases. I attached a test program and some results to PR23827. Feedback and more data points are certainly appreciated.

Also, we should be careful here: there's no provision for branch prediction in the x86 arch. That's purely a micro-arch concept for x86 AFAIK. We don't know what micro-arches will look like N years from now...even if they're all excellent at prediction today.
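
For anyone following along without the PR open, here's a minimal sketch in C of the kind of pattern I mean (the function names and range-check shape are made up for illustration, not taken from the attached test program):

#include <stdbool.h>

/* Compound predicate: the bitwise & of two comparisons keeps this a
   single basic block -- one conditional jump, both tests always
   evaluated, typically lowered to a couple of cmp/setcc ops. */
bool in_range_flat(int x, int lo, int hi) {
    return (x >= lo) & (x <= hi);
}

/* The branchy form the optimizer can produce instead: two conditional
   jumps. Cheaper when the first branch predicts well; potentially much
   more expensive when it doesn't. */
bool in_range_branchy(int x, int lo, int hi) {
    if (x >= lo)
        return x <= hi;
    return false;
}

When the first test predicts well, the second form wins by skipping work; when it doesn't, the mispredicts can easily cost more than just evaluating both comparisons. That data-dependence is exactly why I don't think we can pick a winner statically.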