<p class="MsoNormal">Hi all.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">I’m looking for an advice on how to deal with inefficient code generation for Intel Nehalem/Westmere architecture on 64-bit platform for the attached test.cpp (LLVM IR is in test.cpp.ll).<o:p></o:p></p>
<p class="MsoNormal">The inner loop has 11 iterations and eventually unrolled.<o:p></o:p></p>
<p class="MsoNormal">Test.lea.s is the assembly code of the outer loop. It simply has 11 loads, 11 FP add, 11 FP mull, 1 FP store and lea+mov for index computation, cmp and jump.<o:p></o:p></p>
<p class="MsoNormal">The problem is that lea is on critical path because it’s dispatched on the same port as all FP add operations (port 1).<o:p></o:p></p>
<p class="MsoNormal">Intel Architecture Code Analyzer (IACA) reports throughput for that assembly block is 12.95 cycles.<o:p></o:p></p>
<p class="MsoNormal">I made a short investigation and found that there is a pass in code gen that replaces index increment with lea.<o:p></o:p></p>
<p class="MsoNormal">Here is the snippet from llvm/lib/CodeGen/TwoAddressInstructionPass.cpp<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">if (MI.isConvertibleTo3Addr()) {<o:p></o:p></p>
<p class="MsoNormal"> // This instruction is potentially convertible to a true<o:p></o:p></p>
<p class="MsoNormal"> // three-address instruction. Check if it is profitable.<o:p></o:p></p>
<p class="MsoNormal"> if (!regBKilled || isProfitableToConv3Addr(regA, regB)) {<o:p></o:p></p>
<p class="MsoNormal"> // Try to convert it.<o:p></o:p></p>
<p class="MsoNormal"> if (convertInstTo3Addr(mi, nmi, regA, regB, Dist)) {<o:p></o:p></p>
<p class="MsoNormal"> ++NumConvertedTo3Addr;<o:p></o:p></p>
<p class="MsoNormal"> return true; // Done with this instruction.<o:p></o:p></p>
<p class="MsoNormal"> }<o:p></o:p></p>
<p class="MsoNormal"> }<o:p></o:p></p>
<p class="MsoNormal">}<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">regBKilled is false for my test case and isProfitableToConv3Addr is not even called.<o:p></o:p></p>
<p class="MsoNormal">I’ve made an experiment and left only <o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">if (isProfitableToConv3Addr(regA, regB)) {<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">That gave me test.inc.s where lea replaced with inc+mov and this code is ~27% faster on my Westmere system. IACA throughput analysis gives 11 cycles for new block.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">But the best performance I’ve got from switching scheduling algorithm from ILP to BURR (test.burr.s). It gives a few percent more vs. “ILP+INC” and I’m not sure why – it might be because test.burr.s has less instructions (no two moves that
copy index) or it might be because additions scheduled differently. BURR puts loads and FP mul between additions, which are gathered at the end of the loop by ILP.
<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">I didn’t run experiments on sandy bridge, but IACA gives 12.45 cycles for original code (test.lea.s), so I expect BURR to improve performance there too for the attached test case.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Unfortunately I’m familiar enough with the LLVM codegen code to make a good fix for this issue and I would appreciate any help.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Thanks,<o:p></o:p></p>
<p class="MsoNormal">Aleksey<o:p></o:p></p>