<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<br>
<div class="moz-cite-prefix">On 07/24/2018 02:58 AM, Nema, Ashutosh
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:MWHPR12MB1712C0E6ED0AAC4206EA9547FB550@MWHPR12MB1712.namprd12.prod.outlook.com">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="Generator" content="Microsoft Word 15 (filtered
medium)">
<style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
{font-family:Consolas;
panose-1:2 11 6 9 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri",sans-serif;
color:black;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
{mso-style-priority:99;
color:purple;
text-decoration:underline;}
pre
{mso-style-priority:99;
mso-style-link:"HTML Preformatted Char";
margin:0in;
margin-bottom:.0001pt;
font-size:10.0pt;
font-family:"Courier New";
color:black;}
p.msonormal0, li.msonormal0, div.msonormal0
{mso-style-name:msonormal;
mso-margin-top-alt:auto;
margin-right:0in;
mso-margin-bottom-alt:auto;
margin-left:0in;
font-size:11.0pt;
font-family:"Calibri",sans-serif;
color:black;}
span.HTMLPreformattedChar
{mso-style-name:"HTML Preformatted Char";
mso-style-priority:99;
mso-style-link:"HTML Preformatted";
font-family:Consolas;
color:black;}
span.EmailStyle21
{mso-style-type:personal-reply;
font-family:"Calibri",sans-serif;
color:windowtext;}
.MsoChpDefault
{mso-style-type:export-only;
font-size:10.0pt;}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
<div class="WordSection1">
<p class="MsoNormal"><span style="color:windowtext"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="color:windowtext"><o:p> </o:p></span></p>
<div>
<div style="border:none;border-top:solid #E1E1E1
1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal" style="margin-left:.5in"><b><span
style="color:windowtext">From:</span></b><span
style="color:windowtext"> Hal Finkel
<a class="moz-txt-link-rfc2396E" href="mailto:hfinkel@anl.gov"><hfinkel@anl.gov></a>
<br>
<b>Sent:</b> Tuesday, July 24, 2018 5:05 AM<br>
<b>To:</b> Craig Topper <a class="moz-txt-link-rfc2396E" href="mailto:craig.topper@gmail.com"><craig.topper@gmail.com></a>;
<a class="moz-txt-link-abbreviated" href="mailto:hideki.saito@intel.com">hideki.saito@intel.com</a>; <a class="moz-txt-link-abbreviated" href="mailto:estotzer@ti.com">estotzer@ti.com</a>; Nemanja
Ivanovic <a class="moz-txt-link-rfc2396E" href="mailto:nemanja.i.ibm@gmail.com"><nemanja.i.ibm@gmail.com></a>; Adam Nemet
<a class="moz-txt-link-rfc2396E" href="mailto:anemet@apple.com"><anemet@apple.com></a>; <a class="moz-txt-link-abbreviated" href="mailto:graham.hunter@arm.com">graham.hunter@arm.com</a>; Michael
Kuperstein <a class="moz-txt-link-rfc2396E" href="mailto:mkuper@google.com"><mkuper@google.com></a>; Sanjay Patel
<a class="moz-txt-link-rfc2396E" href="mailto:spatel@rotateright.com"><spatel@rotateright.com></a>; Simon Pilgrim
<a class="moz-txt-link-rfc2396E" href="mailto:llvm-dev@redking.me.uk"><llvm-dev@redking.me.uk></a>; Nema, Ashutosh
<a class="moz-txt-link-rfc2396E" href="mailto:Ashutosh.Nema@amd.com"><Ashutosh.Nema@amd.com></a>; llvm-dev
<a class="moz-txt-link-rfc2396E" href="mailto:llvm-dev@lists.llvm.org"><llvm-dev@lists.llvm.org></a><br>
<b>Subject:</b> Re: [llvm-dev] [LoopVectorizer]
Improving the performance of dot product reduction loop<o:p></o:p></span></p>
</div>
</div>
<p class="MsoNormal" style="margin-left:.5in"><o:p> </o:p></p>
<div>
<p class="MsoNormal" style="margin-left:.5in">On 07/23/2018
06:23 PM, Hal Finkel via llvm-dev wrote:<o:p></o:p></p>
</div>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal" style="margin-left:.5in"><o:p> </o:p></p>
<div>
<p class="MsoNormal" style="margin-left:.5in">On 07/23/2018
05:22 PM, Craig Topper wrote:<o:p></o:p></p>
</div>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<div>
<div>
<p class="MsoNormal" style="margin-left:.5in">Hello all,<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal" style="margin-left:.5in"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal" style="margin-left:.5in">This code
<a href="https://godbolt.org/g/tTyxpf" target="_blank"
moz-do-not-send="true">
https://godbolt.org/g/tTyxpf</a> is a dot product
reduction loop multiplying sign-extended 16-bit values
to produce a 32-bit accumulated result. The x86
backend is currently not able to optimize it as well
as gcc and icc. The IR we are getting from the loop
vectorizer has several v8i32 adds and muls inside the
loop. These are fed by v8i16 loads and sexts from
v8i16 to v8i32. The x86 backend recognizes that these
are addition reductions of multiplications, so we use
the vpmaddwd instruction, which calculates 32-bit
products from 16-bit inputs and does a horizontal add
of adjacent pairs. A vpmaddwd given two v8i16 inputs
will produce a v4i32 result.<o:p></o:p></p>
</div>
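The godbolt source isn't reproduced in the thread; a minimal C++ loop of the shape described (a hypothetical reconstruction, matching the description of sign-extended 16-bit multiplies accumulated into a 32-bit result) would look like:

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical reconstruction of the kind of loop under discussion:
// a dot product accumulating sign-extended 16-bit products into a
// 32-bit sum. Each product is computed at 32-bit width, which is what
// the vectorizer models as v8i32 muls fed by sexts from v8i16.
int32_t dot_product(const int16_t *a, const int16_t *b, size_t n) {
    int32_t sum = 0;
    for (size_t i = 0; i < n; ++i)
        sum += (int32_t)a[i] * (int32_t)b[i];  // 16x16 -> 32-bit multiply-accumulate
    return sum;
}
```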
<div>
<p class="MsoNormal" style="margin-left:.5in"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal" style="margin-left:.5in">In the
example code, because we are reducing the number of
elements from 8->4 in the vpmaddwd step we are left
with a width mismatch between vpmaddwd and the vpaddd
instruction that we use to sum with the results from
the previous loop iterations. We rely on the fact that
a 128-bit vpmaddwd zeroes the upper bits of the
register, so we can use a 256-bit vpaddd instruction
and the upper elements can keep going around the loop
undisturbed in case they weren't initialized to 0. But
this still means the vpmaddwd instruction is doing
half the work the CPU would be capable of with a
256-bit vpmaddwd instruction. Additionally, future x86
CPUs will be gaining an instruction that can do
VPMADDWD and VPADDD in one instruction, but that width
mismatch makes that instruction difficult to utilize.<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal" style="margin-left:.5in"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal" style="margin-left:.5in">In order
for the backend to handle this better it would be
great if we could have something like two v32i8 loads,
two shufflevectors to extract the even elements and
the odd elements to create four v16i8 pieces.<o:p></o:p></p>
</div>
</div>
</blockquote>
<p class="MsoNormal" style="margin-left:.5in"><br>
Why v*i8 loads? I thought that we have 16-bit and 32-bit
types here?<br>
<br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<div>
<div>
<p class="MsoNormal" style="margin-left:.5in">Sign
extend each of those pieces. Multiply the two even
pieces and the two odd pieces separately, sum those
results with a v8i32 add. Then another v8i32 add to
accumulate the previous loop iterations. This ensures
that no piece exceeds the target vector width and the
final operation is correctly sized to go around the
loop in one register. All but the last add can then be
pattern matched to vpmaddwd as proposed in <a
href="https://reviews.llvm.org/D49636"
moz-do-not-send="true">https://reviews.llvm.org/D49636</a>.
And for the future CPU the whole thing can be matched
to the new instruction.<o:p></o:p></p>
</div>
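The sequence above can be modeled in scalar C++ (a hypothetical sketch, using 16-bit elements for simplicity): multiplying the even and odd lanes separately and then adding the two partial sums yields the same result as the plain dot product, which is exactly the adjacent-pair sum that vpmaddwd performs in hardware.

```cpp
#include <cstdint>
#include <cstddef>

// Scalar sketch (names hypothetical) of the even/odd decomposition:
// products of even-indexed and odd-indexed lanes are accumulated
// separately and then summed, mirroring how vpmaddwd horizontally adds
// each adjacent pair of 32-bit products. n must be even.
int32_t dot_even_odd(const int16_t *a, const int16_t *b, size_t n) {
    int32_t even = 0, odd = 0;
    for (size_t i = 0; i < n; i += 2) {
        even += (int32_t)a[i] * (int32_t)b[i];          // even lanes
        odd  += (int32_t)a[i + 1] * (int32_t)b[i + 1];  // odd lanes
    }
    return even + odd;  // same value as the straightforward dot product
}
```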
<div>
<p class="MsoNormal" style="margin-left:.5in"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal" style="margin-left:.5in">Do other
targets have a similar instruction or a similar issue
to this? Is this something we can solve in the loop
vectorizer? Or should we have a separate IR
transformation that can recognize this pattern and
generate the new sequence? As a separate pass we would
need to pair two vector loads together, remove a
reduction step outside the loop and remove half the
phis assuming the loop was partially unrolled. Or if
there was only one add/mul inside the loop we'd have
to reduce its width and the width of the phi.<o:p></o:p></p>
</div>
</div>
</blockquote>
<p class="MsoNormal" style="margin-left:.5in"><br>
Can you explain how the desired code from the vectorizer
differs from the code that the vectorizer produces if you
add '#pragma clang loop vectorize(enable)
vectorize_width(16)' above the loop? I tried it in your
godbolt example and the generated code looks very similar to
the icc-generated code.<o:p></o:p></p>
</blockquote>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">The vectorizer considers the largest data type
size in the loop body and caps the maximum possible VF at
8; hence in this example it generates the 128-bit vpmaddwd.<o:p></o:p></p>
<p class="MsoNormal">I'd like to share an observation: instead
of forcing the vector factor to 16 (via the pragma), I tried the option
“vectorizer-maximize-bandwidth”.<o:p></o:p></p>
<p class="MsoNormal">“vectorizer-maximize-bandwidth” considers
the smallest data type size in the loop body and allows a
VF of up to 16, but LV still selects VF 8 even though VF 16
has the same cost.<o:p></o:p></p>
<p class="MsoNormal">LV: Vector loop of width 8 costs: 1.<o:p></o:p></p>
<p class="MsoNormal">LV: Vector loop of width 16 costs: 1.<o:p></o:p></p>
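As a simplified model (not the actual LLVM code), the maximum VF falls out of the vector register width divided by the governing element size: dividing 256 bits by i32 gives VF 8, while dividing by i16, which “vectorizer-maximize-bandwidth” considers, gives VF 16.

```cpp
// Simplified sketch (not the actual LLVM implementation) of how the
// maximum VF falls out of register width and element size.
unsigned maxVF(unsigned registerBits, unsigned elementBits) {
    return registerBits / elementBits;
}
// maxVF(256, 32) == 8   -- the widest type (i32) governs by default
// maxVF(256, 16) == 16  -- the smallest type (i16) governs with
//                          vectorizer-maximize-bandwidth
```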
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">It’s because of the following check in LV:<o:p></o:p></p>
<pre>LoopVectorizationCostModel::selectVectorizationFactor() {
  ...
  if (VectorCost < Cost) {
    Cost = VectorCost;
    Width = i;
  }
</pre>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">I don’t know the history behind this check,
but I wonder why it’s like this; at least for
“vectorizer-maximize-bandwidth” it should be (VectorCost <=
Cost).</p>
</div>
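The effect of the strict comparison can be reproduced with a reduced model of the selection loop (a hypothetical sketch, not the actual LLVM code): when costs are equal, < keeps the first, narrower width, while <= would pick the wider one.

```cpp
#include <vector>
#include <utility>

// Reduced model (hypothetical) of selectVectorizationFactor's
// tie-breaking: candidates are (width, cost) pairs in increasing
// width order. preferWider models changing < to <=.
unsigned selectWidth(const std::vector<std::pair<unsigned, float>> &candidates,
                     bool preferWider) {
    unsigned width = 1;
    float cost = 1e30f;  // effectively "infinity" for this sketch
    for (const auto &c : candidates) {
        bool better = preferWider ? (c.second <= cost) : (c.second < cost);
        if (better) {
            cost = c.second;
            width = c.first;
        }
    }
    return width;
}
// With candidates {{8, 1.0f}, {16, 1.0f}} (both widths cost 1, as in
// the LV debug output above):
//   strict <  keeps VF 8, the first width seen;
//   <=        picks VF 16, the wider one.
```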
</blockquote>
<br>
Ah, interesting. I was indeed wondering whether this was another
case where we'd benefit from having vectorizer-maximize-bandwidth on
by default. I recall that our defaults have long been to prefer
smaller VF in cases where the costs are equal to avoid extra
legalization costs (and, potentially, to retain more freedom for
interleaving later). Should the costs here be equal, or should we
have some extra target information to distinguish them? It sounds
like they're not really equal in practice.<br>
<br>
-Hal<br>
<br>
<blockquote type="cite"
cite="mid:MWHPR12MB1712C0E6ED0AAC4206EA9547FB550@MWHPR12MB1712.namprd12.prod.outlook.com">
<div class="WordSection1">
<p class="MsoNormal"><o:p></o:p></p>
<p class="MsoNormal">By forcing the vector factor to 16, it
generates the 256-bit vpmaddwd.<span style="color:windowtext"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="color:windowtext"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="color:windowtext">Regards,<o:p></o:p></span></p>
<p class="MsoNormal"><span style="color:windowtext">Ashutosh</span><o:p></o:p></p>
</div>
</blockquote>
<br>
<pre class="moz-signature" cols="72">--
Hal Finkel
Lead, Compiler Technology and Programming Languages
Leadership Computing Facility
Argonne National Laboratory</pre>
</body>
</html>