<div dir="ltr">In theory, we could offload several things to such a target plug-in, I'm just not entirely sure we want to.<div><br></div><div>Two examples I can think of:</div><div><br></div><div>1) This could be a better interface for masked load/stores and gathers.</div><div><br></div><div>2) Horizontal reductions. I tried writing yet-another-horizontals-as-first-class-citizens proposal a couple of months ago, and the main problem from the previous discussions about this was that there's no good common representation. E.g. should a horizontal add return a vector or a scalar, should it return the base type of the vector (assumes saturation) or a wider integer type, etc. With a plugin, we could have the vectorizer emit the right target intrinsic, instead of the crazy backend pattern-matching we have now. </div></div><div class="gmail_extra"><br><div class="gmail_quote">On Sun, Sep 25, 2016 at 9:28 PM, Demikhovsky, Elena <span dir="ltr"><<a href="mailto:elena.demikhovsky@intel.com" target="_blank">elena.demikhovsky@intel.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>
|
|Hi Elena,
|
|Technically speaking, this seems straightforward.
|
|I wonder, however, how target-independent this is in a practical
|sense; will there be an efficient lowering when targeting any other
|ISA? I don't want to get into the territory where, because the
|vectorizer is supposed to be architecture independent, we need to
|add target-independent intrinsics for all potentially-side-effect-
|carrying idioms (or just complicated idioms) we want the vectorizer to
|support on any target. Is there a way we can design the vectorizer so
|that the targets can plug in their own idiom recognition for these
|kinds of things, and then, via that interface, let the vectorizer produce
|the relevant target-dependent intrinsics?

Adding a target-specific plug-in to the vectorizer may be a good idea. We need target-specific pattern recognition and a target-specific implementation of “vectorizeMemoryInstruction”. (There may be more functionality in the future.)
TTI->checkAdditionalVectorizationOpportunities() - detects target-specific patterns; X86 would find compress/expand and maybe others.
TTI->vectorizeMemoryInstruction() - handles only the exotic target-specific cases.
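A rough sketch of the shape these two hooks could take (the names and signatures below are only assumptions drawn from this thread; neither hook exists in TargetTransformInfo today):

// Hypothetical plug-in interface, sketched against LLVM's C++ types; it only
// illustrates the division of labor described above and is not existing API.
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Instruction.h"

namespace llvm {
class Loop;

struct TargetVectorizationHooks {
  virtual ~TargetVectorizationHooks() = default;

  // Scan the loop for target-specific idioms (X86 would look for the
  // compress/expand patterns) and collect the instructions the target
  // wants to vectorize itself.
  virtual bool checkAdditionalVectorizationOpportunities(
      Loop &L, SmallVectorImpl<Instruction *> &Claimed) = 0;

  // Emit target-specific vector code (e.g. llvm.x86.masked.expandload)
  // for an instruction claimed above; return the resulting vector value,
  // or nullptr if the target declines after all.
  virtual Value *vectorizeMemoryInstruction(Instruction *I, Value *Mask,
                                            unsigned VF, IRBuilder<> &B) = 0;
};
} // namespace llvm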

Pros:
It would allow us to implement all X86-specific solutions.
The expandload and compressstore intrinsics may be x86-specific and polymorphic:
llvm.x86.masked.expandload()
llvm.x86.masked.compressstore()

Cons:
TTI would need to deal with LoopInfo, SCEVs and other loop analysis info that it does not have today. (I do not like this way.)
Or we would need to introduce a TLV - Target Loop Vectorizer - a new class that handles all the target-specific cases. This solution seems more reasonable, but too heavy just for compress/expand.
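For comparison, an equally hypothetical sketch of that TLV alternative: a separate class that owns the loop analyses, so TTI itself never has to learn about LoopInfo or SCEV (the names here are invented for illustration):

#include "llvm/Analysis/LoopInfo.h"
#include "llvm/Analysis/ScalarEvolution.h"
#include "llvm/IR/Instruction.h"

namespace llvm {
// Hypothetical "Target Loop Vectorizer"; not existing LLVM API.
class TargetLoopVectorizer {
protected:
  Loop &L;
  LoopInfo &LI;
  ScalarEvolution &SE;

public:
  TargetLoopVectorizer(Loop &L, LoopInfo &LI, ScalarEvolution &SE)
      : L(L), LI(LI), SE(SE) {}
  virtual ~TargetLoopVectorizer() = default;

  // Return true if the target vectorized the instruction itself (e.g. by
  // emitting llvm.x86.masked.compressstore); returning false falls back
  // to the generic widening path.
  virtual bool tryToVectorize(Instruction *I, unsigned VF) { return false; }
};
} // namespace llvm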
Do you see any other target plug-in solution?

-Elena

|
|Thanks again,
|Hal
|
|----- Original Message -----
|> From: "Elena Demikhovsky" <elena.demikhovsky@intel.com>
|> To: "llvm-dev" <llvm-dev@lists.llvm.org>
|> Cc: "Ayal Zaks" <ayal.zaks@intel.com>, "Michael Kuperstein"
|> <mkuper@google.com>, "Adam Nemet (anemet@apple.com)"
|> <anemet@apple.com>, "Hal Finkel (hfinkel@anl.gov)"
|> <hfinkel@anl.gov>, "Sanjay Patel (spatel@rotateright.com)"
|> <spatel@rotateright.com>, "Nadav Rotem"
|> <nadav.rotem@me.com>
|> Sent: Monday, September 19, 2016 1:37:02 AM
|> Subject: RFC: New intrinsics masked.expandload and
|> masked.compressstore
|>
|>
|> Hi all,
|>
|> AVX-512 ISA introduces new vector instructions VCOMPRESS and
|> VEXPAND in order to allow vectorization of the following loops with
|> two specific types of cross-iteration dependencies:
|>
|> Compress:
|>   for (int i=0; i<N; ++i)
|>     if (t[i])
|>       *A++ = expr;
|>
|> Expand:
|>   for (int i=0; i<N; ++i)
|>     if (t[i])
|>       X[i] = *A++;
|>     else
|>       X[i] = PassThruV[i];
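For illustration only (this sketch is not part of the quoted RFC; it assumes the stored value comes from an array expr[] and that N is a multiple of 16), the compress pattern maps onto the AVX-512 compress-store intrinsic roughly like this:

#include <immintrin.h>

// Illustrative AVX-512 version of the "compress" loop above.
void compress_f32(const int *t, const float *expr, float *A, int N) {
  for (int i = 0; i < N; i += 16) {
    __m512i tv = _mm512_loadu_si512((const void *)(t + i));
    __mmask16 m = _mm512_cmpneq_epi32_mask(tv, _mm512_setzero_si512());
    __m512 v = _mm512_loadu_ps(expr + i);
    _mm512_mask_compressstoreu_ps(A, m, v); // store only the selected lanes
    A += _mm_popcnt_u32((unsigned)m);       // advance by the number stored
  }
}

The expand pattern maps onto _mm512_mask_expandloadu_ps in the same way.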
|>
|> On this poster
|> ( http://llvm.org/devmtg/2013-11/slides/Demikhovsky-Poster.pdf )
|> you’ll find the “compress” and “expand” patterns depicted.
|>
|> The RFC proposes to support this functionality by introducing two
|> intrinsics to LLVM IR:
|>   llvm.masked.expandload.*
|>   llvm.masked.compressstore.*
|>
|> The syntax of these two intrinsics is similar to the syntax of
|> llvm.masked.load.* and masked.store.*, respectively, but the
|> semantics are different, matching the above patterns.
|>
|> %res = call <16 x float> @llvm.masked.expandload.v16f32.p0f32(
|>          float* %ptr, <16 x i1> %mask, <16 x float> %passthru)
|> void @llvm.masked.compressstore.v16f32.p0f32(
|>          <16 x float> <value>, float* <ptr>, <16 x i1> <mask>)
|>
|> The arguments %mask, %value and %passthru all have the same vector
|> length.
|> The underlying type of %ptr corresponds to the scalar type of the
|> vector value.
|> (In brief; the full syntax description will be provided in the
|> subsequent full documentation.)
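A scalar reference model of the intended semantics (an illustrative sketch only, not the actual lowering): the pointer advances only for enabled mask lanes, while the lane index always advances.

// Scalar reference semantics of the two proposed intrinsics (illustration).
void expandload_ref(const float *ptr, const bool mask[16],
                    const float passthru[16], float res[16]) {
  for (int i = 0; i < 16; ++i)
    res[i] = mask[i] ? *ptr++ : passthru[i]; // ptr advances on set lanes only
}

void compressstore_ref(const float value[16], const bool mask[16],
                       float *ptr) {
  for (int i = 0; i < 16; ++i)
    if (mask[i])
      *ptr++ = value[i]; // selected lanes are stored consecutively
}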
|>
|> The intrinsics are planned to be target independent, similar to
|> masked.load/store/gather/scatter. They will be lowered effectively
|> on AVX-512 and scalarized on other targets, also akin to the
|> masked.* intrinsics.
|> The loop vectorizer will query TTI about the existence of effective
|> support for these intrinsics and, if it is provided, will be able to
|> handle loops with such cross-iteration dependences.
|>
|> The first step will include the full documentation and the
|> implementation of the CodeGen part.
|>
|> Additional information about expand load
|> ( https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=expandload&techs=AVX_512 )
|> and compress store
|> ( https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=compressstore&techs=AVX_512 )
|> can also be found in the Intel Intrinsics Guide.
|>
|>
|> * Elena
|>
|
|--
|Hal Finkel
|Lead, Compiler Technology and Programming Languages
|Leadership Computing Facility
|Argonne National Laboratory