[llvm-dev] [LoopVectorizer] Improving the performance of dot product reduction loop
Hal Finkel via llvm-dev
llvm-dev at lists.llvm.org
Tue Jul 24 06:33:35 PDT 2018
On 07/23/2018 08:25 PM, Saito, Hideki wrote:
>
>
>
> My perspective, being a vectorizer guy, is that the vectorizer should
>
> 1) Take this optimization into account in the cost modeling so
> that it favors full vector compute.
>
> 2) but generate plain widened computation:
> full vector unit stride load of A[],
> full vector unit stride load of B[],
> sign extend both, (this makes it 2x full vector, on the surface)
> multiply
> add
> …
> standard reduction last value sequence after the loop
>
> and let downstream optimizer, possibly in Target, use instructions
> like (v)pmaddwd effectively.
>
> If needed, an IR-to-IR xform before hitting Target.
>
>
>
> This mechanism also works if a programmer or other FE produces a
> similar naïvely vectorized IR like above.
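In scalar C terms, the "plain widened computation" Hideki sketches might look like the following. This is only an illustrative model (lane count, names, and structure are assumptions, not the actual IR the vectorizer emits):

```c
#include <stdint.h>

/* Scalar model of the plain widened form: one partial sum per
   vector lane, widened (sign-extended) multiplies in the loop
   body, and a standard reduction of the lanes after the loop.
   VF=8 is shown; the real vectorizer picks VF from its cost model. */
int32_t dot_widened(const int16_t *a, const int16_t *b, int n) {
    int32_t lanes[8] = {0};
    int i;
    for (i = 0; i + 8 <= n; i += 8)          /* "vector" body */
        for (int l = 0; l < 8; ++l)
            lanes[l] += (int32_t)a[i + l] * (int32_t)b[i + l];
    int32_t sum = 0;
    for (int l = 0; l < 8; ++l)              /* reduction last value */
        sum += lanes[l];
    for (; i < n; ++i)                       /* scalar remainder */
        sum += (int32_t)a[i] * (int32_t)b[i];
    return sum;
}
```

The point of keeping this shape is that it is trivially correct and easy for a downstream pass to pattern-match, rather than encoding a target quirk in the vectorizer's output.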
>
I think that this should be, generally, our strategy. We might have
different reduction strategies (we already do, at least in terms of the
final reduction tree), and one might include what x86 wants here, so
long as we can reasonably create a cost-modeling interface that lets us
differentiate it from other strategies at the IR level. Lacking the
ability to abstract this behind a generalized strategy with an IR-level
cost-modeling interface, I think that the vectorizer should produce
straightforward IR (e.g., what we currently produce with VF=16, see the
other discussion of the vectorizer-maximize-bandwidth option) and then
the target can adjust it as necessary to take advantage of special isel
opportunities.
Thanks again,
Hal
> Since vectorizer should understand the existence of the optimization,
> it can certainly be arm-twisted to
>
> generate the IR desired by the Target. However, whether we want to do
> that is a totally different story.
>
>
>
> The vectorizer should focus on having a reasonable cost model and generating
> straightforward, optimizable IR
>
> -- as opposed to generating convoluted IR (such as breaking up
> unit-stride load into even/odd, simply
>
> to put them back to unit-stride again) wanted by the Target.
>
>
>
> My recommendation is first analyzing the source of the current code
> generation deficiencies and then
>
> trying to remedy them there.
>
>
>
> Thanks,
>
> Hideki
>
>
>
> *From:*Craig Topper [mailto:craig.topper at gmail.com]
> *Sent:* Monday, July 23, 2018 4:37 PM
> *To:* Hal Finkel <hfinkel at anl.gov>
> *Cc:* Saito, Hideki <hideki.saito at intel.com>; estotzer at ti.com; Nemanja
> Ivanovic <nemanja.i.ibm at gmail.com>; Adam Nemet <anemet at apple.com>;
> graham.hunter at arm.com; Michael Kuperstein <mkuper at google.com>; Sanjay
> Patel <spatel at rotateright.com>; Simon Pilgrim
> <llvm-dev at redking.me.uk>; ashutosh.nema at amd.com; llvm-dev
> <llvm-dev at lists.llvm.org>
> *Subject:* Re: [LoopVectorizer] Improving the performance of dot
> product reduction loop
>
>
>
>
> ~Craig
>
>
>
> On Mon, Jul 23, 2018 at 4:24 PM Hal Finkel <hfinkel at anl.gov> wrote:
>
>
>
> On 07/23/2018 05:22 PM, Craig Topper wrote:
>
> Hello all,
>
>
>
> This code https://godbolt.org/g/tTyxpf
> is a dot product reduction loop
> multiplying sign-extended 16-bit values to produce a 32-bit
> accumulated result. The x86 backend is currently not able to
> optimize it as well as gcc and icc. The IR we are getting from
> the loop vectorizer has several v8i32 adds and muls inside the
> loop. These are fed by v8i16 loads and sexts from v8i16 to
> v8i32. The x86 backend recognizes that these are addition
> reductions of multiplication so we use the vpmaddwd
> instruction which calculates 32-bit products from 16-bit
> inputs and does a horizontal add of adjacent pairs. A vpmaddwd
> given two v8i16 inputs will produce a v4i32 result.
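For reference, the loop under discussion is presumably something like this minimal C version (a sketch of the godbolt example, not the exact source):

```c
#include <stdint.h>

/* A 16-bit by 16-bit dot product accumulated into 32 bits --
   the kind of loop the vectorizer turns into v8i32 adds and muls
   fed by v8i16 loads and sexts from v8i16 to v8i32. */
int32_t dot_product(const int16_t *a, const int16_t *b, int n) {
    int32_t sum = 0;
    for (int i = 0; i < n; ++i)
        sum += (int32_t)a[i] * (int32_t)b[i];
    return sum;
}
```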
>
>
>
> That godbolt link seems wrong. It wasn't supposed to be clang IR. This
> should be right.
>
>
>
>
>
> In the example code, because we are reducing the number of
> elements from 8->4 in the vpmaddwd step we are left with a
> width mismatch between vpmaddwd and the vpaddd instruction
> that we use to sum with the results from the previous loop
> iterations. We rely on the fact that a 128-bit vpmaddwd zeros
> the upper bits of the register so that we can use a 256-bit
> vpaddd instruction so that the upper elements can keep going
> around the loop without being disturbed in case they weren't
> initialized to 0. But this still means the vpmaddwd
> instruction is doing half the amount of work the CPU is
> capable of if we had been able to use a 256-bit vpmaddwd
> instruction. Additionally, future x86 CPUs will be gaining an
> instruction that can do VPMADDWD and VPADDD in one
> instruction, but that width mismatch makes that instruction
> difficult to utilize.
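As a scalar model of what a 128-bit vpmaddwd computes (the function name here is illustrative, not an intrinsic): two v8i16 inputs yield a v4i32 result, each lane holding the sum of one adjacent product pair, which is exactly why the element count halves at this step.

```c
#include <stdint.h>

/* Scalar model of 128-bit vpmaddwd: out[i] is the 32-bit sum of
   the products of adjacent 16-bit pairs, so 8 input elements per
   operand become 4 output elements (the 8 -> 4 width reduction). */
void pmaddwd_128(const int16_t a[8], const int16_t b[8], int32_t out[4]) {
    for (int i = 0; i < 4; ++i)
        out[i] = (int32_t)a[2*i] * b[2*i] + (int32_t)a[2*i+1] * b[2*i+1];
}
```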
>
>
>
> In order for the backend to handle this better it would be
> great if we could have something like two v32i8 loads, two
> shufflevectors to extract the even elements and the odd
> elements to create four v16i8 pieces.
>
>
> Why v*i8 loads? I thought that we have 16-bit and 32-bit types here?
>
>
>
> Oops that should have been v16i16. Mixed up my 256-bit types.
>
>
>
>
>
> Sign extend each of those pieces. Multiply the two even pieces
> and the two odd pieces separately, sum those results with a
> v8i32 add. Then another v8i32 add to accumulate the previous
> loop iterations. This ensures that no pieces exceed the target
> vector width and the final operation is correctly sized to go
> around the loop in one register. All but the last add can then
> be pattern matched to vpmaddwd as proposed
> in https://reviews.llvm.org/D49636. And for the future CPU the
> whole thing can be matched to the new instruction.
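The even/odd decomposition described above rests on a simple identity: accumulating the even-lane products and the odd-lane products separately and then adding them gives the same result as the plain dot product, which is what lets the even+odd add be matched back to vpmaddwd. A scalar sketch of that identity (illustrative only; assumes n is even):

```c
#include <stdint.h>

/* Sketch of the even/odd decomposition: sum(a[i]*b[i]) equals
   sum over even i plus sum over odd i. The final even+odd add is
   the step that corresponds to vpmaddwd's pairwise horizontal add.
   n is assumed even for simplicity. */
int32_t dot_even_odd(const int16_t *a, const int16_t *b, int n) {
    int32_t even = 0, odd = 0;
    for (int i = 0; i < n; i += 2) {
        even += (int32_t)a[i]     * b[i];
        odd  += (int32_t)a[i + 1] * b[i + 1];
    }
    return even + odd;
}
```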
>
>
>
> Do other targets have a similar instruction or a similar issue
> to this? Is this something we can solve in the loop
> vectorizer? Or should we have a separate IR transformation
> that can recognize this pattern and generate the new sequence?
> As a separate pass we would need to pair two vector loads
> together, remove a reduction step outside the loop and remove
> half the phis assuming the loop was partially unrolled. Or if
> there was only one add/mul inside the loop we'd have to reduce
> its width and the width of the phi.
>
>
> Can you explain how the desired code from the vectorizer differs
> from the code that the vectorizer produces if you add '#pragma
> clang loop vectorize(enable) vectorize_width(16)' above the loop?
> I tried it in your godbolt example and the generated code looks
> very similar to the icc-generated code.
>
>
>
> It's similar, but the vpxor %xmm0, %xmm0, %xmm0 is being unnecessarily
> carried across the loop. It's then redundantly added twice in the
> reduction after the loop despite it being 0. This happens because we
> basically tricked the backend into generating a 256-bit vpmaddwd
> concatenated with a 256-bit zero vector going into a 512-bit vpaddd before
> type legalization. The 512-bit concat and vpaddd get split during type
> legalization, and the high half of the add gets constant folded away.
> I'm guessing we probably finished with 4 vpxors before the loop but
> MachineCSE (or some other pass?) combined two of them when it figured
> out the loop didn't modify them.
>
>
>
>
> Thanks again,
> Hal
>
>
>
>
> Thanks,
>
> ~Craig
>
>
>
> --
>
> Hal Finkel
>
> Lead, Compiler Technology and Programming Languages
>
> Leadership Computing Facility
>
> Argonne National Laboratory
>
--
Hal Finkel
Lead, Compiler Technology and Programming Languages
Leadership Computing Facility
Argonne National Laboratory