[PATCH] D59710: [SLP] remove lower limit for forming reduction patterns

Alexey Bataev via Phabricator via llvm-commits <llvm-commits at lists.llvm.org>
Mon Dec 2 08:15:24 PST 2019


ABataev added a comment.

In D59710#1765299 <https://reviews.llvm.org/D59710#1765299>, @spatel wrote:

> In D59710#1765236 <https://reviews.llvm.org/D59710#1765236>, @ABataev wrote:
>
> > In D59710#1764582 <https://reviews.llvm.org/D59710#1764582>, @spatel wrote:
> >
> > > Patch updated:
> > >  Try a different limitation on the 2-way reduction patterns that we consider as candidates. Here I've limited it to compare (boolean type) reductions to avoid regressions on math reductions. This catches the motivating cmp cases from PR39665 and doesn't seem to interfere with any existing cmp vectorization regression tests.
> >
> >
> > It looks mostly like a hack, and I assume it may still lead to performance problems in some cases. It would be better to try to fix the cost model for x86, I think.
>
>
> I agree it's a hack. If you go back to the very first draft of this patch using the 'History' tab, we have the ideal code patch.
>
> But I still don't see how we would edit the cost model to get around the regressions seen in that first attempt. The reductions seen here are profitable. The extra reductions without the 'cmp' hack are also profitable, but they may just not be as profitable as some other vectorization strategy. We don't seem to have a mechanism in SLP to try multiple transforms and choose the best (IIUC, this is what VPlan will allow).


The reduction is profitable because of the cost model. It is the same problem: the scalar load+add combination has too high a cost in many cases, I think. And because of this problem, the 2-element reduction looks profitable in some cases.
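As a concrete illustration (a hypothetical sketch, not code taken from the review or from PR39665), these are the kinds of 2-element source patterns under discussion: a compare ("boolean type") reduction, which the updated patch still allows SLP to form, and a 2-way math reduction, which the 'cmp' restriction excludes because the scalar load+add cost can make vectorizing it look profitable when it is not:

```c
#include <stdbool.h>

/* A 2-way compare reduction: two loads, two compares, and an 'or'
 * combining the i1 results. This is the shape of the motivating
 * cmp cases the patch still targets. */
bool any_zero2(const int *p) {
    return (p[0] == 0) | (p[1] == 0);
}

/* A 2-way math reduction: two loads feeding a single add. Whether
 * turning this into a vector load + horizontal add pays off depends
 * entirely on the target's cost model for the scalar load+add pair. */
int sum2(const int *p) {
    return p[0] + p[1];
}
```

Function names here are invented for illustration; the point is only the pattern shape the cost model has to judge.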


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D59710/new/

https://reviews.llvm.org/D59710
