[PATCH] D132261: [SLP]Do not reduce repeated values, use scalar red ops instead.

Valeriy Dmitriev via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Thu Feb 16 11:39:51 PST 2023


vdmitrie added inline comments.


================
Comment at: llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp:13101-13119
+      // root = mul prev_root, <1, 1, n, 1>
+      if (VL.empty()) {
+        unsigned Cnt = SameValuesCounter.lookup(
+            TrackedToOrig.find(VectorizedValue)->second);
+        Value *Scale = ConstantInt::get(VectorizedValue->getType(), Cnt);
+        LLVM_DEBUG(dbgs() << "SLP: Add (to-mul) " << Cnt << " of "
+                          << *VectorizedValue << ". (HorRdx)\n");
----------------
ABataev wrote:
> vdmitrie wrote:
> > What I basically see here is that you packed two methods into one and differentiate them by the VL argument.
> > Maybe just split this method in two? One to handle the identity optimization and another for the same-scale-factor optimization. They do not seem to share any essential code, only the switch. But sharing the switch does not look right to me, as these optimizations do not fully share reduction kinds.
> I thought about it, but in many cases we reuse the same code, so I decided to put it into a single switch (only (f)add and xor have special processing).
I don't buy this kind of savings. Ideally we want to assert that we have one of the add/fadd/xor reduction kinds when we fall into the same-scale-factor optimization. This is why I advocate for splitting it in two. We do not save a lot on a switch that will be handling just three cases plus a default for the second method. The code inside the switch cases is already separate, so it is easy to split from the very beginning.
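
To make that concrete, below is a minimal, self-contained scalar sketch of what a dedicated same-scale-factor method could look like. RecurKind here is a stand-in for LLVM's enum reduced to the relevant kinds, and reduceRepeated is a name invented for this sketch, not an actual SLPVectorizer interface; the point is only that the add/fadd/xor restriction becomes an explicit assert once the optimization lives in its own method.

// Build and run: clang++ -std=c++17 scale_sketch.cpp && ./a.out
#include <cassert>
#include <cstdio>

// Stand-in for LLVM's RecurKind, reduced to the kinds relevant here.
enum class RecurKind { Add, FAdd, Mul, Xor };

// Scalar model of folding Cnt repeated copies of X into one scaled
// operand. Only add/fadd/xor admit this cheap closed form, hence the
// assert instead of a switch shared with all other reduction kinds.
int reduceRepeated(RecurKind Kind, int X, unsigned Cnt) {
  assert((Kind == RecurKind::Add || Kind == RecurKind::FAdd ||
          Kind == RecurKind::Xor) &&
         "same-scale-factor optimization only handles add/fadd/xor");
  switch (Kind) {
  case RecurKind::Add:
  case RecurKind::FAdd: // FAdd is analogous, scaling with an fmul.
    // X + X + ... + X (Cnt times) == Cnt * X; vectorized, this is the
    // "root = mul prev_root, <1, 1, n, 1>" rewrite quoted above.
    return static_cast<int>(Cnt) * X;
  case RecurKind::Xor:
    // X ^ X == 0, so only the parity of the repeat count survives.
    return (Cnt & 1) ? X : 0;
  default:
    assert(false && "unreachable: kind checked above");
    return 0;
  }
}

int main() {
  printf("add, 4 copies of 5 -> %d\n", reduceRepeated(RecurKind::Add, 5, 4));
  printf("xor, 4 copies of 5 -> %d\n", reduceRepeated(RecurKind::Xor, 5, 4));
  printf("xor, 3 copies of 5 -> %d\n", reduceRepeated(RecurKind::Xor, 5, 3));
}

Expected output: 20, 0, and 5, matching the scalar semantics the vectorized rewrite must preserve.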


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D132261/new/

https://reviews.llvm.org/D132261


