[PATCH] D123638: [SLP][AArch64] Implement lookahead operand reordering score of splat loads for AArch64

Vasileios Porpodas via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Wed Apr 20 14:23:02 PDT 2022


vporpo added inline comments.


================
Comment at: llvm/lib/Target/AArch64/AArch64TargetTransformInfo.cpp:2616
+      if (IsLoad) {
+        assert(isLegalBroadcastLoad(Tp->getElementType(),
+                                    LT.second.getVectorElementCount()) &&
----------------
dmgreen wrote:
> This assert looks like it will fire a lot of the time. Should we use `IsLoad && isLegalBroadcastLoad(...)`?
> 
> That might make this testable from the cost model too, even if it's slightly unorthodox to use a vector load for such cases.
The call is already guarded by `if (IsLoad)`, so I don't think the extra `IsLoad && ` is needed.

> That might make this testable from the cost model too, even if it's slightly unorthodox to use a vector load for such cases.
I am not sure I understand. Is this about the assertion?
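
For what it's worth, here is a minimal sketch of the two shapes being discussed, to make sure we are talking about the same thing. It is not the exact patch code: the cost returned inside the branch is omitted, the assert message is mine, and the comments mark which parts are assumptions.

  // Option A (current patch): legality is an invariant established by the SLP
  // caller, so it is only asserted under the `if (IsLoad)` guard.
  if (IsLoad) {
    assert(isLegalBroadcastLoad(Tp->getElementType(),
                                LT.second.getVectorElementCount()) &&
           "Expected a legal broadcast load");
    // ... return the cheaper ld1r-based broadcast cost here ...
  }

  // Option B (dmgreen's suggestion): fold the legality check into the
  // condition, which would also make this path reachable from plain
  // cost-model tests.
  if (IsLoad && isLegalBroadcastLoad(Tp->getElementType(),
                                     LT.second.getVectorElementCount())) {
    // ... return the cheaper ld1r-based broadcast cost here ...
  }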


================
Comment at: llvm/lib/Target/AArch64/AArch64TargetTransformInfo.h:285
+    // Return true if we can generate a `ld1r` splat load instruction.
+    if (!ST->hasNEON() || NumElements.isScalable())
+      return false;
----------------
dmgreen wrote:
> fhahn wrote:
> > It looks like there is at least some test coverage missing, e.g. I think we need a test with scalable vectors and with element types other than double.
> I don't think we _can_ test scalable vectors, this is only callable from the slp vectorizer, unfortunately.
Yeah, I am not sure how I could test scalable vectors either, given that the test has to run through the SLP vectorizer.

I added a couple more tests for float, i32, and i64. I tried writing tests for i16 and i8, but I don't think it is possible to expose the issue with the current operand reordering: those element types need 4x or 8x wider vectors respectively and a deep reduction tree, and SLP would have to perform operand reordering across the leaf nodes of that deep tree, which I don't think we support currently.
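
For reference, here is a rough sketch of how the full hook could look, to make the legality discussion concrete. Only the `hasNEON()`/`isScalable()` guard is taken from the diff above; the element-size and vector-width checks are my assumption about how `ld1r` legality would be expressed, not necessarily the final patch code.

  // Sketch only: the first guard is from the diff, the rest is assumed.
  bool isLegalBroadcastLoad(Type *ElementTy, ElementCount NumElements) const {
    // Return true if we can generate a `ld1r` splat load instruction.
    if (!ST->hasNEON() || NumElements.isScalable())
      return false;
    // Assumed: ld1r handles 8/16/32/64-bit elements, and the splatted vector
    // should fill at least a 64-bit NEON register.
    unsigned ElementBits = ElementTy->getScalarSizeInBits();
    if (ElementBits != 8 && ElementBits != 16 && ElementBits != 32 &&
        ElementBits != 64)
      return false;
    return NumElements.getFixedValue() * ElementBits >= 64;
  }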


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D123638/new/

https://reviews.llvm.org/D123638


