[PATCH] D139850: [AArch64][SVE][CodeGen] Prefer ld1r* over indexed-load when consumed by a splat

Peter Waller via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue Dec 20 07:35:23 PST 2022


peterwaller-arm added inline comments.


================
Comment at: llvm/test/CodeGen/AArch64/sve-ld1r.ll:1223
+  %ext = sext i8 %tmp to i64
+  %dup = call <vscale x 2 x i64> @llvm.aarch64.sve.dup.nxv2i64(<vscale x 2 x i64> undef, <vscale x 2 x i1> %pg, i64 %ext)
+  store <vscale x 2 x i64> %dup, <vscale x 2 x i64>* %out
----------------
sdesmalen wrote:
> What happens if the passthrough value is something other than undef? That would require a predicated mov to implement the merging; do we still want to use the pre/post-increment in that case?
Thanks, this case is now covered by `preindex_load_dup_passthru`.
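For illustration, here is a minimal sketch of the shape that test covers (the function name, arguments and exact IR below are illustrative only, not the actual test added to sve-ld1r.ll): the dup takes a live passthrough vector instead of undef, so the splat has to be merged into that vector under the predicate rather than simply materialised.

  define <vscale x 2 x i64> @preindex_load_dup_passthru_sketch(<vscale x 2 x i64> %passthru, <vscale x 2 x i1> %pg, i8* %addr, i8** %addr_out) {
    %ptr = getelementptr i8, i8* %addr, i64 1        ; incremented address, candidate for a pre-indexed load
    %tmp = load i8, i8* %ptr
    %ext = sext i8 %tmp to i64
    %dup = call <vscale x 2 x i64> @llvm.aarch64.sve.dup.nxv2i64(<vscale x 2 x i64> %passthru, <vscale x 2 x i1> %pg, i64 %ext)
    store i8* %ptr, i8** %addr_out                   ; keep the incremented pointer live
    ret <vscale x 2 x i64> %dup
  }

  declare <vscale x 2 x i64> @llvm.aarch64.sve.dup.nxv2i64(<vscale x 2 x i64>, <vscale x 2 x i1>, i64)

In this form the splatted value must be merged into %passthru under %pg, which requires a predicated mov either way; that is the trade-off raised above.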


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D139850/new/

https://reviews.llvm.org/D139850


