[PATCH] D139850: [AArch64][SVE][CodeGen] Prefer ld1r* over indexed-load when consumed by a splat
Sander de Smalen via Phabricator via llvm-commits
llvm-commits at lists.llvm.org
Fri Dec 16 03:28:12 PST 2022
sdesmalen added inline comments.
================
Comment at: llvm/lib/Target/AArch64/AArch64ISelLowering.cpp:21317
+ continue; // Ignore chain.
+ if (ValOnlyUser == nullptr) {
+ ValOnlyUser = *UI;
----------------
nit: unnecessary curly braces
================
Comment at: llvm/test/CodeGen/AArch64/sve-ld1r.ll:1223
+ %ext = sext i8 %tmp to i64
+ %dup = call <vscale x 2 x i64> @llvm.aarch64.sve.dup.nxv2i64(<vscale x 2 x i64> undef, <vscale x 2 x i1> %pg, i64 %ext)
+ store <vscale x 2 x i64> %dup, <vscale x 2 x i64>* %out
----------------
What happens if the passthrough value is something other than undef? That would require a predicated mov to implement the merging; do we still want to use the pre/post-increment form in that case?
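For illustration, a variant of the quoted test with a non-undef passthrough might look like the sketch below (the function name and `%pt` argument are hypothetical, written in the same typed-pointer style as the test excerpt above). Here lanes where `%pg` is false must keep the values from `%pt`, so the splat becomes a merging operation and an unpredicated ld1r, which writes every lane, is no longer sufficient by itself:

```llvm
; Hypothetical merging variant: %pt is a real passthrough vector, not undef.
define void @dup_ld1rb_i8_passthru_i64(<vscale x 2 x i64> %pt, <vscale x 2 x i1> %pg, i8* %addr, <vscale x 2 x i64>* %out) {
  %tmp = load i8, i8* %addr
  %ext = sext i8 %tmp to i64
  ; Inactive lanes take their values from %pt, so the backend would need
  ; a predicated mov (or sel) to merge, not just a broadcast load.
  %dup = call <vscale x 2 x i64> @llvm.aarch64.sve.dup.nxv2i64(<vscale x 2 x i64> %pt, <vscale x 2 x i1> %pg, i64 %ext)
  store <vscale x 2 x i64> %dup, <vscale x 2 x i64>* %out
  ret void
}
```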
Repository:
rG LLVM Github Monorepo
CHANGES SINCE LAST ACTION
https://reviews.llvm.org/D139850/new/
https://reviews.llvm.org/D139850