[llvm] [LLVM][InstCombine] Enable constant folding for SVE asr, lsl and lsr intrinsics. (PR #137350)
David Sherwood via llvm-commits
llvm-commits at lists.llvm.org
Mon Apr 28 05:49:17 PDT 2025
================
@@ -6,8 +6,7 @@ target triple = "aarch64-unknown-linux-gnu"
define <vscale x 16 x i8> @constant_asr_i8_shift_by_0(<vscale x 16 x i1> %pg) #0 {
; CHECK-LABEL: define <vscale x 16 x i8> @constant_asr_i8_shift_by_0(
; CHECK-SAME: <vscale x 16 x i1> [[PG:%.*]]) #[[ATTR0:[0-9]+]] {
-; CHECK-NEXT: [[R:%.*]] = call <vscale x 16 x i8> @llvm.aarch64.sve.asr.nxv16i8(<vscale x 16 x i1> [[PG]], <vscale x 16 x i8> splat (i8 7), <vscale x 16 x i8> zeroinitializer)
-; CHECK-NEXT: ret <vscale x 16 x i8> [[R]]
+; CHECK-NEXT: ret <vscale x 16 x i8> splat (i8 7)
----------------
david-arm wrote:
I assume it folds to `splat (i8 7)` here because it's merging in `<vscale x 16 x i8> splat (i8 7)`, so the predicate is irrelevant?
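
For what it's worth, a minimal standalone sketch of the lanewise reasoning, assuming the merging form takes inactive lanes from the first data operand (the function name, declare and attribute lines here are just for illustration, not copied from the test file):

```llvm
target triple = "aarch64-unknown-linux-gnu"

declare <vscale x 16 x i8> @llvm.aarch64.sve.asr.nxv16i8(<vscale x 16 x i1>, <vscale x 16 x i8>, <vscale x 16 x i8>)

; Hypothetical reproduction of the test above.
define <vscale x 16 x i8> @fold_example(<vscale x 16 x i1> %pg) #0 {
  ; Active lanes compute ashr(i8 7, 0) = 7; inactive lanes (assuming
  ; merging semantics) keep the first data operand, which is also 7.
  %r = call <vscale x 16 x i8> @llvm.aarch64.sve.asr.nxv16i8(
              <vscale x 16 x i1> %pg,
              <vscale x 16 x i8> splat (i8 7),
              <vscale x 16 x i8> zeroinitializer)
  ; Either way every lane is 7, so the call can fold to splat (i8 7)
  ; independently of %pg, matching the new CHECK line.
  ret <vscale x 16 x i8> %r
}

attributes #0 = { "target-features"="+sve" }
```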
https://github.com/llvm/llvm-project/pull/137350