[llvm] [LV] Increase max VF if vectorized function variants exist (PR #66639)
David Sherwood via llvm-commits
llvm-commits at lists.llvm.org
Thu Oct 19 08:28:48 PDT 2023
================
@@ -0,0 +1,92 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py
+; RUN: opt < %s -passes=loop-vectorize,instsimplify -force-vector-interleave=1 -S | FileCheck %s --check-prefixes=WIDE
+; RUN: opt < %s -passes=loop-vectorize,instsimplify -force-vector-interleave=1 -vectorizer-maximize-bandwidth-if-variant-present=false -S | FileCheck %s --check-prefixes=NARROW
+
+target triple = "aarch64-unknown-linux-gnu"
+
+define void @test_widen(ptr noalias %a, ptr readnone %b) #1 {
+; WIDE-LABEL: @test_widen(
+; WIDE-NEXT: entry:
+; WIDE-NEXT: [[TMP0:%.*]] = call i64 @llvm.vscale.i64()
+; WIDE-NEXT: [[TMP1:%.*]] = mul i64 [[TMP0]], 4
+; WIDE-NEXT: [[MIN_ITERS_CHECK:%.*]] = icmp ult i64 1025, [[TMP1]]
+; WIDE-NEXT: br i1 [[MIN_ITERS_CHECK]], label [[SCALAR_PH:%.*]], label [[VECTOR_PH:%.*]]
+; WIDE: vector.ph:
+; WIDE-NEXT: [[TMP2:%.*]] = call i64 @llvm.vscale.i64()
+; WIDE-NEXT: [[TMP3:%.*]] = mul i64 [[TMP2]], 4
+; WIDE-NEXT: [[N_MOD_VF:%.*]] = urem i64 1025, [[TMP3]]
+; WIDE-NEXT: [[N_VEC:%.*]] = sub i64 1025, [[N_MOD_VF]]
+; WIDE-NEXT: br label [[VECTOR_BODY:%.*]]
+; WIDE: vector.body:
+; WIDE-NEXT: [[INDEX:%.*]] = phi i64 [ 0, [[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], [[VECTOR_BODY]] ]
+; WIDE-NEXT: [[TMP4:%.*]] = getelementptr i64, ptr [[B:%.*]], i64 [[INDEX]]
+; WIDE-NEXT: [[WIDE_LOAD:%.*]] = load <vscale x 4 x ptr>, ptr [[TMP4]], align 8
+; WIDE-NEXT: [[WIDE_MASKED_GATHER:%.*]] = call <vscale x 4 x i32> @llvm.masked.gather.nxv4i32.nxv4p0(<vscale x 4 x ptr> [[WIDE_LOAD]], i32 4, <vscale x 4 x i1> shufflevector (<vscale x 4 x i1> insertelement (<vscale x 4 x i1> poison, i1 true, i64 0), <vscale x 4 x i1> poison, <vscale x 4 x i32> zeroinitializer), <vscale x 4 x i32> poison)
----------------
david-arm wrote:
I'm quite surprised we still choose to vectorise given the very high cost of the gather instruction! I wonder if the test would be more reliable if you used normal loads instead?
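For anyone following the thread, here is a rough sketch of the distinction being discussed. The two scalar loops below are illustrative only (function names and bodies are not taken from the patch): the first loads a pointer from %b and then loads through it, a pattern the loop vectorizer can only widen into an llvm.masked.gather, whose cost is very high on this target; the second loads consecutive i32s from %b, which widens into a single contiguous vector load and so does not depend on the gather cost at all.

define void @indirect_loads(ptr noalias %a, ptr noalias %b) {
entry:
  br label %for.body

for.body:                         ; load a pointer, then load through it -> gather if vectorized
  %iv = phi i64 [ 0, %entry ], [ %iv.next, %for.body ]
  %gep.b = getelementptr ptr, ptr %b, i64 %iv
  %p = load ptr, ptr %gep.b, align 8
  %val = load i32, ptr %p, align 4
  %gep.a = getelementptr i32, ptr %a, i64 %iv
  store i32 %val, ptr %gep.a, align 4
  %iv.next = add nuw nsw i64 %iv, 1
  %done = icmp eq i64 %iv.next, 1025
  br i1 %done, label %exit, label %for.body

exit:
  ret void
}

define void @direct_loads(ptr noalias %a, ptr noalias %b) {
entry:
  br label %for.body

for.body:                         ; unit-stride load -> plain contiguous wide load if vectorized
  %iv = phi i64 [ 0, %entry ], [ %iv.next, %for.body ]
  %gep.b = getelementptr i32, ptr %b, i64 %iv
  %val = load i32, ptr %gep.b, align 4
  %gep.a = getelementptr i32, ptr %a, i64 %iv
  store i32 %val, ptr %gep.a, align 4
  %iv.next = add nuw nsw i64 %iv, 1
  %done = icmp eq i64 %iv.next, 1025
  br i1 %done, label %exit, label %for.body

exit:
  ret void
}

The point of the suggestion is that basing the test on the second pattern would keep the VF decision driven by the vector function variant rather than by the target's gather cost model.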
https://github.com/llvm/llvm-project/pull/66639