[llvm] [RISCV][TTI] Properly model odd vector sized LD/ST operations (PR #100436)
Philip Reames via llvm-commits
llvm-commits at lists.llvm.org
Thu Jul 25 09:04:55 PDT 2024
https://github.com/preames updated https://github.com/llvm/llvm-project/pull/100436
From 45ad1e5527b619f5a74eb121fc24eb0ae939b1e1 Mon Sep 17 00:00:00 2001
From: Philip Reames <preames at rivosinc.com>
Date: Wed, 24 Jul 2024 09:50:33 -0700
Subject: [PATCH 1/2] [RISCV][TTI] Properly model odd vector sized LD/ST
operations
Our actual lowering for these uses RVV instructions if a wider
container type is available, and splits if not. The splitting
case was reasonably handled cost-wise, but the widening case was
not. For that case, the true cost scales with the next largest
legal type. The default implementation assumes that such a type
is scalarized. For scalable types which can be promoted (which
seems to happen only with zve32x), this would result in an
invalid cost being returned spuriously.
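As a rough illustration (mine, not part of the patch, and not using LLVM's
cost-model API): assuming VLEN=128 and power-of-two legal container widths,
the intended scaling can be sketched in standalone C++ as below. The printed
costs line up with the updated CHECK lines in rvv-load-store.ll further down.

#include <iostream>

// Standalone sketch: an odd-sized fixed vector such as <5 x i32> is
// lowered as a VL-predicated access on the next wider legal container
// (<8 x i32> here), so its cost is that container's register count
// (its effective LMUL), not the cost of scalarizing five element accesses.
unsigned containerCost(unsigned NumElts, unsigned EltBits = 32,
                       unsigned VLenBits = 128) {
  unsigned SrcBits = NumElts * EltBits;
  unsigned ContainerBits = VLenBits;
  while (ContainerBits < SrcBits)
    ContainerBits *= 2;            // next power-of-two legal container
  return ContainerBits / VLenBits; // vector registers touched
}

int main() {
  for (unsigned N : {3u, 5u, 7u, 9u, 15u, 31u})
    std::cout << "<" << N << " x i32> -> cost " << containerCost(N) << "\n";
}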
---
.../Target/RISCV/RISCVTargetTransformInfo.cpp | 30 +++++++++--
.../CostModel/RISCV/rvv-load-store.ll | 26 +++++-----
.../LoopVectorize/RISCV/short-trip-count.ll | 50 +++++++++----------
3 files changed, 62 insertions(+), 44 deletions(-)
diff --git a/llvm/lib/Target/RISCV/RISCVTargetTransformInfo.cpp b/llvm/lib/Target/RISCV/RISCVTargetTransformInfo.cpp
index 5a92d6bab31a9..b041ee982f5e7 100644
--- a/llvm/lib/Target/RISCV/RISCVTargetTransformInfo.cpp
+++ b/llvm/lib/Target/RISCV/RISCVTargetTransformInfo.cpp
@@ -1390,14 +1390,34 @@ InstructionCost RISCVTTIImpl::getMemoryOpCost(unsigned Opcode, Type *Src,
InstructionCost Cost = 0;
if (Opcode == Instruction::Store && OpInfo.isConstant())
Cost += getStoreImmCost(Src, OpInfo, CostKind);
- InstructionCost BaseCost =
- BaseT::getMemoryOpCost(Opcode, Src, Alignment, AddressSpace,
- CostKind, OpInfo, I);
+
+ std::pair<InstructionCost, MVT> LT = getTypeLegalizationCost(Src);
+
+ InstructionCost BaseCost = [&]() {
+ InstructionCost Cost = LT.first;
+ if (CostKind != TTI::TCK_RecipThroughput)
+ return Cost;
+
+ // Our actual lowering uses a VL predicated load of the next legal type,
+ // or splitting if there is no wider legal type. This is reflected in
+ // the result of getTypeLegalizationCost, but BasicTTI assumes the
+ // widened cases are scalarized.
+ const DataLayout &DL = this->getDataLayout();
+ if (Src->isVectorTy() && LT.second.isVector()) {
+ auto *SrcVT = cast<VectorType>(Src);
+ TypeSize SrcEltSize = DL.getTypeStoreSizeInBits(SrcVT->getElementType());
+ TypeSize LegalEltSize = LT.second.getVectorElementType().getSizeInBits();
+ if (SrcEltSize == LegalEltSize)
+ return Cost;
+ }
+ return BaseT::getMemoryOpCost(Opcode, Src, Alignment, AddressSpace,
+ CostKind, OpInfo, I);
+ }();
+
// Assume memory ops cost scale with the number of vector registers
// possible accessed by the instruction. Note that BasicTTI already
// handles the LT.first term for us.
- if (std::pair<InstructionCost, MVT> LT = getTypeLegalizationCost(Src);
- LT.second.isVector() && CostKind != TTI::TCK_CodeSize)
+ if (LT.second.isVector() && CostKind != TTI::TCK_CodeSize)
BaseCost *= TLI->getLMULCost(LT.second);
return Cost + BaseCost;
diff --git a/llvm/test/Analysis/CostModel/RISCV/rvv-load-store.ll b/llvm/test/Analysis/CostModel/RISCV/rvv-load-store.ll
index 4e92937739d9b..fcc19dfa743f7 100644
--- a/llvm/test/Analysis/CostModel/RISCV/rvv-load-store.ll
+++ b/llvm/test/Analysis/CostModel/RISCV/rvv-load-store.ll
@@ -499,16 +499,16 @@ define void @load_oddsize_vectors(ptr %p) {
; CHECK-LABEL: 'load_oddsize_vectors'
; CHECK-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %1 = load <1 x i32>, ptr %p, align 4
; CHECK-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %2 = load <2 x i32>, ptr %p, align 8
-; CHECK-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %3 = load <3 x i32>, ptr %p, align 16
+; CHECK-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %3 = load <3 x i32>, ptr %p, align 16
; CHECK-NEXT: Cost Model: Found an estimated cost of 1 for instruction: %4 = load <4 x i32>, ptr %p, align 16
-; CHECK-NEXT: Cost Model: Found an estimated cost of 12 for instruction: %5 = load <5 x i32>, ptr %p, align 32
-; CHECK-NEXT: Cost Model: Found an estimated cost of 14 for instruction: %6 = load <6 x i32>, ptr %p, align 32
-; CHECK-NEXT: Cost Model: Found an estimated cost of 16 for instruction: %7 = load <7 x i32>, ptr %p, align 32
+; CHECK-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %5 = load <5 x i32>, ptr %p, align 32
+; CHECK-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %6 = load <6 x i32>, ptr %p, align 32
+; CHECK-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %7 = load <7 x i32>, ptr %p, align 32
; CHECK-NEXT: Cost Model: Found an estimated cost of 2 for instruction: %8 = load <8 x i32>, ptr %p, align 32
-; CHECK-NEXT: Cost Model: Found an estimated cost of 40 for instruction: %9 = load <9 x i32>, ptr %p, align 64
-; CHECK-NEXT: Cost Model: Found an estimated cost of 64 for instruction: %10 = load <15 x i32>, ptr %p, align 64
+; CHECK-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %9 = load <9 x i32>, ptr %p, align 64
+; CHECK-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %10 = load <15 x i32>, ptr %p, align 64
; CHECK-NEXT: Cost Model: Found an estimated cost of 4 for instruction: %11 = load <16 x i32>, ptr %p, align 64
-; CHECK-NEXT: Cost Model: Found an estimated cost of 256 for instruction: %12 = load <31 x i32>, ptr %p, align 128
+; CHECK-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %12 = load <31 x i32>, ptr %p, align 128
; CHECK-NEXT: Cost Model: Found an estimated cost of 8 for instruction: %13 = load <32 x i32>, ptr %p, align 128
; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
@@ -550,15 +550,15 @@ define void @store_oddsize_vectors(ptr %p) {
; CHECK-LABEL: 'store_oddsize_vectors'
; CHECK-NEXT: Cost Model: Found an estimated cost of 1 for instruction: store <1 x i32> undef, ptr %p, align 4
; CHECK-NEXT: Cost Model: Found an estimated cost of 1 for instruction: store <2 x i32> undef, ptr %p, align 8
-; CHECK-NEXT: Cost Model: Found an estimated cost of 4 for instruction: store <3 x i32> undef, ptr %p, align 16
+; CHECK-NEXT: Cost Model: Found an estimated cost of 1 for instruction: store <3 x i32> undef, ptr %p, align 16
; CHECK-NEXT: Cost Model: Found an estimated cost of 1 for instruction: store <4 x i32> undef, ptr %p, align 16
-; CHECK-NEXT: Cost Model: Found an estimated cost of 12 for instruction: store <5 x i32> undef, ptr %p, align 32
-; CHECK-NEXT: Cost Model: Found an estimated cost of 14 for instruction: store <6 x i32> undef, ptr %p, align 32
-; CHECK-NEXT: Cost Model: Found an estimated cost of 16 for instruction: store <7 x i32> undef, ptr %p, align 32
+; CHECK-NEXT: Cost Model: Found an estimated cost of 2 for instruction: store <5 x i32> undef, ptr %p, align 32
+; CHECK-NEXT: Cost Model: Found an estimated cost of 2 for instruction: store <6 x i32> undef, ptr %p, align 32
+; CHECK-NEXT: Cost Model: Found an estimated cost of 2 for instruction: store <7 x i32> undef, ptr %p, align 32
; CHECK-NEXT: Cost Model: Found an estimated cost of 2 for instruction: store <8 x i32> undef, ptr %p, align 32
-; CHECK-NEXT: Cost Model: Found an estimated cost of 64 for instruction: store <15 x i32> undef, ptr %p, align 64
+; CHECK-NEXT: Cost Model: Found an estimated cost of 4 for instruction: store <15 x i32> undef, ptr %p, align 64
; CHECK-NEXT: Cost Model: Found an estimated cost of 4 for instruction: store <16 x i32> undef, ptr %p, align 64
-; CHECK-NEXT: Cost Model: Found an estimated cost of 256 for instruction: store <31 x i32> undef, ptr %p, align 128
+; CHECK-NEXT: Cost Model: Found an estimated cost of 8 for instruction: store <31 x i32> undef, ptr %p, align 128
; CHECK-NEXT: Cost Model: Found an estimated cost of 8 for instruction: store <32 x i32> undef, ptr %p, align 128
; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction: ret void
;
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/short-trip-count.ll b/llvm/test/Transforms/LoopVectorize/RISCV/short-trip-count.ll
index bce965c723101..bb716d78ca411 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/short-trip-count.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/short-trip-count.ll
@@ -7,24 +7,22 @@ define void @small_trip_count_min_vlen_128(ptr nocapture %a) nounwind vscale_ran
; CHECK-NEXT: br i1 false, label [[SCALAR_PH:%.*]], label [[VECTOR_PH:%.*]]
; CHECK: vector.ph:
; CHECK-NEXT: [[TMP0:%.*]] = call i32 @llvm.vscale.i32()
-; CHECK-NEXT: [[TMP1:%.*]] = mul i32 [[TMP0]], 2
-; CHECK-NEXT: [[TMP4:%.*]] = sub i32 [[TMP1]], 1
-; CHECK-NEXT: [[N_RND_UP:%.*]] = add i32 4, [[TMP4]]
-; CHECK-NEXT: [[N_MOD_VF:%.*]] = urem i32 [[N_RND_UP]], [[TMP1]]
+; CHECK-NEXT: [[TMP1:%.*]] = sub i32 [[TMP0]], 1
+; CHECK-NEXT: [[N_RND_UP:%.*]] = add i32 4, [[TMP1]]
+; CHECK-NEXT: [[N_MOD_VF:%.*]] = urem i32 [[N_RND_UP]], [[TMP0]]
; CHECK-NEXT: [[N_VEC:%.*]] = sub i32 [[N_RND_UP]], [[N_MOD_VF]]
-; CHECK-NEXT: [[TMP5:%.*]] = call i32 @llvm.vscale.i32()
-; CHECK-NEXT: [[TMP6:%.*]] = mul i32 [[TMP5]], 2
+; CHECK-NEXT: [[TMP2:%.*]] = call i32 @llvm.vscale.i32()
; CHECK-NEXT: br label [[VECTOR_BODY:%.*]]
; CHECK: vector.body:
; CHECK-NEXT: [[INDEX:%.*]] = phi i32 [ 0, [[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], [[VECTOR_BODY]] ]
-; CHECK-NEXT: [[TMP7:%.*]] = add i32 [[INDEX]], 0
-; CHECK-NEXT: [[ACTIVE_LANE_MASK:%.*]] = call <vscale x 2 x i1> @llvm.get.active.lane.mask.nxv2i1.i32(i32 [[TMP7]], i32 4)
-; CHECK-NEXT: [[TMP8:%.*]] = getelementptr inbounds i32, ptr [[A:%.*]], i32 [[TMP7]]
-; CHECK-NEXT: [[TMP9:%.*]] = getelementptr inbounds i32, ptr [[TMP8]], i32 0
-; CHECK-NEXT: [[WIDE_MASKED_LOAD:%.*]] = call <vscale x 2 x i32> @llvm.masked.load.nxv2i32.p0(ptr [[TMP9]], i32 4, <vscale x 2 x i1> [[ACTIVE_LANE_MASK]], <vscale x 2 x i32> poison)
-; CHECK-NEXT: [[TMP10:%.*]] = add nsw <vscale x 2 x i32> [[WIDE_MASKED_LOAD]], shufflevector (<vscale x 2 x i32> insertelement (<vscale x 2 x i32> poison, i32 1, i64 0), <vscale x 2 x i32> poison, <vscale x 2 x i32> zeroinitializer)
-; CHECK-NEXT: call void @llvm.masked.store.nxv2i32.p0(<vscale x 2 x i32> [[TMP10]], ptr [[TMP9]], i32 4, <vscale x 2 x i1> [[ACTIVE_LANE_MASK]])
-; CHECK-NEXT: [[INDEX_NEXT]] = add i32 [[INDEX]], [[TMP6]]
+; CHECK-NEXT: [[TMP3:%.*]] = add i32 [[INDEX]], 0
+; CHECK-NEXT: [[ACTIVE_LANE_MASK:%.*]] = call <vscale x 1 x i1> @llvm.get.active.lane.mask.nxv1i1.i32(i32 [[TMP3]], i32 4)
+; CHECK-NEXT: [[TMP4:%.*]] = getelementptr inbounds i32, ptr [[A:%.*]], i32 [[TMP3]]
+; CHECK-NEXT: [[TMP5:%.*]] = getelementptr inbounds i32, ptr [[TMP4]], i32 0
+; CHECK-NEXT: [[WIDE_MASKED_LOAD:%.*]] = call <vscale x 1 x i32> @llvm.masked.load.nxv1i32.p0(ptr [[TMP5]], i32 4, <vscale x 1 x i1> [[ACTIVE_LANE_MASK]], <vscale x 1 x i32> poison)
+; CHECK-NEXT: [[TMP6:%.*]] = add nsw <vscale x 1 x i32> [[WIDE_MASKED_LOAD]], shufflevector (<vscale x 1 x i32> insertelement (<vscale x 1 x i32> poison, i32 1, i64 0), <vscale x 1 x i32> poison, <vscale x 1 x i32> zeroinitializer)
+; CHECK-NEXT: call void @llvm.masked.store.nxv1i32.p0(<vscale x 1 x i32> [[TMP6]], ptr [[TMP5]], i32 4, <vscale x 1 x i1> [[ACTIVE_LANE_MASK]])
+; CHECK-NEXT: [[INDEX_NEXT]] = add i32 [[INDEX]], [[TMP2]]
; CHECK-NEXT: br i1 true, label [[MIDDLE_BLOCK:%.*]], label [[VECTOR_BODY]], !llvm.loop [[LOOP0:![0-9]+]]
; CHECK: middle.block:
; CHECK-NEXT: br i1 true, label [[EXIT:%.*]], label [[SCALAR_PH]]
@@ -67,23 +65,23 @@ define void @small_trip_count_min_vlen_32(ptr nocapture %a) nounwind vscale_rang
; CHECK: vector.ph:
; CHECK-NEXT: [[TMP0:%.*]] = call i32 @llvm.vscale.i32()
; CHECK-NEXT: [[TMP1:%.*]] = mul i32 [[TMP0]], 4
-; CHECK-NEXT: [[TMP4:%.*]] = sub i32 [[TMP1]], 1
-; CHECK-NEXT: [[N_RND_UP:%.*]] = add i32 4, [[TMP4]]
+; CHECK-NEXT: [[TMP2:%.*]] = sub i32 [[TMP1]], 1
+; CHECK-NEXT: [[N_RND_UP:%.*]] = add i32 4, [[TMP2]]
; CHECK-NEXT: [[N_MOD_VF:%.*]] = urem i32 [[N_RND_UP]], [[TMP1]]
; CHECK-NEXT: [[N_VEC:%.*]] = sub i32 [[N_RND_UP]], [[N_MOD_VF]]
-; CHECK-NEXT: [[TMP5:%.*]] = call i32 @llvm.vscale.i32()
-; CHECK-NEXT: [[TMP6:%.*]] = mul i32 [[TMP5]], 4
+; CHECK-NEXT: [[TMP3:%.*]] = call i32 @llvm.vscale.i32()
+; CHECK-NEXT: [[TMP4:%.*]] = mul i32 [[TMP3]], 4
; CHECK-NEXT: br label [[VECTOR_BODY:%.*]]
; CHECK: vector.body:
; CHECK-NEXT: [[INDEX:%.*]] = phi i32 [ 0, [[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], [[VECTOR_BODY]] ]
-; CHECK-NEXT: [[TMP7:%.*]] = add i32 [[INDEX]], 0
-; CHECK-NEXT: [[ACTIVE_LANE_MASK:%.*]] = call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i32(i32 [[TMP7]], i32 4)
-; CHECK-NEXT: [[TMP8:%.*]] = getelementptr inbounds i32, ptr [[A:%.*]], i32 [[TMP7]]
-; CHECK-NEXT: [[TMP9:%.*]] = getelementptr inbounds i32, ptr [[TMP8]], i32 0
-; CHECK-NEXT: [[WIDE_MASKED_LOAD:%.*]] = call <vscale x 4 x i32> @llvm.masked.load.nxv4i32.p0(ptr [[TMP9]], i32 4, <vscale x 4 x i1> [[ACTIVE_LANE_MASK]], <vscale x 4 x i32> poison)
-; CHECK-NEXT: [[TMP10:%.*]] = add nsw <vscale x 4 x i32> [[WIDE_MASKED_LOAD]], shufflevector (<vscale x 4 x i32> insertelement (<vscale x 4 x i32> poison, i32 1, i64 0), <vscale x 4 x i32> poison, <vscale x 4 x i32> zeroinitializer)
-; CHECK-NEXT: call void @llvm.masked.store.nxv4i32.p0(<vscale x 4 x i32> [[TMP10]], ptr [[TMP9]], i32 4, <vscale x 4 x i1> [[ACTIVE_LANE_MASK]])
-; CHECK-NEXT: [[INDEX_NEXT]] = add i32 [[INDEX]], [[TMP6]]
+; CHECK-NEXT: [[TMP5:%.*]] = add i32 [[INDEX]], 0
+; CHECK-NEXT: [[ACTIVE_LANE_MASK:%.*]] = call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i32(i32 [[TMP5]], i32 4)
+; CHECK-NEXT: [[TMP6:%.*]] = getelementptr inbounds i32, ptr [[A:%.*]], i32 [[TMP5]]
+; CHECK-NEXT: [[TMP7:%.*]] = getelementptr inbounds i32, ptr [[TMP6]], i32 0
+; CHECK-NEXT: [[WIDE_MASKED_LOAD:%.*]] = call <vscale x 4 x i32> @llvm.masked.load.nxv4i32.p0(ptr [[TMP7]], i32 4, <vscale x 4 x i1> [[ACTIVE_LANE_MASK]], <vscale x 4 x i32> poison)
+; CHECK-NEXT: [[TMP8:%.*]] = add nsw <vscale x 4 x i32> [[WIDE_MASKED_LOAD]], shufflevector (<vscale x 4 x i32> insertelement (<vscale x 4 x i32> poison, i32 1, i64 0), <vscale x 4 x i32> poison, <vscale x 4 x i32> zeroinitializer)
+; CHECK-NEXT: call void @llvm.masked.store.nxv4i32.p0(<vscale x 4 x i32> [[TMP8]], ptr [[TMP7]], i32 4, <vscale x 4 x i1> [[ACTIVE_LANE_MASK]])
+; CHECK-NEXT: [[INDEX_NEXT]] = add i32 [[INDEX]], [[TMP4]]
; CHECK-NEXT: br i1 true, label [[MIDDLE_BLOCK:%.*]], label [[VECTOR_BODY]], !llvm.loop [[LOOP4:![0-9]+]]
; CHECK: middle.block:
; CHECK-NEXT: br i1 true, label [[EXIT:%.*]], label [[SCALAR_PH]]
From 78df12659ac40974a69b3a20f1668246c0f1d1b9 Mon Sep 17 00:00:00 2001
From: Philip Reames <preames at rivosinc.com>
Date: Thu, 25 Jul 2024 09:03:41 -0700
Subject: [PATCH 2/2] Restrict to promotion case only
The logic should be correct for splitting as well, but when testing the
actual codegen for splitting, I found a case where we just crash. Given
that, let's avoid perturbing costs for cases which might need splitting
for now.
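For intuition only (a hypothetical helper of mine, not code from the patch):
the added TypeSize::isKnownLT guard effectively separates the promotion case
from the splitting case, roughly as sketched below for fixed-width types.

#include <iostream>

// Hypothetical classifier mirroring the intent of the new guard: if the
// source type is strictly smaller than the single legal container, it is
// promoted (widened) and the new costing applies; if it is the same size
// or larger, it is either already legal or would need splitting, which
// this change now leaves to the BasicTTI path.
const char *classify(unsigned SrcBits, unsigned LegalContainerBits) {
  return SrcBits < LegalContainerBits ? "promoted" : "legal-or-split";
}

int main() {
  std::cout << classify(160, 256) << "\n"; // <5 x i32> into a 256-bit container
  std::cout << classify(256, 256) << "\n"; // <8 x i32>: exact fit, no widening
}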
---
llvm/lib/Target/RISCV/RISCVTargetTransformInfo.cpp | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/llvm/lib/Target/RISCV/RISCVTargetTransformInfo.cpp b/llvm/lib/Target/RISCV/RISCVTargetTransformInfo.cpp
index b041ee982f5e7..09f5b72855f74 100644
--- a/llvm/lib/Target/RISCV/RISCVTargetTransformInfo.cpp
+++ b/llvm/lib/Target/RISCV/RISCVTargetTransformInfo.cpp
@@ -1398,12 +1398,14 @@ InstructionCost RISCVTTIImpl::getMemoryOpCost(unsigned Opcode, Type *Src,
if (CostKind != TTI::TCK_RecipThroughput)
return Cost;
- // Our actual lowering uses a VL predicated load of the next legal type,
- // or splitting if there is no wider legal type. This is reflected in
+ // Our actual lowering for the case where a wider legal type is available
+  // uses a VL predicated load on the wider type.  This is reflected in
// the result of getTypeLegalizationCost, but BasicTTI assumes the
// widened cases are scalarized.
const DataLayout &DL = this->getDataLayout();
- if (Src->isVectorTy() && LT.second.isVector()) {
+ if (Src->isVectorTy() && LT.second.isVector() &&
+ TypeSize::isKnownLT(DL.getTypeStoreSizeInBits(Src),
+ LT.second.getSizeInBits())) {
auto *SrcVT = cast<VectorType>(Src);
TypeSize SrcEltSize = DL.getTypeStoreSizeInBits(SrcVT->getElementType());
TypeSize LegalEltSize = LT.second.getVectorElementType().getSizeInBits();