[llvm] 4604441 - [SVE][CodeGen][DAGCombiner] Fix TypeSize warning in redundant store elimination
Joe Ellis via llvm-commits
llvm-commits at lists.llvm.org
Mon Oct 26 09:24:48 PDT 2020
Author: Peter Waller
Date: 2020-10-26T16:23:42Z
New Revision: 4604441386dc5fcd3165f4b39f5fa2e2c600f1bc
URL: https://github.com/llvm/llvm-project/commit/4604441386dc5fcd3165f4b39f5fa2e2c600f1bc
DIFF: https://github.com/llvm/llvm-project/commit/4604441386dc5fcd3165f4b39f5fa2e2c600f1bc.diff
LOG: [SVE][CodeGen][DAGCombiner] Fix TypeSize warning in redundant store elimination
The modified code in visitSTORE was missing a scalable vector check and was
still using the now-deprecated implicit cast of TypeSize to uint64_t through
the overloaded operator. This patch fixes both issues.
This brings the logic in line with the comment on the context line immediately
above the added precondition.
Add a test to Redundantstore.ll checking that the warning is not triggered.
Added:
Modified:
llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
llvm/test/CodeGen/AArch64/Redundantstore.ll
Removed:
################################################################################
diff --git a/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp b/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
index f4cf77ba8bc0..4d1074560886 100644
--- a/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
@@ -17326,11 +17326,12 @@ SDValue DAGCombiner::visitSTORE(SDNode *N) {
!ST1->getBasePtr().isUndef() &&
// BaseIndexOffset and the code below requires knowing the size
// of a vector, so bail out if MemoryVT is scalable.
+ !ST->getMemoryVT().isScalableVector() &&
!ST1->getMemoryVT().isScalableVector()) {
const BaseIndexOffset STBase = BaseIndexOffset::match(ST, DAG);
const BaseIndexOffset ChainBase = BaseIndexOffset::match(ST1, DAG);
- unsigned STBitSize = ST->getMemoryVT().getSizeInBits();
- unsigned ChainBitSize = ST1->getMemoryVT().getSizeInBits();
+ unsigned STBitSize = ST->getMemoryVT().getFixedSizeInBits();
+ unsigned ChainBitSize = ST1->getMemoryVT().getFixedSizeInBits();
// If this is a store who's preceding store to a subset of the current
// location and no one other node is chained to that store we can
// effectively drop the store. Do not remove stores to undef as they may
diff --git a/llvm/test/CodeGen/AArch64/Redundantstore.ll b/llvm/test/CodeGen/AArch64/Redundantstore.ll
index b7822a882b4a..6807a861d6cb 100644
--- a/llvm/test/CodeGen/AArch64/Redundantstore.ll
+++ b/llvm/test/CodeGen/AArch64/Redundantstore.ll
@@ -1,8 +1,11 @@
-; RUN: llc < %s -O3 -mtriple=aarch64-eabi | FileCheck %s
+; RUN: llc < %s -O3 -mtriple=aarch64-eabi 2>&1 | FileCheck %s
target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"
@end_of_array = common global i8* null, align 8
+; The tests in this file should not produce a TypeSize warning.
+; CHECK-NOT: warning: {{.*}}TypeSize is not scalable
+
; CHECK-LABEL: @test
; CHECK: stur
; CHECK-NOT: stur
@@ -23,3 +26,23 @@ entry:
ret i8* %0
}
+; #include <arm_sve.h>
+; #include <stdint.h>
+;
+; void redundant_store(uint32_t *x) {
+; *x = 1;
+; *(svint32_t *)x = svdup_s32(0);
+; }
+
+; CHECK-LABEL: @redundant_store
+define void @redundant_store(i32* nocapture %x) local_unnamed_addr #0 {
+ %1 = bitcast i32* %x to <vscale x 4 x i32>*
+ store i32 1, i32* %x, align 4
+ %2 = tail call <vscale x 4 x i32> @llvm.aarch64.sve.dup.x.nxv4i32(i32 0)
+ store <vscale x 4 x i32> %2, <vscale x 4 x i32>* %1, align 16
+ ret void
+}
+
+declare <vscale x 4 x i32> @llvm.aarch64.sve.dup.x.nxv4i32(i32)
+
+attributes #0 = { "target-cpu"="generic" "target-features"="+neon,+sve,+v8.2a" }