[llvm-branch-commits] [llvm] 7cb6829 - SROA: Don't drop atomic load/store alignments (PR45010)

Hans Wennborg via llvm-branch-commits llvm-branch-commits at lists.llvm.org
Fri Feb 28 02:32:20 PST 2020


Author: Hans Wennborg
Date: 2020-02-28T11:27:33+01:00
New Revision: 7cb6829291280a2adcc260346a7a56b8bddd43db

URL: https://github.com/llvm/llvm-project/commit/7cb6829291280a2adcc260346a7a56b8bddd43db
DIFF: https://github.com/llvm/llvm-project/commit/7cb6829291280a2adcc260346a7a56b8bddd43db.diff

LOG: SROA: Don't drop atomic load/store alignments (PR45010)

SROA will drop the explicit alignment on allocas when the ABI guarantees
enough alignment. Because the alignment on new load/store instructions is
set based on the alloca's alignment, SROA would end up dropping the
alignment from atomic loads and stores, which is not allowed (see bug).
For those, make sure to always carry over the alignment from the previous
instruction.

Differential revision: https://reviews.llvm.org/D75266

(cherry picked from commit d48c981697a49653efff9dd14fa692d99e6fa868)
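
For illustration, here is a minimal C++ sketch of the pattern the patch
applies; this is not the actual SROA code path, and the helper name and
builder setup are hypothetical. The new load is first built with the
alignment derived from the new alloca, and the original alignment is then
restored whenever the access is atomic:

    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/Instructions.h"
    #include "llvm/Support/Alignment.h"
    using namespace llvm;

    // Hypothetical helper mirroring the patch: rewrite a load onto a new
    // pointer, but never weaken the alignment of an atomic access.
    static LoadInst *rewriteLoad(IRBuilder<> &IRB, LoadInst &OldLI,
                                 Value *NewPtr, Align NewAllocaAlign) {
      LoadInst *NewLI = IRB.CreateAlignedLoad(OldLI.getType(), NewPtr,
                                              NewAllocaAlign,
                                              OldLI.isVolatile(),
                                              OldLI.getName());
      if (OldLI.isAtomic())
        NewLI->setAtomic(OldLI.getOrdering(), OldLI.getSyncScopeID());
      // Atomic loads must keep an explicit alignment; carry over the
      // alignment of the instruction being replaced rather than relying
      // on the new alloca's (possibly dropped) alignment.
      if (NewLI->isAtomic())
        NewLI->setAlignment(OldLI.getAlign());
      return NewLI;
    }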

Added: 
    

Modified: 
    llvm/lib/Transforms/Scalar/SROA.cpp
    llvm/test/Transforms/SROA/alignment.ll

Removed: 
    


################################################################################
diff  --git a/llvm/lib/Transforms/Scalar/SROA.cpp b/llvm/lib/Transforms/Scalar/SROA.cpp
index 89916e43fce2..1bdd806f15ac 100644
--- a/llvm/lib/Transforms/Scalar/SROA.cpp
+++ b/llvm/lib/Transforms/Scalar/SROA.cpp
@@ -2519,6 +2519,8 @@ class llvm::sroa::AllocaSliceRewriter
         NewLI->setAAMetadata(AATags);
       if (LI.isVolatile())
         NewLI->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
+      if (NewLI->isAtomic())
+        NewLI->setAlignment(LI.getAlign());
 
       // Any !nonnull metadata or !range metadata on the old load is also valid
       // on the new load. This is even true in some cases even when the loads
@@ -2709,6 +2711,8 @@ class llvm::sroa::AllocaSliceRewriter
       NewSI->setAAMetadata(AATags);
     if (SI.isVolatile())
       NewSI->setAtomic(SI.getOrdering(), SI.getSyncScopeID());
+    if (NewSI->isAtomic())
+      NewSI->setAlignment(SI.getAlign());
     Pass.DeadInsts.insert(&SI);
     deleteIfTriviallyDead(OldOp);
 

diff  --git a/llvm/test/Transforms/SROA/alignment.ll b/llvm/test/Transforms/SROA/alignment.ll
index 81f8f2a00ba8..674f45fb1665 100644
--- a/llvm/test/Transforms/SROA/alignment.ll
+++ b/llvm/test/Transforms/SROA/alignment.ll
@@ -228,4 +228,19 @@ define void @test10() {
   ret void
 }
 
+%struct = type { i32, i32 }
+define dso_local i32 @pr45010(%struct* %A) {
+; CHECK-LABEL: @pr45010
+; CHECK: load atomic volatile i32, {{.*}}, align 4
+
+  %B = alloca %struct, align 4
+  %A.i = getelementptr inbounds %struct, %struct* %A, i32 0, i32 0
+  %B.i = getelementptr inbounds %struct, %struct* %B, i32 0, i32 0
+  %1 = load i32, i32* %A.i, align 4
+  store atomic volatile i32 %1, i32* %B.i release, align 4
+  %2 = bitcast %struct* %B to i32*
+  %x = load atomic volatile i32, i32* %2 acquire, align 4
+  ret i32 %x
+}
+
 declare void @populate(i8*)
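
Note that both SROA.cpp hunks only pin the alignment when the rewritten
access is atomic; non-atomic loads and stores keep taking their alignment
from the new alloca, so SROA can still drop over-conservative explicit
alignments in the common case. The new test exercises the PR45010 pattern
and checks that the rewritten atomic volatile load keeps its explicit
align 4.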

More information about the llvm-branch-commits mailing list