[PATCH] D58490: [ARM] Be super conservative about atomics

Philip Reames via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Mon Feb 25 09:51:43 PST 2019


reames updated this revision to Diff 188209.
reames retitled this revision from "[ARM, Lanai] Be super conservative about atomics" to "[ARM] Be super conservative about atomics".
reames added a comment.

Rebase after landing the Lanai change and addressing the ARM comment.  Pending an LGTM from ARM contributors.


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D58490/new/

https://reviews.llvm.org/D58490

Files:
  lib/Target/ARM/ARMLoadStoreOptimizer.cpp


Index: lib/Target/ARM/ARMLoadStoreOptimizer.cpp
===================================================================
--- lib/Target/ARM/ARMLoadStoreOptimizer.cpp
+++ lib/Target/ARM/ARMLoadStoreOptimizer.cpp
@@ -1580,7 +1580,9 @@
   const MachineMemOperand &MMO = **MI.memoperands_begin();
 
   // Don't touch volatile memory accesses - we may be changing their order.
-  if (MMO.isVolatile())
+  // TODO: We could allow unordered and monotonic atomics here, but we need to
+  // make sure the resulting ldm/stm is correctly marked as atomic. 
+  if (MMO.isVolatile() || MMO.isAtomic())
     return false;
 
   // Unaligned ldr/str is emulated by some kernels, but unaligned ldm/stm is
@@ -2144,7 +2146,8 @@
   // At the moment, we ignore the memoryoperand's value.
   // If we want to use AliasAnalysis, we should check it accordingly.
   if (!Op0->hasOneMemOperand() ||
-      (*Op0->memoperands_begin())->isVolatile())
+      (*Op0->memoperands_begin())->isVolatile() ||
+      (*Op0->memoperands_begin())->isAtomic())
     return false;
 
   unsigned Align = (*Op0->memoperands_begin())->getAlignment();
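
For readers skimming the diff: both hunks add the same bail-out to the merge legality checks, rejecting atomic memory operands alongside volatile ones.  The following is a minimal standalone sketch of that pattern, not code from the patch; the helper name mayMergeMemOp and the exact structure are assumptions, but the MachineMemOperand predicates are the ones the patch uses.

  // Minimal sketch (hypothetical helper, not part of the patch) of the
  // guard pattern the diff introduces in ARMLoadStoreOptimizer.
  #include "llvm/CodeGen/MachineInstr.h"
  #include "llvm/CodeGen/MachineMemOperand.h"

  using namespace llvm;

  static bool mayMergeMemOp(const MachineInstr &MI) {
    // Only consider instructions with exactly one memory operand.
    if (!MI.hasOneMemOperand())
      return false;
    const MachineMemOperand &MMO = **MI.memoperands_begin();
    // Reordering or widening a volatile access changes observable behavior.
    if (MMO.isVolatile())
      return false;
    // Conservatively reject atomics too: a merged ldm/stm would not carry
    // the ordering of the original accesses.  (Per the TODO, unordered and
    // monotonic atomics could be allowed once the result is marked atomic.)
    if (MMO.isAtomic())
      return false;
    return true;
  }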


