[PATCH] D16875: MemorySSA Optimizations: Patch 1 of N
George Burgess IV via llvm-commits
llvm-commits at lists.llvm.org
Thu Feb 25 18:14:52 PST 2016
george.burgess.iv added inline comments.
================
Comment at: lib/Transforms/Utils/MemorySSA.cpp:714
@@ +713,3 @@
+ const Instruction *MaybeDef) {
+ // We only check for load-load clobbers; everything else is AA's problem. :)
+ if (!isa<LoadInst>(MaybeDef) || !isa<LoadInst>(Use))
----------------
davidxl wrote:
> This needs some explanation --> can the alias query be moved here to make this function more general?
Do you still think this should happen (given my response to your comment below)?
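(If it helps, here's roughly how I read that suggestion -- the wrapper name `defClobbersUse` and its body below are just my guesses for illustration, not code from this patch:)
```
// Hypothetical sketch only: fold the AA query in, so callers get a single
// "does MaybeDef clobber Use at Loc?" answer. Assumes the includes already
// present in MemorySSA.cpp.
static bool defClobbersUse(AliasAnalysis *AA, const MemoryLocation &Loc,
                           const Instruction *Use,
                           const Instruction *MaybeDef) {
  // Load-load pairs: no clobber unless ordering/volatility pins them in place.
  if (isa<LoadInst>(MaybeDef) && isa<LoadInst>(Use))
    return !canUseBeReorderedAboveDef(Use, MaybeDef);
  // Everything else stays AA's problem.
  return AA->getModRefInfo(MaybeDef, Loc) & MRI_Mod;
}
```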
================
Comment at: lib/Transforms/Utils/MemorySSA.cpp:724
@@ +723,3 @@
+
+ // ...But volatile operations can be reordered with non-volatile operations.
+ //
----------------
davidxl wrote:
> Move this comment up and combine with the other volatile related comment.
I'm assuming you meant just the first sentence -- done. :)
================
Comment at: lib/Transforms/Utils/MemorySSA.cpp:744
@@ -713,1 +743,3 @@
+ return false;
return AA->getModRefInfo(DefMemoryInst, Loc) & MRI_Mod;
+ }
----------------
davidxl wrote:
> This does not look correct:
>
> When canUseBeReorderedAboveDef returns false, it is not OK to unconditionally feed it into the AA query -- unless the AA query also honors the ordering constraint (I have not checked).
Looking at the things we query, it seems that `AAResults` (lib/Analysis/AliasAnalysis.cpp) *always* hands back `MRI_ModRef` for loads/stores with any kind of ordering (including volatile), and it returns `MRI_ModRef` for an `AtomicCmpXchgInst` or `AtomicRMWInst` with ordering greater than `Monotonic`. This seems inconsistent, so I've sent out http://reviews.llvm.org/D17631 to see whether that's intentional.
If said patch goes in, `AAResults` should always be conservative in the cases that `canUseBeReorderedAboveDef` says are no bueno.
If you think it would be better, I can try adding a `canUseNeverBeReorderedAboveDef` function, or I can have `canUseBeReorderedAboveDef` return an enum of `{Never, IfNoAlias, Always}`. Entirely up to you. :)
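For clarity, here's roughly what I'm picturing for the enum version -- the enum name, the rules, and the `IsOrdered` helper below are illustrative assumptions, not what the patch currently does:
```
// Sketch of the enum option; exact rules are still up for discussion.
enum class Reorderability { Never, IfNoAlias, Always };

static Reorderability canUseBeReorderedAboveDef(const LoadInst *Use,
                                                const LoadInst *MaybeDef) {
  // Two volatile operations must keep their relative order.
  if (Use->isVolatile() && MaybeDef->isVolatile())
    return Reorderability::Never;

  // Be conservative about any atomic ordering stronger than unordered
  // (monotonic, acquire, seq_cst, ...).
  auto IsOrdered = [](const LoadInst *LI) {
    return LI->isAtomic() && LI->getOrdering() != AtomicOrdering::Unordered;
  };
  if (IsOrdered(Use) || IsOrdered(MaybeDef))
    return Reorderability::Never;

  // Mixing volatile and non-volatile: only reorder if the two don't alias,
  // so the caller still has to consult AA in this case.
  if (Use->isVolatile() || MaybeDef->isVolatile())
    return Reorderability::IfNoAlias;

  // Plain load/load pairs can always be reordered.
  return Reorderability::Always;
}
```
The caller would presumably map `Never` to "treat the def as a clobber", `Always` to "not a clobber", and only fall through to the AA query for `IfNoAlias`.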
================
Comment at: test/Transforms/Util/MemorySSA/atomic-clobber.ll:16
@@ -14,3 +15,3 @@
%2 = load i32, i32* %a, align 4
- %3 = add i32 %1, %2
- ret i32 %3
+ ret void
+}
----------------
davidxl wrote:
> what is this change about?
Random noise. Reverted.
================
Comment at: test/Transforms/Util/MemorySSA/atomic-clobber.ll:41
@@ +40,3 @@
+ %1 = load atomic i32, i32* %a acquire, align 4
+; CHECK: MemoryUse(1)
+; CHECK-NEXT: %2 = load atomic i32
----------------
davidxl wrote:
> Is this correct? The atomic load can not be reordered before the previous load with acquire.
I believe it is. Even if the `load` for `%2` were non-atomic, it couldn't be hoisted above an acquire, because that would break cases like
```
struct Foo {
  std::atomic<int> a;
  int b;

  Foo() : a(0), b(0) {}
};

Foo f;

void thread1() {
  f.b = 1;
  f.a.store(1, std::memory_order_release);
}

void thread2() {
  if (f.a.load(std::memory_order_acquire)) {
    assert(f.b == 1);
  }
}
```
Wouldn't it? (Specifically, if we allowed the hoist, `thread2` could load `f.b` before `f.a`.)
================
Comment at: test/Transforms/Util/MemorySSA/atomic-clobber.ll:44
@@ +43,3 @@
+ %2 = load atomic i32, i32* %a unordered, align 4
+; CHECK: 2 = MemoryDef(1)
+; CHECK-NEXT: %3 = load atomic i32
----------------
davidxl wrote:
> Is this too conservative?
For the same reason as above, I think this is all we can do here.
http://reviews.llvm.org/D16875