[llvm] r215943 - Answer to Philip Reames comments
Robin Morisset
morisset at google.com
Mon Aug 18 15:32:10 PDT 2014
Sorry about doing two commits instead of one for this revision; it was a
mistake on my side (I forgot about the behaviour of git svn).
On Mon, Aug 18, 2014 at 3:18 PM, Robin Morisset <morisset at google.com> wrote:
> Author: morisset
> Date: Mon Aug 18 17:18:14 2014
> New Revision: 215943
>
> URL: http://llvm.org/viewvc/llvm-project?rev=215943&view=rev
> Log:
> Answer to Philip Reames comments
>
> - add a check for volatile (probably unneeded, but I agree that we should
> be conservative about it).
> - strengthen the condition from isUnordered() to isSimple(), as I don't
> understand Unordered semantics well enough to be confident in the previous
> behaviour, and it also matches the comment better this way (thanks for
> catching that one, I had missed the Monotonic/Unordered case).
> - separate one condition into two.
> - lengthen the comment about aliasing and loads.
> - add tests in GVN/atomic.ll.
>
> Modified:
> llvm/trunk/lib/Analysis/MemoryDependenceAnalysis.cpp
> llvm/trunk/test/Transforms/GVN/atomic.ll
>
> Modified: llvm/trunk/lib/Analysis/MemoryDependenceAnalysis.cpp
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Analysis/MemoryDependenceAnalysis.cpp?rev=215943&r1=215942&r2=215943&view=diff
>
> ==============================================================================
> --- llvm/trunk/lib/Analysis/MemoryDependenceAnalysis.cpp (original)
> +++ llvm/trunk/lib/Analysis/MemoryDependenceAnalysis.cpp Mon Aug 18 17:18:14 2014
> @@ -407,21 +407,33 @@ getPointerDependencyFrom(const AliasAnal
>
> // Values depend on loads if the pointers are must aliased. This means that
> // a load depends on another must aliased load from the same value.
> + // One exception is atomic loads: a value can depend on an atomic load
> + // that it does not alias with when this atomic load indicates that
> + // another thread may be accessing the location.
> if (LoadInst *LI = dyn_cast<LoadInst>(Inst)) {
> // Atomic loads have complications involved.
> // A monotonic load is OK if the query inst is itself not atomic.
> // FIXME: This is overly conservative.
> if (!LI->isUnordered()) {
> - if (!QueryInst || LI->getOrdering() != Monotonic)
> + if (!QueryInst)
> + return MemDepResult::getClobber(LI);
> + if (LI->getOrdering() != Monotonic)
> return MemDepResult::getClobber(LI);
> if (auto *QueryLI = dyn_cast<LoadInst>(QueryInst))
> - if (!QueryLI->isUnordered())
> + if (!QueryLI->isSimple())
> return MemDepResult::getClobber(LI);
> if (auto *QuerySI = dyn_cast<StoreInst>(QueryInst))
> - if (!QuerySI->isUnordered())
> + if (!QuerySI->isSimple())
> return MemDepResult::getClobber(LI);
> }
>
> + // FIXME: this is overly conservative.
> + // While volatile accesses cannot be eliminated, they do not have to
> + // clobber non-aliasing locations, as normal accesses can for example be
> + // reordered with volatile accesses.
> + if (LI->isVolatile())
> + return MemDepResult::getClobber(LI);
> +
> AliasAnalysis::Location LoadLoc = AA->getLocation(LI);
>
> // If we found a pointer, check if it could be the same as our pointer.
> @@ -481,16 +493,25 @@ getPointerDependencyFrom(const AliasAnal
> // A monotonic store is OK if the query inst is itself not atomic.
> // FIXME: This is overly conservative.
> if (!SI->isUnordered()) {
> - if (!QueryInst || SI->getOrdering() != Monotonic)
> + if (!QueryInst)
> + return MemDepResult::getClobber(SI);
> + if (SI->getOrdering() != Monotonic)
> return MemDepResult::getClobber(SI);
> if (auto *QueryLI = dyn_cast<LoadInst>(QueryInst))
> - if (!QueryLI->isUnordered())
> + if (!QueryLI->isSimple())
> return MemDepResult::getClobber(SI);
> if (auto *QuerySI = dyn_cast<StoreInst>(QueryInst))
> - if (!QuerySI->isUnordered())
> + if (!QuerySI->isSimple())
> return MemDepResult::getClobber(SI);
> }
>
> + // FIXME: this is overly conservative.
> + // While volatile accesses cannot be eliminated, they do not have to
> + // clobber non-aliasing locations, as normal accesses can for example be
> + // reordered with volatile accesses.
> + if (SI->isVolatile())
> + return MemDepResult::getClobber(SI);
> +
> // If alias analysis can tell that this store is guaranteed to not modify
> // the query pointer, ignore it. Use getModRefInfo to handle cases where
> // the query pointer points to constant memory etc.
>
> Modified: llvm/trunk/test/Transforms/GVN/atomic.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/GVN/atomic.ll?rev=215943&r1=215942&r2=215943&view=diff
>
> ==============================================================================
> --- llvm/trunk/test/Transforms/GVN/atomic.ll (original)
> +++ llvm/trunk/test/Transforms/GVN/atomic.ll Mon Aug 18 17:18:14 2014
> @@ -78,3 +78,29 @@ entry:
> %x3 = add i32 %x, %x2
> ret i32 %x3
> }
> +
> +; GVN across monotonic store (allowed)
> +define i32 @test7() nounwind uwtable ssp {
> +; CHECK: test7
> +; CHECK: add i32 %x, %x
> +entry:
> + %x = load i32* @y
> + store atomic i32 %x, i32* @x monotonic, align 4
> + %y = load i32* @y
> + %z = add i32 %x, %y
> + ret i32 %z
> +}
> +
> +; GVN of an unordered across monotonic load (not allowed)
> +define i32 @test8() nounwind uwtable ssp {
> +; CHECK: test8
> +; CHECK: add i32 %x, %y
> +entry:
> + %x = load atomic i32* @y unordered, align 4
> + %clobber = load atomic i32* @x monotonic, align 4
> + %y = load atomic i32* @y monotonic, align 4
> + %z = add i32 %x, %y
> + ret i32 %z
> +}
> +
> +
>
>
> _______________________________________________
> llvm-commits mailing list
> llvm-commits at cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits
>