[llvm] r283965 - GVN-hoist: fix store past load dependence analysis (PR30216, PR30499)

Reid Kleckner via llvm-commits llvm-commits at lists.llvm.org
Wed Oct 12 17:28:41 PDT 2016


Reverted in r284093. PR30499 still isn't fixed with this change. You might
want to check the LLVM IR you used to test for it.

On Wed, Oct 12, 2016 at 4:48 PM, Reid Kleckner <rnk at google.com> wrote:

> I suspect this causes a crash on valid code during the Chromium build:
> https://build.chromium.org/p/chromium.fyi/builders/ClangToTLinuxLLD/builds/1719/steps/compile/logs/stdio
>
> The crash is during GVN hoisting:
> 3. Running pass 'Early GVN Hoisting of Expressions' on function '@hb_buffer_deserialize_glyphs'
>
> Your new helper is also on the stack:
> #8 0x0000000001d0a6af llvm::defClobbersUseOrDef(llvm::MemoryDef*, llvm::MemoryUseOrDef const*, llvm::AAResults&)
>
> I will try to get a reproducer.
>
> On Tue, Oct 11, 2016 at 7:23 PM, Sebastian Pop via llvm-commits <llvm-commits at lists.llvm.org> wrote:
>
>> Author: spop
>> Date: Tue Oct 11 21:23:39 2016
>> New Revision: 283965
>>
>> URL: http://llvm.org/viewvc/llvm-project?rev=283965&view=rev
>> Log:
>> GVN-hoist: fix store past load dependence analysis (PR30216, PR30499)
>>
>> This is a refreshed version of a patch that was reverted: it fixes
>> the problems reported in both PR30216 and PR30499, and contains
>> all the test cases from both bugs.
>>
>> To hoist stores past loads, we used to search for potentially
>> conflicting loads on the hoisting path by following a MemorySSA
>> def-def link from the store to be hoisted to the previous
>> defining memory access, and from there following the def-use
>> chains to all the uses occurring on the hoisting path. The
>> problem is that the def-def link may point to a store that does
>> not alias the store to be hoisted, in which case the loads the
>> walk reaches may not alias it either; worse, as in the test case
>> of PR30216, the loads that do alias the store to be hoisted are
>> never visited.
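>>
>> As a minimal sketch (not the exact GVNHoist code) of the walk
>> described above, using the public MemorySSA API; the helper name
>> oldStyleWalkSeesALoad is made up for illustration:
>>
>>   #include "llvm/Transforms/Utils/MemorySSA.h"
>>   using namespace llvm;
>>
>>   // One hop up the def-def chain from the store's MemoryDef, then
>>   // scan the uses hanging off that single access. When the
>>   // intermediate def does not alias the store, the loads found
>>   // this way are not the loads that matter for the hoist.
>>   static bool oldStyleWalkSeesALoad(MemoryDef *StoreDef) {
>>     MemoryAccess *PrevDef = StoreDef->getDefiningAccess();
>>     for (User *U : PrevDef->users())
>>       if (isa<MemoryUse>(U))
>>         return true;
>>     return false;
>>   }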
>>
>> The current patch visits all loads on the path from the store to
>> be hoisted to the hoisting position and asks the alias analysis
>> whether the store may alias each load. I was not able to use the
>> MemorySSA functionality to ask whether the load is clobbered by
>> the store: I'm not sure which function to call, so I used a call
>> to AA->isNoAlias().
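>>
>> A minimal sketch of the per-load query, assuming AA is the pass's
>> AliasAnalysis; the helper name storeMayAliasLoad is illustrative,
>> not a name used in the patch:
>>
>>   // Hoisting the store past this load is only safe when alias
>>   // analysis proves the two memory locations are independent.
>>   static bool storeMayAliasLoad(const StoreInst *SI,
>>                                 const LoadInst *LI,
>>                                 AliasAnalysis &AA) {
>>     return !AA.isNoAlias(MemoryLocation::get(SI),
>>                          MemoryLocation::get(LI));
>>   }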
>>
>> Store past store still works as before, using a MemorySSA query:
>> I added an extra test to pr30216.ll to make sure hoisting a store
>> past a store does not regress.
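>>
>> For comparison, a hedged sketch of a store-past-store check that
>> stays within MemorySSA; the helper name noStoreOnPath is made up,
>> and it assumes MemorySSA's walker and dominates() interface:
>>
>>   // If the nearest clobber of the store's MemoryDef dominates the
>>   // hoisting point, no aliasing store sits between the hoisting
>>   // point and the store to be hoisted.
>>   static bool noStoreOnPath(MemorySSA &MSSA, MemoryDef *StoreDef,
>>                             MemoryAccess *HoistPtAccess) {
>>     MemoryAccess *Clobber =
>>         MSSA.getWalker()->getClobberingMemoryAccess(StoreDef);
>>     return MSSA.dominates(Clobber, HoistPtAccess);
>>   }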
>>
>> Tested on x86_64-linux with check and a test-suite run.
>>
>> Differential Revision: https://reviews.llvm.org/D25476
>>
>> Added:
>>     llvm/trunk/test/Transforms/GVNHoist/pr30216.ll
>>     llvm/trunk/test/Transforms/GVNHoist/pr30499.ll
>> Modified:
>>     llvm/trunk/include/llvm/Transforms/Utils/MemorySSA.h
>>     llvm/trunk/lib/Transforms/Scalar/GVNHoist.cpp
>>     llvm/trunk/lib/Transforms/Utils/MemorySSA.cpp
>>
>> Modified: llvm/trunk/include/llvm/Transforms/Utils/MemorySSA.h
>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/Transforms/Utils/MemorySSA.h?rev=283965&r1=283964&r2=283965&view=diff
>> ==============================================================================
>> --- llvm/trunk/include/llvm/Transforms/Utils/MemorySSA.h (original)
>> +++ llvm/trunk/include/llvm/Transforms/Utils/MemorySSA.h Tue Oct 11 21:23:39 2016
>> @@ -974,6 +974,10 @@ inline upward_defs_iterator upward_defs_
>>
>>  inline upward_defs_iterator upward_defs_end() { return upward_defs_iterator(); }
>>
>> +bool instructionClobbersQuery(MemoryDef *MD,
>> +                              const MemoryLocation &UseLoc,
>> +                              const Instruction *UseInst,
>> +                              AliasAnalysis &AA);
>>  } // end namespace llvm
>>
>>  #endif // LLVM_TRANSFORMS_UTILS_MEMORYSSA_H
>>
>> Modified: llvm/trunk/lib/Transforms/Scalar/GVNHoist.cpp
>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/Scalar/GVNHoist.cpp?rev=283965&r1=283964&r2=283965&view=diff
>> ==============================================================================
>> --- llvm/trunk/lib/Transforms/Scalar/GVNHoist.cpp (original)
>> +++ llvm/trunk/lib/Transforms/Scalar/GVNHoist.cpp Tue Oct 11 21:23:39 2016
>> @@ -329,31 +329,36 @@ private:
>>      return I1DFS < I2DFS;
>>    }
>>
>> -  // Return true when there are users of Def in BB.
>> -  bool hasMemoryUseOnPath(MemoryAccess *Def, const BasicBlock *BB,
>> -                          const Instruction *OldPt) {
>> -    const BasicBlock *DefBB = Def->getBlock();
>> +  // Return true when there are memory uses of Def in BB.
>> +  bool hasMemoryUse(const Instruction *NewPt, MemoryDef *Def,
>> +                    const BasicBlock *BB) {
>> +    const MemorySSA::AccessList *Acc = MSSA->getBlockAccesses(BB);
>> +    if (!Acc)
>> +      return false;
>> +
>> +    Instruction *OldPt = Def->getMemoryInst();
>>      const BasicBlock *OldBB = OldPt->getParent();
>> +    const BasicBlock *NewBB = NewPt->getParent();
>> +    bool ReachedNewPt = false;
>>
>> -    for (User *U : Def->users())
>> -      if (auto *MU = dyn_cast<MemoryUse>(U)) {
>> -        // FIXME: MU->getBlock() does not get updated when we move the instruction.
>> -        BasicBlock *UBB = MU->getMemoryInst()->getParent();
>> -        // Only analyze uses in BB.
>> -        if (BB != UBB)
>> -          continue;
>> -
>> -        // A use in the same block as the Def is on the path.
>> -        if (UBB == DefBB) {
>> -          assert(MSSA->locallyDominates(Def, MU) && "def not dominating use");
>> -          return true;
>> +    for (const MemoryAccess &MA : *Acc)
>> +      if (const MemoryUse *MU = dyn_cast<MemoryUse>(&MA)) {
>> +        Instruction *Insn = MU->getMemoryInst();
>> +
>> +        // Do not check whether MU aliases Def when MU occurs after OldPt.
>> +        if (BB == OldBB && firstInBB(OldPt, Insn))
>> +          break;
>> +
>> +        // Do not check whether MU aliases Def when MU occurs before NewPt.
>> +        if (BB == NewBB) {
>> +          if (!ReachedNewPt) {
>> +            if (firstInBB(Insn, NewPt))
>> +              continue;
>> +            ReachedNewPt = true;
>> +          }
>>          }
>> -
>> -        if (UBB != OldBB)
>> -          return true;
>> -
>> -        // It is only harmful to hoist when the use is before OldPt.
>> -        if (firstInBB(MU->getMemoryInst(), OldPt))
>> +        if (instructionClobbersQuery(Def, MemoryLocation::get(Insn), Insn,
>> +                                     *AA))
>>            return true;
>>        }
>>
>> @@ -361,17 +366,18 @@ private:
>>    }
>>
>>    // Return true when there are exception handling or loads of memory Def
>> -  // between OldPt and NewPt.
>> +  // between Def and NewPt.  This function is only called for stores: Def is
>> +  // the MemoryDef of the store to be hoisted.
>>
>>    // Decrement by 1 NBBsOnAllPaths for each block between HoistPt and BB, and
>>    // return true when the counter NBBsOnAllPaths reaches 0, except when it is
>>    // initialized to -1 which is unlimited.
>> -  bool hasEHOrLoadsOnPath(const Instruction *NewPt, const Instruction *OldPt,
>> -                          MemoryAccess *Def, int &NBBsOnAllPaths) {
>> +  bool hasEHOrLoadsOnPath(const Instruction *NewPt, MemoryDef *Def,
>> +                          int &NBBsOnAllPaths) {
>>      const BasicBlock *NewBB = NewPt->getParent();
>> -    const BasicBlock *OldBB = OldPt->getParent();
>> +    const BasicBlock *OldBB = Def->getBlock();
>>      assert(DT->dominates(NewBB, OldBB) && "invalid path");
>> -    assert(DT->dominates(Def->getBlock(), NewBB) &&
>> +    assert(DT->dominates(Def->getDefiningAccess()->getBlock(), NewBB) &&
>>             "def does not dominate new hoisting point");
>>
>>      // Walk all basic blocks reachable in depth-first iteration on the inverse
>> @@ -390,7 +396,7 @@ private:
>>          return true;
>>
>>        // Check that we do not move a store past loads.
>> -      if (hasMemoryUseOnPath(Def, *I, OldPt))
>> +      if (hasMemoryUse(NewPt, Def, *I))
>>          return true;
>>
>>        // Stop walk once the limit is reached.
>> @@ -473,7 +479,7 @@ private:
>>
>>      // Check for unsafe hoistings due to side effects.
>>      if (K == InsKind::Store) {
>> -      if (hasEHOrLoadsOnPath(NewPt, OldPt, D, NBBsOnAllPaths))
>> +      if (hasEHOrLoadsOnPath(NewPt, dyn_cast<MemoryDef>(U), NBBsOnAllPaths))
>>          return false;
>>      } else if (hasEHOnPath(NewBB, OldBB, NBBsOnAllPaths))
>>        return false;
>>
>> Modified: llvm/trunk/lib/Transforms/Utils/MemorySSA.cpp
>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/Utils/MemorySSA.cpp?rev=283965&r1=283964&r2=283965&view=diff
>> ==============================================================================
>> --- llvm/trunk/lib/Transforms/Utils/MemorySSA.cpp (original)
>> +++ llvm/trunk/lib/Transforms/Utils/MemorySSA.cpp Tue Oct 11 21:23:39 2016
>> @@ -169,44 +169,6 @@ template <> struct DenseMapInfo<MemoryLo
>>      return LHS == RHS;
>>    }
>>  };
>> -}
>> -
>> -namespace {
>> -struct UpwardsMemoryQuery {
>> -  // True if our original query started off as a call
>> -  bool IsCall;
>> -  // The pointer location we started the query with. This will be empty if
>> -  // IsCall is true.
>> -  MemoryLocation StartingLoc;
>> -  // This is the instruction we were querying about.
>> -  const Instruction *Inst;
>> -  // The MemoryAccess we actually got called with, used to test local domination
>> -  const MemoryAccess *OriginalAccess;
>> -
>> -  UpwardsMemoryQuery()
>> -      : IsCall(false), Inst(nullptr), OriginalAccess(nullptr) {}
>> -
>> -  UpwardsMemoryQuery(const Instruction *Inst, const MemoryAccess *Access)
>> -      : IsCall(ImmutableCallSite(Inst)), Inst(Inst), OriginalAccess(Access) {
>> -    if (!IsCall)
>> -      StartingLoc = MemoryLocation::get(Inst);
>> -  }
>> -};
>> -
>> -static bool lifetimeEndsAt(MemoryDef *MD, const MemoryLocation &Loc,
>> -                           AliasAnalysis &AA) {
>> -  Instruction *Inst = MD->getMemoryInst();
>> -  if (IntrinsicInst *II = dyn_cast<IntrinsicInst>(Inst)) {
>> -    switch (II->getIntrinsicID()) {
>> -    case Intrinsic::lifetime_start:
>> -    case Intrinsic::lifetime_end:
>> -      return AA.isMustAlias(MemoryLocation(II->getArgOperand(1)), Loc);
>> -    default:
>> -      return false;
>> -    }
>> -  }
>> -  return false;
>> -}
>>
>>  enum class Reorderability { Always, IfNoAlias, Never };
>>
>> @@ -248,21 +210,10 @@ static Reorderability getLoadReorderabil
>>    return Result;
>>  }
>>
>> -static bool isUseTriviallyOptimizableToLiveOnEntry(AliasAnalysis &AA,
>> -                                                   const Instruction *I) {
>> -  // If the memory can't be changed, then loads of the memory can't be
>> -  // clobbered.
>> -  //
>> -  // FIXME: We should handle invariant groups, as well. It's a bit harder,
>> -  // because we need to pay close attention to invariant group barriers.
>> -  return isa<LoadInst>(I) && (I->getMetadata(LLVMContext::MD_invariant_load) ||
>> -                              AA.pointsToConstantMemory(I));
>> -}
>> -
>> -static bool instructionClobbersQuery(MemoryDef *MD,
>> -                                     const MemoryLocation &UseLoc,
>> -                                     const Instruction *UseInst,
>> -                                     AliasAnalysis &AA) {
>> +bool instructionClobbersQuery(MemoryDef *MD,
>> +                              const MemoryLocation &UseLoc,
>> +                              const Instruction *UseInst,
>> +                              AliasAnalysis &AA) {
>>    Instruction *DefInst = MD->getMemoryInst();
>>    assert(DefInst && "Defining instruction not actually an instruction");
>>
>> @@ -303,6 +254,56 @@ static bool instructionClobbersQuery(Mem
>>    return AA.getModRefInfo(DefInst, UseLoc) & MRI_Mod;
>>  }
>>
>> +}
>> +
>> +namespace {
>> +struct UpwardsMemoryQuery {
>> +  // True if our original query started off as a call
>> +  bool IsCall;
>> +  // The pointer location we started the query with. This will be empty if
>> +  // IsCall is true.
>> +  MemoryLocation StartingLoc;
>> +  // This is the instruction we were querying about.
>> +  const Instruction *Inst;
>> +  // The MemoryAccess we actually got called with, used to test local domination
>> +  const MemoryAccess *OriginalAccess;
>> +
>> +  UpwardsMemoryQuery()
>> +      : IsCall(false), Inst(nullptr), OriginalAccess(nullptr) {}
>> +
>> +  UpwardsMemoryQuery(const Instruction *Inst, const MemoryAccess *Access)
>> +      : IsCall(ImmutableCallSite(Inst)), Inst(Inst), OriginalAccess(Access) {
>> +    if (!IsCall)
>> +      StartingLoc = MemoryLocation::get(Inst);
>> +  }
>> +};
>> +
>> +static bool lifetimeEndsAt(MemoryDef *MD, const MemoryLocation &Loc,
>> +                           AliasAnalysis &AA) {
>> +  Instruction *Inst = MD->getMemoryInst();
>> +  if (IntrinsicInst *II = dyn_cast<IntrinsicInst>(Inst)) {
>> +    switch (II->getIntrinsicID()) {
>> +    case Intrinsic::lifetime_start:
>> +    case Intrinsic::lifetime_end:
>> +      return AA.isMustAlias(MemoryLocation(II->getArgOperand(1)), Loc);
>> +    default:
>> +      return false;
>> +    }
>> +  }
>> +  return false;
>> +}
>> +
>> +static bool isUseTriviallyOptimizableToLiveOnEntry(AliasAnalysis &AA,
>> +                                                   const Instruction *I) {
>> +  // If the memory can't be changed, then loads of the memory can't be
>> +  // clobbered.
>> +  //
>> +  // FIXME: We should handle invariant groups, as well. It's a bit harder,
>> +  // because we need to pay close attention to invariant group barriers.
>> +  return isa<LoadInst>(I) && (I->getMetadata(LLVMContext::MD_invariant_load) ||
>> +                              AA.pointsToConstantMemory(I));
>> +}
>> +
>>  static bool instructionClobbersQuery(MemoryDef *MD, MemoryUse *MU,
>>                                       const MemoryLocOrCall &UseMLOC,
>>                                       AliasAnalysis &AA) {
>>
>> Added: llvm/trunk/test/Transforms/GVNHoist/pr30216.ll
>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/GVNHoist/pr30216.ll?rev=283965&view=auto
>> ==============================================================================
>> --- llvm/trunk/test/Transforms/GVNHoist/pr30216.ll (added)
>> +++ llvm/trunk/test/Transforms/GVNHoist/pr30216.ll Tue Oct 11 21:23:39 2016
>> @@ -0,0 +1,52 @@
>> +; RUN: opt -S -gvn-hoist < %s | FileCheck %s
>> +
>> +; Make sure the two stores @B do not get hoisted past the load @B.
>> +
>> +; CHECK-LABEL: define i8* @Foo
>> +; CHECK: store
>> +; CHECK: store
>> +; CHECK: load
>> +; CHECK: store
>> +
>> +@A = external global i8
>> +@B = external global i8*
>> +
>> +define i8* @Foo() {
>> +  store i8 0, i8* @A
>> +  br i1 undef, label %if.then, label %if.else
>> +
>> +if.then:
>> +  store i8* null, i8** @B
>> +  ret i8* null
>> +
>> +if.else:
>> +  %1 = load i8*, i8** @B
>> +  store i8* null, i8** @B
>> +  ret i8* %1
>> +}
>> +
>> +; Make sure the two stores @B do not get hoisted past the store @GlobalVar.
>> +
>> +; CHECK-LABEL: define i8* @Fun
>> +; CHECK: store
>> +; CHECK: store
>> +; CHECK: store
>> +; CHECK: store
>> +; CHECK: load
>> +
>> +@GlobalVar = internal global i8 0
>> +
>> +define i8* @Fun() {
>> +  store i8 0, i8* @A
>> +  br i1 undef, label %if.then, label %if.else
>> +
>> +if.then:
>> +  store i8* null, i8** @B
>> +  ret i8* null
>> +
>> +if.else:
>> +  store i8 0, i8* @GlobalVar
>> +  store i8* null, i8** @B
>> +  %1 = load i8*, i8** @B
>> +  ret i8* %1
>> +}
>>
>> Added: llvm/trunk/test/Transforms/GVNHoist/pr30499.ll
>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transfor
>> ms/GVNHoist/pr30499.ll?rev=283965&view=auto
>> ============================================================
>> ==================
>> --- llvm/trunk/test/Transforms/GVNHoist/pr30499.ll (added)
>> +++ llvm/trunk/test/Transforms/GVNHoist/pr30499.ll Tue Oct 11 21:23:39
>> 2016
>> @@ -0,0 +1,30 @@
>> +; RUN: opt -S -gvn-hoist < %s
>> +
>> +define void @_Z3fn2v() #0 {
>> +entry:
>> +  %a = alloca i8*, align 8
>> +  %b = alloca i32, align 4
>> +  %0 = load i8*, i8** %a, align 8
>> +  store i8 0, i8* %0, align 1
>> +  %1 = load i32, i32* %b, align 4
>> +  %tobool = icmp ne i32 %1, 0
>> +  br i1 %tobool, label %if.then, label %if.end
>> +
>> +if.then:                                          ; preds = %entry
>> +  %call = call i64 @_Z3fn1v() #2
>> +  %conv = trunc i64 %call to i32
>> +  store i32 %conv, i32* %b, align 4
>> +  br label %if.end
>> +
>> +if.end:                                           ; preds = %if.then, %entry
>> +  %2 = load i8*, i8** %a, align 8
>> +  store i8 0, i8* %2, align 1
>> +  ret void
>> +}
>> +
>> +; Function Attrs: nounwind readonly
>> +declare i64 @_Z3fn1v() #1
>> +
>> +attributes #0 = { nounwind "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "no-frame-pointer-elim"="false" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" }
>> +attributes #1 = { nounwind readonly "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "no-frame-pointer-elim"="false" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" }
>> +attributes #2 = { nounwind readonly }
>>
>>