[llvm] r272267 - Reapply "[MBP] Reduce code size by running tail merging in MBP.""

Kyle Butt via llvm-commits llvm-commits at lists.llvm.org
Tue Jun 21 17:33:25 PDT 2016


Hi Haicheng,

I've seen the work you've been doing on tail-merging, and I'm not sure
you're taking the right approach here.

As far as I can tell, your current approach is to do layout, then
tail-merge, and if there were changes, re-do layout.

This is problematic for a few reasons:
1) If tail-merging made changes, the analysis that layout relies on is now
invalidated. The dominator tree will give spurious results.
2) It's expensive. Re-doing layout for an entire function because one new
block was created isn't the best approach. It would be better to find these
blocks as they are laid out, so that the ongoing layout can use the updated
CFG.
3) It undoes the tail-duplication performed during layout (see D18226).
    The tail-dup-layout.ll test shows this best. After rebasing onto your
patch, I re-ran the tests, and this one was no longer passing. After I
figured out why it was failing (a block was missing from the MDT), I was
able to get it to pass again and generate the correct layout, but what's
happening makes me think that we may not be taking the best approach.
Tail-duplication occurs during layout, is completely removed by
tail-merging, and is then re-applied during the second attempt at layout.
We definitely need to arrange the thresholds and behaviors so that
tail-duplication is not undone by tail-merging, and doing the merging in
place during layout would give more scope for that.
   Another troubling example is test/X86/block-placement.ll, specifically
test_loop_rotate. In this test, tail duplication copies the loop condition
into the entry block, and tail merging then removes it. This results in a
different order for the loop from what was created in an earlier pass, and
so the test, correctly, fails.

I think the simplest alternative that would let you keep this approach
would be to make tail-merging strictly honor the instruction-count
threshold during layout. That would prevent it from undoing the work that
tail-duplication does, and it would be a good first step.
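
(To make that concrete, here is a minimal, self-contained sketch of the
kind of filter I have in mind; the candidate representation and names
below are illustrative, not taken from BranchFolding.)

#include <cstddef>
#include <vector>

// Hypothetical description of a tail-merge candidate: a group of blocks
// sharing a common tail of CommonTailLen instructions.
struct MergeCandidate {
  std::size_t CommonTailLen;
};

// When merging after layout, only accept common tails at least as long as
// the instruction-count threshold, so merging cannot undo duplication of
// short tails done during layout.
static bool shouldMergeAfterLayout(const MergeCandidate &C,
                                   std::size_t MinTailLength) {
  return C.CommonTailLen >= MinTailLength;
}

int main() {
  std::vector<MergeCandidate> Candidates = {{1}, {2}, {5}};
  const std::size_t MinTailLength = 3; // illustrative threshold
  std::size_t Merged = 0;
  for (const MergeCandidate &C : Candidates)
    if (shouldMergeAfterLayout(C, MinTailLength))
      ++Merged; // only the 5-instruction tail passes the threshold here
  return Merged == 1 ? 0 : 1;
}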

Thanks,
Kyle.


On Thu, Jun 9, 2016 at 8:24 AM, Haicheng Wu via llvm-commits <llvm-commits at lists.llvm.org> wrote:

> Author: haicheng
> Date: Thu Jun  9 10:24:29 2016
> New Revision: 272267
>
> URL: http://llvm.org/viewvc/llvm-project?rev=272267&view=rev
> Log:
> Reapply "[MBP] Reduce code size by running tail merging in MBP.""
>
> This reapplies commit r271930, r271915, r271923.  They hit a bug in
> Thumb which is fixed in r272258 now.
>
> The original message:
>
> The code layout that TailMerging (inside BranchFolding) works on is not the
> final layout optimized based on the branch probability. Generally, after
> BlockPlacement, many new merging opportunities emerge.
>
> This patch calls Tail Merging after MBP and calls MBP again if Tail Merging
> merges anything.
>
> Added:
>     llvm/trunk/test/CodeGen/AArch64/tailmerging_in_mbp.ll
> Modified:
>     llvm/trunk/lib/CodeGen/BranchFolding.cpp
>     llvm/trunk/lib/CodeGen/BranchFolding.h
>     llvm/trunk/lib/CodeGen/IfConversion.cpp
>     llvm/trunk/lib/CodeGen/MachineBlockPlacement.cpp
>     llvm/trunk/test/CodeGen/ARM/arm-and-tst-peephole.ll
>
> Modified: llvm/trunk/lib/CodeGen/BranchFolding.cpp
> URL:
> http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/BranchFolding.cpp?rev=272267&r1=272266&r2=272267&view=diff
>
> ==============================================================================
> --- llvm/trunk/lib/CodeGen/BranchFolding.cpp (original)
> +++ llvm/trunk/lib/CodeGen/BranchFolding.cpp Thu Jun  9 10:24:29 2016
> @@ -27,6 +27,7 @@
>  #include "llvm/CodeGen/MachineFunctionPass.h"
>  #include "llvm/CodeGen/MachineJumpTableInfo.h"
>  #include "llvm/CodeGen/MachineMemOperand.h"
> +#include "llvm/CodeGen/MachineLoopInfo.h"
>  #include "llvm/CodeGen/MachineModuleInfo.h"
>  #include "llvm/CodeGen/MachineRegisterInfo.h"
>  #include "llvm/CodeGen/Passes.h"
> @@ -99,8 +100,9 @@ bool BranchFolderPass::runOnMachineFunct
>    // HW that requires structurized CFG.
>    bool EnableTailMerge = !MF.getTarget().requiresStructuredCFG() &&
>                           PassConfig->getEnableTailMerge();
> -  BranchFolder Folder(EnableTailMerge, /*CommonHoist=*/true,
> -                      getAnalysis<MachineBlockFrequencyInfo>(),
> +  BranchFolder::MBFIWrapper MBBFreqInfo(
> +      getAnalysis<MachineBlockFrequencyInfo>());
> +  BranchFolder Folder(EnableTailMerge, /*CommonHoist=*/true, MBBFreqInfo,
>                        getAnalysis<MachineBranchProbabilityInfo>());
>    return Folder.OptimizeFunction(MF, MF.getSubtarget().getInstrInfo(),
>                                   MF.getSubtarget().getRegisterInfo(),
> @@ -108,7 +110,7 @@ bool BranchFolderPass::runOnMachineFunct
>  }
>
>  BranchFolder::BranchFolder(bool defaultEnableTailMerge, bool CommonHoist,
> -                           const MachineBlockFrequencyInfo &FreqInfo,
> +                           MBFIWrapper &FreqInfo,
>                             const MachineBranchProbabilityInfo &ProbInfo)
>      : EnableHoistCommonCode(CommonHoist), MBBFreqInfo(FreqInfo),
>        MBPI(ProbInfo) {
> @@ -136,6 +138,8 @@ void BranchFolder::RemoveDeadBlock(Machi
>    // Remove the block.
>    MF->erase(MBB);
>    FuncletMembership.erase(MBB);
> +  if (MLI)
> +    MLI->removeBlock(MBB);
>  }
>
>  /// OptimizeImpDefsBlock - If a basic block is just a bunch of implicit_def
> @@ -192,18 +196,22 @@ bool BranchFolder::OptimizeImpDefsBlock(
>  }
>
>  /// OptimizeFunction - Perhaps branch folding, tail merging and other
> -/// CFG optimizations on the given function.
> +/// CFG optimizations on the given function.  Block placement changes the layout
> +/// and may create new tail merging opportunities.
>  bool BranchFolder::OptimizeFunction(MachineFunction &MF,
>                                      const TargetInstrInfo *tii,
>                                      const TargetRegisterInfo *tri,
> -                                    MachineModuleInfo *mmi) {
> +                                    MachineModuleInfo *mmi,
> +                                    MachineLoopInfo *mli, bool AfterPlacement) {
>    if (!tii) return false;
>
>    TriedMerging.clear();
>
> +  AfterBlockPlacement = AfterPlacement;
>    TII = tii;
>    TRI = tri;
>    MMI = mmi;
> +  MLI = mli;
>    RS = nullptr;
>
>    // Use a RegScavenger to help update liveness when required.
> @@ -229,7 +237,10 @@ bool BranchFolder::OptimizeFunction(Mach
>    bool MadeChangeThisIteration = true;
>    while (MadeChangeThisIteration) {
>      MadeChangeThisIteration    = TailMergeBlocks(MF);
> -    MadeChangeThisIteration   |= OptimizeBranches(MF);
> +    // No need to clean up if tail merging does not change anything after the
> +    // block placement.
> +    if (!AfterBlockPlacement || MadeChangeThisIteration)
> +      MadeChangeThisIteration |= OptimizeBranches(MF);
>      if (EnableHoistCommonCode)
>        MadeChangeThisIteration |= HoistCommonCode(MF);
>      MadeChange |= MadeChangeThisIteration;
> @@ -446,6 +457,11 @@ MachineBasicBlock *BranchFolder::SplitMB
>    // Splice the code over.
>    NewMBB->splice(NewMBB->end(), &CurMBB, BBI1, CurMBB.end());
>
> +  // NewMBB belongs to the same loop as CurMBB.
> +  if (MLI)
> +    if (MachineLoop *ML = MLI->getLoopFor(&CurMBB))
> +      ML->addBasicBlockToLoop(NewMBB, MLI->getBase());
> +
>    // NewMBB inherits CurMBB's block frequency.
>    MBBFreqInfo.setBlockFreq(NewMBB, MBBFreqInfo.getBlockFreq(&CurMBB));
>
> @@ -540,6 +556,18 @@ void BranchFolder::MBFIWrapper::setBlock
>    MergedBBFreq[MBB] = F;
>  }
>
> +raw_ostream &
> +BranchFolder::MBFIWrapper::printBlockFreq(raw_ostream &OS,
> +                                          const MachineBasicBlock *MBB) const {
> +  return MBFI.printBlockFreq(OS, getBlockFreq(MBB));
> +}
> +
> +raw_ostream &
> +BranchFolder::MBFIWrapper::printBlockFreq(raw_ostream &OS,
> +                                          const BlockFrequency Freq) const {
> +  return MBFI.printBlockFreq(OS, Freq);
> +}
> +
>  /// CountTerminators - Count the number of terminators in the given
>  /// block and set I to the position of the first non-terminator, if there
>  /// is one, or MBB->end() otherwise.
> @@ -921,23 +949,27 @@ bool BranchFolder::TailMergeBlocks(Machi
>    if (!EnableTailMerge) return MadeChange;
>
>    // First find blocks with no successors.
> -  MergePotentials.clear();
> -  for (MachineBasicBlock &MBB : MF) {
> +  // Block placement does not create new tail merging opportunities for these
> +  // blocks.
> +  if (!AfterBlockPlacement) {
> +    MergePotentials.clear();
> +    for (MachineBasicBlock &MBB : MF) {
> +      if (MergePotentials.size() == TailMergeThreshold)
> +        break;
> +      if (!TriedMerging.count(&MBB) && MBB.succ_empty())
> +        MergePotentials.push_back(MergePotentialsElt(HashEndOfMBB(MBB), &MBB));
> +    }
> +
> +    // If this is a large problem, avoid visiting the same basic blocks
> +    // multiple times.
>      if (MergePotentials.size() == TailMergeThreshold)
> -      break;
> -    if (!TriedMerging.count(&MBB) && MBB.succ_empty())
> -      MergePotentials.push_back(MergePotentialsElt(HashEndOfMBB(MBB), &MBB));
> -  }
> +      for (unsigned i = 0, e = MergePotentials.size(); i != e; ++i)
> +        TriedMerging.insert(MergePotentials[i].getBlock());
>
> -  // If this is a large problem, avoid visiting the same basic blocks
> -  // multiple times.
> -  if (MergePotentials.size() == TailMergeThreshold)
> -    for (unsigned i = 0, e = MergePotentials.size(); i != e; ++i)
> -      TriedMerging.insert(MergePotentials[i].getBlock());
> -
> -  // See if we can do any tail merging on those.
> -  if (MergePotentials.size() >= 2)
> -    MadeChange |= TryTailMergeBlocks(nullptr, nullptr);
> +    // See if we can do any tail merging on those.
> +    if (MergePotentials.size() >= 2)
> +      MadeChange |= TryTailMergeBlocks(nullptr, nullptr);
> +  }
>
>    // Look at blocks (IBB) with multiple predecessors (PBB).
>    // We change each predecessor to a canonical form, by
> @@ -984,6 +1016,17 @@ bool BranchFolder::TailMergeBlocks(Machi
>        if (PBB->hasEHPadSuccessor())
>          continue;
>
> +      // Bail out if the loop header (IBB) is not the top of the loop chain
> +      // after the block placement.  Otherwise, the common tail of IBB's
> +      // predecessors may become the loop top if block placement is called again
> +      // and the predecessors may branch to this common tail.
> +      // FIXME: Relaxed this check if the algorithm of finding loop top is
> +      // changed in MBP.
> +      if (AfterBlockPlacement && MLI)
> +        if (MachineLoop *ML = MLI->getLoopFor(IBB))
> +          if (IBB == ML->getHeader() && ML == MLI->getLoopFor(PBB))
> +            continue;
> +
>        MachineBasicBlock *TBB = nullptr, *FBB = nullptr;
>        SmallVector<MachineOperand, 4> Cond;
>        if (!TII->AnalyzeBranch(*PBB, TBB, FBB, Cond, true)) {
>
> Modified: llvm/trunk/lib/CodeGen/BranchFolding.h
> URL:
> http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/BranchFolding.h?rev=272267&r1=272266&r2=272267&view=diff
>
> ==============================================================================
> --- llvm/trunk/lib/CodeGen/BranchFolding.h (original)
> +++ llvm/trunk/lib/CodeGen/BranchFolding.h Thu Jun  9 10:24:29 2016
> @@ -20,20 +20,24 @@ namespace llvm {
>    class MachineBranchProbabilityInfo;
>    class MachineFunction;
>    class MachineModuleInfo;
> +  class MachineLoopInfo;
>    class RegScavenger;
>    class TargetInstrInfo;
>    class TargetRegisterInfo;
>
>    class LLVM_LIBRARY_VISIBILITY BranchFolder {
>    public:
> +    class MBFIWrapper;
> +
>      explicit BranchFolder(bool defaultEnableTailMerge, bool CommonHoist,
> -                          const MachineBlockFrequencyInfo &MBFI,
> +                          MBFIWrapper &MBFI,
>                            const MachineBranchProbabilityInfo &MBPI);
>
> -    bool OptimizeFunction(MachineFunction &MF,
> -                          const TargetInstrInfo *tii,
> -                          const TargetRegisterInfo *tri,
> -                          MachineModuleInfo *mmi);
> +    bool OptimizeFunction(MachineFunction &MF, const TargetInstrInfo *tii,
> +                          const TargetRegisterInfo *tri, MachineModuleInfo *mmi,
> +                          MachineLoopInfo *mli = nullptr,
> +                          bool AfterPlacement = false);
> +
>    private:
>      class MergePotentialsElt {
>        unsigned Hash;
> @@ -91,13 +95,16 @@ namespace llvm {
>      };
>      std::vector<SameTailElt> SameTails;
>
> +    bool AfterBlockPlacement;
>      bool EnableTailMerge;
>      bool EnableHoistCommonCode;
>      const TargetInstrInfo *TII;
>      const TargetRegisterInfo *TRI;
>      MachineModuleInfo *MMI;
> +    MachineLoopInfo *MLI;
>      RegScavenger *RS;
>
> +  public:
> +    /// \brief This class keeps track of branch frequencies of newly created
>      /// blocks and tail-merged blocks.
>      class MBFIWrapper {
> @@ -105,13 +112,18 @@ namespace llvm {
>        MBFIWrapper(const MachineBlockFrequencyInfo &I) : MBFI(I) {}
>        BlockFrequency getBlockFreq(const MachineBasicBlock *MBB) const;
>        void setBlockFreq(const MachineBasicBlock *MBB, BlockFrequency F);
> +      raw_ostream &printBlockFreq(raw_ostream &OS,
> +                                  const MachineBasicBlock *MBB) const;
> +      raw_ostream &printBlockFreq(raw_ostream &OS,
> +                                  const BlockFrequency Freq) const;
>
>      private:
>        const MachineBlockFrequencyInfo &MBFI;
>        DenseMap<const MachineBasicBlock *, BlockFrequency> MergedBBFreq;
>      };
>
> -    MBFIWrapper MBBFreqInfo;
> +  private:
> +    MBFIWrapper &MBBFreqInfo;
>      const MachineBranchProbabilityInfo &MBPI;
>
>      bool TailMergeBlocks(MachineFunction &MF);
>
> Modified: llvm/trunk/lib/CodeGen/IfConversion.cpp
> URL:
> http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/IfConversion.cpp?rev=272267&r1=272266&r2=272267&view=diff
>
> ==============================================================================
> --- llvm/trunk/lib/CodeGen/IfConversion.cpp (original)
> +++ llvm/trunk/lib/CodeGen/IfConversion.cpp Thu Jun  9 10:24:29 2016
> @@ -163,7 +163,6 @@ namespace {
>      const TargetLoweringBase *TLI;
>      const TargetInstrInfo *TII;
>      const TargetRegisterInfo *TRI;
> -    const MachineBlockFrequencyInfo *MBFI;
>      const MachineBranchProbabilityInfo *MBPI;
>      MachineRegisterInfo *MRI;
>
> @@ -291,7 +290,7 @@ bool IfConverter::runOnMachineFunction(M
>    TLI = ST.getTargetLowering();
>    TII = ST.getInstrInfo();
>    TRI = ST.getRegisterInfo();
> -  MBFI = &getAnalysis<MachineBlockFrequencyInfo>();
> +  BranchFolder::MBFIWrapper MBFI(getAnalysis<MachineBlockFrequencyInfo>());
>    MBPI = &getAnalysis<MachineBranchProbabilityInfo>();
>    MRI = &MF.getRegInfo();
>    SchedModel.init(ST.getSchedModel(), &ST, TII);
> @@ -303,7 +302,7 @@ bool IfConverter::runOnMachineFunction(M
>    bool BFChange = false;
>    if (!PreRegAlloc) {
>      // Tail merge tend to expose more if-conversion opportunities.
> -    BranchFolder BF(true, false, *MBFI, *MBPI);
> +    BranchFolder BF(true, false, MBFI, *MBPI);
>      BFChange = BF.OptimizeFunction(MF, TII, ST.getRegisterInfo(),
>                                       getAnalysisIfAvailable<MachineModuleInfo>());
>    }
> @@ -427,7 +426,7 @@ bool IfConverter::runOnMachineFunction(M
>    BBAnalysis.clear();
>
>    if (MadeChange && IfCvtBranchFold) {
> -    BranchFolder BF(false, false, *MBFI, *MBPI);
> +    BranchFolder BF(false, false, MBFI, *MBPI);
>      BF.OptimizeFunction(MF, TII, MF.getSubtarget().getRegisterInfo(),
>                          getAnalysisIfAvailable<MachineModuleInfo>());
>    }
>
> Modified: llvm/trunk/lib/CodeGen/MachineBlockPlacement.cpp
> URL:
> http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/MachineBlockPlacement.cpp?rev=272267&r1=272266&r2=272267&view=diff
>
> ==============================================================================
> --- llvm/trunk/lib/CodeGen/MachineBlockPlacement.cpp (original)
> +++ llvm/trunk/lib/CodeGen/MachineBlockPlacement.cpp Thu Jun  9 10:24:29 2016
> @@ -26,6 +26,8 @@
>
>  //===----------------------------------------------------------------------===//
>
>  #include "llvm/CodeGen/Passes.h"
> +#include "llvm/CodeGen/TargetPassConfig.h"
> +#include "BranchFolding.h"
>  #include "llvm/ADT/DenseMap.h"
>  #include "llvm/ADT/SmallPtrSet.h"
>  #include "llvm/ADT/SmallVector.h"
> @@ -116,6 +118,12 @@ static cl::opt<unsigned> JumpInstCost("j
>                                        cl::desc("Cost of jump instructions."),
>                                        cl::init(1), cl::Hidden);
>
> +static cl::opt<bool>
> +BranchFoldPlacement("branch-fold-placement",
> +              cl::desc("Perform branch folding during placement. "
> +                       "Reduces code size."),
> +              cl::init(true), cl::Hidden);
> +
>  extern cl::opt<unsigned> StaticLikelyProb;
>
>  namespace {
> @@ -232,10 +240,10 @@ class MachineBlockPlacement : public Mac
>    const MachineBranchProbabilityInfo *MBPI;
>
>    /// \brief A handle to the function-wide block frequency pass.
> -  const MachineBlockFrequencyInfo *MBFI;
> +  std::unique_ptr<BranchFolder::MBFIWrapper> MBFI;
>
>    /// \brief A handle to the loop info.
> -  const MachineLoopInfo *MLI;
> +  MachineLoopInfo *MLI;
>
>    /// \brief A handle to the target's instruction info.
>    const TargetInstrInfo *TII;
> @@ -323,6 +331,7 @@ public:
>      AU.addRequired<MachineBlockFrequencyInfo>();
>      AU.addRequired<MachineDominatorTree>();
>      AU.addRequired<MachineLoopInfo>();
> +    AU.addRequired<TargetPassConfig>();
>      MachineFunctionPass::getAnalysisUsage(AU);
>    }
>  };
> @@ -1469,7 +1478,8 @@ bool MachineBlockPlacement::runOnMachine
>      return false;
>
>    MBPI = &getAnalysis<MachineBranchProbabilityInfo>();
> -  MBFI = &getAnalysis<MachineBlockFrequencyInfo>();
> +  MBFI = llvm::make_unique<BranchFolder::MBFIWrapper>(
> +      getAnalysis<MachineBlockFrequencyInfo>());
>    MLI = &getAnalysis<MachineLoopInfo>();
>    TII = F.getSubtarget().getInstrInfo();
>    TLI = F.getSubtarget().getTargetLowering();
> @@ -1477,6 +1487,29 @@ bool MachineBlockPlacement::runOnMachine
>    assert(BlockToChain.empty());
>
>    buildCFGChains(F);
> +
> +  // Changing the layout can create new tail merging opportunities.
> +  TargetPassConfig *PassConfig = &getAnalysis<TargetPassConfig>();
> +  // TailMerge can create jump into if branches that make CFG irreducible for
> +  // HW that requires structurized CFG.
> +  bool EnableTailMerge = !F.getTarget().requiresStructuredCFG() &&
> +                         PassConfig->getEnableTailMerge() &&
> +                         BranchFoldPlacement;
> +  // No tail merging opportunities if the block number is less than four.
> +  if (F.size() > 3 && EnableTailMerge) {
> +    BranchFolder BF(/*EnableTailMerge=*/true, /*CommonHoist=*/false, *MBFI,
> +                    *MBPI);
> +
> +    if (BF.OptimizeFunction(F, TII, F.getSubtarget().getRegisterInfo(),
> +                            getAnalysisIfAvailable<MachineModuleInfo>(), MLI,
> +                            /*AfterBlockPlacement=*/true)) {
> +      // Redo the layout if tail merging creates/removes/moves blocks.
> +      BlockToChain.clear();
> +      ChainAllocator.DestroyAll();
> +      buildCFGChains(F);
> +    }
> +  }
> +
>    optimizeBranches(F);
>    alignBlocks(F);
>
>
> Added: llvm/trunk/test/CodeGen/AArch64/tailmerging_in_mbp.ll
> URL:
> http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/tailmerging_in_mbp.ll?rev=272267&view=auto
>
> ==============================================================================
> --- llvm/trunk/test/CodeGen/AArch64/tailmerging_in_mbp.ll (added)
> +++ llvm/trunk/test/CodeGen/AArch64/tailmerging_in_mbp.ll Thu Jun  9 10:24:29 2016
> @@ -0,0 +1,63 @@
> +; RUN: llc <%s -march=aarch64 | FileCheck %s
> +
> +; CHECK-LABEL: test:
> +; CHECK:       LBB0_7:
> +; CHECK:         b.hi
> +; CHECK-NEXT:    b
> +; CHECK-NEXT:  LBB0_8:
> +; CHECK-NEXT:    mov    x8, x9
> +; CHECK-NEXT:  LBB0_9:
> +define i64 @test(i64 %n, i64* %a, i64* %b, i64* %c, i64* %d, i64* %e, i64* %f) {
> +entry:
> +  %cmp28 = icmp sgt i64 %n, 1
> +  br i1 %cmp28, label %for.body, label %for.end
> +
> +for.body:                                         ; preds = %for.body.lr.ph, %if.end
> +  %j = phi i64 [ %n, %entry ], [ %div, %if.end ]
> +  %div = lshr i64 %j, 1
> +  %a.arrayidx = getelementptr inbounds i64, i64* %a, i64 %div
> +  %a.j = load i64, i64* %a.arrayidx
> +  %b.arrayidx = getelementptr inbounds i64, i64* %b, i64 %div
> +  %b.j = load i64, i64* %b.arrayidx
> +  %cmp.i = icmp slt i64 %a.j, %b.j
> +  br i1 %cmp.i, label %for.end.loopexit, label %cond.false.i
> +
> +cond.false.i:                                     ; preds = %for.body
> +  %cmp4.i = icmp sgt i64 %a.j, %b.j
> +  br i1 %cmp4.i, label %if.end, label %cond.false6.i
> +
> +cond.false6.i:                                    ; preds = %cond.false.i
> +  %c.arrayidx = getelementptr inbounds i64, i64* %c, i64 %div
> +  %c.j = load i64, i64* %c.arrayidx
> +  %d.arrayidx = getelementptr inbounds i64, i64* %d, i64 %div
> +  %d.j = load i64, i64* %d.arrayidx
> +  %cmp9.i = icmp slt i64 %c.j, %d.j
> +  br i1 %cmp9.i, label %for.end.loopexit, label %cond.false11.i
> +
> +cond.false11.i:                                   ; preds = %cond.false6.i
> +  %cmp14.i = icmp sgt i64 %c.j, %d.j
> +  br i1 %cmp14.i, label %if.end, label %cond.false12.i
> +
> +cond.false12.i:                           ; preds = %cond.false11.i
> +  %e.arrayidx = getelementptr inbounds i64, i64* %e, i64 %div
> +  %e.j = load i64, i64* %e.arrayidx
> +  %f.arrayidx = getelementptr inbounds i64, i64* %f, i64 %div
> +  %f.j = load i64, i64* %f.arrayidx
> +  %cmp19.i = icmp sgt i64 %e.j, %f.j
> +  br i1 %cmp19.i, label %if.end, label %for.end.loopexit
> +
> +if.end:                                           ; preds = %cond.false12.i, %cond.false11.i, %cond.false.i
> +  %cmp = icmp ugt i64 %j, 3
> +  br i1 %cmp, label %for.body, label %for.end.loopexit
> +
> +for.end.loopexit:                                 ; preds = %cond.false12.i, %cond.false6.i, %for.body, %if.end
> +  %j.0.lcssa.ph = phi i64 [ %j, %cond.false12.i ], [ %j, %cond.false6.i ], [ %j, %for.body ], [ %div, %if.end ]
> +  br label %for.end
> +
> +for.end:                                          ; preds = %for.end.loopexit, %entry
> +  %j.0.lcssa = phi i64 [ %n, %entry ], [ %j.0.lcssa.ph, %for.end.loopexit ]
> +  %j.2 = add i64 %j.0.lcssa, %n
> +  %j.3 = mul i64 %j.2, %n
> +  %j.4 = add i64 %j.3, 10
> +  ret i64 %j.4
> +}
>
> Modified: llvm/trunk/test/CodeGen/ARM/arm-and-tst-peephole.ll
> URL:
> http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/ARM/arm-and-tst-peephole.ll?rev=272267&r1=272266&r2=272267&view=diff
>
> ==============================================================================
> --- llvm/trunk/test/CodeGen/ARM/arm-and-tst-peephole.ll (original)
> +++ llvm/trunk/test/CodeGen/ARM/arm-and-tst-peephole.ll Thu Jun  9 10:24:29 2016
> @@ -49,7 +49,7 @@ tailrecurse.switch:
>  ; V8-NEXT: beq
>  ; V8-NEXT: %tailrecurse.switch
>  ; V8: cmp
> -; V8-NEXT: bne
> +; V8-NEXT: beq
>  ; V8-NEXT: b
>  ; The trailing space in the last line checks that the branch is unconditional
>    switch i32 %and, label %sw.epilog [
>
>