[llvm] r178193 - This patch is a follow-up to r178171, which uses the register

Gurd, Preston preston.gurd at intel.com
Thu Mar 28 09:20:09 PDT 2013


Thanks, Nadav. We will try to do what you suggest.

Preston

From: Nadav Rotem [mailto:nrotem at apple.com]
Sent: Thursday, March 28, 2013 11:54 AM
To: Gurd, Preston
Cc: llvm-commits at cs.uiuc.edu
Subject: Re: [llvm] r178193 - This patch is a follow-up to r178171, which uses the register

Hi Preston,

I appreciate your efforts to reduce the test cases, but they are still large. There are a few things you can do to make them smaller:

1. Get rid of the complex types. A large part of each test case is type definitions, and most of that type information is unused. Replacing some of the struct members with primitive types would let you delete many of the type definitions.
2. Generate a call to an external function with lots of arguments; this ensures that all of the registers used to pass arguments are spilled.
3. If that does not work, create a small loop that uses externally declared variables; this keeps those variables live across the loop, so your pointer will be spilled.
4. Finally, you can insert assert(0) in the code that spills registers and use bugpoint to reduce the test case for you. This produces the smallest possible code that spills.

As you can see, there are many ways to reduce the test case. The big test cases are fragile and not very helpful: the next time we want to make major changes to the register allocator, we will have to spend our time fixing test cases that rely on specific heuristics of the current allocator. It should not be too hard to come up with a synthetic test case that spills the pointer. Please reduce these test cases.
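To illustrate the second suggestion above, a minimal spill-forcing test might look roughly like this (a sketch, not part of the commit; the function names and argument counts are invented). A call that consumes all argument registers forces the function pointer out of a register, so the later indirect call must reload it:

```llvm
; Hypothetical reduced test: @use_all_regs clobbers every register that
; could hold %fp, so %fp is spilled across the call and must be reloaded
; before the indirect call.
declare void @use_all_regs(i32, i32, i32, i32, i32, i32)

define i32 @test(i32 ()* %fp) nounwind {
entry:
  call void @use_all_regs(i32 1, i32 2, i32 3, i32 4, i32 5, i32 6)
  %r = call i32 %fp()   ; reload of %fp from its spill slot happens here
  ret i32 %r
}
```

On Atom the reload should then appear as a separate mov feeding a register-indirect call, rather than being folded into the call as a memory operand.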

Thanks,
Nadav



On Mar 28, 2013, at 7:21 AM, "Gurd, Preston" <preston.gurd at intel.com> wrote:


Hello Nadav,

I agree that they are excessively large, but I submit that it can't be helped. We did our best to minimize the code size, but the tests had to be complicated enough to force the spill and reload of an indirect call address.

Preston

From: Nadav Rotem [mailto:nrotem at apple.com]
Sent: Wednesday, March 27, 2013 8:08 PM
To: Gurd, Preston; llvm-commits at cs.uiuc.edu
Subject: Re: [llvm] r178193 - This patch is a follow-up to r178171, which uses the register

Hi Preston,

The test cases are really big. Is there a way to make them smaller while still testing the reload optimization?

Thanks,
Nadav


On 03/27/13, Preston Gurd <preston.gurd at intel.com> wrote:
Author: pgurd
Date: Wed Mar 27 18:16:18 2013
New Revision: 178193

URL: http://llvm.org/viewvc/llvm-project?rev=178193&view=rev
Log:
This patch is a follow-up to r178171, which uses the register
form of call in preference to memory indirect on Atom.

In this case, the patch applies the optimization to the code for reloading
spilled registers.

The patch also includes changes to sibcall.ll and movgs.ll, which were
failing on the Atom buildbot after the first patch was applied.

Patch by Sriram Murali.


Added:
    llvm/trunk/test/CodeGen/X86/atom-call-reg-indirect-foldedreload32.ll
    llvm/trunk/test/CodeGen/X86/atom-call-reg-indirect-foldedreload64.ll
Modified:
    llvm/trunk/lib/Target/X86/X86InstrInfo.cpp
    llvm/trunk/test/CodeGen/X86/movgs.ll
    llvm/trunk/test/CodeGen/X86/sibcall.ll

Modified: llvm/trunk/lib/Target/X86/X86InstrInfo.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86InstrInfo.cpp?rev=178193&r1=178192&r2=178193&view=diff
==============================================================================
--- llvm/trunk/lib/Target/X86/X86InstrInfo.cpp (original)
+++ llvm/trunk/lib/Target/X86/X86InstrInfo.cpp Wed Mar 27 18:16:18 2013
@@ -3655,7 +3655,16 @@ X86InstrInfo::foldMemoryOperandImpl(Mach
                                     const SmallVectorImpl<MachineOperand> &MOs,
                                     unsigned Size, unsigned Align) const {
   const DenseMap<unsigned, std::pair<unsigned,unsigned> > *OpcodeTablePtr = 0;
+  bool isCallRegIndirect = TM.getSubtarget<X86Subtarget>().callRegIndirect();
   bool isTwoAddrFold = false;
+
+  // Atom favors register form of call. So, we do not fold loads into calls
+  // when X86Subtarget is Atom.
+  if (isCallRegIndirect &&
+    (MI->getOpcode() == X86::CALL32r || MI->getOpcode() == X86::CALL64r)) {
+    return NULL;
+  }
+
   unsigned NumOps = MI->getDesc().getNumOperands();
   bool isTwoAddr = NumOps > 1 &&
     MI->getDesc().getOperandConstraint(1, MCOI::TIED_TO) != -1;

Added: llvm/trunk/test/CodeGen/X86/atom-call-reg-indirect-foldedreload32.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/atom-call-reg-indirect-foldedreload32.ll?rev=178193&view=auto
==============================================================================
--- llvm/trunk/test/CodeGen/X86/atom-call-reg-indirect-foldedreload32.ll (added)
+++ llvm/trunk/test/CodeGen/X86/atom-call-reg-indirect-foldedreload32.ll Wed Mar 27 18:16:18 2013
@@ -0,0 +1,204 @@
+; RUN: llc < %s -mtriple=i386-linux-gnu -mcpu=atom 2>&1 | \
+; RUN:     grep "calll" | not grep "("
+; RUN: llc < %s -mtriple=i386-linux-gnu -mcpu=core2 2>&1 | \
+; RUN:     grep "calll" | grep "4-byte Folded Reload"
+
+%struct.targettype = type {i32}
+%struct.op_ptr1 = type opaque
+%struct.op_ptr2 = type opaque
+%union.anon = type { [8 x i32], [48 x i8] }
+%struct.const1 = type { [64 x i16], i8 }
+%struct.const2 = type { [17 x i8], [256 x i8], i8 }
+
+%struct.ref1 = type { void (%struct.ref2*)*, i32 (%struct.ref2*)*, void (%struct.ref2*)*, i32 (%struct.ref2*, i8***)*, %struct.op_ptr2** }
+%struct.ref2 = type { %struct.localref13*, %struct.localref15*, %struct.localref12*, i8*, i8, i32, %struct.localref11*, i32, i32, i32, i32, i32, i32, i32, double, i8, i8, i32, i8, i8, i8, i32, i8, i32, i8, i8, i8, i32, i32, i32, i32, i32, i32, i8**, i32, i32, i32, i32, i32, [64 x i32]*, [4 x %struct.const1*], [4 x %struct.const2*], [4 x %struct.const2*], i32, %struct.ref3*, i8, i8, [16 x i8], [16 x i8], [16 x i8], i32, i8, i8, i8, i8, i16, i16, i8, i8, i8, %struct.localref10*, i32, i32, i32, i32, i8*, i32, [4 x %struct.ref3*], i32, i32, i32, [10 x i32], i32, i32, i32, i32, i32, %struct.localref8*, %struct.localref9*, %struct.ref1*, %struct.localref7*, %struct.localref6*, %struct.localref5*, %struct.localref1*, %struct.ref4*, %struct.localref2*, %struct.localref3*, %struct.localref4* }
+%struct.ref3 = type { i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i8, i32, i32, i32, i32, i32, i32, %struct.const1*, i8* }
+%struct.ref4 = type { void (%struct.ref2*)*, [5 x void (%struct.ref2*, %struct.ref3*, i16*, i8**, i32)*] }
+
+%struct.localref1 = type { void (%struct.ref2*)*, i8 (%struct.ref2*, [64 x i16]**)*, i8 }
+%struct.localref2 = type { void (%struct.ref2*)*, void (%struct.ref2*, i8***, i32*, i32, i8**, i32*, i32)*, i8 }
+%struct.localref3 = type { void (%struct.ref2*)*, void (%struct.ref2*, i8***, i32, i8**, i32)* }
+%struct.localref4 = type { {}*, void (%struct.ref2*, i8**, i8**, i32)*, void (%struct.ref2*)*, void (%struct.ref2*)* }
+%struct.localref5 = type { void (%struct.ref2*)*, i32 (%struct.ref2*)*, i8 (%struct.ref2*)*, i8, i8, i32, i32 }
+%struct.localref6 = type { i32 (%struct.ref2*)*, void (%struct.ref2*)*, void (%struct.ref2*)*, void (%struct.ref2*)*, i8, i8 }
+%struct.localref7 = type { void (%struct.ref2*, i32)*, void (%struct.ref2*, i8***, i32*, i32, i8**, i32*, i32)* }
+%struct.localref8 = type { void (%struct.ref2*)*, void (%struct.ref2*)*, i8 }
+%struct.localref9 = type { void (%struct.ref2*, i32)*, void (%struct.ref2*, i8**, i32*, i32)* }
+%struct.localref10 = type { %struct.localref10*, i8, i32, i32, i8* }
+%struct.localref11 = type { i8*, %struct.targettype, void (%struct.ref2*)*, i8 (%struct.ref2*)*, void (%struct.ref2*, %struct.targettype)*, i8 (%struct.ref2*, i32)*, void (%struct.ref2*)* }
+%struct.localref12 = type { {}*, %struct.targettype, %struct.targettype, i32, i32 }
+%struct.localref13 = type { void (%struct.localref14*)*, void (%struct.localref14*, i32)*, void (%struct.localref14*)*, void (%struct.localref14*, i8*)*, void (%struct.localref14*)*, i32, %union.anon, i32, %struct.targettype, i8**, i32, i8**, i32, i32 }
+%struct.localref14 = type { %struct.localref13*, %struct.localref15*, %struct.localref12*, i8*, i8, i32 }
+%struct.localref15 = type { i8* (%struct.localref14*, i32, %struct.targettype)*, i8* (%struct.localref14*, i32, %struct.targettype)*, i8** (%struct.localref14*, i32, i32, i32)*, [64 x i16]** (%struct.localref14*, i32, i32, i32)*, %struct.op_ptr1* (%struct.localref14*, i32, i8, i32, i32, i32)*, %struct.op_ptr2* (%struct.localref14*, i32, i8, i32, i32, i32)*, {}*, i8** (%struct.localref14*, %struct.op_ptr1*, i32, i32, i8)*, [64 x i16]** (%struct.localref14*, %struct.op_ptr2*, i32, i32, i8)*, void (%struct.localref14*, i32)*, {}*, %struct.targettype, %struct.targettype}
+
+define internal i32 @foldedreload(%struct.ref2* %cinfo, i8*** nocapture %output1) {
+  %1 = getelementptr inbounds %struct.ref2* %cinfo, i32 0, i32 79
+  %2 = load %struct.ref1** %1, align 4
+  %3 = getelementptr inbounds %struct.ref2* %cinfo, i32 0, i32 68
+  %4 = load i32* %3, align 4
+  %5 = add i32 %4, -1
+  %6 = getelementptr inbounds %struct.ref2* %cinfo, i32 0, i32 64
+  %7 = load i32* %6, align 4
+  %8 = add i32 %7, -1
+  %9 = getelementptr inbounds %struct.ref1* %2, i32 1, i32 1
+  %10 = bitcast i32 (%struct.ref2*)** %9 to i32*
+  %11 = load i32* %10, align 4
+  %12 = getelementptr inbounds %struct.ref1* %2, i32 1, i32 2
+  %13 = bitcast void (%struct.ref2*)** %12 to i32*
+  %14 = load i32* %13, align 4
+  %15 = icmp slt i32 %11, %14
+  br i1 %15, label %.lr.ph18, label %._crit_edge19
+
+.lr.ph18:
+  %16 = getelementptr inbounds %struct.ref1* %2, i32 1
+  %17 = bitcast %struct.ref1* %16 to i32*
+  %18 = getelementptr inbounds %struct.ref1* %16, i32 0, i32 0
+  %19 = getelementptr inbounds %struct.ref2* %cinfo, i32 0, i32 66
+  %20 = getelementptr inbounds %struct.ref2* %cinfo, i32 0, i32 84
+  %21 = getelementptr inbounds %struct.ref2* %cinfo, i32 0, i32 36
+  %22 = getelementptr inbounds %struct.ref1* %2, i32 1, i32 3
+  %23 = bitcast i32 (%struct.ref2*, i8***)** %22 to [10 x [64 x i16]*]*
+  %.pre = load i32* %17, align 4
+  br label %24
+
+; <label>:24
+  %25 = phi i32 [ %14, %.lr.ph18 ], [ %89, %88 ]
+  %26 = phi i32 [ %.pre, %.lr.ph18 ], [ 0, %88 ]
+  %var1.015 = phi i32 [ %11, %.lr.ph18 ], [ %90, %88 ]
+  %27 = icmp ugt i32 %26, %5
+  br i1 %27, label %88, label %.preheader7.lr.ph
+
+.preheader7.lr.ph:
+  %.pre24 = load i32* %19, align 4
+  br label %.preheader7
+
+.preheader7:
+  %28 = phi i32 [ %.pre24, %.preheader7.lr.ph ], [ %85, %._crit_edge11 ]
+  %var2.012 = phi i32 [ %26, %.preheader7.lr.ph ], [ %86, %._crit_edge11 ]
+  %29 = icmp sgt i32 %28, 0
+  br i1 %29, label %.lr.ph10, label %._crit_edge11
+
+.lr.ph10:
+  %30 = phi i32 [ %28, %.preheader7 ], [ %82, %81 ]
+  %var4.09 = phi i32 [ 0, %.preheader7 ], [ %var4.1.lcssa, %81 ]
+  %ci.08 = phi i32 [ 0, %.preheader7 ], [ %83, %81 ]
+  %31 = getelementptr inbounds %struct.ref2* %cinfo, i32 0, i32 67, i32 %ci.08
+  %32 = load %struct.ref3** %31, align 4
+  %33 = getelementptr inbounds %struct.ref3* %32, i32 0, i32 1
+  %34 = load i32* %33, align 4
+  %35 = load %struct.ref4** %20, align 4
+  %36 = getelementptr inbounds %struct.ref4* %35, i32 0, i32 1, i32 %34
+  %37 = load void (%struct.ref2*, %struct.ref3*, i16*, i8**, i32)** %36, align 4
+  %38 = getelementptr inbounds %struct.ref3* %32, i32 0, i32 17
+  %39 = load i32* %38, align 4
+  %40 = getelementptr inbounds %struct.ref3* %32, i32 0, i32 9
+  %41 = load i32* %40, align 4
+  %42 = getelementptr inbounds %struct.ref3* %32, i32 0, i32 16
+  %43 = load i32* %42, align 4
+  %44 = mul i32 %43, %var2.012
+  %45 = getelementptr inbounds %struct.ref3* %32, i32 0, i32 14
+  %46 = load i32* %45, align 4
+  %47 = icmp sgt i32 %46, 0
+  br i1 %47, label %.lr.ph6, label %81
+
+.lr.ph6:
+  %48 = getelementptr inbounds i8*** %output1, i32 %34
+  %49 = mul nsw i32 %41, %var1.015
+  %50 = load i8*** %48, align 4
+  %51 = getelementptr inbounds i8** %50, i32 %49
+  %52 = getelementptr inbounds %struct.ref3* %32, i32 0, i32 13
+  %53 = getelementptr inbounds %struct.ref3* %32, i32 0, i32 18
+  %54 = icmp sgt i32 %39, 0
+  br i1 %54, label %.lr.ph6.split.us, label %.lr.ph6..lr.ph6.split_crit_edge
+
+.lr.ph6..lr.ph6.split_crit_edge:
+  br label %._crit_edge26
+
+.lr.ph6.split.us:
+  %55 = phi i32 [ %63, %._crit_edge28 ], [ %46, %.lr.ph6 ]
+  %56 = phi i32 [ %64, %._crit_edge28 ], [ %41, %.lr.ph6 ]
+  %var4.15.us = phi i32 [ %66, %._crit_edge28 ], [ %var4.09, %.lr.ph6 ]
+  %output2.04.us = phi i8** [ %69, %._crit_edge28 ], [ %51, %.lr.ph6 ]
+  %var5.03.us = phi i32 [ %67, %._crit_edge28 ], [ 0, %.lr.ph6 ]
+  %57 = load i32* %21, align 4
+  %58 = icmp ult i32 %57, %8
+  br i1 %58, label %.lr.ph.us, label %59
+
+; <label>:59
+  %60 = add nsw i32 %var5.03.us, %var1.015
+  %61 = load i32* %53, align 4
+  %62 = icmp slt i32 %60, %61
+  br i1 %62, label %.lr.ph.us, label %._crit_edge27
+
+._crit_edge27:
+  %63 = phi i32 [ %.pre23.pre, %.loopexit.us ], [ %55, %59 ]
+  %64 = phi i32 [ %74, %.loopexit.us ], [ %56, %59 ]
+  %65 = load i32* %52, align 4
+  %66 = add nsw i32 %65, %var4.15.us
+  %67 = add nsw i32 %var5.03.us, 1
+  %68 = icmp slt i32 %67, %63
+  br i1 %68, label %._crit_edge28, label %._crit_edge
+
+._crit_edge28:
+  %69 = getelementptr inbounds i8** %output2.04.us, i32 %64
+  br label %.lr.ph6.split.us
+
+.lr.ph.us:
+  %var3.02.us = phi i32 [ %75, %.lr.ph.us ], [ %44, %.lr.ph6.split.us ], [ %44, %59 ]
+  %xindex.01.us = phi i32 [ %76, %.lr.ph.us ], [ 0, %.lr.ph6.split.us ], [ 0, %59 ]
+  %70 = add nsw i32 %xindex.01.us, %var4.15.us
+  %71 = getelementptr inbounds [10 x [64 x i16]*]* %23, i32 0, i32 %70
+  %72 = load [64 x i16]** %71, align 4
+  %73 = getelementptr inbounds [64 x i16]* %72, i32 0, i32 0
+  tail call void %37(%struct.ref2* %cinfo, %struct.ref3* %32, i16* %73, i8** %output2.04.us, i32 %var3.02.us) nounwind
+  %74 = load i32* %40, align 4
+  %75 = add i32 %74, %var3.02.us
+  %76 = add nsw i32 %xindex.01.us, 1
+  %exitcond = icmp eq i32 %76, %39
+  br i1 %exitcond, label %.loopexit.us, label %.lr.ph.us
+
+.loopexit.us:
+  %.pre23.pre = load i32* %45, align 4
+  br label %._crit_edge27
+
+._crit_edge26:
+  %var4.15 = phi i32 [ %var4.09, %.lr.ph6..lr.ph6.split_crit_edge ], [ %78, %._crit_edge26 ]
+  %var5.03 = phi i32 [ 0, %.lr.ph6..lr.ph6.split_crit_edge ], [ %79, %._crit_edge26 ]
+  %77 = load i32* %52, align 4
+  %78 = add nsw i32 %77, %var4.15
+  %79 = add nsw i32 %var5.03, 1
+  %80 = icmp slt i32 %79, %46
+  br i1 %80, label %._crit_edge26, label %._crit_edge
+
+._crit_edge:
+  %split = phi i32 [ %66, %._crit_edge27 ], [ %78, %._crit_edge26 ]
+  %.pre25 = load i32* %19, align 4
+  br label %81
+
+; <label>:81
+  %82 = phi i32 [ %.pre25, %._crit_edge ], [ %30, %.lr.ph10 ]
+  %var4.1.lcssa = phi i32 [ %split, %._crit_edge ], [ %var4.09, %.lr.ph10 ]
+  %83 = add nsw i32 %ci.08, 1
+  %84 = icmp slt i32 %83, %82
+  br i1 %84, label %.lr.ph10, label %._crit_edge11
+
+._crit_edge11:
+  %85 = phi i32 [ %28, %.preheader7 ], [ %82, %81 ]
+  %86 = add i32 %var2.012, 1
+  %87 = icmp ugt i32 %86, %5
+  br i1 %87, label %._crit_edge14, label %.preheader7
+
+._crit_edge14:
+  %.pre21 = load i32* %13, align 4
+  br label %88
+
+; <label>:88
+  %89 = phi i32 [ %.pre21, %._crit_edge14 ], [ %25, %24 ]
+  store void (%struct.ref2*)* null, void (%struct.ref2*)** %18, align 4
+  %90 = add nsw i32 %var1.015, 1
+  %91 = icmp slt i32 %90, %89
+  br i1 %91, label %24, label %._crit_edge19
+
+._crit_edge19:
+  ret i32 3
+}

Added: llvm/trunk/test/CodeGen/X86/atom-call-reg-indirect-foldedreload64.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/atom-call-reg-indirect-foldedreload64.ll?rev=178193&view=auto
==============================================================================
--- llvm/trunk/test/CodeGen/X86/atom-call-reg-indirect-foldedreload64.ll (added)
+++ llvm/trunk/test/CodeGen/X86/atom-call-reg-indirect-foldedreload64.ll Wed Mar 27 18:16:18 2013
@@ -0,0 +1,213 @@
+; RUN: llc < %s -mtriple=x86_64-linux-gnu -mcpu=atom 2>&1 | \
+; RUN:     grep "callq" | not grep "("
+; RUN: llc < %s -mtriple=x86_64-linux-gnu -mcpu=core2 2>&1 | \
+; RUN:     grep "callq" | grep "8-byte Folded Reload"
+
+%struct.targettype = type {i64}
+%struct.op_ptr1 = type opaque
+%struct.op_ptr2 = type opaque
+%union.anon = type { [8 x i32], [48 x i8] }
+%struct.const1 = type { [64 x i16], i8 }
+%struct.const2 = type { [17 x i8], [256 x i8], i8 }
+%struct.coef1 = type { %struct.ref1, i32, i32, i32, [10 x [64 x i16]*] }
+
+%struct.ref1 = type { void (%struct.ref2*)*, i32 (%struct.ref2*)*, void (%struct.ref2*)*, i32 (%struct.ref2*, i8***)*, %struct.op_ptr2** }
+%struct.ref2 = type { %struct.localref13*, %struct.localref15*, %struct.localref12*, i8*, i8, i32, %struct.localref11*, i32, i32, i32, i32, i32, i32, i32, double, i8, i8, i32, i8, i8, i8, i32, i8, i32, i8, i8, i8, i32, i32, i32, i32, i32, i32, i8**, i32, i32, i32, i32, i32, [64 x i32]*, [4 x %struct.const1*], [4 x %struct.const2*], [4 x %struct.const2*], i32, %struct.ref3*, i8, i8, [16 x i8], [16 x i8], [16 x i8], i32, i8, i8, i8, i8, i16, i16, i8, i8, i8, %struct.localref10*, i32, i32, i32, i32, i8*, i32, [4 x %struct.ref3*], i32, i32, i32, [10 x i32], i32, i32, i32, i32, i32, %struct.localref8*, %struct.localref9*, %struct.ref1*, %struct.localref7*, %struct.localref6*, %struct.localref5*, %struct.localref1*, %struct.ref4*, %struct.localref2*, %struct.localref3*, %struct.localref4* }
+%struct.ref3 = type { i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i8, i32, i32, i32, i32, i32, i32, %struct.const1*, i8* }
+%struct.ref4 = type { void (%struct.ref2*)*, [5 x void (%struct.ref2*, %struct.ref3*, i16*, i8**, i32)*] }
+
+%struct.localref1 = type { void (%struct.ref2*)*, i8 (%struct.ref2*, [64 x i16]**)*, i8 }
+%struct.localref2 = type { void (%struct.ref2*)*, void (%struct.ref2*, i8***, i32*, i32, i8**, i32*, i32)*, i8 }
+%struct.localref3 = type { void (%struct.ref2*)*, void (%struct.ref2*, i8***, i32, i8**, i32)* }
+%struct.localref4 = type { {}*, void (%struct.ref2*, i8**, i8**, i32)*, void (%struct.ref2*)*, void (%struct.ref2*)* }
+%struct.localref5 = type { void (%struct.ref2*)*, i32 (%struct.ref2*)*, i8 (%struct.ref2*)*, i8, i8, i32, i32 }
+%struct.localref6 = type { i32 (%struct.ref2*)*, void (%struct.ref2*)*, void (%struct.ref2*)*, void (%struct.ref2*)*, i8, i8 }
+%struct.localref7 = type { void (%struct.ref2*, i32)*, void (%struct.ref2*, i8***, i32*, i32, i8**, i32*, i32)* }
+%struct.localref8 = type { void (%struct.ref2*)*, void (%struct.ref2*)*, i8 }
+%struct.localref9 = type { void (%struct.ref2*, i32)*, void (%struct.ref2*, i8**, i32*, i32)* }
+%struct.localref10 = type { %struct.localref10*, i8, i32, i32, i8* }
+%struct.localref11 = type { i8*, %struct.targettype, void (%struct.ref2*)*, i8 (%struct.ref2*)*, void (%struct.ref2*, %struct.targettype)*, i8 (%struct.ref2*, i32)*, void (%struct.ref2*)* }
+%struct.localref12 = type { {}*, %struct.targettype, %struct.targettype, i32, i32 }
+%struct.localref13 = type { void (%struct.localref14*)*, void (%struct.localref14*, i32)*, void (%struct.localref14*)*, void (%struct.localref14*, i8*)*, void (%struct.localref14*)*, i32, %union.anon, i32, %struct.targettype, i8**, i32, i8**, i32, i32 }
+%struct.localref14 = type { %struct.localref13*, %struct.localref15*, %struct.localref12*, i8*, i8, i32 }
+%struct.localref15 = type { i8* (%struct.localref14*, i32, %struct.targettype)*, i8* (%struct.localref14*, i32, %struct.targettype)*, i8** (%struct.localref14*, i32, i32, i32)*, [64 x i16]** (%struct.localref14*, i32, i32, i32)*, %struct.op_ptr1* (%struct.localref14*, i32, i8, i32, i32, i32)*, %struct.op_ptr2* (%struct.localref14*, i32, i8, i32, i32, i32)*, {}*, i8** (%struct.localref14*, %struct.op_ptr1*, i32, i32, i8)*, [64 x i16]** (%struct.localref14*, %struct.op_ptr2*, i32, i32, i8)*, void (%struct.localref14*, i32)*, {}*, %struct.targettype, %struct.targettype}
+
+define internal i32 @foldedreload(%struct.ref2* %cinfo, i8*** nocapture %output1) {
+  %1 = getelementptr inbounds %struct.ref2* %cinfo, i64 0, i32 79
+  %2 = load %struct.ref1** %1, align 8
+  %3 = bitcast %struct.ref1* %2 to %struct.coef1*
+  %4 = getelementptr inbounds %struct.ref2* %cinfo, i64 0, i32 68
+  %5 = load i32* %4, align 4
+  %6 = add i32 %5, -1
+  %7 = getelementptr inbounds %struct.ref2* %cinfo, i64 0, i32 64
+  %8 = load i32* %7, align 4
+  %9 = add i32 %8, -1
+  %10 = getelementptr inbounds %struct.coef1* %3, i64 0, i32 2
+  %11 = load i32* %10, align 4
+  %12 = getelementptr inbounds %struct.ref1* %2, i64 1, i32 1
+  %13 = bitcast i32 (%struct.ref2*)** %12 to i32*
+  %14 = load i32* %13, align 4
+  %15 = icmp slt i32 %11, %14
+  br i1 %15, label %.lr.ph18, label %._crit_edge19
+
+.lr.ph18:
+  %16 = getelementptr inbounds %struct.ref1* %2, i64 1
+  %17 = bitcast %struct.ref1* %16 to i32*
+  %18 = getelementptr inbounds %struct.ref2* %cinfo, i64 0, i32 66
+  %19 = getelementptr inbounds %struct.ref2* %cinfo, i64 0, i32 84
+  %20 = getelementptr inbounds %struct.ref2* %cinfo, i64 0, i32 36
+  %21 = getelementptr inbounds %struct.ref1* %2, i64 1, i32 2
+  %22 = bitcast void (%struct.ref2*)** %21 to [10 x [64 x i16]*]*
+  %.pre = load i32* %17, align 4
+  br label %23
+
+; <label>:23
+  %24 = phi i32 [ %14, %.lr.ph18 ], [ %92, %91 ]
+  %25 = phi i32 [ %.pre, %.lr.ph18 ], [ 0, %91 ]
+  %var1.015 = phi i32 [ %11, %.lr.ph18 ], [ %93, %91 ]
+  %26 = icmp ugt i32 %25, %6
+  br i1 %26, label %91, label %.preheader7.lr.ph
+
+.preheader7.lr.ph:
+  %.pre24 = load i32* %18, align 4
+  br label %.preheader7
+
+.preheader7:
+  %27 = phi i32 [ %.pre24, %.preheader7.lr.ph ], [ %88, %._crit_edge11 ]
+  %var2.012 = phi i32 [ %25, %.preheader7.lr.ph ], [ %89, %._crit_edge11 ]
+  %28 = icmp sgt i32 %27, 0
+  br i1 %28, label %.lr.ph10, label %._crit_edge11
+
+.lr.ph10:
+  %29 = phi i32 [ %27, %.preheader7 ], [ %85, %84 ]
+  %indvars.iv21 = phi i64 [ 0, %.preheader7 ], [ %indvars.iv.next22, %84 ]
+  %var4.09 = phi i32 [ 0, %.preheader7 ], [ %var4.1.lcssa, %84 ]
+  %30 = getelementptr inbounds %struct.ref2* %cinfo, i64 0, i32 67, i64 %indvars.iv21
+  %31 = load %struct.ref3** %30, align 8
+  %32 = getelementptr inbounds %struct.ref3* %31, i64 0, i32 1
+  %33 = load i32* %32, align 4
+  %34 = sext i32 %33 to i64
+  %35 = load %struct.ref4** %19, align 8
+  %36 = getelementptr inbounds %struct.ref4* %35, i64 0, i32 1, i64 %34
+  %37 = load void (%struct.ref2*, %struct.ref3*, i16*, i8**, i32)** %36, align 8
+  %38 = getelementptr inbounds %struct.ref3* %31, i64 0, i32 17
+  %39 = load i32* %38, align 4
+  %40 = getelementptr inbounds %struct.ref3* %31, i64 0, i32 9
+  %41 = load i32* %40, align 4
+  %42 = getelementptr inbounds %struct.ref3* %31, i64 0, i32 16
+  %43 = load i32* %42, align 4
+  %44 = mul i32 %43, %var2.012
+  %45 = getelementptr inbounds %struct.ref3* %31, i64 0, i32 14
+  %46 = load i32* %45, align 4
+  %47 = icmp sgt i32 %46, 0
+  br i1 %47, label %.lr.ph6, label %84
+
+.lr.ph6:
+  %48 = mul nsw i32 %41, %var1.015
+  %49 = getelementptr inbounds i8*** %output1, i64 %34
+  %50 = sext i32 %48 to i64
+  %51 = load i8*** %49, align 8
+  %52 = getelementptr inbounds i8** %51, i64 %50
+  %53 = getelementptr inbounds %struct.ref3* %31, i64 0, i32 13
+  %54 = getelementptr inbounds %struct.ref3* %31, i64 0, i32 18
+  %55 = icmp sgt i32 %39, 0
+  br i1 %55, label %.lr.ph6.split.us, label %.lr.ph6..lr.ph6.split_crit_edge
+
+.lr.ph6..lr.ph6.split_crit_edge:
+  br label %._crit_edge28
+
+.lr.ph6.split.us:
+  %56 = phi i32 [ %64, %._crit_edge30 ], [ %46, %.lr.ph6 ]
+  %57 = phi i32 [ %65, %._crit_edge30 ], [ %41, %.lr.ph6 ]
+  %var4.15.us = phi i32 [ %67, %._crit_edge30 ], [ %var4.09, %.lr.ph6 ]
+  %output2.04.us = phi i8** [ %71, %._crit_edge30 ], [ %52, %.lr.ph6 ]
+  %var5.03.us = phi i32 [ %68, %._crit_edge30 ], [ 0, %.lr.ph6 ]
+  %58 = load i32* %20, align 4
+  %59 = icmp ult i32 %58, %9
+  br i1 %59, label %.lr.ph.us, label %60
+
+; <label>:60
+  %61 = add nsw i32 %var5.03.us, %var1.015
+  %62 = load i32* %54, align 4
+  %63 = icmp slt i32 %61, %62
+  br i1 %63, label %.lr.ph.us, label %._crit_edge29
+
+._crit_edge29:
+  %64 = phi i32 [ %.pre25.pre, %.loopexit.us ], [ %56, %60 ]
+  %65 = phi i32 [ %77, %.loopexit.us ], [ %57, %60 ]
+  %66 = load i32* %53, align 4
+  %67 = add nsw i32 %66, %var4.15.us
+  %68 = add nsw i32 %var5.03.us, 1
+  %69 = icmp slt i32 %68, %64
+  br i1 %69, label %._crit_edge30, label %._crit_edge
+
+._crit_edge30:
+  %70 = sext i32 %65 to i64
+  %71 = getelementptr inbounds i8** %output2.04.us, i64 %70
+  br label %.lr.ph6.split.us
+
+; <label>:72
+  %indvars.iv = phi i64 [ 0, %.lr.ph.us ], [ %indvars.iv.next, %72 ]
+  %var3.02.us = phi i32 [ %44, %.lr.ph.us ], [ %78, %72 ]
+  %73 = add nsw i64 %indvars.iv, %79
+  %74 = getelementptr inbounds [10 x [64 x i16]*]* %22, i64 0, i64 %73
+  %75 = load [64 x i16]** %74, align 8
+  %76 = getelementptr inbounds [64 x i16]* %75, i64 0, i64 0
+  tail call void %37(%struct.ref2* %cinfo, %struct.ref3* %31, i16* %76, i8** %output2.04.us, i32 %var3.02.us) nounwind
+  %77 = load i32* %40, align 4
+  %78 = add i32 %77, %var3.02.us
+  %indvars.iv.next = add i64 %indvars.iv, 1
+  %lftr.wideiv = trunc i64 %indvars.iv.next to i32
+  %exitcond = icmp eq i32 %lftr.wideiv, %39
+  br i1 %exitcond, label %.loopexit.us, label %72
+
+.loopexit.us:
+  %.pre25.pre = load i32* %45, align 4
+  br label %._crit_edge29
+
+.lr.ph.us:
+  %79 = sext i32 %var4.15.us to i64
+  br label %72
+
+._crit_edge28:
+  %var4.15 = phi i32 [ %var4.09, %.lr.ph6..lr.ph6.split_crit_edge ], [ %81, %._crit_edge28 ]
+  %var5.03 = phi i32 [ 0, %.lr.ph6..lr.ph6.split_crit_edge ], [ %82, %._crit_edge28 ]
+  %80 = load i32* %53, align 4
+  %81 = add nsw i32 %80, %var4.15
+  %82 = add nsw i32 %var5.03, 1
+  %83 = icmp slt i32 %82, %46
+  br i1 %83, label %._crit_edge28, label %._crit_edge
+
+._crit_edge:
+  %split = phi i32 [ %67, %._crit_edge29 ], [ %81, %._crit_edge28 ]
+  %.pre27 = load i32* %18, align 4
+  br label %84
+
+; <label>:84
+  %85 = phi i32 [ %.pre27, %._crit_edge ], [ %29, %.lr.ph10 ]
+  %var4.1.lcssa = phi i32 [ %split, %._crit_edge ], [ %var4.09, %.lr.ph10 ]
+  %indvars.iv.next22 = add i64 %indvars.iv21, 1
+  %86 = trunc i64 %indvars.iv.next22 to i32
+  %87 = icmp slt i32 %86, %85
+  br i1 %87, label %.lr.ph10, label %._crit_edge11
+
+._crit_edge11:
+  %88 = phi i32 [ %27, %.preheader7 ], [ %85, %84 ]
+  %89 = add i32 %var2.012, 1
+  %90 = icmp ugt i32 %89, %6
+  br i1 %90, label %._crit_edge14, label %.preheader7
+
+._crit_edge14:
+  %.pre23 = load i32* %13, align 4
+  br label %91
+
+; <label>:91
+  %92 = phi i32 [ %.pre23, %._crit_edge14 ], [ %24, %23 ]
+  store i32 0, i32* %17, align 4
+  %93 = add nsw i32 %var1.015, 1
+  %94 = icmp slt i32 %93, %92
+  br i1 %94, label %23, label %._crit_edge19
+
+._crit_edge19:
+  ret i32 3
+}

Modified: llvm/trunk/test/CodeGen/X86/movgs.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/movgs.ll?rev=178193&r1=178192&r2=178193&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/X86/movgs.ll (original)
+++ llvm/trunk/test/CodeGen/X86/movgs.ll Wed Mar 27 18:16:18 2013
@@ -1,6 +1,6 @@
-; RUN: llc < %s -march=x86 -mtriple=i386-linux-gnu -mattr=sse41 | FileCheck %s --check-prefix=X32
-; RUN: llc < %s -mtriple=x86_64-linux -mattr=sse41 | FileCheck %s --check-prefix=X64
-; RUN: llc < %s -mtriple=x86_64-win32 -mattr=sse41 | FileCheck %s --check-prefix=X64
+; RUN: llc < %s -march=x86 -mtriple=i386-linux-gnu -mcpu=penryn -mattr=sse41 | FileCheck %s --check-prefix=X32
+; RUN: llc < %s -mtriple=x86_64-linux -mcpu=penryn -mattr=sse41 | FileCheck %s --check-prefix=X64
+; RUN: llc < %s -mtriple=x86_64-win32 -mcpu=penryn -mattr=sse41 | FileCheck %s --check-prefix=X64

 define i32 @test1() nounwind readonly {
 entry:

Modified: llvm/trunk/test/CodeGen/X86/sibcall.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/sibcall.ll?rev=178193&r1=178192&r2=178193&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/X86/sibcall.ll (original)
+++ llvm/trunk/test/CodeGen/X86/sibcall.ll Wed Mar 27 18:16:18 2013
@@ -1,5 +1,5 @@
-; RUN: llc < %s -mtriple=i686-linux   -mattr=+sse2 -asm-verbose=false | FileCheck %s -check-prefix=32
-; RUN: llc < %s -mtriple=x86_64-linux -mattr=+sse2 -asm-verbose=false | FileCheck %s -check-prefix=64
+; RUN: llc < %s -mtriple=i686-linux   -mcpu=core2 -mattr=+sse2 -asm-verbose=false | FileCheck %s -check-prefix=32
+; RUN: llc < %s -mtriple=x86_64-linux -mcpu=core2 -mattr=+sse2 -asm-verbose=false | FileCheck %s -check-prefix=64

 define void @t1(i32 %x) nounwind ssp {
 entry:


_______________________________________________
llvm-commits mailing list
llvm-commits at cs.uiuc.edu
http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits


