On Fri, Jan 11, 2013 at 12:05 PM, Benjamin Kramer <benny.kra@googlemail.com> wrote:
> Author: d0k
> Date: Fri Jan 11 14:05:37 2013
> New Revision: 172246
>
> URL: http://llvm.org/viewvc/llvm-project?rev=172246&view=rev
> Log:
> Split TargetLowering into a CodeGen and a SelectionDAG part.
>
> This fixes some of the cycles between libCodeGen and libSelectionDAG. It's
> still a complete mess, but as long as the edges consist of virtual calls it
> doesn't cause breakage. BasicTTI made static calls and thus broke some build
> configurations.

I have been working on fixing this for several days, and I don't think this is the right fix. There shouldn't be any inheritance here, and I don't think we want the target ISel code to depend on SelectionDAG in all cases.

The fix I have almost working actually splits the interface between TargetLowering and TargetSelectionDAGInfo, which seems to be the desired way to partition between CodeGen and SelectionDAG.

But I don't know whether to even bother with it at this point. It's a very substantial change, and now that this has been committed it will require a massive rebase that I just don't have time for.
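
To make the layering point concrete, here is a minimal sketch of why a virtual-call edge across the libCodeGen/libSelectionDAG boundary links fine while a static call (as in BasicTTI) does not. The names below other than TargetLoweringBase and isLegalICmpImmediate are hypothetical stand-ins, not the actual LLVM classes:

    // "libCodeGen" side: only declares virtual hooks. An object file that
    // calls them through the base class never names a symbol defined in the
    // DAG-side library; the call is dispatched through the vtable at run time.
    struct TargetLoweringBase {
      virtual ~TargetLoweringBase() {}
      virtual bool isLegalICmpImmediate(long /*Imm*/) const { return true; }
    };

    // A CodeGen-level pass stays on the right side of the boundary:
    bool codeGenPassUsesHook(const TargetLoweringBase &TLI) {
      return TLI.isLegalICmpImmediate(42);  // indirect call: no link-time edge
    }

    // "libSelectionDAG" side: the derived class adds the DAG-dependent
    // interface. Only code that already links this library names it directly.
    struct SelectionDAGStub {};  // hypothetical stand-in for SelectionDAG
    struct TargetLoweringImpl : TargetLoweringBase {
      bool simplifyWithDAG(SelectionDAGStub &) const { return false; }
    };

    // By contrast, a *static* (direct, non-virtual) call from CodeGen-side
    // code into a function defined on the DAG side would emit an undefined
    // reference and drag libSelectionDAG back into the link -- the breakage
    // the log message attributes to BasicTTI.
    int main() {
      TargetLoweringImpl TLI;
      return codeGenPassUsesHook(TLI) ? 0 : 1;
    }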

> Added:
>     llvm/trunk/lib/CodeGen/TargetLoweringBase.cpp
> Modified:
>     llvm/trunk/include/llvm/CodeGen/Passes.h
>     llvm/trunk/include/llvm/Target/TargetLowering.h
>     llvm/trunk/lib/CodeGen/BasicTargetTransformInfo.cpp
>     llvm/trunk/lib/CodeGen/CMakeLists.txt
>     llvm/trunk/lib/CodeGen/DwarfEHPrepare.cpp
>     llvm/trunk/lib/CodeGen/IfConversion.cpp
>     llvm/trunk/lib/CodeGen/MachineBlockPlacement.cpp
>     llvm/trunk/lib/CodeGen/MachineLICM.cpp
>     llvm/trunk/lib/CodeGen/SelectionDAG/TargetLowering.cpp
>     llvm/trunk/lib/CodeGen/SjLjEHPrepare.cpp
>     llvm/trunk/lib/CodeGen/StackProtector.cpp
>
> Modified: llvm/trunk/include/llvm/CodeGen/Passes.h
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/CodeGen/Passes.h?rev=172246&r1=172245&r2=172246&view=diff
> ==============================================================================
> --- llvm/trunk/include/llvm/CodeGen/Passes.h (original)
> +++ llvm/trunk/include/llvm/CodeGen/Passes.h Fri Jan 11 14:05:37 2013
> @@ -25,6 +25,7 @@
> class MachineFunctionPass;
> class PassInfo;
> class PassManagerBase;
> + class TargetLoweringBase;
> class TargetLowering;
> class TargetRegisterClass;
> class raw_ostream;
> @@ -284,7 +285,8 @@
> ///
> /// This pass implements the target transform info analysis using the target
> /// independent information available to the LLVM code generator.
> - ImmutablePass *createBasicTargetTransformInfoPass(const TargetLowering *TLI);
> + ImmutablePass *
> + createBasicTargetTransformInfoPass(const TargetLoweringBase *TLI);
>
> /// createUnreachableBlockEliminationPass - The LLVM code generator does not
> /// work well with unreachable basic blocks (what live ranges make sense for a
> @@ -481,7 +483,7 @@
>
> /// createStackProtectorPass - This pass adds stack protectors to functions.
> ///
> - FunctionPass *createStackProtectorPass(const TargetLowering *tli);
> + FunctionPass *createStackProtectorPass(const TargetLoweringBase *tli);
>
> /// createMachineVerifierPass - This pass verifies cenerated machine code
> /// instructions for correctness.
> @@ -495,7 +497,7 @@
> /// createSjLjEHPreparePass - This pass adapts exception handling code to use
> /// the GCC-style builtin setjmp/longjmp (sjlj) to handling EH control flow.
> ///
> - FunctionPass *createSjLjEHPreparePass(const TargetLowering *tli);
> + FunctionPass *createSjLjEHPreparePass(const TargetLoweringBase *tli);
>
> /// LocalStackSlotAllocation - This pass assigns local frame indices to stack
> /// slots relative to one another and allocates base registers to access them
>
Modified: llvm/trunk/include/llvm/Target/TargetLowering.h<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/Target/TargetLowering.h?rev=172246&r1=172245&r2=172246&view=diff" target="_blank" class="cremed">http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/Target/TargetLowering.h?rev=172246&r1=172245&r2=172246&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/include/llvm/Target/TargetLowering.h (original)<br>
+++ llvm/trunk/include/llvm/Target/TargetLowering.h Fri Jan 11 14:05:37 2013<br>
@@ -68,17 +68,12 @@<br>
};<br>
}<br>
<br>
-//===----------------------------------------------------------------------===//<br>
-/// TargetLowering - This class defines information used to lower LLVM code to<br>
-/// legal SelectionDAG operators that the target instruction selector can accept<br>
-/// natively.<br>
-///<br>
-/// This class also defines callbacks that targets must implement to lower<br>
-/// target-specific constructs to SelectionDAG operators.<br>
-///<br>
-class TargetLowering {<br>
- TargetLowering(const TargetLowering&) LLVM_DELETED_FUNCTION;<br>
- void operator=(const TargetLowering&) LLVM_DELETED_FUNCTION;<br>
+/// TargetLoweringBase - This base class for TargetLowering contains the<br>
+/// SelectionDAG-independent parts that can be used from the rest of CodeGen.<br>
+class TargetLoweringBase {<br>
+ TargetLoweringBase(const TargetLoweringBase&) LLVM_DELETED_FUNCTION;<br>
+ void operator=(const TargetLoweringBase&) LLVM_DELETED_FUNCTION;<br>
+<br>
public:<br>
/// LegalizeAction - This enum indicates whether operations are valid for a<br>
/// target, and if not, what action should be used to make them valid.<br>
@@ -136,9 +131,9 @@<br>
}<br>
<br>
/// NOTE: The constructor takes ownership of TLOF.<br>
- explicit TargetLowering(const TargetMachine &TM,<br>
- const TargetLoweringObjectFile *TLOF);<br>
- virtual ~TargetLowering();<br>
+ explicit TargetLoweringBase(const TargetMachine &TM,<br>
+ const TargetLoweringObjectFile *TLOF);<br>
+ virtual ~TargetLoweringBase();<br>
<br>
const TargetMachine &getTargetMachine() const { return TM; }<br>
const DataLayout *getDataLayout() const { return TD; }<br>
@@ -829,55 +824,6 @@<br>
return InsertFencesForAtomic;<br>
}<br>
<br>
- /// getPreIndexedAddressParts - returns true by value, base pointer and<br>
- /// offset pointer and addressing mode by reference if the node's address<br>
- /// can be legally represented as pre-indexed load / store address.<br>
- virtual bool getPreIndexedAddressParts(SDNode * /*N*/, SDValue &/*Base*/,<br>
- SDValue &/*Offset*/,<br>
- ISD::MemIndexedMode &/*AM*/,<br>
- SelectionDAG &/*DAG*/) const {<br>
- return false;<br>
- }<br>
-<br>
- /// getPostIndexedAddressParts - returns true by value, base pointer and<br>
- /// offset pointer and addressing mode by reference if this node can be<br>
- /// combined with a load / store to form a post-indexed load / store.<br>
- virtual bool getPostIndexedAddressParts(SDNode * /*N*/, SDNode * /*Op*/,<br>
- SDValue &/*Base*/, SDValue &/*Offset*/,<br>
- ISD::MemIndexedMode &/*AM*/,<br>
- SelectionDAG &/*DAG*/) const {<br>
- return false;<br>
- }<br>
-<br>
- /// getJumpTableEncoding - Return the entry encoding for a jump table in the<br>
- /// current function. The returned value is a member of the<br>
- /// MachineJumpTableInfo::JTEntryKind enum.<br>
- virtual unsigned getJumpTableEncoding() const;<br>
-<br>
- virtual const MCExpr *<br>
- LowerCustomJumpTableEntry(const MachineJumpTableInfo * /*MJTI*/,<br>
- const MachineBasicBlock * /*MBB*/, unsigned /*uid*/,<br>
- MCContext &/*Ctx*/) const {<br>
- llvm_unreachable("Need to implement this hook if target has custom JTIs");<br>
- }<br>
-<br>
- /// getPICJumpTableRelocaBase - Returns relocation base for the given PIC<br>
- /// jumptable.<br>
- virtual SDValue getPICJumpTableRelocBase(SDValue Table,<br>
- SelectionDAG &DAG) const;<br>
-<br>
- /// getPICJumpTableRelocBaseExpr - This returns the relocation base for the<br>
- /// given PIC jumptable, the same as getPICJumpTableRelocBase, but as an<br>
- /// MCExpr.<br>
- virtual const MCExpr *<br>
- getPICJumpTableRelocBaseExpr(const MachineFunction *MF,<br>
- unsigned JTI, MCContext &Ctx) const;<br>
-<br>
- /// isOffsetFoldingLegal - Return true if folding a constant offset<br>
- /// with the given GlobalAddress is legal. It is frequently not legal in<br>
- /// PIC relocation models.<br>
- virtual bool isOffsetFoldingLegal(const GlobalAddressSDNode *GA) const;<br>
-<br>
/// getStackCookieLocation - Return true if the target stores stack<br>
/// protector cookies at a fixed offset in some non-standard address<br>
/// space, and populates the address space and offset as<br>
@@ -906,152 +852,6 @@<br>
/// @}<br>
<br>
//===--------------------------------------------------------------------===//<br>
- // TargetLowering Optimization Methods<br>
- //<br>
-<br>
- /// TargetLoweringOpt - A convenience struct that encapsulates a DAG, and two<br>
- /// SDValues for returning information from TargetLowering to its clients<br>
- /// that want to combine<br>
- struct TargetLoweringOpt {<br>
- SelectionDAG &DAG;<br>
- bool LegalTys;<br>
- bool LegalOps;<br>
- SDValue Old;<br>
- SDValue New;<br>
-<br>
- explicit TargetLoweringOpt(SelectionDAG &InDAG,<br>
- bool LT, bool LO) :<br>
- DAG(InDAG), LegalTys(LT), LegalOps(LO) {}<br>
-<br>
- bool LegalTypes() const { return LegalTys; }<br>
- bool LegalOperations() const { return LegalOps; }<br>
-<br>
- bool CombineTo(SDValue O, SDValue N) {<br>
- Old = O;<br>
- New = N;<br>
- return true;<br>
- }<br>
-<br>
- /// ShrinkDemandedConstant - Check to see if the specified operand of the<br>
- /// specified instruction is a constant integer. If so, check to see if<br>
- /// there are any bits set in the constant that are not demanded. If so,<br>
- /// shrink the constant and return true.<br>
- bool ShrinkDemandedConstant(SDValue Op, const APInt &Demanded);<br>
-<br>
- /// ShrinkDemandedOp - Convert x+y to (VT)((SmallVT)x+(SmallVT)y) if the<br>
- /// casts are free. This uses isZExtFree and ZERO_EXTEND for the widening<br>
- /// cast, but it could be generalized for targets with other types of<br>
- /// implicit widening casts.<br>
- bool ShrinkDemandedOp(SDValue Op, unsigned BitWidth, const APInt &Demanded,<br>
- DebugLoc dl);<br>
- };<br>
-<br>
- /// SimplifyDemandedBits - Look at Op. At this point, we know that only the<br>
- /// DemandedMask bits of the result of Op are ever used downstream. If we can<br>
- /// use this information to simplify Op, create a new simplified DAG node and<br>
- /// return true, returning the original and new nodes in Old and New.<br>
- /// Otherwise, analyze the expression and return a mask of KnownOne and<br>
- /// KnownZero bits for the expression (used to simplify the caller).<br>
- /// The KnownZero/One bits may only be accurate for those bits in the<br>
- /// DemandedMask.<br>
- bool SimplifyDemandedBits(SDValue Op, const APInt &DemandedMask,<br>
- APInt &KnownZero, APInt &KnownOne,<br>
- TargetLoweringOpt &TLO, unsigned Depth = 0) const;<br>
-<br>
- /// computeMaskedBitsForTargetNode - Determine which of the bits specified in<br>
- /// Mask are known to be either zero or one and return them in the<br>
- /// KnownZero/KnownOne bitsets.<br>
- virtual void computeMaskedBitsForTargetNode(const SDValue Op,<br>
- APInt &KnownZero,<br>
- APInt &KnownOne,<br>
- const SelectionDAG &DAG,<br>
- unsigned Depth = 0) const;<br>
-<br>
- /// ComputeNumSignBitsForTargetNode - This method can be implemented by<br>
- /// targets that want to expose additional information about sign bits to the<br>
- /// DAG Combiner.<br>
- virtual unsigned ComputeNumSignBitsForTargetNode(SDValue Op,<br>
- unsigned Depth = 0) const;<br>
-<br>
- struct DAGCombinerInfo {<br>
- void *DC; // The DAG Combiner object.<br>
- CombineLevel Level;<br>
- bool CalledByLegalizer;<br>
- public:<br>
- SelectionDAG &DAG;<br>
-<br>
- DAGCombinerInfo(SelectionDAG &dag, CombineLevel level, bool cl, void *dc)<br>
- : DC(dc), Level(level), CalledByLegalizer(cl), DAG(dag) {}<br>
-<br>
- bool isBeforeLegalize() const { return Level == BeforeLegalizeTypes; }<br>
- bool isBeforeLegalizeOps() const { return Level < AfterLegalizeVectorOps; }<br>
- bool isAfterLegalizeVectorOps() const {<br>
- return Level == AfterLegalizeDAG;<br>
- }<br>
- CombineLevel getDAGCombineLevel() { return Level; }<br>
- bool isCalledByLegalizer() const { return CalledByLegalizer; }<br>
-<br>
- void AddToWorklist(SDNode *N);<br>
- void RemoveFromWorklist(SDNode *N);<br>
- SDValue CombineTo(SDNode *N, const std::vector<SDValue> &To,<br>
- bool AddTo = true);<br>
- SDValue CombineTo(SDNode *N, SDValue Res, bool AddTo = true);<br>
- SDValue CombineTo(SDNode *N, SDValue Res0, SDValue Res1, bool AddTo = true);<br>
-<br>
- void CommitTargetLoweringOpt(const TargetLoweringOpt &TLO);<br>
- };<br>
-<br>
- /// SimplifySetCC - Try to simplify a setcc built with the specified operands<br>
- /// and cc. If it is unable to simplify it, return a null SDValue.<br>
- SDValue SimplifySetCC(EVT VT, SDValue N0, SDValue N1,<br>
- ISD::CondCode Cond, bool foldBooleans,<br>
- DAGCombinerInfo &DCI, DebugLoc dl) const;<br>
-<br>
- /// isGAPlusOffset - Returns true (and the GlobalValue and the offset) if the<br>
- /// node is a GlobalAddress + offset.<br>
- virtual bool<br>
- isGAPlusOffset(SDNode *N, const GlobalValue* &GA, int64_t &Offset) const;<br>
-<br>
- /// PerformDAGCombine - This method will be invoked for all target nodes and<br>
- /// for any target-independent nodes that the target has registered with<br>
- /// invoke it for.<br>
- ///<br>
- /// The semantics are as follows:<br>
- /// Return Value:<br>
- /// SDValue.Val == 0 - No change was made<br>
- /// SDValue.Val == N - N was replaced, is dead, and is already handled.<br>
- /// otherwise - N should be replaced by the returned Operand.<br>
- ///<br>
- /// In addition, methods provided by DAGCombinerInfo may be used to perform<br>
- /// more complex transformations.<br>
- ///<br>
- virtual SDValue PerformDAGCombine(SDNode *N, DAGCombinerInfo &DCI) const;<br>
-<br>
- /// isTypeDesirableForOp - Return true if the target has native support for<br>
- /// the specified value type and it is 'desirable' to use the type for the<br>
- /// given node type. e.g. On x86 i16 is legal, but undesirable since i16<br>
- /// instruction encodings are longer and some i16 instructions are slow.<br>
- virtual bool isTypeDesirableForOp(unsigned /*Opc*/, EVT VT) const {<br>
- // By default, assume all legal types are desirable.<br>
- return isTypeLegal(VT);<br>
- }<br>
-<br>
- /// isDesirableToPromoteOp - Return true if it is profitable for dag combiner<br>
- /// to transform a floating point op of specified opcode to a equivalent op of<br>
- /// an integer type. e.g. f32 load -> i32 load can be profitable on ARM.<br>
- virtual bool isDesirableToTransformToIntegerOp(unsigned /*Opc*/,<br>
- EVT /*VT*/) const {<br>
- return false;<br>
- }<br>
-<br>
- /// IsDesirableToPromoteOp - This method query the target whether it is<br>
- /// beneficial for dag combiner to promote the specified node. If true, it<br>
- /// should return the desired promotion type by reference.<br>
- virtual bool IsDesirableToPromoteOp(SDValue /*Op*/, EVT &/*PVT*/) const {<br>
- return false;<br>
- }<br>
-<br>
- //===--------------------------------------------------------------------===//<br>
// TargetLowering Configuration Methods - These methods should be invoked by<br>
// the derived class constructor to configure this object for the target.<br>
//<br>
@@ -1302,666 +1102,263 @@<br>
<br>
public:<br>
//===--------------------------------------------------------------------===//<br>
- // Lowering methods - These methods must be implemented by targets so that<br>
- // the SelectionDAGBuilder code knows how to lower these.<br>
+ // Addressing mode description hooks (used by LSR etc).<br>
//<br>
<br>
- /// LowerFormalArguments - This hook must be implemented to lower the<br>
- /// incoming (formal) arguments, described by the Ins array, into the<br>
- /// specified DAG. The implementation should fill in the InVals array<br>
- /// with legal-type argument values, and return the resulting token<br>
- /// chain value.<br>
- ///<br>
- virtual SDValue<br>
- LowerFormalArguments(SDValue /*Chain*/, CallingConv::ID /*CallConv*/,<br>
- bool /*isVarArg*/,<br>
- const SmallVectorImpl<ISD::InputArg> &/*Ins*/,<br>
- DebugLoc /*dl*/, SelectionDAG &/*DAG*/,<br>
- SmallVectorImpl<SDValue> &/*InVals*/) const {<br>
- llvm_unreachable("Not Implemented");<br>
+ /// GetAddrModeArguments - CodeGenPrepare sinks address calculations into the<br>
+ /// same BB as Load/Store instructions reading the address. This allows as<br>
+ /// much computation as possible to be done in the address mode for that<br>
+ /// operand. This hook lets targets also pass back when this should be done<br>
+ /// on intrinsics which load/store.<br>
+ virtual bool GetAddrModeArguments(IntrinsicInst *I,<br>
+ SmallVectorImpl<Value*> &Ops,<br>
+ Type *&AccessTy) const {<br>
+ return false;<br>
}<br>
<br>
- struct ArgListEntry {<br>
- SDValue Node;<br>
- Type* Ty;<br>
- bool isSExt : 1;<br>
- bool isZExt : 1;<br>
- bool isInReg : 1;<br>
- bool isSRet : 1;<br>
- bool isNest : 1;<br>
- bool isByVal : 1;<br>
- uint16_t Alignment;<br>
-<br>
- ArgListEntry() : isSExt(false), isZExt(false), isInReg(false),<br>
- isSRet(false), isNest(false), isByVal(false), Alignment(0) { }<br>
- };<br>
- typedef std::vector<ArgListEntry> ArgListTy;<br>
-<br>
- /// CallLoweringInfo - This structure contains all information that is<br>
- /// necessary for lowering calls. It is passed to TLI::LowerCallTo when the<br>
- /// SelectionDAG builder needs to lower a call, and targets will see this<br>
- /// struct in their LowerCall implementation.<br>
- struct CallLoweringInfo {<br>
- SDValue Chain;<br>
- Type *RetTy;<br>
- bool RetSExt : 1;<br>
- bool RetZExt : 1;<br>
- bool IsVarArg : 1;<br>
- bool IsInReg : 1;<br>
- bool DoesNotReturn : 1;<br>
- bool IsReturnValueUsed : 1;<br>
-<br>
- // IsTailCall should be modified by implementations of<br>
- // TargetLowering::LowerCall that perform tail call conversions.<br>
- bool IsTailCall;<br>
-<br>
- unsigned NumFixedArgs;<br>
- CallingConv::ID CallConv;<br>
- SDValue Callee;<br>
- ArgListTy &Args;<br>
- SelectionDAG &DAG;<br>
- DebugLoc DL;<br>
- ImmutableCallSite *CS;<br>
- SmallVector<ISD::OutputArg, 32> Outs;<br>
- SmallVector<SDValue, 32> OutVals;<br>
- SmallVector<ISD::InputArg, 32> Ins;<br>
-<br>
-<br>
- /// CallLoweringInfo - Constructs a call lowering context based on the<br>
- /// ImmutableCallSite \p cs.<br>
- CallLoweringInfo(SDValue chain, Type *retTy,<br>
- FunctionType *FTy, bool isTailCall, SDValue callee,<br>
- ArgListTy &args, SelectionDAG &dag, DebugLoc dl,<br>
- ImmutableCallSite &cs)<br>
- : Chain(chain), RetTy(retTy), RetSExt(cs.paramHasAttr(0, Attribute::SExt)),<br>
- RetZExt(cs.paramHasAttr(0, Attribute::ZExt)), IsVarArg(FTy->isVarArg()),<br>
- IsInReg(cs.paramHasAttr(0, Attribute::InReg)),<br>
- DoesNotReturn(cs.doesNotReturn()),<br>
- IsReturnValueUsed(!cs.getInstruction()->use_empty()),<br>
- IsTailCall(isTailCall), NumFixedArgs(FTy->getNumParams()),<br>
- CallConv(cs.getCallingConv()), Callee(callee), Args(args), DAG(dag),<br>
- DL(dl), CS(&cs) {}<br>
-<br>
- /// CallLoweringInfo - Constructs a call lowering context based on the<br>
- /// provided call information.<br>
- CallLoweringInfo(SDValue chain, Type *retTy, bool retSExt, bool retZExt,<br>
- bool isVarArg, bool isInReg, unsigned numFixedArgs,<br>
- CallingConv::ID callConv, bool isTailCall,<br>
- bool doesNotReturn, bool isReturnValueUsed, SDValue callee,<br>
- ArgListTy &args, SelectionDAG &dag, DebugLoc dl)<br>
- : Chain(chain), RetTy(retTy), RetSExt(retSExt), RetZExt(retZExt),<br>
- IsVarArg(isVarArg), IsInReg(isInReg), DoesNotReturn(doesNotReturn),<br>
- IsReturnValueUsed(isReturnValueUsed), IsTailCall(isTailCall),<br>
- NumFixedArgs(numFixedArgs), CallConv(callConv), Callee(callee),<br>
- Args(args), DAG(dag), DL(dl), CS(NULL) {}<br>
+ /// AddrMode - This represents an addressing mode of:<br>
+ /// BaseGV + BaseOffs + BaseReg + Scale*ScaleReg<br>
+ /// If BaseGV is null, there is no BaseGV.<br>
+ /// If BaseOffs is zero, there is no base offset.<br>
+ /// If HasBaseReg is false, there is no base register.<br>
+ /// If Scale is zero, there is no ScaleReg. Scale of 1 indicates a reg with<br>
+ /// no scale.<br>
+ ///<br>
+ struct AddrMode {<br>
+ GlobalValue *BaseGV;<br>
+ int64_t BaseOffs;<br>
+ bool HasBaseReg;<br>
+ int64_t Scale;<br>
+ AddrMode() : BaseGV(0), BaseOffs(0), HasBaseReg(false), Scale(0) {}<br>
};<br>
<br>
- /// LowerCallTo - This function lowers an abstract call to a function into an<br>
- /// actual call. This returns a pair of operands. The first element is the<br>
- /// return value for the function (if RetTy is not VoidTy). The second<br>
- /// element is the outgoing token chain. It calls LowerCall to do the actual<br>
- /// lowering.<br>
- std::pair<SDValue, SDValue> LowerCallTo(CallLoweringInfo &CLI) const;<br>
+ /// isLegalAddressingMode - Return true if the addressing mode represented by<br>
+ /// AM is legal for this target, for a load/store of the specified type.<br>
+ /// The type may be VoidTy, in which case only return true if the addressing<br>
+ /// mode is legal for a load/store of any legal type.<br>
+ /// TODO: Handle pre/postinc as well.<br>
+ virtual bool isLegalAddressingMode(const AddrMode &AM, Type *Ty) const;<br>
<br>
- /// LowerCall - This hook must be implemented to lower calls into the<br>
- /// the specified DAG. The outgoing arguments to the call are described<br>
- /// by the Outs array, and the values to be returned by the call are<br>
- /// described by the Ins array. The implementation should fill in the<br>
- /// InVals array with legal-type return values from the call, and return<br>
- /// the resulting token chain value.<br>
- virtual SDValue<br>
- LowerCall(CallLoweringInfo &/*CLI*/,<br>
- SmallVectorImpl<SDValue> &/*InVals*/) const {<br>
- llvm_unreachable("Not Implemented");<br>
+ /// isLegalICmpImmediate - Return true if the specified immediate is legal<br>
+ /// icmp immediate, that is the target has icmp instructions which can compare<br>
+ /// a register against the immediate without having to materialize the<br>
+ /// immediate into a register.<br>
+ virtual bool isLegalICmpImmediate(int64_t) const {<br>
+ return true;<br>
}<br>
<br>
- /// HandleByVal - Target-specific cleanup for formal ByVal parameters.<br>
- virtual void HandleByVal(CCState *, unsigned &, unsigned) const {}<br>
-<br>
- /// CanLowerReturn - This hook should be implemented to check whether the<br>
- /// return values described by the Outs array can fit into the return<br>
- /// registers. If false is returned, an sret-demotion is performed.<br>
- ///<br>
- virtual bool CanLowerReturn(CallingConv::ID /*CallConv*/,<br>
- MachineFunction &/*MF*/, bool /*isVarArg*/,<br>
- const SmallVectorImpl<ISD::OutputArg> &/*Outs*/,<br>
- LLVMContext &/*Context*/) const<br>
- {<br>
- // Return true by default to get preexisting behavior.<br>
+ /// isLegalAddImmediate - Return true if the specified immediate is legal<br>
+ /// add immediate, that is the target has add instructions which can add<br>
+ /// a register with the immediate without having to materialize the<br>
+ /// immediate into a register.<br>
+ virtual bool isLegalAddImmediate(int64_t) const {<br>
return true;<br>
}<br>
<br>
- /// LowerReturn - This hook must be implemented to lower outgoing<br>
- /// return values, described by the Outs array, into the specified<br>
- /// DAG. The implementation should return the resulting token chain<br>
- /// value.<br>
- ///<br>
- virtual SDValue<br>
- LowerReturn(SDValue /*Chain*/, CallingConv::ID /*CallConv*/,<br>
- bool /*isVarArg*/,<br>
- const SmallVectorImpl<ISD::OutputArg> &/*Outs*/,<br>
- const SmallVectorImpl<SDValue> &/*OutVals*/,<br>
- DebugLoc /*dl*/, SelectionDAG &/*DAG*/) const {<br>
- llvm_unreachable("Not Implemented");<br>
+ /// isTruncateFree - Return true if it's free to truncate a value of<br>
+ /// type Ty1 to type Ty2. e.g. On x86 it's free to truncate a i32 value in<br>
+ /// register EAX to i16 by referencing its sub-register AX.<br>
+ virtual bool isTruncateFree(Type * /*Ty1*/, Type * /*Ty2*/) const {<br>
+ return false;<br>
}<br>
<br>
- /// isUsedByReturnOnly - Return true if result of the specified node is used<br>
- /// by a return node only. It also compute and return the input chain for the<br>
- /// tail call.<br>
- /// This is used to determine whether it is possible<br>
- /// to codegen a libcall as tail call at legalization time.<br>
- virtual bool isUsedByReturnOnly(SDNode *, SDValue &Chain) const {<br>
+ virtual bool isTruncateFree(EVT /*VT1*/, EVT /*VT2*/) const {<br>
return false;<br>
}<br>
<br>
- /// mayBeEmittedAsTailCall - Return true if the target may be able emit the<br>
- /// call instruction as a tail call. This is used by optimization passes to<br>
- /// determine if it's profitable to duplicate return instructions to enable<br>
- /// tailcall optimization.<br>
- virtual bool mayBeEmittedAsTailCall(CallInst *) const {<br>
+ /// isZExtFree - Return true if any actual instruction that defines a<br>
+ /// value of type Ty1 implicitly zero-extends the value to Ty2 in the result<br>
+ /// register. This does not necessarily include registers defined in<br>
+ /// unknown ways, such as incoming arguments, or copies from unknown<br>
+ /// virtual registers. Also, if isTruncateFree(Ty2, Ty1) is true, this<br>
+ /// does not necessarily apply to truncate instructions. e.g. on x86-64,<br>
+ /// all instructions that define 32-bit values implicit zero-extend the<br>
+ /// result out to 64 bits.<br>
+ virtual bool isZExtFree(Type * /*Ty1*/, Type * /*Ty2*/) const {<br>
return false;<br>
}<br>
<br>
- /// getTypeForExtArgOrReturn - Return the type that should be used to zero or<br>
- /// sign extend a zeroext/signext integer argument or return value.<br>
- /// FIXME: Most C calling convention requires the return type to be promoted,<br>
- /// but this is not true all the time, e.g. i1 on x86-64. It is also not<br>
- /// necessary for non-C calling conventions. The frontend should handle this<br>
- /// and include all of the necessary information.<br>
- virtual MVT getTypeForExtArgOrReturn(MVT VT,<br>
- ISD::NodeType /*ExtendKind*/) const {<br>
- MVT MinVT = getRegisterType(MVT::i32);<br>
- return VT.bitsLT(MinVT) ? MinVT : VT;<br>
+ virtual bool isZExtFree(EVT /*VT1*/, EVT /*VT2*/) const {<br>
+ return false;<br>
}<br>
<br>
- /// LowerOperationWrapper - This callback is invoked by the type legalizer<br>
- /// to legalize nodes with an illegal operand type but legal result types.<br>
- /// It replaces the LowerOperation callback in the type Legalizer.<br>
- /// The reason we can not do away with LowerOperation entirely is that<br>
- /// LegalizeDAG isn't yet ready to use this callback.<br>
- /// TODO: Consider merging with ReplaceNodeResults.<br>
-<br>
- /// The target places new result values for the node in Results (their number<br>
- /// and types must exactly match those of the original return values of<br>
- /// the node), or leaves Results empty, which indicates that the node is not<br>
- /// to be custom lowered after all.<br>
- /// The default implementation calls LowerOperation.<br>
- virtual void LowerOperationWrapper(SDNode *N,<br>
- SmallVectorImpl<SDValue> &Results,<br>
- SelectionDAG &DAG) const;<br>
+ /// isZExtFree - Return true if zero-extending the specific node Val to type<br>
+ /// VT2 is free (either because it's implicitly zero-extended such as ARM<br>
+ /// ldrb / ldrh or because it's folded such as X86 zero-extending loads).<br>
+ virtual bool isZExtFree(SDValue Val, EVT VT2) const {<br>
+ return isZExtFree(Val.getValueType(), VT2);<br>
+ }<br>
<br>
- /// LowerOperation - This callback is invoked for operations that are<br>
- /// unsupported by the target, which are registered to use 'custom' lowering,<br>
- /// and whose defined values are all legal.<br>
- /// If the target has no operations that require custom lowering, it need not<br>
- /// implement this. The default implementation of this aborts.<br>
- virtual SDValue LowerOperation(SDValue Op, SelectionDAG &DAG) const;<br>
+ /// isFNegFree - Return true if an fneg operation is free to the point where<br>
+ /// it is never worthwhile to replace it with a bitwise operation.<br>
+ virtual bool isFNegFree(EVT) const {<br>
+ return false;<br>
+ }<br>
<br>
- /// ReplaceNodeResults - This callback is invoked when a node result type is<br>
- /// illegal for the target, and the operation was registered to use 'custom'<br>
- /// lowering for that result type. The target places new result values for<br>
- /// the node in Results (their number and types must exactly match those of<br>
- /// the original return values of the node), or leaves Results empty, which<br>
- /// indicates that the node is not to be custom lowered after all.<br>
- ///<br>
- /// If the target has no operations that require custom lowering, it need not<br>
- /// implement this. The default implementation aborts.<br>
- virtual void ReplaceNodeResults(SDNode * /*N*/,<br>
- SmallVectorImpl<SDValue> &/*Results*/,<br>
- SelectionDAG &/*DAG*/) const {<br>
- llvm_unreachable("ReplaceNodeResults not implemented for this target!");<br>
+ /// isFAbsFree - Return true if an fneg operation is free to the point where<br>
+ /// it is never worthwhile to replace it with a bitwise operation.<br>
+ virtual bool isFAbsFree(EVT) const {<br>
+ return false;<br>
}<br>
<br>
- /// getTargetNodeName() - This method returns the name of a target specific<br>
- /// DAG node.<br>
- virtual const char *getTargetNodeName(unsigned Opcode) const;<br>
+ /// isFMAFasterThanMulAndAdd - Return true if an FMA operation is faster than<br>
+ /// a pair of mul and add instructions. fmuladd intrinsics will be expanded to<br>
+ /// FMAs when this method returns true (and FMAs are legal), otherwise fmuladd<br>
+ /// is expanded to mul + add.<br>
+ virtual bool isFMAFasterThanMulAndAdd(EVT) const {<br>
+ return false;<br>
+ }<br>
<br>
- /// createFastISel - This method returns a target specific FastISel object,<br>
- /// or null if the target does not support "fast" ISel.<br>
- virtual FastISel *createFastISel(FunctionLoweringInfo &,<br>
- const TargetLibraryInfo *) const {<br>
- return 0;<br>
+ /// isNarrowingProfitable - Return true if it's profitable to narrow<br>
+ /// operations of type VT1 to VT2. e.g. on x86, it's profitable to narrow<br>
+ /// from i32 to i8 but not from i32 to i16.<br>
+ virtual bool isNarrowingProfitable(EVT /*VT1*/, EVT /*VT2*/) const {<br>
+ return false;<br>
}<br>
<br>
//===--------------------------------------------------------------------===//<br>
- // Inline Asm Support hooks<br>
+ // Runtime Library hooks<br>
//<br>
<br>
- /// ExpandInlineAsm - This hook allows the target to expand an inline asm<br>
- /// call to be explicit llvm code if it wants to. This is useful for<br>
- /// turning simple inline asms into LLVM intrinsics, which gives the<br>
- /// compiler more information about the behavior of the code.<br>
- virtual bool ExpandInlineAsm(CallInst *) const {<br>
- return false;<br>
+ /// setLibcallName - Rename the default libcall routine name for the specified<br>
+ /// libcall.<br>
+ void setLibcallName(RTLIB::Libcall Call, const char *Name) {<br>
+ LibcallRoutineNames[Call] = Name;<br>
}<br>
<br>
- enum ConstraintType {<br>
- C_Register, // Constraint represents specific register(s).<br>
- C_RegisterClass, // Constraint represents any of register(s) in class.<br>
- C_Memory, // Memory constraint.<br>
- C_Other, // Something else.<br>
- C_Unknown // Unsupported constraint.<br>
- };<br>
+ /// getLibcallName - Get the libcall routine name for the specified libcall.<br>
+ ///<br>
+ const char *getLibcallName(RTLIB::Libcall Call) const {<br>
+ return LibcallRoutineNames[Call];<br>
+ }<br>
<br>
- enum ConstraintWeight {<br>
- // Generic weights.<br>
- CW_Invalid = -1, // No match.<br>
- CW_Okay = 0, // Acceptable.<br>
- CW_Good = 1, // Good weight.<br>
- CW_Better = 2, // Better weight.<br>
- CW_Best = 3, // Best weight.<br>
-<br>
- // Well-known weights.<br>
- CW_SpecificReg = CW_Okay, // Specific register operands.<br>
- CW_Register = CW_Good, // Register operands.<br>
- CW_Memory = CW_Better, // Memory operands.<br>
- CW_Constant = CW_Best, // Constant operand.<br>
- CW_Default = CW_Okay // Default or don't know type.<br>
- };<br>
+ /// setCmpLibcallCC - Override the default CondCode to be used to test the<br>
+ /// result of the comparison libcall against zero.<br>
+ void setCmpLibcallCC(RTLIB::Libcall Call, ISD::CondCode CC) {<br>
+ CmpLibcallCCs[Call] = CC;<br>
+ }<br>
<br>
- /// AsmOperandInfo - This contains information for each constraint that we are<br>
- /// lowering.<br>
- struct AsmOperandInfo : public InlineAsm::ConstraintInfo {<br>
- /// ConstraintCode - This contains the actual string for the code, like "m".<br>
- /// TargetLowering picks the 'best' code from ConstraintInfo::Codes that<br>
- /// most closely matches the operand.<br>
- std::string ConstraintCode;<br>
+ /// getCmpLibcallCC - Get the CondCode that's to be used to test the result of<br>
+ /// the comparison libcall against zero.<br>
+ ISD::CondCode getCmpLibcallCC(RTLIB::Libcall Call) const {<br>
+ return CmpLibcallCCs[Call];<br>
+ }<br>
<br>
- /// ConstraintType - Information about the constraint code, e.g. Register,<br>
- /// RegisterClass, Memory, Other, Unknown.<br>
- TargetLowering::ConstraintType ConstraintType;<br>
+ /// setLibcallCallingConv - Set the CallingConv that should be used for the<br>
+ /// specified libcall.<br>
+ void setLibcallCallingConv(RTLIB::Libcall Call, CallingConv::ID CC) {<br>
+ LibcallCallingConvs[Call] = CC;<br>
+ }<br>
<br>
- /// CallOperandval - If this is the result output operand or a<br>
- /// clobber, this is null, otherwise it is the incoming operand to the<br>
- /// CallInst. This gets modified as the asm is processed.<br>
- Value *CallOperandVal;<br>
+ /// getLibcallCallingConv - Get the CallingConv that should be used for the<br>
+ /// specified libcall.<br>
+ CallingConv::ID getLibcallCallingConv(RTLIB::Libcall Call) const {<br>
+ return LibcallCallingConvs[Call];<br>
+ }<br>
<br>
- /// ConstraintVT - The ValueType for the operand value.<br>
- MVT ConstraintVT;<br>
+private:<br>
+ const TargetMachine &TM;<br>
+ const DataLayout *TD;<br>
+ const TargetLoweringObjectFile &TLOF;<br>
<br>
- /// isMatchingInputConstraint - Return true of this is an input operand that<br>
- /// is a matching constraint like "4".<br>
- bool isMatchingInputConstraint() const;<br>
+ /// PointerTy - The type to use for pointers for the default address space,<br>
+ /// usually i32 or i64.<br>
+ ///<br>
+ MVT PointerTy;<br>
<br>
- /// getMatchedOperand - If this is an input matching constraint, this method<br>
- /// returns the output operand it matches.<br>
- unsigned getMatchedOperand() const;<br>
+ /// IsLittleEndian - True if this is a little endian target.<br>
+ ///<br>
+ bool IsLittleEndian;<br>
<br>
- /// Copy constructor for copying from an AsmOperandInfo.<br>
- AsmOperandInfo(const AsmOperandInfo &info)<br>
- : InlineAsm::ConstraintInfo(info),<br>
- ConstraintCode(info.ConstraintCode),<br>
- ConstraintType(info.ConstraintType),<br>
- CallOperandVal(info.CallOperandVal),<br>
- ConstraintVT(info.ConstraintVT) {<br>
- }<br>
+ /// SelectIsExpensive - Tells the code generator not to expand operations<br>
+ /// into sequences that use the select operations if possible.<br>
+ bool SelectIsExpensive;<br>
<br>
- /// Copy constructor for copying from a ConstraintInfo.<br>
- AsmOperandInfo(const InlineAsm::ConstraintInfo &info)<br>
- : InlineAsm::ConstraintInfo(info),<br>
- ConstraintType(TargetLowering::C_Unknown),<br>
- CallOperandVal(0), ConstraintVT(MVT::Other) {<br>
- }<br>
- };<br>
+ /// IntDivIsCheap - Tells the code generator not to expand integer divides by<br>
+ /// constants into a sequence of muls, adds, and shifts. This is a hack until<br>
+ /// a real cost model is in place. If we ever optimize for size, this will be<br>
+ /// set to true unconditionally.<br>
+ bool IntDivIsCheap;<br>
<br>
- typedef std::vector<AsmOperandInfo> AsmOperandInfoVector;<br>
+ /// BypassSlowDivMap - Tells the code generator to bypass slow divide or<br>
+ /// remainder instructions. For example, BypassSlowDivWidths[32,8] tells the<br>
+ /// code generator to bypass 32-bit integer div/rem with an 8-bit unsigned<br>
+ /// integer div/rem when the operands are positive and less than 256.<br>
+ DenseMap <unsigned int, unsigned int> BypassSlowDivWidths;<br>
<br>
- /// ParseConstraints - Split up the constraint string from the inline<br>
- /// assembly value into the specific constraints and their prefixes,<br>
- /// and also tie in the associated operand values.<br>
- /// If this returns an empty vector, and if the constraint string itself<br>
- /// isn't empty, there was an error parsing.<br>
- virtual AsmOperandInfoVector ParseConstraints(ImmutableCallSite CS) const;<br>
+ /// Pow2DivIsCheap - Tells the code generator that it shouldn't generate<br>
+ /// srl/add/sra for a signed divide by power of two, and let the target handle<br>
+ /// it.<br>
+ bool Pow2DivIsCheap;<br>
<br>
- /// Examine constraint type and operand type and determine a weight value.<br>
- /// The operand object must already have been set up with the operand type.<br>
- virtual ConstraintWeight getMultipleConstraintMatchWeight(<br>
- AsmOperandInfo &info, int maIndex) const;<br>
+ /// JumpIsExpensive - Tells the code generator that it shouldn't generate<br>
+ /// extra flow control instructions and should attempt to combine flow<br>
+ /// control instructions via predication.<br>
+ bool JumpIsExpensive;<br>
<br>
- /// Examine constraint string and operand type and determine a weight value.<br>
- /// The operand object must already have been set up with the operand type.<br>
- virtual ConstraintWeight getSingleConstraintMatchWeight(<br>
- AsmOperandInfo &info, const char *constraint) const;<br>
+ /// UseUnderscoreSetJmp - This target prefers to use _setjmp to implement<br>
+ /// llvm.setjmp. Defaults to false.<br>
+ bool UseUnderscoreSetJmp;<br>
<br>
- /// ComputeConstraintToUse - Determines the constraint code and constraint<br>
- /// type to use for the specific AsmOperandInfo, setting<br>
- /// OpInfo.ConstraintCode and OpInfo.ConstraintType. If the actual operand<br>
- /// being passed in is available, it can be passed in as Op, otherwise an<br>
- /// empty SDValue can be passed.<br>
- virtual void ComputeConstraintToUse(AsmOperandInfo &OpInfo,<br>
- SDValue Op,<br>
- SelectionDAG *DAG = 0) const;<br>
+ /// UseUnderscoreLongJmp - This target prefers to use _longjmp to implement<br>
+ /// llvm.longjmp. Defaults to false.<br>
+ bool UseUnderscoreLongJmp;<br>
<br>
- /// getConstraintType - Given a constraint, return the type of constraint it<br>
- /// is for this target.<br>
- virtual ConstraintType getConstraintType(const std::string &Constraint) const;<br>
+ /// SupportJumpTables - Whether the target can generate code for jumptables.<br>
+ /// If it's not true, then each jumptable must be lowered into if-then-else's.<br>
+ bool SupportJumpTables;<br>
<br>
- /// getRegForInlineAsmConstraint - Given a physical register constraint (e.g.<br>
- /// {edx}), return the register number and the register class for the<br>
- /// register.<br>
- ///<br>
- /// Given a register class constraint, like 'r', if this corresponds directly<br>
- /// to an LLVM register class, return a register of 0 and the register class<br>
- /// pointer.<br>
- ///<br>
- /// This should only be used for C_Register constraints. On error,<br>
- /// this returns a register number of 0 and a null register class pointer..<br>
- virtual std::pair<unsigned, const TargetRegisterClass*><br>
- getRegForInlineAsmConstraint(const std::string &Constraint,<br>
- EVT VT) const;<br>
+ /// MinimumJumpTableEntries - Number of blocks threshold to use jump tables.<br>
+ int MinimumJumpTableEntries;<br>
<br>
- /// LowerXConstraint - try to replace an X constraint, which matches anything,<br>
- /// with another that has more specific requirements based on the type of the<br>
- /// corresponding operand. This returns null if there is no replacement to<br>
- /// make.<br>
- virtual const char *LowerXConstraint(EVT ConstraintVT) const;<br>
+ /// BooleanContents - Information about the contents of the high-bits in<br>
+ /// boolean values held in a type wider than i1. See getBooleanContents.<br>
+ BooleanContent BooleanContents;<br>
+ /// BooleanVectorContents - Information about the contents of the high-bits<br>
+ /// in boolean vector values when the element type is wider than i1. See<br>
+ /// getBooleanContents.<br>
+ BooleanContent BooleanVectorContents;<br>
<br>
- /// LowerAsmOperandForConstraint - Lower the specified operand into the Ops<br>
- /// vector. If it is invalid, don't add anything to Ops.<br>
- virtual void LowerAsmOperandForConstraint(SDValue Op, std::string &Constraint,<br>
- std::vector<SDValue> &Ops,<br>
- SelectionDAG &DAG) const;<br>
+ /// SchedPreferenceInfo - The target scheduling preference: shortest possible<br>
+ /// total cycles or lowest register usage.<br>
+ Sched::Preference SchedPreferenceInfo;<br>
<br>
- //===--------------------------------------------------------------------===//<br>
- // Instruction Emitting Hooks<br>
- //<br>
+ /// JumpBufSize - The size, in bytes, of the target's jmp_buf buffers<br>
+ unsigned JumpBufSize;<br>
<br>
- // EmitInstrWithCustomInserter - This method should be implemented by targets<br>
- // that mark instructions with the 'usesCustomInserter' flag. These<br>
- // instructions are special in various ways, which require special support to<br>
- // insert. The specified MachineInstr is created but not inserted into any<br>
- // basic blocks, and this method is called to expand it into a sequence of<br>
- // instructions, potentially also creating new basic blocks and control flow.<br>
- virtual MachineBasicBlock *<br>
- EmitInstrWithCustomInserter(MachineInstr *MI, MachineBasicBlock *MBB) const;<br>
+ /// JumpBufAlignment - The alignment, in bytes, of the target's jmp_buf<br>
+ /// buffers<br>
+ unsigned JumpBufAlignment;<br>
<br>
- /// AdjustInstrPostInstrSelection - This method should be implemented by<br>
- /// targets that mark instructions with the 'hasPostISelHook' flag. These<br>
- /// instructions must be adjusted after instruction selection by target hooks.<br>
- /// e.g. To fill in optional defs for ARM 's' setting instructions.<br>
- virtual void<br>
- AdjustInstrPostInstrSelection(MachineInstr *MI, SDNode *Node) const;<br>
+ /// MinStackArgumentAlignment - The minimum alignment that any argument<br>
+ /// on the stack needs to have.<br>
+ ///<br>
+ unsigned MinStackArgumentAlignment;<br>
<br>
- //===--------------------------------------------------------------------===//<br>
- // Addressing mode description hooks (used by LSR etc).<br>
- //<br>
+ /// MinFunctionAlignment - The minimum function alignment (used when<br>
+ /// optimizing for size, and to prevent explicitly provided alignment<br>
+ /// from leading to incorrect code).<br>
+ ///<br>
+ unsigned MinFunctionAlignment;<br>
<br>
- /// GetAddrModeArguments - CodeGenPrepare sinks address calculations into the<br>
- /// same BB as Load/Store instructions reading the address. This allows as<br>
- /// much computation as possible to be done in the address mode for that<br>
- /// operand. This hook lets targets also pass back when this should be done<br>
- /// on intrinsics which load/store.<br>
- virtual bool GetAddrModeArguments(IntrinsicInst *I,<br>
- SmallVectorImpl<Value*> &Ops,<br>
- Type *&AccessTy) const {<br>
- return false;<br>
- }<br>
+ /// PrefFunctionAlignment - The preferred function alignment (used when<br>
+ /// alignment unspecified and optimizing for speed).<br>
+ ///<br>
+ unsigned PrefFunctionAlignment;<br>
<br>
- /// AddrMode - This represents an addressing mode of:<br>
- /// BaseGV + BaseOffs + BaseReg + Scale*ScaleReg<br>
- /// If BaseGV is null, there is no BaseGV.<br>
- /// If BaseOffs is zero, there is no base offset.<br>
- /// If HasBaseReg is false, there is no base register.<br>
- /// If Scale is zero, there is no ScaleReg. Scale of 1 indicates a reg with<br>
- /// no scale.<br>
+ /// PrefLoopAlignment - The preferred loop alignment.<br>
///<br>
- struct AddrMode {<br>
- GlobalValue *BaseGV;<br>
- int64_t BaseOffs;<br>
- bool HasBaseReg;<br>
- int64_t Scale;<br>
- AddrMode() : BaseGV(0), BaseOffs(0), HasBaseReg(false), Scale(0) {}<br>
- };<br>
+ unsigned PrefLoopAlignment;<br>
<br>
- /// isLegalAddressingMode - Return true if the addressing mode represented by<br>
- /// AM is legal for this target, for a load/store of the specified type.<br>
- /// The type may be VoidTy, in which case only return true if the addressing<br>
- /// mode is legal for a load/store of any legal type.<br>
- /// TODO: Handle pre/postinc as well.<br>
- virtual bool isLegalAddressingMode(const AddrMode &AM, Type *Ty) const;<br>
+ /// ShouldFoldAtomicFences - Whether fencing MEMBARRIER instructions should<br>
+ /// be folded into the enclosed atomic intrinsic instruction by the<br>
+ /// combiner.<br>
+ bool ShouldFoldAtomicFences;<br>
<br>
- /// isLegalICmpImmediate - Return true if the specified immediate is legal<br>
- /// icmp immediate, that is the target has icmp instructions which can compare<br>
- /// a register against the immediate without having to materialize the<br>
- /// immediate into a register.<br>
- virtual bool isLegalICmpImmediate(int64_t) const {<br>
- return true;<br>
- }<br>
-<br>
- /// isLegalAddImmediate - Return true if the specified immediate is legal<br>
- /// add immediate, that is the target has add instructions which can add<br>
- /// a register with the immediate without having to materialize the<br>
- /// immediate into a register.<br>
- virtual bool isLegalAddImmediate(int64_t) const {<br>
- return true;<br>
- }<br>
-<br>
- /// isTruncateFree - Return true if it's free to truncate a value of<br>
- /// type Ty1 to type Ty2. e.g. On x86 it's free to truncate a i32 value in<br>
- /// register EAX to i16 by referencing its sub-register AX.<br>
- virtual bool isTruncateFree(Type * /*Ty1*/, Type * /*Ty2*/) const {<br>
- return false;<br>
- }<br>
-<br>
- virtual bool isTruncateFree(EVT /*VT1*/, EVT /*VT2*/) const {<br>
- return false;<br>
- }<br>
-<br>
- /// isZExtFree - Return true if any actual instruction that defines a<br>
- /// value of type Ty1 implicitly zero-extends the value to Ty2 in the result<br>
- /// register. This does not necessarily include registers defined in<br>
- /// unknown ways, such as incoming arguments, or copies from unknown<br>
- /// virtual registers. Also, if isTruncateFree(Ty2, Ty1) is true, this<br>
- /// does not necessarily apply to truncate instructions. e.g. on x86-64,<br>
- /// all instructions that define 32-bit values implicit zero-extend the<br>
- /// result out to 64 bits.<br>
- virtual bool isZExtFree(Type * /*Ty1*/, Type * /*Ty2*/) const {<br>
- return false;<br>
- }<br>
-<br>
- virtual bool isZExtFree(EVT /*VT1*/, EVT /*VT2*/) const {<br>
- return false;<br>
- }<br>
-<br>
- /// isZExtFree - Return true if zero-extending the specific node Val to type<br>
- /// VT2 is free (either because it's implicitly zero-extended such as ARM<br>
- /// ldrb / ldrh or because it's folded such as X86 zero-extending loads).<br>
- virtual bool isZExtFree(SDValue Val, EVT VT2) const {<br>
- return isZExtFree(Val.getValueType(), VT2);<br>
- }<br>
-<br>
- /// isFNegFree - Return true if an fneg operation is free to the point where<br>
- /// it is never worthwhile to replace it with a bitwise operation.<br>
- virtual bool isFNegFree(EVT) const {<br>
- return false;<br>
- }<br>
-<br>
- /// isFAbsFree - Return true if an fneg operation is free to the point where<br>
- /// it is never worthwhile to replace it with a bitwise operation.<br>
- virtual bool isFAbsFree(EVT) const {<br>
- return false;<br>
- }<br>
-<br>
- /// isFMAFasterThanMulAndAdd - Return true if an FMA operation is faster than<br>
- /// a pair of mul and add instructions. fmuladd intrinsics will be expanded to<br>
- /// FMAs when this method returns true (and FMAs are legal), otherwise fmuladd<br>
- /// is expanded to mul + add.<br>
- virtual bool isFMAFasterThanMulAndAdd(EVT) const {<br>
- return false;<br>
- }<br>
-<br>
- /// isNarrowingProfitable - Return true if it's profitable to narrow<br>
- /// operations of type VT1 to VT2. e.g. on x86, it's profitable to narrow<br>
- /// from i32 to i8 but not from i32 to i16.<br>
- virtual bool isNarrowingProfitable(EVT /*VT1*/, EVT /*VT2*/) const {<br>
- return false;<br>
- }<br>
-<br>
- //===--------------------------------------------------------------------===//<br>
- // Div utility functions<br>
- //<br>
- SDValue BuildExactSDIV(SDValue Op1, SDValue Op2, DebugLoc dl,<br>
- SelectionDAG &DAG) const;<br>
- SDValue BuildSDIV(SDNode *N, SelectionDAG &DAG, bool IsAfterLegalization,<br>
- std::vector<SDNode*> *Created) const;<br>
- SDValue BuildUDIV(SDNode *N, SelectionDAG &DAG, bool IsAfterLegalization,<br>
- std::vector<SDNode*> *Created) const;<br>
-<br>
-<br>
- //===--------------------------------------------------------------------===//<br>
- // Runtime Library hooks<br>
- //<br>
-<br>
- /// setLibcallName - Rename the default libcall routine name for the specified<br>
- /// libcall.<br>
- void setLibcallName(RTLIB::Libcall Call, const char *Name) {<br>
- LibcallRoutineNames[Call] = Name;<br>
- }<br>
-<br>
- /// getLibcallName - Get the libcall routine name for the specified libcall.<br>
- ///<br>
- const char *getLibcallName(RTLIB::Libcall Call) const {<br>
- return LibcallRoutineNames[Call];<br>
- }<br>
-<br>
- /// setCmpLibcallCC - Override the default CondCode to be used to test the<br>
- /// result of the comparison libcall against zero.<br>
- void setCmpLibcallCC(RTLIB::Libcall Call, ISD::CondCode CC) {<br>
- CmpLibcallCCs[Call] = CC;<br>
- }<br>
-<br>
- /// getCmpLibcallCC - Get the CondCode that's to be used to test the result of<br>
- /// the comparison libcall against zero.<br>
- ISD::CondCode getCmpLibcallCC(RTLIB::Libcall Call) const {<br>
- return CmpLibcallCCs[Call];<br>
- }<br>
-<br>
- /// setLibcallCallingConv - Set the CallingConv that should be used for the<br>
- /// specified libcall.<br>
- void setLibcallCallingConv(RTLIB::Libcall Call, CallingConv::ID CC) {<br>
- LibcallCallingConvs[Call] = CC;<br>
- }<br>
-<br>
- /// getLibcallCallingConv - Get the CallingConv that should be used for the<br>
- /// specified libcall.<br>
- CallingConv::ID getLibcallCallingConv(RTLIB::Libcall Call) const {<br>
- return LibcallCallingConvs[Call];<br>
- }<br>
-<br>
- bool isInTailCallPosition(SelectionDAG &DAG, SDNode *Node,<br>
- SDValue &Chain) const;<br>
-<br>
- void softenSetCCOperands(SelectionDAG &DAG, EVT VT,<br>
- SDValue &NewLHS, SDValue &NewRHS,<br>
- ISD::CondCode &CCCode, DebugLoc DL) const;<br>
-<br>
- SDValue makeLibCall(SelectionDAG &DAG, RTLIB::Libcall LC, EVT RetVT,<br>
- const SDValue *Ops, unsigned NumOps,<br>
- bool isSigned, DebugLoc dl) const;<br>
-<br>
-private:<br>
- const TargetMachine &TM;<br>
- const DataLayout *TD;<br>
- const TargetLoweringObjectFile &TLOF;<br>
-<br>
- /// PointerTy - The type to use for pointers for the default address space,<br>
- /// usually i32 or i64.<br>
- ///<br>
- MVT PointerTy;<br>
-<br>
- /// IsLittleEndian - True if this is a little endian target.<br>
- ///<br>
- bool IsLittleEndian;<br>
-<br>
- /// SelectIsExpensive - Tells the code generator not to expand operations<br>
- /// into sequences that use the select operations if possible.<br>
- bool SelectIsExpensive;<br>
-<br>
- /// IntDivIsCheap - Tells the code generator not to expand integer divides by<br>
- /// constants into a sequence of muls, adds, and shifts. This is a hack until<br>
- /// a real cost model is in place. If we ever optimize for size, this will be<br>
- /// set to true unconditionally.<br>
- bool IntDivIsCheap;<br>
-<br>
- /// BypassSlowDivMap - Tells the code generator to bypass slow divide or<br>
- /// remainder instructions. For example, BypassSlowDivWidths[32,8] tells the<br>
- /// code generator to bypass 32-bit integer div/rem with an 8-bit unsigned<br>
- /// integer div/rem when the operands are positive and less than 256.<br>
- DenseMap <unsigned int, unsigned int> BypassSlowDivWidths;<br>
-<br>
- /// Pow2DivIsCheap - Tells the code generator that it shouldn't generate<br>
- /// srl/add/sra for a signed divide by power of two, and let the target handle<br>
- /// it.<br>
- bool Pow2DivIsCheap;<br>
-<br>
- /// JumpIsExpensive - Tells the code generator that it shouldn't generate<br>
- /// extra flow control instructions and should attempt to combine flow<br>
- /// control instructions via predication.<br>
- bool JumpIsExpensive;<br>
-<br>
- /// UseUnderscoreSetJmp - This target prefers to use _setjmp to implement<br>
- /// llvm.setjmp. Defaults to false.<br>
- bool UseUnderscoreSetJmp;<br>
-<br>
- /// UseUnderscoreLongJmp - This target prefers to use _longjmp to implement<br>
- /// llvm.longjmp. Defaults to false.<br>
- bool UseUnderscoreLongJmp;<br>
-<br>
- /// SupportJumpTables - Whether the target can generate code for jumptables.<br>
- /// If it's not true, then each jumptable must be lowered into if-then-else's.<br>
- bool SupportJumpTables;<br>
-<br>
- /// MinimumJumpTableEntries - Number of blocks threshold to use jump tables.<br>
- int MinimumJumpTableEntries;<br>
-<br>
- /// BooleanContents - Information about the contents of the high-bits in<br>
- /// boolean values held in a type wider than i1. See getBooleanContents.<br>
- BooleanContent BooleanContents;<br>
- /// BooleanVectorContents - Information about the contents of the high-bits<br>
- /// in boolean vector values when the element type is wider than i1. See<br>
- /// getBooleanContents.<br>
- BooleanContent BooleanVectorContents;<br>
-<br>
- /// SchedPreferenceInfo - The target scheduling preference: shortest possible<br>
- /// total cycles or lowest register usage.<br>
- Sched::Preference SchedPreferenceInfo;<br>
-<br>
- /// JumpBufSize - The size, in bytes, of the target's jmp_buf buffers<br>
- unsigned JumpBufSize;<br>
-<br>
- /// JumpBufAlignment - The alignment, in bytes, of the target's jmp_buf<br>
- /// buffers<br>
- unsigned JumpBufAlignment;<br>
-<br>
- /// MinStackArgumentAlignment - The minimum alignment that any argument<br>
- /// on the stack needs to have.<br>
- ///<br>
- unsigned MinStackArgumentAlignment;<br>
-<br>
- /// MinFunctionAlignment - The minimum function alignment (used when<br>
- /// optimizing for size, and to prevent explicitly provided alignment<br>
- /// from leading to incorrect code).<br>
- ///<br>
- unsigned MinFunctionAlignment;<br>
-<br>
- /// PrefFunctionAlignment - The preferred function alignment (used when<br>
- /// alignment unspecified and optimizing for speed).<br>
- ///<br>
- unsigned PrefFunctionAlignment;<br>
-<br>
- /// PrefLoopAlignment - The preferred loop alignment.<br>
- ///<br>
- unsigned PrefLoopAlignment;<br>
-<br>
- /// ShouldFoldAtomicFences - Whether fencing MEMBARRIER instructions should<br>
- /// be folded into the enclosed atomic intrinsic instruction by the<br>
- /// combiner.<br>
- bool ShouldFoldAtomicFences;<br>
-<br>
- /// InsertFencesForAtomic - Whether the DAG builder should automatically<br>
- /// insert fences and reduce ordering for atomics. (This will be set for<br>
- /// for most architectures with weak memory ordering.)<br>
- bool InsertFencesForAtomic;<br>
+ /// InsertFencesForAtomic - Whether the DAG builder should automatically<br>
+ /// insert fences and reduce ordering for atomics. (This will be set for<br>
+ /// for most architectures with weak memory ordering.)<br>
+ bool InsertFencesForAtomic;<br>
<br>
/// StackPointerRegisterToSaveRestore - If set to a physical register, this<br>
/// specifies the register that llvm.savestack/llvm.restorestack should save<br>
@@ -2246,12 +1643,627 @@<br>
/// more expensive than a branch if the branch is usually predicted right.<br>
bool predictableSelectIsExpensive;<br>
<br>
-private:<br>
+protected:<br>
/// isLegalRC - Return true if the value types that can be represented by the<br>
/// specified register class are all legal.<br>
bool isLegalRC(const TargetRegisterClass *RC) const;<br>
};<br>
<br>
+//===----------------------------------------------------------------------===//<br>
+/// TargetLowering - This class defines information used to lower LLVM code to<br>
+/// legal SelectionDAG operators that the target instruction selector can accept<br>
+/// natively.<br>
+///<br>
+/// This class also defines callbacks that targets must implement to lower<br>
+/// target-specific constructs to SelectionDAG operators.<br>
+///<br>
+class TargetLowering : public TargetLoweringBase {<br>
+ TargetLowering(const TargetLowering&) LLVM_DELETED_FUNCTION;<br>
+ void operator=(const TargetLowering&) LLVM_DELETED_FUNCTION;<br>
+<br>
+public:<br>
+ /// NOTE: The constructor takes ownership of TLOF.<br>
+ explicit TargetLowering(const TargetMachine &TM,<br>
+ const TargetLoweringObjectFile *TLOF);<br>
+<br>
+ /// getPreIndexedAddressParts - Returns true by value, and the base pointer,<br>
+ /// offset pointer and addressing mode by reference, if the node's address<br>
+ /// can be legally represented as a pre-indexed load / store address.<br>
+ virtual bool getPreIndexedAddressParts(SDNode * /*N*/, SDValue &/*Base*/,<br>
+ SDValue &/*Offset*/,<br>
+ ISD::MemIndexedMode &/*AM*/,<br>
+ SelectionDAG &/*DAG*/) const {<br>
+ return false;<br>
+ }<br>
+<br>
+ /// getPostIndexedAddressParts - Returns true by value, and the base pointer,<br>
+ /// offset pointer and addressing mode by reference, if this node can be<br>
+ /// combined with a load / store to form a post-indexed load / store.<br>
+ virtual bool getPostIndexedAddressParts(SDNode * /*N*/, SDNode * /*Op*/,<br>
+ SDValue &/*Base*/, SDValue &/*Offset*/,<br>
+ ISD::MemIndexedMode &/*AM*/,<br>
+ SelectionDAG &/*DAG*/) const {<br>
+ return false;<br>
+ }<br>
+<br>
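// A sketch, not from this patch: one way a hypothetical target<br>
// ("MyTargetLowering") could implement the pre-indexed hook by matching<br>
// (add base, constant) load addresses. The class name and the blanket use<br>
// of ISD::PRE_INC are illustrative assumptions, not LLVM code.<br>
bool MyTargetLowering::getPreIndexedAddressParts(SDNode *N, SDValue &Base,<br>
                                                 SDValue &Offset,<br>
                                                 ISD::MemIndexedMode &AM,<br>
                                                 SelectionDAG &DAG) const {<br>
  LoadSDNode *LD = dyn_cast<LoadSDNode>(N);<br>
  if (!LD)<br>
    return false;<br>
  SDValue Addr = LD->getBasePtr();<br>
  // Only fold simple reg+imm addresses; a real target would also check that<br>
  // the immediate fits its pre-increment encoding.<br>
  if (Addr.getOpcode() != ISD::ADD ||<br>
      !isa<ConstantSDNode>(Addr.getOperand(1)))<br>
    return false;<br>
  Base = Addr.getOperand(0);<br>
  Offset = Addr.getOperand(1);<br>
  AM = ISD::PRE_INC;<br>
  return true;<br>
}<br>
<br>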
+ /// getJumpTableEncoding - Return the entry encoding for a jump table in the<br>
+ /// current function. The returned value is a member of the<br>
+ /// MachineJumpTableInfo::JTEntryKind enum.<br>
+ virtual unsigned getJumpTableEncoding() const;<br>
+<br>
+ virtual const MCExpr *<br>
+ LowerCustomJumpTableEntry(const MachineJumpTableInfo * /*MJTI*/,<br>
+ const MachineBasicBlock * /*MBB*/, unsigned /*uid*/,<br>
+ MCContext &/*Ctx*/) const {<br>
+ llvm_unreachable("Need to implement this hook if target has custom JTIs");<br>
+ }<br>
+<br>
+ /// getPICJumpTableRelocBase - Returns the relocation base for the given PIC<br>
+ /// jumptable.<br>
+ virtual SDValue getPICJumpTableRelocBase(SDValue Table,<br>
+ SelectionDAG &DAG) const;<br>
+<br>
+ /// getPICJumpTableRelocBaseExpr - This returns the relocation base for the<br>
+ /// given PIC jumptable, the same as getPICJumpTableRelocBase, but as an<br>
+ /// MCExpr.<br>
+ virtual const MCExpr *<br>
+ getPICJumpTableRelocBaseExpr(const MachineFunction *MF,<br>
+ unsigned JTI, MCContext &Ctx) const;<br>
+<br>
+ /// isOffsetFoldingLegal - Return true if folding a constant offset<br>
+ /// with the given GlobalAddress is legal. It is frequently not legal in<br>
+ /// PIC relocation models.<br>
+ virtual bool isOffsetFoldingLegal(const GlobalAddressSDNode *GA) const;<br>
+<br>
+ bool isInTailCallPosition(SelectionDAG &DAG, SDNode *Node,<br>
+ SDValue &Chain) const;<br>
+<br>
+ void softenSetCCOperands(SelectionDAG &DAG, EVT VT,<br>
+ SDValue &NewLHS, SDValue &NewRHS,<br>
+ ISD::CondCode &CCCode, DebugLoc DL) const;<br>
+<br>
+ SDValue makeLibCall(SelectionDAG &DAG, RTLIB::Libcall LC, EVT RetVT,<br>
+ const SDValue *Ops, unsigned NumOps,<br>
+ bool isSigned, DebugLoc dl) const;<br>
+<br>
+ //===--------------------------------------------------------------------===//<br>
+ // TargetLowering Optimization Methods<br>
+ //<br>
+<br>
+ /// TargetLoweringOpt - A convenience struct that encapsulates a DAG and two<br>
+ /// SDValues, for returning information from TargetLowering to its clients<br>
+ /// that want to combine.<br>
+ struct TargetLoweringOpt {<br>
+ SelectionDAG &DAG;<br>
+ bool LegalTys;<br>
+ bool LegalOps;<br>
+ SDValue Old;<br>
+ SDValue New;<br>
+<br>
+ explicit TargetLoweringOpt(SelectionDAG &InDAG,<br>
+ bool LT, bool LO) :<br>
+ DAG(InDAG), LegalTys(LT), LegalOps(LO) {}<br>
+<br>
+ bool LegalTypes() const { return LegalTys; }<br>
+ bool LegalOperations() const { return LegalOps; }<br>
+<br>
+ bool CombineTo(SDValue O, SDValue N) {<br>
+ Old = O;<br>
+ New = N;<br>
+ return true;<br>
+ }<br>
+<br>
+ /// ShrinkDemandedConstant - Check to see if the specified operand of the<br>
+ /// specified instruction is a constant integer. If so, check to see if<br>
+ /// there are any bits set in the constant that are not demanded. If so,<br>
+ /// shrink the constant and return true.<br>
+ bool ShrinkDemandedConstant(SDValue Op, const APInt &Demanded);<br>
+<br>
+ /// ShrinkDemandedOp - Convert x+y to (VT)((SmallVT)x+(SmallVT)y) if the<br>
+ /// casts are free. This uses isZExtFree and ZERO_EXTEND for the widening<br>
+ /// cast, but it could be generalized for targets with other types of<br>
+ /// implicit widening casts.<br>
+ bool ShrinkDemandedOp(SDValue Op, unsigned BitWidth, const APInt &Demanded,<br>
+ DebugLoc dl);<br>
+ };<br>
+<br>
+ /// SimplifyDemandedBits - Look at Op. At this point, we know that only the<br>
+ /// DemandedMask bits of the result of Op are ever used downstream. If we can<br>
+ /// use this information to simplify Op, create a new simplified DAG node and<br>
+ /// return true, returning the original and new nodes in Old and New.<br>
+ /// Otherwise, analyze the expression and return a mask of KnownOne and<br>
+ /// KnownZero bits for the expression (used to simplify the caller).<br>
+ /// The KnownZero/One bits may only be accurate for those bits in the<br>
+ /// DemandedMask.<br>
+ bool SimplifyDemandedBits(SDValue Op, const APInt &DemandedMask,<br>
+ APInt &KnownZero, APInt &KnownOne,<br>
+ TargetLoweringOpt &TLO, unsigned Depth = 0) const;<br>
+<br>
+ /// computeMaskedBitsForTargetNode - Determine which of the bits specified in<br>
+ /// Mask are known to be either zero or one and return them in the<br>
+ /// KnownZero/KnownOne bitsets.<br>
+ virtual void computeMaskedBitsForTargetNode(const SDValue Op,<br>
+ APInt &KnownZero,<br>
+ APInt &KnownOne,<br>
+ const SelectionDAG &DAG,<br>
+ unsigned Depth = 0) const;<br>
+<br>
+ /// ComputeNumSignBitsForTargetNode - This method can be implemented by<br>
+ /// targets that want to expose additional information about sign bits to the<br>
+ /// DAG Combiner.<br>
+ virtual unsigned ComputeNumSignBitsForTargetNode(SDValue Op,<br>
+ unsigned Depth = 0) const;<br>
+<br>
+ struct DAGCombinerInfo {<br>
+ void *DC; // The DAG Combiner object.<br>
+ CombineLevel Level;<br>
+ bool CalledByLegalizer;<br>
+ public:<br>
+ SelectionDAG &DAG;<br>
+<br>
+ DAGCombinerInfo(SelectionDAG &dag, CombineLevel level, bool cl, void *dc)<br>
+ : DC(dc), Level(level), CalledByLegalizer(cl), DAG(dag) {}<br>
+<br>
+ bool isBeforeLegalize() const { return Level == BeforeLegalizeTypes; }<br>
+ bool isBeforeLegalizeOps() const { return Level < AfterLegalizeVectorOps; }<br>
+ bool isAfterLegalizeVectorOps() const {<br>
+ return Level == AfterLegalizeDAG;<br>
+ }<br>
+ CombineLevel getDAGCombineLevel() { return Level; }<br>
+ bool isCalledByLegalizer() const { return CalledByLegalizer; }<br>
+<br>
+ void AddToWorklist(SDNode *N);<br>
+ void RemoveFromWorklist(SDNode *N);<br>
+ SDValue CombineTo(SDNode *N, const std::vector<SDValue> &To,<br>
+ bool AddTo = true);<br>
+ SDValue CombineTo(SDNode *N, SDValue Res, bool AddTo = true);<br>
+ SDValue CombineTo(SDNode *N, SDValue Res0, SDValue Res1, bool AddTo = true);<br>
+<br>
+ void CommitTargetLoweringOpt(const TargetLoweringOpt &TLO);<br>
+ };<br>
+<br>
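// A sketch, not from this patch, of the usual way a target drives<br>
// SimplifyDemandedBits from a combine and commits the result through<br>
// DAGCombinerInfo. combineFoo and the 16-bit demanded mask are hypothetical.<br>
SDValue MyTargetLowering::combineFoo(SDNode *N, DAGCombinerInfo &DCI) const {<br>
  SelectionDAG &DAG = DCI.DAG;<br>
  TargetLoweringOpt TLO(DAG, !DCI.isBeforeLegalize(),<br>
                        !DCI.isBeforeLegalizeOps());<br>
  // Suppose only the low 16 bits of operand 0 ever matter for this node.<br>
  APInt Demanded = APInt::getLowBitsSet(32, 16);<br>
  APInt KnownZero, KnownOne;<br>
  if (SimplifyDemandedBits(N->getOperand(0), Demanded, KnownZero, KnownOne,<br>
                           TLO)) {<br>
    DCI.CommitTargetLoweringOpt(TLO); // replaces TLO.Old with TLO.New<br>
    return SDValue(N, 0);<br>
  }<br>
  return SDValue();<br>
}<br>
<br>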
+ /// SimplifySetCC - Try to simplify a setcc built with the specified operands<br>
+ /// and cc. If it is unable to simplify it, return a null SDValue.<br>
+ SDValue SimplifySetCC(EVT VT, SDValue N0, SDValue N1,<br>
+ ISD::CondCode Cond, bool foldBooleans,<br>
+ DAGCombinerInfo &DCI, DebugLoc dl) const;<br>
+<br>
+ /// isGAPlusOffset - Returns true (and the GlobalValue and the offset) if the<br>
+ /// node is a GlobalAddress + offset.<br>
+ virtual bool<br>
+ isGAPlusOffset(SDNode *N, const GlobalValue* &GA, int64_t &Offset) const;<br>
+<br>
+ /// PerformDAGCombine - This method will be invoked for all target nodes and<br>
+ /// for any target-independent nodes that the target has registered with<br>
+ /// invoke it for.<br>
+ ///<br>
+ /// The semantics are as follows:<br>
+ /// Return Value:<br>
+ /// SDValue.Val == 0 - No change was made<br>
+ /// SDValue.Val == N - N was replaced, is dead, and is already handled.<br>
+ /// otherwise - N should be replaced by the returned Operand.<br>
+ ///<br>
+ /// In addition, methods provided by DAGCombinerInfo may be used to perform<br>
+ /// more complex transformations.<br>
+ ///<br>
+ virtual SDValue PerformDAGCombine(SDNode *N, DAGCombinerInfo &DCI) const;<br>
+<br>
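// A sketch, not from this patch, showing the return-value convention in<br>
// practice; ISD::ADD and the combineFoo helper are illustrative assumptions.<br>
SDValue MyTargetLowering::PerformDAGCombine(SDNode *N,<br>
                                            DAGCombinerInfo &DCI) const {<br>
  switch (N->getOpcode()) {<br>
  default: break;<br>
  case ISD::ADD:<br>
    return combineFoo(N, DCI);<br>
  }<br>
  return SDValue(); // null SDValue: no change was made<br>
}<br>
<br>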
+ /// isTypeDesirableForOp - Return true if the target has native support for<br>
+ /// the specified value type and it is 'desirable' to use the type for the<br>
+ /// given node type. e.g. On x86 i16 is legal, but undesirable since i16<br>
+ /// instruction encodings are longer and some i16 instructions are slow.<br>
+ virtual bool isTypeDesirableForOp(unsigned /*Opc*/, EVT VT) const {<br>
+ // By default, assume all legal types are desirable.<br>
+ return isTypeLegal(VT);<br>
+ }<br>
+<br>
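// A sketch, not from this patch: an x86-flavored override that steers the<br>
// combiner away from legal-but-slow i16 operations. The opcode list is an<br>
// invented example.<br>
bool MyTargetLowering::isTypeDesirableForOp(unsigned Opc, EVT VT) const {<br>
  if (VT == MVT::i16) {<br>
    switch (Opc) {<br>
    default: break;<br>
    case ISD::LOAD:<br>
    case ISD::SIGN_EXTEND:<br>
    case ISD::ZERO_EXTEND:<br>
      return false; // legal, but longer encodings and slower ops<br>
    }<br>
  }<br>
  return isTypeLegal(VT);<br>
}<br>
<br>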
+ /// isDesirableToTransformToIntegerOp - Return true if it is profitable for<br>
+ /// the dag combiner to transform a floating point op of the specified opcode<br>
+ /// to an equivalent op of an integer type. e.g. f32 load -> i32 load can be<br>
+ /// profitable on ARM.<br>
+ virtual bool isDesirableToTransformToIntegerOp(unsigned /*Opc*/,<br>
+ EVT /*VT*/) const {<br>
+ return false;<br>
+ }<br>
+<br>
+ /// IsDesirableToPromoteOp - This method queries the target whether it is<br>
+ /// beneficial for dag combiner to promote the specified node. If true, it<br>
+ /// should return the desired promotion type by reference.<br>
+ virtual bool IsDesirableToPromoteOp(SDValue /*Op*/, EVT &/*PVT*/) const {<br>
+ return false;<br>
+ }<br>
+<br>
+ //===--------------------------------------------------------------------===//<br>
+ // Lowering methods - These methods must be implemented by targets so that<br>
+ // the SelectionDAGBuilder code knows how to lower these.<br>
+ //<br>
+<br>
+ /// LowerFormalArguments - This hook must be implemented to lower the<br>
+ /// incoming (formal) arguments, described by the Ins array, into the<br>
+ /// specified DAG. The implementation should fill in the InVals array<br>
+ /// with legal-type argument values, and return the resulting token<br>
+ /// chain value.<br>
+ ///<br>
+ virtual SDValue<br>
+ LowerFormalArguments(SDValue /*Chain*/, CallingConv::ID /*CallConv*/,<br>
+ bool /*isVarArg*/,<br>
+ const SmallVectorImpl<ISD::InputArg> &/*Ins*/,<br>
+ DebugLoc /*dl*/, SelectionDAG &/*DAG*/,<br>
+ SmallVectorImpl<SDValue> &/*InVals*/) const {<br>
+ llvm_unreachable("Not Implemented");<br>
+ }<br>
+<br>
+ struct ArgListEntry {<br>
+ SDValue Node;<br>
+ Type* Ty;<br>
+ bool isSExt : 1;<br>
+ bool isZExt : 1;<br>
+ bool isInReg : 1;<br>
+ bool isSRet : 1;<br>
+ bool isNest : 1;<br>
+ bool isByVal : 1;<br>
+ uint16_t Alignment;<br>
+<br>
+ ArgListEntry() : isSExt(false), isZExt(false), isInReg(false),<br>
+ isSRet(false), isNest(false), isByVal(false), Alignment(0) { }<br>
+ };<br>
+ typedef std::vector<ArgListEntry> ArgListTy;<br>
+<br>
+ /// CallLoweringInfo - This structure contains all information that is<br>
+ /// necessary for lowering calls. It is passed to TLI::LowerCallTo when the<br>
+ /// SelectionDAG builder needs to lower a call, and targets will see this<br>
+ /// struct in their LowerCall implementation.<br>
+ struct CallLoweringInfo {<br>
+ SDValue Chain;<br>
+ Type *RetTy;<br>
+ bool RetSExt : 1;<br>
+ bool RetZExt : 1;<br>
+ bool IsVarArg : 1;<br>
+ bool IsInReg : 1;<br>
+ bool DoesNotReturn : 1;<br>
+ bool IsReturnValueUsed : 1;<br>
+<br>
+ // IsTailCall should be modified by implementations of<br>
+ // TargetLowering::LowerCall that perform tail call conversions.<br>
+ bool IsTailCall;<br>
+<br>
+ unsigned NumFixedArgs;<br>
+ CallingConv::ID CallConv;<br>
+ SDValue Callee;<br>
+ ArgListTy &Args;<br>
+ SelectionDAG &DAG;<br>
+ DebugLoc DL;<br>
+ ImmutableCallSite *CS;<br>
+ SmallVector<ISD::OutputArg, 32> Outs;<br>
+ SmallVector<SDValue, 32> OutVals;<br>
+ SmallVector<ISD::InputArg, 32> Ins;<br>
+<br>
+ /// CallLoweringInfo - Constructs a call lowering context based on the<br>
+ /// ImmutableCallSite \p cs.<br>
+ CallLoweringInfo(SDValue chain, Type *retTy,<br>
+ FunctionType *FTy, bool isTailCall, SDValue callee,<br>
+ ArgListTy &args, SelectionDAG &dag, DebugLoc dl,<br>
+ ImmutableCallSite &cs)<br>
+ : Chain(chain), RetTy(retTy), RetSExt(cs.paramHasAttr(0, Attribute::SExt)),<br>
+ RetZExt(cs.paramHasAttr(0, Attribute::ZExt)), IsVarArg(FTy->isVarArg()),<br>
+ IsInReg(cs.paramHasAttr(0, Attribute::InReg)),<br>
+ DoesNotReturn(cs.doesNotReturn()),<br>
+ IsReturnValueUsed(!cs.getInstruction()->use_empty()),<br>
+ IsTailCall(isTailCall), NumFixedArgs(FTy->getNumParams()),<br>
+ CallConv(cs.getCallingConv()), Callee(callee), Args(args), DAG(dag),<br>
+ DL(dl), CS(&cs) {}<br>
+<br>
+ /// CallLoweringInfo - Constructs a call lowering context based on the<br>
+ /// provided call information.<br>
+ CallLoweringInfo(SDValue chain, Type *retTy, bool retSExt, bool retZExt,<br>
+ bool isVarArg, bool isInReg, unsigned numFixedArgs,<br>
+ CallingConv::ID callConv, bool isTailCall,<br>
+ bool doesNotReturn, bool isReturnValueUsed, SDValue callee,<br>
+ ArgListTy &args, SelectionDAG &dag, DebugLoc dl)<br>
+ : Chain(chain), RetTy(retTy), RetSExt(retSExt), RetZExt(retZExt),<br>
+ IsVarArg(isVarArg), IsInReg(isInReg), DoesNotReturn(doesNotReturn),<br>
+ IsReturnValueUsed(isReturnValueUsed), IsTailCall(isTailCall),<br>
+ NumFixedArgs(numFixedArgs), CallConv(callConv), Callee(callee),<br>
+ Args(args), DAG(dag), DL(dl), CS(NULL) {}<br>
+ };<br>
+<br>
+ /// LowerCallTo - This function lowers an abstract call to a function into an<br>
+ /// actual call. This returns a pair of operands. The first element is the<br>
+ /// return value for the function (if RetTy is not VoidTy). The second<br>
+ /// element is the outgoing token chain. It calls LowerCall to do the actual<br>
+ /// lowering.<br>
+ std::pair<SDValue, SDValue> LowerCallTo(CallLoweringInfo &CLI) const;<br>
+<br>
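// A sketch, not from this patch: building a call with the second constructor<br>
// below, much as makeLibCall does internally. "memcount", Chain, Ptr, RetTy<br>
// and dl are assumed to be in scope; all of the values are illustrative.<br>
TargetLowering::ArgListTy Args;<br>
TargetLowering::ArgListEntry Entry;<br>
Entry.Node = Ptr; // an SDValue argument<br>
Entry.Ty = Type::getInt8PtrTy(*DAG.getContext());<br>
Args.push_back(Entry);<br>
SDValue Callee = DAG.getExternalSymbol("memcount", getPointerTy());<br>
TargetLowering::CallLoweringInfo CLI(Chain, RetTy,<br>
    /*retSExt=*/false, /*retZExt=*/false, /*isVarArg=*/false,<br>
    /*isInReg=*/false, /*numFixedArgs=*/1, CallingConv::C,<br>
    /*isTailCall=*/false, /*doesNotReturn=*/false,<br>
    /*isReturnValueUsed=*/true, Callee, Args, DAG, dl);<br>
std::pair<SDValue, SDValue> Result = LowerCallTo(CLI);<br>
// Result.first is the call's return value, Result.second the token chain.<br>
<br>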
+ /// LowerCall - This hook must be implemented to lower calls into the<br>
+ /// specified DAG. The outgoing arguments to the call are described<br>
+ /// by the Outs array, and the values to be returned by the call are<br>
+ /// described by the Ins array. The implementation should fill in the<br>
+ /// InVals array with legal-type return values from the call, and return<br>
+ /// the resulting token chain value.<br>
+ virtual SDValue<br>
+ LowerCall(CallLoweringInfo &/*CLI*/,<br>
+ SmallVectorImpl<SDValue> &/*InVals*/) const {<br>
+ llvm_unreachable("Not Implemented");<br>
+ }<br>
+<br>
+ /// HandleByVal - Target-specific cleanup for formal ByVal parameters.<br>
+ virtual void HandleByVal(CCState *, unsigned &, unsigned) const {}<br>
+<br>
+ /// CanLowerReturn - This hook should be implemented to check whether the<br>
+ /// return values described by the Outs array can fit into the return<br>
+ /// registers. If false is returned, an sret-demotion is performed.<br>
+ ///<br>
+ virtual bool CanLowerReturn(CallingConv::ID /*CallConv*/,<br>
+ MachineFunction &/*MF*/, bool /*isVarArg*/,<br>
+ const SmallVectorImpl<ISD::OutputArg> &/*Outs*/,<br>
+ LLVMContext &/*Context*/) const<br>
+ {<br>
+ // Return true by default to get preexisting behavior.<br>
+ return true;<br>
+ }<br>
+<br>
+ /// LowerReturn - This hook must be implemented to lower outgoing<br>
+ /// return values, described by the Outs array, into the specified<br>
+ /// DAG. The implementation should return the resulting token chain<br>
+ /// value.<br>
+ ///<br>
+ virtual SDValue<br>
+ LowerReturn(SDValue /*Chain*/, CallingConv::ID /*CallConv*/,<br>
+ bool /*isVarArg*/,<br>
+ const SmallVectorImpl<ISD::OutputArg> &/*Outs*/,<br>
+ const SmallVectorImpl<SDValue> &/*OutVals*/,<br>
+ DebugLoc /*dl*/, SelectionDAG &/*DAG*/) const {<br>
+ llvm_unreachable("Not Implemented");<br>
+ }<br>
+<br>
+ /// isUsedByReturnOnly - Return true if the result of the specified node is<br>
+ /// used by a return node only. It also computes and returns the input chain<br>
+ /// for the tail call.<br>
+ /// This is used to determine whether it is possible to codegen a libcall as<br>
+ /// a tail call at legalization time.<br>
+ virtual bool isUsedByReturnOnly(SDNode *, SDValue &Chain) const {<br>
+ return false;<br>
+ }<br>
+<br>
+ /// mayBeEmittedAsTailCall - Return true if the target may be able to emit the<br>
+ /// call instruction as a tail call. This is used by optimization passes to<br>
+ /// determine if it's profitable to duplicate return instructions to enable<br>
+ /// tailcall optimization.<br>
+ virtual bool mayBeEmittedAsTailCall(CallInst *) const {<br>
+ return false;<br>
+ }<br>
+<br>
+ /// getTypeForExtArgOrReturn - Return the type that should be used to zero or<br>
+ /// sign extend a zeroext/signext integer argument or return value.<br>
+ /// FIXME: Most C calling conventions require the return type to be promoted,<br>
+ /// but this is not true all the time, e.g. i1 on x86-64. It is also not<br>
+ /// necessary for non-C calling conventions. The frontend should handle this<br>
+ /// and include all of the necessary information.<br>
+ virtual MVT getTypeForExtArgOrReturn(MVT VT,<br>
+ ISD::NodeType /*ExtendKind*/) const {<br>
+ MVT MinVT = getRegisterType(MVT::i32);<br>
+ return VT.bitsLT(MinVT) ? MinVT : VT;<br>
+ }<br>
+<br>
+ /// LowerOperationWrapper - This callback is invoked by the type legalizer<br>
+ /// to legalize nodes with an illegal operand type but legal result types.<br>
+ /// It replaces the LowerOperation callback in the type Legalizer.<br>
+ /// The reason we cannot do away with LowerOperation entirely is that<br>
+ /// LegalizeDAG isn't yet ready to use this callback.<br>
+ /// TODO: Consider merging with ReplaceNodeResults.<br>
+ ///<br>
+ /// The target places new result values for the node in Results (their number<br>
+ /// and types must exactly match those of the original return values of<br>
+ /// the node), or leaves Results empty, which indicates that the node is not<br>
+ /// to be custom lowered after all.<br>
+ /// The default implementation calls LowerOperation.<br>
+ virtual void LowerOperationWrapper(SDNode *N,<br>
+ SmallVectorImpl<SDValue> &Results,<br>
+ SelectionDAG &DAG) const;<br>
+<br>
+ /// LowerOperation - This callback is invoked for operations that are<br>
+ /// unsupported by the target, which are registered to use 'custom' lowering,<br>
+ /// and whose defined values are all legal.<br>
+ /// If the target has no operations that require custom lowering, it need not<br>
+ /// implement this. The default implementation of this aborts.<br>
+ virtual SDValue LowerOperation(SDValue Op, SelectionDAG &DAG) const;<br>
+<br>
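// A sketch, not from this patch, of the conventional shape of a target's<br>
// LowerOperation; LowerGlobalAddress is a hypothetical helper.<br>
SDValue MyTargetLowering::LowerOperation(SDValue Op, SelectionDAG &DAG) const {<br>
  switch (Op.getOpcode()) {<br>
  case ISD::GlobalAddress:<br>
    return LowerGlobalAddress(Op, DAG);<br>
  default:<br>
    llvm_unreachable("unexpected operation registered as Custom");<br>
  }<br>
}<br>
<br>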
+ /// ReplaceNodeResults - This callback is invoked when a node result type is<br>
+ /// illegal for the target, and the operation was registered to use 'custom'<br>
+ /// lowering for that result type. The target places new result values for<br>
+ /// the node in Results (their number and types must exactly match those of<br>
+ /// the original return values of the node), or leaves Results empty, which<br>
+ /// indicates that the node is not to be custom lowered after all.<br>
+ ///<br>
+ /// If the target has no operations that require custom lowering, it need not<br>
+ /// implement this. The default implementation aborts.<br>
+ virtual void ReplaceNodeResults(SDNode * /*N*/,<br>
+ SmallVectorImpl<SDValue> &/*Results*/,<br>
+ SelectionDAG &/*DAG*/) const {<br>
+ llvm_unreachable("ReplaceNodeResults not implemented for this target!");<br>
+ }<br>
+<br>
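// A sketch, not from this patch, of a typical ReplaceNodeResults;<br>
// LowerFP_TO_SINT is a hypothetical helper returning a legal-typed value.<br>
void MyTargetLowering::ReplaceNodeResults(SDNode *N,<br>
                                          SmallVectorImpl<SDValue> &Results,<br>
                                          SelectionDAG &DAG) const {<br>
  switch (N->getOpcode()) {<br>
  default:<br>
    return; // leave Results empty: don't custom lower after all<br>
  case ISD::FP_TO_SINT:<br>
    // Pushed values must match the node's results in number and type.<br>
    Results.push_back(LowerFP_TO_SINT(SDValue(N, 0), DAG));<br>
    return;<br>
  }<br>
}<br>
<br>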
+ /// getTargetNodeName() - This method returns the name of a target specific<br>
+ /// DAG node.<br>
+ virtual const char *getTargetNodeName(unsigned Opcode) const;<br>
+<br>
+ /// createFastISel - This method returns a target specific FastISel object,<br>
+ /// or null if the target does not support "fast" ISel.<br>
+ virtual FastISel *createFastISel(FunctionLoweringInfo &,<br>
+ const TargetLibraryInfo *) const {<br>
+ return 0;<br>
+ }<br>
+<br>
+ //===--------------------------------------------------------------------===//<br>
+ // Inline Asm Support hooks<br>
+ //<br>
+<br>
+ /// ExpandInlineAsm - This hook allows the target to expand an inline asm<br>
+ /// call into explicit llvm code if it wants to. This is useful for<br>
+ /// turning simple inline asms into LLVM intrinsics, which gives the<br>
+ /// compiler more information about the behavior of the code.<br>
+ virtual bool ExpandInlineAsm(CallInst *) const {<br>
+ return false;<br>
+ }<br>
+<br>
+ enum ConstraintType {<br>
+ C_Register, // Constraint represents specific register(s).<br>
+ C_RegisterClass, // Constraint represents any of register(s) in class.<br>
+ C_Memory, // Memory constraint.<br>
+ C_Other, // Something else.<br>
+ C_Unknown // Unsupported constraint.<br>
+ };<br>
+<br>
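// A sketch, not from this patch: classifying a target-specific constraint<br>
// letter. Treating 'Q' as a memory constraint is an invented example.<br>
TargetLowering::ConstraintType<br>
MyTargetLowering::getConstraintType(const std::string &Constraint) const {<br>
  if (Constraint.size() == 1) {<br>
    switch (Constraint[0]) {<br>
    case 'Q': return C_Memory;<br>
    default: break;<br>
    }<br>
  }<br>
  return TargetLowering::getConstraintType(Constraint);<br>
}<br>
<br>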
+ enum ConstraintWeight {<br>
+ // Generic weights.<br>
+ CW_Invalid = -1, // No match.<br>
+ CW_Okay = 0, // Acceptable.<br>
+ CW_Good = 1, // Good weight.<br>
+ CW_Better = 2, // Better weight.<br>
+ CW_Best = 3, // Best weight.<br>
+<br>
+ // Well-known weights.<br>
+ CW_SpecificReg = CW_Okay, // Specific register operands.<br>
+ CW_Register = CW_Good, // Register operands.<br>
+ CW_Memory = CW_Better, // Memory operands.<br>
+ CW_Constant = CW_Best, // Constant operand.<br>
+ CW_Default = CW_Okay // Default or don't know type.<br>
+ };<br>
+<br>
+ /// AsmOperandInfo - This contains information for each constraint that we are<br>
+ /// lowering.<br>
+ struct AsmOperandInfo : public InlineAsm::ConstraintInfo {<br>
+ /// ConstraintCode - This contains the actual string for the code, like "m".<br>
+ /// TargetLowering picks the 'best' code from ConstraintInfo::Codes that<br>
+ /// most closely matches the operand.<br>
+ std::string ConstraintCode;<br>
+<br>
+ /// ConstraintType - Information about the constraint code, e.g. Register,<br>
+ /// RegisterClass, Memory, Other, Unknown.<br>
+ TargetLowering::ConstraintType ConstraintType;<br>
+<br>
+ /// CallOperandVal - If this is the result output operand or a<br>
+ /// clobber, this is null, otherwise it is the incoming operand to the<br>
+ /// CallInst. This gets modified as the asm is processed.<br>
+ Value *CallOperandVal;<br>
+<br>
+ /// ConstraintVT - The ValueType for the operand value.<br>
+ MVT ConstraintVT;<br>
+<br>
+ /// isMatchingInputConstraint - Return true if this is an input operand that<br>
+ /// is a matching constraint like "4".<br>
+ bool isMatchingInputConstraint() const;<br>
+<br>
+ /// getMatchedOperand - If this is an input matching constraint, this method<br>
+ /// returns the output operand it matches.<br>
+ unsigned getMatchedOperand() const;<br>
+<br>
+ /// Copy constructor for copying from an AsmOperandInfo.<br>
+ AsmOperandInfo(const AsmOperandInfo &info)<br>
+ : InlineAsm::ConstraintInfo(info),<br>
+ ConstraintCode(info.ConstraintCode),<br>
+ ConstraintType(info.ConstraintType),<br>
+ CallOperandVal(info.CallOperandVal),<br>
+ ConstraintVT(info.ConstraintVT) {<br>
+ }<br>
+<br>
+ /// Copy constructor for copying from a ConstraintInfo.<br>
+ AsmOperandInfo(const InlineAsm::ConstraintInfo &info)<br>
+ : InlineAsm::ConstraintInfo(info),<br>
+ ConstraintType(TargetLowering::C_Unknown),<br>
+ CallOperandVal(0), ConstraintVT(MVT::Other) {<br>
+ }<br>
+ };<br>
+<br>
+ typedef std::vector<AsmOperandInfo> AsmOperandInfoVector;<br>
+<br>
+ /// ParseConstraints - Split up the constraint string from the inline<br>
+ /// assembly value into the specific constraints and their prefixes,<br>
+ /// and also tie in the associated operand values.<br>
+ /// If this returns an empty vector, and if the constraint string itself<br>
+ /// isn't empty, there was an error parsing.<br>
+ virtual AsmOperandInfoVector ParseConstraints(ImmutableCallSite CS) const;<br>
+<br>
+ /// Examine constraint type and operand type and determine a weight value.<br>
+ /// The operand object must already have been set up with the operand type.<br>
+ virtual ConstraintWeight getMultipleConstraintMatchWeight(<br>
+ AsmOperandInfo &info, int maIndex) const;<br>
+<br>
+ /// Examine constraint string and operand type and determine a weight value.<br>
+ /// The operand object must already have been set up with the operand type.<br>
+ virtual ConstraintWeight getSingleConstraintMatchWeight(<br>
+ AsmOperandInfo &info, const char *constraint) const;<br>
+<br>
+ /// ComputeConstraintToUse - Determines the constraint code and constraint<br>
+ /// type to use for the specific AsmOperandInfo, setting<br>
+ /// OpInfo.ConstraintCode and OpInfo.ConstraintType. If the actual operand<br>
+ /// being passed in is available, it can be passed in as Op, otherwise an<br>
+ /// empty SDValue can be passed.<br>
+ virtual void ComputeConstraintToUse(AsmOperandInfo &OpInfo,<br>
+ SDValue Op,<br>
+ SelectionDAG *DAG = 0) const;<br>
+<br>
+ /// getConstraintType - Given a constraint, return the type of constraint it<br>
+ /// is for this target.<br>
+ virtual ConstraintType getConstraintType(const std::string &Constraint) const;<br>
+<br>
+ /// getRegForInlineAsmConstraint - Given a physical register constraint (e.g.<br>
+ /// {edx}), return the register number and the register class for the<br>
+ /// register.<br>
+ ///<br>
+ /// Given a register class constraint, like 'r', if this corresponds directly<br>
+ /// to an LLVM register class, return a register of 0 and the register class<br>
+ /// pointer.<br>
+ ///<br>
+ /// This should only be used for C_Register constraints. On error,<br>
+ /// this returns a register number of 0 and a null register class pointer.<br>
+ virtual std::pair<unsigned, const TargetRegisterClass*><br>
+ getRegForInlineAsmConstraint(const std::string &Constraint,<br>
+ EVT VT) const;<br>
+<br>
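// A sketch, not from this patch, of the two result shapes described above;<br>
// MyTarget::GPR32RegClass is a hypothetical register class.<br>
std::pair<unsigned, const TargetRegisterClass*><br>
MyTargetLowering::getRegForInlineAsmConstraint(const std::string &Constraint,<br>
                                               EVT VT) const {<br>
  if (Constraint.size() == 1 && Constraint[0] == 'r')<br>
    // Register-class constraint: register number 0 plus the class pointer.<br>
    return std::make_pair(0U, &MyTarget::GPR32RegClass);<br>
  // Defer to the default handling for {physreg}-style constraints.<br>
  return TargetLowering::getRegForInlineAsmConstraint(Constraint, VT);<br>
}<br>
<br>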
+ /// LowerXConstraint - Try to replace an X constraint, which matches anything,<br>
+ /// with another that has more specific requirements based on the type of the<br>
+ /// corresponding operand. This returns null if there is no replacement to<br>
+ /// make.<br>
+ virtual const char *LowerXConstraint(EVT ConstraintVT) const;<br>
+<br>
+ /// LowerAsmOperandForConstraint - Lower the specified operand into the Ops<br>
+ /// vector. If it is invalid, don't add anything to Ops.<br>
+ virtual void LowerAsmOperandForConstraint(SDValue Op, std::string &Constraint,<br>
+ std::vector<SDValue> &Ops,<br>
+ SelectionDAG &DAG) const;<br>
+<br>
+ //===--------------------------------------------------------------------===//<br>
+ // Div utility functions<br>
+ //<br>
+ SDValue BuildExactSDIV(SDValue Op1, SDValue Op2, DebugLoc dl,<br>
+ SelectionDAG &DAG) const;<br>
+ SDValue BuildSDIV(SDNode *N, SelectionDAG &DAG, bool IsAfterLegalization,<br>
+ std::vector<SDNode*> *Created) const;<br>
+ SDValue BuildUDIV(SDNode *N, SelectionDAG &DAG, bool IsAfterLegalization,<br>
+ std::vector<SDNode*> *Created) const;<br>
+<br>
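// A standalone sketch, not from this patch, of the arithmetic BuildExactSDIV<br>
// relies on: for an exact division by an odd divisor d, x / d equals x times<br>
// the multiplicative inverse of d mod 2^32, so the division becomes a single<br>
// multiply. Plain C++, independent of LLVM.<br>
#include <cassert><br>
#include <cstdint><br>
<br>
static uint32_t inverseMod2_32(uint32_t d) {<br>
  assert((d & 1) && "divisor must be odd to be invertible mod 2^32");<br>
  uint32_t x = d; // correct to 3 bits, since d*d == 1 (mod 8)<br>
  for (int i = 0; i < 4; ++i)<br>
    x *= 2 - d * x; // Newton's iteration doubles the correct bits<br>
  return x;<br>
}<br>
<br>
int main() {<br>
  uint32_t q = 123456, d = 7;<br>
  uint32_t x = q * d; // construct an exactly divisible dividend<br>
  assert(x * inverseMod2_32(d) == q);<br>
  return 0;<br>
}<br>
<br>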
+ //===--------------------------------------------------------------------===//<br>
+ // Instruction Emitting Hooks<br>
+ //<br>
+<br>
+ /// EmitInstrWithCustomInserter - This method should be implemented by targets<br>
+ /// that mark instructions with the 'usesCustomInserter' flag. These<br>
+ /// instructions are special in various ways, which require special support to<br>
+ /// insert. The specified MachineInstr is created but not inserted into any<br>
+ /// basic blocks, and this method is called to expand it into a sequence of<br>
+ /// instructions, potentially also creating new basic blocks and control flow.<br>
+ virtual MachineBasicBlock *<br>
+ EmitInstrWithCustomInserter(MachineInstr *MI, MachineBasicBlock *MBB) const;<br>
+<br>
+ /// AdjustInstrPostInstrSelection - This method should be implemented by<br>
+ /// targets that mark instructions with the 'hasPostISelHook' flag. These<br>
+ /// instructions must be adjusted after instruction selection by target hooks.<br>
+ /// e.g. To fill in optional defs for ARM 's' setting instructions.<br>
+ virtual void<br>
+ AdjustInstrPostInstrSelection(MachineInstr *MI, SDNode *Node) const;<br>
+};<br>
+<br>
/// GetReturnInfo - Given an LLVM IR type and return type attributes,<br>
/// compute the return value EVTs and flags, and optionally also<br>
/// the offsets, if the return value is being lowered to memory.<br>
<br>
Modified: llvm/trunk/lib/CodeGen/BasicTargetTransformInfo.cpp<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/BasicTargetTransformInfo.cpp?rev=172246&r1=172245&r2=172246&view=diff" target="_blank" class="cremed">http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/BasicTargetTransformInfo.cpp?rev=172246&r1=172245&r2=172246&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/lib/CodeGen/BasicTargetTransformInfo.cpp (original)<br>
+++ llvm/trunk/lib/CodeGen/BasicTargetTransformInfo.cpp Fri Jan 11 14:05:37 2013<br>
@@ -26,7 +26,7 @@<br>
namespace {<br>
<br>
class BasicTTI : public ImmutablePass, public TargetTransformInfo {<br>
- const TargetLowering *TLI;<br>
+ const TargetLoweringBase *TLI;<br>
<br>
/// Estimate the overhead of scalarizing an instruction. Insert and Extract<br>
/// are set if the result needs to be inserted and/or extracted from vectors.<br>
@@ -37,7 +37,7 @@<br>
llvm_unreachable("This pass cannot be directly constructed");<br>
}<br>
<br>
- BasicTTI(const TargetLowering *TLI) : ImmutablePass(ID), TLI(TLI) {<br>
+ BasicTTI(const TargetLoweringBase *TLI) : ImmutablePass(ID), TLI(TLI) {<br>
initializeBasicTTIPass(*PassRegistry::getPassRegistry());<br>
}<br>
<br>
@@ -112,7 +112,7 @@<br>
char BasicTTI::ID = 0;<br>
<br>
ImmutablePass *<br>
-llvm::createBasicTargetTransformInfoPass(const TargetLowering *TLI) {<br>
+llvm::createBasicTargetTransformInfoPass(const TargetLoweringBase *TLI) {<br>
return new BasicTTI(TLI);<br>
}<br>
<br>
@@ -128,7 +128,7 @@<br>
bool BasicTTI::isLegalAddressingMode(Type *Ty, GlobalValue *BaseGV,<br>
int64_t BaseOffset, bool HasBaseReg,<br>
int64_t Scale) const {<br>
- TargetLowering::AddrMode AM;<br>
+ TargetLoweringBase::AddrMode AM;<br>
AM.BaseGV = BaseGV;<br>
AM.BaseOffs = BaseOffset;<br>
AM.HasBaseReg = HasBaseReg;<br>
<br>
Modified: llvm/trunk/lib/CodeGen/CMakeLists.txt<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/CMakeLists.txt?rev=172246&r1=172245&r2=172246&view=diff" target="_blank" class="cremed">http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/CMakeLists.txt?rev=172246&r1=172245&r2=172246&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/lib/CodeGen/CMakeLists.txt (original)<br>
+++ llvm/trunk/lib/CodeGen/CMakeLists.txt Fri Jan 11 14:05:37 2013<br>
@@ -9,8 +9,8 @@<br>
CodeGen.cpp<br>
CodePlacementOpt.cpp<br>
CriticalAntiDepBreaker.cpp<br>
- DeadMachineInstructionElim.cpp<br>
DFAPacketizer.cpp<br>
+ DeadMachineInstructionElim.cpp<br>
DwarfEHPrepare.cpp<br>
EarlyIfConversion.cpp<br>
EdgeBundles.cpp<br>
@@ -32,21 +32,20 @@<br>
LiveInterval.cpp<br>
LiveIntervalAnalysis.cpp<br>
LiveIntervalUnion.cpp<br>
+ LiveRangeCalc.cpp<br>
+ LiveRangeEdit.cpp<br>
LiveRegMatrix.cpp<br>
LiveStackAnalysis.cpp<br>
LiveVariables.cpp<br>
- LiveRangeCalc.cpp<br>
- LiveRangeEdit.cpp<br>
LocalStackSlotAllocation.cpp<br>
MachineBasicBlock.cpp<br>
MachineBlockFrequencyInfo.cpp<br>
MachineBlockPlacement.cpp<br>
MachineBranchProbabilityInfo.cpp<br>
+ MachineCSE.cpp<br>
MachineCodeEmitter.cpp<br>
MachineCopyPropagation.cpp<br>
- MachineCSE.cpp<br>
MachineDominators.cpp<br>
- MachinePostDominators.cpp<br>
MachineFunction.cpp<br>
MachineFunctionAnalysis.cpp<br>
MachineFunctionPass.cpp<br>
@@ -58,6 +57,7 @@<br>
MachineModuleInfo.cpp<br>
MachineModuleInfoImpls.cpp<br>
MachinePassRegistry.cpp<br>
+ MachinePostDominators.cpp<br>
MachineRegisterInfo.cpp<br>
MachineSSAUpdater.cpp<br>
MachineScheduler.cpp<br>
@@ -91,16 +91,17 @@<br>
ShrinkWrapping.cpp<br>
SjLjEHPrepare.cpp<br>
SlotIndexes.cpp<br>
- Spiller.cpp<br>
SpillPlacement.cpp<br>
+ Spiller.cpp<br>
SplitKit.cpp<br>
+ StackColoring.cpp<br>
StackProtector.cpp<br>
StackSlotColoring.cpp<br>
- StackColoring.cpp<br>
StrongPHIElimination.cpp<br>
TailDuplication.cpp<br>
TargetFrameLoweringImpl.cpp<br>
TargetInstrInfo.cpp<br>
+ TargetLoweringBase.cpp<br>
TargetLoweringObjectFileImpl.cpp<br>
TargetOptionsImpl.cpp<br>
TargetRegisterInfo.cpp<br>
<br>
Modified: llvm/trunk/lib/CodeGen/DwarfEHPrepare.cpp<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/DwarfEHPrepare.cpp?rev=172246&r1=172245&r2=172246&view=diff" target="_blank" class="cremed">http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/DwarfEHPrepare.cpp?rev=172246&r1=172245&r2=172246&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/lib/CodeGen/DwarfEHPrepare.cpp (original)<br>
+++ llvm/trunk/lib/CodeGen/DwarfEHPrepare.cpp Fri Jan 11 14:05:37 2013<br>
@@ -33,7 +33,7 @@<br>
namespace {<br>
class DwarfEHPrepare : public FunctionPass {<br>
const TargetMachine *TM;<br>
- const TargetLowering *TLI;<br>
+ const TargetLoweringBase *TLI;<br>
<br>
// RewindFunction - _Unwind_Resume or the target equivalent.<br>
Constant *RewindFunction;<br>
<br>
Modified: llvm/trunk/lib/CodeGen/IfConversion.cpp<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/IfConversion.cpp?rev=172246&r1=172245&r2=172246&view=diff" target="_blank" class="cremed">http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/IfConversion.cpp?rev=172246&r1=172245&r2=172246&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/lib/CodeGen/IfConversion.cpp (original)<br>
+++ llvm/trunk/lib/CodeGen/IfConversion.cpp Fri Jan 11 14:05:37 2013<br>
@@ -151,7 +151,7 @@<br>
/// basic block number.<br>
std::vector<BBInfo> BBAnalysis;<br>
<br>
- const TargetLowering *TLI;<br>
+ const TargetLoweringBase *TLI;<br>
const TargetInstrInfo *TII;<br>
const TargetRegisterInfo *TRI;<br>
const InstrItineraryData *InstrItins;<br>
<br>
Modified: llvm/trunk/lib/CodeGen/MachineBlockPlacement.cpp<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/MachineBlockPlacement.cpp?rev=172246&r1=172245&r2=172246&view=diff" target="_blank" class="cremed">http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/MachineBlockPlacement.cpp?rev=172246&r1=172245&r2=172246&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/lib/CodeGen/MachineBlockPlacement.cpp (original)<br>
+++ llvm/trunk/lib/CodeGen/MachineBlockPlacement.cpp Fri Jan 11 14:05:37 2013<br>
@@ -171,7 +171,7 @@<br>
const TargetInstrInfo *TII;<br>
<br>
/// \brief A handle to the target's lowering info.<br>
- const TargetLowering *TLI;<br>
+ const TargetLoweringBase *TLI;<br>
<br>
/// \brief Allocator and owner of BlockChain structures.<br>
///<br>
<br>
Modified: llvm/trunk/lib/CodeGen/MachineLICM.cpp<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/MachineLICM.cpp?rev=172246&r1=172245&r2=172246&view=diff" target="_blank" class="cremed">http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/MachineLICM.cpp?rev=172246&r1=172245&r2=172246&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/lib/CodeGen/MachineLICM.cpp (original)<br>
+++ llvm/trunk/lib/CodeGen/MachineLICM.cpp Fri Jan 11 14:05:37 2013<br>
@@ -62,7 +62,7 @@<br>
class MachineLICM : public MachineFunctionPass {<br>
const TargetMachine *TM;<br>
const TargetInstrInfo *TII;<br>
- const TargetLowering *TLI;<br>
+ const TargetLoweringBase *TLI;<br>
const TargetRegisterInfo *TRI;<br>
const MachineFrameInfo *MFI;<br>
MachineRegisterInfo *MRI;<br>
<br>
Modified: llvm/trunk/lib/CodeGen/SelectionDAG/TargetLowering.cpp<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/SelectionDAG/TargetLowering.cpp?rev=172246&r1=172245&r2=172246&view=diff" target="_blank" class="cremed">http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/SelectionDAG/TargetLowering.cpp?rev=172246&r1=172245&r2=172246&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/lib/CodeGen/SelectionDAG/TargetLowering.cpp (original)<br>
+++ llvm/trunk/lib/CodeGen/SelectionDAG/TargetLowering.cpp Fri Jan 11 14:05:37 2013<br>
@@ -33,324 +33,6 @@<br>
#include <cctype><br>
using namespace llvm;<br>
<br>
-/// InitLibcallNames - Set default libcall names.<br>
-///<br>
-static void InitLibcallNames(const char **Names) {<br>
- Names[RTLIB::SHL_I16] = "__ashlhi3";<br>
- Names[RTLIB::SHL_I32] = "__ashlsi3";<br>
- Names[RTLIB::SHL_I64] = "__ashldi3";<br>
- Names[RTLIB::SHL_I128] = "__ashlti3";<br>
- Names[RTLIB::SRL_I16] = "__lshrhi3";<br>
- Names[RTLIB::SRL_I32] = "__lshrsi3";<br>
- Names[RTLIB::SRL_I64] = "__lshrdi3";<br>
- Names[RTLIB::SRL_I128] = "__lshrti3";<br>
- Names[RTLIB::SRA_I16] = "__ashrhi3";<br>
- Names[RTLIB::SRA_I32] = "__ashrsi3";<br>
- Names[RTLIB::SRA_I64] = "__ashrdi3";<br>
- Names[RTLIB::SRA_I128] = "__ashrti3";<br>
- Names[RTLIB::MUL_I8] = "__mulqi3";<br>
- Names[RTLIB::MUL_I16] = "__mulhi3";<br>
- Names[RTLIB::MUL_I32] = "__mulsi3";<br>
- Names[RTLIB::MUL_I64] = "__muldi3";<br>
- Names[RTLIB::MUL_I128] = "__multi3";<br>
- Names[RTLIB::MULO_I32] = "__mulosi4";<br>
- Names[RTLIB::MULO_I64] = "__mulodi4";<br>
- Names[RTLIB::MULO_I128] = "__muloti4";<br>
- Names[RTLIB::SDIV_I8] = "__divqi3";<br>
- Names[RTLIB::SDIV_I16] = "__divhi3";<br>
- Names[RTLIB::SDIV_I32] = "__divsi3";<br>
- Names[RTLIB::SDIV_I64] = "__divdi3";<br>
- Names[RTLIB::SDIV_I128] = "__divti3";<br>
- Names[RTLIB::UDIV_I8] = "__udivqi3";<br>
- Names[RTLIB::UDIV_I16] = "__udivhi3";<br>
- Names[RTLIB::UDIV_I32] = "__udivsi3";<br>
- Names[RTLIB::UDIV_I64] = "__udivdi3";<br>
- Names[RTLIB::UDIV_I128] = "__udivti3";<br>
- Names[RTLIB::SREM_I8] = "__modqi3";<br>
- Names[RTLIB::SREM_I16] = "__modhi3";<br>
- Names[RTLIB::SREM_I32] = "__modsi3";<br>
- Names[RTLIB::SREM_I64] = "__moddi3";<br>
- Names[RTLIB::SREM_I128] = "__modti3";<br>
- Names[RTLIB::UREM_I8] = "__umodqi3";<br>
- Names[RTLIB::UREM_I16] = "__umodhi3";<br>
- Names[RTLIB::UREM_I32] = "__umodsi3";<br>
- Names[RTLIB::UREM_I64] = "__umoddi3";<br>
- Names[RTLIB::UREM_I128] = "__umodti3";<br>
-<br>
- // These are generally not available.<br>
- Names[RTLIB::SDIVREM_I8] = 0;<br>
- Names[RTLIB::SDIVREM_I16] = 0;<br>
- Names[RTLIB::SDIVREM_I32] = 0;<br>
- Names[RTLIB::SDIVREM_I64] = 0;<br>
- Names[RTLIB::SDIVREM_I128] = 0;<br>
- Names[RTLIB::UDIVREM_I8] = 0;<br>
- Names[RTLIB::UDIVREM_I16] = 0;<br>
- Names[RTLIB::UDIVREM_I32] = 0;<br>
- Names[RTLIB::UDIVREM_I64] = 0;<br>
- Names[RTLIB::UDIVREM_I128] = 0;<br>
-<br>
- Names[RTLIB::NEG_I32] = "__negsi2";<br>
- Names[RTLIB::NEG_I64] = "__negdi2";<br>
- Names[RTLIB::ADD_F32] = "__addsf3";<br>
- Names[RTLIB::ADD_F64] = "__adddf3";<br>
- Names[RTLIB::ADD_F80] = "__addxf3";<br>
- Names[RTLIB::ADD_F128] = "__addtf3";<br>
- Names[RTLIB::ADD_PPCF128] = "__gcc_qadd";<br>
- Names[RTLIB::SUB_F32] = "__subsf3";<br>
- Names[RTLIB::SUB_F64] = "__subdf3";<br>
- Names[RTLIB::SUB_F80] = "__subxf3";<br>
- Names[RTLIB::SUB_F128] = "__subtf3";<br>
- Names[RTLIB::SUB_PPCF128] = "__gcc_qsub";<br>
- Names[RTLIB::MUL_F32] = "__mulsf3";<br>
- Names[RTLIB::MUL_F64] = "__muldf3";<br>
- Names[RTLIB::MUL_F80] = "__mulxf3";<br>
- Names[RTLIB::MUL_F128] = "__multf3";<br>
- Names[RTLIB::MUL_PPCF128] = "__gcc_qmul";<br>
- Names[RTLIB::DIV_F32] = "__divsf3";<br>
- Names[RTLIB::DIV_F64] = "__divdf3";<br>
- Names[RTLIB::DIV_F80] = "__divxf3";<br>
- Names[RTLIB::DIV_F128] = "__divtf3";<br>
- Names[RTLIB::DIV_PPCF128] = "__gcc_qdiv";<br>
- Names[RTLIB::REM_F32] = "fmodf";<br>
- Names[RTLIB::REM_F64] = "fmod";<br>
- Names[RTLIB::REM_F80] = "fmodl";<br>
- Names[RTLIB::REM_F128] = "fmodl";<br>
- Names[RTLIB::REM_PPCF128] = "fmodl";<br>
- Names[RTLIB::FMA_F32] = "fmaf";<br>
- Names[RTLIB::FMA_F64] = "fma";<br>
- Names[RTLIB::FMA_F80] = "fmal";<br>
- Names[RTLIB::FMA_F128] = "fmal";<br>
- Names[RTLIB::FMA_PPCF128] = "fmal";<br>
- Names[RTLIB::POWI_F32] = "__powisf2";<br>
- Names[RTLIB::POWI_F64] = "__powidf2";<br>
- Names[RTLIB::POWI_F80] = "__powixf2";<br>
- Names[RTLIB::POWI_F128] = "__powitf2";<br>
- Names[RTLIB::POWI_PPCF128] = "__powitf2";<br>
- Names[RTLIB::SQRT_F32] = "sqrtf";<br>
- Names[RTLIB::SQRT_F64] = "sqrt";<br>
- Names[RTLIB::SQRT_F80] = "sqrtl";<br>
- Names[RTLIB::SQRT_F128] = "sqrtl";<br>
- Names[RTLIB::SQRT_PPCF128] = "sqrtl";<br>
- Names[RTLIB::LOG_F32] = "logf";<br>
- Names[RTLIB::LOG_F64] = "log";<br>
- Names[RTLIB::LOG_F80] = "logl";<br>
- Names[RTLIB::LOG_F128] = "logl";<br>
- Names[RTLIB::LOG_PPCF128] = "logl";<br>
- Names[RTLIB::LOG2_F32] = "log2f";<br>
- Names[RTLIB::LOG2_F64] = "log2";<br>
- Names[RTLIB::LOG2_F80] = "log2l";<br>
- Names[RTLIB::LOG2_F128] = "log2l";<br>
- Names[RTLIB::LOG2_PPCF128] = "log2l";<br>
- Names[RTLIB::LOG10_F32] = "log10f";<br>
- Names[RTLIB::LOG10_F64] = "log10";<br>
- Names[RTLIB::LOG10_F80] = "log10l";<br>
- Names[RTLIB::LOG10_F128] = "log10l";<br>
- Names[RTLIB::LOG10_PPCF128] = "log10l";<br>
- Names[RTLIB::EXP_F32] = "expf";<br>
- Names[RTLIB::EXP_F64] = "exp";<br>
- Names[RTLIB::EXP_F80] = "expl";<br>
- Names[RTLIB::EXP_F128] = "expl";<br>
- Names[RTLIB::EXP_PPCF128] = "expl";<br>
- Names[RTLIB::EXP2_F32] = "exp2f";<br>
- Names[RTLIB::EXP2_F64] = "exp2";<br>
- Names[RTLIB::EXP2_F80] = "exp2l";<br>
- Names[RTLIB::EXP2_F128] = "exp2l";<br>
- Names[RTLIB::EXP2_PPCF128] = "exp2l";<br>
- Names[RTLIB::SIN_F32] = "sinf";<br>
- Names[RTLIB::SIN_F64] = "sin";<br>
- Names[RTLIB::SIN_F80] = "sinl";<br>
- Names[RTLIB::SIN_F128] = "sinl";<br>
- Names[RTLIB::SIN_PPCF128] = "sinl";<br>
- Names[RTLIB::COS_F32] = "cosf";<br>
- Names[RTLIB::COS_F64] = "cos";<br>
- Names[RTLIB::COS_F80] = "cosl";<br>
- Names[RTLIB::COS_F128] = "cosl";<br>
- Names[RTLIB::COS_PPCF128] = "cosl";<br>
- Names[RTLIB::POW_F32] = "powf";<br>
- Names[RTLIB::POW_F64] = "pow";<br>
- Names[RTLIB::POW_F80] = "powl";<br>
- Names[RTLIB::POW_F128] = "powl";<br>
- Names[RTLIB::POW_PPCF128] = "powl";<br>
- Names[RTLIB::CEIL_F32] = "ceilf";<br>
- Names[RTLIB::CEIL_F64] = "ceil";<br>
- Names[RTLIB::CEIL_F80] = "ceill";<br>
- Names[RTLIB::CEIL_F128] = "ceill";<br>
- Names[RTLIB::CEIL_PPCF128] = "ceill";<br>
- Names[RTLIB::TRUNC_F32] = "truncf";<br>
- Names[RTLIB::TRUNC_F64] = "trunc";<br>
- Names[RTLIB::TRUNC_F80] = "truncl";<br>
- Names[RTLIB::TRUNC_F128] = "truncl";<br>
- Names[RTLIB::TRUNC_PPCF128] = "truncl";<br>
- Names[RTLIB::RINT_F32] = "rintf";<br>
- Names[RTLIB::RINT_F64] = "rint";<br>
- Names[RTLIB::RINT_F80] = "rintl";<br>
- Names[RTLIB::RINT_F128] = "rintl";<br>
- Names[RTLIB::RINT_PPCF128] = "rintl";<br>
- Names[RTLIB::NEARBYINT_F32] = "nearbyintf";<br>
- Names[RTLIB::NEARBYINT_F64] = "nearbyint";<br>
- Names[RTLIB::NEARBYINT_F80] = "nearbyintl";<br>
- Names[RTLIB::NEARBYINT_F128] = "nearbyintl";<br>
- Names[RTLIB::NEARBYINT_PPCF128] = "nearbyintl";<br>
- Names[RTLIB::FLOOR_F32] = "floorf";<br>
- Names[RTLIB::FLOOR_F64] = "floor";<br>
- Names[RTLIB::FLOOR_F80] = "floorl";<br>
- Names[RTLIB::FLOOR_F128] = "floorl";<br>
- Names[RTLIB::FLOOR_PPCF128] = "floorl";<br>
- Names[RTLIB::COPYSIGN_F32] = "copysignf";<br>
- Names[RTLIB::COPYSIGN_F64] = "copysign";<br>
- Names[RTLIB::COPYSIGN_F80] = "copysignl";<br>
- Names[RTLIB::COPYSIGN_F128] = "copysignl";<br>
- Names[RTLIB::COPYSIGN_PPCF128] = "copysignl";<br>
- Names[RTLIB::FPEXT_F64_F128] = "__extenddftf2";<br>
- Names[RTLIB::FPEXT_F32_F128] = "__extendsftf2";<br>
- Names[RTLIB::FPEXT_F32_F64] = "__extendsfdf2";<br>
- Names[RTLIB::FPEXT_F16_F32] = "__gnu_h2f_ieee";<br>
- Names[RTLIB::FPROUND_F32_F16] = "__gnu_f2h_ieee";<br>
- Names[RTLIB::FPROUND_F64_F32] = "__truncdfsf2";<br>
- Names[RTLIB::FPROUND_F80_F32] = "__truncxfsf2";<br>
- Names[RTLIB::FPROUND_F128_F32] = "__trunctfsf2";<br>
- Names[RTLIB::FPROUND_PPCF128_F32] = "__trunctfsf2";<br>
- Names[RTLIB::FPROUND_F80_F64] = "__truncxfdf2";<br>
- Names[RTLIB::FPROUND_F128_F64] = "__trunctfdf2";<br>
- Names[RTLIB::FPROUND_PPCF128_F64] = "__trunctfdf2";<br>
- Names[RTLIB::FPTOSINT_F32_I8] = "__fixsfqi";<br>
- Names[RTLIB::FPTOSINT_F32_I16] = "__fixsfhi";<br>
- Names[RTLIB::FPTOSINT_F32_I32] = "__fixsfsi";<br>
- Names[RTLIB::FPTOSINT_F32_I64] = "__fixsfdi";<br>
- Names[RTLIB::FPTOSINT_F32_I128] = "__fixsfti";<br>
- Names[RTLIB::FPTOSINT_F64_I8] = "__fixdfqi";<br>
- Names[RTLIB::FPTOSINT_F64_I16] = "__fixdfhi";<br>
- Names[RTLIB::FPTOSINT_F64_I32] = "__fixdfsi";<br>
- Names[RTLIB::FPTOSINT_F64_I64] = "__fixdfdi";<br>
- Names[RTLIB::FPTOSINT_F64_I128] = "__fixdfti";<br>
- Names[RTLIB::FPTOSINT_F80_I32] = "__fixxfsi";<br>
- Names[RTLIB::FPTOSINT_F80_I64] = "__fixxfdi";<br>
- Names[RTLIB::FPTOSINT_F80_I128] = "__fixxfti";<br>
- Names[RTLIB::FPTOSINT_F128_I32] = "__fixtfsi";<br>
- Names[RTLIB::FPTOSINT_F128_I64] = "__fixtfdi";<br>
- Names[RTLIB::FPTOSINT_F128_I128] = "__fixtfti";<br>
- Names[RTLIB::FPTOSINT_PPCF128_I32] = "__fixtfsi";<br>
- Names[RTLIB::FPTOSINT_PPCF128_I64] = "__fixtfdi";<br>
- Names[RTLIB::FPTOSINT_PPCF128_I128] = "__fixtfti";<br>
- Names[RTLIB::FPTOUINT_F32_I8] = "__fixunssfqi";<br>
- Names[RTLIB::FPTOUINT_F32_I16] = "__fixunssfhi";<br>
- Names[RTLIB::FPTOUINT_F32_I32] = "__fixunssfsi";<br>
- Names[RTLIB::FPTOUINT_F32_I64] = "__fixunssfdi";<br>
- Names[RTLIB::FPTOUINT_F32_I128] = "__fixunssfti";<br>
- Names[RTLIB::FPTOUINT_F64_I8] = "__fixunsdfqi";<br>
- Names[RTLIB::FPTOUINT_F64_I16] = "__fixunsdfhi";<br>
- Names[RTLIB::FPTOUINT_F64_I32] = "__fixunsdfsi";<br>
- Names[RTLIB::FPTOUINT_F64_I64] = "__fixunsdfdi";<br>
- Names[RTLIB::FPTOUINT_F64_I128] = "__fixunsdfti";<br>
- Names[RTLIB::FPTOUINT_F80_I32] = "__fixunsxfsi";<br>
- Names[RTLIB::FPTOUINT_F80_I64] = "__fixunsxfdi";<br>
- Names[RTLIB::FPTOUINT_F80_I128] = "__fixunsxfti";<br>
- Names[RTLIB::FPTOUINT_F128_I32] = "__fixunstfsi";<br>
- Names[RTLIB::FPTOUINT_F128_I64] = "__fixunstfdi";<br>
- Names[RTLIB::FPTOUINT_F128_I128] = "__fixunstfti";<br>
- Names[RTLIB::FPTOUINT_PPCF128_I32] = "__fixunstfsi";<br>
- Names[RTLIB::FPTOUINT_PPCF128_I64] = "__fixunstfdi";<br>
- Names[RTLIB::FPTOUINT_PPCF128_I128] = "__fixunstfti";<br>
- Names[RTLIB::SINTTOFP_I32_F32] = "__floatsisf";<br>
- Names[RTLIB::SINTTOFP_I32_F64] = "__floatsidf";<br>
- Names[RTLIB::SINTTOFP_I32_F80] = "__floatsixf";<br>
- Names[RTLIB::SINTTOFP_I32_F128] = "__floatsitf";<br>
- Names[RTLIB::SINTTOFP_I32_PPCF128] = "__floatsitf";<br>
- Names[RTLIB::SINTTOFP_I64_F32] = "__floatdisf";<br>
- Names[RTLIB::SINTTOFP_I64_F64] = "__floatdidf";<br>
- Names[RTLIB::SINTTOFP_I64_F80] = "__floatdixf";<br>
- Names[RTLIB::SINTTOFP_I64_F128] = "__floatditf";<br>
- Names[RTLIB::SINTTOFP_I64_PPCF128] = "__floatditf";<br>
- Names[RTLIB::SINTTOFP_I128_F32] = "__floattisf";<br>
- Names[RTLIB::SINTTOFP_I128_F64] = "__floattidf";<br>
- Names[RTLIB::SINTTOFP_I128_F80] = "__floattixf";<br>
- Names[RTLIB::SINTTOFP_I128_F128] = "__floattitf";<br>
- Names[RTLIB::SINTTOFP_I128_PPCF128] = "__floattitf";<br>
- Names[RTLIB::UINTTOFP_I32_F32] = "__floatunsisf";<br>
- Names[RTLIB::UINTTOFP_I32_F64] = "__floatunsidf";<br>
- Names[RTLIB::UINTTOFP_I32_F80] = "__floatunsixf";<br>
- Names[RTLIB::UINTTOFP_I32_F128] = "__floatunsitf";<br>
- Names[RTLIB::UINTTOFP_I32_PPCF128] = "__floatunsitf";<br>
- Names[RTLIB::UINTTOFP_I64_F32] = "__floatundisf";<br>
- Names[RTLIB::UINTTOFP_I64_F64] = "__floatundidf";<br>
- Names[RTLIB::UINTTOFP_I64_F80] = "__floatundixf";<br>
- Names[RTLIB::UINTTOFP_I64_F128] = "__floatunditf";<br>
- Names[RTLIB::UINTTOFP_I64_PPCF128] = "__floatunditf";<br>
- Names[RTLIB::UINTTOFP_I128_F32] = "__floatuntisf";<br>
- Names[RTLIB::UINTTOFP_I128_F64] = "__floatuntidf";<br>
- Names[RTLIB::UINTTOFP_I128_F80] = "__floatuntixf";<br>
- Names[RTLIB::UINTTOFP_I128_F128] = "__floatuntitf";<br>
- Names[RTLIB::UINTTOFP_I128_PPCF128] = "__floatuntitf";<br>
- Names[RTLIB::OEQ_F32] = "__eqsf2";<br>
- Names[RTLIB::OEQ_F64] = "__eqdf2";<br>
- Names[RTLIB::OEQ_F128] = "__eqtf2";<br>
- Names[RTLIB::UNE_F32] = "__nesf2";<br>
- Names[RTLIB::UNE_F64] = "__nedf2";<br>
- Names[RTLIB::UNE_F128] = "__netf2";<br>
- Names[RTLIB::OGE_F32] = "__gesf2";<br>
- Names[RTLIB::OGE_F64] = "__gedf2";<br>
- Names[RTLIB::OGE_F128] = "__getf2";<br>
- Names[RTLIB::OLT_F32] = "__ltsf2";<br>
- Names[RTLIB::OLT_F64] = "__ltdf2";<br>
- Names[RTLIB::OLT_F128] = "__lttf2";<br>
- Names[RTLIB::OLE_F32] = "__lesf2";<br>
- Names[RTLIB::OLE_F64] = "__ledf2";<br>
- Names[RTLIB::OLE_F128] = "__letf2";<br>
- Names[RTLIB::OGT_F32] = "__gtsf2";<br>
- Names[RTLIB::OGT_F64] = "__gtdf2";<br>
- Names[RTLIB::OGT_F128] = "__gttf2";<br>
- Names[RTLIB::UO_F32] = "__unordsf2";<br>
- Names[RTLIB::UO_F64] = "__unorddf2";<br>
- Names[RTLIB::UO_F128] = "__unordtf2";<br>
- Names[RTLIB::O_F32] = "__unordsf2";<br>
- Names[RTLIB::O_F64] = "__unorddf2";<br>
- Names[RTLIB::O_F128] = "__unordtf2";<br>
- Names[RTLIB::MEMCPY] = "memcpy";<br>
- Names[RTLIB::MEMMOVE] = "memmove";<br>
- Names[RTLIB::MEMSET] = "memset";<br>
- Names[RTLIB::UNWIND_RESUME] = "_Unwind_Resume";<br>
- Names[RTLIB::SYNC_VAL_COMPARE_AND_SWAP_1] = "__sync_val_compare_and_swap_1";<br>
- Names[RTLIB::SYNC_VAL_COMPARE_AND_SWAP_2] = "__sync_val_compare_and_swap_2";<br>
- Names[RTLIB::SYNC_VAL_COMPARE_AND_SWAP_4] = "__sync_val_compare_and_swap_4";<br>
- Names[RTLIB::SYNC_VAL_COMPARE_AND_SWAP_8] = "__sync_val_compare_and_swap_8";<br>
- Names[RTLIB::SYNC_LOCK_TEST_AND_SET_1] = "__sync_lock_test_and_set_1";<br>
- Names[RTLIB::SYNC_LOCK_TEST_AND_SET_2] = "__sync_lock_test_and_set_2";<br>
- Names[RTLIB::SYNC_LOCK_TEST_AND_SET_4] = "__sync_lock_test_and_set_4";<br>
- Names[RTLIB::SYNC_LOCK_TEST_AND_SET_8] = "__sync_lock_test_and_set_8";<br>
- Names[RTLIB::SYNC_FETCH_AND_ADD_1] = "__sync_fetch_and_add_1";<br>
- Names[RTLIB::SYNC_FETCH_AND_ADD_2] = "__sync_fetch_and_add_2";<br>
- Names[RTLIB::SYNC_FETCH_AND_ADD_4] = "__sync_fetch_and_add_4";<br>
- Names[RTLIB::SYNC_FETCH_AND_ADD_8] = "__sync_fetch_and_add_8";<br>
- Names[RTLIB::SYNC_FETCH_AND_SUB_1] = "__sync_fetch_and_sub_1";<br>
- Names[RTLIB::SYNC_FETCH_AND_SUB_2] = "__sync_fetch_and_sub_2";<br>
- Names[RTLIB::SYNC_FETCH_AND_SUB_4] = "__sync_fetch_and_sub_4";<br>
- Names[RTLIB::SYNC_FETCH_AND_SUB_8] = "__sync_fetch_and_sub_8";<br>
- Names[RTLIB::SYNC_FETCH_AND_AND_1] = "__sync_fetch_and_and_1";<br>
- Names[RTLIB::SYNC_FETCH_AND_AND_2] = "__sync_fetch_and_and_2";<br>
- Names[RTLIB::SYNC_FETCH_AND_AND_4] = "__sync_fetch_and_and_4";<br>
- Names[RTLIB::SYNC_FETCH_AND_AND_8] = "__sync_fetch_and_and_8";<br>
- Names[RTLIB::SYNC_FETCH_AND_OR_1] = "__sync_fetch_and_or_1";<br>
- Names[RTLIB::SYNC_FETCH_AND_OR_2] = "__sync_fetch_and_or_2";<br>
- Names[RTLIB::SYNC_FETCH_AND_OR_4] = "__sync_fetch_and_or_4";<br>
- Names[RTLIB::SYNC_FETCH_AND_OR_8] = "__sync_fetch_and_or_8";<br>
- Names[RTLIB::SYNC_FETCH_AND_XOR_1] = "__sync_fetch_and_xor_1";<br>
- Names[RTLIB::SYNC_FETCH_AND_XOR_2] = "__sync_fetch_and_xor_2";<br>
- Names[RTLIB::SYNC_FETCH_AND_XOR_4] = "__sync_fetch_and_xor_4";<br>
- Names[RTLIB::SYNC_FETCH_AND_XOR_8] = "__sync_fetch_and_xor_8";<br>
- Names[RTLIB::SYNC_FETCH_AND_NAND_1] = "__sync_fetch_and_nand_1";<br>
- Names[RTLIB::SYNC_FETCH_AND_NAND_2] = "__sync_fetch_and_nand_2";<br>
- Names[RTLIB::SYNC_FETCH_AND_NAND_4] = "__sync_fetch_and_nand_4";<br>
- Names[RTLIB::SYNC_FETCH_AND_NAND_8] = "__sync_fetch_and_nand_8";<br>
-}<br>
-<br>
-/// InitLibcallCallingConvs - Set default libcall CallingConvs.<br>
-///<br>
-static void InitLibcallCallingConvs(CallingConv::ID *CCs) {<br>
- for (int i = 0; i < RTLIB::UNKNOWN_LIBCALL; ++i) {<br>
- CCs[i] = CallingConv::C;<br>
- }<br>
-}<br>
-<br>
/// getFPEXT - Return the FPEXT_*_* value for the given types, or<br>
/// UNKNOWN_LIBCALL if there is none.<br>
RTLIB::Libcall RTLIB::getFPEXT(EVT OpVT, EVT RetVT) {<br>
@@ -571,447 +253,15 @@<br>
return UNKNOWN_LIBCALL;<br>
}<br>
<br>
-/// InitCmpLibcallCCs - Set default comparison libcall CC.<br>
-///<br>
-static void InitCmpLibcallCCs(ISD::CondCode *CCs) {<br>
- memset(CCs, ISD::SETCC_INVALID, sizeof(ISD::CondCode)*RTLIB::UNKNOWN_LIBCALL);<br>
- CCs[RTLIB::OEQ_F32] = ISD::SETEQ;<br>
- CCs[RTLIB::OEQ_F64] = ISD::SETEQ;<br>
- CCs[RTLIB::OEQ_F128] = ISD::SETEQ;<br>
- CCs[RTLIB::UNE_F32] = ISD::SETNE;<br>
- CCs[RTLIB::UNE_F64] = ISD::SETNE;<br>
- CCs[RTLIB::UNE_F128] = ISD::SETNE;<br>
- CCs[RTLIB::OGE_F32] = ISD::SETGE;<br>
- CCs[RTLIB::OGE_F64] = ISD::SETGE;<br>
- CCs[RTLIB::OGE_F128] = ISD::SETGE;<br>
- CCs[RTLIB::OLT_F32] = ISD::SETLT;<br>
- CCs[RTLIB::OLT_F64] = ISD::SETLT;<br>
- CCs[RTLIB::OLT_F128] = ISD::SETLT;<br>
- CCs[RTLIB::OLE_F32] = ISD::SETLE;<br>
- CCs[RTLIB::OLE_F64] = ISD::SETLE;<br>
- CCs[RTLIB::OLE_F128] = ISD::SETLE;<br>
- CCs[RTLIB::OGT_F32] = ISD::SETGT;<br>
- CCs[RTLIB::OGT_F64] = ISD::SETGT;<br>
- CCs[RTLIB::OGT_F128] = ISD::SETGT;<br>
- CCs[RTLIB::UO_F32] = ISD::SETNE;<br>
- CCs[RTLIB::UO_F64] = ISD::SETNE;<br>
- CCs[RTLIB::UO_F128] = ISD::SETNE;<br>
- CCs[RTLIB::O_F32] = ISD::SETEQ;<br>
- CCs[RTLIB::O_F64] = ISD::SETEQ;<br>
- CCs[RTLIB::O_F128] = ISD::SETEQ;<br>
-}<br>
-<br>
/// NOTE: The constructor takes ownership of TLOF.<br>
TargetLowering::TargetLowering(const TargetMachine &tm,<br>
const TargetLoweringObjectFile *tlof)<br>
- : TM(tm), TD(TM.getDataLayout()), TLOF(*tlof) {<br>
- // All operations default to being supported.<br>
- memset(OpActions, 0, sizeof(OpActions));<br>
- memset(LoadExtActions, 0, sizeof(LoadExtActions));<br>
- memset(TruncStoreActions, 0, sizeof(TruncStoreActions));<br>
- memset(IndexedModeActions, 0, sizeof(IndexedModeActions));<br>
- memset(CondCodeActions, 0, sizeof(CondCodeActions));<br>
-<br>
- // Set default actions for various operations.<br>
- for (unsigned VT = 0; VT != (unsigned)MVT::LAST_VALUETYPE; ++VT) {<br>
- // Default all indexed load / store to expand.<br>
- for (unsigned IM = (unsigned)ISD::PRE_INC;<br>
- IM != (unsigned)ISD::LAST_INDEXED_MODE; ++IM) {<br>
- setIndexedLoadAction(IM, (MVT::SimpleValueType)VT, Expand);<br>
- setIndexedStoreAction(IM, (MVT::SimpleValueType)VT, Expand);<br>
- }<br>
-<br>
- // These operations default to expand.<br>
- setOperationAction(ISD::FGETSIGN, (MVT::SimpleValueType)VT, Expand);<br>
- setOperationAction(ISD::CONCAT_VECTORS, (MVT::SimpleValueType)VT, Expand);<br>
- }<br>
-<br>
- // Most targets ignore the @llvm.prefetch intrinsic.<br>
- setOperationAction(ISD::PREFETCH, MVT::Other, Expand);<br>
-<br>
- // ConstantFP nodes default to expand. Targets can either change this to<br>
- // Legal, in which case all fp constants are legal, or use isFPImmLegal()<br>
- // to optimize expansions for certain constants.<br>
- setOperationAction(ISD::ConstantFP, MVT::f16, Expand);<br>
- setOperationAction(ISD::ConstantFP, MVT::f32, Expand);<br>
- setOperationAction(ISD::ConstantFP, MVT::f64, Expand);<br>
- setOperationAction(ISD::ConstantFP, MVT::f80, Expand);<br>
- setOperationAction(ISD::ConstantFP, MVT::f128, Expand);<br>
-<br>
- // These library functions default to expand.<br>
- setOperationAction(ISD::FLOG , MVT::f16, Expand);<br>
- setOperationAction(ISD::FLOG2, MVT::f16, Expand);<br>
- setOperationAction(ISD::FLOG10, MVT::f16, Expand);<br>
- setOperationAction(ISD::FEXP , MVT::f16, Expand);<br>
- setOperationAction(ISD::FEXP2, MVT::f16, Expand);<br>
- setOperationAction(ISD::FFLOOR, MVT::f16, Expand);<br>
- setOperationAction(ISD::FNEARBYINT, MVT::f16, Expand);<br>
- setOperationAction(ISD::FCEIL, MVT::f16, Expand);<br>
- setOperationAction(ISD::FRINT, MVT::f16, Expand);<br>
- setOperationAction(ISD::FTRUNC, MVT::f16, Expand);<br>
- setOperationAction(ISD::FLOG , MVT::f32, Expand);<br>
- setOperationAction(ISD::FLOG2, MVT::f32, Expand);<br>
- setOperationAction(ISD::FLOG10, MVT::f32, Expand);<br>
- setOperationAction(ISD::FEXP , MVT::f32, Expand);<br>
- setOperationAction(ISD::FEXP2, MVT::f32, Expand);<br>
- setOperationAction(ISD::FFLOOR, MVT::f32, Expand);<br>
- setOperationAction(ISD::FNEARBYINT, MVT::f32, Expand);<br>
- setOperationAction(ISD::FCEIL, MVT::f32, Expand);<br>
- setOperationAction(ISD::FRINT, MVT::f32, Expand);<br>
- setOperationAction(ISD::FTRUNC, MVT::f32, Expand);<br>
- setOperationAction(ISD::FLOG , MVT::f64, Expand);<br>
- setOperationAction(ISD::FLOG2, MVT::f64, Expand);<br>
- setOperationAction(ISD::FLOG10, MVT::f64, Expand);<br>
- setOperationAction(ISD::FEXP , MVT::f64, Expand);<br>
- setOperationAction(ISD::FEXP2, MVT::f64, Expand);<br>
- setOperationAction(ISD::FFLOOR, MVT::f64, Expand);<br>
- setOperationAction(ISD::FNEARBYINT, MVT::f64, Expand);<br>
- setOperationAction(ISD::FCEIL, MVT::f64, Expand);<br>
- setOperationAction(ISD::FRINT, MVT::f64, Expand);<br>
- setOperationAction(ISD::FTRUNC, MVT::f64, Expand);<br>
- setOperationAction(ISD::FLOG , MVT::f128, Expand);<br>
- setOperationAction(ISD::FLOG2, MVT::f128, Expand);<br>
- setOperationAction(ISD::FLOG10, MVT::f128, Expand);<br>
- setOperationAction(ISD::FEXP , MVT::f128, Expand);<br>
- setOperationAction(ISD::FEXP2, MVT::f128, Expand);<br>
- setOperationAction(ISD::FFLOOR, MVT::f128, Expand);<br>
- setOperationAction(ISD::FNEARBYINT, MVT::f128, Expand);<br>
- setOperationAction(ISD::FCEIL, MVT::f128, Expand);<br>
- setOperationAction(ISD::FRINT, MVT::f128, Expand);<br>
- setOperationAction(ISD::FTRUNC, MVT::f128, Expand);<br>
-<br>
- // Default ISD::TRAP to expand (which turns it into abort).<br>
- setOperationAction(ISD::TRAP, MVT::Other, Expand);<br>
-<br>
- // On most systems, DEBUGTRAP and TRAP behave identically; "Expand" here<br>
- // tells the DAG legalizer to replace DEBUGTRAP with TRAP.<br>
- setOperationAction(ISD::DEBUGTRAP, MVT::Other, Expand);<br>
-<br>
- IsLittleEndian = TD->isLittleEndian();<br>
- PointerTy = MVT::getIntegerVT(8*TD->getPointerSize(0));<br>
- memset(RegClassForVT, 0,MVT::LAST_VALUETYPE*sizeof(TargetRegisterClass*));<br>
- memset(TargetDAGCombineArray, 0, array_lengthof(TargetDAGCombineArray));<br>
- maxStoresPerMemset = maxStoresPerMemcpy = maxStoresPerMemmove = 8;<br>
- maxStoresPerMemsetOptSize = maxStoresPerMemcpyOptSize<br>
- = maxStoresPerMemmoveOptSize = 4;<br>
- benefitFromCodePlacementOpt = false;<br>
- UseUnderscoreSetJmp = false;<br>
- UseUnderscoreLongJmp = false;<br>
- SelectIsExpensive = false;<br>
- IntDivIsCheap = false;<br>
- Pow2DivIsCheap = false;<br>
- JumpIsExpensive = false;<br>
- predictableSelectIsExpensive = false;<br>
- StackPointerRegisterToSaveRestore = 0;<br>
- ExceptionPointerRegister = 0;<br>
- ExceptionSelectorRegister = 0;<br>
- BooleanContents = UndefinedBooleanContent;<br>
- BooleanVectorContents = UndefinedBooleanContent;<br>
- SchedPreferenceInfo = Sched::ILP;<br>
- JumpBufSize = 0;<br>
- JumpBufAlignment = 0;<br>
- MinFunctionAlignment = 0;<br>
- PrefFunctionAlignment = 0;<br>
- PrefLoopAlignment = 0;<br>
- MinStackArgumentAlignment = 1;<br>
- ShouldFoldAtomicFences = false;<br>
- InsertFencesForAtomic = false;<br>
- SupportJumpTables = true;<br>
- MinimumJumpTableEntries = 4;<br>
-<br>
- InitLibcallNames(LibcallRoutineNames);<br>
- InitCmpLibcallCCs(CmpLibcallCCs);<br>
- InitLibcallCallingConvs(LibcallCallingConvs);<br>
-}<br>
-<br>
-TargetLowering::~TargetLowering() {<br>
- delete &TLOF;<br>
-}<br>
-<br>
-MVT TargetLowering::getShiftAmountTy(EVT LHSTy) const {<br>
- return MVT::getIntegerVT(8*TD->getPointerSize(0));<br>
-}<br>
-<br>
-/// canOpTrap - Returns true if the operation can trap for the value type.<br>
-/// VT must be a legal type.<br>
-bool TargetLowering::canOpTrap(unsigned Op, EVT VT) const {<br>
- assert(isTypeLegal(VT));<br>
- switch (Op) {<br>
- default:<br>
- return false;<br>
- case ISD::FDIV:<br>
- case ISD::FREM:<br>
- case ISD::SDIV:<br>
- case ISD::UDIV:<br>
- case ISD::SREM:<br>
- case ISD::UREM:<br>
- return true;<br>
- }<br>
-}<br>
-<br>
-<br>
-static unsigned getVectorTypeBreakdownMVT(MVT VT, MVT &IntermediateVT,<br>
- unsigned &NumIntermediates,<br>
- MVT &RegisterVT,<br>
- TargetLowering *TLI) {<br>
- // Figure out the right, legal destination reg to copy into.<br>
- unsigned NumElts = VT.getVectorNumElements();<br>
- MVT EltTy = VT.getVectorElementType();<br>
-<br>
- unsigned NumVectorRegs = 1;<br>
-<br>
- // FIXME: We don't support non-power-of-2-sized vectors for now. Ideally we<br>
- // could break down into LHS/RHS like LegalizeDAG does.<br>
- if (!isPowerOf2_32(NumElts)) {<br>
- NumVectorRegs = NumElts;<br>
- NumElts = 1;<br>
- }<br>
-<br>
- // Divide the input until we get to a supported size. This will always<br>
- // end with a scalar if the target doesn't support vectors.<br>
- while (NumElts > 1 && !TLI->isTypeLegal(MVT::getVectorVT(EltTy, NumElts))) {<br>
- NumElts >>= 1;<br>
- NumVectorRegs <<= 1;<br>
- }<br>
-<br>
- NumIntermediates = NumVectorRegs;<br>
-<br>
- MVT NewVT = MVT::getVectorVT(EltTy, NumElts);<br>
- if (!TLI->isTypeLegal(NewVT))<br>
- NewVT = EltTy;<br>
- IntermediateVT = NewVT;<br>
-<br>
- unsigned NewVTSize = NewVT.getSizeInBits();<br>
-<br>
- // Convert sizes such as i33 to i64.<br>
- if (!isPowerOf2_32(NewVTSize))<br>
- NewVTSize = NextPowerOf2(NewVTSize);<br>
-<br>
- MVT DestVT = TLI->getRegisterType(NewVT);<br>
- RegisterVT = DestVT;<br>
- if (EVT(DestVT).bitsLT(NewVT)) // Value is expanded, e.g. i64 -> i16.<br>
- return NumVectorRegs*(NewVTSize/DestVT.getSizeInBits());<br>
-<br>
- // Otherwise, promotion or legal types use the same number of registers as<br>
- // the vector decimated to the appropriate level.<br>
- return NumVectorRegs;<br>
-}<br>
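(A quick trace of the halving loop above, assuming a hypothetical target whose<br>
widest legal vector type is v4f32:)<br>
<br>
  // getVectorTypeBreakdownMVT(v8f32, ...):<br>
  //   NumElts = 8, NumVectorRegs = 1   -> v8f32 not legal, halve<br>
  //   NumElts = 4, NumVectorRegs = 2   -> v4f32 legal, stop<br>
  //   => NumIntermediates = 2, IntermediateVT = RegisterVT = v4f32<br>
  // i.e. one v8f32 value travels in two v4f32 registers. The<br>
  // non-power-of-2 early-out scalarizes instead, e.g. v3i32 becomes<br>
  // three i32 pieces.<br>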
-<br>
-/// isLegalRC - Return true if the specified register class can represent at<br>
-/// least one legal value type.<br>
-bool TargetLowering::isLegalRC(const TargetRegisterClass *RC) const {<br>
- for (TargetRegisterClass::vt_iterator I = RC->vt_begin(), E = RC->vt_end();<br>
- I != E; ++I) {<br>
- if (isTypeLegal(*I))<br>
- return true;<br>
- }<br>
- return false;<br>
-}<br>
-<br>
-/// findRepresentativeClass - Return the largest legal super-reg register class<br>
-/// of the register class for the specified type and its associated "cost".<br>
-std::pair<const TargetRegisterClass*, uint8_t><br>
-TargetLowering::findRepresentativeClass(MVT VT) const {<br>
- const TargetRegisterInfo *TRI = getTargetMachine().getRegisterInfo();<br>
- const TargetRegisterClass *RC = RegClassForVT[VT.SimpleTy];<br>
- if (!RC)<br>
- return std::make_pair(RC, 0);<br>
-<br>
- // Compute the set of all super-register classes.<br>
- BitVector SuperRegRC(TRI->getNumRegClasses());<br>
- for (SuperRegClassIterator RCI(RC, TRI); RCI.isValid(); ++RCI)<br>
- SuperRegRC.setBitsInMask(RCI.getMask());<br>
-<br>
- // Find the first legal register class with the largest spill size.<br>
- const TargetRegisterClass *BestRC = RC;<br>
- for (int i = SuperRegRC.find_first(); i >= 0; i = SuperRegRC.find_next(i)) {<br>
- const TargetRegisterClass *SuperRC = TRI->getRegClass(i);<br>
- // We want the largest possible spill size.<br>
- if (SuperRC->getSize() <= BestRC->getSize())<br>
- continue;<br>
- if (!isLegalRC(SuperRC))<br>
- continue;<br>
- BestRC = SuperRC;<br>
- }<br>
- return std::make_pair(BestRC, 1);<br>
-}<br>
-<br>
-/// computeRegisterProperties - Once all of the register classes are added,<br>
-/// this allows us to compute derived properties we expose.<br>
-void TargetLowering::computeRegisterProperties() {<br>
- assert(MVT::LAST_VALUETYPE <= MVT::MAX_ALLOWED_VALUETYPE &&<br>
- "Too many value types for ValueTypeActions to hold!");<br>
-<br>
- // Everything defaults to needing one register.<br>
- for (unsigned i = 0; i != MVT::LAST_VALUETYPE; ++i) {<br>
- NumRegistersForVT[i] = 1;<br>
- RegisterTypeForVT[i] = TransformToType[i] = (MVT::SimpleValueType)i;<br>
- }<br>
- // ...except isVoid, which doesn't need any registers.<br>
- NumRegistersForVT[MVT::isVoid] = 0;<br>
-<br>
- // Find the largest integer register class.<br>
- unsigned LargestIntReg = MVT::LAST_INTEGER_VALUETYPE;<br>
- for (; RegClassForVT[LargestIntReg] == 0; --LargestIntReg)<br>
- assert(LargestIntReg != MVT::i1 && "No integer registers defined!");<br>
-<br>
- // Every integer value type larger than this largest register takes twice as<br>
- // many registers to represent as the previous ValueType.<br>
- for (unsigned ExpandedReg = LargestIntReg + 1;<br>
- ExpandedReg <= MVT::LAST_INTEGER_VALUETYPE; ++ExpandedReg) {<br>
- NumRegistersForVT[ExpandedReg] = 2*NumRegistersForVT[ExpandedReg-1];<br>
- RegisterTypeForVT[ExpandedReg] = (MVT::SimpleValueType)LargestIntReg;<br>
- TransformToType[ExpandedReg] = (MVT::SimpleValueType)(ExpandedReg - 1);<br>
- ValueTypeActions.setTypeAction((MVT::SimpleValueType)ExpandedReg,<br>
- TypeExpandInteger);<br>
- }<br>
-<br>
- // Inspect all of the ValueType's smaller than the largest integer<br>
- // register to see which ones need promotion.<br>
- unsigned LegalIntReg = LargestIntReg;<br>
- for (unsigned IntReg = LargestIntReg - 1;<br>
- IntReg >= (unsigned)MVT::i1; --IntReg) {<br>
- MVT IVT = (MVT::SimpleValueType)IntReg;<br>
- if (isTypeLegal(IVT)) {<br>
- LegalIntReg = IntReg;<br>
- } else {<br>
- RegisterTypeForVT[IntReg] = TransformToType[IntReg] =<br>
- (const MVT::SimpleValueType)LegalIntReg;<br>
- ValueTypeActions.setTypeAction(IVT, TypePromoteInteger);<br>
- }<br>
- }<br>
-<br>
- // ppcf128 type is really two f64's.<br>
- if (!isTypeLegal(MVT::ppcf128)) {<br>
- NumRegistersForVT[MVT::ppcf128] = 2*NumRegistersForVT[MVT::f64];<br>
- RegisterTypeForVT[MVT::ppcf128] = MVT::f64;<br>
- TransformToType[MVT::ppcf128] = MVT::f64;<br>
- ValueTypeActions.setTypeAction(MVT::ppcf128, TypeExpandFloat);<br>
- }<br>
-<br>
- // Decide how to handle f64. If the target does not have native f64 support,<br>
- // expand it to i64 and we will be generating soft float library calls.<br>
- if (!isTypeLegal(MVT::f64)) {<br>
- NumRegistersForVT[MVT::f64] = NumRegistersForVT[MVT::i64];<br>
- RegisterTypeForVT[MVT::f64] = RegisterTypeForVT[MVT::i64];<br>
- TransformToType[MVT::f64] = MVT::i64;<br>
- ValueTypeActions.setTypeAction(MVT::f64, TypeSoftenFloat);<br>
- }<br>
-<br>
- // Decide how to handle f32. If the target does not have native support for<br>
- // f32, promote it to f64 if it is legal. Otherwise, expand it to i32.<br>
- if (!isTypeLegal(MVT::f32)) {<br>
- if (isTypeLegal(MVT::f64)) {<br>
- NumRegistersForVT[MVT::f32] = NumRegistersForVT[MVT::f64];<br>
- RegisterTypeForVT[MVT::f32] = RegisterTypeForVT[MVT::f64];<br>
- TransformToType[MVT::f32] = MVT::f64;<br>
- ValueTypeActions.setTypeAction(MVT::f32, TypePromoteInteger);<br>
- } else {<br>
- NumRegistersForVT[MVT::f32] = NumRegistersForVT[MVT::i32];<br>
- RegisterTypeForVT[MVT::f32] = RegisterTypeForVT[MVT::i32];<br>
- TransformToType[MVT::f32] = MVT::i32;<br>
- ValueTypeActions.setTypeAction(MVT::f32, TypeSoftenFloat);<br>
- }<br>
- }<br>
-<br>
- // Loop over all of the vector value types to see which need transformations.<br>
- for (unsigned i = MVT::FIRST_VECTOR_VALUETYPE;<br>
- i <= (unsigned)MVT::LAST_VECTOR_VALUETYPE; ++i) {<br>
- MVT VT = (MVT::SimpleValueType)i;<br>
- if (isTypeLegal(VT)) continue;<br>
-<br>
- // Determine if there is a legal wider type. If so, we should promote to<br>
- // that wider vector type.<br>
- MVT EltVT = VT.getVectorElementType();<br>
- unsigned NElts = VT.getVectorNumElements();<br>
- if (NElts != 1 && !shouldSplitVectorElementType(EltVT)) {<br>
- bool IsLegalWiderType = false;<br>
- // First try to promote the elements of integer vectors. If no legal<br>
- // promotion was found, fallback to the widen-vector method.<br>
- for (unsigned nVT = i+1; nVT <= MVT::LAST_VECTOR_VALUETYPE; ++nVT) {<br>
- MVT SVT = (MVT::SimpleValueType)nVT;<br>
- // Promote vectors of integers to vectors with the same number<br>
- // of elements, with a wider element type.<br>
- if (SVT.getVectorElementType().getSizeInBits() > EltVT.getSizeInBits()<br>
- && SVT.getVectorNumElements() == NElts &&<br>
- isTypeLegal(SVT) && SVT.getScalarType().isInteger()) {<br>
- TransformToType[i] = SVT;<br>
- RegisterTypeForVT[i] = SVT;<br>
- NumRegistersForVT[i] = 1;<br>
- ValueTypeActions.setTypeAction(VT, TypePromoteInteger);<br>
- IsLegalWiderType = true;<br>
- break;<br>
- }<br>
- }<br>
-<br>
- if (IsLegalWiderType) continue;<br>
-<br>
- // Try to widen the vector.<br>
- for (unsigned nVT = i+1; nVT <= MVT::LAST_VECTOR_VALUETYPE; ++nVT) {<br>
- MVT SVT = (MVT::SimpleValueType)nVT;<br>
- if (SVT.getVectorElementType() == EltVT &&<br>
- SVT.getVectorNumElements() > NElts &&<br>
- isTypeLegal(SVT)) {<br>
- TransformToType[i] = SVT;<br>
- RegisterTypeForVT[i] = SVT;<br>
- NumRegistersForVT[i] = 1;<br>
- ValueTypeActions.setTypeAction(VT, TypeWidenVector);<br>
- IsLegalWiderType = true;<br>
- break;<br>
- }<br>
- }<br>
- if (IsLegalWiderType) continue;<br>
- }<br>
-<br>
- MVT IntermediateVT;<br>
- MVT RegisterVT;<br>
- unsigned NumIntermediates;<br>
- NumRegistersForVT[i] =<br>
- getVectorTypeBreakdownMVT(VT, IntermediateVT, NumIntermediates,<br>
- RegisterVT, this);<br>
- RegisterTypeForVT[i] = RegisterVT;<br>
-<br>
- MVT NVT = VT.getPow2VectorType();<br>
- if (NVT == VT) {<br>
- // Type is already a power of 2. The default action is to split.<br>
- TransformToType[i] = MVT::Other;<br>
- unsigned NumElts = VT.getVectorNumElements();<br>
- ValueTypeActions.setTypeAction(VT,<br>
- NumElts > 1 ? TypeSplitVector : TypeScalarizeVector);<br>
- } else {<br>
- TransformToType[i] = NVT;<br>
- ValueTypeActions.setTypeAction(VT, TypeWidenVector);<br>
- }<br>
- }<br>
-<br>
- // Determine the 'representative' register class for each value type.<br>
- // A representative register class is the largest legal register class<br>
- // (i.e. one that is not a sub-register class of any other legal class)<br>
- // for a group of value types. For example, on i386 the representative<br>
- // class for i8, i16, and i32 would be GR32; on x86_64 it is GR64.<br>
- for (unsigned i = 0; i != MVT::LAST_VALUETYPE; ++i) {<br>
- const TargetRegisterClass* RRC;<br>
- uint8_t Cost;<br>
- tie(RRC, Cost) = findRepresentativeClass((MVT::SimpleValueType)i);<br>
- RepRegClassForVT[i] = RRC;<br>
- RepRegClassCostForVT[i] = Cost;<br>
- }<br>
-}<br>
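(Worked example of the integer expansion loop above, assuming i32 is the<br>
largest legal integer type:)<br>
<br>
  //   i64:  NumRegistersForVT = 2, RegisterTypeForVT = i32, TransformToType = i32<br>
  //   i128: NumRegistersForVT = 4, RegisterTypeForVT = i32, TransformToType = i64<br>
  // Repeated legalization steps thus peel one halving at a time through<br>
  // TransformToType until a legal type is reached.<br>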
+ : TargetLoweringBase(tm, tlof) {}<br>
<br>
const char *TargetLowering::getTargetNodeName(unsigned Opcode) const {<br>
return NULL;<br>
}<br>
<br>
-EVT TargetLowering::getSetCCResultType(EVT VT) const {<br>
- assert(!VT.isVector() && "No default SetCC type for vectors!");<br>
- return getPointerTy(0).SimpleTy;<br>
-}<br>
-<br>
-MVT::SimpleValueType TargetLowering::getCmpLibcallReturnType() const {<br>
- return MVT::i32; // return the default value<br>
-}<br>
-<br>
/// Check whether a given call node is in tail position within its function. If<br>
/// so, it sets Chain to the input chain of the tail call.<br>
bool TargetLowering::isInTailCallPosition(SelectionDAG &DAG, SDNode *Node,<br>
@@ -1167,80 +417,6 @@<br>
}<br>
}<br>
<br>
-/// getVectorTypeBreakdown - Vector types are broken down into some number of<br>
-/// legal first class types. For example, MVT::v8f32 maps to 2 MVT::v4f32<br>
-/// with Altivec or SSE1, or 8 promoted MVT::f64 values with the X86 FP stack.<br>
-/// Similarly, MVT::v2i64 turns into 4 MVT::i32 values with both PPC and X86.<br>
-///<br>
-/// This method returns the number of registers needed, and the VT for each<br>
-/// register. It also returns the VT and quantity of the intermediate values<br>
-/// before they are promoted/expanded.<br>
-///<br>
-unsigned TargetLowering::getVectorTypeBreakdown(LLVMContext &Context, EVT VT,<br>
- EVT &IntermediateVT,<br>
- unsigned &NumIntermediates,<br>
- MVT &RegisterVT) const {<br>
- unsigned NumElts = VT.getVectorNumElements();<br>
-<br>
- // If there is a wider vector type with the same element type as this one,<br>
- // or a promoted vector type that has the same number of elements which<br>
- // are wider, then we should convert to that legal vector type.<br>
- // This handles things like <2 x float> -> <4 x float> and<br>
- // <4 x i1> -> <4 x i32>.<br>
- LegalizeTypeAction TA = getTypeAction(Context, VT);<br>
- if (NumElts != 1 && (TA == TypeWidenVector || TA == TypePromoteInteger)) {<br>
- EVT RegisterEVT = getTypeToTransformTo(Context, VT);<br>
- if (isTypeLegal(RegisterEVT)) {<br>
- IntermediateVT = RegisterEVT;<br>
- RegisterVT = RegisterEVT.getSimpleVT();<br>
- NumIntermediates = 1;<br>
- return 1;<br>
- }<br>
- }<br>
-<br>
- // Figure out the right, legal destination reg to copy into.<br>
- EVT EltTy = VT.getVectorElementType();<br>
-<br>
- unsigned NumVectorRegs = 1;<br>
-<br>
- // FIXME: We don't support non-power-of-2-sized vectors for now. Ideally we<br>
- // could break down into LHS/RHS like LegalizeDAG does.<br>
- if (!isPowerOf2_32(NumElts)) {<br>
- NumVectorRegs = NumElts;<br>
- NumElts = 1;<br>
- }<br>
-<br>
- // Divide the input until we get to a supported size. This will always<br>
- // end with a scalar if the target doesn't support vectors.<br>
- while (NumElts > 1 && !isTypeLegal(<br>
- EVT::getVectorVT(Context, EltTy, NumElts))) {<br>
- NumElts >>= 1;<br>
- NumVectorRegs <<= 1;<br>
- }<br>
-<br>
- NumIntermediates = NumVectorRegs;<br>
-<br>
- EVT NewVT = EVT::getVectorVT(Context, EltTy, NumElts);<br>
- if (!isTypeLegal(NewVT))<br>
- NewVT = EltTy;<br>
- IntermediateVT = NewVT;<br>
-<br>
- MVT DestVT = getRegisterType(Context, NewVT);<br>
- RegisterVT = DestVT;<br>
- unsigned NewVTSize = NewVT.getSizeInBits();<br>
-<br>
- // Convert sizes such as i33 to i64.<br>
- if (!isPowerOf2_32(NewVTSize))<br>
- NewVTSize = NextPowerOf2(NewVTSize);<br>
-<br>
- if (EVT(DestVT).bitsLT(NewVT)) // Value is expanded, e.g. i64 -> i16.<br>
- return NumVectorRegs*(NewVTSize/DestVT.getSizeInBits());<br>
-<br>
- // Otherwise, promotion or legal types use the same number of registers as<br>
- // the vector decimated to the appropriate level.<br>
- return NumVectorRegs;<br>
-}<br>
-<br>
/// Get the EVTs and ArgFlags collections that represent the legalized return<br>
/// type of the given function. This does not require a DAG or a return value,<br>
/// and is suitable for use before any DAGs for the function are constructed.<br>
@@ -1291,13 +467,6 @@<br>
}<br>
}<br>
<br>
-/// getByValTypeAlignment - Return the desired alignment for ByVal aggregate<br>
-/// function arguments in the caller parameter area. This is the actual<br>
-/// alignment, not its logarithm.<br>
-unsigned TargetLowering::getByValTypeAlignment(Type *Ty) const {<br>
- return TD->getCallFrameTypeAlignment(Ty);<br>
-}<br>
-<br>
/// getJumpTableEncoding - Return the entry encoding for a jump table in the<br>
/// current function. The returned value is a member of the<br>
/// MachineJumpTableInfo::JTEntryKind enum.<br>
@@ -1354,103 +523,6 @@<br>
}<br>
<br>
//===----------------------------------------------------------------------===//<br>
-// TargetTransformInfo Helpers<br>
-//===----------------------------------------------------------------------===//<br>
-<br>
-int TargetLowering::InstructionOpcodeToISD(unsigned Opcode) const {<br>
- enum InstructionOpcodes {<br>
-#define HANDLE_INST(NUM, OPCODE, CLASS) OPCODE = NUM,<br>
-#define LAST_OTHER_INST(NUM) InstructionOpcodesCount = NUM<br>
-#include "llvm/IR/Instruction.def"<br>
- };<br>
- switch (static_cast<InstructionOpcodes>(Opcode)) {<br>
- case Ret: return 0;<br>
- case Br: return 0;<br>
- case Switch: return 0;<br>
- case IndirectBr: return 0;<br>
- case Invoke: return 0;<br>
- case Resume: return 0;<br>
- case Unreachable: return 0;<br>
- case Add: return ISD::ADD;<br>
- case FAdd: return ISD::FADD;<br>
- case Sub: return ISD::SUB;<br>
- case FSub: return ISD::FSUB;<br>
- case Mul: return ISD::MUL;<br>
- case FMul: return ISD::FMUL;<br>
- case UDiv: return ISD::UDIV;<br>
- case SDiv: return ISD::SDIV;<br>
- case FDiv: return ISD::FDIV;<br>
- case URem: return ISD::UREM;<br>
- case SRem: return ISD::SREM;<br>
- case FRem: return ISD::FREM;<br>
- case Shl: return ISD::SHL;<br>
- case LShr: return ISD::SRL;<br>
- case AShr: return ISD::SRA;<br>
- case And: return ISD::AND;<br>
- case Or: return ISD::OR;<br>
- case Xor: return ISD::XOR;<br>
- case Alloca: return 0;<br>
- case Load: return ISD::LOAD;<br>
- case Store: return ISD::STORE;<br>
- case GetElementPtr: return 0;<br>
- case Fence: return 0;<br>
- case AtomicCmpXchg: return 0;<br>
- case AtomicRMW: return 0;<br>
- case Trunc: return ISD::TRUNCATE;<br>
- case ZExt: return ISD::ZERO_EXTEND;<br>
- case SExt: return ISD::SIGN_EXTEND;<br>
- case FPToUI: return ISD::FP_TO_UINT;<br>
- case FPToSI: return ISD::FP_TO_SINT;<br>
- case UIToFP: return ISD::UINT_TO_FP;<br>
- case SIToFP: return ISD::SINT_TO_FP;<br>
- case FPTrunc: return ISD::FP_ROUND;<br>
- case FPExt: return ISD::FP_EXTEND;<br>
- case PtrToInt: return ISD::BITCAST;<br>
- case IntToPtr: return ISD::BITCAST;<br>
- case BitCast: return ISD::BITCAST;<br>
- case ICmp: return ISD::SETCC;<br>
- case FCmp: return ISD::SETCC;<br>
- case PHI: return 0;<br>
- case Call: return 0;<br>
- case Select: return ISD::SELECT;<br>
- case UserOp1: return 0;<br>
- case UserOp2: return 0;<br>
- case VAArg: return 0;<br>
- case ExtractElement: return ISD::EXTRACT_VECTOR_ELT;<br>
- case InsertElement: return ISD::INSERT_VECTOR_ELT;<br>
- case ShuffleVector: return ISD::VECTOR_SHUFFLE;<br>
- case ExtractValue: return ISD::MERGE_VALUES;<br>
- case InsertValue: return ISD::MERGE_VALUES;<br>
- case LandingPad: return 0;<br>
- }<br>
-<br>
- llvm_unreachable("Unknown instruction type encountered!");<br>
-}<br>
-<br>
-std::pair<unsigned, MVT><br>
-TargetLowering::getTypeLegalizationCost(Type *Ty) const {<br>
- LLVMContext &C = Ty->getContext();<br>
- EVT MTy = getValueType(Ty);<br>
-<br>
- unsigned Cost = 1;<br>
- // We keep legalizing the type until we find a legal kind. We assume that<br>
- // the only operation that costs anything is the split. After splitting<br>
- // we need to handle two types.<br>
- while (true) {<br>
- LegalizeKind LK = getTypeConversion(C, MTy);<br>
-<br>
- if (LK.first == TypeLegal)<br>
- return std::make_pair(Cost, MTy.getSimpleVT());<br>
-<br>
- if (LK.first == TypeSplitVector || LK.first == TypeExpandInteger)<br>
- Cost *= 2;<br>
-<br>
- // Keep legalizing the type.<br>
- MTy = LK.second;<br>
- }<br>
-}<br>
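(The cost loop above in action, assuming a hypothetical target where v2i64 is<br>
the widest legal vector type:)<br>
<br>
  // getTypeLegalizationCost(<8 x i64>):<br>
  //   v8i64 -> split -> v4i64    Cost *= 2  (now 2)<br>
  //   v4i64 -> split -> v2i64    Cost *= 2  (now 4)<br>
  //   v2i64 is legal             -> returns {4, MVT::v2i64}<br>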
-<br>
-//===----------------------------------------------------------------------===//<br>
// Optimization Methods<br>
//===----------------------------------------------------------------------===//<br>
<br>
@@ -2394,7 +1466,7 @@<br>
APInt newMask = APInt::getLowBitsSet(maskWidth, width);<br>
for (unsigned offset=0; offset<origWidth/width; offset++) {<br>
if ((newMask & Mask) == Mask) {<br>
- if (!TD->isLittleEndian())<br>
+ if (!getDataLayout()->isLittleEndian())<br>
bestOffset = (origWidth/width - offset - 1) * (width/8);<br>
else<br>
bestOffset = (uint64_t)offset * (width/8);<br>
@@ -3199,7 +2271,7 @@<br>
std::make_pair(0u, static_cast<const TargetRegisterClass*>(0));<br>
<br>
// Figure out which register class contains this reg.<br>
- const TargetRegisterInfo *RI = TM.getRegisterInfo();<br>
+ const TargetRegisterInfo *RI = getTargetMachine().getRegisterInfo();<br>
for (TargetRegisterInfo::regclass_iterator RCI = RI->regclass_begin(),<br>
E = RI->regclass_end(); RCI != E; ++RCI) {<br>
const TargetRegisterClass *RC = *RCI;<br>
@@ -3323,7 +2395,7 @@<br>
// If OpTy is not a single value, it may be a struct/union that we<br>
// can tile with integers.<br>
if (!OpTy->isSingleValueType() && OpTy->isSized()) {<br>
- unsigned BitSize = TD->getTypeSizeInBits(OpTy);<br>
+ unsigned BitSize = getDataLayout()->getTypeSizeInBits(OpTy);<br>
switch (BitSize) {<br>
default: break;<br>
case 1:<br>
@@ -3338,7 +2410,7 @@<br>
}<br>
} else if (PointerType *PT = dyn_cast<PointerType>(OpTy)) {<br>
OpInfo.ConstraintVT = MVT::getIntegerVT(<br>
- 8*TD->getPointerSize(PT->getAddressSpace()));<br>
+ 8*getDataLayout()->getPointerSize(PT->getAddressSpace()));<br>
} else {<br>
OpInfo.ConstraintVT = MVT::getVT(OpTy, true);<br>
}<br>
@@ -3633,44 +2705,6 @@<br>
}<br>
}<br>
<br>
-//===----------------------------------------------------------------------===//<br>
-// Loop Strength Reduction hooks<br>
-//===----------------------------------------------------------------------===//<br>
-<br>
-/// isLegalAddressingMode - Return true if the addressing mode represented<br>
-/// by AM is legal for this target, for a load/store of the specified type.<br>
-bool TargetLowering::isLegalAddressingMode(const AddrMode &AM,<br>
- Type *Ty) const {<br>
- // The default implementation supports a conservative RISC-style r+r and<br>
- // r+i addressing mode.<br>
-<br>
- // Allows a sign-extended 16-bit immediate field.<br>
- if (AM.BaseOffs <= -(1LL << 16) || AM.BaseOffs >= (1LL << 16)-1)<br>
- return false;<br>
-<br>
- // No global is ever allowed as a base.<br>
- if (AM.BaseGV)<br>
- return false;<br>
-<br>
- // Only support r+r,<br>
- switch (AM.Scale) {<br>
- case 0: // "r+i" or just "i", depending on HasBaseReg.<br>
- break;<br>
- case 1:<br>
- if (AM.HasBaseReg && AM.BaseOffs) // "r+r+i" is not allowed.<br>
- return false;<br>
- // Otherwise we have r+r or r+i.<br>
- break;<br>
- case 2:<br>
- if (AM.HasBaseReg || AM.BaseOffs) // 2*r+r or 2*r+i is not allowed.<br>
- return false;<br>
- // Allow 2*r as r+r.<br>
- break;<br>
- }<br>
-<br>
- return true;<br>
-}<br>
-<br>
/// BuildExactSDIV - Given an exact SDIV by a constant, create a multiplication<br>
/// with the multiplicative inverse of the constant.<br>
SDValue TargetLowering::BuildExactSDIV(SDValue Op1, SDValue Op2, DebugLoc dl,<br>
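<br>
(For readers unfamiliar with the trick: an exact SDIV by an odd constant d is<br>
a multiply by d's multiplicative inverse mod 2^N; even divisors first shift<br>
out their trailing zeros. A minimal standalone sketch -- my illustration, not<br>
the committed code:)<br>
<br>
  #include <cstdint><br>
<br>
  // Inverse of an odd d modulo 2^32 via Newton-Raphson; each iteration<br>
  // doubles the number of correct low bits (3 -> 6 -> 12 -> 24 -> 48).<br>
  uint32_t inverseMod2_32(uint32_t d) {<br>
    uint32_t x = d;              // d*d == 1 (mod 8) for odd d<br>
    for (int i = 0; i < 4; ++i)<br>
      x *= 2 - d * x;<br>
    return x;<br>
  }<br>
  // If n is an exact multiple of d, then n / d == n * inverseMod2_32(d) in<br>
  // uint32_t arithmetic, e.g. 9 / 3 == 9 * 0xAAAAAAABu.<br>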
<br>
Modified: llvm/trunk/lib/CodeGen/SjLjEHPrepare.cpp<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/SjLjEHPrepare.cpp?rev=172246&r1=172245&r2=172246&view=diff" target="_blank" class="cremed">http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/SjLjEHPrepare.cpp?rev=172246&r1=172245&r2=172246&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/lib/CodeGen/SjLjEHPrepare.cpp (original)<br>
+++ llvm/trunk/lib/CodeGen/SjLjEHPrepare.cpp Fri Jan 11 14:05:37 2013<br>
@@ -43,7 +43,7 @@<br>
<br>
namespace {<br>
class SjLjEHPrepare : public FunctionPass {<br>
- const TargetLowering *TLI;<br>
+ const TargetLoweringBase *TLI;<br>
Type *FunctionContextTy;<br>
Constant *RegisterFn;<br>
Constant *UnregisterFn;<br>
@@ -58,7 +58,7 @@<br>
AllocaInst *FuncCtx;<br>
public:<br>
static char ID; // Pass identification, replacement for typeid<br>
- explicit SjLjEHPrepare(const TargetLowering *tli = NULL)<br>
+ explicit SjLjEHPrepare(const TargetLoweringBase *tli = NULL)<br>
: FunctionPass(ID), TLI(tli) { }<br>
bool doInitialization(Module &M);<br>
bool runOnFunction(Function &F);<br>
@@ -82,7 +82,7 @@<br>
char SjLjEHPrepare::ID = 0;<br>
<br>
// Public Interface To the SjLjEHPrepare pass.<br>
-FunctionPass *llvm::createSjLjEHPreparePass(const TargetLowering *TLI) {<br>
+FunctionPass *llvm::createSjLjEHPreparePass(const TargetLoweringBase *TLI) {<br>
return new SjLjEHPrepare(TLI);<br>
}<br>
// doInitialization - Set up declarations and types needed to process<br>
<br>
Modified: llvm/trunk/lib/CodeGen/StackProtector.cpp<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/StackProtector.cpp?rev=172246&r1=172245&r2=172246&view=diff" target="_blank" class="cremed">http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/StackProtector.cpp?rev=172246&r1=172245&r2=172246&view=diff</a><br>
==============================================================================<br>
--- llvm/trunk/lib/CodeGen/StackProtector.cpp (original)<br>
+++ llvm/trunk/lib/CodeGen/StackProtector.cpp Fri Jan 11 14:05:37 2013<br>
@@ -36,7 +36,7 @@<br>
class StackProtector : public FunctionPass {<br>
/// TLI - Keep a pointer of a TargetLowering to consult for determining<br>
/// target type sizes.<br>
- const TargetLowering *TLI;<br>
+ const TargetLoweringBase *TLI;<br>
<br>
Function *F;<br>
Module *M;<br>
@@ -68,7 +68,7 @@<br>
StackProtector() : FunctionPass(ID), TLI(0) {<br>
initializeStackProtectorPass(*PassRegistry::getPassRegistry());<br>
}<br>
- StackProtector(const TargetLowering *tli)<br>
+ StackProtector(const TargetLoweringBase *tli)<br>
: FunctionPass(ID), TLI(tli) {<br>
initializeStackProtectorPass(*PassRegistry::getPassRegistry());<br>
}<br>
@@ -85,7 +85,7 @@<br>
INITIALIZE_PASS(StackProtector, "stack-protector",<br>
"Insert stack protectors", false, false)<br>
<br>
-FunctionPass *llvm::createStackProtectorPass(const TargetLowering *tli) {<br>
+FunctionPass *llvm::createStackProtectorPass(const TargetLoweringBase *tli) {<br>
return new StackProtector(tli);<br>
}<br>
<br>
<br>
Added: llvm/trunk/lib/CodeGen/TargetLoweringBase.cpp<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/TargetLoweringBase.cpp?rev=172246&view=auto" target="_blank" class="cremed">http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/TargetLoweringBase.cpp?rev=172246&view=auto</a><br>
==============================================================================<br>
--- llvm/trunk/lib/CodeGen/TargetLoweringBase.cpp (added)<br>
+++ llvm/trunk/lib/CodeGen/TargetLoweringBase.cpp Fri Jan 11 14:05:37 2013<br>
@@ -0,0 +1,1274 @@<br>
+//===-- TargetLoweringBase.cpp - Implement the TargetLoweringBase class ---===//<br>
+//<br>
+// The LLVM Compiler Infrastructure<br>
+//<br>
+// This file is distributed under the University of Illinois Open Source<br>
+// License. See LICENSE.TXT for details.<br>
+//<br>
+//===----------------------------------------------------------------------===//<br>
+//<br>
+// This implements the TargetLoweringBase class.<br>
+//<br>
+//===----------------------------------------------------------------------===//<br>
+<br>
+#include "llvm/Target/TargetLowering.h"<br>
+#include "llvm/ADT/BitVector.h"<br>
+#include "llvm/ADT/STLExtras.h"<br>
+#include "llvm/CodeGen/Analysis.h"<br>
+#include "llvm/CodeGen/MachineFrameInfo.h"<br>
+#include "llvm/CodeGen/MachineFunction.h"<br>
+#include "llvm/CodeGen/MachineJumpTableInfo.h"<br>
+#include "llvm/IR/DataLayout.h"<br>
+#include "llvm/IR/DerivedTypes.h"<br>
+#include "llvm/IR/GlobalVariable.h"<br>
+#include "llvm/MC/MCAsmInfo.h"<br>
+#include "llvm/MC/MCExpr.h"<br>
+#include "llvm/Support/CommandLine.h"<br>
+#include "llvm/Support/ErrorHandling.h"<br>
+#include "llvm/Support/MathExtras.h"<br>
+#include "llvm/Target/TargetLoweringObjectFile.h"<br>
+#include "llvm/Target/TargetMachine.h"<br>
+#include "llvm/Target/TargetRegisterInfo.h"<br>
+#include <cctype><br>
+using namespace llvm;<br>
+<br>
+/// InitLibcallNames - Set default libcall names.<br>
+///<br>
+static void InitLibcallNames(const char **Names) {<br>
+ Names[RTLIB::SHL_I16] = "__ashlhi3";<br>
+ Names[RTLIB::SHL_I32] = "__ashlsi3";<br>
+ Names[RTLIB::SHL_I64] = "__ashldi3";<br>
+ Names[RTLIB::SHL_I128] = "__ashlti3";<br>
+ Names[RTLIB::SRL_I16] = "__lshrhi3";<br>
+ Names[RTLIB::SRL_I32] = "__lshrsi3";<br>
+ Names[RTLIB::SRL_I64] = "__lshrdi3";<br>
+ Names[RTLIB::SRL_I128] = "__lshrti3";<br>
+ Names[RTLIB::SRA_I16] = "__ashrhi3";<br>
+ Names[RTLIB::SRA_I32] = "__ashrsi3";<br>
+ Names[RTLIB::SRA_I64] = "__ashrdi3";<br>
+ Names[RTLIB::SRA_I128] = "__ashrti3";<br>
+ Names[RTLIB::MUL_I8] = "__mulqi3";<br>
+ Names[RTLIB::MUL_I16] = "__mulhi3";<br>
+ Names[RTLIB::MUL_I32] = "__mulsi3";<br>
+ Names[RTLIB::MUL_I64] = "__muldi3";<br>
+ Names[RTLIB::MUL_I128] = "__multi3";<br>
+ Names[RTLIB::MULO_I32] = "__mulosi4";<br>
+ Names[RTLIB::MULO_I64] = "__mulodi4";<br>
+ Names[RTLIB::MULO_I128] = "__muloti4";<br>
+ Names[RTLIB::SDIV_I8] = "__divqi3";<br>
+ Names[RTLIB::SDIV_I16] = "__divhi3";<br>
+ Names[RTLIB::SDIV_I32] = "__divsi3";<br>
+ Names[RTLIB::SDIV_I64] = "__divdi3";<br>
+ Names[RTLIB::SDIV_I128] = "__divti3";<br>
+ Names[RTLIB::UDIV_I8] = "__udivqi3";<br>
+ Names[RTLIB::UDIV_I16] = "__udivhi3";<br>
+ Names[RTLIB::UDIV_I32] = "__udivsi3";<br>
+ Names[RTLIB::UDIV_I64] = "__udivdi3";<br>
+ Names[RTLIB::UDIV_I128] = "__udivti3";<br>
+ Names[RTLIB::SREM_I8] = "__modqi3";<br>
+ Names[RTLIB::SREM_I16] = "__modhi3";<br>
+ Names[RTLIB::SREM_I32] = "__modsi3";<br>
+ Names[RTLIB::SREM_I64] = "__moddi3";<br>
+ Names[RTLIB::SREM_I128] = "__modti3";<br>
+ Names[RTLIB::UREM_I8] = "__umodqi3";<br>
+ Names[RTLIB::UREM_I16] = "__umodhi3";<br>
+ Names[RTLIB::UREM_I32] = "__umodsi3";<br>
+ Names[RTLIB::UREM_I64] = "__umoddi3";<br>
+ Names[RTLIB::UREM_I128] = "__umodti3";<br>
+<br>
+ // These are generally not available.<br>
+ Names[RTLIB::SDIVREM_I8] = 0;<br>
+ Names[RTLIB::SDIVREM_I16] = 0;<br>
+ Names[RTLIB::SDIVREM_I32] = 0;<br>
+ Names[RTLIB::SDIVREM_I64] = 0;<br>
+ Names[RTLIB::SDIVREM_I128] = 0;<br>
+ Names[RTLIB::UDIVREM_I8] = 0;<br>
+ Names[RTLIB::UDIVREM_I16] = 0;<br>
+ Names[RTLIB::UDIVREM_I32] = 0;<br>
+ Names[RTLIB::UDIVREM_I64] = 0;<br>
+ Names[RTLIB::UDIVREM_I128] = 0;<br>
+<br>
+ Names[RTLIB::NEG_I32] = "__negsi2";<br>
+ Names[RTLIB::NEG_I64] = "__negdi2";<br>
+ Names[RTLIB::ADD_F32] = "__addsf3";<br>
+ Names[RTLIB::ADD_F64] = "__adddf3";<br>
+ Names[RTLIB::ADD_F80] = "__addxf3";<br>
+ Names[RTLIB::ADD_F128] = "__addtf3";<br>
+ Names[RTLIB::ADD_PPCF128] = "__gcc_qadd";<br>
+ Names[RTLIB::SUB_F32] = "__subsf3";<br>
+ Names[RTLIB::SUB_F64] = "__subdf3";<br>
+ Names[RTLIB::SUB_F80] = "__subxf3";<br>
+ Names[RTLIB::SUB_F128] = "__subtf3";<br>
+ Names[RTLIB::SUB_PPCF128] = "__gcc_qsub";<br>
+ Names[RTLIB::MUL_F32] = "__mulsf3";<br>
+ Names[RTLIB::MUL_F64] = "__muldf3";<br>
+ Names[RTLIB::MUL_F80] = "__mulxf3";<br>
+ Names[RTLIB::MUL_F128] = "__multf3";<br>
+ Names[RTLIB::MUL_PPCF128] = "__gcc_qmul";<br>
+ Names[RTLIB::DIV_F32] = "__divsf3";<br>
+ Names[RTLIB::DIV_F64] = "__divdf3";<br>
+ Names[RTLIB::DIV_F80] = "__divxf3";<br>
+ Names[RTLIB::DIV_F128] = "__divtf3";<br>
+ Names[RTLIB::DIV_PPCF128] = "__gcc_qdiv";<br>
+ Names[RTLIB::REM_F32] = "fmodf";<br>
+ Names[RTLIB::REM_F64] = "fmod";<br>
+ Names[RTLIB::REM_F80] = "fmodl";<br>
+ Names[RTLIB::REM_F128] = "fmodl";<br>
+ Names[RTLIB::REM_PPCF128] = "fmodl";<br>
+ Names[RTLIB::FMA_F32] = "fmaf";<br>
+ Names[RTLIB::FMA_F64] = "fma";<br>
+ Names[RTLIB::FMA_F80] = "fmal";<br>
+ Names[RTLIB::FMA_F128] = "fmal";<br>
+ Names[RTLIB::FMA_PPCF128] = "fmal";<br>
+ Names[RTLIB::POWI_F32] = "__powisf2";<br>
+ Names[RTLIB::POWI_F64] = "__powidf2";<br>
+ Names[RTLIB::POWI_F80] = "__powixf2";<br>
+ Names[RTLIB::POWI_F128] = "__powitf2";<br>
+ Names[RTLIB::POWI_PPCF128] = "__powitf2";<br>
+ Names[RTLIB::SQRT_F32] = "sqrtf";<br>
+ Names[RTLIB::SQRT_F64] = "sqrt";<br>
+ Names[RTLIB::SQRT_F80] = "sqrtl";<br>
+ Names[RTLIB::SQRT_F128] = "sqrtl";<br>
+ Names[RTLIB::SQRT_PPCF128] = "sqrtl";<br>
+ Names[RTLIB::LOG_F32] = "logf";<br>
+ Names[RTLIB::LOG_F64] = "log";<br>
+ Names[RTLIB::LOG_F80] = "logl";<br>
+ Names[RTLIB::LOG_F128] = "logl";<br>
+ Names[RTLIB::LOG_PPCF128] = "logl";<br>
+ Names[RTLIB::LOG2_F32] = "log2f";<br>
+ Names[RTLIB::LOG2_F64] = "log2";<br>
+ Names[RTLIB::LOG2_F80] = "log2l";<br>
+ Names[RTLIB::LOG2_F128] = "log2l";<br>
+ Names[RTLIB::LOG2_PPCF128] = "log2l";<br>
+ Names[RTLIB::LOG10_F32] = "log10f";<br>
+ Names[RTLIB::LOG10_F64] = "log10";<br>
+ Names[RTLIB::LOG10_F80] = "log10l";<br>
+ Names[RTLIB::LOG10_F128] = "log10l";<br>
+ Names[RTLIB::LOG10_PPCF128] = "log10l";<br>
+ Names[RTLIB::EXP_F32] = "expf";<br>
+ Names[RTLIB::EXP_F64] = "exp";<br>
+ Names[RTLIB::EXP_F80] = "expl";<br>
+ Names[RTLIB::EXP_F128] = "expl";<br>
+ Names[RTLIB::EXP_PPCF128] = "expl";<br>
+ Names[RTLIB::EXP2_F32] = "exp2f";<br>
+ Names[RTLIB::EXP2_F64] = "exp2";<br>
+ Names[RTLIB::EXP2_F80] = "exp2l";<br>
+ Names[RTLIB::EXP2_F128] = "exp2l";<br>
+ Names[RTLIB::EXP2_PPCF128] = "exp2l";<br>
+ Names[RTLIB::SIN_F32] = "sinf";<br>
+ Names[RTLIB::SIN_F64] = "sin";<br>
+ Names[RTLIB::SIN_F80] = "sinl";<br>
+ Names[RTLIB::SIN_F128] = "sinl";<br>
+ Names[RTLIB::SIN_PPCF128] = "sinl";<br>
+ Names[RTLIB::COS_F32] = "cosf";<br>
+ Names[RTLIB::COS_F64] = "cos";<br>
+ Names[RTLIB::COS_F80] = "cosl";<br>
+ Names[RTLIB::COS_F128] = "cosl";<br>
+ Names[RTLIB::COS_PPCF128] = "cosl";<br>
+ Names[RTLIB::POW_F32] = "powf";<br>
+ Names[RTLIB::POW_F64] = "pow";<br>
+ Names[RTLIB::POW_F80] = "powl";<br>
+ Names[RTLIB::POW_F128] = "powl";<br>
+ Names[RTLIB::POW_PPCF128] = "powl";<br>
+ Names[RTLIB::CEIL_F32] = "ceilf";<br>
+ Names[RTLIB::CEIL_F64] = "ceil";<br>
+ Names[RTLIB::CEIL_F80] = "ceill";<br>
+ Names[RTLIB::CEIL_F128] = "ceill";<br>
+ Names[RTLIB::CEIL_PPCF128] = "ceill";<br>
+ Names[RTLIB::TRUNC_F32] = "truncf";<br>
+ Names[RTLIB::TRUNC_F64] = "trunc";<br>
+ Names[RTLIB::TRUNC_F80] = "truncl";<br>
+ Names[RTLIB::TRUNC_F128] = "truncl";<br>
+ Names[RTLIB::TRUNC_PPCF128] = "truncl";<br>
+ Names[RTLIB::RINT_F32] = "rintf";<br>
+ Names[RTLIB::RINT_F64] = "rint";<br>
+ Names[RTLIB::RINT_F80] = "rintl";<br>
+ Names[RTLIB::RINT_F128] = "rintl";<br>
+ Names[RTLIB::RINT_PPCF128] = "rintl";<br>
+ Names[RTLIB::NEARBYINT_F32] = "nearbyintf";<br>
+ Names[RTLIB::NEARBYINT_F64] = "nearbyint";<br>
+ Names[RTLIB::NEARBYINT_F80] = "nearbyintl";<br>
+ Names[RTLIB::NEARBYINT_F128] = "nearbyintl";<br>
+ Names[RTLIB::NEARBYINT_PPCF128] = "nearbyintl";<br>
+ Names[RTLIB::FLOOR_F32] = "floorf";<br>
+ Names[RTLIB::FLOOR_F64] = "floor";<br>
+ Names[RTLIB::FLOOR_F80] = "floorl";<br>
+ Names[RTLIB::FLOOR_F128] = "floorl";<br>
+ Names[RTLIB::FLOOR_PPCF128] = "floorl";<br>
+ Names[RTLIB::COPYSIGN_F32] = "copysignf";<br>
+ Names[RTLIB::COPYSIGN_F64] = "copysign";<br>
+ Names[RTLIB::COPYSIGN_F80] = "copysignl";<br>
+ Names[RTLIB::COPYSIGN_F128] = "copysignl";<br>
+ Names[RTLIB::COPYSIGN_PPCF128] = "copysignl";<br>
+ Names[RTLIB::FPEXT_F64_F128] = "__extenddftf2";<br>
+ Names[RTLIB::FPEXT_F32_F128] = "__extendsftf2";<br>
+ Names[RTLIB::FPEXT_F32_F64] = "__extendsfdf2";<br>
+ Names[RTLIB::FPEXT_F16_F32] = "__gnu_h2f_ieee";<br>
+ Names[RTLIB::FPROUND_F32_F16] = "__gnu_f2h_ieee";<br>
+ Names[RTLIB::FPROUND_F64_F32] = "__truncdfsf2";<br>
+ Names[RTLIB::FPROUND_F80_F32] = "__truncxfsf2";<br>
+ Names[RTLIB::FPROUND_F128_F32] = "__trunctfsf2";<br>
+ Names[RTLIB::FPROUND_PPCF128_F32] = "__trunctfsf2";<br>
+ Names[RTLIB::FPROUND_F80_F64] = "__truncxfdf2";<br>
+ Names[RTLIB::FPROUND_F128_F64] = "__trunctfdf2";<br>
+ Names[RTLIB::FPROUND_PPCF128_F64] = "__trunctfdf2";<br>
+ Names[RTLIB::FPTOSINT_F32_I8] = "__fixsfqi";<br>
+ Names[RTLIB::FPTOSINT_F32_I16] = "__fixsfhi";<br>
+ Names[RTLIB::FPTOSINT_F32_I32] = "__fixsfsi";<br>
+ Names[RTLIB::FPTOSINT_F32_I64] = "__fixsfdi";<br>
+ Names[RTLIB::FPTOSINT_F32_I128] = "__fixsfti";<br>
+ Names[RTLIB::FPTOSINT_F64_I8] = "__fixdfqi";<br>
+ Names[RTLIB::FPTOSINT_F64_I16] = "__fixdfhi";<br>
+ Names[RTLIB::FPTOSINT_F64_I32] = "__fixdfsi";<br>
+ Names[RTLIB::FPTOSINT_F64_I64] = "__fixdfdi";<br>
+ Names[RTLIB::FPTOSINT_F64_I128] = "__fixdfti";<br>
+ Names[RTLIB::FPTOSINT_F80_I32] = "__fixxfsi";<br>
+ Names[RTLIB::FPTOSINT_F80_I64] = "__fixxfdi";<br>
+ Names[RTLIB::FPTOSINT_F80_I128] = "__fixxfti";<br>
+ Names[RTLIB::FPTOSINT_F128_I32] = "__fixtfsi";<br>
+ Names[RTLIB::FPTOSINT_F128_I64] = "__fixtfdi";<br>
+ Names[RTLIB::FPTOSINT_F128_I128] = "__fixtfti";<br>
+ Names[RTLIB::FPTOSINT_PPCF128_I32] = "__fixtfsi";<br>
+ Names[RTLIB::FPTOSINT_PPCF128_I64] = "__fixtfdi";<br>
+ Names[RTLIB::FPTOSINT_PPCF128_I128] = "__fixtfti";<br>
+ Names[RTLIB::FPTOUINT_F32_I8] = "__fixunssfqi";<br>
+ Names[RTLIB::FPTOUINT_F32_I16] = "__fixunssfhi";<br>
+ Names[RTLIB::FPTOUINT_F32_I32] = "__fixunssfsi";<br>
+ Names[RTLIB::FPTOUINT_F32_I64] = "__fixunssfdi";<br>
+ Names[RTLIB::FPTOUINT_F32_I128] = "__fixunssfti";<br>
+ Names[RTLIB::FPTOUINT_F64_I8] = "__fixunsdfqi";<br>
+ Names[RTLIB::FPTOUINT_F64_I16] = "__fixunsdfhi";<br>
+ Names[RTLIB::FPTOUINT_F64_I32] = "__fixunsdfsi";<br>
+ Names[RTLIB::FPTOUINT_F64_I64] = "__fixunsdfdi";<br>
+ Names[RTLIB::FPTOUINT_F64_I128] = "__fixunsdfti";<br>
+ Names[RTLIB::FPTOUINT_F80_I32] = "__fixunsxfsi";<br>
+ Names[RTLIB::FPTOUINT_F80_I64] = "__fixunsxfdi";<br>
+ Names[RTLIB::FPTOUINT_F80_I128] = "__fixunsxfti";<br>
+ Names[RTLIB::FPTOUINT_F128_I32] = "__fixunstfsi";<br>
+ Names[RTLIB::FPTOUINT_F128_I64] = "__fixunstfdi";<br>
+ Names[RTLIB::FPTOUINT_F128_I128] = "__fixunstfti";<br>
+ Names[RTLIB::FPTOUINT_PPCF128_I32] = "__fixunstfsi";<br>
+ Names[RTLIB::FPTOUINT_PPCF128_I64] = "__fixunstfdi";<br>
+ Names[RTLIB::FPTOUINT_PPCF128_I128] = "__fixunstfti";<br>
+ Names[RTLIB::SINTTOFP_I32_F32] = "__floatsisf";<br>
+ Names[RTLIB::SINTTOFP_I32_F64] = "__floatsidf";<br>
+ Names[RTLIB::SINTTOFP_I32_F80] = "__floatsixf";<br>
+ Names[RTLIB::SINTTOFP_I32_F128] = "__floatsitf";<br>
+ Names[RTLIB::SINTTOFP_I32_PPCF128] = "__floatsitf";<br>
+ Names[RTLIB::SINTTOFP_I64_F32] = "__floatdisf";<br>
+ Names[RTLIB::SINTTOFP_I64_F64] = "__floatdidf";<br>
+ Names[RTLIB::SINTTOFP_I64_F80] = "__floatdixf";<br>
+ Names[RTLIB::SINTTOFP_I64_F128] = "__floatditf";<br>
+ Names[RTLIB::SINTTOFP_I64_PPCF128] = "__floatditf";<br>
+ Names[RTLIB::SINTTOFP_I128_F32] = "__floattisf";<br>
+ Names[RTLIB::SINTTOFP_I128_F64] = "__floattidf";<br>
+ Names[RTLIB::SINTTOFP_I128_F80] = "__floattixf";<br>
+ Names[RTLIB::SINTTOFP_I128_F128] = "__floattitf";<br>
+ Names[RTLIB::SINTTOFP_I128_PPCF128] = "__floattitf";<br>
+ Names[RTLIB::UINTTOFP_I32_F32] = "__floatunsisf";<br>
+ Names[RTLIB::UINTTOFP_I32_F64] = "__floatunsidf";<br>
+ Names[RTLIB::UINTTOFP_I32_F80] = "__floatunsixf";<br>
+ Names[RTLIB::UINTTOFP_I32_F128] = "__floatunsitf";<br>
+ Names[RTLIB::UINTTOFP_I32_PPCF128] = "__floatunsitf";<br>
+ Names[RTLIB::UINTTOFP_I64_F32] = "__floatundisf";<br>
+ Names[RTLIB::UINTTOFP_I64_F64] = "__floatundidf";<br>
+ Names[RTLIB::UINTTOFP_I64_F80] = "__floatundixf";<br>
+ Names[RTLIB::UINTTOFP_I64_F128] = "__floatunditf";<br>
+ Names[RTLIB::UINTTOFP_I64_PPCF128] = "__floatunditf";<br>
+ Names[RTLIB::UINTTOFP_I128_F32] = "__floatuntisf";<br>
+ Names[RTLIB::UINTTOFP_I128_F64] = "__floatuntidf";<br>
+ Names[RTLIB::UINTTOFP_I128_F80] = "__floatuntixf";<br>
+ Names[RTLIB::UINTTOFP_I128_F128] = "__floatuntitf";<br>
+ Names[RTLIB::UINTTOFP_I128_PPCF128] = "__floatuntitf";<br>
+ Names[RTLIB::OEQ_F32] = "__eqsf2";<br>
+ Names[RTLIB::OEQ_F64] = "__eqdf2";<br>
+ Names[RTLIB::OEQ_F128] = "__eqtf2";<br>
+ Names[RTLIB::UNE_F32] = "__nesf2";<br>
+ Names[RTLIB::UNE_F64] = "__nedf2";<br>
+ Names[RTLIB::UNE_F128] = "__netf2";<br>
+ Names[RTLIB::OGE_F32] = "__gesf2";<br>
+ Names[RTLIB::OGE_F64] = "__gedf2";<br>
+ Names[RTLIB::OGE_F128] = "__getf2";<br>
+ Names[RTLIB::OLT_F32] = "__ltsf2";<br>
+ Names[RTLIB::OLT_F64] = "__ltdf2";<br>
+ Names[RTLIB::OLT_F128] = "__lttf2";<br>
+ Names[RTLIB::OLE_F32] = "__lesf2";<br>
+ Names[RTLIB::OLE_F64] = "__ledf2";<br>
+ Names[RTLIB::OLE_F128] = "__letf2";<br>
+ Names[RTLIB::OGT_F32] = "__gtsf2";<br>
+ Names[RTLIB::OGT_F64] = "__gtdf2";<br>
+ Names[RTLIB::OGT_F128] = "__gttf2";<br>
+ Names[RTLIB::UO_F32] = "__unordsf2";<br>
+ Names[RTLIB::UO_F64] = "__unorddf2";<br>
+ Names[RTLIB::UO_F128] = "__unordtf2";<br>
+ Names[RTLIB::O_F32] = "__unordsf2";<br>
+ Names[RTLIB::O_F64] = "__unorddf2";<br>
+ Names[RTLIB::O_F128] = "__unordtf2";<br>
+ Names[RTLIB::MEMCPY] = "memcpy";<br>
+ Names[RTLIB::MEMMOVE] = "memmove";<br>
+ Names[RTLIB::MEMSET] = "memset";<br>
+ Names[RTLIB::UNWIND_RESUME] = "_Unwind_Resume";<br>
+ Names[RTLIB::SYNC_VAL_COMPARE_AND_SWAP_1] = "__sync_val_compare_and_swap_1";<br>
+ Names[RTLIB::SYNC_VAL_COMPARE_AND_SWAP_2] = "__sync_val_compare_and_swap_2";<br>
+ Names[RTLIB::SYNC_VAL_COMPARE_AND_SWAP_4] = "__sync_val_compare_and_swap_4";<br>
+ Names[RTLIB::SYNC_VAL_COMPARE_AND_SWAP_8] = "__sync_val_compare_and_swap_8";<br>
+ Names[RTLIB::SYNC_LOCK_TEST_AND_SET_1] = "__sync_lock_test_and_set_1";<br>
+ Names[RTLIB::SYNC_LOCK_TEST_AND_SET_2] = "__sync_lock_test_and_set_2";<br>
+ Names[RTLIB::SYNC_LOCK_TEST_AND_SET_4] = "__sync_lock_test_and_set_4";<br>
+ Names[RTLIB::SYNC_LOCK_TEST_AND_SET_8] = "__sync_lock_test_and_set_8";<br>
+ Names[RTLIB::SYNC_FETCH_AND_ADD_1] = "__sync_fetch_and_add_1";<br>
+ Names[RTLIB::SYNC_FETCH_AND_ADD_2] = "__sync_fetch_and_add_2";<br>
+ Names[RTLIB::SYNC_FETCH_AND_ADD_4] = "__sync_fetch_and_add_4";<br>
+ Names[RTLIB::SYNC_FETCH_AND_ADD_8] = "__sync_fetch_and_add_8";<br>
+ Names[RTLIB::SYNC_FETCH_AND_SUB_1] = "__sync_fetch_and_sub_1";<br>
+ Names[RTLIB::SYNC_FETCH_AND_SUB_2] = "__sync_fetch_and_sub_2";<br>
+ Names[RTLIB::SYNC_FETCH_AND_SUB_4] = "__sync_fetch_and_sub_4";<br>
+ Names[RTLIB::SYNC_FETCH_AND_SUB_8] = "__sync_fetch_and_sub_8";<br>
+ Names[RTLIB::SYNC_FETCH_AND_AND_1] = "__sync_fetch_and_and_1";<br>
+ Names[RTLIB::SYNC_FETCH_AND_AND_2] = "__sync_fetch_and_and_2";<br>
+ Names[RTLIB::SYNC_FETCH_AND_AND_4] = "__sync_fetch_and_and_4";<br>
+ Names[RTLIB::SYNC_FETCH_AND_AND_8] = "__sync_fetch_and_and_8";<br>
+ Names[RTLIB::SYNC_FETCH_AND_OR_1] = "__sync_fetch_and_or_1";<br>
+ Names[RTLIB::SYNC_FETCH_AND_OR_2] = "__sync_fetch_and_or_2";<br>
+ Names[RTLIB::SYNC_FETCH_AND_OR_4] = "__sync_fetch_and_or_4";<br>
+ Names[RTLIB::SYNC_FETCH_AND_OR_8] = "__sync_fetch_and_or_8";<br>
+ Names[RTLIB::SYNC_FETCH_AND_XOR_1] = "__sync_fetch_and_xor_1";<br>
+ Names[RTLIB::SYNC_FETCH_AND_XOR_2] = "__sync_fetch_and_xor_2";<br>
+ Names[RTLIB::SYNC_FETCH_AND_XOR_4] = "__sync_fetch_and_xor_4";<br>
+ Names[RTLIB::SYNC_FETCH_AND_XOR_8] = "__sync_fetch_and_xor_8";<br>
+ Names[RTLIB::SYNC_FETCH_AND_NAND_1] = "__sync_fetch_and_nand_1";<br>
+ Names[RTLIB::SYNC_FETCH_AND_NAND_2] = "__sync_fetch_and_nand_2";<br>
+ Names[RTLIB::SYNC_FETCH_AND_NAND_4] = "__sync_fetch_and_nand_4";<br>
+ Names[RTLIB::SYNC_FETCH_AND_NAND_8] = "__sync_fetch_and_nand_8";<br>
+}<br>
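(Decoder ring for the names above: libgcc derives them from GCC's<br>
machine-mode suffixes:)<br>
<br>
  //   qi / hi / si / di / ti  ->  i8 / i16 / i32 / i64 / i128<br>
  //   sf / df / xf / tf       ->  f32 / f64 / f80 / f128<br>
  // so "__ashldi3" is the 64-bit shift-left and "__extendsfdf2" the<br>
  // f32 -> f64 extension.<br>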
+<br>
+/// InitLibcallCallingConvs - Set default libcall CallingConvs.<br>
+///<br>
+static void InitLibcallCallingConvs(CallingConv::ID *CCs) {<br>
+ for (int i = 0; i < RTLIB::UNKNOWN_LIBCALL; ++i) {<br>
+ CCs[i] = CallingConv::C;<br>
+ }<br>
+}<br>
+<br>
+/// getFPEXT - Return the FPEXT_*_* value for the given types, or<br>
+/// UNKNOWN_LIBCALL if there is none.<br>
+RTLIB::Libcall RTLIB::getFPEXT(EVT OpVT, EVT RetVT) {<br>
+ if (OpVT == MVT::f32) {<br>
+ if (RetVT == MVT::f64)<br>
+ return FPEXT_F32_F64;<br>
+ if (RetVT == MVT::f128)<br>
+ return FPEXT_F32_F128;<br>
+ } else if (OpVT == MVT::f64) {<br>
+ if (RetVT == MVT::f128)<br>
+ return FPEXT_F64_F128;<br>
+ }<br>
+<br>
+ return UNKNOWN_LIBCALL;<br>
+}<br>
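(Typical pairing of these getters with the tables above -- a sketch, not a<br>
quote from the tree:)<br>
<br>
  // RTLIB::Libcall LC = RTLIB::getFPEXT(MVT::f32, MVT::f64); // FPEXT_F32_F64<br>
  // const char *Sym = TLI.getLibcallName(LC);                // "__extendsfdf2"<br>
  // CallingConv::ID CC = TLI.getLibcallCallingConv(LC);      // CallingConv::C<br>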
+<br>
+/// getFPROUND - Return the FPROUND_*_* value for the given types, or<br>
+/// UNKNOWN_LIBCALL if there is none.<br>
+RTLIB::Libcall RTLIB::getFPROUND(EVT OpVT, EVT RetVT) {<br>
+ if (RetVT == MVT::f32) {<br>
+ if (OpVT == MVT::f64)<br>
+ return FPROUND_F64_F32;<br>
+ if (OpVT == MVT::f80)<br>
+ return FPROUND_F80_F32;<br>
+ if (OpVT == MVT::f128)<br>
+ return FPROUND_F128_F32;<br>
+ if (OpVT == MVT::ppcf128)<br>
+ return FPROUND_PPCF128_F32;<br>
+ } else if (RetVT == MVT::f64) {<br>
+ if (OpVT == MVT::f80)<br>
+ return FPROUND_F80_F64;<br>
+ if (OpVT == MVT::f128)<br>
+ return FPROUND_F128_F64;<br>
+ if (OpVT == MVT::ppcf128)<br>
+ return FPROUND_PPCF128_F64;<br>
+ }<br>
+<br>
+ return UNKNOWN_LIBCALL;<br>
+}<br>
+<br>
+/// getFPTOSINT - Return the FPTOSINT_*_* value for the given types, or<br>
+/// UNKNOWN_LIBCALL if there is none.<br>
+RTLIB::Libcall RTLIB::getFPTOSINT(EVT OpVT, EVT RetVT) {<br>
+ if (OpVT == MVT::f32) {<br>
+ if (RetVT == MVT::i8)<br>
+ return FPTOSINT_F32_I8;<br>
+ if (RetVT == MVT::i16)<br>
+ return FPTOSINT_F32_I16;<br>
+ if (RetVT == MVT::i32)<br>
+ return FPTOSINT_F32_I32;<br>
+ if (RetVT == MVT::i64)<br>
+ return FPTOSINT_F32_I64;<br>
+ if (RetVT == MVT::i128)<br>
+ return FPTOSINT_F32_I128;<br>
+ } else if (OpVT == MVT::f64) {<br>
+ if (RetVT == MVT::i8)<br>
+ return FPTOSINT_F64_I8;<br>
+ if (RetVT == MVT::i16)<br>
+ return FPTOSINT_F64_I16;<br>
+ if (RetVT == MVT::i32)<br>
+ return FPTOSINT_F64_I32;<br>
+ if (RetVT == MVT::i64)<br>
+ return FPTOSINT_F64_I64;<br>
+ if (RetVT == MVT::i128)<br>
+ return FPTOSINT_F64_I128;<br>
+ } else if (OpVT == MVT::f80) {<br>
+ if (RetVT == MVT::i32)<br>
+ return FPTOSINT_F80_I32;<br>
+ if (RetVT == MVT::i64)<br>
+ return FPTOSINT_F80_I64;<br>
+ if (RetVT == MVT::i128)<br>
+ return FPTOSINT_F80_I128;<br>
+ } else if (OpVT == MVT::f128) {<br>
+ if (RetVT == MVT::i32)<br>
+ return FPTOSINT_F128_I32;<br>
+ if (RetVT == MVT::i64)<br>
+ return FPTOSINT_F128_I64;<br>
+ if (RetVT == MVT::i128)<br>
+ return FPTOSINT_F128_I128;<br>
+ } else if (OpVT == MVT::ppcf128) {<br>
+ if (RetVT == MVT::i32)<br>
+ return FPTOSINT_PPCF128_I32;<br>
+ if (RetVT == MVT::i64)<br>
+ return FPTOSINT_PPCF128_I64;<br>
+ if (RetVT == MVT::i128)<br>
+ return FPTOSINT_PPCF128_I128;<br>
+ }<br>
+ return UNKNOWN_LIBCALL;<br>
+}<br>
+<br>
+/// getFPTOUINT - Return the FPTOUINT_*_* value for the given types, or<br>
+/// UNKNOWN_LIBCALL if there is none.<br>
+RTLIB::Libcall RTLIB::getFPTOUINT(EVT OpVT, EVT RetVT) {<br>
+ if (OpVT == MVT::f32) {<br>
+ if (RetVT == MVT::i8)<br>
+ return FPTOUINT_F32_I8;<br>
+ if (RetVT == MVT::i16)<br>
+ return FPTOUINT_F32_I16;<br>
+ if (RetVT == MVT::i32)<br>
+ return FPTOUINT_F32_I32;<br>
+ if (RetVT == MVT::i64)<br>
+ return FPTOUINT_F32_I64;<br>
+ if (RetVT == MVT::i128)<br>
+ return FPTOUINT_F32_I128;<br>
+ } else if (OpVT == MVT::f64) {<br>
+ if (RetVT == MVT::i8)<br>
+ return FPTOUINT_F64_I8;<br>
+ if (RetVT == MVT::i16)<br>
+ return FPTOUINT_F64_I16;<br>
+ if (RetVT == MVT::i32)<br>
+ return FPTOUINT_F64_I32;<br>
+ if (RetVT == MVT::i64)<br>
+ return FPTOUINT_F64_I64;<br>
+ if (RetVT == MVT::i128)<br>
+ return FPTOUINT_F64_I128;<br>
+ } else if (OpVT == MVT::f80) {<br>
+ if (RetVT == MVT::i32)<br>
+ return FPTOUINT_F80_I32;<br>
+ if (RetVT == MVT::i64)<br>
+ return FPTOUINT_F80_I64;<br>
+ if (RetVT == MVT::i128)<br>
+ return FPTOUINT_F80_I128;<br>
+ } else if (OpVT == MVT::f128) {<br>
+ if (RetVT == MVT::i32)<br>
+ return FPTOUINT_F128_I32;<br>
+ if (RetVT == MVT::i64)<br>
+ return FPTOUINT_F128_I64;<br>
+ if (RetVT == MVT::i128)<br>
+ return FPTOUINT_F128_I128;<br>
+ } else if (OpVT == MVT::ppcf128) {<br>
+ if (RetVT == MVT::i32)<br>
+ return FPTOUINT_PPCF128_I32;<br>
+ if (RetVT == MVT::i64)<br>
+ return FPTOUINT_PPCF128_I64;<br>
+ if (RetVT == MVT::i128)<br>
+ return FPTOUINT_PPCF128_I128;<br>
+ }<br>
+ return UNKNOWN_LIBCALL;<br>
+}<br>
+<br>
+/// getSINTTOFP - Return the SINTTOFP_*_* value for the given types, or<br>
+/// UNKNOWN_LIBCALL if there is none.<br>
+RTLIB::Libcall RTLIB::getSINTTOFP(EVT OpVT, EVT RetVT) {<br>
+ if (OpVT == MVT::i32) {<br>
+ if (RetVT == MVT::f32)<br>
+ return SINTTOFP_I32_F32;<br>
+ if (RetVT == MVT::f64)<br>
+ return SINTTOFP_I32_F64;<br>
+ if (RetVT == MVT::f80)<br>
+ return SINTTOFP_I32_F80;<br>
+ if (RetVT == MVT::f128)<br>
+ return SINTTOFP_I32_F128;<br>
+ if (RetVT == MVT::ppcf128)<br>
+ return SINTTOFP_I32_PPCF128;<br>
+ } else if (OpVT == MVT::i64) {<br>
+ if (RetVT == MVT::f32)<br>
+ return SINTTOFP_I64_F32;<br>
+ if (RetVT == MVT::f64)<br>
+ return SINTTOFP_I64_F64;<br>
+ if (RetVT == MVT::f80)<br>
+ return SINTTOFP_I64_F80;<br>
+ if (RetVT == MVT::f128)<br>
+ return SINTTOFP_I64_F128;<br>
+ if (RetVT == MVT::ppcf128)<br>
+ return SINTTOFP_I64_PPCF128;<br>
+ } else if (OpVT == MVT::i128) {<br>
+ if (RetVT == MVT::f32)<br>
+ return SINTTOFP_I128_F32;<br>
+ if (RetVT == MVT::f64)<br>
+ return SINTTOFP_I128_F64;<br>
+ if (RetVT == MVT::f80)<br>
+ return SINTTOFP_I128_F80;<br>
+ if (RetVT == MVT::f128)<br>
+ return SINTTOFP_I128_F128;<br>
+ if (RetVT == MVT::ppcf128)<br>
+ return SINTTOFP_I128_PPCF128;<br>
+ }<br>
+ return UNKNOWN_LIBCALL;<br>
+}<br>
+<br>
+/// getUINTTOFP - Return the UINTTOFP_*_* value for the given types, or<br>
+/// UNKNOWN_LIBCALL if there is none.<br>
+RTLIB::Libcall RTLIB::getUINTTOFP(EVT OpVT, EVT RetVT) {<br>
+ if (OpVT == MVT::i32) {<br>
+ if (RetVT == MVT::f32)<br>
+ return UINTTOFP_I32_F32;<br>
+ if (RetVT == MVT::f64)<br>
+ return UINTTOFP_I32_F64;<br>
+ if (RetVT == MVT::f80)<br>
+ return UINTTOFP_I32_F80;<br>
+ if (RetVT == MVT::f128)<br>
+ return UINTTOFP_I32_F128;<br>
+ if (RetVT == MVT::ppcf128)<br>
+ return UINTTOFP_I32_PPCF128;<br>
+ } else if (OpVT == MVT::i64) {<br>
+ if (RetVT == MVT::f32)<br>
+ return UINTTOFP_I64_F32;<br>
+ if (RetVT == MVT::f64)<br>
+ return UINTTOFP_I64_F64;<br>
+ if (RetVT == MVT::f80)<br>
+ return UINTTOFP_I64_F80;<br>
+ if (RetVT == MVT::f128)<br>
+ return UINTTOFP_I64_F128;<br>
+ if (RetVT == MVT::ppcf128)<br>
+ return UINTTOFP_I64_PPCF128;<br>
+ } else if (OpVT == MVT::i128) {<br>
+ if (RetVT == MVT::f32)<br>
+ return UINTTOFP_I128_F32;<br>
+ if (RetVT == MVT::f64)<br>
+ return UINTTOFP_I128_F64;<br>
+ if (RetVT == MVT::f80)<br>
+ return UINTTOFP_I128_F80;<br>
+ if (RetVT == MVT::f128)<br>
+ return UINTTOFP_I128_F128;<br>
+ if (RetVT == MVT::ppcf128)<br>
+ return UINTTOFP_I128_PPCF128;<br>
+ }<br>
+ return UNKNOWN_LIBCALL;<br>
+}<br>
+<br>
+/// InitCmpLibcallCCs - Set default comparison libcall CC.<br>
+///<br>
+static void InitCmpLibcallCCs(ISD::CondCode *CCs) {<br>
+ memset(CCs, ISD::SETCC_INVALID, sizeof(ISD::CondCode)*RTLIB::UNKNOWN_LIBCALL);<br>
+ CCs[RTLIB::OEQ_F32] = ISD::SETEQ;<br>
+ CCs[RTLIB::OEQ_F64] = ISD::SETEQ;<br>
+ CCs[RTLIB::OEQ_F128] = ISD::SETEQ;<br>
+ CCs[RTLIB::UNE_F32] = ISD::SETNE;<br>
+ CCs[RTLIB::UNE_F64] = ISD::SETNE;<br>
+ CCs[RTLIB::UNE_F128] = ISD::SETNE;<br>
+ CCs[RTLIB::OGE_F32] = ISD::SETGE;<br>
+ CCs[RTLIB::OGE_F64] = ISD::SETGE;<br>
+ CCs[RTLIB::OGE_F128] = ISD::SETGE;<br>
+ CCs[RTLIB::OLT_F32] = ISD::SETLT;<br>
+ CCs[RTLIB::OLT_F64] = ISD::SETLT;<br>
+ CCs[RTLIB::OLT_F128] = ISD::SETLT;<br>
+ CCs[RTLIB::OLE_F32] = ISD::SETLE;<br>
+ CCs[RTLIB::OLE_F64] = ISD::SETLE;<br>
+ CCs[RTLIB::OLE_F128] = ISD::SETLE;<br>
+ CCs[RTLIB::OGT_F32] = ISD::SETGT;<br>
+ CCs[RTLIB::OGT_F64] = ISD::SETGT;<br>
+ CCs[RTLIB::OGT_F128] = ISD::SETGT;<br>
+ CCs[RTLIB::UO_F32] = ISD::SETNE;<br>
+ CCs[RTLIB::UO_F64] = ISD::SETNE;<br>
+ CCs[RTLIB::UO_F128] = ISD::SETNE;<br>
+ CCs[RTLIB::O_F32] = ISD::SETEQ;<br>
+ CCs[RTLIB::O_F64] = ISD::SETEQ;<br>
+ CCs[RTLIB::O_F128] = ISD::SETEQ;<br>
+}<br>
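(Why these particular condition codes: each soft-float comparison routine<br>
returns an int that the caller tests against zero. __ltsf2 yields a negative<br>
value iff a < b, hence SETLT; __unordsf2 is nonzero iff an operand is NaN,<br>
hence SETNE for UO and SETEQ for O. Roughly:)<br>
<br>
  // int r = __ltsf2(a, b);<br>
  // (a OLT b)  <=>  (r < 0)    // CmpLibcallCCs[RTLIB::OLT_F32] == ISD::SETLT<br>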
+<br>
+/// NOTE: The constructor takes ownership of TLOF.<br>
+TargetLoweringBase::TargetLoweringBase(const TargetMachine &tm,<br>
+ const TargetLoweringObjectFile *tlof)<br>
+ : TM(tm), TD(TM.getDataLayout()), TLOF(*tlof) {<br>
+ // All operations default to being supported.<br>
+ memset(OpActions, 0, sizeof(OpActions));<br>
+ memset(LoadExtActions, 0, sizeof(LoadExtActions));<br>
+ memset(TruncStoreActions, 0, sizeof(TruncStoreActions));<br>
+ memset(IndexedModeActions, 0, sizeof(IndexedModeActions));<br>
+ memset(CondCodeActions, 0, sizeof(CondCodeActions));<br>
+<br>
+ // Set default actions for various operations.<br>
+ for (unsigned VT = 0; VT != (unsigned)MVT::LAST_VALUETYPE; ++VT) {<br>
+ // Default all indexed load / store to expand.<br>
+ for (unsigned IM = (unsigned)ISD::PRE_INC;<br>
+ IM != (unsigned)ISD::LAST_INDEXED_MODE; ++IM) {<br>
+ setIndexedLoadAction(IM, (MVT::SimpleValueType)VT, Expand);<br>
+ setIndexedStoreAction(IM, (MVT::SimpleValueType)VT, Expand);<br>
+ }<br>
+<br>
+ // These operations default to expand.<br>
+ setOperationAction(ISD::FGETSIGN, (MVT::SimpleValueType)VT, Expand);<br>
+ setOperationAction(ISD::CONCAT_VECTORS, (MVT::SimpleValueType)VT, Expand);<br>
+ }<br>
+<br>
+ // Most targets ignore the @llvm.prefetch intrinsic.<br>
+ setOperationAction(ISD::PREFETCH, MVT::Other, Expand);<br>
+<br>
+ // ConstantFP nodes default to expand. Targets can either change this to<br>
+ // Legal, in which case all fp constants are legal, or use isFPImmLegal()<br>
+ // to optimize expansions for certain constants.<br>
+ setOperationAction(ISD::ConstantFP, MVT::f16, Expand);<br>
+ setOperationAction(ISD::ConstantFP, MVT::f32, Expand);<br>
+ setOperationAction(ISD::ConstantFP, MVT::f64, Expand);<br>
+ setOperationAction(ISD::ConstantFP, MVT::f80, Expand);<br>
+ setOperationAction(ISD::ConstantFP, MVT::f128, Expand);<br>
+<br>
+ // These library functions default to expand.<br>
+ setOperationAction(ISD::FLOG , MVT::f16, Expand);<br>
+ setOperationAction(ISD::FLOG2, MVT::f16, Expand);<br>
+ setOperationAction(ISD::FLOG10, MVT::f16, Expand);<br>
+ setOperationAction(ISD::FEXP , MVT::f16, Expand);<br>
+ setOperationAction(ISD::FEXP2, MVT::f16, Expand);<br>
+ setOperationAction(ISD::FFLOOR, MVT::f16, Expand);<br>
+ setOperationAction(ISD::FNEARBYINT, MVT::f16, Expand);<br>
+ setOperationAction(ISD::FCEIL, MVT::f16, Expand);<br>
+ setOperationAction(ISD::FRINT, MVT::f16, Expand);<br>
+ setOperationAction(ISD::FTRUNC, MVT::f16, Expand);<br>
+ setOperationAction(ISD::FLOG , MVT::f32, Expand);<br>
+ setOperationAction(ISD::FLOG2, MVT::f32, Expand);<br>
+ setOperationAction(ISD::FLOG10, MVT::f32, Expand);<br>
+ setOperationAction(ISD::FEXP , MVT::f32, Expand);<br>
+ setOperationAction(ISD::FEXP2, MVT::f32, Expand);<br>
+ setOperationAction(ISD::FFLOOR, MVT::f32, Expand);<br>
+ setOperationAction(ISD::FNEARBYINT, MVT::f32, Expand);<br>
+ setOperationAction(ISD::FCEIL, MVT::f32, Expand);<br>
+ setOperationAction(ISD::FRINT, MVT::f32, Expand);<br>
+ setOperationAction(ISD::FTRUNC, MVT::f32, Expand);<br>
+ setOperationAction(ISD::FLOG , MVT::f64, Expand);<br>
+ setOperationAction(ISD::FLOG2, MVT::f64, Expand);<br>
+ setOperationAction(ISD::FLOG10, MVT::f64, Expand);<br>
+ setOperationAction(ISD::FEXP , MVT::f64, Expand);<br>
+ setOperationAction(ISD::FEXP2, MVT::f64, Expand);<br>
+ setOperationAction(ISD::FFLOOR, MVT::f64, Expand);<br>
+ setOperationAction(ISD::FNEARBYINT, MVT::f64, Expand);<br>
+ setOperationAction(ISD::FCEIL, MVT::f64, Expand);<br>
+ setOperationAction(ISD::FRINT, MVT::f64, Expand);<br>
+ setOperationAction(ISD::FTRUNC, MVT::f64, Expand);<br>
+ setOperationAction(ISD::FLOG , MVT::f128, Expand);<br>
+ setOperationAction(ISD::FLOG2, MVT::f128, Expand);<br>
+ setOperationAction(ISD::FLOG10, MVT::f128, Expand);<br>
+ setOperationAction(ISD::FEXP , MVT::f128, Expand);<br>
+ setOperationAction(ISD::FEXP2, MVT::f128, Expand);<br>
+ setOperationAction(ISD::FFLOOR, MVT::f128, Expand);<br>
+ setOperationAction(ISD::FNEARBYINT, MVT::f128, Expand);<br>
+ setOperationAction(ISD::FCEIL, MVT::f128, Expand);<br>
+ setOperationAction(ISD::FRINT, MVT::f128, Expand);<br>
+ setOperationAction(ISD::FTRUNC, MVT::f128, Expand);<br>
+<br>
+ // Default ISD::TRAP to expand (which turns it into abort).<br>
+ setOperationAction(ISD::TRAP, MVT::Other, Expand);<br>
+<br>
+ // On most systems, DEBUGTRAP and TRAP have no difference. The "Expand"<br>
+ // here informs the DAG legalizer to replace DEBUGTRAP with TRAP.<br>
+ setOperationAction(ISD::DEBUGTRAP, MVT::Other, Expand);<br>
+<br>
+ IsLittleEndian = TD->isLittleEndian();<br>
+ PointerTy = MVT::getIntegerVT(8*TD->getPointerSize(0));<br>
+ memset(RegClassForVT, 0, MVT::LAST_VALUETYPE*sizeof(TargetRegisterClass*));<br>
+ memset(TargetDAGCombineArray, 0, array_lengthof(TargetDAGCombineArray));<br>
+ maxStoresPerMemset = maxStoresPerMemcpy = maxStoresPerMemmove = 8;<br>
+ maxStoresPerMemsetOptSize = maxStoresPerMemcpyOptSize<br>
+ = maxStoresPerMemmoveOptSize = 4;<br>
+ benefitFromCodePlacementOpt = false;<br>
+ UseUnderscoreSetJmp = false;<br>
+ UseUnderscoreLongJmp = false;<br>
+ SelectIsExpensive = false;<br>
+ IntDivIsCheap = false;<br>
+ Pow2DivIsCheap = false;<br>
+ JumpIsExpensive = false;<br>
+ predictableSelectIsExpensive = false;<br>
+ StackPointerRegisterToSaveRestore = 0;<br>
+ ExceptionPointerRegister = 0;<br>
+ ExceptionSelectorRegister = 0;<br>
+ BooleanContents = UndefinedBooleanContent;<br>
+ BooleanVectorContents = UndefinedBooleanContent;<br>
+ SchedPreferenceInfo = Sched::ILP;<br>
+ JumpBufSize = 0;<br>
+ JumpBufAlignment = 0;<br>
+ MinFunctionAlignment = 0;<br>
+ PrefFunctionAlignment = 0;<br>
+ PrefLoopAlignment = 0;<br>
+ MinStackArgumentAlignment = 1;<br>
+ ShouldFoldAtomicFences = false;<br>
+ InsertFencesForAtomic = false;<br>
+ SupportJumpTables = true;<br>
+ MinimumJumpTableEntries = 4;<br>
+<br>
+ InitLibcallNames(LibcallRoutineNames);<br>
+ InitCmpLibcallCCs(CmpLibcallCCs);<br>
+ InitLibcallCallingConvs(LibcallCallingConvs);<br>
+}<br>
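<br>
These defaults are only a baseline; a concrete backend flips individual entries in its own constructor. A minimal sketch of how a hypothetical target might do so (MyTarget, MyTargetLowering, and GPR32RegClass are invented for illustration):<br>
<br>
// Hypothetical subclass constructor overriding the defaults set above.<br>
MyTargetLowering::MyTargetLowering(const TargetMachine &TM)<br>
    : TargetLowering(TM, new TargetLoweringObjectFileELF()) {<br>
  // Tell the legalizer which types live in real registers.<br>
  addRegisterClass(MVT::i32, &MyTarget::GPR32RegClass);<br>
<br>
  // No f32 remainder instruction; Expand lowers it to a libcall.<br>
  setOperationAction(ISD::FREM, MVT::f32, Expand);<br>
<br>
  // This target has a real prefetch instruction, so undo the default.<br>
  setOperationAction(ISD::PREFETCH, MVT::Other, Legal);<br>
<br>
  // Must run after all register classes have been added (defined below).<br>
  computeRegisterProperties();<br>
}<br>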
+<br>
+TargetLoweringBase::~TargetLoweringBase() {<br>
+ delete &TLOF;<br>
+}<br>
+<br>
+MVT TargetLoweringBase::getShiftAmountTy(EVT LHSTy) const {<br>
+ return MVT::getIntegerVT(8*TD->getPointerSize(0));<br>
+}<br>
+<br>
+/// canOpTrap - Returns true if the operation can trap for the value type.<br>
+/// VT must be a legal type.<br>
+bool TargetLoweringBase::canOpTrap(unsigned Op, EVT VT) const {<br>
+ assert(isTypeLegal(VT));<br>
+ switch (Op) {<br>
+ default:<br>
+ return false;<br>
+ case ISD::FDIV:<br>
+ case ISD::FREM:<br>
+ case ISD::SDIV:<br>
+ case ISD::UDIV:<br>
+ case ISD::SREM:<br>
+ case ISD::UREM:<br>
+ return true;<br>
+ }<br>
+}<br>
+<br>
+<br>
+static unsigned getVectorTypeBreakdownMVT(MVT VT, MVT &IntermediateVT,<br>
+ unsigned &NumIntermediates,<br>
+ MVT &RegisterVT,<br>
+ TargetLoweringBase *TLI) {<br>
+ // Figure out the right, legal destination reg to copy into.<br>
+ unsigned NumElts = VT.getVectorNumElements();<br>
+ MVT EltTy = VT.getVectorElementType();<br>
+<br>
+ unsigned NumVectorRegs = 1;<br>
+<br>
+ // FIXME: We don't support non-power-of-2-sized vectors for now. Ideally we<br>
+ // could break down into LHS/RHS like LegalizeDAG does.<br>
+ if (!isPowerOf2_32(NumElts)) {<br>
+ NumVectorRegs = NumElts;<br>
+ NumElts = 1;<br>
+ }<br>
+<br>
+ // Divide the input until we get to a supported size. This will always<br>
+ // end with a scalar if the target doesn't support vectors.<br>
+ while (NumElts > 1 && !TLI->isTypeLegal(MVT::getVectorVT(EltTy, NumElts))) {<br>
+ NumElts >>= 1;<br>
+ NumVectorRegs <<= 1;<br>
+ }<br>
+<br>
+ NumIntermediates = NumVectorRegs;<br>
+<br>
+ MVT NewVT = MVT::getVectorVT(EltTy, NumElts);<br>
+ if (!TLI->isTypeLegal(NewVT))<br>
+ NewVT = EltTy;<br>
+ IntermediateVT = NewVT;<br>
+<br>
+ unsigned NewVTSize = NewVT.getSizeInBits();<br>
+<br>
+ // Convert sizes such as i33 to i64.<br>
+ if (!isPowerOf2_32(NewVTSize))<br>
+ NewVTSize = NextPowerOf2(NewVTSize);<br>
+<br>
+ MVT DestVT = TLI->getRegisterType(NewVT);<br>
+ RegisterVT = DestVT;<br>
+ if (EVT(DestVT).bitsLT(NewVT)) // Value is expanded, e.g. i64 -> i16.<br>
+ return NumVectorRegs*(NewVTSize/DestVT.getSizeInBits());<br>
+<br>
+ // Otherwise, promotion or legal types use the same number of registers as<br>
+ // the vector decimated to the appropriate level.<br>
+ return NumVectorRegs;<br>
+}<br>
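<br>
To make the division loop concrete, here is how the breakdown would play out for v8f32 on a target whose widest legal vector type is v4f32 (an SSE1-like configuration, assumed purely for illustration):<br>
<br>
// Worked trace of getVectorTypeBreakdownMVT(MVT::v8f32, ...):<br>
//   Start:  NumElts = 8, EltTy = f32, NumVectorRegs = 1<br>
//   Loop:   v8f32 illegal -> NumElts = 4, NumVectorRegs = 2<br>
//           v4f32 legal   -> loop exits<br>
//   Result: IntermediateVT = v4f32, NumIntermediates = 2,<br>
//           RegisterVT = v4f32, returns 2 registers in total.<br>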
+<br>
+ /// isLegalRC - Return true if at least one of the value types that can be<br>
+ /// represented by the specified register class is legal.<br>
+bool TargetLoweringBase::isLegalRC(const TargetRegisterClass *RC) const {<br>
+ for (TargetRegisterClass::vt_iterator I = RC->vt_begin(), E = RC->vt_end();<br>
+ I != E; ++I) {<br>
+ if (isTypeLegal(*I))<br>
+ return true;<br>
+ }<br>
+ return false;<br>
+}<br>
+<br>
+ /// findRepresentativeClass - Return the largest legal super-register class of<br>
+ /// the register class for the specified type, and its associated "cost".<br>
+std::pair<const TargetRegisterClass*, uint8_t><br>
+TargetLoweringBase::findRepresentativeClass(MVT VT) const {<br>
+ const TargetRegisterInfo *TRI = getTargetMachine().getRegisterInfo();<br>
+ const TargetRegisterClass *RC = RegClassForVT[VT.SimpleTy];<br>
+ if (!RC)<br>
+ return std::make_pair(RC, 0);<br>
+<br>
+ // Compute the set of all super-register classes.<br>
+ BitVector SuperRegRC(TRI->getNumRegClasses());<br>
+ for (SuperRegClassIterator RCI(RC, TRI); RCI.isValid(); ++RCI)<br>
+ SuperRegRC.setBitsInMask(RCI.getMask());<br>
+<br>
+ // Find the first legal register class with the largest spill size.<br>
+ const TargetRegisterClass *BestRC = RC;<br>
+ for (int i = SuperRegRC.find_first(); i >= 0; i = SuperRegRC.find_next(i)) {<br>
+ const TargetRegisterClass *SuperRC = TRI->getRegClass(i);<br>
+ // We want the largest possible spill size.<br>
+ if (SuperRC->getSize() <= BestRC->getSize())<br>
+ continue;<br>
+ if (!isLegalRC(SuperRC))<br>
+ continue;<br>
+ BestRC = SuperRC;<br>
+ }<br>
+ return std::make_pair(BestRC, 1);<br>
+}<br>
+<br>
+/// computeRegisterProperties - Once all of the register classes are added,<br>
+/// this allows us to compute derived properties we expose.<br>
+void TargetLoweringBase::computeRegisterProperties() {<br>
+ assert(MVT::LAST_VALUETYPE <= MVT::MAX_ALLOWED_VALUETYPE &&<br>
+ "Too many value types for ValueTypeActions to hold!");<br>
+<br>
+ // Everything defaults to needing one register.<br>
+ for (unsigned i = 0; i != MVT::LAST_VALUETYPE; ++i) {<br>
+ NumRegistersForVT[i] = 1;<br>
+ RegisterTypeForVT[i] = TransformToType[i] = (MVT::SimpleValueType)i;<br>
+ }<br>
+ // ...except isVoid, which doesn't need any registers.<br>
+ NumRegistersForVT[MVT::isVoid] = 0;<br>
+<br>
+ // Find the largest integer register class.<br>
+ unsigned LargestIntReg = MVT::LAST_INTEGER_VALUETYPE;<br>
+ for (; RegClassForVT[LargestIntReg] == 0; --LargestIntReg)<br>
+ assert(LargestIntReg != MVT::i1 && "No integer registers defined!");<br>
+<br>
+ // Every integer value type larger than this largest register takes twice as<br>
+ // many registers to represent as the previous ValueType.<br>
+ for (unsigned ExpandedReg = LargestIntReg + 1;<br>
+ ExpandedReg <= MVT::LAST_INTEGER_VALUETYPE; ++ExpandedReg) {<br>
+ NumRegistersForVT[ExpandedReg] = 2*NumRegistersForVT[ExpandedReg-1];<br>
+ RegisterTypeForVT[ExpandedReg] = (MVT::SimpleValueType)LargestIntReg;<br>
+ TransformToType[ExpandedReg] = (MVT::SimpleValueType)(ExpandedReg - 1);<br>
+ ValueTypeActions.setTypeAction((MVT::SimpleValueType)ExpandedReg,<br>
+ TypeExpandInteger);<br>
+ }<br>
+<br>
+ // Inspect all of the ValueType's smaller than the largest integer<br>
+ // register to see which ones need promotion.<br>
+ unsigned LegalIntReg = LargestIntReg;<br>
+ for (unsigned IntReg = LargestIntReg - 1;<br>
+ IntReg >= (unsigned)MVT::i1; --IntReg) {<br>
+ MVT IVT = (MVT::SimpleValueType)IntReg;<br>
+ if (isTypeLegal(IVT)) {<br>
+ LegalIntReg = IntReg;<br>
+ } else {<br>
+ RegisterTypeForVT[IntReg] = TransformToType[IntReg] =<br>
+ (const MVT::SimpleValueType)LegalIntReg;<br>
+ ValueTypeActions.setTypeAction(IVT, TypePromoteInteger);<br>
+ }<br>
+ }<br>
+<br>
+ // ppcf128 type is really two f64's.<br>
+ if (!isTypeLegal(MVT::ppcf128)) {<br>
+ NumRegistersForVT[MVT::ppcf128] = 2*NumRegistersForVT[MVT::f64];<br>
+ RegisterTypeForVT[MVT::ppcf128] = MVT::f64;<br>
+ TransformToType[MVT::ppcf128] = MVT::f64;<br>
+ ValueTypeActions.setTypeAction(MVT::ppcf128, TypeExpandFloat);<br>
+ }<br>
+<br>
+ // Decide how to handle f64. If the target does not have native f64 support,<br>
+ // expand it to i64 and we will be generating soft float library calls.<br>
+ if (!isTypeLegal(MVT::f64)) {<br>
+ NumRegistersForVT[MVT::f64] = NumRegistersForVT[MVT::i64];<br>
+ RegisterTypeForVT[MVT::f64] = RegisterTypeForVT[MVT::i64];<br>
+ TransformToType[MVT::f64] = MVT::i64;<br>
+ ValueTypeActions.setTypeAction(MVT::f64, TypeSoftenFloat);<br>
+ }<br>
+<br>
+ // Decide how to handle f32. If the target does not have native support for<br>
+ // f32, promote it to f64 if it is legal. Otherwise, expand it to i32.<br>
+ if (!isTypeLegal(MVT::f32)) {<br>
+ if (isTypeLegal(MVT::f64)) {<br>
+ NumRegistersForVT[MVT::f32] = NumRegistersForVT[MVT::f64];<br>
+ RegisterTypeForVT[MVT::f32] = RegisterTypeForVT[MVT::f64];<br>
+ TransformToType[MVT::f32] = MVT::f64;<br>
+ ValueTypeActions.setTypeAction(MVT::f32, TypePromoteInteger);<br>
+ } else {<br>
+ NumRegistersForVT[MVT::f32] = NumRegistersForVT[MVT::i32];<br>
+ RegisterTypeForVT[MVT::f32] = RegisterTypeForVT[MVT::i32];<br>
+ TransformToType[MVT::f32] = MVT::i32;<br>
+ ValueTypeActions.setTypeAction(MVT::f32, TypeSoftenFloat);<br>
+ }<br>
+ }<br>
+<br>
+ // Loop over all of the vector value types to see which need transformations.<br>
+ for (unsigned i = MVT::FIRST_VECTOR_VALUETYPE;<br>
+ i <= (unsigned)MVT::LAST_VECTOR_VALUETYPE; ++i) {<br>
+ MVT VT = (MVT::SimpleValueType)i;<br>
+ if (isTypeLegal(VT)) continue;<br>
+<br>
+ // Determine if there is a legal wider type. If so, we should promote to<br>
+ // that wider vector type.<br>
+ MVT EltVT = VT.getVectorElementType();<br>
+ unsigned NElts = VT.getVectorNumElements();<br>
+ if (NElts != 1 && !shouldSplitVectorElementType(EltVT)) {<br>
+ bool IsLegalWiderType = false;<br>
+ // First try to promote the elements of integer vectors. If no legal<br>
+ // promotion was found, fall back to the widen-vector method.<br>
+ for (unsigned nVT = i+1; nVT <= MVT::LAST_VECTOR_VALUETYPE; ++nVT) {<br>
+ MVT SVT = (MVT::SimpleValueType)nVT;<br>
+ // Promote vectors of integers to vectors with the same number<br>
+ // of elements, with a wider element type.<br>
+ if (SVT.getVectorElementType().getSizeInBits() > EltVT.getSizeInBits()<br>
+ && SVT.getVectorNumElements() == NElts &&<br>
+ isTypeLegal(SVT) && SVT.getScalarType().isInteger()) {<br>
+ TransformToType[i] = SVT;<br>
+ RegisterTypeForVT[i] = SVT;<br>
+ NumRegistersForVT[i] = 1;<br>
+ ValueTypeActions.setTypeAction(VT, TypePromoteInteger);<br>
+ IsLegalWiderType = true;<br>
+ break;<br>
+ }<br>
+ }<br>
+<br>
+ if (IsLegalWiderType) continue;<br>
+<br>
+ // Try to widen the vector.<br>
+ for (unsigned nVT = i+1; nVT <= MVT::LAST_VECTOR_VALUETYPE; ++nVT) {<br>
+ MVT SVT = (MVT::SimpleValueType)nVT;<br>
+ if (SVT.getVectorElementType() == EltVT &&<br>
+ SVT.getVectorNumElements() > NElts &&<br>
+ isTypeLegal(SVT)) {<br>
+ TransformToType[i] = SVT;<br>
+ RegisterTypeForVT[i] = SVT;<br>
+ NumRegistersForVT[i] = 1;<br>
+ ValueTypeActions.setTypeAction(VT, TypeWidenVector);<br>
+ IsLegalWiderType = true;<br>
+ break;<br>
+ }<br>
+ }<br>
+ if (IsLegalWiderType) continue;<br>
+ }<br>
+<br>
+ MVT IntermediateVT;<br>
+ MVT RegisterVT;<br>
+ unsigned NumIntermediates;<br>
+ NumRegistersForVT[i] =<br>
+ getVectorTypeBreakdownMVT(VT, IntermediateVT, NumIntermediates,<br>
+ RegisterVT, this);<br>
+ RegisterTypeForVT[i] = RegisterVT;<br>
+<br>
+ MVT NVT = VT.getPow2VectorType();<br>
+ if (NVT == VT) {<br>
+ // Type is already a power of 2. The default action is to split.<br>
+ TransformToType[i] = MVT::Other;<br>
+ unsigned NumElts = VT.getVectorNumElements();<br>
+ ValueTypeActions.setTypeAction(VT,<br>
+ NumElts > 1 ? TypeSplitVector : TypeScalarizeVector);<br>
+ } else {<br>
+ TransformToType[i] = NVT;<br>
+ ValueTypeActions.setTypeAction(VT, TypeWidenVector);<br>
+ }<br>
+ }<br>
+<br>
+ // Determine the 'representative' register class for each value type.<br>
+ // A representative register class is the largest (meaning one which is<br>
+ // not a sub-register class) legal register class for a group of value<br>
+ // types. For example, on i386 the representative class for i8, i16, and<br>
+ // i32 would be GR32; on x86-64 it would be GR64.<br>
+ for (unsigned i = 0; i != MVT::LAST_VALUETYPE; ++i) {<br>
+ const TargetRegisterClass* RRC;<br>
+ uint8_t Cost;<br>
+ tie(RRC, Cost) = findRepresentativeClass((MVT::SimpleValueType)i);<br>
+ RepRegClassForVT[i] = RRC;<br>
+ RepRegClassCostForVT[i] = Cost;<br>
+ }<br>
+}<br>
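<br>
Once this has run, the derived tables answer the usual type-legalization queries. A sketch of what they would hold for a target whose only register class covers i32 (an assumption made purely for illustration):<br>
<br>
// Assuming i32 is the only type with a register class:<br>
//   getRegisterType(MVT::i16)      == MVT::i32  (promoted)<br>
//   getRegisterType(MVT::i64)      == MVT::i32  (expanded)<br>
//   getNumRegisters(Ctx, MVT::i64) == 2         (two i32 parts)<br>
//   getTypeAction(Ctx, MVT::i16)   == TypePromoteInteger<br>
//   getTypeAction(Ctx, MVT::i64)   == TypeExpandInteger<br>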
+<br>
+EVT TargetLoweringBase::getSetCCResultType(EVT VT) const {<br>
+ assert(!VT.isVector() && "No default SetCC type for vectors!");<br>
+ return getPointerTy(0).SimpleTy;<br>
+}<br>
+<br>
+MVT::SimpleValueType TargetLoweringBase::getCmpLibcallReturnType() const {<br>
+ return MVT::i32; // return the default value<br>
+}<br>
+<br>
+/// getVectorTypeBreakdown - Vector types are broken down into some number of<br>
+/// legal first class types. For example, MVT::v8f32 maps to 2 MVT::v4f32<br>
+/// with Altivec or SSE1, or 8 promoted MVT::f64 values with the X86 FP stack.<br>
+/// Similarly, MVT::v2i64 turns into 4 MVT::i32 values with both PPC and X86.<br>
+///<br>
+/// This method returns the number of registers needed, and the VT for each<br>
+/// register. It also returns the VT and quantity of the intermediate values<br>
+/// before they are promoted/expanded.<br>
+///<br>
+unsigned TargetLoweringBase::getVectorTypeBreakdown(LLVMContext &Context, EVT VT,<br>
+ EVT &IntermediateVT,<br>
+ unsigned &NumIntermediates,<br>
+ MVT &RegisterVT) const {<br>
+ unsigned NumElts = VT.getVectorNumElements();<br>
+<br>
+ // If there is a wider vector type with the same element type as this one,<br>
+ // or a promoted vector type that has the same number of elements which<br>
+ // are wider, then we should convert to that legal vector type.<br>
+ // This handles things like <2 x float> -> <4 x float> and<br>
+ // <4 x i1> -> <4 x i32>.<br>
+ LegalizeTypeAction TA = getTypeAction(Context, VT);<br>
+ if (NumElts != 1 && (TA == TypeWidenVector || TA == TypePromoteInteger)) {<br>
+ EVT RegisterEVT = getTypeToTransformTo(Context, VT);<br>
+ if (isTypeLegal(RegisterEVT)) {<br>
+ IntermediateVT = RegisterEVT;<br>
+ RegisterVT = RegisterEVT.getSimpleVT();<br>
+ NumIntermediates = 1;<br>
+ return 1;<br>
+ }<br>
+ }<br>
+<br>
+ // Figure out the right, legal destination reg to copy into.<br>
+ EVT EltTy = VT.getVectorElementType();<br>
+<br>
+ unsigned NumVectorRegs = 1;<br>
+<br>
+ // FIXME: We don't support non-power-of-2-sized vectors for now. Ideally we<br>
+ // could break down into LHS/RHS like LegalizeDAG does.<br>
+ if (!isPowerOf2_32(NumElts)) {<br>
+ NumVectorRegs = NumElts;<br>
+ NumElts = 1;<br>
+ }<br>
+<br>
+ // Divide the input until we get to a supported size. This will always<br>
+ // end with a scalar if the target doesn't support vectors.<br>
+ while (NumElts > 1 && !isTypeLegal(<br>
+ EVT::getVectorVT(Context, EltTy, NumElts))) {<br>
+ NumElts >>= 1;<br>
+ NumVectorRegs <<= 1;<br>
+ }<br>
+<br>
+ NumIntermediates = NumVectorRegs;<br>
+<br>
+ EVT NewVT = EVT::getVectorVT(Context, EltTy, NumElts);<br>
+ if (!isTypeLegal(NewVT))<br>
+ NewVT = EltTy;<br>
+ IntermediateVT = NewVT;<br>
+<br>
+ MVT DestVT = getRegisterType(Context, NewVT);<br>
+ RegisterVT = DestVT;<br>
+ unsigned NewVTSize = NewVT.getSizeInBits();<br>
+<br>
+ // Convert sizes such as i33 to i64.<br>
+ if (!isPowerOf2_32(NewVTSize))<br>
+ NewVTSize = NextPowerOf2(NewVTSize);<br>
+<br>
+ if (EVT(DestVT).bitsLT(NewVT)) // Value is expanded, e.g. i64 -> i16.<br>
+ return NumVectorRegs*(NewVTSize/DestVT.getSizeInBits());<br>
+<br>
+ // Otherwise, promotion or legal types use the same number of registers as<br>
+ // the vector decimated to the appropriate level.<br>
+ return NumVectorRegs;<br>
+}<br>
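<br>
A caller-side sketch of the first example from the comment above, assuming TLI refers to a target on which v4f32 is legal:<br>
<br>
EVT IntermediateVT;<br>
MVT RegisterVT;<br>
unsigned NumIntermediates;<br>
unsigned NumRegs = TLI.getVectorTypeBreakdown(<br>
    Context, MVT::v8f32, IntermediateVT, NumIntermediates, RegisterVT);<br>
// With v4f32 legal: NumRegs == 2, NumIntermediates == 2,<br>
// IntermediateVT == MVT::v4f32, RegisterVT == MVT::v4f32.<br>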
+<br>
+/// Get the EVTs and ArgFlags collections that represent the legalized return<br>
+/// type of the given function. This does not require a DAG or a return value,<br>
+/// and is suitable for use before any DAGs for the function are constructed.<br>
+/// TODO: Move this out of TargetLowering.cpp.<br>
+void llvm::GetReturnInfo(Type* ReturnType, AttributeSet attr,<br>
+ SmallVectorImpl<ISD::OutputArg> &Outs,<br>
+ const TargetLowering &TLI) {<br>
+ SmallVector<EVT, 4> ValueVTs;<br>
+ ComputeValueVTs(TLI, ReturnType, ValueVTs);<br>
+ unsigned NumValues = ValueVTs.size();<br>
+ if (NumValues == 0) return;<br>
+<br>
+ for (unsigned j = 0, f = NumValues; j != f; ++j) {<br>
+ EVT VT = ValueVTs[j];<br>
+ ISD::NodeType ExtendKind = ISD::ANY_EXTEND;<br>
+<br>
+ if (attr.hasAttribute(AttributeSet::ReturnIndex, Attribute::SExt))<br>
+ ExtendKind = ISD::SIGN_EXTEND;<br>
+ else if (attr.hasAttribute(AttributeSet::ReturnIndex, Attribute::ZExt))<br>
+ ExtendKind = ISD::ZERO_EXTEND;<br>
+<br>
+ // FIXME: C calling convention requires the return type to be promoted to<br>
+ // at least 32-bit. But this is not necessary for non-C calling<br>
+ // conventions. The frontend should mark functions whose return values<br>
+ // require promoting with signext or zeroext attributes.<br>
+ if (ExtendKind != ISD::ANY_EXTEND && VT.isInteger()) {<br>
+ MVT MinVT = TLI.getRegisterType(ReturnType->getContext(), MVT::i32);<br>
+ if (VT.bitsLT(MinVT))<br>
+ VT = MinVT;<br>
+ }<br>
+<br>
+ unsigned NumParts = TLI.getNumRegisters(ReturnType->getContext(), VT);<br>
+ MVT PartVT = TLI.getRegisterType(ReturnType->getContext(), VT);<br>
+<br>
+ // 'inreg' on a function refers to its return value.<br>
+ ISD::ArgFlagsTy Flags = ISD::ArgFlagsTy();<br>
+ if (attr.hasAttribute(AttributeSet::ReturnIndex, Attribute::InReg))<br>
+ Flags.setInReg();<br>
+<br>
+ // Propagate extension type if any<br>
+ if (attr.hasAttribute(AttributeSet::ReturnIndex, Attribute::SExt))<br>
+ Flags.setSExt();<br>
+ else if (attr.hasAttribute(AttributeSet::ReturnIndex, Attribute::ZExt))<br>
+ Flags.setZExt();<br>
+<br>
+ for (unsigned i = 0; i < NumParts; ++i)<br>
+ Outs.push_back(ISD::OutputArg(Flags, PartVT, /*isFixed=*/true, 0, 0));<br>
+ }<br>
+}<br>
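<br>
For instance, for a function declared "signext i8 @f()" the SExt attribute triggers the promotion path above, so the single i8 value comes back as one sign-extended 32-bit part (assuming a target whose i32 register type is MVT::i32; F and TLI stand for any Function and TargetLowering):<br>
<br>
SmallVector<ISD::OutputArg, 4> Outs;<br>
GetReturnInfo(F->getReturnType(), F->getAttributes(), Outs, TLI);<br>
// Outs.size() == 1, Outs[0].VT == MVT::i32, Outs[0].Flags.isSExt() is true.<br>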
+<br>
+/// getByValTypeAlignment - Return the desired alignment for ByVal aggregate<br>
+/// function arguments in the caller parameter area. This is the actual<br>
+/// alignment, not its logarithm.<br>
+unsigned TargetLoweringBase::getByValTypeAlignment(Type *Ty) const {<br>
+ return TD->getCallFrameTypeAlignment(Ty);<br>
+}<br>
+<br>
+//===----------------------------------------------------------------------===//<br>
+// TargetTransformInfo Helpers<br>
+//===----------------------------------------------------------------------===//<br>
+<br>
+int TargetLoweringBase::InstructionOpcodeToISD(unsigned Opcode) const {<br>
+ enum InstructionOpcodes {<br>
+#define HANDLE_INST(NUM, OPCODE, CLASS) OPCODE = NUM,<br>
+#define LAST_OTHER_INST(NUM) InstructionOpcodesCount = NUM<br>
+#include "llvm/IR/Instruction.def"<br>
+ };<br>
+ switch (static_cast<InstructionOpcodes>(Opcode)) {<br>
+ case Ret: return 0;<br>
+ case Br: return 0;<br>
+ case Switch: return 0;<br>
+ case IndirectBr: return 0;<br>
+ case Invoke: return 0;<br>
+ case Resume: return 0;<br>
+ case Unreachable: return 0;<br>
+ case Add: return ISD::ADD;<br>
+ case FAdd: return ISD::FADD;<br>
+ case Sub: return ISD::SUB;<br>
+ case FSub: return ISD::FSUB;<br>
+ case Mul: return ISD::MUL;<br>
+ case FMul: return ISD::FMUL;<br>
+ case UDiv: return ISD::UDIV;<br>
+ case SDiv: return ISD::SDIV;<br>
+ case FDiv: return ISD::FDIV;<br>
+ case URem: return ISD::UREM;<br>
+ case SRem: return ISD::SREM;<br>
+ case FRem: return ISD::FREM;<br>
+ case Shl: return ISD::SHL;<br>
+ case LShr: return ISD::SRL;<br>
+ case AShr: return ISD::SRA;<br>
+ case And: return ISD::AND;<br>
+ case Or: return ISD::OR;<br>
+ case Xor: return ISD::XOR;<br>
+ case Alloca: return 0;<br>
+ case Load: return ISD::LOAD;<br>
+ case Store: return ISD::STORE;<br>
+ case GetElementPtr: return 0;<br>
+ case Fence: return 0;<br>
+ case AtomicCmpXchg: return 0;<br>
+ case AtomicRMW: return 0;<br>
+ case Trunc: return ISD::TRUNCATE;<br>
+ case ZExt: return ISD::ZERO_EXTEND;<br>
+ case SExt: return ISD::SIGN_EXTEND;<br>
+ case FPToUI: return ISD::FP_TO_UINT;<br>
+ case FPToSI: return ISD::FP_TO_SINT;<br>
+ case UIToFP: return ISD::UINT_TO_FP;<br>
+ case SIToFP: return ISD::SINT_TO_FP;<br>
+ case FPTrunc: return ISD::FP_ROUND;<br>
+ case FPExt: return ISD::FP_EXTEND;<br>
+ case PtrToInt: return ISD::BITCAST;<br>
+ case IntToPtr: return ISD::BITCAST;<br>
+ case BitCast: return ISD::BITCAST;<br>
+ case ICmp: return ISD::SETCC;<br>
+ case FCmp: return ISD::SETCC;<br>
+ case PHI: return 0;<br>
+ case Call: return 0;<br>
+ case Select: return ISD::SELECT;<br>
+ case UserOp1: return 0;<br>
+ case UserOp2: return 0;<br>
+ case VAArg: return 0;<br>
+ case ExtractElement: return ISD::EXTRACT_VECTOR_ELT;<br>
+ case InsertElement: return ISD::INSERT_VECTOR_ELT;<br>
+ case ShuffleVector: return ISD::VECTOR_SHUFFLE;<br>
+ case ExtractValue: return ISD::MERGE_VALUES;<br>
+ case InsertValue: return ISD::MERGE_VALUES;<br>
+ case LandingPad: return 0;<br>
+ }<br>
+<br>
+ llvm_unreachable("Unknown instruction type encountered!");<br>
+}<br>
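<br>
This table is what lets the target transform info cost model translate IR opcodes into SelectionDAG opcodes before any DAG exists; for example:<br>
<br>
// IR -> ISD translation as the cost model uses it.<br>
int ISDOpc = TLI->InstructionOpcodeToISD(Instruction::Mul);<br>
assert(ISDOpc == ISD::MUL);<br>
// Opcodes with no DAG equivalent (Br, PHI, Call, ...) map to 0.<br>
assert(TLI->InstructionOpcodeToISD(Instruction::PHI) == 0);<br>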
+<br>
+std::pair<unsigned, MVT><br>
+TargetLoweringBase::getTypeLegalizationCost(Type *Ty) const {<br>
+ LLVMContext &C = Ty->getContext();<br>
+ EVT MTy = getValueType(Ty);<br>
+<br>
+ unsigned Cost = 1;<br>
+ // We keep legalizing the type until we find a legal kind. We assume that<br>
+ // the only operation that costs anything is the split. After splitting<br>
+ // we need to handle two types.<br>
+ while (true) {<br>
+ LegalizeKind LK = getTypeConversion(C, MTy);<br>
+<br>
+ if (LK.first == TypeLegal)<br>
+ return std::make_pair(Cost, MTy.getSimpleVT());<br>
+<br>
+ if (LK.first == TypeSplitVector || LK.first == TypeExpandInteger)<br>
+ Cost *= 2;<br>
+<br>
+ // Keep legalizing the type.<br>
+ MTy = LK.second;<br>
+ }<br>
+}<br>
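<br>
A short trace of the loop, assuming a target where v4i32 is legal but no wider vector type is:<br>
<br>
// getTypeLegalizationCost(<16 x i32>) under that assumption:<br>
//   v16i32 -> TypeSplitVector -> v8i32, Cost = 2<br>
//   v8i32  -> TypeSplitVector -> v4i32, Cost = 4<br>
//   v4i32  -> TypeLegal, so the result is {4, MVT::v4i32}.<br>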
+<br>
+//===----------------------------------------------------------------------===//<br>
+// Loop Strength Reduction hooks<br>
+//===----------------------------------------------------------------------===//<br>
+<br>
+/// isLegalAddressingMode - Return true if the addressing mode represented<br>
+/// by AM is legal for this target, for a load/store of the specified type.<br>
+bool TargetLoweringBase::isLegalAddressingMode(const AddrMode &AM,<br>
+ Type *Ty) const {<br>
+ // The default implementation supports a conservative RISC-style r+r and<br>
+ // r+i addressing mode.<br>
+<br>
+ // Allows a sign-extended 16-bit immediate field.<br>
+ if (AM.BaseOffs <= -(1LL << 16) || AM.BaseOffs >= (1LL << 16)-1)<br>
+ return false;<br>
+<br>
+ // No global is ever allowed as a base.<br>
+ if (AM.BaseGV)<br>
+ return false;<br>
+<br>
+ // Only support r+r,<br>
+ switch (AM.Scale) {<br>
+ case 0: // "r+i" or just "i", depending on HasBaseReg.<br>
+ break;<br>
+ case 1:<br>
+ if (AM.HasBaseReg && AM.BaseOffs) // "r+r+i" is not allowed.<br>
+ return false;<br>
+ // Otherwise we have r+r or r+i.<br>
+ break;<br>
+ case 2:<br>
+ if (AM.HasBaseReg || AM.BaseOffs) // 2*r+r or 2*r+i is not allowed.<br>
+ return false;<br>
+ // Allow 2*r as r+r.<br>
+ break;<br>
+ }<br>
+<br>
+ return true;<br>
+}<br>
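<br>
Checking a couple of modes against these rules (AddrMode is the struct this function takes; TLI and Ty stand for any lowering object and memory type):<br>
<br>
TargetLoweringBase::AddrMode AM;<br>
AM.HasBaseReg = true;<br>
AM.BaseOffs = 42;   // "r + 42": Scale == 0 and a small offset -> legal.<br>
assert(TLI.isLegalAddressingMode(AM, Ty));<br>
AM.Scale = 1;       // "r + r + 42": rejected by the Scale == 1 case.<br>
assert(!TLI.isLegalAddressingMode(AM, Ty));<br>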
<br>
<br>
_______________________________________________<br>
llvm-commits mailing list<br>
<a href="mailto:llvm-commits@cs.uiuc.edu" class="cremed">llvm-commits@cs.uiuc.edu</a><br>
<a href="http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits" target="_blank" class="cremed">http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits</a><br>
</blockquote></div><br></div></div>