[llvm-commits] [llvm] r100199 - in /llvm/trunk: include/llvm/ include/llvm/CodeGen/ include/llvm/Support/ include/llvm/Target/ include/llvm/Transforms/Utils/ lib/CodeGen/SelectionDAG/ lib/Target/ARM/ lib/Target/PowerPC/ lib/Target/X86/ lib/Target/XCore/ lib/Transforms/InstCombine/ lib/Transforms/Scalar/ lib/Transforms/Utils/ lib/VMCore/ test/Analysis/BasicAA/ test/Bitcode/ test/Transforms/InstCombine/ test/Transforms/MemCpyOpt/ test/Transforms/SimplifyLibCalls/ test/Verifier/

Evan Cheng evan.cheng at apple.com
Fri Apr 2 13:06:18 PDT 2010


Reverted David's patch; now we are good.

Evan

On Apr 2, 2010, at 12:14 PM, Evan Cheng wrote:

> TOT is still failing a lot of tests.
> 
> Evan
> On Apr 2, 2010, at 11:50 AM, Bob Wilson wrote:
> 
>> Are you sure?  Was it the virt.cpp failure?  All of the initial buildbot complaints were synced to 100192, so they were missing the clang change in 100193.
>> 
>> On Apr 2, 2010, at 11:43 AM, Mon P Wang wrote:
>> 
>>> Author: wangmp
>>> Date: Fri Apr  2 13:43:02 2010
>>> New Revision: 100199
>>> 
>>> URL: http://llvm.org/viewvc/llvm-project?rev=100199&view=rev
>>> Log:
>>> Revert r100191 since it breaks objc in clang 
>>> 
>>> Modified:
>>>  llvm/trunk/include/llvm/CodeGen/SelectionDAG.h
>>>  llvm/trunk/include/llvm/IntrinsicInst.h
>>>  llvm/trunk/include/llvm/Intrinsics.td
>>>  llvm/trunk/include/llvm/Support/IRBuilder.h
>>>  llvm/trunk/include/llvm/Target/TargetLowering.h
>>>  llvm/trunk/include/llvm/Transforms/Utils/BuildLibCalls.h
>>>  llvm/trunk/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
>>>  llvm/trunk/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
>>>  llvm/trunk/lib/Target/ARM/ARMISelLowering.cpp
>>>  llvm/trunk/lib/Target/ARM/ARMISelLowering.h
>>>  llvm/trunk/lib/Target/PowerPC/PPCISelLowering.cpp
>>>  llvm/trunk/lib/Target/X86/X86ISelLowering.cpp
>>>  llvm/trunk/lib/Target/X86/X86ISelLowering.h
>>>  llvm/trunk/lib/Target/XCore/XCoreISelLowering.cpp
>>>  llvm/trunk/lib/Transforms/InstCombine/InstCombineCalls.cpp
>>>  llvm/trunk/lib/Transforms/Scalar/MemCpyOptimizer.cpp
>>>  llvm/trunk/lib/Transforms/Scalar/ScalarReplAggregates.cpp
>>>  llvm/trunk/lib/Transforms/Scalar/SimplifyLibCalls.cpp
>>>  llvm/trunk/lib/Transforms/Utils/BuildLibCalls.cpp
>>>  llvm/trunk/lib/Transforms/Utils/InlineFunction.cpp
>>>  llvm/trunk/lib/VMCore/AutoUpgrade.cpp
>>>  llvm/trunk/test/Analysis/BasicAA/modref.ll
>>>  llvm/trunk/test/Bitcode/memcpy.ll
>>>  llvm/trunk/test/Transforms/InstCombine/memset_chk.ll
>>>  llvm/trunk/test/Transforms/InstCombine/objsize.ll
>>>  llvm/trunk/test/Transforms/MemCpyOpt/align.ll
>>>  llvm/trunk/test/Transforms/SimplifyLibCalls/StrCpy.ll
>>>  llvm/trunk/test/Verifier/2006-12-12-IntrinsicDefine.ll
>>> 
>>> Modified: llvm/trunk/include/llvm/CodeGen/SelectionDAG.h
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/CodeGen/SelectionDAG.h?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/include/llvm/CodeGen/SelectionDAG.h (original)
>>> +++ llvm/trunk/include/llvm/CodeGen/SelectionDAG.h Fri Apr  2 13:43:02 2010
>>> @@ -534,17 +534,17 @@
>>> SDValue getStackArgumentTokenFactor(SDValue Chain);
>>> 
>>> SDValue getMemcpy(SDValue Chain, DebugLoc dl, SDValue Dst, SDValue Src,
>>> -                    SDValue Size, unsigned Align, bool isVol, bool AlwaysInline,
>>> +                    SDValue Size, unsigned Align, bool AlwaysInline,
>>>                   const Value *DstSV, uint64_t DstSVOff,
>>>                   const Value *SrcSV, uint64_t SrcSVOff);
>>> 
>>> SDValue getMemmove(SDValue Chain, DebugLoc dl, SDValue Dst, SDValue Src,
>>> -                     SDValue Size, unsigned Align, bool isVol,
>>> +                     SDValue Size, unsigned Align,
>>>                    const Value *DstSV, uint64_t DstOSVff,
>>>                    const Value *SrcSV, uint64_t SrcSVOff);
>>> 
>>> SDValue getMemset(SDValue Chain, DebugLoc dl, SDValue Dst, SDValue Src,
>>> -                    SDValue Size, unsigned Align, bool isVol,
>>> +                    SDValue Size, unsigned Align,
>>>                   const Value *DstSV, uint64_t DstSVOff);
>>> 
>>> /// getSetCC - Helper function to make it easier to build SetCC's if you just
>>> 
>>> Modified: llvm/trunk/include/llvm/IntrinsicInst.h
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/IntrinsicInst.h?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/include/llvm/IntrinsicInst.h (original)
>>> +++ llvm/trunk/include/llvm/IntrinsicInst.h Fri Apr  2 13:43:02 2010
>>> @@ -133,13 +133,6 @@
>>>     return getAlignmentCst()->getZExtValue();
>>>   }
>>> 
>>> -    ConstantInt *getVolatileCst() const {
>>> -      return cast<ConstantInt>(const_cast<Value*>(getOperand(5)));
>>> -    }
>>> -    bool isVolatile() const {
>>> -      return getVolatileCst()->getZExtValue() != 0;
>>> -    }
>>> -
>>>   /// getDest - This is just like getRawDest, but it strips off any cast
>>>   /// instructions that feed it, giving the original input.  The returned
>>>   /// value is guaranteed to be a pointer.
>>> @@ -162,11 +155,7 @@
>>>   void setAlignment(Constant* A) {
>>>     setOperand(4, A);
>>>   }
>>> -
>>> -    void setVolatile(Constant* V) {
>>> -      setOperand(5, V);
>>> -    }
>>> -
>>> +    
>>>   const Type *getAlignmentType() const {
>>>     return getOperand(4)->getType();
>>>   }
>>> 
>>> Modified: llvm/trunk/include/llvm/Intrinsics.td
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/Intrinsics.td?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/include/llvm/Intrinsics.td (original)
>>> +++ llvm/trunk/include/llvm/Intrinsics.td Fri Apr  2 13:43:02 2010
>>> @@ -224,16 +224,16 @@
>>> //
>>> 
>>> def int_memcpy  : Intrinsic<[],
>>> -                             [llvm_anyptr_ty, llvm_anyptr_ty, llvm_anyint_ty,
>>> -                              llvm_i32_ty, llvm_i1_ty],
>>> +                             [llvm_ptr_ty, llvm_ptr_ty, llvm_anyint_ty,
>>> +                              llvm_i32_ty],
>>>                           [IntrWriteArgMem, NoCapture<0>, NoCapture<1>]>;
>>> def int_memmove : Intrinsic<[],
>>> -                            [llvm_anyptr_ty, llvm_anyptr_ty, llvm_anyint_ty,
>>> -                             llvm_i32_ty, llvm_i1_ty],
>>> +                            [llvm_ptr_ty, llvm_ptr_ty, llvm_anyint_ty,
>>> +                             llvm_i32_ty],
>>>                           [IntrWriteArgMem, NoCapture<0>, NoCapture<1>]>;
>>> def int_memset  : Intrinsic<[],
>>> -                            [llvm_anyptr_ty, llvm_i8_ty, llvm_anyint_ty,
>>> -                             llvm_i32_ty, llvm_i1_ty],
>>> +                            [llvm_ptr_ty, llvm_i8_ty, llvm_anyint_ty,
>>> +                             llvm_i32_ty],
>>>                           [IntrWriteArgMem, NoCapture<0>]>;
>>> 
>>> // These functions do not actually read memory, but they are sensitive to the
>>> 
>>> Modified: llvm/trunk/include/llvm/Support/IRBuilder.h
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/Support/IRBuilder.h?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/include/llvm/Support/IRBuilder.h (original)
>>> +++ llvm/trunk/include/llvm/Support/IRBuilder.h Fri Apr  2 13:43:02 2010
>>> @@ -917,11 +917,6 @@
>>>   Value *Args[] = { Arg1, Arg2, Arg3, Arg4 };
>>>   return Insert(CallInst::Create(Callee, Args, Args+4), Name);
>>> }
>>> -  CallInst *CreateCall5(Value *Callee, Value *Arg1, Value *Arg2, Value *Arg3,
>>> -                        Value *Arg4, Value *Arg5, const Twine &Name = "") {
>>> -    Value *Args[] = { Arg1, Arg2, Arg3, Arg4, Arg5 };
>>> -    return Insert(CallInst::Create(Callee, Args, Args+5), Name);
>>> -  }
>>> 
>>> template<typename InputIterator>
>>> CallInst *CreateCall(Value *Callee, InputIterator ArgBegin,
>>> 
>>> Modified: llvm/trunk/include/llvm/Target/TargetLowering.h
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/Target/TargetLowering.h?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/include/llvm/Target/TargetLowering.h (original)
>>> +++ llvm/trunk/include/llvm/Target/TargetLowering.h Fri Apr  2 13:43:02 2010
>>> @@ -1187,7 +1187,7 @@
>>> EmitTargetCodeForMemcpy(SelectionDAG &DAG, DebugLoc dl,
>>>                         SDValue Chain,
>>>                         SDValue Op1, SDValue Op2,
>>> -                          SDValue Op3, unsigned Align, bool isVolatile,
>>> +                          SDValue Op3, unsigned Align,
>>>                         bool AlwaysInline,
>>>                         const Value *DstSV, uint64_t DstOff,
>>>                         const Value *SrcSV, uint64_t SrcOff) {
>>> @@ -1204,7 +1204,7 @@
>>> EmitTargetCodeForMemmove(SelectionDAG &DAG, DebugLoc dl,
>>>                          SDValue Chain,
>>>                          SDValue Op1, SDValue Op2,
>>> -                           SDValue Op3, unsigned Align, bool isVolatile,
>>> +                           SDValue Op3, unsigned Align,
>>>                          const Value *DstSV, uint64_t DstOff,
>>>                          const Value *SrcSV, uint64_t SrcOff) {
>>>   return SDValue();
>>> @@ -1220,7 +1220,7 @@
>>> EmitTargetCodeForMemset(SelectionDAG &DAG, DebugLoc dl,
>>>                         SDValue Chain,
>>>                         SDValue Op1, SDValue Op2,
>>> -                          SDValue Op3, unsigned Align, bool isVolatile,
>>> +                          SDValue Op3, unsigned Align,
>>>                         const Value *DstSV, uint64_t DstOff) {
>>>   return SDValue();
>>> }
>>> 
>>> Modified: llvm/trunk/include/llvm/Transforms/Utils/BuildLibCalls.h
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/Transforms/Utils/BuildLibCalls.h?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/include/llvm/Transforms/Utils/BuildLibCalls.h (original)
>>> +++ llvm/trunk/include/llvm/Transforms/Utils/BuildLibCalls.h Fri Apr  2 13:43:02 2010
>>> @@ -46,8 +46,8 @@
>>> 
>>> /// EmitMemCpy - Emit a call to the memcpy function to the builder.  This
>>> /// always expects that the size has type 'intptr_t' and Dst/Src are pointers.
>>> -  Value *EmitMemCpy(Value *Dst, Value *Src, Value *Len, unsigned Align,
>>> -                    bool isVolatile, IRBuilder<> &B, const TargetData *TD);
>>> +  Value *EmitMemCpy(Value *Dst, Value *Src, Value *Len,
>>> +                    unsigned Align, IRBuilder<> &B, const TargetData *TD);
>>> 
>>> /// EmitMemCpyChk - Emit a call to the __memcpy_chk function to the builder.
>>> /// This expects that the Len and ObjSize have type 'intptr_t' and Dst/Src
>>> @@ -57,8 +57,8 @@
>>> 
>>> /// EmitMemMove - Emit a call to the memmove function to the builder.  This
>>> /// always expects that the size has type 'intptr_t' and Dst/Src are pointers.
>>> -  Value *EmitMemMove(Value *Dst, Value *Src, Value *Len, unsigned Align,
>>> -                     bool isVolatile, IRBuilder<> &B, const TargetData *TD);
>>> +  Value *EmitMemMove(Value *Dst, Value *Src, Value *Len,
>>> +		                 unsigned Align, IRBuilder<> &B, const TargetData *TD);
>>> 
>>> /// EmitMemChr - Emit a call to the memchr function.  This assumes that Ptr is
>>> /// a pointer, Val is an i32 value, and Len is an 'intptr_t' value.
>>> @@ -70,8 +70,8 @@
>>>                   const TargetData *TD);
>>> 
>>> /// EmitMemSet - Emit a call to the memset function
>>> -  Value *EmitMemSet(Value *Dst, Value *Val, Value *Len, bool isVolatile,
>>> -                    IRBuilder<> &B, const TargetData *TD);
>>> +  Value *EmitMemSet(Value *Dst, Value *Val, Value *Len, IRBuilder<> &B,
>>> +                    const TargetData *TD);
>>> 
>>> /// EmitUnaryFloatFnCall - Emit a call to the unary function named 'Name'
>>> /// (e.g.  'floor').  This function is known to take a single of type matching
>>> 
>>> Modified: llvm/trunk/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/SelectionDAG/SelectionDAG.cpp?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/lib/CodeGen/SelectionDAG/SelectionDAG.cpp (original)
>>> +++ llvm/trunk/lib/CodeGen/SelectionDAG/SelectionDAG.cpp Fri Apr  2 13:43:02 2010
>>> @@ -3263,8 +3263,7 @@
>>> static SDValue getMemcpyLoadsAndStores(SelectionDAG &DAG, DebugLoc dl,
>>>                                      SDValue Chain, SDValue Dst,
>>>                                      SDValue Src, uint64_t Size,
>>> -                                       unsigned Align, bool isVol,
>>> -                                       bool AlwaysInline,
>>> +                                       unsigned Align, bool AlwaysInline,
>>>                                      const Value *DstSV, uint64_t DstSVOff,
>>>                                      const Value *SrcSV, uint64_t SrcSVOff) {
>>> const TargetLowering &TLI = DAG.getTargetLoweringInfo();
>>> @@ -3320,7 +3319,7 @@
>>>     Value = getMemsetStringVal(VT, dl, DAG, TLI, Str, SrcOff);
>>>     Store = DAG.getStore(Chain, dl, Value,
>>>                          getMemBasePlusOffset(Dst, DstOff, DAG),
>>> -                           DstSV, DstSVOff + DstOff, isVol, false, Align);
>>> +                           DstSV, DstSVOff + DstOff, false, false, Align);
>>>   } else {
>>>     // The type might not be legal for the target.  This should only happen
>>>     // if the type is smaller than a legal type, as on PPC, so the right
>>> @@ -3331,11 +3330,11 @@
>>>     assert(NVT.bitsGE(VT));
>>>     Value = DAG.getExtLoad(ISD::EXTLOAD, dl, NVT, Chain,
>>>                            getMemBasePlusOffset(Src, SrcOff, DAG),
>>> -                             SrcSV, SrcSVOff + SrcOff, VT, isVol, false,
>>> +                             SrcSV, SrcSVOff + SrcOff, VT, false, false,
>>>                            MinAlign(SrcAlign, SrcOff));
>>>     Store = DAG.getTruncStore(Chain, dl, Value,
>>>                               getMemBasePlusOffset(Dst, DstOff, DAG),
>>> -                                DstSV, DstSVOff + DstOff, VT, isVol, false,
>>> +                                DstSV, DstSVOff + DstOff, VT, false, false,
>>>                               Align);
>>>   }
>>>   OutChains.push_back(Store);
>>> @@ -3350,8 +3349,7 @@
>>> static SDValue getMemmoveLoadsAndStores(SelectionDAG &DAG, DebugLoc dl,
>>>                                       SDValue Chain, SDValue Dst,
>>>                                       SDValue Src, uint64_t Size,
>>> -                                        unsigned Align,  bool isVol,
>>> -                                        bool AlwaysInline,
>>> +                                        unsigned Align,bool AlwaysInline,
>>>                                       const Value *DstSV, uint64_t DstSVOff,
>>>                                       const Value *SrcSV, uint64_t SrcSVOff) {
>>> const TargetLowering &TLI = DAG.getTargetLoweringInfo();
>>> @@ -3399,7 +3397,7 @@
>>> 
>>>   Value = DAG.getLoad(VT, dl, Chain,
>>>                       getMemBasePlusOffset(Src, SrcOff, DAG),
>>> -                        SrcSV, SrcSVOff + SrcOff, isVol, false, SrcAlign);
>>> +                        SrcSV, SrcSVOff + SrcOff, false, false, SrcAlign);
>>>   LoadValues.push_back(Value);
>>>   LoadChains.push_back(Value.getValue(1));
>>>   SrcOff += VTSize;
>>> @@ -3414,7 +3412,7 @@
>>> 
>>>   Store = DAG.getStore(Chain, dl, LoadValues[i],
>>>                        getMemBasePlusOffset(Dst, DstOff, DAG),
>>> -                         DstSV, DstSVOff + DstOff, isVol, false, Align);
>>> +                         DstSV, DstSVOff + DstOff, false, false, Align);
>>>   OutChains.push_back(Store);
>>>   DstOff += VTSize;
>>> }
>>> @@ -3426,7 +3424,7 @@
>>> static SDValue getMemsetStores(SelectionDAG &DAG, DebugLoc dl,
>>>                              SDValue Chain, SDValue Dst,
>>>                              SDValue Src, uint64_t Size,
>>> -                               unsigned Align, bool isVol,
>>> +                               unsigned Align,
>>>                              const Value *DstSV, uint64_t DstSVOff) {
>>> const TargetLowering &TLI = DAG.getTargetLoweringInfo();
>>> 
>>> @@ -3465,7 +3463,7 @@
>>>   SDValue Value = getMemsetValue(Src, VT, DAG, dl);
>>>   SDValue Store = DAG.getStore(Chain, dl, Value,
>>>                                getMemBasePlusOffset(Dst, DstOff, DAG),
>>> -                                 DstSV, DstSVOff + DstOff, isVol, false, 0);
>>> +                                 DstSV, DstSVOff + DstOff, false, false, 0);
>>>   OutChains.push_back(Store);
>>>   DstOff += VTSize;
>>> }
>>> @@ -3476,7 +3474,7 @@
>>> 
>>> SDValue SelectionDAG::getMemcpy(SDValue Chain, DebugLoc dl, SDValue Dst,
>>>                               SDValue Src, SDValue Size,
>>> -                                unsigned Align, bool isVol, bool AlwaysInline,
>>> +                                unsigned Align, bool AlwaysInline,
>>>                               const Value *DstSV, uint64_t DstSVOff,
>>>                               const Value *SrcSV, uint64_t SrcSVOff) {
>>> 
>>> @@ -3490,7 +3488,7 @@
>>> 
>>>   SDValue Result = getMemcpyLoadsAndStores(*this, dl, Chain, Dst, Src,
>>>                                            ConstantSize->getZExtValue(),Align,
>>> -                                isVol, false, DstSV, DstSVOff, SrcSV, SrcSVOff);
>>> +                                       false, DstSV, DstSVOff, SrcSV, SrcSVOff);
>>>   if (Result.getNode())
>>>     return Result;
>>> }
>>> @@ -3499,7 +3497,7 @@
>>> // code. If the target chooses to do this, this is the next best.
>>> SDValue Result =
>>>   TLI.EmitTargetCodeForMemcpy(*this, dl, Chain, Dst, Src, Size, Align,
>>> -                                isVol, AlwaysInline,
>>> +                                AlwaysInline,
>>>                               DstSV, DstSVOff, SrcSV, SrcSVOff);
>>> if (Result.getNode())
>>>   return Result;
>>> @@ -3509,12 +3507,11 @@
>>> if (AlwaysInline) {
>>>   assert(ConstantSize && "AlwaysInline requires a constant size!");
>>>   return getMemcpyLoadsAndStores(*this, dl, Chain, Dst, Src,
>>> -                                   ConstantSize->getZExtValue(), Align, isVol,
>>> -                                   true, DstSV, DstSVOff, SrcSV, SrcSVOff);
>>> +                                   ConstantSize->getZExtValue(), Align, true,
>>> +                                   DstSV, DstSVOff, SrcSV, SrcSVOff);
>>> }
>>> 
>>> // Emit a library call.
>>> -  assert(!isVol && "library memcpy does not support volatile");
>>> TargetLowering::ArgListTy Args;
>>> TargetLowering::ArgListEntry Entry;
>>> Entry.Ty = TLI.getTargetData()->getIntPtrType(*getContext());
>>> @@ -3535,7 +3532,7 @@
>>> 
>>> SDValue SelectionDAG::getMemmove(SDValue Chain, DebugLoc dl, SDValue Dst,
>>>                                SDValue Src, SDValue Size,
>>> -                                 unsigned Align, bool isVol,
>>> +                                 unsigned Align,
>>>                                const Value *DstSV, uint64_t DstSVOff,
>>>                                const Value *SrcSV, uint64_t SrcSVOff) {
>>> 
>>> @@ -3549,8 +3546,8 @@
>>> 
>>>   SDValue Result =
>>>     getMemmoveLoadsAndStores(*this, dl, Chain, Dst, Src,
>>> -                               ConstantSize->getZExtValue(), Align, isVol,
>>> -                               false, DstSV, DstSVOff, SrcSV, SrcSVOff);
>>> +                               ConstantSize->getZExtValue(),
>>> +                               Align, false, DstSV, DstSVOff, SrcSV, SrcSVOff);
>>>   if (Result.getNode())
>>>     return Result;
>>> }
>>> @@ -3558,13 +3555,12 @@
>>> // Then check to see if we should lower the memmove with target-specific
>>> // code. If the target chooses to do this, this is the next best.
>>> SDValue Result =
>>> -    TLI.EmitTargetCodeForMemmove(*this, dl, Chain, Dst, Src, Size, Align, isVol,
>>> +    TLI.EmitTargetCodeForMemmove(*this, dl, Chain, Dst, Src, Size, Align,
>>>                                DstSV, DstSVOff, SrcSV, SrcSVOff);
>>> if (Result.getNode())
>>>   return Result;
>>> 
>>> // Emit a library call.
>>> -  assert(!isVol && "library memmove does not support volatile");
>>> TargetLowering::ArgListTy Args;
>>> TargetLowering::ArgListEntry Entry;
>>> Entry.Ty = TLI.getTargetData()->getIntPtrType(*getContext());
>>> @@ -3585,7 +3581,7 @@
>>> 
>>> SDValue SelectionDAG::getMemset(SDValue Chain, DebugLoc dl, SDValue Dst,
>>>                               SDValue Src, SDValue Size,
>>> -                                unsigned Align, bool isVol,
>>> +                                unsigned Align,
>>>                               const Value *DstSV, uint64_t DstSVOff) {
>>> 
>>> // Check to see if we should lower the memset to stores first.
>>> @@ -3598,7 +3594,7 @@
>>> 
>>>   SDValue Result =
>>>     getMemsetStores(*this, dl, Chain, Dst, Src, ConstantSize->getZExtValue(),
>>> -                      Align, isVol, DstSV, DstSVOff);
>>> +                      Align, DstSV, DstSVOff);
>>>   if (Result.getNode())
>>>     return Result;
>>> }
>>> @@ -3606,13 +3602,12 @@
>>> // Then check to see if we should lower the memset with target-specific
>>> // code. If the target chooses to do this, this is the next best.
>>> SDValue Result =
>>> -    TLI.EmitTargetCodeForMemset(*this, dl, Chain, Dst, Src, Size, Align, isVol,
>>> +    TLI.EmitTargetCodeForMemset(*this, dl, Chain, Dst, Src, Size, Align,
>>>                               DstSV, DstSVOff);
>>> if (Result.getNode())
>>>   return Result;
>>> 
>>> // Emit a library call.
>>> -  assert(!isVol && "library memset does not support volatile");
>>> const Type *IntPtrTy = TLI.getTargetData()->getIntPtrType(*getContext());
>>> TargetLowering::ArgListTy Args;
>>> TargetLowering::ArgListEntry Entry;
>>> 
>>> Modified: llvm/trunk/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp (original)
>>> +++ llvm/trunk/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp Fri Apr  2 13:43:02 2010
>>> @@ -3731,50 +3731,28 @@
>>> case Intrinsic::longjmp:
>>>   return "_longjmp"+!TLI.usesUnderscoreLongJmp();
>>> case Intrinsic::memcpy: {
>>> -    // Assert for address < 256 since we support only user defined address
>>> -    // spaces.
>>> -    assert(cast<PointerType>(I.getOperand(1)->getType())->getAddressSpace()
>>> -           < 256 &&
>>> -           cast<PointerType>(I.getOperand(2)->getType())->getAddressSpace()
>>> -           < 256 &&
>>> -           "Unknown address space");
>>>   SDValue Op1 = getValue(I.getOperand(1));
>>>   SDValue Op2 = getValue(I.getOperand(2));
>>>   SDValue Op3 = getValue(I.getOperand(3));
>>>   unsigned Align = cast<ConstantInt>(I.getOperand(4))->getZExtValue();
>>> -    bool isVol = cast<ConstantInt>(I.getOperand(5))->getZExtValue();
>>> -    DAG.setRoot(DAG.getMemcpy(getRoot(), dl, Op1, Op2, Op3, Align, isVol, false,
>>> +    DAG.setRoot(DAG.getMemcpy(getRoot(), dl, Op1, Op2, Op3, Align, false,
>>>                             I.getOperand(1), 0, I.getOperand(2), 0));
>>>   return 0;
>>> }
>>> case Intrinsic::memset: {
>>> -    // Assert for address < 256 since we support only user defined address
>>> -    // spaces.
>>> -    assert(cast<PointerType>(I.getOperand(1)->getType())->getAddressSpace()
>>> -           < 256 &&
>>> -           "Unknown address space");
>>>   SDValue Op1 = getValue(I.getOperand(1));
>>>   SDValue Op2 = getValue(I.getOperand(2));
>>>   SDValue Op3 = getValue(I.getOperand(3));
>>>   unsigned Align = cast<ConstantInt>(I.getOperand(4))->getZExtValue();
>>> -    bool isVol = cast<ConstantInt>(I.getOperand(5))->getZExtValue();
>>> -    DAG.setRoot(DAG.getMemset(getRoot(), dl, Op1, Op2, Op3, Align, isVol,
>>> +    DAG.setRoot(DAG.getMemset(getRoot(), dl, Op1, Op2, Op3, Align,
>>>                             I.getOperand(1), 0));
>>>   return 0;
>>> }
>>> case Intrinsic::memmove: {
>>> -    // Assert for address < 256 since we support only user defined address
>>> -    // spaces.
>>> -    assert(cast<PointerType>(I.getOperand(1)->getType())->getAddressSpace()
>>> -           < 256 &&
>>> -           cast<PointerType>(I.getOperand(2)->getType())->getAddressSpace()
>>> -           < 256 &&
>>> -           "Unknown address space");
>>>   SDValue Op1 = getValue(I.getOperand(1));
>>>   SDValue Op2 = getValue(I.getOperand(2));
>>>   SDValue Op3 = getValue(I.getOperand(3));
>>>   unsigned Align = cast<ConstantInt>(I.getOperand(4))->getZExtValue();
>>> -    bool isVol = cast<ConstantInt>(I.getOperand(5))->getZExtValue();
>>> 
>>>   // If the source and destination are known to not be aliases, we can
>>>   // lower memmove as memcpy.
>>> @@ -3783,12 +3761,12 @@
>>>     Size = C->getZExtValue();
>>>   if (AA->alias(I.getOperand(1), Size, I.getOperand(2), Size) ==
>>>       AliasAnalysis::NoAlias) {
>>> -      DAG.setRoot(DAG.getMemcpy(getRoot(), dl, Op1, Op2, Op3, Align, isVol, 
>>> -                                false, I.getOperand(1), 0, I.getOperand(2), 0));
>>> +      DAG.setRoot(DAG.getMemcpy(getRoot(), dl, Op1, Op2, Op3, Align, false,
>>> +                                I.getOperand(1), 0, I.getOperand(2), 0));
>>>     return 0;
>>>   }
>>> 
>>> -    DAG.setRoot(DAG.getMemmove(getRoot(), dl, Op1, Op2, Op3, Align, isVol,
>>> +    DAG.setRoot(DAG.getMemmove(getRoot(), dl, Op1, Op2, Op3, Align,
>>>                              I.getOperand(1), 0, I.getOperand(2), 0));
>>>   return 0;
>>> }
>>> 
>>> Modified: llvm/trunk/lib/Target/ARM/ARMISelLowering.cpp
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/ARM/ARMISelLowering.cpp?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/lib/Target/ARM/ARMISelLowering.cpp (original)
>>> +++ llvm/trunk/lib/Target/ARM/ARMISelLowering.cpp Fri Apr  2 13:43:02 2010
>>> @@ -861,8 +861,7 @@
>>>                         DebugLoc dl) {
>>> SDValue SizeNode = DAG.getConstant(Flags.getByValSize(), MVT::i32);
>>> return DAG.getMemcpy(Chain, dl, Dst, Src, SizeNode, Flags.getByValAlign(),
>>> -                       /*isVolatile=*/false, /*AlwaysInline=*/false,
>>> -                       NULL, 0, NULL, 0);
>>> +                       /*AlwaysInline=*/false, NULL, 0, NULL, 0);
>>> }
>>> 
>>> /// LowerMemOpCallTo - Store the argument to the stack.
>>> @@ -2054,7 +2053,7 @@
>>>                                          SDValue Chain,
>>>                                          SDValue Dst, SDValue Src,
>>>                                          SDValue Size, unsigned Align,
>>> -                                           bool isVolatile, bool AlwaysInline,
>>> +                                           bool AlwaysInline,
>>>                                        const Value *DstSV, uint64_t DstSVOff,
>>>                                        const Value *SrcSV, uint64_t SrcSVOff){
>>> // Do repeated 4-byte loads and stores. To be improved.
>>> @@ -2090,7 +2089,7 @@
>>>     Loads[i] = DAG.getLoad(VT, dl, Chain,
>>>                            DAG.getNode(ISD::ADD, dl, MVT::i32, Src,
>>>                                        DAG.getConstant(SrcOff, MVT::i32)),
>>> -                             SrcSV, SrcSVOff + SrcOff, isVolatile, false, 0);
>>> +                             SrcSV, SrcSVOff + SrcOff, false, false, 0);
>>>     TFOps[i] = Loads[i].getValue(1);
>>>     SrcOff += VTSize;
>>>   }
>>> @@ -2101,7 +2100,7 @@
>>>     TFOps[i] = DAG.getStore(Chain, dl, Loads[i],
>>>                             DAG.getNode(ISD::ADD, dl, MVT::i32, Dst,
>>>                                         DAG.getConstant(DstOff, MVT::i32)),
>>> -                              DstSV, DstSVOff + DstOff, isVolatile, false, 0);
>>> +                              DstSV, DstSVOff + DstOff, false, false, 0);
>>>     DstOff += VTSize;
>>>   }
>>>   Chain = DAG.getNode(ISD::TokenFactor, dl, MVT::Other, &TFOps[0], i);
>>> 
>>> Modified: llvm/trunk/lib/Target/ARM/ARMISelLowering.h
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/ARM/ARMISelLowering.h?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/lib/Target/ARM/ARMISelLowering.h (original)
>>> +++ llvm/trunk/lib/Target/ARM/ARMISelLowering.h Fri Apr  2 13:43:02 2010
>>> @@ -305,7 +305,7 @@
>>>                                     SDValue Chain,
>>>                                     SDValue Dst, SDValue Src,
>>>                                     SDValue Size, unsigned Align,
>>> -                                      bool isVolatile, bool AlwaysInline,
>>> +                                      bool AlwaysInline,
>>>                                     const Value *DstSV, uint64_t DstSVOff,
>>>                                     const Value *SrcSV, uint64_t SrcSVOff);
>>>   SDValue LowerCallResult(SDValue Chain, SDValue InFlag,
>>> 
>>> Modified: llvm/trunk/lib/Target/PowerPC/PPCISelLowering.cpp
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/PowerPC/PPCISelLowering.cpp?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/lib/Target/PowerPC/PPCISelLowering.cpp (original)
>>> +++ llvm/trunk/lib/Target/PowerPC/PPCISelLowering.cpp Fri Apr  2 13:43:02 2010
>>> @@ -2392,7 +2392,7 @@
>>>                         DebugLoc dl) {
>>> SDValue SizeNode = DAG.getConstant(Flags.getByValSize(), MVT::i32);
>>> return DAG.getMemcpy(Chain, dl, Dst, Src, SizeNode, Flags.getByValAlign(),
>>> -                       false, false, NULL, 0, NULL, 0);
>>> +                       false, NULL, 0, NULL, 0);
>>> }
>>> 
>>> /// LowerMemOpCallTo - Store the argument to the stack or remember it in case of
>>> 
>>> Modified: llvm/trunk/lib/Target/X86/X86ISelLowering.cpp
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86ISelLowering.cpp?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/lib/Target/X86/X86ISelLowering.cpp (original)
>>> +++ llvm/trunk/lib/Target/X86/X86ISelLowering.cpp Fri Apr  2 13:43:02 2010
>>> @@ -1433,8 +1433,7 @@
>>>                         DebugLoc dl) {
>>> SDValue SizeNode     = DAG.getConstant(Flags.getByValSize(), MVT::i32);
>>> return DAG.getMemcpy(Chain, dl, Dst, Src, SizeNode, Flags.getByValAlign(),
>>> -                       /*isVolatile*/false, /*AlwaysInline=*/true,
>>> -                       NULL, 0, NULL, 0);
>>> +                       /*AlwaysInline=*/true, NULL, 0, NULL, 0);
>>> }
>>> 
>>> /// IsTailCallConvention - Return true if the calling convention is one that
>>> @@ -6551,7 +6550,6 @@
>>>                                          SDValue Chain,
>>>                                          SDValue Dst, SDValue Src,
>>>                                          SDValue Size, unsigned Align,
>>> -                                           bool isVolatile,
>>>                                          const Value *DstSV,
>>>                                          uint64_t DstSVOff) {
>>> ConstantSDNode *ConstantSize = dyn_cast<ConstantSDNode>(Size);
>>> @@ -6680,7 +6678,7 @@
>>>                                     DAG.getConstant(Offset, AddrVT)),
>>>                         Src,
>>>                         DAG.getConstant(BytesLeft, SizeVT),
>>> -                          Align, isVolatile, DstSV, DstSVOff + Offset);
>>> +                          Align, DstSV, DstSVOff + Offset);
>>> }
>>> 
>>> // TODO: Use a Tokenfactor, as in memcpy, instead of a single chain.
>>> @@ -6691,7 +6689,7 @@
>>> X86TargetLowering::EmitTargetCodeForMemcpy(SelectionDAG &DAG, DebugLoc dl,
>>>                                     SDValue Chain, SDValue Dst, SDValue Src,
>>>                                     SDValue Size, unsigned Align,
>>> -                                      bool isVolatile, bool AlwaysInline,
>>> +                                      bool AlwaysInline,
>>>                                     const Value *DstSV, uint64_t DstSVOff,
>>>                                     const Value *SrcSV, uint64_t SrcSVOff) {
>>> // This requires the copy size to be a constant, preferrably
>>> @@ -6750,7 +6748,7 @@
>>>                                   DAG.getNode(ISD::ADD, dl, SrcVT, Src,
>>>                                               DAG.getConstant(Offset, SrcVT)),
>>>                                   DAG.getConstant(BytesLeft, SizeVT),
>>> -                                    Align, isVolatile, AlwaysInline,
>>> +                                    Align, AlwaysInline,
>>>                                   DstSV, DstSVOff + Offset,
>>>                                   SrcSV, SrcSVOff + Offset));
>>> }
>>> @@ -6833,8 +6831,8 @@
>>> DebugLoc dl = Op.getDebugLoc();
>>> 
>>> return DAG.getMemcpy(Chain, dl, DstPtr, SrcPtr,
>>> -                       DAG.getIntPtrConstant(24), 8, /*isVolatile*/false,
>>> -                       false, DstSV, 0, SrcSV, 0);
>>> +                       DAG.getIntPtrConstant(24), 8, false,
>>> +                       DstSV, 0, SrcSV, 0);
>>> }
>>> 
>>> SDValue
>>> 
>>> Modified: llvm/trunk/lib/Target/X86/X86ISelLowering.h
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86ISelLowering.h?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/lib/Target/X86/X86ISelLowering.h (original)
>>> +++ llvm/trunk/lib/Target/X86/X86ISelLowering.h Fri Apr  2 13:43:02 2010
>>> @@ -737,13 +737,12 @@
>>>                                   SDValue Chain,
>>>                                   SDValue Dst, SDValue Src,
>>>                                   SDValue Size, unsigned Align,
>>> -                                    bool isVolatile,
>>>                                   const Value *DstSV, uint64_t DstSVOff);
>>>   SDValue EmitTargetCodeForMemcpy(SelectionDAG &DAG, DebugLoc dl,
>>>                                   SDValue Chain,
>>>                                   SDValue Dst, SDValue Src,
>>>                                   SDValue Size, unsigned Align,
>>> -                                    bool isVolatile, bool AlwaysInline,
>>> +                                    bool AlwaysInline,
>>>                                   const Value *DstSV, uint64_t DstSVOff,
>>>                                   const Value *SrcSV, uint64_t SrcSVOff);
>>> 
>>> @@ -753,7 +752,7 @@
>>>   /// block, the number of args, and whether or not the second arg is
>>>   /// in memory or not.
>>>   MachineBasicBlock *EmitPCMP(MachineInstr *BInstr, MachineBasicBlock *BB,
>>> -                                unsigned argNum, bool inMem) const;
>>> +				unsigned argNum, bool inMem) const;
>>> 
>>>   /// Utility function to emit atomic bitwise operations (and, or, xor).
>>>   /// It takes the bitwise instruction to expand, the associated machine basic
>>> 
>>> Modified: llvm/trunk/lib/Target/XCore/XCoreISelLowering.cpp
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/XCore/XCoreISelLowering.cpp?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/lib/Target/XCore/XCoreISelLowering.cpp (original)
>>> +++ llvm/trunk/lib/Target/XCore/XCoreISelLowering.cpp Fri Apr  2 13:43:02 2010
>>> @@ -1443,7 +1443,7 @@
>>>       return DAG.getMemmove(Chain, dl, ST->getBasePtr(),
>>>                             LD->getBasePtr(),
>>>                             DAG.getConstant(StoreBits/8, MVT::i32),
>>> -                              Alignment, false, ST->getSrcValue(),
>>> +                              Alignment, ST->getSrcValue(),
>>>                             ST->getSrcValueOffset(), LD->getSrcValue(),
>>>                             LD->getSrcValueOffset());
>>>     }
>>> 
>>> Modified: llvm/trunk/lib/Transforms/InstCombine/InstCombineCalls.cpp
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/InstCombine/InstCombineCalls.cpp?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/lib/Transforms/InstCombine/InstCombineCalls.cpp (original)
>>> +++ llvm/trunk/lib/Transforms/InstCombine/InstCombineCalls.cpp Fri Apr  2 13:43:02 2010
>>> @@ -136,14 +136,8 @@
>>>   return 0;  // If not 1/2/4/8 bytes, exit.
>>> 
>>> // Use an integer load+store unless we can find something better.
>>> -  unsigned SrcAddrSp =
>>> -    cast<PointerType>(MI->getOperand(2)->getType())->getAddressSpace();
>>> -  unsigned DstAddrSp =
>>> -    cast<PointerType>(MI->getOperand(1)->getType())->getAddressSpace();
>>> -
>>> -  const IntegerType* IntType = IntegerType::get(MI->getContext(), Size<<3);
>>> -  Type *NewSrcPtrTy = PointerType::get(IntType, SrcAddrSp);
>>> -  Type *NewDstPtrTy = PointerType::get(IntType, DstAddrSp);
>>> +  Type *NewPtrTy =
>>> +            PointerType::getUnqual(IntegerType::get(MI->getContext(), Size<<3));
>>> 
>>> // Memcpy forces the use of i8* for the source and destination.  That means
>>> // that if you're using memcpy to move one double around, you'll get a cast
>>> @@ -173,10 +167,8 @@
>>>         break;
>>>     }
>>> 
>>> -      if (SrcETy->isSingleValueType()) {
>>> -        NewSrcPtrTy = PointerType::get(SrcETy, SrcAddrSp);
>>> -        NewDstPtrTy = PointerType::get(SrcETy, DstAddrSp);
>>> -      }
>>> +      if (SrcETy->isSingleValueType())
>>> +        NewPtrTy = PointerType::getUnqual(SrcETy);
>>>   }
>>> }
>>> 
>>> @@ -186,12 +178,11 @@
>>> SrcAlign = std::max(SrcAlign, CopyAlign);
>>> DstAlign = std::max(DstAlign, CopyAlign);
>>> 
>>> -  Value *Src = Builder->CreateBitCast(MI->getOperand(2), NewSrcPtrTy);
>>> -  Value *Dest = Builder->CreateBitCast(MI->getOperand(1), NewDstPtrTy);
>>> -  Instruction *L = new LoadInst(Src, "tmp", MI->isVolatile(), SrcAlign);
>>> +  Value *Src = Builder->CreateBitCast(MI->getOperand(2), NewPtrTy);
>>> +  Value *Dest = Builder->CreateBitCast(MI->getOperand(1), NewPtrTy);
>>> +  Instruction *L = new LoadInst(Src, "tmp", false, SrcAlign);
>>> InsertNewInstBefore(L, *MI);
>>> -  InsertNewInstBefore(new StoreInst(L, Dest, MI->isVolatile(), DstAlign),
>>> -                      *MI);
>>> +  InsertNewInstBefore(new StoreInst(L, Dest, false, DstAlign), *MI);
>>> 
>>> // Set the size of the copy to 0, it will be deleted on the next iteration.
>>> MI->setOperand(3, Constant::getNullValue(MemOpLength->getType()));
>>> @@ -284,11 +275,10 @@
>>>       if (GVSrc->isConstant()) {
>>>         Module *M = CI.getParent()->getParent()->getParent();
>>>         Intrinsic::ID MemCpyID = Intrinsic::memcpy;
>>> -          const Type *Tys[3] = { CI.getOperand(1)->getType(),
>>> -                                 CI.getOperand(2)->getType(),
>>> -                                 CI.getOperand(3)->getType() };
>>> +          const Type *Tys[1];
>>> +          Tys[0] = CI.getOperand(3)->getType();
>>>         CI.setOperand(0, 
>>> -                        Intrinsic::getDeclaration(M, MemCpyID, Tys, 3));
>>> +                        Intrinsic::getDeclaration(M, MemCpyID, Tys, 1));
>>>         Changed = true;
>>>       }
>>>   }
>>> 
>>> Modified: llvm/trunk/lib/Transforms/Scalar/MemCpyOptimizer.cpp
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/Scalar/MemCpyOptimizer.cpp?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/lib/Transforms/Scalar/MemCpyOptimizer.cpp (original)
>>> +++ llvm/trunk/lib/Transforms/Scalar/MemCpyOptimizer.cpp Fri Apr  2 13:43:02 2010
>>> @@ -413,6 +413,7 @@
>>> // interesting as a small compile-time optimization.
>>> Ranges.addStore(0, SI);
>>> 
>>> +  Function *MemSetF = 0;
>>> 
>>> // Now that we have full information about ranges, loop over the ranges and
>>> // emit memset's for anything big enough to be worthwhile.
>>> @@ -432,40 +433,29 @@
>>>   // memset block.  This ensure that the memset is dominated by any addressing
>>>   // instruction needed by the start of the block.
>>>   BasicBlock::iterator InsertPt = BI;
>>> -
>>> +  
>>> +    if (MemSetF == 0) {
>>> +      const Type *Ty = Type::getInt64Ty(Context);
>>> +      MemSetF = Intrinsic::getDeclaration(M, Intrinsic::memset, &Ty, 1);
>>> +    }
>>> +    
>>>   // Get the starting pointer of the block.
>>>   StartPtr = Range.StartPtr;
>>> -
>>> -    // Determine alignment
>>> -    unsigned Alignment = Range.Alignment;
>>> -    if (Alignment == 0) {
>>> -      const Type *EltType = 
>>> -         cast<PointerType>(StartPtr->getType())->getElementType();
>>> -      Alignment = TD->getABITypeAlignment(EltType);
>>> -    }
>>> -
>>> +  
>>>   // Cast the start ptr to be i8* as memset requires.
>>> -    const PointerType* StartPTy = cast<PointerType>(StartPtr->getType());
>>> -    const PointerType *i8Ptr = Type::getInt8PtrTy(Context,
>>> -                                                  StartPTy->getAddressSpace());
>>> -    if (StartPTy!= i8Ptr)
>>> +    const Type *i8Ptr = Type::getInt8PtrTy(Context);
>>> +    if (StartPtr->getType() != i8Ptr)
>>>     StartPtr = new BitCastInst(StartPtr, i8Ptr, StartPtr->getName(),
>>>                                InsertPt);
>>> -
>>> +  
>>>   Value *Ops[] = {
>>>     StartPtr, ByteVal,   // Start, value
>>>     // size
>>>     ConstantInt::get(Type::getInt64Ty(Context), Range.End-Range.Start),
>>>     // align
>>> -      ConstantInt::get(Type::getInt32Ty(Context), Alignment),
>>> -      // volatile
>>> -      ConstantInt::get(Type::getInt1Ty(Context), 0),
>>> +      ConstantInt::get(Type::getInt32Ty(Context), Range.Alignment)
>>>   };
>>> -    const Type *Tys[] = { Ops[0]->getType(), Ops[2]->getType() };
>>> -
>>> -    Function *MemSetF = Intrinsic::getDeclaration(M, Intrinsic::memset, Tys, 2);
>>> -
>>> -    Value *C = CallInst::Create(MemSetF, Ops, Ops+5, "", InsertPt);
>>> +    Value *C = CallInst::Create(MemSetF, Ops, Ops+4, "", InsertPt);
>>>   DEBUG(dbgs() << "Replace stores:\n";
>>>         for (unsigned i = 0, e = Range.TheStores.size(); i != e; ++i)
>>>           dbgs() << *Range.TheStores[i];
>>> @@ -690,19 +680,16 @@
>>>   return false;
>>> 
>>> // If all checks passed, then we can transform these memcpy's
>>> -  const Type *ArgTys[3] = { M->getRawDest()->getType(),
>>> -                            MDep->getRawSource()->getType(),
>>> -                            M->getLength()->getType() };
>>> +  const Type *Ty = M->getLength()->getType();
>>> Function *MemCpyFun = Intrinsic::getDeclaration(
>>>                                M->getParent()->getParent()->getParent(),
>>> -                                 M->getIntrinsicID(), ArgTys, 3);
>>> +                                 M->getIntrinsicID(), &Ty, 1);
>>> 
>>> -  Value *Args[5] = {
>>> -    M->getRawDest(), MDep->getRawSource(), M->getLength(),
>>> -    M->getAlignmentCst(), M->getVolatileCst()
>>> +  Value *Args[4] = {
>>> +    M->getRawDest(), MDep->getRawSource(), M->getLength(), M->getAlignmentCst()
>>> };
>>> 
>>> -  CallInst *C = CallInst::Create(MemCpyFun, Args, Args+5, "", M);
>>> +  CallInst *C = CallInst::Create(MemCpyFun, Args, Args+4, "", M);
>>> 
>>> 
>>> // If C and M don't interfere, then this is a valid transformation.  If they
>>> @@ -741,10 +728,8 @@
>>> 
>>> // If not, then we know we can transform this.
>>> Module *Mod = M->getParent()->getParent()->getParent();
>>> -  const Type *ArgTys[3] = { M->getRawDest()->getType(),
>>> -                            M->getRawSource()->getType(),
>>> -                            M->getLength()->getType() };
>>> -  M->setOperand(0,Intrinsic::getDeclaration(Mod, Intrinsic::memcpy, ArgTys, 3));
>>> +  const Type *Ty = M->getLength()->getType();
>>> +  M->setOperand(0, Intrinsic::getDeclaration(Mod, Intrinsic::memcpy, &Ty, 1));
>>> 
>>> // MemDep may have over conservative information about this instruction, just
>>> // conservatively flush it from the cache.
>>> 
>>> Modified: llvm/trunk/lib/Transforms/Scalar/ScalarReplAggregates.cpp
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/Scalar/ScalarReplAggregates.cpp?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/lib/Transforms/Scalar/ScalarReplAggregates.cpp (original)
>>> +++ llvm/trunk/lib/Transforms/Scalar/ScalarReplAggregates.cpp Fri Apr  2 13:43:02 2010
>>> @@ -858,17 +858,8 @@
>>>     EltPtr = new BitCastInst(EltPtr, BytePtrTy, EltPtr->getName(), MI);
>>> 
>>>   // Cast the other pointer (if we have one) to BytePtrTy. 
>>> -    if (OtherElt && OtherElt->getType() != BytePtrTy) {
>>> -      // Preserve address space of OtherElt
>>> -      const PointerType* OtherPTy = cast<PointerType>(OtherElt->getType());
>>> -      const PointerType* PTy = cast<PointerType>(BytePtrTy);
>>> -      if (OtherPTy->getElementType() != PTy->getElementType()) {
>>> -        Type *NewOtherPTy = PointerType::get(PTy->getElementType(),
>>> -                                             OtherPTy->getAddressSpace());
>>> -        OtherElt = new BitCastInst(OtherElt, NewOtherPTy,
>>> -                                   OtherElt->getNameStr(), MI);
>>> -      }
>>> -    }
>>> +    if (OtherElt && OtherElt->getType() != BytePtrTy)
>>> +      OtherElt = new BitCastInst(OtherElt, BytePtrTy, OtherElt->getName(), MI);
>>> 
>>>   unsigned EltSize = TD->getTypeAllocSize(EltTy);
>>> 
>>> @@ -879,28 +870,17 @@
>>>       SROADest ? OtherElt : EltPtr,  // Src ptr
>>>       ConstantInt::get(MI->getOperand(3)->getType(), EltSize), // Size
>>>       // Align
>>> -        ConstantInt::get(Type::getInt32Ty(MI->getContext()), OtherEltAlign),
>>> -        MI->getVolatileCst()
>>> +        ConstantInt::get(Type::getInt32Ty(MI->getContext()), OtherEltAlign)
>>>     };
>>> -      // In case we fold the address space overloaded memcpy of A to B
>>> -      // with memcpy of B to C, change the function to be a memcpy of A to C.
>>> -      const Type *Tys[] = { Ops[0]->getType(), Ops[1]->getType(),
>>> -                            Ops[2]->getType() };
>>> -      Module *M = MI->getParent()->getParent()->getParent();
>>> -      TheFn = Intrinsic::getDeclaration(M, MI->getIntrinsicID(), Tys, 3);
>>> -      CallInst::Create(TheFn, Ops, Ops + 5, "", MI);
>>> +      CallInst::Create(TheFn, Ops, Ops + 4, "", MI);
>>>   } else {
>>>     assert(isa<MemSetInst>(MI));
>>>     Value *Ops[] = {
>>>       EltPtr, MI->getOperand(2),  // Dest, Value,
>>>       ConstantInt::get(MI->getOperand(3)->getType(), EltSize), // Size
>>> -        Zero,  // Align
>>> -        ConstantInt::get(Type::getInt1Ty(MI->getContext()), 0) // isVolatile
>>> +        Zero  // Align
>>>     };
>>> -      const Type *Tys[] = { Ops[0]->getType(), Ops[2]->getType() };
>>> -      Module *M = MI->getParent()->getParent()->getParent();
>>> -      TheFn = Intrinsic::getDeclaration(M, Intrinsic::memset, Tys, 2);
>>> -      CallInst::Create(TheFn, Ops, Ops + 5, "", MI);
>>> +      CallInst::Create(TheFn, Ops, Ops + 4, "", MI);
>>>   }
>>> }
>>> DeadInsts.push_back(MI);
>>> 
>>> Modified: llvm/trunk/lib/Transforms/Scalar/SimplifyLibCalls.cpp
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/Scalar/SimplifyLibCalls.cpp?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/lib/Transforms/Scalar/SimplifyLibCalls.cpp (original)
>>> +++ llvm/trunk/lib/Transforms/Scalar/SimplifyLibCalls.cpp Fri Apr  2 13:43:02 2010
>>> @@ -142,8 +142,7 @@
>>>   // We have enough information to now generate the memcpy call to do the
>>>   // concatenation for us.  Make a memcpy to copy the nul byte with align = 1.
>>>   EmitMemCpy(CpyDst, Src,
>>> -               ConstantInt::get(TD->getIntPtrType(*Context), Len+1),
>>> -                                1, false, B, TD);
>>> +               ConstantInt::get(TD->getIntPtrType(*Context), Len+1), 1, B, TD);
>>> }
>>> };
>>> 
>>> @@ -384,8 +383,7 @@
>>>                   CI->getOperand(3), B, TD);
>>>   else
>>>     EmitMemCpy(Dst, Src,
>>> -                 ConstantInt::get(TD->getIntPtrType(*Context), Len),
>>> -                                  1, false, B, TD);
>>> +                 ConstantInt::get(TD->getIntPtrType(*Context), Len), 1, B, TD);
>>>   return Dst;
>>> }
>>> };
>>> @@ -413,8 +411,8 @@
>>> 
>>>   if (SrcLen == 0) {
>>>     // strncpy(x, "", y) -> memset(x, '\0', y, 1)
>>> -      EmitMemSet(Dst, ConstantInt::get(Type::getInt8Ty(*Context), '\0'),
>>> -                 LenOp, false, B, TD);
>>> +      EmitMemSet(Dst, ConstantInt::get(Type::getInt8Ty(*Context), '\0'), LenOp,
>>> +		             B, TD);
>>>     return Dst;
>>>   }
>>> 
>>> @@ -434,8 +432,7 @@
>>> 
>>>   // strncpy(x, s, c) -> memcpy(x, s, c, 1) [s and c are constant]
>>>   EmitMemCpy(Dst, Src,
>>> -               ConstantInt::get(TD->getIntPtrType(*Context), Len),
>>> -                                1, false, B, TD);
>>> +               ConstantInt::get(TD->getIntPtrType(*Context), Len), 1, B, TD);
>>> 
>>>   return Dst;
>>> }
>>> @@ -596,7 +593,7 @@
>>> 
>>>   // memcpy(x, y, n) -> llvm.memcpy(x, y, n, 1)
>>>   EmitMemCpy(CI->getOperand(1), CI->getOperand(2),
>>> -               CI->getOperand(3), 1, false, B, TD);
>>> +               CI->getOperand(3), 1, B, TD);
>>>   return CI->getOperand(1);
>>> }
>>> };
>>> @@ -618,7 +615,7 @@
>>> 
>>>   // memmove(x, y, n) -> llvm.memmove(x, y, n, 1)
>>>   EmitMemMove(CI->getOperand(1), CI->getOperand(2),
>>> -                CI->getOperand(3), 1, false, B, TD);
>>> +                CI->getOperand(3), 1, B, TD);
>>>   return CI->getOperand(1);
>>> }
>>> };
>>> @@ -640,8 +637,8 @@
>>> 
>>>   // memset(p, v, n) -> llvm.memset(p, v, n, 1)
>>>   Value *Val = B.CreateIntCast(CI->getOperand(2), Type::getInt8Ty(*Context),
>>> -                                 false);
>>> -    EmitMemSet(CI->getOperand(1), Val,  CI->getOperand(3), false, B, TD);
>>> +				 false);
>>> +    EmitMemSet(CI->getOperand(1), Val,  CI->getOperand(3), B, TD);
>>>   return CI->getOperand(1);
>>> }
>>> };
>>> @@ -1002,7 +999,7 @@
>>>     // sprintf(str, fmt) -> llvm.memcpy(str, fmt, strlen(fmt)+1, 1)
>>>     EmitMemCpy(CI->getOperand(1), CI->getOperand(2), // Copy the nul byte.
>>>                ConstantInt::get(TD->getIntPtrType(*Context),
>>> -                 FormatStr.size()+1), 1, false, B, TD);
>>> +                 FormatStr.size()+1), 1, B, TD);
>>>     return ConstantInt::get(CI->getType(), FormatStr.size());
>>>   }
>>> 
>>> @@ -1016,11 +1013,11 @@
>>>     // sprintf(dst, "%c", chr) --> *(i8*)dst = chr; *((i8*)dst+1) = 0
>>>     if (!CI->getOperand(3)->getType()->isIntegerTy()) return 0;
>>>     Value *V = B.CreateTrunc(CI->getOperand(3),
>>> -                               Type::getInt8Ty(*Context), "char");
>>> +			       Type::getInt8Ty(*Context), "char");
>>>     Value *Ptr = CastToCStr(CI->getOperand(1), B);
>>>     B.CreateStore(V, Ptr);
>>>     Ptr = B.CreateGEP(Ptr, ConstantInt::get(Type::getInt32Ty(*Context), 1),
>>> -                        "nul");
>>> +			"nul");
>>>     B.CreateStore(Constant::getNullValue(Type::getInt8Ty(*Context)), Ptr);
>>> 
>>>     return ConstantInt::get(CI->getType(), 1);
>>> @@ -1037,7 +1034,7 @@
>>>     Value *IncLen = B.CreateAdd(Len,
>>>                                 ConstantInt::get(Len->getType(), 1),
>>>                                 "leninc");
>>> -      EmitMemCpy(CI->getOperand(1), CI->getOperand(3), IncLen, 1, false, B, TD);
>>> +      EmitMemCpy(CI->getOperand(1), CI->getOperand(3), IncLen, 1, B, TD);
>>> 
>>>     // The sprintf result is the unincremented number of bytes in the string.
>>>     return B.CreateIntCast(Len, CI->getType(), false);
>>> 
>>> Modified: llvm/trunk/lib/Transforms/Utils/BuildLibCalls.cpp
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/Utils/BuildLibCalls.cpp?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/lib/Transforms/Utils/BuildLibCalls.cpp (original)
>>> +++ llvm/trunk/lib/Transforms/Utils/BuildLibCalls.cpp Fri Apr  2 13:43:02 2010
>>> @@ -109,16 +109,15 @@
>>> 
>>> /// EmitMemCpy - Emit a call to the memcpy function to the builder.  This always
>>> /// expects that Len has type 'intptr_t' and Dst/Src are pointers.
>>> -Value *llvm::EmitMemCpy(Value *Dst, Value *Src, Value *Len, unsigned Align,
>>> -                        bool isVolatile, IRBuilder<> &B, const TargetData *TD) {
>>> +Value *llvm::EmitMemCpy(Value *Dst, Value *Src, Value *Len,
>>> +                        unsigned Align, IRBuilder<> &B, const TargetData *TD) {
>>> Module *M = B.GetInsertBlock()->getParent()->getParent();
>>> -  const Type *ArgTys[3] = { Dst->getType(), Src->getType(), Len->getType() };
>>> -  Value *MemCpy = Intrinsic::getDeclaration(M, Intrinsic::memcpy, ArgTys, 3);
>>> +  const Type *Ty = Len->getType();
>>> +  Value *MemCpy = Intrinsic::getDeclaration(M, Intrinsic::memcpy, &Ty, 1);
>>> Dst = CastToCStr(Dst, B);
>>> Src = CastToCStr(Src, B);
>>> -  return B.CreateCall5(MemCpy, Dst, Src, Len,
>>> -                       ConstantInt::get(B.getInt32Ty(), Align),
>>> -                       ConstantInt::get(B.getInt1Ty(), isVolatile));
>>> +  return B.CreateCall4(MemCpy, Dst, Src, Len,
>>> +                       ConstantInt::get(B.getInt32Ty(), Align));
>>> }
>>> 
>>> /// EmitMemCpyChk - Emit a call to the __memcpy_chk function to the builder.
>>> @@ -147,18 +146,16 @@
>>> 
>>> /// EmitMemMove - Emit a call to the memmove function to the builder.  This
>>> /// always expects that the size has type 'intptr_t' and Dst/Src are pointers.
>>> -Value *llvm::EmitMemMove(Value *Dst, Value *Src, Value *Len, unsigned Align,
>>> -                         bool isVolatile, IRBuilder<> &B, const TargetData *TD) {
>>> +Value *llvm::EmitMemMove(Value *Dst, Value *Src, Value *Len,
>>> +                         unsigned Align, IRBuilder<> &B, const TargetData *TD) {
>>> Module *M = B.GetInsertBlock()->getParent()->getParent();
>>> LLVMContext &Context = B.GetInsertBlock()->getContext();
>>> -  const Type *ArgTys[3] = { Dst->getType(), Src->getType(),
>>> -                            TD->getIntPtrType(Context) };
>>> -  Value *MemMove = Intrinsic::getDeclaration(M, Intrinsic::memmove, ArgTys, 3);
>>> +  const Type *Ty = TD->getIntPtrType(Context);
>>> +  Value *MemMove = Intrinsic::getDeclaration(M, Intrinsic::memmove, &Ty, 1);
>>> Dst = CastToCStr(Dst, B);
>>> Src = CastToCStr(Src, B);
>>> Value *A = ConstantInt::get(B.getInt32Ty(), Align);
>>> -  Value *Vol = ConstantInt::get(B.getInt1Ty(), isVolatile);
>>> -  return B.CreateCall5(MemMove, Dst, Src, Len, A, Vol);
>>> +  return B.CreateCall4(MemMove, Dst, Src, Len, A);
>>> }
>>> 
>>> /// EmitMemChr - Emit a call to the memchr function.  This assumes that Ptr is
>>> @@ -209,15 +206,15 @@
>>> }
>>> 
>>> /// EmitMemSet - Emit a call to the memset function
>>> -Value *llvm::EmitMemSet(Value *Dst, Value *Val, Value *Len, bool isVolatile,
>>> -                        IRBuilder<> &B, const TargetData *TD) {
>>> +Value *llvm::EmitMemSet(Value *Dst, Value *Val,
>>> +                        Value *Len, IRBuilder<> &B, const TargetData *TD) {
>>> Module *M = B.GetInsertBlock()->getParent()->getParent();
>>> Intrinsic::ID IID = Intrinsic::memset;
>>> - const Type *Tys[2] = { Dst->getType(), Len->getType() };
>>> - Value *MemSet = Intrinsic::getDeclaration(M, IID, Tys, 2);
>>> + const Type *Tys[1];
>>> + Tys[0] = Len->getType();
>>> + Value *MemSet = Intrinsic::getDeclaration(M, IID, Tys, 1);
>>> Value *Align = ConstantInt::get(B.getInt32Ty(), 1);
>>> - Value *Vol = ConstantInt::get(B.getInt1Ty(), isVolatile);
>>> - return B.CreateCall5(MemSet, CastToCStr(Dst, B), Val, Len, Align, Vol);
>>> + return B.CreateCall4(MemSet, CastToCStr(Dst, B), Val, Len, Align);
>>> }
>>> 
>>> /// EmitUnaryFloatFnCall - Emit a call to the unary function named 'Name' (e.g.
>>> @@ -384,7 +381,7 @@
>>> if (Name == "__memcpy_chk") {
>>>   if (isFoldable(4, 3, false)) {
>>>     EmitMemCpy(CI->getOperand(1), CI->getOperand(2), CI->getOperand(3),
>>> -                 1, false, B, TD);
>>> +                 1, B, TD);
>>>     replaceCall(CI->getOperand(1));
>>>     return true;
>>>   }
>>> @@ -399,7 +396,7 @@
>>> if (Name == "__memmove_chk") {
>>>   if (isFoldable(4, 3, false)) {
>>>     EmitMemMove(CI->getOperand(1), CI->getOperand(2), CI->getOperand(3),
>>> -                  1, false, B, TD);
>>> +                  1, B, TD);
>>>     replaceCall(CI->getOperand(1));
>>>     return true;
>>>   }
>>> @@ -410,7 +407,7 @@
>>>   if (isFoldable(4, 3, false)) {
>>>     Value *Val = B.CreateIntCast(CI->getOperand(2), B.getInt8Ty(),
>>>                                  false);
>>> -      EmitMemSet(CI->getOperand(1), Val,  CI->getOperand(3), false, B, TD);
>>> +      EmitMemSet(CI->getOperand(1), Val,  CI->getOperand(3), B, TD);
>>>     replaceCall(CI->getOperand(1));
>>>     return true;
>>>   }
>>> 
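For anyone skimming the BuildLibCalls.cpp hunks above: after this revert the helpers go back to emitting the four-operand, length-overloaded intrinsics, whereas r100191 had them emit the pointer-overloaded names with a trailing i1 isVolatile operand. A minimal IR sketch of the two call forms, using illustrative names (@copy16, %dst, %src) that are not from the patch itself:

    ; emitted after this revert (r100199): four operands, length-overloaded name
    declare void @llvm.memcpy.i32(i8*, i8*, i32, i32)

    define void @copy16(i8* %dst, i8* %src) {
    entry:
      call void @llvm.memcpy.i32(i8* %dst, i8* %src, i32 16, i32 1)
      ret void
    }

    ; the r100191 form being reverted: pointer-overloaded name plus i1 isVolatile
    ;   call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dst, i8* %src, i32 16, i32 1, i1 false)

The five-operand line is shown only as a comment for contrast; it is the form this commit removes again.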
>>> Modified: llvm/trunk/lib/Transforms/Utils/InlineFunction.cpp
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/Utils/InlineFunction.cpp?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/lib/Transforms/Utils/InlineFunction.cpp (original)
>>> +++ llvm/trunk/lib/Transforms/Utils/InlineFunction.cpp Fri Apr  2 13:43:02 2010
>>> @@ -297,10 +297,10 @@
>>>                                         I->getName(), 
>>>                                         &*Caller->begin()->begin());
>>>       // Emit a memcpy.
>>> -        const Type *Tys[3] = {VoidPtrTy, VoidPtrTy, Type::getInt64Ty(Context)};
>>> +        const Type *Tys[] = { Type::getInt64Ty(Context) };
>>>       Function *MemCpyFn = Intrinsic::getDeclaration(Caller->getParent(),
>>>                                                      Intrinsic::memcpy, 
>>> -                                                       Tys, 3);
>>> +                                                       Tys, 1);
>>>       Value *DestCast = new BitCastInst(NewAlloca, VoidPtrTy, "tmp", TheCall);
>>>       Value *SrcCast = new BitCastInst(*AI, VoidPtrTy, "tmp", TheCall);
>>> 
>>> @@ -309,18 +309,17 @@
>>>         Size = ConstantExpr::getSizeOf(AggTy);
>>>       else
>>>         Size = ConstantInt::get(Type::getInt64Ty(Context),
>>> -                                  TD->getTypeStoreSize(AggTy));
>>> +                                         TD->getTypeStoreSize(AggTy));
>>> 
>>>       // Always generate a memcpy of alignment 1 here because we don't know
>>>       // the alignment of the src pointer.  Other optimizations can infer
>>>       // better alignment.
>>>       Value *CallArgs[] = {
>>>         DestCast, SrcCast, Size,
>>> -          ConstantInt::get(Type::getInt32Ty(Context), 1),
>>> -          ConstantInt::get(Type::getInt1Ty(Context), 0)
>>> +          ConstantInt::get(Type::getInt32Ty(Context), 1)
>>>       };
>>>       CallInst *TheMemCpy =
>>> -          CallInst::Create(MemCpyFn, CallArgs, CallArgs+5, "", TheCall);
>>> +          CallInst::Create(MemCpyFn, CallArgs, CallArgs+4, "", TheCall);
>>> 
>>>       // If we have a call graph, update it.
>>>       if (CG) {
>>> 
>>> Modified: llvm/trunk/lib/VMCore/AutoUpgrade.cpp
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/VMCore/AutoUpgrade.cpp?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/lib/VMCore/AutoUpgrade.cpp (original)
>>> +++ llvm/trunk/lib/VMCore/AutoUpgrade.cpp Fri Apr  2 13:43:02 2010
>>> @@ -145,54 +145,6 @@
>>>   }
>>>   break;
>>> 
>>> -  case 'm': {
>>> -    // This upgrades the llvm.memcpy, llvm.memmove, and llvm.memset to the
>>> -    // new format that allows overloading the pointer for different address
>>> -    // space (e.g., llvm.memcpy.i16 => llvm.memcpy.p0i8.p0i8.i16)
>>> -    const char* NewFnName = NULL;
>>> -    if (Name.compare(5,8,"memcpy.i",8) == 0) {
>>> -      if (Name[13] == '8')
>>> -        NewFnName = "llvm.memcpy.p0i8.p0i8.i8";
>>> -      else if (Name.compare(13,2,"16") == 0)
>>> -        NewFnName = "llvm.memcpy.p0i8.p0i8.i16";
>>> -      else if (Name.compare(13,2,"32") == 0)
>>> -        NewFnName = "llvm.memcpy.p0i8.p0i8.i32";
>>> -      else if (Name.compare(13,2,"64") == 0)
>>> -        NewFnName = "llvm.memcpy.p0i8.p0i8.i64";
>>> -    } else if (Name.compare(5,9,"memmove.i",9) == 0) {
>>> -      if (Name[14] == '8')
>>> -        NewFnName = "llvm.memmove.p0i8.p0i8.i8";
>>> -      else if (Name.compare(14,2,"16") == 0)
>>> -        NewFnName = "llvm.memmove.p0i8.p0i8.i16";
>>> -      else if (Name.compare(14,2,"32") == 0)
>>> -        NewFnName = "llvm.memmove.p0i8.p0i8.i32";
>>> -      else if (Name.compare(14,2,"64") == 0)
>>> -        NewFnName = "llvm.memmove.p0i8.p0i8.i64";
>>> -    }
>>> -    else if (Name.compare(5,8,"memset.i",8) == 0) {
>>> -      if (Name[13] == '8')
>>> -        NewFnName = "llvm.memset.p0i8.i8";
>>> -      else if (Name.compare(13,2,"16") == 0)
>>> -        NewFnName = "llvm.memset.p0i8.i16";
>>> -      else if (Name.compare(13,2,"32") == 0)
>>> -        NewFnName = "llvm.memset.p0i8.i32";
>>> -      else if (Name.compare(13,2,"64") == 0)
>>> -        NewFnName = "llvm.memset.p0i8.i64";
>>> -    }
>>> -    if (NewFnName) {
>>> -      const FunctionType *FTy = F->getFunctionType();
>>> -      NewFn = cast<Function>(M->getOrInsertFunction(NewFnName, 
>>> -                                            FTy->getReturnType(),
>>> -                                            FTy->getParamType(0),
>>> -                                            FTy->getParamType(1),
>>> -                                            FTy->getParamType(2),
>>> -                                            FTy->getParamType(3),
>>> -                                            Type::getInt1Ty(F->getContext()),
>>> -                                            (Type *)0));
>>> -      return true;
>>> -    }
>>> -    break;
>>> -  }
>>> case 'p':
>>>   //  This upgrades the llvm.part.select overloaded intrinsic names to only 
>>>   //  use one type specifier in the name. We only care about the old format
>>> @@ -520,28 +472,6 @@
>>>   CI->eraseFromParent();
>>> }
>>> break;
>>> -  case Intrinsic::memcpy:
>>> -  case Intrinsic::memmove:
>>> -  case Intrinsic::memset: {
>>> -    // Add isVolatile
>>> -    const llvm::Type *I1Ty = llvm::Type::getInt1Ty(CI->getContext());
>>> -    Value *Operands[5] = { CI->getOperand(1), CI->getOperand(2),
>>> -                           CI->getOperand(3), CI->getOperand(4),
>>> -                           llvm::ConstantInt::get(I1Ty, 0) };
>>> -    CallInst *NewCI = CallInst::Create(NewFn, Operands, Operands+5,
>>> -                                       CI->getName(), CI);
>>> -    NewCI->setTailCall(CI->isTailCall());
>>> -    NewCI->setCallingConv(CI->getCallingConv());
>>> -    //  Handle any uses of the old CallInst.
>>> -    if (!CI->use_empty())
>>> -      //  Replace all uses of the old call with the new cast which has the 
>>> -      //  correct type.
>>> -      CI->replaceAllUsesWith(NewCI);
>>> -    
>>> -    //  Clean up the old call now that it has been completely upgraded.
>>> -    CI->eraseFromParent();
>>> -    break;
>>> -  }
>>> }
>>> }
>>> 
>>> 
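The block removed from AutoUpgrade.cpp above was r100191's upgrade path: it renamed the old length-overloaded declarations to the pointer-overloaded ones and appended a constant-false isVolatile operand to existing calls. With the revert, the old names stay canonical and no upgrade is needed. An illustrative before/after of the mapping that code performed, using memset as the example:

    ; old-style declaration, canonical again after this revert
    declare void @llvm.memset.i32(i8*, i8, i32, i32)

    ; what the removed upgrade path rewrote it to under r100191
    ;   declare void @llvm.memset.p0i8.i32(i8*, i8, i32, i32, i1)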
>>> Modified: llvm/trunk/test/Analysis/BasicAA/modref.ll
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/BasicAA/modref.ll?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/test/Analysis/BasicAA/modref.ll (original)
>>> +++ llvm/trunk/test/Analysis/BasicAA/modref.ll Fri Apr  2 13:43:02 2010
>>> @@ -103,7 +103,7 @@
>>> ret i32 %sub
>>> ; CHECK: @test4
>>> ; CHECK: load i32* @G
>>> -; CHECK: memset.p0i8.i32
>>> +; CHECK: memset.i32
>>> ; CHECK-NOT: load
>>> ; CHECK: sub i32 %tmp, %tmp
>>> }
>>> @@ -118,7 +118,7 @@
>>> ret i32 %sub
>>> ; CHECK: @test5
>>> ; CHECK: load i32* @G
>>> -; CHECK: memcpy.p0i8.p0i8.i32
>>> +; CHECK: memcpy.i32
>>> ; CHECK-NOT: load
>>> ; CHECK: sub i32 %tmp, %tmp
>>> }
>>> 
>>> Modified: llvm/trunk/test/Bitcode/memcpy.ll
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Bitcode/memcpy.ll?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/test/Bitcode/memcpy.ll (original)
>>> +++ llvm/trunk/test/Bitcode/memcpy.ll Fri Apr  2 13:43:02 2010
>>> @@ -8,7 +8,6 @@
>>>       tail call void @llvm.memcpy.i64( i8* %tmp.1, i8* %tmp.3, i64 100000, i32 1 )
>>>       tail call void @llvm.memset.i32( i8* %tmp.3, i8 14, i32 10000, i32 0 )
>>>       tail call void @llvm.memmove.i32( i8* %tmp.1, i8* %tmp.3, i32 123124, i32 1 )
>>> -        tail call void @llvm.memmove.i64( i8* %tmp.1, i8* %tmp.3, i64 123124, i32 1 )
>>>       ret void
>>> }
>>> 
>>> @@ -20,4 +19,3 @@
>>> 
>>> declare void @llvm.memmove.i32(i8*, i8*, i32, i32)
>>> 
>>> -declare void @llvm.memmove.i64(i8*, i8*, i32, i32)
>>> 
>>> Modified: llvm/trunk/test/Transforms/InstCombine/memset_chk.ll
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/InstCombine/memset_chk.ll?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/test/Transforms/InstCombine/memset_chk.ll (original)
>>> +++ llvm/trunk/test/Transforms/InstCombine/memset_chk.ll Fri Apr  2 13:43:02 2010
>>> @@ -7,7 +7,7 @@
>>> 
>>> define i32 @t() nounwind ssp {
>>> ; CHECK: @t
>>> -; CHECK: @llvm.memset.p0i8.i64
>>> +; CHECK: @llvm.memset.i64
>>> entry:
>>> %0 = alloca %struct.data, align 8               ; <%struct.data*> [#uses=1]
>>> %1 = bitcast %struct.data* %0 to i8*            ; <i8*> [#uses=1]
>>> 
>>> Modified: llvm/trunk/test/Transforms/InstCombine/objsize.ll
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/InstCombine/objsize.ll?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/test/Transforms/InstCombine/objsize.ll (original)
>>> +++ llvm/trunk/test/Transforms/InstCombine/objsize.ll Fri Apr  2 13:43:02 2010
>>> @@ -113,7 +113,7 @@
>>> %1 = bitcast %struct.data* %0 to i8*
>>> %2 = call i64 @llvm.objectsize.i64(i8* %1, i1 false) nounwind
>>> ; CHECK-NOT: @llvm.objectsize
>>> -; CHECK: @llvm.memset.p0i8.i64(i8* %1, i8 0, i64 1824, i32 8, i1 false)
>>> +; CHECK: @llvm.memset.i64(i8* %1, i8 0, i64 1824, i32 8)
>>> %3 = call i8* @__memset_chk(i8* %1, i32 0, i64 1824, i64 %2) nounwind
>>> ret i32 0
>>> }
>>> @@ -128,7 +128,7 @@
>>> %1 = tail call i32 @llvm.objectsize.i32(i8* %0, i1 false)
>>> %2 = load i8** @s, align 8
>>> ; CHECK-NOT: @llvm.objectsize
>>> -; CHECK: @llvm.memcpy.p0i8.p0i8.i32(i8* %0, i8* %1, i32 10, i32 1, i1 false)
>>> +; CHECK: @llvm.memcpy.i32(i8* %0, i8* %1, i32 10, i32 1)
>>> %3 = tail call i8* @__memcpy_chk(i8* %0, i8* %2, i32 10, i32 %1) nounwind
>>> ret void
>>> }
>>> 
>>> Modified: llvm/trunk/test/Transforms/MemCpyOpt/align.ll
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/MemCpyOpt/align.ll?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/test/Transforms/MemCpyOpt/align.ll (original)
>>> +++ llvm/trunk/test/Transforms/MemCpyOpt/align.ll Fri Apr  2 13:43:02 2010
>>> @@ -4,7 +4,7 @@
>>> ; The resulting memset is only 4-byte aligned, despite containing
>>> ; a 16-byte aligned store in the middle.
>>> 
>>> -; CHECK: call void @llvm.memset.p0i8.i64(i8* %a01, i8 0, i64 16, i32 4, i1 false)
>>> +; CHECK: call void @llvm.memset.i64(i8* %a01, i8 0, i64 16, i32 4)
>>> 
>>> define void @foo(i32* %p) {
>>> %a0 = getelementptr i32* %p, i64 0
>>> 
>>> Modified: llvm/trunk/test/Transforms/SimplifyLibCalls/StrCpy.ll
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SimplifyLibCalls/StrCpy.ll?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/test/Transforms/SimplifyLibCalls/StrCpy.ll (original)
>>> +++ llvm/trunk/test/Transforms/SimplifyLibCalls/StrCpy.ll Fri Apr  2 13:43:02 2010
>>> @@ -21,7 +21,7 @@
>>> %arg1 = getelementptr [1024 x i8]* %target, i32 0, i32 0
>>> %arg2 = getelementptr [6 x i8]* @hello, i32 0, i32 0
>>> %rslt1 = call i8* @strcpy( i8* %arg1, i8* %arg2 )
>>> -; CHECK: @llvm.memcpy.p0i8.p0i8.i32
>>> +; CHECK: @llvm.memcpy.i32
>>> ret i32 0
>>> }
>>> 
>>> 
>>> Modified: llvm/trunk/test/Verifier/2006-12-12-IntrinsicDefine.ll
>>> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Verifier/2006-12-12-IntrinsicDefine.ll?rev=100199&r1=100198&r2=100199&view=diff
>>> ==============================================================================
>>> --- llvm/trunk/test/Verifier/2006-12-12-IntrinsicDefine.ll (original)
>>> +++ llvm/trunk/test/Verifier/2006-12-12-IntrinsicDefine.ll Fri Apr  2 13:43:02 2010
>>> @@ -1,7 +1,7 @@
>>> ; RUN: not llvm-as < %s |& grep {llvm intrinsics cannot be defined}
>>> ; PR1047
>>> 
>>> -define void @llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1) {
>>> +define void @llvm.memcpy.i32(i8*, i8*, i32, i32) {
>>> entry:
>>> 	ret void
>>> }
>>> 
>>> 
>>> _______________________________________________
>>> llvm-commits mailing list
>>> llvm-commits at cs.uiuc.edu
>>> http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits

More information about the llvm-commits mailing list