<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Jul 16, 2015 at 10:39 AM, Eli Bendersky <span dir="ltr"><<a href="mailto:eliben@google.com" target="_blank">eliben@google.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Jul 16, 2015 at 10:36 AM, Benjamin Kramer <span dir="ltr"><<a href="mailto:benny.kra@gmail.com" target="_blank">benny.kra@gmail.com</a>></span> wrote:<span class=""><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span><br>
> On 16.07.2015, at 19:01, Eli Bendersky <<a href="mailto:eliben@google.com" target="_blank">eliben@google.com</a>> wrote:<br>
><br>
> Larisse, could you please point to a bot you see failing? [Or share more details about the platform.]<br>
<br>
</span>This was fixed in r242417.<br></blockquote><div><br></div></span><div>Thanks Benjamin, you're quick :) I was just testing my fix patch when I saw this.</div><div><br></div><div>Eli</div></div></div></div></blockquote><div><br></div><div>Sorry I got side-tracked. It does look fixed on my end now.</div><div><br></div><div>-- Larisse.</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
<span><br>
><br>
> On Thu, Jul 16, 2015 at 9:59 AM, Eli Bendersky <<a href="mailto:eliben@google.com" target="_blank">eliben@google.com</a>> wrote:<br>
> Thanks Larisse, I'm looking into it right now.<br>
><br>
</span></span>> On Thu, Jul 16, 2015 at 9:57 AM, Larisse Voufo <<a href="mailto:lvoufo@google.com" target="_blank">lvoufo@google.com</a>> wrote:<br>
<span><span class="">><br>
><br>
> On Thu, Jul 16, 2015 at 9:27 AM, Eli Bendersky <<a href="mailto:eliben@google.com" target="_blank">eliben@google.com</a>> wrote:<br>
> Author: eliben<br>
> Date: Thu Jul 16 11:27:19 2015<br>
> New Revision: 242413<br>
><br></span>
> URL: <a href="http://llvm.org/viewvc/llvm-project?rev=242413&view=rev" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project?rev=242413&view=rev</a><br>
</span><span><span class="">> Log:<br>
> Correct lowering of memmove in NVPTX<br>
><br></span>
> This fixes <a href="https://llvm.org/bugs/show_bug.cgi?id=24056" rel="noreferrer" target="_blank">https://llvm.org/bugs/show_bug.cgi?id=24056</a><br>
><br>
</span><span><span class="">> Also a bit of refactoring along the way.<br>
><br></span>
> Differential Revision: <a href="http://reviews.llvm.org/D11220" rel="noreferrer" target="_blank">http://reviews.llvm.org/D11220</a><br>
><br>
</span><span><span class="">> Modified:<br>
> llvm/trunk/lib/Target/NVPTX/NVPTXLowerAggrCopies.cpp<br>
> llvm/trunk/lib/Target/NVPTX/NVPTXTargetMachine.cpp<br>
><br>
><br>
> llvm/trunk/test/CodeGen/NVPTX/lower-aggr-copies.ll<br>
><br>
> FYI, it looks like this test is failing on some targets.<br>
> Running LLVM regression tests on llvm/trunk@242414 resulted in:<br>
><br>
> ********************<br>
> Testing: 0 .. 10.. 20.. 30.. 40.. 50.. 60.. 70.. 80.. 90..<br>
> Testing Time: 86.35s<br>
> ********************<br>
> Failing Tests (1):<br>
> LLVM :: CodeGen/NVPTX/lower-aggr-copies.ll<br>
><br>
> Expected Passes : 14028<br>
> Expected Failures : 104<br>
> Unsupported Tests : 64<br>
> Unexpected Failures: 1<br>
> ninja: build stopped: subcommand failed.<br>
><br>
><br>
><br>
><br>
> Modified: llvm/trunk/lib/Target/NVPTX/NVPTXLowerAggrCopies.cpp<br></span>
> URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/NVPTX/NVPTXLowerAggrCopies.cpp?rev=242413&r1=242412&r2=242413&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/NVPTX/NVPTXLowerAggrCopies.cpp?rev=242413&r1=242412&r2=242413&view=diff</a><br>
><br>
</span><div><div><div><div class="h5">> ==============================================================================<br>
> --- llvm/trunk/lib/Target/NVPTX/NVPTXLowerAggrCopies.cpp (original)<br>
> +++ llvm/trunk/lib/Target/NVPTX/NVPTXLowerAggrCopies.cpp Thu Jul 16 11:27:19 2015<br>
> @@ -6,6 +6,7 @@<br>
> // License. See LICENSE.TXT for details.<br>
> //<br>
> //===----------------------------------------------------------------------===//<br>
> +//<br>
> // Lower aggregate copies, memset, memcpy, memmov intrinsics into loops when<br>
> // the size is large or is not a compile-time constant.<br>
> //<br>
> @@ -25,12 +26,14 @@<br>
> #include "llvm/IR/LLVMContext.h"<br>
> #include "llvm/IR/Module.h"<br>
> #include "llvm/Support/Debug.h"<br>
> +#include "llvm/Transforms/Utils/BasicBlockUtils.h"<br>
><br>
> #define DEBUG_TYPE "nvptx"<br>
><br>
> using namespace llvm;<br>
><br>
> namespace {<br>
> +<br>
> // actual analysis class, which is a functionpass<br>
> struct NVPTXLowerAggrCopies : public FunctionPass {<br>
> static char ID;<br>
> @@ -50,14 +53,13 @@ struct NVPTXLowerAggrCopies : public Fun<br>
> return "Lower aggregate copies/intrinsics into loops";<br>
> }<br>
> };<br>
> -} // namespace<br>
><br>
> char NVPTXLowerAggrCopies::ID = 0;<br>
><br>
> -// Lower MemTransferInst or load-store pair to loop<br>
> -static void convertTransferToLoop(<br>
> - Instruction *splitAt, Value *srcAddr, Value *dstAddr, Value *len,<br>
> - bool srcVolatile, bool dstVolatile, LLVMContext &Context, Function &F) {<br>
> +// Lower memcpy to loop.<br>
> +void convertMemCpyToLoop(Instruction *splitAt, Value *srcAddr, Value *dstAddr,<br>
> + Value *len, bool srcVolatile, bool dstVolatile,<br>
> + LLVMContext &Context, Function &F) {<br>
> Type *indType = len->getType();<br>
><br>
> BasicBlock *origBB = splitAt->getParent();<br>
> @@ -98,10 +100,105 @@ static void convertTransferToLoop(<br>
> loop.CreateCondBr(loop.CreateICmpULT(newind, len), loopBB, newBB);<br>
> }<br>
><br>
> -// Lower MemSetInst to loop<br>
> -static void convertMemSetToLoop(Instruction *splitAt, Value *dstAddr,<br>
> - Value *len, Value *val, LLVMContext &Context,<br>
> - Function &F) {<br>
> +// Lower memmove to IR. memmove is required to correctly copy overlapping memory<br>
> +// regions; therefore, it has to check the relative positions of the source and<br>
> +// destination pointers and choose the copy direction accordingly.<br>
> +//<br>
> +// The code below is an IR rendition of this C function:<br>
> +//<br>
> +// void* memmove(void* dst, const void* src, size_t n) {<br>
> +// unsigned char* d = dst;<br>
> +// const unsigned char* s = src;<br>
> +// if (s < d) {<br>
> +// // copy backwards<br>
> +// while (n--) {<br>
> +// d[n] = s[n];<br>
> +// }<br>
> +// } else {<br>
> +// // copy forward<br>
> +// for (size_t i = 0; i < n; ++i) {<br>
> +// d[i] = s[i];<br>
> +// }<br>
> +// }<br>
> +// return dst;<br>
> +// }<br>
> +void convertMemMoveToLoop(Instruction *splitAt, Value *srcAddr, Value *dstAddr,<br>
> + Value *len, bool srcVolatile, bool dstVolatile,<br>
> + LLVMContext &Context, Function &F) {<br>
> + Type *TypeOfLen = len->getType();<br>
> + BasicBlock *OrigBB = splitAt->getParent();<br>
> +<br>
> + // Create a comparison of src and dst, based on which we jump to either<br>
> + // the forward-copy part of the function (if src >= dst) or the backwards-copy<br>
> + // part (if src < dst).<br>
> + // SplitBlockAndInsertIfThenElse conveniently creates the basic if-then-else<br>
> + // structure. Its block terminators (unconditional branches) are replaced by<br>
> + // the appropriate conditional branches when the loop is built.<br>
> + ICmpInst *PtrCompare = new ICmpInst(splitAt, ICmpInst::ICMP_ULT, srcAddr,<br>
> + dstAddr, "compare_src_dst");<br>
> + TerminatorInst *ThenTerm, *ElseTerm;<br>
> + SplitBlockAndInsertIfThenElse(PtrCompare, splitAt, &ThenTerm, &ElseTerm);<br>
> +<br>
> + // Each part of the function consists of two blocks:<br>
> + // copy_backwards: used to skip the loop when n == 0<br>
> + // copy_backwards_loop: the actual backwards loop BB<br>
> + // copy_forward: used to skip the loop when n == 0<br>
> + // copy_forward_loop: the actual forward loop BB<br>
> + BasicBlock *CopyBackwardsBB = ThenTerm->getParent();<br>
> + CopyBackwardsBB->setName("copy_backwards");<br>
> + BasicBlock *CopyForwardBB = ElseTerm->getParent();<br>
> + CopyForwardBB->setName("copy_forward");<br>
> + BasicBlock *ExitBB = splitAt->getParent();<br>
> + ExitBB->setName("memmove_done");<br>
> +<br>
> + // Initial comparison of n == 0 that lets us skip the loops altogether. Shared<br>
> + // between both backwards and forward copy clauses.<br>
> + ICmpInst *CompareN =<br>
> + new ICmpInst(OrigBB->getTerminator(), ICmpInst::ICMP_EQ, len,<br>
> + ConstantInt::get(TypeOfLen, 0), "compare_n_to_0");<br>
> +<br>
> + // Copying backwards.<br>
> + BasicBlock *LoopBB =<br>
> + BasicBlock::Create(Context, "copy_backwards_loop", &F, CopyForwardBB);<br>
> + IRBuilder<> LoopBuilder(LoopBB);<br>
> + PHINode *LoopPhi = LoopBuilder.CreatePHI(TypeOfLen, 0);<br>
> + Value *IndexPtr = LoopBuilder.CreateSub(<br>
> + LoopPhi, ConstantInt::get(TypeOfLen, 1), "index_ptr");<br>
> + Value *Element = LoopBuilder.CreateLoad(<br>
> + LoopBuilder.CreateInBoundsGEP(srcAddr, IndexPtr), "element");<br>
> + LoopBuilder.CreateStore(Element,<br>
> + LoopBuilder.CreateInBoundsGEP(dstAddr, IndexPtr));<br>
> + LoopBuilder.CreateCondBr(<br>
> + LoopBuilder.CreateICmpEQ(IndexPtr, ConstantInt::get(TypeOfLen, 0)),<br>
> + ExitBB, LoopBB);<br>
> + LoopPhi->addIncoming(IndexPtr, LoopBB);<br>
> + LoopPhi->addIncoming(len, CopyBackwardsBB);<br>
> + BranchInst::Create(ExitBB, LoopBB, CompareN, ThenTerm);<br>
> + ThenTerm->removeFromParent();<br>
> +<br>
> + // Copying forward.<br>
> + BasicBlock *FwdLoopBB =<br>
> + BasicBlock::Create(Context, "copy_forward_loop", &F, ExitBB);<br>
> + IRBuilder<> FwdLoopBuilder(FwdLoopBB);<br>
> + PHINode *FwdCopyPhi = FwdLoopBuilder.CreatePHI(TypeOfLen, 0, "index_ptr");<br>
> + Value *FwdElement = FwdLoopBuilder.CreateLoad(<br>
> + FwdLoopBuilder.CreateInBoundsGEP(srcAddr, FwdCopyPhi), "element");<br>
> + FwdLoopBuilder.CreateStore(<br>
> + FwdElement, FwdLoopBuilder.CreateInBoundsGEP(dstAddr, FwdCopyPhi));<br>
> + Value *FwdIndexPtr = FwdLoopBuilder.CreateAdd(<br>
> + FwdCopyPhi, ConstantInt::get(TypeOfLen, 1), "index_increment");<br>
> + FwdLoopBuilder.CreateCondBr(FwdLoopBuilder.CreateICmpEQ(FwdIndexPtr, len),<br>
> + ExitBB, FwdLoopBB);<br>
> + FwdCopyPhi->addIncoming(FwdIndexPtr, FwdLoopBB);<br>
> + FwdCopyPhi->addIncoming(ConstantInt::get(TypeOfLen, 0), CopyForwardBB);<br>
> +<br>
> + BranchInst::Create(ExitBB, FwdLoopBB, CompareN, ElseTerm);<br>
> + ElseTerm->removeFromParent();<br>
> +}<br>
> +<br>
> +// Lower memset to loop.<br>
> +void convertMemSetToLoop(Instruction *splitAt, Value *dstAddr, Value *len,<br>
> + Value *val, LLVMContext &Context, Function &F) {<br>
> BasicBlock *origBB = splitAt->getParent();<br>
> BasicBlock *newBB = splitAt->getParent()->splitBasicBlock(splitAt, "split");<br>
> BasicBlock *loopBB = BasicBlock::Create(Context, "loadstoreloop", &F, newBB);<br>
> @@ -129,15 +226,12 @@ static void convertMemSetToLoop(Instruct<br>
><br>
> bool NVPTXLowerAggrCopies::runOnFunction(Function &F) {<br>
> SmallVector<LoadInst *, 4> aggrLoads;<br>
> - SmallVector<MemTransferInst *, 4> aggrMemcpys;<br>
> - SmallVector<MemSetInst *, 4> aggrMemsets;<br>
> + SmallVector<MemIntrinsic *, 4> MemCalls;<br>
><br>
> const DataLayout &DL = F.getParent()->getDataLayout();<br>
> LLVMContext &Context = F.getParent()->getContext();<br>
><br>
> - //<br>
> - // Collect all the aggrLoads, aggrMemcpys and addrMemsets.<br>
> - //<br>
> + // Collect all aggregate loads and mem* calls.<br>
> for (Function::iterator BI = F.begin(), BE = F.end(); BI != BE; ++BI) {<br>
> for (BasicBlock::iterator II = BI->begin(), IE = BI->end(); II != IE;<br>
> ++II) {<br>
> @@ -154,34 +248,23 @@ bool NVPTXLowerAggrCopies::runOnFunction<br>
> continue;<br>
> aggrLoads.push_back(load);<br>
> }<br>
> - } else if (MemTransferInst *intr = dyn_cast<MemTransferInst>(II)) {<br>
> - Value *len = intr->getLength();<br>
> - // If the number of elements being copied is greater<br>
> - // than MaxAggrCopySize, lower it to a loop<br>
> - if (ConstantInt *len_int = dyn_cast<ConstantInt>(len)) {<br>
> - if (len_int->getZExtValue() >= MaxAggrCopySize) {<br>
> - aggrMemcpys.push_back(intr);<br>
> - }<br>
> - } else {<br>
> - // turn variable length memcpy/memmov into loop<br>
> - aggrMemcpys.push_back(intr);<br>
> - }<br>
> - } else if (MemSetInst *memsetintr = dyn_cast<MemSetInst>(II)) {<br>
> - Value *len = memsetintr->getLength();<br>
> - if (ConstantInt *len_int = dyn_cast<ConstantInt>(len)) {<br>
> - if (len_int->getZExtValue() >= MaxAggrCopySize) {<br>
> - aggrMemsets.push_back(memsetintr);<br>
> + } else if (MemIntrinsic *IntrCall = dyn_cast<MemIntrinsic>(II)) {<br>
> + // Convert intrinsic calls with variable size or with constant size<br>
> + // larger than the MaxAggrCopySize threshold.<br>
> + if (ConstantInt *LenCI = dyn_cast<ConstantInt>(IntrCall->getLength())) {<br>
> + if (LenCI->getZExtValue() >= MaxAggrCopySize) {<br>
> + MemCalls.push_back(IntrCall);<br>
> }<br>
> } else {<br>
> - // turn variable length memset into loop<br>
> - aggrMemsets.push_back(memsetintr);<br>
> + MemCalls.push_back(IntrCall);<br>
> }<br>
> }<br>
> }<br>
> }<br>
> - if ((aggrLoads.size() == 0) && (aggrMemcpys.size() == 0) &&<br>
> - (aggrMemsets.size() == 0))<br>
> +<br>
> + if (aggrLoads.size() == 0 && MemCalls.size() == 0) {<br>
> return false;<br>
> + }<br>
><br>
> //<br>
> // Do the transformation of an aggr load/copy/set to a loop<br>
> @@ -193,36 +276,58 @@ bool NVPTXLowerAggrCopies::runOnFunction<br>
> unsigned numLoads = DL.getTypeStoreSize(load->getType());<br>
> Value *len = ConstantInt::get(Type::getInt32Ty(Context), numLoads);<br>
><br>
> - convertTransferToLoop(store, srcAddr, dstAddr, len, load->isVolatile(),<br>
> - store->isVolatile(), Context, F);<br>
> + convertMemCpyToLoop(store, srcAddr, dstAddr, len, load->isVolatile(),<br>
> + store->isVolatile(), Context, F);<br>
><br>
> store->eraseFromParent();<br>
> load->eraseFromParent();<br>
> }<br>
><br>
> - for (MemTransferInst *cpy : aggrMemcpys) {<br>
> - convertTransferToLoop(/* splitAt */ cpy,<br>
> - /* srcAddr */ cpy->getSource(),<br>
> - /* dstAddr */ cpy->getDest(),<br>
> - /* len */ cpy->getLength(),<br>
> - /* srcVolatile */ cpy->isVolatile(),<br>
> - /* dstVolatile */ cpy->isVolatile(),<br>
> + // Transform mem* intrinsic calls.<br>
> + for (MemIntrinsic *MemCall : MemCalls) {<br>
> + if (MemCpyInst *Memcpy = dyn_cast<MemCpyInst>(MemCall)) {<br>
> + convertMemCpyToLoop(/* splitAt */ Memcpy,<br>
> + /* srcAddr */ Memcpy->getRawSource(),<br>
> + /* dstAddr */ Memcpy->getRawDest(),<br>
> + /* len */ Memcpy->getLength(),<br>
> + /* srcVolatile */ Memcpy->isVolatile(),<br>
> + /* dstVolatile */ Memcpy->isVolatile(),<br>
> /* Context */ Context,<br>
> /* Function F */ F);<br>
> - cpy->eraseFromParent();<br>
> - }<br>
> -<br>
> - for (MemSetInst *memsetinst : aggrMemsets) {<br>
> - Value *len = memsetinst->getLength();<br>
> - Value *val = memsetinst->getValue();<br>
> - convertMemSetToLoop(memsetinst, memsetinst->getDest(), len, val, Context,<br>
> - F);<br>
> - memsetinst->eraseFromParent();<br>
> + } else if (MemMoveInst *Memmove = dyn_cast<MemMoveInst>(MemCall)) {<br>
> + convertMemMoveToLoop(/* splitAt */ Memmove,<br>
> + /* srcAddr */ Memmove->getRawSource(),<br>
> + /* dstAddr */ Memmove->getRawDest(),<br>
> + /* len */ Memmove->getLength(),<br>
> + /* srcVolatile */ Memmove->isVolatile(),<br>
> + /* dstVolatile */ Memmove->isVolatile(),<br>
> + /* Context */ Context,<br>
> + /* Function F */ F);<br>
> +<br>
> + } else if (MemSetInst *Memset = dyn_cast<MemSetInst>(MemCall)) {<br>
> + convertMemSetToLoop(/* splitAt */ Memset,<br>
> + /* dstAddr */ Memset->getRawDest(),<br>
> + /* len */ Memset->getLength(),<br>
> + /* val */ Memset->getValue(),<br>
> + /* Context */ Context,<br>
> + /* F */ F);<br>
> + }<br>
> + MemCall->eraseFromParent();<br>
> }<br>
><br>
> return true;<br>
> }<br>
><br>
> +} // namespace<br>
> +<br>
> +namespace llvm {<br>
> +void initializeNVPTXLowerAggrCopiesPass(PassRegistry &);<br>
> +}<br>
> +<br>
> +INITIALIZE_PASS(NVPTXLowerAggrCopies, "nvptx-lower-aggr-copies",<br>
> + "Lower aggregate copies, and llvm.mem* intrinsics into loops",<br>
> + false, false)<br>
> +<br>
> FunctionPass *llvm::createLowerAggrCopies() {<br>
> return new NVPTXLowerAggrCopies();<br>
> }<br>
><br>
> Modified: llvm/trunk/lib/Target/NVPTX/NVPTXTargetMachine.cpp<br></div></div>
> URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/NVPTX/NVPTXTargetMachine.cpp?rev=242413&r1=242412&r2=242413&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/NVPTX/NVPTXTargetMachine.cpp?rev=242413&r1=242412&r2=242413&view=diff</a><br>
><br>
</div></div><div><div><div><div class="h5">> ==============================================================================<br>
> --- llvm/trunk/lib/Target/NVPTX/NVPTXTargetMachine.cpp (original)<br>
> +++ llvm/trunk/lib/Target/NVPTX/NVPTXTargetMachine.cpp Thu Jul 16 11:27:19 2015<br>
> @@ -53,6 +53,7 @@ void initializeGenericToNVVMPass(PassReg<br>
> void initializeNVPTXAllocaHoistingPass(PassRegistry &);<br>
> void initializeNVPTXAssignValidGlobalNamesPass(PassRegistry&);<br>
> void initializeNVPTXFavorNonGenericAddrSpacesPass(PassRegistry &);<br>
> +void initializeNVPTXLowerAggrCopiesPass(PassRegistry &);<br>
> void initializeNVPTXLowerKernelArgsPass(PassRegistry &);<br>
> void initializeNVPTXLowerAllocaPass(PassRegistry &);<br>
> }<br>
> @@ -64,14 +65,15 @@ extern "C" void LLVMInitializeNVPTXTarge<br>
><br>
> // FIXME: This pass is really intended to be invoked during IR optimization,<br>
> // but it's very NVPTX-specific.<br>
> - initializeNVVMReflectPass(*PassRegistry::getPassRegistry());<br>
> - initializeGenericToNVVMPass(*PassRegistry::getPassRegistry());<br>
> - initializeNVPTXAllocaHoistingPass(*PassRegistry::getPassRegistry());<br>
> - initializeNVPTXAssignValidGlobalNamesPass(*PassRegistry::getPassRegistry());<br>
> - initializeNVPTXFavorNonGenericAddrSpacesPass(<br>
> - *PassRegistry::getPassRegistry());<br>
> - initializeNVPTXLowerKernelArgsPass(*PassRegistry::getPassRegistry());<br>
> - initializeNVPTXLowerAllocaPass(*PassRegistry::getPassRegistry());<br>
> + PassRegistry &PR = *PassRegistry::getPassRegistry();<br>
> + initializeNVVMReflectPass(PR);<br>
> + initializeGenericToNVVMPass(PR);<br>
> + initializeNVPTXAllocaHoistingPass(PR);<br>
> + initializeNVPTXAssignValidGlobalNamesPass(PR);<br>
> + initializeNVPTXFavorNonGenericAddrSpacesPass(PR);<br>
> + initializeNVPTXLowerKernelArgsPass(PR);<br>
> + initializeNVPTXLowerAllocaPass(PR);<br>
> + initializeNVPTXLowerAggrCopiesPass(PR);<br>
> }<br>
><br>
> static std::string computeDataLayout(bool is64Bit) {<br>
><br>
> Modified: llvm/trunk/test/CodeGen/NVPTX/lower-aggr-copies.ll<br></div></div>
> URL: <a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/NVPTX/lower-aggr-copies.ll?rev=242413&r1=242412&r2=242413&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/NVPTX/lower-aggr-copies.ll?rev=242413&r1=242412&r2=242413&view=diff</a><br>
><br>
</div></div><div><div class="h5"><div><div>> ==============================================================================<br>
> --- llvm/trunk/test/CodeGen/NVPTX/lower-aggr-copies.ll (original)<br>
> +++ llvm/trunk/test/CodeGen/NVPTX/lower-aggr-copies.ll Thu Jul 16 11:27:19 2015<br>
> @@ -1,35 +1,68 @@<br>
> -; RUN: llc < %s -march=nvptx -mcpu=sm_35 | FileCheck %s<br>
> +; RUN: llc < %s -march=nvptx64 -mcpu=sm_35 | FileCheck %s --check-prefix PTX<br>
> +; RUN: opt < %s -S -nvptx-lower-aggr-copies | FileCheck %s --check-prefix IR<br>
><br>
> ; Verify that the NVPTXLowerAggrCopies pass works as expected - calls to<br>
> ; llvm.mem* intrinsics get lowered to loops.<br>
><br>
> +target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"<br>
> +target triple = "nvptx64-unknown-unknown"<br>
> +<br>
> declare void @llvm.memcpy.p0i8.p0i8.i64(i8* nocapture, i8* nocapture readonly, i64, i32, i1) #1<br>
> +declare void @llvm.memmove.p0i8.p0i8.i64(i8* nocapture, i8* nocapture readonly, i64, i32, i1) #1<br>
> declare void @llvm.memset.p0i8.i64(i8* nocapture, i8, i64, i32, i1) #1<br>
><br>
> define i8* @memcpy_caller(i8* %dst, i8* %src, i64 %n) #0 {<br>
> entry:<br>
> tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %src, i64 %n, i32 1, i1 false)<br>
> ret i8* %dst<br>
> -; CHECK-LABEL: .visible .func (.param .b32 func_retval0) memcpy_caller<br>
> -; CHECK: LBB[[LABEL:[_0-9]+]]:<br>
> -; CHECK: ld.u8 %rs[[REG:[0-9]+]]<br>
> -; CHECK: st.u8 [%r{{[0-9]+}}], %rs[[REG]]<br>
> -; CHECK: add.s64 %rd[[COUNTER:[0-9]+]], %rd[[COUNTER]], 1<br>
> -; CHECK-NEXT: setp.lt.u64 %p[[PRED:[0-9]+]], %rd[[COUNTER]], %rd<br>
> -; CHECK-NEXT: @%p[[PRED]] bra LBB[[LABEL]]<br>
> +<br>
> +; IR-LABEL: @memcpy_caller<br>
> +; IR: loadstoreloop:<br>
> +; IR: [[LOADPTR:%[0-9]+]] = getelementptr i8, i8* %src, i64<br>
> +; IR-NEXT: [[VAL:%[0-9]+]] = load i8, i8* [[LOADPTR]]<br>
> +; IR-NEXT: [[STOREPTR:%[0-9]+]] = getelementptr i8, i8* %dst, i64<br>
> +; IR-NEXT: store i8 [[VAL]], i8* [[STOREPTR]]<br>
> +<br>
> +; PTX-LABEL: .visible .func (.param .b64 func_retval0) memcpy_caller<br>
> +; PTX: LBB[[LABEL:[_0-9]+]]:<br>
> +; PTX: ld.u8 %rs[[REG:[0-9]+]]<br>
> +; PTX: st.u8 [%rd{{[0-9]+}}], %rs[[REG]]<br>
> +; PTX: add.s64 %rd[[COUNTER:[0-9]+]], %rd[[COUNTER]], 1<br>
> +; PTX-NEXT: setp.lt.u64 %p[[PRED:[0-9]+]], %rd[[COUNTER]], %rd<br>
> +; PTX-NEXT: @%p[[PRED]] bra LBB[[LABEL]]<br>
> }<br>
><br>
> define i8* @memcpy_volatile_caller(i8* %dst, i8* %src, i64 %n) #0 {<br>
> entry:<br>
> tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %src, i64 %n, i32 1, i1 true)<br>
> ret i8* %dst<br>
> -; CHECK-LABEL: .visible .func (.param .b32 func_retval0) memcpy_volatile_caller<br>
> -; CHECK: LBB[[LABEL:[_0-9]+]]:<br>
> -; CHECK: ld.volatile.u8 %rs[[REG:[0-9]+]]<br>
> -; CHECK: st.volatile.u8 [%r{{[0-9]+}}], %rs[[REG]]<br>
> -; CHECK: add.s64 %rd[[COUNTER:[0-9]+]], %rd[[COUNTER]], 1<br>
> -; CHECK-NEXT: setp.lt.u64 %p[[PRED:[0-9]+]], %rd[[COUNTER]], %rd<br>
> -; CHECK-NEXT: @%p[[PRED]] bra LBB[[LABEL]]<br>
> +<br>
> +; IR-LABEL: @memcpy_volatile_caller<br>
> +; IR: load volatile<br>
> +; IR: store volatile<br>
> +<br>
> +; PTX-LABEL: .visible .func (.param .b64 func_retval0) memcpy_volatile_caller<br>
> +; PTX: LBB[[LABEL:[_0-9]+]]:<br>
> +; PTX: ld.volatile.u8 %rs[[REG:[0-9]+]]<br>
> +; PTX: st.volatile.u8 [%rd{{[0-9]+}}], %rs[[REG]]<br>
> +; PTX: add.s64 %rd[[COUNTER:[0-9]+]], %rd[[COUNTER]], 1<br>
> +; PTX-NEXT: setp.lt.u64 %p[[PRED:[0-9]+]], %rd[[COUNTER]], %rd<br>
> +; PTX-NEXT: @%p[[PRED]] bra LBB[[LABEL]]<br>
> +}<br>
> +<br>
> +define i8* @memcpy_casting_caller(i32* %dst, i32* %src, i64 %n) #0 {<br>
> +entry:<br>
> + %0 = bitcast i32* %dst to i8*<br>
> + %1 = bitcast i32* %src to i8*<br>
> + tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %0, i8* %1, i64 %n, i32 1, i1 false)<br>
> + ret i8* %0<br>
> +<br>
> +; Check that casts in calls to memcpy are handled properly<br>
> +; IR-LABEL: @memcpy_casting_caller<br>
> +; IR: [[DSTCAST:%[0-9]+]] = bitcast i32* %dst to i8*<br>
> +; IR: [[SRCCAST:%[0-9]+]] = bitcast i32* %src to i8*<br>
> +; IR: getelementptr i8, i8* [[SRCCAST]]<br>
> +; IR: getelementptr i8, i8* [[DSTCAST]]<br>
> }<br>
><br>
> define i8* @memset_caller(i8* %dst, i32 %c, i64 %n) #0 {<br>
> @@ -37,11 +70,52 @@ entry:<br>
> %0 = trunc i32 %c to i8<br>
> tail call void @llvm.memset.p0i8.i64(i8* %dst, i8 %0, i64 %n, i32 1, i1 false)<br>
> ret i8* %dst<br>
> -; CHECK-LABEL: .visible .func (.param .b32 func_retval0) memset_caller(<br>
> -; CHECK: ld.param.u8 %rs[[REG:[0-9]+]]<br>
> -; CHECK: LBB[[LABEL:[_0-9]+]]:<br>
> -; CHECK: st.u8 [%r{{[0-9]+}}], %rs[[REG]]<br>
> -; CHECK: add.s64 %rd[[COUNTER:[0-9]+]], %rd[[COUNTER]], 1<br>
> -; CHECK-NEXT: setp.lt.u64 %p[[PRED:[0-9]+]], %rd[[COUNTER]], %rd<br>
> -; CHECK-NEXT: @%p[[PRED]] bra LBB[[LABEL]]<br>
> +<br>
> +; IR-LABEL: @memset_caller<br>
> +; IR: [[VAL:%[0-9]+]] = trunc i32 %c to i8<br>
> +; IR: loadstoreloop:<br>
> +; IR: [[STOREPTR:%[0-9]+]] = getelementptr i8, i8* %dst, i64<br>
> +; IR-NEXT: store i8 [[VAL]], i8* [[STOREPTR]]<br>
> +<br>
> +; PTX-LABEL: .visible .func (.param .b64 func_retval0) memset_caller(<br>
> +; PTX: ld.param.u8 %rs[[REG:[0-9]+]]<br>
> +; PTX: LBB[[LABEL:[_0-9]+]]:<br>
> +; PTX: st.u8 [%rd{{[0-9]+}}], %rs[[REG]]<br>
> +; PTX: add.s64 %rd[[COUNTER:[0-9]+]], %rd[[COUNTER]], 1<br>
> +; PTX-NEXT: setp.lt.u64 %p[[PRED:[0-9]+]], %rd[[COUNTER]], %rd<br>
> +; PTX-NEXT: @%p[[PRED]] bra LBB[[LABEL]]<br>
> +}<br>
> +<br>
> +define i8* @memmove_caller(i8* %dst, i8* %src, i64 %n) #0 {<br>
> +entry:<br>
> + tail call void @llvm.memmove.p0i8.p0i8.i64(i8* %dst, i8* %src, i64 %n, i32 1, i1 false)<br>
> + ret i8* %dst<br>
> +<br>
> +; IR-LABEL: @memmove_caller<br>
> +; IR: icmp ult i8* %src, %dst<br>
> +; IR: [[PHIVAL:%[0-9a-zA-Z_]+]] = phi i64<br>
> +; IR-NEXT: %index_ptr = sub i64 [[PHIVAL]], 1<br>
> +; IR: [[FWDPHIVAL:%[0-9a-zA-Z_]+]] = phi i64<br>
> +; IR: {{%[0-9a-zA-Z_]+}} = add i64 [[FWDPHIVAL]], 1<br>
> +<br>
> +; PTX-LABEL: .visible .func (.param .b64 func_retval0) memmove_caller(<br>
> +; PTX: ld.param.u64 %rd[[N:[0-9]+]]<br>
> +; PTX: setp.eq.s64 %p[[NEQ0:[0-9]+]], %rd[[N]], 0<br>
> +; PTX: setp.ge.u64 %p[[SRC_GT_THAN_DST:[0-9]+]], %rd{{[0-9]+}}, %rd{{[0-9]+}}<br>
> +; PTX-NEXT: @%p[[SRC_GT_THAN_DST]] bra LBB[[FORWARD_BB:[0-9_]+]]<br>
> +; -- this is the backwards copying BB<br>
> +; PTX: @%p[[NEQ0]] bra LBB[[EXIT:[0-9_]+]]<br>
> +; PTX: add.s64 %rd[[N]], %rd[[N]], -1<br>
> +; PTX: ld.u8 %rs[[ELEMENT:[0-9]+]]<br>
> +; PTX: st.u8 [%rd{{[0-9]+}}], %rs[[ELEMENT]]<br>
> +; -- this is the forwards copying BB<br>
> +; PTX: LBB[[FORWARD_BB]]:<br>
> +; PTX: @%p[[NEQ0]] bra LBB[[EXIT]]<br>
> +; PTX: ld.u8 %rs[[ELEMENT2:[0-9]+]]<br>
> +; PTX: st.u8 [%rd{{[0-9]+}}], %rs[[ELEMENT2]]<br>
> +; PTX: add.s64 %rd[[INDEX:[0-9]+]], %rd[[INDEX]], 1<br>
> +; -- exit block<br>
> +; PTX: LBB[[EXIT]]:<br>
> +; PTX-NEXT: st.param.b64 [func_retval0<br>
> +; PTX-NEXT: ret<br>
> }<br>
><br>
><br>
> _______________________________________________<br>
> llvm-commits mailing list<br>
</div></div></div></div>> <a href="mailto:llvm-commits@cs.uiuc.edu" target="_blank">llvm-commits@cs.uiuc.edu</a><br>
> <a href="http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits" rel="noreferrer" target="_blank">http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits</a><br>
<div><div>><br>
><br>
><br>
> <span class=""><br>
<br>
</span></div></div></blockquote></div><br></div></div>
</blockquote></div><br></div></div>