<html><head><meta http-equiv="Content-Type" content="text/html charset=windows-1252"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;">Alexey, <div><br></div><div>What’s the URL of your buildbot ? </div><div><br></div><div>Nadav</div><div><br><div><div>On Jul 8, 2013, at 3:00 AM, Benjamin Kramer <<a href="mailto:benny.kra@gmail.com">benny.kra@gmail.com</a>> wrote:</div><br class="Apple-interchange-newline"><blockquote type="cite"><meta http-equiv="content-type" content="text/html; charset=utf-8"><div dir="auto"><div style="">On 08.07.2013, at 09:53, Alexey Samsonov <<a href="mailto:samsonov@google.com">samsonov@google.com</a>> wrote:</div>
<div style=""><br></div><blockquote type="cite" style=""><div dir="ltr">Looks like it wasn't fixed - I still observe the same error.</div></blockquote><div style=""><br></div><div><span style="">Would it make sense to have </span><span style="background-color:rgba(255,255,255,0)">BuilderLocGuard store an AssertingVH so it complains loudly when the insertion point is deleted while the guard still has a reference to it?</span></div>
<div><span style="background-color:rgba(255,255,255,0)"><br></span></div><div><span style="background-color:rgba(255,255,255,0)">- Ben</span></div><br><blockquote type="cite" style=""><div class="gmail_extra"><div class="gmail_quote">
On Sun, Jul 7, 2013 at 6:59 PM, Nadav Rotem <span dir="ltr"><<a href="mailto:nrotem@apple.com" target="_blank">nrotem@apple.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word"><div><br></div><div>Thanks for catching this Alexey! This should be fixed in r185777. </div>
<div><br></div><div><div><div>On Jul 7, 2013, at 5:22 AM, Alexey Samsonov <<a href="mailto:samsonov@google.com" target="_blank">samsonov@google.com</a>> wrote:</div><br><blockquote type="cite"><div style="letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px">
<div dir="ltr">Hi Nadav!<div><br></div><div>Our ASan bootstrap bot reports a problem with this:</div><div><div>=================================================================</div><div>==9987==ERROR: AddressSanitizer: heap-use-after-free on address 0x60b00000ae90 at pc 0x15b0917 bp 0x7fff39e016b0 sp 0x7fff39e016a8</div>
<div>READ of size 8 at 0x60b00000ae90 thread T0</div><div> #0 0x15b0916 in getParent /build/llvm/include/llvm/IR/Instruction.h:53</div><div> #1 0x15b0916 in SetInsertPoint /build/llvm/include/llvm/IR/IRBuilder.h:90</div>
<div> #2 0x15b0916 in ~BuilderLocGuard /build/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp:62</div><div> #3 0x15b0916 in (anonymous namespace)::BoUpSLP::vectorizeTree((anonymous namespace)::BoUpSLP::TreeEntry*) /build/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp:1217</div>
<div> #4 0x15aad0c in (anonymous namespace)::BoUpSLP::vectorizeTree() /build/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp:1220</div><div> #5 0x15a44c3 in vectorizeStoreChain /build/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp:1512</div>
<div> #6 0x15a44c3 in vectorizeStores /build/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp:1582</div><div> #7 0x15a44c3 in vectorizeStoreChains /build/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp:1781</div><div>
#8 0x15a44c3 in (anonymous namespace)::SLPVectorizer::runOnFunction(llvm::Function&) /build/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp:1425</div><div> #9 0x2b029a2 in llvm::FPPassManager::runOnFunction(llvm::Function&) /build/llvm/lib/IR/PassManager.cpp:1530</div>
<div> #10 0x2b02f75 in llvm::FPPassManager::runOnModule(llvm::Module&) /build/llvm/lib/IR/PassManager.cpp:1550</div><div> #11 0x2b037a0 in llvm::MPPassManager::runOnModule(llvm::Module&) /build/llvm/lib/IR/PassManager.cpp:1608</div>
<div> #12 0x2b049a3 in llvm::PassManagerImpl::run(llvm::Module&) /build/llvm/lib/IR/PassManager.cpp:1703</div><div> #13 0x2b04e1f in llvm::PassManager::run(llvm::Module&) /build/llvm/lib/IR/PassManager.cpp:1738</div>
<div> #14 0x61dc7d in main /build/llvm/tools/opt/opt.cpp:826</div><div> #15 0x7fac4bb4476c (/lib/x86_64-linux-gnu/libc.so.6+0x2176c)</div><div> #16 0x60ecec in _start (/build/llvm_build_asan/bin/opt+0x60ecec)</div>
<div>0x60b00000ae90 is located 96 bytes inside of 112-byte region [0x60b00000ae30,0x60b00000aea0)</div><div>freed by thread T0 here:</div><div> #0 0x5f96c5 in operator delete(void*) /build/llvm/projects/compiler-rt/lib/asan/asan_new_delete.cc:83</div>
<div> #1 0x15ac453 in (anonymous namespace)::BoUpSLP::vectorizeTree() /build/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp:1288</div><div> #2 0x15a44c3 in vectorizeStoreChain /build/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp:1512</div>
<div> #3 0x15a44c3 in vectorizeStores /build/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp:1582</div><div> #4 0x15a44c3 in vectorizeStoreChains /build/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp:1781</div><div>
#5 0x15a44c3 in (anonymous namespace)::SLPVectorizer::runOnFunction(llvm::Function&) /build/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp:1425</div><div> #6 0x2b029a2 in llvm::FPPassManager::runOnFunction(llvm::Function&) /build/llvm/lib/IR/PassManager.cpp:1530</div>
<div> #7 0x2b02f75 in llvm::FPPassManager::runOnModule(llvm::Module&) /build/llvm/lib/IR/PassManager.cpp:1550</div><div> #8 0x2b037a0 in llvm::MPPassManager::runOnModule(llvm::Module&) /build/llvm/lib/IR/PassManager.cpp:1608</div>
<div> #9 0x2b049a3 in llvm::PassManagerImpl::run(llvm::Module&) /build/llvm/lib/IR/PassManager.cpp:1703</div><div> #10 0x2b04e1f in llvm::PassManager::run(llvm::Module&) /build/llvm/lib/IR/PassManager.cpp:1738</div>
<div> #11 0x61dc7d in main /build/llvm/tools/opt/opt.cpp:826</div><div> #12 0x7fac4bb4476c (/lib/x86_64-linux-gnu/libc.so.6+0x2176c)</div><div>previously allocated by thread T0 here:</div><div> #0 0x5f9405 in operator new(unsigned long) /build/llvm/projects/compiler-rt/lib/asan/asan_new_delete.cc:52</div>
<div> #1 0x2b2ed73 in llvm::User::operator new(unsigned long, unsigned int) /build/llvm/lib/IR/User.cpp:60</div><div> #2 0x14a4391 in operator new /build/llvm/include/llvm/IR/InstrTypes.h:105</div><div> #3 0x14a4391 in llvm::LLParser::ParseLoad(llvm::Instruction*&, llvm::LLParser::PerFunctionState&) /build/llvm/lib/AsmParser/LLParser.cpp:4096</div>
<div> #4 0x149380c in llvm::LLParser::ParseInstruction(llvm::Instruction*&, llvm::BasicBlock*, llvm::LLParser::PerFunctionState&) /build/llvm/lib/AsmParser/LLParser.cpp:3316</div><div> #5 0x14927b8 in llvm::LLParser::ParseBasicBlock(llvm::LLParser::PerFunctionState&) /build/llvm/lib/AsmParser/LLParser.cpp:3190</div>
<div> #6 0x146af2f in llvm::LLParser::ParseFunctionBody(llvm::Function&) /build/llvm/lib/AsmParser/LLParser.cpp:3143</div><div> #7 0x14584c0 in ParseDefine /build/llvm/lib/AsmParser/LLParser.cpp:424</div><div> #8 0x14584c0 in llvm::LLParser::ParseTopLevelEntities() /build/llvm/lib/AsmParser/LLParser.cpp:226</div>
<div> #9 0x145813e in llvm::LLParser::Run() /build/llvm/lib/AsmParser/LLParser.cpp:41</div><div> #10 0x1449c59 in llvm::ParseAssembly(llvm::MemoryBuffer*, llvm::Module*, llvm::SMDiagnostic&, llvm::LLVMContext&) /build/llvm/lib/AsmParser/Parser.cpp:38</div>
<div> #11 0x11be978 in llvm::ParseIR(llvm::MemoryBuffer*, llvm::SMDiagnostic&, llvm::LLVMContext&) /build/llvm/lib/IRReader/IRReader.cpp:76</div><div> #12 0x11bf290 in llvm::ParseIRFile(std::string const&, llvm::SMDiagnostic&, llvm::LLVMContext&) /build/llvm/lib/IRReader/IRReader.cpp:88</div>
<div> #13 0x618e4c in main /build/llvm/tools/opt/opt.cpp:595</div><div> #14 0x7fac4bb4476c (/lib/x86_64-linux-gnu/libc.so.6+0x2176c)</div><div>SUMMARY: AddressSanitizer: heap-use-after-free /build/llvm/include/llvm/IR/Instruction.h:53 getParent</div>
</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Sun, Jul 7, 2013 at 10:57 AM, Nadav Rotem<span> </span><span dir="ltr"><<a href="mailto:nrotem@apple.com" target="_blank">nrotem@apple.com</a>></span><span> </span>wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Author: nadav<br>Date: Sun Jul 7 01:57:07 2013<br>New Revision: 185774<br>
<br>URL:<span> </span><a href="http://llvm.org/viewvc/llvm-project?rev=185774&view=rev" target="_blank">http://llvm.org/viewvc/llvm-project?rev=185774&view=rev</a><br>Log:<br>SLPVectorizer: Implement DCE as part of vectorization.<br>
<br>This is a complete re-write if the bottom-up vectorization class.<br>Before this commit we scanned the instruction tree 3 times. First in search of merge points for the trees. Second, for estimating the cost. And finally for vectorization.<br>
There was a lot of code duplication and adding the DCE exposed bugs. The new design is simpler and DCE was a part of the design.<br>In this implementation we build the tree once. After that we estimate the cost by scanning the different entries in the constructed tree (in any order). The vectorization phase also works on the built tree.<br>
<br><br>Added:<br> <span> </span>llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_7zip.ll<br> <span> </span>llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_bullet.ll<br> <span> </span>llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_bullet2.ll<br>
<span> </span>llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_dequeue.ll<br> <span> </span>llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_flop7.ll<br> <span> </span>llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_lame.ll<br>
<span> </span>llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_lencod.ll<br> <span> </span>llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_lencod2.ll<br> <span> </span>llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_mandeltext.ll<br>
<span> </span>llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_rc4.ll<br> <span> </span>llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_sim4b1.ll<br> <span> </span>llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_smallpt.ll<br>
<span> </span>llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_smallpt2.ll<br>Modified:<br> <span> </span>llvm/trunk/lib/Transforms/Vectorize/SLPVectorizer.cpp<br> <span> </span>llvm/trunk/test/Transforms/SLPVectorizer/X86/diamond.ll<br>
<span> </span>llvm/trunk/test/Transforms/SLPVectorizer/X86/long_chains.ll<br> <span> </span>llvm/trunk/test/Transforms/SLPVectorizer/X86/saxpy.ll<br><br>Modified: llvm/trunk/lib/Transforms/Vectorize/SLPVectorizer.cpp<br>
URL:<span> </span><a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/Vectorize/SLPVectorizer.cpp?rev=185774&r1=185773&r2=185774&view=diff" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/Vectorize/SLPVectorizer.cpp?rev=185774&r1=185773&r2=185774&view=diff</a><br>
==============================================================================<br>--- llvm/trunk/lib/Transforms/Vectorize/SLPVectorizer.cpp (original)<br>+++ llvm/trunk/lib/Transforms/Vectorize/SLPVectorizer.cpp Sun Jul 7 01:57:07 2013<br>
@@ -59,7 +59,7 @@ static const unsigned RecursionMaxDepth<br> class BuilderLocGuard {<br> public:<br> BuilderLocGuard(IRBuilder<> &B) : Builder(B), Loc(B.GetInsertPoint()) {}<br>- ~BuilderLocGuard() { Builder.SetInsertPoint(Loc); }<br>
+ ~BuilderLocGuard() { if (Loc) Builder.SetInsertPoint(Loc); }<br><br> private:<br> // Prevent copying.<br>@@ -91,6 +91,7 @@ struct BlockNumbering {<br> }<br><br> int getIndex(Instruction *I) {<br>+ assert(I->getParent() == BB && "Invalid instruction");<br>
if (!Valid)<br> numberInstructions();<br> assert(InstrIdx.count(I) && "Unknown instruction");<br>@@ -117,72 +118,169 @@ private:<br> std::vector<Instruction *> InstrVec;<br> };<br>
<br>-class FuncSLP {<br>+/// \returns the parent basic block if all of the instructions in \p VL<br>+/// are in the same block or null otherwise.<br>+static BasicBlock *getSameBlock(ArrayRef<Value *> VL) {<br>+ Instruction *I0 = dyn_cast<Instruction>(VL[0]);<br>
+ if (!I0)<br>+ return 0;<br>+ BasicBlock *BB = I0->getParent();<br>+ for (int i = 1, e = VL.size(); i < e; i++) {<br>+ Instruction *I = dyn_cast<Instruction>(VL[i]);<br>+ if (!I)<br>+ return 0;<br>
+<br>+ if (BB != I->getParent())<br>+ return 0;<br>+ }<br>+ return BB;<br>+}<br>+<br>+/// \returns True if all of the values in \p VL are constants.<br>+static bool allConstant(ArrayRef<Value *> VL) {<br>
+ for (unsigned i = 0, e = VL.size(); i < e; ++i)<br>+ if (!isa<Constant>(VL[i]))<br>+ return false;<br>+ return true;<br>+}<br>+<br>+/// \returns True if all of the values in \p VL are identical.<br>+static bool isSplat(ArrayRef<Value *> VL) {<br>
+ for (unsigned i = 1, e = VL.size(); i < e; ++i)<br>+ if (VL[i] != VL[0])<br>+ return false;<br>+ return true;<br>+}<br>+<br>+/// \returns The opcode if all of the Instructions in \p VL have the same<br>+/// opcode, or zero.<br>
+static unsigned getSameOpcode(ArrayRef<Value *> VL) {<br>+ Instruction *I0 = dyn_cast<Instruction>(VL[0]);<br>+ if (!I0)<br>+ return 0;<br>+ unsigned Opcode = I0->getOpcode();<br>+ for (int i = 1, e = VL.size(); i < e; i++) {<br>
+ Instruction *I = dyn_cast<Instruction>(VL[i]);<br>+ if (!I || Opcode != I->getOpcode())<br>+ return 0;<br>+ }<br>+ return Opcode;<br>+}<br>+<br>+/// \returns The type that all of the values in \p VL have or null if there<br>
+/// are different types.<br>+static Type* getSameType(ArrayRef<Value *> VL) {<br>+ Type *Ty = VL[0]->getType();<br>+ for (int i = 1, e = VL.size(); i < e; i++)<br>+ if (VL[0]->getType() != Ty)<br>+ return 0;<br>
+<br>+ return Ty;<br>+}<br>+<br>+/// \returns True if the ExtractElement instructions in VL can be vectorized<br>+/// to use the original vector.<br>+static bool CanReuseExtract(ArrayRef<Value *> VL) {<br>+ assert(Instruction::ExtractElement == getSameOpcode(VL) && "Invalid opcode");<br>
+ // Check if all of the extracts come from the same vector and from the<br>+ // correct offset.<br>+ Value *VL0 = VL[0];<br>+ ExtractElementInst *E0 = cast<ExtractElementInst>(VL0);<br>+ Value *Vec = E0->getOperand(0);<br>
+<br>+ // We have to extract from the same vector type.<br>+ unsigned NElts = Vec->getType()->getVectorNumElements();<br>+<br>+ if (NElts != VL.size())<br>+ return false;<br>+<br>+ // Check that all of the indices extract from the correct offset.<br>
+ ConstantInt *CI = dyn_cast<ConstantInt>(E0->getOperand(1));<br>+ if (!CI || CI->getZExtValue())<br>+ return false;<br>+<br>+ for (unsigned i = 1, e = VL.size(); i < e; ++i) {<br>+ ExtractElementInst *E = cast<ExtractElementInst>(VL[i]);<br>
+ ConstantInt *CI = dyn_cast<ConstantInt>(E->getOperand(1));<br>+<br>+ if (!CI || CI->getZExtValue() != i || E->getOperand(0) != Vec)<br>+ return false;<br>+ }<br>+<br>+ return true;<br>+}<br>
+<br>
+/// Bottom Up SLP Vectorizer.<br>+class BoUpSLP {<br>+public:<br> typedef SmallVector<Value *, 8> ValueList;<br> typedef SmallVector<Instruction *, 16> InstrList;<br> typedef SmallPtrSet<Value *, 16> ValueSet;<br>
typedef SmallVector<StoreInst *, 8> StoreList;<br><br>-public:<br>- static const int MAX_COST = INT_MIN;<br>-<br>- FuncSLP(Function *Func, ScalarEvolution *Se, DataLayout *Dl,<br>- TargetTransformInfo *Tti, AliasAnalysis *Aa, LoopInfo *Li,<br>
+ BoUpSLP(Function *Func, ScalarEvolution *Se, DataLayout *Dl,<br>+ TargetTransformInfo *Tti, AliasAnalysis *Aa, LoopInfo *Li,<br> DominatorTree *Dt) :<br> F(Func), SE(Se), DL(Dl), TTI(Tti), AA(Aa), LI(Li), DT(Dt),<br>
Builder(Se->getContext()) {<br>- for (Function::iterator it = F->begin(), e = F->end(); it != e; ++it) {<br>- BasicBlock *BB = it;<br>- BlocksNumbers[BB] = BlockNumbering(BB);<br>+ // Setup the block numbering utility for all of the blocks in the<br>
+ // function.<br>+ for (Function::iterator it = F->begin(), e = F->end(); it != e; ++it) {<br>+ BasicBlock *BB = it;<br>+ BlocksNumbers[BB] = BlockNumbering(BB);<br>+ }<br> }<br>- }<br>
-<br>- /// \brief Take the pointer operand from the Load/Store instruction.<br>- /// \returns NULL if this is not a valid Load/Store instruction.<br>- static Value *getPointerOperand(Value *I);<br>-<br>- /// \brief Take the address space operand from the Load/Store instruction.<br>
- /// \returns -1 if this is not a valid Load/Store instruction.<br>- static unsigned getAddressSpaceOperand(Value *I);<br>-<br>- /// \returns true if the memory operations A and B are consecutive.<br>- bool isConsecutiveAccess(Value *A, Value *B);<br>
<br> /// \brief Vectorize the tree that starts with the elements in \p VL.<br>- /// \returns the vectorized value.<br>- Value *vectorizeTree(ArrayRef<Value *> VL);<br>+ void vectorizeTree();<br><br> /// \returns the vectorization cost of the subtree that starts at \p VL.<br>
/// A negative number means that this is profitable.<br>- int getTreeCost(ArrayRef<Value *> VL);<br>+ int getTreeCost();<br>+<br>+ /// Construct a vectorizable tree that starts at \p Roots.<br>+ void buildTree(ArrayRef<Value *> Roots);<br>
+<br>+ /// Clear the internal data structures that are created by 'buildTree'.<br>+ void deleteTree() {<br>+ VectorizableTree.clear();<br>+ ScalarToTreeEntry.clear();<br>+ MustGather.clear();<br>+ MemBarrierIgnoreList.clear();<br>
+ }<br><br> /// \returns the scalarization cost for this list of values. Assuming that<br> /// this subtree gets vectorized, we may need to extract the values from the<br> /// roots. This method calculates the cost of extracting the values.<br>
int getGatherCost(ArrayRef<Value *> VL);<br><br>- /// \brief Attempts to order and vectorize a sequence of stores. This<br>- /// function does a quadratic scan of the given stores.<br>- /// \returns true if the basic block was modified.<br>
- bool vectorizeStores(ArrayRef<StoreInst *> Stores, int costThreshold);<br>-<br>- /// \brief Vectorize a group of scalars into a vector tree.<br>- /// \returns the vectorized value.<br>- Value *vectorizeArith(ArrayRef<Value *> Operands);<br>
-<br>- /// \brief This method contains the recursive part of getTreeCost.<br>- int getTreeCost_rec(ArrayRef<Value *> VL, unsigned Depth);<br>-<br>- /// \brief This recursive method looks for vectorization hazards such as<br>
- /// values that are used by multiple users and checks that values are used<br>- /// by only one vector lane. It updates the variables LaneMap, MultiUserVals.<br>- void getTreeUses_rec(ArrayRef<Value *> VL, unsigned Depth);<br>
+ /// \returns true if the memory operations A and B are consecutive.<br>+ bool isConsecutiveAccess(Value *A, Value *B);<br>+<br>+ /// \brief Perform LICM and CSE on the newly generated gather sequences.<br>+ void optimizeGatherSequence();<br>
+private:<br>+ struct TreeEntry;<br>+<br>+ /// \returns the cost of the vectorizable entry.<br>+ int getEntryCost(TreeEntry *E);<br>+<br>+ /// This is the recursive part of buildTree.<br>+ void buildTree_rec(ArrayRef<Value *> Roots, unsigned Depth);<br>
<br>- /// \brief This method contains the recursive part of vectorizeTree.<br>- Value *vectorizeTree_rec(ArrayRef<Value *> VL);<br>+ /// Vectorizer a single entry in the tree.<br>+ Value *vectorizeTree(TreeEntry *E);<br>
<br>- /// \brief Vectorize a sorted sequence of stores.<br>- bool vectorizeStoreChain(ArrayRef<Value *> Chain, int CostThreshold);<br>+ /// Vectorizer a single entry in the tree, starting in \p VL.<br>+ Value *vectorizeTree(ArrayRef<Value *> VL);<br>
+<br>+ /// \brief Take the pointer operand from the Load/Store instruction.<br>+ /// \returns NULL if this is not a valid Load/Store instruction.<br>+ static Value *getPointerOperand(Value *I);<br>+<br>+ /// \brief Take the address space operand from the Load/Store instruction.<br>
+ /// \returns -1 if this is not a valid Load/Store instruction.<br>+ static unsigned getAddressSpaceOperand(Value *I);<br><br> /// \returns the scalarization cost for this type. Scalarization in this<br> /// context means the creation of vectors from a group of scalars.<br>
@@ -211,57 +309,65 @@ public:<br> /// \returns a vector from a collection of scalars in \p VL.<br> Value *Gather(ArrayRef<Value *> VL, VectorType *Ty);<br><br>- /// \brief Perform LICM and CSE on the newly generated gather sequences.<br>
- void optimizeGatherSequence();<br>+ struct TreeEntry {<br>+ TreeEntry() : Scalars(), VectorizedValue(0), LastScalarIndex(0),<br>+ NeedToGather(0) {}<br>+<br>+ /// \returns true if the scalars in VL are equal to this entry.<br>
+ bool isSame(ArrayRef<Value *> VL) {<br>+ assert(VL.size() == Scalars.size() && "Invalid size");<br>+ for (int i = 0, e = VL.size(); i != e; ++i)<br>+ if (VL[i] != Scalars[i])<br>
+ return false;<br>+ return true;<br>+ }<br><br>- bool needToGatherAny(ArrayRef<Value *> VL) {<br>- for (int i = 0, e = VL.size(); i < e; ++i)<br>- if (MustGather.count(VL[i]))<br>- return true;<br>
- return false;<br>- }<br>+ /// A vector of scalars.<br>+ ValueList Scalars;<br>+<br>+ /// The Scalars are vectorized into this value. It is initialized to Null.<br>+ Value *VectorizedValue;<br><br>- void forgetNumbering() {<br>
- for (Function::iterator it = F->begin(), e = F->end(); it != e; ++it)<br>- BlocksNumbers[it].forget();<br>+ /// The index in the basic block of the last scalar.<br>+ int LastScalarIndex;<br>+<br>+ /// Do we need to gather this sequence ?<br>
+ bool NeedToGather;<br>+ };<br>+<br>+ /// Create a new VectorizableTree entry.<br>+ TreeEntry *newTreeEntry(ArrayRef<Value *> VL, bool Vectorized) {<br>+ VectorizableTree.push_back(TreeEntry());<br>+ int idx = VectorizableTree.size() - 1;<br>
+ TreeEntry *Last = &VectorizableTree[idx];<br>+ Last->Scalars.insert(Last->Scalars.begin(), VL.begin(), VL.end());<br>+ Last->NeedToGather = !Vectorized;<br>+ if (Vectorized) {<br>+ Last->LastScalarIndex = getLastIndex(VL);<br>
+ for (int i = 0, e = VL.size(); i != e; ++i) {<br>+ assert(!ScalarToTreeEntry.count(VL[i]) && "Scalar already in tree!");<br>+ ScalarToTreeEntry[VL[i]] = idx;<br>+ }<br>+ } else {<br>
+ Last->LastScalarIndex = 0;<br>+ MustGather.insert(VL.begin(), VL.end());<br>+ }<br>+ return Last;<br> }<br><br> /// -- Vectorization State --<br>+ /// Holds all of the tree entries.<br>+ std::vector<TreeEntry> VectorizableTree;<br>
<br>- /// Maps values in the tree to the vector lanes that uses them. This map must<br>- /// be reset between runs of getCost.<br>- std::map<Value *, int> LaneMap;<br>- /// A list of instructions to ignore while sinking<br>
- /// memory instructions. This map must be reset between runs of getCost.<br>- ValueSet MemBarrierIgnoreList;<br>+ /// Maps a specific scalar to its tree entry.<br>+ SmallDenseMap<Value*, int> ScalarToTreeEntry;<br>
<br>- /// Maps between the first scalar to the vector. This map must be reset<br>- /// between runs.<br>- DenseMap<Value *, Value *> VectorizedValues;<br>-<br>- /// Contains values that must be gathered because they are used<br>
- /// by multiple lanes, or by users outside the tree.<br>- /// NOTICE: The vectorization methods also use this set.<br>+ /// A list of scalars that we found that we need to keep as scalars.<br> ValueSet MustGather;<br>
<br>- /// Contains PHINodes that are being processed. We use this data structure<br>- /// to stop cycles in the graph.<br>- ValueSet VisitedPHIs;<br>-<br>- /// Contains a list of values that are used outside the current tree, the<br>
- /// first element in the bundle and the insertion point for extracts. This<br>- /// set must be reset between runs.<br>- struct UseInfo{<br>- UseInfo(Instruction *VL0, int I) :<br>- Leader(VL0), LastIndex(I) {}<br>
- UseInfo() : Leader(0), LastIndex(0) {}<br>- /// The first element in the bundle.<br>- Instruction *Leader;<br>- /// The insertion index.<br>- int LastIndex;<br>- };<br>- MapVector<Instruction*, UseInfo> MultiUserVals;<br>
- SetVector<Instruction*> ExtractedLane;<br>+ /// A list of instructions to ignore while sinking<br>+ /// memory instructions. This map must be reset between runs of getCost.<br>+ ValueSet MemBarrierIgnoreList;<br>
<br> /// Holds all of the instructions that we gathered.<br> SetVector<Instruction *> GatherSeq;<br>@@ -281,409 +387,176 @@ public:<br> IRBuilder<> Builder;<br> };<br><br>-int FuncSLP::getGatherCost(Type *Ty) {<br>
- int Cost = 0;<br>- for (unsigned i = 0, e = cast<VectorType>(Ty)->getNumElements(); i < e; ++i)<br>- Cost += TTI->getVectorInstrCost(Instruction::InsertElement, Ty, i);<br>- return Cost;<br>-}<br>-<br>
-int FuncSLP::getGatherCost(ArrayRef<Value *> VL) {<br>- // Find the type of the operands in VL.<br>- Type *ScalarTy = VL[0]->getType();<br>- if (StoreInst *SI = dyn_cast<StoreInst>(VL[0]))<br>- ScalarTy = SI->getValueOperand()->getType();<br>
- VectorType *VecTy = VectorType::get(ScalarTy, VL.size());<br>- // Find the cost of inserting/extracting values from the vector.<br>- return getGatherCost(VecTy);<br>-}<br>-<br>-AliasAnalysis::Location FuncSLP::getLocation(Instruction *I) {<br>
- if (StoreInst *SI = dyn_cast<StoreInst>(I))<br>- return AA->getLocation(SI);<br>- if (LoadInst *LI = dyn_cast<LoadInst>(I))<br>- return AA->getLocation(LI);<br>- return AliasAnalysis::Location();<br>
-}<br>-<br>-Value *FuncSLP::getPointerOperand(Value *I) {<br>- if (LoadInst *LI = dyn_cast<LoadInst>(I))<br>- return LI->getPointerOperand();<br>- if (StoreInst *SI = dyn_cast<StoreInst>(I))<br>- return SI->getPointerOperand();<br>
- return 0;<br>+void BoUpSLP::buildTree(ArrayRef<Value *> Roots) {<br>+ deleteTree();<br>+ buildTree_rec(Roots, 0);<br> }<br><br>-unsigned FuncSLP::getAddressSpaceOperand(Value *I) {<br>- if (LoadInst *L = dyn_cast<LoadInst>(I))<br>
- return L->getPointerAddressSpace();<br>- if (StoreInst *S = dyn_cast<StoreInst>(I))<br>- return S->getPointerAddressSpace();<br>- return -1;<br>-}<br><br>-bool FuncSLP::isConsecutiveAccess(Value *A, Value *B) {<br>
- Value *PtrA = getPointerOperand(A);<br>- Value *PtrB = getPointerOperand(B);<br>- unsigned ASA = getAddressSpaceOperand(A);<br>- unsigned ASB = getAddressSpaceOperand(B);<br>+void BoUpSLP::buildTree_rec(ArrayRef<Value *> VL, unsigned Depth) {<br>
+ bool SameTy = getSameType(VL); (void)SameTy;<br>+ assert(SameTy && "Invalid types!");<br><br>- // Check that the address spaces match and that the pointers are valid.<br>- if (!PtrA || !PtrB || (ASA != ASB))<br>
- return false;<br>+ if (Depth == RecursionMaxDepth) {<br>+ DEBUG(dbgs() << "SLP: Gathering due to max recursion depth.\n");<br>+ newTreeEntry(VL, false);<br>+ return;<br>+ }<br><br>- // Check that A and B are of the same type.<br>
- if (PtrA->getType() != PtrB->getType())<br>- return false;<br>+ // Don't handle vectors.<br>+ if (VL[0]->getType()->isVectorTy()) {<br>+ DEBUG(dbgs() << "SLP: Gathering due to vector type.\n");<br>
+ newTreeEntry(VL, false);<br>+ return;<br>+ }<br><br>- // Calculate the distance.<br>- const SCEV *PtrSCEVA = SE->getSCEV(PtrA);<br>- const SCEV *PtrSCEVB = SE->getSCEV(PtrB);<br>- const SCEV *OffsetSCEV = SE->getMinusSCEV(PtrSCEVA, PtrSCEVB);<br>
- const SCEVConstant *ConstOffSCEV = dyn_cast<SCEVConstant>(OffsetSCEV);<br>+ if (StoreInst *SI = dyn_cast<StoreInst>(VL[0]))<br>+ if (SI->getValueOperand()->getType()->isVectorTy()) {<br>+ DEBUG(dbgs() << "SLP: Gathering due to store vector type.\n");<br>
+ newTreeEntry(VL, false);<br>+ return;<br>+ }<br><br>- // Non constant distance.<br>- if (!ConstOffSCEV)<br>- return false;<br>+ // If all of the operands are identical or constant we have a simple solution.<br>
+ if (allConstant(VL) || isSplat(VL) || !getSameBlock(VL) ||<br>+ !getSameOpcode(VL)) {<br>+ DEBUG(dbgs() << "SLP: Gathering due to C,S,B,O. \n");<br>+ newTreeEntry(VL, false);<br>+ return;<br>
+ }<br><br>- int64_t Offset = ConstOffSCEV->getValue()->getSExtValue();<br>- Type *Ty = cast<PointerType>(PtrA->getType())->getElementType();<br>- // The Instructions are connsecutive if the size of the first load/store is<br>
- // the same as the offset.<br>- int64_t Sz = DL->getTypeStoreSize(Ty);<br>- return ((-Offset) == Sz);<br>-}<br>+ // We now know that this is a vector of instructions of the same type from<br>+ // the same block.<br>
<br>-Value *FuncSLP::getSinkBarrier(Instruction *Src, Instruction *Dst) {<br>- assert(Src->getParent() == Dst->getParent() && "Not the same BB");<br>- BasicBlock::iterator I = Src, E = Dst;<br>- /// Scan all of the instruction from SRC to DST and check if<br>
- /// the source may alias.<br>- for (++I; I != E; ++I) {<br>- // Ignore store instructions that are marked as 'ignore'.<br>- if (MemBarrierIgnoreList.count(I))<br>- continue;<br>- if (Src->mayWriteToMemory()) /* Write */ {<br>
- if (!I->mayReadOrWriteMemory())<br>- continue;<br>- } else /* Read */ {<br>- if (!I->mayWriteToMemory())<br>- continue;<br>+ // Check if this is a duplicate of another entry.<br>+ if (ScalarToTreeEntry.count(VL[0])) {<br>
+ int Idx = ScalarToTreeEntry[VL[0]];<br>+ TreeEntry *E = &VectorizableTree[Idx];<br>+ for (unsigned i = 0, e = VL.size(); i != e; ++i) {<br>+ DEBUG(dbgs() << "SLP: \tChecking bundle: " << *VL[i] << ".\n");<br>
+ if (E->Scalars[i] != VL[i]) {<br>+ DEBUG(dbgs() << "SLP: Gathering due to partial overlap.\n");<br>+ newTreeEntry(VL, false);<br>+ return;<br>+ }<br> }<br>- AliasAnalysis::Location A = getLocation(&*I);<br>
- AliasAnalysis::Location B = getLocation(Src);<br>-<br>- if (!A.Ptr || !B.Ptr || AA->alias(A, B))<br>- return I;<br>+ DEBUG(dbgs() << "SLP: Perfect diamond merge at " << *VL[0] << ".\n");<br>
+ return;<br> }<br>- return 0;<br>-}<br>-<br>-static BasicBlock *getSameBlock(ArrayRef<Value *> VL) {<br>- BasicBlock *BB = 0;<br>- for (int i = 0, e = VL.size(); i < e; i++) {<br>- Instruction *I = dyn_cast<Instruction>(VL[i]);<br>
- if (!I)<br>- return 0;<br><br>- if (!BB) {<br>- BB = I->getParent();<br>- continue;<br>+ // Check that none of the instructions in the bundle are already in the tree.<br>+ for (unsigned i = 0, e = VL.size(); i != e; ++i) {<br>
+ if (ScalarToTreeEntry.count(VL[i])) {<br>+ DEBUG(dbgs() << "SLP: The instruction (" << *VL[i] <<<br>+ ") is already in tree.\n");<br>+ newTreeEntry(VL, false);<br>
+ return;<br> }<br>+ }<br><br>- if (BB != I->getParent())<br>- return 0;<br>+ // If any of the scalars appears in the table OR it is marked as a value that<br>+ // needs to stat scalar then we need to gather the scalars.<br>
+ for (unsigned i = 0, e = VL.size(); i != e; ++i) {<br>+ if (ScalarToTreeEntry.count(VL[i]) || MustGather.count(VL[i])) {<br>+ DEBUG(dbgs() << "SLP: Gathering due to gathered scalar. \n");<br>+ newTreeEntry(VL, false);<br>
+ return;<br>+ }<br> }<br>- return BB;<br>-}<br><br>-static bool allConstant(ArrayRef<Value *> VL) {<br>- for (unsigned i = 0, e = VL.size(); i < e; ++i)<br>- if (!isa<Constant>(VL[i]))<br>- return false;<br>
- return true;<br>-}<br>+ // Check that all of the users of the scalars that we want to vectorize are<br>+ // schedulable.<br>+ Instruction *VL0 = cast<Instruction>(VL[0]);<br>+ int MyLastIndex = getLastIndex(VL);<br>
+ BasicBlock *BB = cast<Instruction>(VL0)->getParent();<br><br>-static bool isSplat(ArrayRef<Value *> VL) {<br>- for (unsigned i = 1, e = VL.size(); i < e; ++i)<br>- if (VL[i] != VL[0])<br>- return false;<br>
- return true;<br>-}<br>+ for (unsigned i = 0, e = VL.size(); i != e; ++i) {<br>+ Instruction *Scalar = cast<Instruction>(VL[i]);<br>+ DEBUG(dbgs() << "SLP: Checking users of " << *Scalar << ". \n");<br>
+ for (Value::use_iterator U = Scalar->use_begin(), UE = Scalar->use_end();<br>+ U != UE; ++U) {<br>+ DEBUG(dbgs() << "SLP: \tUser " << **U << ". \n");<br>+ Instruction *User = dyn_cast<Instruction>(*U);<br>
+ if (!User) {<br>+ DEBUG(dbgs() << "SLP: Gathering due unknown user. \n");<br>+ newTreeEntry(VL, false);<br>+ return;<br>+ }<br><br>-static unsigned getSameOpcode(ArrayRef<Value *> VL) {<br>
- unsigned Opcode = 0;<br>- for (int i = 0, e = VL.size(); i < e; i++) {<br>- if (Instruction *I = dyn_cast<Instruction>(VL[i])) {<br>- if (!Opcode) {<br>- Opcode = I->getOpcode();<br>+ // We don't care if the user is in a different basic block.<br>
+ BasicBlock *UserBlock = User->getParent();<br>+ if (UserBlock != BB) {<br>+ DEBUG(dbgs() << "SLP: User from a different basic block "<br>+ << *User << ". \n");<br>
continue;<br> }<br>- if (Opcode != I->getOpcode())<br>- return 0;<br>- }<br>- }<br>- return Opcode;<br>-}<br><br>-static bool CanReuseExtract(ArrayRef<Value *> VL, unsigned VF,<br>
- VectorType *VecTy) {<br>- assert(Instruction::ExtractElement == getSameOpcode(VL) && "Invalid opcode");<br>- // Check if all of the extracts come from the same vector and from the<br>
- // correct offset.<br>- Value *VL0 = VL[0];<br>- ExtractElementInst *E0 = cast<ExtractElementInst>(VL0);<br>- Value *Vec = E0->getOperand(0);<br>-<br>- // We have to extract from the same vector type.<br>
- if (Vec->getType() != VecTy)<br>
- return false;<br>-<br>- // Check that all of the indices extract from the correct offset.<br>- ConstantInt *CI = dyn_cast<ConstantInt>(E0->getOperand(1));<br>- if (!CI || CI->getZExtValue())<br>- return false;<br>
-<br>- for (unsigned i = 1, e = VF; i < e; ++i) {<br>- ExtractElementInst *E = cast<ExtractElementInst>(VL[i]);<br>- ConstantInt *CI = dyn_cast<ConstantInt>(E->getOperand(1));<br>-<br>- if (!CI || CI->getZExtValue() != i || E->getOperand(0) != Vec)<br>
- return false;<br>- }<br>-<br>- return true;<br>-}<br>-<br>-void FuncSLP::getTreeUses_rec(ArrayRef<Value *> VL, unsigned Depth) {<br>- if (Depth == RecursionMaxDepth)<br>- return MustGather.insert(VL.begin(), VL.end());<br>
-<br>- // Don't handle vectors.<br>- if (VL[0]->getType()->isVectorTy())<br>- return;<br>-<br>- if (StoreInst *SI = dyn_cast<StoreInst>(VL[0]))<br>- if (SI->getValueOperand()->getType()->isVectorTy())<br>
- return;<br>-<br>- // If all of the operands are identical or constant we have a simple solution.<br>- if (allConstant(VL) || isSplat(VL) || !getSameBlock(VL))<br>- return MustGather.insert(VL.begin(), VL.end());<br>
-<br>- // Stop the scan at unknown IR.<br>- Instruction *VL0 = dyn_cast<Instruction>(VL[0]);<br>- assert(VL0 && "Invalid instruction");<br>-<br>- // Mark instructions with multiple users.<br>- int LastIndex = getLastIndex(VL);<br>
- for (unsigned i = 0, e = VL.size(); i < e; ++i) {<br>- if (PHINode *PN = dyn_cast<PHINode>(VL[i])) {<br>- unsigned NumUses = 0;<br>- // Check that PHINodes have only one external (non-self) use.<br>
- for (Value::use_iterator U = VL[i]->use_begin(), UE = VL[i]->use_end();<br>- U != UE; ++U) {<br>- // Don't count self uses.<br>- if (*U == PN)<br>- continue;<br>- NumUses++;<br>
- }<br>- if (NumUses > 1) {<br>- DEBUG(dbgs() << "SLP: Adding PHI to MultiUserVals "<br>- "because it has " << NumUses << " users:" << *PN << " \n");<br>
- UseInfo UI(VL0, 0);<br>- MultiUserVals[PN] = UI;<br>+ // If this is a PHINode within this basic block then we can place the<br>+ // extract wherever we want.<br>+ if (isa<PHINode>(*User)) {<br>
+ DEBUG(dbgs() << "SLP: \tWe can schedule PHIs:" << *User << ". \n");<br>+ continue;<br> }<br>- continue;<br>- }<br>-<br>- Instruction *I = dyn_cast<Instruction>(VL[i]);<br>
- // Remember to check if all of the users of this instruction are vectorized<br>- // within our tree. At depth zero we have no local users, only external<br>- // users that we don't care about.<br>- if (Depth && I && I->getNumUses() > 1) {<br>
- DEBUG(dbgs() << "SLP: Adding to MultiUserVals "<br>- "because it has " << I->getNumUses() << " users:" << *I << " \n");<br>- UseInfo UI(VL0, LastIndex);<br>
- MultiUserVals[I] = UI;<br>- }<br>- }<br>-<br>- // Check that the instruction is only used within one lane.<br>- for (int i = 0, e = VL.size(); i < e; ++i) {<br>- if (LaneMap.count(VL[i]) && LaneMap[VL[i]] != i) {<br>
- DEBUG(dbgs() << "SLP: Value used by multiple lanes:" << *VL[i] << "\n");<br>- return MustGather.insert(VL.begin(), VL.end());<br>- }<br>- // Make this instruction as 'seen' and remember the lane.<br>
- LaneMap[VL[i]] = i;<br>- }<br>-<br>- unsigned Opcode = getSameOpcode(VL);<br>- if (!Opcode)<br>- return MustGather.insert(VL.begin(), VL.end());<br><br>- switch (Opcode) {<br>- case Instruction::PHI: {<br>- PHINode *PH = dyn_cast<PHINode>(VL0);<br>
+ // Check if this is a safe in-tree user.<br>+ if (ScalarToTreeEntry.count(User)) {<br>+ int Idx = ScalarToTreeEntry[User];<br>+ int VecLocation = VectorizableTree[Idx].LastScalarIndex;<br>+ if (VecLocation <= MyLastIndex) {<br>
+ DEBUG(dbgs() << "SLP: Gathering due to unschedulable vector. \n");<br>+ newTreeEntry(VL, false);<br>+ return;<br>+ }<br>+ DEBUG(dbgs() << "SLP: In-tree user (" << *User << ") at #" <<<br>
+ VecLocation << " vector value (" << *Scalar << ") at #"<br>+ << MyLastIndex << ".\n");<br>+ continue;<br>+ }<br><br>- // Stop self cycles.<br>
- if (VisitedPHIs.count(PH))<br>+ // Make sure that we can schedule this unknown user.<br>+ BlockNumbering &BN = BlocksNumbers[BB];<br>+ int UserIndex = BN.getIndex(User);<br>+ if (UserIndex < MyLastIndex) {<br>
+<br>+ DEBUG(dbgs() << "SLP: Can't schedule extractelement for "<br>+ << *User << ". \n");<br>+ newTreeEntry(VL, false);<br> return;<br>-<br>- VisitedPHIs.insert(PH);<br>
- for (unsigned i = 0, e = PH->getNumIncomingValues(); i < e; ++i) {<br>- ValueList Operands;<br>- // Prepare the operand vector.<br>- for (unsigned j = 0; j < VL.size(); ++j)<br>- Operands.push_back(cast<PHINode>(VL[j])->getIncomingValue(i));<br>
-<br>- getTreeUses_rec(Operands, Depth + 1);<br>- }<br>- return;<br>- }<br>- case Instruction::ExtractElement: {<br>- VectorType *VecTy = VectorType::get(VL[0]->getType(), VL.size());<br>- // No need to follow ExtractElements that are going to be optimized away.<br>
- if (CanReuseExtract(VL, VL.size(), VecTy))<br>- return;<br>- // Fall through.<br>- }<br>- case Instruction::Load:<br>- return;<br>- case Instruction::ZExt:<br>- case Instruction::SExt:<br>- case Instruction::FPToUI:<br>
- case Instruction::FPToSI:<br>- case Instruction::FPExt:<br>- case Instruction::PtrToInt:<br>- case Instruction::IntToPtr:<br>- case Instruction::SIToFP:<br>- case Instruction::UIToFP:<br>- case Instruction::Trunc:<br>
- case Instruction::FPTrunc:<br>- case Instruction::BitCast:<br>- case Instruction::Select:<br>- case Instruction::ICmp:<br>- case Instruction::FCmp:<br>- case Instruction::Add:<br>- case Instruction::FAdd:<br>- case Instruction::Sub:<br>
- case Instruction::FSub:<br>- case Instruction::Mul:<br>- case Instruction::FMul:<br>- case Instruction::UDiv:<br>- case Instruction::SDiv:<br>- case Instruction::FDiv:<br>- case Instruction::URem:<br>- case Instruction::SRem:<br>
- case Instruction::FRem:<br>- case Instruction::Shl:<br>- case Instruction::LShr:<br>- case Instruction::AShr:<br>- case Instruction::And:<br>- case Instruction::Or:<br>- case Instruction::Xor: {<br>- for (unsigned i = 0, e = VL0->getNumOperands(); i < e; ++i) {<br>
- ValueList Operands;<br>- // Prepare the operand vector.<br>- for (unsigned j = 0; j < VL.size(); ++j)<br>- Operands.push_back(cast<Instruction>(VL[j])->getOperand(i));<br>-<br>- getTreeUses_rec(Operands, Depth + 1);<br>
+ }<br> }<br>- return;<br>- }<br>- case Instruction::Store: {<br>- ValueList Operands;<br>- for (unsigned j = 0; j < VL.size(); ++j)<br>- Operands.push_back(cast<Instruction>(VL[j])->getOperand(0));<br>
- getTreeUses_rec(Operands, Depth + 1);<br>- return;<br>- }<br>- default:<br>- return MustGather.insert(VL.begin(), VL.end());<br> }<br>-}<br><br>-int FuncSLP::getLastIndex(ArrayRef<Value *> VL) {<br>- BasicBlock *BB = cast<Instruction>(VL[0])->getParent();<br>
- assert(BB == getSameBlock(VL) && BlocksNumbers.count(BB) && "Invalid block");<br>- BlockNumbering &BN = BlocksNumbers[BB];<br>-<br>- int MaxIdx = BN.getIndex(BB->getFirstNonPHI());<br>
+ // Check that every instructions appears once in this bundle.<br> for (unsigned i = 0, e = VL.size(); i < e; ++i)<br>- MaxIdx = std::max(MaxIdx, BN.getIndex(cast<Instruction>(VL[i])));<br>- return MaxIdx;<br>
-}<br>-<br>-Instruction *FuncSLP::getLastInstruction(ArrayRef<Value *> VL) {<br>- BasicBlock *BB = cast<Instruction>(VL[0])->getParent();<br>- assert(BB == getSameBlock(VL) && BlocksNumbers.count(BB) && "Invalid block");<br>
- BlockNumbering &BN = BlocksNumbers[BB];<br>-<br>- int MaxIdx = BN.getIndex(cast<Instruction>(VL[0]));<br>- for (unsigned i = 1, e = VL.size(); i < e; ++i)<br>- MaxIdx = std::max(MaxIdx, BN.getIndex(cast<Instruction>(VL[i])));<br>
- return BN.getInstruction(MaxIdx);<br>-}<br>-<br>-Instruction *FuncSLP::getInstructionForIndex(unsigned Index, BasicBlock *BB) {<br>- BlockNumbering &BN = BlocksNumbers[BB];<br>- return BN.getInstruction(Index);<br>
-}<br>-<br>-int FuncSLP::getFirstUserIndex(ArrayRef<Value *> VL) {<br>- BasicBlock *BB = getSameBlock(VL);<br>- assert(BB && "All instructions must come from the same block");<br>- BlockNumbering &BN = BlocksNumbers[BB];<br>
+ for (unsigned j = i+1; j < e; ++j)<br>+ if (VL[i] == VL[j]) {<br>+ DEBUG(dbgs() << "SLP: Scalar used twice in bundle.\n");<br>+ newTreeEntry(VL, false);<br>+ return;<br>+ }<br>
<br>- // Find the first user of the values.<br>- int FirstUser = BN.getIndex(BB->getTerminator());<br>+ // Check that instructions in this bundle don't reference other instructions.<br>+ // The runtime of this check is O(N * N-1 * uses(N)) and a typical N is 4.<br>
for (unsigned i = 0, e = VL.size(); i < e; ++i) {<br> for (Value::use_iterator U = VL[i]->use_begin(), UE = VL[i]->use_end();<br> <span> </span>U != UE; ++U) {<br>- Instruction *Instr = dyn_cast<Instruction>(*U);<br>
-<br>- if (!Instr || Instr->getParent() != BB)<br>- continue;<br>-<br>- FirstUser = std::min(FirstUser, BN.getIndex(Instr));<br>+ for (unsigned j = 0; j < e; ++j) {<br>+ if (i != j && *U == VL[j]) {<br>
+ DEBUG(dbgs() << "SLP: Intra-bundle dependencies!" << **U << ". \n");<br>+ newTreeEntry(VL, false);<br>+ return;<br>+ }<br>+ }<br> }<br> }<br>
- return FirstUser;<br>-}<br>-<br>-int FuncSLP::getTreeCost_rec(ArrayRef<Value *> VL, unsigned Depth) {<br>- Type *ScalarTy = VL[0]->getType();<br>-<br>- if (StoreInst *SI = dyn_cast<StoreInst>(VL[0]))<br>
- ScalarTy = SI->getValueOperand()->getType();<br>-<br>- /// Don't mess with vectors.<br>- if (ScalarTy->isVectorTy())<br>- return FuncSLP::MAX_COST;<br><br>- if (allConstant(VL))<br>- return 0;<br>
-<br>- VectorType *VecTy = VectorType::get(ScalarTy, VL.size());<br>-<br>- if (isSplat(VL))<br>- return TTI->getShuffleCost(TargetTransformInfo::SK_Broadcast, VecTy, 0);<br>-<br>- int GatherCost = getGatherCost(VecTy);<br>
- if (Depth == RecursionMaxDepth || needToGatherAny(VL))<br>- return GatherCost;<br>+ DEBUG(dbgs() << "SLP: We are able to schedule this bundle.\n");<br><br>- BasicBlock *BB = getSameBlock(VL);<br> unsigned Opcode = getSameOpcode(VL);<br>
- assert(Opcode && BB && "Invalid Instruction Value");<br><br> // Check if it is safe to sink the loads or the stores.<br> if (Opcode == Instruction::Load || Opcode == Instruction::Store) {<br>
- int MaxIdx = getLastIndex(VL);<br>- Instruction *Last = getInstructionForIndex(MaxIdx, BB);<br>+ Instruction *Last = getLastInstruction(VL);<br><br> for (unsigned i = 0, e = VL.size(); i < e; ++i) {<br>
if (VL[i] == Last)<br>
@@ -691,407 +564,442 @@ int FuncSLP::getTreeCost_rec(ArrayRef<Va<br> Value *Barrier = getSinkBarrier(cast<Instruction>(VL[i]), Last);<br> if (Barrier) {<br> DEBUG(dbgs() << "SLP: Can't sink " << *VL[i] << "\n down to " << *Last<br>
- << "\n because of " << *Barrier << "\n");<br>- return MAX_COST;<br>+ << "\n because of " << *Barrier << ". Gathering.\n");<br>
+ newTreeEntry(VL, false);<br>+ return;<br> }<br> }<br> }<br><br>- // Calculate the extract cost.<br>- unsigned ExternalUserExtractCost = 0;<br>- for (unsigned i = 0, e = VL.size(); i < e; ++i)<br>
- if (ExtractedLane.count(cast<Instruction>(VL[i])))<br>- ExternalUserExtractCost +=<br>- TTI->getVectorInstrCost(Instruction::ExtractElement, VecTy, i);<br>-<br>- Instruction *VL0 = cast<Instruction>(VL[0]);<br>
switch (Opcode) {<br>- case Instruction::PHI: {<br>- PHINode *PH = dyn_cast<PHINode>(VL0);<br>-<br>- // Stop self cycles.<br>- if (VisitedPHIs.count(PH))<br>- return 0;<br>-<br>- VisitedPHIs.insert(PH);<br>
- int TotalCost = 0;<br>- // Calculate the cost of all of the operands.<br>- for (unsigned i = 0, e = PH->getNumIncomingValues(); i < e; ++i) {<br>- ValueList Operands;<br>- // Prepare the operand vector.<br>
- for (unsigned j = 0; j < VL.size(); ++j)<br>- Operands.push_back(cast<PHINode>(VL[j])->getIncomingValue(i));<br>+ case Instruction::PHI: {<br>+ PHINode *PH = dyn_cast<PHINode>(VL0);<br>
+ newTreeEntry(VL, true);<br>+ DEBUG(dbgs() << "SLP: added a vector of PHINodes.\n");<br>+<br>+ for (unsigned i = 0, e = PH->getNumIncomingValues(); i < e; ++i) {<br>+ ValueList Operands;<br>
+ // Prepare the operand vector.<br>+ for (unsigned j = 0; j < VL.size(); ++j)<br>+ Operands.push_back(cast<PHINode>(VL[j])->getIncomingValue(i));<br><br>- int Cost = getTreeCost_rec(Operands, Depth + 1);<br>
- if (Cost == MAX_COST)<br>- return MAX_COST;<br>- TotalCost += TotalCost;<br>+ buildTree_rec(Operands, Depth + 1);<br>+ }<br>+ return;<br> }<br>+ case Instruction::ExtractElement: {<br>
+ bool Reuse = CanReuseExtract(VL);<br>+ if (Reuse) {<br>+ DEBUG(dbgs() << "SLP: Reusing extract sequence.\n");<br>+ }<br>+ newTreeEntry(VL, Reuse);<br>+ return;<br>+ }<br>
+ case Instruction::Load: {<br>+ // Check if the loads are consecutive or of we need to swizzle them.<br>+ for (unsigned i = 0, e = VL.size() - 1; i < e; ++i)<br>+ if (!isConsecutiveAccess(VL[i], VL[i + 1])) {<br>
+ newTreeEntry(VL, false);<br>+ DEBUG(dbgs() << "SLP: Need to swizzle loads.\n");<br>+ return;<br>+ }<br><br>- if (TotalCost > GatherCost) {<br>- MustGather.insert(VL.begin(), VL.end());<br>
- return GatherCost;<br>+ newTreeEntry(VL, true);<br>+ DEBUG(dbgs() << "SLP: added a vector of loads.\n");<br>+ return;<br> }<br>+ case Instruction::ZExt:<br>+ case Instruction::SExt:<br>
+ case Instruction::FPToUI:<br>+ case Instruction::FPToSI:<br>+ case Instruction::FPExt:<br>+ case Instruction::PtrToInt:<br>+ case Instruction::IntToPtr:<br>+ case Instruction::SIToFP:<br>+ case Instruction::UIToFP:<br>
+ case Instruction::Trunc:<br>+ case Instruction::FPTrunc:<br>+ case Instruction::BitCast: {<br>+ Type *SrcTy = VL0->getOperand(0)->getType();<br>+ for (unsigned i = 0; i < VL.size(); ++i) {<br>
+ Type *Ty = cast<Instruction>(VL[i])->getOperand(0)->getType();<br>+ if (Ty != SrcTy || Ty->isAggregateType() || Ty->isVectorTy()) {<br>+ newTreeEntry(VL, false);<br>+ DEBUG(dbgs() << "SLP: Gathering casts with different src types.\n");<br>
+ return;<br>+ }<br>+ }<br>+ newTreeEntry(VL, true);<br>+ DEBUG(dbgs() << "SLP: added a vector of casts.\n");<br><br>- return TotalCost + ExternalUserExtractCost;<br>- }<br>
- case Instruction::ExtractElement: {<br>- if (CanReuseExtract(VL, VL.size(), VecTy))<br>- return 0;<br>- return getGatherCost(VecTy);<br>- }<br>- case Instruction::ZExt:<br>- case Instruction::SExt:<br>- case Instruction::FPToUI:<br>
- case Instruction::FPToSI:<br>- case Instruction::FPExt:<br>- case Instruction::PtrToInt:<br>- case Instruction::IntToPtr:<br>- case Instruction::SIToFP:<br>- case Instruction::UIToFP:<br>- case Instruction::Trunc:<br>
- case Instruction::FPTrunc:<br>- case Instruction::BitCast: {<br>- ValueList Operands;<br>- Type *SrcTy = VL0->getOperand(0)->getType();<br>- // Prepare the operand vector.<br>- for (unsigned j = 0; j < VL.size(); ++j) {<br>
- Operands.push_back(cast<Instruction>(VL[j])->getOperand(0));<br>- // Check that the casted type is the same for all users.<br>- if (cast<Instruction>(VL[j])->getOperand(0)->getType() != SrcTy)<br>
- return getGatherCost(VecTy);<br>- }<br>-<br>- int Cost = getTreeCost_rec(Operands, Depth + 1);<br>- if (Cost == MAX_COST)<br>- return MAX_COST;<br>-<br>- // Calculate the cost of this instruction.<br>
- int ScalarCost = VL.size() * TTI->getCastInstrCost(VL0->getOpcode(),<br>- VL0->getType(), SrcTy);<br>-<br>- VectorType *SrcVecTy = VectorType::get(SrcTy, VL.size());<br>
- int VecCost = TTI->getCastInstrCost(VL0->getOpcode(), VecTy, SrcVecTy);<br>- Cost += (VecCost - ScalarCost);<br>+ for (unsigned i = 0, e = VL0->getNumOperands(); i < e; ++i) {<br>+ ValueList Operands;<br>
+ // Prepare the operand vector.<br>+ for (unsigned j = 0; j < VL.size(); ++j)<br>+ Operands.push_back(cast<Instruction>(VL[j])->getOperand(i));<br><br>- if (Cost > GatherCost) {<br>
- MustGather.insert(VL.begin(), VL.end());<br>- return GatherCost;<br>+ buildTree_rec(Operands, Depth+1);<br>+ }<br>+ return;<br> }<br>+ case Instruction::ICmp:<br>+ case Instruction::FCmp: {<br>
+ // Check that all of the compares have the same predicate.<br>+ CmpInst::Predicate P0 = dyn_cast<CmpInst>(VL0)->getPredicate();<br>+ for (unsigned i = 1, e = VL.size(); i < e; ++i) {<br>+ CmpInst *Cmp = cast<CmpInst>(VL[i]);<br>
+ if (Cmp->getPredicate() != P0) {<br>+ newTreeEntry(VL, false);<br>+ DEBUG(dbgs() << "SLP: Gathering cmp with different predicate.\n");<br>+ return;<br>+ }<br>
+ }<br>
<br>- return Cost + ExternalUserExtractCost;<br>- }<br>- case Instruction::FCmp:<br>- case Instruction::ICmp: {<br>- // Check that all of the compares have the same predicate.<br>- CmpInst::Predicate P0 = dyn_cast<CmpInst>(VL0)->getPredicate();<br>
- for (unsigned i = 1, e = VL.size(); i < e; ++i) {<br>- CmpInst *Cmp = cast<CmpInst>(VL[i]);<br>- if (Cmp->getPredicate() != P0)<br>- return getGatherCost(VecTy);<br>- }<br>- // Fall through.<br>
- }<br>- case Instruction::Select:<br>- case Instruction::Add:<br>- case Instruction::FAdd:<br>- case Instruction::Sub:<br>- case Instruction::FSub:<br>- case Instruction::Mul:<br>- case Instruction::FMul:<br>- case Instruction::UDiv:<br>
- case Instruction::SDiv:<br>- case Instruction::FDiv:<br>- case Instruction::URem:<br>- case Instruction::SRem:<br>- case Instruction::FRem:<br>- case Instruction::Shl:<br>- case Instruction::LShr:<br>- case Instruction::AShr:<br>
- case Instruction::And:<br>- case Instruction::Or:<br>- case Instruction::Xor: {<br>- int TotalCost = 0;<br>- // Calculate the cost of all of the operands.<br>- for (unsigned i = 0, e = VL0->getNumOperands(); i < e; ++i) {<br>
- ValueList Operands;<br>- // Prepare the operand vector.<br>- for (unsigned j = 0; j < VL.size(); ++j)<br>- Operands.push_back(cast<Instruction>(VL[j])->getOperand(i));<br>+ newTreeEntry(VL, true);<br>
+ DEBUG(dbgs() << "SLP: added a vector of compares.\n");<br><br>- int Cost = getTreeCost_rec(Operands, Depth + 1);<br>- if (Cost == MAX_COST)<br>- return MAX_COST;<br>- TotalCost += Cost;<br>
- }<br>-<br>- // Calculate the cost of this instruction.<br>- int ScalarCost = 0;<br>- int VecCost = 0;<br>- if (Opcode == Instruction::FCmp || Opcode == Instruction::ICmp ||<br>- Opcode == Instruction::Select) {<br>
- VectorType *MaskTy = VectorType::get(Builder.getInt1Ty(), VL.size());<br>- ScalarCost =<br>- VecTy->getNumElements() *<br>- TTI->getCmpSelInstrCost(Opcode, ScalarTy, Builder.getInt1Ty());<br>
- VecCost = TTI->getCmpSelInstrCost(Opcode, VecTy, MaskTy);<br>- } else {<br>- ScalarCost = VecTy->getNumElements() *<br>- TTI->getArithmeticInstrCost(Opcode, ScalarTy);<br>- VecCost = TTI->getArithmeticInstrCost(Opcode, VecTy);<br>
+ for (unsigned i = 0, e = VL0->getNumOperands(); i < e; ++i) {<br>+ ValueList Operands;<br>+ // Prepare the operand vector.<br>+ for (unsigned j = 0; j < VL.size(); ++j)<br>+ Operands.push_back(cast<Instruction>(VL[j])->getOperand(i));<br>
+<br>+ buildTree_rec(Operands, Depth+1);<br>+ }<br>+ return;<br> }<br>- TotalCost += (VecCost - ScalarCost);<br>+ case Instruction::Select:<br>+ case Instruction::Add:<br>+ case Instruction::FAdd:<br>
+ case Instruction::Sub:<br>+ case Instruction::FSub:<br>+ case Instruction::Mul:<br>+ case Instruction::FMul:<br>+ case Instruction::UDiv:<br>+ case Instruction::SDiv:<br>+ case Instruction::FDiv:<br>
+ case Instruction::URem:<br>+ case Instruction::SRem:<br>+ case Instruction::FRem:<br>+ case Instruction::Shl:<br>+ case Instruction::LShr:<br>+ case Instruction::AShr:<br>+ case Instruction::And:<br>
+ case Instruction::Or:<br>+ case Instruction::Xor: {<br>+ newTreeEntry(VL, true);<br>+ DEBUG(dbgs() << "SLP: added a vector of bin op.\n");<br>+<br>+ for (unsigned i = 0, e = VL0->getNumOperands(); i < e; ++i) {<br>
+ ValueList Operands;<br>+ // Prepare the operand vector.<br>+ for (unsigned j = 0; j < VL.size(); ++j)<br>+ Operands.push_back(cast<Instruction>(VL[j])->getOperand(i));<br><br>- if (TotalCost > GatherCost) {<br>
- MustGather.insert(VL.begin(), VL.end());<br>- return GatherCost;<br>+ buildTree_rec(Operands, Depth+1);<br>+ }<br>+ return;<br> }<br>+ case Instruction::Store: {<br>+ // Check if the stores are consecutive or of we need to swizzle them.<br>
+ for (unsigned i = 0, e = VL.size() - 1; i < e; ++i)<br>+ if (!isConsecutiveAccess(VL[i], VL[i + 1])) {<br>+ newTreeEntry(VL, false);<br>+ DEBUG(dbgs() << "SLP: Non consecutive store.\n");<br>
+ return;<br>+ }<br><br>- return TotalCost + ExternalUserExtractCost;<br>- }<br>- case Instruction::Load: {<br>- // If we are scalarize the loads, add the cost of forming the vector.<br>- for (unsigned i = 0, e = VL.size() - 1; i < e; ++i)<br>
- if (!isConsecutiveAccess(VL[i], VL[i + 1]))<br>- return getGatherCost(VecTy);<br>+ newTreeEntry(VL, true);<br>+ DEBUG(dbgs() << "SLP: added a vector of stores.\n");<br><br>- // Cost of wide load - cost of scalar loads.<br>
- int ScalarLdCost = VecTy->getNumElements() *<br>- TTI->getMemoryOpCost(Instruction::Load, ScalarTy, 1, 0);<br>- int VecLdCost = TTI->getMemoryOpCost(Instruction::Load, ScalarTy, 1, 0);<br>
- int TotalCost = VecLdCost - ScalarLdCost;<br>+ ValueList Operands;<br>+ for (unsigned j = 0; j < VL.size(); ++j)<br>+ Operands.push_back(cast<Instruction>(VL[j])->getOperand(0));<br><br>
- if (TotalCost > GatherCost) {<br>
- MustGather.insert(VL.begin(), VL.end());<br>- return GatherCost;<br>+ // We can ignore these values because we are sinking them down.<br>+ MemBarrierIgnoreList.insert(VL.begin(), VL.end());<br>+ buildTree_rec(Operands, Depth + 1);<br>
+ return;<br> }<br>-<br>- return TotalCost + ExternalUserExtractCost;<br>+ default:<br>+ newTreeEntry(VL, false);<br>+ DEBUG(dbgs() << "SLP: Gathering unknown instruction.\n");<br>
+ return;<br> }<br>- case Instruction::Store: {<br>- // We know that we can merge the stores. Calculate the cost.<br>- int ScalarStCost = VecTy->getNumElements() *<br>- TTI->getMemoryOpCost(Instruction::Store, ScalarTy, 1, 0);<br>
- int VecStCost = TTI->getMemoryOpCost(Instruction::Store, ScalarTy, 1, 0);<br>- int StoreCost = VecStCost - ScalarStCost;<br>+}<br><br>- ValueList Operands;<br>- for (unsigned j = 0; j < VL.size(); ++j) {<br>
- Operands.push_back(cast<Instruction>(VL[j])->getOperand(0));<br>- MemBarrierIgnoreList.insert(VL[j]);<br>- }<br>+int BoUpSLP::getEntryCost(TreeEntry *E) {<br>+ ArrayRef<Value*> VL = E->Scalars;<br>
<br>- int Cost = getTreeCost_rec(Operands, Depth + 1);<br>- if (Cost == MAX_COST)<br>- return MAX_COST;<br>+ Type *ScalarTy = VL[0]->getType();<br>+ if (StoreInst *SI = dyn_cast<StoreInst>(VL[0]))<br>
+ ScalarTy = SI->getValueOperand()->getType();<br>+ VectorType *VecTy = VectorType::get(ScalarTy, VL.size());<br><br>- int TotalCost = StoreCost + Cost;<br>- return TotalCost + ExternalUserExtractCost;<br>
+ if (E->NeedToGather) {<br>+ if (allConstant(VL))<br>+ return 0;<br>+ if (isSplat(VL)) {<br>+ return TTI->getShuffleCost(TargetTransformInfo::SK_Broadcast, VecTy, 0);<br>+ }<br>+ return getGatherCost(E->Scalars);<br>
}<br>- default:<br>- // Unable to vectorize unknown instructions.<br>- return getGatherCost(VecTy);<br>+<br>+ assert(getSameOpcode(VL) && getSameType(VL) && getSameBlock(VL) &&<br>+ "Invalid VL");<br>
+ Instruction *VL0 = cast<Instruction>(VL[0]);<br>+ unsigned Opcode = VL0->getOpcode();<br>+ switch (Opcode) {<br>+ case Instruction::PHI: {<br>+ return 0;<br>+ }<br>+ case Instruction::ExtractElement: {<br>
+ if (CanReuseExtract(VL))<br>+ return 0;<br>+ return getGatherCost(VecTy);<br>+ }<br>+ case Instruction::ZExt:<br>+ case Instruction::SExt:<br>+ case Instruction::FPToUI:<br>+ case Instruction::FPToSI:<br>
+ case Instruction::FPExt:<br>+ case Instruction::PtrToInt:<br>+ case Instruction::IntToPtr:<br>+ case Instruction::SIToFP:<br>+ case Instruction::UIToFP:<br>+ case Instruction::Trunc:<br>+ case Instruction::FPTrunc:<br>
+ case Instruction::BitCast: {<br>+ Type *SrcTy = VL0->getOperand(0)->getType();<br>+<br>+ // Calculate the cost of this instruction.<br>+ int ScalarCost = VL.size() * TTI->getCastInstrCost(VL0->getOpcode(),<br>
+ VL0->getType(), SrcTy);<br>+<br>+ VectorType *SrcVecTy = VectorType::get(SrcTy, VL.size());<br>+ int VecCost = TTI->getCastInstrCost(VL0->getOpcode(), VecTy, SrcVecTy);<br>
+ return VecCost - ScalarCost;<br>+ }<br>+ case Instruction::FCmp:<br>+ case Instruction::ICmp:<br>+ case Instruction::Select:<br>+ case Instruction::Add:<br>+ case Instruction::FAdd:<br>+ case Instruction::Sub:<br>
+ case Instruction::FSub:<br>+ case Instruction::Mul:<br>+ case Instruction::FMul:<br>+ case Instruction::UDiv:<br>+ case Instruction::SDiv:<br>+ case Instruction::FDiv:<br>+ case Instruction::URem:<br>
+ case Instruction::SRem:<br>+ case Instruction::FRem:<br>+ case Instruction::Shl:<br>+ case Instruction::LShr:<br>+ case Instruction::AShr:<br>+ case Instruction::And:<br>+ case Instruction::Or:<br>
+ case Instruction::Xor: {<br>
+ // Calculate the cost of this instruction.<br>+ int ScalarCost = 0;<br>+ int VecCost = 0;<br>+ if (Opcode == Instruction::FCmp || Opcode == Instruction::ICmp ||<br>+ Opcode == Instruction::Select) {<br>
+ VectorType *MaskTy = VectorType::get(Builder.getInt1Ty(), VL.size());<br>+ ScalarCost = VecTy->getNumElements() *<br>+ TTI->getCmpSelInstrCost(Opcode, ScalarTy, Builder.getInt1Ty());<br>+ VecCost = TTI->getCmpSelInstrCost(Opcode, VecTy, MaskTy);<br>
+ } else {<br>+ ScalarCost = VecTy->getNumElements() *<br>+ TTI->getArithmeticInstrCost(Opcode, ScalarTy);<br>+ VecCost = TTI->getArithmeticInstrCost(Opcode, VecTy);<br>+ }<br>+ return VecCost - ScalarCost;<br>
+ }<br>+ case Instruction::Load: {<br>+ // Cost of wide load - cost of scalar loads.<br>+ int ScalarLdCost = VecTy->getNumElements() *<br>+ TTI->getMemoryOpCost(Instruction::Load, ScalarTy, 1, 0);<br>
+ int VecLdCost = TTI->getMemoryOpCost(Instruction::Load, ScalarTy, 1, 0);<br>+ return VecLdCost - ScalarLdCost;<br>+ }<br>+ case Instruction::Store: {<br>+ // We know that we can merge the stores. Calculate the cost.<br>
+ int ScalarStCost = VecTy->getNumElements() *<br>+ TTI->getMemoryOpCost(Instruction::Store, ScalarTy, 1, 0);<br>+ int VecStCost = TTI->getMemoryOpCost(Instruction::Store, ScalarTy, 1, 0);<br>+ return VecStCost - ScalarStCost;<br>
+ }<br>+ default:<br>+ llvm_unreachable("Unknown instruction");<br> }<br> }<br><br>-int FuncSLP::getTreeCost(ArrayRef<Value *> VL) {<br>- // Get rid of the list of stores that were removed, and from the<br>
- // lists of instructions with multiple users.<br>- MemBarrierIgnoreList.clear();<br>- LaneMap.clear();<br>- MultiUserVals.clear();<br>- ExtractedLane.clear();<br>- MustGather.clear();<br>- VisitedPHIs.clear();<br>
-<br>- if (!getSameBlock(VL))<br>- return MAX_COST;<br>-<br>- // Find the location of the last root.<br>- int LastRootIndex = getLastIndex(VL);<br>- int FirstUserIndex = getFirstUserIndex(VL);<br>-<br>- // Don't vectorize if there are users of the tree roots inside the tree<br>
- // itself.<br>- if (LastRootIndex > FirstUserIndex)<br>- return MAX_COST;<br>-<br>- // Scan the tree and find which value is used by which lane, and which values<br>- // must be scalarized.<br>- getTreeUses_rec(VL, 0);<br>
-<br>- // Check that instructions with multiple users can be vectorized. Mark<br>- // unsafe instructions.<br>- for (MapVector<Instruction *, UseInfo>::iterator UI = MultiUserVals.begin(),<br>- e = MultiUserVals.end(); UI != e; ++UI) {<br>
- Instruction *Scalar = UI->first;<br>-<br>- if (MustGather.count(Scalar))<br>- continue;<br>+int BoUpSLP::getTreeCost() {<br>+ int Cost = 0;<br>+ DEBUG(dbgs() << "SLP: Calculating cost for tree of size " <<<br>
+ VectorizableTree.size() << ".\n");<br><br>- assert(LaneMap.count(Scalar) && "Unknown scalar");<br>- int ScalarLane = LaneMap[Scalar];<br>+ for (unsigned i = 0, e = VectorizableTree.size(); i != e; ++i) {<br>
+ int C = getEntryCost(&VectorizableTree[i]);<br>+ DEBUG(dbgs() << "SLP: Adding cost " << C << " for bundle that starts with "<br>+ << *VectorizableTree[i].Scalars[0] << " .\n");<br>
+ Cost += C;<br>+ }<br>+ DEBUG(dbgs() << "SLP: Total Cost " << Cost << ".\n");<br>+ return Cost;<br>+}<br><br>- bool ExternalUse = false;<br>- // Check that all of the users of this instr are within the tree.<br>
- for (Value::use_iterator Usr = Scalar->use_begin(),<br>- UE = Scalar->use_end(); Usr != UE; ++Usr) {<br>- // If this user is within the tree, make sure it is from the same lane.<br>- // Notice that we have both in-tree and out-of-tree users.<br>
- if (LaneMap.count(*Usr)) {<br>- if (LaneMap[*Usr] != ScalarLane) {<br>- DEBUG(dbgs() << "SLP: Adding to MustExtract "<br>- "because of an out-of-lane usage.\n");<br>
- MustGather.insert(Scalar);<br>- break;<br>- }<br>- continue;<br>- }<br>+int BoUpSLP::getGatherCost(Type *Ty) {<br>+ int Cost = 0;<br>+ for (unsigned i = 0, e = cast<VectorType>(Ty)->getNumElements(); i < e; ++i)<br>
+ Cost += TTI->getVectorInstrCost(Instruction::InsertElement, Ty, i);<br>+ return Cost;<br>+}<br><br>- // We have an out-of-tree user. Check if we can place an 'extract'.<br>- Instruction *User = cast<Instruction>(*Usr);<br>
- // We care about the order only if the user is in the same block.<br>- if (User->getParent() == Scalar->getParent()) {<br>- int LastLoc = UI->second.LastIndex;<br>- BlockNumbering &BN = BlocksNumbers[User->getParent()];<br>
- int UserIdx = BN.getIndex(User);<br>- if (UserIdx <= LastLoc) {<br>- DEBUG(dbgs() << "SLP: Adding to MustExtract because of an external "<br>- "user that we can't schedule.\n");<br>
- MustGather.insert(Scalar);<br>- break;<br>- }<br>- }<br>- // We have an external user.<br>- ExternalUse = true;<br>- }<br>+int BoUpSLP::getGatherCost(ArrayRef<Value *> VL) {<br>
+ // Find the type of the operands in VL.<br>+ Type *ScalarTy = VL[0]->getType();<br>+ if (StoreInst *SI = dyn_cast<StoreInst>(VL[0]))<br>+ ScalarTy = SI->getValueOperand()->getType();<br>+ VectorType *VecTy = VectorType::get(ScalarTy, VL.size());<br>
+ // Find the cost of inserting/extracting values from the vector.<br>+ return getGatherCost(VecTy);<br>+}<br><br>- if (ExternalUse) {<br>- // Items that are left in MultiUserVals are to be extracted.<br>- // ExtractLane is used for the lookup.<br>
- ExtractedLane.insert(Scalar);<br>- }<br>+AliasAnalysis::Location BoUpSLP::getLocation(Instruction *I) {<br>+ if (StoreInst *SI = dyn_cast<StoreInst>(I))<br>+ return AA->getLocation(SI);<br>+ if (LoadInst *LI = dyn_cast<LoadInst>(I))<br>
+ return AA->getLocation(LI);<br>+ return AliasAnalysis::Location();<br>+}<br><br>- }<br>+Value *BoUpSLP::getPointerOperand(Value *I) {<br>+ if (LoadInst *LI = dyn_cast<LoadInst>(I))<br>+ return LI->getPointerOperand();<br>
+ if (StoreInst *SI = dyn_cast<StoreInst>(I))<br>+ return SI->getPointerOperand();<br>+ return 0;<br>+}<br><br>- // Now calculate the cost of vectorizing the tree.<br>- return getTreeCost_rec(VL, 0);<br>+unsigned BoUpSLP::getAddressSpaceOperand(Value *I) {<br>
+ if (LoadInst *L = dyn_cast<LoadInst>(I))<br>+ return L->getPointerAddressSpace();<br>+ if (StoreInst *S = dyn_cast<StoreInst>(I))<br>+ return S->getPointerAddressSpace();<br>+ return -1;<br> }<br>
-bool FuncSLP::vectorizeStoreChain(ArrayRef<Value *> Chain, int CostThreshold) {<br>- unsigned ChainLen = Chain.size();<br>- DEBUG(dbgs() << "SLP: Analyzing a store chain of length " << ChainLen<br>
- << "\n");<br>- Type *StoreTy = cast<StoreInst>(Chain[0])->getValueOperand()->getType();<br>- unsigned Sz = DL->getTypeSizeInBits(StoreTy);<br>- unsigned VF = MinVecRegSize / Sz;<br>
<br>- if (!isPowerOf2_32(Sz) || VF < 2)<br>+bool BoUpSLP::isConsecutiveAccess(Value *A, Value *B) {<br>+ Value *PtrA = getPointerOperand(A);<br>+ Value *PtrB = getPointerOperand(B);<br>+ unsigned ASA = getAddressSpaceOperand(A);<br>
+ unsigned ASB = getAddressSpaceOperand(B);<br>+<br>+ // Check that the address spaces match and that the pointers are valid.<br>+ if (!PtrA || !PtrB || (ASA != ASB))<br> return false;<br><br>- bool Changed = false;<br>
- // Look for profitable vectorizable trees at all offsets, starting at zero.<br>- for (unsigned i = 0, e = ChainLen; i < e; ++i) {<br>- if (i + VF > e)<br>- break;<br>- DEBUG(dbgs() << "SLP: Analyzing " << VF << " stores at offset " << i<br>
- << "\n");<br>- ArrayRef<Value *> Operands = Chain.slice(i, VF);<br>+ // Check that A and B are of the same type.<br>+ if (PtrA->getType() != PtrB->getType())<br>+ return false;<br>
<br>- int Cost = getTreeCost(Operands);<br>- if (Cost == FuncSLP::MAX_COST)<br>- continue;<br>- DEBUG(dbgs() << "SLP: Found cost=" << Cost << " for VF=" << VF << "\n");<br>
- if (Cost < CostThreshold) {<br>- DEBUG(dbgs() << "SLP: Decided to vectorize cost=" << Cost << "\n");<br>- vectorizeTree(Operands);<br>+ // Calculate the distance.<br>
+ const SCEV *PtrSCEVA = SE->getSCEV(PtrA);<br>+ const SCEV *PtrSCEVB = SE->getSCEV(PtrB);<br>+ const SCEV *OffsetSCEV = SE->getMinusSCEV(PtrSCEVA, PtrSCEVB);<br>+ const SCEVConstant *ConstOffSCEV = dyn_cast<SCEVConstant>(OffsetSCEV);<br>
<br>- // Remove the scalar stores.<br>- for (int j = 0, e = VF; j < e; ++j)<br>- cast<Instruction>(Operands[j])->eraseFromParent();<br>+ // Non constant distance.<br>+ if (!ConstOffSCEV)<br>
+ return false;<br><br>- // Move to the next bundle.<br>- i += VF - 1;<br>- Changed = true;<br>+ int64_t Offset = ConstOffSCEV->getValue()->getSExtValue();<br>+ Type *Ty = cast<PointerType>(PtrA->getType())->getElementType();<br>
+ // The Instructions are connsecutive if the size of the first load/store is<br>+ // the same as the offset.<br>+ int64_t Sz = DL->getTypeStoreSize(Ty);<br>+ return ((-Offset) == Sz);<br>+}<br>+<br>+Value *BoUpSLP::getSinkBarrier(Instruction *Src, Instruction *Dst) {<br>
+ assert(Src->getParent() == Dst->getParent() && "Not the same BB");<br>+ BasicBlock::iterator I = Src, E = Dst;<br>+ /// Scan all of the instruction from SRC to DST and check if<br>+ /// the source may alias.<br>
+ for (++I; I != E; ++I) {<br>+ // Ignore store instructions that are marked as 'ignore'.<br>+ if (MemBarrierIgnoreList.count(I))<br>+ continue;<br>+ if (Src->mayWriteToMemory()) /* Write */ {<br>
+ if (!I->mayReadOrWriteMemory())<br>+ continue;<br>+ } else /* Read */ {<br>+ if (!I->mayWriteToMemory())<br>+ continue;<br> }<br>+ AliasAnalysis::Location A = getLocation(&*I);<br>
+ AliasAnalysis::Location B = getLocation(Src);<br>+<br>+ if (!A.Ptr || !B.Ptr || AA->alias(A, B))<br>+ return I;<br> }<br>+ return 0;<br>+}<br><br>- if (Changed || ChainLen > VF)<br>- return Changed;<br>
+int BoUpSLP::getLastIndex(ArrayRef<Value *> VL) {<br>+ BasicBlock *BB = cast<Instruction>(VL[0])->getParent();<br>+ assert(BB == getSameBlock(VL) && BlocksNumbers.count(BB) && "Invalid block");<br>
+ BlockNumbering &BN = BlocksNumbers[BB];<br><br>- // Handle short chains. This helps us catch types such as <3 x float> that<br>- // are smaller than vector size.<br>- int Cost = getTreeCost(Chain);<br>- if (Cost == FuncSLP::MAX_COST)<br>
- return false;<br>- if (Cost < CostThreshold) {<br>- DEBUG(dbgs() << "SLP: Found store chain cost = " << Cost<br>- << " for size = " << ChainLen << "\n");<br>
- vectorizeTree(Chain);<br>+ int MaxIdx = BN.getIndex(BB->getFirstNonPHI());<br>+ for (unsigned i = 0, e = VL.size(); i < e; ++i)<br>+ MaxIdx = std::max(MaxIdx, BN.getIndex(cast<Instruction>(VL[i])));<br>
+ return MaxIdx;<br>+}<br><br>- // Remove all of the scalar stores.<br>- for (int i = 0, e = Chain.size(); i < e; ++i)<br>- cast<Instruction>(Chain[i])->eraseFromParent();<br>+Instruction *BoUpSLP::getLastInstruction(ArrayRef<Value *> VL) {<br>
+ BasicBlock *BB = cast<Instruction>(VL[0])->getParent();<br>+ assert(BB == getSameBlock(VL) && BlocksNumbers.count(BB) && "Invalid block");<br>+ BlockNumbering &BN = BlocksNumbers[BB];<br>
<br>- return true;<br>- }<br>+ int MaxIdx = BN.getIndex(cast<Instruction>(VL[0]));<br>+ for (unsigned i = 1, e = VL.size(); i < e; ++i)<br>+ MaxIdx = std::max(MaxIdx, BN.getIndex(cast<Instruction>(VL[i])));<br>
+ Instruction *I = BN.getInstruction(MaxIdx);<br>+ assert(I && "bad location");<br>+ return I;<br>+}<br><br>- return false;<br>+Instruction *BoUpSLP::getInstructionForIndex(unsigned Index, BasicBlock *BB) {<br>
+ BlockNumbering &BN = BlocksNumbers[BB];<br>+ return BN.getInstruction(Index);<br> }<br><br>-bool FuncSLP::vectorizeStores(ArrayRef<StoreInst *> Stores, int costThreshold) {<br>- SetVector<Value *> Heads, Tails;<br>
- SmallDenseMap<Value *, Value *> ConsecutiveChain;<br>+int BoUpSLP::getFirstUserIndex(ArrayRef<Value *> VL) {<br>+ BasicBlock *BB = getSameBlock(VL);<br>+ assert(BB && "All instructions must come from the same block");<br>
+ BlockNumbering &BN = BlocksNumbers[BB];<br><br>- // We may run into multiple chains that merge into a single chain. We mark the<br>- // stores that we vectorized so that we don't visit the same store twice.<br>
- ValueSet VectorizedStores;<br>- bool Changed = false;<br>+ // Find the first user of the values.<br>+ int FirstUser = BN.getIndex(BB->getTerminator());<br>+ for (unsigned i = 0, e = VL.size(); i < e; ++i) {<br>
+ for (Value::use_iterator U = VL[i]->use_begin(), UE = VL[i]->use_end();<br>+ U != UE; ++U) {<br>+ Instruction *Instr = dyn_cast<Instruction>(*U);<br><br>- // Do a quadratic search on all of the given stores and find<br>
- // all of the pairs of loads that follow each other.<br>- for (unsigned i = 0, e = Stores.size(); i < e; ++i)<br>- for (unsigned j = 0; j < e; ++j) {<br>- if (i == j)<br>+ if (!Instr || Instr->getParent() != BB)<br>
continue;<br><br>- if (isConsecutiveAccess(Stores[i], Stores[j])) {<br>- Tails.insert(Stores[j]);<br>- Heads.insert(Stores[i]);<br>- ConsecutiveChain[Stores[i]] = Stores[j];<br>- }<br>
- }<br>-<br>- // For stores that start but don't end a link in the chain:<br>- for (SetVector<Value *>::iterator it = Heads.begin(), e = Heads.end();<br>- it != e; ++it) {<br>- if (Tails.count(*it))<br>
- continue;<br>-<br>- // We found a store instr that starts a chain. Now follow the chain and try<br>- // to vectorize it.<br>- ValueList Operands;<br>- Value *I = *it;<br>- // Collect the chain into a list.<br>
- while (Tails.count(I) || Heads.count(I)) {<br>- if (VectorizedStores.count(I))<br>- break;<br>- Operands.push_back(I);<br>- // Move to the next value in the chain.<br>- I = ConsecutiveChain[I];<br>
+ FirstUser = std::min(FirstUser, BN.getIndex(Instr));<br> }<br>-<br>- bool Vectorized = vectorizeStoreChain(Operands, costThreshold);<br>-<br>- // Mark the vectorized stores so that we don't vectorize them again.<br>
- if (Vectorized)<br>- VectorizedStores.insert(Operands.begin(), Operands.end());<br>- Changed |= Vectorized;<br> }<br>-<br>- return Changed;<br>+ return FirstUser;<br> }<br><br>-Value *FuncSLP::Gather(ArrayRef<Value *> VL, VectorType *Ty) {<br>
+Value *BoUpSLP::Gather(ArrayRef<Value *> VL, VectorType *Ty) {<br> Value *Vec = UndefValue::get(Ty);<br> // Generate the 'InsertElement' instruction.<br> for (unsigned i = 0; i < Ty->getNumElements(); ++i) {<br>
@@ -1103,282 +1011,292 @@ Value *FuncSLP::Gather(ArrayRef<Value *><br> return Vec;<br> }<br><br>-Value *FuncSLP::vectorizeTree_rec(ArrayRef<Value *> VL) {<br>- BuilderLocGuard Guard(Builder);<br>+Value *BoUpSLP::vectorizeTree(ArrayRef<Value *> VL) {<br>
+ if (ScalarToTreeEntry.count(VL[0])) {<br>+ int Idx = ScalarToTreeEntry[VL[0]];<br>+ TreeEntry *E = &VectorizableTree[Idx];<br>+ if (E->isSame(VL))<br>+ return vectorizeTree(E);<br>+ }<br><br> Type *ScalarTy = VL[0]->getType();<br>
if (StoreInst *SI = dyn_cast<StoreInst>(VL[0]))<br> ScalarTy = SI->getValueOperand()->getType();<br> VectorType *VecTy = VectorType::get(ScalarTy, VL.size());<br><br>- if (needToGatherAny(VL))<br>- return Gather(VL, VecTy);<br>
+ return Gather(VL, VecTy);<br>+}<br>+<br>+Value *BoUpSLP::vectorizeTree(TreeEntry *E) {<br>+ BuilderLocGuard Guard(Builder);<br>+<br>+ if (E->VectorizedValue) {<br>+ DEBUG(dbgs() << "SLP: Diamond merged for " << *E->Scalars[0] << ".\n");<br>
+ return E->VectorizedValue;<br>+ }<br>+<br>+ Type *ScalarTy = E->Scalars[0]->getType();<br>+ if (StoreInst *SI = dyn_cast<StoreInst>(E->Scalars[0]))<br>+ ScalarTy = SI->getValueOperand()->getType();<br>
+ VectorType *VecTy = VectorType::get(ScalarTy, E->Scalars.size());<br><br>- if (VectorizedValues.count(VL[0])) {<br>- DEBUG(dbgs() << "SLP: Diamond merged at depth.\n");<br>- return VectorizedValues[VL[0]];<br>
+ if (E->NeedToGather) {<br>+ return Gather(E->Scalars, VecTy);<br> }<br><br>- Instruction *VL0 = cast<Instruction>(VL[0]);<br>+ Instruction *VL0 = cast<Instruction>(E->Scalars[0]);<br> unsigned Opcode = VL0->getOpcode();<br>
- assert(Opcode == getSameOpcode(VL) && "Invalid opcode");<br>+ assert(Opcode == getSameOpcode(E->Scalars) && "Invalid opcode");<br><br> switch (Opcode) {<br>- case Instruction::PHI: {<br>
- PHINode *PH = dyn_cast<PHINode>(VL0);<br>- Builder.SetInsertPoint(PH->getParent()->getFirstInsertionPt());<br>- PHINode *NewPhi = Builder.CreatePHI(VecTy, PH->getNumIncomingValues());<br>- VectorizedValues[VL0] = NewPhi;<br>
+ case Instruction::PHI: {<br>+ PHINode *PH = dyn_cast<PHINode>(VL0);<br>+ Builder.SetInsertPoint(PH->getParent()->getFirstInsertionPt());<br>+ PHINode *NewPhi = Builder.CreatePHI(VecTy, PH->getNumIncomingValues());<br>
+ E->VectorizedValue = NewPhi;<br>+<br>+ for (unsigned i = 0, e = PH->getNumIncomingValues(); i < e; ++i) {<br>+ ValueList Operands;<br>+ BasicBlock *IBB = PH->getIncomingBlock(i);<br>
+<br>
+ // Prepare the operand vector.<br>+ for (unsigned j = 0; j < E->Scalars.size(); ++j)<br>+ Operands.push_back(cast<PHINode>(E->Scalars[j])-><br>+ getIncomingValueForBlock(IBB));<br>
+<br>+ Builder.SetInsertPoint(IBB->getTerminator());<br>+ Value *Vec = vectorizeTree(Operands);<br>+ NewPhi->addIncoming(Vec, IBB);<br>+ }<br><br>- for (unsigned i = 0, e = PH->getNumIncomingValues(); i < e; ++i) {<br>
- ValueList Operands;<br>- BasicBlock *IBB = PH->getIncomingBlock(i);<br>+ assert(NewPhi->getNumIncomingValues() == PH->getNumIncomingValues() &&<br>+ "Invalid number of incoming values");<br>
+ return NewPhi;<br>+ }<br><br>- // Prepare the operand vector.<br>- for (unsigned j = 0; j < VL.size(); ++j)<br>- Operands.push_back(cast<PHINode>(VL[j])->getIncomingValueForBlock(IBB));<br>
+ case Instruction::ExtractElement: {<br>+ if (CanReuseExtract(E->Scalars)) {<br>+ Value *V = VL0->getOperand(0);<br>+ E->VectorizedValue = V;<br>+ return V;<br>+ }<br>+ return Gather(E->Scalars, VecTy);<br>
+ }<br>+ case Instruction::ZExt:<br>+ case Instruction::SExt:<br>+ case Instruction::FPToUI:<br>+ case Instruction::FPToSI:<br>+ case Instruction::FPExt:<br>+ case Instruction::PtrToInt:<br>+ case Instruction::IntToPtr:<br>
+ case Instruction::SIToFP:<br>+ case Instruction::UIToFP:<br>+ case Instruction::Trunc:<br>+ case Instruction::FPTrunc:<br>+ case Instruction::BitCast: {<br>+ ValueList INVL;<br>+ for (int i = 0, e = E->Scalars.size(); i < e; ++i)<br>
+ INVL.push_back(cast<Instruction>(E->Scalars[i])->getOperand(0));<br>+<br>+ Builder.SetInsertPoint(getLastInstruction(E->Scalars));<br>+ Value *InVec = vectorizeTree(INVL);<br>+ CastInst *CI = dyn_cast<CastInst>(VL0);<br>
+ Value *V = Builder.CreateCast(CI->getOpcode(), InVec, VecTy);<br>+ E->VectorizedValue = V;<br>+ return V;<br>+ }<br>+ case Instruction::FCmp:<br>+ case Instruction::ICmp: {<br>+ ValueList LHSV, RHSV;<br>
+ for (int i = 0, e = E->Scalars.size(); i < e; ++i) {<br>+ LHSV.push_back(cast<Instruction>(E->Scalars[i])->getOperand(0));<br>+ RHSV.push_back(cast<Instruction>(E->Scalars[i])->getOperand(1));<br>
+ }<br><br>- Builder.SetInsertPoint(IBB->getTerminator());<br>- Value *Vec = vectorizeTree_rec(Operands);<br>- NewPhi->addIncoming(Vec, IBB);<br>- }<br>-<br>- assert(NewPhi->getNumIncomingValues() == PH->getNumIncomingValues() &&<br>
- "Invalid number of incoming values");<br>- return NewPhi;<br>- }<br>-<br>- case Instruction::ExtractElement: {<br>- if (CanReuseExtract(VL, VL.size(), VecTy))<br>- return VL0->getOperand(0);<br>
- return Gather(VL, VecTy);<br>- }<br>- case Instruction::ZExt:<br>- case Instruction::SExt:<br>- case Instruction::FPToUI:<br>- case Instruction::FPToSI:<br>- case Instruction::FPExt:<br>- case Instruction::PtrToInt:<br>
- case Instruction::IntToPtr:<br>- case Instruction::SIToFP:<br>- case Instruction::UIToFP:<br>- case Instruction::Trunc:<br>- case Instruction::FPTrunc:<br>- case Instruction::BitCast: {<br>- ValueList INVL;<br>
- for (int i = 0, e = VL.size(); i < e; ++i)<br>- INVL.push_back(cast<Instruction>(VL[i])->getOperand(0));<br>-<br>- Builder.SetInsertPoint(getLastInstruction(VL));<br>- Value *InVec = vectorizeTree_rec(INVL);<br>
- CastInst *CI = dyn_cast<CastInst>(VL0);<br>- Value *V = Builder.CreateCast(CI->getOpcode(), InVec, VecTy);<br>- VectorizedValues[VL0] = V;<br>- return V;<br>- }<br>- case Instruction::FCmp:<br>- case Instruction::ICmp: {<br>
- // Check that all of the compares have the same predicate.<br>- CmpInst::Predicate P0 = dyn_cast<CmpInst>(VL0)->getPredicate();<br>- for (unsigned i = 1, e = VL.size(); i < e; ++i) {<br>- CmpInst *Cmp = cast<CmpInst>(VL[i]);<br>
- if (Cmp->getPredicate() != P0)<br>- return Gather(VL, VecTy);<br>- }<br>-<br>- ValueList LHSV, RHSV;<br>- for (int i = 0, e = VL.size(); i < e; ++i) {<br>- LHSV.push_back(cast<Instruction>(VL[i])->getOperand(0));<br>
- RHSV.push_back(cast<Instruction>(VL[i])->getOperand(1));<br>- }<br>-<br>- Builder.SetInsertPoint(getLastInstruction(VL));<br>- Value *L = vectorizeTree_rec(LHSV);<br>- Value *R = vectorizeTree_rec(RHSV);<br>
- Value *V;<br>-<br>- if (Opcode == Instruction::FCmp)<br>- V = Builder.CreateFCmp(P0, L, R);<br>- else<br>- V = Builder.CreateICmp(P0, L, R);<br>-<br>- VectorizedValues[VL0] = V;<br>- return V;<br>
- }<br>- case Instruction::Select: {<br>- ValueList TrueVec, FalseVec, CondVec;<br>- for (int i = 0, e = VL.size(); i < e; ++i) {<br>- CondVec.push_back(cast<Instruction>(VL[i])->getOperand(0));<br>
- TrueVec.push_back(cast<Instruction>(VL[i])->getOperand(1));<br>- FalseVec.push_back(cast<Instruction>(VL[i])->getOperand(2));<br>- }<br>-<br>- Builder.SetInsertPoint(getLastInstruction(VL));<br>
- Value *True = vectorizeTree_rec(TrueVec);<br>- Value *False = vectorizeTree_rec(FalseVec);<br>- Value *Cond = vectorizeTree_rec(CondVec);<br>- Value *V = Builder.CreateSelect(Cond, True, False);<br>- VectorizedValues[VL0] = V;<br>
- return V;<br>- }<br>- case Instruction::Add:<br>- case Instruction::FAdd:<br>- case Instruction::Sub:<br>- case Instruction::FSub:<br>- case Instruction::Mul:<br>- case Instruction::FMul:<br>- case Instruction::UDiv:<br>
- case Instruction::SDiv:<br>- case Instruction::FDiv:<br>- case Instruction::URem:<br>- case Instruction::SRem:<br>- case Instruction::FRem:<br>- case Instruction::Shl:<br>- case Instruction::LShr:<br>- case Instruction::AShr:<br>
- case Instruction::And:<br>- case Instruction::Or:<br>- case Instruction::Xor: {<br>- ValueList LHSVL, RHSVL;<br>- for (int i = 0, e = VL.size(); i < e; ++i) {<br>- LHSVL.push_back(cast<Instruction>(VL[i])->getOperand(0));<br>
- RHSVL.push_back(cast<Instruction>(VL[i])->getOperand(1));<br>- }<br>-<br>- Builder.SetInsertPoint(getLastInstruction(VL));<br>- Value *LHS = vectorizeTree_rec(LHSVL);<br>- Value *RHS = vectorizeTree_rec(RHSVL);<br>
-<br>- if (LHS == RHS) {<br>- assert((VL0->getOperand(0) == VL0->getOperand(1)) && "Invalid order");<br>- }<br>-<br>- BinaryOperator *BinOp = cast<BinaryOperator>(VL0);<br>- Value *V = Builder.CreateBinOp(BinOp->getOpcode(), LHS, RHS);<br>
- VectorizedValues[VL0] = V;<br>- return V;<br>- }<br>- case Instruction::Load: {<br>- // Check if all of the loads are consecutive.<br>- for (unsigned i = 1, e = VL.size(); i < e; ++i)<br>- if (!isConsecutiveAccess(VL[i - 1], VL[i]))<br>
- return Gather(VL, VecTy);<br>-<br>- // Loads are inserted at the head of the tree because we don't want to<br>- // sink them all the way down past store instructions.<br>- Builder.SetInsertPoint(getLastInstruction(VL));<br>
- LoadInst *LI = cast<LoadInst>(VL0);<br>- Value *VecPtr =<br>- Builder.CreateBitCast(LI->getPointerOperand(), VecTy->getPointerTo());<br>- unsigned Alignment = LI->getAlignment();<br>- LI = Builder.CreateLoad(VecPtr);<br>
- LI->setAlignment(Alignment);<br>-<br>- VectorizedValues[VL0] = LI;<br>- return LI;<br>- }<br>- case Instruction::Store: {<br>- StoreInst *SI = cast<StoreInst>(VL0);<br>- unsigned Alignment = SI->getAlignment();<br>
-<br>- ValueList ValueOp;<br>- for (int i = 0, e = VL.size(); i < e; ++i)<br>- ValueOp.push_back(cast<StoreInst>(VL[i])->getValueOperand());<br>-<br>- Value *VecValue = vectorizeTree_rec(ValueOp);<br>
-<br>- Builder.SetInsertPoint(getLastInstruction(VL));<br>- Value *VecPtr =<br>- Builder.CreateBitCast(SI->getPointerOperand(), VecTy->getPointerTo());<br>- Builder.CreateStore(VecValue, VecPtr)->setAlignment(Alignment);<br>
- return 0;<br>- }<br>- default:<br>- return Gather(VL, VecTy);<br>+ Builder.SetInsertPoint(getLastInstruction(E->Scalars));<br>+ Value *L = vectorizeTree(LHSV);<br>+ Value *R = vectorizeTree(RHSV);<br>
+ Value *V;<br>+<br>+ CmpInst::Predicate P0 = dyn_cast<CmpInst>(VL0)->getPredicate();<br>+ if (Opcode == Instruction::FCmp)<br>+ V = Builder.CreateFCmp(P0, L, R);<br>+ else<br>+ V = Builder.CreateICmp(P0, L, R);<br>
+<br>+ E->VectorizedValue = V;<br>+ return V;<br>+ }<br>+ case Instruction::Select: {<br>+ ValueList TrueVec, FalseVec, CondVec;<br>+ for (int i = 0, e = E->Scalars.size(); i < e; ++i) {<br>
+ CondVec.push_back(cast<Instruction>(E->Scalars[i])->getOperand(0));<br>+ TrueVec.push_back(cast<Instruction>(E->Scalars[i])->getOperand(1));<br>+ FalseVec.push_back(cast<Instruction>(E->Scalars[i])->getOperand(2));<br>
+ }<br>+<br>+ Builder.SetInsertPoint(getLastInstruction(E->Scalars));<br>+ Value *Cond = vectorizeTree(CondVec);<br>+ Value *True = vectorizeTree(TrueVec);<br>+ Value *False = vectorizeTree(FalseVec);<br>
+ Value *V = Builder.CreateSelect(Cond, True, False);<br>+ E->VectorizedValue = V;<br>+ return V;<br>+ }<br>+ case Instruction::Add:<br>+ case Instruction::FAdd:<br>+ case Instruction::Sub:<br>
+ case Instruction::FSub:<br>+ case Instruction::Mul:<br>+ case Instruction::FMul:<br>+ case Instruction::UDiv:<br>+ case Instruction::SDiv:<br>+ case Instruction::FDiv:<br>+ case Instruction::URem:<br>
+ case Instruction::SRem:<br>+ case Instruction::FRem:<br>+ case Instruction::Shl:<br>+ case Instruction::LShr:<br>+ case Instruction::AShr:<br>+ case Instruction::And:<br>+ case Instruction::Or:<br>
+ case Instruction::Xor: {<br>
+ ValueList LHSVL, RHSVL;<br>+ for (int i = 0, e = E->Scalars.size(); i < e; ++i) {<br>+ LHSVL.push_back(cast<Instruction>(E->Scalars[i])->getOperand(0));<br>+ RHSVL.push_back(cast<Instruction>(E->Scalars[i])->getOperand(1));<br>
+ }<br>+<br>+ Builder.SetInsertPoint(getLastInstruction(E->Scalars));<br>+ Value *LHS = vectorizeTree(LHSVL);<br>+ Value *RHS = vectorizeTree(RHSVL);<br>+<br>+ if (LHS == RHS && isa<Instruction>(LHS)) {<br>
+ assert((VL0->getOperand(0) == VL0->getOperand(1)) && "Invalid order");<br>+ }<br>+<br>+ BinaryOperator *BinOp = cast<BinaryOperator>(VL0);<br>+ Value *V = Builder.CreateBinOp(BinOp->getOpcode(), LHS, RHS);<br>
+ E->VectorizedValue = V;<br>+ return V;<br>+ }<br>+ case Instruction::Load: {<br>+ // Loads are inserted at the head of the tree because we don't want to<br>+ // sink them all the way down past store instructions.<br>
+ Builder.SetInsertPoint(getLastInstruction(E->Scalars));<br>+ LoadInst *LI = cast<LoadInst>(VL0);<br>+ Value *VecPtr =<br>+ Builder.CreateBitCast(LI->getPointerOperand(), VecTy->getPointerTo());<br>
+ unsigned Alignment = LI->getAlignment();<br>+ LI = Builder.CreateLoad(VecPtr);<br>+ LI->setAlignment(Alignment);<br>+ E->VectorizedValue = LI;<br>+ return LI;<br>+ }<br>+ case Instruction::Store: {<br>
+ StoreInst *SI = cast<StoreInst>(VL0);<br>+ unsigned Alignment = SI->getAlignment();<br>+<br>+ ValueList ValueOp;<br>+ for (int i = 0, e = E->Scalars.size(); i < e; ++i)<br>+ ValueOp.push_back(cast<StoreInst>(E->Scalars[i])->getValueOperand());<br>
+<br>+ Builder.SetInsertPoint(getLastInstruction(E->Scalars));<br>+ Value *VecValue = vectorizeTree(ValueOp);<br>+ Value *VecPtr =<br>+ Builder.CreateBitCast(SI->getPointerOperand(), VecTy->getPointerTo());<br>
+ StoreInst *S = Builder.CreateStore(VecValue, VecPtr);<br>+ S->setAlignment(Alignment);<br>+ E->VectorizedValue = S;<br>+ return S;<br>+ }<br>+ default:<br>+ llvm_unreachable("unknown inst");<br>
}<br>+ return 0;<br> }<br><br>-Value *FuncSLP::vectorizeTree(ArrayRef<Value *> VL) {<br>- Builder.SetInsertPoint(getLastInstruction(VL));<br>- Value *V = vectorizeTree_rec(VL);<br>+void BoUpSLP::vectorizeTree() {<br>
+ vectorizeTree(&VectorizableTree[0]);<br><br>- DEBUG(dbgs() << "SLP: Placing 'extracts'\n");<br>- for (SetVector<Instruction*>::iterator it = ExtractedLane.begin(), e =<br>- ExtractedLane.end(); it != e; ++it) {<br>
- Instruction *Scalar = *it;<br>- DEBUG(dbgs() << "SLP: Looking at " << *Scalar);<br>+ // For each vectorized value:<br>+ for (int EIdx = 0, EE = VectorizableTree.size(); EIdx < EE; ++EIdx) {<br>
+ TreeEntry *Entry = &VectorizableTree[EIdx];<br>+<br>+ // For each lane:<br>+ for (int Lane = 0, LE = Entry->Scalars.size(); Lane != LE; ++Lane) {<br>+ Value *Scalar = Entry->Scalars[Lane];<br><br>
- if (!Scalar)<br>- continue;<br>+ // No need to handle users of gathered values.<br>+ if (Entry->NeedToGather)<br>+ continue;<br><br>- Instruction *Loc = 0;<br>+ Value *Vec = Entry->VectorizedValue;<br>
+ assert(Vec && "Can't find vectorizable value");<br><br>- assert(MultiUserVals.count(Scalar) && "Can't find the lane to extract");<br>- Instruction *Leader = MultiUserVals[Scalar].Leader;<br>
+ SmallVector<User*, 16> Users(Scalar->use_begin(), Scalar->use_end());<br><br>- // This value is gathered so we don't need to extract from anywhere.<br>- if (!VectorizedValues.count(Leader))<br>
- continue;<br>+ for (SmallVector<User*, 16>::iterator User = Users.begin(),<br>+ UE = Users.end(); User != UE; ++User) {<br>+ DEBUG(dbgs() << "SLP: \tupdating user " << **User << ".\n");<br>
+<br>+ bool Gathered = MustGather.count(*User);<br>+<br>+ // Skip in-tree scalars that become vectors.<br>+ if (ScalarToTreeEntry.count(*User) && !Gathered) {<br>+ DEBUG(dbgs() << "SLP: \tUser will be removed soon:" <<<br>
+ **User << ".\n");<br>+ int Idx = ScalarToTreeEntry[*User]; (void) Idx;<br>+ assert(!VectorizableTree[Idx].NeedToGather && "bad state ?");<br>+ continue;<br>
+ }<br><br>- Value *Vec = VectorizedValues[Leader];<br>- if (PHINode *PN = dyn_cast<PHINode>(Vec)) {<br>- Loc = PN->getParent()->getFirstInsertionPt();<br>- } else {<br>- Instruction *I = cast<Instruction>(Vec);<br>
- BasicBlock::iterator L = *I;<br>- Loc = ++L;<br>- }<br>+ if (!isa<Instruction>(*User))<br>+ continue;<br><br>- Builder.SetInsertPoint(Loc);<br>- assert(LaneMap.count(Scalar) && "Can't find the extracted lane.");<br>
- int Lane = LaneMap[Scalar];<br>- Value *Idx = Builder.getInt32(Lane);<br>- Value *Extract = Builder.CreateExtractElement(Vec, Idx);<br>+ // Generate extracts for out-of-tree users.<br>+ // Find the insertion point for the extractelement lane.<br>
+ Instruction *Loc = 0;<br>+ if (PHINode *PN = dyn_cast<PHINode>(Vec)) {<br>+ Loc = PN->getParent()->getFirstInsertionPt();<br>+ } else if (Instruction *Iv = dyn_cast<Instruction>(Vec)){<br>
+ Loc = ++((BasicBlock::iterator)*Iv);<br>+ } else {<br>+ Loc = F->getEntryBlock().begin();<br>+ }<br><br>- bool Replaced = false;;<br>- for (Value::use_iterator U = Scalar->use_begin(), UE = Scalar->use_end();<br>
- U != UE; ++U) {<br>- Instruction *UI = cast<Instruction>(*U);<br>- // No need to replace instructions that are inside our lane map.<br>- if (LaneMap.count(UI))<br>- continue;<br>+ Builder.SetInsertPoint(Loc);<br>
+ Value *Ex = Builder.CreateExtractElement(Vec, Builder.getInt32(Lane));<br>+ (*User)->replaceUsesOfWith(Scalar, Ex);<br>+ DEBUG(dbgs() << "SLP: \tupdated user:" << **User << ".\n");<br>
+ }<br><br>- UI->replaceUsesOfWith(Scalar ,Extract);<br>- Replaced = true;<br>+ Type *Ty = Scalar->getType();<br>+ if (!Ty->isVoidTy()) {<br>+ for (Value::use_iterator User = Scalar->use_begin(), UE = Scalar->use_end();<br>
+ User != UE; ++User) {<br>+ DEBUG(dbgs() << "SLP: \tvalidating user:" << **User << ".\n");<br>+ assert(!MustGather.count(*User) &&<br>+ "Replacing gathered value with undef");<br>
+ assert(ScalarToTreeEntry.count(*User) &&<br>+ "Replacing out-of-tree value with undef");<br>+ }<br>+ Value *Undef = UndefValue::get(Ty);<br>+ Scalar->replaceAllUsesWith(Undef);<br>
+ }<br>+ DEBUG(dbgs() << "SLP: \tErasing scalar:" << *Scalar << ".\n");<br>+ cast<Instruction>(Scalar)->eraseFromParent();<br> }<br>- assert(Replaced && "Must replace at least one outside user");<br>
- (void)Replaced;<br> }<br><br>- // We moved some instructions around. We have to number them again<br>- // before we can do any analysis.<br>- forgetNumbering();<br>-<br>- // Clear the state.<br>- MustGather.clear();<br>
- VisitedPHIs.clear();<br>- VectorizedValues.clear();<br>- MemBarrierIgnoreList.clear();<br>- return V;<br>-}<br>-<br>-Value *FuncSLP::vectorizeArith(ArrayRef<Value *> Operands) {<br>- Instruction *LastInst = getLastInstruction(Operands);<br>
- Value *Vec = vectorizeTree(Operands);<br>- // After vectorizing the operands we need to generate extractelement<br>- // instructions and replace all of the uses of the scalar values with<br>- // the values that we extracted from the vectorized tree.<br>
- Builder.SetInsertPoint(LastInst);<br>- for (unsigned i = 0, e = Operands.size(); i != e; ++i) {<br>- Value *S = Builder.CreateExtractElement(Vec, Builder.getInt32(i));<br>- Operands[i]->replaceAllUsesWith(S);<br>
+ for (Function::iterator it = F->begin(), e = F->end(); it != e; ++it) {<br>+ BlocksNumbers[it].forget();<br> }<br>-<br>- forgetNumbering();<br>- return Vec;<br> }<br><br>-void FuncSLP::optimizeGatherSequence() {<br>
+void BoUpSLP::optimizeGatherSequence() {<br>+ DEBUG(dbgs() << "SLP: Optimizing " << GatherSeq.size()<br>+ << " gather sequences instructions.\n");<br> // LICM InsertElementInst sequences.<br>
for (SetVector<Instruction *>::iterator it = GatherSeq.begin(),<br> <span> </span>e = GatherSeq.end(); it != e; ++it) {<br>@@ -1449,8 +1367,6 @@ void FuncSLP::optimizeGatherSequence() {<br> assert((*v)->getNumUses() == 0 && "Can't remove instructions with uses");<br>
(*v)->eraseFromParent();<br> }<br>-<br>- forgetNumbering();<br> }<br><br> /// The SLPVectorizer Pass.<br>@@ -1492,7 +1408,7 @@ struct SLPVectorizer : public FunctionPa<br><br> // Use the bollom up slp vectorizer to construct chains that start with<br>
// he store instructions.<br>- FuncSLP R(&F, SE, DL, TTI, AA, LI, DT);<br>+ BoUpSLP R(&F, SE, DL, TTI, AA, LI, DT);<br><br> // Scan the blocks in the function in post order.<br> for (po_iterator<BasicBlock*> it = po_begin(&F.getEntryBlock()),<br>
@@ -1536,31 +1452,146 @@ private:<br> /// object. We sort the stores to their base objects to reduce the cost of the<br> /// quadratic search on the stores. TODO: We can further reduce this cost<br> /// if we flush the chain creation every time we run into a memory barrier.<br>
- unsigned collectStores(BasicBlock *BB, FuncSLP &R);<br>+ unsigned collectStores(BasicBlock *BB, BoUpSLP &R);<br><br> /// \brief Try to vectorize a chain that starts at two arithmetic instrs.<br>- bool tryToVectorizePair(Value *A, Value *B, FuncSLP &R);<br>
+ bool tryToVectorizePair(Value *A, Value *B, BoUpSLP &R);<br><br> /// \brief Try to vectorize a list of operands. If \p NeedExtracts is true<br> /// then we calculate the cost of extracting the scalars from the vector.<br>
/// \returns true if a value was vectorized.<br>- bool tryToVectorizeList(ArrayRef<Value *> VL, FuncSLP &R, bool NeedExtracts);<br>+ bool tryToVectorizeList(ArrayRef<Value *> VL, BoUpSLP &R, bool NeedExtracts);<br>
<br> /// \brief Try to vectorize a chain that may start at the operands of \V;<br>- bool tryToVectorize(BinaryOperator *V, FuncSLP &R);<br>+ bool tryToVectorize(BinaryOperator *V, BoUpSLP &R);<br><br> /// \brief Vectorize the stores that were collected in StoreRefs.<br>
- bool vectorizeStoreChains(FuncSLP &R);<br>+ bool vectorizeStoreChains(BoUpSLP &R);<br><br> /// \brief Scan the basic block and look for patterns that are likely to start<br> /// a vectorization chain.<br>
- bool vectorizeChainsInBlock(BasicBlock *BB, FuncSLP &R);<br>
+ bool vectorizeChainsInBlock(BasicBlock *BB, BoUpSLP &R);<br>+<br>+ bool vectorizeStoreChain(ArrayRef<Value *> Chain, int CostThreshold,<br>+ BoUpSLP &R);<br><br>+ bool vectorizeStores(ArrayRef<StoreInst *> Stores, int costThreshold,<br>
+ BoUpSLP &R);<br> private:<br> StoreListMap StoreRefs;<br> };<br><br>-unsigned SLPVectorizer::collectStores(BasicBlock *BB, FuncSLP &R) {<br>+bool SLPVectorizer::vectorizeStoreChain(ArrayRef<Value *> Chain,<br>
+ int CostThreshold, BoUpSLP &R) {<br>+ unsigned ChainLen = Chain.size();<br>+ DEBUG(dbgs() << "SLP: Analyzing a store chain of length " << ChainLen<br>
+ << "\n");<br>+ Type *StoreTy = cast<StoreInst>(Chain[0])->getValueOperand()->getType();<br>+ unsigned Sz = DL->getTypeSizeInBits(StoreTy);<br>+ unsigned VF = MinVecRegSize / Sz;<br>
+<br>+ if (!isPowerOf2_32(Sz) || VF < 2)<br>+ return false;<br>+<br>+ bool Changed = false;<br>+ // Look for profitable vectorizable trees at all offsets, starting at zero.<br>+ for (unsigned i = 0, e = ChainLen; i < e; ++i) {<br>
+ if (i + VF > e)<br>+ break;<br>+ DEBUG(dbgs() << "SLP: Analyzing " << VF << " stores at offset " << i<br>+ << "\n");<br>+ ArrayRef<Value *> Operands = Chain.slice(i, VF);<br>
+<br>+ R.buildTree(Operands);<br>+<br>+ int Cost = R.getTreeCost();<br>+<br>+ DEBUG(dbgs() << "SLP: Found cost=" << Cost << " for VF=" << VF << "\n");<br>
+ if (Cost < CostThreshold) {<br>+ DEBUG(dbgs() << "SLP: Decided to vectorize cost=" << Cost << "\n");<br>+ R.vectorizeTree();<br>+<br>+ // Move to the next bundle.<br>
+ i += VF - 1;<br>+ Changed = true;<br>+ }<br>+ }<br>+<br>+ if (Changed || ChainLen > VF)<br>+ return Changed;<br>+<br>+ // Handle short chains. This helps us catch types such as <3 x float> that<br>
+ // are smaller than vector size.<br>+ R.buildTree(Chain);<br>+<br>+ int Cost = R.getTreeCost();<br>+<br>+ if (Cost < CostThreshold) {<br>+ DEBUG(dbgs() << "SLP: Found store chain cost = " << Cost<br>
+ << " for size = " << ChainLen << "\n");<br>+ R.vectorizeTree();<br>+ return true;<br>+ }<br>+<br>+ return false;<br>+}<br>+<br>+bool SLPVectorizer::vectorizeStores(ArrayRef<StoreInst *> Stores,<br>
+ int costThreshold, BoUpSLP &R) {<br>+ SetVector<Value *> Heads, Tails;<br>+ SmallDenseMap<Value *, Value *> ConsecutiveChain;<br>+<br>+ // We may run into multiple chains that merge into a single chain. We mark the<br>
+ // stores that we vectorized so that we don't visit the same store twice.<br>+ BoUpSLP::ValueSet VectorizedStores;<br>+ bool Changed = false;<br>+<br>+ // Do a quadratic search on all of the given stores and find<br>
+ // all of the pairs of loads that follow each other.<br>+ for (unsigned i = 0, e = Stores.size(); i < e; ++i)<br>+ for (unsigned j = 0; j < e; ++j) {<br>+ if (i == j)<br>+ continue;<br>+<br>+ if (R.isConsecutiveAccess(Stores[i], Stores[j])) {<br>
+ Tails.insert(Stores[j]);<br>+ Heads.insert(Stores[i]);<br>+ ConsecutiveChain[Stores[i]] = Stores[j];<br>+ }<br>+ }<br>+<br>+ // For stores that start but don't end a link in the chain:<br>
+ for (SetVector<Value *>::iterator it = Heads.begin(), e = Heads.end();<br>+ it != e; ++it) {<br>+ if (Tails.count(*it))<br>+ continue;<br>+<br>+ // We found a store instr that starts a chain. Now follow the chain and try<br>
+ // to vectorize it.<br>+ BoUpSLP::ValueList Operands;<br>+ Value *I = *it;<br>+ // Collect the chain into a list.<br>+ while (Tails.count(I) || Heads.count(I)) {<br>+ if (VectorizedStores.count(I))<br>
+ break;<br>+ Operands.push_back(I);<br>+ // Move to the next value in the chain.<br>+ I = ConsecutiveChain[I];<br>+ }<br>+<br>+ bool Vectorized = vectorizeStoreChain(Operands, costThreshold, R);<br>
+<br>+ // Mark the vectorized stores so that we don't vectorize them again.<br>+ if (Vectorized)<br>+ VectorizedStores.insert(Operands.begin(), Operands.end());<br>+ Changed |= Vectorized;<br>+ }<br>+<br>
+ return Changed;<br>+}<br>+<br>+<br>+unsigned SLPVectorizer::collectStores(BasicBlock *BB, BoUpSLP &R) {<br> unsigned count = 0;<br> StoreRefs.clear();<br> for (BasicBlock::iterator it = BB->begin(), e = BB->end(); it != e; ++it) {<br>
@@ -1585,14 +1616,14 @@ unsigned SLPVectorizer::collectStores(Ba<br> return count;<br> }<br><br>-bool SLPVectorizer::tryToVectorizePair(Value *A, Value *B, FuncSLP &R) {<br>+bool SLPVectorizer::tryToVectorizePair(Value *A, Value *B, BoUpSLP &R) {<br>
if (!A || !B)<br> return false;<br> Value *VL[] = { A, B };<br> return tryToVectorizeList(VL, R, true);<br> }<br><br>-bool SLPVectorizer::tryToVectorizeList(ArrayRef<Value *> VL, FuncSLP &R,<br>+bool SLPVectorizer::tryToVectorizeList(ArrayRef<Value *> VL, BoUpSLP &R,<br>
<span> </span>bool NeedExtracts) {<br> if (VL.size() < 2)<br> return false;<br>@@ -1615,9 +1646,8 @@ bool SLPVectorizer::tryToVectorizeList(A<br> return 0;<br> }<br>
<br>- int Cost = R.getTreeCost(VL);<br>- if (Cost == FuncSLP::MAX_COST)<br>- return false;<br>+ R.buildTree(VL);<br>+ int Cost = R.getTreeCost();<br><br> int ExtrCost = NeedExtracts ? R.getGatherCost(VL) : 0;<br>
DEBUG(dbgs() << "SLP: Cost of pair:" << Cost<br>@@ -1625,11 +1655,11 @@ bool SLPVectorizer::tryToVectorizeList(A<br> if ((Cost + ExtrCost) >= -SLPCostThreshold)<br> return false;<br> DEBUG(dbgs() << "SLP: Vectorizing pair.\n");<br>
- R.vectorizeArith(VL);<br>+ R.vectorizeTree();<br> return true;<br> }<br><br>-bool SLPVectorizer::tryToVectorize(BinaryOperator *V, FuncSLP &R) {<br>+bool SLPVectorizer::tryToVectorize(BinaryOperator *V, BoUpSLP &R) {<br>
if (!V)<br> return false;<br><br>@@ -1669,7 +1699,7 @@ bool SLPVectorizer::tryToVectorize(Binar<br> return 0;<br> }<br><br>-bool SLPVectorizer::vectorizeChainsInBlock(BasicBlock *BB, FuncSLP &R) {<br>+bool SLPVectorizer::vectorizeChainsInBlock(BasicBlock *BB, BoUpSLP &R) {<br>
bool Changed = false;<br> for (BasicBlock::iterator it = BB->begin(), e = BB->end(); it != e; ++it) {<br> if (isa<DbgInfoIntrinsic>(it))<br>@@ -1737,7 +1767,7 @@ bool SLPVectorizer::vectorizeChainsInBlo<br>
return Changed;<br> }<br><br>-bool SLPVectorizer::vectorizeStoreChains(FuncSLP &R) {<br>+bool SLPVectorizer::vectorizeStoreChains(BoUpSLP &R) {<br> bool Changed = false;<br> // Attempt to sort and vectorize each of the store-groups.<br>
for (StoreListMap::iterator it = StoreRefs.begin(), e = StoreRefs.end();<br>@@ -1748,7 +1778,7 @@ bool SLPVectorizer::vectorizeStoreChains<br> DEBUG(dbgs() << "SLP: Analyzing a store chain of length "<br>
<span> </span><< it->second.size() << ".\n");<br><br>- Changed |= R.vectorizeStores(it->second, -SLPCostThreshold);<br>+ Changed |= vectorizeStores(it->second, -SLPCostThreshold, R);<br>
}<br> return Changed;<br> }<br><br>Added: llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_7zip.ll<br>URL:<span> </span><a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_7zip.ll?rev=185774&view=auto" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_7zip.ll?rev=185774&view=auto</a><br>
==============================================================================<br>--- llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_7zip.ll (added)<br>+++ llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_7zip.ll Sun Jul 7 01:57:07 2013<br>
@@ -0,0 +1,38 @@<br>+; RUN: opt < %s -basicaa -slp-vectorizer -dce -S -mtriple=x86_64-apple-macosx10.8.0 -mcpu=corei7<br>+<br>+target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"<br>
+target triple = "x86_64-apple-macosx10.8.0"<br>+<br>+%struct.CLzmaDec.1.28.55.82.103.124.145.166.181.196.229.259.334 = type { %struct._CLzmaProps.0.27.54.81.102.123.144.165.180.195.228.258.333, i16*, i8*, i8*, i32, i32, i64, i64, i32, i32, i32, [4 x i32], i32, i32, i32, i32, i32, [20 x i8] }<br>
+%struct._CLzmaProps.0.27.54.81.102.123.144.165.180.195.228.258.333 = type { i32, i32, i32, i32 }<br>+<br>+define fastcc void @LzmaDec_DecodeReal2(%struct.CLzmaDec.1.28.55.82.103.124.145.166.181.196.229.259.334* %p) {<br>
+entry:<br>+ %range20.i = getelementptr inbounds %struct.CLzmaDec.1.28.55.82.103.124.145.166.181.196.229.259.334* %p, i64 0, i32 4<br>+ %code21.i = getelementptr inbounds %struct.CLzmaDec.1.28.55.82.103.124.145.166.181.196.229.259.334* %p, i64 0, i32 5<br>
+ br label %do.body66.i<br>+<br>+do.body66.i: ; preds = %do.cond.i, %entry<br>+ %range.2.i = phi i32 [ %range.4.i, %do.cond.i ], [ undef, %entry ]<br>+ %code.2.i = phi i32 [ %code.4.i, %do.cond.i ], [ undef, %entry ]<br>
+ %.range.2.i = select i1 undef, i32 undef, i32 %range.2.i<br>+ %.code.2.i = select i1 undef, i32 undef, i32 %code.2.i<br>+ br i1 undef, label %do.cond.i, label %if.else.i<br>+<br>+if.else.i: ; preds = %do.body66.i<br>
+ %sub91.i = sub i32 %.range.2.i, undef<br>+ %sub92.i = sub i32 %.code.2.i, undef<br>+ br label %do.cond.i<br>+<br>+do.cond.i: ; preds = %if.else.i, %do.body66.i<br>+ %range.4.i = phi i32 [ %sub91.i, %if.else.i ], [ undef, %do.body66.i ]<br>
+ %code.4.i = phi i32 [ %sub92.i, %if.else.i ], [ %.code.2.i, %do.body66.i ]<br>+ br i1 undef, label %do.body66.i, label %do.end1006.i<br>+<br>+do.end1006.i: ; preds = %do.cond.i<br>
+ %.range.4.i = select i1 undef, i32 undef, i32 %range.4.i<br>
+ %.code.4.i = select i1 undef, i32 undef, i32 %code.4.i<br>+ store i32 %.range.4.i, i32* %range20.i, align 4<br>+ store i32 %.code.4.i, i32* %code21.i, align 4<br>+ ret void<br>+}<br><br>Added: llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_bullet.ll<br>
URL:<span> </span><a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_bullet.ll?rev=185774&view=auto" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_bullet.ll?rev=185774&view=auto</a><br>
==============================================================================<br>--- llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_bullet.ll (added)<br>+++ llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_bullet.ll Sun Jul 7 01:57:07 2013<br>
@@ -0,0 +1,38 @@<br>+; RUN: opt < %s -basicaa -slp-vectorizer -dce -S -mtriple=x86_64-apple-macosx10.8.0 -mcpu=corei7<br>+<br>+target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"<br>
+target triple = "x86_64-apple-macosx10.8.0"<br>+<br>+%"struct.btTypedConstraint::btConstraintInfo1.17.157.357.417.477.960" = type { i32, i32 }<br>+<br>+define void @_ZN23btGeneric6DofConstraint8getInfo1EPN17btTypedConstraint17btConstraintInfo1E(%"struct.btTypedConstraint::btConstraintInfo1.17.157.357.417.477.960"* nocapture %info) {<br>
+entry:<br>+ br i1 undef, label %if.else, label %if.then<br>+<br>+if.then: ; preds = %entry<br>+ ret void<br>+<br>+if.else: ; preds = %entry<br>
+ %m_numConstraintRows4 = getelementptr inbounds %"struct.btTypedConstraint::btConstraintInfo1.17.157.357.417.477.960"* %info, i64 0, i32 0<br>+ %nub5 = getelementptr inbounds %"struct.btTypedConstraint::btConstraintInfo1.17.157.357.417.477.960"* %info, i64 0, i32 1<br>
+ br i1 undef, label %land.lhs.true.i.1, label %if.then7.1<br>+<br>+land.lhs.true.i.1: ; preds = %if.else<br>+ br i1 undef, label %for.inc.1, label %if.then7.1<br>+<br>+if.then7.1: ; preds = %land.lhs.true.i.1, %if.else<br>
+ %inc.1 = add nsw i32 0, 1<br>+ store i32 %inc.1, i32* %m_numConstraintRows4, align 4<br>+ %dec.1 = add nsw i32 6, -1<br>+ store i32 %dec.1, i32* %nub5, align 4<br>+ br label %for.inc.1<br>+<br>+for.inc.1: ; preds = %if.then7.1, %land.lhs.true.i.1<br>
+ %0 = phi i32 [ %dec.1, %if.then7.1 ], [ 6, %land.lhs.true.i.1 ]<br>+ %1 = phi i32 [ %inc.1, %if.then7.1 ], [ 0, %land.lhs.true.i.1 ]<br>+ %inc.2 = add nsw i32 %1, 1<br>+ store i32 %inc.2, i32* %m_numConstraintRows4, align 4<br>
+ %dec.2 = add nsw i32 %0, -1<br>+ store i32 %dec.2, i32* %nub5, align 4<br>+ unreachable<br>+}<br><br>Added: llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_bullet2.ll<br>URL:<span> </span><a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_bullet2.ll?rev=185774&view=auto" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_bullet2.ll?rev=185774&view=auto</a><br>
==============================================================================<br>--- llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_bullet2.ll (added)<br>+++ llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_bullet2.ll Sun Jul 7 01:57:07 2013<br>
@@ -0,0 +1,38 @@<br>+; RUN: opt < %s -basicaa -slp-vectorizer -dce -S -mtriple=x86_64-apple-macosx10.8.0 -mcpu=corei7<br>+<br>+target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"<br>
+target triple = "x86_64-apple-macosx10.8.0"<br>+<br>+%class.GIM_TRIANGLE_CALCULATION_CACHE.9.34.69.94.119.144.179.189.264.284.332 = type { float, [3 x %class.btVector3.5.30.65.90.115.140.175.185.260.280.330], [3 x %class.btVector3.5.30.65.90.115.140.175.185.260.280.330], %class.btVector4.7.32.67.92.117.142.177.187.262.282.331, %class.btVector4.7.32.67.92.117.142.177.187.262.282.331, %class.btVector3.5.30.65.90.115.140.175.185.260.280.330, %class.btVector3.5.30.65.90.115.140.175.185.260.280.330, %class.btVector3.5.30.65.90.115.140.175.185.260.280.330, %class.btVector3.5.30.65.90.115.140.175.185.260.280.330, [4 x float], float, float, [4 x float], float, float, [16 x %class.btVector3.5.30.65.90.115.140.175.185.260.280.330], [16 x %class.btVector3.5.30.65.90.115.140.175.185.260.280.330], [16 x %class.btVector3.5.30.65.90.115.140.175.185.260.280.330] }<br>
+%class.btVector3.5.30.65.90.115.140.175.185.260.280.330 = type { [4 x float] }<br>+%class.btVector4.7.32.67.92.117.142.177.187.262.282.331 = type { %class.btVector3.5.30.65.90.115.140.175.185.260.280.330 }<br>+<br>+define void @_ZN30GIM_TRIANGLE_CALCULATION_CACHE18triangle_collisionERK9btVector3S2_S2_fS2_S2_S2_fR25GIM_TRIANGLE_CONTACT_DATA(%class.GIM_TRIANGLE_CALCULATION_CACHE.9.34.69.94.119.144.179.189.264.284.332* %this) {<br>
+entry:<br>+ %arrayidx26 = getelementptr inbounds %class.GIM_TRIANGLE_CALCULATION_CACHE.9.34.69.94.119.144.179.189.264.284.332* %this, i64 0, i32 2, i64 0, i32 0, i64 1<br>+ %arrayidx36 = getelementptr inbounds %class.GIM_TRIANGLE_CALCULATION_CACHE.9.34.69.94.119.144.179.189.264.284.332* %this, i64 0, i32 2, i64 0, i32 0, i64 2<br>
+ %0 = load float* %arrayidx36, align 4<br>+ %add587 = fadd float undef, undef<br>+ %sub600 = fsub float %add587, undef<br>+ store float %sub600, float* undef, align 4<br>+ %sub613 = fsub float %add587, %sub600<br>+ store float %sub613, float* %arrayidx26, align 4<br>
+ %add626 = fadd float %0, undef<br>+ %sub639 = fsub float %add626, undef<br>+ %sub652 = fsub float %add626, %sub639<br>+ store float %sub652, float* %arrayidx36, align 4<br>+ br i1 undef, label %if.else1609, label %if.then1595<br>
+<br>+if.then1595: ; preds = %entry<br>+ br i1 undef, label %return, label %for.body.lr.ph.i.i1702<br>+<br>+for.body.lr.ph.i.i1702: ; preds = %if.then1595<br>
+ unreachable<br>+<br>+if.else1609: ; preds = %entry<br>+ unreachable<br>+<br>+return: ; preds = %if.then1595<br>+ ret void<br>+}<br>+<br>
<br>
Added: llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_dequeue.ll<br>URL:<span> </span><a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_dequeue.ll?rev=185774&view=auto" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_dequeue.ll?rev=185774&view=auto</a><br>
==============================================================================<br>--- llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_dequeue.ll (added)<br>+++ llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_dequeue.ll Sun Jul 7 01:57:07 2013<br>
@@ -0,0 +1,40 @@<br>+; RUN: opt < %s -basicaa -slp-vectorizer -dce -S -mtriple=x86_64-apple-macosx10.8.0 -mcpu=corei7<br>+<br>+target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"<br>
+target triple = "x86_64-apple-macosx10.8.0"<br>+%"struct.std::_Deque_iterator.4.157.174.208.259.276.344.731" = type { double*, double*, double*, double** }<br>+<br>+; Function Attrs: nounwind ssp uwtable<br>
+define void @_ZSt6uniqueISt15_Deque_iteratorIdRdPdEET_S4_S4_(%"struct.std::_Deque_iterator.4.157.174.208.259.276.344.731"* %__first, %"struct.std::_Deque_iterator.4.157.174.208.259.276.344.731"* nocapture %__last) {<br>
+entry:<br>+ %_M_cur2.i.i = getelementptr inbounds %"struct.std::_Deque_iterator.4.157.174.208.259.276.344.731"* %__first, i64 0, i32 0<br>+ %0 = load double** %_M_cur2.i.i, align 8<br>+ %_M_first3.i.i = getelementptr inbounds %"struct.std::_Deque_iterator.4.157.174.208.259.276.344.731"* %__first, i64 0, i32 1<br>
+ %_M_cur2.i.i81 = getelementptr inbounds %"struct.std::_Deque_iterator.4.157.174.208.259.276.344.731"* %__last, i64 0, i32 0<br>+ %1 = load double** %_M_cur2.i.i81, align 8<br>+ %_M_first3.i.i83 = getelementptr inbounds %"struct.std::_Deque_iterator.4.157.174.208.259.276.344.731"* %__last, i64 0, i32 1<br>
+ %2 = load double** %_M_first3.i.i83, align 8<br>+ br i1 undef, label %_ZSt13adjacent_findISt15_Deque_iteratorIdRdPdEET_S4_S4_.exit, label %while.cond.i.preheader<br>+<br>+while.cond.i.preheader: ; preds = %entry<br>
+ br label %while.cond.i<br>+<br>+while.cond.i: ; preds = %while.body.i, %while.cond.i.preheader<br>+ br i1 undef, label %_ZSt13adjacent_findISt15_Deque_iteratorIdRdPdEET_S4_S4_.exit, label %while.body.i<br>
+<br>+while.body.i: ; preds = %while.cond.i<br>+ br i1 undef, label %_ZSt13adjacent_findISt15_Deque_iteratorIdRdPdEET_S4_S4_.exit, label %while.cond.i<br>+<br>+_ZSt13adjacent_findISt15_Deque_iteratorIdRdPdEET_S4_S4_.exit: ; preds = %while.body.i, %while.cond.i, %entry<br>
+ %3 = phi double* [ %2, %entry ], [ %2, %while.cond.i ], [ undef, %while.body.i ]<br>+ %4 = phi double* [ %0, %entry ], [ %1, %while.cond.i ], [ undef, %while.body.i ]<br>+ store double* %4, double** %_M_cur2.i.i, align 8<br>
+ store double* %3, double** %_M_first3.i.i, align 8<br>+ br i1 undef, label %if.then.i55, label %while.cond<br>+<br>+if.then.i55: ; preds = %_ZSt13adjacent_findISt15_Deque_iteratorIdRdPdEET_S4_S4_.exit<br>
+ br label %while.cond<br>+<br>+while.cond: ; preds = %while.cond, %if.then.i55, %_ZSt13adjacent_findISt15_Deque_iteratorIdRdPdEET_S4_S4_.exit<br>+ br label %while.cond<br>+}<br><br>
Added: llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_flop7.ll<br>URL:<span> </span><a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_flop7.ll?rev=185774&view=auto" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_flop7.ll?rev=185774&view=auto</a><br>
==============================================================================<br>--- llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_flop7.ll (added)<br>+++ llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_flop7.ll Sun Jul 7 01:57:07 2013<br>
@@ -0,0 +1,46 @@<br>+; RUN: opt < %s -basicaa -slp-vectorizer -dce -S -mtriple=x86_64-apple-macosx10.8.0 -mcpu=corei7<br>+<br>+target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"<br>
+target triple = "x86_64-apple-macosx10.8.0"<br>+<br>+; Function Attrs: nounwind ssp uwtable<br>+define void @main() #0 {<br>+entry:<br>+ br i1 undef, label %while.body, label %while.end<br>+<br>+while.body: ; preds = %entry<br>
+ unreachable<br>+<br>+while.end: ; preds = %entry<br>+ br i1 undef, label %for.end80, label %<a href="http://for.body75.lr.ph/" target="_blank">for.body75.lr.ph</a><br>+<br>+<a href="http://for.body75.lr.ph/" target="_blank">for.body75.lr.ph</a>: ; preds = %while.end<br>
+ br label %for.body75<br>+<br>+for.body75: ; preds = %for.body75, %<a href="http://for.body75.lr.ph/" target="_blank">for.body75.lr.ph</a><br>+ br label %for.body75<br>+<br>+for.end80: ; preds = %while.end<br>
+ br i1 undef, label %for.end300, label %<a href="http://for.body267.lr.ph/" target="_blank">for.body267.lr.ph</a><br>+<br>+<a href="http://for.body267.lr.ph/" target="_blank">for.body267.lr.ph</a>: ; preds = %for.end80<br>
+ br label %for.body267<br>+<br>+for.body267: ; preds = %for.body267, %<a href="http://for.body267.lr.ph/" target="_blank">for.body267.lr.ph</a><br>+ %s.71010 = phi double [ 0.000000e+00, %<a href="http://for.body267.lr.ph/" target="_blank">for.body267.lr.ph</a><span> </span>], [ %add297, %for.body267 ]<br>
+ %mul269 = fmul double undef, undef<br>+ %mul270 = fmul double %mul269, %mul269<br>+ %add282 = fadd double undef, undef<br>+ %mul283 = fmul double %mul269, %add282<br>+ %add293 = fadd double undef, undef<br>+ %mul294 = fmul double %mul270, %add293<br>
+ %add295 = fadd double undef, %mul294<br>+ %div296 = fdiv double %mul283, %add295<br>+ %add297 = fadd double %s.71010, %div296<br>+ br i1 undef, label %for.body267, label %for.end300<br>+<br>+for.end300: ; preds = %for.body267, %for.end80<br>
+ unreachable<br>+}<br>+<br>+attributes #0 = { nounwind ssp uwtable "less-precise-fpmad"="false" "no-frame-pointer-elim"="false" "no-frame-pointer-elim-non-leaf"="false" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "unsafe-fp-math"="false" "use-soft-float"="false" }<br>
<br>Added: llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_lame.ll<br>URL:<span> </span><a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_lame.ll?rev=185774&view=auto" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_lame.ll?rev=185774&view=auto</a><br>
==============================================================================<br>--- llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_lame.ll (added)<br>+++ llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_lame.ll Sun Jul 7 01:57:07 2013<br>
@@ -0,0 +1,24 @@<br>+; RUN: opt < %s -basicaa -slp-vectorizer -dce -S -mtriple=x86_64-apple-macosx10.8.0 -mcpu=corei7<br>+<br>+target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"<br>
+target triple = "x86_64-apple-macosx10.8.0"<br>+<br>+; Function Attrs: nounwind ssp uwtable<br>+define fastcc void @dct36(double* %inbuf) #0 {<br>+entry:<br>+ %arrayidx41 = getelementptr inbounds double* %inbuf, i64 2<br>
+ %arrayidx44 = getelementptr inbounds double* %inbuf, i64 1<br>+ %0 = load double* %arrayidx44, align 8, !tbaa !0<br>+ %add46 = fadd double %0, undef<br>+ store double %add46, double* %arrayidx41, align 8, !tbaa !0<br>
+ %1 = load double* %inbuf, align 8, !tbaa !0<br>+ %add49 = fadd double %1, %0<br>+ store double %add49, double* %arrayidx44, align 8, !tbaa !0<br>+ ret void<br>+}<br>+<br>+attributes #0 = { nounwind ssp uwtable "less-precise-fpmad"="false" "no-frame-pointer-elim"="false" "no-frame-pointer-elim-non-leaf"="false" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "unsafe-fp-math"="false" "use-soft-float"="false" }<br>
+<br>+!0 = metadata !{metadata !"double", metadata !1}<br>+!1 = metadata !{metadata !"omnipotent char", metadata !2}<br>+!2 = metadata !{metadata !"Simple C/C++ TBAA"}<br><br>Added: llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_lencod.ll<br>
URL:<span> </span><a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_lencod.ll?rev=185774&view=auto" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_lencod.ll?rev=185774&view=auto</a><br>
==============================================================================<br>--- llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_lencod.ll (added)<br>+++ llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_lencod.ll Sun Jul 7 01:57:07 2013<br>
@@ -0,0 +1,66 @@<br>+; RUN: opt < %s -basicaa -slp-vectorizer -dce -S -mtriple=x86_64-apple-macosx10.8.0 -mcpu=corei7<br>+<br>+target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"<br>
+target triple = "x86_64-apple-macosx10.8.0"<br>+<br>+; Function Attrs: nounwind ssp uwtable<br>+define void @RCModelEstimator() {<br>+entry:<br>+ br i1 undef, label %<a href="http://for.body.lr.ph/" target="_blank">for.body.lr.ph</a>, label %for.end.thread<br>
+<br>+for.end.thread: ; preds = %entry<br>+ unreachable<br>+<br>+<a href="http://for.body.lr.ph/" target="_blank">for.body.lr.ph</a>: ; preds = %entry<br>
+ br i1 undef, label %for.end, label %for.body<br>+<br>+for.body: ; preds = %for.body, %<a href="http://for.body.lr.ph/" target="_blank">for.body.lr.ph</a><br>+ br i1 undef, label %for.end, label %for.body<br>
+<br>+for.end: ; preds = %for.body, %<a href="http://for.body.lr.ph/" target="_blank">for.body.lr.ph</a><br>+ br i1 undef, label %for.body3, label %if.end103<br>+<br>+for.cond14.preheader: ; preds = %for.inc11<br>
+ br i1 undef, label %<a href="http://for.body16.lr.ph/" target="_blank">for.body16.lr.ph</a>, label %if.end103<br>+<br>+<a href="http://for.body16.lr.ph/" target="_blank">for.body16.lr.ph</a>: ; preds = %for.cond14.preheader<br>
+ br label %for.body16<br>+<br>+for.body3: ; preds = %for.inc11, %for.end<br>+ br i1 undef, label %if.then7, label %for.inc11<br>+<br>+if.then7: ; preds = %for.body3<br>
+ br label %for.inc11<br>+<br>+for.inc11: ; preds = %if.then7, %for.body3<br>+ br i1 false, label %for.cond14.preheader, label %for.body3<br>+<br>+for.body16: ; preds = %for.body16, %<a href="http://for.body16.lr.ph/" target="_blank">for.body16.lr.ph</a><br>
+ br i1 undef, label %for.end39, label %for.body16<br>+<br>+for.end39: ; preds = %for.body16<br>+ br i1 undef, label %if.end103, label %for.cond45.preheader<br>+<br>+for.cond45.preheader: ; preds = %for.end39<br>
+ br i1 undef, label %if.then88, label %if.else<br>+<br>+if.then88: ; preds = %for.cond45.preheader<br>+ %mul89 = fmul double 0.000000e+00, 0.000000e+00<br>+ %mul90 = fmul double 0.000000e+00, 0.000000e+00<br>
+ %sub91 = fsub double %mul89, %mul90<br>+ %div92 = fdiv double %sub91, undef<br>+ %mul94 = fmul double 0.000000e+00, 0.000000e+00<br>+ %mul95 = fmul double 0.000000e+00, 0.000000e+00<br>+ %sub96 = fsub double %mul94, %mul95<br>
+ %div97 = fdiv double %sub96, undef<br>+ br label %if.end103<br>+<br>+if.else: ; preds = %for.cond45.preheader<br>+ br label %if.end103<br>+<br>+if.end103: ; preds = %if.else, %if.then88, %for.end39, %for.cond14.preheader, %for.end<br>
+ %0 = phi double [ 0.000000e+00, %for.end39 ], [ %div97, %if.then88 ], [ 0.000000e+00, %if.else ], [ 0.000000e+00, %for.cond14.preheader ], [ 0.000000e+00, %for.end ]<br>+ %1 = phi double [ undef, %for.end39 ], [ %div92, %if.then88 ], [ undef, %if.else ], [ 0.000000e+00, %for.cond14.preheader ], [ 0.000000e+00, %for.end ]<br>
+ ret void<br>+}<br>+<br><br>Added: llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_lencod2.ll<br>URL:<span> </span><a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_lencod2.ll?rev=185774&view=auto" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_lencod2.ll?rev=185774&view=auto</a><br>
==============================================================================<br>--- llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_lencod2.ll (added)<br>+++ llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_lencod2.ll Sun Jul 7 01:57:07 2013<br>
@@ -0,0 +1,23 @@<br>+; RUN: opt < %s -basicaa -slp-vectorizer -dce -S -mtriple=x86_64-apple-macosx10.8.0 -mcpu=corei7<br>+<br>+target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"<br>
+target triple = "x86_64-apple-macosx10.8.0"<br>+<br>+; Function Attrs: nounwind ssp uwtable<br>+define void @intrapred_luma() #0 {<br>+entry:<br>+ %conv153 = trunc i32 undef to i16<br>+ %arrayidx154 = getelementptr inbounds [13 x i16]* undef, i64 0, i64 12<br>
+ store i16 %conv153, i16* %arrayidx154, align 8, !tbaa !0<br>+ %arrayidx155 = getelementptr inbounds [13 x i16]* undef, i64 0, i64 11<br>+ store i16 %conv153, i16* %arrayidx155, align 2, !tbaa !0<br>+ %arrayidx156 = getelementptr inbounds [13 x i16]* undef, i64 0, i64 10<br>
+ store i16 %conv153, i16* %arrayidx156, align 4, !tbaa !0<br>+ ret void<br>+}<br>+<br>+attributes #0 = { nounwind ssp uwtable "less-precise-fpmad"="false" "no-frame-pointer-elim"="false" "no-frame-pointer-elim-non-leaf"="false" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "unsafe-fp-math"="false" "use-soft-float"="false" }<br>
+<br>+!0 = metadata !{metadata !"short", metadata !1}<br>+!1 = metadata !{metadata !"omnipotent char", metadata !2}<br>+!2 = metadata !{metadata !"Simple C/C++ TBAA"}<br><br>Added: llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_mandeltext.ll<br>
URL:<span> </span><a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_mandeltext.ll?rev=185774&view=auto" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_mandeltext.ll?rev=185774&view=auto</a><br>
==============================================================================<br>--- llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_mandeltext.ll (added)<br>+++ llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_mandeltext.ll Sun Jul 7 01:57:07 2013<br>
@@ -0,0 +1,53 @@<br>+; RUN: opt < %s -basicaa -slp-vectorizer -dce -S -mtriple=x86_64-apple-macosx10.8.0 -mcpu=corei7<br>+<br>+target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"<br>
+target triple = "x86_64-apple-macosx10.8.0"<br>+<br>+define void @main() {<br>+entry:<br>+ br label %for.body<br>+<br>+for.body: ; preds = %for.end44, %entry<br>+ br label %for.cond4.preheader<br>
+<br>+for.cond4.preheader: ; preds = %if.then25, %for.body<br>+ br label %for.body6<br>+<br>+for.body6: ; preds = %for.inc21, %for.cond4.preheader<br>+ br label %for.body12<br>
+<br>+for.body12: ; preds = %if.end, %for.body6<br>+ %fZImg.069 = phi double [ undef, %for.body6 ], [ %add19, %if.end ]<br>+ %fZReal.068 = phi double [ undef, %for.body6 ], [ %add20, %if.end ]<br>
+ %mul13 = fmul double %fZReal.068, %fZReal.068<br>+ %mul14 = fmul double %fZImg.069, %fZImg.069<br>+ %add15 = fadd double %mul13, %mul14<br>+ %cmp16 = fcmp ogt double %add15, 4.000000e+00<br>+ br i1 %cmp16, label %for.inc21, label %if.end<br>
+<br>+if.end: ; preds = %for.body12<br>+ %mul18 = fmul double undef, %fZImg.069<br>+ %add19 = fadd double undef, %mul18<br>+ %sub = fsub double %mul13, %mul14<br>+ %add20 = fadd double undef, %sub<br>
+ br i1 undef, label %for.body12, label %for.inc21<br>+<br>+for.inc21: ; preds = %if.end, %for.body12<br>+ br i1 undef, label %for.end23, label %for.body6<br>+<br>+for.end23: ; preds = %for.inc21<br>
+ br i1 undef, label %if.then25, label %if.then26<br>+<br>+if.then25: ; preds = %for.end23<br>+ br i1 undef, label %for.end44, label %for.cond4.preheader<br>+<br>+if.then26: ; preds = %for.end23<br>
+ unreachable<br>+<br>+for.end44: ; preds = %if.then25<br>+ br i1 undef, label %for.end48, label %for.body<br>+<br>+for.end48: ; preds = %for.end44<br>
+ ret void<br>+}<br>+<br><br>Added: llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_rc4.ll<br>URL:<span> </span><a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_rc4.ll?rev=185774&view=auto" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_rc4.ll?rev=185774&view=auto</a><br>
==============================================================================<br>--- llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_rc4.ll (added)<br>+++ llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_rc4.ll Sun Jul 7 01:57:07 2013<br>
@@ -0,0 +1,28 @@<br>+; RUN: opt < %s -basicaa -slp-vectorizer -dce -S -mtriple=x86_64-apple-macosx10.8.0 -mcpu=corei7<br>+<br>+target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"<br>
+target triple = "x86_64-apple-macosx10.8.0"<br>+<br>+%struct.rc4_state.0.24 = type { i32, i32, [256 x i32] }<br>+<br>+define void @rc4_crypt(%struct.rc4_state.0.24* nocapture %s) {<br>+entry:<br>+ %x1 = getelementptr inbounds %struct.rc4_state.0.24* %s, i64 0, i32 0<br>
+ %y2 = getelementptr inbounds %struct.rc4_state.0.24* %s, i64 0, i32 1<br>+ br i1 undef, label %for.body, label %for.end<br>+<br>+for.body: ; preds = %for.body, %entry<br>+ %x.045 = phi i32 [ %conv4, %for.body ], [ undef, %entry ]<br>
+ %conv4 = and i32 undef, 255<br>+ %conv7 = and i32 undef, 255<br>+ %idxprom842 = zext i32 %conv7 to i64<br>+ br i1 undef, label %for.end, label %for.body<br>+<br>+for.end: ; preds = %for.body, %entry<br>
+ %x.0.lcssa = phi i32 [ undef, %entry ], [ %conv4, %for.body ]<br>+ %y.0.lcssa = phi i32 [ undef, %entry ], [ %conv7, %for.body ]<br>+ store i32 %x.0.lcssa, i32* %x1, align 4<br>+ store i32 %y.0.lcssa, i32* %y2, align 4<br>
+ ret void<br>+}<br>+<br><br>Added: llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_sim4b1.ll<br>URL:<span> </span><a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_sim4b1.ll?rev=185774&view=auto" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_sim4b1.ll?rev=185774&view=auto</a><br>
==============================================================================<br>--- llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_sim4b1.ll (added)<br>+++ llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_sim4b1.ll Sun Jul 7 01:57:07 2013<br>
@@ -0,0 +1,113 @@<br>+; RUN: opt < %s -basicaa -slp-vectorizer -dce -S -mtriple=x86_64-apple-macosx10.8.0 -mcpu=corei7<br>+<br>+target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"<br>
+target triple = "x86_64-apple-macosx10.8.0"<br>+<br>+%struct._exon_t.12.103.220.363.480.649.740.857.1039.1065.1078.1091.1117.1130.1156.1169.1195.1221.1234.1286.1299.1312.1338.1429.1455.1468.1494.1520.1884.1897.1975.2066.2105.2170.2171 = type { i32, i32, i32, i32, i32, i32, [8 x i8] }<br>
+<br>+define void @SIM4() {<br>+entry:<br>+ br i1 undef, label %return, label %lor.lhs.false<br>+<br>+lor.lhs.false: ; preds = %entry<br>+ br i1 undef, label %return, label %if.end<br>
+<br>+if.end: ; preds = %lor.lhs.false<br>+ br i1 undef, label %for.end605, label %<a href="http://for.body.lr.ph/" target="_blank">for.body.lr.ph</a><br>+<br>+<a href="http://for.body.lr.ph/" target="_blank">for.body.lr.ph</a>: ; preds = %if.end<br>
+ br label %for.body<br>+<br>+for.body: ; preds = %for.inc603, %<a href="http://for.body.lr.ph/" target="_blank">for.body.lr.ph</a><br>+ br i1 undef, label %for.inc603, label %if.end12<br>
+<br>+if.end12: ; preds = %for.body<br>+ br i1 undef, label %land.lhs.true, label %land.lhs.true167<br>+<br>+land.lhs.true: ; preds = %if.end12<br>
+ br i1 undef, label %if.then17, label %land.lhs.true167<br>+<br>+if.then17: ; preds = %land.lhs.true<br>+ br i1 undef, label %if.end98, label %<a href="http://land.rhs.lr.ph/" target="_blank">land.rhs.lr.ph</a><br>
+<br>+<a href="http://land.rhs.lr.ph/" target="_blank">land.rhs.lr.ph</a>: ; preds = %if.then17<br>+ unreachable<br>+<br>+if.end98: ; preds = %if.then17<br>
+ %from299 = getelementptr inbounds %struct._exon_t.12.103.220.363.480.649.740.857.1039.1065.1078.1091.1117.1130.1156.1169.1195.1221.1234.1286.1299.1312.1338.1429.1455.1468.1494.1520.1884.1897.1975.2066.2105.2170.2171* undef, i64 0, i32 1<br>
+ br i1 undef, label %land.lhs.true167, label %if.then103<br>+<br>+if.then103: ; preds = %if.end98<br>+ %.sub100 = select i1 undef, i32 250, i32 undef<br>+ %mul114 = shl nsw i32 %.sub100, 2<br>
+ %from1115 = getelementptr inbounds %struct._exon_t.12.103.220.363.480.649.740.857.1039.1065.1078.1091.1117.1130.1156.1169.1195.1221.1234.1286.1299.1312.1338.1429.1455.1468.1494.1520.1884.1897.1975.2066.2105.2170.2171* undef, i64 0, i32 0<br>
+ %cond125 = select i1 undef, i32 undef, i32 %mul114<br>+ br label %for.cond.i<br>+<br>+for.cond.i: ; preds = %land.rhs.i874, %if.then103<br>+ %row.0.i = phi i32 [ undef, %land.rhs.i874 ], [ %.sub100, %if.then103 ]<br>
+ %col.0.i = phi i32 [ undef, %land.rhs.i874 ], [ %cond125, %if.then103 ]<br>+ br i1 undef, label %land.rhs.i874, label %for.end.i<br>+<br>+land.rhs.i874: ; preds = %for.cond.i<br>+ br i1 undef, label %for.cond.i, label %for.end.i<br>
+<br>+for.end.i: ; preds = %land.rhs.i874, %for.cond.i<br>+ br i1 undef, label %if.then.i, label %if.end.i<br>+<br>+if.then.i: ; preds = %for.end.i<br>
+ %add14.i = add nsw i32 %row.0.i, undef<br>+ %add15.i = add nsw i32 %col.0.i, undef<br>+ br label %extend_bw.exit<br>+<br>+if.end.i: ; preds = %for.end.i<br>+ %add16.i = add i32 %cond125, %.sub100<br>
+ %cmp26514.i = icmp slt i32 %add16.i, 0<br>+ br i1 %cmp26514.i, label %for.end33.i, label %for.body28.lr.ph.i<br>+<br>+for.body28.lr.ph.i: ; preds = %if.end.i<br>+ br label %for.end33.i<br>
+<br>+for.end33.i: ; preds = %for.body28.lr.ph.i, %if.end.i<br>+ br i1 undef, label %for.end58.i, label %for.body52.lr.ph.i<br>+<br>+for.body52.lr.ph.i: ; preds = %for.end33.i<br>
+ br label %for.end58.i<br>+<br>+for.end58.i: ; preds = %for.body52.lr.ph.i, %for.end33.i<br>+ br label %while.cond260.i<br>+<br>+while.cond260.i: ; preds = %land.rhs263.i, %for.end58.i<br>
+ br i1 undef, label %land.rhs263.i, label %while.end275.i<br>+<br>+land.rhs263.i: ; preds = %while.cond260.i<br>+ br i1 undef, label %while.cond260.i, label %while.end275.i<br>+<br>+while.end275.i: ; preds = %land.rhs263.i, %while.cond260.i<br>
+ br label %extend_bw.exit<br>+<br>+extend_bw.exit: ; preds = %while.end275.i, %if.then.i<br>+ %add14.i1262 = phi i32 [ %add14.i, %if.then.i ], [ undef, %while.end275.i ]<br>+ %add15.i1261 = phi i32 [ %add15.i, %if.then.i ], [ undef, %while.end275.i ]<br>
+ br i1 false, label %if.then157, label %land.lhs.true167<br>+<br>+if.then157: ; preds = %extend_bw.exit<br>+ %add158 = add nsw i32 %add14.i1262, 1<br>+ store i32 %add158, i32* %from299, align 4<br>
+ %add160 = add nsw i32 %add15.i1261, 1<br>+ store i32 %add160, i32* %from1115, align 4<br>+ br label %land.lhs.true167<br>+<br>+land.lhs.true167: ; preds = %if.then157, %extend_bw.exit, %if.end98, %land.lhs.true, %if.end12<br>
+ unreachable<br>+<br>+for.inc603: ; preds = %for.body<br>+ br i1 undef, label %for.body, label %for.end605<br>+<br>+for.end605: ; preds = %for.inc603, %if.end<br>
+ unreachable<br>+<br>+return: ; preds = %lor.lhs.false, %entry<br>+ ret void<br>+}<br>+<br><br>Added: llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_smallpt.ll<br>URL:<span> </span><a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_smallpt.ll?rev=185774&view=auto" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_smallpt.ll?rev=185774&view=auto</a><br>
==============================================================================<br>--- llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_smallpt.ll (added)<br>+++ llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_smallpt.ll Sun Jul 7 01:57:07 2013<br>
@@ -0,0 +1,65 @@<br>+; RUN: opt < %s -basicaa -slp-vectorizer -dce -S -mtriple=x86_64-apple-macosx10.8.0 -mcpu=corei7<br>+<br>+target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"<br>
+target triple = "x86_64-apple-macosx10.8.0"<br>+<br>+%struct.Ray.5.11.53.113.119.137.149.185.329.389.415 = type { %struct.Vec.0.6.48.108.114.132.144.180.324.384.414, %struct.Vec.0.6.48.108.114.132.144.180.324.384.414 }<br>
+%struct.Vec.0.6.48.108.114.132.144.180.324.384.414 = type { double, double, double }<br>+<br>+; Function Attrs: ssp uwtable<br>+define void @main() #0 {<br>+entry:<br>+ br i1 undef, label %cond.true, label %cond.end<br>
+<br>+cond.true: ; preds = %entry<br>+ unreachable<br>+<br>+cond.end: ; preds = %entry<br>+ br label %invoke.cont<br>+<br>+invoke.cont: ; preds = %invoke.cont, %cond.end<br>
+ br i1 undef, label %arrayctor.cont, label %invoke.cont<br>+<br>+arrayctor.cont: ; preds = %invoke.cont<br>+ %agg.tmp99208.sroa.0.0.idx = getelementptr inbounds %struct.Ray.5.11.53.113.119.137.149.185.329.389.415* undef, i64 0, i32 0, i32 0<br>
+ %agg.tmp99208.sroa.1.8.idx388 = getelementptr inbounds %struct.Ray.5.11.53.113.119.137.149.185.329.389.415* undef, i64 0, i32 0, i32 1<br>+ %agg.tmp101211.sroa.0.0.idx = getelementptr inbounds %struct.Ray.5.11.53.113.119.137.149.185.329.389.415* undef, i64 0, i32 1, i32 0<br>
+ %agg.tmp101211.sroa.1.8.idx390 = getelementptr inbounds %struct.Ray.5.11.53.113.119.137.149.185.329.389.415* undef, i64 0, i32 1, i32 1<br>+ br label %for.cond36.preheader<br>+<br>+for.cond36.preheader: ; preds = %_Z5clampd.exit.1, %arrayctor.cont<br>
+ br i1 undef, label %<a href="http://for.body42.lr.ph.us/" target="_blank">for.body42.lr.ph.us</a>, label %_Z5clampd.exit.1<br>+<br>+<a href="http://cond.false51.us/" target="_blank">cond.false51.us</a>: ; preds = %<a href="http://for.body42.lr.ph.us/" target="_blank">for.body42.lr.ph.us</a><br>
+ unreachable<br>+<br>+<a href="http://cond.true48.us/" target="_blank">cond.true48.us</a>: ; preds = %<a href="http://for.body42.lr.ph.us/" target="_blank">for.body42.lr.ph.us</a><br>+ br i1 undef, label %<a href="http://cond.true63.us/" target="_blank">cond.true63.us</a>, label %<a href="http://cond.false66.us/" target="_blank">cond.false66.us</a><br>
+<br>+<a href="http://cond.false66.us/" target="_blank">cond.false66.us</a>: ; preds = %<a href="http://cond.true48.us/" target="_blank">cond.true48.us</a><br>+ %<a href="http://add.i276.us/" target="_blank">add.i276.us</a><span> </span>= fadd double 0.000000e+00, undef<br>
+ %<a href="http://add.i264.us/" target="_blank">add.i264.us</a><span> </span>= fadd double %<a href="http://add.i276.us/" target="_blank">add.i276.us</a>, 0.000000e+00<br>+ %<a href="http://add4.i267.us/" target="_blank">add4.i267.us</a><span> </span>= fadd double undef, 0xBFA5CC2D1960285F<br>
+ %<a href="http://mul.i254.us/" target="_blank">mul.i254.us</a><span> </span>= fmul double %<a href="http://add.i264.us/" target="_blank">add.i264.us</a>, 1.400000e+02<br>+ %<a href="http://mul2.i256.us/" target="_blank">mul2.i256.us</a><span> </span>= fmul double %<a href="http://add4.i267.us/" target="_blank">add4.i267.us</a>, 1.400000e+02<br>
+ %<a href="http://add.i243.us/" target="_blank">add.i243.us</a><span> </span>= fadd double %<a href="http://mul.i254.us/" target="_blank">mul.i254.us</a>, 5.000000e+01<br>+ %<a href="http://add4.i246.us/" target="_blank">add4.i246.us</a><span> </span>= fadd double %<a href="http://mul2.i256.us/" target="_blank">mul2.i256.us</a>, 5.200000e+01<br>
+ %<a href="http://mul.i.i.us/" target="_blank">mul.i.i.us</a><span> </span>= fmul double undef, %<a href="http://add.i264.us/" target="_blank">add.i264.us</a><br>+ %<a href="http://mul2.i.i.us/" target="_blank">mul2.i.i.us</a><span> </span>= fmul double undef, %<a href="http://add4.i267.us/" target="_blank">add4.i267.us</a><br>
+ store double %<a href="http://add.i243.us/" target="_blank">add.i243.us</a>, double* %agg.tmp99208.sroa.0.0.idx, align 8<br>+ store double %<a href="http://add4.i246.us/" target="_blank">add4.i246.us</a>, double* %agg.tmp99208.sroa.1.8.idx388, align 8<br>
+ store double %<a href="http://mul.i.i.us/" target="_blank">mul.i.i.us</a>, double* %agg.tmp101211.sroa.0.0.idx, align 8<br>+ store double %<a href="http://mul2.i.i.us/" target="_blank">mul2.i.i.us</a>, double* %agg.tmp101211.sroa.1.8.idx390, align 8<br>
+ unreachable<br>+<br>+<a href="http://cond.true63.us/" target="_blank">cond.true63.us</a>: ; preds = %<a href="http://cond.true48.us/" target="_blank">cond.true48.us</a><br>+ unreachable<br>
+<br>+<a href="http://for.body42.lr.ph.us/" target="_blank">for.body42.lr.ph.us</a>: ; preds = %for.cond36.preheader<br>+ br i1 undef, label %<a href="http://cond.true48.us/" target="_blank">cond.true48.us</a>, label %<a href="http://cond.false51.us/" target="_blank">cond.false51.us</a><br>
+<br>+_Z5clampd.exit.1: ; preds = %for.cond36.preheader<br>+ br label %for.cond36.preheader<br>+}<br>+<br>+attributes #0 = { ssp uwtable "less-precise-fpmad"="false" "no-frame-pointer-elim"="false" "no-frame-pointer-elim-non-leaf"="false" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "unsafe-fp-math"="false" "use-soft-float"="false" }<br>
<br>Added: llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_smallpt2.ll<br>URL:<span> </span><a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_smallpt2.ll?rev=185774&view=auto" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_smallpt2.ll?rev=185774&view=auto</a><br>
==============================================================================<br>--- llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_smallpt2.ll (added)<br>+++ llvm/trunk/test/Transforms/SLPVectorizer/X86/crash_smallpt2.ll Sun Jul 7 01:57:07 2013<br>
@@ -0,0 +1,46 @@<br>+; RUN: opt < %s -basicaa -slp-vectorizer -dce -S -mtriple=x86_64-apple-macosx10.8.0 -mcpu=corei7<br>+<br>+target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"<br>
+target triple = "x86_64-apple-macosx10.8.0"<br>+<br>+%struct.Ray.5.11.53.95.137.191.197.203.239.257.263.269.275.281.287.293.383.437.443.455.461.599.601 = type { %struct.Vec.0.6.48.90.132.186.192.198.234.252.258.264.270.276.282.288.378.432.438.450.456.594.600, %struct.Vec.0.6.48.90.132.186.192.198.234.252.258.264.270.276.282.288.378.432.438.450.456.594.600 }<br>
+%struct.Vec.0.6.48.90.132.186.192.198.234.252.258.264.270.276.282.288.378.432.438.450.456.594.600 = type { double, double, double }<br>+<br>+; Function Attrs: ssp uwtable<br>+define void @_Z8radianceRK3RayiPt() #0 {<br>
+entry:<br>
+ br i1 undef, label %if.then78, label %if.then38<br>+<br>+if.then38: ; preds = %entry<br>+ %mul.i.i790 = fmul double undef, undef<br>+ %mul3.i.i792 = fmul double undef, undef<br>
+ %mul.i764 = fmul double undef, %mul3.i.i792<br>+ %mul4.i767 = fmul double undef, undef<br>+ %sub.i768 = fsub double %mul.i764, %mul4.i767<br>+ %mul6.i770 = fmul double undef, %mul.i.i790<br>+ %mul9.i772 = fmul double undef, %mul3.i.i792<br>
+ %sub10.i773 = fsub double %mul6.i770, %mul9.i772<br>+ %mul.i736 = fmul double undef, %sub.i768<br>+ %mul2.i738 = fmul double undef, %sub10.i773<br>+ %mul.i727 = fmul double undef, %mul.i736<br>+ %mul2.i729 = fmul double undef, %mul2.i738<br>
+ %add.i716 = fadd double undef, %mul.i727<br>+ %add4.i719 = fadd double undef, %mul2.i729<br>+ %add.i695 = fadd double undef, %add.i716<br>+ %add4.i698 = fadd double undef, %add4.i719<br>+ %mul.i.i679 = fmul double undef, %add.i695<br>
+ %mul2.i.i680 = fmul double undef, %add4.i698<br>+ %agg.tmp74663.sroa.0.0.idx = getelementptr inbounds %struct.Ray.5.11.53.95.137.191.197.203.239.257.263.269.275.281.287.293.383.437.443.455.461.599.601* undef, i64 0, i32 1, i32 0<br>
+ store double %mul.i.i679, double* %agg.tmp74663.sroa.0.0.idx, align 8<br>+ %agg.tmp74663.sroa.1.8.idx943 = getelementptr inbounds %struct.Ray.5.11.53.95.137.191.197.203.239.257.263.269.275.281.287.293.383.437.443.455.461.599.601* undef, i64 0, i32 1, i32 1<br>
+ store double %mul2.i.i680, double* %agg.tmp74663.sroa.1.8.idx943, align 8<br>+ br label %return<br>+<br>+if.then78: ; preds = %entry<br>+ br label %return<br>+<br>+return: ; preds = %if.then78, %if.then38<br>
+ ret void<br>+}<br>+<br>+attributes #0 = { ssp uwtable "less-precise-fpmad"="false" "no-frame-pointer-elim"="false" "no-frame-pointer-elim-non-leaf"="false" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "unsafe-fp-math"="false" "use-soft-float"="false" }<br>
<br>Modified: llvm/trunk/test/Transforms/SLPVectorizer/X86/diamond.ll<br>URL:<span> </span><a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/diamond.ll?rev=185774&r1=185773&r2=185774&view=diff" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/diamond.ll?rev=185774&r1=185773&r2=185774&view=diff</a><br>
==============================================================================<br>--- llvm/trunk/test/Transforms/SLPVectorizer/X86/diamond.ll (original)<br>+++ llvm/trunk/test/Transforms/SLPVectorizer/X86/diamond.ll Sun Jul 7 01:57:07 2013<br>
@@ -50,7 +50,8 @@ entry:<br> ; }<br><br> ; CHECK: @extr_user<br>-; CHECK: load i32*<br>+; CHECK: load <4 x i32><br>+; CHECK-NEXT: extractelement <4 x i32><br> ; CHECK: store <4 x i32><br> ; CHECK-NEXT: ret<br>
define i32 @extr_user(i32* noalias nocapture %B, i32* noalias nocapture %A, i32 %n, i32 %m) {<br>@@ -79,7 +80,8 @@ entry:<br><br> ; In this example we have an external user that is not the first element in the vector.<br>
; CHECK: @extr_user1<br>-; CHECK: load i32*<br>+; CHECK: load <4 x i32><br>+; CHECK-NEXT: extractelement <4 x i32><br> ; CHECK: store <4 x i32><br> ; CHECK-NEXT: ret<br> define i32 @extr_user1(i32* noalias nocapture %B, i32* noalias nocapture %A, i32 %n, i32 %m) {<br>
<br>Modified: llvm/trunk/test/Transforms/SLPVectorizer/X86/long_chains.ll<br>URL:<span> </span><a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/long_chains.ll?rev=185774&r1=185773&r2=185774&view=diff" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/long_chains.ll?rev=185774&r1=185773&r2=185774&view=diff</a><br>
==============================================================================<br>--- llvm/trunk/test/Transforms/SLPVectorizer/X86/long_chains.ll (original)<br>+++ llvm/trunk/test/Transforms/SLPVectorizer/X86/long_chains.ll Sun Jul 7 01:57:07 2013<br>
@@ -3,12 +3,13 @@<br> target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"<br> target triple = "x86_64-apple-macosx10.8.0"<br>
<br>+; At this point we can't vectorize only parts of the tree.<br>+<br> ; CHECK: test<br>-; CHECK: sitofp i8<br>-; CHECK-NEXT: sitofp i8<br>-; CHECK-NEXT: insertelement<br>-; CHECK-NEXT: insertelement<br>-; CHECK-NEXT: fmul <2 x double><br>
+; CHECK: insertelement <2 x i8><br>+; CHECK: insertelement <2 x i8><br>+; CHECK: sitofp <2 x i8><br>+; CHECK: fmul <2 x double><br> ; CHECK: ret<br> define i32 @test(double* nocapture %A, i8* nocapture %B) {<br>
entry:<br>@@ -18,7 +19,7 @@ entry:<br> %add = add i8 %0, 3<br> %add4 = add i8 %1, 3<br> %conv6 = sitofp i8 %add to double<br>- %conv7 = sitofp i8 %add4 to double ; <--- This is inefficient. The chain stops here.<br>
+ %conv7 = sitofp i8 %add4 to double<br> %mul = fmul double %conv6, %conv6<br> %add8 = fadd double %mul, 1.000000e+00<br> %mul9 = fmul double %conv7, %conv7<br><br>Modified: llvm/trunk/test/Transforms/SLPVectorizer/X86/saxpy.ll<br>
URL:<span> </span><a href="http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/saxpy.ll?rev=185774&r1=185773&r2=185774&view=diff" target="_blank">http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/SLPVectorizer/X86/saxpy.ll?rev=185774&r1=185773&r2=185774&view=diff</a><br>
==============================================================================<br>--- llvm/trunk/test/Transforms/SLPVectorizer/X86/saxpy.ll (original)<br>+++ llvm/trunk/test/Transforms/SLPVectorizer/X86/saxpy.ll Sun Jul 7 01:57:07 2013<br>
@@ -43,3 +43,19 @@ define void @SAXPY(i32* noalias nocaptur<br> ret void<br> }<br><br>+; Make sure we don't crash on this one.<br>+define void @SAXPY_crash(i32* noalias nocapture %x, i32* noalias nocapture %y, i64 %i) {<br>
+ %1 = add i64 %i, 1<br>+ %2 = getelementptr inbounds i32* %x, i64 %1<br>+ %3 = getelementptr inbounds i32* %y, i64 %1<br>+ %4 = load i32* %3, align 4<br>+ %5 = add nsw i32 undef, %4<br>+ store i32 %5, i32* %2, align 4<br>
+ %6 = add i64 %i, 2<br>+ %7 = getelementptr inbounds i32* %x, i64 %6<br>+ %8 = getelementptr inbounds i32* %y, i64 %6<br>+ %9 = load i32* %8, align 4<br>+ %10 = add nsw i32 undef, %9<br>+ store i32 %10, i32* %7, align 4<br>
+ ret void<br>+}<br><br><br>_______________________________________________<br>llvm-commits mailing list<br><a href="mailto:llvm-commits@cs.uiuc.edu" target="_blank">llvm-commits@cs.uiuc.edu</a><br><a href="http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits" target="_blank">http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits</a><br>
</blockquote></div><br><br clear="all"><span class="HOEnZb"><font color="#888888"><div><br></div>--<span> </span><br><div>Alexey Samsonov, MSK</div></font></span></div></div></blockquote></div><br></div></div></blockquote>
</div><br><br clear="all"><div><br></div>-- <br><div>Alexey Samsonov, MSK</div>
</div>
</blockquote><blockquote type="cite" style=""><span>_______________________________________________</span><br><span>llvm-commits mailing list</span><br><span><a href="mailto:llvm-commits@cs.uiuc.edu">llvm-commits@cs.uiuc.edu</a></span><br>
<span><a href="http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits">http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits</a></span><br></blockquote></div>
</blockquote></div><br></div></body></html>