[llvm] r325128 - [X86] Reduce Store Forward Block issues in HW - Recommit after fixing Bug 36346
Chandler Carruth via llvm-commits
llvm-commits at lists.llvm.org
Fri Feb 16 20:14:04 PST 2018
We don't have a test case yet, but just as a heads up: pretty certain this
is still miscompiling code. =/ We'll work on getting a test case.
On Wed, Feb 14, 2018 at 7:00 AM Lama Saba via llvm-commits <llvm-commits at lists.llvm.org> wrote:
> Author: lsaba
> Date: Wed Feb 14 06:58:53 2018
> New Revision: 325128
>
> URL: http://llvm.org/viewvc/llvm-project?rev=325128&view=rev
> Log:
> [X86] Reduce Store Forward Block issues in HW - Recommit after fixing Bug 36346
>
> If a load follows a store and reloads data that the store has written to
> memory, Intel microarchitectures can in many cases forward the data
> directly from the store to the load. This "store forwarding" saves cycles
> by enabling the load to obtain the data directly instead of accessing it
> from cache or memory.
> A "store forward block" occurs when a store cannot be forwarded to the
> load. The most typical case of a store forward block on the Intel Core
> microarchitecture is a small store that cannot be forwarded to a larger
> load. The estimated penalty for a store forward block is ~13 cycles.
>
> This pass tries to recognize and handle cases where a "store forward
> block" is created by the compiler when lowering memcpy calls to a
> sequence of a load and a store.
>
> The pass currently only handles cases where the memcpy is lowered to
> XMM/YMM registers; it tries to break the memcpy into smaller copies.
> Breaking the memcpy should be possible since there is no atomicity
> guarantee for loads and stores to XMM/YMM.
>
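For illustration, the source pattern this pass targets looks roughly like the
hypothetical C below (modeled on the tests added in this patch); this is a
sketch, not code from the patch itself:

    /* A 4-byte store into s1 followed by a memcpy-like 16-byte copy of
       *s1. The copy is lowered to an XMM load/store pair, and the wide
       XMM load cannot be forwarded from the narrow 4-byte store. */
    struct S { int a, b, c, d; };

    void f(struct S *s1, struct S *s2, int x) {
      if (x > 17)
        s1->b = x;   /* small store */
      *s2 = *s1;     /* 16-byte copy: the XMM load reloads s1->b */
    }

With the fixup enabled, the 16-byte reload is instead emitted as four 4-byte
copies, so the slot written by the small store is reloaded by a same-sized
load, as the CHECK lines in the added tests show.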
> Change-Id: Ic41aa9ade6512e0478db66e07e2fde41b4fb35f9
>
> Added:
> llvm/trunk/lib/Target/X86/X86FixupSFB.cpp
> llvm/trunk/test/CodeGen/X86/fixup-sfb-32.ll
> llvm/trunk/test/CodeGen/X86/fixup-sfb.ll
> Modified:
> llvm/trunk/lib/Target/X86/CMakeLists.txt
> llvm/trunk/lib/Target/X86/X86.h
> llvm/trunk/lib/Target/X86/X86TargetMachine.cpp
>
> Modified: llvm/trunk/lib/Target/X86/CMakeLists.txt
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/CMakeLists.txt?rev=325128&r1=325127&r2=325128&view=diff
>
> ==============================================================================
> --- llvm/trunk/lib/Target/X86/CMakeLists.txt (original)
> +++ llvm/trunk/lib/Target/X86/CMakeLists.txt Wed Feb 14 06:58:53 2018
> @@ -31,6 +31,7 @@ set(sources
> X86FastISel.cpp
> X86FixupBWInsts.cpp
> X86FixupLEAs.cpp
> + X86FixupSFB.cpp
> X86FixupSetCC.cpp
> X86FloatingPoint.cpp
> X86FrameLowering.cpp
>
> Modified: llvm/trunk/lib/Target/X86/X86.h
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86.h?rev=325128&r1=325127&r2=325128&view=diff
>
> ==============================================================================
> --- llvm/trunk/lib/Target/X86/X86.h (original)
> +++ llvm/trunk/lib/Target/X86/X86.h Wed Feb 14 06:58:53 2018
> @@ -70,6 +70,9 @@ FunctionPass *createX86OptimizeLEAs();
> /// Return a pass that transforms setcc + movzx pairs into xor + setcc.
> FunctionPass *createX86FixupSetCC();
>
> +/// Return a pass that avoids creating store forward block issues in the hardware.
> +FunctionPass *createX86FixupSFB();
> +
> /// Return a pass that expands WinAlloca pseudo-instructions.
> FunctionPass *createX86WinAllocaExpander();
>
>
> Added: llvm/trunk/lib/Target/X86/X86FixupSFB.cpp
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86FixupSFB.cpp?rev=325128&view=auto
>
> ==============================================================================
> --- llvm/trunk/lib/Target/X86/X86FixupSFB.cpp (added)
> +++ llvm/trunk/lib/Target/X86/X86FixupSFB.cpp Wed Feb 14 06:58:53 2018
> @@ -0,0 +1,580 @@
> +//===- X86FixupSFB.cpp - Avoid HW Store Forward Block issues -----------===//
> +//
> +// The LLVM Compiler Infrastructure
> +//
> +// This file is distributed under the University of Illinois Open Source
> +// License. See LICENSE.TXT for details.
> +//
> +//===----------------------------------------------------------------------===//
> +//
> +// If a load follows a store and reloads data that the store has written to
> +// memory, Intel microarchitectures can in many cases forward the data
> +// directly from the store to the load. This "store forwarding" saves cycles
> +// by enabling the load to obtain the data directly instead of accessing it
> +// from cache or memory.
> +// A "store forward block" occurs when a store cannot be forwarded to the
> +// load. The most typical case of a store forward block on the Intel Core
> +// microarchitecture is a small store that cannot be forwarded to a larger
> +// load. The estimated penalty for a store forward block is ~13 cycles.
> +//
> +// This pass tries to recognize and handle cases where a "store forward
> +// block" is created by the compiler when lowering memcpy calls to a
> +// sequence of a load and a store.
> +//
> +// The pass currently only handles cases where the memcpy is lowered to
> +// XMM/YMM registers; it tries to break the memcpy into smaller copies.
> +// Breaking the memcpy should be possible since there is no atomicity
> +// guarantee for loads and stores to XMM/YMM.
> +//
> +// It could be better for performance to solve the problem by loading to
> +// XMM/YMM and then inserting the partial store before storing back from
> +// XMM/YMM to memory, but that would be a more conservative optimization
> +// since it requires proving that all memory accesses between the blocking
> +// store and the load alias or don't alias before the store can be moved,
> +// whereas the transformation done here is correct regardless of other
> +// memory accesses.
> +//===----------------------------------------------------------------------===//
> +
> +#include "X86InstrInfo.h"
> +#include "X86Subtarget.h"
> +#include "llvm/CodeGen/MachineBasicBlock.h"
> +#include "llvm/CodeGen/MachineFunction.h"
> +#include "llvm/CodeGen/MachineFunctionPass.h"
> +#include "llvm/CodeGen/MachineInstr.h"
> +#include "llvm/CodeGen/MachineInstrBuilder.h"
> +#include "llvm/CodeGen/MachineOperand.h"
> +#include "llvm/CodeGen/MachineRegisterInfo.h"
> +#include "llvm/IR/DebugInfoMetadata.h"
> +#include "llvm/IR/DebugLoc.h"
> +#include "llvm/IR/Function.h"
> +#include "llvm/MC/MCInstrDesc.h"
> +
> +using namespace llvm;
> +
> +#define DEBUG_TYPE "x86-fixup-SFB"
> +
> +static cl::opt<bool> DisableX86FixupSFB("disable-fixup-SFB", cl::Hidden,
> +                                        cl::desc("X86: Disable SFB fixup."),
> + cl::init(false));
> +namespace {
> +
> +class FixupSFBPass : public MachineFunctionPass {
> +public:
> + FixupSFBPass() : MachineFunctionPass(ID) {}
> +
> + StringRef getPassName() const override {
> + return "X86 Fixup Store Forward Block";
> + }
> +
> + bool runOnMachineFunction(MachineFunction &MF) override;
> +
> +private:
> + MachineRegisterInfo *MRI;
> + const X86InstrInfo *TII;
> + const X86RegisterInfo *TRI;
> +  SmallVector<std::pair<MachineInstr *, MachineInstr *>, 2> BlockedLoadsStores;
> + SmallVector<MachineInstr *, 2> ForRemoval;
> + bool Is64Bit;
> +
> +  /// \brief Returns pairs of a load followed by a store to memory which
> +  /// look like a memcpy.
> + void findPotentiallylBlockedCopies(MachineFunction &MF);
> + /// \brief Break the memcpy's load and store into smaller copies
> + /// such that each memory load that was blocked by a smaller store
> + /// would now be copied separately.
> + void
> + breakBlockedCopies(MachineInstr *LoadInst, MachineInstr *StoreInst,
> +                     const std::map<int64_t, unsigned> &BlockingStoresDisp);
> + /// \brief Break a copy of size Size to smaller copies.
> + void buildCopies(int Size, MachineInstr *LoadInst, int64_t LdDispImm,
> + MachineInstr *StoreInst, int64_t StDispImm,
> + int64_t LMMOffset, int64_t SMMOffset);
> +
> +  void buildCopy(MachineInstr *LoadInst, unsigned NLoadOpcode, int64_t LoadDisp,
> + MachineInstr *StoreInst, unsigned NStoreOpcode,
> + int64_t StoreDisp, unsigned Size, int64_t LMMOffset,
> + int64_t SMMOffset);
> +
> + unsigned getRegSizeInBytes(MachineInstr *Inst);
> + static char ID;
> +};
> +
> +} // end anonymous namespace
> +
> +char FixupSFBPass::ID = 0;
> +
> +FunctionPass *llvm::createX86FixupSFB() { return new FixupSFBPass(); }
> +
> +static bool isXMMLoadOpcode(unsigned Opcode) {
> + return Opcode == X86::MOVUPSrm || Opcode == X86::MOVAPSrm ||
> + Opcode == X86::VMOVUPSrm || Opcode == X86::VMOVAPSrm ||
> + Opcode == X86::VMOVUPDrm || Opcode == X86::VMOVAPDrm ||
> + Opcode == X86::VMOVDQUrm || Opcode == X86::VMOVDQArm ||
> + Opcode == X86::VMOVUPSZ128rm || Opcode == X86::VMOVAPSZ128rm ||
> + Opcode == X86::VMOVUPDZ128rm || Opcode == X86::VMOVAPDZ128rm ||
> +         Opcode == X86::VMOVDQU64Z128rm || Opcode == X86::VMOVDQA64Z128rm ||
> + Opcode == X86::VMOVDQU32Z128rm || Opcode == X86::VMOVDQA32Z128rm;
> +}
> +static bool isYMMLoadOpcode(unsigned Opcode) {
> + return Opcode == X86::VMOVUPSYrm || Opcode == X86::VMOVAPSYrm ||
> + Opcode == X86::VMOVUPDYrm || Opcode == X86::VMOVAPDYrm ||
> + Opcode == X86::VMOVDQUYrm || Opcode == X86::VMOVDQAYrm ||
> + Opcode == X86::VMOVUPSZ256rm || Opcode == X86::VMOVAPSZ256rm ||
> + Opcode == X86::VMOVUPDZ256rm || Opcode == X86::VMOVAPDZ256rm ||
> +         Opcode == X86::VMOVDQU64Z256rm || Opcode == X86::VMOVDQA64Z256rm ||
> + Opcode == X86::VMOVDQU32Z256rm || Opcode == X86::VMOVDQA32Z256rm;
> +}
> +
> +static bool isPotentialBlockedMemCpyLd(unsigned Opcode) {
> + return isXMMLoadOpcode(Opcode) || isYMMLoadOpcode(Opcode);
> +}
> +
> +std::map<unsigned, std::pair<unsigned, unsigned>> PotentialBlockedMemCpy{
> + {X86::MOVUPSrm, {X86::MOVUPSmr, X86::MOVAPSmr}},
> + {X86::MOVAPSrm, {X86::MOVUPSmr, X86::MOVAPSmr}},
> + {X86::VMOVUPSrm, {X86::VMOVUPSmr, X86::VMOVAPSmr}},
> + {X86::VMOVAPSrm, {X86::VMOVUPSmr, X86::VMOVAPSmr}},
> + {X86::VMOVUPDrm, {X86::VMOVUPDmr, X86::VMOVAPDmr}},
> + {X86::VMOVAPDrm, {X86::VMOVUPDmr, X86::VMOVAPDmr}},
> + {X86::VMOVDQUrm, {X86::VMOVDQUmr, X86::VMOVDQAmr}},
> + {X86::VMOVDQArm, {X86::VMOVDQUmr, X86::VMOVDQAmr}},
> + {X86::VMOVUPSZ128rm, {X86::VMOVUPSZ128mr, X86::VMOVAPSZ128mr}},
> + {X86::VMOVAPSZ128rm, {X86::VMOVUPSZ128mr, X86::VMOVAPSZ128mr}},
> + {X86::VMOVUPDZ128rm, {X86::VMOVUPDZ128mr, X86::VMOVAPDZ128mr}},
> + {X86::VMOVAPDZ128rm, {X86::VMOVUPDZ128mr, X86::VMOVAPDZ128mr}},
> + {X86::VMOVUPSYrm, {X86::VMOVUPSYmr, X86::VMOVAPSYmr}},
> + {X86::VMOVAPSYrm, {X86::VMOVUPSYmr, X86::VMOVAPSYmr}},
> + {X86::VMOVUPDYrm, {X86::VMOVUPDYmr, X86::VMOVAPDYmr}},
> + {X86::VMOVAPDYrm, {X86::VMOVUPDYmr, X86::VMOVAPDYmr}},
> + {X86::VMOVDQUYrm, {X86::VMOVDQUYmr, X86::VMOVDQAYmr}},
> + {X86::VMOVDQAYrm, {X86::VMOVDQUYmr, X86::VMOVDQAYmr}},
> + {X86::VMOVUPSZ256rm, {X86::VMOVUPSZ256mr, X86::VMOVAPSZ256mr}},
> + {X86::VMOVAPSZ256rm, {X86::VMOVUPSZ256mr, X86::VMOVAPSZ256mr}},
> + {X86::VMOVUPDZ256rm, {X86::VMOVUPDZ256mr, X86::VMOVAPDZ256mr}},
> + {X86::VMOVAPDZ256rm, {X86::VMOVUPDZ256mr, X86::VMOVAPDZ256mr}},
> + {X86::VMOVDQU64Z128rm, {X86::VMOVDQU64Z128mr, X86::VMOVDQA64Z128mr}},
> + {X86::VMOVDQA64Z128rm, {X86::VMOVDQU64Z128mr, X86::VMOVDQA64Z128mr}},
> + {X86::VMOVDQU32Z128rm, {X86::VMOVDQU32Z128mr, X86::VMOVDQA32Z128mr}},
> + {X86::VMOVDQA32Z128rm, {X86::VMOVDQU32Z128mr, X86::VMOVDQA32Z128mr}},
> + {X86::VMOVDQU64Z256rm, {X86::VMOVDQU64Z256mr, X86::VMOVDQA64Z256mr}},
> + {X86::VMOVDQA64Z256rm, {X86::VMOVDQU64Z256mr, X86::VMOVDQA64Z256mr}},
> + {X86::VMOVDQU32Z256rm, {X86::VMOVDQU32Z256mr, X86::VMOVDQA32Z256mr}},
> + {X86::VMOVDQA32Z256rm, {X86::VMOVDQU32Z256mr, X86::VMOVDQA32Z256mr}},
> +};
> +
> +static bool isPotentialBlockedMemCpyPair(unsigned LdOpcode, unsigned StOpcode) {
> + auto PotentialStores = PotentialBlockedMemCpy.at(LdOpcode);
> + return PotentialStores.first == StOpcode ||
> + PotentialStores.second == StOpcode;
> +}
> +
> +static bool isPotentialBlockingStoreInst(int Opcode, int LoadOpcode) {
> + bool PBlock = false;
> + PBlock |= Opcode == X86::MOV64mr || Opcode == X86::MOV64mi32 ||
> + Opcode == X86::MOV32mr || Opcode == X86::MOV32mi ||
> + Opcode == X86::MOV16mr || Opcode == X86::MOV16mi ||
> + Opcode == X86::MOV8mr || Opcode == X86::MOV8mi;
> + if (isYMMLoadOpcode(LoadOpcode))
> + PBlock |= Opcode == X86::VMOVUPSmr || Opcode == X86::VMOVAPSmr ||
> + Opcode == X86::VMOVUPDmr || Opcode == X86::VMOVAPDmr ||
> + Opcode == X86::VMOVDQUmr || Opcode == X86::VMOVDQAmr ||
> +              Opcode == X86::VMOVUPSZ128mr || Opcode == X86::VMOVAPSZ128mr ||
> +              Opcode == X86::VMOVUPDZ128mr || Opcode == X86::VMOVAPDZ128mr ||
> +              Opcode == X86::VMOVDQU64Z128mr ||
> +              Opcode == X86::VMOVDQA64Z128mr ||
> +              Opcode == X86::VMOVDQU32Z128mr || Opcode == X86::VMOVDQA32Z128mr;
> + return PBlock;
> +}
> +
> +static const int MOV128SZ = 16;
> +static const int MOV64SZ = 8;
> +static const int MOV32SZ = 4;
> +static const int MOV16SZ = 2;
> +static const int MOV8SZ = 1;
> +
> +std::map<unsigned, unsigned> YMMtoXMMLoadMap = {
> + {X86::VMOVUPSYrm, X86::VMOVUPSrm},
> + {X86::VMOVAPSYrm, X86::VMOVUPSrm},
> + {X86::VMOVUPDYrm, X86::VMOVUPDrm},
> + {X86::VMOVAPDYrm, X86::VMOVUPDrm},
> + {X86::VMOVDQUYrm, X86::VMOVDQUrm},
> + {X86::VMOVDQAYrm, X86::VMOVDQUrm},
> + {X86::VMOVUPSZ256rm, X86::VMOVUPSZ128rm},
> + {X86::VMOVAPSZ256rm, X86::VMOVUPSZ128rm},
> + {X86::VMOVUPDZ256rm, X86::VMOVUPDZ128rm},
> + {X86::VMOVAPDZ256rm, X86::VMOVUPDZ128rm},
> + {X86::VMOVDQU64Z256rm, X86::VMOVDQU64Z128rm},
> + {X86::VMOVDQA64Z256rm, X86::VMOVDQU64Z128rm},
> + {X86::VMOVDQU32Z256rm, X86::VMOVDQU32Z128rm},
> + {X86::VMOVDQA32Z256rm, X86::VMOVDQU32Z128rm},
> +};
> +
> +std::map<unsigned, unsigned> YMMtoXMMStoreMap = {
> + {X86::VMOVUPSYmr, X86::VMOVUPSmr},
> + {X86::VMOVAPSYmr, X86::VMOVUPSmr},
> + {X86::VMOVUPDYmr, X86::VMOVUPDmr},
> + {X86::VMOVAPDYmr, X86::VMOVUPDmr},
> + {X86::VMOVDQUYmr, X86::VMOVDQUmr},
> + {X86::VMOVDQAYmr, X86::VMOVDQUmr},
> + {X86::VMOVUPSZ256mr, X86::VMOVUPSZ128mr},
> + {X86::VMOVAPSZ256mr, X86::VMOVUPSZ128mr},
> + {X86::VMOVUPDZ256mr, X86::VMOVUPDZ128mr},
> + {X86::VMOVAPDZ256mr, X86::VMOVUPDZ128mr},
> + {X86::VMOVDQU64Z256mr, X86::VMOVDQU64Z128mr},
> + {X86::VMOVDQA64Z256mr, X86::VMOVDQU64Z128mr},
> + {X86::VMOVDQU32Z256mr, X86::VMOVDQU32Z128mr},
> + {X86::VMOVDQA32Z256mr, X86::VMOVDQU32Z128mr},
> +};
> +
> +static int getAddrOffset(MachineInstr *MI) {
> + const MCInstrDesc &Descl = MI->getDesc();
> + int AddrOffset = X86II::getMemoryOperandNo(Descl.TSFlags);
> + assert(AddrOffset != -1 && "Expected Memory Operand");
> + AddrOffset += X86II::getOperandBias(Descl);
> + return AddrOffset;
> +}
> +
> +static MachineOperand &getBaseOperand(MachineInstr *MI) {
> + int AddrOffset = getAddrOffset(MI);
> + return MI->getOperand(AddrOffset + X86::AddrBaseReg);
> +}
> +
> +static MachineOperand &getDispOperand(MachineInstr *MI) {
> + int AddrOffset = getAddrOffset(MI);
> + return MI->getOperand(AddrOffset + X86::AddrDisp);
> +}
> +
> +// Relevant addressing modes contain only base register and immediate
> +// displacement or frameindex and immediate displacement.
> +// TODO: Consider expanding to other addressing modes in the future
> +static bool isRelevantAddressingMode(MachineInstr *MI) {
> + int AddrOffset = getAddrOffset(MI);
> + MachineOperand &Base = MI->getOperand(AddrOffset + X86::AddrBaseReg);
> + MachineOperand &Disp = MI->getOperand(AddrOffset + X86::AddrDisp);
> + MachineOperand &Scale = MI->getOperand(AddrOffset + X86::AddrScaleAmt);
> + MachineOperand &Index = MI->getOperand(AddrOffset + X86::AddrIndexReg);
> +  MachineOperand &Segment = MI->getOperand(AddrOffset + X86::AddrSegmentReg);
> +
> +  if (!((Base.isReg() && Base.getReg() != X86::NoRegister) || Base.isFI()))
> + return false;
> + if (!Disp.isImm())
> + return false;
> + if (Scale.getImm() != 1)
> + return false;
> + if (!(Index.isReg() && Index.getReg() == X86::NoRegister))
> + return false;
> + if (!(Segment.isReg() && Segment.getReg() == X86::NoRegister))
> + return false;
> + return true;
> +}
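Put concretely, only memory operands of the form base register plus constant
displacement (or frame index plus constant displacement) qualify: in AT&T
terms, an operand like 8(%esi) or a fixed stack slot passes, while any form
with an index register or a non-unit scale, such as (%esi,%ecx,4), or a
segment-relative access, is rejected.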
> +
> +// Collect potentially blocking stores.
> +// Limit the number of instructions we inspect backwards, since the effect
> +// of a store forward block won't be visible if the store and the load have
> +// enough instructions in between to keep the core busy.
> +static const unsigned LIMIT = 20;
> +static SmallVector<MachineInstr *, 2>
> +findPotentialBlockers(MachineInstr *LoadInst) {
> + SmallVector<MachineInstr *, 2> PotentialBlockers;
> + unsigned BlockLimit = 0;
> + for (MachineBasicBlock::iterator LI = LoadInst,
> + BB = LoadInst->getParent()->begin();
> + LI != BB; --LI) {
> + BlockLimit++;
> + if (BlockLimit >= LIMIT)
> + break;
> + MachineInstr &MI = *LI;
> + if (MI.getDesc().isCall())
> + break;
> + PotentialBlockers.push_back(&MI);
> + }
> +  // If we didn't reach the instruction limit, try the predecessor blocks.
> +  // Ideally we should traverse the predecessor blocks in depth with some
> +  // coloring algorithm, but for now let's just look at the first-order
> +  // predecessors.
> + if (BlockLimit < LIMIT) {
> + MachineBasicBlock *MBB = LoadInst->getParent();
> + int LimitLeft = LIMIT - BlockLimit;
> + for (MachineBasicBlock::pred_iterator PB = MBB->pred_begin(),
> + PE = MBB->pred_end();
> + PB != PE; ++PB) {
> + MachineBasicBlock *PMBB = *PB;
> + int PredLimit = 0;
> + for (MachineBasicBlock::reverse_iterator PMI = PMBB->rbegin(),
> + PME = PMBB->rend();
> + PMI != PME; ++PMI) {
> + PredLimit++;
> + if (PredLimit >= LimitLeft)
> + break;
> + if (PMI->getDesc().isCall())
> + break;
> + PotentialBlockers.push_back(&*PMI);
> + }
> + }
> + }
> + return PotentialBlockers;
> +}
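A worked example of the scan budget (counts hypothetical): if the load is
preceded by four instructions in its own block, the backward walk consumes
roughly 5 of the LIMIT = 20 budget, and each first-order predecessor block is
then scanned backwards through at most the remaining ~15 instructions,
stopping early at any call.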
> +
> +void FixupSFBPass::buildCopy(MachineInstr *LoadInst, unsigned NLoadOpcode,
> + int64_t LoadDisp, MachineInstr *StoreInst,
> + unsigned NStoreOpcode, int64_t StoreDisp,
> + unsigned Size, int64_t LMMOffset,
> + int64_t SMMOffset) {
> + MachineOperand &LoadBase = getBaseOperand(LoadInst);
> + MachineOperand &StoreBase = getBaseOperand(StoreInst);
> + MachineBasicBlock *MBB = LoadInst->getParent();
> + MachineMemOperand *LMMO = *LoadInst->memoperands_begin();
> + MachineMemOperand *SMMO = *StoreInst->memoperands_begin();
> +
> + unsigned Reg1 = MRI->createVirtualRegister(
> +      TII->getRegClass(TII->get(NLoadOpcode), 0, TRI, *(MBB->getParent())));
> +  BuildMI(*MBB, LoadInst, LoadInst->getDebugLoc(), TII->get(NLoadOpcode), Reg1)
> + .add(LoadBase)
> + .addImm(1)
> + .addReg(X86::NoRegister)
> + .addImm(LoadDisp)
> + .addReg(X86::NoRegister)
> + .addMemOperand(
> + MBB->getParent()->getMachineMemOperand(LMMO, LMMOffset, Size));
> + DEBUG(LoadInst->getPrevNode()->dump());
> + // If the load and store are consecutive, use the loadInst location to
> + // reduce register pressure.
> + MachineInstr *StInst = StoreInst;
> + if (StoreInst->getPrevNode() == LoadInst)
> + StInst = LoadInst;
> + BuildMI(*MBB, StInst, StInst->getDebugLoc(), TII->get(NStoreOpcode))
> + .add(StoreBase)
> + .addImm(1)
> + .addReg(X86::NoRegister)
> + .addImm(StoreDisp)
> + .addReg(X86::NoRegister)
> + .addReg(Reg1)
> + .addMemOperand(
> + MBB->getParent()->getMachineMemOperand(SMMO, SMMOffset, Size));
> + DEBUG(StInst->getPrevNode()->dump());
> +}
> +
> +void FixupSFBPass::buildCopies(int Size, MachineInstr *LoadInst,
> + int64_t LdDispImm, MachineInstr *StoreInst,
> + int64_t StDispImm, int64_t LMMOffset,
> + int64_t SMMOffset) {
> + int LdDisp = LdDispImm;
> + int StDisp = StDispImm;
> + while (Size > 0) {
> +    if ((Size - MOV128SZ >= 0) && isYMMLoadOpcode(LoadInst->getOpcode())) {
> + Size = Size - MOV128SZ;
> +      buildCopy(LoadInst, YMMtoXMMLoadMap.at(LoadInst->getOpcode()), LdDisp,
> +                StoreInst, YMMtoXMMStoreMap.at(StoreInst->getOpcode()), StDisp,
> +                MOV128SZ, LMMOffset, SMMOffset);
> + LdDisp += MOV128SZ;
> + StDisp += MOV128SZ;
> + LMMOffset += MOV128SZ;
> + SMMOffset += MOV128SZ;
> + continue;
> + }
> + if (Size - MOV64SZ >= 0 && Is64Bit) {
> + Size = Size - MOV64SZ;
> +      buildCopy(LoadInst, X86::MOV64rm, LdDisp, StoreInst, X86::MOV64mr, StDisp,
> + MOV64SZ, LMMOffset, SMMOffset);
> + LdDisp += MOV64SZ;
> + StDisp += MOV64SZ;
> + LMMOffset += MOV64SZ;
> + SMMOffset += MOV64SZ;
> + continue;
> + }
> + if (Size - MOV32SZ >= 0) {
> + Size = Size - MOV32SZ;
> +      buildCopy(LoadInst, X86::MOV32rm, LdDisp, StoreInst, X86::MOV32mr, StDisp,
> + MOV32SZ, LMMOffset, SMMOffset);
> + LdDisp += MOV32SZ;
> + StDisp += MOV32SZ;
> + LMMOffset += MOV32SZ;
> + SMMOffset += MOV32SZ;
> + continue;
> + }
> + if (Size - MOV16SZ >= 0) {
> + Size = Size - MOV16SZ;
> +      buildCopy(LoadInst, X86::MOV16rm, LdDisp, StoreInst, X86::MOV16mr, StDisp,
> + MOV16SZ, LMMOffset, SMMOffset);
> + LdDisp += MOV16SZ;
> + StDisp += MOV16SZ;
> + LMMOffset += MOV16SZ;
> + SMMOffset += MOV16SZ;
> + continue;
> + }
> + if (Size - MOV8SZ >= 0) {
> + Size = Size - MOV8SZ;
> +      buildCopy(LoadInst, X86::MOV8rm, LdDisp, StoreInst, X86::MOV8mr, StDisp,
> + MOV8SZ, LMMOffset, SMMOffset);
> + LdDisp += MOV8SZ;
> + StDisp += MOV8SZ;
> + LMMOffset += MOV8SZ;
> + SMMOffset += MOV8SZ;
> + continue;
> + }
> + }
> + assert(Size == 0 && "Wrong size division");
> +}
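To make the greedy splitting concrete (sizes hypothetical): on a 64-bit
target, a residual Size of 13 bytes is emitted as one 8-byte MOV64 copy, one
4-byte MOV32 copy, and one 1-byte MOV8 copy (8 + 4 + 1 = 13); on a 32-bit
target the Is64Bit guard skips the MOV64 case, so the same 13 bytes split as
4 + 4 + 4 + 1.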
> +
> +static void updateKillStatus(MachineInstr *LoadInst, MachineInstr *StoreInst) {
> + MachineOperand &LoadBase = getBaseOperand(LoadInst);
> + MachineOperand &StoreBase = getBaseOperand(StoreInst);
> + if (LoadBase.isReg()) {
> + MachineInstr *LastLoad = LoadInst->getPrevNode();
> + // If the original load and store to xmm/ymm were consecutive
> + // then the partial copies were also created in
> + // a consecutive order to reduce register pressure,
> + // and the location of the last load is before the last store.
> + if (StoreInst->getPrevNode() == LoadInst)
> + LastLoad = LoadInst->getPrevNode()->getPrevNode();
> + getBaseOperand(LastLoad).setIsKill(LoadBase.isKill());
> + }
> + if (StoreBase.isReg()) {
> + MachineInstr *StInst = StoreInst;
> + if (StoreInst->getPrevNode() == LoadInst)
> + StInst = LoadInst;
> + getBaseOperand(StInst->getPrevNode()).setIsKill(StoreBase.isKill());
> + }
> +}
> +
> +void FixupSFBPass::findPotentiallylBlockedCopies(MachineFunction &MF) {
> + for (auto &MBB : MF)
> + for (auto &MI : MBB)
> + if (isPotentialBlockedMemCpyLd(MI.getOpcode())) {
> + int DefVR = MI.getOperand(0).getReg();
> + if (MRI->hasOneUse(DefVR))
> +        for (auto UI = MRI->use_nodbg_begin(DefVR), UE = MRI->use_nodbg_end();
> + UI != UE;) {
> + MachineOperand &StoreMO = *UI++;
> + MachineInstr &StoreMI = *StoreMO.getParent();
> + if (isPotentialBlockedMemCpyPair(MI.getOpcode(),
> + StoreMI.getOpcode()) &&
> + (StoreMI.getParent() == MI.getParent()))
> + if (isRelevantAddressingMode(&MI) &&
> + isRelevantAddressingMode(&StoreMI))
> + BlockedLoadsStores.push_back(
> +                  std::pair<MachineInstr *, MachineInstr *>(&MI, &StoreMI));
> + }
> + }
> +}
> +unsigned FixupSFBPass::getRegSizeInBytes(MachineInstr *LoadInst) {
> + auto TRC = TII->getRegClass(TII->get(LoadInst->getOpcode()), 0, TRI,
> + *LoadInst->getParent()->getParent());
> + return TRI->getRegSizeInBits(*TRC) / 8;
> +}
> +
> +void FixupSFBPass::breakBlockedCopies(
> + MachineInstr *LoadInst, MachineInstr *StoreInst,
> + const std::map<int64_t, unsigned> &BlockingStoresDisp) {
> + int64_t LdDispImm = getDispOperand(LoadInst).getImm();
> + int64_t StDispImm = getDispOperand(StoreInst).getImm();
> + int64_t LMMOffset = (*LoadInst->memoperands_begin())->getOffset();
> + int64_t SMMOffset = (*StoreInst->memoperands_begin())->getOffset();
> +
> + int64_t LdDisp1 = LdDispImm;
> + int64_t LdDisp2 = 0;
> + int64_t StDisp1 = StDispImm;
> + int64_t StDisp2 = 0;
> + unsigned Size1 = 0;
> + unsigned Size2 = 0;
> + int64_t LdStDelta = StDispImm - LdDispImm;
> + for (auto inst : BlockingStoresDisp) {
> + LdDisp2 = inst.first;
> + StDisp2 = inst.first + LdStDelta;
> + Size1 = std::abs(std::abs(LdDisp2) - std::abs(LdDisp1));
> + Size2 = inst.second;
> + buildCopies(Size1, LoadInst, LdDisp1, StoreInst, StDisp1, LMMOffset,
> + SMMOffset);
> +    buildCopies(Size2, LoadInst, LdDisp2, StoreInst, StDisp2, LMMOffset + Size1,
> + SMMOffset + Size1);
> + LdDisp1 = LdDisp2 + Size2;
> + StDisp1 = StDisp2 + Size2;
> + LMMOffset += Size1 + Size2;
> + SMMOffset += Size1 + Size2;
> + }
> + unsigned Size3 = (LdDispImm + getRegSizeInBytes(LoadInst)) - LdDisp1;
> + buildCopies(Size3, LoadInst, LdDisp1, StoreInst, StDisp1, LMMOffset,
> + LMMOffset);
> +}
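A worked pass through the displacement bookkeeping above, with hypothetical
numbers: for a 16-byte XMM copy with LdDispImm = 0 and a single blocking
4-byte store at displacement 4, the loop emits Size1 = 4 (bytes 0..3), then
Size2 = 4 (bytes 4..7, the blocked slot copied with a same-sized load),
leaving LdDisp1 = 8; after the loop, Size3 = (0 + 16) - 8 = 8 covers the
remaining bytes 8..15. On a 32-bit target that is exactly the four movl
load/store pairs visible in the tests below.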
> +
> +bool FixupSFBPass::runOnMachineFunction(MachineFunction &MF) {
> + bool Changed = false;
> +
> + if (DisableX86FixupSFB || skipFunction(MF.getFunction()))
> + return false;
> +
> + MRI = &MF.getRegInfo();
> + assert(MRI->isSSA() && "Expected MIR to be in SSA form");
> + TII = MF.getSubtarget<X86Subtarget>().getInstrInfo();
> + TRI = MF.getSubtarget<X86Subtarget>().getRegisterInfo();
> + Is64Bit = MF.getSubtarget<X86Subtarget>().is64Bit();
> + DEBUG(dbgs() << "Start X86FixupSFB\n";);
> + // Look for a load then a store to XMM/YMM which look like a memcpy
> + findPotentiallylBlockedCopies(MF);
> +
> + for (auto LoadStoreInst : BlockedLoadsStores) {
> + MachineInstr *LoadInst = LoadStoreInst.first;
> + SmallVector<MachineInstr *, 2> PotentialBlockers =
> + findPotentialBlockers(LoadInst);
> +
> + MachineOperand &LoadBase = getBaseOperand(LoadInst);
> + int64_t LdDispImm = getDispOperand(LoadInst).getImm();
> + std::map<int64_t, unsigned> BlockingStoresDisp;
> +    int LdBaseReg = LoadBase.isReg() ? LoadBase.getReg() : LoadBase.getIndex();
> +
> + for (auto PBInst : PotentialBlockers) {
> + if (isPotentialBlockingStoreInst(PBInst->getOpcode(),
> + LoadInst->getOpcode())) {
> + if (!isRelevantAddressingMode(PBInst))
> + continue;
> + MachineOperand &PBstoreBase = getBaseOperand(PBInst);
> + int64_t PBstDispImm = getDispOperand(PBInst).getImm();
> +        assert(PBInst->hasOneMemOperand() && "Expected One Memory Operand");
> + unsigned PBstSize = (*PBInst->memoperands_begin())->getSize();
> + int PBstBaseReg =
> +          PBstoreBase.isReg() ? PBstoreBase.getReg() : PBstoreBase.getIndex();
> +        // This check doesn't cover all cases, but it will suffice for now.
> +        // TODO: take branch probability into consideration: if the blocking
> +        // store is on a rarely-taken path, breaking the memcpy could lose
> +        // performance.
> + if (((LoadBase.isReg() && PBstoreBase.isReg()) ||
> + (LoadBase.isFI() && PBstoreBase.isFI())) &&
> + LdBaseReg == PBstBaseReg &&
> + ((PBstDispImm >= LdDispImm) &&
> + (PBstDispImm <=
> + LdDispImm + (getRegSizeInBytes(LoadInst) - PBstSize)))) {
> + if (BlockingStoresDisp.count(PBstDispImm)) {
> + if (BlockingStoresDisp[PBstDispImm] > PBstSize)
> + BlockingStoresDisp[PBstDispImm] = PBstSize;
> +
> + } else
> + BlockingStoresDisp[PBstDispImm] = PBstSize;
> + }
> + }
> + }
> +
> + if (BlockingStoresDisp.size() == 0)
> + continue;
> +
> +    // We found a store forward block; break the memcpy's load and store
> +    // into smaller copies such that each region written by a smaller
> +    // blocking store is now copied separately.
> + MachineInstr *StoreInst = LoadStoreInst.second;
> + DEBUG(dbgs() << "Blocked load and store instructions: \n");
> + DEBUG(LoadInst->dump());
> + DEBUG(StoreInst->dump());
> + DEBUG(dbgs() << "Replaced with:\n");
> + breakBlockedCopies(LoadInst, StoreInst, BlockingStoresDisp);
> + updateKillStatus(LoadInst, StoreInst);
> + ForRemoval.push_back(LoadInst);
> + ForRemoval.push_back(StoreInst);
> + }
> + for (auto RemovedInst : ForRemoval) {
> + RemovedInst->eraseFromParent();
> + }
> + ForRemoval.clear();
> + BlockedLoadsStores.clear();
> + DEBUG(dbgs() << "End X86FixupSFB\n";);
> +
> + return Changed;
> +}
>
> Modified: llvm/trunk/lib/Target/X86/X86TargetMachine.cpp
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86TargetMachine.cpp?rev=325128&r1=325127&r2=325128&view=diff
>
> ==============================================================================
> --- llvm/trunk/lib/Target/X86/X86TargetMachine.cpp (original)
> +++ llvm/trunk/lib/Target/X86/X86TargetMachine.cpp Wed Feb 14 06:58:53 2018
> @@ -449,6 +449,7 @@ void X86PassConfig::addPreRegAlloc() {
> addPass(createX86FixupSetCC());
> addPass(createX86OptimizeLEAs());
> addPass(createX86CallFrameOptimization());
> + addPass(createX86FixupSFB());
> }
>
> addPass(createX86WinAllocaExpander());
>
> Added: llvm/trunk/test/CodeGen/X86/fixup-sfb-32.ll
> URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/X86/fixup-sfb-32.ll?rev=325128&view=auto
>
> ==============================================================================
> --- llvm/trunk/test/CodeGen/X86/fixup-sfb-32.ll (added)
> +++ llvm/trunk/test/CodeGen/X86/fixup-sfb-32.ll Wed Feb 14 06:58:53 2018
> @@ -0,0 +1,1926 @@
> +; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
> +; RUN: llc < %s -mtriple=i686-linux | FileCheck %s -check-prefix=CHECK
> +; RUN: llc < %s -mtriple=i686-linux --disable-fixup-SFB | FileCheck %s --check-prefix=DISABLED
> +; RUN: llc < %s -mtriple=i686-linux -mattr +sse4.1 | FileCheck %s -check-prefix=CHECK-AVX2
> +; RUN: llc < %s -mtriple=i686-linux -mattr=+avx512f,+avx512bw,+avx512vl,+avx512dq | FileCheck %s -check-prefix=CHECK-AVX512
> +
> +%struct.S = type { i32, i32, i32, i32 }
> +
> +; Function Attrs: nounwind uwtable
> +define void @test_conditional_block(%struct.S* nocapture %s1, %struct.S* nocapture %s2, i32 %x, %struct.S* nocapture %s3, %struct.S* nocapture readonly %s4) local_unnamed_addr #0 {
> +; CHECK-LABEL: test_conditional_block:
> +; CHECK: # %bb.0: # %entry
> +; CHECK-NEXT: pushl %edi
> +; CHECK-NEXT: .cfi_def_cfa_offset 8
> +; CHECK-NEXT: pushl %esi
> +; CHECK-NEXT: .cfi_def_cfa_offset 12
> +; CHECK-NEXT: .cfi_offset %esi, -12
> +; CHECK-NEXT: .cfi_offset %edi, -8
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %esi
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %edi
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-NEXT: cmpl $18, %edi
> +; CHECK-NEXT: jl .LBB0_2
> +; CHECK-NEXT: # %bb.1: # %if.then
> +; CHECK-NEXT: movl %edi, 4(%ecx)
> +; CHECK-NEXT: .LBB0_2: # %if.end
> +; CHECK-NEXT: movups (%esi), %xmm0
> +; CHECK-NEXT: movups %xmm0, (%edx)
> +; CHECK-NEXT: movl (%ecx), %edx
> +; CHECK-NEXT: movl %edx, (%eax)
> +; CHECK-NEXT: movl 4(%ecx), %edx
> +; CHECK-NEXT: movl %edx, 4(%eax)
> +; CHECK-NEXT: movl 8(%ecx), %edx
> +; CHECK-NEXT: movl %edx, 8(%eax)
> +; CHECK-NEXT: movl 12(%ecx), %ecx
> +; CHECK-NEXT: movl %ecx, 12(%eax)
> +; CHECK-NEXT: popl %esi
> +; CHECK-NEXT: popl %edi
> +; CHECK-NEXT: retl
> +;
> +; DISABLED-LABEL: test_conditional_block:
> +; DISABLED: # %bb.0: # %entry
> +; DISABLED-NEXT: pushl %edi
> +; DISABLED-NEXT: .cfi_def_cfa_offset 8
> +; DISABLED-NEXT: pushl %esi
> +; DISABLED-NEXT: .cfi_def_cfa_offset 12
> +; DISABLED-NEXT: .cfi_offset %esi, -12
> +; DISABLED-NEXT: .cfi_offset %edi, -8
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %edi
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %esi
> +; DISABLED-NEXT: cmpl $18, %edi
> +; DISABLED-NEXT: jl .LBB0_2
> +; DISABLED-NEXT: # %bb.1: # %if.then
> +; DISABLED-NEXT: movl %edi, 4(%esi)
> +; DISABLED-NEXT: .LBB0_2: # %if.end
> +; DISABLED-NEXT: movups (%edx), %xmm0
> +; DISABLED-NEXT: movups %xmm0, (%ecx)
> +; DISABLED-NEXT: movups (%esi), %xmm0
> +; DISABLED-NEXT: movups %xmm0, (%eax)
> +; DISABLED-NEXT: popl %esi
> +; DISABLED-NEXT: popl %edi
> +; DISABLED-NEXT: retl
> +;
> +; CHECK-AVX2-LABEL: test_conditional_block:
> +; CHECK-AVX2: # %bb.0: # %entry
> +; CHECK-AVX2-NEXT: pushl %edi
> +; CHECK-AVX2-NEXT: .cfi_def_cfa_offset 8
> +; CHECK-AVX2-NEXT: pushl %esi
> +; CHECK-AVX2-NEXT: .cfi_def_cfa_offset 12
> +; CHECK-AVX2-NEXT: .cfi_offset %esi, -12
> +; CHECK-AVX2-NEXT: .cfi_offset %edi, -8
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %esi
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %edi
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-AVX2-NEXT: cmpl $18, %edi
> +; CHECK-AVX2-NEXT: jl .LBB0_2
> +; CHECK-AVX2-NEXT: # %bb.1: # %if.then
> +; CHECK-AVX2-NEXT: movl %edi, 4(%ecx)
> +; CHECK-AVX2-NEXT: .LBB0_2: # %if.end
> +; CHECK-AVX2-NEXT: movups (%esi), %xmm0
> +; CHECK-AVX2-NEXT: movups %xmm0, (%edx)
> +; CHECK-AVX2-NEXT: movl (%ecx), %edx
> +; CHECK-AVX2-NEXT: movl %edx, (%eax)
> +; CHECK-AVX2-NEXT: movl 4(%ecx), %edx
> +; CHECK-AVX2-NEXT: movl %edx, 4(%eax)
> +; CHECK-AVX2-NEXT: movl 8(%ecx), %edx
> +; CHECK-AVX2-NEXT: movl %edx, 8(%eax)
> +; CHECK-AVX2-NEXT: movl 12(%ecx), %ecx
> +; CHECK-AVX2-NEXT: movl %ecx, 12(%eax)
> +; CHECK-AVX2-NEXT: popl %esi
> +; CHECK-AVX2-NEXT: popl %edi
> +; CHECK-AVX2-NEXT: retl
> +;
> +; CHECK-AVX512-LABEL: test_conditional_block:
> +; CHECK-AVX512: # %bb.0: # %entry
> +; CHECK-AVX512-NEXT: pushl %edi
> +; CHECK-AVX512-NEXT: .cfi_def_cfa_offset 8
> +; CHECK-AVX512-NEXT: pushl %esi
> +; CHECK-AVX512-NEXT: .cfi_def_cfa_offset 12
> +; CHECK-AVX512-NEXT: .cfi_offset %esi, -12
> +; CHECK-AVX512-NEXT: .cfi_offset %edi, -8
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %esi
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %edi
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-AVX512-NEXT: cmpl $18, %edi
> +; CHECK-AVX512-NEXT: jl .LBB0_2
> +; CHECK-AVX512-NEXT: # %bb.1: # %if.then
> +; CHECK-AVX512-NEXT: movl %edi, 4(%ecx)
> +; CHECK-AVX512-NEXT: .LBB0_2: # %if.end
> +; CHECK-AVX512-NEXT: vmovups (%esi), %xmm0
> +; CHECK-AVX512-NEXT: vmovups %xmm0, (%edx)
> +; CHECK-AVX512-NEXT: movl (%ecx), %edx
> +; CHECK-AVX512-NEXT: movl %edx, (%eax)
> +; CHECK-AVX512-NEXT: movl 4(%ecx), %edx
> +; CHECK-AVX512-NEXT: movl %edx, 4(%eax)
> +; CHECK-AVX512-NEXT: movl 8(%ecx), %edx
> +; CHECK-AVX512-NEXT: movl %edx, 8(%eax)
> +; CHECK-AVX512-NEXT: movl 12(%ecx), %ecx
> +; CHECK-AVX512-NEXT: movl %ecx, 12(%eax)
> +; CHECK-AVX512-NEXT: popl %esi
> +; CHECK-AVX512-NEXT: popl %edi
> +; CHECK-AVX512-NEXT: retl
> +entry:
> + %cmp = icmp sgt i32 %x, 17
> + br i1 %cmp, label %if.then, label %if.end
> +
> +if.then: ; preds = %entry
> + %b = getelementptr inbounds %struct.S, %struct.S* %s1, i64 0, i32 1
> + store i32 %x, i32* %b, align 4
> + br label %if.end
> +
> +if.end:                                           ; preds = %if.then, %entry
> + %0 = bitcast %struct.S* %s3 to i8*
> + %1 = bitcast %struct.S* %s4 to i8*
> +  tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %0, i8* %1, i64 16, i32 4, i1 false)
> + %2 = bitcast %struct.S* %s2 to i8*
> + %3 = bitcast %struct.S* %s1 to i8*
> +  tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %2, i8* %3, i64 16, i32 4, i1 false)
> + ret void
> +}
> +
> +; Function Attrs: nounwind uwtable
> +define void @test_imm_store(%struct.S* nocapture %s1, %struct.S* nocapture %s2, i32 %x, %struct.S* nocapture %s3) local_unnamed_addr #0 {
> +; CHECK-LABEL: test_imm_store:
> +; CHECK: # %bb.0: # %entry
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; CHECK-NEXT: movl $0, (%edx)
> +; CHECK-NEXT: movl $1, (%ecx)
> +; CHECK-NEXT: movl (%edx), %ecx
> +; CHECK-NEXT: movl %ecx, (%eax)
> +; CHECK-NEXT: movl 4(%edx), %ecx
> +; CHECK-NEXT: movl %ecx, 4(%eax)
> +; CHECK-NEXT: movl 8(%edx), %ecx
> +; CHECK-NEXT: movl %ecx, 8(%eax)
> +; CHECK-NEXT: movl 12(%edx), %ecx
> +; CHECK-NEXT: movl %ecx, 12(%eax)
> +; CHECK-NEXT: retl
> +;
> +; DISABLED-LABEL: test_imm_store:
> +; DISABLED: # %bb.0: # %entry
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; DISABLED-NEXT: movl $0, (%edx)
> +; DISABLED-NEXT: movl $1, (%ecx)
> +; DISABLED-NEXT: movups (%edx), %xmm0
> +; DISABLED-NEXT: movups %xmm0, (%eax)
> +; DISABLED-NEXT: retl
> +;
> +; CHECK-AVX2-LABEL: test_imm_store:
> +; CHECK-AVX2: # %bb.0: # %entry
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; CHECK-AVX2-NEXT: movl $0, (%edx)
> +; CHECK-AVX2-NEXT: movl $1, (%ecx)
> +; CHECK-AVX2-NEXT: movl (%edx), %ecx
> +; CHECK-AVX2-NEXT: movl %ecx, (%eax)
> +; CHECK-AVX2-NEXT: movl 4(%edx), %ecx
> +; CHECK-AVX2-NEXT: movl %ecx, 4(%eax)
> +; CHECK-AVX2-NEXT: movl 8(%edx), %ecx
> +; CHECK-AVX2-NEXT: movl %ecx, 8(%eax)
> +; CHECK-AVX2-NEXT: movl 12(%edx), %ecx
> +; CHECK-AVX2-NEXT: movl %ecx, 12(%eax)
> +; CHECK-AVX2-NEXT: retl
> +;
> +; CHECK-AVX512-LABEL: test_imm_store:
> +; CHECK-AVX512: # %bb.0: # %entry
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; CHECK-AVX512-NEXT: movl $0, (%edx)
> +; CHECK-AVX512-NEXT: movl $1, (%ecx)
> +; CHECK-AVX512-NEXT: movl (%edx), %ecx
> +; CHECK-AVX512-NEXT: movl %ecx, (%eax)
> +; CHECK-AVX512-NEXT: movl 4(%edx), %ecx
> +; CHECK-AVX512-NEXT: movl %ecx, 4(%eax)
> +; CHECK-AVX512-NEXT: movl 8(%edx), %ecx
> +; CHECK-AVX512-NEXT: movl %ecx, 8(%eax)
> +; CHECK-AVX512-NEXT: movl 12(%edx), %ecx
> +; CHECK-AVX512-NEXT: movl %ecx, 12(%eax)
> +; CHECK-AVX512-NEXT: retl
> +entry:
> + %a = getelementptr inbounds %struct.S, %struct.S* %s1, i64 0, i32 0
> + store i32 0, i32* %a, align 4
> + %a1 = getelementptr inbounds %struct.S, %struct.S* %s3, i64 0, i32 0
> + store i32 1, i32* %a1, align 4
> + %0 = bitcast %struct.S* %s2 to i8*
> + %1 = bitcast %struct.S* %s1 to i8*
> +  tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %0, i8* %1, i64 16, i32 4, i1 false)
> + ret void
> +}
> +
> +; Function Attrs: nounwind uwtable
> +define void @test_nondirect_br(%struct.S* nocapture %s1, %struct.S* nocapture %s2, i32 %x, %struct.S* nocapture %s3, %struct.S* nocapture readonly %s4, i32 %x2) local_unnamed_addr #0 {
> +; CHECK-LABEL: test_nondirect_br:
> +; CHECK: # %bb.0: # %entry
> +; CHECK-NEXT: pushl %edi
> +; CHECK-NEXT: .cfi_def_cfa_offset 8
> +; CHECK-NEXT: pushl %esi
> +; CHECK-NEXT: .cfi_def_cfa_offset 12
> +; CHECK-NEXT: .cfi_offset %esi, -12
> +; CHECK-NEXT: .cfi_offset %edi, -8
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; CHECK-NEXT: cmpl $18, %ecx
> +; CHECK-NEXT: jl .LBB2_2
> +; CHECK-NEXT: # %bb.1: # %if.then
> +; CHECK-NEXT: movl %ecx, 4(%eax)
> +; CHECK-NEXT: .LBB2_2: # %if.end
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %edi
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %esi
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-NEXT: cmpl $14, %edx
> +; CHECK-NEXT: jl .LBB2_4
> +; CHECK-NEXT: # %bb.3: # %if.then2
> +; CHECK-NEXT: movl %edx, 12(%eax)
> +; CHECK-NEXT: .LBB2_4: # %if.end3
> +; CHECK-NEXT: movups (%edi), %xmm0
> +; CHECK-NEXT: movups %xmm0, (%esi)
> +; CHECK-NEXT: movl (%eax), %edx
> +; CHECK-NEXT: movl %edx, (%ecx)
> +; CHECK-NEXT: movl 4(%eax), %edx
> +; CHECK-NEXT: movl %edx, 4(%ecx)
> +; CHECK-NEXT: movl 8(%eax), %edx
> +; CHECK-NEXT: movl %edx, 8(%ecx)
> +; CHECK-NEXT: movl 12(%eax), %eax
> +; CHECK-NEXT: movl %eax, 12(%ecx)
> +; CHECK-NEXT: popl %esi
> +; CHECK-NEXT: popl %edi
> +; CHECK-NEXT: retl
> +;
> +; DISABLED-LABEL: test_nondirect_br:
> +; DISABLED: # %bb.0: # %entry
> +; DISABLED-NEXT: pushl %edi
> +; DISABLED-NEXT: .cfi_def_cfa_offset 8
> +; DISABLED-NEXT: pushl %esi
> +; DISABLED-NEXT: .cfi_def_cfa_offset 12
> +; DISABLED-NEXT: .cfi_offset %esi, -12
> +; DISABLED-NEXT: .cfi_offset %edi, -8
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; DISABLED-NEXT: cmpl $18, %edx
> +; DISABLED-NEXT: jl .LBB2_2
> +; DISABLED-NEXT: # %bb.1: # %if.then
> +; DISABLED-NEXT: movl %edx, 4(%eax)
> +; DISABLED-NEXT: .LBB2_2: # %if.end
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %edi
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %esi
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; DISABLED-NEXT: cmpl $14, %ecx
> +; DISABLED-NEXT: jl .LBB2_4
> +; DISABLED-NEXT: # %bb.3: # %if.then2
> +; DISABLED-NEXT: movl %ecx, 12(%eax)
> +; DISABLED-NEXT: .LBB2_4: # %if.end3
> +; DISABLED-NEXT: movups (%edi), %xmm0
> +; DISABLED-NEXT: movups %xmm0, (%esi)
> +; DISABLED-NEXT: movups (%eax), %xmm0
> +; DISABLED-NEXT: movups %xmm0, (%edx)
> +; DISABLED-NEXT: popl %esi
> +; DISABLED-NEXT: popl %edi
> +; DISABLED-NEXT: retl
> +;
> +; CHECK-AVX2-LABEL: test_nondirect_br:
> +; CHECK-AVX2: # %bb.0: # %entry
> +; CHECK-AVX2-NEXT: pushl %edi
> +; CHECK-AVX2-NEXT: .cfi_def_cfa_offset 8
> +; CHECK-AVX2-NEXT: pushl %esi
> +; CHECK-AVX2-NEXT: .cfi_def_cfa_offset 12
> +; CHECK-AVX2-NEXT: .cfi_offset %esi, -12
> +; CHECK-AVX2-NEXT: .cfi_offset %edi, -8
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; CHECK-AVX2-NEXT: cmpl $18, %ecx
> +; CHECK-AVX2-NEXT: jl .LBB2_2
> +; CHECK-AVX2-NEXT: # %bb.1: # %if.then
> +; CHECK-AVX2-NEXT: movl %ecx, 4(%eax)
> +; CHECK-AVX2-NEXT: .LBB2_2: # %if.end
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %edi
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %esi
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-AVX2-NEXT: cmpl $14, %edx
> +; CHECK-AVX2-NEXT: jl .LBB2_4
> +; CHECK-AVX2-NEXT: # %bb.3: # %if.then2
> +; CHECK-AVX2-NEXT: movl %edx, 12(%eax)
> +; CHECK-AVX2-NEXT: .LBB2_4: # %if.end3
> +; CHECK-AVX2-NEXT: movups (%edi), %xmm0
> +; CHECK-AVX2-NEXT: movups %xmm0, (%esi)
> +; CHECK-AVX2-NEXT: movl (%eax), %edx
> +; CHECK-AVX2-NEXT: movl %edx, (%ecx)
> +; CHECK-AVX2-NEXT: movl 4(%eax), %edx
> +; CHECK-AVX2-NEXT: movl %edx, 4(%ecx)
> +; CHECK-AVX2-NEXT: movl 8(%eax), %edx
> +; CHECK-AVX2-NEXT: movl %edx, 8(%ecx)
> +; CHECK-AVX2-NEXT: movl 12(%eax), %eax
> +; CHECK-AVX2-NEXT: movl %eax, 12(%ecx)
> +; CHECK-AVX2-NEXT: popl %esi
> +; CHECK-AVX2-NEXT: popl %edi
> +; CHECK-AVX2-NEXT: retl
> +;
> +; CHECK-AVX512-LABEL: test_nondirect_br:
> +; CHECK-AVX512: # %bb.0: # %entry
> +; CHECK-AVX512-NEXT: pushl %edi
> +; CHECK-AVX512-NEXT: .cfi_def_cfa_offset 8
> +; CHECK-AVX512-NEXT: pushl %esi
> +; CHECK-AVX512-NEXT: .cfi_def_cfa_offset 12
> +; CHECK-AVX512-NEXT: .cfi_offset %esi, -12
> +; CHECK-AVX512-NEXT: .cfi_offset %edi, -8
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; CHECK-AVX512-NEXT: cmpl $18, %ecx
> +; CHECK-AVX512-NEXT: jl .LBB2_2
> +; CHECK-AVX512-NEXT: # %bb.1: # %if.then
> +; CHECK-AVX512-NEXT: movl %ecx, 4(%eax)
> +; CHECK-AVX512-NEXT: .LBB2_2: # %if.end
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %edi
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %esi
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-AVX512-NEXT: cmpl $14, %edx
> +; CHECK-AVX512-NEXT: jl .LBB2_4
> +; CHECK-AVX512-NEXT: # %bb.3: # %if.then2
> +; CHECK-AVX512-NEXT: movl %edx, 12(%eax)
> +; CHECK-AVX512-NEXT: .LBB2_4: # %if.end3
> +; CHECK-AVX512-NEXT: vmovups (%edi), %xmm0
> +; CHECK-AVX512-NEXT: vmovups %xmm0, (%esi)
> +; CHECK-AVX512-NEXT: movl (%eax), %edx
> +; CHECK-AVX512-NEXT: movl %edx, (%ecx)
> +; CHECK-AVX512-NEXT: movl 4(%eax), %edx
> +; CHECK-AVX512-NEXT: movl %edx, 4(%ecx)
> +; CHECK-AVX512-NEXT: movl 8(%eax), %edx
> +; CHECK-AVX512-NEXT: movl %edx, 8(%ecx)
> +; CHECK-AVX512-NEXT: movl 12(%eax), %eax
> +; CHECK-AVX512-NEXT: movl %eax, 12(%ecx)
> +; CHECK-AVX512-NEXT: popl %esi
> +; CHECK-AVX512-NEXT: popl %edi
> +; CHECK-AVX512-NEXT: retl
> +entry:
> + %cmp = icmp sgt i32 %x, 17
> + br i1 %cmp, label %if.then, label %if.end
> +
> +if.then: ; preds = %entry
> + %b = getelementptr inbounds %struct.S, %struct.S* %s1, i64 0, i32 1
> + store i32 %x, i32* %b, align 4
> + br label %if.end
> +
> +if.end:                                           ; preds = %if.then, %entry
> + %cmp1 = icmp sgt i32 %x2, 13
> + br i1 %cmp1, label %if.then2, label %if.end3
> +
> +if.then2: ; preds = %if.end
> + %d = getelementptr inbounds %struct.S, %struct.S* %s1, i64 0, i32 3
> + store i32 %x2, i32* %d, align 4
> + br label %if.end3
> +
> +if.end3:                                          ; preds = %if.then2, %if.end
> + %0 = bitcast %struct.S* %s3 to i8*
> + %1 = bitcast %struct.S* %s4 to i8*
> +  tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %0, i8* %1, i64 16, i32 4, i1 false)
> + %2 = bitcast %struct.S* %s2 to i8*
> + %3 = bitcast %struct.S* %s1 to i8*
> +  tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %2, i8* %3, i64 16, i32 4, i1 false)
> + ret void
> +}
> +
> +; Function Attrs: nounwind uwtable
> +define void @test_2preds_block(%struct.S* nocapture %s1, %struct.S* nocapture %s2, i32 %x, %struct.S* nocapture %s3, %struct.S* nocapture readonly %s4, i32 %x2) local_unnamed_addr #0 {
> +; CHECK-LABEL: test_2preds_block:
> +; CHECK: # %bb.0: # %entry
> +; CHECK-NEXT: pushl %ebx
> +; CHECK-NEXT: .cfi_def_cfa_offset 8
> +; CHECK-NEXT: pushl %edi
> +; CHECK-NEXT: .cfi_def_cfa_offset 12
> +; CHECK-NEXT: pushl %esi
> +; CHECK-NEXT: .cfi_def_cfa_offset 16
> +; CHECK-NEXT: .cfi_offset %esi, -16
> +; CHECK-NEXT: .cfi_offset %edi, -12
> +; CHECK-NEXT: .cfi_offset %ebx, -8
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %esi
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %edi
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %ebx
> +; CHECK-NEXT: movl %ebx, 12(%ecx)
> +; CHECK-NEXT: cmpl $18, %edi
> +; CHECK-NEXT: jl .LBB3_2
> +; CHECK-NEXT: # %bb.1: # %if.then
> +; CHECK-NEXT: movl %edi, 4(%ecx)
> +; CHECK-NEXT: .LBB3_2: # %if.end
> +; CHECK-NEXT: movups (%esi), %xmm0
> +; CHECK-NEXT: movups %xmm0, (%edx)
> +; CHECK-NEXT: movl (%ecx), %edx
> +; CHECK-NEXT: movl %edx, (%eax)
> +; CHECK-NEXT: movl 4(%ecx), %edx
> +; CHECK-NEXT: movl %edx, 4(%eax)
> +; CHECK-NEXT: movl 8(%ecx), %edx
> +; CHECK-NEXT: movl %edx, 8(%eax)
> +; CHECK-NEXT: movl 12(%ecx), %ecx
> +; CHECK-NEXT: movl %ecx, 12(%eax)
> +; CHECK-NEXT: popl %esi
> +; CHECK-NEXT: popl %edi
> +; CHECK-NEXT: popl %ebx
> +; CHECK-NEXT: retl
> +;
> +; DISABLED-LABEL: test_2preds_block:
> +; DISABLED: # %bb.0: # %entry
> +; DISABLED-NEXT: pushl %ebx
> +; DISABLED-NEXT: .cfi_def_cfa_offset 8
> +; DISABLED-NEXT: pushl %edi
> +; DISABLED-NEXT: .cfi_def_cfa_offset 12
> +; DISABLED-NEXT: pushl %esi
> +; DISABLED-NEXT: .cfi_def_cfa_offset 16
> +; DISABLED-NEXT: .cfi_offset %esi, -16
> +; DISABLED-NEXT: .cfi_offset %edi, -12
> +; DISABLED-NEXT: .cfi_offset %ebx, -8
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %edi
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %esi
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %ebx
> +; DISABLED-NEXT: movl %ebx, 12(%esi)
> +; DISABLED-NEXT: cmpl $18, %edi
> +; DISABLED-NEXT: jl .LBB3_2
> +; DISABLED-NEXT: # %bb.1: # %if.then
> +; DISABLED-NEXT: movl %edi, 4(%esi)
> +; DISABLED-NEXT: .LBB3_2: # %if.end
> +; DISABLED-NEXT: movups (%edx), %xmm0
> +; DISABLED-NEXT: movups %xmm0, (%ecx)
> +; DISABLED-NEXT: movups (%esi), %xmm0
> +; DISABLED-NEXT: movups %xmm0, (%eax)
> +; DISABLED-NEXT: popl %esi
> +; DISABLED-NEXT: popl %edi
> +; DISABLED-NEXT: popl %ebx
> +; DISABLED-NEXT: retl
> +;
> +; CHECK-AVX2-LABEL: test_2preds_block:
> +; CHECK-AVX2: # %bb.0: # %entry
> +; CHECK-AVX2-NEXT: pushl %ebx
> +; CHECK-AVX2-NEXT: .cfi_def_cfa_offset 8
> +; CHECK-AVX2-NEXT: pushl %edi
> +; CHECK-AVX2-NEXT: .cfi_def_cfa_offset 12
> +; CHECK-AVX2-NEXT: pushl %esi
> +; CHECK-AVX2-NEXT: .cfi_def_cfa_offset 16
> +; CHECK-AVX2-NEXT: .cfi_offset %esi, -16
> +; CHECK-AVX2-NEXT: .cfi_offset %edi, -12
> +; CHECK-AVX2-NEXT: .cfi_offset %ebx, -8
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %esi
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %edi
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %ebx
> +; CHECK-AVX2-NEXT: movl %ebx, 12(%ecx)
> +; CHECK-AVX2-NEXT: cmpl $18, %edi
> +; CHECK-AVX2-NEXT: jl .LBB3_2
> +; CHECK-AVX2-NEXT: # %bb.1: # %if.then
> +; CHECK-AVX2-NEXT: movl %edi, 4(%ecx)
> +; CHECK-AVX2-NEXT: .LBB3_2: # %if.end
> +; CHECK-AVX2-NEXT: movups (%esi), %xmm0
> +; CHECK-AVX2-NEXT: movups %xmm0, (%edx)
> +; CHECK-AVX2-NEXT: movl (%ecx), %edx
> +; CHECK-AVX2-NEXT: movl %edx, (%eax)
> +; CHECK-AVX2-NEXT: movl 4(%ecx), %edx
> +; CHECK-AVX2-NEXT: movl %edx, 4(%eax)
> +; CHECK-AVX2-NEXT: movl 8(%ecx), %edx
> +; CHECK-AVX2-NEXT: movl %edx, 8(%eax)
> +; CHECK-AVX2-NEXT: movl 12(%ecx), %ecx
> +; CHECK-AVX2-NEXT: movl %ecx, 12(%eax)
> +; CHECK-AVX2-NEXT: popl %esi
> +; CHECK-AVX2-NEXT: popl %edi
> +; CHECK-AVX2-NEXT: popl %ebx
> +; CHECK-AVX2-NEXT: retl
> +;
> +; CHECK-AVX512-LABEL: test_2preds_block:
> +; CHECK-AVX512: # %bb.0: # %entry
> +; CHECK-AVX512-NEXT: pushl %ebx
> +; CHECK-AVX512-NEXT: .cfi_def_cfa_offset 8
> +; CHECK-AVX512-NEXT: pushl %edi
> +; CHECK-AVX512-NEXT: .cfi_def_cfa_offset 12
> +; CHECK-AVX512-NEXT: pushl %esi
> +; CHECK-AVX512-NEXT: .cfi_def_cfa_offset 16
> +; CHECK-AVX512-NEXT: .cfi_offset %esi, -16
> +; CHECK-AVX512-NEXT: .cfi_offset %edi, -12
> +; CHECK-AVX512-NEXT: .cfi_offset %ebx, -8
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %esi
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %edi
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %ebx
> +; CHECK-AVX512-NEXT: movl %ebx, 12(%ecx)
> +; CHECK-AVX512-NEXT: cmpl $18, %edi
> +; CHECK-AVX512-NEXT: jl .LBB3_2
> +; CHECK-AVX512-NEXT: # %bb.1: # %if.then
> +; CHECK-AVX512-NEXT: movl %edi, 4(%ecx)
> +; CHECK-AVX512-NEXT: .LBB3_2: # %if.end
> +; CHECK-AVX512-NEXT: vmovups (%esi), %xmm0
> +; CHECK-AVX512-NEXT: vmovups %xmm0, (%edx)
> +; CHECK-AVX512-NEXT: movl (%ecx), %edx
> +; CHECK-AVX512-NEXT: movl %edx, (%eax)
> +; CHECK-AVX512-NEXT: movl 4(%ecx), %edx
> +; CHECK-AVX512-NEXT: movl %edx, 4(%eax)
> +; CHECK-AVX512-NEXT: movl 8(%ecx), %edx
> +; CHECK-AVX512-NEXT: movl %edx, 8(%eax)
> +; CHECK-AVX512-NEXT: movl 12(%ecx), %ecx
> +; CHECK-AVX512-NEXT: movl %ecx, 12(%eax)
> +; CHECK-AVX512-NEXT: popl %esi
> +; CHECK-AVX512-NEXT: popl %edi
> +; CHECK-AVX512-NEXT: popl %ebx
> +; CHECK-AVX512-NEXT: retl
> +entry:
> + %d = getelementptr inbounds %struct.S, %struct.S* %s1, i64 0, i32 3
> + store i32 %x2, i32* %d, align 4
> + %cmp = icmp sgt i32 %x, 17
> + br i1 %cmp, label %if.then, label %if.end
> +
> +if.then: ; preds = %entry
> + %b = getelementptr inbounds %struct.S, %struct.S* %s1, i64 0, i32 1
> + store i32 %x, i32* %b, align 4
> + br label %if.end
> +
> +if.end:                                           ; preds = %if.then, %entry
> + %0 = bitcast %struct.S* %s3 to i8*
> + %1 = bitcast %struct.S* %s4 to i8*
> +  tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %0, i8* %1, i64 16, i32 4, i1 false)
> + %2 = bitcast %struct.S* %s2 to i8*
> + %3 = bitcast %struct.S* %s1 to i8*
> +  tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %2, i8* %3, i64 16, i32 4, i1 false)
> + ret void
> +}
> +%struct.S2 = type { i64, i64 }
> +
> +; Function Attrs: nounwind uwtable
> +define void @test_type64(%struct.S2* nocapture %s1, %struct.S2* nocapture %s2, i32 %x, %struct.S2* nocapture %s3, %struct.S2* nocapture readonly %s4) local_unnamed_addr #0 {
> +; CHECK-LABEL: test_type64:
> +; CHECK: # %bb.0: # %entry
> +; CHECK-NEXT: pushl %edi
> +; CHECK-NEXT: .cfi_def_cfa_offset 8
> +; CHECK-NEXT: pushl %esi
> +; CHECK-NEXT: .cfi_def_cfa_offset 12
> +; CHECK-NEXT: .cfi_offset %esi, -12
> +; CHECK-NEXT: .cfi_offset %edi, -8
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %esi
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %edi
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-NEXT: cmpl $18, %edi
> +; CHECK-NEXT: jl .LBB4_2
> +; CHECK-NEXT: # %bb.1: # %if.then
> +; CHECK-NEXT: movl %edi, 8(%ecx)
> +; CHECK-NEXT: sarl $31, %edi
> +; CHECK-NEXT: movl %edi, 12(%ecx)
> +; CHECK-NEXT: .LBB4_2: # %if.end
> +; CHECK-NEXT: movups (%esi), %xmm0
> +; CHECK-NEXT: movups %xmm0, (%edx)
> +; CHECK-NEXT: movl (%ecx), %edx
> +; CHECK-NEXT: movl %edx, (%eax)
> +; CHECK-NEXT: movl 4(%ecx), %edx
> +; CHECK-NEXT: movl %edx, 4(%eax)
> +; CHECK-NEXT: movl 8(%ecx), %edx
> +; CHECK-NEXT: movl %edx, 8(%eax)
> +; CHECK-NEXT: movl 12(%ecx), %ecx
> +; CHECK-NEXT: movl %ecx, 12(%eax)
> +; CHECK-NEXT: popl %esi
> +; CHECK-NEXT: popl %edi
> +; CHECK-NEXT: retl
> +;
> +; DISABLED-LABEL: test_type64:
> +; DISABLED: # %bb.0: # %entry
> +; DISABLED-NEXT: pushl %edi
> +; DISABLED-NEXT: .cfi_def_cfa_offset 8
> +; DISABLED-NEXT: pushl %esi
> +; DISABLED-NEXT: .cfi_def_cfa_offset 12
> +; DISABLED-NEXT: .cfi_offset %esi, -12
> +; DISABLED-NEXT: .cfi_offset %edi, -8
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %edi
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %esi
> +; DISABLED-NEXT: cmpl $18, %edi
> +; DISABLED-NEXT: jl .LBB4_2
> +; DISABLED-NEXT: # %bb.1: # %if.then
> +; DISABLED-NEXT: movl %edi, 8(%esi)
> +; DISABLED-NEXT: sarl $31, %edi
> +; DISABLED-NEXT: movl %edi, 12(%esi)
> +; DISABLED-NEXT: .LBB4_2: # %if.end
> +; DISABLED-NEXT: movups (%edx), %xmm0
> +; DISABLED-NEXT: movups %xmm0, (%ecx)
> +; DISABLED-NEXT: movups (%esi), %xmm0
> +; DISABLED-NEXT: movups %xmm0, (%eax)
> +; DISABLED-NEXT: popl %esi
> +; DISABLED-NEXT: popl %edi
> +; DISABLED-NEXT: retl
> +;
> +; CHECK-AVX2-LABEL: test_type64:
> +; CHECK-AVX2: # %bb.0: # %entry
> +; CHECK-AVX2-NEXT: pushl %edi
> +; CHECK-AVX2-NEXT: .cfi_def_cfa_offset 8
> +; CHECK-AVX2-NEXT: pushl %esi
> +; CHECK-AVX2-NEXT: .cfi_def_cfa_offset 12
> +; CHECK-AVX2-NEXT: .cfi_offset %esi, -12
> +; CHECK-AVX2-NEXT: .cfi_offset %edi, -8
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %esi
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %edi
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-AVX2-NEXT: cmpl $18, %edi
> +; CHECK-AVX2-NEXT: jl .LBB4_2
> +; CHECK-AVX2-NEXT: # %bb.1: # %if.then
> +; CHECK-AVX2-NEXT: movl %edi, 8(%ecx)
> +; CHECK-AVX2-NEXT: sarl $31, %edi
> +; CHECK-AVX2-NEXT: movl %edi, 12(%ecx)
> +; CHECK-AVX2-NEXT: .LBB4_2: # %if.end
> +; CHECK-AVX2-NEXT: movups (%esi), %xmm0
> +; CHECK-AVX2-NEXT: movups %xmm0, (%edx)
> +; CHECK-AVX2-NEXT: movl (%ecx), %edx
> +; CHECK-AVX2-NEXT: movl %edx, (%eax)
> +; CHECK-AVX2-NEXT: movl 4(%ecx), %edx
> +; CHECK-AVX2-NEXT: movl %edx, 4(%eax)
> +; CHECK-AVX2-NEXT: movl 8(%ecx), %edx
> +; CHECK-AVX2-NEXT: movl %edx, 8(%eax)
> +; CHECK-AVX2-NEXT: movl 12(%ecx), %ecx
> +; CHECK-AVX2-NEXT: movl %ecx, 12(%eax)
> +; CHECK-AVX2-NEXT: popl %esi
> +; CHECK-AVX2-NEXT: popl %edi
> +; CHECK-AVX2-NEXT: retl
> +;
> +; CHECK-AVX512-LABEL: test_type64:
> +; CHECK-AVX512: # %bb.0: # %entry
> +; CHECK-AVX512-NEXT: pushl %edi
> +; CHECK-AVX512-NEXT: .cfi_def_cfa_offset 8
> +; CHECK-AVX512-NEXT: pushl %esi
> +; CHECK-AVX512-NEXT: .cfi_def_cfa_offset 12
> +; CHECK-AVX512-NEXT: .cfi_offset %esi, -12
> +; CHECK-AVX512-NEXT: .cfi_offset %edi, -8
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %esi
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %edi
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-AVX512-NEXT: cmpl $18, %edi
> +; CHECK-AVX512-NEXT: jl .LBB4_2
> +; CHECK-AVX512-NEXT: # %bb.1: # %if.then
> +; CHECK-AVX512-NEXT: movl %edi, 8(%ecx)
> +; CHECK-AVX512-NEXT: sarl $31, %edi
> +; CHECK-AVX512-NEXT: movl %edi, 12(%ecx)
> +; CHECK-AVX512-NEXT: .LBB4_2: # %if.end
> +; CHECK-AVX512-NEXT: vmovups (%esi), %xmm0
> +; CHECK-AVX512-NEXT: vmovups %xmm0, (%edx)
> +; CHECK-AVX512-NEXT: movl (%ecx), %edx
> +; CHECK-AVX512-NEXT: movl %edx, (%eax)
> +; CHECK-AVX512-NEXT: movl 4(%ecx), %edx
> +; CHECK-AVX512-NEXT: movl %edx, 4(%eax)
> +; CHECK-AVX512-NEXT: movl 8(%ecx), %edx
> +; CHECK-AVX512-NEXT: movl %edx, 8(%eax)
> +; CHECK-AVX512-NEXT: movl 12(%ecx), %ecx
> +; CHECK-AVX512-NEXT: movl %ecx, 12(%eax)
> +; CHECK-AVX512-NEXT: popl %esi
> +; CHECK-AVX512-NEXT: popl %edi
> +; CHECK-AVX512-NEXT: retl
> +entry:
> + %cmp = icmp sgt i32 %x, 17
> + br i1 %cmp, label %if.then, label %if.end
> +
> +if.then: ; preds = %entry
> + %conv = sext i32 %x to i64
> + %b = getelementptr inbounds %struct.S2, %struct.S2* %s1, i64 0, i32 1
> + store i64 %conv, i64* %b, align 8
> + br label %if.end
> +
> +if.end:                                           ; preds = %if.then, %entry
> + %0 = bitcast %struct.S2* %s3 to i8*
> + %1 = bitcast %struct.S2* %s4 to i8*
> +  tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %0, i8* %1, i64 16, i32 8, i1 false)
> + %2 = bitcast %struct.S2* %s2 to i8*
> + %3 = bitcast %struct.S2* %s1 to i8*
> +  tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %2, i8* %3, i64 16, i32 8, i1 false)
> + ret void
> +}
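
(Side note for readers skimming the test: the IR above plausibly comes from C
along these lines. This is a hedged reconstruction, not part of the patch; the
struct/field names and the exact condition are illustrative.

  struct S2 { long long a, b; };              /* 16 bytes, two i64 fields */

  void test_type64(struct S2 *s1, struct S2 *s2, int x,
                   struct S2 *s3, struct S2 *s4) {
    if (x > 17)
      s1->b = x;   /* narrow store at offset 8; two 4-byte movl's on i386 */
    *s3 = *s4;     /* copy with no preceding store: stays a movups pair */
    *s2 = *s1;     /* copy reloading s1->b: a wide XMM load here would block
                      store forwarding, so the pass splits this copy into
                      four 4-byte load/store pairs, per the CHECK lines */
  }

The DISABLED prefix shows the baseline with the pass turned off: both copies
remain single movups load/store pairs.)
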
> +%struct.S3 = type { i64, i8, i8, i16, i32 }
> +
> +; Function Attrs: noinline nounwind uwtable
> +define void @test_mixed_type(%struct.S3* nocapture %s1, %struct.S3* nocapture %s2, i32 %x, %struct.S3* nocapture readnone %s3, %struct.S3* nocapture readnone %s4) local_unnamed_addr #0 {
> +; CHECK-LABEL: test_mixed_type:
> +; CHECK: # %bb.0: # %entry
> +; CHECK-NEXT: pushl %esi
> +; CHECK-NEXT: .cfi_def_cfa_offset 8
> +; CHECK-NEXT: .cfi_offset %esi, -8
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-NEXT: cmpl $18, %edx
> +; CHECK-NEXT: jl .LBB5_2
> +; CHECK-NEXT: # %bb.1: # %if.then
> +; CHECK-NEXT: movl %edx, %esi
> +; CHECK-NEXT: sarl $31, %esi
> +; CHECK-NEXT: movl %edx, (%ecx)
> +; CHECK-NEXT: movl %esi, 4(%ecx)
> +; CHECK-NEXT: movb %dl, 8(%ecx)
> +; CHECK-NEXT: .LBB5_2: # %if.end
> +; CHECK-NEXT: movl (%ecx), %edx
> +; CHECK-NEXT: movl %edx, (%eax)
> +; CHECK-NEXT: movl 4(%ecx), %edx
> +; CHECK-NEXT: movl %edx, 4(%eax)
> +; CHECK-NEXT: movb 8(%ecx), %dl
> +; CHECK-NEXT: movb %dl, 8(%eax)
> +; CHECK-NEXT: movl 9(%ecx), %edx
> +; CHECK-NEXT: movl %edx, 9(%eax)
> +; CHECK-NEXT: movzwl 13(%ecx), %edx
> +; CHECK-NEXT: movw %dx, 13(%eax)
> +; CHECK-NEXT: movb 15(%ecx), %cl
> +; CHECK-NEXT: movb %cl, 15(%eax)
> +; CHECK-NEXT: popl %esi
> +; CHECK-NEXT: retl
> +;
> +; DISABLED-LABEL: test_mixed_type:
> +; DISABLED: # %bb.0: # %entry
> +; DISABLED-NEXT: pushl %esi
> +; DISABLED-NEXT: .cfi_def_cfa_offset 8
> +; DISABLED-NEXT: .cfi_offset %esi, -8
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; DISABLED-NEXT: cmpl $18, %edx
> +; DISABLED-NEXT: jl .LBB5_2
> +; DISABLED-NEXT: # %bb.1: # %if.then
> +; DISABLED-NEXT: movl %edx, %esi
> +; DISABLED-NEXT: sarl $31, %esi
> +; DISABLED-NEXT: movl %edx, (%ecx)
> +; DISABLED-NEXT: movl %esi, 4(%ecx)
> +; DISABLED-NEXT: movb %dl, 8(%ecx)
> +; DISABLED-NEXT: .LBB5_2: # %if.end
> +; DISABLED-NEXT: movups (%ecx), %xmm0
> +; DISABLED-NEXT: movups %xmm0, (%eax)
> +; DISABLED-NEXT: popl %esi
> +; DISABLED-NEXT: retl
> +;
> +; CHECK-AVX2-LABEL: test_mixed_type:
> +; CHECK-AVX2: # %bb.0: # %entry
> +; CHECK-AVX2-NEXT: pushl %esi
> +; CHECK-AVX2-NEXT: .cfi_def_cfa_offset 8
> +; CHECK-AVX2-NEXT: .cfi_offset %esi, -8
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-AVX2-NEXT: cmpl $18, %edx
> +; CHECK-AVX2-NEXT: jl .LBB5_2
> +; CHECK-AVX2-NEXT: # %bb.1: # %if.then
> +; CHECK-AVX2-NEXT: movl %edx, %esi
> +; CHECK-AVX2-NEXT: sarl $31, %esi
> +; CHECK-AVX2-NEXT: movl %edx, (%ecx)
> +; CHECK-AVX2-NEXT: movl %esi, 4(%ecx)
> +; CHECK-AVX2-NEXT: movb %dl, 8(%ecx)
> +; CHECK-AVX2-NEXT: .LBB5_2: # %if.end
> +; CHECK-AVX2-NEXT: movl (%ecx), %edx
> +; CHECK-AVX2-NEXT: movl %edx, (%eax)
> +; CHECK-AVX2-NEXT: movl 4(%ecx), %edx
> +; CHECK-AVX2-NEXT: movl %edx, 4(%eax)
> +; CHECK-AVX2-NEXT: movb 8(%ecx), %dl
> +; CHECK-AVX2-NEXT: movb %dl, 8(%eax)
> +; CHECK-AVX2-NEXT: movl 9(%ecx), %edx
> +; CHECK-AVX2-NEXT: movl %edx, 9(%eax)
> +; CHECK-AVX2-NEXT: movzwl 13(%ecx), %edx
> +; CHECK-AVX2-NEXT: movw %dx, 13(%eax)
> +; CHECK-AVX2-NEXT: movb 15(%ecx), %cl
> +; CHECK-AVX2-NEXT: movb %cl, 15(%eax)
> +; CHECK-AVX2-NEXT: popl %esi
> +; CHECK-AVX2-NEXT: retl
> +;
> +; CHECK-AVX512-LABEL: test_mixed_type:
> +; CHECK-AVX512: # %bb.0: # %entry
> +; CHECK-AVX512-NEXT: pushl %esi
> +; CHECK-AVX512-NEXT: .cfi_def_cfa_offset 8
> +; CHECK-AVX512-NEXT: .cfi_offset %esi, -8
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-AVX512-NEXT: cmpl $18, %edx
> +; CHECK-AVX512-NEXT: jl .LBB5_2
> +; CHECK-AVX512-NEXT: # %bb.1: # %if.then
> +; CHECK-AVX512-NEXT: movl %edx, %esi
> +; CHECK-AVX512-NEXT: sarl $31, %esi
> +; CHECK-AVX512-NEXT: movl %edx, (%ecx)
> +; CHECK-AVX512-NEXT: movl %esi, 4(%ecx)
> +; CHECK-AVX512-NEXT: movb %dl, 8(%ecx)
> +; CHECK-AVX512-NEXT: .LBB5_2: # %if.end
> +; CHECK-AVX512-NEXT: movl (%ecx), %edx
> +; CHECK-AVX512-NEXT: movl %edx, (%eax)
> +; CHECK-AVX512-NEXT: movl 4(%ecx), %edx
> +; CHECK-AVX512-NEXT: movl %edx, 4(%eax)
> +; CHECK-AVX512-NEXT: movb 8(%ecx), %dl
> +; CHECK-AVX512-NEXT: movb %dl, 8(%eax)
> +; CHECK-AVX512-NEXT: movl 9(%ecx), %edx
> +; CHECK-AVX512-NEXT: movl %edx, 9(%eax)
> +; CHECK-AVX512-NEXT: movzwl 13(%ecx), %edx
> +; CHECK-AVX512-NEXT: movw %dx, 13(%eax)
> +; CHECK-AVX512-NEXT: movb 15(%ecx), %cl
> +; CHECK-AVX512-NEXT: movb %cl, 15(%eax)
> +; CHECK-AVX512-NEXT: popl %esi
> +; CHECK-AVX512-NEXT: retl
> +entry:
> + %cmp = icmp sgt i32 %x, 17
> + br i1 %cmp, label %if.then, label %if.end
> +
> +if.then: ; preds = %entry
> + %conv = sext i32 %x to i64
> + %a = getelementptr inbounds %struct.S3, %struct.S3* %s1, i64 0, i32 0
> + store i64 %conv, i64* %a, align 8
> + %conv1 = trunc i32 %x to i8
> + %b = getelementptr inbounds %struct.S3, %struct.S3* %s1, i64 0, i32 1
> + store i8 %conv1, i8* %b, align 8
> + br label %if.end
> +
> +if.end:                                           ; preds = %if.then, %entry
> + %0 = bitcast %struct.S3* %s2 to i8*
> + %1 = bitcast %struct.S3* %s1 to i8*
> +  tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %0, i8* %1, i64 16, i32 8, i1 false)
> + ret void
> +}
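
(Again a hedged C-level reconstruction, purely illustrative:

  struct S3 { long long a; char b, c; short d; int e; };   /* 16 bytes */

  void test_mixed_type(struct S3 *s1, struct S3 *s2, int x,
                       struct S3 *s3, struct S3 *s4) {
    if (x > 17) {
      s1->a = x;   /* 8-byte store, split into two movl's on i386 */
      s1->b = x;   /* 1-byte store at offset 8 */
    }
    *s2 = *s1;     /* 16-byte copy reloading both stores */
  }

The pass breaks the 16-byte copy along the stored elements' boundaries:
4+4 bytes matching the i64 store, 1 byte matching the i8 store, then 4, 2,
and 1 bytes for the untouched tail at offsets 9, 13 and 15.)
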
> +%struct.S4 = type { i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32 }
> +
> +; Function Attrs: nounwind uwtable
> +define void @test_multiple_blocks(%struct.S4* nocapture %s1, %struct.S4* nocapture %s2) local_unnamed_addr #0 {
> +; CHECK-LABEL: test_multiple_blocks:
> +; CHECK: # %bb.0: # %entry
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-NEXT: movl $0, 4(%ecx)
> +; CHECK-NEXT: movl $0, 36(%ecx)
> +; CHECK-NEXT: movups 16(%ecx), %xmm0
> +; CHECK-NEXT: movups %xmm0, 16(%eax)
> +; CHECK-NEXT: movl 32(%ecx), %edx
> +; CHECK-NEXT: movl %edx, 32(%eax)
> +; CHECK-NEXT: movl 36(%ecx), %edx
> +; CHECK-NEXT: movl %edx, 36(%eax)
> +; CHECK-NEXT: movl 40(%ecx), %edx
> +; CHECK-NEXT: movl %edx, 40(%eax)
> +; CHECK-NEXT: movl 44(%ecx), %edx
> +; CHECK-NEXT: movl %edx, 44(%eax)
> +; CHECK-NEXT: movl (%ecx), %edx
> +; CHECK-NEXT: movl %edx, (%eax)
> +; CHECK-NEXT: movl 4(%ecx), %edx
> +; CHECK-NEXT: movl %edx, 4(%eax)
> +; CHECK-NEXT: movl 8(%ecx), %edx
> +; CHECK-NEXT: movl %edx, 8(%eax)
> +; CHECK-NEXT: movl 12(%ecx), %ecx
> +; CHECK-NEXT: movl %ecx, 12(%eax)
> +; CHECK-NEXT: retl
> +;
> +; DISABLED-LABEL: test_multiple_blocks:
> +; DISABLED: # %bb.0: # %entry
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; DISABLED-NEXT: movl $0, 4(%ecx)
> +; DISABLED-NEXT: movl $0, 36(%ecx)
> +; DISABLED-NEXT: movups 16(%ecx), %xmm0
> +; DISABLED-NEXT: movups %xmm0, 16(%eax)
> +; DISABLED-NEXT: movups 32(%ecx), %xmm0
> +; DISABLED-NEXT: movups %xmm0, 32(%eax)
> +; DISABLED-NEXT: movups (%ecx), %xmm0
> +; DISABLED-NEXT: movups %xmm0, (%eax)
> +; DISABLED-NEXT: retl
> +;
> +; CHECK-AVX2-LABEL: test_multiple_blocks:
> +; CHECK-AVX2: # %bb.0: # %entry
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-AVX2-NEXT: movl $0, 4(%ecx)
> +; CHECK-AVX2-NEXT: movl $0, 36(%ecx)
> +; CHECK-AVX2-NEXT: movups 16(%ecx), %xmm0
> +; CHECK-AVX2-NEXT: movups %xmm0, 16(%eax)
> +; CHECK-AVX2-NEXT: movl 32(%ecx), %edx
> +; CHECK-AVX2-NEXT: movl %edx, 32(%eax)
> +; CHECK-AVX2-NEXT: movl 36(%ecx), %edx
> +; CHECK-AVX2-NEXT: movl %edx, 36(%eax)
> +; CHECK-AVX2-NEXT: movl 40(%ecx), %edx
> +; CHECK-AVX2-NEXT: movl %edx, 40(%eax)
> +; CHECK-AVX2-NEXT: movl 44(%ecx), %edx
> +; CHECK-AVX2-NEXT: movl %edx, 44(%eax)
> +; CHECK-AVX2-NEXT: movl (%ecx), %edx
> +; CHECK-AVX2-NEXT: movl %edx, (%eax)
> +; CHECK-AVX2-NEXT: movl 4(%ecx), %edx
> +; CHECK-AVX2-NEXT: movl %edx, 4(%eax)
> +; CHECK-AVX2-NEXT: movl 8(%ecx), %edx
> +; CHECK-AVX2-NEXT: movl %edx, 8(%eax)
> +; CHECK-AVX2-NEXT: movl 12(%ecx), %ecx
> +; CHECK-AVX2-NEXT: movl %ecx, 12(%eax)
> +; CHECK-AVX2-NEXT: retl
> +;
> +; CHECK-AVX512-LABEL: test_multiple_blocks:
> +; CHECK-AVX512: # %bb.0: # %entry
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; CHECK-AVX512-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-AVX512-NEXT: movl $0, 4(%ecx)
> +; CHECK-AVX512-NEXT: movl $0, 36(%ecx)
> +; CHECK-AVX512-NEXT: vmovups 16(%ecx), %xmm0
> +; CHECK-AVX512-NEXT: vmovups %xmm0, 16(%eax)
> +; CHECK-AVX512-NEXT: movl 32(%ecx), %edx
> +; CHECK-AVX512-NEXT: movl %edx, 32(%eax)
> +; CHECK-AVX512-NEXT: movl 36(%ecx), %edx
> +; CHECK-AVX512-NEXT: movl %edx, 36(%eax)
> +; CHECK-AVX512-NEXT: movl 40(%ecx), %edx
> +; CHECK-AVX512-NEXT: movl %edx, 40(%eax)
> +; CHECK-AVX512-NEXT: movl 44(%ecx), %edx
> +; CHECK-AVX512-NEXT: movl %edx, 44(%eax)
> +; CHECK-AVX512-NEXT: movl (%ecx), %edx
> +; CHECK-AVX512-NEXT: movl %edx, (%eax)
> +; CHECK-AVX512-NEXT: movl 4(%ecx), %edx
> +; CHECK-AVX512-NEXT: movl %edx, 4(%eax)
> +; CHECK-AVX512-NEXT: vmovups 8(%ecx), %xmm0
> +; CHECK-AVX512-NEXT: vmovups %xmm0, 8(%eax)
> +; CHECK-AVX512-NEXT: movl 24(%ecx), %edx
> +; CHECK-AVX512-NEXT: movl %edx, 24(%eax)
> +; CHECK-AVX512-NEXT: movl 28(%ecx), %ecx
> +; CHECK-AVX512-NEXT: movl %ecx, 28(%eax)
> +; CHECK-AVX512-NEXT: retl
> +entry:
> + %b = getelementptr inbounds %struct.S4, %struct.S4* %s1, i64 0, i32 1
> + store i32 0, i32* %b, align 4
> + %b3 = getelementptr inbounds %struct.S4, %struct.S4* %s1, i64 0, i32 9
> + store i32 0, i32* %b3, align 4
> + %0 = bitcast %struct.S4* %s2 to i8*
> + %1 = bitcast %struct.S4* %s1 to i8*
> +  tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %0, i8* %1, i64 48, i32 4, i1 false)
> + ret void
> +}
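
(This one exercises several blocking stores against a single 48-byte memcpy.
Roughly, assuming a C origin like the other tests; the IR actually uses twelve
separate i32 fields rather than an array:

  struct S4 { int f[12]; };   /* 48 bytes */

  void test_multiple_blocks(struct S4 *s1, struct S4 *s2) {
    s1->f[1] = 0;   /* 4-byte store at offset 4 */
    s1->f[9] = 0;   /* 4-byte store at offset 36 */
    *s2 = *s1;      /* 48-byte copy, lowered to three 16-byte XMM copies */
  }

In the SSE CHECK lines, only the two XMM copies that reload a blocking store
(bytes 0-15 and 32-47) are split into movl pairs; the middle copy at offset 16
keeps its single movups.)
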
> +%struct.S5 = type { i16, i16, i16, i16, i16, i16, i16, i16 }
> +
> +; Function Attrs: nounwind uwtable
> +define void @test_type16(%struct.S5* nocapture %s1, %struct.S5* nocapture %s2, i32 %x, %struct.S5* nocapture %s3, %struct.S5* nocapture readonly %s4) local_unnamed_addr #0 {
> +; CHECK-LABEL: test_type16:
> +; CHECK: # %bb.0: # %entry
> +; CHECK-NEXT: pushl %edi
> +; CHECK-NEXT: .cfi_def_cfa_offset 8
> +; CHECK-NEXT: pushl %esi
> +; CHECK-NEXT: .cfi_def_cfa_offset 12
> +; CHECK-NEXT: .cfi_offset %esi, -12
> +; CHECK-NEXT: .cfi_offset %edi, -8
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %esi
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %edi
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; CHECK-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-NEXT: cmpl $18, %edi
> +; CHECK-NEXT: jl .LBB7_2
> +; CHECK-NEXT: # %bb.1: # %if.then
> +; CHECK-NEXT: movw %di, 2(%ecx)
> +; CHECK-NEXT: .LBB7_2: # %if.end
> +; CHECK-NEXT: movups (%esi), %xmm0
> +; CHECK-NEXT: movups %xmm0, (%edx)
> +; CHECK-NEXT: movzwl (%ecx), %edx
> +; CHECK-NEXT: movw %dx, (%eax)
> +; CHECK-NEXT: movzwl 2(%ecx), %edx
> +; CHECK-NEXT: movw %dx, 2(%eax)
> +; CHECK-NEXT: movl 4(%ecx), %edx
> +; CHECK-NEXT: movl %edx, 4(%eax)
> +; CHECK-NEXT: movl 8(%ecx), %edx
> +; CHECK-NEXT: movl %edx, 8(%eax)
> +; CHECK-NEXT: movl 12(%ecx), %ecx
> +; CHECK-NEXT: movl %ecx, 12(%eax)
> +; CHECK-NEXT: popl %esi
> +; CHECK-NEXT: popl %edi
> +; CHECK-NEXT: retl
> +;
> +; DISABLED-LABEL: test_type16:
> +; DISABLED: # %bb.0: # %entry
> +; DISABLED-NEXT: pushl %edi
> +; DISABLED-NEXT: .cfi_def_cfa_offset 8
> +; DISABLED-NEXT: pushl %esi
> +; DISABLED-NEXT: .cfi_def_cfa_offset 12
> +; DISABLED-NEXT: .cfi_offset %esi, -12
> +; DISABLED-NEXT: .cfi_offset %edi, -8
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %edi
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; DISABLED-NEXT: movl {{[0-9]+}}(%esp), %esi
> +; DISABLED-NEXT: cmpl $18, %edi
> +; DISABLED-NEXT: jl .LBB7_2
> +; DISABLED-NEXT: # %bb.1: # %if.then
> +; DISABLED-NEXT: movw %di, 2(%esi)
> +; DISABLED-NEXT: .LBB7_2: # %if.end
> +; DISABLED-NEXT: movups (%edx), %xmm0
> +; DISABLED-NEXT: movups %xmm0, (%ecx)
> +; DISABLED-NEXT: movups (%esi), %xmm0
> +; DISABLED-NEXT: movups %xmm0, (%eax)
> +; DISABLED-NEXT: popl %esi
> +; DISABLED-NEXT: popl %edi
> +; DISABLED-NEXT: retl
> +;
> +; CHECK-AVX2-LABEL: test_type16:
> +; CHECK-AVX2: # %bb.0: # %entry
> +; CHECK-AVX2-NEXT: pushl %edi
> +; CHECK-AVX2-NEXT: .cfi_def_cfa_offset 8
> +; CHECK-AVX2-NEXT: pushl %esi
> +; CHECK-AVX2-NEXT: .cfi_def_cfa_offset 12
> +; CHECK-AVX2-NEXT: .cfi_offset %esi, -12
> +; CHECK-AVX2-NEXT: .cfi_offset %edi, -8
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %esi
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %edx
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %edi
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %eax
> +; CHECK-AVX2-NEXT: movl {{[0-9]+}}(%esp), %ecx
> +; CHECK-AVX2-NEXT: cmpl $18, %edi
> +; CHECK-AVX2-NEXT: jl .LBB7_2
> +; CHECK-AVX2-NEXT: # %bb.1: # %if.then
> +; CHECK-AVX2-NEXT: movw %di, 2(%ecx)
> +; CHECK-AVX2-NEXT: .LBB7_2: # %if.end
> +; CHECK-AVX2-NEXT: movups (%esi), %xmm0
> +; CHECK-AVX2-NEXT: movups %xmm0, (%edx)
> +; CHECK-AVX2-NEXT: movzwl (%ecx), %edx
> +; CHECK-AVX2-NEXT: movw %dx, (%eax)
> +; CHECK-AVX2-NEXT: movzwl 2(%ecx), %edx
> +; CHECK-AVX2-NEXT: movw %dx, 2(%eax)
> +; CHECK-AVX2-NEXT: movl 4(%ecx), %edx
> +; CHECK-AVX2-NEXT: movl %edx, 4(%eax)
> +; CHECK-AVX2-NEXT: movl 8(%ecx), %edx
> +; CHECK-AVX2-NEXT: movl %edx, 8(%eax)
> +; CHECK-AVX2-NEXT: movl 12(%ecx), %ecx
> +; CHECK-AVX2-NEXT: