[llvm] 56a4315 - [SystemZ] Add a SystemZ specific pre-RA scheduling strategy. (#135076)
via llvm-commits
llvm-commits@lists.llvm.org
Tue Mar 10 07:38:11 PDT 2026
Author: Jonas Paulsson
Date: 2026-03-10T15:38:05+01:00
New Revision: 56a4315ee05a0cd3e803b5daf7332be4d4afac3c
URL: https://github.com/llvm/llvm-project/commit/56a4315ee05a0cd3e803b5daf7332be4d4afac3c
DIFF: https://github.com/llvm/llvm-project/commit/56a4315ee05a0cd3e803b5daf7332be4d4afac3c.diff
LOG: [SystemZ] Add a SystemZ specific pre-RA scheduling strategy. (#135076)
This is a relatively simple strategy, as it omits any heuristics for
liveness and register pressure reduction. This works well because the SystemZ
ISel scheduler uses Sched::RegPressure, which gives a good input order to begin
with.
It tries harder than GenericScheduler to bias phys regs, as it also
considers other instructions, such as immediate loads directly into phys-regs
produced by the register coalescer. This can hopefully be refactored into
MachineScheduler.cpp.
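The extra phys-reg bias can be sketched roughly as follows. This is a minimal, hypothetical model (SimpleOperand/SimpleInstr and the function name are stand-ins, not the actual LLVM MachineOperand/MachineInstr API): GenericScheduler's biasPhysReg() mostly handles COPYs to and from physical registers, while this additional check also flags non-copy instructions whose first operand defines a physreg, e.g. immediate loads directly into phys-regs left behind by the coalescer.

```cpp
#include <vector>

// Hypothetical stand-ins for MachineOperand / MachineInstr.
struct SimpleOperand {
  bool IsReg = false;
  bool IsDef = false;
  bool IsPhysical = false;
};

struct SimpleInstr {
  bool IsCopy = false;
  std::vector<SimpleOperand> Operands;
};

// Flag a non-copy instruction whose first operand defines a physical
// register, so the scheduler can keep it close to its phys-reg use.
bool definesPhysRegDirectly(const SimpleInstr &MI) {
  if (MI.IsCopy || MI.Operands.empty())
    return false;
  const SimpleOperand &MO = MI.Operands.front();
  return MO.IsReg && MO.IsDef && MO.IsPhysical;
}
```

In the actual patch this is layered on top of biasPhysReg(): copies keep their existing treatment, and the extra check only breaks ties among the remaining candidates.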
It has a latency heuristic that is slightly different from the one in
GenericScheduler: it is activated for a specific type of region that has
many "data sequences", i.e. SUs that are adjacent in the input order and
connected only by a single data edge. This is only 3% of all the scheduling
regions, but when activated the heuristic is applied to all the candidates
(not just once per cycle). At the same time it is a bit more careful,
checking not only the SU Height against the scheduled latency but also its
Depth against the remaining latency.
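The Height/Depth check can be illustrated with a simplified model (SimpleSU is a stand-in, not the real llvm::SUnit API). In bottom-up scheduling, a node's Height is the longest latency path below it and its Depth the longest path above it; the heuristic only kicks in when the taller of the two candidates would actually extend the latency scheduled so far (the Height check) while the work remaining above it still dominates (the Depth check), and in that case the lower node is preferred.

```cpp
// Hypothetical stand-in for llvm::SUnit.
struct SimpleSU {
  unsigned Height; // Max latency path from this node to the region exit.
  unsigned Depth;  // Max latency path from the region entry to this node.
};

// Return true if the latency tie-break applies, i.e. the lower-height
// candidate should be preferred.
bool preferLowerHeight(const SimpleSU &TryCand, const SimpleSU &Cand,
                       unsigned ScheduledLatency, unsigned RemLatency) {
  const SimpleSU *Higher = TryCand.Height > Cand.Height   ? &TryCand
                           : TryCand.Height < Cand.Height ? &Cand
                                                          : nullptr;
  if (!Higher)
    return false; // Equal heights: no latency-based preference.
  // Only act when the taller node would extend the scheduled latency
  // and the remaining latency above it is still the larger concern.
  return Higher->Height > ScheduledLatency && Higher->Depth < RemLatency;
}
```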
It reuses the GenericScheduler handling of weak edges to help copy
coalescing.
It also helps with compare-zero elimination, as it tries to place a
CC-defining instruction that produces the compare source value directly above
the compare, ahead of any other instruction clobbering CC or the value.
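The bookkeeping behind this can be sketched as below. This is a hedged simplification (the real code works on MachineInstr via SystemZInstrInfo; the names and flag parameters here are hypothetical): scheduling bottom-up, once a compare-with-zero has been placed its source register is remembered, so the instruction producing that value can be rewarded as the next pick; tracking stops as soon as some other instruction clobbers CC or redefines the value, since anything placed in between would block SystemZElimCompare.

```cpp
// Hypothetical bottom-up tracker for the compare-with-zero source value.
struct Cmp0Tracker {
  unsigned Cmp0SrcReg = 0; // 0 means nothing is currently tracked.

  // Called after each node is scheduled (bottom-up order).
  void schedNode(bool IsCompareZero, unsigned CmpSrcReg, bool ClobbersCC,
                 unsigned DefReg) {
    if (IsCompareZero)
      Cmp0SrcReg = CmpSrcReg; // Start tracking the compared value.
    else if (ClobbersCC || (DefReg != 0 && DefReg == Cmp0SrcReg))
      Cmp0SrcReg = 0; // CC or the value got redefined: stop tracking.
  }

  // Candidate bonus: true if an instruction defining DefReg should be
  // preferred because it feeds the tracked compare.
  bool definesCmp0Src(unsigned DefReg) const {
    return Cmp0SrcReg != 0 && DefReg == Cmp0SrcReg;
  }
};
```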
This work was started after observing heavy spilling in Cactus, which was
actually *caused* by GenericScheduler - disabling it (no pre-RA scheduling)
remedied it and gave a 7% improvement in performance on that benchmark. Many
different versions have been tried, which have evolved into this initial
simplistic MachineSchedStrategy that does relatively little and yet achieves
double-digit improvements on Cactus and Imagick compared to GenericSched
(which is OTOH 3% better on Blender). There will hopefully be more
improvements added later on, as there seems to be potential for it.
It would be very interesting to have other OOO targets try this as well, and
perhaps make this available in MachineScheduler.cpp.
(A first attempt at improving the pre-RA scheduling was made with #90181,
which however did not materialize into anything actually useful.)
Added:
llvm/test/CodeGen/SystemZ/misched-prera-biaspregs.mir
llvm/test/CodeGen/SystemZ/misched-prera-cmp-elim.mir
llvm/test/CodeGen/SystemZ/misched-prera-copy-coal.mir
llvm/test/CodeGen/SystemZ/misched-prera-latencies.mir
llvm/test/CodeGen/SystemZ/vec-cmpsel-01.ll
llvm/test/CodeGen/SystemZ/vec-cmpsel-02.ll
Modified:
llvm/include/llvm/CodeGen/MachineScheduler.h
llvm/lib/CodeGen/MachineScheduler.cpp
llvm/lib/Target/SystemZ/SystemZElimCompare.cpp
llvm/lib/Target/SystemZ/SystemZInstrInfo.cpp
llvm/lib/Target/SystemZ/SystemZInstrInfo.h
llvm/lib/Target/SystemZ/SystemZMachineScheduler.cpp
llvm/lib/Target/SystemZ/SystemZMachineScheduler.h
llvm/lib/Target/SystemZ/SystemZTargetMachine.cpp
llvm/lib/Target/SystemZ/SystemZTargetMachine.h
llvm/test/CodeGen/SystemZ/DAGCombiner_isAlias.ll
llvm/test/CodeGen/SystemZ/alias-01.ll
llvm/test/CodeGen/SystemZ/args-06.ll
llvm/test/CodeGen/SystemZ/args-12.ll
llvm/test/CodeGen/SystemZ/atomic-load-09.ll
llvm/test/CodeGen/SystemZ/atomic-store-08.ll
llvm/test/CodeGen/SystemZ/atomic-store-09.ll
llvm/test/CodeGen/SystemZ/atomicrmw-fmax-01.ll
llvm/test/CodeGen/SystemZ/atomicrmw-fmin-01.ll
llvm/test/CodeGen/SystemZ/fp-half-vector-fcmp-select.ll
llvm/test/CodeGen/SystemZ/fp-half.ll
llvm/test/CodeGen/SystemZ/int-conv-11.ll
llvm/test/CodeGen/SystemZ/knownbits-intrinsics-binop.ll
llvm/test/CodeGen/SystemZ/memcpy-01.ll
llvm/test/CodeGen/SystemZ/memset-01.ll
llvm/test/CodeGen/SystemZ/shift-13.ll
llvm/test/CodeGen/SystemZ/shift-14.ll
llvm/test/CodeGen/SystemZ/shift-15.ll
llvm/test/CodeGen/SystemZ/signbits-intrinsics-binop.ll
llvm/test/CodeGen/SystemZ/vec-args-04.ll
llvm/test/CodeGen/SystemZ/vec-sub-01.ll
llvm/test/CodeGen/SystemZ/vector-constrained-fp-intrinsics.ll
Removed:
llvm/test/CodeGen/SystemZ/vec-cmpsel.ll
################################################################################
diff --git a/llvm/include/llvm/CodeGen/MachineScheduler.h b/llvm/include/llvm/CodeGen/MachineScheduler.h
index 33036030679e5..0c7efd8ab1d8f 100644
--- a/llvm/include/llvm/CodeGen/MachineScheduler.h
+++ b/llvm/include/llvm/CodeGen/MachineScheduler.h
@@ -1233,6 +1233,7 @@ class GenericSchedulerBase : public MachineSchedStrategy {
};
// Utility functions used by heuristics in tryCandidate().
+LLVM_ABI unsigned computeRemLatency(SchedBoundary &CurrZone);
LLVM_ABI bool tryLess(int TryVal, int CandVal,
GenericSchedulerBase::SchedCandidate &TryCand,
GenericSchedulerBase::SchedCandidate &Cand,
diff --git a/llvm/lib/CodeGen/MachineScheduler.cpp b/llvm/lib/CodeGen/MachineScheduler.cpp
index 6697c0a110dc3..482dea0ca0626 100644
--- a/llvm/lib/CodeGen/MachineScheduler.cpp
+++ b/llvm/lib/CodeGen/MachineScheduler.cpp
@@ -3255,31 +3255,6 @@ initResourceDelta(const ScheduleDAGMI *DAG,
}
}
-/// Compute remaining latency. We need this both to determine whether the
-/// overall schedule has become latency-limited and whether the instructions
-/// outside this zone are resource or latency limited.
-///
-/// The "dependent" latency is updated incrementally during scheduling as the
-/// max height/depth of scheduled nodes minus the cycles since it was
-/// scheduled:
-/// DLat = max (N.depth - (CurrCycle - N.ReadyCycle) for N in Zone
-///
-/// The "independent" latency is the max ready queue depth:
-/// ILat = max N.depth for N in Available|Pending
-///
-/// RemainingLatency is the greater of independent and dependent latency.
-///
-/// These computations are expensive, especially in DAGs with many edges, so
-/// only do them if necessary.
-static unsigned computeRemLatency(SchedBoundary &CurrZone) {
- unsigned RemLatency = CurrZone.getDependentLatency();
- RemLatency = std::max(RemLatency,
- CurrZone.findMaxLatency(CurrZone.Available.elements()));
- RemLatency = std::max(RemLatency,
- CurrZone.findMaxLatency(CurrZone.Pending.elements()));
- return RemLatency;
-}
-
/// Returns true if the current cycle plus remaning latency is greater than
/// the critical path in the scheduling region.
bool GenericSchedulerBase::shouldReduceLatency(const CandPolicy &Policy,
@@ -3437,6 +3412,31 @@ void GenericSchedulerBase::traceCandidate(const SchedCandidate &Cand) {
}
#endif
+/// Compute remaining latency. We need this both to determine whether the
+/// overall schedule has become latency-limited and whether the instructions
+/// outside this zone are resource or latency limited.
+///
+/// The "dependent" latency is updated incrementally during scheduling as the
+/// max height/depth of scheduled nodes minus the cycles since it was
+/// scheduled:
+/// DLat = max (N.depth - (CurrCycle - N.ReadyCycle) for N in Zone
+///
+/// The "independent" latency is the max ready queue depth:
+/// ILat = max N.depth for N in Available|Pending
+///
+/// RemainingLatency is the greater of independent and dependent latency.
+///
+/// These computations are expensive, especially in DAGs with many edges, so
+/// only do them if necessary.
+unsigned llvm::computeRemLatency(SchedBoundary &CurrZone) {
+ unsigned RemLatency = CurrZone.getDependentLatency();
+ RemLatency = std::max(RemLatency,
+ CurrZone.findMaxLatency(CurrZone.Available.elements()));
+ RemLatency = std::max(RemLatency,
+ CurrZone.findMaxLatency(CurrZone.Pending.elements()));
+ return RemLatency;
+}
+
/// Return true if this heuristic determines order.
/// TODO: Consider refactor return type of these functions as integer or enum,
as we may need to differentiate whether TryCand is better than Cand.
diff --git a/llvm/lib/Target/SystemZ/SystemZElimCompare.cpp b/llvm/lib/Target/SystemZ/SystemZElimCompare.cpp
index bbe1821e7b8f7..05599844c1747 100644
--- a/llvm/lib/Target/SystemZ/SystemZElimCompare.cpp
+++ b/llvm/lib/Target/SystemZ/SystemZElimCompare.cpp
@@ -150,30 +150,6 @@ Reference SystemZElimCompare::getRegReferences(MachineInstr &MI, unsigned Reg) {
return Ref;
}
-// Return true if this is a load and test which can be optimized the
-// same way as compare instruction.
-static bool isLoadAndTestAsCmp(MachineInstr &MI) {
- // If we during isel used a load-and-test as a compare with 0, the
- // def operand is dead.
- return (MI.getOpcode() == SystemZ::LTEBR ||
- MI.getOpcode() == SystemZ::LTDBR ||
- MI.getOpcode() == SystemZ::LTXBR) &&
- MI.getOperand(0).isDead();
-}
-
-// Return the source register of Compare, which is the unknown value
-// being tested.
-static unsigned getCompareSourceReg(MachineInstr &Compare) {
- unsigned reg = 0;
- if (Compare.isCompare())
- reg = Compare.getOperand(0).getReg();
- else if (isLoadAndTestAsCmp(Compare))
- reg = Compare.getOperand(1).getReg();
- assert(reg);
-
- return reg;
-}
-
// Compare compares the result of MI against zero. If MI is an addition
// of -1 and if CCUsers is a single branch on nonzero, eliminate the addition
// and convert the branch to a BRCT(G) or BRCTH. Return true on success.
@@ -206,7 +182,7 @@ bool SystemZElimCompare::convertToBRCT(
// We already know that there are no references to the register between
// MI and Compare. Make sure that there are also no references between
// Compare and Branch.
- unsigned SrcReg = getCompareSourceReg(Compare);
+ unsigned SrcReg = TII->getCompareSourceReg(Compare);
MachineBasicBlock::iterator MBBI = Compare, MBBE = Branch;
for (++MBBI; MBBI != MBBE; ++MBBI)
if (getRegReferences(*MBBI, SrcReg))
@@ -253,7 +229,7 @@ bool SystemZElimCompare::convertToLoadAndTrap(
// We already know that there are no references to the register between
// MI and Compare. Make sure that there are also no references between
// Compare and Branch.
- unsigned SrcReg = getCompareSourceReg(Compare);
+ unsigned SrcReg = TII->getCompareSourceReg(Compare);
MachineBasicBlock::iterator MBBI = Compare, MBBE = Branch;
for (++MBBI; MBBI != MBBE; ++MBBI)
if (getRegReferences(*MBBI, SrcReg))
@@ -494,25 +470,17 @@ bool SystemZElimCompare::adjustCCMasksForInstr(
return true;
}
-// Return true if Compare is a comparison against zero.
-static bool isCompareZero(MachineInstr &Compare) {
- if (isLoadAndTestAsCmp(Compare))
- return true;
- return Compare.getNumExplicitOperands() == 2 &&
- Compare.getOperand(1).isImm() && Compare.getOperand(1).getImm() == 0;
-}
-
// Try to optimize cases where comparison instruction Compare is testing
// a value against zero. Return true on success and if Compare should be
// deleted as dead. CCUsers is the list of instructions that use the CC
// value produced by Compare.
bool SystemZElimCompare::optimizeCompareZero(
MachineInstr &Compare, SmallVectorImpl<MachineInstr *> &CCUsers) {
- if (!isCompareZero(Compare))
+ if (!TII->isCompareZero(Compare))
return false;
// Search back for CC results that are based on the first operand.
- unsigned SrcReg = getCompareSourceReg(Compare);
+ unsigned SrcReg = TII->getCompareSourceReg(Compare);
MachineBasicBlock &MBB = *Compare.getParent();
Reference CCRefs;
Reference SrcRefs;
@@ -701,7 +669,7 @@ bool SystemZElimCompare::processBlock(MachineBasicBlock &MBB) {
MachineBasicBlock::iterator MBBI = MBB.end();
while (MBBI != MBB.begin()) {
MachineInstr &MI = *--MBBI;
- if (CompleteCCUsers && (MI.isCompare() || isLoadAndTestAsCmp(MI)) &&
+ if (CompleteCCUsers && (MI.isCompare() || TII->isLoadAndTestAsCmp(MI)) &&
(optimizeCompareZero(MI, CCUsers) ||
fuseCompareOperations(MI, CCUsers))) {
++MBBI;
diff --git a/llvm/lib/Target/SystemZ/SystemZInstrInfo.cpp b/llvm/lib/Target/SystemZ/SystemZInstrInfo.cpp
index b1fbea1c7cb64..a76424eff1e49 100644
--- a/llvm/lib/Target/SystemZ/SystemZInstrInfo.cpp
+++ b/llvm/lib/Target/SystemZ/SystemZInstrInfo.cpp
@@ -2159,6 +2159,28 @@ unsigned SystemZInstrInfo::getFusedCompare(unsigned Opcode,
return 0;
}
+bool SystemZInstrInfo::isLoadAndTestAsCmp(const MachineInstr &MI) const {
+ // If we during isel used a load-and-test as a compare with 0, the
+ // def operand is dead.
+ return (MI.getOpcode() == SystemZ::LTEBR ||
+ MI.getOpcode() == SystemZ::LTDBR ||
+ MI.getOpcode() == SystemZ::LTXBR) &&
+ MI.getOperand(0).isDead();
+}
+
+bool SystemZInstrInfo::isCompareZero(const MachineInstr &Compare) const {
+ if (isLoadAndTestAsCmp(Compare))
+ return true;
+ return Compare.isCompare() && Compare.getNumExplicitOperands() == 2 &&
+ Compare.getOperand(1).isImm() && Compare.getOperand(1).getImm() == 0;
+}
+
+Register
+SystemZInstrInfo::getCompareSourceReg(const MachineInstr &Compare) const {
+ assert(isCompareZero(Compare) && "Expected a compare with 0.");
+ return Compare.getOperand(isLoadAndTestAsCmp(Compare) ? 1 : 0).getReg();
+}
+
bool SystemZInstrInfo::
prepareCompareSwapOperands(MachineBasicBlock::iterator const MBBI) const {
assert(MBBI->isCompare() && MBBI->getOperand(0).isReg() &&
diff --git a/llvm/lib/Target/SystemZ/SystemZInstrInfo.h b/llvm/lib/Target/SystemZ/SystemZInstrInfo.h
index bf832154ad717..029fe93d5b15c 100644
--- a/llvm/lib/Target/SystemZ/SystemZInstrInfo.h
+++ b/llvm/lib/Target/SystemZ/SystemZInstrInfo.h
@@ -352,6 +352,17 @@ class SystemZInstrInfo : public SystemZGenInstrInfo {
SystemZII::FusedCompareType Type,
const MachineInstr *MI = nullptr) const;
+ // Return true if this is a load and test which can be optimized the
+ // same way as compare instruction.
+ bool isLoadAndTestAsCmp(const MachineInstr &MI) const;
+
+ // Return true if Compare is a comparison against zero.
+ bool isCompareZero(const MachineInstr &Compare) const;
+
+ // Return the source register of Compare, which is the unknown value
+ // being tested.
+ Register getCompareSourceReg(const MachineInstr &Compare) const;
+
// Try to find all CC users of the compare instruction (MBBI) and update
// all of them to maintain equivalent behavior after swapping the compare
// operands. Return false if not all users can be conclusively found and
diff --git a/llvm/lib/Target/SystemZ/SystemZMachineScheduler.cpp b/llvm/lib/Target/SystemZ/SystemZMachineScheduler.cpp
index 5e2365f1dc513..edc1650ee8d32 100644
--- a/llvm/lib/Target/SystemZ/SystemZMachineScheduler.cpp
+++ b/llvm/lib/Target/SystemZ/SystemZMachineScheduler.cpp
@@ -5,14 +5,6 @@
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
-//
-// -------------------------- Post RA scheduling ---------------------------- //
-// SystemZPostRASchedStrategy is a scheduling strategy which is plugged into
-// the MachineScheduler. It has a sorted Available set of SUs and a pickNode()
-// implementation that looks to optimize decoder grouping and balance the
-// usage of processor resources. Scheduler states are saved for the end
-// region of each MBB, so that a successor block can learn from it.
-//===----------------------------------------------------------------------===//
#include "SystemZMachineScheduler.h"
#include "llvm/CodeGen/MachineLoopInfo.h"
@@ -21,6 +13,188 @@ using namespace llvm;
#define DEBUG_TYPE "machine-scheduler"
+/// Pre-RA scheduling ///
+
+static bool isRegDef(const MachineOperand &MO) {
+ return MO.isReg() && MO.isDef();
+}
+
+static bool isPhysRegDef(const MachineOperand &MO) {
+ return isRegDef(MO) && MO.getReg().isPhysical();
+}
+
+void SystemZPreRASchedStrategy::initializeLatencyReduction() {
+ // Enable latency reduction for a region that has a considerable amount of
+ // data sequences that should be interleaved. These are SUs that only have
+ // one data predecessor / successor edge(s) to their adjacent instruction(s)
+ // in the input order. Disable if region has many SUs relative to the
+ // overall height.
+ unsigned DAGHeight = 0;
+ for (unsigned Idx = 0, End = DAG->SUnits.size(); Idx != End; ++Idx)
+ DAGHeight = std::max(DAGHeight, DAG->SUnits[Idx].getHeight());
+ RegionPolicy.DisableLatencyHeuristic =
+ DAG->SUnits.size() >= 3 * std::max(DAGHeight, 1u);
+ if ((HasDataSequences = !RegionPolicy.DisableLatencyHeuristic)) {
+ unsigned CurrSequence = 0, NumSeqNodes = 0;
+ auto countSequence = [&CurrSequence, &NumSeqNodes]() {
+ if (CurrSequence >= 2)
+ NumSeqNodes += CurrSequence;
+ CurrSequence = 0;
+ };
+ for (unsigned Idx = 0, End = DAG->SUnits.size(); Idx != End; ++Idx) {
+ const SUnit *SU = &DAG->SUnits[Idx];
+ bool InDataSequence = true;
+ // One Data pred to MI just above, or no preds.
+ unsigned NumPreds = 0;
+ for (const SDep &Pred : SU->Preds)
+ if (++NumPreds != 1 || Pred.getKind() != SDep::Data ||
+ Pred.getSUnit()->NodeNum != Idx - 1)
+ InDataSequence = false;
+ // One Data succ or no succs (ignoring ExitSU).
+ unsigned NumSuccs = 0;
+ for (const SDep &Succ : SU->Succs)
+ if (Succ.getSUnit() != &DAG->ExitSU &&
+ (++NumSuccs != 1 || Succ.getKind() != SDep::Data))
+ InDataSequence = false;
+ // Another type of node or one that does not have a single data pred
+ // ends any previous sequence.
+ if (!InDataSequence || !NumPreds)
+ countSequence();
+ if (InDataSequence)
+ CurrSequence++;
+ }
+ countSequence();
+ if (NumSeqNodes >= std::max(size_t(4), DAG->SUnits.size() / 4)) {
+ LLVM_DEBUG(dbgs() << "Number of nodes in def-use sequences: "
+ << NumSeqNodes << ". ";);
+ } else
+ HasDataSequences = false;
+ }
+}
+
+bool SystemZPreRASchedStrategy::definesCmp0Src(const MachineInstr *MI,
+ bool CCDef) const {
+ if (Cmp0SrcReg != SystemZ::NoRegister && MI->getNumOperands() &&
+ (MI->getDesc().hasImplicitDefOfPhysReg(SystemZ::CC) || !CCDef)) {
+ const MachineOperand &MO0 = MI->getOperand(0);
+ assert(!isPhysRegDef(MO0) && "Did not expect physreg def!");
+ if (isRegDef(MO0) && MO0.getReg() == Cmp0SrcReg)
+ return true;
+ }
+ return false;
+}
+
+static int biasPhysRegExtra(const SUnit *SU) {
+ if (int Res = biasPhysReg(SU, /*isTop=*/false))
+ return Res;
+
+ // Also recognize Load Address. Most of these are with an FI operand.
+ const MachineInstr *MI = SU->getInstr();
+ return MI->getNumOperands() && !MI->isCopy() &&
+ isPhysRegDef(MI->getOperand(0));
+}
+
+bool SystemZPreRASchedStrategy::tryCandidate(SchedCandidate &Cand,
+ SchedCandidate &TryCand,
+ SchedBoundary *Zone) const {
+ assert(Zone && !Zone->isTop() && "Bottom-Up scheduling only.");
+
+ // Initialize the candidate if needed.
+ if (!Cand.isValid()) {
+ TryCand.Reason = FirstValid;
+ return true;
+ }
+
+ // Bias physreg defs and copies to their uses and definitions respectively.
+ int TryCandPRegBias = biasPhysRegExtra(TryCand.SU);
+ int CandPRegBias = biasPhysRegExtra(Cand.SU);
+ if (tryGreater(TryCandPRegBias, CandPRegBias, TryCand, Cand, PhysReg))
+ return TryCand.Reason != NoCand;
+ if (TryCandPRegBias && CandPRegBias) {
+ // Both biased same way.
+ tryGreater(TryCand.SU->NodeNum, Cand.SU->NodeNum, TryCand, Cand, NodeOrder);
+ return TryCand.Reason != NoCand;
+ }
+
+ // Don't extend the scheduled latency in regions with many nodes in data
+ // sequences, or for (single block loop) regions that are acyclically
+ // (within a single loop iteration) latency limited. IsAcyclicLatencyLimited
+ // is set only after initialization in registerRoots(), which is why it is
+ // checked here instead of earlier.
+ if (!RegionPolicy.DisableLatencyHeuristic &&
+ (HasDataSequences || Rem.IsAcyclicLatencyLimited))
+ if (const SUnit *HigherSU =
+ TryCand.SU->getHeight() > Cand.SU->getHeight() ? TryCand.SU
+ : TryCand.SU->getHeight() < Cand.SU->getHeight() ? Cand.SU
+ : nullptr)
+ if (HigherSU->getHeight() > Zone->getScheduledLatency() &&
+ HigherSU->getDepth() < computeRemLatency(*Zone)) {
+ // One or both SUs increase the scheduled latency.
+ tryLess(TryCand.SU->getHeight(), Cand.SU->getHeight(), TryCand, Cand,
+ GenericSchedulerBase::BotHeightReduce);
+ return TryCand.Reason != NoCand;
+ }
+
+ // Weak edges help copy coalescing.
+ if (tryLess(TryCand.SU->WeakSuccsLeft, Cand.SU->WeakSuccsLeft, TryCand, Cand,
+ Weak))
+ return TryCand.Reason != NoCand;
+
+ // Help compare with zero elimination.
+ if (tryGreater(definesCmp0Src(TryCand.SU->getInstr()),
+ definesCmp0Src(Cand.SU->getInstr()), TryCand, Cand, Weak))
+ return TryCand.Reason != NoCand;
+
+ // Fall through to original instruction order.
+ if (TryCand.SU->NodeNum > Cand.SU->NodeNum) {
+ TryCand.Reason = NodeOrder;
+ return true;
+ }
+
+ return false;
+}
+
+void SystemZPreRASchedStrategy::initPolicy(MachineBasicBlock::iterator Begin,
+ MachineBasicBlock::iterator End,
+ unsigned NumRegionInstrs) {
+ // Avoid setting up the register pressure tracker for small regions to save
+ // compile time. Currently only used for computeCyclicCriticalPath() which
+ // is used for single block loops.
+ MachineBasicBlock *MBB = Begin->getParent();
+ RegionPolicy.ShouldTrackPressure =
+ MBB->isSuccessor(MBB) && NumRegionInstrs >= 8;
+
+ // These heuristics have so far seemed to work better without adding a
+ // top-down boundary.
+ RegionPolicy.OnlyBottomUp = true;
+ BotIdx = NumRegionInstrs - 1;
+ this->NumRegionInstrs = NumRegionInstrs;
+}
+
+void SystemZPreRASchedStrategy::initialize(ScheduleDAGMI *dag) {
+ GenericScheduler::initialize(dag);
+
+ Cmp0SrcReg = SystemZ::NoRegister;
+
+ initializeLatencyReduction();
+ LLVM_DEBUG(dbgs() << "Latency scheduling " << (HasDataSequences ? "" : "not ")
+ << "enabled for data sequences.\n";);
+}
+
+void SystemZPreRASchedStrategy::schedNode(SUnit *SU, bool IsTopNode) {
+ GenericScheduler::schedNode(SU, IsTopNode);
+
+ const SystemZInstrInfo *TII = static_cast<const SystemZInstrInfo *>(DAG->TII);
+ MachineInstr *MI = SU->getInstr();
+ if (TII->isCompareZero(*MI))
+ Cmp0SrcReg = TII->getCompareSourceReg(*MI);
+ else if (MI->getDesc().hasImplicitDefOfPhysReg(SystemZ::CC) ||
+ definesCmp0Src(MI, /*CCDef=*/false))
+ Cmp0SrcReg = SystemZ::NoRegister;
+}
+
+/// Post-RA scheduling ///
+
#ifndef NDEBUG
// Print the set of SUs
void SystemZPostRASchedStrategy::SUSet::
diff --git a/llvm/lib/Target/SystemZ/SystemZMachineScheduler.h b/llvm/lib/Target/SystemZ/SystemZMachineScheduler.h
index ba325b5d22951..4fdfd92d192c3 100644
--- a/llvm/lib/Target/SystemZ/SystemZMachineScheduler.h
+++ b/llvm/lib/Target/SystemZ/SystemZMachineScheduler.h
@@ -5,14 +5,22 @@
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
+
+// -------------------------- Pre RA scheduling ----------------------------- //
+//
+// SystemZPreRASchedStrategy performs latency scheduling in certain types of
+// regions where this is beneficial, and also helps copy coalescing and
+// comparison elimination.
//
// -------------------------- Post RA scheduling ---------------------------- //
+//
// SystemZPostRASchedStrategy is a scheduling strategy which is plugged into
// the MachineScheduler. It has a sorted Available set of SUs and a pickNode()
// implementation that looks to optimize decoder grouping and balance the
// usage of processor resources. Scheduler states are saved for the end
// region of each MBB, so that a successor block can learn from it.
-//===----------------------------------------------------------------------===//
+//
+//----------------------------------------------------------------------------//
#ifndef LLVM_LIB_TARGET_SYSTEMZ_SYSTEMZMACHINESCHEDULER_H
#define LLVM_LIB_TARGET_SYSTEMZ_SYSTEMZMACHINESCHEDULER_H
@@ -24,6 +32,34 @@
namespace llvm {
+/// A MachineSchedStrategy implementation for SystemZ pre RA scheduling.
+class SystemZPreRASchedStrategy : public GenericScheduler {
+ void initializeLatencyReduction();
+
+ Register Cmp0SrcReg;
+ // Return true if MI defines the Cmp0SrcReg that is used by a scheduled
+ // compare with 0. If CCDef is true MI must also have an implicit def of CC.
+ bool definesCmp0Src(const MachineInstr *MI, bool CCDef = true) const;
+
+ // True if the region has many instructions in def-use sequences and would
+ // likely benefit from latency reduction.
+ bool HasDataSequences;
+
+protected:
+ bool tryCandidate(SchedCandidate &Cand, SchedCandidate &TryCand,
+ SchedBoundary *Zone) const override;
+
+public:
+ SystemZPreRASchedStrategy(const MachineSchedContext *C)
+ : GenericScheduler(C) {}
+
+ void initPolicy(MachineBasicBlock::iterator Begin,
+ MachineBasicBlock::iterator End,
+ unsigned NumRegionInstrs) override;
+ void initialize(ScheduleDAGMI *dag) override;
+ void schedNode(SUnit *SU, bool IsTopNode) override;
+};
+
/// A MachineSchedStrategy implementation for SystemZ post RA scheduling.
class SystemZPostRASchedStrategy : public MachineSchedStrategy {
diff --git a/llvm/lib/Target/SystemZ/SystemZTargetMachine.cpp b/llvm/lib/Target/SystemZ/SystemZTargetMachine.cpp
index 3d0c04b574933..110ba5b46fe60 100644
--- a/llvm/lib/Target/SystemZ/SystemZTargetMachine.cpp
+++ b/llvm/lib/Target/SystemZ/SystemZTargetMachine.cpp
@@ -36,6 +36,11 @@ static cl::opt<bool> EnableMachineCombinerPass(
cl::desc("Enable the machine combiner pass"),
cl::init(true), cl::Hidden);
+static cl::opt<bool> GenericSched(
+ "generic-sched", cl::Hidden, cl::init(false),
+ cl::desc("Run the generic pre-ra scheduler instead of the SystemZ "
+ "scheduler."));
+
// NOLINTNEXTLINE(readability-identifier-naming)
extern "C" LLVM_ABI LLVM_EXTERNAL_VISIBILITY void
LLVMInitializeSystemZTarget() {
@@ -168,6 +173,17 @@ SystemZTargetMachine::getSubtargetImpl(const Function &F) const {
return I.get();
}
+ScheduleDAGInstrs *
+SystemZTargetMachine::createMachineScheduler(MachineSchedContext *C) const {
+ // Use GenericScheduler if requested on CL or for Z10 which has no sched
+ // model.
+ if (GenericSched ||
+ !C->MF->getSubtarget().getSchedModel().hasInstrSchedModel())
+ return nullptr;
+
+ return createSchedLive<SystemZPreRASchedStrategy>(C);
+}
+
ScheduleDAGInstrs *
SystemZTargetMachine::createPostMachineScheduler(MachineSchedContext *C) const {
return createSchedPostRA<SystemZPostRASchedStrategy>(C);
diff --git a/llvm/lib/Target/SystemZ/SystemZTargetMachine.h b/llvm/lib/Target/SystemZ/SystemZTargetMachine.h
index cced57a40ede0..1493332b9d361 100644
--- a/llvm/lib/Target/SystemZ/SystemZTargetMachine.h
+++ b/llvm/lib/Target/SystemZ/SystemZTargetMachine.h
@@ -55,6 +55,10 @@ class SystemZTargetMachine : public CodeGenTargetMachineImpl {
MachineFunctionInfo *
createMachineFunctionInfo(BumpPtrAllocator &Allocator, const Function &F,
const TargetSubtargetInfo *STI) const override;
+
+ ScheduleDAGInstrs *
+ createMachineScheduler(MachineSchedContext *C) const override;
+
ScheduleDAGInstrs *
createPostMachineScheduler(MachineSchedContext *C) const override;
diff --git a/llvm/test/CodeGen/SystemZ/DAGCombiner_isAlias.ll b/llvm/test/CodeGen/SystemZ/DAGCombiner_isAlias.ll
index 0e6c0e5836c04..2556c62ed1e72 100644
--- a/llvm/test/CodeGen/SystemZ/DAGCombiner_isAlias.ll
+++ b/llvm/test/CodeGen/SystemZ/DAGCombiner_isAlias.ll
@@ -10,10 +10,10 @@
; %.b = load i1, ptr @g_2, align 4
; CHECK: # %bb.6: # %crc32_gentab.exit
-; CHECK: larl %r2, g_2
-; CHECK-NEXT: llc %r3, 0(%r2)
-; CHECK-NOT: %r2
-; CHECK: llc %r1, 0(%r2)
+; CHECK: larl [[REG:%r[1-9]+]], g_2
+; CHECK-NEXT: llc {{%r[1-9]}}, 0([[REG]])
+; CHECK-NOT: [[REG]],
+; CHECK: llc %r1, 0([[REG]])
@g_2 = external hidden unnamed_addr global i1, align 4
@.str.1 = external hidden unnamed_addr constant [4 x i8], align 2
diff --git a/llvm/test/CodeGen/SystemZ/alias-01.ll b/llvm/test/CodeGen/SystemZ/alias-01.ll
index 008d659219172..79792de99a6b4 100644
--- a/llvm/test/CodeGen/SystemZ/alias-01.ll
+++ b/llvm/test/CodeGen/SystemZ/alias-01.ll
@@ -1,6 +1,9 @@
-; Test 32-bit ANDs in which the second operand is variable.
+; Test 32-bit ADDs in which the second operand is variable.
;
-; RUN: llc < %s -mtriple=s390x-linux-gnu | FileCheck %s
+; RUN: llc < %s -mtriple=s390x-linux-gnu -mcpu=z196 -generic-sched | FileCheck %s
+;
+; TODO: Some spills here with SystemZPreRASchedStrategy to all the stores in
+; bottom of region.
; Check that there are no spills.
define void @f1(ptr %src1, ptr %dest) {
diff --git a/llvm/test/CodeGen/SystemZ/args-06.ll b/llvm/test/CodeGen/SystemZ/args-06.ll
index d19fdb58e5a16..45723ba9ab2e5 100644
--- a/llvm/test/CodeGen/SystemZ/args-06.ll
+++ b/llvm/test/CodeGen/SystemZ/args-06.ll
@@ -5,10 +5,10 @@
define i8 @f1(i8 %a, i8 %b, i8 %c, i8 %d, i8 %e, i8 %f, i8 %g) {
; CHECK-LABEL: f1:
-; CHECK: lb {{%r[0-5]}}, 175(%r15)
-; CHECK: lb {{%r[0-5]}}, 167(%r15)
-; CHECK: ar %r2, %r3
-; CHECK: ar %r2, %r4
+; CHECK-DAG: lb {{%r[0-5]}}, 175(%r15)
+; CHECK-DAG: lb {{%r[0-5]}}, 167(%r15)
+; CHECK-DAG: ar %r2, %r3
+; CHECK-DAG: ar %r2, %r4
; CHECK: ar %r2, %r5
; CHECK: ar %r2, %r6
; CHECK: br %r14
diff --git a/llvm/test/CodeGen/SystemZ/args-12.ll b/llvm/test/CodeGen/SystemZ/args-12.ll
index 472672bbfd5ca..ada3fd3085a1c 100644
--- a/llvm/test/CodeGen/SystemZ/args-12.ll
+++ b/llvm/test/CodeGen/SystemZ/args-12.ll
@@ -28,11 +28,11 @@ define void @foo() {
; CHECK-NEXT: vl %v0, 0(%r1), 3
; CHECK-NEXT: vst %v0, 160(%r15), 3
; CHECK-NEXT: vgbm %v0, 0
-; CHECK-NEXT: la %r6, 216(%r15)
; CHECK-NEXT: lghi %r2, 1
; CHECK-NEXT: lghi %r3, 2
; CHECK-NEXT: lghi %r4, 3
; CHECK-NEXT: lghi %r5, 4
+; CHECK-NEXT: la %r6, 216(%r15)
; CHECK-NEXT: vst %v0, 200(%r15), 3
; CHECK-NEXT: vst %v0, 216(%r15), 3
; CHECK-NEXT: brasl %r14, bar@PLT
diff --git a/llvm/test/CodeGen/SystemZ/atomic-load-09.ll b/llvm/test/CodeGen/SystemZ/atomic-load-09.ll
index 61b8e2f0efa8c..74e5716b4169e 100644
--- a/llvm/test/CodeGen/SystemZ/atomic-load-09.ll
+++ b/llvm/test/CodeGen/SystemZ/atomic-load-09.ll
@@ -39,8 +39,8 @@ define void @f2(ptr %ret, ptr %src) {
; CHECK-NEXT: aghi %r15, -176
; CHECK-NEXT: .cfi_def_cfa_offset 336
; CHECK-NEXT: lgr %r13, %r2
-; CHECK-NEXT: la %r4, 160(%r15)
; CHECK-NEXT: lghi %r2, 16
+; CHECK-NEXT: la %r4, 160(%r15)
; CHECK-NEXT: lhi %r5, 5
; CHECK-NEXT: brasl %r14, __atomic_load@PLT
; CHECK-NEXT: vl %v0, 160(%r15), 3
@@ -62,8 +62,8 @@ define void @f2_fpuse(ptr %ret, ptr %src) {
; CHECK-NEXT: aghi %r15, -176
; CHECK-NEXT: .cfi_def_cfa_offset 336
; CHECK-NEXT: lgr %r13, %r2
-; CHECK-NEXT: la %r4, 160(%r15)
; CHECK-NEXT: lghi %r2, 16
+; CHECK-NEXT: la %r4, 160(%r15)
; CHECK-NEXT: lhi %r5, 5
; CHECK-NEXT: brasl %r14, __atomic_load@PLT
; CHECK-NEXT: vl %v0, 160(%r15), 3
diff --git a/llvm/test/CodeGen/SystemZ/atomic-store-08.ll b/llvm/test/CodeGen/SystemZ/atomic-store-08.ll
index 57f1319365c4f..d73ab946f72c7 100644
--- a/llvm/test/CodeGen/SystemZ/atomic-store-08.ll
+++ b/llvm/test/CodeGen/SystemZ/atomic-store-08.ll
@@ -86,11 +86,11 @@ define void @f2_fpuse(ptr %dst, ptr %src) {
; CHECK-NEXT: .cfi_def_cfa_offset 336
; CHECK-NEXT: ld %f0, 0(%r3)
; CHECK-NEXT: ld %f2, 8(%r3)
-; CHECK-DAG: lgr %r3, %r2
+; CHECK-DAG: lgr %r3, %r2
; CHECK-DAG: axbr %f0, %f0
-; CHECK-NEXT: la %r4, 160(%r15)
-; CHECK-NEXT: lghi %r2, 16
-; CHECK-NEXT: lhi %r5, 5
+; CHECK-DAG: la %r4, 160(%r15)
+; CHECK-DAG: lghi %r2, 16
+; CHECK-DAG: lhi %r5, 5
; CHECK-NEXT: std %f0, 160(%r15)
; CHECK-NEXT: std %f2, 168(%r15)
; CHECK-NEXT: brasl %r14, __atomic_store@PLT
diff --git a/llvm/test/CodeGen/SystemZ/atomic-store-09.ll b/llvm/test/CodeGen/SystemZ/atomic-store-09.ll
index 3af16490b34bd..b85e7f81790ec 100644
--- a/llvm/test/CodeGen/SystemZ/atomic-store-09.ll
+++ b/llvm/test/CodeGen/SystemZ/atomic-store-09.ll
@@ -42,9 +42,9 @@ define void @f2(ptr %dst, ptr %src) {
; CHECK-NEXT: .cfi_def_cfa_offset 336
; CHECK-NEXT: vl %v0, 0(%r3), 3
; CHECK-NEXT: lgr %r0, %r2
-; CHECK-NEXT: la %r4, 160(%r15)
; CHECK-NEXT: lghi %r2, 16
; CHECK-NEXT: lgr %r3, %r0
+; CHECK-NEXT: la %r4, 160(%r15)
; CHECK-NEXT: lhi %r5, 5
; CHECK-NEXT: vst %v0, 160(%r15), 3
; CHECK-NEXT: brasl %r14, __atomic_store@PLT
@@ -66,9 +66,9 @@ define void @f2_fpuse(ptr %dst, ptr %src) {
; CHECK-NEXT: vl %v0, 0(%r3), 3
; CHECK-NEXT: wfaxb %v0, %v0, %v0
; CHECK-NEXT: lgr %r0, %r2
-; CHECK-NEXT: la %r4, 160(%r15)
; CHECK-NEXT: lghi %r2, 16
; CHECK-NEXT: lgr %r3, %r0
+; CHECK-NEXT: la %r4, 160(%r15)
; CHECK-NEXT: lhi %r5, 5
; CHECK-NEXT: vst %v0, 160(%r15), 3
; CHECK-NEXT: brasl %r14, __atomic_store@PLT
diff --git a/llvm/test/CodeGen/SystemZ/atomicrmw-fmax-01.ll b/llvm/test/CodeGen/SystemZ/atomicrmw-fmax-01.ll
index 80c43137e3a03..04b8e9f0d2c55 100644
--- a/llvm/test/CodeGen/SystemZ/atomicrmw-fmax-01.ll
+++ b/llvm/test/CodeGen/SystemZ/atomicrmw-fmax-01.ll
@@ -12,10 +12,10 @@ define float @f1(ptr %src, float %b) {
; CHECK: ler %f0, [[FSRC]]
; CHECK: ler %f2, [[FB]]
; CHECK: brasl %r14, fmaxf@PLT
-; CHECK: lgdr [[RO:%r[0-9]+]], %f0
-; CHECK: srlg [[RO]], [[RO]], 32
-; CHECK: lgdr [[RI:%r[0-9]+]], [[FSRC]]
-; CHECK: srlg [[RI]], [[RI]], 32
+; CHECK-DAG: lgdr [[RO:%r[0-9]+]], %f0
+; CHECK-DAG: srlg [[RO]], [[RO]], 32
+; CHECK-DAG: lgdr [[RI:%r[0-9]+]], [[FSRC]]
+; CHECK-DAG: srlg [[RI]], [[RI]], 32
; CHECK: cs [[RI]], [[RO]], 0([[SRC]])
; CHECK: sllg [[RO]], [[RI]], 32
; CHECK: ldgr [[FSRC]], [[RO]]
diff --git a/llvm/test/CodeGen/SystemZ/atomicrmw-fmin-01.ll b/llvm/test/CodeGen/SystemZ/atomicrmw-fmin-01.ll
index c67b02e688de3..bb01b1d90eaa8 100644
--- a/llvm/test/CodeGen/SystemZ/atomicrmw-fmin-01.ll
+++ b/llvm/test/CodeGen/SystemZ/atomicrmw-fmin-01.ll
@@ -12,10 +12,10 @@ define float @f1(ptr %src, float %b) {
; CHECK: ler %f0, [[FSRC]]
; CHECK: ler %f2, [[FB]]
; CHECK: brasl %r14, fminf@PLT
-; CHECK: lgdr [[RO:%r[0-9]+]], %f0
-; CHECK: srlg [[RO]], [[RO]], 32
-; CHECK: lgdr [[RI:%r[0-9]+]], [[FSRC]]
-; CHECK: srlg [[RI]], [[RI]], 32
+; CHECK-DAG: lgdr [[RO:%r[0-9]+]], %f0
+; CHECK-DAG: srlg [[RO]], [[RO]], 32
+; CHECK-DAG: lgdr [[RI:%r[0-9]+]], [[FSRC]]
+; CHECK-DAG: srlg [[RI]], [[RI]], 32
; CHECK: cs [[RI]], [[RO]], 0([[SRC]])
; CHECK: sllg [[RO]], [[RI]], 32
; CHECK: ldgr [[FSRC]], [[RO]]
diff --git a/llvm/test/CodeGen/SystemZ/fp-half-vector-fcmp-select.ll b/llvm/test/CodeGen/SystemZ/fp-half-vector-fcmp-select.ll
index 0500f43b7f33e..a453d29705ff2 100644
--- a/llvm/test/CodeGen/SystemZ/fp-half-vector-fcmp-select.ll
+++ b/llvm/test/CodeGen/SystemZ/fp-half-vector-fcmp-select.ll
@@ -391,20 +391,20 @@ define void @fun1(ptr %Src, ptr %Dst) {
; CHECK-NEXT: .cfi_offset %f9, -176
; CHECK-NEXT: .cfi_offset %f10, -184
; CHECK-NEXT: .cfi_offset %f11, -192
-; CHECK-NEXT: lgh %r0, 2(%r2)
-; CHECK-NEXT: sllg %r0, %r0, 48
-; CHECK-NEXT: ldgr %f8, %r0
-; CHECK-NEXT: lgh %r0, 0(%r2)
-; CHECK-NEXT: sllg %r0, %r0, 48
-; CHECK-NEXT: ldgr %f11, %r0
-; CHECK-NEXT: lgh %r0, 6(%r2)
-; CHECK-NEXT: sllg %r0, %r0, 48
-; CHECK-NEXT: ldgr %f10, %r0
; CHECK-NEXT: lgh %r0, 4(%r2)
-; CHECK-NEXT: sllg %r0, %r0, 48
+; CHECK-NEXT: lgh %r1, 2(%r2)
; CHECK-NEXT: lgr %r13, %r3
+; CHECK-NEXT: lgh %r3, 0(%r2)
+; CHECK-NEXT: lgh %r2, 6(%r2)
+; CHECK-NEXT: sllg %r0, %r0, 48
+; CHECK-NEXT: sllg %r1, %r1, 48
+; CHECK-NEXT: ldgr %f8, %r1
+; CHECK-NEXT: sllg %r1, %r3, 48
; CHECK-NEXT: ldgr %f0, %r0
; CHECK-NEXT: # kill: def $f0h killed $f0h killed $f0d
+; CHECK-NEXT: sllg %r0, %r2, 48
+; CHECK-NEXT: ldgr %f11, %r1
+; CHECK-NEXT: ldgr %f10, %r0
; CHECK-NEXT: brasl %r14, __extendhfsf2@PLT
; CHECK-NEXT: ler %f9, %f0
; CHECK-NEXT: ler %f0, %f11
@@ -429,15 +429,15 @@ define void @fun1(ptr %Src, ptr %Dst) {
; CHECK-NEXT: brasl %r14, __truncsfhf2@PLT
; CHECK-NEXT: # kill: def $f0h killed $f0h def $f0d
; CHECK-NEXT: lgdr %r0, %f0
-; CHECK-NEXT: srlg %r0, %r0, 48
-; CHECK-NEXT: sth %r0, 2(%r13)
-; CHECK-NEXT: lgdr %r0, %f9
+; CHECK-NEXT: lgdr %r1, %f9
; CHECK-NEXT: ld %f8, 184(%r15) # 8-byte Reload
; CHECK-NEXT: ld %f9, 176(%r15) # 8-byte Reload
; CHECK-NEXT: ld %f10, 168(%r15) # 8-byte Reload
; CHECK-NEXT: ld %f11, 160(%r15) # 8-byte Reload
; CHECK-NEXT: srlg %r0, %r0, 48
-; CHECK-NEXT: sth %r0, 0(%r13)
+; CHECK-NEXT: srlg %r1, %r1, 48
+; CHECK-NEXT: sth %r0, 2(%r13)
+; CHECK-NEXT: sth %r1, 0(%r13)
; CHECK-NEXT: lmg %r13, %r15, 296(%r15)
; CHECK-NEXT: br %r14
;
diff --git a/llvm/test/CodeGen/SystemZ/fp-half.ll b/llvm/test/CodeGen/SystemZ/fp-half.ll
index f479e405b04e9..2de7fd3c79461 100644
--- a/llvm/test/CodeGen/SystemZ/fp-half.ll
+++ b/llvm/test/CodeGen/SystemZ/fp-half.ll
@@ -564,14 +564,14 @@ define void @fun11() {
; NOVEC-NEXT: aghi %r15, -160
; NOVEC-NEXT: .cfi_def_cfa_offset 320
; NOVEC-NEXT: lghrl %r0, .LCPI11_0
-; NOVEC-NEXT: sllg %r0, %r0, 48
-; NOVEC-NEXT: ldgr %f4, %r0
-; NOVEC-NEXT: lghrl %r0, .LCPI11_1
+; NOVEC-NEXT: lghrl %r1, .LCPI11_1
; NOVEC-NEXT: lzer %f2
; NOVEC-NEXT: lcdfr %f0, %f2
-; NOVEC-NEXT: # kill: def $f4h killed $f4h killed $f4d
; NOVEC-NEXT: sllg %r0, %r0, 48
-; NOVEC-NEXT: ldgr %f6, %r0
+; NOVEC-NEXT: sllg %r1, %r1, 48
+; NOVEC-NEXT: ldgr %f4, %r0
+; NOVEC-NEXT: # kill: def $f4h killed $f4h killed $f4d
+; NOVEC-NEXT: ldgr %f6, %r1
; NOVEC-NEXT: # kill: def $f6h killed $f6h killed $f6d
; NOVEC-NEXT: brasl %r14, foo2@PLT
; NOVEC-NEXT: lmg %r14, %r15, 272(%r15)
diff --git a/llvm/test/CodeGen/SystemZ/int-conv-11.ll b/llvm/test/CodeGen/SystemZ/int-conv-11.ll
index 48e3a4d0dda63..52b3a58bfce3f 100644
--- a/llvm/test/CodeGen/SystemZ/int-conv-11.ll
+++ b/llvm/test/CodeGen/SystemZ/int-conv-11.ll
@@ -1,6 +1,9 @@
; Test spills of zero extensions when high GR32s are available.
;
-; RUN: llc < %s -mtriple=s390x-linux-gnu -mcpu=z196 | FileCheck %s
+; RUN: llc < %s -mtriple=s390x-linux-gnu -mcpu=z196 -generic-sched | FileCheck %s
+;
+; TODO: Some spills occur here with SystemZPreRASchedStrategy due to all the
+; stores being placed at the bottom of the region.
; Test a case where we spill the source of at least one LLCRMux. We want
; to use LLC(H) if possible.
diff --git a/llvm/test/CodeGen/SystemZ/knownbits-intrinsics-binop.ll b/llvm/test/CodeGen/SystemZ/knownbits-intrinsics-binop.ll
index b855d01934782..d67d085b2402a 100644
--- a/llvm/test/CodeGen/SystemZ/knownbits-intrinsics-binop.ll
+++ b/llvm/test/CodeGen/SystemZ/knownbits-intrinsics-binop.ll
@@ -464,9 +464,9 @@ define i32 @f31() {
; CHECK-LABEL: f31:
; CHECK-LABEL: # %bb.0:
; CHECK-NEXT: larl %r1, .LCPI31_0
+; CHECK-NEXT: larl %r2, .LCPI31_1
; CHECK-NEXT: vl %v0, 0(%r1), 3
-; CHECK-NEXT: larl %r1, .LCPI31_1
-; CHECK-NEXT: vl %v1, 0(%r1), 3
+; CHECK-NEXT: vl %v1, 0(%r2), 3
; CHECK-NEXT: vperm %v0, %v1, %v1, %v0
; CHECK-NEXT: vlgvb %r2, %v0, 0
; CHECK-NEXT: nilf %r2, 7
diff --git a/llvm/test/CodeGen/SystemZ/memcpy-01.ll b/llvm/test/CodeGen/SystemZ/memcpy-01.ll
index cabbfb40acb9a..e62f3eea58efc 100644
--- a/llvm/test/CodeGen/SystemZ/memcpy-01.ll
+++ b/llvm/test/CodeGen/SystemZ/memcpy-01.ll
@@ -149,8 +149,8 @@ define void @f13() {
; CHECK-LABEL: f13:
; CHECK: brasl %r14, foo@PLT
; CHECK: mvc 200(256,%r15), 3826(%r15)
-; CHECK: mvc 456(256,%r15), 4082(%r15)
-; CHECK: lay [[NEWSRC:%r[1-5]]], 4338(%r15)
+; CHECK-DAG: mvc 456(256,%r15), 4082(%r15)
+; CHECK-DAG: lay [[NEWSRC:%r[1-5]]], 4338(%r15)
; CHECK: mvc 712(256,%r15), 0([[NEWSRC]])
; CHECK: mvc 968(256,%r15), 256([[NEWSRC]])
; CHECK: mvc 1224(255,%r15), 512([[NEWSRC]])
diff --git a/llvm/test/CodeGen/SystemZ/memset-01.ll b/llvm/test/CodeGen/SystemZ/memset-01.ll
index 535ccfd7b9e28..06a34e0297868 100644
--- a/llvm/test/CodeGen/SystemZ/memset-01.ll
+++ b/llvm/test/CodeGen/SystemZ/memset-01.ll
@@ -192,8 +192,8 @@ define void @f17(ptr %dest, i8 %val) {
; CHECK-NEXT: mvc 3584(255,%r2), 3583(%r2)
; CHECK-NEXT: stc %r3, 3839(%r2)
; CHECK-NEXT: mvc 3840(255,%r2), 3839(%r2)
-; CHECK-NEXT: lay %r1, 4096(%r2)
-; CHECK-NEXT: stc %r3, 4095(%r2)
+; CHECK-DAG: lay %r1, 4096(%r2)
+; CHECK-DAG: stc %r3, 4095(%r2)
; CHECK-NEXT: mvc 0(1,%r1), 4095(%r2)
; CHECK-NEXT: br %r14
%addr = getelementptr i8, ptr %dest, i64 3583
diff --git a/llvm/test/CodeGen/SystemZ/misched-prera-biaspregs.mir b/llvm/test/CodeGen/SystemZ/misched-prera-biaspregs.mir
new file mode 100644
index 0000000000000..85c9535ea6609
--- /dev/null
+++ b/llvm/test/CodeGen/SystemZ/misched-prera-biaspregs.mir
@@ -0,0 +1,94 @@
+# RUN: llc -o - %s -mtriple=s390x-linux-gnu -mcpu=z16 -verify-machineinstrs \
+# RUN: -run-pass=machine-scheduler 2>&1 | FileCheck %s
+
+# The COPY to r2 should be right before the return.
+# CHECK: name: fun0
+# CHECK: $r2d = COPY %0
+# CHECK-NEXT: Return implicit $r2d
+---
+name: fun0
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ %0:gr64bit = LGHI 0
+ $r2d = COPY %0
+ %1:gr64bit = LGHI 0
+ Return implicit $r2d
+...
+
+# The COPY from r2 should be first.
+# CHECK: name: fun1
+# CHECK: liveins: $r2
+# CHECK-NEXT: {{ $}}
+# CHECK-NEXT: %2:gr64bit = COPY $r2
+---
+name: fun1
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ liveins: $r2d
+
+ %0:gr64bit = LGHI 1
+ %1:gr64bit = COPY %0
+ %2:gr64bit = COPY %1
+ %2:gr64bit = COPY $r2d
+ $r2d = COPY %2
+ Return implicit $r2d
+...
+
+# The LGHI to r2 should be right before the return.
+# CHECK: name: fun2
+# CHECK: $r2d = LGHI 0
+# CHECK-NEXT: Return implicit $r2d
+---
+name: fun2
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ %0:gr64bit = LGHI 0
+ $r2d = LGHI 0
+ %1:gr64bit = LGHI 0
+ Return implicit $r2d
+...
+
+# The LA to r2 should be right before the return.
+# CHECK: name: fun3
+# CHECK: $r2d = LA %stack.0, 0, $noreg
+# CHECK-NEXT: Return implicit $r2d
+---
+name: fun3
+tracksRegLiveness: true
+stack:
+ - { id: 0, size: 8 }
+body: |
+ bb.0:
+ $r2d = LA %stack.0, 0, $noreg
+ %0:gr64bit = LGHI 0
+ Return implicit killed $r2d
+...
+
+# Don't reorder phys-reg COPYs. Recognize LA to PReg with a VReg use.
+# CHECK: name: fun4
+# CHECK: %0:addr64bit = COPY $r2d
+# CHECK-NEXT: dead %3:gr64bit = COPY $r3d
+# CHECK-NEXT: %1:gr64bit = AGRK %0, %0, implicit-def $cc
+# CHECK-NEXT: dead %2:gr64bit = AGRK %1, %0, implicit-def $cc
+# CHECK-NEXT: $r2d = LA %0, 0, $noreg
+# CHECK-NEXT: Return
+---
+name: fun4
+tracksRegLiveness: true
+stack:
+ - { id: 0, size: 8 }
+body: |
+ bb.0.entry:
+ liveins: $r2d, $r3d
+
+ %0:addr64bit = COPY $r2d
+ %2:gr64bit = AGRK %0, %0, implicit-def $cc
+ $r2d = LA %0, 0, $noreg
+ %3:gr64bit = AGRK %2, %0, implicit-def $cc
+ %1:gr64bit = COPY $r3d
+
+ Return
+...
diff --git a/llvm/test/CodeGen/SystemZ/misched-prera-cmp-elim.mir b/llvm/test/CodeGen/SystemZ/misched-prera-cmp-elim.mir
new file mode 100644
index 0000000000000..481fe51f0de0d
--- /dev/null
+++ b/llvm/test/CodeGen/SystemZ/misched-prera-cmp-elim.mir
@@ -0,0 +1,38 @@
+# RUN: llc -o - %s -mtriple=s390x-linux-gnu -mcpu=z16 -verify-machineinstrs \
+# RUN: -run-pass=machine-scheduler -debug-only=machine-scheduler 2>&1\
+# RUN: | FileCheck %s
+
+# Schedule the AGHIK that defines the compare source low to help comparison
+# elimination.
+# CHECK: ********** MI Scheduling **********
+# CHECK-NEXT: fun0:%bb.0
+# CHECK: ********** MI Scheduling **********
+# CHECK-NEXT: fun0:%bb.1
+# CHECK: Queue BotQ.A: 1 0
+# CHECK-NEXT: Cand SU(1) FIRST
+# CHECK-NEXT: Cand SU(0) WEAK
+# CHECK-NEXT: Pick Bot WEAK [pre-RA]
+# CHECK-NEXT: Scheduling SU(0) %2:gr64bit = AGHIK %0:gr64bit, -1, impl
+# CHECK: *** Final schedule for %bb.1 ***
+# CHECK-NEXT: SU(1): dead %3:gr64bit = AGHIK %1:gr64bit, 1, implicit-def dead $cc
+# CHECK-NEXT: SU(0): %2:gr64bit = AGHIK %0:gr64bit, -1, implicit-def dead $cc
+# CHECK-NEXT: SU(2): CGHI %2:gr64bit, 0, implicit-def $cc
+---
+name: fun0
+tracksRegLiveness: true
+body: |
+ bb.0:
+ liveins: $r2d, $r3d
+
+ %0:gr64bit = COPY $r2d
+ %1:gr64bit = COPY $r3d
+
+ bb.1:
+ %2:gr64bit = AGHIK %0, -1, implicit-def dead $cc
+ %3:gr64bit = AGHIK %1, 1, implicit-def dead $cc
+ CGHI %2:gr64bit, 0, implicit-def $cc
+ BRC 14, 8, %bb.1, implicit killed $cc
+
+ bb.2:
+ Return
+...
diff --git a/llvm/test/CodeGen/SystemZ/misched-prera-copy-coal.mir b/llvm/test/CodeGen/SystemZ/misched-prera-copy-coal.mir
new file mode 100644
index 0000000000000..05a1fe699c584
--- /dev/null
+++ b/llvm/test/CodeGen/SystemZ/misched-prera-copy-coal.mir
@@ -0,0 +1,31 @@
+# RUN: llc -o - %s -mtriple=s390x-linux-gnu -mcpu=z16 -verify-machineinstrs \
+# RUN: -run-pass=machine-scheduler 2>&1 | FileCheck %s
+
+# Respect the weak edge between the SLLK and CLFIMux. Only if the SLLK is scheduled
+# below can the COPY be coalesced.
+# CHECK: name: fun0
+# CHECK: CLFIMux %0, 0, implicit-def $cc
+# CHECK-NEXT: %1:gr32bit = SLLK %0, $noreg, 1
+# CHECK-NEXT: %0:gr32bit = COPY %1
+# CHECK-NEXT: BRC 14, 10, %bb.1, implicit killed $cc
+---
+name: fun0
+tracksRegLiveness: true
+body: |
+ bb.0:
+ successors: %bb.1(0x80000000)
+
+ %0:gr32bit = LHIMux 0
+
+ bb.1:
+ successors: %bb.2(0x04000000), %bb.1(0x7c000000)
+
+ %1:gr32bit = SLLK %0, $noreg, 1
+ CLFIMux %0, 0, implicit-def $cc
+ %0:gr32bit = COPY %1
+ BRC 14, 10, %bb.1, implicit killed $cc
+ J %bb.2
+
+ bb.2:
+ Return
+...
diff --git a/llvm/test/CodeGen/SystemZ/misched-prera-latencies.mir b/llvm/test/CodeGen/SystemZ/misched-prera-latencies.mir
new file mode 100644
index 0000000000000..8257d672ee514
--- /dev/null
+++ b/llvm/test/CodeGen/SystemZ/misched-prera-latencies.mir
@@ -0,0 +1,212 @@
+# RUN: llc -o - %s -mtriple=s390x-linux-gnu -mcpu=z16 -verify-machineinstrs \
+# RUN: -run-pass=machine-scheduler -debug-only=machine-scheduler 2>&1\
+# RUN: | FileCheck %s
+
+# This function has two long independent chains of instructions that should be interleaved.
+# CHECK: ********** MI Scheduling **********
+# CHECK-NEXT: fun0:%bb.0
+# CHECK: ********** MI Scheduling **********
+# CHECK-NEXT: fun0:%bb.1
+# CHECK: Number of nodes in def-use sequences: 10. Latency scheduling enabled for data sequences.
+# CHECK: *** Final schedule for %bb.1 ***
+# CHECK-NEXT: SU(0): %4:fp64bit = LZDR
+# CHECK-NEXT: SU(5): %9:fp64bit = LZDR
+# CHECK-NEXT: SU(1): %5:fp64bit = nofpexcept WFMADB %0:fp64bit, %1:fp64bit, %4:fp64bit, implicit $fpc
+# CHECK-NEXT: SU(6): %10:fp64bit = nofpexcept WFMADB %2:fp64bit, %3:fp64bit, %9:fp64bit, implicit $fpc
+# CHECK-NEXT: SU(2): %6:fp64bit = nofpexcept WFMADB %0:fp64bit, %1:fp64bit, %5:fp64bit, implicit $fpc
+# CHECK-NEXT: SU(7): %11:fp64bit = nofpexcept WFMADB %2:fp64bit, %3:fp64bit, %10:fp64bit, implicit $fpc
+# CHECK-NEXT: SU(3): %7:fp64bit = nofpexcept WFMADB %0:fp64bit, %1:fp64bit, %6:fp64bit, implicit $fpc
+# CHECK-NEXT: SU(8): %12:fp64bit = nofpexcept WFMADB %2:fp64bit, %3:fp64bit, %11:fp64bit, implicit $fpc
+# CHECK-NEXT: SU(4): %8:fp64bit = nofpexcept WFMADB %0:fp64bit, %1:fp64bit, %7:fp64bit, implicit $fpc
+# CHECK-NEXT: SU(9): %13:fp64bit = nofpexcept WFMADB %2:fp64bit, %3:fp64bit, %12:fp64bit, implicit $fpc
+# CHECK-NEXT: SU(10): %14:vr64bit = nofpexcept WFADB %8:fp64bit, %13:fp64bit, implicit $fpc
+# CHECK-NEXT: SU(11): VST64 %14:vr64bit, $noreg, 0, $noreg :: (store (s128) into `ptr null`, align 8)
+---
+name: fun0
+tracksRegLiveness: true
+body: |
+ bb.0:
+ liveins: $f0d, $f2d, $f4d, $f6d
+
+ %0:fp64bit = COPY $f0d
+ %1:fp64bit = COPY $f2d
+ %2:fp64bit = COPY $f4d
+ %3:fp64bit = COPY $f6d
+
+ bb.1:
+ %4:fp64bit = LZDR
+ %5:fp64bit = nofpexcept WFMADB %0, %1, %4, implicit $fpc
+ %6:fp64bit = nofpexcept WFMADB %0, %1, %5, implicit $fpc
+ %7:fp64bit = nofpexcept WFMADB %0, %1, %6, implicit $fpc
+ %8:fp64bit = nofpexcept WFMADB %0, %1, %7, implicit $fpc
+ %9:fp64bit = LZDR
+ %10:fp64bit = nofpexcept WFMADB %2, %3, %9, implicit $fpc
+ %11:fp64bit = nofpexcept WFMADB %2, %3, %10, implicit $fpc
+ %12:fp64bit = nofpexcept WFMADB %2, %3, %11, implicit $fpc
+ %13:fp64bit = nofpexcept WFMADB %2, %3, %12, implicit $fpc
+ %14:vr64bit = nofpexcept WFADB %8, %13, implicit $fpc
+ VST64 %14, $noreg, 0, $noreg :: (store (s128) into `ptr null`, align 8)
+ Return
+...
+
+# This function has a data flow sequence and latency scheduling puts the WFDDB high.
+# CHECK: ********** MI Scheduling **********
+# CHECK-NEXT: fun1:%bb.0
+# CHECK: Number of nodes in def-use sequences: 4. Latency scheduling enabled for data sequences.
+# CHECK: *** Final schedule for %bb.0 ***
+# CHECK-NEXT: SU(1): undef %1.subreg_h64:vr128bit = WFDDB undef %2:fp64bit, undef %3:fp64bit, implicit $fpc
+# CHECK-NEXT: SU(2): %4:fp64bit = COPY %1.subreg_h64:vr128bit
+# CHECK-NEXT: SU(3): %5:fp64bit = WFADB %4:fp64bit, %4:fp64bit, implicit $fpc
+# CHECK-NEXT: SU(0): dead %0:fp64bit = LZDR
+# CHECK-NEXT: SU(4): VST64 %5:fp64bit, $noreg, 0, $noreg :: (store (s128) into `ptr null`, align 8)
+---
+name: fun1
+tracksRegLiveness: true
+body: |
+ bb.0:
+ %0:fp64bit = LZDR
+ undef %1.subreg_h64:vr128bit = WFDDB undef %2:fp64bit, undef %3:fp64bit, implicit $fpc
+ %4:fp64bit = COPY %1.subreg_h64:vr128bit
+ %5:fp64bit = WFADB %4, %4, implicit $fpc
+ VST64 %5:fp64bit , $noreg, 0, $noreg :: (store (s128) into `ptr null`, align 8)
+ Return
+...
+
+# Same, but there is no sequence, so no latency scheduling is done.
+# CHECK: ********** MI Scheduling **********
+# CHECK-NEXT: fun2:%bb.0
+# CHECK: Latency scheduling not enabled for data sequences.
+# CHECK: *** Final schedule for %bb.0 ***
+# CHECK-NEXT: SU(0): dead %0:fp64bit = LZDR
+# CHECK-NEXT: SU(1): undef %1.subreg_h64:vr128bit = WFDDB undef %2:fp64bit, undef %3:fp64bit, implicit $fpc
+# CHECK-NEXT: SU(2): VST64 %1.subreg_h64:vr128bit, $noreg, 0, $noreg :: (store (s128) into `ptr null`, align 8)
+---
+name: fun2
+tracksRegLiveness: true
+body: |
+ bb.0:
+ %0:fp64bit = LZDR
+ undef %1.subreg_h64:vr128bit = WFDDB undef %2:fp64bit, undef %3:fp64bit, implicit $fpc
+ VST64 %1.subreg_h64:vr128bit , $noreg, 0, $noreg :: (store (s128) into `ptr null`, align 8)
+ Return
+...
+
+# Use the GenericScheduler latency heuristic for this single block loop.
+# CHECK: ********** MI Scheduling **********
+# CHECK-NEXT: fun3:%bb.1
+# CHECK: Latency scheduling not enabled for data sequences.
+# CHECK: ACYCLIC LATENCY LIMIT
+# CHECK: Pick Bot BOT-HEIGHT [pre-RA]
+---
+name: fun3
+tracksRegLiveness: true
+body: |
+ bb.0:
+ %19:vr128bit = VREPIF 1
+
+ bb.1:
+ %3:gr64bit = VLGVF %19, $noreg, 0
+ %6:gr64bit = LLGFR %3.subreg_l32
+ %6:gr64bit = MSGFI %6, 274877907
+ %7:gr64bit = SRLG %6, $noreg, 39
+ %9:gr64bit = VLGVF %19, $noreg, 1
+ %12:gr64bit = LLGFR %9.subreg_l32
+ %12:gr64bit = MSGFI %12, 274877907
+ %13:gr64bit = SRLG %12, $noreg, 39
+ %15:vr128bit = VLVGP %13, %13
+ %15:vr128bit = VLVGF %15, %7.subreg_l32, $noreg, 0
+ %16:vr128bit = VUPLHF %15
+ %17:vr128bit = nofpexcept VCDLGB %16, 0, 0, implicit $fpc
+ %19:vr128bit = VGBM 0
+ %18:vr128bit = nofpexcept VFMDB %17, %19, implicit $fpc
+ VST %18, $noreg, 0, $noreg :: (store (s128) into `ptr null`, align 8)
+ J %bb.1
+...
+
+# This region has many nodes compared to the maximum height - the DAG is
+# "wide". Don't interleave the data flows in cases like this, as it could
+# result in too much ILP and spilling.
+# CHECK: ********** MI Scheduling **********
+# CHECK-NEXT: fun4:%bb.0
+# CHECK: Latency scheduling not enabled for data sequences.
+# CHECK: *** Final schedule for %bb.0 ***
+# CHECK-NEXT: SU(0): %0:gr64bit = COPY undef %1:gr64bit
+# CHECK-NEXT: SU(1): dead %2:gr64bit = AGRK %0:gr64bit, %0:gr64bit,
+# CHECK-NEXT: SU(2): %3:gr64bit = COPY undef %1:gr64bit
+# CHECK-NEXT: SU(3): dead %4:gr64bit = AGRK %3:gr64bit, %3:gr64bit,
+# CHECK-NEXT: SU(4): %5:gr64bit = COPY undef %1:gr64bit
+# CHECK-NEXT: SU(5): dead %6:gr64bit = AGRK %5:gr64bit, %5:gr64bit,
+# CHECK-NEXT: SU(6): %7:gr64bit = COPY undef %1:gr64bit
+# CHECK-NEXT: SU(7): dead %8:gr64bit = AGRK %7:gr64bit, %7:gr64bit,
+# CHECK-NEXT: SU(8): %9:gr64bit = COPY undef %1:gr64bit
+# CHECK-NEXT: SU(9): dead %10:gr64bit = AGRK %9:gr64bit, %9:gr64bit,
+# CHECK-NEXT: SU(10): %11:gr64bit = COPY undef %1:gr64bit
+# CHECK-NEXT: SU(11): dead %12:gr64bit = AGRK %11:gr64bit, %11:gr64bit,
+---
+name: fun4
+tracksRegLiveness: true
+body: |
+ bb.0:
+ %1:gr64bit = COPY undef %0:gr64bit
+ %2:gr64bit = AGRK %1, %1, implicit-def dead $cc
+ %3:gr64bit = COPY undef %0:gr64bit
+ %4:gr64bit = AGRK %3, %3, implicit-def dead $cc
+ %5:gr64bit = COPY undef %0:gr64bit
+ %6:gr64bit = AGRK %5, %5, implicit-def dead $cc
+ %7:gr64bit = COPY undef %0:gr64bit
+ %8:gr64bit = AGRK %7, %7, implicit-def dead $cc
+ %9:gr64bit = COPY undef %0:gr64bit
+ %10:gr64bit = AGRK %9, %9, implicit-def dead $cc
+ %11:gr64bit = COPY undef %0:gr64bit
+ %12:gr64bit = AGRK %11, %11, implicit-def dead $cc
+ Return
+...
+
+# The TMLL64 should be scheduled first even though the LA is available and of
+# lesser height, because the TMLL64 Depth equals the remaining latency (on CP).
+# CHECK: ********** MI Scheduling **********
+# CHECK-NEXT: fun5:%bb.0
+# CHECK: ********** MI Scheduling **********
+# CHECK-NEXT: fun5:%bb.1
+# CHECK: Number of nodes in def-use sequences: 4. Latency scheduling enabled for data sequences.
+# CHECK: SU(0): dead %2:addr64bit = LA %0:addr64bit, 1, $noreg
+# CHECK-NEXT: # preds left : 0
+# CHECK-NEXT: # succs left : 0
+# CHECK-NEXT: # rdefs left : 0
+# CHECK-NEXT: Latency : 1
+# CHECK-NEXT: Depth : 0
+# CHECK-NEXT: Height : 0
+# CHECK: SU(4): TMLL64 %5:gr64bit, 1, implicit-def $cc
+# CHECK-NEXT: # preds left : 1
+# CHECK-NEXT: # succs left : 1
+# CHECK-NEXT: # rdefs left : 0
+# CHECK-NEXT: Latency : 1
+# CHECK-NEXT: Depth : 3
+# CHECK-NEXT: Height : 1
+# CHECK: Queue BotQ.A: 0 4
+# CHECK: *** Final schedule for %bb.1 ***
+# CHECK-NEXT: SU(1): %3:gr64bit = COPY %1:gr32bit
+# CHECK-NEXT: SU(2): %4:gr64bit = COPY %3:gr64bit
+# CHECK-NEXT: SU(3): %5:gr64bit = COPY %4:gr64bit
+# CHECK-NEXT: SU(0): dead %2:addr64bit = LA %0:addr64bit, 1, $noreg
+# CHECK-NEXT: SU(4): TMLL64 %5:gr64bit, 1, implicit-def $cc
+---
+name: fun5
+tracksRegLiveness: true
+body: |
+ bb.0:
+ liveins: $r2d, $r3l
+
+ %0:addr64bit = COPY $r2d
+ %1:gr32bit = COPY $r3l
+
+ bb.1:
+ %2:addr64bit = LA %0, 1, $noreg
+ %3:gr64bit = COPY %1
+ %4:gr64bit = COPY %3
+ %5:gr64bit = COPY %4
+ TMLL64 %5, 1, implicit-def $cc
+ BRC 15, 7, %bb.1, implicit killed $cc
+ Return
+...
+
diff --git a/llvm/test/CodeGen/SystemZ/shift-13.ll b/llvm/test/CodeGen/SystemZ/shift-13.ll
index e214a18861172..2e4899516d107 100644
--- a/llvm/test/CodeGen/SystemZ/shift-13.ll
+++ b/llvm/test/CodeGen/SystemZ/shift-13.ll
@@ -136,16 +136,16 @@ define i128 @f9(i128 %a, i128 %sh) {
; CHECK-LABEL: f9:
; CHECK: # %bb.0:
; CHECK-NEXT: larl %r1, .LCPI8_0
-; CHECK-NEXT: vl %v1, 0(%r4), 3
-; CHECK-NEXT: vl %v2, 0(%r1), 3
-; CHECK-NEXT: vn %v1, %v1, %v2
-; CHECK-NEXT: vlgvf %r0, %v1, 3
-; CHECK-NEXT: vlvgp %v2, %r0, %r0
-; CHECK-NEXT: vl %v0, 0(%r3), 3
-; CHECK-NEXT: vrepb %v2, %v2, 15
-; CHECK-NEXT: vslb %v0, %v0, %v2
-; CHECK-NEXT: vsl %v0, %v0, %v2
-; CHECK-NEXT: vaq %v0, %v1, %v0
+; CHECK-NEXT: vl %v0, 0(%r4), 3
+; CHECK-NEXT: vl %v1, 0(%r1), 3
+; CHECK-NEXT: vn %v0, %v0, %v1
+; CHECK-NEXT: vlgvf %r0, %v0, 3
+; CHECK-NEXT: vlvgp %v1, %r0, %r0
+; CHECK-NEXT: vl %v2, 0(%r3), 3
+; CHECK-NEXT: vrepb %v1, %v1, 15
+; CHECK-NEXT: vslb %v2, %v2, %v1
+; CHECK-NEXT: vsl %v1, %v2, %v1
+; CHECK-NEXT: vaq %v0, %v0, %v1
; CHECK-NEXT: vst %v0, 0(%r2), 3
; CHECK-NEXT: br %r14
%and = and i128 %sh, 127
diff --git a/llvm/test/CodeGen/SystemZ/shift-14.ll b/llvm/test/CodeGen/SystemZ/shift-14.ll
index e45126043f273..8f29d983c2344 100644
--- a/llvm/test/CodeGen/SystemZ/shift-14.ll
+++ b/llvm/test/CodeGen/SystemZ/shift-14.ll
@@ -136,16 +136,16 @@ define i128 @f9(i128 %a, i128 %sh) {
; CHECK-LABEL: f9:
; CHECK: # %bb.0:
; CHECK-NEXT: larl %r1, .LCPI8_0
-; CHECK-NEXT: vl %v1, 0(%r4), 3
-; CHECK-NEXT: vl %v2, 0(%r1), 3
-; CHECK-NEXT: vn %v1, %v1, %v2
-; CHECK-NEXT: vlgvf %r0, %v1, 3
-; CHECK-NEXT: vlvgp %v2, %r0, %r0
-; CHECK-NEXT: vl %v0, 0(%r3), 3
-; CHECK-NEXT: vrepb %v2, %v2, 15
-; CHECK-NEXT: vsrlb %v0, %v0, %v2
-; CHECK-NEXT: vsrl %v0, %v0, %v2
-; CHECK-NEXT: vaq %v0, %v1, %v0
+; CHECK-NEXT: vl %v0, 0(%r4), 3
+; CHECK-NEXT: vl %v1, 0(%r1), 3
+; CHECK-NEXT: vn %v0, %v0, %v1
+; CHECK-NEXT: vlgvf %r0, %v0, 3
+; CHECK-NEXT: vlvgp %v1, %r0, %r0
+; CHECK-NEXT: vl %v2, 0(%r3), 3
+; CHECK-NEXT: vrepb %v1, %v1, 15
+; CHECK-NEXT: vsrlb %v2, %v2, %v1
+; CHECK-NEXT: vsrl %v1, %v2, %v1
+; CHECK-NEXT: vaq %v0, %v0, %v1
; CHECK-NEXT: vst %v0, 0(%r2), 3
; CHECK-NEXT: br %r14
%and = and i128 %sh, 127
diff --git a/llvm/test/CodeGen/SystemZ/shift-15.ll b/llvm/test/CodeGen/SystemZ/shift-15.ll
index e21d05c4c91c8..01d1ae64682ec 100644
--- a/llvm/test/CodeGen/SystemZ/shift-15.ll
+++ b/llvm/test/CodeGen/SystemZ/shift-15.ll
@@ -136,16 +136,16 @@ define i128 @f9(i128 %a, i128 %sh) {
; CHECK-LABEL: f9:
; CHECK: # %bb.0:
; CHECK-NEXT: larl %r1, .LCPI8_0
-; CHECK-NEXT: vl %v1, 0(%r4), 3
-; CHECK-NEXT: vl %v2, 0(%r1), 3
-; CHECK-NEXT: vn %v1, %v1, %v2
-; CHECK-NEXT: vlgvf %r0, %v1, 3
-; CHECK-NEXT: vlvgp %v2, %r0, %r0
-; CHECK-NEXT: vl %v0, 0(%r3), 3
-; CHECK-NEXT: vrepb %v2, %v2, 15
-; CHECK-NEXT: vsrab %v0, %v0, %v2
-; CHECK-NEXT: vsra %v0, %v0, %v2
-; CHECK-NEXT: vaq %v0, %v1, %v0
+; CHECK-NEXT: vl %v0, 0(%r4), 3
+; CHECK-NEXT: vl %v1, 0(%r1), 3
+; CHECK-NEXT: vn %v0, %v0, %v1
+; CHECK-NEXT: vlgvf %r0, %v0, 3
+; CHECK-NEXT: vlvgp %v1, %r0, %r0
+; CHECK-NEXT: vl %v2, 0(%r3), 3
+; CHECK-NEXT: vrepb %v1, %v1, 15
+; CHECK-NEXT: vsrab %v2, %v2, %v1
+; CHECK-NEXT: vsra %v1, %v2, %v1
+; CHECK-NEXT: vaq %v0, %v0, %v1
; CHECK-NEXT: vst %v0, 0(%r2), 3
; CHECK-NEXT: br %r14
%and = and i128 %sh, 127
diff --git a/llvm/test/CodeGen/SystemZ/signbits-intrinsics-binop.ll b/llvm/test/CodeGen/SystemZ/signbits-intrinsics-binop.ll
index 4ad7e33ea2d82..8682df6b99d04 100644
--- a/llvm/test/CodeGen/SystemZ/signbits-intrinsics-binop.ll
+++ b/llvm/test/CodeGen/SystemZ/signbits-intrinsics-binop.ll
@@ -65,9 +65,9 @@ define <4 x i32> @f3() {
; CHECK-LABEL: f3:
; CHECK: # %bb.0:
; CHECK-NEXT: larl %r1, .LCPI3_0
+; CHECK-NEXT: larl %r2, .LCPI3_1
; CHECK-NEXT: vl %v0, 0(%r1), 3
-; CHECK-NEXT: larl %r1, .LCPI3_1
-; CHECK-NEXT: vl %v1, 0(%r1), 3
+; CHECK-NEXT: vl %v1, 0(%r2), 3
; CHECK-NEXT: vpklsgs %v24, %v1, %v0
; CHECK-NEXT: br %r14
%call = call {<4 x i32>, i32} @llvm.s390.vpklsgs(<2 x i64> <i64 0, i64 1>, <2 x i64> <i64 1, i64 0>)
@@ -119,9 +119,9 @@ define <4 x i32> @f6() {
; CHECK-LABEL: f6:
; CHECK: # %bb.0:
; CHECK-NEXT: larl %r1, .LCPI6_0
+; CHECK-NEXT: larl %r2, .LCPI6_1
; CHECK-NEXT: vl %v0, 0(%r1), 3
-; CHECK-NEXT: larl %r1, .LCPI6_1
-; CHECK-NEXT: vl %v1, 0(%r1), 3
+; CHECK-NEXT: vl %v1, 0(%r2), 3
; CHECK-NEXT: vpksg %v24, %v1, %v0
; CHECK-NEXT: br %r14
%call = call <4 x i32> @llvm.s390.vpksg(<2 x i64> <i64 0, i64 1>, <2 x i64> <i64 1, i64 0>)
@@ -170,9 +170,9 @@ define <4 x i32> @f9() {
; CHECK-LABEL: f9:
; CHECK: # %bb.0:
; CHECK-NEXT: larl %r1, .LCPI9_0
+; CHECK-NEXT: larl %r2, .LCPI9_1
; CHECK-NEXT: vl %v0, 0(%r1), 3
-; CHECK-NEXT: larl %r1, .LCPI9_1
-; CHECK-NEXT: vl %v1, 0(%r1), 3
+; CHECK-NEXT: vl %v1, 0(%r2), 3
; CHECK-NEXT: vpklsg %v24, %v1, %v0
; CHECK-NEXT: br %r14
%call = call <4 x i32> @llvm.s390.vpklsg(<2 x i64> <i64 0, i64 1>, <2 x i64> <i64 1, i64 0>)
@@ -219,9 +219,9 @@ define <2 x i64> @f12() {
; CHECK-LABEL: f12:
; CHECK: # %bb.0:
; CHECK-NEXT: larl %r1, .LCPI12_0
+; CHECK-NEXT: larl %r2, .LCPI12_1
; CHECK-NEXT: vl %v0, 0(%r1), 3
-; CHECK-NEXT: larl %r1, .LCPI12_1
-; CHECK-NEXT: vl %v1, 0(%r1), 3
+; CHECK-NEXT: vl %v1, 0(%r2), 3
; CHECK-NEXT: vpdi %v24, %v1, %v0, 0
; CHECK-NEXT: br %r14
%perm = call <2 x i64> @llvm.s390.vpdi(<2 x i64> <i64 0, i64 1>,
diff --git a/llvm/test/CodeGen/SystemZ/vec-args-04.ll b/llvm/test/CodeGen/SystemZ/vec-args-04.ll
index b1cd278992541..cc9a3d2d0cfff 100644
--- a/llvm/test/CodeGen/SystemZ/vec-args-04.ll
+++ b/llvm/test/CodeGen/SystemZ/vec-args-04.ll
@@ -19,10 +19,9 @@ define void @foo() {
; CHECK-VEC-NEXT: aghi %r15, -192
; CHECK-VEC-NEXT: .cfi_def_cfa_offset 352
; CHECK-VEC-NEXT: larl %r1, .LCPI0_0
+; CHECK-VEC-NEXT: larl %r2, .LCPI0_1
; CHECK-VEC-NEXT: vl %v0, 0(%r1), 3
-; CHECK-VEC-NEXT: larl %r1, .LCPI0_1
-; CHECK-VEC-NEXT: vst %v0, 176(%r15), 3
-; CHECK-VEC-NEXT: vl %v0, 0(%r1), 3
+; CHECK-VEC-NEXT: vl %v1, 0(%r2), 3
; CHECK-VEC-NEXT: vrepib %v24, 1
; CHECK-VEC-NEXT: vrepib %v26, 2
; CHECK-VEC-NEXT: vrepib %v28, 3
@@ -31,7 +30,8 @@ define void @foo() {
; CHECK-VEC-NEXT: vrepib %v27, 6
; CHECK-VEC-NEXT: vrepib %v29, 7
; CHECK-VEC-NEXT: vrepib %v31, 8
-; CHECK-VEC-NEXT: vst %v0, 160(%r15), 3
+; CHECK-VEC-NEXT: vst %v0, 176(%r15), 3
+; CHECK-VEC-NEXT: vst %v1, 160(%r15), 3
; CHECK-VEC-NEXT: brasl %r14, bar@PLT
; CHECK-VEC-NEXT: lmg %r14, %r15, 304(%r15)
; CHECK-VEC-NEXT: br %r14
@@ -44,10 +44,9 @@ define void @foo() {
; CHECK-STACK-NEXT: aghi %r15, -192
; CHECK-STACK-NEXT: .cfi_def_cfa_offset 352
; CHECK-STACK-NEXT: larl %r1, .LCPI0_0
+; CHECK-STACK-NEXT: larl %r2, .LCPI0_1
; CHECK-STACK-NEXT: vl %v0, 0(%r1), 3
-; CHECK-STACK-NEXT: larl %r1, .LCPI0_1
-; CHECK-STACK-NEXT: vst %v0, 176(%r15), 3
-; CHECK-STACK-NEXT: vl %v0, 0(%r1), 3
+; CHECK-STACK-NEXT: vl %v1, 0(%r2), 3
; CHECK-STACK-NEXT: vrepib %v24, 1
; CHECK-STACK-NEXT: vrepib %v26, 2
; CHECK-STACK-NEXT: vrepib %v28, 3
@@ -56,7 +55,8 @@ define void @foo() {
; CHECK-STACK-NEXT: vrepib %v27, 6
; CHECK-STACK-NEXT: vrepib %v29, 7
; CHECK-STACK-NEXT: vrepib %v31, 8
-; CHECK-STACK-NEXT: vst %v0, 160(%r15), 3
+; CHECK-STACK-NEXT: vst %v0, 176(%r15), 3
+; CHECK-STACK-NEXT: vst %v1, 160(%r15), 3
; CHECK-STACK-NEXT: brasl %r14, bar@PLT
; CHECK-STACK-NEXT: lmg %r14, %r15, 304(%r15)
; CHECK-STACK-NEXT: br %r14
diff --git a/llvm/test/CodeGen/SystemZ/vec-cmpsel.ll b/llvm/test/CodeGen/SystemZ/vec-cmpsel-01.ll
similarity index 82%
rename from llvm/test/CodeGen/SystemZ/vec-cmpsel.ll
rename to llvm/test/CodeGen/SystemZ/vec-cmpsel-01.ll
index f93ecc348af65..7c887c0eb3278 100644
--- a/llvm/test/CodeGen/SystemZ/vec-cmpsel.ll
+++ b/llvm/test/CodeGen/SystemZ/vec-cmpsel-01.ll
@@ -1,8 +1,8 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
; Test that vector compare / select combinations do not produce any
; unnecessary pack /unpack / shift instructions.
;
; RUN: llc < %s -mtriple=s390x-linux-gnu -mcpu=z13 | FileCheck %s
-; RUN: llc < %s -mtriple=s390x-linux-gnu -mcpu=z14 | FileCheck %s -check-prefix=CHECK-Z14
define <2 x i8> @fun0(<2 x i8> %val1, <2 x i8> %val2, <2 x i8> %val3, <2 x i8> %val4) {
; CHECK-LABEL: fun0:
@@ -42,10 +42,10 @@ define <16 x i16> @fun3(<16 x i8> %val1, <16 x i8> %val2, <16 x i16> %val3, <16
; CHECK-LABEL: fun3:
; CHECK: # %bb.0:
; CHECK-NEXT: vceqb %v0, %v24, %v26
-; CHECK-DAG: vuphb [[REG0:%v[0-9]+]], %v0
-; CHECK-DAG: vuplb [[REG1:%v[0-9]+]], %v0
-; CHECK-NEXT: vsel %v24, %v28, %v25, [[REG0]]
-; CHECK-NEXT: vsel %v26, %v30, %v27, [[REG1]]
+; CHECK-NEXT: vuplb %v1, %v0
+; CHECK-NEXT: vuphb %v0, %v0
+; CHECK-NEXT: vsel %v24, %v28, %v25, %v0
+; CHECK-NEXT: vsel %v26, %v30, %v27, %v1
; CHECK-NEXT: br %r14
%cmp = icmp eq <16 x i8> %val1, %val2
%sel = select <16 x i1> %cmp, <16 x i16> %val3, <16 x i16> %val4
@@ -55,10 +55,10 @@ define <16 x i16> @fun3(<16 x i8> %val1, <16 x i8> %val2, <16 x i16> %val3, <16
define <32 x i8> @fun4(<32 x i8> %val1, <32 x i8> %val2, <32 x i8> %val3, <32 x i8> %val4) {
; CHECK-LABEL: fun4:
; CHECK: # %bb.0:
-; CHECK-DAG: vceqb [[REG0:%v[0-9]+]], %v26, %v30
-; CHECK-DAG: vceqb [[REG1:%v[0-9]+]], %v24, %v28
-; CHECK-DAG: vsel %v24, %v25, %v29, [[REG1]]
-; CHECK-DAG: vsel %v26, %v27, %v31, [[REG0]]
+; CHECK-NEXT: vceqb %v0, %v26, %v30
+; CHECK-NEXT: vceqb %v1, %v24, %v28
+; CHECK-NEXT: vsel %v24, %v25, %v29, %v1
+; CHECK-NEXT: vsel %v26, %v27, %v31, %v0
; CHECK-NEXT: br %r14
%cmp = icmp eq <32 x i8> %val1, %val2
%sel = select <32 x i1> %cmp, <32 x i8> %val3, <32 x i8> %val4
@@ -127,10 +127,10 @@ define <8 x i32> @fun10(<8 x i16> %val1, <8 x i16> %val2, <8 x i32> %val3, <8 x
; CHECK-LABEL: fun10:
; CHECK: # %bb.0:
; CHECK-NEXT: vceqh %v0, %v24, %v26
-; CHECK-DAG: vuphh [[REG0:%v[0-9]+]], %v0
-; CHECK-DAG: vuplhw [[REG1:%v[0-9]+]], %v0
-; CHECK-NEXT: vsel %v24, %v28, %v25, [[REG0]]
-; CHECK-NEXT: vsel %v26, %v30, %v27, [[REG1]]
+; CHECK-NEXT: vuplhw %v1, %v0
+; CHECK-NEXT: vuphh %v0, %v0
+; CHECK-NEXT: vsel %v24, %v28, %v25, %v0
+; CHECK-NEXT: vsel %v26, %v30, %v27, %v1
; CHECK-NEXT: br %r14
%cmp = icmp eq <8 x i16> %val1, %val2
%sel = select <8 x i1> %cmp, <8 x i32> %val3, <8 x i32> %val4
@@ -153,10 +153,10 @@ define <16 x i8> @fun11(<16 x i16> %val1, <16 x i16> %val2, <16 x i8> %val3, <16
define <16 x i16> @fun12(<16 x i16> %val1, <16 x i16> %val2, <16 x i16> %val3, <16 x i16> %val4) {
; CHECK-LABEL: fun12:
; CHECK: # %bb.0:
-; CHECK-DAG: vceqh [[REG0:%v[0-9]+]], %v26, %v30
-; CHECK-DAG: vceqh [[REG1:%v[0-9]+]], %v24, %v28
-; CHECK-DAG: vsel %v24, %v25, %v29, [[REG1]]
-; CHECK-DAG: vsel %v26, %v27, %v31, [[REG0]]
+; CHECK-NEXT: vceqh %v0, %v26, %v30
+; CHECK-NEXT: vceqh %v1, %v24, %v28
+; CHECK-NEXT: vsel %v24, %v25, %v29, %v1
+; CHECK-NEXT: vsel %v26, %v27, %v31, %v0
; CHECK-NEXT: br %r14
%cmp = icmp eq <16 x i16> %val1, %val2
%sel = select <16 x i1> %cmp, <16 x i16> %val3, <16 x i16> %val4
@@ -225,10 +225,10 @@ define <4 x i64> @fun18(<4 x i32> %val1, <4 x i32> %val2, <4 x i64> %val3, <4 x
; CHECK-LABEL: fun18:
; CHECK: # %bb.0:
; CHECK-NEXT: vceqf %v0, %v24, %v26
-; CHECK-DAG: vuphf [[REG0:%v[0-9]+]], %v0
-; CHECK-DAG: vuplf [[REG1]], %v0
-; CHECK-NEXT: vsel %v24, %v28, %v25, [[REG0]]
-; CHECK-NEXT: vsel %v26, %v30, %v27, [[REG1]]
+; CHECK-NEXT: vuplf %v1, %v0
+; CHECK-NEXT: vuphf %v0, %v0
+; CHECK-NEXT: vsel %v24, %v28, %v25, %v0
+; CHECK-NEXT: vsel %v26, %v30, %v27, %v1
; CHECK-NEXT: br %r14
%cmp = icmp eq <4 x i32> %val1, %val2
%sel = select <4 x i1> %cmp, <4 x i64> %val3, <4 x i64> %val4
@@ -251,10 +251,10 @@ define <8 x i16> @fun19(<8 x i32> %val1, <8 x i32> %val2, <8 x i16> %val3, <8 x
define <8 x i32> @fun20(<8 x i32> %val1, <8 x i32> %val2, <8 x i32> %val3, <8 x i32> %val4) {
; CHECK-LABEL: fun20:
; CHECK: # %bb.0:
-; CHECK-DAG: vceqf [[REG0:%v[0-9]+]], %v26, %v30
-; CHECK-DAG: vceqf [[REG1:%v[0-9]+]], %v24, %v28
-; CHECK-DAG: vsel %v24, %v25, %v29, [[REG1]]
-; CHECK-DAG: vsel %v26, %v27, %v31, [[REG0]]
+; CHECK-NEXT: vceqf %v0, %v26, %v30
+; CHECK-NEXT: vceqf %v1, %v24, %v28
+; CHECK-NEXT: vsel %v24, %v25, %v29, %v1
+; CHECK-NEXT: vsel %v26, %v27, %v31, %v0
; CHECK-NEXT: br %r14
%cmp = icmp eq <8 x i32> %val1, %val2
%sel = select <8 x i1> %cmp, <8 x i32> %val3, <8 x i32> %val4
@@ -300,10 +300,10 @@ define <4 x i32> @fun23(<4 x i64> %val1, <4 x i64> %val2, <4 x i32> %val3, <4 x
define <4 x i64> @fun24(<4 x i64> %val1, <4 x i64> %val2, <4 x i64> %val3, <4 x i64> %val4) {
; CHECK-LABEL: fun24:
; CHECK: # %bb.0:
-; CHECK-DAG: vceqg [[REG0:%v[0-9]+]], %v26, %v30
-; CHECK-DAG: vceqg [[REG1:%v[0-9]+]], %v24, %v28
-; CHECK-DAG: vsel %v24, %v25, %v29, [[REG1]]
-; CHECK-DAG: vsel %v26, %v27, %v31, [[REG0]]
+; CHECK-NEXT: vceqg %v0, %v26, %v30
+; CHECK-NEXT: vceqg %v1, %v24, %v28
+; CHECK-NEXT: vsel %v24, %v25, %v29, %v1
+; CHECK-NEXT: vsel %v26, %v27, %v31, %v0
; CHECK-NEXT: br %r14
%cmp = icmp eq <4 x i64> %val1, %val2
%sel = select <4 x i1> %cmp, <4 x i64> %val3, <4 x i64> %val4
@@ -326,13 +326,6 @@ define <2 x float> @fun25(<2 x float> %val1, <2 x float> %val2, <2 x float> %val
; CHECK-NEXT: vpkg %v0, %v1, %v0
; CHECK-NEXT: vsel %v24, %v28, %v30, %v0
; CHECK-NEXT: br %r14
-
-; CHECK-Z14-LABEL: fun25:
-; CHECK-Z14: # %bb.0:
-; CHECK-Z14-NEXT: vfchsb %v0, %v24, %v26
-; CHECK-Z14-NEXT: vsel %v24, %v28, %v30, %v0
-; CHECK-Z14-NEXT: br %r14
-
%cmp = fcmp ogt <2 x float> %val1, %val2
%sel = select <2 x i1> %cmp, <2 x float> %val3, <2 x float> %val4
ret <2 x float> %sel
@@ -355,14 +348,6 @@ define <2 x double> @fun26(<2 x float> %val1, <2 x float> %val2, <2 x double> %v
; CHECK-NEXT: vuphf %v0, %v0
; CHECK-NEXT: vsel %v24, %v28, %v30, %v0
; CHECK-NEXT: br %r14
-
-; CHECK-Z14-LABEL: fun26:
-; CHECK-Z14: # %bb.0:
-; CHECK-Z14-NEXT: vfchsb %v0, %v24, %v26
-; CHECK-Z14-NEXT: vuphf %v0, %v0
-; CHECK-Z14-NEXT: vsel %v24, %v28, %v30, %v0
-; CHECK-Z14-NEXT: br %r14
-
%cmp = fcmp ogt <2 x float> %val1, %val2
%sel = select <2 x i1> %cmp, <2 x double> %val3, <2 x double> %val4
ret <2 x double> %sel
@@ -399,13 +384,6 @@ define <4 x float> @fun28(<4 x float> %val1, <4 x float> %val2, <4 x float> %val
; CHECK-NEXT: vpkg %v0, %v1, %v0
; CHECK-NEXT: vsel %v24, %v28, %v30, %v0
; CHECK-NEXT: br %r14
-
-; CHECK-Z14-LABEL: fun28:
-; CHECK-Z14: # %bb.0:
-; CHECK-Z14-NEXT: vfchsb %v0, %v24, %v26
-; CHECK-Z14-NEXT: vsel %v24, %v28, %v30, %v0
-; CHECK-Z14-NEXT: br %r14
-
%cmp = fcmp ogt <4 x float> %val1, %val2
%sel = select <4 x i1> %cmp, <4 x float> %val3, <4 x float> %val4
ret <4 x float> %sel
@@ -424,35 +402,46 @@ define <4 x double> @fun29(<4 x float> %val1, <4 x float> %val2, <4 x double> %v
; CHECK-NEXT: vldeb %v1, %v1
; CHECK-NEXT: vldeb %v2, %v2
; CHECK-NEXT: vfchdb %v1, %v2, %v1
-; CHECK-NEXT: vpkg [[REG0:%v[0-9]+]], %v1, %v0
-; CHECK-DAG: vuplf [[REG1:%v[0-9]+]], [[REG0]]
-; CHECK-DAG: vuphf [[REG2:%v[0-9]+]], [[REG0]]
-; CHECK-NEXT: vsel %v24, %v28, %v25, [[REG2]]
-; CHECK-NEXT: vsel %v26, %v30, %v27, [[REG1]]
+; CHECK-NEXT: vpkg %v0, %v1, %v0
+; CHECK-NEXT: vuplf %v1, %v0
+; CHECK-NEXT: vuphf %v0, %v0
+; CHECK-NEXT: vsel %v24, %v28, %v25, %v0
+; CHECK-NEXT: vsel %v26, %v30, %v27, %v1
; CHECK-NEXT: br %r14
-
-; CHECK-Z14-LABEL: fun29:
-; CHECK-Z14: # %bb.0:
-; CHECK-Z14-NEXT: vfchsb %v0, %v24, %v26
-; CHECK-Z14-DAG: vuphf [[REG0:%v[0-9]+]], %v0
-; CHECK-Z14-DAG: vuplf [[REG1:%v[0-9]+]], %v0
-; CHECK-Z14-NEXT: vsel %v24, %v28, %v25, [[REG0]]
-; CHECK-Z14-NEXT: vsel %v26, %v30, %v27, [[REG1]]
-; CHECK-Z14-NEXT: br %r14
-
%cmp = fcmp ogt <4 x float> %val1, %val2
%sel = select <4 x i1> %cmp, <4 x double> %val3, <4 x double> %val4
ret <4 x double> %sel
}
define <8 x float> @fun30(<8 x float> %val1, <8 x float> %val2, <8 x float> %val3, <8 x float> %val4) {
-; CHECK-Z14-LABEL: fun30:
-; CHECK-Z14: # %bb.0:
-; CHECK-Z14-DAG: vfchsb [[REG0:%v[0-9]+]], %v26, %v30
-; CHECK-Z14-DAG: vfchsb [[REG1:%v[0-9]+]], %v24, %v28
-; CHECK-Z14-DAG: vsel %v24, %v25, %v29, [[REG1]]
-; CHECK-Z14-DAG: vsel %v26, %v27, %v31, [[REG0]]
-; CHECK-Z14-NEXT: br %r14
+; CHECK-LABEL: fun30:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vmrlf %v0, %v30, %v30
+; CHECK-NEXT: vmrlf %v1, %v26, %v26
+; CHECK-NEXT: vldeb %v0, %v0
+; CHECK-NEXT: vldeb %v1, %v1
+; CHECK-NEXT: vfchdb %v0, %v1, %v0
+; CHECK-NEXT: vmrhf %v1, %v30, %v30
+; CHECK-NEXT: vmrhf %v2, %v26, %v26
+; CHECK-NEXT: vldeb %v1, %v1
+; CHECK-NEXT: vmrhf %v3, %v24, %v24
+; CHECK-NEXT: vldeb %v2, %v2
+; CHECK-NEXT: vfchdb %v1, %v2, %v1
+; CHECK-NEXT: vpkg %v0, %v1, %v0
+; CHECK-NEXT: vmrlf %v1, %v28, %v28
+; CHECK-NEXT: vmrlf %v2, %v24, %v24
+; CHECK-NEXT: vldeb %v1, %v1
+; CHECK-NEXT: vsel %v26, %v27, %v31, %v0
+; CHECK-NEXT: vldeb %v2, %v2
+; CHECK-NEXT: vfchdb %v1, %v2, %v1
+; CHECK-NEXT: vmrhf %v2, %v28, %v28
+; CHECK-NEXT: vldeb %v2, %v2
+; CHECK-NEXT: vldeb %v3, %v3
+; CHECK-NEXT: vfchdb %v2, %v3, %v2
+; CHECK-NEXT: vpkg %v1, %v2, %v1
+; CHECK-NEXT: vsel %v24, %v25, %v29, %v1
+; CHECK-NEXT: br %r14
%cmp = fcmp ogt <8 x float> %val1, %val2
%sel = select <8 x i1> %cmp, <8 x float> %val3, <8 x float> %val4
ret <8 x float> %sel
@@ -498,10 +487,10 @@ define <4 x float> @fun33(<4 x double> %val1, <4 x double> %val2, <4 x float> %v
define <4 x double> @fun34(<4 x double> %val1, <4 x double> %val2, <4 x double> %val3, <4 x double> %val4) {
; CHECK-LABEL: fun34:
; CHECK: # %bb.0:
-; CHECK-DAG: vfchdb [[REG0:%v[0-9]+]], %v26, %v30
-; CHECK-DAG: vfchdb [[REG1:%v[0-9]+]], %v24, %v28
-; CHECK-DAG: vsel %v24, %v25, %v29, [[REG1]]
-; CHECK-DAG: vsel %v26, %v27, %v31, [[REG0]]
+; CHECK-NEXT: vfchdb %v0, %v26, %v30
+; CHECK-NEXT: vfchdb %v1, %v24, %v28
+; CHECK-NEXT: vsel %v24, %v25, %v29, %v1
+; CHECK-NEXT: vsel %v26, %v27, %v31, %v0
; CHECK-NEXT: br %r14
%cmp = fcmp ogt <4 x double> %val1, %val2
%sel = select <4 x i1> %cmp, <4 x double> %val3, <4 x double> %val4
diff --git a/llvm/test/CodeGen/SystemZ/vec-cmpsel-02.ll b/llvm/test/CodeGen/SystemZ/vec-cmpsel-02.ll
new file mode 100644
index 0000000000000..9daf5c984c041
--- /dev/null
+++ b/llvm/test/CodeGen/SystemZ/vec-cmpsel-02.ll
@@ -0,0 +1,70 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; Test that vector float compare / select combinations do not produce any
+; unnecessary pack / unpack / shift instructions.
+;
+; RUN: llc < %s -mtriple=s390x-linux-gnu -mcpu=z14 | FileCheck %s
+
+define <2 x float> @fun0(<2 x float> %val1, <2 x float> %val2, <2 x float> %val3, <2 x float> %val4) {
+; CHECK-LABEL: fun0:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vfchsb %v0, %v24, %v26
+; CHECK-NEXT: vsel %v24, %v28, %v30, %v0
+; CHECK-NEXT: br %r14
+
+ %cmp = fcmp ogt <2 x float> %val1, %val2
+ %sel = select <2 x i1> %cmp, <2 x float> %val3, <2 x float> %val4
+ ret <2 x float> %sel
+}
+
+define <2 x double> @fun1(<2 x float> %val1, <2 x float> %val2, <2 x double> %val3, <2 x double> %val4) {
+; CHECK-LABEL: fun1:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vfchsb %v0, %v24, %v26
+; CHECK-NEXT: vuphf %v0, %v0
+; CHECK-NEXT: vsel %v24, %v28, %v30, %v0
+; CHECK-NEXT: br %r14
+
+ %cmp = fcmp ogt <2 x float> %val1, %val2
+ %sel = select <2 x i1> %cmp, <2 x double> %val3, <2 x double> %val4
+ ret <2 x double> %sel
+}
+
+define <4 x float> @fun2(<4 x float> %val1, <4 x float> %val2, <4 x float> %val3, <4 x float> %val4) {
+; CHECK-LABEL: fun2:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vfchsb %v0, %v24, %v26
+; CHECK-NEXT: vsel %v24, %v28, %v30, %v0
+; CHECK-NEXT: br %r14
+
+ %cmp = fcmp ogt <4 x float> %val1, %val2
+ %sel = select <4 x i1> %cmp, <4 x float> %val3, <4 x float> %val4
+ ret <4 x float> %sel
+}
+
+define <4 x double> @fun3(<4 x float> %val1, <4 x float> %val2, <4 x double> %val3, <4 x double> %val4) {
+; CHECK-LABEL: fun3:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vfchsb %v0, %v24, %v26
+; CHECK-NEXT: vuplf %v1, %v0
+; CHECK-NEXT: vuphf %v0, %v0
+; CHECK-NEXT: vsel %v24, %v28, %v25, %v0
+; CHECK-NEXT: vsel %v26, %v30, %v27, %v1
+; CHECK-NEXT: br %r14
+ %cmp = fcmp ogt <4 x float> %val1, %val2
+ %sel = select <4 x i1> %cmp, <4 x double> %val3, <4 x double> %val4
+ ret <4 x double> %sel
+}
+
+define <8 x float> @fun4(<8 x float> %val1, <8 x float> %val2, <8 x float> %val3, <8 x float> %val4) {
+; CHECK-Z14-LABEL: fun4:
+; CHECK-LABEL: fun4:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vfchsb %v0, %v26, %v30
+; CHECK-NEXT: vfchsb %v1, %v24, %v28
+; CHECK-NEXT: vsel %v24, %v25, %v29, %v1
+; CHECK-NEXT: vsel %v26, %v27, %v31, %v0
+; CHECK-NEXT: br %r14
+ %cmp = fcmp ogt <8 x float> %val1, %val2
+ %sel = select <8 x i1> %cmp, <8 x float> %val3, <8 x float> %val4
+ ret <8 x float> %sel
+}
diff --git a/llvm/test/CodeGen/SystemZ/vec-sub-01.ll b/llvm/test/CodeGen/SystemZ/vec-sub-01.ll
index e1e08ebaaef47..08062b35e30e9 100644
--- a/llvm/test/CodeGen/SystemZ/vec-sub-01.ll
+++ b/llvm/test/CodeGen/SystemZ/vec-sub-01.ll
@@ -44,20 +44,20 @@ define <2 x i64> @f4(<2 x i64> %dummy, <2 x i64> %val1, <2 x i64> %val2) {
; the VSLDBs use the result of the VLRs or use %v24 and %v26 directly.
define <4 x float> @f5(<4 x float> %val1, <4 x float> %val2) {
; CHECK-LABEL: f5:
-; CHECK-DAG: vlr %v[[A1:[0-5]]], %v24
-; CHECK-DAG: vlr %v[[A2:[0-5]]], %v26
-; CHECK-DAG: vrepf %v[[B1:[0-5]]], %v24, 1
-; CHECK-DAG: vrepf %v[[B2:[0-5]]], %v26, 1
-; CHECK-DAG: vrepf %v[[C1:[0-5]]], %v24, 2
+; CHECK-DAG: vrepf %v[[B2:[0-5]]], %v26, 3
+; CHECK-DAG: vrepf %v[[B1:[0-5]]], %v24, 3
; CHECK-DAG: vrepf %v[[C2:[0-5]]], %v26, 2
-; CHECK-DAG: vrepf %v[[D1:[0-5]]], %v24, 3
-; CHECK-DAG: vrepf %v[[D2:[0-5]]], %v26, 3
-; CHECK-DAG: sebr %f[[A1]], %f[[A2]]
+; CHECK-DAG: vrepf %v[[C1:[0-5]]], %v24, 2
+; CHECK-DAG: vrepf %v[[D2:[0-7]]], %v26, 1
+; CHECK-DAG: vrepf %v[[D1:[0-7]]], %v24, 1
+; CHECK-DAG: vlr %v[[A2:[0-5]]], %v26
+; CHECK-DAG: vlr %v[[A1:[0-5]]], %v24
; CHECK-DAG: sebr %f[[B1]], %f[[B2]]
; CHECK-DAG: sebr %f[[C1]], %f[[C2]]
+; CHECK-DAG: sebr %f[[A1]], %f[[A2]]
; CHECK-DAG: sebr %f[[D1]], %f[[D2]]
-; CHECK-DAG: vmrhf [[HIGH:%v[0-9]+]], %v[[A1]], %v[[B1]]
-; CHECK-DAG: vmrhf [[LOW:%v[0-9]+]], %v[[C1]], %v[[D1]]
+; CHECK-DAG: vmrhf [[LOW:%v[0-9]+]], %v[[C1]], %v[[B1]]
+; CHECK-DAG: vmrhf [[HIGH:%v[0-9]+]], %v[[A1]], %v[[D1]]
; CHECK: vmrhg %v24, [[HIGH]], [[LOW]]
; CHECK: br %r14
%ret = fsub <4 x float> %val1, %val2
diff --git a/llvm/test/CodeGen/SystemZ/vector-constrained-fp-intrinsics.ll b/llvm/test/CodeGen/SystemZ/vector-constrained-fp-intrinsics.ll
index 614f7b243c7e2..bde3635f48446 100644
--- a/llvm/test/CodeGen/SystemZ/vector-constrained-fp-intrinsics.ll
+++ b/llvm/test/CodeGen/SystemZ/vector-constrained-fp-intrinsics.ll
@@ -43,9 +43,9 @@ define <2 x double> @constrained_vector_fdiv_v2f64() #0 {
; SZ13-LABEL: constrained_vector_fdiv_v2f64:
; SZ13: # %bb.0: # %entry
; SZ13-NEXT: larl %r1, .LCPI1_0
+; SZ13-NEXT: larl %r2, .LCPI1_1
; SZ13-NEXT: vl %v0, 0(%r1), 3
-; SZ13-NEXT: larl %r1, .LCPI1_1
-; SZ13-NEXT: vl %v1, 0(%r1), 3
+; SZ13-NEXT: vl %v1, 0(%r2), 3
; SZ13-NEXT: vfddb %v24, %v1, %v0
; SZ13-NEXT: br %r14
entry:
@@ -159,13 +159,13 @@ define <4 x double> @constrained_vector_fdiv_v4f64() #0 {
; SZ13-LABEL: constrained_vector_fdiv_v4f64:
; SZ13: # %bb.0: # %entry
; SZ13-NEXT: larl %r1, .LCPI4_0
+; SZ13-NEXT: larl %r2, .LCPI4_1
+; SZ13-NEXT: larl %r3, .LCPI4_2
; SZ13-NEXT: vl %v0, 0(%r1), 3
-; SZ13-NEXT: larl %r1, .LCPI4_1
-; SZ13-NEXT: vl %v1, 0(%r1), 3
+; SZ13-NEXT: vl %v1, 0(%r2), 3
; SZ13-NEXT: vfddb %v26, %v1, %v0
-; SZ13-NEXT: larl %r1, .LCPI4_2
-; SZ13-NEXT: vl %v1, 0(%r1), 3
-; SZ13-NEXT: vfddb %v24, %v1, %v0
+; SZ13-NEXT: vl %v2, 0(%r3), 3
+; SZ13-NEXT: vfddb %v24, %v2, %v0
; SZ13-NEXT: br %r14
entry:
%div = call <4 x double> @llvm.experimental.constrained.fdiv.v4f64(
@@ -331,10 +331,10 @@ define <3 x float> @constrained_vector_frem_v3f32() #0 {
; SZ13-NEXT: .cfi_def_cfa_offset 360
; SZ13-NEXT: std %f8, 192(%r15) # 8-byte Spill
; SZ13-NEXT: .cfi_offset %f8, -168
+; SZ13-NEXT: larl %r2, .LCPI7_1
; SZ13-NEXT: larl %r1, .LCPI7_0
+; SZ13-NEXT: lde %f8, 0(%r2)
; SZ13-NEXT: lde %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI7_1
-; SZ13-NEXT: lde %f8, 0(%r1)
; SZ13-NEXT: ldr %f2, %f8
; SZ13-NEXT: brasl %r14, fmodf@PLT
; SZ13-NEXT: # kill: def $f0s killed $f0s def $v0
@@ -596,9 +596,9 @@ define <2 x double> @constrained_vector_fmul_v2f64() #0 {
; SZ13-LABEL: constrained_vector_fmul_v2f64:
; SZ13: # %bb.0: # %entry
; SZ13-NEXT: larl %r1, .LCPI11_0
+; SZ13-NEXT: larl %r2, .LCPI11_1
; SZ13-NEXT: vl %v0, 0(%r1), 3
-; SZ13-NEXT: larl %r1, .LCPI11_1
-; SZ13-NEXT: vl %v1, 0(%r1), 3
+; SZ13-NEXT: vl %v1, 0(%r2), 3
; SZ13-NEXT: vfmdb %v24, %v1, %v0
; SZ13-NEXT: br %r14
entry:
@@ -709,13 +709,13 @@ define <4 x double> @constrained_vector_fmul_v4f64() #0 {
; SZ13-LABEL: constrained_vector_fmul_v4f64:
; SZ13: # %bb.0: # %entry
; SZ13-NEXT: larl %r1, .LCPI14_0
+; SZ13-NEXT: larl %r2, .LCPI14_1
+; SZ13-NEXT: larl %r3, .LCPI14_2
; SZ13-NEXT: vl %v0, 0(%r1), 3
-; SZ13-NEXT: larl %r1, .LCPI14_1
-; SZ13-NEXT: vl %v1, 0(%r1), 3
-; SZ13-NEXT: larl %r1, .LCPI14_2
+; SZ13-NEXT: vl %v1, 0(%r2), 3
+; SZ13-NEXT: vl %v2, 0(%r3), 3
; SZ13-NEXT: vfmdb %v26, %v1, %v0
-; SZ13-NEXT: vl %v0, 0(%r1), 3
-; SZ13-NEXT: vfmdb %v24, %v1, %v0
+; SZ13-NEXT: vfmdb %v24, %v1, %v2
; SZ13-NEXT: br %r14
entry:
%mul = call <4 x double> @llvm.experimental.constrained.fmul.v4f64(
@@ -768,9 +768,9 @@ define <2 x double> @constrained_vector_fadd_v2f64() #0 {
; SZ13-LABEL: constrained_vector_fadd_v2f64:
; SZ13: # %bb.0: # %entry
; SZ13-NEXT: larl %r1, .LCPI16_0
+; SZ13-NEXT: larl %r2, .LCPI16_1
; SZ13-NEXT: vl %v0, 0(%r1), 3
-; SZ13-NEXT: larl %r1, .LCPI16_1
-; SZ13-NEXT: vl %v1, 0(%r1), 3
+; SZ13-NEXT: vl %v1, 0(%r2), 3
; SZ13-NEXT: vfadb %v24, %v1, %v0
; SZ13-NEXT: br %r14
entry:
@@ -879,13 +879,13 @@ define <4 x double> @constrained_vector_fadd_v4f64() #0 {
; SZ13-LABEL: constrained_vector_fadd_v4f64:
; SZ13: # %bb.0: # %entry
; SZ13-NEXT: larl %r1, .LCPI19_0
+; SZ13-NEXT: larl %r2, .LCPI19_1
+; SZ13-NEXT: larl %r3, .LCPI19_2
; SZ13-NEXT: vl %v0, 0(%r1), 3
-; SZ13-NEXT: larl %r1, .LCPI19_1
-; SZ13-NEXT: vl %v1, 0(%r1), 3
-; SZ13-NEXT: larl %r1, .LCPI19_2
+; SZ13-NEXT: vl %v1, 0(%r2), 3
+; SZ13-NEXT: vl %v2, 0(%r3), 3
; SZ13-NEXT: vfadb %v26, %v1, %v0
-; SZ13-NEXT: vl %v0, 0(%r1), 3
-; SZ13-NEXT: vfadb %v24, %v1, %v0
+; SZ13-NEXT: vfadb %v24, %v1, %v2
; SZ13-NEXT: br %r14
entry:
%add = call <4 x double> @llvm.experimental.constrained.fadd.v4f64(
@@ -1049,12 +1049,12 @@ define <4 x double> @constrained_vector_fsub_v4f64() #0 {
; SZ13-LABEL: constrained_vector_fsub_v4f64:
; SZ13: # %bb.0: # %entry
; SZ13-NEXT: larl %r1, .LCPI24_0
+; SZ13-NEXT: larl %r2, .LCPI24_1
; SZ13-NEXT: vl %v0, 0(%r1), 3
-; SZ13-NEXT: vgmg %v1, 12, 10
-; SZ13-NEXT: larl %r1, .LCPI24_1
-; SZ13-NEXT: vfsdb %v26, %v1, %v0
-; SZ13-NEXT: vl %v0, 0(%r1), 3
-; SZ13-NEXT: vfsdb %v24, %v1, %v0
+; SZ13-NEXT: vl %v1, 0(%r2), 3
+; SZ13-NEXT: vgmg %v2, 12, 10
+; SZ13-NEXT: vfsdb %v26, %v2, %v0
+; SZ13-NEXT: vfsdb %v24, %v2, %v1
; SZ13-NEXT: br %r14
entry:
%sub = call <4 x double> @llvm.experimental.constrained.fsub.v4f64(
@@ -1126,11 +1126,11 @@ define <3 x float> @constrained_vector_sqrt_v3f32() #0 {
; SZ13: # %bb.0: # %entry
; SZ13-NEXT: larl %r1, .LCPI27_0
; SZ13-NEXT: sqeb %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI27_1
+; SZ13-NEXT: larl %r2, .LCPI27_1
+; SZ13-NEXT: larl %r3, .LCPI27_2
+; SZ13-NEXT: sqeb %f1, 0(%r2)
; SZ13-NEXT: vrepf %v0, %v0, 0
-; SZ13-NEXT: sqeb %f1, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI27_2
-; SZ13-NEXT: sqeb %f2, 0(%r1)
+; SZ13-NEXT: sqeb %f2, 0(%r3)
; SZ13-NEXT: vmrhf %v1, %v1, %v2
; SZ13-NEXT: vmrhg %v24, %v1, %v0
; SZ13-NEXT: br %r14
@@ -1187,11 +1187,11 @@ define <4 x double> @constrained_vector_sqrt_v4f64() #0 {
; SZ13-LABEL: constrained_vector_sqrt_v4f64:
; SZ13: # %bb.0: # %entry
; SZ13-NEXT: larl %r1, .LCPI29_0
+; SZ13-NEXT: larl %r2, .LCPI29_1
; SZ13-NEXT: vl %v0, 0(%r1), 3
; SZ13-NEXT: vfsqdb %v26, %v0
-; SZ13-NEXT: larl %r1, .LCPI29_1
-; SZ13-NEXT: vl %v0, 0(%r1), 3
-; SZ13-NEXT: vfsqdb %v24, %v0
+; SZ13-NEXT: vl %v1, 0(%r2), 3
+; SZ13-NEXT: vfsqdb %v24, %v1
; SZ13-NEXT: br %r14
entry:
%sqrt = call <4 x double> @llvm.experimental.constrained.sqrt.v4f64(
@@ -1226,9 +1226,9 @@ define <1 x float> @constrained_vector_pow_v1f32() #0 {
; SZ13-NEXT: aghi %r15, -160
; SZ13-NEXT: .cfi_def_cfa_offset 320
; SZ13-NEXT: larl %r1, .LCPI30_0
+; SZ13-NEXT: larl %r2, .LCPI30_1
; SZ13-NEXT: lde %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI30_1
-; SZ13-NEXT: lde %f2, 0(%r1)
+; SZ13-NEXT: lde %f2, 0(%r2)
; SZ13-NEXT: brasl %r14, powf@PLT
; SZ13-NEXT: # kill: def $f0s killed $f0s def $v0
; SZ13-NEXT: vlr %v24, %v0
@@ -1282,10 +1282,10 @@ define <2 x double> @constrained_vector_pow_v2f64() #0 {
; SZ13-NEXT: .cfi_def_cfa_offset 344
; SZ13-NEXT: std %f8, 176(%r15) # 8-byte Spill
; SZ13-NEXT: .cfi_offset %f8, -168
+; SZ13-NEXT: larl %r2, .LCPI31_1
; SZ13-NEXT: larl %r1, .LCPI31_0
+; SZ13-NEXT: ld %f8, 0(%r2)
; SZ13-NEXT: ld %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI31_1
-; SZ13-NEXT: ld %f8, 0(%r1)
; SZ13-NEXT: ldr %f2, %f8
; SZ13-NEXT: brasl %r14, pow@PLT
; SZ13-NEXT: larl %r1, .LCPI31_2
@@ -1358,10 +1358,10 @@ define <3 x float> @constrained_vector_pow_v3f32() #0 {
; SZ13-NEXT: .cfi_def_cfa_offset 360
; SZ13-NEXT: std %f8, 192(%r15) # 8-byte Spill
; SZ13-NEXT: .cfi_offset %f8, -168
+; SZ13-NEXT: larl %r2, .LCPI32_1
; SZ13-NEXT: larl %r1, .LCPI32_0
+; SZ13-NEXT: lde %f8, 0(%r2)
; SZ13-NEXT: lde %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI32_1
-; SZ13-NEXT: lde %f8, 0(%r1)
; SZ13-NEXT: ldr %f2, %f8
; SZ13-NEXT: brasl %r14, powf@PLT
; SZ13-NEXT: larl %r1, .LCPI32_2
@@ -1548,10 +1548,10 @@ define <4 x double> @constrained_vector_pow_v4f64() #0 {
; SZ13-NEXT: .cfi_def_cfa_offset 360
; SZ13-NEXT: std %f8, 192(%r15) # 8-byte Spill
; SZ13-NEXT: .cfi_offset %f8, -168
+; SZ13-NEXT: larl %r2, .LCPI34_1
; SZ13-NEXT: larl %r1, .LCPI34_0
+; SZ13-NEXT: ld %f8, 0(%r2)
; SZ13-NEXT: ld %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI34_1
-; SZ13-NEXT: ld %f8, 0(%r1)
; SZ13-NEXT: ldr %f2, %f8
; SZ13-NEXT: brasl %r14, pow@PLT
; SZ13-NEXT: larl %r1, .LCPI34_2
@@ -4544,9 +4544,9 @@ define <1 x float> @constrained_vector_maxnum_v1f32() #0 {
; SZ13-NEXT: aghi %r15, -160
; SZ13-NEXT: .cfi_def_cfa_offset 320
; SZ13-NEXT: larl %r1, .LCPI85_0
+; SZ13-NEXT: larl %r2, .LCPI85_1
; SZ13-NEXT: lde %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI85_1
-; SZ13-NEXT: lde %f2, 0(%r1)
+; SZ13-NEXT: lde %f2, 0(%r2)
; SZ13-NEXT: brasl %r14, fmaxf@PLT
; SZ13-NEXT: # kill: def $f0s killed $f0s def $v0
; SZ13-NEXT: vlr %v24, %v0
@@ -4594,16 +4594,16 @@ define <2 x double> @constrained_vector_maxnum_v2f64() #0 {
; SZ13-NEXT: aghi %r15, -176
; SZ13-NEXT: .cfi_def_cfa_offset 336
; SZ13-NEXT: larl %r1, .LCPI86_0
+; SZ13-NEXT: larl %r2, .LCPI86_1
; SZ13-NEXT: ld %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI86_1
-; SZ13-NEXT: ld %f2, 0(%r1)
+; SZ13-NEXT: ld %f2, 0(%r2)
; SZ13-NEXT: brasl %r14, fmax@PLT
; SZ13-NEXT: larl %r1, .LCPI86_2
+; SZ13-NEXT: larl %r2, .LCPI86_3
; SZ13-NEXT: # kill: def $f0d killed $f0d def $v0
; SZ13-NEXT: vst %v0, 160(%r15), 3 # 16-byte Spill
; SZ13-NEXT: ld %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI86_3
-; SZ13-NEXT: ld %f2, 0(%r1)
+; SZ13-NEXT: ld %f2, 0(%r2)
; SZ13-NEXT: brasl %r14, fmax@PLT
; SZ13-NEXT: vl %v1, 160(%r15), 3 # 16-byte Reload
; SZ13-NEXT: # kill: def $f0d killed $f0d def $v0
@@ -4667,10 +4667,10 @@ define <3 x float> @constrained_vector_maxnum_v3f32() #0 {
; SZ13-NEXT: .cfi_def_cfa_offset 360
; SZ13-NEXT: std %f8, 192(%r15) # 8-byte Spill
; SZ13-NEXT: .cfi_offset %f8, -168
+; SZ13-NEXT: larl %r2, .LCPI87_1
; SZ13-NEXT: larl %r1, .LCPI87_0
+; SZ13-NEXT: lde %f8, 0(%r2)
; SZ13-NEXT: lde %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI87_1
-; SZ13-NEXT: lde %f8, 0(%r1)
; SZ13-NEXT: ldr %f2, %f8
; SZ13-NEXT: brasl %r14, fmaxf@PLT
; SZ13-NEXT: larl %r1, .LCPI87_2
@@ -4680,11 +4680,11 @@ define <3 x float> @constrained_vector_maxnum_v3f32() #0 {
; SZ13-NEXT: ldr %f0, %f8
; SZ13-NEXT: brasl %r14, fmaxf@PLT
; SZ13-NEXT: larl %r1, .LCPI87_3
+; SZ13-NEXT: larl %r2, .LCPI87_4
; SZ13-NEXT: # kill: def $f0s killed $f0s def $v0
; SZ13-NEXT: vst %v0, 160(%r15), 3 # 16-byte Spill
; SZ13-NEXT: lde %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI87_4
-; SZ13-NEXT: lde %f2, 0(%r1)
+; SZ13-NEXT: lde %f2, 0(%r2)
; SZ13-NEXT: brasl %r14, fmaxf@PLT
; SZ13-NEXT: vl %v1, 160(%r15), 3 # 16-byte Reload
; SZ13-NEXT: # kill: def $f0s killed $f0s def $v0
@@ -4850,32 +4850,33 @@ define <4 x double> @constrained_vector_maxnum_v4f64() #0 {
; SZ13-NEXT: aghi %r15, -192
; SZ13-NEXT: .cfi_def_cfa_offset 352
; SZ13-NEXT: larl %r1, .LCPI89_0
+; SZ13-NEXT: larl %r2, .LCPI89_1
; SZ13-NEXT: ld %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI89_1
-; SZ13-NEXT: ld %f2, 0(%r1)
+; SZ13-NEXT: ld %f2, 0(%r2)
; SZ13-NEXT: brasl %r14, fmax@PLT
; SZ13-NEXT: larl %r1, .LCPI89_2
+; SZ13-NEXT: larl %r2, .LCPI89_3
; SZ13-NEXT: # kill: def $f0d killed $f0d def $v0
; SZ13-NEXT: vst %v0, 160(%r15), 3 # 16-byte Spill
; SZ13-NEXT: ld %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI89_3
-; SZ13-NEXT: ld %f2, 0(%r1)
+; SZ13-NEXT: ld %f2, 0(%r2)
; SZ13-NEXT: brasl %r14, fmax@PLT
-; SZ13-NEXT: vl %v1, 160(%r15), 3 # 16-byte Reload
-; SZ13-NEXT: # kill: def $f0d killed $f0d def $v0
-; SZ13-NEXT: vmrhg %v0, %v0, %v1
+; SZ13-NEXT: vl %v3, 160(%r15), 3 # 16-byte Reload
; SZ13-NEXT: larl %r1, .LCPI89_4
+; SZ13-NEXT: larl %r2, .LCPI89_5
+; SZ13-NEXT: ld %f1, 0(%r1)
+; SZ13-NEXT: ld %f2, 0(%r2)
+; SZ13-NEXT: # kill: def $f0d killed $f0d def $v0
+; SZ13-NEXT: vmrhg %v0, %v0, %v3
; SZ13-NEXT: vst %v0, 160(%r15), 3 # 16-byte Spill
-; SZ13-NEXT: ld %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI89_5
-; SZ13-NEXT: ld %f2, 0(%r1)
+; SZ13-NEXT: ldr %f0, %f1
; SZ13-NEXT: brasl %r14, fmax@PLT
; SZ13-NEXT: larl %r1, .LCPI89_6
+; SZ13-NEXT: larl %r2, .LCPI89_7
; SZ13-NEXT: # kill: def $f0d killed $f0d def $v0
; SZ13-NEXT: vst %v0, 176(%r15), 3 # 16-byte Spill
; SZ13-NEXT: ld %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI89_7
-; SZ13-NEXT: ld %f2, 0(%r1)
+; SZ13-NEXT: ld %f2, 0(%r2)
; SZ13-NEXT: brasl %r14, fmax@PLT
; SZ13-NEXT: vl %v1, 176(%r15), 3 # 16-byte Reload
; SZ13-NEXT: vl %v24, 160(%r15), 3 # 16-byte Reload
@@ -4917,9 +4918,9 @@ define <1 x float> @constrained_vector_minnum_v1f32() #0 {
; SZ13-NEXT: aghi %r15, -160
; SZ13-NEXT: .cfi_def_cfa_offset 320
; SZ13-NEXT: larl %r1, .LCPI90_0
+; SZ13-NEXT: larl %r2, .LCPI90_1
; SZ13-NEXT: lde %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI90_1
-; SZ13-NEXT: lde %f2, 0(%r1)
+; SZ13-NEXT: lde %f2, 0(%r2)
; SZ13-NEXT: brasl %r14, fminf@PLT
; SZ13-NEXT: # kill: def $f0s killed $f0s def $v0
; SZ13-NEXT: vlr %v24, %v0
@@ -4967,16 +4968,16 @@ define <2 x double> @constrained_vector_minnum_v2f64() #0 {
; SZ13-NEXT: aghi %r15, -176
; SZ13-NEXT: .cfi_def_cfa_offset 336
; SZ13-NEXT: larl %r1, .LCPI91_0
+; SZ13-NEXT: larl %r2, .LCPI91_1
; SZ13-NEXT: ld %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI91_1
-; SZ13-NEXT: ld %f2, 0(%r1)
+; SZ13-NEXT: ld %f2, 0(%r2)
; SZ13-NEXT: brasl %r14, fmin@PLT
; SZ13-NEXT: larl %r1, .LCPI91_2
+; SZ13-NEXT: larl %r2, .LCPI91_3
; SZ13-NEXT: # kill: def $f0d killed $f0d def $v0
; SZ13-NEXT: vst %v0, 160(%r15), 3 # 16-byte Spill
; SZ13-NEXT: ld %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI91_3
-; SZ13-NEXT: ld %f2, 0(%r1)
+; SZ13-NEXT: ld %f2, 0(%r2)
; SZ13-NEXT: brasl %r14, fmin@PLT
; SZ13-NEXT: vl %v1, 160(%r15), 3 # 16-byte Reload
; SZ13-NEXT: # kill: def $f0d killed $f0d def $v0
@@ -5040,10 +5041,10 @@ define <3 x float> @constrained_vector_minnum_v3f32() #0 {
; SZ13-NEXT: .cfi_def_cfa_offset 360
; SZ13-NEXT: std %f8, 192(%r15) # 8-byte Spill
; SZ13-NEXT: .cfi_offset %f8, -168
+; SZ13-NEXT: larl %r2, .LCPI92_1
; SZ13-NEXT: larl %r1, .LCPI92_0
+; SZ13-NEXT: lde %f8, 0(%r2)
; SZ13-NEXT: lde %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI92_1
-; SZ13-NEXT: lde %f8, 0(%r1)
; SZ13-NEXT: ldr %f2, %f8
; SZ13-NEXT: brasl %r14, fminf@PLT
; SZ13-NEXT: larl %r1, .LCPI92_2
@@ -5053,11 +5054,11 @@ define <3 x float> @constrained_vector_minnum_v3f32() #0 {
; SZ13-NEXT: ldr %f0, %f8
; SZ13-NEXT: brasl %r14, fminf@PLT
; SZ13-NEXT: larl %r1, .LCPI92_3
+; SZ13-NEXT: larl %r2, .LCPI92_4
; SZ13-NEXT: # kill: def $f0s killed $f0s def $v0
; SZ13-NEXT: vst %v0, 160(%r15), 3 # 16-byte Spill
; SZ13-NEXT: lde %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI92_4
-; SZ13-NEXT: lde %f2, 0(%r1)
+; SZ13-NEXT: lde %f2, 0(%r2)
; SZ13-NEXT: brasl %r14, fminf@PLT
; SZ13-NEXT: vl %v1, 160(%r15), 3 # 16-byte Reload
; SZ13-NEXT: # kill: def $f0s killed $f0s def $v0
@@ -5227,32 +5228,33 @@ define <4 x double> @constrained_vector_minnum_v4f64() #0 {
; SZ13-NEXT: aghi %r15, -192
; SZ13-NEXT: .cfi_def_cfa_offset 352
; SZ13-NEXT: larl %r1, .LCPI94_0
+; SZ13-NEXT: larl %r2, .LCPI94_1
; SZ13-NEXT: ld %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI94_1
-; SZ13-NEXT: ld %f2, 0(%r1)
+; SZ13-NEXT: ld %f2, 0(%r2)
; SZ13-NEXT: brasl %r14, fmin@PLT
; SZ13-NEXT: larl %r1, .LCPI94_2
+; SZ13-NEXT: larl %r2, .LCPI94_3
; SZ13-NEXT: # kill: def $f0d killed $f0d def $v0
; SZ13-NEXT: vst %v0, 160(%r15), 3 # 16-byte Spill
; SZ13-NEXT: ld %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI94_3
-; SZ13-NEXT: ld %f2, 0(%r1)
+; SZ13-NEXT: ld %f2, 0(%r2)
; SZ13-NEXT: brasl %r14, fmin@PLT
-; SZ13-NEXT: vl %v1, 160(%r15), 3 # 16-byte Reload
-; SZ13-NEXT: # kill: def $f0d killed $f0d def $v0
-; SZ13-NEXT: vmrhg %v0, %v0, %v1
+; SZ13-NEXT: vl %v3, 160(%r15), 3 # 16-byte Reload
; SZ13-NEXT: larl %r1, .LCPI94_4
+; SZ13-NEXT: larl %r2, .LCPI94_5
+; SZ13-NEXT: ld %f1, 0(%r1)
+; SZ13-NEXT: ld %f2, 0(%r2)
+; SZ13-NEXT: # kill: def $f0d killed $f0d def $v0
+; SZ13-NEXT: vmrhg %v0, %v0, %v3
; SZ13-NEXT: vst %v0, 160(%r15), 3 # 16-byte Spill
-; SZ13-NEXT: ld %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI94_5
-; SZ13-NEXT: ld %f2, 0(%r1)
+; SZ13-NEXT: ldr %f0, %f1
; SZ13-NEXT: brasl %r14, fmin@PLT
; SZ13-NEXT: larl %r1, .LCPI94_6
+; SZ13-NEXT: larl %r2, .LCPI94_7
; SZ13-NEXT: # kill: def $f0d killed $f0d def $v0
; SZ13-NEXT: vst %v0, 176(%r15), 3 # 16-byte Spill
; SZ13-NEXT: ld %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI94_7
-; SZ13-NEXT: ld %f2, 0(%r1)
+; SZ13-NEXT: ld %f2, 0(%r2)
; SZ13-NEXT: brasl %r14, fmin@PLT
; SZ13-NEXT: vl %v1, 176(%r15), 3 # 16-byte Reload
; SZ13-NEXT: vl %v24, 160(%r15), 3 # 16-byte Reload
@@ -5306,9 +5308,9 @@ define <2 x float> @constrained_vector_fptrunc_v2f64() #0 {
; SZ13-LABEL: constrained_vector_fptrunc_v2f64:
; SZ13: # %bb.0: # %entry
; SZ13-NEXT: larl %r1, .LCPI96_0
+; SZ13-NEXT: larl %r2, .LCPI96_1
; SZ13-NEXT: ld %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI96_1
-; SZ13-NEXT: ld %f1, 0(%r1)
+; SZ13-NEXT: ld %f1, 0(%r2)
; SZ13-NEXT: ledbra %f0, 0, %f0, 0
; SZ13-NEXT: ledbra %f1, 0, %f1, 0
; SZ13-NEXT: vmrhf %v0, %v1, %v0
@@ -5382,19 +5384,19 @@ define <4 x float> @constrained_vector_fptrunc_v4f64() #0 {
; SZ13-LABEL: constrained_vector_fptrunc_v4f64:
; SZ13: # %bb.0: # %entry
; SZ13-NEXT: larl %r1, .LCPI98_0
+; SZ13-NEXT: larl %r2, .LCPI98_1
+; SZ13-NEXT: larl %r3, .LCPI98_2
+; SZ13-NEXT: larl %r4, .LCPI98_3
; SZ13-NEXT: ld %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI98_1
-; SZ13-NEXT: ld %f1, 0(%r1)
+; SZ13-NEXT: ld %f1, 0(%r2)
+; SZ13-NEXT: ld %f2, 0(%r3)
+; SZ13-NEXT: ld %f3, 0(%r4)
; SZ13-NEXT: ledbra %f0, 0, %f0, 0
; SZ13-NEXT: ledbra %f1, 0, %f1, 0
-; SZ13-NEXT: larl %r1, .LCPI98_2
-; SZ13-NEXT: vmrhf %v0, %v1, %v0
-; SZ13-NEXT: ld %f1, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI98_3
-; SZ13-NEXT: ld %f2, 0(%r1)
-; SZ13-NEXT: ledbra %f1, 0, %f1, 0
; SZ13-NEXT: ledbra %f2, 0, %f2, 0
-; SZ13-NEXT: vmrhf %v1, %v2, %v1
+; SZ13-NEXT: ledbra %f3, 0, %f3, 0
+; SZ13-NEXT: vmrhf %v0, %v1, %v0
+; SZ13-NEXT: vmrhf %v1, %v3, %v2
; SZ13-NEXT: vmrhg %v24, %v1, %v0
; SZ13-NEXT: br %r14
entry:
@@ -5438,9 +5440,9 @@ define <2 x double> @constrained_vector_fpext_v2f32() #0 {
; SZ13-LABEL: constrained_vector_fpext_v2f32:
; SZ13: # %bb.0: # %entry
; SZ13-NEXT: larl %r1, .LCPI100_0
+; SZ13-NEXT: larl %r2, .LCPI100_1
; SZ13-NEXT: ldeb %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI100_1
-; SZ13-NEXT: ldeb %f1, 0(%r1)
+; SZ13-NEXT: ldeb %f1, 0(%r2)
; SZ13-NEXT: vmrhg %v24, %v1, %v0
; SZ13-NEXT: br %r14
entry:
@@ -5501,15 +5503,15 @@ define <4 x double> @constrained_vector_fpext_v4f32() #0 {
; SZ13-LABEL: constrained_vector_fpext_v4f32:
; SZ13: # %bb.0: # %entry
; SZ13-NEXT: larl %r1, .LCPI102_0
+; SZ13-NEXT: larl %r2, .LCPI102_1
+; SZ13-NEXT: larl %r3, .LCPI102_2
+; SZ13-NEXT: larl %r4, .LCPI102_3
; SZ13-NEXT: ldeb %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI102_1
-; SZ13-NEXT: ldeb %f1, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI102_2
+; SZ13-NEXT: ldeb %f1, 0(%r2)
+; SZ13-NEXT: ldeb %f2, 0(%r3)
+; SZ13-NEXT: ldeb %f3, 0(%r4)
; SZ13-NEXT: vmrhg %v24, %v1, %v0
-; SZ13-NEXT: ldeb %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI102_3
-; SZ13-NEXT: ldeb %f1, 0(%r1)
-; SZ13-NEXT: vmrhg %v26, %v1, %v0
+; SZ13-NEXT: vmrhg %v26, %v3, %v2
; SZ13-NEXT: br %r14
entry:
%result = call <4 x double> @llvm.experimental.constrained.fpext.v4f64.v4f32(
@@ -6722,9 +6724,9 @@ define <1 x float> @constrained_vector_atan2_v1f32() #0 {
; SZ13-NEXT: aghi %r15, -160
; SZ13-NEXT: .cfi_def_cfa_offset 320
; SZ13-NEXT: larl %r1, .LCPI128_0
+; SZ13-NEXT: larl %r2, .LCPI128_1
; SZ13-NEXT: lde %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI128_1
-; SZ13-NEXT: lde %f2, 0(%r1)
+; SZ13-NEXT: lde %f2, 0(%r2)
; SZ13-NEXT: brasl %r14, atan2f@PLT
; SZ13-NEXT: # kill: def $f0s killed $f0s def $v0
; SZ13-NEXT: vlr %v24, %v0
@@ -6774,16 +6776,16 @@ define <2 x double> @constrained_vector_atan2_v2f64() #0 {
; SZ13-NEXT: aghi %r15, -176
; SZ13-NEXT: .cfi_def_cfa_offset 336
; SZ13-NEXT: larl %r1, .LCPI129_0
+; SZ13-NEXT: larl %r2, .LCPI129_1
; SZ13-NEXT: ld %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI129_1
-; SZ13-NEXT: ld %f2, 0(%r1)
+; SZ13-NEXT: ld %f2, 0(%r2)
; SZ13-NEXT: brasl %r14, atan2@PLT
; SZ13-NEXT: larl %r1, .LCPI129_2
+; SZ13-NEXT: larl %r2, .LCPI129_3
; SZ13-NEXT: # kill: def $f0d killed $f0d def $v0
; SZ13-NEXT: vst %v0, 160(%r15), 3 # 16-byte Spill
; SZ13-NEXT: ld %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI129_3
-; SZ13-NEXT: ld %f2, 0(%r1)
+; SZ13-NEXT: ld %f2, 0(%r2)
; SZ13-NEXT: brasl %r14, atan2@PLT
; SZ13-NEXT: vl %v1, 160(%r15), 3 # 16-byte Reload
; SZ13-NEXT: # kill: def $f0d killed $f0d def $v0
@@ -6845,23 +6847,23 @@ define <3 x float> @constrained_vector_atan2_v3f32() #0 {
; SZ13-NEXT: aghi %r15, -192
; SZ13-NEXT: .cfi_def_cfa_offset 352
; SZ13-NEXT: larl %r1, .LCPI130_0
+; SZ13-NEXT: larl %r2, .LCPI130_1
; SZ13-NEXT: lde %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI130_1
-; SZ13-NEXT: lde %f2, 0(%r1)
+; SZ13-NEXT: lde %f2, 0(%r2)
; SZ13-NEXT: brasl %r14, atan2f@PLT
; SZ13-NEXT: larl %r1, .LCPI130_2
+; SZ13-NEXT: larl %r2, .LCPI130_3
; SZ13-NEXT: # kill: def $f0s killed $f0s def $v0
; SZ13-NEXT: vst %v0, 176(%r15), 3 # 16-byte Spill
; SZ13-NEXT: lde %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI130_3
-; SZ13-NEXT: lde %f2, 0(%r1)
+; SZ13-NEXT: lde %f2, 0(%r2)
; SZ13-NEXT: brasl %r14, atan2f@PLT
; SZ13-NEXT: larl %r1, .LCPI130_4
+; SZ13-NEXT: larl %r2, .LCPI130_5
; SZ13-NEXT: # kill: def $f0s killed $f0s def $v0
; SZ13-NEXT: vst %v0, 160(%r15), 3 # 16-byte Spill
; SZ13-NEXT: lde %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI130_5
-; SZ13-NEXT: lde %f2, 0(%r1)
+; SZ13-NEXT: lde %f2, 0(%r2)
; SZ13-NEXT: brasl %r14, atan2f@PLT
; SZ13-NEXT: vl %v1, 160(%r15), 3 # 16-byte Reload
; SZ13-NEXT: # kill: def $f0s killed $f0s def $v0
@@ -7039,32 +7041,33 @@ define <4 x double> @constrained_vector_atan2_v4f64() #0 {
; SZ13-NEXT: aghi %r15, -192
; SZ13-NEXT: .cfi_def_cfa_offset 352
; SZ13-NEXT: larl %r1, .LCPI132_0
+; SZ13-NEXT: larl %r2, .LCPI132_1
; SZ13-NEXT: ld %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI132_1
-; SZ13-NEXT: ld %f2, 0(%r1)
+; SZ13-NEXT: ld %f2, 0(%r2)
; SZ13-NEXT: brasl %r14, atan2@PLT
; SZ13-NEXT: larl %r1, .LCPI132_2
+; SZ13-NEXT: larl %r2, .LCPI132_3
; SZ13-NEXT: # kill: def $f0d killed $f0d def $v0
; SZ13-NEXT: vst %v0, 160(%r15), 3 # 16-byte Spill
; SZ13-NEXT: ld %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI132_3
-; SZ13-NEXT: ld %f2, 0(%r1)
+; SZ13-NEXT: ld %f2, 0(%r2)
; SZ13-NEXT: brasl %r14, atan2@PLT
-; SZ13-NEXT: vl %v1, 160(%r15), 3 # 16-byte Reload
-; SZ13-NEXT: # kill: def $f0d killed $f0d def $v0
-; SZ13-NEXT: vmrhg %v0, %v0, %v1
+; SZ13-NEXT: vl %v3, 160(%r15), 3 # 16-byte Reload
; SZ13-NEXT: larl %r1, .LCPI132_4
+; SZ13-NEXT: larl %r2, .LCPI132_5
+; SZ13-NEXT: ld %f1, 0(%r1)
+; SZ13-NEXT: ld %f2, 0(%r2)
+; SZ13-NEXT: # kill: def $f0d killed $f0d def $v0
+; SZ13-NEXT: vmrhg %v0, %v0, %v3
; SZ13-NEXT: vst %v0, 160(%r15), 3 # 16-byte Spill
-; SZ13-NEXT: ld %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI132_5
-; SZ13-NEXT: ld %f2, 0(%r1)
+; SZ13-NEXT: ldr %f0, %f1
; SZ13-NEXT: brasl %r14, atan2@PLT
; SZ13-NEXT: larl %r1, .LCPI132_6
+; SZ13-NEXT: larl %r2, .LCPI132_7
; SZ13-NEXT: # kill: def $f0d killed $f0d def $v0
; SZ13-NEXT: vst %v0, 176(%r15), 3 # 16-byte Spill
; SZ13-NEXT: ld %f0, 0(%r1)
-; SZ13-NEXT: larl %r1, .LCPI132_7
-; SZ13-NEXT: ld %f2, 0(%r1)
+; SZ13-NEXT: ld %f2, 0(%r2)
; SZ13-NEXT: brasl %r14, atan2@PLT
; SZ13-NEXT: vl %v1, 176(%r15), 3 # 16-byte Reload
; SZ13-NEXT: vl %v24, 160(%r15), 3 # 16-byte Reload
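Most of the check-line churn in the constrained-fp hunks above follows one pattern: the new pre-RA strategy issues the independent `larl` address computations back to back, ahead of the vector loads that consume them, instead of interleaving each `larl` with its dependent load. The shape of that decision can be illustrated with a toy height-driven list scheduler. This is only a sketch under made-up latencies, not the actual SystemZPreRASchedStrategy heuristics; the node names model the `constrained_vector_fdiv_v2f64` hunk.

```python
# Toy latency-aware (height-driven) list scheduler. Illustrative only:
# node names and latencies are invented, not taken from the real
# SystemZ scheduling model.

def list_schedule(nodes, deps, latency):
    """nodes: issue-order list of names; deps: {node: set of predecessors};
    latency: {node: cycles}. Greedily issues the ready node with the
    greatest critical-path height (latency to the bottom of the DAG)."""
    height = {}

    def get_height(n):
        if n not in height:
            succs = [m for m in nodes if n in deps.get(m, set())]
            height[n] = latency[n] + max((get_height(s) for s in succs),
                                         default=0)
        return height[n]

    for n in nodes:
        get_height(n)

    scheduled, done = [], set()
    while len(scheduled) < len(nodes):
        # A node is ready once all of its predecessors have been issued.
        ready = [n for n in nodes if n not in done
                 and deps.get(n, set()) <= done]
        pick = max(ready, key=lambda n: height[n])
        scheduled.append(pick)
        done.add(pick)
    return scheduled

# Two larl/vl chains feeding one vfddb, as in the fdiv_v2f64 hunk.
nodes = ["larl0", "larl1", "vl0", "vl1", "vfddb"]
deps = {"vl0": {"larl0"}, "vl1": {"larl1"}, "vfddb": {"vl0", "vl1"}}
latency = {"larl0": 1, "larl1": 1, "vl0": 4, "vl1": 4, "vfddb": 6}
order = list_schedule(nodes, deps, latency)
```

Because both `larl` nodes sit above a long load-plus-fdiv chain, they have the greatest height and are issued first, matching the `larl %r1` / `larl %r2` pairs in the updated checks.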