[llvm] [SystemZ] Enable MachineCombiner for FP reassociation (PR #83546)
Jonas Paulsson via llvm-commits
llvm-commits at lists.llvm.org
Fri Mar 1 01:27:26 PST 2024
https://github.com/JonPsson1 created https://github.com/llvm/llvm-project/pull/83546
This patch is still slightly unfinished/experimental. It has the option of using either the PPC patterns or alternative new ones. Some feedback would be nice at this point (see below).
Main points:
- Enable MachineCombiner
- Disable reg/mem folding during isel as MachineCombiner can only work with reg/reg instructions.
- Implement optimizeLoadInstr() so that PeepholeOptimizer can fold loads into reg/mem instructions after MachineCombiner.
- Run a new simple pass in the SystemZ backend that converts any remaining reg/reg pseudos. It also does some limited reg/mem folding across MBB boundaries, which gives a nice improvement on imagick (13%!). See the sketch below.
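To make these steps concrete, here is a rough sketch (opcode names are taken from this patch; registers, operand order and syntax are simplified) of what happens to res = a + b + c + *p when all the fadds carry reassoc/nsz flags:
```
1. isel: reg/mem folding disabled, CC-clobbering reg/reg pseudos selected:
     %l  = VL64 <mem>               ; load of *p left unfolded
     %t1 = WFADB_CCPseudo %a, %b
     %t2 = WFADB_CCPseudo %t1, %c
     %r  = WFADB_CCPseudo %t2, %l

2. MachineCombiner: reassociates so the two inner adds run in parallel:
     %t1 = WFADB_CCPseudo %a, %b
     %t2 = WFADB_CCPseudo %c, %l
     %r  = WFADB_CCPseudo %t1, %t2

3. PeepholeOptimizer: optimizeLoadInstr() folds the load into the reg/mem
   opcode, which clobbers CC (hence the pseudo):
     %t2 = ADB %c, <mem>

4. SystemZFinalizeReassociation (the new pass): converts any remaining
   WF..._CCPseudo into the plain opcode (WFADB), which does not clobber CC.
```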
I have experimented with finding which pattern combinations give the best improvement, measured as the decrease in the maximum depth of any instruction in the MBB, as reported by MachineTraceMetrics. The common-code binary patterns alone give a total static improvement of -10294. Adding different FMA patterns gave some further improvements:
```
Only common-code binary patterns:          -10294

Additional decrease (relative to -10294):  static | with reassoc-additions / performance
  PowerPC patterns (fma2add, fma3-ch):     -3211
  fma1add, fma2, addfma:                   -6269  | -6812 / possibly better
  fma1add, fma2:                           -5760
  All experimental patterns:               -6237  | -6865 / about same
```
Looking at these numbers, the pattern set {fma+fma, fma+add, add+fma} seemed to be the most promising combination, while the PPC patterns {fma+fma+fma, fma+fma+add} gave only about half of that improvement.
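For illustration, here is the FMA2_P1P0 rewrite in scalar C++ form (just a sketch of the dependence change; std::fma is assumed to map to the hardware FMA, and all names are invented):
```
#include <cmath>

// Before: a serial FMA chain. The path from 'acc' to the result goes
// through two dependent FMAs.
double fma2_serial(double x0, double y0, double x1, double y1, double acc) {
  double t0 = std::fma(x0, y0, acc);
  return std::fma(x1, y1, t0);
}

// After FMA2_P1P0: the multiply and the FMA are independent of 'acc', so
// the path from 'acc' to the result is a single add.
double fma2_p1p0(double x0, double y0, double x1, double y1, double acc) {
  double m = x0 * y0;              // first product, split out
  double t = std::fma(x1, y1, m);  // second product folded into an FMA
  return acc + t;
}
```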
However, even though these different FMA pattern settings give further static improvements, none of them show any further performance improvement on SPEC. With the experimental reassoc-additions pass (the right-hand column) a bit more depth improvement resulted, but even then there is no noticeable performance win.
A bit surprisingly, I instead saw a 13% improvement on imagick *without* the MachineCombiner, just from the global reg/mem folding I tried. Then, with the MachineCombiner using only the common-code patterns for binops, lbm improves by ~20% and cactus by ~2%.
My conclusions are that the MachineCombiner is worth enabling, and that even though there is no noticeable performance improvement on SPEC, it seems good to have some basic FMA patterns, via either -z-fma or -ppc-fma. The simpler solution is probably to reuse the two PPC patterns, while the z-fma ones give a better max-depth improvement, which may or may not be valuable at times. Compile time seems to be unaffected.
The experiments and results above include a minor fix in MachineCombiner (https://github.com/llvm/llvm-project/pull/82025), which hopefully can be committed but is probably not vital.
The full experimental patch:
[MachineComb.patch.tar.gz](https://github.com/llvm/llvm-project/files/14459050/MachineComb.patch.tar.gz)
>From fab2c858e18e1239201f579c223b54a1debfd7d6 Mon Sep 17 00:00:00 2001
From: Jonas Paulsson <paulsson at linux.vnet.ibm.com>
Date: Tue, 4 Apr 2023 22:09:43 +0200
Subject: [PATCH] Cleaned up after experiments.
---
.../llvm/CodeGen/MachineCombinerPattern.h | 9 +
llvm/include/llvm/CodeGen/TargetInstrInfo.h | 2 +-
llvm/lib/CodeGen/MachineCombiner.cpp | 11 +-
llvm/lib/Target/SystemZ/CMakeLists.txt | 1 +
llvm/lib/Target/SystemZ/SystemZ.h | 2 +
.../SystemZ/SystemZFinalizeReassociation.cpp | 127 ++++
.../Target/SystemZ/SystemZISelDAGToDAG.cpp | 7 +
llvm/lib/Target/SystemZ/SystemZInstrFP.td | 24 +-
.../lib/Target/SystemZ/SystemZInstrFormats.td | 30 +
llvm/lib/Target/SystemZ/SystemZInstrInfo.cpp | 568 +++++++++++++-
llvm/lib/Target/SystemZ/SystemZInstrInfo.h | 39 +
llvm/lib/Target/SystemZ/SystemZInstrVector.td | 66 +-
llvm/lib/Target/SystemZ/SystemZOperators.td | 29 +
llvm/lib/Target/SystemZ/SystemZScheduleZ13.td | 6 +-
llvm/lib/Target/SystemZ/SystemZScheduleZ14.td | 10 +-
llvm/lib/Target/SystemZ/SystemZScheduleZ15.td | 10 +-
llvm/lib/Target/SystemZ/SystemZScheduleZ16.td | 10 +-
.../Target/SystemZ/SystemZTargetMachine.cpp | 10 +
llvm/lib/Target/X86/X86InstrInfo.cpp | 2 +-
llvm/lib/Target/X86/X86InstrInfo.h | 2 +-
llvm/test/CodeGen/SystemZ/fp-add-02.ll | 14 +
llvm/test/CodeGen/SystemZ/fp-mul-02.ll | 12 +-
.../SystemZ/machine-combiner-reassoc-fp-01.ll | 690 ++++++++++++++++++
.../SystemZ/machine-combiner-reassoc-fp-03.ll | 90 +++
.../SystemZ/machine-combiner-reassoc-fp-04.ll | 123 ++++
.../SystemZ/machine-combiner-reassoc-fp-08.ll | 115 +++
.../SystemZ/machine-combiner-reassoc-fp-09.ll | 177 +++++
27 files changed, 2106 insertions(+), 80 deletions(-)
create mode 100644 llvm/lib/Target/SystemZ/SystemZFinalizeReassociation.cpp
create mode 100644 llvm/test/CodeGen/SystemZ/machine-combiner-reassoc-fp-01.ll
create mode 100644 llvm/test/CodeGen/SystemZ/machine-combiner-reassoc-fp-03.ll
create mode 100644 llvm/test/CodeGen/SystemZ/machine-combiner-reassoc-fp-04.ll
create mode 100644 llvm/test/CodeGen/SystemZ/machine-combiner-reassoc-fp-08.ll
create mode 100644 llvm/test/CodeGen/SystemZ/machine-combiner-reassoc-fp-09.ll
diff --git a/llvm/include/llvm/CodeGen/MachineCombinerPattern.h b/llvm/include/llvm/CodeGen/MachineCombinerPattern.h
index 89eed7463bd783..c0b3927450cb69 100644
--- a/llvm/include/llvm/CodeGen/MachineCombinerPattern.h
+++ b/llvm/include/llvm/CodeGen/MachineCombinerPattern.h
@@ -176,6 +176,15 @@ enum class MachineCombinerPattern {
FMSUB,
FNMSUB,
+ // SystemZ patterns. (EXPERIMENTAL)
+ FMA2_P1P0,
+ FMA2_P0P1,
+ FMA2,
+ FMA1_Add_L,
+ FMA1_Add_R,
+ FMA3, // These are inspired by PPC
+ FMA2_Add, //
+
// X86 VNNI
DPWSSD,
diff --git a/llvm/include/llvm/CodeGen/TargetInstrInfo.h b/llvm/include/llvm/CodeGen/TargetInstrInfo.h
index e7787aafb98e2d..dc48056c3d9bcd 100644
--- a/llvm/include/llvm/CodeGen/TargetInstrInfo.h
+++ b/llvm/include/llvm/CodeGen/TargetInstrInfo.h
@@ -1698,7 +1698,7 @@ class TargetInstrInfo : public MCInstrInfo {
/// instruction that defines FoldAsLoadDefReg, and the function returns
/// the machine instruction generated due to folding.
virtual MachineInstr *optimizeLoadInstr(MachineInstr &MI,
- const MachineRegisterInfo *MRI,
+ MachineRegisterInfo *MRI,
Register &FoldAsLoadDefReg,
MachineInstr *&DefMI) const {
return nullptr;
diff --git a/llvm/lib/CodeGen/MachineCombiner.cpp b/llvm/lib/CodeGen/MachineCombiner.cpp
index c65937935ed820..577331441eef28 100644
--- a/llvm/lib/CodeGen/MachineCombiner.cpp
+++ b/llvm/lib/CodeGen/MachineCombiner.cpp
@@ -155,9 +155,7 @@ MachineCombiner::getOperandDef(const MachineOperand &MO) {
// We need a virtual register definition.
if (MO.isReg() && MO.getReg().isVirtual())
DefInstr = MRI->getUniqueVRegDef(MO.getReg());
- // PHI's have no depth etc.
- if (DefInstr && DefInstr->isPHI())
- DefInstr = nullptr;
+ // (PATCH PROPOSED for PHIs: https://github.com/llvm/llvm-project/pull/82025)
return DefInstr;
}
@@ -317,6 +315,13 @@ static CombinerObjective getCombinerObjective(MachineCombinerPattern P) {
case MachineCombinerPattern::FMADD_XA:
case MachineCombinerPattern::FMSUB:
case MachineCombinerPattern::FNMSUB:
+ case MachineCombinerPattern::FMA2_P1P0:
+ case MachineCombinerPattern::FMA2_P0P1:
+ case MachineCombinerPattern::FMA2:
+ case MachineCombinerPattern::FMA1_Add_L:
+ case MachineCombinerPattern::FMA1_Add_R:
+ case MachineCombinerPattern::FMA3:
+ case MachineCombinerPattern::FMA2_Add:
return CombinerObjective::MustReduceDepth;
case MachineCombinerPattern::REASSOC_XY_BCA:
case MachineCombinerPattern::REASSOC_XY_BAC:
diff --git a/llvm/lib/Target/SystemZ/CMakeLists.txt b/llvm/lib/Target/SystemZ/CMakeLists.txt
index 0614e07bde8ac1..235df220729447 100644
--- a/llvm/lib/Target/SystemZ/CMakeLists.txt
+++ b/llvm/lib/Target/SystemZ/CMakeLists.txt
@@ -20,6 +20,7 @@ add_llvm_target(SystemZCodeGen
SystemZConstantPoolValue.cpp
SystemZCopyPhysRegs.cpp
SystemZElimCompare.cpp
+ SystemZFinalizeReassociation.cpp
SystemZFrameLowering.cpp
SystemZHazardRecognizer.cpp
SystemZISelDAGToDAG.cpp
diff --git a/llvm/lib/Target/SystemZ/SystemZ.h b/llvm/lib/Target/SystemZ/SystemZ.h
index d7aa9e4e18cbbb..49a200babfff57 100644
--- a/llvm/lib/Target/SystemZ/SystemZ.h
+++ b/llvm/lib/Target/SystemZ/SystemZ.h
@@ -195,12 +195,14 @@ FunctionPass *createSystemZShortenInstPass(SystemZTargetMachine &TM);
FunctionPass *createSystemZLongBranchPass(SystemZTargetMachine &TM);
FunctionPass *createSystemZLDCleanupPass(SystemZTargetMachine &TM);
FunctionPass *createSystemZCopyPhysRegsPass(SystemZTargetMachine &TM);
+FunctionPass *createSystemZFinalizeReassociationPass(SystemZTargetMachine &TM);
FunctionPass *createSystemZPostRewritePass(SystemZTargetMachine &TM);
FunctionPass *createSystemZTDCPass();
void initializeSystemZCopyPhysRegsPass(PassRegistry &);
void initializeSystemZDAGToDAGISelPass(PassRegistry &);
void initializeSystemZElimComparePass(PassRegistry &);
+void initializeSystemZFinalizeReassociationPass(PassRegistry &);
void initializeSystemZLDCleanupPass(PassRegistry &);
void initializeSystemZLongBranchPass(PassRegistry &);
void initializeSystemZPostRewritePass(PassRegistry &);
diff --git a/llvm/lib/Target/SystemZ/SystemZFinalizeReassociation.cpp b/llvm/lib/Target/SystemZ/SystemZFinalizeReassociation.cpp
new file mode 100644
index 00000000000000..d441b8bbc87381
--- /dev/null
+++ b/llvm/lib/Target/SystemZ/SystemZFinalizeReassociation.cpp
@@ -0,0 +1,127 @@
+//===---- SystemZFinalizeReassociation.cpp - Finalize FP reassociation ----===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+//
+// This pass is the last step of the process of enabling reassociation with
+// the MachineCombiner. These are the steps involved:
+//
+// 1. Instruction selection: Disable reg/mem folding for any operations that
+// are reassociable since MachineCombiner will not succeed
+// otherwise. Instead select a reg/reg pseudo that pretends to clobber CC.
+//
+// 2. MachineCombiner: Performs reassociation with the reg/reg instructions.
+//
+// 3. PeepholeOptimizer: fold loads into reg/mem instructions after
+// reassociation. The reg/mem opcode sets CC which is why the special
+// reg/reg pseudo is needed.
+//
+// 4. Convert any remaining pseudos into the target opcodes that do not
+// clobber CC (this pass).
+//
+//===----------------------------------------------------------------------===//
+
+#include "SystemZMachineFunctionInfo.h"
+#include "SystemZTargetMachine.h"
+#include "llvm/CodeGen/MachineDominators.h"
+#include "llvm/CodeGen/MachineFunctionPass.h"
+#include "llvm/CodeGen/MachineInstrBuilder.h"
+#include "llvm/CodeGen/MachineRegisterInfo.h"
+#include "llvm/CodeGen/TargetInstrInfo.h"
+#include "llvm/CodeGen/TargetRegisterInfo.h"
+#include "llvm/Target/TargetMachine.h"
+
+using namespace llvm;
+
+namespace {
+
+class SystemZFinalizeReassociation : public MachineFunctionPass {
+public:
+ static char ID;
+ SystemZFinalizeReassociation()
+ : MachineFunctionPass(ID), TII(nullptr), MRI(nullptr) {
+ initializeSystemZFinalizeReassociationPass(*PassRegistry::getPassRegistry());
+ }
+
+ bool runOnMachineFunction(MachineFunction &MF) override;
+ void getAnalysisUsage(AnalysisUsage &AU) const override;
+
+private:
+
+ bool visitMBB(MachineBasicBlock &MBB);
+
+ const SystemZInstrInfo *TII;
+ MachineRegisterInfo *MRI;
+};
+
+char SystemZFinalizeReassociation::ID = 0;
+
+} // end anonymous namespace
+
+INITIALIZE_PASS(SystemZFinalizeReassociation, "systemz-finalize-reassoc",
+ "SystemZ Finalize Reassociation", false, false)
+
+FunctionPass *llvm::
+createSystemZFinalizeReassociationPass(SystemZTargetMachine &TM) {
+ return new SystemZFinalizeReassociation();
+}
+
+void SystemZFinalizeReassociation::getAnalysisUsage(AnalysisUsage &AU) const {
+ AU.setPreservesCFG();
+ MachineFunctionPass::getAnalysisUsage(AU);
+}
+
+bool SystemZFinalizeReassociation::visitMBB(MachineBasicBlock &MBB) {
+ bool Changed = false;
+ for (MachineInstr &MI : llvm::make_early_inc_range(MBB)) {
+ unsigned PseudoOpcode = MI.getOpcode();
+ unsigned TargetOpcode =
+ PseudoOpcode == SystemZ::WFADB_CCPseudo ? SystemZ::WFADB
+ : PseudoOpcode == SystemZ::WFASB_CCPseudo ? SystemZ::WFASB
+ : PseudoOpcode == SystemZ::WFSDB_CCPseudo ? SystemZ::WFSDB
+ : PseudoOpcode == SystemZ::WFSSB_CCPseudo ? SystemZ::WFSSB
+ : PseudoOpcode == SystemZ::WFMDB_CCPseudo ? SystemZ::WFMDB
+ : PseudoOpcode == SystemZ::WFMSB_CCPseudo ? SystemZ::WFMSB
+ : PseudoOpcode == SystemZ::WFMADB_CCPseudo ? SystemZ::WFMADB
+ : PseudoOpcode == SystemZ::WFMASB_CCPseudo ? SystemZ::WFMASB
+ : 0;
+ if (TargetOpcode) {
+      // PeepholeOptimizer will not fold loads across basic blocks, but
+      // doing so seems beneficial, so do it here:
+ bool Folded = false;
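+      // Try to fold a load defining either of the first two register
+      // operands (an FMA accumulator operand is never folded).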
+ for (unsigned Op = 1; Op <= 2; ++Op) {
+ Register Reg = MI.getOperand(Op).getReg();
+ if (MachineInstr *DefMI = MRI->getVRegDef(Reg))
+ if (TII->optimizeLoadInstr(MI, MRI, Reg, DefMI)) {
+ MI.eraseFromParent();
+ DefMI->eraseFromParent();
+ MRI->markUsesInDebugValueAsUndef(Reg);
+ Folded = true;
+ break;
+ }
+ }
+
+ if (!Folded) {
+ MI.setDesc(TII->get(TargetOpcode));
+ int CCIdx = MI.findRegisterDefOperandIdx(SystemZ::CC);
+ MI.removeOperand(CCIdx);
+ }
+ Changed = true;
+ }
+ }
+ return Changed;
+}
+
+bool SystemZFinalizeReassociation::runOnMachineFunction(MachineFunction &F) {
+ TII = F.getSubtarget<SystemZSubtarget>().getInstrInfo();
+ MRI = &F.getRegInfo();
+
+ bool Modified = false;
+ for (auto &MBB : F)
+ Modified |= visitMBB(MBB);
+
+ return Modified;
+}
diff --git a/llvm/lib/Target/SystemZ/SystemZISelDAGToDAG.cpp b/llvm/lib/Target/SystemZ/SystemZISelDAGToDAG.cpp
index 815eca1240d827..652980d3c2c0df 100644
--- a/llvm/lib/Target/SystemZ/SystemZISelDAGToDAG.cpp
+++ b/llvm/lib/Target/SystemZ/SystemZISelDAGToDAG.cpp
@@ -347,6 +347,13 @@ class SystemZDAGToDAGISel : public SelectionDAGISel {
// Try to expand a boolean SELECT_CCMASK using an IPM sequence.
SDValue expandSelectBoolean(SDNode *Node);
+  // Return true if the flags of N and the subtarget allow reassociation.
+ bool isReassociable(SDNode *N) const {
+ return N->getFlags().hasAllowReassociation() &&
+ N->getFlags().hasNoSignedZeros() &&
+ Subtarget->hasVector();
+ }
+
public:
static char ID;
diff --git a/llvm/lib/Target/SystemZ/SystemZInstrFP.td b/llvm/lib/Target/SystemZ/SystemZInstrFP.td
index 6e67425c1e788b..5b383b3fa6ae88 100644
--- a/llvm/lib/Target/SystemZ/SystemZInstrFP.td
+++ b/llvm/lib/Target/SystemZ/SystemZInstrFP.td
@@ -430,8 +430,10 @@ let Uses = [FPC], mayRaiseFPException = 1,
def ADBR : BinaryRRE<"adbr", 0xB31A, any_fadd, FP64, FP64>;
def AXBR : BinaryRRE<"axbr", 0xB34A, any_fadd, FP128, FP128>;
}
- defm AEB : BinaryRXEAndPseudo<"aeb", 0xED0A, any_fadd, FP32, load, 4>;
- defm ADB : BinaryRXEAndPseudo<"adb", 0xED1A, any_fadd, FP64, load, 8>;
+ defm AEB : BinaryRXEAndPseudo<"aeb", 0xED0A, z_any_fadd_noreassoc, FP32,
+ load, 4>;
+ defm ADB : BinaryRXEAndPseudo<"adb", 0xED1A, z_any_fadd_noreassoc, FP64,
+ load, 8>;
}
// Subtraction.
@@ -441,8 +443,10 @@ let Uses = [FPC], mayRaiseFPException = 1,
def SDBR : BinaryRRE<"sdbr", 0xB31B, any_fsub, FP64, FP64>;
def SXBR : BinaryRRE<"sxbr", 0xB34B, any_fsub, FP128, FP128>;
- defm SEB : BinaryRXEAndPseudo<"seb", 0xED0B, any_fsub, FP32, load, 4>;
- defm SDB : BinaryRXEAndPseudo<"sdb", 0xED1B, any_fsub, FP64, load, 8>;
+ defm SEB : BinaryRXEAndPseudo<"seb", 0xED0B, z_any_fsub_noreassoc, FP32,
+ load, 4>;
+ defm SDB : BinaryRXEAndPseudo<"sdb", 0xED1B, z_any_fsub_noreassoc, FP64,
+ load, 8>;
}
// Multiplication.
@@ -452,8 +456,10 @@ let Uses = [FPC], mayRaiseFPException = 1 in {
def MDBR : BinaryRRE<"mdbr", 0xB31C, any_fmul, FP64, FP64>;
def MXBR : BinaryRRE<"mxbr", 0xB34C, any_fmul, FP128, FP128>;
}
- defm MEEB : BinaryRXEAndPseudo<"meeb", 0xED17, any_fmul, FP32, load, 4>;
- defm MDB : BinaryRXEAndPseudo<"mdb", 0xED1C, any_fmul, FP64, load, 8>;
+ defm MEEB : BinaryRXEAndPseudo<"meeb", 0xED17, z_any_fmul_noreassoc, FP32,
+ load, 4>;
+ defm MDB : BinaryRXEAndPseudo<"mdb", 0xED1C, z_any_fmul_noreassoc, FP64,
+ load, 8>;
}
// f64 multiplication of two FP32 registers.
@@ -495,8 +501,10 @@ let Uses = [FPC], mayRaiseFPException = 1 in {
def MAEBR : TernaryRRD<"maebr", 0xB30E, z_any_fma, FP32, FP32>;
def MADBR : TernaryRRD<"madbr", 0xB31E, z_any_fma, FP64, FP64>;
- defm MAEB : TernaryRXFAndPseudo<"maeb", 0xED0E, z_any_fma, FP32, FP32, load, 4>;
- defm MADB : TernaryRXFAndPseudo<"madb", 0xED1E, z_any_fma, FP64, FP64, load, 8>;
+ defm MAEB : TernaryRXFAndPseudo<"maeb", 0xED0E, z_any_fma_noreassoc, FP32,
+ FP32, load, 4>;
+ defm MADB : TernaryRXFAndPseudo<"madb", 0xED1E, z_any_fma_noreassoc, FP64,
+ FP64, load, 8>;
}
// Fused multiply-subtract.
diff --git a/llvm/lib/Target/SystemZ/SystemZInstrFormats.td b/llvm/lib/Target/SystemZ/SystemZInstrFormats.td
index bb9fa0fc33ffa0..9fdda0a5d60031 100644
--- a/llvm/lib/Target/SystemZ/SystemZInstrFormats.td
+++ b/llvm/lib/Target/SystemZ/SystemZInstrFormats.td
@@ -5536,3 +5536,33 @@ multiclass StringRRE<string mnemonic, bits<16> opcode,
[(set GR64:$end, (operator GR64:$start1, GR64:$start2,
GR32:$char))]>;
}
+
+multiclass BinaryVRRcAndCCPseudo<string mnemonic, bits<16> opcode,
+ SDPatternOperator operator,
+ SDPatternOperator reassoc_operator,
+ TypedReg tr1, TypedReg tr2, bits<4> type = 0,
+ bits<4> m5 = 0, bits<4> m6 = 0,
+ string fp_mnemonic = ""> {
+ def "" : BinaryVRRc<mnemonic, opcode, operator, tr1, tr2, type, m5, m6,
+ fp_mnemonic>;
+ let Defs = [CC], AddedComplexity = 1 in // Win over "".
+ def _CCPseudo : Pseudo<(outs tr1.op:$V1), (ins tr2.op:$V2, tr2.op:$V3),
+ [(set (tr1.vt tr1.op:$V1),
+ (reassoc_operator (tr2.vt tr2.op:$V2),
+ (tr2.vt tr2.op:$V3)))]>;
+}
+
+multiclass TernaryVRReAndCCPseudo<string mnemonic, bits<16> opcode,
+ SDPatternOperator operator,
+ SDPatternOperator reassoc_operator,
+ TypedReg tr1, TypedReg tr2, bits<4> m5 = 0,
+ bits<4> type = 0, string fp_mnemonic = ""> {
+ def "" : TernaryVRRe<mnemonic, opcode, operator, tr1, tr2, m5, type,
+ fp_mnemonic>;
+ let Defs = [CC], AddedComplexity = 1 in // Win over "".
+ def _CCPseudo : Pseudo<(outs tr1.op:$V1),
+ (ins tr2.op:$V2, tr2.op:$V3, tr1.op:$V4),
+ [(set (tr1.vt tr1.op:$V1), (reassoc_operator (tr2.vt tr2.op:$V2),
+ (tr2.vt tr2.op:$V3),
+ (tr1.vt tr1.op:$V4)))]>;
+}
diff --git a/llvm/lib/Target/SystemZ/SystemZInstrInfo.cpp b/llvm/lib/Target/SystemZ/SystemZInstrInfo.cpp
index 046a12208467b4..1d8db507e30f14 100644
--- a/llvm/lib/Target/SystemZ/SystemZInstrInfo.cpp
+++ b/llvm/lib/Target/SystemZ/SystemZInstrInfo.cpp
@@ -21,6 +21,7 @@
#include "llvm/CodeGen/LivePhysRegs.h"
#include "llvm/CodeGen/LiveVariables.h"
#include "llvm/CodeGen/MachineBasicBlock.h"
+#include "llvm/CodeGen/MachineCombinerPattern.h"
#include "llvm/CodeGen/MachineFrameInfo.h"
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/MachineInstr.h"
@@ -610,6 +611,112 @@ void SystemZInstrInfo::insertSelect(MachineBasicBlock &MBB,
.addImm(CCValid).addImm(CCMask);
}
+static void transferDeadCC(MachineInstr *OldMI, MachineInstr *NewMI) {
+ if (OldMI->registerDefIsDead(SystemZ::CC)) {
+ MachineOperand *CCDef = NewMI->findRegisterDefOperand(SystemZ::CC);
+ if (CCDef != nullptr)
+ CCDef->setIsDead(true);
+ }
+}
+
+void SystemZInstrInfo::transferMIFlag(MachineInstr *OldMI, MachineInstr *NewMI,
+ MachineInstr::MIFlag Flag) const {
+ if (OldMI->getFlag(Flag))
+ NewMI->setFlag(Flag);
+}
+
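+// Experimental limit used when folding a load globally (across MBBs): give
+// up if the other register operand has more than this many users.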
+static cl::opt<unsigned> MaxUsersGlobalFold("maxusersglobfold", cl::init(7));
+
+MachineInstr *SystemZInstrInfo::optimizeLoadInstr(MachineInstr &MI,
+ MachineRegisterInfo *MRI,
+ Register &FoldAsLoadDefReg,
+ MachineInstr *&DefMI) const {
+ const TargetRegisterInfo *TRI = MRI->getTargetRegisterInfo();
+
+ // Check whether we can move the DefMI load, and that it only has one use.
+ DefMI = MRI->getVRegDef(FoldAsLoadDefReg);
+ assert(DefMI);
+ bool SawStore = false;
+ if (!DefMI->isSafeToMove(nullptr, SawStore) ||
+ !MRI->hasOneNonDBGUse(FoldAsLoadDefReg))
+ return nullptr;
+
+  // For reassociable FP operations, loads have been purposefully left
+  // unfolded so that MachineCombiner can do its work on reg/reg
+  // opcodes. After that, as many loads as possible are folded here.
+ unsigned LoadOpcode = 0;
+ unsigned RegMemOpcode = 0;
+ const TargetRegisterClass *FPRC = nullptr;
+ RegMemOpcode = MI.getOpcode() == SystemZ::WFADB_CCPseudo ? SystemZ::ADB
+ : MI.getOpcode() == SystemZ::WFSDB_CCPseudo ? SystemZ::SDB
+ : MI.getOpcode() == SystemZ::WFMDB_CCPseudo ? SystemZ::MDB
+ : MI.getOpcode() == SystemZ::WFMADB_CCPseudo ? SystemZ::MADB
+ : 0;
+ if (RegMemOpcode) {
+ LoadOpcode = SystemZ::VL64;
+ FPRC = &SystemZ::FP64BitRegClass;
+ } else {
+ RegMemOpcode = MI.getOpcode() == SystemZ::WFASB_CCPseudo ? SystemZ::AEB
+ : MI.getOpcode() == SystemZ::WFSSB_CCPseudo ? SystemZ::SEB
+ : MI.getOpcode() == SystemZ::WFMSB_CCPseudo ? SystemZ::MEEB
+ : MI.getOpcode() == SystemZ::WFMASB_CCPseudo ? SystemZ::MAEB
+ : 0;
+ if (RegMemOpcode) {
+ LoadOpcode = SystemZ::VL32;
+ FPRC = &SystemZ::FP32BitRegClass;
+ }
+ }
+ if (!RegMemOpcode || DefMI->getOpcode() != LoadOpcode)
+ return nullptr;
+
+ DebugLoc DL = MI.getDebugLoc();
+ Register DstReg = MI.getOperand(0).getReg();
+ MachineOperand LHS = MI.getOperand(1);
+ MachineOperand RHS = MI.getOperand(2);
+ bool IsTernary =
+ (RegMemOpcode == SystemZ::MADB || RegMemOpcode == SystemZ::MAEB);
+ MachineOperand *AccMO = IsTernary ? &MI.getOperand(3) : nullptr;
+ if ((RegMemOpcode == SystemZ::SDB || RegMemOpcode == SystemZ::SEB) &&
+ FoldAsLoadDefReg != RHS.getReg())
+ return nullptr;
+ if (IsTernary && FoldAsLoadDefReg == AccMO->getReg())
+ return nullptr;
+ MachineOperand &RegMO = RHS.getReg() == FoldAsLoadDefReg ? LHS : RHS;
+  // Folding globally into a 2-address instruction whose register operand
+  // has too many users seems detrimental (it results in register copies
+  // before each use).
+ if (!MRI->hasOneNonDBGUse(RegMO.getReg()) && !RegMO.isKill() &&
+ DefMI->getParent() != MI.getParent()) {
+ unsigned NumUsers = 0;
+ for (MachineInstr &UI : MRI->use_nodbg_instructions(RegMO.getReg())) {
+ NumUsers++; (void)UI;
+ }
+ if (NumUsers > MaxUsersGlobalFold)
+ return nullptr;
+ }
+
+ MachineOperand &Base = DefMI->getOperand(1);
+ MachineOperand &Disp = DefMI->getOperand(2);
+ MachineOperand &Indx = DefMI->getOperand(3);
+ if (Base.isReg()) // Could be a FrameIndex.
+ Base.setIsKill(false);
+ Indx.setIsKill(false);
+
+ MachineInstrBuilder MIB =
+ BuildMI(*MI.getParent(), MI, DL, get(RegMemOpcode), DstReg);
+ if (IsTernary)
+ MIB.add(*AccMO);
+ MIB.add(RegMO);
+ MIB.add(Base).add(Disp).add(Indx);
+ MIB.addMemOperand(*DefMI->memoperands_begin());
+ transferMIFlag(&MI, MIB, MachineInstr::NoFPExcept);
+ MIB->addRegisterDead(SystemZ::CC, TRI);
+ MRI->setRegClass(DstReg, FPRC);
+ if (IsTernary)
+ MRI->setRegClass(AccMO->getReg(), FPRC);
+ MRI->setRegClass(RegMO.getReg(), FPRC);
+ return MIB;
+}
+
bool SystemZInstrInfo::foldImmediate(MachineInstr &UseMI, MachineInstr &DefMI,
Register Reg,
MachineRegisterInfo *MRI) const {
@@ -937,20 +1044,6 @@ static LogicOp interpretAndImmediate(unsigned Opcode) {
}
}
-static void transferDeadCC(MachineInstr *OldMI, MachineInstr *NewMI) {
- if (OldMI->registerDefIsDead(SystemZ::CC)) {
- MachineOperand *CCDef = NewMI->findRegisterDefOperand(SystemZ::CC);
- if (CCDef != nullptr)
- CCDef->setIsDead(true);
- }
-}
-
-static void transferMIFlag(MachineInstr *OldMI, MachineInstr *NewMI,
- MachineInstr::MIFlag Flag) {
- if (OldMI->getFlag(Flag))
- NewMI->setFlag(Flag);
-}
-
MachineInstr *
SystemZInstrInfo::convertToThreeAddress(MachineInstr &MI, LiveVariables *LV,
LiveIntervals *LIS) const {
@@ -1003,6 +1096,453 @@ SystemZInstrInfo::convertToThreeAddress(MachineInstr &MI, LiveVariables *LV,
return nullptr;
}
+static bool hasReassocFlags(const MachineInstr *MI) {
+ return (MI->getFlag(MachineInstr::MIFlag::FmReassoc) &&
+ MI->getFlag(MachineInstr::MIFlag::FmNsz));
+}
+
+bool SystemZInstrInfo::IsReassociableFMA(const MachineInstr *MI) const {
+ switch (MI->getOpcode()) {
+ case SystemZ::VFMADB:
+ case SystemZ::VFMASB:
+ case SystemZ::WFMAXB:
+ return hasReassocFlags(MI);
+ case SystemZ::WFMADB_CCPseudo:
+ case SystemZ::WFMASB_CCPseudo:
+ return hasReassocFlags(MI) &&
+ MI->findRegisterDefOperandIdx(SystemZ::CC, true /*isDead*/) != -1;
+ default:
+ break;
+ }
+ return false;
+}
+
+bool SystemZInstrInfo::IsReassociableAdd(const MachineInstr *MI) const {
+ switch (MI->getOpcode()) {
+ case SystemZ::VFADB:
+ case SystemZ::VFASB:
+ case SystemZ::WFAXB:
+ return hasReassocFlags(MI);
+ case SystemZ::WFADB_CCPseudo:
+ case SystemZ::WFASB_CCPseudo:
+ return hasReassocFlags(MI) &&
+ MI->findRegisterDefOperandIdx(SystemZ::CC, true/*isDead*/) != -1;
+ default:
+ break;
+ }
+ return false;
+}
+
+// EXPERIMENTAL: -z-fma enables the new SystemZ patterns (FMA2*, FMA1_Add_*),
+// -ppc-fma the PPC-inspired ones (FMA3, FMA2_Add).
+static cl::opt<bool> Z_FMA("z-fma", cl::init(false));
+static cl::opt<bool> PPC_FMA("ppc-fma", cl::init(false));
+
+bool SystemZInstrInfo::getFMAPatterns(
+ MachineInstr &Root, SmallVectorImpl<MachineCombinerPattern> &Patterns,
+ bool DoRegPressureReduce) const {
+ assert(Patterns.empty());
+ MachineBasicBlock *MBB = Root.getParent();
+ const MachineRegisterInfo *MRI = &MBB->getParent()->getRegInfo();
+
+ if (!IsReassociableFMA(&Root))
+ return false;
+
+ const TargetRegisterClass *RC = MRI->getRegClass(Root.getOperand(0).getReg());
+
+ // This is more or less always true.
+ auto AllOpsOK = [&MRI, &RC](const MachineInstr &Instr) {
+ for (const auto &MO : Instr.explicit_operands())
+ if (!(MO.isReg() && MO.getReg().isVirtual() && !MO.getSubReg()))
+ return false;
+ const TargetRegisterClass *DefRC = MRI->getRegClass(Instr.getOperand(0).getReg());
+ if (!DefRC->hasSubClassEq(RC) && !DefRC->hasSuperClassEq(RC))
+ return false;
+ return true;
+ };
+ if (!AllOpsOK(Root))
+ return false;
+
+ MachineInstr *TopAdd = nullptr;
+ std::vector<MachineInstr *> FMAChain;
+ FMAChain.push_back(&Root);
+ Register Acc = Root.getOperand(3).getReg();
+ while (MachineInstr *Prev = MRI->getUniqueVRegDef(Acc)) {
+ if (Prev->getParent() != MBB || !MRI->hasOneNonDBGUse(Acc) ||
+ !AllOpsOK(*Prev))
+ break;
+ if (IsReassociableFMA(Prev)) {
+ FMAChain.push_back(Prev);
+ Acc = Prev->getOperand(3).getReg();
+ continue;
+ }
+ if (IsReassociableAdd(Prev))
+ TopAdd = Prev;
+ break;
+ }
+
+ if (Z_FMA) {
+ if (FMAChain.size() >= 2) {
+ Patterns.push_back(MachineCombinerPattern::FMA2_P1P0);
+ LLVM_DEBUG(dbgs() << "add pattern FMA2_P1P0\n");
+ Patterns.push_back(MachineCombinerPattern::FMA2_P0P1);
+ LLVM_DEBUG(dbgs() << "add pattern FMA2_P0P1\n");
+ Patterns.push_back(MachineCombinerPattern::FMA2);
+ LLVM_DEBUG(dbgs() << "add pattern FMA2\n");
+ }
+ if (FMAChain.size() == 1 && TopAdd) {
+ // The latency of the FMA could potentially be hidden above the add:
+ // Try both sides of the add and let MachineCombiner decide on
+ // profitability.
+ Patterns.push_back(MachineCombinerPattern::FMA1_Add_L);
+ LLVM_DEBUG(dbgs() << "add pattern FMA1_Add_L\n");
+ Patterns.push_back(MachineCombinerPattern::FMA1_Add_R);
+ LLVM_DEBUG(dbgs() << "add pattern FMA1_Add_R\n");
+ }
+ } else if (PPC_FMA) {
+ if (FMAChain.size() >= 3) {
+ Patterns.push_back(MachineCombinerPattern::FMA3);
+ LLVM_DEBUG(dbgs() << "add pattern FMA3\n");
+ }
+ if (FMAChain.size() == 2 && TopAdd) {
+ Patterns.push_back(MachineCombinerPattern::FMA2_Add);
+ LLVM_DEBUG(dbgs() << "add pattern FMA2_Add\n");
+ }
+ }
+
+ return Patterns.size() > 0;
+}
+
+bool SystemZInstrInfo::getMachineCombinerPatterns(
+ MachineInstr &Root, SmallVectorImpl<MachineCombinerPattern> &Patterns,
+ bool DoRegPressureReduce) const {
+
+ if (getFMAPatterns(Root, Patterns, DoRegPressureReduce))
+ return true;
+
+ return TargetInstrInfo::getMachineCombinerPatterns(Root, Patterns,
+ DoRegPressureReduce);
+}
+
+void SystemZInstrInfo::finalizeInsInstrs(
+ MachineInstr &Root, MachineCombinerPattern &P,
+ SmallVectorImpl<MachineInstr *> &InsInstrs) const {
+ const TargetRegisterInfo *TRI =
+ Root.getParent()->getParent()->getSubtarget().getRegisterInfo();
+ for (auto *Inst : InsInstrs) {
+ switch (Inst->getOpcode()) {
+ case SystemZ::WFADB_CCPseudo:
+ case SystemZ::WFASB_CCPseudo:
+ case SystemZ::WFMDB_CCPseudo:
+ case SystemZ::WFMSB_CCPseudo:
+ case SystemZ::WFSDB_CCPseudo:
+ case SystemZ::WFSSB_CCPseudo:
+ case SystemZ::WFMADB_CCPseudo:
+ case SystemZ::WFMASB_CCPseudo:
+ Inst->addRegisterDead(SystemZ::CC, TRI);
+ break;
+ default: break;
+ }
+ }
+}
+
+bool SystemZInstrInfo::isAssociativeAndCommutative(const MachineInstr &Inst,
+ bool Invert) const {
+ unsigned Opc = Inst.getOpcode();
+ if (Invert) {
+ auto InverseOpcode = getInverseOpcode(Opc);
+ if (!InverseOpcode)
+ return false;
+ Opc = *InverseOpcode;
+ }
+
+ switch (Opc) {
+ default:
+ break;
+ // Adds and multiplications.
+ case SystemZ::VFADB:
+ case SystemZ::VFASB:
+ case SystemZ::WFAXB:
+ case SystemZ::WFADB_CCPseudo:
+ case SystemZ::WFASB_CCPseudo:
+ case SystemZ::VFMDB:
+ case SystemZ::VFMSB:
+ case SystemZ::WFMXB:
+ case SystemZ::WFMDB_CCPseudo:
+ case SystemZ::WFMSB_CCPseudo:
+ return hasReassocFlags(&Inst);
+ }
+
+ return false;
+}
+
+std::optional<unsigned>
+SystemZInstrInfo::getInverseOpcode(unsigned Opcode) const {
+ // fadd <=> fsub in various forms.
+ switch (Opcode) {
+ case SystemZ::VFADB: return SystemZ::VFSDB;
+ case SystemZ::VFASB: return SystemZ::VFSSB;
+ case SystemZ::WFAXB: return SystemZ::WFSXB;
+ case SystemZ::WFADB_CCPseudo: return SystemZ::WFSDB_CCPseudo;
+ case SystemZ::WFASB_CCPseudo: return SystemZ::WFSSB_CCPseudo;
+ case SystemZ::VFSDB: return SystemZ::VFADB;
+ case SystemZ::VFSSB: return SystemZ::VFASB;
+ case SystemZ::WFSXB: return SystemZ::WFAXB;
+ case SystemZ::WFSDB_CCPseudo: return SystemZ::WFADB_CCPseudo;
+ case SystemZ::WFSSB_CCPseudo: return SystemZ::WFASB_CCPseudo;
+ default: return std::nullopt;
+ }
+}
+
+void SystemZInstrInfo::genAlternativeCodeSequence(
+ MachineInstr &Root, MachineCombinerPattern Pattern,
+ SmallVectorImpl<MachineInstr *> &InsInstrs,
+ SmallVectorImpl<MachineInstr *> &DelInstrs,
+ DenseMap<unsigned, unsigned> &InstrIdxForVirtReg) const {
+ switch (Pattern) {
+ case MachineCombinerPattern::FMA2_P1P0:
+ case MachineCombinerPattern::FMA2_P0P1:
+ case MachineCombinerPattern::FMA2:
+ case MachineCombinerPattern::FMA1_Add_L:
+ case MachineCombinerPattern::FMA1_Add_R:
+ case MachineCombinerPattern::FMA3:
+ case MachineCombinerPattern::FMA2_Add:
+ reassociateFMA(Root, Pattern, InsInstrs, DelInstrs, InstrIdxForVirtReg);
+ break;
+ default:
+ // Reassociate default patterns.
+ TargetInstrInfo::genAlternativeCodeSequence(Root, Pattern, InsInstrs,
+ DelInstrs, InstrIdxForVirtReg);
+ break;
+ }
+}
+
+static void getSplitFMAOpcodes(unsigned FMAOpc, unsigned &AddOpc,
+ unsigned &MulOpc) {
+ switch (FMAOpc) {
+ case SystemZ::VFMADB: AddOpc = SystemZ::VFADB; MulOpc = SystemZ::VFMDB; break;
+ case SystemZ::VFMASB: AddOpc = SystemZ::VFASB; MulOpc = SystemZ::VFMSB; break;
+ case SystemZ::WFMAXB: AddOpc = SystemZ::WFAXB; MulOpc = SystemZ::WFMXB; break;
+ case SystemZ::WFMADB_CCPseudo:
+ AddOpc = SystemZ::WFADB_CCPseudo; MulOpc = SystemZ::WFMDB_CCPseudo; break;
+ case SystemZ::WFMASB_CCPseudo:
+ AddOpc = SystemZ::WFASB_CCPseudo; MulOpc = SystemZ::WFMSB_CCPseudo; break;
+ default:
+ llvm_unreachable("Expected FMA opcode.");
+ }
+}
+
+void SystemZInstrInfo::reassociateFMA(
+ MachineInstr &Root, MachineCombinerPattern Pattern,
+ SmallVectorImpl<MachineInstr *> &InsInstrs,
+ SmallVectorImpl<MachineInstr *> &DelInstrs,
+ DenseMap<unsigned, unsigned> &InstrIdxForVirtReg) const {
+ MachineFunction *MF = Root.getMF();
+ MachineRegisterInfo &MRI = MF->getRegInfo();
+ const TargetRegisterInfo *TRI = &getRegisterInfo();
+
+ const TargetRegisterClass *RC = Root.getRegClassConstraint(0, this, TRI);
+ Register DstReg = Root.getOperand(0).getReg();
+ std::vector<MachineInstr *> Chain; // XXXXX
+ Chain.push_back(&Root);
+
+ uint16_t IntersectedFlags = Root.getFlags();
+ auto getIntersectedFlags = [&]() {
+ for (auto *MI : Chain)
+ IntersectedFlags &= MI->getFlags();
+ };
+
+ auto createNewVReg = [&](unsigned NewInsIdx) -> Register {
+ Register NewReg = MRI.createVirtualRegister(RC);
+ InstrIdxForVirtReg.insert(std::make_pair(NewReg, NewInsIdx));
+ return NewReg;
+ };
+
+ auto finalizeNewMIs = [&](ArrayRef<MachineInstr *> NewMIs) {
+ for (auto *MI : NewMIs) {
+ setSpecialOperandAttr(*MI, IntersectedFlags);
+ MI->addRegisterDead(SystemZ::CC, TRI);
+ InsInstrs.push_back(MI);
+ }
+ };
+
+ auto deleteOld = [&InsInstrs, &DelInstrs, &Chain]() {
+ assert(!InsInstrs.empty() &&
+ "Insertion instructions set should not be empty!");
+ // Record old instructions for deletion.
+ for (auto *MI : make_range(Chain.rbegin(), Chain.rend()))
+ DelInstrs.push_back(MI);
+ };
+
+ assert(IsReassociableFMA(&Root));
+ unsigned FMAOpc = Root.getOpcode();
+ unsigned AddOpc, MulOpc;
+ getSplitFMAOpcodes(FMAOpc, AddOpc, MulOpc);
+
+#ifndef NDEBUG
+ auto IsAllFMA = [&Chain, &FMAOpc]() {
+ for (auto *MI : Chain)
+ if (MI->getOpcode() != FMAOpc)
+ return false;
+ return true;
+ };
+#endif
+
+ switch (Pattern) {
+ case MachineCombinerPattern::FMA2_P1P0:
+ case MachineCombinerPattern::FMA2_P0P1: {
+ if (Pattern == MachineCombinerPattern::FMA2_P1P0)
+ LLVM_DEBUG(dbgs() << "reassociating using pattern FMA_P1P0\n");
+ else
+ LLVM_DEBUG(dbgs() << "reassociating using pattern FMA_P0P1\n");
+ Chain.push_back(MRI.getUniqueVRegDef(Chain.back()->getOperand(3).getReg()));
+ assert(IsAllFMA());
+ getIntersectedFlags(); // XXXXXXXXXx
+ Register NewVRA = createNewVReg(0);
+ Register NewVRB = createNewVReg(1);
+ unsigned FirstMulIdx =
+ Pattern == MachineCombinerPattern::FMA2_P1P0 ? 1 : 0;
+ unsigned SecondMulIdx = FirstMulIdx == 0 ? 1 : 0;
+ MachineInstr *MINewA =
+ BuildMI(*MF, Chain[FirstMulIdx]->getDebugLoc(), get(MulOpc), NewVRA)
+ .add(Chain[FirstMulIdx]->getOperand(1))
+ .add(Chain[FirstMulIdx]->getOperand(2));
+ MachineInstr *MINewB =
+ BuildMI(*MF, Chain[SecondMulIdx]->getDebugLoc(), get(FMAOpc), NewVRB)
+ .add(Chain[SecondMulIdx]->getOperand(1))
+ .add(Chain[SecondMulIdx]->getOperand(2))
+ .addReg(NewVRA);
+ MachineInstr *MINewC =
+ BuildMI(*MF, Chain[1]->getDebugLoc(), get(AddOpc), DstReg)
+ .add(Chain[1]->getOperand(3))
+ .addReg(NewVRB);
+ finalizeNewMIs({MINewA, MINewB, MINewC});
+ break;
+ }
+ case MachineCombinerPattern::FMA2: {
+ LLVM_DEBUG(dbgs() << "reassociating using pattern FMA2\n");
+ Chain.push_back(MRI.getUniqueVRegDef(Chain.back()->getOperand(3).getReg()));
+ assert(IsAllFMA());
+ getIntersectedFlags();
+ Register NewVRA = createNewVReg(0);
+ MachineInstr *MINewA =
+ BuildMI(*MF, Chain[0]->getDebugLoc(), get(FMAOpc), NewVRA)
+ .add(Chain[0]->getOperand(1))
+ .add(Chain[0]->getOperand(2))
+ .add(Chain[1]->getOperand(3));
+ MachineInstr *MINewB =
+ BuildMI(*MF, Chain[1]->getDebugLoc(), get(FMAOpc), DstReg)
+ .add(Chain[1]->getOperand(1))
+ .add(Chain[1]->getOperand(2))
+ .addReg(NewVRA);
+ finalizeNewMIs({MINewA, MINewB});
+ break;
+ }
+ case MachineCombinerPattern::FMA1_Add_L:
+ case MachineCombinerPattern::FMA1_Add_R: {
+ if (Pattern == MachineCombinerPattern::FMA1_Add_L)
+ LLVM_DEBUG(dbgs() << "reassociating using pattern FMA1_Add_L\n");
+ else
+ LLVM_DEBUG(dbgs() << "reassociating using pattern FMA1_Add_R\n");
+ assert(IsAllFMA());
+ Chain.push_back(MRI.getUniqueVRegDef(Chain.back()->getOperand(3).getReg()));
+ assert(Chain.back()->getOpcode() == AddOpc && "Expected matching Add");
+ getIntersectedFlags();
+ unsigned Op = Pattern == MachineCombinerPattern::FMA1_Add_L ? 1 : 2;
+ unsigned OtherOp = Op == 1 ? 2 : 1;
+ Register NewVRA = createNewVReg(0);
+ MachineInstr *MINewA =
+ BuildMI(*MF, Chain[0]->getDebugLoc(), get(FMAOpc), NewVRA)
+ .add(Chain[0]->getOperand(1))
+ .add(Chain[0]->getOperand(2))
+ .add(Chain[1]->getOperand(Op));
+ MachineInstr *MINewB =
+ BuildMI(*MF, Chain[1]->getDebugLoc(), get(AddOpc), DstReg)
+ .addReg(NewVRA)
+ .add(Chain[1]->getOperand(OtherOp));
+ finalizeNewMIs({MINewA, MINewB});
+ break;
+ }
+ case MachineCombinerPattern::FMA3: {
+ LLVM_DEBUG(dbgs() << "reassociating using pattern FMA3\n");
+ Chain.push_back(MRI.getUniqueVRegDef(Chain.back()->getOperand(3).getReg()));
+ Chain.push_back(MRI.getUniqueVRegDef(Chain.back()->getOperand(3).getReg()));
+ assert(IsAllFMA());
+ getIntersectedFlags();
+ Register NewVRA = createNewVReg(0);
+ Register NewVRB = createNewVReg(1);
+ Register NewVRC = createNewVReg(2);
+ MachineInstr *MINewA =
+ BuildMI(*MF, Chain[2]->getDebugLoc(), get(MulOpc), NewVRA)
+ .add(Chain[2]->getOperand(1))
+ .add(Chain[2]->getOperand(2));
+ MachineInstr *MINewB =
+ BuildMI(*MF, Chain[1]->getDebugLoc(), get(FMAOpc), NewVRB)
+ .add(Chain[1]->getOperand(1))
+ .add(Chain[1]->getOperand(2))
+ .add(Chain[2]->getOperand(3));
+ MachineInstr *MINewC =
+ BuildMI(*MF, Chain[0]->getDebugLoc(), get(FMAOpc), NewVRC)
+ .add(Chain[0]->getOperand(1))
+ .add(Chain[0]->getOperand(2))
+ .addReg(NewVRA);
+ MachineInstr *MINewD =
+ BuildMI(*MF, Chain[0]->getDebugLoc(), get(AddOpc), DstReg)
+ .addReg(NewVRB)
+ .addReg(NewVRC);
+ finalizeNewMIs({MINewA, MINewB, MINewC, MINewD});
+ break;
+ }
+ case MachineCombinerPattern::FMA2_Add: {
+ LLVM_DEBUG(dbgs() << "reassociating using pattern FMA2_Add\n");
+ Chain.push_back(MRI.getUniqueVRegDef(Chain.back()->getOperand(3).getReg()));
+ assert(IsAllFMA());
+ Chain.push_back(MRI.getUniqueVRegDef(Chain.back()->getOperand(3).getReg()));
+ assert(Chain.back()->getOpcode() == AddOpc && "Expected matching Add");
+ getIntersectedFlags();
+ Register NewVRA = createNewVReg(0);
+ Register NewVRB = createNewVReg(1);
+ MachineInstr *MINewA =
+ BuildMI(*MF, Chain[1]->getDebugLoc(), get(FMAOpc), NewVRA)
+ .add(Chain[1]->getOperand(1))
+ .add(Chain[1]->getOperand(2))
+ .add(Chain[2]->getOperand(1));
+ MachineInstr *MINewB =
+ BuildMI(*MF, Chain[0]->getDebugLoc(), get(FMAOpc), NewVRB)
+ .add(Chain[0]->getOperand(1))
+ .add(Chain[0]->getOperand(2))
+ .add(Chain[2]->getOperand(2));
+ MachineInstr *MINewC =
+ BuildMI(*MF, Chain[0]->getDebugLoc(), get(AddOpc), DstReg)
+ .addReg(NewVRA)
+ .addReg(NewVRB);
+ finalizeNewMIs({MINewA, MINewB, MINewC});
+ break;
+ }
+ default:
+ llvm_unreachable("not recognized pattern!");
+ }
+
+ deleteOld();
+}
+
+bool
+SystemZInstrInfo::accumulateInstrSeqToRootLatency(MachineInstr &Root) const {
+  // This doesn't make much sense for FMA patterns, as they typically use an
+  // extra Add to do things in parallel.
+ if (IsReassociableFMA(&Root)) // XXXXXXXXXXXX
+ return false;
+
+ return true;
+}
+
+void SystemZInstrInfo::setSpecialOperandAttr(MachineInstr &MI,
+ uint32_t Flags) const {
+ MI.setFlags(Flags);
+ MI.clearFlag(MachineInstr::MIFlag::NoSWrap);
+ MI.clearFlag(MachineInstr::MIFlag::NoUWrap);
+ MI.clearFlag(MachineInstr::MIFlag::IsExact);
+}
+
MachineInstr *SystemZInstrInfo::foldMemoryOperandImpl(
MachineFunction &MF, MachineInstr &MI, ArrayRef<unsigned> Ops,
MachineBasicBlock::iterator InsertPt, int FrameIndex,
diff --git a/llvm/lib/Target/SystemZ/SystemZInstrInfo.h b/llvm/lib/Target/SystemZ/SystemZInstrInfo.h
index cdf07310108a96..6394a3ef925b5d 100644
--- a/llvm/lib/Target/SystemZ/SystemZInstrInfo.h
+++ b/llvm/lib/Target/SystemZ/SystemZInstrInfo.h
@@ -254,8 +254,15 @@ class SystemZInstrInfo : public SystemZGenInstrInfo {
const DebugLoc &DL, Register DstReg,
ArrayRef<MachineOperand> Cond, Register TrueReg,
Register FalseReg) const override;
+ void transferMIFlag(MachineInstr *OldMI, MachineInstr *NewMI,
+ MachineInstr::MIFlag Flag) const;
+ MachineInstr *optimizeLoadInstr(MachineInstr &MI,
+ MachineRegisterInfo *MRI,
+ Register &FoldAsLoadDefReg,
+ MachineInstr *&DefMI) const override;
bool foldImmediate(MachineInstr &UseMI, MachineInstr &DefMI, Register Reg,
MachineRegisterInfo *MRI) const override;
+
bool isPredicable(const MachineInstr &MI) const override;
bool isProfitableToIfCvt(MachineBasicBlock &MBB, unsigned NumCycles,
unsigned ExtraPredCycles,
@@ -285,6 +292,38 @@ class SystemZInstrInfo : public SystemZGenInstrInfo {
Register VReg) const override;
MachineInstr *convertToThreeAddress(MachineInstr &MI, LiveVariables *LV,
LiveIntervals *LIS) const override;
+
+ bool useMachineCombiner() const override { return true; }
+ bool IsReassociableFMA(const MachineInstr *MI) const;
+ bool IsReassociableAdd(const MachineInstr *MI) const;
+ bool getFMAPatterns(MachineInstr &Root,
+ SmallVectorImpl<MachineCombinerPattern> &P,
+ bool DoRegPressureReduce) const;
+ bool getMachineCombinerPatterns(MachineInstr &Root,
+ SmallVectorImpl<MachineCombinerPattern> &P,
+ bool DoRegPressureReduce) const override;
+ void
+ finalizeInsInstrs(MachineInstr &Root, MachineCombinerPattern &P,
+ SmallVectorImpl<MachineInstr *> &InsInstrs) const override;
+ bool isAssociativeAndCommutative(const MachineInstr &Inst,
+ bool Invert) const override;
+ std::optional<unsigned> getInverseOpcode(unsigned Opcode) const override;
+ void genAlternativeCodeSequence(
+ MachineInstr &Root, MachineCombinerPattern Pattern,
+ SmallVectorImpl<MachineInstr *> &InsInstrs,
+ SmallVectorImpl<MachineInstr *> &DelInstrs,
+ DenseMap<unsigned, unsigned> &InstIdxForVirtReg) const override;
+ void reassociateFMA(
+ MachineInstr &Root, MachineCombinerPattern Pattern,
+ SmallVectorImpl<MachineInstr *> &InsInstrs,
+ SmallVectorImpl<MachineInstr *> &DelInstrs,
+ DenseMap<unsigned, unsigned> &InstrIdxForVirtReg) const;
+ bool accumulateInstrSeqToRootLatency(MachineInstr &Root) const override;
+ int getExtendResourceLenLimit() const override { return 0; } //XXX
+ // SystemZ specific version of setSpecialOperandAttr that copies Flags to
+ // MI and clears nuw, nsw, and exact flags.
+ void setSpecialOperandAttr(MachineInstr &MI, uint32_t Flags) const;
+
MachineInstr *
foldMemoryOperandImpl(MachineFunction &MF, MachineInstr &MI,
ArrayRef<unsigned> Ops,
diff --git a/llvm/lib/Target/SystemZ/SystemZInstrVector.td b/llvm/lib/Target/SystemZ/SystemZInstrVector.td
index 799b27d74414d5..4f85f191e4c09b 100644
--- a/llvm/lib/Target/SystemZ/SystemZInstrVector.td
+++ b/llvm/lib/Target/SystemZ/SystemZInstrVector.td
@@ -139,7 +139,7 @@ let Predicates = [FeatureVector] in {
// LEY and LDY offer full 20-bit displacement fields. It's often better
// to use those instructions rather than force a 20-bit displacement
// into a GPR temporary.
- let mayLoad = 1 in {
+ let mayLoad = 1, canFoldAsLoad = 1 in {
def VL32 : UnaryAliasVRX<load, v32sb, bdxaddr12pair>;
def VL64 : UnaryAliasVRX<load, v64db, bdxaddr12pair>;
}
@@ -1061,15 +1061,15 @@ multiclass VectorRounding<Instruction insn, TypedReg tr> {
let Predicates = [FeatureVector] in {
// Add.
let Uses = [FPC], mayRaiseFPException = 1, isCommutable = 1 in {
- def VFA : BinaryVRRcFloatGeneric<"vfa", 0xE7E3>;
- def VFADB : BinaryVRRc<"vfadb", 0xE7E3, any_fadd, v128db, v128db, 3, 0>;
- def WFADB : BinaryVRRc<"wfadb", 0xE7E3, any_fadd, v64db, v64db, 3, 8, 0,
- "adbr">;
+ def VFA : BinaryVRRcFloatGeneric<"vfa", 0xE7E3>;
+ def VFADB : BinaryVRRc<"vfadb", 0xE7E3, any_fadd, v128db, v128db, 3, 0>;
+ defm WFADB : BinaryVRRcAndCCPseudo<"wfadb", 0xE7E3, any_fadd,
+ z_fadd_reassoc, v64db, v64db, 3, 8, 0, "adbr">;
let Predicates = [FeatureVectorEnhancements1] in {
- def VFASB : BinaryVRRc<"vfasb", 0xE7E3, any_fadd, v128sb, v128sb, 2, 0>;
- def WFASB : BinaryVRRc<"wfasb", 0xE7E3, any_fadd, v32sb, v32sb, 2, 8, 0,
- "aebr">;
- def WFAXB : BinaryVRRc<"wfaxb", 0xE7E3, any_fadd, v128xb, v128xb, 4, 8>;
+ def VFASB : BinaryVRRc<"vfasb", 0xE7E3, any_fadd, v128sb, v128sb, 2, 0>;
+ defm WFASB : BinaryVRRcAndCCPseudo<"wfasb", 0xE7E3, any_fadd,
+ z_fadd_reassoc, v32sb, v32sb, 2, 8, 0, "aebr">;
+ def WFAXB : BinaryVRRc<"wfaxb", 0xE7E3, any_fadd, v128xb, v128xb, 4, 8>;
}
}
@@ -1274,29 +1274,29 @@ let Predicates = [FeatureVector] in {
// Multiply.
let Uses = [FPC], mayRaiseFPException = 1, isCommutable = 1 in {
- def VFM : BinaryVRRcFloatGeneric<"vfm", 0xE7E7>;
- def VFMDB : BinaryVRRc<"vfmdb", 0xE7E7, any_fmul, v128db, v128db, 3, 0>;
- def WFMDB : BinaryVRRc<"wfmdb", 0xE7E7, any_fmul, v64db, v64db, 3, 8, 0,
- "mdbr">;
+ def VFM : BinaryVRRcFloatGeneric<"vfm", 0xE7E7>;
+ def VFMDB : BinaryVRRc<"vfmdb", 0xE7E7, any_fmul, v128db, v128db, 3, 0>;
+ defm WFMDB : BinaryVRRcAndCCPseudo<"wfmdb", 0xE7E7, any_fmul,
+ z_fmul_reassoc, v64db, v64db, 3, 8, 0, "mdbr">;
let Predicates = [FeatureVectorEnhancements1] in {
- def VFMSB : BinaryVRRc<"vfmsb", 0xE7E7, any_fmul, v128sb, v128sb, 2, 0>;
- def WFMSB : BinaryVRRc<"wfmsb", 0xE7E7, any_fmul, v32sb, v32sb, 2, 8, 0,
- "meebr">;
- def WFMXB : BinaryVRRc<"wfmxb", 0xE7E7, any_fmul, v128xb, v128xb, 4, 8>;
+ def VFMSB : BinaryVRRc<"vfmsb", 0xE7E7, any_fmul, v128sb, v128sb, 2, 0>;
+ defm WFMSB : BinaryVRRcAndCCPseudo<"wfmsb", 0xE7E7, any_fmul,
+ z_fmul_reassoc, v32sb, v32sb, 2, 8, 0, "meebr">;
+ def WFMXB : BinaryVRRc<"wfmxb", 0xE7E7, any_fmul, v128xb, v128xb, 4, 8>;
}
}
// Multiply and add.
let Uses = [FPC], mayRaiseFPException = 1, isCommutable = 1 in {
- def VFMA : TernaryVRReFloatGeneric<"vfma", 0xE78F>;
- def VFMADB : TernaryVRRe<"vfmadb", 0xE78F, any_fma, v128db, v128db, 0, 3>;
- def WFMADB : TernaryVRRe<"wfmadb", 0xE78F, any_fma, v64db, v64db, 8, 3,
- "madbr">;
+ def VFMA : TernaryVRReFloatGeneric<"vfma", 0xE78F>;
+ def VFMADB : TernaryVRRe<"vfmadb", 0xE78F, any_fma, v128db, v128db, 0, 3>;
+ defm WFMADB : TernaryVRReAndCCPseudo<"wfmadb", 0xE78F, any_fma,
+ z_fma_reassoc, v64db, v64db, 8, 3, "madbr">;
let Predicates = [FeatureVectorEnhancements1] in {
- def VFMASB : TernaryVRRe<"vfmasb", 0xE78F, any_fma, v128sb, v128sb, 0, 2>;
- def WFMASB : TernaryVRRe<"wfmasb", 0xE78F, any_fma, v32sb, v32sb, 8, 2,
- "maebr">;
- def WFMAXB : TernaryVRRe<"wfmaxb", 0xE78F, any_fma, v128xb, v128xb, 8, 4>;
+ def VFMASB : TernaryVRRe<"vfmasb", 0xE78F, any_fma, v128sb, v128sb, 0, 2>;
+ defm WFMASB : TernaryVRReAndCCPseudo<"wfmasb", 0xE78F, any_fma,
+ z_fma_reassoc, v32sb, v32sb, 8, 2, "maebr">;
+ def WFMAXB : TernaryVRRe<"wfmaxb", 0xE78F, any_fma, v128xb, v128xb, 8, 4>;
}
}
@@ -1389,15 +1389,15 @@ let Predicates = [FeatureVector] in {
// Subtract.
let Uses = [FPC], mayRaiseFPException = 1 in {
- def VFS : BinaryVRRcFloatGeneric<"vfs", 0xE7E2>;
- def VFSDB : BinaryVRRc<"vfsdb", 0xE7E2, any_fsub, v128db, v128db, 3, 0>;
- def WFSDB : BinaryVRRc<"wfsdb", 0xE7E2, any_fsub, v64db, v64db, 3, 8, 0,
- "sdbr">;
+ def VFS : BinaryVRRcFloatGeneric<"vfs", 0xE7E2>;
+ def VFSDB : BinaryVRRc<"vfsdb", 0xE7E2, any_fsub, v128db, v128db, 3, 0>;
+ defm WFSDB : BinaryVRRcAndCCPseudo<"wfsdb", 0xE7E2, any_fsub,
+ z_fsub_reassoc, v64db, v64db, 3, 8, 0, "sdbr">;
let Predicates = [FeatureVectorEnhancements1] in {
- def VFSSB : BinaryVRRc<"vfssb", 0xE7E2, any_fsub, v128sb, v128sb, 2, 0>;
- def WFSSB : BinaryVRRc<"wfssb", 0xE7E2, any_fsub, v32sb, v32sb, 2, 8, 0,
- "sebr">;
- def WFSXB : BinaryVRRc<"wfsxb", 0xE7E2, any_fsub, v128xb, v128xb, 4, 8>;
+ def VFSSB : BinaryVRRc<"vfssb", 0xE7E2, any_fsub, v128sb, v128sb, 2, 0>;
+ defm WFSSB : BinaryVRRcAndCCPseudo<"wfssb", 0xE7E2, any_fsub,
+ z_fsub_reassoc, v32sb, v32sb, 2, 8, 0, "sebr">;
+ def WFSXB : BinaryVRRc<"wfsxb", 0xE7E2, any_fsub, v128xb, v128xb, 4, 8>;
}
}
diff --git a/llvm/lib/Target/SystemZ/SystemZOperators.td b/llvm/lib/Target/SystemZ/SystemZOperators.td
index d98bb886c18506..272096e8a74ec1 100644
--- a/llvm/lib/Target/SystemZ/SystemZOperators.td
+++ b/llvm/lib/Target/SystemZ/SystemZOperators.td
@@ -727,6 +727,35 @@ def any_fnms : PatFrag<(ops node:$src1, node:$src2, node:$src3),
// Floating-point negative absolute.
def fnabs : PatFrag<(ops node:$ptr), (fneg (fabs node:$ptr))>;
+// Floating-point operations which are not marked as reassociable.
+def z_any_fadd_noreassoc : PatFrag<(ops node:$src1, node:$src2),
+ (any_fadd node:$src1, node:$src2),
+ [{ return !isReassociable(N); }]>;
+def z_any_fsub_noreassoc : PatFrag<(ops node:$src1, node:$src2),
+ (any_fsub node:$src1, node:$src2),
+ [{ return !isReassociable(N); }]>;
+def z_any_fmul_noreassoc : PatFrag<(ops node:$src1, node:$src2),
+ (any_fmul node:$src1, node:$src2),
+ [{ return !isReassociable(N); }]>;
+def z_any_fma_noreassoc : PatFrag<(ops node:$src1, node:$src2, node:$src3),
+ (any_fma node:$src2, node:$src3, node:$src1),
+ [{ return !isReassociable(N); }]>;
+
+// Floating-point operations which are reassociable (strict fp nodes not
+// supported).
+def z_fadd_reassoc : PatFrag<(ops node:$src1, node:$src2),
+ (fadd node:$src1, node:$src2),
+ [{ return isReassociable(N); }]>;
+def z_fsub_reassoc : PatFrag<(ops node:$src1, node:$src2),
+ (fsub node:$src1, node:$src2),
+ [{ return isReassociable(N); }]>;
+def z_fmul_reassoc : PatFrag<(ops node:$src1, node:$src2),
+ (fmul node:$src1, node:$src2),
+ [{ return isReassociable(N); }]>;
+def z_fma_reassoc : PatFrag<(ops node:$src1, node:$src2, node:$src3),
+ (fma node:$src1, node:$src2, node:$src3),
+ [{ return isReassociable(N); }]>;
+
// Strict floating-point fragments.
def z_any_fcmp : PatFrags<(ops node:$lhs, node:$rhs),
[(z_strict_fcmp node:$lhs, node:$rhs),
diff --git a/llvm/lib/Target/SystemZ/SystemZScheduleZ13.td b/llvm/lib/Target/SystemZ/SystemZScheduleZ13.td
index 9ce1a0d06b5afd..e78ed03e59f3bd 100644
--- a/llvm/lib/Target/SystemZ/SystemZScheduleZ13.td
+++ b/llvm/lib/Target/SystemZ/SystemZScheduleZ13.td
@@ -1344,15 +1344,15 @@ def : InstRW<[WLat3, WLat3, VecXsPm, NormalGr], (instregex "(V|W)FTCIDB$")>;
// Add / subtract
def : InstRW<[WLat8, VecBF2, NormalGr], (instregex "VF(A|S)$")>;
def : InstRW<[WLat8, VecBF2, NormalGr], (instregex "VF(A|S)DB$")>;
-def : InstRW<[WLat7, VecBF, NormalGr], (instregex "WF(A|S)DB$")>;
+def : InstRW<[WLat7, VecBF, NormalGr], (instregex "WF(A|S)DB(_CCPseudo)?$")>;
// Multiply / multiply-and-add/subtract
def : InstRW<[WLat8, VecBF2, NormalGr], (instregex "VFM$")>;
def : InstRW<[WLat8, VecBF2, NormalGr], (instregex "VFMDB$")>;
-def : InstRW<[WLat7, VecBF, NormalGr], (instregex "WFMDB$")>;
+def : InstRW<[WLat7, VecBF, NormalGr], (instregex "WFMDB(_CCPseudo)?$")>;
def : InstRW<[WLat8, VecBF2, NormalGr], (instregex "VFM(A|S)$")>;
def : InstRW<[WLat8, VecBF2, NormalGr], (instregex "VFM(A|S)DB$")>;
-def : InstRW<[WLat7, VecBF, NormalGr], (instregex "WFM(A|S)DB$")>;
+def : InstRW<[WLat7, VecBF, NormalGr], (instregex "WFM(A|S)DB(_CCPseudo)?$")>;
// Divide / square root
def : InstRW<[WLat30, VecFPd, NormalGr], (instregex "VFD$")>;
diff --git a/llvm/lib/Target/SystemZ/SystemZScheduleZ14.td b/llvm/lib/Target/SystemZ/SystemZScheduleZ14.td
index 7e6302ae656743..b73fc9dcfcb781 100644
--- a/llvm/lib/Target/SystemZ/SystemZScheduleZ14.td
+++ b/llvm/lib/Target/SystemZ/SystemZScheduleZ14.td
@@ -1388,22 +1388,22 @@ def : InstRW<[WLat3, WLat3, VecDFX, NormalGr], (instregex "WFTCIXB$")>;
// Add / subtract
def : InstRW<[WLat8, VecBF2, NormalGr], (instregex "VF(A|S)$")>;
def : InstRW<[WLat7, VecBF, NormalGr], (instregex "VF(A|S)DB$")>;
-def : InstRW<[WLat7, VecBF, NormalGr], (instregex "WF(A|S)DB$")>;
+def : InstRW<[WLat7, VecBF, NormalGr], (instregex "WF(A|S)DB(_CCPseudo)?$")>;
def : InstRW<[WLat8, VecBF2, NormalGr], (instregex "VF(A|S)SB$")>;
-def : InstRW<[WLat7, VecBF, NormalGr], (instregex "WF(A|S)SB$")>;
+def : InstRW<[WLat7, VecBF, NormalGr], (instregex "WF(A|S)SB(_CCPseudo)?$")>;
def : InstRW<[WLat10, VecDF2, NormalGr], (instregex "WF(A|S)XB$")>;
// Multiply / multiply-and-add/subtract
def : InstRW<[WLat8, VecBF2, NormalGr], (instregex "VFM$")>;
def : InstRW<[WLat7, VecBF, NormalGr], (instregex "VFMDB$")>;
-def : InstRW<[WLat7, VecBF, NormalGr], (instregex "WFM(D|S)B$")>;
+def : InstRW<[WLat7, VecBF, NormalGr], (instregex "WFM(D|S)B(_CCPseudo)?$")>;
def : InstRW<[WLat8, VecBF2, NormalGr], (instregex "VFMSB$")>;
def : InstRW<[WLat20, VecDF2, NormalGr], (instregex "WFMXB$")>;
def : InstRW<[WLat8, VecBF2, NormalGr], (instregex "VF(N)?M(A|S)$")>;
def : InstRW<[WLat7, VecBF, NormalGr], (instregex "VF(N)?M(A|S)DB$")>;
-def : InstRW<[WLat7, VecBF, NormalGr], (instregex "WF(N)?M(A|S)DB$")>;
+def : InstRW<[WLat7, VecBF, NormalGr], (instregex "WF(N)?M(A|S)DB(_CCPseudo)?$")>;
def : InstRW<[WLat8, VecBF2, NormalGr], (instregex "VF(N)?M(A|S)SB$")>;
-def : InstRW<[WLat7, VecBF, NormalGr], (instregex "WF(N)?M(A|S)SB$")>;
+def : InstRW<[WLat7, VecBF, NormalGr], (instregex "WF(N)?M(A|S)SB(_CCPseudo)?$")>;
def : InstRW<[WLat30, VecDF2, NormalGr], (instregex "WF(N)?M(A|S)XB$")>;
// Divide / square root
diff --git a/llvm/lib/Target/SystemZ/SystemZScheduleZ15.td b/llvm/lib/Target/SystemZ/SystemZScheduleZ15.td
index 89edcf426bd714..4030c81a8c7536 100644
--- a/llvm/lib/Target/SystemZ/SystemZScheduleZ15.td
+++ b/llvm/lib/Target/SystemZ/SystemZScheduleZ15.td
@@ -1431,21 +1431,21 @@ def : InstRW<[WLat3, WLat3, VecDFX, NormalGr], (instregex "WFTCIXB$")>;
// Add / subtract
def : InstRW<[WLat6, VecBF, NormalGr], (instregex "VF(A|S)$")>;
def : InstRW<[WLat6, VecBF, NormalGr], (instregex "VF(A|S)DB$")>;
-def : InstRW<[WLat6, VecBF, NormalGr], (instregex "WF(A|S)DB$")>;
+def : InstRW<[WLat6, VecBF, NormalGr], (instregex "WF(A|S)DB(_CCPseudo)?$")>;
def : InstRW<[WLat6, VecBF, NormalGr], (instregex "VF(A|S)SB$")>;
-def : InstRW<[WLat6, VecBF, NormalGr], (instregex "WF(A|S)SB$")>;
+def : InstRW<[WLat6, VecBF, NormalGr], (instregex "WF(A|S)SB(_CCPseudo)?$")>;
def : InstRW<[WLat10, VecDF2, NormalGr], (instregex "WF(A|S)XB$")>;
// Multiply / multiply-and-add/subtract
def : InstRW<[WLat6, VecBF, NormalGr], (instregex "VFM(DB)?$")>;
-def : InstRW<[WLat6, VecBF, NormalGr], (instregex "WFM(D|S)B$")>;
+def : InstRW<[WLat6, VecBF, NormalGr], (instregex "WFM(D|S)B(_CCPseudo)?$")>;
def : InstRW<[WLat6, VecBF, NormalGr], (instregex "VFMSB$")>;
def : InstRW<[WLat20, VecDF2, NormalGr], (instregex "WFMXB$")>;
def : InstRW<[WLat6, VecBF, NormalGr], (instregex "VF(N)?M(A|S)$")>;
def : InstRW<[WLat6, VecBF, NormalGr], (instregex "VF(N)?M(A|S)DB$")>;
-def : InstRW<[WLat6, VecBF, NormalGr], (instregex "WF(N)?M(A|S)DB$")>;
+def : InstRW<[WLat6, VecBF, NormalGr], (instregex "WF(N)?M(A|S)DB(_CCPseudo)?$")>;
def : InstRW<[WLat6, VecBF, NormalGr], (instregex "VF(N)?M(A|S)SB$")>;
-def : InstRW<[WLat6, VecBF, NormalGr], (instregex "WF(N)?M(A|S)SB$")>;
+def : InstRW<[WLat6, VecBF, NormalGr], (instregex "WF(N)?M(A|S)SB(_CCPseudo)?$")>;
def : InstRW<[WLat30, VecDF2, NormalGr], (instregex "WF(N)?M(A|S)XB$")>;
// Divide / square root
diff --git a/llvm/lib/Target/SystemZ/SystemZScheduleZ16.td b/llvm/lib/Target/SystemZ/SystemZScheduleZ16.td
index 8f6dc3befc1976..6972ed7bb70716 100644
--- a/llvm/lib/Target/SystemZ/SystemZScheduleZ16.td
+++ b/llvm/lib/Target/SystemZ/SystemZScheduleZ16.td
@@ -1437,21 +1437,21 @@ def : InstRW<[WLat3, WLat3, VecDFX, NormalGr], (instregex "WFTCIXB$")>;
// Add / subtract
def : InstRW<[WLat6, VecBF, NormalGr], (instregex "VF(A|S)$")>;
def : InstRW<[WLat6, VecBF, NormalGr], (instregex "VF(A|S)DB$")>;
-def : InstRW<[WLat6, VecBF, NormalGr], (instregex "WF(A|S)DB$")>;
+def : InstRW<[WLat6, VecBF, NormalGr], (instregex "WF(A|S)DB(_CCPseudo)?$")>;
def : InstRW<[WLat6, VecBF, NormalGr], (instregex "VF(A|S)SB$")>;
-def : InstRW<[WLat6, VecBF, NormalGr], (instregex "WF(A|S)SB$")>;
+def : InstRW<[WLat6, VecBF, NormalGr], (instregex "WF(A|S)SB(_CCPseudo)?$")>;
def : InstRW<[WLat10, VecDF2, NormalGr], (instregex "WF(A|S)XB$")>;
// Multiply / multiply-and-add/subtract
def : InstRW<[WLat6, VecBF, NormalGr], (instregex "VFM(DB)?$")>;
-def : InstRW<[WLat6, VecBF, NormalGr], (instregex "WFM(D|S)B$")>;
+def : InstRW<[WLat6, VecBF, NormalGr], (instregex "WFM(D|S)B(_CCPseudo)?$")>;
def : InstRW<[WLat6, VecBF, NormalGr], (instregex "VFMSB$")>;
def : InstRW<[WLat20, VecDF2, NormalGr], (instregex "WFMXB$")>;
def : InstRW<[WLat6, VecBF, NormalGr], (instregex "VF(N)?M(A|S)$")>;
def : InstRW<[WLat6, VecBF, NormalGr], (instregex "VF(N)?M(A|S)DB$")>;
-def : InstRW<[WLat6, VecBF, NormalGr], (instregex "WF(N)?M(A|S)DB$")>;
+def : InstRW<[WLat6, VecBF, NormalGr], (instregex "WF(N)?M(A|S)DB(_CCPseudo)?$")>;
def : InstRW<[WLat6, VecBF, NormalGr], (instregex "VF(N)?M(A|S)SB$")>;
-def : InstRW<[WLat6, VecBF, NormalGr], (instregex "WF(N)?M(A|S)SB$")>;
+def : InstRW<[WLat6, VecBF, NormalGr], (instregex "WF(N)?M(A|S)SB(_CCPseudo)?$")>;
def : InstRW<[WLat20, VecDF2, NormalGr], (instregex "WF(N)?M(A|S)XB$")>;
// Divide / square root
diff --git a/llvm/lib/Target/SystemZ/SystemZTargetMachine.cpp b/llvm/lib/Target/SystemZ/SystemZTargetMachine.cpp
index 121512d5a7e589..af2778792c4017 100644
--- a/llvm/lib/Target/SystemZ/SystemZTargetMachine.cpp
+++ b/llvm/lib/Target/SystemZ/SystemZTargetMachine.cpp
@@ -29,6 +29,11 @@
using namespace llvm;
+static cl::opt<bool>
+EnableMachineCombinerPass("systemz-machine-combiner",
+ cl::desc("Enable the machine combiner pass"),
+ cl::init(true), cl::Hidden);
+
// NOLINTNEXTLINE(readability-identifier-naming)
extern "C" LLVM_EXTERNAL_VISIBILITY void LLVMInitializeSystemZTarget() {
// Register the target.
@@ -244,11 +249,16 @@ bool SystemZPassConfig::addInstSelector() {
bool SystemZPassConfig::addILPOpts() {
addPass(&EarlyIfConverterID);
+
+ if (EnableMachineCombinerPass)
+ addPass(&MachineCombinerID);
+
return true;
}
void SystemZPassConfig::addPreRegAlloc() {
addPass(createSystemZCopyPhysRegsPass(getSystemZTargetMachine()));
+ addPass(createSystemZFinalizeReassociationPass(getSystemZTargetMachine()));
}
void SystemZPassConfig::addPostRewrite() {
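
The two hunks above are the key ordering point: MachineCombiner runs in addILPOpts on the CC-less reg/reg pseudos, and the new finalize pass runs just before register allocation to lower whatever pseudos remain. As a minimal sketch of what that lowering amounts to — illustrative only, assuming a hypothetical getNonPseudoOpcode() helper; the actual SystemZFinalizeReassociation.cpp is part of this patch:

```
// Illustrative sketch, not the patch's actual pass body.
// getNonPseudoOpcode() is an assumed helper mapping e.g.
// WFADB_CCPseudo -> WFADB; the real opcode clobbers CC, so an
// implicit CC def is added when the pseudo is rewritten.
bool SystemZFinalizeReassociation::runOnMachineFunction(MachineFunction &MF) {
  const SystemZInstrInfo *TII =
      MF.getSubtarget<SystemZSubtarget>().getInstrInfo();
  bool Changed = false;
  for (MachineBasicBlock &MBB : MF)
    for (MachineInstr &MI : MBB)
      if (unsigned RealOpc = getNonPseudoOpcode(MI.getOpcode())) {
        MI.setDesc(TII->get(RealOpc));
        MI.addOperand(MF, MachineOperand::CreateReg(SystemZ::CC,
                                                    /*isDef=*/true,
                                                    /*isImp=*/true));
        Changed = true;
      }
  return Changed;
}
```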
diff --git a/llvm/lib/Target/X86/X86InstrInfo.cpp b/llvm/lib/Target/X86/X86InstrInfo.cpp
index 0f21880f6df90c..c8c5e697eabe13 100644
--- a/llvm/lib/Target/X86/X86InstrInfo.cpp
+++ b/llvm/lib/Target/X86/X86InstrInfo.cpp
@@ -5481,7 +5481,7 @@ bool X86InstrInfo::optimizeCompareInstr(MachineInstr &CmpInstr, Register SrcReg,
/// register, the virtual register is used once in the same BB, and the
/// instructions in-between do not load or store, and have no side effects.
MachineInstr *X86InstrInfo::optimizeLoadInstr(MachineInstr &MI,
- const MachineRegisterInfo *MRI,
+ MachineRegisterInfo *MRI,
Register &FoldAsLoadDefReg,
MachineInstr *&DefMI) const {
// Check whether we can move DefMI here.
diff --git a/llvm/lib/Target/X86/X86InstrInfo.h b/llvm/lib/Target/X86/X86InstrInfo.h
index 996a24d9e8a944..c8b5152adfc45a 100644
--- a/llvm/lib/Target/X86/X86InstrInfo.h
+++ b/llvm/lib/Target/X86/X86InstrInfo.h
@@ -549,7 +549,7 @@ class X86InstrInfo final : public X86GenInstrInfo {
const MachineRegisterInfo *MRI) const override;
MachineInstr *optimizeLoadInstr(MachineInstr &MI,
- const MachineRegisterInfo *MRI,
+ MachineRegisterInfo *MRI,
Register &FoldAsLoadDefReg,
MachineInstr *&DefMI) const override;
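
The const is dropped from the MRI parameter because a target override may now need to mutate register state while folding. A hedged sketch of the kind of override this enables — the shape is illustrative only, not the SystemZ implementation added elsewhere in this patch:

```
// Illustrative only: with a mutable MRI, an override can e.g. clear
// kill flags on the load's def register when the load gets folded.
MachineInstr *SystemZInstrInfo::optimizeLoadInstr(MachineInstr &MI,
                                                  MachineRegisterInfo *MRI,
                                                  Register &FoldAsLoadDefReg,
                                                  MachineInstr *&DefMI) const {
  // ... locate a single-use reg/reg pseudo fed by a foldable load ...
  MRI->clearKillFlags(FoldAsLoadDefReg);
  // ... return the folded reg/mem instruction, or nullptr if not possible.
  return nullptr;
}
```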
diff --git a/llvm/test/CodeGen/SystemZ/fp-add-02.ll b/llvm/test/CodeGen/SystemZ/fp-add-02.ll
index bb12196fb848a5..8f65161b5bae83 100644
--- a/llvm/test/CodeGen/SystemZ/fp-add-02.ll
+++ b/llvm/test/CodeGen/SystemZ/fp-add-02.ll
@@ -118,3 +118,17 @@ define double @f7(ptr %ptr0) {
ret double %add10
}
+
+; Check that reassociation flags do not get in the way of adb.
+define double @f8(ptr %x) {
+; CHECK-LABEL: f8:
+; CHECK: ld %f0
+; CHECK: adb %f0
+; CHECK: br %r14
+entry:
+ %0 = load double, ptr %x, align 8
+ %arrayidx1 = getelementptr inbounds double, ptr %x, i64 1
+ %1 = load double, ptr %arrayidx1, align 8
+ %add = fadd reassoc nsz arcp contract afn double %1, %0
+ ret double %add
+}
diff --git a/llvm/test/CodeGen/SystemZ/fp-mul-02.ll b/llvm/test/CodeGen/SystemZ/fp-mul-02.ll
index 5a99537493cd19..1ac4bbec352d1c 100644
--- a/llvm/test/CodeGen/SystemZ/fp-mul-02.ll
+++ b/llvm/test/CodeGen/SystemZ/fp-mul-02.ll
@@ -1,6 +1,6 @@
; Test multiplication of two f32s, producing an f64 result.
;
-; RUN: llc < %s -mtriple=s390x-linux-gnu | FileCheck %s
+; RUN: llc < %s -mtriple=s390x-linux-gnu -mcpu=z15 | FileCheck %s
declare float @foo()
@@ -201,3 +201,13 @@ define float @f7(ptr %ptr0) {
ret float %trunc9
}
+
+; Check that reassociation flags do not get in the way of mdebr.
+define double @f8(float %Src) {
+; CHECK-LABEL: f8:
+; CHECK: mdebr %f0, %f0
+; CHECK: br %r14
+ %D = fpext float %Src to double
+ %res = fmul reassoc nsz arcp contract afn double %D, %D
+ ret double %res
+}
diff --git a/llvm/test/CodeGen/SystemZ/machine-combiner-reassoc-fp-01.ll b/llvm/test/CodeGen/SystemZ/machine-combiner-reassoc-fp-01.ll
new file mode 100644
index 00000000000000..72303a47dc7386
--- /dev/null
+++ b/llvm/test/CodeGen/SystemZ/machine-combiner-reassoc-fp-01.ll
@@ -0,0 +1,690 @@
+; RUN: llc < %s -mtriple=s390x-linux-gnu -mcpu=z15 -verify-machineinstrs -O3 \
+; RUN: | FileCheck %s
+; RUN: llc < %s -mtriple=s390x-linux-gnu -mcpu=z15 -stop-before=processimpdefs \
+; RUN: -O3 | FileCheck %s --check-prefix=PASSOUTPUT
+
+; Test reassociation of fp add, subtract and multiply.
+
+define double @fun0_fadd(ptr %x) {
+; CHECK-LABEL: fun0_fadd:
+; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: ld %f0, 0(%r2)
+; CHECK-NEXT: adb %f0, 8(%r2)
+; CHECK-NEXT: ld %f1, 24(%r2)
+; CHECK-NEXT: adb %f1, 16(%r2)
+; CHECK-NEXT: adbr %f0, %f1
+; CHECK-NEXT: ld %f1, 40(%r2)
+; CHECK-NEXT: adb %f1, 32(%r2)
+; CHECK-NEXT: adb %f1, 48(%r2)
+; CHECK-NEXT: adbr %f0, %f1
+; CHECK-NEXT: adb %f0, 56(%r2)
+; CHECK-NEXT: br %r14
+
+; PASSOUTPUT: name: fun0_fadd
+; PASSOUTPUT-NOT: WFADB
+; PASSOUTPUT: WFADB killed %3, killed %18, implicit $fpc
+; PASSOUTPUT-NOT: WFADB {{.*}}$cc
+; PASSOUTPUT-NOT: WFADB_CCPseudo
+entry:
+ %0 = load double, ptr %x, align 8
+ %arrayidx1 = getelementptr inbounds double, ptr %x, i64 1
+ %1 = load double, ptr %arrayidx1, align 8
+ %add = fadd reassoc nsz arcp contract afn double %1, %0
+ %arrayidx2 = getelementptr inbounds double, ptr %x, i64 2
+ %2 = load double, ptr %arrayidx2, align 8
+ %add3 = fadd reassoc nsz arcp contract afn double %add, %2
+ %arrayidx4 = getelementptr inbounds double, ptr %x, i64 3
+ %3 = load double, ptr %arrayidx4, align 8
+ %add5 = fadd reassoc nsz arcp contract afn double %add3, %3
+ %arrayidx6 = getelementptr inbounds double, ptr %x, i64 4
+ %4 = load double, ptr %arrayidx6, align 8
+ %add7 = fadd reassoc nsz arcp contract afn double %add5, %4
+ %arrayidx8 = getelementptr inbounds double, ptr %x, i64 5
+ %5 = load double, ptr %arrayidx8, align 8
+ %add9 = fadd reassoc nsz arcp contract afn double %add7, %5
+ %arrayidx10 = getelementptr inbounds double, ptr %x, i64 6
+ %6 = load double, ptr %arrayidx10, align 8
+ %add11 = fadd reassoc nsz arcp contract afn double %add9, %6
+ %arrayidx12 = getelementptr inbounds double, ptr %x, i64 7
+ %7 = load double, ptr %arrayidx12, align 8
+ %add13 = fadd reassoc nsz arcp contract afn double %add11, %7
+ ret double %add13
+}
+
+define float @fun1_fadd(ptr %x) {
+; CHECK-LABEL: fun1_fadd:
+; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: lde %f0, 0(%r2)
+; CHECK-NEXT: aeb %f0, 4(%r2)
+; CHECK-NEXT: lde %f1, 12(%r2)
+; CHECK-NEXT: aeb %f1, 8(%r2)
+; CHECK-NEXT: aebr %f0, %f1
+; CHECK-NEXT: lde %f1, 20(%r2)
+; CHECK-NEXT: aeb %f1, 16(%r2)
+; CHECK-NEXT: aeb %f1, 24(%r2)
+; CHECK-NEXT: aebr %f0, %f1
+; CHECK-NEXT: aeb %f0, 28(%r2)
+; CHECK-NEXT: br %r14
+
+; PASSOUTPUT: name: fun1_fadd
+; PASSOUTPUT-NOT: WFASB
+; PASSOUTPUT: WFASB killed %3, killed %18, implicit $fpc
+; PASSOUTPUT-NOT: WFASB {{.*}}$cc
+; PASSOUTPUT-NOT: WFASB_CCPseudo
+entry:
+ %0 = load float, ptr %x, align 8
+ %arrayidx1 = getelementptr inbounds float, ptr %x, i64 1
+ %1 = load float, ptr %arrayidx1, align 8
+ %add = fadd reassoc nsz arcp contract afn float %1, %0
+ %arrayidx2 = getelementptr inbounds float, ptr %x, i64 2
+ %2 = load float, ptr %arrayidx2, align 8
+ %add3 = fadd reassoc nsz arcp contract afn float %add, %2
+ %arrayidx4 = getelementptr inbounds float, ptr %x, i64 3
+ %3 = load float, ptr %arrayidx4, align 8
+ %add5 = fadd reassoc nsz arcp contract afn float %add3, %3
+ %arrayidx6 = getelementptr inbounds float, ptr %x, i64 4
+ %4 = load float, ptr %arrayidx6, align 8
+ %add7 = fadd reassoc nsz arcp contract afn float %add5, %4
+ %arrayidx8 = getelementptr inbounds float, ptr %x, i64 5
+ %5 = load float, ptr %arrayidx8, align 8
+ %add9 = fadd reassoc nsz arcp contract afn float %add7, %5
+ %arrayidx10 = getelementptr inbounds float, ptr %x, i64 6
+ %6 = load float, ptr %arrayidx10, align 8
+ %add11 = fadd reassoc nsz arcp contract afn float %add9, %6
+ %arrayidx12 = getelementptr inbounds float, ptr %x, i64 7
+ %7 = load float, ptr %arrayidx12, align 8
+ %add13 = fadd reassoc nsz arcp contract afn float %add11, %7
+ ret float %add13
+}
+
+define fp128 @fun2_fadd(ptr %x) {
+; CHECK-LABEL: fun2_fadd:
+; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vl %v0, 0(%r3), 3
+; CHECK-NEXT: vl %v1, 16(%r3), 3
+; CHECK-NEXT: wfaxb %v0, %v1, %v0
+; CHECK-NEXT: vl %v1, 32(%r3), 3
+; CHECK-NEXT: vl %v2, 48(%r3), 3
+; CHECK-NEXT: wfaxb %v1, %v1, %v2
+; CHECK-NEXT: wfaxb %v0, %v0, %v1
+; CHECK-NEXT: vl %v1, 64(%r3), 3
+; CHECK-NEXT: vl %v2, 80(%r3), 3
+; CHECK-NEXT: wfaxb %v1, %v1, %v2
+; CHECK-NEXT: vl %v2, 96(%r3), 3
+; CHECK-NEXT: wfaxb %v1, %v1, %v2
+; CHECK-NEXT: wfaxb %v0, %v0, %v1
+; CHECK-NEXT: vl %v1, 112(%r3), 3
+; CHECK-NEXT: wfaxb %v0, %v0, %v1
+; CHECK-NEXT: vst %v0, 0(%r2), 3
+; CHECK-NEXT: br %r14
+entry:
+ %0 = load fp128, ptr %x, align 8
+ %arrayidx1 = getelementptr inbounds fp128, ptr %x, i64 1
+ %1 = load fp128, ptr %arrayidx1, align 8
+ %add = fadd reassoc nsz arcp contract afn fp128 %1, %0
+ %arrayidx2 = getelementptr inbounds fp128, ptr %x, i64 2
+ %2 = load fp128, ptr %arrayidx2, align 8
+ %add3 = fadd reassoc nsz arcp contract afn fp128 %add, %2
+ %arrayidx4 = getelementptr inbounds fp128, ptr %x, i64 3
+ %3 = load fp128, ptr %arrayidx4, align 8
+ %add5 = fadd reassoc nsz arcp contract afn fp128 %add3, %3
+ %arrayidx6 = getelementptr inbounds fp128, ptr %x, i64 4
+ %4 = load fp128, ptr %arrayidx6, align 8
+ %add7 = fadd reassoc nsz arcp contract afn fp128 %add5, %4
+ %arrayidx8 = getelementptr inbounds fp128, ptr %x, i64 5
+ %5 = load fp128, ptr %arrayidx8, align 8
+ %add9 = fadd reassoc nsz arcp contract afn fp128 %add7, %5
+ %arrayidx10 = getelementptr inbounds fp128, ptr %x, i64 6
+ %6 = load fp128, ptr %arrayidx10, align 8
+ %add11 = fadd reassoc nsz arcp contract afn fp128 %add9, %6
+ %arrayidx12 = getelementptr inbounds fp128, ptr %x, i64 7
+ %7 = load fp128, ptr %arrayidx12, align 8
+ %add13 = fadd reassoc nsz arcp contract afn fp128 %add11, %7
+ ret fp128 %add13
+}
+
+define <2 x double> @fun3_fadd(ptr %x) {
+; CHECK-LABEL: fun3_fadd:
+; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vl %v0, 0(%r2), 3
+; CHECK-NEXT: vl %v1, 16(%r2), 3
+; CHECK-NEXT: vfadb %v0, %v1, %v0
+; CHECK-NEXT: vl %v1, 32(%r2), 3
+; CHECK-NEXT: vl %v2, 48(%r2), 3
+; CHECK-NEXT: vfadb %v1, %v1, %v2
+; CHECK-NEXT: vfadb %v0, %v0, %v1
+; CHECK-NEXT: vl %v1, 64(%r2), 3
+; CHECK-NEXT: vl %v2, 80(%r2), 3
+; CHECK-NEXT: vfadb %v1, %v1, %v2
+; CHECK-NEXT: vl %v2, 96(%r2), 3
+; CHECK-NEXT: vfadb %v1, %v1, %v2
+; CHECK-NEXT: vfadb %v0, %v0, %v1
+; CHECK-NEXT: vl %v1, 112(%r2), 3
+; CHECK-NEXT: vfadb %v24, %v0, %v1
+; CHECK-NEXT: br %r14
+entry:
+ %0 = load <2 x double>, ptr %x, align 8
+ %arrayidx1 = getelementptr inbounds <2 x double>, ptr %x, i64 1
+ %1 = load <2 x double>, ptr %arrayidx1, align 8
+ %add = fadd reassoc nsz arcp contract afn <2 x double> %1, %0
+ %arrayidx2 = getelementptr inbounds <2 x double>, ptr %x, i64 2
+ %2 = load <2 x double>, ptr %arrayidx2, align 8
+ %add3 = fadd reassoc nsz arcp contract afn <2 x double> %add, %2
+ %arrayidx4 = getelementptr inbounds <2 x double>, ptr %x, i64 3
+ %3 = load <2 x double>, ptr %arrayidx4, align 8
+ %add5 = fadd reassoc nsz arcp contract afn <2 x double> %add3, %3
+ %arrayidx6 = getelementptr inbounds <2 x double>, ptr %x, i64 4
+ %4 = load <2 x double>, ptr %arrayidx6, align 8
+ %add7 = fadd reassoc nsz arcp contract afn <2 x double> %add5, %4
+ %arrayidx8 = getelementptr inbounds <2 x double>, ptr %x, i64 5
+ %5 = load <2 x double>, ptr %arrayidx8, align 8
+ %add9 = fadd reassoc nsz arcp contract afn <2 x double> %add7, %5
+ %arrayidx10 = getelementptr inbounds <2 x double>, ptr %x, i64 6
+ %6 = load <2 x double>, ptr %arrayidx10, align 8
+ %add11 = fadd reassoc nsz arcp contract afn <2 x double> %add9, %6
+ %arrayidx12 = getelementptr inbounds <2 x double>, ptr %x, i64 7
+ %7 = load <2 x double>, ptr %arrayidx12, align 8
+ %add13 = fadd reassoc nsz arcp contract afn <2 x double> %add11, %7
+ ret <2 x double> %add13
+}
+
+define <4 x float> @fun4_fadd(ptr %x) {
+; CHECK-LABEL: fun4_fadd:
+; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vl %v0, 0(%r2), 3
+; CHECK-NEXT: vl %v1, 16(%r2), 3
+; CHECK-NEXT: vfasb %v0, %v1, %v0
+; CHECK-NEXT: vl %v1, 32(%r2), 3
+; CHECK-NEXT: vl %v2, 48(%r2), 3
+; CHECK-NEXT: vfasb %v1, %v1, %v2
+; CHECK-NEXT: vfasb %v0, %v0, %v1
+; CHECK-NEXT: vl %v1, 64(%r2), 3
+; CHECK-NEXT: vl %v2, 80(%r2), 3
+; CHECK-NEXT: vfasb %v1, %v1, %v2
+; CHECK-NEXT: vl %v2, 96(%r2), 3
+; CHECK-NEXT: vfasb %v1, %v1, %v2
+; CHECK-NEXT: vfasb %v0, %v0, %v1
+; CHECK-NEXT: vl %v1, 112(%r2), 3
+; CHECK-NEXT: vfasb %v24, %v0, %v1
+; CHECK-NEXT: br %r14
+entry:
+ %0 = load <4 x float>, ptr %x, align 8
+ %arrayidx1 = getelementptr inbounds <4 x float>, ptr %x, i64 1
+ %1 = load <4 x float>, ptr %arrayidx1, align 8
+ %add = fadd reassoc nsz arcp contract afn <4 x float> %1, %0
+ %arrayidx2 = getelementptr inbounds <4 x float>, ptr %x, i64 2
+ %2 = load <4 x float>, ptr %arrayidx2, align 8
+ %add3 = fadd reassoc nsz arcp contract afn <4 x float> %add, %2
+ %arrayidx4 = getelementptr inbounds <4 x float>, ptr %x, i64 3
+ %3 = load <4 x float>, ptr %arrayidx4, align 8
+ %add5 = fadd reassoc nsz arcp contract afn <4 x float> %add3, %3
+ %arrayidx6 = getelementptr inbounds <4 x float>, ptr %x, i64 4
+ %4 = load <4 x float>, ptr %arrayidx6, align 8
+ %add7 = fadd reassoc nsz arcp contract afn <4 x float> %add5, %4
+ %arrayidx8 = getelementptr inbounds <4 x float>, ptr %x, i64 5
+ %5 = load <4 x float>, ptr %arrayidx8, align 8
+ %add9 = fadd reassoc nsz arcp contract afn <4 x float> %add7, %5
+ %arrayidx10 = getelementptr inbounds <4 x float>, ptr %x, i64 6
+ %6 = load <4 x float>, ptr %arrayidx10, align 8
+ %add11 = fadd reassoc nsz arcp contract afn <4 x float> %add9, %6
+ %arrayidx12 = getelementptr inbounds <4 x float>, ptr %x, i64 7
+ %7 = load <4 x float>, ptr %arrayidx12, align 8
+ %add13 = fadd reassoc nsz arcp contract afn <4 x float> %add11, %7
+ ret <4 x float> %add13
+}
+
+define double @fun5_fsub(ptr %x) {
+; CHECK-LABEL: fun5_fsub:
+; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: ld %f0, 0(%r2)
+; CHECK-NEXT: sdb %f0, 8(%r2)
+; CHECK-NEXT: ld %f1, 24(%r2)
+; CHECK-NEXT: adb %f1, 16(%r2)
+; CHECK-NEXT: sdbr %f0, %f1
+; CHECK-NEXT: ld %f1, 40(%r2)
+; CHECK-NEXT: adb %f1, 32(%r2)
+; CHECK-NEXT: adb %f1, 48(%r2)
+; CHECK-NEXT: sdbr %f0, %f1
+; CHECK-NEXT: sdb %f0, 56(%r2)
+; CHECK-NEXT: br %r14
+
+; PASSOUTPUT: name: fun5_fsub
+; PASSOUTPUT-NOT: WFSDB
+; PASSOUTPUT: WFSDB killed %3, killed %18, implicit $fpc
+; PASSOUTPUT-NOT: WFSDB {{.*}}$cc
+; PASSOUTPUT-NOT: WFSDB_CCPseudo
+entry:
+ %0 = load double, ptr %x, align 8
+ %arrayidx1 = getelementptr inbounds double, ptr %x, i64 1
+ %1 = load double, ptr %arrayidx1, align 8
+ %sub = fsub reassoc nsz arcp contract afn double %0, %1
+ %arrayidx2 = getelementptr inbounds double, ptr %x, i64 2
+ %2 = load double, ptr %arrayidx2, align 8
+ %sub3 = fsub reassoc nsz arcp contract afn double %sub, %2
+ %arrayidx4 = getelementptr inbounds double, ptr %x, i64 3
+ %3 = load double, ptr %arrayidx4, align 8
+ %sub5 = fsub reassoc nsz arcp contract afn double %sub3, %3
+ %arrayidx6 = getelementptr inbounds double, ptr %x, i64 4
+ %4 = load double, ptr %arrayidx6, align 8
+ %sub7 = fsub reassoc nsz arcp contract afn double %sub5, %4
+ %arrayidx8 = getelementptr inbounds double, ptr %x, i64 5
+ %5 = load double, ptr %arrayidx8, align 8
+ %sub9 = fsub reassoc nsz arcp contract afn double %sub7, %5
+ %arrayidx10 = getelementptr inbounds double, ptr %x, i64 6
+ %6 = load double, ptr %arrayidx10, align 8
+ %sub11 = fsub reassoc nsz arcp contract afn double %sub9, %6
+ %arrayidx12 = getelementptr inbounds double, ptr %x, i64 7
+ %7 = load double, ptr %arrayidx12, align 8
+ %sub13 = fsub reassoc nsz arcp contract afn double %sub11, %7
+ ret double %sub13
+}
+
+define float @fun6_fsub(ptr %x) {
+; CHECK-LABEL: fun6_fsub:
+; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: lde %f0, 0(%r2)
+; CHECK-NEXT: seb %f0, 4(%r2)
+; CHECK-NEXT: lde %f1, 12(%r2)
+; CHECK-NEXT: aeb %f1, 8(%r2)
+; CHECK-NEXT: sebr %f0, %f1
+; CHECK-NEXT: lde %f1, 20(%r2)
+; CHECK-NEXT: aeb %f1, 16(%r2)
+; CHECK-NEXT: aeb %f1, 24(%r2)
+; CHECK-NEXT: sebr %f0, %f1
+; CHECK-NEXT: seb %f0, 28(%r2)
+; CHECK-NEXT: br %r14
+
+; PASSOUTPUT: name: fun6_fsub
+; PASSOUTPUT-NOT: WFSSB
+; PASSOUTPUT: WFSSB killed %3, killed %18, implicit $fpc
+; PASSOUTPUT-NOT: WFSSB {{.*}}$cc
+; PASSOUTPUT-NOT: WFSSB_CCPseudo
+entry:
+ %0 = load float, ptr %x, align 8
+ %arrayidx1 = getelementptr inbounds float, ptr %x, i64 1
+ %1 = load float, ptr %arrayidx1, align 8
+ %sub = fsub reassoc nsz arcp contract afn float %0, %1
+ %arrayidx2 = getelementptr inbounds float, ptr %x, i64 2
+ %2 = load float, ptr %arrayidx2, align 8
+ %sub3 = fsub reassoc nsz arcp contract afn float %sub, %2
+ %arrayidx4 = getelementptr inbounds float, ptr %x, i64 3
+ %3 = load float, ptr %arrayidx4, align 8
+ %sub5 = fsub reassoc nsz arcp contract afn float %sub3, %3
+ %arrayidx6 = getelementptr inbounds float, ptr %x, i64 4
+ %4 = load float, ptr %arrayidx6, align 8
+ %sub7 = fsub reassoc nsz arcp contract afn float %sub5, %4
+ %arrayidx8 = getelementptr inbounds float, ptr %x, i64 5
+ %5 = load float, ptr %arrayidx8, align 8
+ %sub9 = fsub reassoc nsz arcp contract afn float %sub7, %5
+ %arrayidx10 = getelementptr inbounds float, ptr %x, i64 6
+ %6 = load float, ptr %arrayidx10, align 8
+ %sub11 = fsub reassoc nsz arcp contract afn float %sub9, %6
+ %arrayidx12 = getelementptr inbounds float, ptr %x, i64 7
+ %7 = load float, ptr %arrayidx12, align 8
+ %sub13 = fsub reassoc nsz arcp contract afn float %sub11, %7
+ ret float %sub13
+}
+
+define fp128 @fun7_fsub(ptr %x) {
+; CHECK-LABEL: fun7_fsub:
+; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vl %v0, 0(%r3), 3
+; CHECK-NEXT: vl %v1, 16(%r3), 3
+; CHECK-NEXT: wfsxb %v0, %v0, %v1
+; CHECK-NEXT: vl %v1, 32(%r3), 3
+; CHECK-NEXT: vl %v2, 48(%r3), 3
+; CHECK-NEXT: wfaxb %v1, %v1, %v2
+; CHECK-NEXT: wfsxb %v0, %v0, %v1
+; CHECK-NEXT: vl %v1, 64(%r3), 3
+; CHECK-NEXT: vl %v2, 80(%r3), 3
+; CHECK-NEXT: wfaxb %v1, %v1, %v2
+; CHECK-NEXT: vl %v2, 96(%r3), 3
+; CHECK-NEXT: wfaxb %v1, %v1, %v2
+; CHECK-NEXT: wfsxb %v0, %v0, %v1
+; CHECK-NEXT: vl %v1, 112(%r3), 3
+; CHECK-NEXT: wfsxb %v0, %v0, %v1
+; CHECK-NEXT: vst %v0, 0(%r2), 3
+; CHECK-NEXT: br %r14
+entry:
+ %0 = load fp128, ptr %x, align 8
+ %arrayidx1 = getelementptr inbounds fp128, ptr %x, i64 1
+ %1 = load fp128, ptr %arrayidx1, align 8
+ %sub = fsub reassoc nsz arcp contract afn fp128 %0, %1
+ %arrayidx2 = getelementptr inbounds fp128, ptr %x, i64 2
+ %2 = load fp128, ptr %arrayidx2, align 8
+ %sub3 = fsub reassoc nsz arcp contract afn fp128 %sub, %2
+ %arrayidx4 = getelementptr inbounds fp128, ptr %x, i64 3
+ %3 = load fp128, ptr %arrayidx4, align 8
+ %sub5 = fsub reassoc nsz arcp contract afn fp128 %sub3, %3
+ %arrayidx6 = getelementptr inbounds fp128, ptr %x, i64 4
+ %4 = load fp128, ptr %arrayidx6, align 8
+ %sub7 = fsub reassoc nsz arcp contract afn fp128 %sub5, %4
+ %arrayidx8 = getelementptr inbounds fp128, ptr %x, i64 5
+ %5 = load fp128, ptr %arrayidx8, align 8
+ %sub9 = fsub reassoc nsz arcp contract afn fp128 %sub7, %5
+ %arrayidx10 = getelementptr inbounds fp128, ptr %x, i64 6
+ %6 = load fp128, ptr %arrayidx10, align 8
+ %sub11 = fsub reassoc nsz arcp contract afn fp128 %sub9, %6
+ %arrayidx12 = getelementptr inbounds fp128, ptr %x, i64 7
+ %7 = load fp128, ptr %arrayidx12, align 8
+ %sub13 = fsub reassoc nsz arcp contract afn fp128 %sub11, %7
+ ret fp128 %sub13
+}
+
+define <2 x double> @fun8_fsub(ptr %x) {
+; CHECK-LABEL: fun8_fsub:
+; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vl %v0, 0(%r2), 3
+; CHECK-NEXT: vl %v1, 16(%r2), 3
+; CHECK-NEXT: vfsdb %v0, %v0, %v1
+; CHECK-NEXT: vl %v1, 32(%r2), 3
+; CHECK-NEXT: vl %v2, 48(%r2), 3
+; CHECK-NEXT: vfadb %v1, %v1, %v2
+; CHECK-NEXT: vfsdb %v0, %v0, %v1
+; CHECK-NEXT: vl %v1, 64(%r2), 3
+; CHECK-NEXT: vl %v2, 80(%r2), 3
+; CHECK-NEXT: vfadb %v1, %v1, %v2
+; CHECK-NEXT: vl %v2, 96(%r2), 3
+; CHECK-NEXT: vfadb %v1, %v1, %v2
+; CHECK-NEXT: vfsdb %v0, %v0, %v1
+; CHECK-NEXT: vl %v1, 112(%r2), 3
+; CHECK-NEXT: vfsdb %v24, %v0, %v1
+; CHECK-NEXT: br %r14
+entry:
+ %0 = load <2 x double>, ptr %x, align 8
+ %arrayidx1 = getelementptr inbounds <2 x double>, ptr %x, i64 1
+ %1 = load <2 x double>, ptr %arrayidx1, align 8
+ %sub = fsub reassoc nsz arcp contract afn <2 x double> %0, %1
+ %arrayidx2 = getelementptr inbounds <2 x double>, ptr %x, i64 2
+ %2 = load <2 x double>, ptr %arrayidx2, align 8
+ %sub3 = fsub reassoc nsz arcp contract afn <2 x double> %sub, %2
+ %arrayidx4 = getelementptr inbounds <2 x double>, ptr %x, i64 3
+ %3 = load <2 x double>, ptr %arrayidx4, align 8
+ %sub5 = fsub reassoc nsz arcp contract afn <2 x double> %sub3, %3
+ %arrayidx6 = getelementptr inbounds <2 x double>, ptr %x, i64 4
+ %4 = load <2 x double>, ptr %arrayidx6, align 8
+ %sub7 = fsub reassoc nsz arcp contract afn <2 x double> %sub5, %4
+ %arrayidx8 = getelementptr inbounds <2 x double>, ptr %x, i64 5
+ %5 = load <2 x double>, ptr %arrayidx8, align 8
+ %sub9 = fsub reassoc nsz arcp contract afn <2 x double> %sub7, %5
+ %arrayidx10 = getelementptr inbounds <2 x double>, ptr %x, i64 6
+ %6 = load <2 x double>, ptr %arrayidx10, align 8
+ %sub11 = fsub reassoc nsz arcp contract afn <2 x double> %sub9, %6
+ %arrayidx12 = getelementptr inbounds <2 x double>, ptr %x, i64 7
+ %7 = load <2 x double>, ptr %arrayidx12, align 8
+ %sub13 = fsub reassoc nsz arcp contract afn <2 x double> %sub11, %7
+ ret <2 x double> %sub13
+}
+
+define <4 x float> @fun9_fsub(ptr %x) {
+; CHECK-LABEL: fun9_fsub:
+; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vl %v0, 0(%r2), 3
+; CHECK-NEXT: vl %v1, 16(%r2), 3
+; CHECK-NEXT: vfssb %v0, %v0, %v1
+; CHECK-NEXT: vl %v1, 32(%r2), 3
+; CHECK-NEXT: vl %v2, 48(%r2), 3
+; CHECK-NEXT: vfasb %v1, %v1, %v2
+; CHECK-NEXT: vfssb %v0, %v0, %v1
+; CHECK-NEXT: vl %v1, 64(%r2), 3
+; CHECK-NEXT: vl %v2, 80(%r2), 3
+; CHECK-NEXT: vfasb %v1, %v1, %v2
+; CHECK-NEXT: vl %v2, 96(%r2), 3
+; CHECK-NEXT: vfasb %v1, %v1, %v2
+; CHECK-NEXT: vfssb %v0, %v0, %v1
+; CHECK-NEXT: vl %v1, 112(%r2), 3
+; CHECK-NEXT: vfssb %v24, %v0, %v1
+; CHECK-NEXT: br %r14
+entry:
+ %0 = load <4 x float>, ptr %x, align 8
+ %arrayidx1 = getelementptr inbounds <4 x float>, ptr %x, i64 1
+ %1 = load <4 x float>, ptr %arrayidx1, align 8
+ %sub = fsub reassoc nsz arcp contract afn <4 x float> %0, %1
+ %arrayidx2 = getelementptr inbounds <4 x float>, ptr %x, i64 2
+ %2 = load <4 x float>, ptr %arrayidx2, align 8
+ %sub3 = fsub reassoc nsz arcp contract afn <4 x float> %sub, %2
+ %arrayidx4 = getelementptr inbounds <4 x float>, ptr %x, i64 3
+ %3 = load <4 x float>, ptr %arrayidx4, align 8
+ %sub5 = fsub reassoc nsz arcp contract afn <4 x float> %sub3, %3
+ %arrayidx6 = getelementptr inbounds <4 x float>, ptr %x, i64 4
+ %4 = load <4 x float>, ptr %arrayidx6, align 8
+ %sub7 = fsub reassoc nsz arcp contract afn <4 x float> %sub5, %4
+ %arrayidx8 = getelementptr inbounds <4 x float>, ptr %x, i64 5
+ %5 = load <4 x float>, ptr %arrayidx8, align 8
+ %sub9 = fsub reassoc nsz arcp contract afn <4 x float> %sub7, %5
+ %arrayidx10 = getelementptr inbounds <4 x float>, ptr %x, i64 6
+ %6 = load <4 x float>, ptr %arrayidx10, align 8
+ %sub11 = fsub reassoc nsz arcp contract afn <4 x float> %sub9, %6
+ %arrayidx12 = getelementptr inbounds <4 x float>, ptr %x, i64 7
+ %7 = load <4 x float>, ptr %arrayidx12, align 8
+ %sub13 = fsub reassoc nsz arcp contract afn <4 x float> %sub11, %7
+ ret <4 x float> %sub13
+}
+
+define double @fun10_fmul(ptr %x) {
+; CHECK-LABEL: fun10_fmul:
+; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: ld %f0, 8(%r2)
+; CHECK-NEXT: mdb %f0, 0(%r2)
+; CHECK-NEXT: ld %f1, 24(%r2)
+; CHECK-NEXT: mdb %f1, 16(%r2)
+; CHECK-NEXT: mdbr %f0, %f1
+; CHECK-NEXT: ld %f1, 40(%r2)
+; CHECK-NEXT: mdb %f1, 32(%r2)
+; CHECK-NEXT: mdb %f1, 48(%r2)
+; CHECK-NEXT: mdbr %f0, %f1
+; CHECK-NEXT: mdb %f0, 56(%r2)
+; CHECK-NEXT: br %r14
+
+; PASSOUTPUT: name: fun10_fmul
+; PASSOUTPUT-NOT: WFMDB
+; PASSOUTPUT: WFMDB killed %3, killed %18, implicit $fpc
+; PASSOUTPUT-NOT: WFMDB {{.*}}$cc
+; PASSOUTPUT-NOT: WFMDB_CCPseudo
+entry:
+ %0 = load double, ptr %x, align 8
+ %arrayidx1 = getelementptr inbounds double, ptr %x, i64 1
+ %1 = load double, ptr %arrayidx1, align 8
+ %mul = fmul reassoc nsz arcp contract afn double %0, %1
+ %arrayidx2 = getelementptr inbounds double, ptr %x, i64 2
+ %2 = load double, ptr %arrayidx2, align 8
+ %mul3 = fmul reassoc nsz arcp contract afn double %mul, %2
+ %arrayidx4 = getelementptr inbounds double, ptr %x, i64 3
+ %3 = load double, ptr %arrayidx4, align 8
+ %mul5 = fmul reassoc nsz arcp contract afn double %mul3, %3
+ %arrayidx6 = getelementptr inbounds double, ptr %x, i64 4
+ %4 = load double, ptr %arrayidx6, align 8
+ %mul7 = fmul reassoc nsz arcp contract afn double %mul5, %4
+ %arrayidx8 = getelementptr inbounds double, ptr %x, i64 5
+ %5 = load double, ptr %arrayidx8, align 8
+ %mul9 = fmul reassoc nsz arcp contract afn double %mul7, %5
+ %arrayidx10 = getelementptr inbounds double, ptr %x, i64 6
+ %6 = load double, ptr %arrayidx10, align 8
+ %mul11 = fmul reassoc nsz arcp contract afn double %mul9, %6
+ %arrayidx12 = getelementptr inbounds double, ptr %x, i64 7
+ %7 = load double, ptr %arrayidx12, align 8
+ %mul13 = fmul reassoc nsz arcp contract afn double %mul11, %7
+ ret double %mul13
+}
+
+define float @fun11_fmul(ptr %x) {
+; CHECK-LABEL: fun11_fmul:
+; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: lde %f0, 4(%r2)
+; CHECK-NEXT: meeb %f0, 0(%r2)
+; CHECK-NEXT: lde %f1, 12(%r2)
+; CHECK-NEXT: meeb %f1, 8(%r2)
+; CHECK-NEXT: meebr %f0, %f1
+; CHECK-NEXT: lde %f1, 20(%r2)
+; CHECK-NEXT: meeb %f1, 16(%r2)
+; CHECK-NEXT: meeb %f1, 24(%r2)
+; CHECK-NEXT: meebr %f0, %f1
+; CHECK-NEXT: meeb %f0, 28(%r2)
+; CHECK-NEXT: br %r14
+
+; PASSOUTPUT: name: fun11_fmul
+; PASSOUTPUT-NOT: WFMSB
+; PASSOUTPUT: WFMSB killed %3, killed %18, implicit $fpc
+; PASSOUTPUT-NOT: WFMSB {{.*}}$cc
+; PASSOUTPUT-NOT: WFMSB_CCPseudo
+entry:
+ %0 = load float, ptr %x, align 8
+ %arrayidx1 = getelementptr inbounds float, ptr %x, i64 1
+ %1 = load float, ptr %arrayidx1, align 8
+ %mul = fmul reassoc nsz arcp contract afn float %0, %1
+ %arrayidx2 = getelementptr inbounds float, ptr %x, i64 2
+ %2 = load float, ptr %arrayidx2, align 8
+ %mul3 = fmul reassoc nsz arcp contract afn float %mul, %2
+ %arrayidx4 = getelementptr inbounds float, ptr %x, i64 3
+ %3 = load float, ptr %arrayidx4, align 8
+ %mul5 = fmul reassoc nsz arcp contract afn float %mul3, %3
+ %arrayidx6 = getelementptr inbounds float, ptr %x, i64 4
+ %4 = load float, ptr %arrayidx6, align 8
+ %mul7 = fmul reassoc nsz arcp contract afn float %mul5, %4
+ %arrayidx8 = getelementptr inbounds float, ptr %x, i64 5
+ %5 = load float, ptr %arrayidx8, align 8
+ %mul9 = fmul reassoc nsz arcp contract afn float %mul7, %5
+ %arrayidx10 = getelementptr inbounds float, ptr %x, i64 6
+ %6 = load float, ptr %arrayidx10, align 8
+ %mul11 = fmul reassoc nsz arcp contract afn float %mul9, %6
+ %arrayidx12 = getelementptr inbounds float, ptr %x, i64 7
+ %7 = load float, ptr %arrayidx12, align 8
+ %mul13 = fmul reassoc nsz arcp contract afn float %mul11, %7
+ ret float %mul13
+}
+
+define fp128 @fun12_fmul(ptr %x) {
+; CHECK-LABEL: fun12_fmul:
+; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vl %v0, 0(%r3), 3
+; CHECK-NEXT: vl %v1, 16(%r3), 3
+; CHECK-NEXT: wfmxb %v0, %v0, %v1
+; CHECK-NEXT: vl %v1, 32(%r3), 3
+; CHECK-NEXT: vl %v2, 48(%r3), 3
+; CHECK-NEXT: wfmxb %v1, %v1, %v2
+; CHECK-NEXT: wfmxb %v0, %v0, %v1
+; CHECK-NEXT: vl %v1, 64(%r3), 3
+; CHECK-NEXT: vl %v2, 80(%r3), 3
+; CHECK-NEXT: wfmxb %v1, %v1, %v2
+; CHECK-NEXT: vl %v2, 96(%r3), 3
+; CHECK-NEXT: wfmxb %v1, %v1, %v2
+; CHECK-NEXT: wfmxb %v0, %v0, %v1
+; CHECK-NEXT: vl %v1, 112(%r3), 3
+; CHECK-NEXT: wfmxb %v0, %v0, %v1
+; CHECK-NEXT: vst %v0, 0(%r2), 3
+; CHECK-NEXT: br %r14
+entry:
+ %0 = load fp128, ptr %x, align 8
+ %arrayidx1 = getelementptr inbounds fp128, ptr %x, i64 1
+ %1 = load fp128, ptr %arrayidx1, align 8
+ %mul = fmul reassoc nsz arcp contract afn fp128 %0, %1
+ %arrayidx2 = getelementptr inbounds fp128, ptr %x, i64 2
+ %2 = load fp128, ptr %arrayidx2, align 8
+ %mul3 = fmul reassoc nsz arcp contract afn fp128 %mul, %2
+ %arrayidx4 = getelementptr inbounds fp128, ptr %x, i64 3
+ %3 = load fp128, ptr %arrayidx4, align 8
+ %mul5 = fmul reassoc nsz arcp contract afn fp128 %mul3, %3
+ %arrayidx6 = getelementptr inbounds fp128, ptr %x, i64 4
+ %4 = load fp128, ptr %arrayidx6, align 8
+ %mul7 = fmul reassoc nsz arcp contract afn fp128 %mul5, %4
+ %arrayidx8 = getelementptr inbounds fp128, ptr %x, i64 5
+ %5 = load fp128, ptr %arrayidx8, align 8
+ %mul9 = fmul reassoc nsz arcp contract afn fp128 %mul7, %5
+ %arrayidx10 = getelementptr inbounds fp128, ptr %x, i64 6
+ %6 = load fp128, ptr %arrayidx10, align 8
+ %mul11 = fmul reassoc nsz arcp contract afn fp128 %mul9, %6
+ %arrayidx12 = getelementptr inbounds fp128, ptr %x, i64 7
+ %7 = load fp128, ptr %arrayidx12, align 8
+ %mul13 = fmul reassoc nsz arcp contract afn fp128 %mul11, %7
+ ret fp128 %mul13
+}
+
+define <2 x double> @fun13_fmul(ptr %x) {
+; CHECK-LABEL: fun13_fmul:
+; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vl %v0, 0(%r2), 3
+; CHECK-NEXT: vl %v1, 16(%r2), 3
+; CHECK-NEXT: vfmdb %v0, %v0, %v1
+; CHECK-NEXT: vl %v1, 32(%r2), 3
+; CHECK-NEXT: vl %v2, 48(%r2), 3
+; CHECK-NEXT: vfmdb %v1, %v1, %v2
+; CHECK-NEXT: vfmdb %v0, %v0, %v1
+; CHECK-NEXT: vl %v1, 64(%r2), 3
+; CHECK-NEXT: vl %v2, 80(%r2), 3
+; CHECK-NEXT: vfmdb %v1, %v1, %v2
+; CHECK-NEXT: vl %v2, 96(%r2), 3
+; CHECK-NEXT: vfmdb %v1, %v1, %v2
+; CHECK-NEXT: vfmdb %v0, %v0, %v1
+; CHECK-NEXT: vl %v1, 112(%r2), 3
+; CHECK-NEXT: vfmdb %v24, %v0, %v1
+; CHECK-NEXT: br %r14
+entry:
+ %0 = load <2 x double>, ptr %x, align 8
+ %arrayidx1 = getelementptr inbounds <2 x double>, ptr %x, i64 1
+ %1 = load <2 x double>, ptr %arrayidx1, align 8
+ %mul = fmul reassoc nsz arcp contract afn <2 x double> %0, %1
+ %arrayidx2 = getelementptr inbounds <2 x double>, ptr %x, i64 2
+ %2 = load <2 x double>, ptr %arrayidx2, align 8
+ %mul3 = fmul reassoc nsz arcp contract afn <2 x double> %mul, %2
+ %arrayidx4 = getelementptr inbounds <2 x double>, ptr %x, i64 3
+ %3 = load <2 x double>, ptr %arrayidx4, align 8
+ %mul5 = fmul reassoc nsz arcp contract afn <2 x double> %mul3, %3
+ %arrayidx6 = getelementptr inbounds <2 x double>, ptr %x, i64 4
+ %4 = load <2 x double>, ptr %arrayidx6, align 8
+ %mul7 = fmul reassoc nsz arcp contract afn <2 x double> %mul5, %4
+ %arrayidx8 = getelementptr inbounds <2 x double>, ptr %x, i64 5
+ %5 = load <2 x double>, ptr %arrayidx8, align 8
+ %mul9 = fmul reassoc nsz arcp contract afn <2 x double> %mul7, %5
+ %arrayidx10 = getelementptr inbounds <2 x double>, ptr %x, i64 6
+ %6 = load <2 x double>, ptr %arrayidx10, align 8
+ %mul11 = fmul reassoc nsz arcp contract afn <2 x double> %mul9, %6
+ %arrayidx12 = getelementptr inbounds <2 x double>, ptr %x, i64 7
+ %7 = load <2 x double>, ptr %arrayidx12, align 8
+ %mul13 = fmul reassoc nsz arcp contract afn <2 x double> %mul11, %7
+ ret <2 x double> %mul13
+}
+
+define <4 x float> @fun14_fmul(ptr %x) {
+; CHECK-LABEL: fun14_fmul:
+; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: vl %v0, 0(%r2), 3
+; CHECK-NEXT: vl %v1, 16(%r2), 3
+; CHECK-NEXT: vfmsb %v0, %v0, %v1
+; CHECK-NEXT: vl %v1, 32(%r2), 3
+; CHECK-NEXT: vl %v2, 48(%r2), 3
+; CHECK-NEXT: vfmsb %v1, %v1, %v2
+; CHECK-NEXT: vfmsb %v0, %v0, %v1
+; CHECK-NEXT: vl %v1, 64(%r2), 3
+; CHECK-NEXT: vl %v2, 80(%r2), 3
+; CHECK-NEXT: vfmsb %v1, %v1, %v2
+; CHECK-NEXT: vl %v2, 96(%r2), 3
+; CHECK-NEXT: vfmsb %v1, %v1, %v2
+; CHECK-NEXT: vfmsb %v0, %v0, %v1
+; CHECK-NEXT: vl %v1, 112(%r2), 3
+; CHECK-NEXT: vfmsb %v24, %v0, %v1
+; CHECK-NEXT: br %r14
+entry:
+ %0 = load <4 x float>, ptr %x, align 8
+ %arrayidx1 = getelementptr inbounds <4 x float>, ptr %x, i64 1
+ %1 = load <4 x float>, ptr %arrayidx1, align 8
+ %mul = fmul reassoc nsz arcp contract afn <4 x float> %0, %1
+ %arrayidx2 = getelementptr inbounds <4 x float>, ptr %x, i64 2
+ %2 = load <4 x float>, ptr %arrayidx2, align 8
+ %mul3 = fmul reassoc nsz arcp contract afn <4 x float> %mul, %2
+ %arrayidx4 = getelementptr inbounds <4 x float>, ptr %x, i64 3
+ %3 = load <4 x float>, ptr %arrayidx4, align 8
+ %mul5 = fmul reassoc nsz arcp contract afn <4 x float> %mul3, %3
+ %arrayidx6 = getelementptr inbounds <4 x float>, ptr %x, i64 4
+ %4 = load <4 x float>, ptr %arrayidx6, align 8
+ %mul7 = fmul reassoc nsz arcp contract afn <4 x float> %mul5, %4
+ %arrayidx8 = getelementptr inbounds <4 x float>, ptr %x, i64 5
+ %5 = load <4 x float>, ptr %arrayidx8, align 8
+ %mul9 = fmul reassoc nsz arcp contract afn <4 x float> %mul7, %5
+ %arrayidx10 = getelementptr inbounds <4 x float>, ptr %x, i64 6
+ %6 = load <4 x float>, ptr %arrayidx10, align 8
+ %mul11 = fmul reassoc nsz arcp contract afn <4 x float> %mul9, %6
+ %arrayidx12 = getelementptr inbounds <4 x float>, ptr %x, i64 7
+ %7 = load <4 x float>, ptr %arrayidx12, align 8
+ %mul13 = fmul reassoc nsz arcp contract afn <4 x float> %mul11, %7
+ ret <4 x float> %mul13
+}
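
The CHECK lines in this file all show the same shape: the straight-line chain of dependent FP operations from the IR is rebuilt as a (mostly) balanced tree. Ignoring the loads and the reg/mem folding, the rough latency model is

```
depth_chain(n) = n - 1
depth_tree(n)  = ceil(log2(n))
```

so for the 8-operand reductions above, the critical path drops from 7 dependent FP ops toward 3, which is what the interleaved adb/adbr trees realize (modulo the folded memory operands).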
diff --git a/llvm/test/CodeGen/SystemZ/machine-combiner-reassoc-fp-03.ll b/llvm/test/CodeGen/SystemZ/machine-combiner-reassoc-fp-03.ll
new file mode 100644
index 00000000000000..1be73fcc373228
--- /dev/null
+++ b/llvm/test/CodeGen/SystemZ/machine-combiner-reassoc-fp-03.ll
@@ -0,0 +1,90 @@
+; RUN: llc < %s -mtriple=s390x-linux-gnu -mcpu=z15 -verify-machineinstrs -O3 \
+; RUN: -print-before=machine-combiner -print-after=machine-combiner -ppc-fma \
+; RUN: 2>&1 | FileCheck %s
+
+; REQUIRES: asserts
+
+define double @fun0_fma2_add(ptr %x, double %A, double %B) {
+; CHECK: # *** IR Dump Before Machine InstCombiner (machine-combiner) ***:
+; CHECK-NEXT: # Machine code for function fun0_fma2_add: IsSSA, TracksLiveness
+; CHECK: bb.0.entry:
+; CHECK-NEXT: liveins: $r2d, $f0d, $f2d
+; CHECK-NEXT: [[Y:%2:fp64bit]] = COPY $f2d
+; CHECK-NEXT: [[X:%1:fp64bit]] = COPY $f0d
+; CHECK-NEXT: %0:addr64bit = COPY $r2d
+; CHECK-NEXT: %3:vr64bit = VL64 %0:addr64bit, 0, $noreg :: (load (s64) from %ir.x)
+; CHECK-NEXT: %4:vr64bit = VL64 %0:addr64bit, 8, $noreg :: (load (s64) from %ir.arrayidx1)
+; CHECK-NEXT: %5:vr64bit = VL64 %0:addr64bit, 16, $noreg :: (load (s64) from %ir.arrayidx2)
+; CHECK-NEXT: %6:vr64bit = VL64 %0:addr64bit, 24, $noreg :: (load (s64) from %ir.arrayidx4)
+; CHECK-NEXT: %7:vr64bit = {{.*}} WFADB_CCPseudo [[X]], [[Y]]
+; CHECK-NEXT: %8:vr64bit = {{.*}} WFMADB_CCPseudo killed [[M21:%3:vr64bit]], killed [[M22:%4:vr64bit]], killed %7:vr64bit
+; CHECK-NEXT: %9:vr64bit = {{.*}} WFMADB_CCPseudo killed [[M31:%5:vr64bit]], killed [[M32:%6:vr64bit]], killed %8:vr64bit
+; CHECK-NEXT: $f0d = COPY %9:vr64bit
+; CHECK-NEXT: Return implicit $f0d
+
+; CHECK: # *** IR Dump After Machine InstCombiner (machine-combiner) ***:
+; CHECK-NEXT: # Machine code for function fun0_fma2_add: IsSSA, TracksLiveness
+; CHECK: %10:vr64bit = {{.*}} WFMADB_CCPseudo killed [[M21]], killed [[M22]], [[X]]
+; CHECK-NEXT: %11:vr64bit = {{.*}} WFMADB_CCPseudo killed [[M31]], killed [[M32]], [[Y]]
+; CHECK-NEXT: %9:vr64bit = {{.*}} WFADB_CCPseudo %10:vr64bit, %11:vr64bit
+; CHECK-NEXT: $f0d = COPY %9:vr64bit
+; CHECK-NEXT: Return implicit $f0d
+entry:
+ %arrayidx1 = getelementptr inbounds double, ptr %x, i64 1
+ %arrayidx2 = getelementptr inbounds double, ptr %x, i64 2
+ %arrayidx4 = getelementptr inbounds double, ptr %x, i64 3
+
+ %0 = load double, ptr %x
+ %1 = load double, ptr %arrayidx1
+ %2 = load double, ptr %arrayidx2
+ %3 = load double, ptr %arrayidx4
+
+ %mul1 = fmul reassoc nsz contract double %0, %1
+ %mul2 = fmul reassoc nsz contract double %2, %3
+
+ %A1 = fadd reassoc nsz contract double %A, %B
+ %A2 = fadd reassoc nsz contract double %A1, %mul1
+ %A3 = fadd reassoc nsz contract double %A2, %mul2
+
+ ret double %A3
+}
+
+; Same as above, but with a long-latency factor in the root FMA, which makes
+; the transformation undesirable.
+define double @fun1_fma2_add_divop(ptr %x, double %A, double %B) {
+; CHECK: # *** IR Dump After Machine InstCombiner (machine-combiner) ***:
+; CHECK-NEXT: # Machine code for function fun1_fma2_add_divop: IsSSA, TracksLiveness
+; CHECK: bb.0.entry:
+; CHECK-NEXT: liveins: $r2d, $f0d, $f2d
+; CHECK-NEXT: %2:fp64bit = COPY $f2d
+; CHECK-NEXT: %1:fp64bit = COPY $f0d
+; CHECK-NEXT: %0:addr64bit = COPY $r2d
+; CHECK-NEXT: %3:vr64bit = VL64 %0:addr64bit, 0, $noreg :: (load (s64) from %ir.x)
+; CHECK-NEXT: %4:vr64bit = VL64 %0:addr64bit, 8, $noreg :: (load (s64) from %ir.arrayidx1)
+; CHECK-NEXT: %5:fp64bit = VL64 %0:addr64bit, 16, $noreg :: (load (s64) from %ir.arrayidx2)
+; CHECK-NEXT: %6:fp64bit = nofpexcept DDB %5:fp64bit(tied-def 0), %0:addr64bit, 24, $noreg, implicit $fpc
+; CHECK-NEXT: %7:vr64bit = {{.*}} WFADB_CCPseudo %1:fp64bit, %2:fp64bit
+; CHECK-NEXT: %8:vr64bit = {{.*}} WFMADB_CCPseudo killed %3:vr64bit, killed %4:vr64bit, killed %7:vr64bit
+; CHECK-NEXT: %9:vr64bit = {{.*}} WFMADB_CCPseudo %5:fp64bit, killed %6:fp64bit, killed %8:vr64bit
+; CHECK-NEXT: $f0d = COPY %9:vr64bit
+; CHECK-NEXT: Return implicit $f0d
+entry:
+ %arrayidx1 = getelementptr inbounds double, ptr %x, i64 1
+ %arrayidx2 = getelementptr inbounds double, ptr %x, i64 2
+ %arrayidx4 = getelementptr inbounds double, ptr %x, i64 3
+
+ %0 = load double, ptr %x
+ %1 = load double, ptr %arrayidx1
+ %2 = load double, ptr %arrayidx2
+ %3 = load double, ptr %arrayidx4
+ %div = fdiv double %2, %3
+
+ %mul1 = fmul reassoc nsz contract double %0, %1
+ %mul2 = fmul reassoc nsz contract double %2, %div
+
+ %A1 = fadd reassoc nsz contract double %A, %B
+ %A2 = fadd reassoc nsz contract double %A1, %mul1
+ %A3 = fadd reassoc nsz contract double %A2, %mul2
+
+ ret double %A3
+}
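
In algebraic terms, the -ppc-fma pattern exercised by fun0_fma2_add splits the serial accumulator chain so the two FMAs become independent (the symbols just name the MIR values from the dump, with A and B the incoming arguments):

```
fma(c, d, fma(a, b, A + B))  -->  fma(a, b, A) + fma(c, d, B)
```

fun1_fma2_add_divop shows the guard on that pattern: when a multiplicand of the root FMA is itself long-latency (the DDB result), the split no longer shortens the critical path, so the combiner keeps the original sequence.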
diff --git a/llvm/test/CodeGen/SystemZ/machine-combiner-reassoc-fp-04.ll b/llvm/test/CodeGen/SystemZ/machine-combiner-reassoc-fp-04.ll
new file mode 100644
index 00000000000000..f64508283a8c35
--- /dev/null
+++ b/llvm/test/CodeGen/SystemZ/machine-combiner-reassoc-fp-04.ll
@@ -0,0 +1,123 @@
+; RUN: llc < %s -mtriple=s390x-linux-gnu -mcpu=z15 -verify-machineinstrs -O3 \
+; RUN: -print-before=machine-combiner -print-after=machine-combiner -z-fma \
+; RUN: 2>&1 | FileCheck %s
+
+; REQUIRES: asserts
+
+; The incoming accumulator is stalling, so it is worth computing the
+; multiplications in parallel with it.
+define double @fun0_fma2_divop(ptr %x) {
+; CHECK: # *** IR Dump Before Machine InstCombiner (machine-combiner) ***:
+; CHECK-NEXT: # Machine code for function fun0_fma2_divop: IsSSA, TracksLiveness
+; CHECK: bb.0.entry:
+; CHECK-NEXT: liveins: $r2d
+; CHECK-NEXT: %0:addr64bit = COPY $r2d
+; CHECK-NEXT: [[M21:%1:vr64bit]] = VL64 %0:addr64bit, 0, $noreg :: (load (s64) from %ir.x)
+; CHECK-NEXT: [[M22:%2:vr64bit]] = VL64 %0:addr64bit, 8, $noreg :: (load (s64) from %ir.arrayidx1)
+; CHECK-NEXT: [[M11:%3:vr64bit]] = VL64 %0:addr64bit, 16, $noreg :: (load (s64) from %ir.arrayidx2)
+; CHECK-NEXT: [[M12:%4:vr64bit]] = VL64 %0:addr64bit, 24, $noreg :: (load (s64) from %ir.arrayidx4)
+; CHECK-NEXT: [[DIV:%5:vr64bit]] = nofpexcept WFDDB %3:vr64bit, %4:vr64bit, implicit $fpc
+; CHECK-NEXT: %6:vr64bit = {{.*}} WFMADB_CCPseudo killed [[M21]], killed [[M22]], killed [[DIV]]
+; CHECK-NEXT: %7:vr64bit = {{.*}} WFMADB_CCPseudo [[M11]], [[M12]], killed %6:vr64bit
+; CHECK-NEXT: $f0d = COPY %7:vr64bit
+; CHECK-NEXT: Return implicit $f0d
+
+; CHECK: # *** IR Dump After Machine InstCombiner (machine-combiner) ***:
+; CHECK-NEXT: # Machine code for function fun0_fma2_divop: IsSSA, TracksLiveness
+; CHECK: %8:vr64bit = {{.*}} WFMDB_CCPseudo killed [[M21]], killed [[M22]]
+; CHECK-NEXT: %9:vr64bit = {{.*}} WFMADB_CCPseudo [[M11]], [[M12]], %8:vr64bit
+; CHECK-NEXT: %7:vr64bit = {{.*}} WFADB_CCPseudo killed [[DIV]], %9:vr64bit
+entry:
+ %arrayidx1 = getelementptr inbounds double, ptr %x, i64 1
+ %arrayidx2 = getelementptr inbounds double, ptr %x, i64 2
+ %arrayidx4 = getelementptr inbounds double, ptr %x, i64 3
+
+ %0 = load double, ptr %x
+ %1 = load double, ptr %arrayidx1
+ %2 = load double, ptr %arrayidx2
+ %3 = load double, ptr %arrayidx4
+ %div = fdiv double %2, %3
+
+ %mul1 = fmul reassoc nsz contract double %0, %1
+ %mul2 = fmul reassoc nsz contract double %2, %3
+
+ %A1 = fadd reassoc nsz contract double %div, %mul1
+ %A2 = fadd reassoc nsz contract double %A1, %mul2
+
+ ret double %A2
+}
+
+; The non-profitable case:
+define double @fun1_fma2(ptr %x, double %Arg) {
+; CHECK: # *** IR Dump After Machine InstCombiner (machine-combiner) ***:
+; CHECK-NEXT: # Machine code for function fun1_fma2: IsSSA, TracksLiveness
+; CHECK: bb.0.entry:
+; CHECK-NEXT: liveins: $r2d, $f0d
+; CHECK-NEXT: %1:fp64bit = COPY $f0d
+; CHECK-NEXT: %0:addr64bit = COPY $r2d
+; CHECK-NEXT: %2:vr64bit = VL64 %0:addr64bit, 0, $noreg :: (load (s64) from %ir.x)
+; CHECK-NEXT: %3:vr64bit = VL64 %0:addr64bit, 8, $noreg :: (load (s64) from %ir.arrayidx1)
+; CHECK-NEXT: %4:vr64bit = VL64 %0:addr64bit, 16, $noreg :: (load (s64) from %ir.arrayidx2)
+; CHECK-NEXT: %5:vr64bit = VL64 %0:addr64bit, 24, $noreg :: (load (s64) from %ir.arrayidx4)
+; CHECK-NEXT: %6:vr64bit = {{.*}} WFMADB_CCPseudo killed %2:vr64bit, killed %3:vr64bit, %1:fp64bit
+; CHECK-NEXT: %7:vr64bit = {{.*}} WFMADB_CCPseudo killed %4:vr64bit, killed %5:vr64bit, killed %6:vr64bit
+; CHECK-NEXT: $f0d = COPY %7:vr64bit
+; CHECK-NEXT: Return implicit $f0d
+entry:
+ %arrayidx1 = getelementptr inbounds double, ptr %x, i64 1
+ %arrayidx2 = getelementptr inbounds double, ptr %x, i64 2
+ %arrayidx4 = getelementptr inbounds double, ptr %x, i64 3
+
+ %0 = load double, ptr %x
+ %1 = load double, ptr %arrayidx1
+ %2 = load double, ptr %arrayidx2
+ %3 = load double, ptr %arrayidx4
+
+ %mul1 = fmul reassoc nsz contract double %0, %1
+ %mul2 = fmul reassoc nsz contract double %2, %3
+
+ %A1 = fadd reassoc nsz contract double %Arg, %mul1
+ %A2 = fadd reassoc nsz contract double %A1, %mul2
+
+ ret double %A2
+}
+
+; Keep the two FMAs, but change order due to the long latency divide.
+define double @fun2_fma2(ptr %x) {
+; CHECK: # *** IR Dump Before Machine InstCombiner (machine-combiner) ***:
+; CHECK-NEXT: # Machine code for function fun2_fma2: IsSSA, TracksLiveness
+; CHECK: bb.0.entry:
+; CHECK-NEXT: liveins: $r2d
+; CHECK-NEXT: %0:addr64bit = COPY $r2d
+; CHECK-NEXT: %1:vr64bit = VL64 %0:addr64bit, 0, $noreg :: (load (s64) from %ir.x)
+; CHECK-NEXT: %2:vr64bit = VL64 %0:addr64bit, 8, $noreg :: (load (s64) from %ir.arrayidx1)
+; CHECK-NEXT: %3:vr64bit = VL64 %0:addr64bit, 16, $noreg :: (load (s64) from %ir.arrayidx2)
+; CHECK-NEXT: %4:vr64bit = VL64 %0:addr64bit, 24, $noreg :: (load (s64) from %ir.arrayidx4)
+; CHECK-NEXT: [[DIV:%5:vr64bit]] = nofpexcept WFDDB %3:vr64bit, %4:vr64bit, implicit $fpc
+; CHECK-NEXT: %6:vr64bit = {{.*}} WFMADB_CCPseudo killed %1:vr64bit, killed [[DIV]], killed %2:vr64bit
+; CHECK-NEXT: %7:vr64bit = {{.*}} WFMADB_CCPseudo %3:vr64bit, %4:vr64bit, killed %6:vr64bit
+
+; CHECK: # *** IR Dump After Machine InstCombiner (machine-combiner) ***:
+; CHECK-NEXT: # Machine code for function fun2_fma2: IsSSA, TracksLiveness
+; CHECK: %12:vr64bit = {{.*}} WFMADB_CCPseudo %3:vr64bit, %4:vr64bit, killed %2:vr64bit
+; CHECK-NEXT: %7:vr64bit = {{.*}} WFMADB_CCPseudo killed %1:vr64bit, killed [[DIV]], %12:vr64bit
+
+entry:
+ %arrayidx1 = getelementptr inbounds double, ptr %x, i64 1
+ %arrayidx2 = getelementptr inbounds double, ptr %x, i64 2
+ %arrayidx4 = getelementptr inbounds double, ptr %x, i64 3
+
+ %0 = load double, ptr %x
+ %1 = load double, ptr %arrayidx1
+ %2 = load double, ptr %arrayidx2
+ %3 = load double, ptr %arrayidx4
+ %div = fdiv double %2, %3
+
+ %mul1 = fmul reassoc nsz contract double %0, %div
+ %mul2 = fmul reassoc nsz contract double %2, %3
+
+ %A1 = fadd reassoc nsz contract double %1, %mul1
+ %A2 = fadd reassoc nsz contract double %A1, %mul2
+
+ ret double %A2
+}
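
The -z-fma cases above reduce to two rules. For fun0_fma2_divop, the stalling accumulator D (the divide) is pulled out of the FMA chain so the multiplies can issue while it completes:

```
fma(c, d, fma(a, b, D))  -->  D + fma(c, d, a * b)
```

fun1_fma2 is the unprofitable case: the incoming accumulator %Arg is ready immediately, so the rewrite would only add a multiply. fun2_fma2 keeps both FMAs but reorders them so that the FMA consuming the divide result becomes the root.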
diff --git a/llvm/test/CodeGen/SystemZ/machine-combiner-reassoc-fp-08.ll b/llvm/test/CodeGen/SystemZ/machine-combiner-reassoc-fp-08.ll
new file mode 100644
index 00000000000000..7aa6cea7540fe2
--- /dev/null
+++ b/llvm/test/CodeGen/SystemZ/machine-combiner-reassoc-fp-08.ll
@@ -0,0 +1,115 @@
+; RUN: llc < %s -mtriple=s390x-linux-gnu -mcpu=z15 -verify-machineinstrs -O3 \
+; RUN: -print-before=machine-combiner -print-after=machine-combiner -z-fma \
+; RUN: 2>&1 | FileCheck %s
+
+; REQUIRES: asserts
+
+; No improvement possible.
+define double @fun0_fma1add(ptr %x) {
+; CHECK: # *** IR Dump After Machine InstCombiner (machine-combiner) ***:
+; CHECK-NEXT: # Machine code for function fun0_fma1add: IsSSA, TracksLiveness
+; CHECK: bb.0.entry:
+; CHECK-NEXT: liveins: $r2d
+; CHECK-NEXT: %0:addr64bit = COPY $r2d
+; CHECK-NEXT: %1:vr64bit = VL64 %0:addr64bit, 0, $noreg :: (load (s64) from %ir.x)
+; CHECK-NEXT: %2:vr64bit = VL64 %0:addr64bit, 8, $noreg :: (load (s64) from %ir.arrayidx1)
+; CHECK-NEXT: %3:vr64bit = VL64 %0:addr64bit, 16, $noreg :: (load (s64) from %ir.arrayidx2)
+; CHECK-NEXT: %4:vr64bit = VL64 %0:addr64bit, 24, $noreg :: (load (s64) from %ir.arrayidx4)
+; CHECK-NEXT: %5:vr64bit = {{.*}} WFADB_CCPseudo killed %3:vr64bit, killed %4:vr64bit
+; CHECK-NEXT: %6:vr64bit = {{.*}} WFMADB_CCPseudo killed %1:vr64bit, killed %2:vr64bit, killed %5:vr64bit
+; CHECK-NEXT: $f0d = COPY %6:vr64bit
+; CHECK-NEXT: Return implicit $f0d
+entry:
+ %arrayidx1 = getelementptr inbounds double, ptr %x, i64 1
+ %arrayidx2 = getelementptr inbounds double, ptr %x, i64 2
+ %arrayidx4 = getelementptr inbounds double, ptr %x, i64 3
+
+ %0 = load double, ptr %x
+ %1 = load double, ptr %arrayidx1
+ %2 = load double, ptr %arrayidx2
+ %3 = load double, ptr %arrayidx4
+
+ %mul = fmul reassoc nsz contract double %0, %1
+
+ %A1 = fadd reassoc nsz contract double %2, %3
+ %A2 = fadd reassoc nsz contract double %A1, %mul
+
+ ret double %A2
+}
+
+; The RHS of the Add is stalling, so move up the FMA to the LHS.
+define double @fun1_fma1add_divop(ptr %x) {
+; CHECK: # *** IR Dump Before Machine InstCombiner (machine-combiner) ***:
+; CHECK-NEXT: # Machine code for function fun1_fma1add_divop: IsSSA, TracksLiveness
+; CHECK: bb.0.entry:
+; CHECK-NEXT: liveins: $r2d
+; CHECK-NEXT: %0:addr64bit = COPY $r2d
+; CHECK-NEXT: [[M21:%1:vr64bit]] = VL64 %0:addr64bit, 0, $noreg :: (load (s64) from %ir.x)
+; CHECK-NEXT: [[M22:%2:vr64bit]] = VL64 %0:addr64bit, 8, $noreg :: (load (s64) from %ir.arrayidx1)
+; CHECK-NEXT: [[T1:%3:fp64bit]] = VL64 %0:addr64bit, 16, $noreg :: (load (s64) from %ir.arrayidx2)
+; CHECK-NEXT: [[DIV:%4:fp64bit]] = nofpexcept DDB %3:fp64bit(tied-def 0), %0:addr64bit, 24, $noreg, implicit $fpc
+; CHECK-NEXT: %5:vr64bit = {{.*}} WFADB_CCPseudo [[T1]], killed [[DIV]]
+; CHECK-NEXT: %6:vr64bit = {{.*}} WFMADB_CCPseudo killed [[M21]], killed [[M22]], killed %5:vr64bit
+; CHECK-NEXT: $f0d = COPY %6:vr64bit
+; CHECK-NEXT: Return implicit $f0d
+
+; CHECK: # *** IR Dump After Machine InstCombiner (machine-combiner) ***:
+; CHECK-NEXT: # Machine code for function fun1_fma1add_divop: IsSSA, TracksLiveness
+; CHECK: %7:vr64bit = {{.*}} WFMADB_CCPseudo killed [[M21]], killed [[M22]], [[T1]]
+; CHECK-NEXT: %6:vr64bit = {{.*}} WFADB_CCPseudo %7:vr64bit, killed [[DIV]]
+entry:
+ %arrayidx1 = getelementptr inbounds double, ptr %x, i64 1
+ %arrayidx2 = getelementptr inbounds double, ptr %x, i64 2
+ %arrayidx4 = getelementptr inbounds double, ptr %x, i64 3
+
+ %0 = load double, ptr %x
+ %1 = load double, ptr %arrayidx1
+ %2 = load double, ptr %arrayidx2
+ %3 = load double, ptr %arrayidx4
+ %div = fdiv double %2, %3
+
+ %mul = fmul reassoc nsz contract double %0, %1
+
+ %A1 = fadd reassoc nsz contract double %2, %div
+ %A2 = fadd reassoc nsz contract double %A1, %mul
+
+ ret double %A2
+}
+
+; The LHS of the Add is stalling, so move up the FMA to the RHS.
+define double @fun2_fma1add_divop(ptr %x) {
+; CHECK: # *** IR Dump Before Machine InstCombiner (machine-combiner) ***:
+; CHECK-NEXT: # Machine code for function fun2_fma1add_divop: IsSSA, TracksLiveness
+; CHECK: bb.0.entry:
+; CHECK-NEXT: liveins: $r2d
+; CHECK-NEXT: %0:addr64bit = COPY $r2d
+; CHECK-NEXT: [[M21:%1:vr64bit]] = VL64 %0:addr64bit, 0, $noreg :: (load (s64) from %ir.x)
+; CHECK-NEXT: [[M22:%2:vr64bit]] = VL64 %0:addr64bit, 8, $noreg :: (load (s64) from %ir.arrayidx1)
+; CHECK-NEXT: %3:vr64bit = VL64 %0:addr64bit, 16, $noreg :: (load (s64) from %ir.arrayidx2)
+; CHECK-NEXT: [[T2:%4:vr64bit]] = VL64 %0:addr64bit, 24, $noreg :: (load (s64) from %ir.arrayidx4)
+; CHECK-NEXT: [[DIV:%5:vr64bit]] = nofpexcept WFDDB killed %3:vr64bit, %4:vr64bit, implicit $fpc
+; CHECK-NEXT: %6:vr64bit = {{.*}} WFADB_CCPseudo killed [[DIV]], [[T2]]
+; CHECK-NEXT: %7:vr64bit = {{.*}} WFMADB_CCPseudo killed [[M21]], killed [[M22]], killed %6:vr64bit
+
+; CHECK: # *** IR Dump After Machine InstCombiner (machine-combiner) ***:
+; CHECK-NEXT: # Machine code for function fun2_fma1add_divop: IsSSA, TracksLiveness
+; CHECK: %9:vr64bit = {{.*}} WFMADB_CCPseudo killed [[M21]], killed [[M22]], [[T2]]
+; CHECK: %7:vr64bit = {{.*}} WFADB_CCPseudo %9:vr64bit, killed [[DIV]]
+entry:
+ %arrayidx1 = getelementptr inbounds double, ptr %x, i64 1
+ %arrayidx2 = getelementptr inbounds double, ptr %x, i64 2
+ %arrayidx4 = getelementptr inbounds double, ptr %x, i64 3
+
+ %0 = load double, ptr %x
+ %1 = load double, ptr %arrayidx1
+ %2 = load double, ptr %arrayidx2
+ %3 = load double, ptr %arrayidx4
+ %div = fdiv double %2, %3
+
+ %mul = fmul reassoc nsz contract double %0, %1
+
+ %A1 = fadd reassoc nsz contract double %div, %3
+ %A2 = fadd reassoc nsz contract double %A1, %mul
+
+ ret double %A2
+}
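
Both divop cases above are instances of one FMA1_Add rule: if either operand of the accumulating add is stalling (here the divide D), the FMA is folded onto the other, ready operand and the stalling value is added last:

```
fma(a, b, t + D)  -->  fma(a, b, t) + D
```

fun0_fma1add is the baseline where both add operands are plain loads, so no reassociation helps and the output is unchanged.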
diff --git a/llvm/test/CodeGen/SystemZ/machine-combiner-reassoc-fp-09.ll b/llvm/test/CodeGen/SystemZ/machine-combiner-reassoc-fp-09.ll
new file mode 100644
index 00000000000000..4a8bcc962282d2
--- /dev/null
+++ b/llvm/test/CodeGen/SystemZ/machine-combiner-reassoc-fp-09.ll
@@ -0,0 +1,177 @@
+; RUN: llc < %s -mtriple=s390x-linux-gnu -mcpu=z15 -O3 -print-before=machine-combiner \
+; RUN: -print-after=machine-combiner -debug-only=machine-combiner,systemz-II -z-fma 2>&1 \
+; RUN: | FileCheck %s
+
+; RUN: llc < %s -mtriple=s390x-linux-gnu -mcpu=z15 -O3 \
+; RUN: -print-after=machine-combiner -debug-only=machine-combiner,systemz-II -ppc-fma 2>&1 \
+; RUN: | FileCheck %s --check-prefix=ALT
+
+; REQUIRES: asserts
+
+; Test transformation of a sequence of 8 FMAs, with different patterns.
+
+define double @fun_fma8(ptr %x, double %A) {
+; CHECK: # *** IR Dump Before Machine InstCombiner (machine-combiner) ***:
+; CHECK-NEXT: # Machine code for function fun_fma8: IsSSA, TracksLiveness
+; CHECK: bb.0.entry:
+; CHECK-NEXT: liveins: $r2d, $f0d
+; CHECK-NEXT: %1:fp64bit = COPY $f0d
+; CHECK-NEXT: %0:addr64bit = COPY $r2d
+; CHECK-NEXT: %2:vr64bit = VL64 %0:addr64bit, 0, $noreg :: (load (s64) from %ir.x)
+; CHECK-NEXT: %3:vr64bit = VL64 %0:addr64bit, 8, $noreg :: (load (s64) from %ir.arrayidx1)
+; CHECK-NEXT: %4:vr64bit = VL64 %0:addr64bit, 16, $noreg :: (load (s64) from %ir.arrayidx2)
+; CHECK-NEXT: %5:vr64bit = VL64 %0:addr64bit, 24, $noreg :: (load (s64) from %ir.arrayidx4)
+; CHECK-NEXT: %6:vr64bit = VL64 %0:addr64bit, 32, $noreg :: (load (s64) from %ir.arrayidx6)
+; CHECK-NEXT: %7:vr64bit = VL64 %0:addr64bit, 40, $noreg :: (load (s64) from %ir.arrayidx8)
+; CHECK-NEXT: %8:vr64bit = VL64 %0:addr64bit, 48, $noreg :: (load (s64) from %ir.arrayidx10)
+; CHECK-NEXT: %9:vr64bit = VL64 %0:addr64bit, 56, $noreg :: (load (s64) from %ir.arrayidx12)
+; CHECK-NEXT: %10:vr64bit = VL64 %0:addr64bit, 64, $noreg :: (load (s64) from %ir.arrayidx14)
+; CHECK-NEXT: %11:vr64bit = VL64 %0:addr64bit, 72, $noreg :: (load (s64) from %ir.arrayidx16)
+; CHECK-NEXT: %12:vr64bit = VL64 %0:addr64bit, 80, $noreg :: (load (s64) from %ir.arrayidx18)
+; CHECK-NEXT: %13:vr64bit = VL64 %0:addr64bit, 88, $noreg :: (load (s64) from %ir.arrayidx20)
+; CHECK-NEXT: %14:vr64bit = VL64 %0:addr64bit, 96, $noreg :: (load (s64) from %ir.arrayidx22)
+; CHECK-NEXT: %15:vr64bit = VL64 %0:addr64bit, 104, $noreg :: (load (s64) from %ir.arrayidx24)
+; CHECK-NEXT: %16:vr64bit = VL64 %0:addr64bit, 112, $noreg :: (load (s64) from %ir.arrayidx26)
+; CHECK-NEXT: %17:vr64bit = VL64 %0:addr64bit, 120, $noreg :: (load (s64) from %ir.arrayidx28)
+; CHECK-NEXT: %18:vr64bit = {{.*}} WFMADB_CCPseudo killed %2:vr64bit, killed %3:vr64bit, %1:fp64bit
+; CHECK-NEXT: %19:vr64bit = {{.*}} WFMADB_CCPseudo killed %4:vr64bit, killed %5:vr64bit, killed %18:vr64bit
+; CHECK-NEXT: %20:vr64bit = {{.*}} WFMADB_CCPseudo killed %6:vr64bit, killed %7:vr64bit, killed %19:vr64bit
+; CHECK-NEXT: %21:vr64bit = {{.*}} WFMADB_CCPseudo killed %8:vr64bit, killed %9:vr64bit, killed %20:vr64bit
+; CHECK-NEXT: %22:vr64bit = {{.*}} WFMADB_CCPseudo killed %10:vr64bit, killed %11:vr64bit, killed %21:vr64bit
+; CHECK-NEXT: %23:vr64bit = {{.*}} WFMADB_CCPseudo killed %12:vr64bit, killed %13:vr64bit, killed %22:vr64bit
+; CHECK-NEXT: %24:vr64bit = {{.*}} WFMADB_CCPseudo killed %14:vr64bit, killed %15:vr64bit, killed %23:vr64bit
+; CHECK-NEXT: %25:vr64bit = {{.*}} WFMADB_CCPseudo killed %16:vr64bit, killed %17:vr64bit, killed %24:vr64bit
+; CHECK-NEXT: $f0d = COPY %25:vr64bit
+; CHECK-NEXT: Return implicit $f0d
+
+; CHECK: Machine InstCombiner: fun_fma8
+; CHECK: add pattern FMA2_P1P0
+; CHECK-NEXT: add pattern FMA2_P0P1
+; CHECK-NEXT: add pattern FMA2
+; CHECK: reassociating using pattern FMA_P1P0
+; CHECK: Dependence data for %21:vr64bit = {{.*}} WFMADB_CCPseudo
+; CHECK-NEXT: NewRootDepth: 16 RootDepth: 22 It MustReduceDepth and it does it
+; CHECK-NEXT: Resource length before replacement: 16 and after: 16
+; CHECK-NEXT: As result it IMPROVES/PRESERVES Resource Length
+; CHECK: add pattern FMA2_P1P0
+; CHECK-NEXT: add pattern FMA2_P0P1
+; CHECK-NEXT: add pattern FMA2
+; CHECK-NEXT: reassociating using pattern FMA_P1P0
+; CHECK-NEXT: Dependence data for %23:vr64bit = {{.*}} WFMADB_CCPseudo
+; CHECK-NEXT: NewRootDepth: 22 RootDepth: 28 It MustReduceDepth and it does it
+; CHECK: Resource length before replacement: 16 and after: 16
+; CHECK-NEXT: As result it IMPROVES/PRESERVES Resource Length
+; CHECK-NEXT: add pattern FMA1_Add_L
+; CHECK-NEXT: add pattern FMA1_Add_R
+; CHECK-NEXT: reassociating using pattern FMA1_Add_L
+; CHECK-NEXT: Dependence data for %24:vr64bit = {{.*}} WFMADB_CCPseudo
+; CHECK-NEXT: NewRootDepth: 28 RootDepth: 28 It MustReduceDepth but it does NOT do it
+; CHECK-NEXT: reassociating using pattern FMA1_Add_R
+; CHECK-NEXT: Dependence data for %24:vr64bit = {{.*}} WFMADB_CCPseudo
+; CHECK-NEXT: NewRootDepth: 22 RootDepth: 28 It MustReduceDepth and it does it
+; CHECK-NEXT: Resource length before replacement: 16 and after: 16
+; CHECK-NEXT: As result it IMPROVES/PRESERVES Resource Length
+
+; CHECK: # *** IR Dump After Machine InstCombiner (machine-combiner) ***:
+; CHECK: %18:vr64bit = {{.*}} WFMADB_CCPseudo killed %2:vr64bit, killed %3:vr64bit, %1:fp64bit
+; CHECK-NEXT: %19:vr64bit = {{.*}} WFMADB_CCPseudo killed %4:vr64bit, killed %5:vr64bit, killed %18:vr64bit
+; CHECK-NEXT: %36:vr64bit = {{.*}} WFMDB_CCPseudo killed %6:vr64bit, killed %7:vr64bit
+; CHECK-NEXT: %37:vr64bit = {{.*}} WFMADB_CCPseudo killed %8:vr64bit, killed %9:vr64bit, %36:vr64bit
+; CHECK-NEXT: %21:vr64bit = {{.*}} WFADB_CCPseudo killed %19:vr64bit, %37:vr64bit
+; CHECK-NEXT: %40:vr64bit = {{.*}} WFMDB_CCPseudo killed %10:vr64bit, killed %11:vr64bit
+; CHECK-NEXT: %41:vr64bit = {{.*}} WFMADB_CCPseudo killed %12:vr64bit, killed %13:vr64bit, %40:vr64bit
+; CHECK-NEXT: %43:vr64bit = {{.*}} WFMADB_CCPseudo killed %14:vr64bit, killed %15:vr64bit, %41:vr64bit
+; CHECK-NEXT: %24:vr64bit = {{.*}} WFADB_CCPseudo %43:vr64bit, killed %21:vr64bit
+; CHECK-NEXT: %25:vr64bit = {{.*}} WFMADB_CCPseudo killed %16:vr64bit, killed %17:vr64bit, killed %24:vr64bit
+
+; ALT: Machine InstCombiner: fun_fma8
+; ALT-NEXT: Combining MBB entry
+; ALT-NEXT: add pattern FMA3
+; ALT-NEXT: reassociating using pattern FMA3
+; ALT-NEXT: Dependence data for %20:vr64bit = {{.*}} WFMADB_CCPseudo
+; ALT-NEXT: NewRootDepth: 16 RootDepth: 16 It MustReduceDepth but it does NOT do it
+; ALT-NEXT: add pattern FMA3
+; ALT-NEXT: reassociating using pattern FMA3
+; ALT-NEXT: Dependence data for %21:vr64bit = {{.*}} WFMADB_CCPseudo
+; ALT-NEXT: NewRootDepth: 16 RootDepth: 22 It MustReduceDepth and it does it
+; ALT-NEXT: Resource length before replacement: 16 and after: 16
+; ALT-NEXT: As result it IMPROVES/PRESERVES Resource Length
+; ALT-NEXT: add pattern FMA2_Add
+; ALT-NEXT: reassociating using pattern FMA2_Add
+; ALT-NEXT: Dependence data for %23:vr64bit = {{.*}} WFMADB_CCPseudo
+; ALT-NEXT: NewRootDepth: 22 RootDepth: 28 It MustReduceDepth and it does it
+; ALT-NEXT: Resource length before replacement: 16 and after: 16
+; ALT-NEXT: As result it IMPROVES/PRESERVES Resource Length
+; ALT-NEXT: add pattern FMA2_Add
+; ALT-NEXT: reassociating using pattern FMA2_Add
+; ALT-NEXT: Dependence data for %25:vr64bit = {{.*}} WFMADB_CCPseudo
+; ALT-NEXT: NewRootDepth: 28 RootDepth: 34 It MustReduceDepth and it does it
+; ALT-NEXT: Resource length before replacement: 16 and after: 16
+; ALT-NEXT: As result it IMPROVES/PRESERVES Resource Length
+
+; ALT: # *** IR Dump After Machine InstCombiner (machine-combiner) ***:
+; ALT: %18:vr64bit = {{.*}} WFMADB_CCPseudo killed %2:vr64bit, killed %3:vr64bit, %1:fp64bit
+; ALT-NEXT: %29:vr64bit = {{.*}} WFMDB_CCPseudo killed %4:vr64bit, killed %5:vr64bit
+; ALT-NEXT: %30:vr64bit = {{.*}} WFMADB_CCPseudo killed %6:vr64bit, killed %7:vr64bit, killed %18:vr64bit
+; ALT-NEXT: %31:vr64bit = {{.*}} WFMADB_CCPseudo killed %8:vr64bit, killed %9:vr64bit, %29:vr64bit
+; ALT-NEXT: %32:vr64bit = {{.*}} WFMADB_CCPseudo killed %10:vr64bit, killed %11:vr64bit, %30:vr64bit
+; ALT-NEXT: %33:vr64bit = {{.*}} WFMADB_CCPseudo killed %12:vr64bit, killed %13:vr64bit, %31:vr64bit
+; ALT-NEXT: %34:vr64bit = {{.*}} WFMADB_CCPseudo killed %14:vr64bit, killed %15:vr64bit, %32:vr64bit
+; ALT-NEXT: %35:vr64bit = {{.*}} WFMADB_CCPseudo killed %16:vr64bit, killed %17:vr64bit, %33:vr64bit
+; ALT-NEXT: %25:vr64bit = {{.*}} WFADB_CCPseudo %34:vr64bit, %35:vr64bit
+
+entry:
+ %arrayidx1 = getelementptr inbounds double, ptr %x, i64 1
+ %arrayidx2 = getelementptr inbounds double, ptr %x, i64 2
+ %arrayidx4 = getelementptr inbounds double, ptr %x, i64 3
+ %arrayidx6 = getelementptr inbounds double, ptr %x, i64 4
+ %arrayidx8 = getelementptr inbounds double, ptr %x, i64 5
+ %arrayidx10 = getelementptr inbounds double, ptr %x, i64 6
+ %arrayidx12 = getelementptr inbounds double, ptr %x, i64 7
+ %arrayidx14 = getelementptr inbounds double, ptr %x, i64 8
+ %arrayidx16 = getelementptr inbounds double, ptr %x, i64 9
+ %arrayidx18 = getelementptr inbounds double, ptr %x, i64 10
+ %arrayidx20 = getelementptr inbounds double, ptr %x, i64 11
+ %arrayidx22 = getelementptr inbounds double, ptr %x, i64 12
+ %arrayidx24 = getelementptr inbounds double, ptr %x, i64 13
+ %arrayidx26 = getelementptr inbounds double, ptr %x, i64 14
+ %arrayidx28 = getelementptr inbounds double, ptr %x, i64 15
+
+ %0 = load double, ptr %x
+ %1 = load double, ptr %arrayidx1
+ %2 = load double, ptr %arrayidx2
+ %3 = load double, ptr %arrayidx4
+ %4 = load double, ptr %arrayidx6
+ %5 = load double, ptr %arrayidx8
+ %6 = load double, ptr %arrayidx10
+ %7 = load double, ptr %arrayidx12
+ %8 = load double, ptr %arrayidx14
+ %9 = load double, ptr %arrayidx16
+ %10 = load double, ptr %arrayidx18
+ %11 = load double, ptr %arrayidx20
+ %12 = load double, ptr %arrayidx22
+ %13 = load double, ptr %arrayidx24
+ %14 = load double, ptr %arrayidx26
+ %15 = load double, ptr %arrayidx28
+
+ %mul1 = fmul reassoc nsz contract double %0, %1
+ %mul2 = fmul reassoc nsz contract double %2, %3
+ %mul3 = fmul reassoc nsz contract double %4, %5
+ %mul4 = fmul reassoc nsz contract double %6, %7
+ %mul5 = fmul reassoc nsz contract double %8, %9
+ %mul6 = fmul reassoc nsz contract double %10, %11
+ %mul7 = fmul reassoc nsz contract double %12, %13
+ %mul8 = fmul reassoc nsz contract double %14, %15
+
+ %A1 = fadd reassoc nsz contract double %A, %mul1
+ %A2 = fadd reassoc nsz contract double %A1, %mul2
+ %A3 = fadd reassoc nsz contract double %A2, %mul3
+ %A4 = fadd reassoc nsz contract double %A3, %mul4
+ %A5 = fadd reassoc nsz contract double %A4, %mul5
+ %A6 = fadd reassoc nsz contract double %A5, %mul6
+ %A7 = fadd reassoc nsz contract double %A6, %mul7
+ %A8 = fadd reassoc nsz contract double %A7, %mul8
+
+ ret double %A8
+}
+
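
For reference, the decision rule visible in the -debug-only=machine-combiner trace above is, roughly, that a candidate pattern is applied only when it strictly reduces the new root's depth and does not lengthen the resource-critical path:

```
accept  <=>  NewRootDepth < RootDepth  and  ResLen_after <= ResLen_before
```

which is why FMA1_Add_L is tried and rejected (depth 28 vs 28) before FMA1_Add_R succeeds (22 vs 28) in the z-fma run.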