[llvm] [GlobalISel] An optimizing MIR builder (PR #132282)

John Stuart via llvm-commits llvm-commits at lists.llvm.org
Sat Mar 29 10:11:06 PDT 2025


https://github.com/john-stuart2 updated https://github.com/llvm/llvm-project/pull/132282

>From 5817f695dd26fc49252b169a773e43a8badb9ff6 Mon Sep 17 00:00:00 2001
From: John Stuart <john.stuart.science at gmail.com>
Date: Thu, 20 Mar 2025 18:31:11 +0100
Subject: [PATCH 1/5] [GlobalISel] An optimizing MIR builder

The builder optimizes instructions while building them, based on the
constness and values of its operands, and it may build copies. In some
cases partial constness or values are sufficient to optimize. The only
context it relies on is constants and their values; it never uses
known bits to optimize. It always checks legality before building.

It uses undefness for optimization.

The CSEMIRBuilder can build possibly illegal build vectors past the
legalizer, with no means to verify legality. It is inconsistent about
whether it optimizes fixed-length vectors, and it never optimizes
scalable vectors.

The wins of the new approach are separation of concerns and correctness.

TODO: move constant folding out of CSEMIRBuilder

I put the new builder only into the post-legalizer combiner to keep
the patch small and to show the demand for a legal builder.

I like GIConstant. It keeps the builder smaller and simpler.
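As an illustration, the scalar strength-reduction decisions described above (and implemented in OptMIRBuilder::optimizeMul below) can be sketched as a small self-contained model. The function name and the string return encoding here are invented for illustration only; the real builder returns a MachineInstrBuilder and additionally checks undef operands and legality before each fold:

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <string>

// Toy model of the scalar fold decisions in optimizeMul for G_MUL x, c.
// The string results merely name the instruction the builder would emit.
std::string foldMulByConstant(std::optional<int64_t> RHS) {
  if (!RHS)
    return "G_MUL";       // RHS not constant: delegate to CSEMIRBuilder
  if (*RHS == 0)
    return "constant 0";  // x * 0 -> 0
  if (*RHS == 1)
    return "copy";        // x * 1 -> x
  if (*RHS == -1)
    return "G_SUB 0, x";  // x * -1 -> negation
  if (*RHS == 2)
    return "G_ADD x, x";  // x * 2 -> x + x
  return "G_MUL";         // no cheap fold: keep the multiply
}
```

Each branch mirrors one fold in the patch; anything that does not match falls through to plain instruction building.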
---
 .../llvm/CodeGen/GlobalISel/OptMIRBuilder.h   |  83 ++
 llvm/include/llvm/CodeGen/GlobalISel/Utils.h  |  20 +
 llvm/lib/CodeGen/GlobalISel/CMakeLists.txt    |   1 +
 llvm/lib/CodeGen/GlobalISel/CSEMIRBuilder.cpp |   3 -
 .../CodeGen/GlobalISel/MachineIRBuilder.cpp   |   8 +-
 llvm/lib/CodeGen/GlobalISel/OptMIRBuilder.cpp | 206 ++++
 llvm/lib/CodeGen/GlobalISel/Utils.cpp         | 124 +++
 .../GISel/AArch64PostLegalizerCombiner.cpp    |  10 +-
 .../AArch64/GlobalISel/arm64-irtranslator.ll  |   4 +-
 .../AArch64/GlobalISel/combine-udiv.ll        |  22 +-
 .../AArch64/GlobalISel/combine-udiv.mir       |  14 +-
 .../GlobalISel/combine-umulh-to-lshr.mir      |  22 +-
 .../AArch64/GlobalISel/legalize-fshl.mir      |  55 +-
 .../AArch64/GlobalISel/legalize-fshr.mir      |  56 +-
 .../AArch64/GlobalISel/opt-mir-builder.mir    | 166 +++
 .../CodeGen/AArch64/arm64-neon-mul-div-cte.ll |  20 +-
 llvm/test/CodeGen/AArch64/fsh.ll              | 996 ++++++++++++------
 llvm/test/CodeGen/AArch64/funnel-shift.ll     |  31 +-
 18 files changed, 1421 insertions(+), 420 deletions(-)
 create mode 100644 llvm/include/llvm/CodeGen/GlobalISel/OptMIRBuilder.h
 create mode 100644 llvm/lib/CodeGen/GlobalISel/OptMIRBuilder.cpp
 create mode 100644 llvm/test/CodeGen/AArch64/GlobalISel/opt-mir-builder.mir
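The GIConstant extensions added in Utils.h/Utils.cpp below apply arithmetic uniformly over three constant kinds. A toy, LLVM-free model of that shape (ToyGIConstant and its members are hypothetical stand-ins; the real class stores APInt values):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy model of GIConstant's three kinds and its elementwise arithmetic.
struct ToyGIConstant {
  enum class Kind { Scalar, FixedVector, ScalableSplat };
  Kind K;
  int64_t Value;               // Scalar and ScalableSplat kinds
  std::vector<int64_t> Values; // FixedVector kind

  // Elementwise addition; both operands must have the same kind/shape.
  ToyGIConstant add(const ToyGIConstant &RHS) const {
    ToyGIConstant R = *this;
    if (K == Kind::FixedVector)
      for (std::size_t I = 0, E = Values.size(); I < E; ++I)
        R.Values[I] = Values[I] + RHS.Values[I];
    else
      R.Value = Value + RHS.Value; // scalar or splat: one addition
    return R;
  }

  bool isZero() const {
    if (K == Kind::FixedVector) {
      for (int64_t V : Values)
        if (V != 0)
          return false;
      return true;
    }
    return Value == 0;
  }
};
```

A scalable-vector splat is represented by a single value, so folding it costs the same as folding a scalar.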

diff --git a/llvm/include/llvm/CodeGen/GlobalISel/OptMIRBuilder.h b/llvm/include/llvm/CodeGen/GlobalISel/OptMIRBuilder.h
new file mode 100644
index 0000000000000..7df63bef1d9ff
--- /dev/null
+++ b/llvm/include/llvm/CodeGen/GlobalISel/OptMIRBuilder.h
@@ -0,0 +1,83 @@
+//===-- llvm/CodeGen/GlobalISel/OptMIRBuilder.h -----------------*- C++ -*-===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+/// \file This file declares a legality-aware version of MachineIRBuilder
+/// which optimizes instructions while building them.
+//===----------------------------------------------------------------------===//
+#ifndef LLVM_CODEGEN_GLOBALISEL_OPTMIRBUILDER_H
+#define LLVM_CODEGEN_GLOBALISEL_OPTMIRBUILDER_H
+
+#include "llvm/CodeGen/GlobalISel/CSEMIRBuilder.h"
+
+namespace llvm {
+
+class LegalizerInfo;
+struct LegalityQuery;
+
+/// OptMIRBuilder optimizes instructions while building them. It
+/// checks whether its operands are constant or undef, looking only at
+/// G_IMPLICIT_DEF, G_CONSTANT, G_BUILD_VECTOR, and G_SPLAT_VECTOR
+/// definitions and at nothing else. Based on undef and on the
+/// constants and their values, it folds instructions into constants,
+/// undef, or other instructions. For optimizations and constant
+/// folding it relies on GIConstant. It can fold G_MUL into G_ADD and
+/// G_SUB. Before folding, it always queries the legalizer. When it
+/// fails to fold, it delegates the building to the CSEMIRBuilder. It
+/// is the user's responsibility to only attempt to build instructions
+/// that are legal past the legalizer. OptMIRBuilder can safely be
+/// used in optimization passes past the legalizer.
+class OptMIRBuilder : public CSEMIRBuilder {
+  const LegalizerInfo *LI;
+  const bool IsPrelegalize;
+
+  /// Legality tests.
+  bool isPrelegalize() const;
+  bool isLegal(const LegalityQuery &Query) const;
+  bool isConstantLegal(LLT);
+  bool isLegalOrBeforeLegalizer(const LegalityQuery &Query) const;
+  bool isConstantLegalOrBeforeLegalizer(LLT);
+
+  /// Returns true if the register \p R is defined by G_IMPLICIT_DEF.
+  bool isUndef(Register R) const;
+
+  /// Builds 0 - X.
+  MachineInstrBuilder buildNegation(const DstOp &, const SrcOp &);
+
+  /// Constants.
+  MachineInstrBuilder buildGIConstant(const DstOp &DstOp, const GIConstant &);
+
+  /// Integer.
+  MachineInstrBuilder optimizeAdd(unsigned Opc, ArrayRef<DstOp> DstOps,
+                                  ArrayRef<SrcOp> SrcOps,
+                                  std::optional<unsigned> Flag = std::nullopt);
+  MachineInstrBuilder optimizeSub(unsigned Opc, ArrayRef<DstOp> DstOps,
+                                  ArrayRef<SrcOp> SrcOps,
+                                  std::optional<unsigned> Flag = std::nullopt);
+  MachineInstrBuilder optimizeMul(unsigned Opc, ArrayRef<DstOp> DstOps,
+                                  ArrayRef<SrcOp> SrcOps,
+                                  std::optional<unsigned> Flag = std::nullopt);
+
+public:
+  OptMIRBuilder(MachineFunction &MF, GISelCSEInfo *CSEInfo,
+                GISelChangeObserver &Observer, const LegalizerInfo *LI,
+                bool IsPrelegalize)
+      : LI(LI), IsPrelegalize(IsPrelegalize) {
+    setMF(MF);
+    setCSEInfo(CSEInfo);
+    setChangeObserver(Observer);
+  }
+
+  MachineInstrBuilder
+  buildInstr(unsigned Opc, ArrayRef<DstOp> DstOps, ArrayRef<SrcOp> SrcOps,
+             std::optional<unsigned> Flag = std::nullopt) override;
+};
+
+} // namespace llvm
+
+#endif // LLVM_CODEGEN_GLOBALISEL_OPTMIRBUILDER_H
diff --git a/llvm/include/llvm/CodeGen/GlobalISel/Utils.h b/llvm/include/llvm/CodeGen/GlobalISel/Utils.h
index 44141844f42f4..45e9f20565f98 100644
--- a/llvm/include/llvm/CodeGen/GlobalISel/Utils.h
+++ b/llvm/include/llvm/CodeGen/GlobalISel/Utils.h
@@ -635,8 +635,28 @@ class GIConstant {
   /// Returns the value, if this constant is a scalar.
   APInt getScalarValue() const;
 
+  /// Returns the value, if this constant is a scalable vector.
+  APInt getSplatValue() const;
+
+  /// Returns the values, if this constant is a fixed vector.
+  ArrayRef<APInt> getAsArrayRef() const;
+
   static std::optional<GIConstant> getConstant(Register Const,
                                                const MachineRegisterInfo &MRI);
+
+  /// Returns a new constant where add(this, x) was applied.
+  GIConstant add(const GIConstant &) const;
+
+  /// Returns a new constant where sub(this, x) was applied.
+  GIConstant sub(const GIConstant &) const;
+
+  /// Returns a new constant where mul(this, x) was applied.
+  GIConstant mul(const GIConstant &) const;
+
+  bool isZero() const;
+  bool isOne() const;
+  bool isTwo() const;
+  bool isAllOnes() const;
 };
 
 /// An floating-point-like constant.
diff --git a/llvm/lib/CodeGen/GlobalISel/CMakeLists.txt b/llvm/lib/CodeGen/GlobalISel/CMakeLists.txt
index 554a2367eb835..87699ef0bcf87 100644
--- a/llvm/lib/CodeGen/GlobalISel/CMakeLists.txt
+++ b/llvm/lib/CodeGen/GlobalISel/CMakeLists.txt
@@ -26,6 +26,7 @@ add_llvm_component_library(LLVMGlobalISel
   Localizer.cpp
   LostDebugLocObserver.cpp
   MachineIRBuilder.cpp
+  OptMIRBuilder.cpp
   RegBankSelect.cpp
   Utils.cpp
 
diff --git a/llvm/lib/CodeGen/GlobalISel/CSEMIRBuilder.cpp b/llvm/lib/CodeGen/GlobalISel/CSEMIRBuilder.cpp
index bf8e847011d7c..23ff50f5296af 100644
--- a/llvm/lib/CodeGen/GlobalISel/CSEMIRBuilder.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/CSEMIRBuilder.cpp
@@ -199,15 +199,12 @@ MachineInstrBuilder CSEMIRBuilder::buildInstr(unsigned Opc,
     }
     break;
   }
-  case TargetOpcode::G_ADD:
   case TargetOpcode::G_PTR_ADD:
   case TargetOpcode::G_AND:
   case TargetOpcode::G_ASHR:
   case TargetOpcode::G_LSHR:
-  case TargetOpcode::G_MUL:
   case TargetOpcode::G_OR:
   case TargetOpcode::G_SHL:
-  case TargetOpcode::G_SUB:
   case TargetOpcode::G_XOR:
   case TargetOpcode::G_UDIV:
   case TargetOpcode::G_SDIV:
diff --git a/llvm/lib/CodeGen/GlobalISel/MachineIRBuilder.cpp b/llvm/lib/CodeGen/GlobalISel/MachineIRBuilder.cpp
index 359677027f52f..f96c65ae4993e 100644
--- a/llvm/lib/CodeGen/GlobalISel/MachineIRBuilder.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/MachineIRBuilder.cpp
@@ -321,8 +321,12 @@ MachineInstrBuilder MachineIRBuilder::buildConstant(const DstOp &Res,
   assert(EltTy.getScalarSizeInBits() == Val.getBitWidth() &&
          "creating constant with the wrong size");
 
-  assert(!Ty.isScalableVector() &&
-         "unexpected scalable vector in buildConstant");
+  if (Ty.isScalableVector()) {
+    auto Const = buildInstr(TargetOpcode::G_CONSTANT)
+                     .addDef(getMRI()->createGenericVirtualRegister(EltTy))
+                     .addCImm(&Val);
+    return buildSplatVector(Res, Const);
+  }
 
   if (Ty.isFixedVector()) {
     auto Const = buildInstr(TargetOpcode::G_CONSTANT)
diff --git a/llvm/lib/CodeGen/GlobalISel/OptMIRBuilder.cpp b/llvm/lib/CodeGen/GlobalISel/OptMIRBuilder.cpp
new file mode 100644
index 0000000000000..66b879f5113dd
--- /dev/null
+++ b/llvm/lib/CodeGen/GlobalISel/OptMIRBuilder.cpp
@@ -0,0 +1,206 @@
+//===-- llvm/CodeGen/GlobalISel/OptMIRBuilder.cpp ---------------*- C++ -*-===//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+/// \file
+/// This file implements the OptMIRBuilder class which optimizes as it builds
+/// instructions.
+//===----------------------------------------------------------------------===//
+//
+
+#include "llvm/CodeGen/GlobalISel/OptMIRBuilder.h"
+#include "llvm/CodeGen/GlobalISel/CSEMIRBuilder.h"
+#include "llvm/CodeGen/GlobalISel/LegalizerInfo.h"
+#include "llvm/CodeGen/GlobalISel/MachineIRBuilder.h"
+#include "llvm/CodeGen/GlobalISel/Utils.h"
+#include "llvm/CodeGen/TargetOpcodes.h"
+
+using namespace llvm;
+
+bool OptMIRBuilder::isPrelegalize() const { return IsPrelegalize; }
+
+bool OptMIRBuilder::isLegal(const LegalityQuery &Query) const {
+  assert(LI != nullptr && "legalizer info is not available");
+  return LI->isLegal(Query);
+}
+
+bool OptMIRBuilder::isConstantLegal(LLT Ty) {
+  if (Ty.isScalar())
+    return isLegal({TargetOpcode::G_CONSTANT, {Ty}});
+
+  LLT EltTy = Ty.getElementType();
+  if (Ty.isFixedVector())
+    return isLegal({TargetOpcode::G_BUILD_VECTOR, {Ty, EltTy}}) &&
+           isLegal({TargetOpcode::G_CONSTANT, {EltTy}});
+
+  // scalable vector
+  assert(Ty.isScalableVector() && "Unexpected LLT");
+  return isLegal({TargetOpcode::G_SPLAT_VECTOR, {Ty, EltTy}}) &&
+         isLegal({TargetOpcode::G_CONSTANT, {EltTy}});
+}
+
+bool OptMIRBuilder::isLegalOrBeforeLegalizer(const LegalityQuery &Query) const {
+  return isPrelegalize() || isLegal(Query);
+}
+
+bool OptMIRBuilder::isConstantLegalOrBeforeLegalizer(LLT Ty) {
+  return isPrelegalize() || isConstantLegal(Ty);
+}
+
+bool OptMIRBuilder::isUndef(Register Reg) const {
+  const MachineInstr *MI = getMRI()->getVRegDef(Reg);
+  return MI->getOpcode() == TargetOpcode::G_IMPLICIT_DEF;
+}
+
+MachineInstrBuilder OptMIRBuilder::buildNegation(const DstOp &DstOp,
+                                                 const SrcOp &SrcOp) {
+  LLT DstTy = DstOp.getLLTTy(*getMRI());
+
+  auto Zero = buildConstant(DstTy, 0);
+  return buildSub(DstOp, Zero, SrcOp);
+}
+
+MachineInstrBuilder OptMIRBuilder::buildGIConstant(const DstOp &DstOp,
+                                                   const GIConstant &Const) {
+  LLT DstTy = DstOp.getLLTTy(*getMRI());
+
+  switch (Const.getKind()) {
+  case GIConstant::GIConstantKind::Scalar:
+    return buildConstant(DstOp, Const.getScalarValue());
+  case GIConstant::GIConstantKind::FixedVector:
+    return buildBuildVectorConstant(DstOp, Const.getAsArrayRef());
+  case GIConstant::GIConstantKind::ScalableVector: {
+    auto Constant =
+        buildConstant(DstTy.getElementType(), Const.getSplatValue());
+    return buildSplatVector(DstOp, Constant);
+  }
+  }
+}
+
+MachineInstrBuilder OptMIRBuilder::optimizeAdd(unsigned Opc,
+                                               ArrayRef<DstOp> DstOps,
+                                               ArrayRef<SrcOp> SrcOps,
+                                               std::optional<unsigned> Flag) {
+  assert(SrcOps.size() == 2 && "Invalid sources");
+  assert(DstOps.size() == 1 && "Invalid dsts");
+
+  LLT DstTy = DstOps[0].getLLTTy(*getMRI());
+
+  if (isUndef(SrcOps[1].getReg()) || isUndef(SrcOps[0].getReg()))
+    return buildUndef(DstTy);
+
+  std::optional<GIConstant> RHS =
+      GIConstant::getConstant(SrcOps[1].getReg(), *getMRI());
+  if (!RHS)
+    return CSEMIRBuilder::buildInstr(Opc, DstOps, SrcOps, Flag);
+
+  if (RHS->isZero())
+    return buildCopy(DstOps[0], SrcOps[0]);
+
+  if (!isConstantLegalOrBeforeLegalizer(DstTy))
+    return CSEMIRBuilder::buildInstr(Opc, DstOps, SrcOps, Flag);
+
+  std::optional<GIConstant> LHS =
+      GIConstant::getConstant(SrcOps[0].getReg(), *getMRI());
+  if (!LHS)
+    return CSEMIRBuilder::buildInstr(Opc, DstOps, SrcOps, Flag);
+
+  GIConstant Add = LHS->add(*RHS);
+
+  return buildGIConstant(DstOps[0], Add);
+}
+
+MachineInstrBuilder OptMIRBuilder::optimizeSub(unsigned Opc,
+                                               ArrayRef<DstOp> DstOps,
+                                               ArrayRef<SrcOp> SrcOps,
+                                               std::optional<unsigned> Flag) {
+  assert(SrcOps.size() == 2 && "Invalid sources");
+  assert(DstOps.size() == 1 && "Invalid dsts");
+
+  LLT DstTy = DstOps[0].getLLTTy(*getMRI());
+
+  if (isUndef(SrcOps[1].getReg()) || isUndef(SrcOps[0].getReg()))
+    return buildUndef(DstTy);
+
+  std::optional<GIConstant> RHS =
+      GIConstant::getConstant(SrcOps[1].getReg(), *getMRI());
+  if (!RHS)
+    return CSEMIRBuilder::buildInstr(Opc, DstOps, SrcOps, Flag);
+
+  if (RHS->isZero())
+    return buildCopy(DstOps[0], SrcOps[0]);
+
+  if (!isConstantLegalOrBeforeLegalizer(DstTy))
+    return CSEMIRBuilder::buildInstr(Opc, DstOps, SrcOps, Flag);
+
+  std::optional<GIConstant> LHS =
+      GIConstant::getConstant(SrcOps[0].getReg(), *getMRI());
+  if (!LHS)
+    return CSEMIRBuilder::buildInstr(Opc, DstOps, SrcOps, Flag);
+
+  GIConstant Sub = LHS->sub(*RHS);
+
+  return buildGIConstant(DstOps[0], Sub);
+}
+
+MachineInstrBuilder OptMIRBuilder::optimizeMul(unsigned Opc,
+                                               ArrayRef<DstOp> DstOps,
+                                               ArrayRef<SrcOp> SrcOps,
+                                               std::optional<unsigned> Flag) {
+  assert(SrcOps.size() == 2 && "Invalid sources");
+  assert(DstOps.size() == 1 && "Invalid dsts");
+
+  LLT DstTy = DstOps[0].getLLTTy(*getMRI());
+
+  if ((isUndef(SrcOps[1].getReg()) || isUndef(SrcOps[0].getReg())) &&
+      isConstantLegalOrBeforeLegalizer(DstTy))
+    return buildConstant(DstTy, 0);
+
+  std::optional<GIConstant> RHS =
+      GIConstant::getConstant(SrcOps[1].getReg(), *getMRI());
+  if (!RHS)
+    return CSEMIRBuilder::buildInstr(Opc, DstOps, SrcOps, Flag);
+
+  if (RHS->isZero() && isConstantLegalOrBeforeLegalizer(DstTy))
+    return buildConstant(DstTy, 0);
+
+  if (RHS->isOne())
+    return buildCopy(DstOps[0], SrcOps[0]);
+
+  if (RHS->isAllOnes() && isConstantLegalOrBeforeLegalizer(DstTy) &&
+      isLegalOrBeforeLegalizer({TargetOpcode::G_SUB, {DstTy}}))
+    return buildNegation(DstOps[0], SrcOps[0]);
+
+  if (RHS->isTwo() && isLegalOrBeforeLegalizer({TargetOpcode::G_ADD, {DstTy}}))
+    return buildAdd(DstOps[0], SrcOps[0], SrcOps[0]);
+
+  if (!isConstantLegalOrBeforeLegalizer(DstTy))
+    return CSEMIRBuilder::buildInstr(Opc, DstOps, SrcOps, Flag);
+
+  std::optional<GIConstant> LHS =
+      GIConstant::getConstant(SrcOps[0].getReg(), *getMRI());
+  if (!LHS)
+    return CSEMIRBuilder::buildInstr(Opc, DstOps, SrcOps, Flag);
+
+  GIConstant Mul = LHS->mul(*RHS);
+
+  return buildGIConstant(DstOps[0], Mul);
+}
+
+MachineInstrBuilder OptMIRBuilder::buildInstr(unsigned Opc,
+                                              ArrayRef<DstOp> DstOps,
+                                              ArrayRef<SrcOp> SrcOps,
+                                              std::optional<unsigned> Flag) {
+  switch (Opc) {
+  case TargetOpcode::G_ADD:
+    return optimizeAdd(Opc, DstOps, SrcOps, Flag);
+  case TargetOpcode::G_SUB:
+    return optimizeSub(Opc, DstOps, SrcOps, Flag);
+  case TargetOpcode::G_MUL:
+    return optimizeMul(Opc, DstOps, SrcOps, Flag);
+  default:
+    return CSEMIRBuilder::buildInstr(Opc, DstOps, SrcOps, Flag);
+  }
+}
diff --git a/llvm/lib/CodeGen/GlobalISel/Utils.cpp b/llvm/lib/CodeGen/GlobalISel/Utils.cpp
index 223d69c362185..bf4aa11cfa56b 100644
--- a/llvm/lib/CodeGen/GlobalISel/Utils.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/Utils.cpp
@@ -1997,6 +1997,19 @@ APInt llvm::GIConstant::getScalarValue() const {
   return Value;
 }
 
+APInt llvm::GIConstant::getSplatValue() const {
+  assert(Kind == GIConstantKind::ScalableVector &&
+         "Expected scalable constant");
+
+  return Value;
+}
+
+ArrayRef<APInt> llvm::GIConstant::getAsArrayRef() const {
+  assert(Kind == GIConstantKind::FixedVector &&
+         "Expected fixed vector constant");
+  return Values;
+}
+
 std::optional<GIConstant>
 llvm::GIConstant::getConstant(Register Const, const MachineRegisterInfo &MRI) {
   MachineInstr *Constant = getDefIgnoringCopies(Const, MRI);
@@ -2031,6 +2044,117 @@ llvm::GIConstant::getConstant(Register Const, const MachineRegisterInfo &MRI) {
   return GIConstant(MayBeConstant->Value, GIConstantKind::Scalar);
 }
 
+/// Returns a new constant where add(this, x) was applied.
+GIConstant llvm::GIConstant::add(const GIConstant &RHS) const {
+  switch (getKind()) {
+  case GIConstantKind::ScalableVector:
+    return GIConstant(Value + RHS.Value, GIConstantKind::ScalableVector);
+  case GIConstantKind::Scalar: {
+    return GIConstant(Value + RHS.Value, GIConstantKind::Scalar);
+  }
+  case GIConstantKind::FixedVector: {
+    SmallVector<APInt> Adds;
+    for (unsigned I = 0, E = Values.size(); I < E; ++I)
+      Adds.push_back(Values[I] + RHS.Values[I]);
+    return GIConstant(Adds);
+  }
+  }
+}
+
+/// Returns a new constant where sub(this, x) was applied.
+GIConstant llvm::GIConstant::sub(const GIConstant &RHS) const {
+  switch (getKind()) {
+  case GIConstantKind::ScalableVector:
+    return GIConstant(Value - RHS.Value, GIConstantKind::ScalableVector);
+  case GIConstantKind::Scalar: {
+    return GIConstant(Value - RHS.Value, GIConstantKind::Scalar);
+  }
+  case GIConstantKind::FixedVector: {
+    SmallVector<APInt> Subs;
+    for (unsigned I = 0, E = Values.size(); I < E; ++I)
+      Subs.push_back(Values[I] - RHS.Values[I]);
+    return GIConstant(Subs);
+  }
+  }
+}
+
+/// Returns a new constant where mul(this, x) was applied.
+GIConstant llvm::GIConstant::mul(const GIConstant &RHS) const {
+  switch (getKind()) {
+  case GIConstantKind::ScalableVector:
+    return GIConstant(Value * RHS.Value, GIConstantKind::ScalableVector);
+  case GIConstantKind::Scalar: {
+    return GIConstant(Value * RHS.Value, GIConstantKind::Scalar);
+  }
+  case GIConstantKind::FixedVector: {
+    SmallVector<APInt> Muls;
+    for (unsigned I = 0, E = Values.size(); I < E; ++I)
+      Muls.push_back(Values[I] * RHS.Values[I]);
+    return GIConstant(Muls);
+  }
+  }
+}
+
+bool llvm::GIConstant::isZero() const {
+  switch (Kind) {
+  case GIConstantKind::Scalar:
+    return Value.isZero();
+  case GIConstantKind::ScalableVector:
+    return Value.isZero();
+  case GIConstantKind::FixedVector: {
+    for (const APInt &V : Values)
+      if (!V.isZero())
+        return false;
+    return true;
+  }
+  }
+}
+
+bool llvm::GIConstant::isOne() const {
+  switch (Kind) {
+  case GIConstantKind::Scalar:
+    return Value.isOne();
+  case GIConstantKind::ScalableVector:
+    return Value.isOne();
+  case GIConstantKind::FixedVector: {
+    for (const APInt &V : Values)
+      if (!V.isOne())
+        return false;
+    return true;
+  }
+  }
+}
+
+bool llvm::GIConstant::isTwo() const {
+  switch (Kind) {
+  case GIConstantKind::Scalar:
+    return Value.getLimitedValue() == 2;
+  case GIConstantKind::ScalableVector:
+    return Value.getLimitedValue() == 2;
+  case GIConstantKind::FixedVector: {
+    for (const APInt &V : Values)
+      if (V.getLimitedValue() != 2)
+        return false;
+    return true;
+  }
+  }
+}
+
+bool llvm::GIConstant::isAllOnes() const {
+  switch (Kind) {
+  case GIConstantKind::Scalar:
+    return Value.isAllOnes();
+  case GIConstantKind::ScalableVector:
+    return Value.isAllOnes();
+  case GIConstantKind::FixedVector: {
+    for (const APInt &V : Values)
+      if (!V.isAllOnes())
+        return false;
+    return true;
+  }
+  }
+}
+
 APFloat llvm::GFConstant::getScalarValue() const {
   assert(Kind == GFConstantKind::Scalar && "Expected scalar constant");
 
diff --git a/llvm/lib/Target/AArch64/GISel/AArch64PostLegalizerCombiner.cpp b/llvm/lib/Target/AArch64/GISel/AArch64PostLegalizerCombiner.cpp
index d4a14f8756304..bae1c9517e9c0 100644
--- a/llvm/lib/Target/AArch64/GISel/AArch64PostLegalizerCombiner.cpp
+++ b/llvm/lib/Target/AArch64/GISel/AArch64PostLegalizerCombiner.cpp
@@ -32,6 +32,7 @@
 #include "llvm/CodeGen/GlobalISel/GenericMachineInstrs.h"
 #include "llvm/CodeGen/GlobalISel/MIPatternMatch.h"
 #include "llvm/CodeGen/GlobalISel/MachineIRBuilder.h"
+#include "llvm/CodeGen/GlobalISel/OptMIRBuilder.h"
 #include "llvm/CodeGen/GlobalISel/Utils.h"
 #include "llvm/CodeGen/MachineDominators.h"
 #include "llvm/CodeGen/MachineFunctionPass.h"
@@ -440,6 +441,9 @@ void applyCombineMulCMLT(MachineInstr &MI, MachineRegisterInfo &MRI,
 
 class AArch64PostLegalizerCombinerImpl : public Combiner {
 protected:
+  std::unique_ptr<MachineIRBuilder> Opt;
+  // hides Combiner::B
+  MachineIRBuilder &B;
   const CombinerHelper Helper;
   const AArch64PostLegalizerCombinerImplRuleConfig &RuleConfig;
   const AArch64Subtarget &STI;
@@ -472,8 +476,10 @@ AArch64PostLegalizerCombinerImpl::AArch64PostLegalizerCombinerImpl(
     const AArch64PostLegalizerCombinerImplRuleConfig &RuleConfig,
     const AArch64Subtarget &STI, MachineDominatorTree *MDT,
     const LegalizerInfo *LI)
-    : Combiner(MF, CInfo, TPC, &VT, CSEInfo),
-      Helper(Observer, B, /*IsPreLegalize*/ false, &VT, MDT, LI),
+    : Combiner(MF, CInfo, TPC, &KB, CSEInfo),
+      Opt(std::make_unique<OptMIRBuilder>(MF, CSEInfo, Observer, LI,
+                                          /*IsPrelegalize=*/false)),
+      B(*Opt), Helper(Observer, B, /*IsPreLegalize*/ false, &VT, MDT, LI),
       RuleConfig(RuleConfig), STI(STI),
 #define GET_GICOMBINER_CONSTRUCTOR_INITS
 #include "AArch64GenPostLegalizeGICombiner.inc"
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll b/llvm/test/CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll
index d457418720f47..5da4e8aae8c4b 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll
@@ -462,7 +462,9 @@ next:
 ; CHECK-LABEL: name: constant_int_start
 ; CHECK: [[TWO:%[0-9]+]]:_(s32) = G_CONSTANT i32 2
 ; CHECK: [[ANSWER:%[0-9]+]]:_(s32) = G_CONSTANT i32 42
-; CHECK: [[RES:%[0-9]+]]:_(s32) = G_CONSTANT i32 44
+; CHECK: [[ADD:%[0-9]+]]:_(s32) = G_ADD [[TWO]], [[ANSWER]]
+; CHECK: $w0 = COPY [[ADD]]
+; CHECK-NEXT: RET_ReallyLR implicit $w0
 define i32 @constant_int_start() {
   %res = add i32 2, 42
   ret i32 %res
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/combine-udiv.ll b/llvm/test/CodeGen/AArch64/GlobalISel/combine-udiv.ll
index b681e3b223117..3a34891bc37d3 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/combine-udiv.ll
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/combine-udiv.ll
@@ -19,13 +19,18 @@ define <8 x i16> @combine_vec_udiv_uniform(<8 x i16> %x) {
 ; GISEL-LABEL: combine_vec_udiv_uniform:
 ; GISEL:       // %bb.0:
 ; GISEL-NEXT:    adrp x8, .LCPI0_0
+; GISEL-NEXT:    movi v3.8h, #15
+; GISEL-NEXT:    movi v4.8h, #16
 ; GISEL-NEXT:    ldr q1, [x8, :lo12:.LCPI0_0]
 ; GISEL-NEXT:    umull2 v2.4s, v0.8h, v1.8h
 ; GISEL-NEXT:    umull v1.4s, v0.4h, v1.4h
 ; GISEL-NEXT:    uzp2 v1.8h, v1.8h, v2.8h
+; GISEL-NEXT:    sub v2.8h, v4.8h, v3.8h
+; GISEL-NEXT:    neg v2.8h, v2.8h
 ; GISEL-NEXT:    sub v0.8h, v0.8h, v1.8h
-; GISEL-NEXT:    usra v1.8h, v0.8h, #1
-; GISEL-NEXT:    ushr v0.8h, v1.8h, #4
+; GISEL-NEXT:    ushl v0.8h, v0.8h, v2.8h
+; GISEL-NEXT:    add v0.8h, v0.8h, v1.8h
+; GISEL-NEXT:    ushr v0.8h, v0.8h, #4
 ; GISEL-NEXT:    ret
   %1 = udiv <8 x i16> %x, <i16 23, i16 23, i16 23, i16 23, i16 23, i16 23, i16 23, i16 23>
   ret <8 x i16> %1
@@ -135,16 +140,21 @@ define <8 x i16> @combine_vec_udiv_nonuniform3(<8 x i16> %x) {
 ; GISEL-LABEL: combine_vec_udiv_nonuniform3:
 ; GISEL:       // %bb.0:
 ; GISEL-NEXT:    adrp x8, .LCPI3_1
+; GISEL-NEXT:    movi v3.8h, #15
+; GISEL-NEXT:    movi v4.8h, #16
 ; GISEL-NEXT:    ldr q1, [x8, :lo12:.LCPI3_1]
 ; GISEL-NEXT:    adrp x8, .LCPI3_0
 ; GISEL-NEXT:    umull2 v2.4s, v0.8h, v1.8h
 ; GISEL-NEXT:    umull v1.4s, v0.4h, v1.4h
 ; GISEL-NEXT:    uzp2 v1.8h, v1.8h, v2.8h
-; GISEL-NEXT:    ldr q2, [x8, :lo12:.LCPI3_0]
+; GISEL-NEXT:    sub v2.8h, v4.8h, v3.8h
+; GISEL-NEXT:    neg v2.8h, v2.8h
 ; GISEL-NEXT:    sub v0.8h, v0.8h, v1.8h
-; GISEL-NEXT:    usra v1.8h, v0.8h, #1
-; GISEL-NEXT:    neg v0.8h, v2.8h
-; GISEL-NEXT:    ushl v0.8h, v1.8h, v0.8h
+; GISEL-NEXT:    ushl v0.8h, v0.8h, v2.8h
+; GISEL-NEXT:    ldr q2, [x8, :lo12:.LCPI3_0]
+; GISEL-NEXT:    add v0.8h, v0.8h, v1.8h
+; GISEL-NEXT:    neg v1.8h, v2.8h
+; GISEL-NEXT:    ushl v0.8h, v0.8h, v1.8h
 ; GISEL-NEXT:    ret
   %1 = udiv <8 x i16> %x, <i16 7, i16 23, i16 25, i16 27, i16 31, i16 47, i16 63, i16 127>
   ret <8 x i16> %1
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/combine-udiv.mir b/llvm/test/CodeGen/AArch64/GlobalISel/combine-udiv.mir
index f8578a694e2d4..096836d062dd6 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/combine-udiv.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/combine-udiv.mir
@@ -41,9 +41,12 @@ body:             |
     ; CHECK-NEXT: [[BUILD_VECTOR1:%[0-9]+]]:_(<8 x s16>) = G_BUILD_VECTOR [[C1]](s16), [[C1]](s16), [[C1]](s16), [[C1]](s16), [[C1]](s16), [[C1]](s16), [[C1]](s16), [[C1]](s16)
     ; CHECK-NEXT: [[UMULH:%[0-9]+]]:_(<8 x s16>) = G_UMULH [[COPY]], [[BUILD_VECTOR]]
     ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(<8 x s16>) = G_SUB [[COPY]], [[UMULH]]
-    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s16) = G_CONSTANT i16 1
+    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s16) = G_CONSTANT i16 15
     ; CHECK-NEXT: [[BUILD_VECTOR2:%[0-9]+]]:_(<8 x s16>) = G_BUILD_VECTOR [[C2]](s16), [[C2]](s16), [[C2]](s16), [[C2]](s16), [[C2]](s16), [[C2]](s16), [[C2]](s16), [[C2]](s16)
-    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(<8 x s16>) = G_LSHR [[SUB]], [[BUILD_VECTOR2]](<8 x s16>)
+    ; CHECK-NEXT: [[C3:%[0-9]+]]:_(s16) = G_CONSTANT i16 16
+    ; CHECK-NEXT: [[BUILD_VECTOR3:%[0-9]+]]:_(<8 x s16>) = G_BUILD_VECTOR [[C3]](s16), [[C3]](s16), [[C3]](s16), [[C3]](s16), [[C3]](s16), [[C3]](s16), [[C3]](s16), [[C3]](s16)
+    ; CHECK-NEXT: [[SUB1:%[0-9]+]]:_(<8 x s16>) = G_SUB [[BUILD_VECTOR3]], [[BUILD_VECTOR2]]
+    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(<8 x s16>) = G_LSHR [[SUB]], [[SUB1]](<8 x s16>)
     ; CHECK-NEXT: [[ADD:%[0-9]+]]:_(<8 x s16>) = G_ADD [[LSHR]], [[UMULH]]
     ; CHECK-NEXT: [[LSHR1:%[0-9]+]]:_(<8 x s16>) = G_LSHR [[ADD]], [[BUILD_VECTOR1]](<8 x s16>)
     ; CHECK-NEXT: $q0 = COPY [[LSHR1]](<8 x s16>)
@@ -192,9 +195,12 @@ body:             |
     ; CHECK-NEXT: [[BUILD_VECTOR1:%[0-9]+]]:_(<8 x s16>) = G_BUILD_VECTOR [[C1]](s16), [[C3]](s16), [[C3]](s16), [[C3]](s16), [[C3]](s16), [[C8]](s16), [[C8]](s16), [[C11]](s16)
     ; CHECK-NEXT: [[UMULH:%[0-9]+]]:_(<8 x s16>) = G_UMULH [[COPY]], [[BUILD_VECTOR]]
     ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(<8 x s16>) = G_SUB [[COPY]], [[UMULH]]
-    ; CHECK-NEXT: [[C12:%[0-9]+]]:_(s16) = G_CONSTANT i16 1
+    ; CHECK-NEXT: [[C12:%[0-9]+]]:_(s16) = G_CONSTANT i16 15
     ; CHECK-NEXT: [[BUILD_VECTOR2:%[0-9]+]]:_(<8 x s16>) = G_BUILD_VECTOR [[C12]](s16), [[C12]](s16), [[C12]](s16), [[C12]](s16), [[C12]](s16), [[C12]](s16), [[C12]](s16), [[C12]](s16)
-    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(<8 x s16>) = G_LSHR [[SUB]], [[BUILD_VECTOR2]](<8 x s16>)
+    ; CHECK-NEXT: [[C13:%[0-9]+]]:_(s16) = G_CONSTANT i16 16
+    ; CHECK-NEXT: [[BUILD_VECTOR3:%[0-9]+]]:_(<8 x s16>) = G_BUILD_VECTOR [[C13]](s16), [[C13]](s16), [[C13]](s16), [[C13]](s16), [[C13]](s16), [[C13]](s16), [[C13]](s16), [[C13]](s16)
+    ; CHECK-NEXT: [[SUB1:%[0-9]+]]:_(<8 x s16>) = G_SUB [[BUILD_VECTOR3]], [[BUILD_VECTOR2]]
+    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(<8 x s16>) = G_LSHR [[SUB]], [[SUB1]](<8 x s16>)
     ; CHECK-NEXT: [[ADD:%[0-9]+]]:_(<8 x s16>) = G_ADD [[LSHR]], [[UMULH]]
     ; CHECK-NEXT: [[LSHR1:%[0-9]+]]:_(<8 x s16>) = G_LSHR [[ADD]], [[BUILD_VECTOR1]](<8 x s16>)
     ; CHECK-NEXT: $q0 = COPY [[LSHR1]](<8 x s16>)
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/combine-umulh-to-lshr.mir b/llvm/test/CodeGen/AArch64/GlobalISel/combine-umulh-to-lshr.mir
index 3ff72219810fb..d5fe354719908 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/combine-umulh-to-lshr.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/combine-umulh-to-lshr.mir
@@ -37,9 +37,15 @@ body:             |
     ; CHECK: liveins: $q0
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(<4 x s32>) = COPY $q0
-    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 29
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 28
     ; CHECK-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<4 x s32>) = G_BUILD_VECTOR [[C]](s32), [[C]](s32), [[C]](s32), [[C]](s32)
-    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(<4 x s32>) = G_LSHR [[COPY]], [[BUILD_VECTOR]](<4 x s32>)
+    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 31
+    ; CHECK-NEXT: [[BUILD_VECTOR1:%[0-9]+]]:_(<4 x s32>) = G_BUILD_VECTOR [[C1]](s32), [[C1]](s32), [[C1]](s32), [[C1]](s32)
+    ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(<4 x s32>) = G_SUB [[BUILD_VECTOR1]], [[BUILD_VECTOR]]
+    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s32) = G_CONSTANT i32 32
+    ; CHECK-NEXT: [[BUILD_VECTOR2:%[0-9]+]]:_(<4 x s32>) = G_BUILD_VECTOR [[C2]](s32), [[C2]](s32), [[C2]](s32), [[C2]](s32)
+    ; CHECK-NEXT: [[SUB1:%[0-9]+]]:_(<4 x s32>) = G_SUB [[BUILD_VECTOR2]], [[SUB]]
+    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(<4 x s32>) = G_LSHR [[COPY]], [[SUB1]](<4 x s32>)
     ; CHECK-NEXT: $q0 = COPY [[LSHR]](<4 x s32>)
     %0:_(<4 x s32>) = COPY $q0
     %1:_(s32) = G_CONSTANT i32 8
@@ -107,12 +113,18 @@ body:             |
     ; CHECK: liveins: $q0
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(<4 x s32>) = COPY $q0
+    ; CHECK-NEXT: %cst3:_(s32) = G_CONSTANT i32 32
     ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 28
     ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 27
     ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s32) = G_CONSTANT i32 26
-    ; CHECK-NEXT: [[C3:%[0-9]+]]:_(s32) = G_CONSTANT i32 29
-    ; CHECK-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<4 x s32>) = G_BUILD_VECTOR [[C3]](s32), [[C]](s32), [[C1]](s32), [[C2]](s32)
-    ; CHECK-NEXT: %mulh:_(<4 x s32>) = G_LSHR [[COPY]], [[BUILD_VECTOR]](<4 x s32>)
+    ; CHECK-NEXT: [[C3:%[0-9]+]]:_(s32) = G_CONSTANT i32 25
+    ; CHECK-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<4 x s32>) = G_BUILD_VECTOR [[C]](s32), [[C1]](s32), [[C2]](s32), [[C3]](s32)
+    ; CHECK-NEXT: [[C4:%[0-9]+]]:_(s32) = G_CONSTANT i32 31
+    ; CHECK-NEXT: [[BUILD_VECTOR1:%[0-9]+]]:_(<4 x s32>) = G_BUILD_VECTOR [[C4]](s32), [[C4]](s32), [[C4]](s32), [[C4]](s32)
+    ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(<4 x s32>) = G_SUB [[BUILD_VECTOR1]], [[BUILD_VECTOR]]
+    ; CHECK-NEXT: [[BUILD_VECTOR2:%[0-9]+]]:_(<4 x s32>) = G_BUILD_VECTOR %cst3(s32), %cst3(s32), %cst3(s32), %cst3(s32)
+    ; CHECK-NEXT: [[SUB1:%[0-9]+]]:_(<4 x s32>) = G_SUB [[BUILD_VECTOR2]], [[SUB]]
+    ; CHECK-NEXT: %mulh:_(<4 x s32>) = G_LSHR [[COPY]], [[SUB1]](<4 x s32>)
     ; CHECK-NEXT: $q0 = COPY %mulh(<4 x s32>)
     %0:_(<4 x s32>) = COPY $q0
     %cst1:_(s32) = G_CONSTANT i32 8
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-fshl.mir b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-fshl.mir
index 1e549fa9a833a..9abdcc954b6c1 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-fshl.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-fshl.mir
@@ -170,11 +170,14 @@ body:             |
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $w0
     ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $w1
-    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 4
-    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[C]](s64)
-    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 255
-    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C1]]
-    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[C]](s64)
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 8
+    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 4
+    ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(s32) = G_SUB [[C]], [[C1]]
+    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 4
+    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[C2]](s64)
+    ; CHECK-NEXT: [[C3:%[0-9]+]]:_(s32) = G_CONSTANT i32 255
+    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C3]]
+    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[SUB]](s32)
     ; CHECK-NEXT: [[OR:%[0-9]+]]:_(s32) = disjoint G_OR [[SHL]], [[LSHR]]
     ; CHECK-NEXT: $w0 = COPY [[OR]](s32)
     ; CHECK-NEXT: RET_ReallyLR implicit $w0
@@ -203,12 +206,14 @@ body:             |
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $w0
     ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $w1
-    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 2
-    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[C]](s64)
-    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 255
-    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C1]]
-    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 6
-    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[C2]](s64)
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 8
+    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 2
+    ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(s32) = G_SUB [[C]], [[C1]]
+    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 2
+    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[C2]](s64)
+    ; CHECK-NEXT: [[C3:%[0-9]+]]:_(s32) = G_CONSTANT i32 255
+    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C3]]
+    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[SUB]](s32)
     ; CHECK-NEXT: [[OR:%[0-9]+]]:_(s32) = disjoint G_OR [[SHL]], [[LSHR]]
     ; CHECK-NEXT: $w0 = COPY [[OR]](s32)
     ; CHECK-NEXT: RET_ReallyLR implicit $w0
@@ -274,12 +279,14 @@ body:             |
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $w0
     ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $w1
-    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 12
-    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[C]](s64)
-    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 65535
-    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C1]]
-    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 4
-    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[C2]](s64)
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 16
+    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 12
+    ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(s32) = G_SUB [[C]], [[C1]]
+    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 12
+    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[C2]](s64)
+    ; CHECK-NEXT: [[C3:%[0-9]+]]:_(s32) = G_CONSTANT i32 65535
+    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C3]]
+    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[SUB]](s32)
     ; CHECK-NEXT: [[OR:%[0-9]+]]:_(s32) = disjoint G_OR [[SHL]], [[LSHR]]
     ; CHECK-NEXT: $w0 = COPY [[OR]](s32)
     ; CHECK-NEXT: RET_ReallyLR implicit $w0
@@ -308,12 +315,14 @@ body:             |
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $w0
     ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $w1
-    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 4
-    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[C]](s64)
-    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 65535
-    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C1]]
-    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 12
-    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[C2]](s64)
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 16
+    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 4
+    ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(s32) = G_SUB [[C]], [[C1]]
+    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 4
+    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[C2]](s64)
+    ; CHECK-NEXT: [[C3:%[0-9]+]]:_(s32) = G_CONSTANT i32 65535
+    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C3]]
+    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[SUB]](s32)
     ; CHECK-NEXT: [[OR:%[0-9]+]]:_(s32) = disjoint G_OR [[SHL]], [[LSHR]]
     ; CHECK-NEXT: $w0 = COPY [[OR]](s32)
     ; CHECK-NEXT: RET_ReallyLR implicit $w0
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-fshr.mir b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-fshr.mir
index 9a9e5bb42c64f..20fbb0d8476f7 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-fshr.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-fshr.mir
@@ -169,12 +169,14 @@ body:             |
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $w0
     ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $w1
-    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 1
-    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[C]](s64)
-    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 255
-    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C1]]
-    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 7
-    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[C2]](s64)
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 8
+    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 7
+    ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(s32) = G_SUB [[C]], [[C1]]
+    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[SUB]](s32)
+    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s32) = G_CONSTANT i32 255
+    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C2]]
+    ; CHECK-NEXT: [[C3:%[0-9]+]]:_(s64) = G_CONSTANT i64 7
+    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[C3]](s64)
     ; CHECK-NEXT: [[OR:%[0-9]+]]:_(s32) = disjoint G_OR [[SHL]], [[LSHR]]
     ; CHECK-NEXT: $w0 = COPY [[OR]](s32)
     ; CHECK-NEXT: RET_ReallyLR implicit $w0
@@ -203,12 +205,14 @@ body:             |
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $w0
     ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $w1
-    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 6
-    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[C]](s64)
-    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 255
-    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C1]]
-    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 2
-    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[C2]](s64)
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 8
+    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 2
+    ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(s32) = G_SUB [[C]], [[C1]]
+    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[SUB]](s32)
+    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s32) = G_CONSTANT i32 255
+    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C2]]
+    ; CHECK-NEXT: [[C3:%[0-9]+]]:_(s64) = G_CONSTANT i64 2
+    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[C3]](s64)
     ; CHECK-NEXT: [[OR:%[0-9]+]]:_(s32) = disjoint G_OR [[SHL]], [[LSHR]]
     ; CHECK-NEXT: $w0 = COPY [[OR]](s32)
     ; CHECK-NEXT: RET_ReallyLR implicit $w0
@@ -273,12 +277,14 @@ body:             |
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $w0
     ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $w1
-    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 11
-    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[C]](s64)
-    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 65535
-    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C1]]
-    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 5
-    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[C2]](s64)
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 16
+    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 5
+    ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(s32) = G_SUB [[C]], [[C1]]
+    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[SUB]](s32)
+    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s32) = G_CONSTANT i32 65535
+    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C2]]
+    ; CHECK-NEXT: [[C3:%[0-9]+]]:_(s64) = G_CONSTANT i64 5
+    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[C3]](s64)
     ; CHECK-NEXT: [[OR:%[0-9]+]]:_(s32) = disjoint G_OR [[SHL]], [[LSHR]]
     ; CHECK-NEXT: $w0 = COPY [[OR]](s32)
     ; CHECK-NEXT: RET_ReallyLR implicit $w0
@@ -307,12 +313,14 @@ body:             |
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $w0
     ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $w1
-    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 12
-    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[C]](s64)
-    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 65535
-    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C1]]
-    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 4
-    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[C2]](s64)
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 16
+    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 4
+    ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(s32) = G_SUB [[C]], [[C1]]
+    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[SUB]](s32)
+    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s32) = G_CONSTANT i32 65535
+    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C2]]
+    ; CHECK-NEXT: [[C3:%[0-9]+]]:_(s64) = G_CONSTANT i64 4
+    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[C3]](s64)
     ; CHECK-NEXT: [[OR:%[0-9]+]]:_(s32) = disjoint G_OR [[SHL]], [[LSHR]]
     ; CHECK-NEXT: $w0 = COPY [[OR]](s32)
     ; CHECK-NEXT: RET_ReallyLR implicit $w0
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/opt-mir-builder.mir b/llvm/test/CodeGen/AArch64/GlobalISel/opt-mir-builder.mir
new file mode 100644
index 0000000000000..e9f31fdc7edb3
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/opt-mir-builder.mir
@@ -0,0 +1,166 @@
+# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py
+# RUN: llc -mtriple aarch64 -run-pass=aarch64-postlegalizer-combiner -verify-machineinstrs %s -o - | FileCheck %s
+
+---
+name:            add_undef
+legalized: true
+body:             |
+  bb.0:
+    liveins: $w0, $w1
+
+    ; CHECK-LABEL: name: add_undef
+    ; CHECK: liveins: $w0, $w1
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: %undef:_(s32) = G_IMPLICIT_DEF
+    ; CHECK-NEXT: $w0 = COPY %undef(s32)
+    ; CHECK-NEXT: RET_ReallyLR implicit $w0
+    %x:_(s32) = COPY $w0
+    %undef:_(s32) = G_IMPLICIT_DEF
+    %add:_(s32) = G_ADD %x, %undef
+    $w0 = COPY %add
+    RET_ReallyLR implicit $w0
+...
+---
+name:            add_const
+legalized: true
+body:             |
+  bb.0:
+    liveins: $w0, $w1
+
+    ; CHECK-LABEL: name: add_const
+    ; CHECK: liveins: $w0, $w1
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: %x:_(s32) = COPY $w0
+    ; CHECK-NEXT: %add:_(s32) = G_CONSTANT i32 8
+    ; CHECK-NEXT: $w0 = COPY %add(s32)
+    ; CHECK-NEXT: RET_ReallyLR implicit $w0
+    %x:_(s32) = COPY $w0
+    %four:_(s32) = G_CONSTANT i32 4
+    %add:_(s32) = G_ADD %four, %four
+    $w0 = COPY %add
+    RET_ReallyLR implicit $w0
+...
+---
+name:            sub_undef
+legalized: true
+body:             |
+  bb.0:
+    liveins: $w0, $w1
+
+    ; CHECK-LABEL: name: sub_undef
+    ; CHECK: liveins: $w0, $w1
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: %undef:_(s32) = G_IMPLICIT_DEF
+    ; CHECK-NEXT: $w0 = COPY %undef(s32)
+    ; CHECK-NEXT: RET_ReallyLR implicit $w0
+    %x:_(s32) = COPY $w0
+    %undef:_(s32) = G_IMPLICIT_DEF
+    %sub:_(s32) = G_SUB %x, %undef
+    $w0 = COPY %sub
+    RET_ReallyLR implicit $w0
+...
+---
+name:            mul_undef
+legalized: true
+body:             |
+  bb.0:
+    liveins: $w0, $w1
+
+    ; CHECK-LABEL: name: mul_undef
+    ; CHECK: liveins: $w0, $w1
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: %mul:_(s32) = G_CONSTANT i32 0
+    ; CHECK-NEXT: $w0 = COPY %mul(s32)
+    ; CHECK-NEXT: RET_ReallyLR implicit $w0
+    %x:_(s32) = COPY $w0
+    %undef:_(s32) = G_IMPLICIT_DEF
+    %mul:_(s32) = G_MUL %x, %undef
+    $w0 = COPY %mul
+    RET_ReallyLR implicit $w0
+...
+---
+name:            mul_0
+legalized: true
+body:             |
+  bb.0:
+    liveins: $w0, $w1
+
+    ; CHECK-LABEL: name: mul_0
+    ; CHECK: liveins: $w0, $w1
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: %x:_(s32) = COPY $w0
+    ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(s32) = G_SUB %x, %x
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 32
+    ; CHECK-NEXT: %mul:_(s32) = G_SHL [[SUB]], [[C]](s64)
+    ; CHECK-NEXT: $w0 = COPY %mul(s32)
+    ; CHECK-NEXT: RET_ReallyLR implicit $w0
+    %x:_(s32) = COPY $w0
+    %zero:_(s32) = G_CONSTANT i32 0
+    %mul:_(s32) = G_MUL %x, %zero
+    $w0 = COPY %mul
+    RET_ReallyLR implicit $w0
+...
+---
+name:            mul_1
+legalized: true
+body:             |
+  bb.0:
+    liveins: $w0, $w1
+
+    ; CHECK-LABEL: name: mul_1
+    ; CHECK: liveins: $w0, $w1
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: %x:_(s32) = COPY $w0
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 1
+    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL %x, [[C]](s64)
+    ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(s32) = G_SUB [[SHL]], %x
+    ; CHECK-NEXT: $w0 = COPY [[SUB]](s32)
+    ; CHECK-NEXT: RET_ReallyLR implicit $w0
+    %x:_(s32) = COPY $w0
+    %one:_(s32) = G_CONSTANT i32 1
+    %mul:_(s32) = G_MUL %x, %one
+    $w0 = COPY %mul
+    RET_ReallyLR implicit $w0
+...
+---
+name:            mul_2
+legalized: true
+body:             |
+  bb.0:
+    liveins: $w0, $w1
+
+    ; CHECK-LABEL: name: mul_2
+    ; CHECK: liveins: $w0, $w1
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: %x:_(s32) = COPY $w0
+    ; CHECK-NEXT: %two:_(s32) = G_CONSTANT i32 2
+    ; CHECK-NEXT: %mul:_(s32) = G_MUL %x, %two
+    ; CHECK-NEXT: $w0 = COPY %mul(s32)
+    ; CHECK-NEXT: RET_ReallyLR implicit $w0
+    %x:_(s32) = COPY $w0
+    %two:_(s32) = G_CONSTANT i32 2
+    %mul:_(s32) = G_MUL %x, %two
+    $w0 = COPY %mul
+    RET_ReallyLR implicit $w0
+...
+---
+name:            mul_const
+legalized: true
+body:             |
+  bb.0:
+    liveins: $w0, $w1
+
+    ; CHECK-LABEL: name: mul_const
+    ; CHECK: liveins: $w0, $w1
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: %x:_(s32) = COPY $w0
+    ; CHECK-NEXT: %mul:_(s32) = G_CONSTANT i32 4
+    ; CHECK-NEXT: $w0 = COPY %mul(s32)
+    ; CHECK-NEXT: RET_ReallyLR implicit $w0
+    %x:_(s32) = COPY $w0
+    %two:_(s32) = G_CONSTANT i32 2
+    %mul:_(s32) = G_MUL %two, %two
+    $w0 = COPY %mul
+    RET_ReallyLR implicit $w0
+
+...
diff --git a/llvm/test/CodeGen/AArch64/arm64-neon-mul-div-cte.ll b/llvm/test/CodeGen/AArch64/arm64-neon-mul-div-cte.ll
index ca6bb8360de59..9425beb41042a 100644
--- a/llvm/test/CodeGen/AArch64/arm64-neon-mul-div-cte.ll
+++ b/llvm/test/CodeGen/AArch64/arm64-neon-mul-div-cte.ll
@@ -179,13 +179,18 @@ define <8 x i16> @udiv8xi16(<8 x i16> %x) {
 ; CHECK-GI-LABEL: udiv8xi16:
 ; CHECK-GI:       // %bb.0:
 ; CHECK-GI-NEXT:    adrp x8, .LCPI4_0
+; CHECK-GI-NEXT:    movi v3.8h, #15
+; CHECK-GI-NEXT:    movi v4.8h, #16
 ; CHECK-GI-NEXT:    ldr q1, [x8, :lo12:.LCPI4_0]
 ; CHECK-GI-NEXT:    umull2 v2.4s, v0.8h, v1.8h
 ; CHECK-GI-NEXT:    umull v1.4s, v0.4h, v1.4h
 ; CHECK-GI-NEXT:    uzp2 v1.8h, v1.8h, v2.8h
+; CHECK-GI-NEXT:    sub v2.8h, v4.8h, v3.8h
+; CHECK-GI-NEXT:    neg v2.8h, v2.8h
 ; CHECK-GI-NEXT:    sub v0.8h, v0.8h, v1.8h
-; CHECK-GI-NEXT:    usra v1.8h, v0.8h, #1
-; CHECK-GI-NEXT:    ushr v0.8h, v1.8h, #12
+; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v2.8h
+; CHECK-GI-NEXT:    add v0.8h, v0.8h, v1.8h
+; CHECK-GI-NEXT:    ushr v0.8h, v0.8h, #12
 ; CHECK-GI-NEXT:    ret
   %div = udiv <8 x i16> %x, <i16 6537, i16 6537, i16 6537, i16 6537, i16 6537, i16 6537, i16 6537, i16 6537>
   ret <8 x i16> %div
@@ -247,11 +252,18 @@ define <2 x i64> @udiv_v2i64(<2 x i64> %a) {
 ; CHECK-GI-NEXT:    movk x8, #9362, lsl #48
 ; CHECK-GI-NEXT:    umulh x9, x9, x8
 ; CHECK-GI-NEXT:    umulh x8, x10, x8
+; CHECK-GI-NEXT:    adrp x10, .LCPI6_0
+; CHECK-GI-NEXT:    ldr q3, [x10, :lo12:.LCPI6_0]
 ; CHECK-GI-NEXT:    mov v1.d[0], x9
+; CHECK-GI-NEXT:    adrp x9, .LCPI6_1
+; CHECK-GI-NEXT:    ldr q2, [x9, :lo12:.LCPI6_1]
+; CHECK-GI-NEXT:    sub v2.2d, v3.2d, v2.2d
 ; CHECK-GI-NEXT:    mov v1.d[1], x8
+; CHECK-GI-NEXT:    neg v2.2d, v2.2d
 ; CHECK-GI-NEXT:    sub v0.2d, v0.2d, v1.2d
-; CHECK-GI-NEXT:    usra v1.2d, v0.2d, #1
-; CHECK-GI-NEXT:    ushr v0.2d, v1.2d, #2
+; CHECK-GI-NEXT:    ushl v0.2d, v0.2d, v2.2d
+; CHECK-GI-NEXT:    add v0.2d, v0.2d, v1.2d
+; CHECK-GI-NEXT:    ushr v0.2d, v0.2d, #2
 ; CHECK-GI-NEXT:    ret
   %r = udiv <2 x i64> %a, splat (i64 7)
   ret <2 x i64> %r
diff --git a/llvm/test/CodeGen/AArch64/fsh.ll b/llvm/test/CodeGen/AArch64/fsh.ll
index c084813760b80..9a05c3be0ce3e 100644
--- a/llvm/test/CodeGen/AArch64/fsh.ll
+++ b/llvm/test/CodeGen/AArch64/fsh.ll
@@ -613,11 +613,24 @@ entry:
 }
 
 define i8 @rotl_i8_c(i8 %a) {
-; CHECK-LABEL: rotl_i8_c:
-; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    ubfx w8, w0, #5, #3
-; CHECK-NEXT:    orr w0, w8, w0, lsl #3
-; CHECK-NEXT:    ret
+; CHECK-SD-LABEL: rotl_i8_c:
+; CHECK-SD:       // %bb.0: // %entry
+; CHECK-SD-NEXT:    ubfx w8, w0, #5, #3
+; CHECK-SD-NEXT:    orr w0, w8, w0, lsl #3
+; CHECK-SD-NEXT:    ret
+;
+; CHECK-GI-LABEL: rotl_i8_c:
+; CHECK-GI:       // %bb.0: // %entry
+; CHECK-GI-NEXT:    mov x8, xzr
+; CHECK-GI-NEXT:    mov w9, #-3 // =0xfffffffd
+; CHECK-GI-NEXT:    and w10, w0, #0xff
+; CHECK-GI-NEXT:    sub x8, x8, w9, uxtb
+; CHECK-GI-NEXT:    and x9, x9, #0x7
+; CHECK-GI-NEXT:    lsr w9, w10, w9
+; CHECK-GI-NEXT:    and x8, x8, #0x7
+; CHECK-GI-NEXT:    lsl w8, w0, w8
+; CHECK-GI-NEXT:    orr w0, w9, w8
+; CHECK-GI-NEXT:    ret
 entry:
   %d = call i8 @llvm.fshl(i8 %a, i8 %a, i8 3)
   ret i8 %d
@@ -642,11 +655,24 @@ entry:
 }
 
 define i16 @rotl_i16_c(i16 %a) {
-; CHECK-LABEL: rotl_i16_c:
-; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    ubfx w8, w0, #13, #3
-; CHECK-NEXT:    orr w0, w8, w0, lsl #3
-; CHECK-NEXT:    ret
+; CHECK-SD-LABEL: rotl_i16_c:
+; CHECK-SD:       // %bb.0: // %entry
+; CHECK-SD-NEXT:    ubfx w8, w0, #13, #3
+; CHECK-SD-NEXT:    orr w0, w8, w0, lsl #3
+; CHECK-SD-NEXT:    ret
+;
+; CHECK-GI-LABEL: rotl_i16_c:
+; CHECK-GI:       // %bb.0: // %entry
+; CHECK-GI-NEXT:    mov x8, xzr
+; CHECK-GI-NEXT:    mov w9, #-3 // =0xfffffffd
+; CHECK-GI-NEXT:    and w10, w0, #0xffff
+; CHECK-GI-NEXT:    sub x8, x8, w9, uxth
+; CHECK-GI-NEXT:    and x9, x9, #0xf
+; CHECK-GI-NEXT:    lsr w9, w10, w9
+; CHECK-GI-NEXT:    and x8, x8, #0xf
+; CHECK-GI-NEXT:    lsl w8, w0, w8
+; CHECK-GI-NEXT:    orr w0, w9, w8
+; CHECK-GI-NEXT:    ret
 entry:
   %d = call i16 @llvm.fshl(i16 %a, i16 %a, i16 3)
   ret i16 %d
@@ -671,10 +697,16 @@ entry:
 }
 
 define i32 @rotl_i32_c(i32 %a) {
-; CHECK-LABEL: rotl_i32_c:
-; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    ror w0, w0, #29
-; CHECK-NEXT:    ret
+; CHECK-SD-LABEL: rotl_i32_c:
+; CHECK-SD:       // %bb.0: // %entry
+; CHECK-SD-NEXT:    ror w0, w0, #29
+; CHECK-SD-NEXT:    ret
+;
+; CHECK-GI-LABEL: rotl_i32_c:
+; CHECK-GI:       // %bb.0: // %entry
+; CHECK-GI-NEXT:    mov w8, #-3 // =0xfffffffd
+; CHECK-GI-NEXT:    ror w0, w0, w8
+; CHECK-GI-NEXT:    ret
 entry:
   %d = call i32 @llvm.fshl(i32 %a, i32 %a, i32 3)
   ret i32 %d
@@ -3209,9 +3241,14 @@ define <8 x i8> @rotl_v8i8_c(<8 x i8> %a) {
 ;
 ; CHECK-GI-LABEL: rotl_v8i8_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    shl v1.8b, v0.8b, #3
-; CHECK-GI-NEXT:    ushr v0.8b, v0.8b, #5
-; CHECK-GI-NEXT:    orr v0.8b, v1.8b, v0.8b
+; CHECK-GI-NEXT:    movi v1.8b, #3
+; CHECK-GI-NEXT:    movi v2.8b, #7
+; CHECK-GI-NEXT:    neg v1.8b, v1.8b
+; CHECK-GI-NEXT:    and v1.8b, v1.8b, v2.8b
+; CHECK-GI-NEXT:    shl v2.8b, v0.8b, #3
+; CHECK-GI-NEXT:    neg v1.8b, v1.8b
+; CHECK-GI-NEXT:    ushl v0.8b, v0.8b, v1.8b
+; CHECK-GI-NEXT:    orr v0.8b, v2.8b, v0.8b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <8 x i8> @llvm.fshl(<8 x i8> %a, <8 x i8> %a, <8 x i8> <i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3>)
@@ -3228,9 +3265,13 @@ define <8 x i8> @rotr_v8i8_c(<8 x i8> %a) {
 ;
 ; CHECK-GI-LABEL: rotr_v8i8_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    ushr v1.8b, v0.8b, #3
-; CHECK-GI-NEXT:    shl v0.8b, v0.8b, #5
-; CHECK-GI-NEXT:    orr v0.8b, v1.8b, v0.8b
+; CHECK-GI-NEXT:    movi v1.8b, #3
+; CHECK-GI-NEXT:    movi v2.8b, #7
+; CHECK-GI-NEXT:    neg v1.8b, v1.8b
+; CHECK-GI-NEXT:    and v1.8b, v1.8b, v2.8b
+; CHECK-GI-NEXT:    ushr v2.8b, v0.8b, #3
+; CHECK-GI-NEXT:    ushl v0.8b, v0.8b, v1.8b
+; CHECK-GI-NEXT:    orr v0.8b, v2.8b, v0.8b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <8 x i8> @llvm.fshr(<8 x i8> %a, <8 x i8> %a, <8 x i8> <i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3>)
@@ -3247,9 +3288,14 @@ define <16 x i8> @rotl_v16i8_c(<16 x i8> %a) {
 ;
 ; CHECK-GI-LABEL: rotl_v16i8_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    shl v1.16b, v0.16b, #3
-; CHECK-GI-NEXT:    ushr v0.16b, v0.16b, #5
-; CHECK-GI-NEXT:    orr v0.16b, v1.16b, v0.16b
+; CHECK-GI-NEXT:    movi v1.16b, #3
+; CHECK-GI-NEXT:    movi v2.16b, #7
+; CHECK-GI-NEXT:    neg v1.16b, v1.16b
+; CHECK-GI-NEXT:    and v1.16b, v1.16b, v2.16b
+; CHECK-GI-NEXT:    shl v2.16b, v0.16b, #3
+; CHECK-GI-NEXT:    neg v1.16b, v1.16b
+; CHECK-GI-NEXT:    ushl v0.16b, v0.16b, v1.16b
+; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <16 x i8> @llvm.fshl(<16 x i8> %a, <16 x i8> %a, <16 x i8> <i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3>)
@@ -3266,9 +3312,13 @@ define <16 x i8> @rotr_v16i8_c(<16 x i8> %a) {
 ;
 ; CHECK-GI-LABEL: rotr_v16i8_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    ushr v1.16b, v0.16b, #3
-; CHECK-GI-NEXT:    shl v0.16b, v0.16b, #5
-; CHECK-GI-NEXT:    orr v0.16b, v1.16b, v0.16b
+; CHECK-GI-NEXT:    movi v1.16b, #3
+; CHECK-GI-NEXT:    movi v2.16b, #7
+; CHECK-GI-NEXT:    neg v1.16b, v1.16b
+; CHECK-GI-NEXT:    and v1.16b, v1.16b, v2.16b
+; CHECK-GI-NEXT:    ushr v2.16b, v0.16b, #3
+; CHECK-GI-NEXT:    ushl v0.16b, v0.16b, v1.16b
+; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <16 x i8> @llvm.fshr(<16 x i8> %a, <16 x i8> %a, <16 x i8> <i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3>)
@@ -3285,9 +3335,14 @@ define <4 x i16> @rotl_v4i16_c(<4 x i16> %a) {
 ;
 ; CHECK-GI-LABEL: rotl_v4i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    shl v1.4h, v0.4h, #3
-; CHECK-GI-NEXT:    ushr v0.4h, v0.4h, #13
-; CHECK-GI-NEXT:    orr v0.8b, v1.8b, v0.8b
+; CHECK-GI-NEXT:    movi v1.4h, #3
+; CHECK-GI-NEXT:    movi v2.4h, #15
+; CHECK-GI-NEXT:    neg v1.4h, v1.4h
+; CHECK-GI-NEXT:    and v1.8b, v1.8b, v2.8b
+; CHECK-GI-NEXT:    shl v2.4h, v0.4h, #3
+; CHECK-GI-NEXT:    neg v1.4h, v1.4h
+; CHECK-GI-NEXT:    ushl v0.4h, v0.4h, v1.4h
+; CHECK-GI-NEXT:    orr v0.8b, v2.8b, v0.8b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <4 x i16> @llvm.fshl(<4 x i16> %a, <4 x i16> %a, <4 x i16> <i16 3, i16 3, i16 3, i16 3>)
@@ -3304,9 +3359,13 @@ define <4 x i16> @rotr_v4i16_c(<4 x i16> %a) {
 ;
 ; CHECK-GI-LABEL: rotr_v4i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    ushr v1.4h, v0.4h, #3
-; CHECK-GI-NEXT:    shl v0.4h, v0.4h, #13
-; CHECK-GI-NEXT:    orr v0.8b, v1.8b, v0.8b
+; CHECK-GI-NEXT:    movi v1.4h, #3
+; CHECK-GI-NEXT:    movi v2.4h, #15
+; CHECK-GI-NEXT:    neg v1.4h, v1.4h
+; CHECK-GI-NEXT:    and v1.8b, v1.8b, v2.8b
+; CHECK-GI-NEXT:    ushr v2.4h, v0.4h, #3
+; CHECK-GI-NEXT:    ushl v0.4h, v0.4h, v1.4h
+; CHECK-GI-NEXT:    orr v0.8b, v2.8b, v0.8b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <4 x i16> @llvm.fshr(<4 x i16> %a, <4 x i16> %a, <4 x i16> <i16 3, i16 3, i16 3, i16 3>)
@@ -3327,24 +3386,34 @@ define <7 x i16> @rotl_v7i16_c(<7 x i16> %a) {
 ;
 ; CHECK-GI-LABEL: rotl_v7i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    mov w8, #13 // =0xd
-; CHECK-GI-NEXT:    mov w9, #3 // =0x3
-; CHECK-GI-NEXT:    fmov s1, w8
-; CHECK-GI-NEXT:    fmov s2, w9
-; CHECK-GI-NEXT:    mov v1.h[1], w8
-; CHECK-GI-NEXT:    mov v2.h[1], w9
-; CHECK-GI-NEXT:    mov v1.h[2], w8
-; CHECK-GI-NEXT:    mov v2.h[2], w9
-; CHECK-GI-NEXT:    mov v1.h[3], w8
-; CHECK-GI-NEXT:    mov v2.h[3], w9
-; CHECK-GI-NEXT:    mov v1.h[4], w8
-; CHECK-GI-NEXT:    mov v2.h[4], w9
-; CHECK-GI-NEXT:    mov v1.h[5], w8
-; CHECK-GI-NEXT:    mov v2.h[5], w9
-; CHECK-GI-NEXT:    mov v1.h[6], w8
-; CHECK-GI-NEXT:    mov v2.h[6], w9
-; CHECK-GI-NEXT:    neg v1.8h, v1.8h
+; CHECK-GI-NEXT:    mov w8, #3 // =0x3
+; CHECK-GI-NEXT:    mov w9, #0 // =0x0
+; CHECK-GI-NEXT:    mov w10, #15 // =0xf
+; CHECK-GI-NEXT:    fmov s1, w9
+; CHECK-GI-NEXT:    fmov s2, w8
+; CHECK-GI-NEXT:    fmov s3, w10
+; CHECK-GI-NEXT:    mov v1.h[1], w9
+; CHECK-GI-NEXT:    mov v2.h[1], w8
+; CHECK-GI-NEXT:    mov v3.h[1], w10
+; CHECK-GI-NEXT:    mov v1.h[2], w9
+; CHECK-GI-NEXT:    mov v2.h[2], w8
+; CHECK-GI-NEXT:    mov v3.h[2], w10
+; CHECK-GI-NEXT:    mov v1.h[3], w9
+; CHECK-GI-NEXT:    mov v2.h[3], w8
+; CHECK-GI-NEXT:    mov v3.h[3], w10
+; CHECK-GI-NEXT:    mov v1.h[4], w9
+; CHECK-GI-NEXT:    mov v2.h[4], w8
+; CHECK-GI-NEXT:    mov v3.h[4], w10
+; CHECK-GI-NEXT:    mov v1.h[5], w9
+; CHECK-GI-NEXT:    mov v2.h[5], w8
+; CHECK-GI-NEXT:    mov v3.h[5], w10
+; CHECK-GI-NEXT:    mov v1.h[6], w9
+; CHECK-GI-NEXT:    mov v2.h[6], w8
+; CHECK-GI-NEXT:    mov v3.h[6], w10
+; CHECK-GI-NEXT:    sub v1.8h, v1.8h, v2.8h
 ; CHECK-GI-NEXT:    ushl v2.8h, v0.8h, v2.8h
+; CHECK-GI-NEXT:    and v1.16b, v1.16b, v3.16b
+; CHECK-GI-NEXT:    neg v1.8h, v1.8h
 ; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v1.8h
 ; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
 ; CHECK-GI-NEXT:    ret
@@ -3368,25 +3437,35 @@ define <7 x i16> @rotr_v7i16_c(<7 x i16> %a) {
 ; CHECK-GI-LABEL: rotr_v7i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
 ; CHECK-GI-NEXT:    mov w8, #3 // =0x3
-; CHECK-GI-NEXT:    mov w9, #13 // =0xd
-; CHECK-GI-NEXT:    fmov s1, w8
-; CHECK-GI-NEXT:    fmov s2, w9
-; CHECK-GI-NEXT:    mov v1.h[1], w8
-; CHECK-GI-NEXT:    mov v2.h[1], w9
-; CHECK-GI-NEXT:    mov v1.h[2], w8
-; CHECK-GI-NEXT:    mov v2.h[2], w9
-; CHECK-GI-NEXT:    mov v1.h[3], w8
-; CHECK-GI-NEXT:    mov v2.h[3], w9
-; CHECK-GI-NEXT:    mov v1.h[4], w8
-; CHECK-GI-NEXT:    mov v2.h[4], w9
-; CHECK-GI-NEXT:    mov v1.h[5], w8
-; CHECK-GI-NEXT:    mov v2.h[5], w9
-; CHECK-GI-NEXT:    mov v1.h[6], w8
-; CHECK-GI-NEXT:    mov v2.h[6], w9
-; CHECK-GI-NEXT:    neg v1.8h, v1.8h
-; CHECK-GI-NEXT:    ushl v1.8h, v0.8h, v1.8h
-; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v2.8h
-; CHECK-GI-NEXT:    orr v0.16b, v1.16b, v0.16b
+; CHECK-GI-NEXT:    mov w9, #0 // =0x0
+; CHECK-GI-NEXT:    mov w10, #15 // =0xf
+; CHECK-GI-NEXT:    fmov s1, w9
+; CHECK-GI-NEXT:    fmov s2, w8
+; CHECK-GI-NEXT:    fmov s3, w10
+; CHECK-GI-NEXT:    mov v1.h[1], w9
+; CHECK-GI-NEXT:    mov v2.h[1], w8
+; CHECK-GI-NEXT:    mov v3.h[1], w10
+; CHECK-GI-NEXT:    mov v1.h[2], w9
+; CHECK-GI-NEXT:    mov v2.h[2], w8
+; CHECK-GI-NEXT:    mov v3.h[2], w10
+; CHECK-GI-NEXT:    mov v1.h[3], w9
+; CHECK-GI-NEXT:    mov v2.h[3], w8
+; CHECK-GI-NEXT:    mov v3.h[3], w10
+; CHECK-GI-NEXT:    mov v1.h[4], w9
+; CHECK-GI-NEXT:    mov v2.h[4], w8
+; CHECK-GI-NEXT:    mov v3.h[4], w10
+; CHECK-GI-NEXT:    mov v1.h[5], w9
+; CHECK-GI-NEXT:    mov v2.h[5], w8
+; CHECK-GI-NEXT:    mov v3.h[5], w10
+; CHECK-GI-NEXT:    mov v1.h[6], w9
+; CHECK-GI-NEXT:    mov v2.h[6], w8
+; CHECK-GI-NEXT:    mov v3.h[6], w10
+; CHECK-GI-NEXT:    sub v1.8h, v1.8h, v2.8h
+; CHECK-GI-NEXT:    neg v2.8h, v2.8h
+; CHECK-GI-NEXT:    and v1.16b, v1.16b, v3.16b
+; CHECK-GI-NEXT:    ushl v2.8h, v0.8h, v2.8h
+; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v1.8h
+; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <7 x i16> @llvm.fshr(<7 x i16> %a, <7 x i16> %a, <7 x i16> <i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3>)
@@ -3403,9 +3482,14 @@ define <8 x i16> @rotl_v8i16_c(<8 x i16> %a) {
 ;
 ; CHECK-GI-LABEL: rotl_v8i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    shl v1.8h, v0.8h, #3
-; CHECK-GI-NEXT:    ushr v0.8h, v0.8h, #13
-; CHECK-GI-NEXT:    orr v0.16b, v1.16b, v0.16b
+; CHECK-GI-NEXT:    movi v1.8h, #3
+; CHECK-GI-NEXT:    movi v2.8h, #15
+; CHECK-GI-NEXT:    neg v1.8h, v1.8h
+; CHECK-GI-NEXT:    and v1.16b, v1.16b, v2.16b
+; CHECK-GI-NEXT:    shl v2.8h, v0.8h, #3
+; CHECK-GI-NEXT:    neg v1.8h, v1.8h
+; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v1.8h
+; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <8 x i16> @llvm.fshl(<8 x i16> %a, <8 x i16> %a, <8 x i16> <i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3>)
@@ -3422,9 +3506,13 @@ define <8 x i16> @rotr_v8i16_c(<8 x i16> %a) {
 ;
 ; CHECK-GI-LABEL: rotr_v8i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    ushr v1.8h, v0.8h, #3
-; CHECK-GI-NEXT:    shl v0.8h, v0.8h, #13
-; CHECK-GI-NEXT:    orr v0.16b, v1.16b, v0.16b
+; CHECK-GI-NEXT:    movi v1.8h, #3
+; CHECK-GI-NEXT:    movi v2.8h, #15
+; CHECK-GI-NEXT:    neg v1.8h, v1.8h
+; CHECK-GI-NEXT:    and v1.16b, v1.16b, v2.16b
+; CHECK-GI-NEXT:    ushr v2.8h, v0.8h, #3
+; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v1.8h
+; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <8 x i16> @llvm.fshr(<8 x i16> %a, <8 x i16> %a, <8 x i16> <i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3>)
@@ -3444,12 +3532,17 @@ define <16 x i16> @rotl_v16i16_c(<16 x i16> %a) {
 ;
 ; CHECK-GI-LABEL: rotl_v16i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    shl v2.8h, v0.8h, #3
-; CHECK-GI-NEXT:    shl v3.8h, v1.8h, #3
-; CHECK-GI-NEXT:    ushr v0.8h, v0.8h, #13
-; CHECK-GI-NEXT:    ushr v1.8h, v1.8h, #13
-; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
-; CHECK-GI-NEXT:    orr v1.16b, v3.16b, v1.16b
+; CHECK-GI-NEXT:    movi v2.8h, #3
+; CHECK-GI-NEXT:    movi v3.8h, #15
+; CHECK-GI-NEXT:    shl v4.8h, v1.8h, #3
+; CHECK-GI-NEXT:    neg v2.8h, v2.8h
+; CHECK-GI-NEXT:    and v2.16b, v2.16b, v3.16b
+; CHECK-GI-NEXT:    shl v3.8h, v0.8h, #3
+; CHECK-GI-NEXT:    neg v2.8h, v2.8h
+; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v2.8h
+; CHECK-GI-NEXT:    ushl v1.8h, v1.8h, v2.8h
+; CHECK-GI-NEXT:    orr v0.16b, v3.16b, v0.16b
+; CHECK-GI-NEXT:    orr v1.16b, v4.16b, v1.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <16 x i16> @llvm.fshl(<16 x i16> %a, <16 x i16> %a, <16 x i16> <i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3>)
@@ -3469,12 +3562,16 @@ define <16 x i16> @rotr_v16i16_c(<16 x i16> %a) {
 ;
 ; CHECK-GI-LABEL: rotr_v16i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    ushr v2.8h, v0.8h, #3
-; CHECK-GI-NEXT:    ushr v3.8h, v1.8h, #3
-; CHECK-GI-NEXT:    shl v0.8h, v0.8h, #13
-; CHECK-GI-NEXT:    shl v1.8h, v1.8h, #13
-; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
-; CHECK-GI-NEXT:    orr v1.16b, v3.16b, v1.16b
+; CHECK-GI-NEXT:    movi v2.8h, #3
+; CHECK-GI-NEXT:    movi v3.8h, #15
+; CHECK-GI-NEXT:    ushr v4.8h, v1.8h, #3
+; CHECK-GI-NEXT:    neg v2.8h, v2.8h
+; CHECK-GI-NEXT:    and v2.16b, v2.16b, v3.16b
+; CHECK-GI-NEXT:    ushr v3.8h, v0.8h, #3
+; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v2.8h
+; CHECK-GI-NEXT:    ushl v1.8h, v1.8h, v2.8h
+; CHECK-GI-NEXT:    orr v0.16b, v3.16b, v0.16b
+; CHECK-GI-NEXT:    orr v1.16b, v4.16b, v1.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <16 x i16> @llvm.fshr(<16 x i16> %a, <16 x i16> %a, <16 x i16> <i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3>)
@@ -3491,9 +3588,14 @@ define <2 x i32> @rotl_v2i32_c(<2 x i32> %a) {
 ;
 ; CHECK-GI-LABEL: rotl_v2i32_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    shl v1.2s, v0.2s, #3
-; CHECK-GI-NEXT:    ushr v0.2s, v0.2s, #29
-; CHECK-GI-NEXT:    orr v0.8b, v1.8b, v0.8b
+; CHECK-GI-NEXT:    movi v1.2s, #3
+; CHECK-GI-NEXT:    movi v2.2s, #31
+; CHECK-GI-NEXT:    neg v1.2s, v1.2s
+; CHECK-GI-NEXT:    and v1.8b, v1.8b, v2.8b
+; CHECK-GI-NEXT:    shl v2.2s, v0.2s, #3
+; CHECK-GI-NEXT:    neg v1.2s, v1.2s
+; CHECK-GI-NEXT:    ushl v0.2s, v0.2s, v1.2s
+; CHECK-GI-NEXT:    orr v0.8b, v2.8b, v0.8b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <2 x i32> @llvm.fshl(<2 x i32> %a, <2 x i32> %a, <2 x i32> <i32 3, i32 3>)
@@ -3510,9 +3612,13 @@ define <2 x i32> @rotr_v2i32_c(<2 x i32> %a) {
 ;
 ; CHECK-GI-LABEL: rotr_v2i32_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    ushr v1.2s, v0.2s, #3
-; CHECK-GI-NEXT:    shl v0.2s, v0.2s, #29
-; CHECK-GI-NEXT:    orr v0.8b, v1.8b, v0.8b
+; CHECK-GI-NEXT:    movi v1.2s, #3
+; CHECK-GI-NEXT:    movi v2.2s, #31
+; CHECK-GI-NEXT:    neg v1.2s, v1.2s
+; CHECK-GI-NEXT:    and v1.8b, v1.8b, v2.8b
+; CHECK-GI-NEXT:    ushr v2.2s, v0.2s, #3
+; CHECK-GI-NEXT:    ushl v0.2s, v0.2s, v1.2s
+; CHECK-GI-NEXT:    orr v0.8b, v2.8b, v0.8b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <2 x i32> @llvm.fshr(<2 x i32> %a, <2 x i32> %a, <2 x i32> <i32 3, i32 3>)
@@ -3529,9 +3635,14 @@ define <4 x i32> @rotl_v4i32_c(<4 x i32> %a) {
 ;
 ; CHECK-GI-LABEL: rotl_v4i32_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    shl v1.4s, v0.4s, #3
-; CHECK-GI-NEXT:    ushr v0.4s, v0.4s, #29
-; CHECK-GI-NEXT:    orr v0.16b, v1.16b, v0.16b
+; CHECK-GI-NEXT:    movi v1.4s, #3
+; CHECK-GI-NEXT:    movi v2.4s, #31
+; CHECK-GI-NEXT:    neg v1.4s, v1.4s
+; CHECK-GI-NEXT:    and v1.16b, v1.16b, v2.16b
+; CHECK-GI-NEXT:    shl v2.4s, v0.4s, #3
+; CHECK-GI-NEXT:    neg v1.4s, v1.4s
+; CHECK-GI-NEXT:    ushl v0.4s, v0.4s, v1.4s
+; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <4 x i32> @llvm.fshl(<4 x i32> %a, <4 x i32> %a, <4 x i32> <i32 3, i32 3, i32 3, i32 3>)
@@ -3548,9 +3659,13 @@ define <4 x i32> @rotr_v4i32_c(<4 x i32> %a) {
 ;
 ; CHECK-GI-LABEL: rotr_v4i32_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    ushr v1.4s, v0.4s, #3
-; CHECK-GI-NEXT:    shl v0.4s, v0.4s, #29
-; CHECK-GI-NEXT:    orr v0.16b, v1.16b, v0.16b
+; CHECK-GI-NEXT:    movi v1.4s, #3
+; CHECK-GI-NEXT:    movi v2.4s, #31
+; CHECK-GI-NEXT:    neg v1.4s, v1.4s
+; CHECK-GI-NEXT:    and v1.16b, v1.16b, v2.16b
+; CHECK-GI-NEXT:    ushr v2.4s, v0.4s, #3
+; CHECK-GI-NEXT:    ushl v0.4s, v0.4s, v1.4s
+; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <4 x i32> @llvm.fshr(<4 x i32> %a, <4 x i32> %a, <4 x i32> <i32 3, i32 3, i32 3, i32 3>)
@@ -3587,42 +3702,55 @@ define <7 x i32> @rotl_v7i32_c(<7 x i32> %a) {
 ;
 ; CHECK-GI-LABEL: rotl_v7i32_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    mov v0.s[0], w0
-; CHECK-GI-NEXT:    mov v1.s[0], w0
-; CHECK-GI-NEXT:    mov w8, #29 // =0x1d
-; CHECK-GI-NEXT:    mov v2.s[0], w8
-; CHECK-GI-NEXT:    mov w9, #3 // =0x3
-; CHECK-GI-NEXT:    mov v3.s[0], w4
-; CHECK-GI-NEXT:    mov v4.s[0], w9
-; CHECK-GI-NEXT:    mov v5.s[0], w4
-; CHECK-GI-NEXT:    mov v0.s[1], w1
-; CHECK-GI-NEXT:    mov v1.s[1], w1
-; CHECK-GI-NEXT:    mov v2.s[1], w8
-; CHECK-GI-NEXT:    mov v3.s[1], w5
-; CHECK-GI-NEXT:    mov v4.s[1], w9
-; CHECK-GI-NEXT:    mov v5.s[1], w5
-; CHECK-GI-NEXT:    mov v0.s[2], w2
-; CHECK-GI-NEXT:    mov v1.s[2], w2
-; CHECK-GI-NEXT:    mov v2.s[2], w8
-; CHECK-GI-NEXT:    mov v3.s[2], w6
-; CHECK-GI-NEXT:    mov v4.s[2], w9
-; CHECK-GI-NEXT:    mov v5.s[2], w6
-; CHECK-GI-NEXT:    mov v0.s[3], w3
-; CHECK-GI-NEXT:    mov v1.s[3], w3
-; CHECK-GI-NEXT:    neg v2.4s, v2.4s
-; CHECK-GI-NEXT:    ushl v3.4s, v3.4s, v4.4s
-; CHECK-GI-NEXT:    shl v0.4s, v0.4s, #3
-; CHECK-GI-NEXT:    ushr v1.4s, v1.4s, #29
-; CHECK-GI-NEXT:    ushl v2.4s, v5.4s, v2.4s
-; CHECK-GI-NEXT:    orr v0.16b, v0.16b, v1.16b
-; CHECK-GI-NEXT:    orr v1.16b, v3.16b, v2.16b
-; CHECK-GI-NEXT:    mov s2, v0.s[1]
-; CHECK-GI-NEXT:    mov s3, v0.s[2]
-; CHECK-GI-NEXT:    mov s4, v0.s[3]
-; CHECK-GI-NEXT:    mov s5, v1.s[1]
-; CHECK-GI-NEXT:    mov s6, v1.s[2]
-; CHECK-GI-NEXT:    fmov w0, s0
-; CHECK-GI-NEXT:    fmov w4, s1
+; CHECK-GI-NEXT:    mov w8, #3 // =0x3
+; CHECK-GI-NEXT:    mov v0.s[0], wzr
+; CHECK-GI-NEXT:    mov w9, #31 // =0x1f
+; CHECK-GI-NEXT:    mov v1.s[0], w8
+; CHECK-GI-NEXT:    mov v2.s[0], w9
+; CHECK-GI-NEXT:    mov v3.s[0], w0
+; CHECK-GI-NEXT:    mov v4.s[0], w0
+; CHECK-GI-NEXT:    movi v5.4s, #3
+; CHECK-GI-NEXT:    mov v6.s[0], w4
+; CHECK-GI-NEXT:    mov v7.s[0], w8
+; CHECK-GI-NEXT:    mov v16.s[0], w4
+; CHECK-GI-NEXT:    movi v17.4s, #31
+; CHECK-GI-NEXT:    mov v0.s[1], wzr
+; CHECK-GI-NEXT:    mov v1.s[1], w8
+; CHECK-GI-NEXT:    mov v2.s[1], w9
+; CHECK-GI-NEXT:    mov v3.s[1], w1
+; CHECK-GI-NEXT:    mov v4.s[1], w1
+; CHECK-GI-NEXT:    mov v6.s[1], w5
+; CHECK-GI-NEXT:    mov v7.s[1], w8
+; CHECK-GI-NEXT:    mov v16.s[1], w5
+; CHECK-GI-NEXT:    mov v0.s[2], wzr
+; CHECK-GI-NEXT:    mov v1.s[2], w8
+; CHECK-GI-NEXT:    mov v2.s[2], w9
+; CHECK-GI-NEXT:    mov v3.s[2], w2
+; CHECK-GI-NEXT:    mov v4.s[2], w2
+; CHECK-GI-NEXT:    mov v6.s[2], w6
+; CHECK-GI-NEXT:    mov v7.s[2], w8
+; CHECK-GI-NEXT:    mov v16.s[2], w6
+; CHECK-GI-NEXT:    sub v0.4s, v0.4s, v1.4s
+; CHECK-GI-NEXT:    neg v1.4s, v5.4s
+; CHECK-GI-NEXT:    mov v3.s[3], w3
+; CHECK-GI-NEXT:    mov v4.s[3], w3
+; CHECK-GI-NEXT:    and v1.16b, v1.16b, v17.16b
+; CHECK-GI-NEXT:    and v0.16b, v0.16b, v2.16b
+; CHECK-GI-NEXT:    shl v2.4s, v3.4s, #3
+; CHECK-GI-NEXT:    ushl v3.4s, v6.4s, v7.4s
+; CHECK-GI-NEXT:    neg v1.4s, v1.4s
+; CHECK-GI-NEXT:    neg v0.4s, v0.4s
+; CHECK-GI-NEXT:    ushl v1.4s, v4.4s, v1.4s
+; CHECK-GI-NEXT:    ushl v0.4s, v16.4s, v0.4s
+; CHECK-GI-NEXT:    orr v1.16b, v2.16b, v1.16b
+; CHECK-GI-NEXT:    orr v0.16b, v3.16b, v0.16b
+; CHECK-GI-NEXT:    mov s2, v1.s[1]
+; CHECK-GI-NEXT:    mov s3, v1.s[2]
+; CHECK-GI-NEXT:    mov s4, v1.s[3]
+; CHECK-GI-NEXT:    mov s5, v0.s[1]
+; CHECK-GI-NEXT:    mov s6, v0.s[2]
+; CHECK-GI-NEXT:    fmov w0, s1
+; CHECK-GI-NEXT:    fmov w4, s0
 ; CHECK-GI-NEXT:    fmov w1, s2
 ; CHECK-GI-NEXT:    fmov w2, s3
 ; CHECK-GI-NEXT:    fmov w3, s4
@@ -3664,42 +3792,54 @@ define <7 x i32> @rotr_v7i32_c(<7 x i32> %a) {
 ;
 ; CHECK-GI-LABEL: rotr_v7i32_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    mov v0.s[0], w0
-; CHECK-GI-NEXT:    mov v1.s[0], w0
 ; CHECK-GI-NEXT:    mov w8, #3 // =0x3
+; CHECK-GI-NEXT:    mov v0.s[0], wzr
+; CHECK-GI-NEXT:    mov v1.s[0], w0
 ; CHECK-GI-NEXT:    mov v2.s[0], w8
-; CHECK-GI-NEXT:    mov w9, #29 // =0x1d
-; CHECK-GI-NEXT:    mov v3.s[0], w4
-; CHECK-GI-NEXT:    mov v4.s[0], w4
+; CHECK-GI-NEXT:    mov v3.s[0], w0
+; CHECK-GI-NEXT:    mov w9, #31 // =0x1f
+; CHECK-GI-NEXT:    mov v4.s[0], w8
 ; CHECK-GI-NEXT:    mov v5.s[0], w9
-; CHECK-GI-NEXT:    mov v0.s[1], w1
+; CHECK-GI-NEXT:    mov v6.s[0], w4
+; CHECK-GI-NEXT:    mov v7.s[0], w4
+; CHECK-GI-NEXT:    movi v16.4s, #3
+; CHECK-GI-NEXT:    movi v17.4s, #31
+; CHECK-GI-NEXT:    mov v0.s[1], wzr
 ; CHECK-GI-NEXT:    mov v1.s[1], w1
 ; CHECK-GI-NEXT:    mov v2.s[1], w8
-; CHECK-GI-NEXT:    mov v3.s[1], w5
-; CHECK-GI-NEXT:    mov v4.s[1], w5
+; CHECK-GI-NEXT:    mov v3.s[1], w1
+; CHECK-GI-NEXT:    mov v4.s[1], w8
 ; CHECK-GI-NEXT:    mov v5.s[1], w9
-; CHECK-GI-NEXT:    mov v0.s[2], w2
+; CHECK-GI-NEXT:    mov v6.s[1], w5
+; CHECK-GI-NEXT:    mov v7.s[1], w5
+; CHECK-GI-NEXT:    neg v16.4s, v16.4s
+; CHECK-GI-NEXT:    mov v0.s[2], wzr
 ; CHECK-GI-NEXT:    mov v1.s[2], w2
 ; CHECK-GI-NEXT:    mov v2.s[2], w8
-; CHECK-GI-NEXT:    mov v3.s[2], w6
-; CHECK-GI-NEXT:    mov v4.s[2], w6
+; CHECK-GI-NEXT:    mov v3.s[2], w2
+; CHECK-GI-NEXT:    mov v4.s[2], w8
 ; CHECK-GI-NEXT:    mov v5.s[2], w9
-; CHECK-GI-NEXT:    mov v0.s[3], w3
+; CHECK-GI-NEXT:    mov v6.s[2], w6
+; CHECK-GI-NEXT:    mov v7.s[2], w6
 ; CHECK-GI-NEXT:    mov v1.s[3], w3
-; CHECK-GI-NEXT:    neg v2.4s, v2.4s
-; CHECK-GI-NEXT:    ushl v4.4s, v4.4s, v5.4s
-; CHECK-GI-NEXT:    ushr v0.4s, v0.4s, #3
-; CHECK-GI-NEXT:    shl v1.4s, v1.4s, #29
+; CHECK-GI-NEXT:    sub v0.4s, v0.4s, v2.4s
+; CHECK-GI-NEXT:    mov v3.s[3], w3
+; CHECK-GI-NEXT:    and v2.16b, v16.16b, v17.16b
+; CHECK-GI-NEXT:    neg v4.4s, v4.4s
+; CHECK-GI-NEXT:    and v0.16b, v0.16b, v5.16b
+; CHECK-GI-NEXT:    ushr v1.4s, v1.4s, #3
 ; CHECK-GI-NEXT:    ushl v2.4s, v3.4s, v2.4s
-; CHECK-GI-NEXT:    orr v0.16b, v0.16b, v1.16b
-; CHECK-GI-NEXT:    orr v1.16b, v2.16b, v4.16b
-; CHECK-GI-NEXT:    mov s2, v0.s[1]
-; CHECK-GI-NEXT:    mov s3, v0.s[2]
-; CHECK-GI-NEXT:    mov s4, v0.s[3]
-; CHECK-GI-NEXT:    mov s5, v1.s[1]
-; CHECK-GI-NEXT:    mov s6, v1.s[2]
-; CHECK-GI-NEXT:    fmov w0, s0
-; CHECK-GI-NEXT:    fmov w4, s1
+; CHECK-GI-NEXT:    ushl v3.4s, v6.4s, v4.4s
+; CHECK-GI-NEXT:    ushl v0.4s, v7.4s, v0.4s
+; CHECK-GI-NEXT:    orr v1.16b, v1.16b, v2.16b
+; CHECK-GI-NEXT:    orr v0.16b, v3.16b, v0.16b
+; CHECK-GI-NEXT:    mov s2, v1.s[1]
+; CHECK-GI-NEXT:    mov s3, v1.s[2]
+; CHECK-GI-NEXT:    mov s4, v1.s[3]
+; CHECK-GI-NEXT:    fmov w0, s1
+; CHECK-GI-NEXT:    mov s5, v0.s[1]
+; CHECK-GI-NEXT:    mov s6, v0.s[2]
+; CHECK-GI-NEXT:    fmov w4, s0
 ; CHECK-GI-NEXT:    fmov w1, s2
 ; CHECK-GI-NEXT:    fmov w2, s3
 ; CHECK-GI-NEXT:    fmov w3, s4
@@ -3724,12 +3864,17 @@ define <8 x i32> @rotl_v8i32_c(<8 x i32> %a) {
 ;
 ; CHECK-GI-LABEL: rotl_v8i32_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    shl v2.4s, v0.4s, #3
-; CHECK-GI-NEXT:    shl v3.4s, v1.4s, #3
-; CHECK-GI-NEXT:    ushr v0.4s, v0.4s, #29
-; CHECK-GI-NEXT:    ushr v1.4s, v1.4s, #29
-; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
-; CHECK-GI-NEXT:    orr v1.16b, v3.16b, v1.16b
+; CHECK-GI-NEXT:    movi v2.4s, #3
+; CHECK-GI-NEXT:    movi v3.4s, #31
+; CHECK-GI-NEXT:    shl v4.4s, v1.4s, #3
+; CHECK-GI-NEXT:    neg v2.4s, v2.4s
+; CHECK-GI-NEXT:    and v2.16b, v2.16b, v3.16b
+; CHECK-GI-NEXT:    shl v3.4s, v0.4s, #3
+; CHECK-GI-NEXT:    neg v2.4s, v2.4s
+; CHECK-GI-NEXT:    ushl v0.4s, v0.4s, v2.4s
+; CHECK-GI-NEXT:    ushl v1.4s, v1.4s, v2.4s
+; CHECK-GI-NEXT:    orr v0.16b, v3.16b, v0.16b
+; CHECK-GI-NEXT:    orr v1.16b, v4.16b, v1.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <8 x i32> @llvm.fshl(<8 x i32> %a, <8 x i32> %a, <8 x i32> <i32 3, i32 3, i32 3, i32 3, i32 3, i32 3, i32 3, i32 3>)
@@ -3749,12 +3894,16 @@ define <8 x i32> @rotr_v8i32_c(<8 x i32> %a) {
 ;
 ; CHECK-GI-LABEL: rotr_v8i32_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    ushr v2.4s, v0.4s, #3
-; CHECK-GI-NEXT:    ushr v3.4s, v1.4s, #3
-; CHECK-GI-NEXT:    shl v0.4s, v0.4s, #29
-; CHECK-GI-NEXT:    shl v1.4s, v1.4s, #29
-; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
-; CHECK-GI-NEXT:    orr v1.16b, v3.16b, v1.16b
+; CHECK-GI-NEXT:    movi v2.4s, #3
+; CHECK-GI-NEXT:    movi v3.4s, #31
+; CHECK-GI-NEXT:    ushr v4.4s, v1.4s, #3
+; CHECK-GI-NEXT:    neg v2.4s, v2.4s
+; CHECK-GI-NEXT:    and v2.16b, v2.16b, v3.16b
+; CHECK-GI-NEXT:    ushr v3.4s, v0.4s, #3
+; CHECK-GI-NEXT:    ushl v0.4s, v0.4s, v2.4s
+; CHECK-GI-NEXT:    ushl v1.4s, v1.4s, v2.4s
+; CHECK-GI-NEXT:    orr v0.16b, v3.16b, v0.16b
+; CHECK-GI-NEXT:    orr v1.16b, v4.16b, v1.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <8 x i32> @llvm.fshr(<8 x i32> %a, <8 x i32> %a, <8 x i32> <i32 3, i32 3, i32 3, i32 3, i32 3, i32 3, i32 3, i32 3>)
@@ -3771,9 +3920,16 @@ define <2 x i64> @rotl_v2i64_c(<2 x i64> %a) {
 ;
 ; CHECK-GI-LABEL: rotl_v2i64_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    shl v1.2d, v0.2d, #3
-; CHECK-GI-NEXT:    ushr v0.2d, v0.2d, #61
-; CHECK-GI-NEXT:    orr v0.16b, v1.16b, v0.16b
+; CHECK-GI-NEXT:    adrp x8, .LCPI112_1
+; CHECK-GI-NEXT:    ldr q1, [x8, :lo12:.LCPI112_1]
+; CHECK-GI-NEXT:    adrp x8, .LCPI112_0
+; CHECK-GI-NEXT:    ldr q2, [x8, :lo12:.LCPI112_0]
+; CHECK-GI-NEXT:    neg v1.2d, v1.2d
+; CHECK-GI-NEXT:    and v1.16b, v1.16b, v2.16b
+; CHECK-GI-NEXT:    shl v2.2d, v0.2d, #3
+; CHECK-GI-NEXT:    neg v1.2d, v1.2d
+; CHECK-GI-NEXT:    ushl v0.2d, v0.2d, v1.2d
+; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <2 x i64> @llvm.fshl(<2 x i64> %a, <2 x i64> %a, <2 x i64> <i64 3, i64 3>)
@@ -3790,9 +3946,15 @@ define <2 x i64> @rotr_v2i64_c(<2 x i64> %a) {
 ;
 ; CHECK-GI-LABEL: rotr_v2i64_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    ushr v1.2d, v0.2d, #3
-; CHECK-GI-NEXT:    shl v0.2d, v0.2d, #61
-; CHECK-GI-NEXT:    orr v0.16b, v1.16b, v0.16b
+; CHECK-GI-NEXT:    adrp x8, .LCPI113_1
+; CHECK-GI-NEXT:    ldr q1, [x8, :lo12:.LCPI113_1]
+; CHECK-GI-NEXT:    adrp x8, .LCPI113_0
+; CHECK-GI-NEXT:    ldr q2, [x8, :lo12:.LCPI113_0]
+; CHECK-GI-NEXT:    neg v1.2d, v1.2d
+; CHECK-GI-NEXT:    and v1.16b, v1.16b, v2.16b
+; CHECK-GI-NEXT:    ushr v2.2d, v0.2d, #3
+; CHECK-GI-NEXT:    ushl v0.2d, v0.2d, v1.2d
+; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <2 x i64> @llvm.fshr(<2 x i64> %a, <2 x i64> %a, <2 x i64> <i64 3, i64 3>)
@@ -3812,12 +3974,19 @@ define <4 x i64> @rotl_v4i64_c(<4 x i64> %a) {
 ;
 ; CHECK-GI-LABEL: rotl_v4i64_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    shl v2.2d, v0.2d, #3
-; CHECK-GI-NEXT:    shl v3.2d, v1.2d, #3
-; CHECK-GI-NEXT:    ushr v0.2d, v0.2d, #61
-; CHECK-GI-NEXT:    ushr v1.2d, v1.2d, #61
-; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
-; CHECK-GI-NEXT:    orr v1.16b, v3.16b, v1.16b
+; CHECK-GI-NEXT:    adrp x8, .LCPI114_1
+; CHECK-GI-NEXT:    shl v4.2d, v1.2d, #3
+; CHECK-GI-NEXT:    ldr q2, [x8, :lo12:.LCPI114_1]
+; CHECK-GI-NEXT:    adrp x8, .LCPI114_0
+; CHECK-GI-NEXT:    ldr q3, [x8, :lo12:.LCPI114_0]
+; CHECK-GI-NEXT:    neg v2.2d, v2.2d
+; CHECK-GI-NEXT:    and v2.16b, v2.16b, v3.16b
+; CHECK-GI-NEXT:    shl v3.2d, v0.2d, #3
+; CHECK-GI-NEXT:    neg v2.2d, v2.2d
+; CHECK-GI-NEXT:    ushl v0.2d, v0.2d, v2.2d
+; CHECK-GI-NEXT:    ushl v1.2d, v1.2d, v2.2d
+; CHECK-GI-NEXT:    orr v0.16b, v3.16b, v0.16b
+; CHECK-GI-NEXT:    orr v1.16b, v4.16b, v1.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <4 x i64> @llvm.fshl(<4 x i64> %a, <4 x i64> %a, <4 x i64> <i64 3, i64 3, i64 3, i64 3>)
@@ -3837,12 +4006,18 @@ define <4 x i64> @rotr_v4i64_c(<4 x i64> %a) {
 ;
 ; CHECK-GI-LABEL: rotr_v4i64_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    ushr v2.2d, v0.2d, #3
-; CHECK-GI-NEXT:    ushr v3.2d, v1.2d, #3
-; CHECK-GI-NEXT:    shl v0.2d, v0.2d, #61
-; CHECK-GI-NEXT:    shl v1.2d, v1.2d, #61
-; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
-; CHECK-GI-NEXT:    orr v1.16b, v3.16b, v1.16b
+; CHECK-GI-NEXT:    adrp x8, .LCPI115_1
+; CHECK-GI-NEXT:    ushr v4.2d, v1.2d, #3
+; CHECK-GI-NEXT:    ldr q2, [x8, :lo12:.LCPI115_1]
+; CHECK-GI-NEXT:    adrp x8, .LCPI115_0
+; CHECK-GI-NEXT:    ldr q3, [x8, :lo12:.LCPI115_0]
+; CHECK-GI-NEXT:    neg v2.2d, v2.2d
+; CHECK-GI-NEXT:    and v2.16b, v2.16b, v3.16b
+; CHECK-GI-NEXT:    ushr v3.2d, v0.2d, #3
+; CHECK-GI-NEXT:    ushl v0.2d, v0.2d, v2.2d
+; CHECK-GI-NEXT:    ushl v1.2d, v1.2d, v2.2d
+; CHECK-GI-NEXT:    orr v0.16b, v3.16b, v0.16b
+; CHECK-GI-NEXT:    orr v1.16b, v4.16b, v1.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <4 x i64> @llvm.fshr(<4 x i64> %a, <4 x i64> %a, <4 x i64> <i64 3, i64 3, i64 3, i64 3>)
@@ -3913,8 +4088,13 @@ define <8 x i8> @fshl_v8i8_c(<8 x i8> %a, <8 x i8> %b) {
 ;
 ; CHECK-GI-LABEL: fshl_v8i8_c:
 ; CHECK-GI:       // %bb.0: // %entry
+; CHECK-GI-NEXT:    movi v2.8b, #3
+; CHECK-GI-NEXT:    movi v3.8b, #8
 ; CHECK-GI-NEXT:    shl v0.8b, v0.8b, #3
-; CHECK-GI-NEXT:    usra v0.8b, v1.8b, #5
+; CHECK-GI-NEXT:    sub v2.8b, v3.8b, v2.8b
+; CHECK-GI-NEXT:    neg v2.8b, v2.8b
+; CHECK-GI-NEXT:    ushl v1.8b, v1.8b, v2.8b
+; CHECK-GI-NEXT:    orr v0.8b, v0.8b, v1.8b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <8 x i8> @llvm.fshl(<8 x i8> %a, <8 x i8> %b, <8 x i8> <i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3>)
@@ -3930,7 +4110,10 @@ define <8 x i8> @fshr_v8i8_c(<8 x i8> %a, <8 x i8> %b) {
 ;
 ; CHECK-GI-LABEL: fshr_v8i8_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    shl v0.8b, v0.8b, #5
+; CHECK-GI-NEXT:    movi v2.8b, #3
+; CHECK-GI-NEXT:    movi v3.8b, #8
+; CHECK-GI-NEXT:    sub v2.8b, v3.8b, v2.8b
+; CHECK-GI-NEXT:    ushl v0.8b, v0.8b, v2.8b
 ; CHECK-GI-NEXT:    usra v0.8b, v1.8b, #3
 ; CHECK-GI-NEXT:    ret
 entry:
@@ -3947,8 +4130,13 @@ define <16 x i8> @fshl_v16i8_c(<16 x i8> %a, <16 x i8> %b) {
 ;
 ; CHECK-GI-LABEL: fshl_v16i8_c:
 ; CHECK-GI:       // %bb.0: // %entry
+; CHECK-GI-NEXT:    movi v2.16b, #3
+; CHECK-GI-NEXT:    movi v3.16b, #8
 ; CHECK-GI-NEXT:    shl v0.16b, v0.16b, #3
-; CHECK-GI-NEXT:    usra v0.16b, v1.16b, #5
+; CHECK-GI-NEXT:    sub v2.16b, v3.16b, v2.16b
+; CHECK-GI-NEXT:    neg v2.16b, v2.16b
+; CHECK-GI-NEXT:    ushl v1.16b, v1.16b, v2.16b
+; CHECK-GI-NEXT:    orr v0.16b, v0.16b, v1.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <16 x i8> @llvm.fshl(<16 x i8> %a, <16 x i8> %b, <16 x i8> <i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3>)
@@ -3964,7 +4152,10 @@ define <16 x i8> @fshr_v16i8_c(<16 x i8> %a, <16 x i8> %b) {
 ;
 ; CHECK-GI-LABEL: fshr_v16i8_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    shl v0.16b, v0.16b, #5
+; CHECK-GI-NEXT:    movi v2.16b, #3
+; CHECK-GI-NEXT:    movi v3.16b, #8
+; CHECK-GI-NEXT:    sub v2.16b, v3.16b, v2.16b
+; CHECK-GI-NEXT:    ushl v0.16b, v0.16b, v2.16b
 ; CHECK-GI-NEXT:    usra v0.16b, v1.16b, #3
 ; CHECK-GI-NEXT:    ret
 entry:
@@ -3981,8 +4172,13 @@ define <4 x i16> @fshl_v4i16_c(<4 x i16> %a, <4 x i16> %b) {
 ;
 ; CHECK-GI-LABEL: fshl_v4i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
+; CHECK-GI-NEXT:    movi v2.4h, #3
+; CHECK-GI-NEXT:    movi v3.4h, #16
 ; CHECK-GI-NEXT:    shl v0.4h, v0.4h, #3
-; CHECK-GI-NEXT:    usra v0.4h, v1.4h, #13
+; CHECK-GI-NEXT:    sub v2.4h, v3.4h, v2.4h
+; CHECK-GI-NEXT:    neg v2.4h, v2.4h
+; CHECK-GI-NEXT:    ushl v1.4h, v1.4h, v2.4h
+; CHECK-GI-NEXT:    orr v0.8b, v0.8b, v1.8b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <4 x i16> @llvm.fshl(<4 x i16> %a, <4 x i16> %b, <4 x i16> <i16 3, i16 3, i16 3, i16 3>)
@@ -3998,7 +4194,10 @@ define <4 x i16> @fshr_v4i16_c(<4 x i16> %a, <4 x i16> %b) {
 ;
 ; CHECK-GI-LABEL: fshr_v4i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    shl v0.4h, v0.4h, #13
+; CHECK-GI-NEXT:    movi v2.4h, #3
+; CHECK-GI-NEXT:    movi v3.4h, #16
+; CHECK-GI-NEXT:    sub v2.4h, v3.4h, v2.4h
+; CHECK-GI-NEXT:    ushl v0.4h, v0.4h, v2.4h
 ; CHECK-GI-NEXT:    usra v0.4h, v1.4h, #3
 ; CHECK-GI-NEXT:    ret
 entry:
@@ -4020,24 +4219,25 @@ define <7 x i16> @fshl_v7i16_c(<7 x i16> %a, <7 x i16> %b) {
 ;
 ; CHECK-GI-LABEL: fshl_v7i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    mov w8, #13 // =0xd
-; CHECK-GI-NEXT:    mov w9, #3 // =0x3
-; CHECK-GI-NEXT:    fmov s2, w8
-; CHECK-GI-NEXT:    fmov s3, w9
-; CHECK-GI-NEXT:    mov v2.h[1], w8
-; CHECK-GI-NEXT:    mov v3.h[1], w9
-; CHECK-GI-NEXT:    mov v2.h[2], w8
-; CHECK-GI-NEXT:    mov v3.h[2], w9
-; CHECK-GI-NEXT:    mov v2.h[3], w8
-; CHECK-GI-NEXT:    mov v3.h[3], w9
-; CHECK-GI-NEXT:    mov v2.h[4], w8
-; CHECK-GI-NEXT:    mov v3.h[4], w9
-; CHECK-GI-NEXT:    mov v2.h[5], w8
-; CHECK-GI-NEXT:    mov v3.h[5], w9
-; CHECK-GI-NEXT:    mov v2.h[6], w8
-; CHECK-GI-NEXT:    mov v3.h[6], w9
-; CHECK-GI-NEXT:    neg v2.8h, v2.8h
+; CHECK-GI-NEXT:    mov w8, #3 // =0x3
+; CHECK-GI-NEXT:    mov w9, #16 // =0x10
+; CHECK-GI-NEXT:    fmov s2, w9
+; CHECK-GI-NEXT:    fmov s3, w8
+; CHECK-GI-NEXT:    mov v2.h[1], w9
+; CHECK-GI-NEXT:    mov v3.h[1], w8
+; CHECK-GI-NEXT:    mov v2.h[2], w9
+; CHECK-GI-NEXT:    mov v3.h[2], w8
+; CHECK-GI-NEXT:    mov v2.h[3], w9
+; CHECK-GI-NEXT:    mov v3.h[3], w8
+; CHECK-GI-NEXT:    mov v2.h[4], w9
+; CHECK-GI-NEXT:    mov v3.h[4], w8
+; CHECK-GI-NEXT:    mov v2.h[5], w9
+; CHECK-GI-NEXT:    mov v3.h[5], w8
+; CHECK-GI-NEXT:    mov v2.h[6], w9
+; CHECK-GI-NEXT:    mov v3.h[6], w8
+; CHECK-GI-NEXT:    sub v2.8h, v2.8h, v3.8h
 ; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v3.8h
+; CHECK-GI-NEXT:    neg v2.8h, v2.8h
 ; CHECK-GI-NEXT:    ushl v1.8h, v1.8h, v2.8h
 ; CHECK-GI-NEXT:    orr v0.16b, v0.16b, v1.16b
 ; CHECK-GI-NEXT:    ret
@@ -4061,24 +4261,25 @@ define <7 x i16> @fshr_v7i16_c(<7 x i16> %a, <7 x i16> %b) {
 ; CHECK-GI-LABEL: fshr_v7i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
 ; CHECK-GI-NEXT:    mov w8, #3 // =0x3
-; CHECK-GI-NEXT:    mov w9, #13 // =0xd
-; CHECK-GI-NEXT:    fmov s2, w8
-; CHECK-GI-NEXT:    fmov s3, w9
-; CHECK-GI-NEXT:    mov v2.h[1], w8
-; CHECK-GI-NEXT:    mov v3.h[1], w9
-; CHECK-GI-NEXT:    mov v2.h[2], w8
-; CHECK-GI-NEXT:    mov v3.h[2], w9
-; CHECK-GI-NEXT:    mov v2.h[3], w8
-; CHECK-GI-NEXT:    mov v3.h[3], w9
-; CHECK-GI-NEXT:    mov v2.h[4], w8
-; CHECK-GI-NEXT:    mov v3.h[4], w9
-; CHECK-GI-NEXT:    mov v2.h[5], w8
-; CHECK-GI-NEXT:    mov v3.h[5], w9
-; CHECK-GI-NEXT:    mov v2.h[6], w8
-; CHECK-GI-NEXT:    mov v3.h[6], w9
-; CHECK-GI-NEXT:    neg v2.8h, v2.8h
-; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v3.8h
-; CHECK-GI-NEXT:    ushl v1.8h, v1.8h, v2.8h
+; CHECK-GI-NEXT:    mov w9, #16 // =0x10
+; CHECK-GI-NEXT:    fmov s2, w9
+; CHECK-GI-NEXT:    fmov s3, w8
+; CHECK-GI-NEXT:    mov v2.h[1], w9
+; CHECK-GI-NEXT:    mov v3.h[1], w8
+; CHECK-GI-NEXT:    mov v2.h[2], w9
+; CHECK-GI-NEXT:    mov v3.h[2], w8
+; CHECK-GI-NEXT:    mov v2.h[3], w9
+; CHECK-GI-NEXT:    mov v3.h[3], w8
+; CHECK-GI-NEXT:    mov v2.h[4], w9
+; CHECK-GI-NEXT:    mov v3.h[4], w8
+; CHECK-GI-NEXT:    mov v2.h[5], w9
+; CHECK-GI-NEXT:    mov v3.h[5], w8
+; CHECK-GI-NEXT:    mov v2.h[6], w9
+; CHECK-GI-NEXT:    mov v3.h[6], w8
+; CHECK-GI-NEXT:    sub v2.8h, v2.8h, v3.8h
+; CHECK-GI-NEXT:    neg v3.8h, v3.8h
+; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v2.8h
+; CHECK-GI-NEXT:    ushl v1.8h, v1.8h, v3.8h
 ; CHECK-GI-NEXT:    orr v0.16b, v0.16b, v1.16b
 ; CHECK-GI-NEXT:    ret
 entry:
@@ -4095,8 +4296,13 @@ define <8 x i16> @fshl_v8i16_c(<8 x i16> %a, <8 x i16> %b) {
 ;
 ; CHECK-GI-LABEL: fshl_v8i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
+; CHECK-GI-NEXT:    movi v2.8h, #3
+; CHECK-GI-NEXT:    movi v3.8h, #16
 ; CHECK-GI-NEXT:    shl v0.8h, v0.8h, #3
-; CHECK-GI-NEXT:    usra v0.8h, v1.8h, #13
+; CHECK-GI-NEXT:    sub v2.8h, v3.8h, v2.8h
+; CHECK-GI-NEXT:    neg v2.8h, v2.8h
+; CHECK-GI-NEXT:    ushl v1.8h, v1.8h, v2.8h
+; CHECK-GI-NEXT:    orr v0.16b, v0.16b, v1.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <8 x i16> @llvm.fshl(<8 x i16> %a, <8 x i16> %b, <8 x i16> <i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3>)
@@ -4112,7 +4318,10 @@ define <8 x i16> @fshr_v8i16_c(<8 x i16> %a, <8 x i16> %b) {
 ;
 ; CHECK-GI-LABEL: fshr_v8i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    shl v0.8h, v0.8h, #13
+; CHECK-GI-NEXT:    movi v2.8h, #3
+; CHECK-GI-NEXT:    movi v3.8h, #16
+; CHECK-GI-NEXT:    sub v2.8h, v3.8h, v2.8h
+; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v2.8h
 ; CHECK-GI-NEXT:    usra v0.8h, v1.8h, #3
 ; CHECK-GI-NEXT:    ret
 entry:
@@ -4131,10 +4340,16 @@ define <16 x i16> @fshl_v16i16_c(<16 x i16> %a, <16 x i16> %b) {
 ;
 ; CHECK-GI-LABEL: fshl_v16i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
+; CHECK-GI-NEXT:    movi v4.8h, #3
+; CHECK-GI-NEXT:    movi v5.8h, #16
 ; CHECK-GI-NEXT:    shl v0.8h, v0.8h, #3
 ; CHECK-GI-NEXT:    shl v1.8h, v1.8h, #3
-; CHECK-GI-NEXT:    usra v0.8h, v2.8h, #13
-; CHECK-GI-NEXT:    usra v1.8h, v3.8h, #13
+; CHECK-GI-NEXT:    sub v4.8h, v5.8h, v4.8h
+; CHECK-GI-NEXT:    neg v4.8h, v4.8h
+; CHECK-GI-NEXT:    ushl v2.8h, v2.8h, v4.8h
+; CHECK-GI-NEXT:    ushl v3.8h, v3.8h, v4.8h
+; CHECK-GI-NEXT:    orr v0.16b, v0.16b, v2.16b
+; CHECK-GI-NEXT:    orr v1.16b, v1.16b, v3.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <16 x i16> @llvm.fshl(<16 x i16> %a, <16 x i16> %b, <16 x i16> <i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3>)
@@ -4152,8 +4367,11 @@ define <16 x i16> @fshr_v16i16_c(<16 x i16> %a, <16 x i16> %b) {
 ;
 ; CHECK-GI-LABEL: fshr_v16i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    shl v0.8h, v0.8h, #13
-; CHECK-GI-NEXT:    shl v1.8h, v1.8h, #13
+; CHECK-GI-NEXT:    movi v4.8h, #3
+; CHECK-GI-NEXT:    movi v5.8h, #16
+; CHECK-GI-NEXT:    sub v4.8h, v5.8h, v4.8h
+; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v4.8h
+; CHECK-GI-NEXT:    ushl v1.8h, v1.8h, v4.8h
 ; CHECK-GI-NEXT:    usra v0.8h, v2.8h, #3
 ; CHECK-GI-NEXT:    usra v1.8h, v3.8h, #3
 ; CHECK-GI-NEXT:    ret
@@ -4163,44 +4381,84 @@ entry:
 }
 
 define <2 x i32> @fshl_v2i32_c(<2 x i32> %a, <2 x i32> %b) {
-; CHECK-LABEL: fshl_v2i32_c:
-; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    shl v0.2s, v0.2s, #3
-; CHECK-NEXT:    usra v0.2s, v1.2s, #29
-; CHECK-NEXT:    ret
+; CHECK-SD-LABEL: fshl_v2i32_c:
+; CHECK-SD:       // %bb.0: // %entry
+; CHECK-SD-NEXT:    shl v0.2s, v0.2s, #3
+; CHECK-SD-NEXT:    usra v0.2s, v1.2s, #29
+; CHECK-SD-NEXT:    ret
+;
+; CHECK-GI-LABEL: fshl_v2i32_c:
+; CHECK-GI:       // %bb.0: // %entry
+; CHECK-GI-NEXT:    movi v2.2s, #3
+; CHECK-GI-NEXT:    movi v3.2s, #32
+; CHECK-GI-NEXT:    shl v0.2s, v0.2s, #3
+; CHECK-GI-NEXT:    sub v2.2s, v3.2s, v2.2s
+; CHECK-GI-NEXT:    neg v2.2s, v2.2s
+; CHECK-GI-NEXT:    ushl v1.2s, v1.2s, v2.2s
+; CHECK-GI-NEXT:    orr v0.8b, v0.8b, v1.8b
+; CHECK-GI-NEXT:    ret
 entry:
   %d = call <2 x i32> @llvm.fshl(<2 x i32> %a, <2 x i32> %b, <2 x i32> <i32 3, i32 3>)
   ret <2 x i32> %d
 }
 
 define <2 x i32> @fshr_v2i32_c(<2 x i32> %a, <2 x i32> %b) {
-; CHECK-LABEL: fshr_v2i32_c:
-; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    shl v0.2s, v0.2s, #29
-; CHECK-NEXT:    usra v0.2s, v1.2s, #3
-; CHECK-NEXT:    ret
+; CHECK-SD-LABEL: fshr_v2i32_c:
+; CHECK-SD:       // %bb.0: // %entry
+; CHECK-SD-NEXT:    shl v0.2s, v0.2s, #29
+; CHECK-SD-NEXT:    usra v0.2s, v1.2s, #3
+; CHECK-SD-NEXT:    ret
+;
+; CHECK-GI-LABEL: fshr_v2i32_c:
+; CHECK-GI:       // %bb.0: // %entry
+; CHECK-GI-NEXT:    movi v2.2s, #3
+; CHECK-GI-NEXT:    movi v3.2s, #32
+; CHECK-GI-NEXT:    sub v2.2s, v3.2s, v2.2s
+; CHECK-GI-NEXT:    ushl v0.2s, v0.2s, v2.2s
+; CHECK-GI-NEXT:    usra v0.2s, v1.2s, #3
+; CHECK-GI-NEXT:    ret
 entry:
   %d = call <2 x i32> @llvm.fshr(<2 x i32> %a, <2 x i32> %b, <2 x i32> <i32 3, i32 3>)
   ret <2 x i32> %d
 }
 
 define <4 x i32> @fshl_v4i32_c(<4 x i32> %a, <4 x i32> %b) {
-; CHECK-LABEL: fshl_v4i32_c:
-; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    shl v0.4s, v0.4s, #3
-; CHECK-NEXT:    usra v0.4s, v1.4s, #29
-; CHECK-NEXT:    ret
+; CHECK-SD-LABEL: fshl_v4i32_c:
+; CHECK-SD:       // %bb.0: // %entry
+; CHECK-SD-NEXT:    shl v0.4s, v0.4s, #3
+; CHECK-SD-NEXT:    usra v0.4s, v1.4s, #29
+; CHECK-SD-NEXT:    ret
+;
+; CHECK-GI-LABEL: fshl_v4i32_c:
+; CHECK-GI:       // %bb.0: // %entry
+; CHECK-GI-NEXT:    movi v2.4s, #3
+; CHECK-GI-NEXT:    movi v3.4s, #32
+; CHECK-GI-NEXT:    shl v0.4s, v0.4s, #3
+; CHECK-GI-NEXT:    sub v2.4s, v3.4s, v2.4s
+; CHECK-GI-NEXT:    neg v2.4s, v2.4s
+; CHECK-GI-NEXT:    ushl v1.4s, v1.4s, v2.4s
+; CHECK-GI-NEXT:    orr v0.16b, v0.16b, v1.16b
+; CHECK-GI-NEXT:    ret
 entry:
   %d = call <4 x i32> @llvm.fshl(<4 x i32> %a, <4 x i32> %b, <4 x i32> <i32 3, i32 3, i32 3, i32 3>)
   ret <4 x i32> %d
 }
 
 define <4 x i32> @fshr_v4i32_c(<4 x i32> %a, <4 x i32> %b) {
-; CHECK-LABEL: fshr_v4i32_c:
-; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    shl v0.4s, v0.4s, #29
-; CHECK-NEXT:    usra v0.4s, v1.4s, #3
-; CHECK-NEXT:    ret
+; CHECK-SD-LABEL: fshr_v4i32_c:
+; CHECK-SD:       // %bb.0: // %entry
+; CHECK-SD-NEXT:    shl v0.4s, v0.4s, #29
+; CHECK-SD-NEXT:    usra v0.4s, v1.4s, #3
+; CHECK-SD-NEXT:    ret
+;
+; CHECK-GI-LABEL: fshr_v4i32_c:
+; CHECK-GI:       // %bb.0: // %entry
+; CHECK-GI-NEXT:    movi v2.4s, #3
+; CHECK-GI-NEXT:    movi v3.4s, #32
+; CHECK-GI-NEXT:    sub v2.4s, v3.4s, v2.4s
+; CHECK-GI-NEXT:    ushl v0.4s, v0.4s, v2.4s
+; CHECK-GI-NEXT:    usra v0.4s, v1.4s, #3
+; CHECK-GI-NEXT:    ret
 entry:
   %d = call <4 x i32> @llvm.fshr(<4 x i32> %a, <4 x i32> %b, <4 x i32> <i32 3, i32 3, i32 3, i32 3>)
   ret <4 x i32> %d
@@ -4248,46 +4506,55 @@ define <7 x i32> @fshl_v7i32_c(<7 x i32> %a, <7 x i32> %b) {
 ;
 ; CHECK-GI-LABEL: fshl_v7i32_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    mov v0.s[0], w0
-; CHECK-GI-NEXT:    mov w8, #29 // =0x1d
-; CHECK-GI-NEXT:    mov v2.s[0], w7
+; CHECK-GI-NEXT:    mov w8, #3 // =0x3
+; CHECK-GI-NEXT:    mov w9, #32 // =0x20
+; CHECK-GI-NEXT:    mov v2.s[0], w0
+; CHECK-GI-NEXT:    mov v0.s[0], w9
 ; CHECK-GI-NEXT:    mov v1.s[0], w8
-; CHECK-GI-NEXT:    mov w9, #3 // =0x3
-; CHECK-GI-NEXT:    mov v4.s[0], w4
-; CHECK-GI-NEXT:    mov v5.s[0], w9
-; CHECK-GI-NEXT:    ldr s3, [sp]
-; CHECK-GI-NEXT:    ldr s6, [sp, #24]
-; CHECK-GI-NEXT:    ldr s7, [sp, #32]
-; CHECK-GI-NEXT:    mov v0.s[1], w1
-; CHECK-GI-NEXT:    mov v2.s[1], v3.s[0]
-; CHECK-GI-NEXT:    ldr s3, [sp, #8]
+; CHECK-GI-NEXT:    mov v3.s[0], w7
+; CHECK-GI-NEXT:    ldr s4, [sp]
+; CHECK-GI-NEXT:    mov v5.s[0], w4
+; CHECK-GI-NEXT:    mov v6.s[0], w8
+; CHECK-GI-NEXT:    ldr s7, [sp, #24]
+; CHECK-GI-NEXT:    movi v16.4s, #32
+; CHECK-GI-NEXT:    movi v17.4s, #3
+; CHECK-GI-NEXT:    mov v2.s[1], w1
+; CHECK-GI-NEXT:    ldr s18, [sp, #32]
+; CHECK-GI-NEXT:    mov v0.s[1], w9
 ; CHECK-GI-NEXT:    mov v1.s[1], w8
-; CHECK-GI-NEXT:    mov v6.s[1], v7.s[0]
-; CHECK-GI-NEXT:    mov v4.s[1], w5
-; CHECK-GI-NEXT:    mov v5.s[1], w9
-; CHECK-GI-NEXT:    ldr s7, [sp, #40]
-; CHECK-GI-NEXT:    mov v0.s[2], w2
-; CHECK-GI-NEXT:    mov v2.s[2], v3.s[0]
-; CHECK-GI-NEXT:    ldr s3, [sp, #16]
+; CHECK-GI-NEXT:    mov v3.s[1], v4.s[0]
+; CHECK-GI-NEXT:    ldr s4, [sp, #8]
+; CHECK-GI-NEXT:    mov v5.s[1], w5
+; CHECK-GI-NEXT:    mov v6.s[1], w8
+; CHECK-GI-NEXT:    mov v7.s[1], v18.s[0]
+; CHECK-GI-NEXT:    sub v16.4s, v16.4s, v17.4s
+; CHECK-GI-NEXT:    ldr s17, [sp, #40]
+; CHECK-GI-NEXT:    mov v2.s[2], w2
+; CHECK-GI-NEXT:    mov v0.s[2], w9
 ; CHECK-GI-NEXT:    mov v1.s[2], w8
-; CHECK-GI-NEXT:    mov v6.s[2], v7.s[0]
-; CHECK-GI-NEXT:    mov v4.s[2], w6
-; CHECK-GI-NEXT:    mov v5.s[2], w9
-; CHECK-GI-NEXT:    mov v0.s[3], w3
-; CHECK-GI-NEXT:    mov v2.s[3], v3.s[0]
-; CHECK-GI-NEXT:    neg v1.4s, v1.4s
-; CHECK-GI-NEXT:    ushl v3.4s, v4.4s, v5.4s
-; CHECK-GI-NEXT:    shl v0.4s, v0.4s, #3
-; CHECK-GI-NEXT:    ushl v1.4s, v6.4s, v1.4s
-; CHECK-GI-NEXT:    usra v0.4s, v2.4s, #29
-; CHECK-GI-NEXT:    orr v1.16b, v3.16b, v1.16b
-; CHECK-GI-NEXT:    mov s2, v0.s[1]
-; CHECK-GI-NEXT:    mov s3, v0.s[2]
-; CHECK-GI-NEXT:    mov s4, v0.s[3]
-; CHECK-GI-NEXT:    mov s5, v1.s[1]
-; CHECK-GI-NEXT:    mov s6, v1.s[2]
-; CHECK-GI-NEXT:    fmov w0, s0
-; CHECK-GI-NEXT:    fmov w4, s1
+; CHECK-GI-NEXT:    mov v3.s[2], v4.s[0]
+; CHECK-GI-NEXT:    ldr s4, [sp, #16]
+; CHECK-GI-NEXT:    mov v5.s[2], w6
+; CHECK-GI-NEXT:    mov v6.s[2], w8
+; CHECK-GI-NEXT:    mov v7.s[2], v17.s[0]
+; CHECK-GI-NEXT:    mov v2.s[3], w3
+; CHECK-GI-NEXT:    sub v0.4s, v0.4s, v1.4s
+; CHECK-GI-NEXT:    mov v3.s[3], v4.s[0]
+; CHECK-GI-NEXT:    neg v1.4s, v16.4s
+; CHECK-GI-NEXT:    neg v0.4s, v0.4s
+; CHECK-GI-NEXT:    shl v2.4s, v2.4s, #3
+; CHECK-GI-NEXT:    ushl v1.4s, v3.4s, v1.4s
+; CHECK-GI-NEXT:    ushl v3.4s, v5.4s, v6.4s
+; CHECK-GI-NEXT:    ushl v0.4s, v7.4s, v0.4s
+; CHECK-GI-NEXT:    orr v1.16b, v2.16b, v1.16b
+; CHECK-GI-NEXT:    orr v0.16b, v3.16b, v0.16b
+; CHECK-GI-NEXT:    mov s2, v1.s[1]
+; CHECK-GI-NEXT:    mov s3, v1.s[2]
+; CHECK-GI-NEXT:    mov s4, v1.s[3]
+; CHECK-GI-NEXT:    fmov w0, s1
+; CHECK-GI-NEXT:    mov s5, v0.s[1]
+; CHECK-GI-NEXT:    mov s6, v0.s[2]
+; CHECK-GI-NEXT:    fmov w4, s0
 ; CHECK-GI-NEXT:    fmov w1, s2
 ; CHECK-GI-NEXT:    fmov w2, s3
 ; CHECK-GI-NEXT:    fmov w3, s4
@@ -4343,37 +4610,44 @@ define <7 x i32> @fshr_v7i32_c(<7 x i32> %a, <7 x i32> %b) {
 ; CHECK-GI:       // %bb.0: // %entry
 ; CHECK-GI-NEXT:    mov v0.s[0], w0
 ; CHECK-GI-NEXT:    mov w8, #3 // =0x3
-; CHECK-GI-NEXT:    mov v2.s[0], w7
-; CHECK-GI-NEXT:    mov v1.s[0], w8
-; CHECK-GI-NEXT:    mov w9, #29 // =0x1d
-; CHECK-GI-NEXT:    mov v4.s[0], w4
-; CHECK-GI-NEXT:    mov v5.s[0], w9
-; CHECK-GI-NEXT:    ldr s3, [sp]
-; CHECK-GI-NEXT:    ldr s6, [sp, #24]
-; CHECK-GI-NEXT:    ldr s7, [sp, #32]
+; CHECK-GI-NEXT:    mov w9, #32 // =0x20
+; CHECK-GI-NEXT:    mov v1.s[0], w9
+; CHECK-GI-NEXT:    mov v2.s[0], w8
+; CHECK-GI-NEXT:    mov v3.s[0], w8
+; CHECK-GI-NEXT:    mov v4.s[0], w7
+; CHECK-GI-NEXT:    ldr s5, [sp]
+; CHECK-GI-NEXT:    mov v6.s[0], w4
+; CHECK-GI-NEXT:    ldr s7, [sp, #24]
+; CHECK-GI-NEXT:    ldr s16, [sp, #32]
+; CHECK-GI-NEXT:    movi v17.4s, #3
 ; CHECK-GI-NEXT:    mov v0.s[1], w1
-; CHECK-GI-NEXT:    mov v2.s[1], v3.s[0]
-; CHECK-GI-NEXT:    ldr s3, [sp, #8]
-; CHECK-GI-NEXT:    mov v1.s[1], w8
-; CHECK-GI-NEXT:    mov v6.s[1], v7.s[0]
-; CHECK-GI-NEXT:    mov v4.s[1], w5
-; CHECK-GI-NEXT:    mov v5.s[1], w9
-; CHECK-GI-NEXT:    ldr s7, [sp, #40]
+; CHECK-GI-NEXT:    mov v1.s[1], w9
+; CHECK-GI-NEXT:    mov v2.s[1], w8
+; CHECK-GI-NEXT:    mov v3.s[1], w8
+; CHECK-GI-NEXT:    mov v4.s[1], v5.s[0]
+; CHECK-GI-NEXT:    mov v7.s[1], v16.s[0]
+; CHECK-GI-NEXT:    ldr s16, [sp, #8]
+; CHECK-GI-NEXT:    mov v6.s[1], w5
+; CHECK-GI-NEXT:    movi v5.4s, #32
 ; CHECK-GI-NEXT:    mov v0.s[2], w2
-; CHECK-GI-NEXT:    mov v2.s[2], v3.s[0]
-; CHECK-GI-NEXT:    ldr s3, [sp, #16]
-; CHECK-GI-NEXT:    mov v1.s[2], w8
-; CHECK-GI-NEXT:    mov v6.s[2], v7.s[0]
-; CHECK-GI-NEXT:    mov v4.s[2], w6
-; CHECK-GI-NEXT:    mov v5.s[2], w9
+; CHECK-GI-NEXT:    mov v1.s[2], w9
+; CHECK-GI-NEXT:    mov v2.s[2], w8
+; CHECK-GI-NEXT:    mov v3.s[2], w8
+; CHECK-GI-NEXT:    mov v4.s[2], v16.s[0]
+; CHECK-GI-NEXT:    ldr s16, [sp, #40]
+; CHECK-GI-NEXT:    mov v6.s[2], w6
+; CHECK-GI-NEXT:    sub v5.4s, v5.4s, v17.4s
 ; CHECK-GI-NEXT:    mov v0.s[3], w3
-; CHECK-GI-NEXT:    mov v2.s[3], v3.s[0]
-; CHECK-GI-NEXT:    neg v1.4s, v1.4s
-; CHECK-GI-NEXT:    ushl v3.4s, v4.4s, v5.4s
-; CHECK-GI-NEXT:    shl v0.4s, v0.4s, #29
+; CHECK-GI-NEXT:    mov v7.s[2], v16.s[0]
+; CHECK-GI-NEXT:    ldr s16, [sp, #16]
+; CHECK-GI-NEXT:    sub v1.4s, v1.4s, v2.4s
+; CHECK-GI-NEXT:    neg v2.4s, v3.4s
+; CHECK-GI-NEXT:    mov v4.s[3], v16.s[0]
+; CHECK-GI-NEXT:    ushl v0.4s, v0.4s, v5.4s
 ; CHECK-GI-NEXT:    ushl v1.4s, v6.4s, v1.4s
-; CHECK-GI-NEXT:    usra v0.4s, v2.4s, #3
-; CHECK-GI-NEXT:    orr v1.16b, v3.16b, v1.16b
+; CHECK-GI-NEXT:    ushl v2.4s, v7.4s, v2.4s
+; CHECK-GI-NEXT:    usra v0.4s, v4.4s, #3
+; CHECK-GI-NEXT:    orr v1.16b, v1.16b, v2.16b
 ; CHECK-GI-NEXT:    mov s2, v0.s[1]
 ; CHECK-GI-NEXT:    mov s3, v0.s[2]
 ; CHECK-GI-NEXT:    mov s4, v0.s[3]
@@ -4403,10 +4677,16 @@ define <8 x i32> @fshl_v8i32_c(<8 x i32> %a, <8 x i32> %b) {
 ;
 ; CHECK-GI-LABEL: fshl_v8i32_c:
 ; CHECK-GI:       // %bb.0: // %entry
+; CHECK-GI-NEXT:    movi v4.4s, #3
+; CHECK-GI-NEXT:    movi v5.4s, #32
 ; CHECK-GI-NEXT:    shl v0.4s, v0.4s, #3
 ; CHECK-GI-NEXT:    shl v1.4s, v1.4s, #3
-; CHECK-GI-NEXT:    usra v0.4s, v2.4s, #29
-; CHECK-GI-NEXT:    usra v1.4s, v3.4s, #29
+; CHECK-GI-NEXT:    sub v4.4s, v5.4s, v4.4s
+; CHECK-GI-NEXT:    neg v4.4s, v4.4s
+; CHECK-GI-NEXT:    ushl v2.4s, v2.4s, v4.4s
+; CHECK-GI-NEXT:    ushl v3.4s, v3.4s, v4.4s
+; CHECK-GI-NEXT:    orr v0.16b, v0.16b, v2.16b
+; CHECK-GI-NEXT:    orr v1.16b, v1.16b, v3.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <8 x i32> @llvm.fshl(<8 x i32> %a, <8 x i32> %b, <8 x i32> <i32 3, i32 3, i32 3, i32 3, i32 3, i32 3, i32 3, i32 3>)
@@ -4424,8 +4704,11 @@ define <8 x i32> @fshr_v8i32_c(<8 x i32> %a, <8 x i32> %b) {
 ;
 ; CHECK-GI-LABEL: fshr_v8i32_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    shl v0.4s, v0.4s, #29
-; CHECK-GI-NEXT:    shl v1.4s, v1.4s, #29
+; CHECK-GI-NEXT:    movi v4.4s, #3
+; CHECK-GI-NEXT:    movi v5.4s, #32
+; CHECK-GI-NEXT:    sub v4.4s, v5.4s, v4.4s
+; CHECK-GI-NEXT:    ushl v0.4s, v0.4s, v4.4s
+; CHECK-GI-NEXT:    ushl v1.4s, v1.4s, v4.4s
 ; CHECK-GI-NEXT:    usra v0.4s, v2.4s, #3
 ; CHECK-GI-NEXT:    usra v1.4s, v3.4s, #3
 ; CHECK-GI-NEXT:    ret
@@ -4435,22 +4718,46 @@ entry:
 }
 
 define <2 x i64> @fshl_v2i64_c(<2 x i64> %a, <2 x i64> %b) {
-; CHECK-LABEL: fshl_v2i64_c:
-; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    shl v0.2d, v0.2d, #3
-; CHECK-NEXT:    usra v0.2d, v1.2d, #61
-; CHECK-NEXT:    ret
+; CHECK-SD-LABEL: fshl_v2i64_c:
+; CHECK-SD:       // %bb.0: // %entry
+; CHECK-SD-NEXT:    shl v0.2d, v0.2d, #3
+; CHECK-SD-NEXT:    usra v0.2d, v1.2d, #61
+; CHECK-SD-NEXT:    ret
+;
+; CHECK-GI-LABEL: fshl_v2i64_c:
+; CHECK-GI:       // %bb.0: // %entry
+; CHECK-GI-NEXT:    adrp x8, .LCPI138_1
+; CHECK-GI-NEXT:    adrp x9, .LCPI138_0
+; CHECK-GI-NEXT:    shl v0.2d, v0.2d, #3
+; CHECK-GI-NEXT:    ldr q2, [x8, :lo12:.LCPI138_1]
+; CHECK-GI-NEXT:    ldr q3, [x9, :lo12:.LCPI138_0]
+; CHECK-GI-NEXT:    sub v2.2d, v3.2d, v2.2d
+; CHECK-GI-NEXT:    neg v2.2d, v2.2d
+; CHECK-GI-NEXT:    ushl v1.2d, v1.2d, v2.2d
+; CHECK-GI-NEXT:    orr v0.16b, v0.16b, v1.16b
+; CHECK-GI-NEXT:    ret
 entry:
   %d = call <2 x i64> @llvm.fshl(<2 x i64> %a, <2 x i64> %b, <2 x i64> <i64 3, i64 3>)
   ret <2 x i64> %d
 }
 
 define <2 x i64> @fshr_v2i64_c(<2 x i64> %a, <2 x i64> %b) {
-; CHECK-LABEL: fshr_v2i64_c:
-; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    shl v0.2d, v0.2d, #61
-; CHECK-NEXT:    usra v0.2d, v1.2d, #3
-; CHECK-NEXT:    ret
+; CHECK-SD-LABEL: fshr_v2i64_c:
+; CHECK-SD:       // %bb.0: // %entry
+; CHECK-SD-NEXT:    shl v0.2d, v0.2d, #61
+; CHECK-SD-NEXT:    usra v0.2d, v1.2d, #3
+; CHECK-SD-NEXT:    ret
+;
+; CHECK-GI-LABEL: fshr_v2i64_c:
+; CHECK-GI:       // %bb.0: // %entry
+; CHECK-GI-NEXT:    adrp x8, .LCPI139_1
+; CHECK-GI-NEXT:    adrp x9, .LCPI139_0
+; CHECK-GI-NEXT:    ldr q2, [x8, :lo12:.LCPI139_1]
+; CHECK-GI-NEXT:    ldr q3, [x9, :lo12:.LCPI139_0]
+; CHECK-GI-NEXT:    sub v2.2d, v3.2d, v2.2d
+; CHECK-GI-NEXT:    ushl v0.2d, v0.2d, v2.2d
+; CHECK-GI-NEXT:    usra v0.2d, v1.2d, #3
+; CHECK-GI-NEXT:    ret
 entry:
   %d = call <2 x i64> @llvm.fshr(<2 x i64> %a, <2 x i64> %b, <2 x i64> <i64 3, i64 3>)
   ret <2 x i64> %d
@@ -4467,10 +4774,18 @@ define <4 x i64> @fshl_v4i64_c(<4 x i64> %a, <4 x i64> %b) {
 ;
 ; CHECK-GI-LABEL: fshl_v4i64_c:
 ; CHECK-GI:       // %bb.0: // %entry
+; CHECK-GI-NEXT:    adrp x8, .LCPI140_1
+; CHECK-GI-NEXT:    adrp x9, .LCPI140_0
 ; CHECK-GI-NEXT:    shl v0.2d, v0.2d, #3
+; CHECK-GI-NEXT:    ldr q4, [x8, :lo12:.LCPI140_1]
+; CHECK-GI-NEXT:    ldr q5, [x9, :lo12:.LCPI140_0]
 ; CHECK-GI-NEXT:    shl v1.2d, v1.2d, #3
-; CHECK-GI-NEXT:    usra v0.2d, v2.2d, #61
-; CHECK-GI-NEXT:    usra v1.2d, v3.2d, #61
+; CHECK-GI-NEXT:    sub v4.2d, v5.2d, v4.2d
+; CHECK-GI-NEXT:    neg v4.2d, v4.2d
+; CHECK-GI-NEXT:    ushl v2.2d, v2.2d, v4.2d
+; CHECK-GI-NEXT:    ushl v3.2d, v3.2d, v4.2d
+; CHECK-GI-NEXT:    orr v0.16b, v0.16b, v2.16b
+; CHECK-GI-NEXT:    orr v1.16b, v1.16b, v3.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <4 x i64> @llvm.fshl(<4 x i64> %a, <4 x i64> %b, <4 x i64> <i64 3, i64 3, i64 3, i64 3>)
@@ -4488,8 +4803,13 @@ define <4 x i64> @fshr_v4i64_c(<4 x i64> %a, <4 x i64> %b) {
 ;
 ; CHECK-GI-LABEL: fshr_v4i64_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    shl v0.2d, v0.2d, #61
-; CHECK-GI-NEXT:    shl v1.2d, v1.2d, #61
+; CHECK-GI-NEXT:    adrp x8, .LCPI141_1
+; CHECK-GI-NEXT:    adrp x9, .LCPI141_0
+; CHECK-GI-NEXT:    ldr q4, [x8, :lo12:.LCPI141_1]
+; CHECK-GI-NEXT:    ldr q5, [x9, :lo12:.LCPI141_0]
+; CHECK-GI-NEXT:    sub v4.2d, v5.2d, v4.2d
+; CHECK-GI-NEXT:    ushl v0.2d, v0.2d, v4.2d
+; CHECK-GI-NEXT:    ushl v1.2d, v1.2d, v4.2d
 ; CHECK-GI-NEXT:    usra v0.2d, v2.2d, #3
 ; CHECK-GI-NEXT:    usra v1.2d, v3.2d, #3
 ; CHECK-GI-NEXT:    ret
diff --git a/llvm/test/CodeGen/AArch64/funnel-shift.ll b/llvm/test/CodeGen/AArch64/funnel-shift.ll
index e5aa360f804c1..05735e4a01c00 100644
--- a/llvm/test/CodeGen/AArch64/funnel-shift.ll
+++ b/llvm/test/CodeGen/AArch64/funnel-shift.ll
@@ -191,10 +191,20 @@ define i8 @fshl_i8_const_fold_overshift_1() {
 }
 
 define i8 @fshl_i8_const_fold_overshift_2() {
-; CHECK-LABEL: fshl_i8_const_fold_overshift_2:
-; CHECK:       // %bb.0:
-; CHECK-NEXT:    mov w0, #120 // =0x78
-; CHECK-NEXT:    ret
+; CHECK-SD-LABEL: fshl_i8_const_fold_overshift_2:
+; CHECK-SD:       // %bb.0:
+; CHECK-SD-NEXT:    mov w0, #120 // =0x78
+; CHECK-SD-NEXT:    ret
+;
+; CHECK-GI-LABEL: fshl_i8_const_fold_overshift_2:
+; CHECK-GI:       // %bb.0:
+; CHECK-GI-NEXT:    mov x8, xzr
+; CHECK-GI-NEXT:    mov w9, #-11 // =0xfffffff5
+; CHECK-GI-NEXT:    sub x8, x8, w9, uxtb
+; CHECK-GI-NEXT:    mov w9, #15 // =0xf
+; CHECK-GI-NEXT:    and x8, x8, #0x7
+; CHECK-GI-NEXT:    lsl w0, w9, w8
+; CHECK-GI-NEXT:    ret
   %f = call i8 @llvm.fshl.i8(i8 15, i8 15, i8 11)
   ret i8 %f
 }
@@ -355,10 +365,15 @@ define i37 @fshr_i37(i37 %x, i37 %y, i37 %z) {
 
 declare i7 @llvm.fshr.i7(i7, i7, i7)
 define i7 @fshr_i7_const_fold() {
-; CHECK-LABEL: fshr_i7_const_fold:
-; CHECK:       // %bb.0:
-; CHECK-NEXT:    mov w0, #31 // =0x1f
-; CHECK-NEXT:    ret
+; CHECK-SD-LABEL: fshr_i7_const_fold:
+; CHECK-SD:       // %bb.0:
+; CHECK-SD-NEXT:    mov w0, #31 // =0x1f
+; CHECK-SD-NEXT:    ret
+;
+; CHECK-GI-LABEL: fshr_i7_const_fold:
+; CHECK-GI:       // %bb.0:
+; CHECK-GI-NEXT:    mov w0, #3615 // =0xe1f
+; CHECK-GI-NEXT:    ret
   %f = call i7 @llvm.fshr.i7(i7 112, i7 127, i7 2)
   ret i7 %f
 }

>From 4b843dcbb16ede85c70dd84afc47ac6234352767 Mon Sep 17 00:00:00 2001
From: John Stuart <john.stuart.science at gmail.com>
Date: Thu, 20 Mar 2025 22:00:07 +0100
Subject: [PATCH 2/5] less undef in header

---
 .../llvm/CodeGen/GlobalISel/OptMIRBuilder.h    | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/llvm/include/llvm/CodeGen/GlobalISel/OptMIRBuilder.h b/llvm/include/llvm/CodeGen/GlobalISel/OptMIRBuilder.h
index 7df63bef1d9ff..3b15687315b42 100644
--- a/llvm/include/llvm/CodeGen/GlobalISel/OptMIRBuilder.h
+++ b/llvm/include/llvm/CodeGen/GlobalISel/OptMIRBuilder.h
@@ -19,15 +19,15 @@ class LegalizerInfo;
 struct LegalityQuery;
 
 /// OptMIRBuilder optimizes instructions while building them. It
-/// checks its operands whether they are constant or undef. It never
-/// checks whether an operand is defined by G_FSINCOS. It checks
-/// operands and registers for G_IMPLICIT_DEF, G_CONSTANT,
-/// G_BUILD_VECTOR, and G_SPLAT_VECTOR and nothing else.
-/// Based on undef, the constants and their values, it folds
-/// instructions into constants, undef, or other instructions. For
-/// optmizations and constant folding it relies on GIConstant.
-/// It can fold G_MUL into G_ADD and G_SUB. Before folding
-/// it always queries the legalizer. When it fails to fold, it
+/// checks its operands whether they are constant or
+/// G_IMPLICIT_DEF. It never checks whether an operand is defined by
+/// G_FSINCOS. It checks operands and registers for G_IMPLICIT_DEF,
+/// G_CONSTANT, G_BUILD_VECTOR, and G_SPLAT_VECTOR and nothing else.
+/// Based on G_IMPLICIT_DEF, the constants and their values, it folds
+/// instructions into constants, G_IMPLICIT_DEF, or other
+/// instructions. For optimizations and constant folding it relies on
+/// GIConstant.  It can fold G_MUL into G_ADD and G_SUB. Before
+/// folding it always queries the legalizer. When it fails to fold, it
 /// delegates the building to the CSEMIRBuilder. It is the user's
 /// responsibility to only attempt to build legal instructions past
 /// the legalizer. OptMIRBuilder can safely be used in optimization

>From 628c909f7a8387e4318ae4f041ae9a42c48d1f81 Mon Sep 17 00:00:00 2001
From: John Stuart <john.stuart.science at gmail.com>
Date: Fri, 21 Mar 2025 21:33:12 +0100
Subject: [PATCH 3/5] add add,sub,mul back to csemirbuilder

---
 llvm/lib/CodeGen/GlobalISel/CSEMIRBuilder.cpp |    3 +
 .../AArch64/GlobalISel/arm64-irtranslator.ll  |    4 +-
 .../AArch64/GlobalISel/combine-udiv.ll        |   22 +-
 .../GlobalISel/combine-umulh-to-lshr.mir      |   22 +-
 .../AArch64/GlobalISel/legalize-fshl.mir      |   55 +-
 .../AArch64/GlobalISel/legalize-fshr.mir      |   56 +-
 .../CodeGen/AArch64/arm64-neon-mul-div-cte.ll |   20 +-
 llvm/test/CodeGen/AArch64/fsh.ll              | 1108 ++++++-----------
 llvm/test/CodeGen/AArch64/funnel-shift.ll     |   31 +-
 9 files changed, 444 insertions(+), 877 deletions(-)

diff --git a/llvm/lib/CodeGen/GlobalISel/CSEMIRBuilder.cpp b/llvm/lib/CodeGen/GlobalISel/CSEMIRBuilder.cpp
index 23ff50f5296af..bf8e847011d7c 100644
--- a/llvm/lib/CodeGen/GlobalISel/CSEMIRBuilder.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/CSEMIRBuilder.cpp
@@ -199,12 +199,15 @@ MachineInstrBuilder CSEMIRBuilder::buildInstr(unsigned Opc,
     }
     break;
   }
+  case TargetOpcode::G_ADD:
   case TargetOpcode::G_PTR_ADD:
   case TargetOpcode::G_AND:
   case TargetOpcode::G_ASHR:
   case TargetOpcode::G_LSHR:
+  case TargetOpcode::G_MUL:
   case TargetOpcode::G_OR:
   case TargetOpcode::G_SHL:
+  case TargetOpcode::G_SUB:
   case TargetOpcode::G_XOR:
   case TargetOpcode::G_UDIV:
   case TargetOpcode::G_SDIV:
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll b/llvm/test/CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll
index 5da4e8aae8c4b..d457418720f47 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll
@@ -462,9 +462,7 @@ next:
 ; CHECK-LABEL: name: constant_int_start
 ; CHECK: [[TWO:%[0-9]+]]:_(s32) = G_CONSTANT i32 2
 ; CHECK: [[ANSWER:%[0-9]+]]:_(s32) = G_CONSTANT i32 42
-; CHECK: [[ADD:%[0-9]+]]:_(s32) = G_ADD [[TWO]], [[ANSWER]]
-; CHECK: $w0 = COPY [[ADD]]
-; CHECK-NEXT: RET_ReallyLR implicit $w0
+; CHECK: [[RES:%[0-9]+]]:_(s32) = G_CONSTANT i32 44
 define i32 @constant_int_start() {
   %res = add i32 2, 42
   ret i32 %res
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/combine-udiv.ll b/llvm/test/CodeGen/AArch64/GlobalISel/combine-udiv.ll
index 3a34891bc37d3..b681e3b223117 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/combine-udiv.ll
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/combine-udiv.ll
@@ -19,18 +19,13 @@ define <8 x i16> @combine_vec_udiv_uniform(<8 x i16> %x) {
 ; GISEL-LABEL: combine_vec_udiv_uniform:
 ; GISEL:       // %bb.0:
 ; GISEL-NEXT:    adrp x8, .LCPI0_0
-; GISEL-NEXT:    movi v3.8h, #15
-; GISEL-NEXT:    movi v4.8h, #16
 ; GISEL-NEXT:    ldr q1, [x8, :lo12:.LCPI0_0]
 ; GISEL-NEXT:    umull2 v2.4s, v0.8h, v1.8h
 ; GISEL-NEXT:    umull v1.4s, v0.4h, v1.4h
 ; GISEL-NEXT:    uzp2 v1.8h, v1.8h, v2.8h
-; GISEL-NEXT:    sub v2.8h, v4.8h, v3.8h
-; GISEL-NEXT:    neg v2.8h, v2.8h
 ; GISEL-NEXT:    sub v0.8h, v0.8h, v1.8h
-; GISEL-NEXT:    ushl v0.8h, v0.8h, v2.8h
-; GISEL-NEXT:    add v0.8h, v0.8h, v1.8h
-; GISEL-NEXT:    ushr v0.8h, v0.8h, #4
+; GISEL-NEXT:    usra v1.8h, v0.8h, #1
+; GISEL-NEXT:    ushr v0.8h, v1.8h, #4
 ; GISEL-NEXT:    ret
   %1 = udiv <8 x i16> %x, <i16 23, i16 23, i16 23, i16 23, i16 23, i16 23, i16 23, i16 23>
   ret <8 x i16> %1
@@ -140,21 +135,16 @@ define <8 x i16> @combine_vec_udiv_nonuniform3(<8 x i16> %x) {
 ; GISEL-LABEL: combine_vec_udiv_nonuniform3:
 ; GISEL:       // %bb.0:
 ; GISEL-NEXT:    adrp x8, .LCPI3_1
-; GISEL-NEXT:    movi v3.8h, #15
-; GISEL-NEXT:    movi v4.8h, #16
 ; GISEL-NEXT:    ldr q1, [x8, :lo12:.LCPI3_1]
 ; GISEL-NEXT:    adrp x8, .LCPI3_0
 ; GISEL-NEXT:    umull2 v2.4s, v0.8h, v1.8h
 ; GISEL-NEXT:    umull v1.4s, v0.4h, v1.4h
 ; GISEL-NEXT:    uzp2 v1.8h, v1.8h, v2.8h
-; GISEL-NEXT:    sub v2.8h, v4.8h, v3.8h
-; GISEL-NEXT:    neg v2.8h, v2.8h
-; GISEL-NEXT:    sub v0.8h, v0.8h, v1.8h
-; GISEL-NEXT:    ushl v0.8h, v0.8h, v2.8h
 ; GISEL-NEXT:    ldr q2, [x8, :lo12:.LCPI3_0]
-; GISEL-NEXT:    add v0.8h, v0.8h, v1.8h
-; GISEL-NEXT:    neg v1.8h, v2.8h
-; GISEL-NEXT:    ushl v0.8h, v0.8h, v1.8h
+; GISEL-NEXT:    sub v0.8h, v0.8h, v1.8h
+; GISEL-NEXT:    usra v1.8h, v0.8h, #1
+; GISEL-NEXT:    neg v0.8h, v2.8h
+; GISEL-NEXT:    ushl v0.8h, v1.8h, v0.8h
 ; GISEL-NEXT:    ret
   %1 = udiv <8 x i16> %x, <i16 7, i16 23, i16 25, i16 27, i16 31, i16 47, i16 63, i16 127>
   ret <8 x i16> %1
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/combine-umulh-to-lshr.mir b/llvm/test/CodeGen/AArch64/GlobalISel/combine-umulh-to-lshr.mir
index d5fe354719908..3ff72219810fb 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/combine-umulh-to-lshr.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/combine-umulh-to-lshr.mir
@@ -37,15 +37,9 @@ body:             |
     ; CHECK: liveins: $q0
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(<4 x s32>) = COPY $q0
-    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 28
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 29
     ; CHECK-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<4 x s32>) = G_BUILD_VECTOR [[C]](s32), [[C]](s32), [[C]](s32), [[C]](s32)
-    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 31
-    ; CHECK-NEXT: [[BUILD_VECTOR1:%[0-9]+]]:_(<4 x s32>) = G_BUILD_VECTOR [[C1]](s32), [[C1]](s32), [[C1]](s32), [[C1]](s32)
-    ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(<4 x s32>) = G_SUB [[BUILD_VECTOR1]], [[BUILD_VECTOR]]
-    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s32) = G_CONSTANT i32 32
-    ; CHECK-NEXT: [[BUILD_VECTOR2:%[0-9]+]]:_(<4 x s32>) = G_BUILD_VECTOR [[C2]](s32), [[C2]](s32), [[C2]](s32), [[C2]](s32)
-    ; CHECK-NEXT: [[SUB1:%[0-9]+]]:_(<4 x s32>) = G_SUB [[BUILD_VECTOR2]], [[SUB]]
-    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(<4 x s32>) = G_LSHR [[COPY]], [[SUB1]](<4 x s32>)
+    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(<4 x s32>) = G_LSHR [[COPY]], [[BUILD_VECTOR]](<4 x s32>)
     ; CHECK-NEXT: $q0 = COPY [[LSHR]](<4 x s32>)
     %0:_(<4 x s32>) = COPY $q0
     %1:_(s32) = G_CONSTANT i32 8
@@ -113,18 +107,12 @@ body:             |
     ; CHECK: liveins: $q0
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(<4 x s32>) = COPY $q0
-    ; CHECK-NEXT: %cst3:_(s32) = G_CONSTANT i32 32
     ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 28
     ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 27
     ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s32) = G_CONSTANT i32 26
-    ; CHECK-NEXT: [[C3:%[0-9]+]]:_(s32) = G_CONSTANT i32 25
-    ; CHECK-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<4 x s32>) = G_BUILD_VECTOR [[C]](s32), [[C1]](s32), [[C2]](s32), [[C3]](s32)
-    ; CHECK-NEXT: [[C4:%[0-9]+]]:_(s32) = G_CONSTANT i32 31
-    ; CHECK-NEXT: [[BUILD_VECTOR1:%[0-9]+]]:_(<4 x s32>) = G_BUILD_VECTOR [[C4]](s32), [[C4]](s32), [[C4]](s32), [[C4]](s32)
-    ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(<4 x s32>) = G_SUB [[BUILD_VECTOR1]], [[BUILD_VECTOR]]
-    ; CHECK-NEXT: [[BUILD_VECTOR2:%[0-9]+]]:_(<4 x s32>) = G_BUILD_VECTOR %cst3(s32), %cst3(s32), %cst3(s32), %cst3(s32)
-    ; CHECK-NEXT: [[SUB1:%[0-9]+]]:_(<4 x s32>) = G_SUB [[BUILD_VECTOR2]], [[SUB]]
-    ; CHECK-NEXT: %mulh:_(<4 x s32>) = G_LSHR [[COPY]], [[SUB1]](<4 x s32>)
+    ; CHECK-NEXT: [[C3:%[0-9]+]]:_(s32) = G_CONSTANT i32 29
+    ; CHECK-NEXT: [[BUILD_VECTOR:%[0-9]+]]:_(<4 x s32>) = G_BUILD_VECTOR [[C3]](s32), [[C]](s32), [[C1]](s32), [[C2]](s32)
+    ; CHECK-NEXT: %mulh:_(<4 x s32>) = G_LSHR [[COPY]], [[BUILD_VECTOR]](<4 x s32>)
     ; CHECK-NEXT: $q0 = COPY %mulh(<4 x s32>)
     %0:_(<4 x s32>) = COPY $q0
     %cst1:_(s32) = G_CONSTANT i32 8
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-fshl.mir b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-fshl.mir
index 9abdcc954b6c1..1e549fa9a833a 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-fshl.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-fshl.mir
@@ -170,14 +170,11 @@ body:             |
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $w0
     ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $w1
-    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 8
-    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 4
-    ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(s32) = G_SUB [[C]], [[C1]]
-    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 4
-    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[C2]](s64)
-    ; CHECK-NEXT: [[C3:%[0-9]+]]:_(s32) = G_CONSTANT i32 255
-    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C3]]
-    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[SUB]](s32)
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 4
+    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[C]](s64)
+    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 255
+    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C1]]
+    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[C]](s64)
     ; CHECK-NEXT: [[OR:%[0-9]+]]:_(s32) = disjoint G_OR [[SHL]], [[LSHR]]
     ; CHECK-NEXT: $w0 = COPY [[OR]](s32)
     ; CHECK-NEXT: RET_ReallyLR implicit $w0
@@ -206,14 +203,12 @@ body:             |
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $w0
     ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $w1
-    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 8
-    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 2
-    ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(s32) = G_SUB [[C]], [[C1]]
-    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 2
-    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[C2]](s64)
-    ; CHECK-NEXT: [[C3:%[0-9]+]]:_(s32) = G_CONSTANT i32 255
-    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C3]]
-    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[SUB]](s32)
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 2
+    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[C]](s64)
+    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 255
+    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C1]]
+    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 6
+    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[C2]](s64)
     ; CHECK-NEXT: [[OR:%[0-9]+]]:_(s32) = disjoint G_OR [[SHL]], [[LSHR]]
     ; CHECK-NEXT: $w0 = COPY [[OR]](s32)
     ; CHECK-NEXT: RET_ReallyLR implicit $w0
@@ -279,14 +274,12 @@ body:             |
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $w0
     ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $w1
-    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 16
-    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 12
-    ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(s32) = G_SUB [[C]], [[C1]]
-    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 12
-    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[C2]](s64)
-    ; CHECK-NEXT: [[C3:%[0-9]+]]:_(s32) = G_CONSTANT i32 65535
-    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C3]]
-    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[SUB]](s32)
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 12
+    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[C]](s64)
+    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 65535
+    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C1]]
+    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 4
+    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[C2]](s64)
     ; CHECK-NEXT: [[OR:%[0-9]+]]:_(s32) = disjoint G_OR [[SHL]], [[LSHR]]
     ; CHECK-NEXT: $w0 = COPY [[OR]](s32)
     ; CHECK-NEXT: RET_ReallyLR implicit $w0
@@ -315,14 +308,12 @@ body:             |
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $w0
     ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $w1
-    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 16
-    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 4
-    ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(s32) = G_SUB [[C]], [[C1]]
-    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 4
-    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[C2]](s64)
-    ; CHECK-NEXT: [[C3:%[0-9]+]]:_(s32) = G_CONSTANT i32 65535
-    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C3]]
-    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[SUB]](s32)
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 4
+    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[C]](s64)
+    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 65535
+    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C1]]
+    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 12
+    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[C2]](s64)
     ; CHECK-NEXT: [[OR:%[0-9]+]]:_(s32) = disjoint G_OR [[SHL]], [[LSHR]]
     ; CHECK-NEXT: $w0 = COPY [[OR]](s32)
     ; CHECK-NEXT: RET_ReallyLR implicit $w0
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-fshr.mir b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-fshr.mir
index 20fbb0d8476f7..9a9e5bb42c64f 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-fshr.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-fshr.mir
@@ -169,14 +169,12 @@ body:             |
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $w0
     ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $w1
-    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 8
-    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 7
-    ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(s32) = G_SUB [[C]], [[C1]]
-    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[SUB]](s32)
-    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s32) = G_CONSTANT i32 255
-    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C2]]
-    ; CHECK-NEXT: [[C3:%[0-9]+]]:_(s64) = G_CONSTANT i64 7
-    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[C3]](s64)
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 1
+    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[C]](s64)
+    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 255
+    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C1]]
+    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 7
+    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[C2]](s64)
     ; CHECK-NEXT: [[OR:%[0-9]+]]:_(s32) = disjoint G_OR [[SHL]], [[LSHR]]
     ; CHECK-NEXT: $w0 = COPY [[OR]](s32)
     ; CHECK-NEXT: RET_ReallyLR implicit $w0
@@ -205,14 +203,12 @@ body:             |
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $w0
     ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $w1
-    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 8
-    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 2
-    ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(s32) = G_SUB [[C]], [[C1]]
-    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[SUB]](s32)
-    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s32) = G_CONSTANT i32 255
-    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C2]]
-    ; CHECK-NEXT: [[C3:%[0-9]+]]:_(s64) = G_CONSTANT i64 2
-    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[C3]](s64)
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 6
+    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[C]](s64)
+    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 255
+    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C1]]
+    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 2
+    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[C2]](s64)
     ; CHECK-NEXT: [[OR:%[0-9]+]]:_(s32) = disjoint G_OR [[SHL]], [[LSHR]]
     ; CHECK-NEXT: $w0 = COPY [[OR]](s32)
     ; CHECK-NEXT: RET_ReallyLR implicit $w0
@@ -277,14 +273,12 @@ body:             |
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $w0
     ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $w1
-    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 16
-    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 5
-    ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(s32) = G_SUB [[C]], [[C1]]
-    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[SUB]](s32)
-    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s32) = G_CONSTANT i32 65535
-    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C2]]
-    ; CHECK-NEXT: [[C3:%[0-9]+]]:_(s64) = G_CONSTANT i64 5
-    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[C3]](s64)
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 11
+    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[C]](s64)
+    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 65535
+    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C1]]
+    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 5
+    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[C2]](s64)
     ; CHECK-NEXT: [[OR:%[0-9]+]]:_(s32) = disjoint G_OR [[SHL]], [[LSHR]]
     ; CHECK-NEXT: $w0 = COPY [[OR]](s32)
     ; CHECK-NEXT: RET_ReallyLR implicit $w0
@@ -313,14 +307,12 @@ body:             |
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $w0
     ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $w1
-    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 16
-    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 4
-    ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(s32) = G_SUB [[C]], [[C1]]
-    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[SUB]](s32)
-    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s32) = G_CONSTANT i32 65535
-    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C2]]
-    ; CHECK-NEXT: [[C3:%[0-9]+]]:_(s64) = G_CONSTANT i64 4
-    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[C3]](s64)
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s64) = G_CONSTANT i64 12
+    ; CHECK-NEXT: [[SHL:%[0-9]+]]:_(s32) = G_SHL [[COPY]], [[C]](s64)
+    ; CHECK-NEXT: [[C1:%[0-9]+]]:_(s32) = G_CONSTANT i32 65535
+    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s32) = G_AND [[COPY1]], [[C1]]
+    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s64) = G_CONSTANT i64 4
+    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(s32) = G_LSHR [[AND]], [[C2]](s64)
     ; CHECK-NEXT: [[OR:%[0-9]+]]:_(s32) = disjoint G_OR [[SHL]], [[LSHR]]
     ; CHECK-NEXT: $w0 = COPY [[OR]](s32)
     ; CHECK-NEXT: RET_ReallyLR implicit $w0
diff --git a/llvm/test/CodeGen/AArch64/arm64-neon-mul-div-cte.ll b/llvm/test/CodeGen/AArch64/arm64-neon-mul-div-cte.ll
index 9425beb41042a..ca6bb8360de59 100644
--- a/llvm/test/CodeGen/AArch64/arm64-neon-mul-div-cte.ll
+++ b/llvm/test/CodeGen/AArch64/arm64-neon-mul-div-cte.ll
@@ -179,18 +179,13 @@ define <8 x i16> @udiv8xi16(<8 x i16> %x) {
 ; CHECK-GI-LABEL: udiv8xi16:
 ; CHECK-GI:       // %bb.0:
 ; CHECK-GI-NEXT:    adrp x8, .LCPI4_0
-; CHECK-GI-NEXT:    movi v3.8h, #15
-; CHECK-GI-NEXT:    movi v4.8h, #16
 ; CHECK-GI-NEXT:    ldr q1, [x8, :lo12:.LCPI4_0]
 ; CHECK-GI-NEXT:    umull2 v2.4s, v0.8h, v1.8h
 ; CHECK-GI-NEXT:    umull v1.4s, v0.4h, v1.4h
 ; CHECK-GI-NEXT:    uzp2 v1.8h, v1.8h, v2.8h
-; CHECK-GI-NEXT:    sub v2.8h, v4.8h, v3.8h
-; CHECK-GI-NEXT:    neg v2.8h, v2.8h
 ; CHECK-GI-NEXT:    sub v0.8h, v0.8h, v1.8h
-; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v2.8h
-; CHECK-GI-NEXT:    add v0.8h, v0.8h, v1.8h
-; CHECK-GI-NEXT:    ushr v0.8h, v0.8h, #12
+; CHECK-GI-NEXT:    usra v1.8h, v0.8h, #1
+; CHECK-GI-NEXT:    ushr v0.8h, v1.8h, #12
 ; CHECK-GI-NEXT:    ret
   %div = udiv <8 x i16> %x, <i16 6537, i16 6537, i16 6537, i16 6537, i16 6537, i16 6537, i16 6537, i16 6537>
   ret <8 x i16> %div
@@ -252,18 +247,11 @@ define <2 x i64> @udiv_v2i64(<2 x i64> %a) {
 ; CHECK-GI-NEXT:    movk x8, #9362, lsl #48
 ; CHECK-GI-NEXT:    umulh x9, x9, x8
 ; CHECK-GI-NEXT:    umulh x8, x10, x8
-; CHECK-GI-NEXT:    adrp x10, .LCPI6_0
-; CHECK-GI-NEXT:    ldr q3, [x10, :lo12:.LCPI6_0]
 ; CHECK-GI-NEXT:    mov v1.d[0], x9
-; CHECK-GI-NEXT:    adrp x9, .LCPI6_1
-; CHECK-GI-NEXT:    ldr q2, [x9, :lo12:.LCPI6_1]
-; CHECK-GI-NEXT:    sub v2.2d, v3.2d, v2.2d
 ; CHECK-GI-NEXT:    mov v1.d[1], x8
-; CHECK-GI-NEXT:    neg v2.2d, v2.2d
 ; CHECK-GI-NEXT:    sub v0.2d, v0.2d, v1.2d
-; CHECK-GI-NEXT:    ushl v0.2d, v0.2d, v2.2d
-; CHECK-GI-NEXT:    add v0.2d, v0.2d, v1.2d
-; CHECK-GI-NEXT:    ushr v0.2d, v0.2d, #2
+; CHECK-GI-NEXT:    usra v1.2d, v0.2d, #1
+; CHECK-GI-NEXT:    ushr v0.2d, v1.2d, #2
 ; CHECK-GI-NEXT:    ret
   %r = udiv <2 x i64> %a, splat (i64 7)
   ret <2 x i64> %r
diff --git a/llvm/test/CodeGen/AArch64/fsh.ll b/llvm/test/CodeGen/AArch64/fsh.ll
index 9a05c3be0ce3e..2cee2f2b2686c 100644
--- a/llvm/test/CodeGen/AArch64/fsh.ll
+++ b/llvm/test/CodeGen/AArch64/fsh.ll
@@ -613,24 +613,11 @@ entry:
 }
 
 define i8 @rotl_i8_c(i8 %a) {
-; CHECK-SD-LABEL: rotl_i8_c:
-; CHECK-SD:       // %bb.0: // %entry
-; CHECK-SD-NEXT:    ubfx w8, w0, #5, #3
-; CHECK-SD-NEXT:    orr w0, w8, w0, lsl #3
-; CHECK-SD-NEXT:    ret
-;
-; CHECK-GI-LABEL: rotl_i8_c:
-; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    mov x8, xzr
-; CHECK-GI-NEXT:    mov w9, #-3 // =0xfffffffd
-; CHECK-GI-NEXT:    and w10, w0, #0xff
-; CHECK-GI-NEXT:    sub x8, x8, w9, uxtb
-; CHECK-GI-NEXT:    and x9, x9, #0x7
-; CHECK-GI-NEXT:    lsr w9, w10, w9
-; CHECK-GI-NEXT:    and x8, x8, #0x7
-; CHECK-GI-NEXT:    lsl w8, w0, w8
-; CHECK-GI-NEXT:    orr w0, w9, w8
-; CHECK-GI-NEXT:    ret
+; CHECK-LABEL: rotl_i8_c:
+; CHECK:       // %bb.0: // %entry
+; CHECK-NEXT:    ubfx w8, w0, #5, #3
+; CHECK-NEXT:    orr w0, w8, w0, lsl #3
+; CHECK-NEXT:    ret
 entry:
   %d = call i8 @llvm.fshl(i8 %a, i8 %a, i8 3)
   ret i8 %d
@@ -655,24 +642,11 @@ entry:
 }
 
 define i16 @rotl_i16_c(i16 %a) {
-; CHECK-SD-LABEL: rotl_i16_c:
-; CHECK-SD:       // %bb.0: // %entry
-; CHECK-SD-NEXT:    ubfx w8, w0, #13, #3
-; CHECK-SD-NEXT:    orr w0, w8, w0, lsl #3
-; CHECK-SD-NEXT:    ret
-;
-; CHECK-GI-LABEL: rotl_i16_c:
-; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    mov x8, xzr
-; CHECK-GI-NEXT:    mov w9, #-3 // =0xfffffffd
-; CHECK-GI-NEXT:    and w10, w0, #0xffff
-; CHECK-GI-NEXT:    sub x8, x8, w9, uxth
-; CHECK-GI-NEXT:    and x9, x9, #0xf
-; CHECK-GI-NEXT:    lsr w9, w10, w9
-; CHECK-GI-NEXT:    and x8, x8, #0xf
-; CHECK-GI-NEXT:    lsl w8, w0, w8
-; CHECK-GI-NEXT:    orr w0, w9, w8
-; CHECK-GI-NEXT:    ret
+; CHECK-LABEL: rotl_i16_c:
+; CHECK:       // %bb.0: // %entry
+; CHECK-NEXT:    ubfx w8, w0, #13, #3
+; CHECK-NEXT:    orr w0, w8, w0, lsl #3
+; CHECK-NEXT:    ret
 entry:
   %d = call i16 @llvm.fshl(i16 %a, i16 %a, i16 3)
   ret i16 %d
@@ -697,16 +671,10 @@ entry:
 }
 
 define i32 @rotl_i32_c(i32 %a) {
-; CHECK-SD-LABEL: rotl_i32_c:
-; CHECK-SD:       // %bb.0: // %entry
-; CHECK-SD-NEXT:    ror w0, w0, #29
-; CHECK-SD-NEXT:    ret
-;
-; CHECK-GI-LABEL: rotl_i32_c:
-; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    mov w8, #-3 // =0xfffffffd
-; CHECK-GI-NEXT:    ror w0, w0, w8
-; CHECK-GI-NEXT:    ret
+; CHECK-LABEL: rotl_i32_c:
+; CHECK:       // %bb.0: // %entry
+; CHECK-NEXT:    ror w0, w0, #29
+; CHECK-NEXT:    ret
 entry:
   %d = call i32 @llvm.fshl(i32 %a, i32 %a, i32 3)
   ret i32 %d
@@ -3241,14 +3209,9 @@ define <8 x i8> @rotl_v8i8_c(<8 x i8> %a) {
 ;
 ; CHECK-GI-LABEL: rotl_v8i8_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v1.8b, #3
-; CHECK-GI-NEXT:    movi v2.8b, #7
-; CHECK-GI-NEXT:    neg v1.8b, v1.8b
-; CHECK-GI-NEXT:    and v1.8b, v1.8b, v2.8b
-; CHECK-GI-NEXT:    shl v2.8b, v0.8b, #3
-; CHECK-GI-NEXT:    neg v1.8b, v1.8b
-; CHECK-GI-NEXT:    ushl v0.8b, v0.8b, v1.8b
-; CHECK-GI-NEXT:    orr v0.8b, v2.8b, v0.8b
+; CHECK-GI-NEXT:    shl v1.8b, v0.8b, #3
+; CHECK-GI-NEXT:    ushr v0.8b, v0.8b, #5
+; CHECK-GI-NEXT:    orr v0.8b, v1.8b, v0.8b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <8 x i8> @llvm.fshl(<8 x i8> %a, <8 x i8> %a, <8 x i8> <i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3>)
@@ -3265,13 +3228,9 @@ define <8 x i8> @rotr_v8i8_c(<8 x i8> %a) {
 ;
 ; CHECK-GI-LABEL: rotr_v8i8_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v1.8b, #3
-; CHECK-GI-NEXT:    movi v2.8b, #7
-; CHECK-GI-NEXT:    neg v1.8b, v1.8b
-; CHECK-GI-NEXT:    and v1.8b, v1.8b, v2.8b
-; CHECK-GI-NEXT:    ushr v2.8b, v0.8b, #3
-; CHECK-GI-NEXT:    ushl v0.8b, v0.8b, v1.8b
-; CHECK-GI-NEXT:    orr v0.8b, v2.8b, v0.8b
+; CHECK-GI-NEXT:    ushr v1.8b, v0.8b, #3
+; CHECK-GI-NEXT:    shl v0.8b, v0.8b, #5
+; CHECK-GI-NEXT:    orr v0.8b, v1.8b, v0.8b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <8 x i8> @llvm.fshr(<8 x i8> %a, <8 x i8> %a, <8 x i8> <i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3>)
@@ -3288,14 +3247,9 @@ define <16 x i8> @rotl_v16i8_c(<16 x i8> %a) {
 ;
 ; CHECK-GI-LABEL: rotl_v16i8_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v1.16b, #3
-; CHECK-GI-NEXT:    movi v2.16b, #7
-; CHECK-GI-NEXT:    neg v1.16b, v1.16b
-; CHECK-GI-NEXT:    and v1.16b, v1.16b, v2.16b
-; CHECK-GI-NEXT:    shl v2.16b, v0.16b, #3
-; CHECK-GI-NEXT:    neg v1.16b, v1.16b
-; CHECK-GI-NEXT:    ushl v0.16b, v0.16b, v1.16b
-; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
+; CHECK-GI-NEXT:    shl v1.16b, v0.16b, #3
+; CHECK-GI-NEXT:    ushr v0.16b, v0.16b, #5
+; CHECK-GI-NEXT:    orr v0.16b, v1.16b, v0.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <16 x i8> @llvm.fshl(<16 x i8> %a, <16 x i8> %a, <16 x i8> <i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3>)
@@ -3312,13 +3266,9 @@ define <16 x i8> @rotr_v16i8_c(<16 x i8> %a) {
 ;
 ; CHECK-GI-LABEL: rotr_v16i8_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v1.16b, #3
-; CHECK-GI-NEXT:    movi v2.16b, #7
-; CHECK-GI-NEXT:    neg v1.16b, v1.16b
-; CHECK-GI-NEXT:    and v1.16b, v1.16b, v2.16b
-; CHECK-GI-NEXT:    ushr v2.16b, v0.16b, #3
-; CHECK-GI-NEXT:    ushl v0.16b, v0.16b, v1.16b
-; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
+; CHECK-GI-NEXT:    ushr v1.16b, v0.16b, #3
+; CHECK-GI-NEXT:    shl v0.16b, v0.16b, #5
+; CHECK-GI-NEXT:    orr v0.16b, v1.16b, v0.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <16 x i8> @llvm.fshr(<16 x i8> %a, <16 x i8> %a, <16 x i8> <i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3>)
@@ -3335,14 +3285,9 @@ define <4 x i16> @rotl_v4i16_c(<4 x i16> %a) {
 ;
 ; CHECK-GI-LABEL: rotl_v4i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v1.4h, #3
-; CHECK-GI-NEXT:    movi v2.4h, #15
-; CHECK-GI-NEXT:    neg v1.4h, v1.4h
-; CHECK-GI-NEXT:    and v1.8b, v1.8b, v2.8b
-; CHECK-GI-NEXT:    shl v2.4h, v0.4h, #3
-; CHECK-GI-NEXT:    neg v1.4h, v1.4h
-; CHECK-GI-NEXT:    ushl v0.4h, v0.4h, v1.4h
-; CHECK-GI-NEXT:    orr v0.8b, v2.8b, v0.8b
+; CHECK-GI-NEXT:    shl v1.4h, v0.4h, #3
+; CHECK-GI-NEXT:    ushr v0.4h, v0.4h, #13
+; CHECK-GI-NEXT:    orr v0.8b, v1.8b, v0.8b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <4 x i16> @llvm.fshl(<4 x i16> %a, <4 x i16> %a, <4 x i16> <i16 3, i16 3, i16 3, i16 3>)
@@ -3359,13 +3304,9 @@ define <4 x i16> @rotr_v4i16_c(<4 x i16> %a) {
 ;
 ; CHECK-GI-LABEL: rotr_v4i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v1.4h, #3
-; CHECK-GI-NEXT:    movi v2.4h, #15
-; CHECK-GI-NEXT:    neg v1.4h, v1.4h
-; CHECK-GI-NEXT:    and v1.8b, v1.8b, v2.8b
-; CHECK-GI-NEXT:    ushr v2.4h, v0.4h, #3
-; CHECK-GI-NEXT:    ushl v0.4h, v0.4h, v1.4h
-; CHECK-GI-NEXT:    orr v0.8b, v2.8b, v0.8b
+; CHECK-GI-NEXT:    ushr v1.4h, v0.4h, #3
+; CHECK-GI-NEXT:    shl v0.4h, v0.4h, #13
+; CHECK-GI-NEXT:    orr v0.8b, v1.8b, v0.8b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <4 x i16> @llvm.fshr(<4 x i16> %a, <4 x i16> %a, <4 x i16> <i16 3, i16 3, i16 3, i16 3>)
@@ -3386,34 +3327,24 @@ define <7 x i16> @rotl_v7i16_c(<7 x i16> %a) {
 ;
 ; CHECK-GI-LABEL: rotl_v7i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    mov w8, #3 // =0x3
-; CHECK-GI-NEXT:    mov w9, #0 // =0x0
-; CHECK-GI-NEXT:    mov w10, #15 // =0xf
-; CHECK-GI-NEXT:    fmov s1, w9
-; CHECK-GI-NEXT:    fmov s2, w8
-; CHECK-GI-NEXT:    fmov s3, w10
-; CHECK-GI-NEXT:    mov v1.h[1], w9
-; CHECK-GI-NEXT:    mov v2.h[1], w8
-; CHECK-GI-NEXT:    mov v3.h[1], w10
-; CHECK-GI-NEXT:    mov v1.h[2], w9
-; CHECK-GI-NEXT:    mov v2.h[2], w8
-; CHECK-GI-NEXT:    mov v3.h[2], w10
-; CHECK-GI-NEXT:    mov v1.h[3], w9
-; CHECK-GI-NEXT:    mov v2.h[3], w8
-; CHECK-GI-NEXT:    mov v3.h[3], w10
-; CHECK-GI-NEXT:    mov v1.h[4], w9
-; CHECK-GI-NEXT:    mov v2.h[4], w8
-; CHECK-GI-NEXT:    mov v3.h[4], w10
-; CHECK-GI-NEXT:    mov v1.h[5], w9
-; CHECK-GI-NEXT:    mov v2.h[5], w8
-; CHECK-GI-NEXT:    mov v3.h[5], w10
-; CHECK-GI-NEXT:    mov v1.h[6], w9
-; CHECK-GI-NEXT:    mov v2.h[6], w8
-; CHECK-GI-NEXT:    mov v3.h[6], w10
-; CHECK-GI-NEXT:    sub v1.8h, v1.8h, v2.8h
-; CHECK-GI-NEXT:    ushl v2.8h, v0.8h, v2.8h
-; CHECK-GI-NEXT:    and v1.16b, v1.16b, v3.16b
+; CHECK-GI-NEXT:    mov w8, #13 // =0xd
+; CHECK-GI-NEXT:    mov w9, #3 // =0x3
+; CHECK-GI-NEXT:    fmov s1, w8
+; CHECK-GI-NEXT:    fmov s2, w9
+; CHECK-GI-NEXT:    mov v1.h[1], w8
+; CHECK-GI-NEXT:    mov v2.h[1], w9
+; CHECK-GI-NEXT:    mov v1.h[2], w8
+; CHECK-GI-NEXT:    mov v2.h[2], w9
+; CHECK-GI-NEXT:    mov v1.h[3], w8
+; CHECK-GI-NEXT:    mov v2.h[3], w9
+; CHECK-GI-NEXT:    mov v1.h[4], w8
+; CHECK-GI-NEXT:    mov v2.h[4], w9
+; CHECK-GI-NEXT:    mov v1.h[5], w8
+; CHECK-GI-NEXT:    mov v2.h[5], w9
+; CHECK-GI-NEXT:    mov v1.h[6], w8
+; CHECK-GI-NEXT:    mov v2.h[6], w9
 ; CHECK-GI-NEXT:    neg v1.8h, v1.8h
+; CHECK-GI-NEXT:    ushl v2.8h, v0.8h, v2.8h
 ; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v1.8h
 ; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
 ; CHECK-GI-NEXT:    ret
@@ -3437,35 +3368,25 @@ define <7 x i16> @rotr_v7i16_c(<7 x i16> %a) {
 ; CHECK-GI-LABEL: rotr_v7i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
 ; CHECK-GI-NEXT:    mov w8, #3 // =0x3
-; CHECK-GI-NEXT:    mov w9, #0 // =0x0
-; CHECK-GI-NEXT:    mov w10, #15 // =0xf
-; CHECK-GI-NEXT:    fmov s1, w9
-; CHECK-GI-NEXT:    fmov s2, w8
-; CHECK-GI-NEXT:    fmov s3, w10
-; CHECK-GI-NEXT:    mov v1.h[1], w9
-; CHECK-GI-NEXT:    mov v2.h[1], w8
-; CHECK-GI-NEXT:    mov v3.h[1], w10
-; CHECK-GI-NEXT:    mov v1.h[2], w9
-; CHECK-GI-NEXT:    mov v2.h[2], w8
-; CHECK-GI-NEXT:    mov v3.h[2], w10
-; CHECK-GI-NEXT:    mov v1.h[3], w9
-; CHECK-GI-NEXT:    mov v2.h[3], w8
-; CHECK-GI-NEXT:    mov v3.h[3], w10
-; CHECK-GI-NEXT:    mov v1.h[4], w9
-; CHECK-GI-NEXT:    mov v2.h[4], w8
-; CHECK-GI-NEXT:    mov v3.h[4], w10
-; CHECK-GI-NEXT:    mov v1.h[5], w9
-; CHECK-GI-NEXT:    mov v2.h[5], w8
-; CHECK-GI-NEXT:    mov v3.h[5], w10
-; CHECK-GI-NEXT:    mov v1.h[6], w9
-; CHECK-GI-NEXT:    mov v2.h[6], w8
-; CHECK-GI-NEXT:    mov v3.h[6], w10
-; CHECK-GI-NEXT:    sub v1.8h, v1.8h, v2.8h
-; CHECK-GI-NEXT:    neg v2.8h, v2.8h
-; CHECK-GI-NEXT:    and v1.16b, v1.16b, v3.16b
-; CHECK-GI-NEXT:    ushl v2.8h, v0.8h, v2.8h
-; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v1.8h
-; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
+; CHECK-GI-NEXT:    mov w9, #13 // =0xd
+; CHECK-GI-NEXT:    fmov s1, w8
+; CHECK-GI-NEXT:    fmov s2, w9
+; CHECK-GI-NEXT:    mov v1.h[1], w8
+; CHECK-GI-NEXT:    mov v2.h[1], w9
+; CHECK-GI-NEXT:    mov v1.h[2], w8
+; CHECK-GI-NEXT:    mov v2.h[2], w9
+; CHECK-GI-NEXT:    mov v1.h[3], w8
+; CHECK-GI-NEXT:    mov v2.h[3], w9
+; CHECK-GI-NEXT:    mov v1.h[4], w8
+; CHECK-GI-NEXT:    mov v2.h[4], w9
+; CHECK-GI-NEXT:    mov v1.h[5], w8
+; CHECK-GI-NEXT:    mov v2.h[5], w9
+; CHECK-GI-NEXT:    mov v1.h[6], w8
+; CHECK-GI-NEXT:    mov v2.h[6], w9
+; CHECK-GI-NEXT:    neg v1.8h, v1.8h
+; CHECK-GI-NEXT:    ushl v1.8h, v0.8h, v1.8h
+; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v2.8h
+; CHECK-GI-NEXT:    orr v0.16b, v1.16b, v0.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <7 x i16> @llvm.fshr(<7 x i16> %a, <7 x i16> %a, <7 x i16> <i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3>)
@@ -3482,14 +3403,9 @@ define <8 x i16> @rotl_v8i16_c(<8 x i16> %a) {
 ;
 ; CHECK-GI-LABEL: rotl_v8i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v1.8h, #3
-; CHECK-GI-NEXT:    movi v2.8h, #15
-; CHECK-GI-NEXT:    neg v1.8h, v1.8h
-; CHECK-GI-NEXT:    and v1.16b, v1.16b, v2.16b
-; CHECK-GI-NEXT:    shl v2.8h, v0.8h, #3
-; CHECK-GI-NEXT:    neg v1.8h, v1.8h
-; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v1.8h
-; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
+; CHECK-GI-NEXT:    shl v1.8h, v0.8h, #3
+; CHECK-GI-NEXT:    ushr v0.8h, v0.8h, #13
+; CHECK-GI-NEXT:    orr v0.16b, v1.16b, v0.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <8 x i16> @llvm.fshl(<8 x i16> %a, <8 x i16> %a, <8 x i16> <i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3>)
@@ -3506,13 +3422,9 @@ define <8 x i16> @rotr_v8i16_c(<8 x i16> %a) {
 ;
 ; CHECK-GI-LABEL: rotr_v8i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v1.8h, #3
-; CHECK-GI-NEXT:    movi v2.8h, #15
-; CHECK-GI-NEXT:    neg v1.8h, v1.8h
-; CHECK-GI-NEXT:    and v1.16b, v1.16b, v2.16b
-; CHECK-GI-NEXT:    ushr v2.8h, v0.8h, #3
-; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v1.8h
-; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
+; CHECK-GI-NEXT:    ushr v1.8h, v0.8h, #3
+; CHECK-GI-NEXT:    shl v0.8h, v0.8h, #13
+; CHECK-GI-NEXT:    orr v0.16b, v1.16b, v0.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <8 x i16> @llvm.fshr(<8 x i16> %a, <8 x i16> %a, <8 x i16> <i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3>)
@@ -3532,17 +3444,12 @@ define <16 x i16> @rotl_v16i16_c(<16 x i16> %a) {
 ;
 ; CHECK-GI-LABEL: rotl_v16i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v2.8h, #3
-; CHECK-GI-NEXT:    movi v3.8h, #15
-; CHECK-GI-NEXT:    shl v4.8h, v1.8h, #3
-; CHECK-GI-NEXT:    neg v2.8h, v2.8h
-; CHECK-GI-NEXT:    and v2.16b, v2.16b, v3.16b
-; CHECK-GI-NEXT:    shl v3.8h, v0.8h, #3
-; CHECK-GI-NEXT:    neg v2.8h, v2.8h
-; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v2.8h
-; CHECK-GI-NEXT:    ushl v1.8h, v1.8h, v2.8h
-; CHECK-GI-NEXT:    orr v0.16b, v3.16b, v0.16b
-; CHECK-GI-NEXT:    orr v1.16b, v4.16b, v1.16b
+; CHECK-GI-NEXT:    shl v2.8h, v0.8h, #3
+; CHECK-GI-NEXT:    shl v3.8h, v1.8h, #3
+; CHECK-GI-NEXT:    ushr v0.8h, v0.8h, #13
+; CHECK-GI-NEXT:    ushr v1.8h, v1.8h, #13
+; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
+; CHECK-GI-NEXT:    orr v1.16b, v3.16b, v1.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <16 x i16> @llvm.fshl(<16 x i16> %a, <16 x i16> %a, <16 x i16> <i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3>)
@@ -3562,16 +3469,12 @@ define <16 x i16> @rotr_v16i16_c(<16 x i16> %a) {
 ;
 ; CHECK-GI-LABEL: rotr_v16i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v2.8h, #3
-; CHECK-GI-NEXT:    movi v3.8h, #15
-; CHECK-GI-NEXT:    ushr v4.8h, v1.8h, #3
-; CHECK-GI-NEXT:    neg v2.8h, v2.8h
-; CHECK-GI-NEXT:    and v2.16b, v2.16b, v3.16b
-; CHECK-GI-NEXT:    ushr v3.8h, v0.8h, #3
-; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v2.8h
-; CHECK-GI-NEXT:    ushl v1.8h, v1.8h, v2.8h
-; CHECK-GI-NEXT:    orr v0.16b, v3.16b, v0.16b
-; CHECK-GI-NEXT:    orr v1.16b, v4.16b, v1.16b
+; CHECK-GI-NEXT:    ushr v2.8h, v0.8h, #3
+; CHECK-GI-NEXT:    ushr v3.8h, v1.8h, #3
+; CHECK-GI-NEXT:    shl v0.8h, v0.8h, #13
+; CHECK-GI-NEXT:    shl v1.8h, v1.8h, #13
+; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
+; CHECK-GI-NEXT:    orr v1.16b, v3.16b, v1.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <16 x i16> @llvm.fshr(<16 x i16> %a, <16 x i16> %a, <16 x i16> <i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3>)
@@ -3588,14 +3491,9 @@ define <2 x i32> @rotl_v2i32_c(<2 x i32> %a) {
 ;
 ; CHECK-GI-LABEL: rotl_v2i32_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v1.2s, #3
-; CHECK-GI-NEXT:    movi v2.2s, #31
-; CHECK-GI-NEXT:    neg v1.2s, v1.2s
-; CHECK-GI-NEXT:    and v1.8b, v1.8b, v2.8b
-; CHECK-GI-NEXT:    shl v2.2s, v0.2s, #3
-; CHECK-GI-NEXT:    neg v1.2s, v1.2s
-; CHECK-GI-NEXT:    ushl v0.2s, v0.2s, v1.2s
-; CHECK-GI-NEXT:    orr v0.8b, v2.8b, v0.8b
+; CHECK-GI-NEXT:    shl v1.2s, v0.2s, #3
+; CHECK-GI-NEXT:    ushr v0.2s, v0.2s, #29
+; CHECK-GI-NEXT:    orr v0.8b, v1.8b, v0.8b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <2 x i32> @llvm.fshl(<2 x i32> %a, <2 x i32> %a, <2 x i32> <i32 3, i32 3>)
@@ -3612,13 +3510,9 @@ define <2 x i32> @rotr_v2i32_c(<2 x i32> %a) {
 ;
 ; CHECK-GI-LABEL: rotr_v2i32_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v1.2s, #3
-; CHECK-GI-NEXT:    movi v2.2s, #31
-; CHECK-GI-NEXT:    neg v1.2s, v1.2s
-; CHECK-GI-NEXT:    and v1.8b, v1.8b, v2.8b
-; CHECK-GI-NEXT:    ushr v2.2s, v0.2s, #3
-; CHECK-GI-NEXT:    ushl v0.2s, v0.2s, v1.2s
-; CHECK-GI-NEXT:    orr v0.8b, v2.8b, v0.8b
+; CHECK-GI-NEXT:    ushr v1.2s, v0.2s, #3
+; CHECK-GI-NEXT:    shl v0.2s, v0.2s, #29
+; CHECK-GI-NEXT:    orr v0.8b, v1.8b, v0.8b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <2 x i32> @llvm.fshr(<2 x i32> %a, <2 x i32> %a, <2 x i32> <i32 3, i32 3>)
@@ -3635,14 +3529,9 @@ define <4 x i32> @rotl_v4i32_c(<4 x i32> %a) {
 ;
 ; CHECK-GI-LABEL: rotl_v4i32_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v1.4s, #3
-; CHECK-GI-NEXT:    movi v2.4s, #31
-; CHECK-GI-NEXT:    neg v1.4s, v1.4s
-; CHECK-GI-NEXT:    and v1.16b, v1.16b, v2.16b
-; CHECK-GI-NEXT:    shl v2.4s, v0.4s, #3
-; CHECK-GI-NEXT:    neg v1.4s, v1.4s
-; CHECK-GI-NEXT:    ushl v0.4s, v0.4s, v1.4s
-; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
+; CHECK-GI-NEXT:    shl v1.4s, v0.4s, #3
+; CHECK-GI-NEXT:    ushr v0.4s, v0.4s, #29
+; CHECK-GI-NEXT:    orr v0.16b, v1.16b, v0.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <4 x i32> @llvm.fshl(<4 x i32> %a, <4 x i32> %a, <4 x i32> <i32 3, i32 3, i32 3, i32 3>)
@@ -3659,13 +3548,9 @@ define <4 x i32> @rotr_v4i32_c(<4 x i32> %a) {
 ;
 ; CHECK-GI-LABEL: rotr_v4i32_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v1.4s, #3
-; CHECK-GI-NEXT:    movi v2.4s, #31
-; CHECK-GI-NEXT:    neg v1.4s, v1.4s
-; CHECK-GI-NEXT:    and v1.16b, v1.16b, v2.16b
-; CHECK-GI-NEXT:    ushr v2.4s, v0.4s, #3
-; CHECK-GI-NEXT:    ushl v0.4s, v0.4s, v1.4s
-; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
+; CHECK-GI-NEXT:    ushr v1.4s, v0.4s, #3
+; CHECK-GI-NEXT:    shl v0.4s, v0.4s, #29
+; CHECK-GI-NEXT:    orr v0.16b, v1.16b, v0.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <4 x i32> @llvm.fshr(<4 x i32> %a, <4 x i32> %a, <4 x i32> <i32 3, i32 3, i32 3, i32 3>)
@@ -3702,55 +3587,42 @@ define <7 x i32> @rotl_v7i32_c(<7 x i32> %a) {
 ;
 ; CHECK-GI-LABEL: rotl_v7i32_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    mov w8, #3 // =0x3
-; CHECK-GI-NEXT:    mov v0.s[0], wzr
-; CHECK-GI-NEXT:    mov w9, #31 // =0x1f
-; CHECK-GI-NEXT:    mov v1.s[0], w8
-; CHECK-GI-NEXT:    mov v2.s[0], w9
-; CHECK-GI-NEXT:    mov v3.s[0], w0
-; CHECK-GI-NEXT:    mov v4.s[0], w0
-; CHECK-GI-NEXT:    movi v5.4s, #3
-; CHECK-GI-NEXT:    mov v6.s[0], w4
-; CHECK-GI-NEXT:    mov v7.s[0], w8
-; CHECK-GI-NEXT:    mov v16.s[0], w4
-; CHECK-GI-NEXT:    movi v17.4s, #31
-; CHECK-GI-NEXT:    mov v0.s[1], wzr
-; CHECK-GI-NEXT:    mov v1.s[1], w8
-; CHECK-GI-NEXT:    mov v2.s[1], w9
-; CHECK-GI-NEXT:    mov v3.s[1], w1
-; CHECK-GI-NEXT:    mov v4.s[1], w1
-; CHECK-GI-NEXT:    mov v6.s[1], w5
-; CHECK-GI-NEXT:    mov v7.s[1], w8
-; CHECK-GI-NEXT:    mov v16.s[1], w5
-; CHECK-GI-NEXT:    mov v0.s[2], wzr
-; CHECK-GI-NEXT:    mov v1.s[2], w8
-; CHECK-GI-NEXT:    mov v2.s[2], w9
-; CHECK-GI-NEXT:    mov v3.s[2], w2
-; CHECK-GI-NEXT:    mov v4.s[2], w2
-; CHECK-GI-NEXT:    mov v6.s[2], w6
-; CHECK-GI-NEXT:    mov v7.s[2], w8
-; CHECK-GI-NEXT:    mov v16.s[2], w6
-; CHECK-GI-NEXT:    sub v0.4s, v0.4s, v1.4s
-; CHECK-GI-NEXT:    neg v1.4s, v5.4s
-; CHECK-GI-NEXT:    mov v3.s[3], w3
-; CHECK-GI-NEXT:    mov v4.s[3], w3
-; CHECK-GI-NEXT:    and v1.16b, v1.16b, v17.16b
-; CHECK-GI-NEXT:    and v0.16b, v0.16b, v2.16b
-; CHECK-GI-NEXT:    shl v2.4s, v3.4s, #3
-; CHECK-GI-NEXT:    ushl v3.4s, v6.4s, v7.4s
-; CHECK-GI-NEXT:    neg v1.4s, v1.4s
-; CHECK-GI-NEXT:    neg v0.4s, v0.4s
-; CHECK-GI-NEXT:    ushl v1.4s, v4.4s, v1.4s
-; CHECK-GI-NEXT:    ushl v0.4s, v16.4s, v0.4s
-; CHECK-GI-NEXT:    orr v1.16b, v2.16b, v1.16b
-; CHECK-GI-NEXT:    orr v0.16b, v3.16b, v0.16b
-; CHECK-GI-NEXT:    mov s2, v1.s[1]
-; CHECK-GI-NEXT:    mov s3, v1.s[2]
-; CHECK-GI-NEXT:    mov s4, v1.s[3]
-; CHECK-GI-NEXT:    mov s5, v0.s[1]
-; CHECK-GI-NEXT:    mov s6, v0.s[2]
-; CHECK-GI-NEXT:    fmov w0, s1
-; CHECK-GI-NEXT:    fmov w4, s0
+; CHECK-GI-NEXT:    mov v0.s[0], w0
+; CHECK-GI-NEXT:    mov v1.s[0], w0
+; CHECK-GI-NEXT:    mov w8, #29 // =0x1d
+; CHECK-GI-NEXT:    mov v2.s[0], w8
+; CHECK-GI-NEXT:    mov w9, #3 // =0x3
+; CHECK-GI-NEXT:    mov v3.s[0], w4
+; CHECK-GI-NEXT:    mov v4.s[0], w9
+; CHECK-GI-NEXT:    mov v5.s[0], w4
+; CHECK-GI-NEXT:    mov v0.s[1], w1
+; CHECK-GI-NEXT:    mov v1.s[1], w1
+; CHECK-GI-NEXT:    mov v2.s[1], w8
+; CHECK-GI-NEXT:    mov v3.s[1], w5
+; CHECK-GI-NEXT:    mov v4.s[1], w9
+; CHECK-GI-NEXT:    mov v5.s[1], w5
+; CHECK-GI-NEXT:    mov v0.s[2], w2
+; CHECK-GI-NEXT:    mov v1.s[2], w2
+; CHECK-GI-NEXT:    mov v2.s[2], w8
+; CHECK-GI-NEXT:    mov v3.s[2], w6
+; CHECK-GI-NEXT:    mov v4.s[2], w9
+; CHECK-GI-NEXT:    mov v5.s[2], w6
+; CHECK-GI-NEXT:    mov v0.s[3], w3
+; CHECK-GI-NEXT:    mov v1.s[3], w3
+; CHECK-GI-NEXT:    neg v2.4s, v2.4s
+; CHECK-GI-NEXT:    ushl v3.4s, v3.4s, v4.4s
+; CHECK-GI-NEXT:    shl v0.4s, v0.4s, #3
+; CHECK-GI-NEXT:    ushr v1.4s, v1.4s, #29
+; CHECK-GI-NEXT:    ushl v2.4s, v5.4s, v2.4s
+; CHECK-GI-NEXT:    orr v0.16b, v0.16b, v1.16b
+; CHECK-GI-NEXT:    orr v1.16b, v3.16b, v2.16b
+; CHECK-GI-NEXT:    mov s2, v0.s[1]
+; CHECK-GI-NEXT:    mov s3, v0.s[2]
+; CHECK-GI-NEXT:    mov s4, v0.s[3]
+; CHECK-GI-NEXT:    mov s5, v1.s[1]
+; CHECK-GI-NEXT:    mov s6, v1.s[2]
+; CHECK-GI-NEXT:    fmov w0, s0
+; CHECK-GI-NEXT:    fmov w4, s1
 ; CHECK-GI-NEXT:    fmov w1, s2
 ; CHECK-GI-NEXT:    fmov w2, s3
 ; CHECK-GI-NEXT:    fmov w3, s4
@@ -3792,54 +3664,42 @@ define <7 x i32> @rotr_v7i32_c(<7 x i32> %a) {
 ;
 ; CHECK-GI-LABEL: rotr_v7i32_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    mov w8, #3 // =0x3
-; CHECK-GI-NEXT:    mov v0.s[0], wzr
+; CHECK-GI-NEXT:    mov v0.s[0], w0
 ; CHECK-GI-NEXT:    mov v1.s[0], w0
+; CHECK-GI-NEXT:    mov w8, #3 // =0x3
 ; CHECK-GI-NEXT:    mov v2.s[0], w8
-; CHECK-GI-NEXT:    mov v3.s[0], w0
-; CHECK-GI-NEXT:    mov w9, #31 // =0x1f
-; CHECK-GI-NEXT:    mov v4.s[0], w8
+; CHECK-GI-NEXT:    mov w9, #29 // =0x1d
+; CHECK-GI-NEXT:    mov v3.s[0], w4
+; CHECK-GI-NEXT:    mov v4.s[0], w4
 ; CHECK-GI-NEXT:    mov v5.s[0], w9
-; CHECK-GI-NEXT:    mov v6.s[0], w4
-; CHECK-GI-NEXT:    mov v7.s[0], w4
-; CHECK-GI-NEXT:    movi v16.4s, #3
-; CHECK-GI-NEXT:    movi v17.4s, #31
-; CHECK-GI-NEXT:    mov v0.s[1], wzr
+; CHECK-GI-NEXT:    mov v0.s[1], w1
 ; CHECK-GI-NEXT:    mov v1.s[1], w1
 ; CHECK-GI-NEXT:    mov v2.s[1], w8
-; CHECK-GI-NEXT:    mov v3.s[1], w1
-; CHECK-GI-NEXT:    mov v4.s[1], w8
+; CHECK-GI-NEXT:    mov v3.s[1], w5
+; CHECK-GI-NEXT:    mov v4.s[1], w5
 ; CHECK-GI-NEXT:    mov v5.s[1], w9
-; CHECK-GI-NEXT:    mov v6.s[1], w5
-; CHECK-GI-NEXT:    mov v7.s[1], w5
-; CHECK-GI-NEXT:    neg v16.4s, v16.4s
-; CHECK-GI-NEXT:    mov v0.s[2], wzr
+; CHECK-GI-NEXT:    mov v0.s[2], w2
 ; CHECK-GI-NEXT:    mov v1.s[2], w2
 ; CHECK-GI-NEXT:    mov v2.s[2], w8
-; CHECK-GI-NEXT:    mov v3.s[2], w2
-; CHECK-GI-NEXT:    mov v4.s[2], w8
+; CHECK-GI-NEXT:    mov v3.s[2], w6
+; CHECK-GI-NEXT:    mov v4.s[2], w6
 ; CHECK-GI-NEXT:    mov v5.s[2], w9
-; CHECK-GI-NEXT:    mov v6.s[2], w6
-; CHECK-GI-NEXT:    mov v7.s[2], w6
+; CHECK-GI-NEXT:    mov v0.s[3], w3
 ; CHECK-GI-NEXT:    mov v1.s[3], w3
-; CHECK-GI-NEXT:    sub v0.4s, v0.4s, v2.4s
-; CHECK-GI-NEXT:    mov v3.s[3], w3
-; CHECK-GI-NEXT:    and v2.16b, v16.16b, v17.16b
-; CHECK-GI-NEXT:    neg v4.4s, v4.4s
-; CHECK-GI-NEXT:    and v0.16b, v0.16b, v5.16b
-; CHECK-GI-NEXT:    ushr v1.4s, v1.4s, #3
+; CHECK-GI-NEXT:    neg v2.4s, v2.4s
+; CHECK-GI-NEXT:    ushl v4.4s, v4.4s, v5.4s
+; CHECK-GI-NEXT:    ushr v0.4s, v0.4s, #3
+; CHECK-GI-NEXT:    shl v1.4s, v1.4s, #29
 ; CHECK-GI-NEXT:    ushl v2.4s, v3.4s, v2.4s
-; CHECK-GI-NEXT:    ushl v3.4s, v6.4s, v4.4s
-; CHECK-GI-NEXT:    ushl v0.4s, v7.4s, v0.4s
-; CHECK-GI-NEXT:    orr v1.16b, v1.16b, v2.16b
-; CHECK-GI-NEXT:    orr v0.16b, v3.16b, v0.16b
-; CHECK-GI-NEXT:    mov s2, v1.s[1]
-; CHECK-GI-NEXT:    mov s3, v1.s[2]
-; CHECK-GI-NEXT:    mov s4, v1.s[3]
-; CHECK-GI-NEXT:    fmov w0, s1
-; CHECK-GI-NEXT:    mov s5, v0.s[1]
-; CHECK-GI-NEXT:    mov s6, v0.s[2]
-; CHECK-GI-NEXT:    fmov w4, s0
+; CHECK-GI-NEXT:    orr v0.16b, v0.16b, v1.16b
+; CHECK-GI-NEXT:    orr v1.16b, v2.16b, v4.16b
+; CHECK-GI-NEXT:    mov s2, v0.s[1]
+; CHECK-GI-NEXT:    mov s3, v0.s[2]
+; CHECK-GI-NEXT:    mov s4, v0.s[3]
+; CHECK-GI-NEXT:    mov s5, v1.s[1]
+; CHECK-GI-NEXT:    mov s6, v1.s[2]
+; CHECK-GI-NEXT:    fmov w0, s0
+; CHECK-GI-NEXT:    fmov w4, s1
 ; CHECK-GI-NEXT:    fmov w1, s2
 ; CHECK-GI-NEXT:    fmov w2, s3
 ; CHECK-GI-NEXT:    fmov w3, s4
@@ -3864,17 +3724,12 @@ define <8 x i32> @rotl_v8i32_c(<8 x i32> %a) {
 ;
 ; CHECK-GI-LABEL: rotl_v8i32_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v2.4s, #3
-; CHECK-GI-NEXT:    movi v3.4s, #31
-; CHECK-GI-NEXT:    shl v4.4s, v1.4s, #3
-; CHECK-GI-NEXT:    neg v2.4s, v2.4s
-; CHECK-GI-NEXT:    and v2.16b, v2.16b, v3.16b
-; CHECK-GI-NEXT:    shl v3.4s, v0.4s, #3
-; CHECK-GI-NEXT:    neg v2.4s, v2.4s
-; CHECK-GI-NEXT:    ushl v0.4s, v0.4s, v2.4s
-; CHECK-GI-NEXT:    ushl v1.4s, v1.4s, v2.4s
-; CHECK-GI-NEXT:    orr v0.16b, v3.16b, v0.16b
-; CHECK-GI-NEXT:    orr v1.16b, v4.16b, v1.16b
+; CHECK-GI-NEXT:    shl v2.4s, v0.4s, #3
+; CHECK-GI-NEXT:    shl v3.4s, v1.4s, #3
+; CHECK-GI-NEXT:    ushr v0.4s, v0.4s, #29
+; CHECK-GI-NEXT:    ushr v1.4s, v1.4s, #29
+; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
+; CHECK-GI-NEXT:    orr v1.16b, v3.16b, v1.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <8 x i32> @llvm.fshl(<8 x i32> %a, <8 x i32> %a, <8 x i32> <i32 3, i32 3, i32 3, i32 3, i32 3, i32 3, i32 3, i32 3>)
@@ -3894,16 +3749,12 @@ define <8 x i32> @rotr_v8i32_c(<8 x i32> %a) {
 ;
 ; CHECK-GI-LABEL: rotr_v8i32_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v2.4s, #3
-; CHECK-GI-NEXT:    movi v3.4s, #31
-; CHECK-GI-NEXT:    ushr v4.4s, v1.4s, #3
-; CHECK-GI-NEXT:    neg v2.4s, v2.4s
-; CHECK-GI-NEXT:    and v2.16b, v2.16b, v3.16b
-; CHECK-GI-NEXT:    ushr v3.4s, v0.4s, #3
-; CHECK-GI-NEXT:    ushl v0.4s, v0.4s, v2.4s
-; CHECK-GI-NEXT:    ushl v1.4s, v1.4s, v2.4s
-; CHECK-GI-NEXT:    orr v0.16b, v3.16b, v0.16b
-; CHECK-GI-NEXT:    orr v1.16b, v4.16b, v1.16b
+; CHECK-GI-NEXT:    ushr v2.4s, v0.4s, #3
+; CHECK-GI-NEXT:    ushr v3.4s, v1.4s, #3
+; CHECK-GI-NEXT:    shl v0.4s, v0.4s, #29
+; CHECK-GI-NEXT:    shl v1.4s, v1.4s, #29
+; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
+; CHECK-GI-NEXT:    orr v1.16b, v3.16b, v1.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <8 x i32> @llvm.fshr(<8 x i32> %a, <8 x i32> %a, <8 x i32> <i32 3, i32 3, i32 3, i32 3, i32 3, i32 3, i32 3, i32 3>)
@@ -3920,16 +3771,9 @@ define <2 x i64> @rotl_v2i64_c(<2 x i64> %a) {
 ;
 ; CHECK-GI-LABEL: rotl_v2i64_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    adrp x8, .LCPI112_1
-; CHECK-GI-NEXT:    ldr q1, [x8, :lo12:.LCPI112_1]
-; CHECK-GI-NEXT:    adrp x8, .LCPI112_0
-; CHECK-GI-NEXT:    ldr q2, [x8, :lo12:.LCPI112_0]
-; CHECK-GI-NEXT:    neg v1.2d, v1.2d
-; CHECK-GI-NEXT:    and v1.16b, v1.16b, v2.16b
-; CHECK-GI-NEXT:    shl v2.2d, v0.2d, #3
-; CHECK-GI-NEXT:    neg v1.2d, v1.2d
-; CHECK-GI-NEXT:    ushl v0.2d, v0.2d, v1.2d
-; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
+; CHECK-GI-NEXT:    shl v1.2d, v0.2d, #3
+; CHECK-GI-NEXT:    ushr v0.2d, v0.2d, #61
+; CHECK-GI-NEXT:    orr v0.16b, v1.16b, v0.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <2 x i64> @llvm.fshl(<2 x i64> %a, <2 x i64> %a, <2 x i64> <i64 3, i64 3>)
@@ -3946,15 +3790,9 @@ define <2 x i64> @rotr_v2i64_c(<2 x i64> %a) {
 ;
 ; CHECK-GI-LABEL: rotr_v2i64_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    adrp x8, .LCPI113_1
-; CHECK-GI-NEXT:    ldr q1, [x8, :lo12:.LCPI113_1]
-; CHECK-GI-NEXT:    adrp x8, .LCPI113_0
-; CHECK-GI-NEXT:    ldr q2, [x8, :lo12:.LCPI113_0]
-; CHECK-GI-NEXT:    neg v1.2d, v1.2d
-; CHECK-GI-NEXT:    and v1.16b, v1.16b, v2.16b
-; CHECK-GI-NEXT:    ushr v2.2d, v0.2d, #3
-; CHECK-GI-NEXT:    ushl v0.2d, v0.2d, v1.2d
-; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
+; CHECK-GI-NEXT:    ushr v1.2d, v0.2d, #3
+; CHECK-GI-NEXT:    shl v0.2d, v0.2d, #61
+; CHECK-GI-NEXT:    orr v0.16b, v1.16b, v0.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <2 x i64> @llvm.fshr(<2 x i64> %a, <2 x i64> %a, <2 x i64> <i64 3, i64 3>)
@@ -3974,19 +3812,12 @@ define <4 x i64> @rotl_v4i64_c(<4 x i64> %a) {
 ;
 ; CHECK-GI-LABEL: rotl_v4i64_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    adrp x8, .LCPI114_1
-; CHECK-GI-NEXT:    shl v4.2d, v1.2d, #3
-; CHECK-GI-NEXT:    ldr q2, [x8, :lo12:.LCPI114_1]
-; CHECK-GI-NEXT:    adrp x8, .LCPI114_0
-; CHECK-GI-NEXT:    ldr q3, [x8, :lo12:.LCPI114_0]
-; CHECK-GI-NEXT:    neg v2.2d, v2.2d
-; CHECK-GI-NEXT:    and v2.16b, v2.16b, v3.16b
-; CHECK-GI-NEXT:    shl v3.2d, v0.2d, #3
-; CHECK-GI-NEXT:    neg v2.2d, v2.2d
-; CHECK-GI-NEXT:    ushl v0.2d, v0.2d, v2.2d
-; CHECK-GI-NEXT:    ushl v1.2d, v1.2d, v2.2d
-; CHECK-GI-NEXT:    orr v0.16b, v3.16b, v0.16b
-; CHECK-GI-NEXT:    orr v1.16b, v4.16b, v1.16b
+; CHECK-GI-NEXT:    shl v2.2d, v0.2d, #3
+; CHECK-GI-NEXT:    shl v3.2d, v1.2d, #3
+; CHECK-GI-NEXT:    ushr v0.2d, v0.2d, #61
+; CHECK-GI-NEXT:    ushr v1.2d, v1.2d, #61
+; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
+; CHECK-GI-NEXT:    orr v1.16b, v3.16b, v1.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <4 x i64> @llvm.fshl(<4 x i64> %a, <4 x i64> %a, <4 x i64> <i64 3, i64 3, i64 3, i64 3>)
@@ -4006,18 +3837,12 @@ define <4 x i64> @rotr_v4i64_c(<4 x i64> %a) {
 ;
 ; CHECK-GI-LABEL: rotr_v4i64_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    adrp x8, .LCPI115_1
-; CHECK-GI-NEXT:    ushr v4.2d, v1.2d, #3
-; CHECK-GI-NEXT:    ldr q2, [x8, :lo12:.LCPI115_1]
-; CHECK-GI-NEXT:    adrp x8, .LCPI115_0
-; CHECK-GI-NEXT:    ldr q3, [x8, :lo12:.LCPI115_0]
-; CHECK-GI-NEXT:    neg v2.2d, v2.2d
-; CHECK-GI-NEXT:    and v2.16b, v2.16b, v3.16b
-; CHECK-GI-NEXT:    ushr v3.2d, v0.2d, #3
-; CHECK-GI-NEXT:    ushl v0.2d, v0.2d, v2.2d
-; CHECK-GI-NEXT:    ushl v1.2d, v1.2d, v2.2d
-; CHECK-GI-NEXT:    orr v0.16b, v3.16b, v0.16b
-; CHECK-GI-NEXT:    orr v1.16b, v4.16b, v1.16b
+; CHECK-GI-NEXT:    ushr v2.2d, v0.2d, #3
+; CHECK-GI-NEXT:    ushr v3.2d, v1.2d, #3
+; CHECK-GI-NEXT:    shl v0.2d, v0.2d, #61
+; CHECK-GI-NEXT:    shl v1.2d, v1.2d, #61
+; CHECK-GI-NEXT:    orr v0.16b, v2.16b, v0.16b
+; CHECK-GI-NEXT:    orr v1.16b, v3.16b, v1.16b
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <4 x i64> @llvm.fshr(<4 x i64> %a, <4 x i64> %a, <4 x i64> <i64 3, i64 3, i64 3, i64 3>)
@@ -4080,126 +3905,66 @@ entry:
 }
 
 define <8 x i8> @fshl_v8i8_c(<8 x i8> %a, <8 x i8> %b) {
-; CHECK-SD-LABEL: fshl_v8i8_c:
-; CHECK-SD:       // %bb.0: // %entry
-; CHECK-SD-NEXT:    shl v0.8b, v0.8b, #3
-; CHECK-SD-NEXT:    usra v0.8b, v1.8b, #5
-; CHECK-SD-NEXT:    ret
-;
-; CHECK-GI-LABEL: fshl_v8i8_c:
-; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v2.8b, #3
-; CHECK-GI-NEXT:    movi v3.8b, #8
-; CHECK-GI-NEXT:    shl v0.8b, v0.8b, #3
-; CHECK-GI-NEXT:    sub v2.8b, v3.8b, v2.8b
-; CHECK-GI-NEXT:    neg v2.8b, v2.8b
-; CHECK-GI-NEXT:    ushl v1.8b, v1.8b, v2.8b
-; CHECK-GI-NEXT:    orr v0.8b, v0.8b, v1.8b
-; CHECK-GI-NEXT:    ret
+; CHECK-LABEL: fshl_v8i8_c:
+; CHECK:       // %bb.0: // %entry
+; CHECK-NEXT:    shl v0.8b, v0.8b, #3
+; CHECK-NEXT:    usra v0.8b, v1.8b, #5
+; CHECK-NEXT:    ret
 entry:
   %d = call <8 x i8> @llvm.fshl(<8 x i8> %a, <8 x i8> %b, <8 x i8> <i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3>)
   ret <8 x i8> %d
 }
 
 define <8 x i8> @fshr_v8i8_c(<8 x i8> %a, <8 x i8> %b) {
-; CHECK-SD-LABEL: fshr_v8i8_c:
-; CHECK-SD:       // %bb.0: // %entry
-; CHECK-SD-NEXT:    shl v0.8b, v0.8b, #5
-; CHECK-SD-NEXT:    usra v0.8b, v1.8b, #3
-; CHECK-SD-NEXT:    ret
-;
-; CHECK-GI-LABEL: fshr_v8i8_c:
-; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v2.8b, #3
-; CHECK-GI-NEXT:    movi v3.8b, #8
-; CHECK-GI-NEXT:    sub v2.8b, v3.8b, v2.8b
-; CHECK-GI-NEXT:    ushl v0.8b, v0.8b, v2.8b
-; CHECK-GI-NEXT:    usra v0.8b, v1.8b, #3
-; CHECK-GI-NEXT:    ret
+; CHECK-LABEL: fshr_v8i8_c:
+; CHECK:       // %bb.0: // %entry
+; CHECK-NEXT:    shl v0.8b, v0.8b, #5
+; CHECK-NEXT:    usra v0.8b, v1.8b, #3
+; CHECK-NEXT:    ret
 entry:
   %d = call <8 x i8> @llvm.fshr(<8 x i8> %a, <8 x i8> %b, <8 x i8> <i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3>)
   ret <8 x i8> %d
 }
 
 define <16 x i8> @fshl_v16i8_c(<16 x i8> %a, <16 x i8> %b) {
-; CHECK-SD-LABEL: fshl_v16i8_c:
-; CHECK-SD:       // %bb.0: // %entry
-; CHECK-SD-NEXT:    shl v0.16b, v0.16b, #3
-; CHECK-SD-NEXT:    usra v0.16b, v1.16b, #5
-; CHECK-SD-NEXT:    ret
-;
-; CHECK-GI-LABEL: fshl_v16i8_c:
-; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v2.16b, #3
-; CHECK-GI-NEXT:    movi v3.16b, #8
-; CHECK-GI-NEXT:    shl v0.16b, v0.16b, #3
-; CHECK-GI-NEXT:    sub v2.16b, v3.16b, v2.16b
-; CHECK-GI-NEXT:    neg v2.16b, v2.16b
-; CHECK-GI-NEXT:    ushl v1.16b, v1.16b, v2.16b
-; CHECK-GI-NEXT:    orr v0.16b, v0.16b, v1.16b
-; CHECK-GI-NEXT:    ret
+; CHECK-LABEL: fshl_v16i8_c:
+; CHECK:       // %bb.0: // %entry
+; CHECK-NEXT:    shl v0.16b, v0.16b, #3
+; CHECK-NEXT:    usra v0.16b, v1.16b, #5
+; CHECK-NEXT:    ret
 entry:
   %d = call <16 x i8> @llvm.fshl(<16 x i8> %a, <16 x i8> %b, <16 x i8> <i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3>)
   ret <16 x i8> %d
 }
 
 define <16 x i8> @fshr_v16i8_c(<16 x i8> %a, <16 x i8> %b) {
-; CHECK-SD-LABEL: fshr_v16i8_c:
-; CHECK-SD:       // %bb.0: // %entry
-; CHECK-SD-NEXT:    shl v0.16b, v0.16b, #5
-; CHECK-SD-NEXT:    usra v0.16b, v1.16b, #3
-; CHECK-SD-NEXT:    ret
-;
-; CHECK-GI-LABEL: fshr_v16i8_c:
-; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v2.16b, #3
-; CHECK-GI-NEXT:    movi v3.16b, #8
-; CHECK-GI-NEXT:    sub v2.16b, v3.16b, v2.16b
-; CHECK-GI-NEXT:    ushl v0.16b, v0.16b, v2.16b
-; CHECK-GI-NEXT:    usra v0.16b, v1.16b, #3
-; CHECK-GI-NEXT:    ret
+; CHECK-LABEL: fshr_v16i8_c:
+; CHECK:       // %bb.0: // %entry
+; CHECK-NEXT:    shl v0.16b, v0.16b, #5
+; CHECK-NEXT:    usra v0.16b, v1.16b, #3
+; CHECK-NEXT:    ret
 entry:
   %d = call <16 x i8> @llvm.fshr(<16 x i8> %a, <16 x i8> %b, <16 x i8> <i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3, i8 3>)
   ret <16 x i8> %d
 }
 
 define <4 x i16> @fshl_v4i16_c(<4 x i16> %a, <4 x i16> %b) {
-; CHECK-SD-LABEL: fshl_v4i16_c:
-; CHECK-SD:       // %bb.0: // %entry
-; CHECK-SD-NEXT:    shl v0.4h, v0.4h, #3
-; CHECK-SD-NEXT:    usra v0.4h, v1.4h, #13
-; CHECK-SD-NEXT:    ret
-;
-; CHECK-GI-LABEL: fshl_v4i16_c:
-; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v2.4h, #3
-; CHECK-GI-NEXT:    movi v3.4h, #16
-; CHECK-GI-NEXT:    shl v0.4h, v0.4h, #3
-; CHECK-GI-NEXT:    sub v2.4h, v3.4h, v2.4h
-; CHECK-GI-NEXT:    neg v2.4h, v2.4h
-; CHECK-GI-NEXT:    ushl v1.4h, v1.4h, v2.4h
-; CHECK-GI-NEXT:    orr v0.8b, v0.8b, v1.8b
-; CHECK-GI-NEXT:    ret
+; CHECK-LABEL: fshl_v4i16_c:
+; CHECK:       // %bb.0: // %entry
+; CHECK-NEXT:    shl v0.4h, v0.4h, #3
+; CHECK-NEXT:    usra v0.4h, v1.4h, #13
+; CHECK-NEXT:    ret
 entry:
   %d = call <4 x i16> @llvm.fshl(<4 x i16> %a, <4 x i16> %b, <4 x i16> <i16 3, i16 3, i16 3, i16 3>)
   ret <4 x i16> %d
 }
 
 define <4 x i16> @fshr_v4i16_c(<4 x i16> %a, <4 x i16> %b) {
-; CHECK-SD-LABEL: fshr_v4i16_c:
-; CHECK-SD:       // %bb.0: // %entry
-; CHECK-SD-NEXT:    shl v0.4h, v0.4h, #13
-; CHECK-SD-NEXT:    usra v0.4h, v1.4h, #3
-; CHECK-SD-NEXT:    ret
-;
-; CHECK-GI-LABEL: fshr_v4i16_c:
-; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v2.4h, #3
-; CHECK-GI-NEXT:    movi v3.4h, #16
-; CHECK-GI-NEXT:    sub v2.4h, v3.4h, v2.4h
-; CHECK-GI-NEXT:    ushl v0.4h, v0.4h, v2.4h
-; CHECK-GI-NEXT:    usra v0.4h, v1.4h, #3
-; CHECK-GI-NEXT:    ret
+; CHECK-LABEL: fshr_v4i16_c:
+; CHECK:       // %bb.0: // %entry
+; CHECK-NEXT:    shl v0.4h, v0.4h, #13
+; CHECK-NEXT:    usra v0.4h, v1.4h, #3
+; CHECK-NEXT:    ret
 entry:
   %d = call <4 x i16> @llvm.fshr(<4 x i16> %a, <4 x i16> %b, <4 x i16> <i16 3, i16 3, i16 3, i16 3>)
   ret <4 x i16> %d
@@ -4219,25 +3984,24 @@ define <7 x i16> @fshl_v7i16_c(<7 x i16> %a, <7 x i16> %b) {
 ;
 ; CHECK-GI-LABEL: fshl_v7i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    mov w8, #3 // =0x3
-; CHECK-GI-NEXT:    mov w9, #16 // =0x10
-; CHECK-GI-NEXT:    fmov s2, w9
-; CHECK-GI-NEXT:    fmov s3, w8
-; CHECK-GI-NEXT:    mov v2.h[1], w9
-; CHECK-GI-NEXT:    mov v3.h[1], w8
-; CHECK-GI-NEXT:    mov v2.h[2], w9
-; CHECK-GI-NEXT:    mov v3.h[2], w8
-; CHECK-GI-NEXT:    mov v2.h[3], w9
-; CHECK-GI-NEXT:    mov v3.h[3], w8
-; CHECK-GI-NEXT:    mov v2.h[4], w9
-; CHECK-GI-NEXT:    mov v3.h[4], w8
-; CHECK-GI-NEXT:    mov v2.h[5], w9
-; CHECK-GI-NEXT:    mov v3.h[5], w8
-; CHECK-GI-NEXT:    mov v2.h[6], w9
-; CHECK-GI-NEXT:    mov v3.h[6], w8
-; CHECK-GI-NEXT:    sub v2.8h, v2.8h, v3.8h
-; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v3.8h
+; CHECK-GI-NEXT:    mov w8, #13 // =0xd
+; CHECK-GI-NEXT:    mov w9, #3 // =0x3
+; CHECK-GI-NEXT:    fmov s2, w8
+; CHECK-GI-NEXT:    fmov s3, w9
+; CHECK-GI-NEXT:    mov v2.h[1], w8
+; CHECK-GI-NEXT:    mov v3.h[1], w9
+; CHECK-GI-NEXT:    mov v2.h[2], w8
+; CHECK-GI-NEXT:    mov v3.h[2], w9
+; CHECK-GI-NEXT:    mov v2.h[3], w8
+; CHECK-GI-NEXT:    mov v3.h[3], w9
+; CHECK-GI-NEXT:    mov v2.h[4], w8
+; CHECK-GI-NEXT:    mov v3.h[4], w9
+; CHECK-GI-NEXT:    mov v2.h[5], w8
+; CHECK-GI-NEXT:    mov v3.h[5], w9
+; CHECK-GI-NEXT:    mov v2.h[6], w8
+; CHECK-GI-NEXT:    mov v3.h[6], w9
 ; CHECK-GI-NEXT:    neg v2.8h, v2.8h
+; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v3.8h
 ; CHECK-GI-NEXT:    ushl v1.8h, v1.8h, v2.8h
 ; CHECK-GI-NEXT:    orr v0.16b, v0.16b, v1.16b
 ; CHECK-GI-NEXT:    ret
@@ -4261,25 +4025,24 @@ define <7 x i16> @fshr_v7i16_c(<7 x i16> %a, <7 x i16> %b) {
 ; CHECK-GI-LABEL: fshr_v7i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
 ; CHECK-GI-NEXT:    mov w8, #3 // =0x3
-; CHECK-GI-NEXT:    mov w9, #16 // =0x10
-; CHECK-GI-NEXT:    fmov s2, w9
-; CHECK-GI-NEXT:    fmov s3, w8
-; CHECK-GI-NEXT:    mov v2.h[1], w9
-; CHECK-GI-NEXT:    mov v3.h[1], w8
-; CHECK-GI-NEXT:    mov v2.h[2], w9
-; CHECK-GI-NEXT:    mov v3.h[2], w8
-; CHECK-GI-NEXT:    mov v2.h[3], w9
-; CHECK-GI-NEXT:    mov v3.h[3], w8
-; CHECK-GI-NEXT:    mov v2.h[4], w9
-; CHECK-GI-NEXT:    mov v3.h[4], w8
-; CHECK-GI-NEXT:    mov v2.h[5], w9
-; CHECK-GI-NEXT:    mov v3.h[5], w8
-; CHECK-GI-NEXT:    mov v2.h[6], w9
-; CHECK-GI-NEXT:    mov v3.h[6], w8
-; CHECK-GI-NEXT:    sub v2.8h, v2.8h, v3.8h
-; CHECK-GI-NEXT:    neg v3.8h, v3.8h
-; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v2.8h
-; CHECK-GI-NEXT:    ushl v1.8h, v1.8h, v3.8h
+; CHECK-GI-NEXT:    mov w9, #13 // =0xd
+; CHECK-GI-NEXT:    fmov s2, w8
+; CHECK-GI-NEXT:    fmov s3, w9
+; CHECK-GI-NEXT:    mov v2.h[1], w8
+; CHECK-GI-NEXT:    mov v3.h[1], w9
+; CHECK-GI-NEXT:    mov v2.h[2], w8
+; CHECK-GI-NEXT:    mov v3.h[2], w9
+; CHECK-GI-NEXT:    mov v2.h[3], w8
+; CHECK-GI-NEXT:    mov v3.h[3], w9
+; CHECK-GI-NEXT:    mov v2.h[4], w8
+; CHECK-GI-NEXT:    mov v3.h[4], w9
+; CHECK-GI-NEXT:    mov v2.h[5], w8
+; CHECK-GI-NEXT:    mov v3.h[5], w9
+; CHECK-GI-NEXT:    mov v2.h[6], w8
+; CHECK-GI-NEXT:    mov v3.h[6], w9
+; CHECK-GI-NEXT:    neg v2.8h, v2.8h
+; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v3.8h
+; CHECK-GI-NEXT:    ushl v1.8h, v1.8h, v2.8h
 ; CHECK-GI-NEXT:    orr v0.16b, v0.16b, v1.16b
 ; CHECK-GI-NEXT:    ret
 entry:
@@ -4288,42 +4051,22 @@ entry:
 }
 
 define <8 x i16> @fshl_v8i16_c(<8 x i16> %a, <8 x i16> %b) {
-; CHECK-SD-LABEL: fshl_v8i16_c:
-; CHECK-SD:       // %bb.0: // %entry
-; CHECK-SD-NEXT:    shl v0.8h, v0.8h, #3
-; CHECK-SD-NEXT:    usra v0.8h, v1.8h, #13
-; CHECK-SD-NEXT:    ret
-;
-; CHECK-GI-LABEL: fshl_v8i16_c:
-; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v2.8h, #3
-; CHECK-GI-NEXT:    movi v3.8h, #16
-; CHECK-GI-NEXT:    shl v0.8h, v0.8h, #3
-; CHECK-GI-NEXT:    sub v2.8h, v3.8h, v2.8h
-; CHECK-GI-NEXT:    neg v2.8h, v2.8h
-; CHECK-GI-NEXT:    ushl v1.8h, v1.8h, v2.8h
-; CHECK-GI-NEXT:    orr v0.16b, v0.16b, v1.16b
-; CHECK-GI-NEXT:    ret
+; CHECK-LABEL: fshl_v8i16_c:
+; CHECK:       // %bb.0: // %entry
+; CHECK-NEXT:    shl v0.8h, v0.8h, #3
+; CHECK-NEXT:    usra v0.8h, v1.8h, #13
+; CHECK-NEXT:    ret
 entry:
   %d = call <8 x i16> @llvm.fshl(<8 x i16> %a, <8 x i16> %b, <8 x i16> <i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3>)
   ret <8 x i16> %d
 }
 
 define <8 x i16> @fshr_v8i16_c(<8 x i16> %a, <8 x i16> %b) {
-; CHECK-SD-LABEL: fshr_v8i16_c:
-; CHECK-SD:       // %bb.0: // %entry
-; CHECK-SD-NEXT:    shl v0.8h, v0.8h, #13
-; CHECK-SD-NEXT:    usra v0.8h, v1.8h, #3
-; CHECK-SD-NEXT:    ret
-;
-; CHECK-GI-LABEL: fshr_v8i16_c:
-; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v2.8h, #3
-; CHECK-GI-NEXT:    movi v3.8h, #16
-; CHECK-GI-NEXT:    sub v2.8h, v3.8h, v2.8h
-; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v2.8h
-; CHECK-GI-NEXT:    usra v0.8h, v1.8h, #3
-; CHECK-GI-NEXT:    ret
+; CHECK-LABEL: fshr_v8i16_c:
+; CHECK:       // %bb.0: // %entry
+; CHECK-NEXT:    shl v0.8h, v0.8h, #13
+; CHECK-NEXT:    usra v0.8h, v1.8h, #3
+; CHECK-NEXT:    ret
 entry:
   %d = call <8 x i16> @llvm.fshr(<8 x i16> %a, <8 x i16> %b, <8 x i16> <i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3>)
   ret <8 x i16> %d
@@ -4340,16 +4083,10 @@ define <16 x i16> @fshl_v16i16_c(<16 x i16> %a, <16 x i16> %b) {
 ;
 ; CHECK-GI-LABEL: fshl_v16i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v4.8h, #3
-; CHECK-GI-NEXT:    movi v5.8h, #16
 ; CHECK-GI-NEXT:    shl v0.8h, v0.8h, #3
 ; CHECK-GI-NEXT:    shl v1.8h, v1.8h, #3
-; CHECK-GI-NEXT:    sub v4.8h, v5.8h, v4.8h
-; CHECK-GI-NEXT:    neg v4.8h, v4.8h
-; CHECK-GI-NEXT:    ushl v2.8h, v2.8h, v4.8h
-; CHECK-GI-NEXT:    ushl v3.8h, v3.8h, v4.8h
-; CHECK-GI-NEXT:    orr v0.16b, v0.16b, v2.16b
-; CHECK-GI-NEXT:    orr v1.16b, v1.16b, v3.16b
+; CHECK-GI-NEXT:    usra v0.8h, v2.8h, #13
+; CHECK-GI-NEXT:    usra v1.8h, v3.8h, #13
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <16 x i16> @llvm.fshl(<16 x i16> %a, <16 x i16> %b, <16 x i16> <i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3, i16 3>)
@@ -4367,11 +4104,8 @@ define <16 x i16> @fshr_v16i16_c(<16 x i16> %a, <16 x i16> %b) {
 ;
 ; CHECK-GI-LABEL: fshr_v16i16_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v4.8h, #3
-; CHECK-GI-NEXT:    movi v5.8h, #16
-; CHECK-GI-NEXT:    sub v4.8h, v5.8h, v4.8h
-; CHECK-GI-NEXT:    ushl v0.8h, v0.8h, v4.8h
-; CHECK-GI-NEXT:    ushl v1.8h, v1.8h, v4.8h
+; CHECK-GI-NEXT:    shl v0.8h, v0.8h, #13
+; CHECK-GI-NEXT:    shl v1.8h, v1.8h, #13
 ; CHECK-GI-NEXT:    usra v0.8h, v2.8h, #3
 ; CHECK-GI-NEXT:    usra v1.8h, v3.8h, #3
 ; CHECK-GI-NEXT:    ret
@@ -4381,84 +4115,44 @@ entry:
 }
 
 define <2 x i32> @fshl_v2i32_c(<2 x i32> %a, <2 x i32> %b) {
-; CHECK-SD-LABEL: fshl_v2i32_c:
-; CHECK-SD:       // %bb.0: // %entry
-; CHECK-SD-NEXT:    shl v0.2s, v0.2s, #3
-; CHECK-SD-NEXT:    usra v0.2s, v1.2s, #29
-; CHECK-SD-NEXT:    ret
-;
-; CHECK-GI-LABEL: fshl_v2i32_c:
-; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v2.2s, #3
-; CHECK-GI-NEXT:    movi v3.2s, #32
-; CHECK-GI-NEXT:    shl v0.2s, v0.2s, #3
-; CHECK-GI-NEXT:    sub v2.2s, v3.2s, v2.2s
-; CHECK-GI-NEXT:    neg v2.2s, v2.2s
-; CHECK-GI-NEXT:    ushl v1.2s, v1.2s, v2.2s
-; CHECK-GI-NEXT:    orr v0.8b, v0.8b, v1.8b
-; CHECK-GI-NEXT:    ret
+; CHECK-LABEL: fshl_v2i32_c:
+; CHECK:       // %bb.0: // %entry
+; CHECK-NEXT:    shl v0.2s, v0.2s, #3
+; CHECK-NEXT:    usra v0.2s, v1.2s, #29
+; CHECK-NEXT:    ret
 entry:
   %d = call <2 x i32> @llvm.fshl(<2 x i32> %a, <2 x i32> %b, <2 x i32> <i32 3, i32 3>)
   ret <2 x i32> %d
 }
 
 define <2 x i32> @fshr_v2i32_c(<2 x i32> %a, <2 x i32> %b) {
-; CHECK-SD-LABEL: fshr_v2i32_c:
-; CHECK-SD:       // %bb.0: // %entry
-; CHECK-SD-NEXT:    shl v0.2s, v0.2s, #29
-; CHECK-SD-NEXT:    usra v0.2s, v1.2s, #3
-; CHECK-SD-NEXT:    ret
-;
-; CHECK-GI-LABEL: fshr_v2i32_c:
-; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v2.2s, #3
-; CHECK-GI-NEXT:    movi v3.2s, #32
-; CHECK-GI-NEXT:    sub v2.2s, v3.2s, v2.2s
-; CHECK-GI-NEXT:    ushl v0.2s, v0.2s, v2.2s
-; CHECK-GI-NEXT:    usra v0.2s, v1.2s, #3
-; CHECK-GI-NEXT:    ret
+; CHECK-LABEL: fshr_v2i32_c:
+; CHECK:       // %bb.0: // %entry
+; CHECK-NEXT:    shl v0.2s, v0.2s, #29
+; CHECK-NEXT:    usra v0.2s, v1.2s, #3
+; CHECK-NEXT:    ret
 entry:
   %d = call <2 x i32> @llvm.fshr(<2 x i32> %a, <2 x i32> %b, <2 x i32> <i32 3, i32 3>)
   ret <2 x i32> %d
 }
 
 define <4 x i32> @fshl_v4i32_c(<4 x i32> %a, <4 x i32> %b) {
-; CHECK-SD-LABEL: fshl_v4i32_c:
-; CHECK-SD:       // %bb.0: // %entry
-; CHECK-SD-NEXT:    shl v0.4s, v0.4s, #3
-; CHECK-SD-NEXT:    usra v0.4s, v1.4s, #29
-; CHECK-SD-NEXT:    ret
-;
-; CHECK-GI-LABEL: fshl_v4i32_c:
-; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v2.4s, #3
-; CHECK-GI-NEXT:    movi v3.4s, #32
-; CHECK-GI-NEXT:    shl v0.4s, v0.4s, #3
-; CHECK-GI-NEXT:    sub v2.4s, v3.4s, v2.4s
-; CHECK-GI-NEXT:    neg v2.4s, v2.4s
-; CHECK-GI-NEXT:    ushl v1.4s, v1.4s, v2.4s
-; CHECK-GI-NEXT:    orr v0.16b, v0.16b, v1.16b
-; CHECK-GI-NEXT:    ret
+; CHECK-LABEL: fshl_v4i32_c:
+; CHECK:       // %bb.0: // %entry
+; CHECK-NEXT:    shl v0.4s, v0.4s, #3
+; CHECK-NEXT:    usra v0.4s, v1.4s, #29
+; CHECK-NEXT:    ret
 entry:
   %d = call <4 x i32> @llvm.fshl(<4 x i32> %a, <4 x i32> %b, <4 x i32> <i32 3, i32 3, i32 3, i32 3>)
   ret <4 x i32> %d
 }
 
 define <4 x i32> @fshr_v4i32_c(<4 x i32> %a, <4 x i32> %b) {
-; CHECK-SD-LABEL: fshr_v4i32_c:
-; CHECK-SD:       // %bb.0: // %entry
-; CHECK-SD-NEXT:    shl v0.4s, v0.4s, #29
-; CHECK-SD-NEXT:    usra v0.4s, v1.4s, #3
-; CHECK-SD-NEXT:    ret
-;
-; CHECK-GI-LABEL: fshr_v4i32_c:
-; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v2.4s, #3
-; CHECK-GI-NEXT:    movi v3.4s, #32
-; CHECK-GI-NEXT:    sub v2.4s, v3.4s, v2.4s
-; CHECK-GI-NEXT:    ushl v0.4s, v0.4s, v2.4s
-; CHECK-GI-NEXT:    usra v0.4s, v1.4s, #3
-; CHECK-GI-NEXT:    ret
+; CHECK-LABEL: fshr_v4i32_c:
+; CHECK:       // %bb.0: // %entry
+; CHECK-NEXT:    shl v0.4s, v0.4s, #29
+; CHECK-NEXT:    usra v0.4s, v1.4s, #3
+; CHECK-NEXT:    ret
 entry:
   %d = call <4 x i32> @llvm.fshr(<4 x i32> %a, <4 x i32> %b, <4 x i32> <i32 3, i32 3, i32 3, i32 3>)
   ret <4 x i32> %d
@@ -4506,55 +4200,46 @@ define <7 x i32> @fshl_v7i32_c(<7 x i32> %a, <7 x i32> %b) {
 ;
 ; CHECK-GI-LABEL: fshl_v7i32_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    mov w8, #3 // =0x3
-; CHECK-GI-NEXT:    mov w9, #32 // =0x20
-; CHECK-GI-NEXT:    mov v2.s[0], w0
-; CHECK-GI-NEXT:    mov v0.s[0], w9
+; CHECK-GI-NEXT:    mov v0.s[0], w0
+; CHECK-GI-NEXT:    mov w8, #29 // =0x1d
+; CHECK-GI-NEXT:    mov v2.s[0], w7
 ; CHECK-GI-NEXT:    mov v1.s[0], w8
-; CHECK-GI-NEXT:    mov v3.s[0], w7
-; CHECK-GI-NEXT:    ldr s4, [sp]
-; CHECK-GI-NEXT:    mov v5.s[0], w4
-; CHECK-GI-NEXT:    mov v6.s[0], w8
-; CHECK-GI-NEXT:    ldr s7, [sp, #24]
-; CHECK-GI-NEXT:    movi v16.4s, #32
-; CHECK-GI-NEXT:    movi v17.4s, #3
-; CHECK-GI-NEXT:    mov v2.s[1], w1
-; CHECK-GI-NEXT:    ldr s18, [sp, #32]
-; CHECK-GI-NEXT:    mov v0.s[1], w9
+; CHECK-GI-NEXT:    mov w9, #3 // =0x3
+; CHECK-GI-NEXT:    mov v4.s[0], w4
+; CHECK-GI-NEXT:    mov v5.s[0], w9
+; CHECK-GI-NEXT:    ldr s3, [sp]
+; CHECK-GI-NEXT:    ldr s6, [sp, #24]
+; CHECK-GI-NEXT:    ldr s7, [sp, #32]
+; CHECK-GI-NEXT:    mov v0.s[1], w1
+; CHECK-GI-NEXT:    mov v2.s[1], v3.s[0]
+; CHECK-GI-NEXT:    ldr s3, [sp, #8]
 ; CHECK-GI-NEXT:    mov v1.s[1], w8
-; CHECK-GI-NEXT:    mov v3.s[1], v4.s[0]
-; CHECK-GI-NEXT:    ldr s4, [sp, #8]
-; CHECK-GI-NEXT:    mov v5.s[1], w5
-; CHECK-GI-NEXT:    mov v6.s[1], w8
-; CHECK-GI-NEXT:    mov v7.s[1], v18.s[0]
-; CHECK-GI-NEXT:    sub v16.4s, v16.4s, v17.4s
-; CHECK-GI-NEXT:    ldr s17, [sp, #40]
-; CHECK-GI-NEXT:    mov v2.s[2], w2
-; CHECK-GI-NEXT:    mov v0.s[2], w9
+; CHECK-GI-NEXT:    mov v6.s[1], v7.s[0]
+; CHECK-GI-NEXT:    mov v4.s[1], w5
+; CHECK-GI-NEXT:    mov v5.s[1], w9
+; CHECK-GI-NEXT:    ldr s7, [sp, #40]
+; CHECK-GI-NEXT:    mov v0.s[2], w2
+; CHECK-GI-NEXT:    mov v2.s[2], v3.s[0]
+; CHECK-GI-NEXT:    ldr s3, [sp, #16]
 ; CHECK-GI-NEXT:    mov v1.s[2], w8
-; CHECK-GI-NEXT:    mov v3.s[2], v4.s[0]
-; CHECK-GI-NEXT:    ldr s4, [sp, #16]
-; CHECK-GI-NEXT:    mov v5.s[2], w6
-; CHECK-GI-NEXT:    mov v6.s[2], w8
-; CHECK-GI-NEXT:    mov v7.s[2], v17.s[0]
-; CHECK-GI-NEXT:    mov v2.s[3], w3
-; CHECK-GI-NEXT:    sub v0.4s, v0.4s, v1.4s
-; CHECK-GI-NEXT:    mov v3.s[3], v4.s[0]
-; CHECK-GI-NEXT:    neg v1.4s, v16.4s
-; CHECK-GI-NEXT:    neg v0.4s, v0.4s
-; CHECK-GI-NEXT:    shl v2.4s, v2.4s, #3
-; CHECK-GI-NEXT:    ushl v1.4s, v3.4s, v1.4s
-; CHECK-GI-NEXT:    ushl v3.4s, v5.4s, v6.4s
-; CHECK-GI-NEXT:    ushl v0.4s, v7.4s, v0.4s
-; CHECK-GI-NEXT:    orr v1.16b, v2.16b, v1.16b
-; CHECK-GI-NEXT:    orr v0.16b, v3.16b, v0.16b
-; CHECK-GI-NEXT:    mov s2, v1.s[1]
-; CHECK-GI-NEXT:    mov s3, v1.s[2]
-; CHECK-GI-NEXT:    mov s4, v1.s[3]
-; CHECK-GI-NEXT:    fmov w0, s1
-; CHECK-GI-NEXT:    mov s5, v0.s[1]
-; CHECK-GI-NEXT:    mov s6, v0.s[2]
-; CHECK-GI-NEXT:    fmov w4, s0
+; CHECK-GI-NEXT:    mov v6.s[2], v7.s[0]
+; CHECK-GI-NEXT:    mov v4.s[2], w6
+; CHECK-GI-NEXT:    mov v5.s[2], w9
+; CHECK-GI-NEXT:    mov v0.s[3], w3
+; CHECK-GI-NEXT:    mov v2.s[3], v3.s[0]
+; CHECK-GI-NEXT:    neg v1.4s, v1.4s
+; CHECK-GI-NEXT:    ushl v3.4s, v4.4s, v5.4s
+; CHECK-GI-NEXT:    shl v0.4s, v0.4s, #3
+; CHECK-GI-NEXT:    ushl v1.4s, v6.4s, v1.4s
+; CHECK-GI-NEXT:    usra v0.4s, v2.4s, #29
+; CHECK-GI-NEXT:    orr v1.16b, v3.16b, v1.16b
+; CHECK-GI-NEXT:    mov s2, v0.s[1]
+; CHECK-GI-NEXT:    mov s3, v0.s[2]
+; CHECK-GI-NEXT:    mov s4, v0.s[3]
+; CHECK-GI-NEXT:    mov s5, v1.s[1]
+; CHECK-GI-NEXT:    mov s6, v1.s[2]
+; CHECK-GI-NEXT:    fmov w0, s0
+; CHECK-GI-NEXT:    fmov w4, s1
 ; CHECK-GI-NEXT:    fmov w1, s2
 ; CHECK-GI-NEXT:    fmov w2, s3
 ; CHECK-GI-NEXT:    fmov w3, s4
@@ -4610,44 +4295,37 @@ define <7 x i32> @fshr_v7i32_c(<7 x i32> %a, <7 x i32> %b) {
 ; CHECK-GI:       // %bb.0: // %entry
 ; CHECK-GI-NEXT:    mov v0.s[0], w0
 ; CHECK-GI-NEXT:    mov w8, #3 // =0x3
-; CHECK-GI-NEXT:    mov w9, #32 // =0x20
-; CHECK-GI-NEXT:    mov v1.s[0], w9
-; CHECK-GI-NEXT:    mov v2.s[0], w8
-; CHECK-GI-NEXT:    mov v3.s[0], w8
-; CHECK-GI-NEXT:    mov v4.s[0], w7
-; CHECK-GI-NEXT:    ldr s5, [sp]
-; CHECK-GI-NEXT:    mov v6.s[0], w4
-; CHECK-GI-NEXT:    ldr s7, [sp, #24]
-; CHECK-GI-NEXT:    ldr s16, [sp, #32]
-; CHECK-GI-NEXT:    movi v17.4s, #3
+; CHECK-GI-NEXT:    mov v2.s[0], w7
+; CHECK-GI-NEXT:    mov v1.s[0], w8
+; CHECK-GI-NEXT:    mov w9, #29 // =0x1d
+; CHECK-GI-NEXT:    mov v4.s[0], w4
+; CHECK-GI-NEXT:    mov v5.s[0], w9
+; CHECK-GI-NEXT:    ldr s3, [sp]
+; CHECK-GI-NEXT:    ldr s6, [sp, #24]
+; CHECK-GI-NEXT:    ldr s7, [sp, #32]
 ; CHECK-GI-NEXT:    mov v0.s[1], w1
-; CHECK-GI-NEXT:    mov v1.s[1], w9
-; CHECK-GI-NEXT:    mov v2.s[1], w8
-; CHECK-GI-NEXT:    mov v3.s[1], w8
-; CHECK-GI-NEXT:    mov v4.s[1], v5.s[0]
-; CHECK-GI-NEXT:    mov v7.s[1], v16.s[0]
-; CHECK-GI-NEXT:    ldr s16, [sp, #8]
-; CHECK-GI-NEXT:    mov v6.s[1], w5
-; CHECK-GI-NEXT:    movi v5.4s, #32
+; CHECK-GI-NEXT:    mov v2.s[1], v3.s[0]
+; CHECK-GI-NEXT:    ldr s3, [sp, #8]
+; CHECK-GI-NEXT:    mov v1.s[1], w8
+; CHECK-GI-NEXT:    mov v6.s[1], v7.s[0]
+; CHECK-GI-NEXT:    mov v4.s[1], w5
+; CHECK-GI-NEXT:    mov v5.s[1], w9
+; CHECK-GI-NEXT:    ldr s7, [sp, #40]
 ; CHECK-GI-NEXT:    mov v0.s[2], w2
-; CHECK-GI-NEXT:    mov v1.s[2], w9
-; CHECK-GI-NEXT:    mov v2.s[2], w8
-; CHECK-GI-NEXT:    mov v3.s[2], w8
-; CHECK-GI-NEXT:    mov v4.s[2], v16.s[0]
-; CHECK-GI-NEXT:    ldr s16, [sp, #40]
-; CHECK-GI-NEXT:    mov v6.s[2], w6
-; CHECK-GI-NEXT:    sub v5.4s, v5.4s, v17.4s
+; CHECK-GI-NEXT:    mov v2.s[2], v3.s[0]
+; CHECK-GI-NEXT:    ldr s3, [sp, #16]
+; CHECK-GI-NEXT:    mov v1.s[2], w8
+; CHECK-GI-NEXT:    mov v6.s[2], v7.s[0]
+; CHECK-GI-NEXT:    mov v4.s[2], w6
+; CHECK-GI-NEXT:    mov v5.s[2], w9
 ; CHECK-GI-NEXT:    mov v0.s[3], w3
-; CHECK-GI-NEXT:    mov v7.s[2], v16.s[0]
-; CHECK-GI-NEXT:    ldr s16, [sp, #16]
-; CHECK-GI-NEXT:    sub v1.4s, v1.4s, v2.4s
-; CHECK-GI-NEXT:    neg v2.4s, v3.4s
-; CHECK-GI-NEXT:    mov v4.s[3], v16.s[0]
-; CHECK-GI-NEXT:    ushl v0.4s, v0.4s, v5.4s
+; CHECK-GI-NEXT:    mov v2.s[3], v3.s[0]
+; CHECK-GI-NEXT:    neg v1.4s, v1.4s
+; CHECK-GI-NEXT:    ushl v3.4s, v4.4s, v5.4s
+; CHECK-GI-NEXT:    shl v0.4s, v0.4s, #29
 ; CHECK-GI-NEXT:    ushl v1.4s, v6.4s, v1.4s
-; CHECK-GI-NEXT:    ushl v2.4s, v7.4s, v2.4s
-; CHECK-GI-NEXT:    usra v0.4s, v4.4s, #3
-; CHECK-GI-NEXT:    orr v1.16b, v1.16b, v2.16b
+; CHECK-GI-NEXT:    usra v0.4s, v2.4s, #3
+; CHECK-GI-NEXT:    orr v1.16b, v3.16b, v1.16b
 ; CHECK-GI-NEXT:    mov s2, v0.s[1]
 ; CHECK-GI-NEXT:    mov s3, v0.s[2]
 ; CHECK-GI-NEXT:    mov s4, v0.s[3]
@@ -4677,16 +4355,10 @@ define <8 x i32> @fshl_v8i32_c(<8 x i32> %a, <8 x i32> %b) {
 ;
 ; CHECK-GI-LABEL: fshl_v8i32_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v4.4s, #3
-; CHECK-GI-NEXT:    movi v5.4s, #32
 ; CHECK-GI-NEXT:    shl v0.4s, v0.4s, #3
 ; CHECK-GI-NEXT:    shl v1.4s, v1.4s, #3
-; CHECK-GI-NEXT:    sub v4.4s, v5.4s, v4.4s
-; CHECK-GI-NEXT:    neg v4.4s, v4.4s
-; CHECK-GI-NEXT:    ushl v2.4s, v2.4s, v4.4s
-; CHECK-GI-NEXT:    ushl v3.4s, v3.4s, v4.4s
-; CHECK-GI-NEXT:    orr v0.16b, v0.16b, v2.16b
-; CHECK-GI-NEXT:    orr v1.16b, v1.16b, v3.16b
+; CHECK-GI-NEXT:    usra v0.4s, v2.4s, #29
+; CHECK-GI-NEXT:    usra v1.4s, v3.4s, #29
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <8 x i32> @llvm.fshl(<8 x i32> %a, <8 x i32> %b, <8 x i32> <i32 3, i32 3, i32 3, i32 3, i32 3, i32 3, i32 3, i32 3>)
@@ -4704,11 +4376,8 @@ define <8 x i32> @fshr_v8i32_c(<8 x i32> %a, <8 x i32> %b) {
 ;
 ; CHECK-GI-LABEL: fshr_v8i32_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    movi v4.4s, #3
-; CHECK-GI-NEXT:    movi v5.4s, #32
-; CHECK-GI-NEXT:    sub v4.4s, v5.4s, v4.4s
-; CHECK-GI-NEXT:    ushl v0.4s, v0.4s, v4.4s
-; CHECK-GI-NEXT:    ushl v1.4s, v1.4s, v4.4s
+; CHECK-GI-NEXT:    shl v0.4s, v0.4s, #29
+; CHECK-GI-NEXT:    shl v1.4s, v1.4s, #29
 ; CHECK-GI-NEXT:    usra v0.4s, v2.4s, #3
 ; CHECK-GI-NEXT:    usra v1.4s, v3.4s, #3
 ; CHECK-GI-NEXT:    ret
@@ -4718,46 +4387,22 @@ entry:
 }
 
 define <2 x i64> @fshl_v2i64_c(<2 x i64> %a, <2 x i64> %b) {
-; CHECK-SD-LABEL: fshl_v2i64_c:
-; CHECK-SD:       // %bb.0: // %entry
-; CHECK-SD-NEXT:    shl v0.2d, v0.2d, #3
-; CHECK-SD-NEXT:    usra v0.2d, v1.2d, #61
-; CHECK-SD-NEXT:    ret
-;
-; CHECK-GI-LABEL: fshl_v2i64_c:
-; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    adrp x8, .LCPI138_1
-; CHECK-GI-NEXT:    adrp x9, .LCPI138_0
-; CHECK-GI-NEXT:    shl v0.2d, v0.2d, #3
-; CHECK-GI-NEXT:    ldr q2, [x8, :lo12:.LCPI138_1]
-; CHECK-GI-NEXT:    ldr q3, [x9, :lo12:.LCPI138_0]
-; CHECK-GI-NEXT:    sub v2.2d, v3.2d, v2.2d
-; CHECK-GI-NEXT:    neg v2.2d, v2.2d
-; CHECK-GI-NEXT:    ushl v1.2d, v1.2d, v2.2d
-; CHECK-GI-NEXT:    orr v0.16b, v0.16b, v1.16b
-; CHECK-GI-NEXT:    ret
+; CHECK-LABEL: fshl_v2i64_c:
+; CHECK:       // %bb.0: // %entry
+; CHECK-NEXT:    shl v0.2d, v0.2d, #3
+; CHECK-NEXT:    usra v0.2d, v1.2d, #61
+; CHECK-NEXT:    ret
 entry:
   %d = call <2 x i64> @llvm.fshl(<2 x i64> %a, <2 x i64> %b, <2 x i64> <i64 3, i64 3>)
   ret <2 x i64> %d
 }
 
 define <2 x i64> @fshr_v2i64_c(<2 x i64> %a, <2 x i64> %b) {
-; CHECK-SD-LABEL: fshr_v2i64_c:
-; CHECK-SD:       // %bb.0: // %entry
-; CHECK-SD-NEXT:    shl v0.2d, v0.2d, #61
-; CHECK-SD-NEXT:    usra v0.2d, v1.2d, #3
-; CHECK-SD-NEXT:    ret
-;
-; CHECK-GI-LABEL: fshr_v2i64_c:
-; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    adrp x8, .LCPI139_1
-; CHECK-GI-NEXT:    adrp x9, .LCPI139_0
-; CHECK-GI-NEXT:    ldr q2, [x8, :lo12:.LCPI139_1]
-; CHECK-GI-NEXT:    ldr q3, [x9, :lo12:.LCPI139_0]
-; CHECK-GI-NEXT:    sub v2.2d, v3.2d, v2.2d
-; CHECK-GI-NEXT:    ushl v0.2d, v0.2d, v2.2d
-; CHECK-GI-NEXT:    usra v0.2d, v1.2d, #3
-; CHECK-GI-NEXT:    ret
+; CHECK-LABEL: fshr_v2i64_c:
+; CHECK:       // %bb.0: // %entry
+; CHECK-NEXT:    shl v0.2d, v0.2d, #61
+; CHECK-NEXT:    usra v0.2d, v1.2d, #3
+; CHECK-NEXT:    ret
 entry:
   %d = call <2 x i64> @llvm.fshr(<2 x i64> %a, <2 x i64> %b, <2 x i64> <i64 3, i64 3>)
   ret <2 x i64> %d
@@ -4774,18 +4419,10 @@ define <4 x i64> @fshl_v4i64_c(<4 x i64> %a, <4 x i64> %b) {
 ;
 ; CHECK-GI-LABEL: fshl_v4i64_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    adrp x8, .LCPI140_1
-; CHECK-GI-NEXT:    adrp x9, .LCPI140_0
 ; CHECK-GI-NEXT:    shl v0.2d, v0.2d, #3
-; CHECK-GI-NEXT:    ldr q4, [x8, :lo12:.LCPI140_1]
-; CHECK-GI-NEXT:    ldr q5, [x9, :lo12:.LCPI140_0]
 ; CHECK-GI-NEXT:    shl v1.2d, v1.2d, #3
-; CHECK-GI-NEXT:    sub v4.2d, v5.2d, v4.2d
-; CHECK-GI-NEXT:    neg v4.2d, v4.2d
-; CHECK-GI-NEXT:    ushl v2.2d, v2.2d, v4.2d
-; CHECK-GI-NEXT:    ushl v3.2d, v3.2d, v4.2d
-; CHECK-GI-NEXT:    orr v0.16b, v0.16b, v2.16b
-; CHECK-GI-NEXT:    orr v1.16b, v1.16b, v3.16b
+; CHECK-GI-NEXT:    usra v0.2d, v2.2d, #61
+; CHECK-GI-NEXT:    usra v1.2d, v3.2d, #61
 ; CHECK-GI-NEXT:    ret
 entry:
   %d = call <4 x i64> @llvm.fshl(<4 x i64> %a, <4 x i64> %b, <4 x i64> <i64 3, i64 3, i64 3, i64 3>)
@@ -4803,13 +4440,8 @@ define <4 x i64> @fshr_v4i64_c(<4 x i64> %a, <4 x i64> %b) {
 ;
 ; CHECK-GI-LABEL: fshr_v4i64_c:
 ; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    adrp x8, .LCPI141_1
-; CHECK-GI-NEXT:    adrp x9, .LCPI141_0
-; CHECK-GI-NEXT:    ldr q4, [x8, :lo12:.LCPI141_1]
-; CHECK-GI-NEXT:    ldr q5, [x9, :lo12:.LCPI141_0]
-; CHECK-GI-NEXT:    sub v4.2d, v5.2d, v4.2d
-; CHECK-GI-NEXT:    ushl v0.2d, v0.2d, v4.2d
-; CHECK-GI-NEXT:    ushl v1.2d, v1.2d, v4.2d
+; CHECK-GI-NEXT:    shl v0.2d, v0.2d, #61
+; CHECK-GI-NEXT:    shl v1.2d, v1.2d, #61
 ; CHECK-GI-NEXT:    usra v0.2d, v2.2d, #3
 ; CHECK-GI-NEXT:    usra v1.2d, v3.2d, #3
 ; CHECK-GI-NEXT:    ret
diff --git a/llvm/test/CodeGen/AArch64/funnel-shift.ll b/llvm/test/CodeGen/AArch64/funnel-shift.ll
index 05735e4a01c00..e5aa360f804c1 100644
--- a/llvm/test/CodeGen/AArch64/funnel-shift.ll
+++ b/llvm/test/CodeGen/AArch64/funnel-shift.ll
@@ -191,20 +191,10 @@ define i8 @fshl_i8_const_fold_overshift_1() {
 }
 
 define i8 @fshl_i8_const_fold_overshift_2() {
-; CHECK-SD-LABEL: fshl_i8_const_fold_overshift_2:
-; CHECK-SD:       // %bb.0:
-; CHECK-SD-NEXT:    mov w0, #120 // =0x78
-; CHECK-SD-NEXT:    ret
-;
-; CHECK-GI-LABEL: fshl_i8_const_fold_overshift_2:
-; CHECK-GI:       // %bb.0:
-; CHECK-GI-NEXT:    mov x8, xzr
-; CHECK-GI-NEXT:    mov w9, #-11 // =0xfffffff5
-; CHECK-GI-NEXT:    sub x8, x8, w9, uxtb
-; CHECK-GI-NEXT:    mov w9, #15 // =0xf
-; CHECK-GI-NEXT:    and x8, x8, #0x7
-; CHECK-GI-NEXT:    lsl w0, w9, w8
-; CHECK-GI-NEXT:    ret
+; CHECK-LABEL: fshl_i8_const_fold_overshift_2:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    mov w0, #120 // =0x78
+; CHECK-NEXT:    ret
   %f = call i8 @llvm.fshl.i8(i8 15, i8 15, i8 11)
   ret i8 %f
 }
@@ -365,15 +355,10 @@ define i37 @fshr_i37(i37 %x, i37 %y, i37 %z) {
 
 declare i7 @llvm.fshr.i7(i7, i7, i7)
 define i7 @fshr_i7_const_fold() {
-; CHECK-SD-LABEL: fshr_i7_const_fold:
-; CHECK-SD:       // %bb.0:
-; CHECK-SD-NEXT:    mov w0, #31 // =0x1f
-; CHECK-SD-NEXT:    ret
-;
-; CHECK-GI-LABEL: fshr_i7_const_fold:
-; CHECK-GI:       // %bb.0:
-; CHECK-GI-NEXT:    mov w0, #3615 // =0xe1f
-; CHECK-GI-NEXT:    ret
+; CHECK-LABEL: fshr_i7_const_fold:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    mov w0, #31 // =0x1f
+; CHECK-NEXT:    ret
   %f = call i7 @llvm.fshr.i7(i7 112, i7 127, i7 2)
   ret i7 %f
 }
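
The constant-fold expectations in the two tests above can be checked against a minimal Python model of LLVM's `llvm.fshl`/`llvm.fshr` semantics (the helper names and structure here are ours, not LLVM's; the shift amount is reduced modulo the bit width, and the result is truncated to that width — the truncation is exactly what the old CHECK-GI output got wrong, returning 3615 instead of the 7-bit value 31):

```python
def fshl(a, b, s, width):
    """Funnel shift left: shift the double-width value a:b left by
    s (mod width) and keep the top `width` bits."""
    s %= width
    mask = (1 << width) - 1
    if s == 0:
        return a & mask
    return ((a << s) | ((b & mask) >> (width - s))) & mask

def fshr(a, b, s, width):
    """Funnel shift right: shift the double-width value a:b right by
    s (mod width) and keep the bottom `width` bits."""
    s %= width
    mask = (1 << width) - 1
    if s == 0:
        return b & mask
    return ((a << (width - s)) | ((b & mask) >> s)) & mask

# Constants asserted by the tests above:
# fshl.i8(15, 15, 11): 11 % 8 == 3, so (15 << 3) | (15 >> 5) == 120.
# fshr.i7(112, 127, 2): ((112 << 5) | (127 >> 2)) mod 128 == 31,
# not 3615 — masking to 7 bits is required.
```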

>From a24cf46230e8e5cddc10ed78decedad3d109ca34 Mon Sep 17 00:00:00 2001
From: John Stuart <john.stuart.science at gmail.com>
Date: Sat, 22 Mar 2025 07:05:43 +0100
Subject: [PATCH 4/5] fix combine-udiv.mir

---
 .../CodeGen/AArch64/GlobalISel/combine-udiv.mir    | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/combine-udiv.mir b/llvm/test/CodeGen/AArch64/GlobalISel/combine-udiv.mir
index 096836d062dd6..f8578a694e2d4 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/combine-udiv.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/combine-udiv.mir
@@ -41,12 +41,9 @@ body:             |
     ; CHECK-NEXT: [[BUILD_VECTOR1:%[0-9]+]]:_(<8 x s16>) = G_BUILD_VECTOR [[C1]](s16), [[C1]](s16), [[C1]](s16), [[C1]](s16), [[C1]](s16), [[C1]](s16), [[C1]](s16), [[C1]](s16)
     ; CHECK-NEXT: [[UMULH:%[0-9]+]]:_(<8 x s16>) = G_UMULH [[COPY]], [[BUILD_VECTOR]]
     ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(<8 x s16>) = G_SUB [[COPY]], [[UMULH]]
-    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s16) = G_CONSTANT i16 15
+    ; CHECK-NEXT: [[C2:%[0-9]+]]:_(s16) = G_CONSTANT i16 1
     ; CHECK-NEXT: [[BUILD_VECTOR2:%[0-9]+]]:_(<8 x s16>) = G_BUILD_VECTOR [[C2]](s16), [[C2]](s16), [[C2]](s16), [[C2]](s16), [[C2]](s16), [[C2]](s16), [[C2]](s16), [[C2]](s16)
-    ; CHECK-NEXT: [[C3:%[0-9]+]]:_(s16) = G_CONSTANT i16 16
-    ; CHECK-NEXT: [[BUILD_VECTOR3:%[0-9]+]]:_(<8 x s16>) = G_BUILD_VECTOR [[C3]](s16), [[C3]](s16), [[C3]](s16), [[C3]](s16), [[C3]](s16), [[C3]](s16), [[C3]](s16), [[C3]](s16)
-    ; CHECK-NEXT: [[SUB1:%[0-9]+]]:_(<8 x s16>) = G_SUB [[BUILD_VECTOR3]], [[BUILD_VECTOR2]]
-    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(<8 x s16>) = G_LSHR [[SUB]], [[SUB1]](<8 x s16>)
+    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(<8 x s16>) = G_LSHR [[SUB]], [[BUILD_VECTOR2]](<8 x s16>)
     ; CHECK-NEXT: [[ADD:%[0-9]+]]:_(<8 x s16>) = G_ADD [[LSHR]], [[UMULH]]
     ; CHECK-NEXT: [[LSHR1:%[0-9]+]]:_(<8 x s16>) = G_LSHR [[ADD]], [[BUILD_VECTOR1]](<8 x s16>)
     ; CHECK-NEXT: $q0 = COPY [[LSHR1]](<8 x s16>)
@@ -195,12 +192,9 @@ body:             |
     ; CHECK-NEXT: [[BUILD_VECTOR1:%[0-9]+]]:_(<8 x s16>) = G_BUILD_VECTOR [[C1]](s16), [[C3]](s16), [[C3]](s16), [[C3]](s16), [[C3]](s16), [[C8]](s16), [[C8]](s16), [[C11]](s16)
     ; CHECK-NEXT: [[UMULH:%[0-9]+]]:_(<8 x s16>) = G_UMULH [[COPY]], [[BUILD_VECTOR]]
     ; CHECK-NEXT: [[SUB:%[0-9]+]]:_(<8 x s16>) = G_SUB [[COPY]], [[UMULH]]
-    ; CHECK-NEXT: [[C12:%[0-9]+]]:_(s16) = G_CONSTANT i16 15
+    ; CHECK-NEXT: [[C12:%[0-9]+]]:_(s16) = G_CONSTANT i16 1
     ; CHECK-NEXT: [[BUILD_VECTOR2:%[0-9]+]]:_(<8 x s16>) = G_BUILD_VECTOR [[C12]](s16), [[C12]](s16), [[C12]](s16), [[C12]](s16), [[C12]](s16), [[C12]](s16), [[C12]](s16), [[C12]](s16)
-    ; CHECK-NEXT: [[C13:%[0-9]+]]:_(s16) = G_CONSTANT i16 16
-    ; CHECK-NEXT: [[BUILD_VECTOR3:%[0-9]+]]:_(<8 x s16>) = G_BUILD_VECTOR [[C13]](s16), [[C13]](s16), [[C13]](s16), [[C13]](s16), [[C13]](s16), [[C13]](s16), [[C13]](s16), [[C13]](s16)
-    ; CHECK-NEXT: [[SUB1:%[0-9]+]]:_(<8 x s16>) = G_SUB [[BUILD_VECTOR3]], [[BUILD_VECTOR2]]
-    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(<8 x s16>) = G_LSHR [[SUB]], [[SUB1]](<8 x s16>)
+    ; CHECK-NEXT: [[LSHR:%[0-9]+]]:_(<8 x s16>) = G_LSHR [[SUB]], [[BUILD_VECTOR2]](<8 x s16>)
     ; CHECK-NEXT: [[ADD:%[0-9]+]]:_(<8 x s16>) = G_ADD [[LSHR]], [[UMULH]]
     ; CHECK-NEXT: [[LSHR1:%[0-9]+]]:_(<8 x s16>) = G_LSHR [[ADD]], [[BUILD_VECTOR1]](<8 x s16>)
     ; CHECK-NEXT: $q0 = COPY [[LSHR1]](<8 x s16>)

>From 40ffcc4e1b79d308afc5626485b9d7597fb61c4d Mon Sep 17 00:00:00 2001
From: John Stuart <john.stuart.science at gmail.com>
Date: Sat, 29 Mar 2025 18:10:03 +0100
Subject: [PATCH 5/5] fix value tracking

---
 llvm/lib/Target/AArch64/GISel/AArch64PostLegalizerCombiner.cpp | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/llvm/lib/Target/AArch64/GISel/AArch64PostLegalizerCombiner.cpp b/llvm/lib/Target/AArch64/GISel/AArch64PostLegalizerCombiner.cpp
index bae1c9517e9c0..3e077611fc287 100644
--- a/llvm/lib/Target/AArch64/GISel/AArch64PostLegalizerCombiner.cpp
+++ b/llvm/lib/Target/AArch64/GISel/AArch64PostLegalizerCombiner.cpp
@@ -476,7 +476,7 @@ AArch64PostLegalizerCombinerImpl::AArch64PostLegalizerCombinerImpl(
     const AArch64PostLegalizerCombinerImplRuleConfig &RuleConfig,
     const AArch64Subtarget &STI, MachineDominatorTree *MDT,
     const LegalizerInfo *LI)
-    : Combiner(MF, CInfo, TPC, &KB, CSEInfo),
+    : Combiner(MF, CInfo, TPC, &VT, CSEInfo),
       Opt(std::make_unique<OptMIRBuilder>(MF, CSEInfo, Observer, LI,
                                           /*IsPrelegalize=*/false)),
       B(*Opt), Helper(Observer, B, /*IsPreLegalize*/ false, &VT, MDT, LI),
