[llvm] non literal index (PR #172514)

Steven Perron via llvm-commits llvm-commits at lists.llvm.org
Tue Dec 16 08:31:46 PST 2025


https://github.com/s-perron created https://github.com/llvm/llvm-project/pull/172514

- **[SPIR-V] Legalize vector arithmetic and intrinsics for large vectors**
- **Add s64 to all scalars.**
- **Fix alignment in stores generated in legalize pointer cast.**
- **Set all defs to the default type in post legalization.**
- **Update tests, and add tests for 6-element vectors**
- **Handle G_STRICT_FMA**
- **Scalarize rem and div with illegal vector size that is not a power of 2.**
- **[SPIRV] Restrict OpName generation to major values**
- **[SPIRV] Implement lowering for llvm.matrix.transpose and llvm.matrix.multiply**
- **[SPIRV] Support non-constant indices for vector insert/extract**


>From d8aa9c91fb7bc6df9617b3251387206c536632c3 Mon Sep 17 00:00:00 2001
From: Steven Perron <stevenperron at google.com>
Date: Thu, 30 Oct 2025 13:01:44 -0400
Subject: [PATCH 01/10] [SPIR-V] Legalize vector arithmetic and intrinsics for
 large vectors
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch improves the legalization of vector operations, particularly
focusing on vectors that exceed the maximum supported size (e.g., 4 elements
for shaders). This includes better handling for insert and extract element
operations, which facilitates the legalization of loads and stores for
long vectors—a common pattern when compiling HLSL matrices with Clang.

Key changes include:
- Adding legalization rules for G_FMA, G_INSERT_VECTOR_ELT, and various
  arithmetic operations to handle splitting of large vectors.
- Updating G_CONCAT_VECTORS and G_SPLAT_VECTOR to be legal for allowed
  types.
- Implementing custom legalization for G_INSERT_VECTOR_ELT using the
  spv_insertelt intrinsic.
- Enhancing SPIRVPostLegalizer to deduce types for arithmetic instructions
  and vector element intrinsics (spv_insertelt, spv_extractelt).
- Refactoring legalizeIntrinsic to uniformly handle vector legalization
  requirements.

The strategy for insert and extract operations mirrors that of bitcasts:
incoming intrinsics are converted to generic MIR instructions (G_INSERT_VECTOR_ELT
and G_EXTRACT_VECTOR_ELT) to leverage standard legalization rules (like splitting).
After legalization, they are converted back to their respective SPIR-V intrinsics
(spv_insertelt, spv_extractelt) because later passes in the backend expect these
intrinsics rather than the generic instructions.

This ensures that operations on large vectors (e.g., <16 x float>) are
correctly broken down into legal sub-vectors.
---
 llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp  | 117 ++++++++---
 llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp  |  47 ++++-
 .../SPIRV/legalization/load-store-global.ll   | 194 ++++++++++++++++++
 .../SPIRV/legalization/vector-arithmetic.ll   | 149 ++++++++++++++
 4 files changed, 464 insertions(+), 43 deletions(-)
 create mode 100644 llvm/test/CodeGen/SPIRV/legalization/load-store-global.ll
 create mode 100644 llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic.ll

diff --git a/llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp b/llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp
index b5912c27316c9..4d83649c0f84f 100644
--- a/llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp
@@ -113,6 +113,8 @@ SPIRVLegalizerInfo::SPIRVLegalizerInfo(const SPIRVSubtarget &ST) {
                            v3s1, v3s8, v3s16, v3s32, v3s64,
                            v4s1, v4s8, v4s16, v4s32, v4s64};
 
+  auto allScalars = {s1, s8, s16, s32};
+
   auto allScalarsAndVectors = {
       s1,   s8,   s16,   s32,   s64,   v2s1,  v2s8,  v2s16,  v2s32,  v2s64,
       v3s1, v3s8, v3s16, v3s32, v3s64, v4s1,  v4s8,  v4s16,  v4s32,  v4s64,
@@ -172,9 +174,25 @@ SPIRVLegalizerInfo::SPIRVLegalizerInfo(const SPIRVSubtarget &ST) {
 
   for (auto Opc : getTypeFoldingSupportedOpcodes()) {
     if (Opc != G_EXTRACT_VECTOR_ELT)
-      getActionDefinitionsBuilder(Opc).custom();
+      getActionDefinitionsBuilder(Opc)
+          .customFor(allScalars)
+          .customFor(allowedVectorTypes)
+          .moreElementsToNextPow2(0)
+          .fewerElementsIf(vectorElementCountIsGreaterThan(0, MaxVectorSize),
+                           LegalizeMutations::changeElementCountTo(
+                               0, ElementCount::getFixed(MaxVectorSize)))
+          .custom();
   }
 
+  getActionDefinitionsBuilder(TargetOpcode::G_FMA)
+      .legalFor(allScalars)
+      .legalFor(allowedVectorTypes)
+      .moreElementsToNextPow2(0)
+      .fewerElementsIf(vectorElementCountIsGreaterThan(0, MaxVectorSize),
+                       LegalizeMutations::changeElementCountTo(
+                           0, ElementCount::getFixed(MaxVectorSize)))
+      .alwaysLegal();
+
   getActionDefinitionsBuilder(G_INTRINSIC_W_SIDE_EFFECTS).custom();
 
   getActionDefinitionsBuilder(G_SHUFFLE_VECTOR)
@@ -192,6 +210,13 @@ SPIRVLegalizerInfo::SPIRVLegalizerInfo(const SPIRVSubtarget &ST) {
                            1, ElementCount::getFixed(MaxVectorSize)))
       .custom();
 
+  getActionDefinitionsBuilder(G_INSERT_VECTOR_ELT)
+      .moreElementsToNextPow2(0)
+      .fewerElementsIf(vectorElementCountIsGreaterThan(0, MaxVectorSize),
+                       LegalizeMutations::changeElementCountTo(
+                           0, ElementCount::getFixed(MaxVectorSize)))
+      .custom();
+
   // Illegal G_UNMERGE_VALUES instructions should be handled
   // during the combine phase.
   getActionDefinitionsBuilder(G_BUILD_VECTOR)
@@ -215,14 +240,13 @@ SPIRVLegalizerInfo::SPIRVLegalizerInfo(const SPIRVSubtarget &ST) {
       .lowerIf(vectorElementCountIsGreaterThan(1, MaxVectorSize))
       .custom();
 
+  // If the result is still illegal, the combiner should be able to remove it.
   getActionDefinitionsBuilder(G_CONCAT_VECTORS)
-      .legalIf(vectorElementCountIsLessThanOrEqualTo(0, MaxVectorSize))
-      .moreElementsToNextPow2(0)
-      .lowerIf(vectorElementCountIsGreaterThan(0, MaxVectorSize))
-      .alwaysLegal();
+      .legalForCartesianProduct(allowedVectorTypes, allowedVectorTypes)
+      .moreElementsToNextPow2(0);
 
   getActionDefinitionsBuilder(G_SPLAT_VECTOR)
-      .legalIf(vectorElementCountIsLessThanOrEqualTo(0, MaxVectorSize))
+      .legalFor(allowedVectorTypes)
       .moreElementsToNextPow2(0)
       .fewerElementsIf(vectorElementCountIsGreaterThan(0, MaxVectorSize),
                        LegalizeMutations::changeElementSizeTo(0, MaxVectorSize))
@@ -458,6 +482,23 @@ static bool legalizeExtractVectorElt(LegalizerHelper &Helper, MachineInstr &MI,
   return true;
 }
 
+static bool legalizeInsertVectorElt(LegalizerHelper &Helper, MachineInstr &MI,
+                                    SPIRVGlobalRegistry *GR) {
+  MachineIRBuilder &MIRBuilder = Helper.MIRBuilder;
+  Register DstReg = MI.getOperand(0).getReg();
+  Register SrcReg = MI.getOperand(1).getReg();
+  Register ValReg = MI.getOperand(2).getReg();
+  Register IdxReg = MI.getOperand(3).getReg();
+
+  MIRBuilder
+      .buildIntrinsic(Intrinsic::spv_insertelt, ArrayRef<Register>{DstReg})
+      .addUse(SrcReg)
+      .addUse(ValReg)
+      .addUse(IdxReg);
+  MI.eraseFromParent();
+  return true;
+}
+
 static Register convertPtrToInt(Register Reg, LLT ConvTy, SPIRVType *SpvType,
                                 LegalizerHelper &Helper,
                                 MachineRegisterInfo &MRI,
@@ -483,6 +524,8 @@ bool SPIRVLegalizerInfo::legalizeCustom(
     return legalizeBitcast(Helper, MI);
   case TargetOpcode::G_EXTRACT_VECTOR_ELT:
     return legalizeExtractVectorElt(Helper, MI, GR);
+  case TargetOpcode::G_INSERT_VECTOR_ELT:
+    return legalizeInsertVectorElt(Helper, MI, GR);
   case TargetOpcode::G_INTRINSIC:
   case TargetOpcode::G_INTRINSIC_W_SIDE_EFFECTS:
     return legalizeIntrinsic(Helper, MI);
@@ -512,6 +555,15 @@ bool SPIRVLegalizerInfo::legalizeCustom(
   }
 }
 
+static bool needsVectorLegalization(const LLT &Ty, const SPIRVSubtarget &ST) {
+  if (!Ty.isVector())
+    return false;
+  unsigned NumElements = Ty.getNumElements();
+  unsigned MaxVectorSize = ST.isShader() ? 4 : 16;
+  return (NumElements > 4 && !isPowerOf2_32(NumElements)) ||
+         NumElements > MaxVectorSize;
+}
+
 bool SPIRVLegalizerInfo::legalizeIntrinsic(LegalizerHelper &Helper,
                                            MachineInstr &MI) const {
   LLVM_DEBUG(dbgs() << "legalizeIntrinsic: " << MI);
@@ -528,41 +580,38 @@ bool SPIRVLegalizerInfo::legalizeIntrinsic(LegalizerHelper &Helper,
     LLT DstTy = MRI.getType(DstReg);
     LLT SrcTy = MRI.getType(SrcReg);
 
-    int32_t MaxVectorSize = ST.isShader() ? 4 : 16;
-
-    bool DstNeedsLegalization = false;
-    bool SrcNeedsLegalization = false;
-
-    if (DstTy.isVector()) {
-      if (DstTy.getNumElements() > 4 &&
-          !isPowerOf2_32(DstTy.getNumElements())) {
-        DstNeedsLegalization = true;
-      }
-
-      if (DstTy.getNumElements() > MaxVectorSize) {
-        DstNeedsLegalization = true;
-      }
-    }
-
-    if (SrcTy.isVector()) {
-      if (SrcTy.getNumElements() > 4 &&
-          !isPowerOf2_32(SrcTy.getNumElements())) {
-        SrcNeedsLegalization = true;
-      }
-
-      if (SrcTy.getNumElements() > MaxVectorSize) {
-        SrcNeedsLegalization = true;
-      }
-    }
-
     // If an spv_bitcast needs to be legalized, we convert it to G_BITCAST to
     // allow using the generic legalization rules.
-    if (DstNeedsLegalization || SrcNeedsLegalization) {
+    if (needsVectorLegalization(DstTy, ST) ||
+        needsVectorLegalization(SrcTy, ST)) {
       LLVM_DEBUG(dbgs() << "Replacing with a G_BITCAST\n");
       MIRBuilder.buildBitcast(DstReg, SrcReg);
       MI.eraseFromParent();
     }
     return true;
+  } else if (IntrinsicID == Intrinsic::spv_insertelt) {
+    Register DstReg = MI.getOperand(0).getReg();
+    LLT DstTy = MRI.getType(DstReg);
+
+    if (needsVectorLegalization(DstTy, ST)) {
+      Register SrcReg = MI.getOperand(2).getReg();
+      Register ValReg = MI.getOperand(3).getReg();
+      Register IdxReg = MI.getOperand(4).getReg();
+      MIRBuilder.buildInsertVectorElement(DstReg, SrcReg, ValReg, IdxReg);
+      MI.eraseFromParent();
+    }
+    return true;
+  } else if (IntrinsicID == Intrinsic::spv_extractelt) {
+    Register SrcReg = MI.getOperand(2).getReg();
+    LLT SrcTy = MRI.getType(SrcReg);
+
+    if (needsVectorLegalization(SrcTy, ST)) {
+      Register DstReg = MI.getOperand(0).getReg();
+      Register IdxReg = MI.getOperand(3).getReg();
+      MIRBuilder.buildExtractVectorElement(DstReg, SrcReg, IdxReg);
+      MI.eraseFromParent();
+    }
+    return true;
   }
   return true;
 }
diff --git a/llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp b/llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp
index c90e6d8cfbfb4..d91016a38539b 100644
--- a/llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp
@@ -16,6 +16,7 @@
 #include "SPIRV.h"
 #include "SPIRVSubtarget.h"
 #include "SPIRVUtils.h"
+#include "llvm/CodeGen/GlobalISel/GenericMachineInstrs.h"
 #include "llvm/IR/IntrinsicsSPIRV.h"
 #include "llvm/Support/Debug.h"
 #include <stack>
@@ -66,8 +67,9 @@ static bool deduceAndAssignTypeForGUnmerge(MachineInstr *I, MachineFunction &MF,
     for (unsigned i = 0; i < I->getNumDefs() && !ScalarType; ++i) {
       for (const auto &Use :
            MRI.use_nodbg_instructions(I->getOperand(i).getReg())) {
-        assert(Use.getOpcode() == TargetOpcode::G_BUILD_VECTOR &&
-               "Expected use of G_UNMERGE_VALUES to be a G_BUILD_VECTOR");
+        if (Use.getOpcode() != TargetOpcode::G_BUILD_VECTOR)
+          continue;
+
         if (auto *VecType =
                 GR->getSPIRVTypeForVReg(Use.getOperand(0).getReg())) {
           ScalarType = GR->getScalarOrVectorComponentType(VecType);
@@ -133,10 +135,10 @@ static SPIRVType *deduceTypeFromOperandRange(MachineInstr *I,
   return ResType;
 }
 
-static SPIRVType *deduceTypeForResultRegister(MachineInstr *Use,
-                                              Register UseRegister,
-                                              SPIRVGlobalRegistry *GR,
-                                              MachineIRBuilder &MIB) {
+static SPIRVType *deduceTypeFromResultRegister(MachineInstr *Use,
+                                               Register UseRegister,
+                                               SPIRVGlobalRegistry *GR,
+                                               MachineIRBuilder &MIB) {
   for (const MachineOperand &MO : Use->defs()) {
     if (!MO.isReg())
       continue;
@@ -159,16 +161,43 @@ static SPIRVType *deduceTypeFromUses(Register Reg, MachineFunction &MF,
   MachineRegisterInfo &MRI = MF.getRegInfo();
   for (MachineInstr &Use : MRI.use_nodbg_instructions(Reg)) {
     SPIRVType *ResType = nullptr;
+    LLVM_DEBUG(dbgs() << "Looking at use " << Use);
     switch (Use.getOpcode()) {
     case TargetOpcode::G_BUILD_VECTOR:
     case TargetOpcode::G_EXTRACT_VECTOR_ELT:
     case TargetOpcode::G_UNMERGE_VALUES:
-      LLVM_DEBUG(dbgs() << "Looking at use " << Use << "\n");
-      ResType = deduceTypeForResultRegister(&Use, Reg, GR, MIB);
+    case TargetOpcode::G_ADD:
+    case TargetOpcode::G_SUB:
+    case TargetOpcode::G_MUL:
+    case TargetOpcode::G_SDIV:
+    case TargetOpcode::G_UDIV:
+    case TargetOpcode::G_SREM:
+    case TargetOpcode::G_UREM:
+    case TargetOpcode::G_FADD:
+    case TargetOpcode::G_FSUB:
+    case TargetOpcode::G_FMUL:
+    case TargetOpcode::G_FDIV:
+    case TargetOpcode::G_FREM:
+    case TargetOpcode::G_FMA:
+      ResType = deduceTypeFromResultRegister(&Use, Reg, GR, MIB);
+      break;
+    case TargetOpcode::G_INTRINSIC_W_SIDE_EFFECTS:
+    case TargetOpcode::G_INTRINSIC: {
+      auto IntrinsicID = cast<GIntrinsic>(Use).getIntrinsicID();
+      if (IntrinsicID == Intrinsic::spv_insertelt) {
+        if (Reg == Use.getOperand(2).getReg())
+          ResType = deduceTypeFromResultRegister(&Use, Reg, GR, MIB);
+      } else if (IntrinsicID == Intrinsic::spv_extractelt) {
+        if (Reg == Use.getOperand(2).getReg())
+          ResType = deduceTypeFromResultRegister(&Use, Reg, GR, MIB);
+      }
       break;
     }
-    if (ResType)
+    }
+    if (ResType) {
+      LLVM_DEBUG(dbgs() << "Deduced type from use " << *ResType);
       return ResType;
+    }
   }
   return nullptr;
 }
diff --git a/llvm/test/CodeGen/SPIRV/legalization/load-store-global.ll b/llvm/test/CodeGen/SPIRV/legalization/load-store-global.ll
new file mode 100644
index 0000000000000..468d3ded4c306
--- /dev/null
+++ b/llvm/test/CodeGen/SPIRV/legalization/load-store-global.ll
@@ -0,0 +1,194 @@
+; RUN: llc -O0 -verify-machineinstrs -mtriple=spirv-unknown-vulkan %s -o - | FileCheck %s
+; RUN: %if spirv-tools %{ llc -O0 -mtriple=spirv-unknown-vulkan %s -o - -filetype=obj | spirv-val %}
+
+; CHECK-DAG: OpName %[[#test_int32_double_conversion:]] "test_int32_double_conversion"
+; CHECK-DAG: %[[#int:]] = OpTypeInt 32 0
+; CHECK-DAG: %[[#v4i32:]] = OpTypeVector %[[#int]] 4
+; CHECK-DAG: %[[#double:]] = OpTypeFloat 64
+; CHECK-DAG: %[[#v4f64:]] = OpTypeVector %[[#double]] 4
+; CHECK-DAG: %[[#v2i32:]] = OpTypeVector %[[#int]] 2
+; CHECK-DAG: %[[#ptr_private_v4i32:]] = OpTypePointer Private %[[#v4i32]]
+; CHECK-DAG: %[[#ptr_private_v4f64:]] = OpTypePointer Private %[[#v4f64]]
+; CHECK-DAG: %[[#global_double:]] = OpVariable %[[#ptr_private_v4f64]] Private
+; CHECK-DAG: %[[#C15:]] = OpConstant %[[#int]] 15{{$}}
+; CHECK-DAG: %[[#C14:]] = OpConstant %[[#int]] 14{{$}}
+; CHECK-DAG: %[[#C13:]] = OpConstant %[[#int]] 13{{$}}
+; CHECK-DAG: %[[#C12:]] = OpConstant %[[#int]] 12{{$}}
+; CHECK-DAG: %[[#C11:]] = OpConstant %[[#int]] 11{{$}}
+; CHECK-DAG: %[[#C10:]] = OpConstant %[[#int]] 10{{$}}
+; CHECK-DAG: %[[#C9:]] = OpConstant %[[#int]] 9{{$}}
+; CHECK-DAG: %[[#C8:]] = OpConstant %[[#int]] 8{{$}}
+; CHECK-DAG: %[[#C7:]] = OpConstant %[[#int]] 7{{$}}
+; CHECK-DAG: %[[#C6:]] = OpConstant %[[#int]] 6{{$}}
+; CHECK-DAG: %[[#C5:]] = OpConstant %[[#int]] 5{{$}}
+; CHECK-DAG: %[[#C4:]] = OpConstant %[[#int]] 4{{$}}
+; CHECK-DAG: %[[#C3:]] = OpConstant %[[#int]] 3{{$}}
+; CHECK-DAG: %[[#C2:]] = OpConstant %[[#int]] 2{{$}}
+; CHECK-DAG: %[[#C1:]] = OpConstant %[[#int]] 1{{$}}
+; CHECK-DAG: %[[#C0:]] = OpConstant %[[#int]] 0{{$}}
+
+@G_16 = internal addrspace(10) global [16 x i32] zeroinitializer
+@G_4_double = internal addrspace(10) global <4 x double> zeroinitializer
+@G_4_int = internal addrspace(10) global <4 x i32> zeroinitializer
+
+
+; This is the way matrices will be represented in HLSL. The memory type will be
+; an array, but it will be loaded as a vector.
+define spir_func void @test_load_store_global() {
+entry:
+; CHECK-DAG: %[[#PTR0:]] = OpAccessChain %[[#ptr_int:]] %[[#G16:]] %[[#C0]]
+; CHECK-DAG: %[[#VAL0:]] = OpLoad %[[#int]] %[[#PTR0]] Aligned 4
+; CHECK-DAG: %[[#PTR1:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C1]]
+; CHECK-DAG: %[[#VAL1:]] = OpLoad %[[#int]] %[[#PTR1]] Aligned 4
+; CHECK-DAG: %[[#PTR2:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C2]]
+; CHECK-DAG: %[[#VAL2:]] = OpLoad %[[#int]] %[[#PTR2]] Aligned 4
+; CHECK-DAG: %[[#PTR3:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C3]]
+; CHECK-DAG: %[[#VAL3:]] = OpLoad %[[#int]] %[[#PTR3]] Aligned 4
+; CHECK-DAG: %[[#PTR4:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C4]]
+; CHECK-DAG: %[[#VAL4:]] = OpLoad %[[#int]] %[[#PTR4]] Aligned 4
+; CHECK-DAG: %[[#PTR5:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C5]]
+; CHECK-DAG: %[[#VAL5:]] = OpLoad %[[#int]] %[[#PTR5]] Aligned 4
+; CHECK-DAG: %[[#PTR6:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C6]]
+; CHECK-DAG: %[[#VAL6:]] = OpLoad %[[#int]] %[[#PTR6]] Aligned 4
+; CHECK-DAG: %[[#PTR7:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C7]]
+; CHECK-DAG: %[[#VAL7:]] = OpLoad %[[#int]] %[[#PTR7]] Aligned 4
+; CHECK-DAG: %[[#PTR8:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C8]]
+; CHECK-DAG: %[[#VAL8:]] = OpLoad %[[#int]] %[[#PTR8]] Aligned 4
+; CHECK-DAG: %[[#PTR9:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C9]]
+; CHECK-DAG: %[[#VAL9:]] = OpLoad %[[#int]] %[[#PTR9]] Aligned 4
+; CHECK-DAG: %[[#PTR10:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C10]]
+; CHECK-DAG: %[[#VAL10:]] = OpLoad %[[#int]] %[[#PTR10]] Aligned 4
+; CHECK-DAG: %[[#PTR11:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C11]]
+; CHECK-DAG: %[[#VAL11:]] = OpLoad %[[#int]] %[[#PTR11]] Aligned 4
+; CHECK-DAG: %[[#PTR12:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C12]]
+; CHECK-DAG: %[[#VAL12:]] = OpLoad %[[#int]] %[[#PTR12]] Aligned 4
+; CHECK-DAG: %[[#PTR13:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C13]]
+; CHECK-DAG: %[[#VAL13:]] = OpLoad %[[#int]] %[[#PTR13]] Aligned 4
+; CHECK-DAG: %[[#PTR14:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C14]]
+; CHECK-DAG: %[[#VAL14:]] = OpLoad %[[#int]] %[[#PTR14]] Aligned 4
+; CHECK-DAG: %[[#PTR15:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C15]]
+; CHECK-DAG: %[[#VAL15:]] = OpLoad %[[#int]] %[[#PTR15]] Aligned 4
+; CHECK-DAG: %[[#INS0:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL0]] %[[#UNDEF:]] 0
+; CHECK-DAG: %[[#INS1:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL1]] %[[#INS0]] 1
+; CHECK-DAG: %[[#INS2:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL2]] %[[#INS1]] 2
+; CHECK-DAG: %[[#INS3:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL3]] %[[#INS2]] 3
+; CHECK-DAG: %[[#INS4:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL4]] %[[#UNDEF]] 0
+; CHECK-DAG: %[[#INS5:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL5]] %[[#INS4]] 1
+; CHECK-DAG: %[[#INS6:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL6]] %[[#INS5]] 2
+; CHECK-DAG: %[[#INS7:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL7]] %[[#INS6]] 3
+; CHECK-DAG: %[[#INS8:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL8]] %[[#UNDEF]] 0
+; CHECK-DAG: %[[#INS9:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL9]] %[[#INS8]] 1
+; CHECK-DAG: %[[#INS10:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL10]] %[[#INS9]] 2
+; CHECK-DAG: %[[#INS11:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL11]] %[[#INS10]] 3
+; CHECK-DAG: %[[#INS12:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL12]] %[[#UNDEF]] 0
+; CHECK-DAG: %[[#INS13:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL13]] %[[#INS12]] 1
+; CHECK-DAG: %[[#INS14:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL14]] %[[#INS13]] 2
+; CHECK-DAG: %[[#INS15:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL15]] %[[#INS14]] 3
+  %0 = load <16 x i32>, ptr addrspace(10) @G_16, align 64
+ 
+; CHECK-DAG: %[[#PTR0_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C0]]
+; CHECK-DAG: %[[#VAL0_S:]] = OpCompositeExtract %[[#int]] %[[#INS3]] 0
+; CHECK-DAG: OpStore %[[#PTR0_S]] %[[#VAL0_S]] Aligned 64
+; CHECK-DAG: %[[#PTR1_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C1]]
+; CHECK-DAG: %[[#VAL1_S:]] = OpCompositeExtract %[[#int]] %[[#INS3]] 1
+; CHECK-DAG: OpStore %[[#PTR1_S]] %[[#VAL1_S]] Aligned 64
+; CHECK-DAG: %[[#PTR2_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C2]]
+; CHECK-DAG: %[[#VAL2_S:]] = OpCompositeExtract %[[#int]] %[[#INS3]] 2
+; CHECK-DAG: OpStore %[[#PTR2_S]] %[[#VAL2_S]] Aligned 64
+; CHECK-DAG: %[[#PTR3_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C3]]
+; CHECK-DAG: %[[#VAL3_S:]] = OpCompositeExtract %[[#int]] %[[#INS3]] 3
+; CHECK-DAG: OpStore %[[#PTR3_S]] %[[#VAL3_S]] Aligned 64
+; CHECK-DAG: %[[#PTR4_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C4]]
+; CHECK-DAG: %[[#VAL4_S:]] = OpCompositeExtract %[[#int]] %[[#INS7]] 0
+; CHECK-DAG: OpStore %[[#PTR4_S]] %[[#VAL4_S]] Aligned 64
+; CHECK-DAG: %[[#PTR5_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C5]]
+; CHECK-DAG: %[[#VAL5_S:]] = OpCompositeExtract %[[#int]] %[[#INS7]] 1
+; CHECK-DAG: OpStore %[[#PTR5_S]] %[[#VAL5_S]] Aligned 64
+; CHECK-DAG: %[[#PTR6_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C6]]
+; CHECK-DAG: %[[#VAL6_S:]] = OpCompositeExtract %[[#int]] %[[#INS7]] 2
+; CHECK-DAG: OpStore %[[#PTR6_S]] %[[#VAL6_S]] Aligned 64
+; CHECK-DAG: %[[#PTR7_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C7]]
+; CHECK-DAG: %[[#VAL7_S:]] = OpCompositeExtract %[[#int]] %[[#INS7]] 3
+; CHECK-DAG: OpStore %[[#PTR7_S]] %[[#VAL7_S]] Aligned 64
+; CHECK-DAG: %[[#PTR8_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C8]]
+; CHECK-DAG: %[[#VAL8_S:]] = OpCompositeExtract %[[#int]] %[[#INS11]] 0
+; CHECK-DAG: OpStore %[[#PTR8_S]] %[[#VAL8_S]] Aligned 64
+; CHECK-DAG: %[[#PTR9_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C9]]
+; CHECK-DAG: %[[#VAL9_S:]] = OpCompositeExtract %[[#int]] %[[#INS11]] 1
+; CHECK-DAG: OpStore %[[#PTR9_S]] %[[#VAL9_S]] Aligned 64
+; CHECK-DAG: %[[#PTR10_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C10]]
+; CHECK-DAG: %[[#VAL10_S:]] = OpCompositeExtract %[[#int]] %[[#INS11]] 2
+; CHECK-DAG: OpStore %[[#PTR10_S]] %[[#VAL10_S]] Aligned 64
+; CHECK-DAG: %[[#PTR11_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C11]]
+; CHECK-DAG: %[[#VAL11_S:]] = OpCompositeExtract %[[#int]] %[[#INS11]] 3
+; CHECK-DAG: OpStore %[[#PTR11_S]] %[[#VAL11_S]] Aligned 64
+; CHECK-DAG: %[[#PTR12_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C12]]
+; CHECK-DAG: %[[#VAL12_S:]] = OpCompositeExtract %[[#int]] %[[#INS15]] 0
+; CHECK-DAG: OpStore %[[#PTR12_S]] %[[#VAL12_S]] Aligned 64
+; CHECK-DAG: %[[#PTR13_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C13]]
+; CHECK-DAG: %[[#VAL13_S:]] = OpCompositeExtract %[[#int]] %[[#INS15]] 1
+; CHECK-DAG: OpStore %[[#PTR13_S]] %[[#VAL13_S]] Aligned 64
+; CHECK-DAG: %[[#PTR14_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C14]]
+; CHECK-DAG: %[[#VAL14_S:]] = OpCompositeExtract %[[#int]] %[[#INS15]] 2
+; CHECK-DAG: OpStore %[[#PTR14_S]] %[[#VAL14_S]] Aligned 64
+; CHECK-DAG: %[[#PTR15_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C15]]
+; CHECK-DAG: %[[#VAL15_S:]] = OpCompositeExtract %[[#int]] %[[#INS15]] 3
+; CHECK-DAG: OpStore %[[#PTR15_S]] %[[#VAL15_S]] Aligned 64
+  store <16 x i32> %0, ptr addrspace(10) @G_16, align 64
+  ret void
+}
+
+define spir_func void @test_int32_double_conversion() {
+; CHECK: %[[#test_int32_double_conversion]] = OpFunction
+entry:
+  ; CHECK: %[[#LOAD:]] = OpLoad %[[#v4f64]] %[[#global_double]]
+  ; CHECK: %[[#VEC_SHUF1:]] = OpVectorShuffle %{{[a-zA-Z0-9_]+}} %[[#LOAD]] %{{[a-zA-Z0-9_]+}} 0 1
+  ; CHECK: %[[#VEC_SHUF2:]] = OpVectorShuffle %{{[a-zA-Z0-9_]+}} %[[#LOAD]] %{{[a-zA-Z0-9_]+}} 2 3
+  ; CHECK: %[[#BITCAST1:]] = OpBitcast %[[#v4i32]] %[[#VEC_SHUF1]]
+  ; CHECK: %[[#BITCAST2:]] = OpBitcast %[[#v4i32]] %[[#VEC_SHUF2]]
+  %0 = load <8 x i32>, ptr addrspace(10) @G_4_double
+
+  ; CHECK: %[[#EXTRACT1:]] = OpCompositeExtract %[[#int]] %[[#BITCAST1]] 0
+  ; CHECK: %[[#EXTRACT2:]] = OpCompositeExtract %[[#int]] %[[#BITCAST1]] 2
+  ; CHECK: %[[#EXTRACT3:]] = OpCompositeExtract %[[#int]] %[[#BITCAST2]] 0
+  ; CHECK: %[[#EXTRACT4:]] = OpCompositeExtract %[[#int]] %[[#BITCAST2]] 2
+  ; CHECK: %[[#CONSTRUCT1:]] = OpCompositeConstruct %[[#v4i32]] %[[#EXTRACT1]] %[[#EXTRACT2]] %[[#EXTRACT3]] %[[#EXTRACT4]]
+  %1 = shufflevector <8 x i32> %0, <8 x i32> poison, <4 x i32> <i32 0, i32 2, i32 4, i32 6>
+  
+  ; CHECK: %[[#EXTRACT5:]] = OpCompositeExtract %[[#int]] %[[#BITCAST1]] 1
+  ; CHECK: %[[#EXTRACT6:]] = OpCompositeExtract %[[#int]] %[[#BITCAST1]] 3
+  ; CHECK: %[[#EXTRACT7:]] = OpCompositeExtract %[[#int]] %[[#BITCAST2]] 1
+  ; CHECK: %[[#EXTRACT8:]] = OpCompositeExtract %[[#int]] %[[#BITCAST2]] 3
+  ; CHECK: %[[#CONSTRUCT2:]] = OpCompositeConstruct %[[#v4i32]] %[[#EXTRACT5]] %[[#EXTRACT6]] %[[#EXTRACT7]] %[[#EXTRACT8]]
+  %2 = shufflevector <8 x i32> %0, <8 x i32> poison, <4 x i32> <i32 1, i32 3, i32 5, i32 7>
+
+  ; CHECK: %[[#EXTRACT9:]] = OpCompositeExtract %[[#int]] %[[#CONSTRUCT1]] 0
+  ; CHECK: %[[#EXTRACT10:]] = OpCompositeExtract %[[#int]] %[[#CONSTRUCT2]] 0
+  ; CHECK: %[[#EXTRACT11:]] = OpCompositeExtract %[[#int]] %[[#CONSTRUCT1]] 1
+  ; CHECK: %[[#EXTRACT12:]] = OpCompositeExtract %[[#int]] %[[#CONSTRUCT2]] 1
+  ; CHECK: %[[#EXTRACT13:]] = OpCompositeExtract %[[#int]] %[[#CONSTRUCT1]] 2
+  ; CHECK: %[[#EXTRACT14:]] = OpCompositeExtract %[[#int]] %[[#CONSTRUCT2]] 2
+  ; CHECK: %[[#EXTRACT15:]] = OpCompositeExtract %[[#int]] %[[#CONSTRUCT1]] 3
+  ; CHECK: %[[#EXTRACT16:]] = OpCompositeExtract %[[#int]] %[[#CONSTRUCT2]] 3
+  ; CHECK: %[[#CONSTRUCT3:]] = OpCompositeConstruct %[[#v2i32]] %[[#EXTRACT9]] %[[#EXTRACT10]]
+  ; CHECK: %[[#CONSTRUCT4:]] = OpCompositeConstruct %[[#v2i32]] %[[#EXTRACT11]] %[[#EXTRACT12]]
+  ; CHECK: %[[#CONSTRUCT5:]] = OpCompositeConstruct %[[#v2i32]] %[[#EXTRACT13]] %[[#EXTRACT14]]
+  ; CHECK: %[[#CONSTRUCT6:]] = OpCompositeConstruct %[[#v2i32]] %[[#EXTRACT15]] %[[#EXTRACT16]]
+  %3 = shufflevector <4 x i32> %1, <4 x i32> %2, <8 x i32> <i32 0, i32 4, i32 1, i32 5, i32 2, i32 6, i32 3, i32 7>
+
+  ; CHECK: %[[#BITCAST3:]] = OpBitcast %[[#double]] %[[#CONSTRUCT3]]
+  ; CHECK: %[[#BITCAST4:]] = OpBitcast %[[#double]] %[[#CONSTRUCT4]]
+  ; CHECK: %[[#BITCAST5:]] = OpBitcast %[[#double]] %[[#CONSTRUCT5]]
+  ; CHECK: %[[#BITCAST6:]] = OpBitcast %[[#double]] %[[#CONSTRUCT6]]
+  ; CHECK: %[[#CONSTRUCT7:]] = OpCompositeConstruct %[[#v4f64]] %[[#BITCAST3]] %[[#BITCAST4]] %[[#BITCAST5]] %[[#BITCAST6]]
+  ; CHECK: OpStore %[[#global_double]] %[[#CONSTRUCT7]] Aligned 32
+  store <8 x i32> %3, ptr addrspace(10) @G_4_double
+  ret void
+}
+
+; Add a main function to make it a valid module for spirv-val
+define void @main() #1 {
+  ret void
+}
+
+attributes #1 = { "hlsl.numthreads"="1,1,1" "hlsl.shader"="compute" }
diff --git a/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic.ll b/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic.ll
new file mode 100644
index 0000000000000..126548a1e9ea2
--- /dev/null
+++ b/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic.ll
@@ -0,0 +1,149 @@
+; RUN: llc -O0 -verify-machineinstrs -mtriple=spirv-unknown-vulkan %s -o - | FileCheck %s
+; RUN: %if spirv-tools %{ llc -O0 -mtriple=spirv-unknown-vulkan %s -o - -filetype=obj | spirv-val %}
+
+; CHECK-DAG: OpName %[[#main:]] "main"
+; CHECK-DAG: %[[#float:]] = OpTypeFloat 32
+; CHECK-DAG: %[[#v4f32:]] = OpTypeVector %[[#float]] 4
+; CHECK-DAG: %[[#int:]] = OpTypeInt 32 0
+; CHECK-DAG: %[[#c16:]] = OpConstant %[[#int]] 16
+; CHECK-DAG: %[[#v16f32:]] = OpTypeArray %[[#float]] %[[#c16]]
+; CHECK-DAG: %[[#v16i32:]] = OpTypeArray %[[#int]] %[[#c16]]
+; CHECK-DAG: %[[#ptr_ssbo_v16i32:]] = OpTypePointer Private %[[#v16i32]]
+; CHECK-DAG: %[[#v4i32:]] = OpTypeVector %[[#int]] 4
+
+@f1 = internal addrspace(10) global [4 x [16 x float] ] zeroinitializer
+@f2 = internal addrspace(10) global [4 x [16 x float] ] zeroinitializer
+@i1 = internal addrspace(10) global [4 x [16 x i32] ] zeroinitializer
+@i2 = internal addrspace(10) global [4 x [16 x i32] ] zeroinitializer
+
+define void @main() local_unnamed_addr #0 {
+; CHECK: %[[#main]] = OpFunction
+entry:
+  %2 = getelementptr [4 x [16 x float] ], ptr addrspace(10) @f1, i32 0, i32 0
+  %3 = load <16 x float>, ptr addrspace(10) %2, align 4
+  %4 = getelementptr [4 x [16 x float] ], ptr addrspace(10) @f1, i32 0, i32 1
+  %5 = load <16 x float>, ptr addrspace(10) %4, align 4
+  %6 = getelementptr [4 x [16 x float] ], ptr addrspace(10) @f1, i32 0, i32 2
+  %7 = load <16 x float>, ptr addrspace(10) %6, align 4
+  %8 = getelementptr [4 x [16 x float] ], ptr addrspace(10) @f1, i32 0, i32 3
+  %9 = load <16 x float>, ptr addrspace(10) %8, align 4
+  
+  ; We expect the large vectors to be split into vectors of size 4, with the operations performed on each piece.
+  ; CHECK: OpFMul %[[#v4f32]]
+  ; CHECK: OpFMul %[[#v4f32]]
+  ; CHECK: OpFMul %[[#v4f32]]
+  ; CHECK: OpFMul %[[#v4f32]]
+  %10 = fmul reassoc nnan ninf nsz arcp afn <16 x float> %3, splat (float 3.000000e+00)
+
+  ; CHECK: OpFAdd %[[#v4f32]]
+  ; CHECK: OpFAdd %[[#v4f32]]
+  ; CHECK: OpFAdd %[[#v4f32]]
+  ; CHECK: OpFAdd %[[#v4f32]]
+  %11 = fadd reassoc nnan ninf nsz arcp afn <16 x float> %10, %5
+
+  ; CHECK: OpFAdd %[[#v4f32]]
+  ; CHECK: OpFAdd %[[#v4f32]]
+  ; CHECK: OpFAdd %[[#v4f32]]
+  ; CHECK: OpFAdd %[[#v4f32]]
+  %12 = fadd reassoc nnan ninf nsz arcp afn <16 x float> %11, %7
+
+  ; CHECK: OpFSub %[[#v4f32]]
+  ; CHECK: OpFSub %[[#v4f32]]
+  ; CHECK: OpFSub %[[#v4f32]]
+  ; CHECK: OpFSub %[[#v4f32]]
+  %13 = fsub reassoc nnan ninf nsz arcp afn <16 x float> %12, %9
+
+  %14 = getelementptr [4 x [16 x float] ], ptr addrspace(10) @f2, i32 0, i32 0
+  store <16 x float> %13, ptr addrspace(10) %14, align 4
+  ret void
+}
+
+; Test integer vector arithmetic operations
+define void @test_int_vector_arithmetic() local_unnamed_addr #0 {
+; CHECK: OpFunction
+entry:
+  %2 = getelementptr [4 x [16 x i32] ], ptr addrspace(10) @i1, i32 0, i32 0
+  %3 = load <16 x i32>, ptr addrspace(10) %2, align 4
+  %4 = getelementptr [4 x [16 x i32] ], ptr addrspace(10) @i1, i32 0, i32 1
+  %5 = load <16 x i32>, ptr addrspace(10) %4, align 4
+
+  ; CHECK: OpIAdd %[[#v4i32]]
+  ; CHECK: OpIAdd %[[#v4i32]]
+  ; CHECK: OpIAdd %[[#v4i32]]
+  ; CHECK: OpIAdd %[[#v4i32]]
+  %6 = add <16 x i32> %3, %5
+
+  ; CHECK: OpISub %[[#v4i32]]
+  ; CHECK: OpISub %[[#v4i32]]
+  ; CHECK: OpISub %[[#v4i32]]
+  ; CHECK: OpISub %[[#v4i32]]
+  %7 = sub <16 x i32> %6, %5
+
+  ; CHECK: OpIMul %[[#v4i32]]
+  ; CHECK: OpIMul %[[#v4i32]]
+  ; CHECK: OpIMul %[[#v4i32]]
+  ; CHECK: OpIMul %[[#v4i32]]
+  %8 = mul <16 x i32> %7, splat (i32 2)
+
+  ; CHECK: OpSDiv %[[#v4i32]]
+  ; CHECK: OpSDiv %[[#v4i32]]
+  ; CHECK: OpSDiv %[[#v4i32]]
+  ; CHECK: OpSDiv %[[#v4i32]]
+  %9 = sdiv <16 x i32> %8, splat (i32 2)
+
+  ; CHECK: OpUDiv %[[#v4i32]]
+  ; CHECK: OpUDiv %[[#v4i32]]
+  ; CHECK: OpUDiv %[[#v4i32]]
+  ; CHECK: OpUDiv %[[#v4i32]]
+  %10 = udiv <16 x i32> %9, splat (i32 1)
+
+  ; CHECK: OpSRem %[[#v4i32]]
+  ; CHECK: OpSRem %[[#v4i32]]
+  ; CHECK: OpSRem %[[#v4i32]]
+  ; CHECK: OpSRem %[[#v4i32]]
+  %11 = srem <16 x i32> %10, splat (i32 3)
+
+  ; CHECK: OpUMod %[[#v4i32]]
+  ; CHECK: OpUMod %[[#v4i32]]
+  ; CHECK: OpUMod %[[#v4i32]]
+  ; CHECK: OpUMod %[[#v4i32]]
+  %12 = urem <16 x i32> %11, splat (i32 3)
+
+  %13 = getelementptr [4 x [16 x i32] ], ptr addrspace(10) @i2, i32 0, i32 0
+  store <16 x i32> %12, ptr addrspace(10) %13, align 4
+  ret void
+}
+
+; Test remaining float vector arithmetic operations
+define void @test_float_vector_arithmetic_continued() local_unnamed_addr #0 {
+; CHECK: OpFunction
+entry:
+  %2 = getelementptr [4 x [16 x float] ], ptr addrspace(10) @f1, i32 0, i32 0
+  %3 = load <16 x float>, ptr addrspace(10) %2, align 4
+  %4 = getelementptr [4 x [16 x float] ], ptr addrspace(10) @f1, i32 0, i32 1
+  %5 = load <16 x float>, ptr addrspace(10) %4, align 4
+
+  ; CHECK: OpFDiv %[[#v4f32]]
+  ; CHECK: OpFDiv %[[#v4f32]]
+  ; CHECK: OpFDiv %[[#v4f32]]
+  ; CHECK: OpFDiv %[[#v4f32]]
+  %6 = fdiv reassoc nnan ninf nsz arcp afn <16 x float> %3, splat (float 2.000000e+00)
+
+  ; CHECK: OpFRem %[[#v4f32]]
+  ; CHECK: OpFRem %[[#v4f32]]
+  ; CHECK: OpFRem %[[#v4f32]]
+  ; CHECK: OpFRem %[[#v4f32]]
+  %7 = frem reassoc nnan ninf nsz arcp afn <16 x float> %6, splat (float 3.000000e+00)
+
+  ; CHECK: OpExtInst %[[#v4f32]] {{.*}} Fma
+  ; CHECK: OpExtInst %[[#v4f32]] {{.*}} Fma
+  ; CHECK: OpExtInst %[[#v4f32]] {{.*}} Fma
+  ; CHECK: OpExtInst %[[#v4f32]] {{.*}} Fma
+  %8 = call reassoc nnan ninf nsz arcp afn <16 x float> @llvm.fma.v16f32(<16 x float> %5, <16 x float> %6, <16 x float> %7)
+
+  %9 = getelementptr [4 x [16 x float] ], ptr addrspace(10) @f2, i32 0, i32 0
+  store <16 x float> %8, ptr addrspace(10) %9, align 4
+  ret void
+}
+
+attributes #0 = { "hlsl.numthreads"="1,1,1" "hlsl.shader"="compute" }
\ No newline at end of file

>From 92f96f7347f880cf2c803d3c39812667277fec17 Mon Sep 17 00:00:00 2001
From: Steven Perron <stevenperron at google.com>
Date: Mon, 8 Dec 2025 15:44:01 -0500
Subject: [PATCH 02/10] Add s64 to all scalars.

---
 llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp b/llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp
index 4d83649c0f84f..a3ad5d7fd40c0 100644
--- a/llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp
@@ -113,7 +113,7 @@ SPIRVLegalizerInfo::SPIRVLegalizerInfo(const SPIRVSubtarget &ST) {
                            v3s1, v3s8, v3s16, v3s32, v3s64,
                            v4s1, v4s8, v4s16, v4s32, v4s64};
 
-  auto allScalars = {s1, s8, s16, s32};
+  auto allScalars = {s1, s8, s16, s32, s64};
 
   auto allScalarsAndVectors = {
       s1,   s8,   s16,   s32,   s64,   v2s1,  v2s8,  v2s16,  v2s32,  v2s64,

>From c6c4a277ec1c94f67cd25594970d17036fedbdf0 Mon Sep 17 00:00:00 2001
From: Steven Perron <stevenperron at google.com>
Date: Mon, 8 Dec 2025 16:08:01 -0500
Subject: [PATCH 03/10] Fix alignment in stores generated in legalize pointer
 cast.

---
 .../Target/SPIRV/SPIRVLegalizePointerCast.cpp |  6 +++-
 .../SPIRV/legalization/load-store-global.ll   | 30 +++++++++----------
 2 files changed, 20 insertions(+), 16 deletions(-)

diff --git a/llvm/lib/Target/SPIRV/SPIRVLegalizePointerCast.cpp b/llvm/lib/Target/SPIRV/SPIRVLegalizePointerCast.cpp
index 81c7596530ee2..a794f3e9c5363 100644
--- a/llvm/lib/Target/SPIRV/SPIRVLegalizePointerCast.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVLegalizePointerCast.cpp
@@ -168,6 +168,9 @@ class SPIRVLegalizePointerCast : public FunctionPass {
     assert(VecTy->getElementType() == ArrTy->getElementType() &&
            "Element types of array and vector must be the same.");
 
+    const DataLayout &DL = B.GetInsertBlock()->getModule()->getDataLayout();
+    uint64_t ElemSize = DL.getTypeAllocSize(ArrTy->getElementType());
+
     for (unsigned i = 0; i < VecTy->getNumElements(); ++i) {
       // Create a GEP to access the i-th element of the array.
       SmallVector<Type *, 2> Types = {DstArrayPtr->getType(),
@@ -190,7 +193,8 @@ class SPIRVLegalizePointerCast : public FunctionPass {
       buildAssignType(B, VecTy->getElementType(), Element);
 
       Types = {Element->getType(), ElementPtr->getType()};
-      Args = {Element, ElementPtr, B.getInt16(2), B.getInt8(Alignment.value())};
+      Align NewAlign = commonAlignment(Alignment, i * ElemSize);
+      Args = {Element, ElementPtr, B.getInt16(2), B.getInt8(NewAlign.value())};
       B.CreateIntrinsic(Intrinsic::spv_store, {Types}, {Args});
     }
   }
diff --git a/llvm/test/CodeGen/SPIRV/legalization/load-store-global.ll b/llvm/test/CodeGen/SPIRV/legalization/load-store-global.ll
index 468d3ded4c306..39adba954568e 100644
--- a/llvm/test/CodeGen/SPIRV/legalization/load-store-global.ll
+++ b/llvm/test/CodeGen/SPIRV/legalization/load-store-global.ll
@@ -91,49 +91,49 @@ entry:
 ; CHECK-DAG: OpStore %[[#PTR0_S]] %[[#VAL0_S]] Aligned 64
 ; CHECK-DAG: %[[#PTR1_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C1]]
 ; CHECK-DAG: %[[#VAL1_S:]] = OpCompositeExtract %[[#int]] %[[#INS3]] 1
-; CHECK-DAG: OpStore %[[#PTR1_S]] %[[#VAL1_S]] Aligned 64
+; CHECK-DAG: OpStore %[[#PTR1_S]] %[[#VAL1_S]] Aligned 4
 ; CHECK-DAG: %[[#PTR2_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C2]]
 ; CHECK-DAG: %[[#VAL2_S:]] = OpCompositeExtract %[[#int]] %[[#INS3]] 2
-; CHECK-DAG: OpStore %[[#PTR2_S]] %[[#VAL2_S]] Aligned 64
+; CHECK-DAG: OpStore %[[#PTR2_S]] %[[#VAL2_S]] Aligned 8
 ; CHECK-DAG: %[[#PTR3_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C3]]
 ; CHECK-DAG: %[[#VAL3_S:]] = OpCompositeExtract %[[#int]] %[[#INS3]] 3
-; CHECK-DAG: OpStore %[[#PTR3_S]] %[[#VAL3_S]] Aligned 64
+; CHECK-DAG: OpStore %[[#PTR3_S]] %[[#VAL3_S]] Aligned 4
 ; CHECK-DAG: %[[#PTR4_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C4]]
 ; CHECK-DAG: %[[#VAL4_S:]] = OpCompositeExtract %[[#int]] %[[#INS7]] 0
-; CHECK-DAG: OpStore %[[#PTR4_S]] %[[#VAL4_S]] Aligned 64
+; CHECK-DAG: OpStore %[[#PTR4_S]] %[[#VAL4_S]] Aligned 16
 ; CHECK-DAG: %[[#PTR5_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C5]]
 ; CHECK-DAG: %[[#VAL5_S:]] = OpCompositeExtract %[[#int]] %[[#INS7]] 1
-; CHECK-DAG: OpStore %[[#PTR5_S]] %[[#VAL5_S]] Aligned 64
+; CHECK-DAG: OpStore %[[#PTR5_S]] %[[#VAL5_S]] Aligned 4
 ; CHECK-DAG: %[[#PTR6_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C6]]
 ; CHECK-DAG: %[[#VAL6_S:]] = OpCompositeExtract %[[#int]] %[[#INS7]] 2
-; CHECK-DAG: OpStore %[[#PTR6_S]] %[[#VAL6_S]] Aligned 64
+; CHECK-DAG: OpStore %[[#PTR6_S]] %[[#VAL6_S]] Aligned 8
 ; CHECK-DAG: %[[#PTR7_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C7]]
 ; CHECK-DAG: %[[#VAL7_S:]] = OpCompositeExtract %[[#int]] %[[#INS7]] 3
-; CHECK-DAG: OpStore %[[#PTR7_S]] %[[#VAL7_S]] Aligned 64
+; CHECK-DAG: OpStore %[[#PTR7_S]] %[[#VAL7_S]] Aligned 4
 ; CHECK-DAG: %[[#PTR8_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C8]]
 ; CHECK-DAG: %[[#VAL8_S:]] = OpCompositeExtract %[[#int]] %[[#INS11]] 0
-; CHECK-DAG: OpStore %[[#PTR8_S]] %[[#VAL8_S]] Aligned 64
+; CHECK-DAG: OpStore %[[#PTR8_S]] %[[#VAL8_S]] Aligned 32
 ; CHECK-DAG: %[[#PTR9_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C9]]
 ; CHECK-DAG: %[[#VAL9_S:]] = OpCompositeExtract %[[#int]] %[[#INS11]] 1
-; CHECK-DAG: OpStore %[[#PTR9_S]] %[[#VAL9_S]] Aligned 64
+; CHECK-DAG: OpStore %[[#PTR9_S]] %[[#VAL9_S]] Aligned 4
 ; CHECK-DAG: %[[#PTR10_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C10]]
 ; CHECK-DAG: %[[#VAL10_S:]] = OpCompositeExtract %[[#int]] %[[#INS11]] 2
-; CHECK-DAG: OpStore %[[#PTR10_S]] %[[#VAL10_S]] Aligned 64
+; CHECK-DAG: OpStore %[[#PTR10_S]] %[[#VAL10_S]] Aligned 8
 ; CHECK-DAG: %[[#PTR11_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C11]]
 ; CHECK-DAG: %[[#VAL11_S:]] = OpCompositeExtract %[[#int]] %[[#INS11]] 3
-; CHECK-DAG: OpStore %[[#PTR11_S]] %[[#VAL11_S]] Aligned 64
+; CHECK-DAG: OpStore %[[#PTR11_S]] %[[#VAL11_S]] Aligned 4
 ; CHECK-DAG: %[[#PTR12_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C12]]
 ; CHECK-DAG: %[[#VAL12_S:]] = OpCompositeExtract %[[#int]] %[[#INS15]] 0
-; CHECK-DAG: OpStore %[[#PTR12_S]] %[[#VAL12_S]] Aligned 64
+; CHECK-DAG: OpStore %[[#PTR12_S]] %[[#VAL12_S]] Aligned 16
 ; CHECK-DAG: %[[#PTR13_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C13]]
 ; CHECK-DAG: %[[#VAL13_S:]] = OpCompositeExtract %[[#int]] %[[#INS15]] 1
-; CHECK-DAG: OpStore %[[#PTR13_S]] %[[#VAL13_S]] Aligned 64
+; CHECK-DAG: OpStore %[[#PTR13_S]] %[[#VAL13_S]] Aligned 4
 ; CHECK-DAG: %[[#PTR14_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C14]]
 ; CHECK-DAG: %[[#VAL14_S:]] = OpCompositeExtract %[[#int]] %[[#INS15]] 2
-; CHECK-DAG: OpStore %[[#PTR14_S]] %[[#VAL14_S]] Aligned 64
+; CHECK-DAG: OpStore %[[#PTR14_S]] %[[#VAL14_S]] Aligned 8
 ; CHECK-DAG: %[[#PTR15_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C15]]
 ; CHECK-DAG: %[[#VAL15_S:]] = OpCompositeExtract %[[#int]] %[[#INS15]] 3
-; CHECK-DAG: OpStore %[[#PTR15_S]] %[[#VAL15_S]] Aligned 64
+; CHECK-DAG: OpStore %[[#PTR15_S]] %[[#VAL15_S]] Aligned 4
   store <16 x i32> %0, ptr addrspace(10) @G_16, align 64
   ret void
 }

>From 0760562bd5db87b599f1d1a5248174663e9ee694 Mon Sep 17 00:00:00 2001
From: Steven Perron <stevenperron at google.com>
Date: Tue, 9 Dec 2025 14:36:42 -0500
Subject: [PATCH 04/10] Set all defs to the default type in post legalization.

---
 llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp  | 31 +++++++++++--------
 .../SPIRV/legalization/load-store-global.ll   | 28 +++++++++++++++--
 2 files changed, 44 insertions(+), 15 deletions(-)

diff --git a/llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp b/llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp
index d91016a38539b..1170a7a2f3270 100644
--- a/llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp
@@ -325,20 +325,25 @@ static void registerSpirvTypeForNewInstructions(MachineFunction &MF,
 
   for (auto *I : Worklist) {
     MachineIRBuilder MIB(*I);
-    Register ResVReg = I->getOperand(0).getReg();
-    const LLT &ResLLT = MRI.getType(ResVReg);
-    SPIRVType *ResType = nullptr;
-    if (ResLLT.isVector()) {
-      SPIRVType *CompType = GR->getOrCreateSPIRVIntegerType(
-          ResLLT.getElementType().getSizeInBits(), MIB);
-      ResType = GR->getOrCreateSPIRVVectorType(
-          CompType, ResLLT.getNumElements(), MIB, false);
-    } else {
-      ResType = GR->getOrCreateSPIRVIntegerType(ResLLT.getSizeInBits(), MIB);
+    for (unsigned Idx = 0; Idx < I->getNumDefs(); ++Idx) {
+      Register ResVReg = I->getOperand(Idx).getReg();
+      if (GR->getSPIRVTypeForVReg(ResVReg))
+        continue;
+      const LLT &ResLLT = MRI.getType(ResVReg);
+      SPIRVType *ResType = nullptr;
+      if (ResLLT.isVector()) {
+        SPIRVType *CompType = GR->getOrCreateSPIRVIntegerType(
+            ResLLT.getElementType().getSizeInBits(), MIB);
+        ResType = GR->getOrCreateSPIRVVectorType(
+            CompType, ResLLT.getNumElements(), MIB, false);
+      } else {
+        ResType = GR->getOrCreateSPIRVIntegerType(ResLLT.getSizeInBits(), MIB);
+      }
+      LLVM_DEBUG(dbgs() << "Could not determine type for " << ResVReg
+                        << ", defaulting to " << *ResType << "\n");
+
+      setRegClassType(ResVReg, ResType, GR, &MRI, MF, true);
     }
-    LLVM_DEBUG(dbgs() << "Could not determine type for " << *I
-                      << ", defaulting to " << *ResType << "\n");
-    setRegClassType(ResVReg, ResType, GR, &MRI, MF, true);
   }
 }
 
diff --git a/llvm/test/CodeGen/SPIRV/legalization/load-store-global.ll b/llvm/test/CodeGen/SPIRV/legalization/load-store-global.ll
index 39adba954568e..19b39ff59809a 100644
--- a/llvm/test/CodeGen/SPIRV/legalization/load-store-global.ll
+++ b/llvm/test/CodeGen/SPIRV/legalization/load-store-global.ll
@@ -1,7 +1,6 @@
 ; RUN: llc -O0 -verify-machineinstrs -mtriple=spirv-unknown-vulkan %s -o - | FileCheck %s
 ; RUN: %if spirv-tools %{ llc -O0 -mtriple=spirv-unknown-vulkan %s -o - -filetype=obj | spirv-val %}
 
-; CHECK-DAG: OpName %[[#test_int32_double_conversion:]] "test_int32_double_conversion"
 ; CHECK-DAG: %[[#int:]] = OpTypeInt 32 0
 ; CHECK-DAG: %[[#v4i32:]] = OpTypeVector %[[#int]] 4
 ; CHECK-DAG: %[[#double:]] = OpTypeFloat 64
@@ -139,7 +138,7 @@ entry:
 }
 
 define spir_func void @test_int32_double_conversion() {
-; CHECK: %[[#test_int32_double_conversion]] = OpFunction
+; CHECK: OpFunction
 entry:
   ; CHECK: %[[#LOAD:]] = OpLoad %[[#v4f64]] %[[#global_double]]
   ; CHECK: %[[#VEC_SHUF1:]] = OpVectorShuffle %{{[a-zA-Z0-9_]+}} %[[#LOAD]] %{{[a-zA-Z0-9_]+}} 0 1
@@ -186,6 +185,31 @@ entry:
   ret void
 }
 
+; CHECK: OpFunction
+define spir_func void @test_double_to_int_implicit_conversion() {
+entry:
+
+; CHECK: %[[#LOAD_V4F64:]] = OpLoad %[[#V4F64_TYPE:]] %[[#GLOBAL_DOUBLE_VAR:]] Aligned 32
+; CHECK: %[[#VEC_SHUF_01:]] = OpVectorShuffle %[[#V2F64_TYPE:]] %[[#LOAD_V4F64]] %[[#UNDEF_V2F64:]] 0 1
+; CHECK: %[[#VEC_SHUF_23:]] = OpVectorShuffle %[[#V2F64_TYPE:]] %[[#LOAD_V4F64]] %[[#UNDEF_V2F64]] 2 3
+; CHECK: %[[#BITCAST_V4I32_01:]] = OpBitcast %[[#V4I32_TYPE:]] %[[#VEC_SHUF_01]]
+; CHECK: %[[#BITCAST_V4I32_23:]] = OpBitcast %[[#V4I32_TYPE]] %[[#VEC_SHUF_23]]
+  %0 = load <8 x i32>, ptr addrspace(10) @G_4_double, align 64
+
+; CHECK: %[[#VEC_SHUF_0_0:]] = OpVectorShuffle %[[#V2I32_TYPE:]] %[[#BITCAST_V4I32_01]] %[[#UNDEF_V2I32:]] 0 1
+; CHECK: %[[#VEC_SHUF_0_1:]] = OpVectorShuffle %[[#V2I32_TYPE]] %[[#BITCAST_V4I32_01]] %[[#UNDEF_V2I32]] 2 3
+; CHECK: %[[#VEC_SHUF_1_0:]] = OpVectorShuffle %[[#V2I32_TYPE]] %[[#BITCAST_V4I32_23]] %[[#UNDEF_V2I32]] 0 1
+; CHECK: %[[#VEC_SHUF_1_1:]] = OpVectorShuffle %[[#V2I32_TYPE]] %[[#BITCAST_V4I32_23]] %[[#UNDEF_V2I32]] 2 3
+; CHECK: %[[#BITCAST_DOUBLE_0_0:]] = OpBitcast %[[#DOUBLE_TYPE:]] %[[#VEC_SHUF_0_0]]
+; CHECK: %[[#BITCAST_DOUBLE_0_1:]] = OpBitcast %[[#DOUBLE_TYPE]] %[[#VEC_SHUF_0_1]]
+; CHECK: %[[#BITCAST_DOUBLE_1_0:]] = OpBitcast %[[#DOUBLE_TYPE]] %[[#VEC_SHUF_1_0]]
+; CHECK: %[[#BITCAST_DOUBLE_1_1:]] = OpBitcast %[[#DOUBLE_TYPE]] %[[#VEC_SHUF_1_1]]
+; CHECK: %[[#COMPOSITE_CONSTRUCT:]] = OpCompositeConstruct %[[#V4F64_TYPE]] %[[#BITCAST_DOUBLE_0_0]] %[[#BITCAST_DOUBLE_0_1]] %[[#BITCAST_DOUBLE_1_0]] %[[#BITCAST_DOUBLE_1_1]]
+; CHECK: OpStore %[[#GLOBAL_DOUBLE_VAR]] %[[#COMPOSITE_CONSTRUCT]] Aligned 64
+  store <8 x i32> %0, ptr addrspace(10) @G_4_double, align 64
+  ret void
+}
+
 ; Add a main function to make it a valid module for spirv-val
 define void @main() #1 {
   ret void

>From 740bf6ece1408c7b12a0a313180d6d9bf83ae64e Mon Sep 17 00:00:00 2001
From: Steven Perron <stevenperron at google.com>
Date: Wed, 10 Dec 2025 14:36:38 -0500
Subject: [PATCH 05/10] Update tests, and add tests for 6-element vectors

---
 .../SPIRV/legalization/vector-arithmetic-6.ll | 156 +++++++++++++
 .../SPIRV/legalization/vector-arithmetic.ll   | 215 +++++++++++++-----
 2 files changed, 310 insertions(+), 61 deletions(-)
 create mode 100644 llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic-6.ll

diff --git a/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic-6.ll b/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic-6.ll
new file mode 100644
index 0000000000000..4f889a3043efc
--- /dev/null
+++ b/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic-6.ll
@@ -0,0 +1,156 @@
+; RUN: llc -O0 -verify-machineinstrs -mtriple=spirv-unknown-vulkan %s -o - | FileCheck %s
+; RUN: %if spirv-tools %{ llc -O0 -mtriple=spirv-unknown-vulkan %s -o - -filetype=obj | spirv-val %}
+
+; CHECK-DAG: OpName %[[#main:]] "main"
+; CHECK-DAG: %[[#float:]] = OpTypeFloat 32
+; CHECK-DAG: %[[#v4f32:]] = OpTypeVector %[[#float]] 4
+; CHECK-DAG: %[[#int:]] = OpTypeInt 32 0
+; CHECK-DAG: %[[#c6:]] = OpConstant %[[#int]] 6
+; CHECK-DAG: %[[#v6f32:]] = OpTypeArray %[[#float]] %[[#c6]]
+; CHECK-DAG: %[[#v6i32:]] = OpTypeArray %[[#int]] %[[#c6]]
+; CHECK-DAG: %[[#ptr_ssbo_v6i32:]] = OpTypePointer Private %[[#v6i32]]
+; CHECK-DAG: %[[#v4i32:]] = OpTypeVector %[[#int]] 4
+
+ at f1 = internal addrspace(10) global [4 x [6 x float] ] zeroinitializer
+ at f2 = internal addrspace(10) global [4 x [6 x float] ] zeroinitializer
+ at i1 = internal addrspace(10) global [4 x [6 x i32] ] zeroinitializer
+ at i2 = internal addrspace(10) global [4 x [6 x i32] ] zeroinitializer
+
+define void @main() local_unnamed_addr #0 {
+; CHECK: %[[#main]] = OpFunction
+entry:
+  %2 = getelementptr [4 x [6 x float] ], ptr addrspace(10) @f1, i32 0, i32 0
+  %3 = load <6 x float>, ptr addrspace(10) %2, align 4
+  %4 = getelementptr [4 x [6 x float] ], ptr addrspace(10) @f1, i32 0, i32 1
+  %5 = load <6 x float>, ptr addrspace(10) %4, align 4
+  %6 = getelementptr [4 x [6 x float] ], ptr addrspace(10) @f1, i32 0, i32 2
+  %7 = load <6 x float>, ptr addrspace(10) %6, align 4
+  %8 = getelementptr [4 x [6 x float] ], ptr addrspace(10) @f1, i32 0, i32 3
+  %9 = load <6 x float>, ptr addrspace(10) %8, align 4
+  
+  ; We expect the 6-element vectors to be widened to 8, then split into two vectors of size 4.
+  ; CHECK: %[[#Mul1:]] = OpFMul %[[#v4f32]]
+  ; CHECK: %[[#Mul2:]] = OpFMul %[[#v4f32]]
+  %10 = fmul reassoc nnan ninf nsz arcp afn <6 x float> %3, splat (float 3.000000e+00)
+
+  ; CHECK: %[[#Add1:]] = OpFAdd %[[#v4f32]] %[[#Mul1]]
+  ; CHECK: %[[#Add2:]] = OpFAdd %[[#v4f32]] %[[#Mul2]]
+  %11 = fadd reassoc nnan ninf nsz arcp afn <6 x float> %10, %5
+
+  ; CHECK: %[[#Sub1:]] = OpFSub %[[#v4f32]] %[[#Add1]]
+  ; CHECK: %[[#Sub2:]] = OpFSub %[[#v4f32]] %[[#Add2]]
+  %13 = fsub reassoc nnan ninf nsz arcp afn <6 x float> %11, %9
+
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub1]] 0
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub1]] 1
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub1]] 2
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub1]] 3
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub2]] 0
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub2]] 1
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  
+  %14 = getelementptr [4 x [6 x float] ], ptr addrspace(10) @f2, i32 0, i32 0
+  store <6 x float> %13, ptr addrspace(10) %14, align 4
+  ret void
+}
+
+; Test integer vector arithmetic operations
+define void @test_int_vector_arithmetic() local_unnamed_addr #0 {
+; CHECK: OpFunction
+entry:
+  %2 = getelementptr [4 x [6 x i32] ], ptr addrspace(10) @i1, i32 0, i32 0
+  %3 = load <6 x i32>, ptr addrspace(10) %2, align 4
+  %4 = getelementptr [4 x [6 x i32] ], ptr addrspace(10) @i1, i32 0, i32 1
+  %5 = load <6 x i32>, ptr addrspace(10) %4, align 4
+
+  ; CHECK: %[[#Add1:]] = OpIAdd %[[#v4i32]]
+  ; CHECK: %[[#Add2:]] = OpIAdd %[[#v4i32]]
+  %6 = add <6 x i32> %3, %5
+
+  ; CHECK: %[[#Sub1:]] = OpISub %[[#v4i32]] %[[#Add1]]
+  ; CHECK: %[[#Sub2:]] = OpISub %[[#v4i32]] %[[#Add2]]
+  %7 = sub <6 x i32> %6, %5
+
+  ; CHECK: %[[#Mul1:]] = OpIMul %[[#v4i32]] %[[#Sub1]]
+  ; CHECK: %[[#Mul2:]] = OpIMul %[[#v4i32]] %[[#Sub2]]
+  %8 = mul <6 x i32> %7, splat (i32 2)
+
+  ; CHECK: %[[#SDiv1:]] = OpSDiv %[[#v4i32]] %[[#Mul1]]
+  ; CHECK: %[[#SDiv2:]] = OpSDiv %[[#v4i32]] %[[#Mul2]]
+  %9 = sdiv <6 x i32> %8, splat (i32 2)
+
+  ; CHECK: %[[#UDiv1:]] = OpUDiv %[[#v4i32]] %[[#SDiv1]]
+  ; CHECK: %[[#UDiv2:]] = OpUDiv %[[#v4i32]] %[[#SDiv2]]
+  %10 = udiv <6 x i32> %9, splat (i32 1)
+
+  ; CHECK: %[[#SRem1:]] = OpSRem %[[#v4i32]] %[[#UDiv1]]
+  ; CHECK: %[[#SRem2:]] = OpSRem %[[#v4i32]] %[[#UDiv2]]
+  %11 = srem <6 x i32> %10, splat (i32 3)
+
+  ; CHECK: %[[#UMod1:]] = OpUMod %[[#v4i32]] %[[#SRem1]]
+  ; CHECK: %[[#UMod2:]] = OpUMod %[[#v4i32]] %[[#SRem2]]
+  %12 = urem <6 x i32> %11, splat (i32 3)
+
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod1]] 0
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod1]] 1
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod1]] 2
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod1]] 3
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod2]] 0
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod2]] 1
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+
+  %13 = getelementptr [4 x [6 x i32] ], ptr addrspace(10) @i2, i32 0, i32 0
+  store <6 x i32> %12, ptr addrspace(10) %13, align 4
+  ret void
+}
+
+; Test remaining float vector arithmetic operations
+define void @test_float_vector_arithmetic_continued() local_unnamed_addr #0 {
+; CHECK: OpFunction
+entry:
+  %2 = getelementptr [4 x [6 x float] ], ptr addrspace(10) @f1, i32 0, i32 0
+  %3 = load <6 x float>, ptr addrspace(10) %2, align 4
+  %4 = getelementptr [4 x [6 x float] ], ptr addrspace(10) @f1, i32 0, i32 1
+  %5 = load <6 x float>, ptr addrspace(10) %4, align 4
+
+  ; CHECK: %[[#FDiv1:]] = OpFDiv %[[#v4f32]]
+  ; CHECK: %[[#FDiv2:]] = OpFDiv %[[#v4f32]]
+  %6 = fdiv reassoc nnan ninf nsz arcp afn <6 x float> %3, splat (float 2.000000e+00)
+
+  ; CHECK: %[[#FRem1:]] = OpFRem %[[#v4f32]] %[[#FDiv1]]
+  ; CHECK: %[[#FRem2:]] = OpFRem %[[#v4f32]] %[[#FDiv2]]
+  %7 = frem reassoc nnan ninf nsz arcp afn <6 x float> %6, splat (float 3.000000e+00)
+
+  ; CHECK: %[[#Fma1:]] = OpExtInst %[[#v4f32]] {{.*}} Fma {{.*}} %[[#FDiv1]] %[[#FRem1]]
+  ; CHECK: %[[#Fma2:]] = OpExtInst %[[#v4f32]] {{.*}} Fma {{.*}} %[[#FDiv2]] %[[#FRem2]]
+  %8 = call reassoc nnan ninf nsz arcp afn <6 x float> @llvm.fma.v6f32(<6 x float> %5, <6 x float> %6, <6 x float> %7)
+
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 0
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 1
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 2
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 3
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma2]] 0
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma2]] 1
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+
+  %9 = getelementptr [4 x [6 x float] ], ptr addrspace(10) @f2, i32 0, i32 0
+  store <6 x float> %8, ptr addrspace(10) %9, align 4
+  ret void
+}
+
+attributes #0 = { "hlsl.numthreads"="1,1,1" "hlsl.shader"="compute" }
\ No newline at end of file
diff --git a/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic.ll b/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic.ll
index 126548a1e9ea2..e5476f5b15055 100644
--- a/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic.ll
+++ b/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic.ll
@@ -29,30 +29,57 @@ entry:
   %9 = load <16 x float>, ptr addrspace(10) %8, align 4
   
   ; We expect the large vectors to be split into size 4, and the operations performed on them.
-  ; CHECK: OpFMul %[[#v4f32]]
-  ; CHECK: OpFMul %[[#v4f32]]
-  ; CHECK: OpFMul %[[#v4f32]]
-  ; CHECK: OpFMul %[[#v4f32]]
+  ; CHECK: %[[#Mul1:]] = OpFMul %[[#v4f32]]
+  ; CHECK: %[[#Mul2:]] = OpFMul %[[#v4f32]]
+  ; CHECK: %[[#Mul3:]] = OpFMul %[[#v4f32]]
+  ; CHECK: %[[#Mul4:]] = OpFMul %[[#v4f32]]
   %10 = fmul reassoc nnan ninf nsz arcp afn <16 x float> %3, splat (float 3.000000e+00)
 
-  ; CHECK: OpFAdd %[[#v4f32]]
-  ; CHECK: OpFAdd %[[#v4f32]]
-  ; CHECK: OpFAdd %[[#v4f32]]
-  ; CHECK: OpFAdd %[[#v4f32]]
+  ; CHECK: %[[#Add1:]] = OpFAdd %[[#v4f32]] %[[#Mul1]]
+  ; CHECK: %[[#Add2:]] = OpFAdd %[[#v4f32]] %[[#Mul2]]
+  ; CHECK: %[[#Add3:]] = OpFAdd %[[#v4f32]] %[[#Mul3]]
+  ; CHECK: %[[#Add4:]] = OpFAdd %[[#v4f32]] %[[#Mul4]]
   %11 = fadd reassoc nnan ninf nsz arcp afn <16 x float> %10, %5
 
-  ; CHECK: OpFAdd %[[#v4f32]]
-  ; CHECK: OpFAdd %[[#v4f32]]
-  ; CHECK: OpFAdd %[[#v4f32]]
-  ; CHECK: OpFAdd %[[#v4f32]]
-  %12 = fadd reassoc nnan ninf nsz arcp afn <16 x float> %11, %7
-
-  ; CHECK: OpFSub %[[#v4f32]]
-  ; CHECK: OpFSub %[[#v4f32]]
-  ; CHECK: OpFSub %[[#v4f32]]
-  ; CHECK: OpFSub %[[#v4f32]]
-  %13 = fsub reassoc nnan ninf nsz arcp afn <16 x float> %12, %9
-
+  ; CHECK: %[[#Sub1:]] = OpFSub %[[#v4f32]] %[[#Add1]]
+  ; CHECK: %[[#Sub2:]] = OpFSub %[[#v4f32]] %[[#Add2]]
+  ; CHECK: %[[#Sub3:]] = OpFSub %[[#v4f32]] %[[#Add3]]
+  ; CHECK: %[[#Sub4:]] = OpFSub %[[#v4f32]] %[[#Add4]]
+  %13 = fsub reassoc nnan ninf nsz arcp afn <16 x float> %11, %9
+
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub1]] 0
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub1]] 1
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub1]] 2
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub1]] 3
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub2]] 0
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub2]] 1
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub2]] 2
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub2]] 3
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub3]] 0
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub3]] 1
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub3]] 2
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub3]] 3
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub4]] 0
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub4]] 1
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub4]] 2
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub4]] 3
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  
   %14 = getelementptr [4 x [16 x float] ], ptr addrspace(10) @f2, i32 0, i32 0
   store <16 x float> %13, ptr addrspace(10) %14, align 4
   ret void
@@ -67,48 +94,81 @@ entry:
   %4 = getelementptr [4 x [16 x i32] ], ptr addrspace(10) @i1, i32 0, i32 1
   %5 = load <16 x i32>, ptr addrspace(10) %4, align 4
 
-  ; CHECK: OpIAdd %[[#v4i32]]
-  ; CHECK: OpIAdd %[[#v4i32]]
-  ; CHECK: OpIAdd %[[#v4i32]]
-  ; CHECK: OpIAdd %[[#v4i32]]
+  ; CHECK: %[[#Add1:]] = OpIAdd %[[#v4i32]]
+  ; CHECK: %[[#Add2:]] = OpIAdd %[[#v4i32]]
+  ; CHECK: %[[#Add3:]] = OpIAdd %[[#v4i32]]
+  ; CHECK: %[[#Add4:]] = OpIAdd %[[#v4i32]]
   %6 = add <16 x i32> %3, %5
 
-  ; CHECK: OpISub %[[#v4i32]]
-  ; CHECK: OpISub %[[#v4i32]]
-  ; CHECK: OpISub %[[#v4i32]]
-  ; CHECK: OpISub %[[#v4i32]]
+  ; CHECK: %[[#Sub1:]] = OpISub %[[#v4i32]] %[[#Add1]]
+  ; CHECK: %[[#Sub2:]] = OpISub %[[#v4i32]] %[[#Add2]]
+  ; CHECK: %[[#Sub3:]] = OpISub %[[#v4i32]] %[[#Add3]]
+  ; CHECK: %[[#Sub4:]] = OpISub %[[#v4i32]] %[[#Add4]]
   %7 = sub <16 x i32> %6, %5
 
-  ; CHECK: OpIMul %[[#v4i32]]
-  ; CHECK: OpIMul %[[#v4i32]]
-  ; CHECK: OpIMul %[[#v4i32]]
-  ; CHECK: OpIMul %[[#v4i32]]
+  ; CHECK: %[[#Mul1:]] = OpIMul %[[#v4i32]] %[[#Sub1]]
+  ; CHECK: %[[#Mul2:]] = OpIMul %[[#v4i32]] %[[#Sub2]]
+  ; CHECK: %[[#Mul3:]] = OpIMul %[[#v4i32]] %[[#Sub3]]
+  ; CHECK: %[[#Mul4:]] = OpIMul %[[#v4i32]] %[[#Sub4]]
   %8 = mul <16 x i32> %7, splat (i32 2)
 
-  ; CHECK: OpSDiv %[[#v4i32]]
-  ; CHECK: OpSDiv %[[#v4i32]]
-  ; CHECK: OpSDiv %[[#v4i32]]
-  ; CHECK: OpSDiv %[[#v4i32]]
+  ; CHECK: %[[#SDiv1:]] = OpSDiv %[[#v4i32]] %[[#Mul1]]
+  ; CHECK: %[[#SDiv2:]] = OpSDiv %[[#v4i32]] %[[#Mul2]]
+  ; CHECK: %[[#SDiv3:]] = OpSDiv %[[#v4i32]] %[[#Mul3]]
+  ; CHECK: %[[#SDiv4:]] = OpSDiv %[[#v4i32]] %[[#Mul4]]
   %9 = sdiv <16 x i32> %8, splat (i32 2)
 
-  ; CHECK: OpUDiv %[[#v4i32]]
-  ; CHECK: OpUDiv %[[#v4i32]]
-  ; CHECK: OpUDiv %[[#v4i32]]
-  ; CHECK: OpUDiv %[[#v4i32]]
+  ; CHECK: %[[#UDiv1:]] = OpUDiv %[[#v4i32]] %[[#SDiv1]]
+  ; CHECK: %[[#UDiv2:]] = OpUDiv %[[#v4i32]] %[[#SDiv2]]
+  ; CHECK: %[[#UDiv3:]] = OpUDiv %[[#v4i32]] %[[#SDiv3]]
+  ; CHECK: %[[#UDiv4:]] = OpUDiv %[[#v4i32]] %[[#SDiv4]]
   %10 = udiv <16 x i32> %9, splat (i32 1)
 
-  ; CHECK: OpSRem %[[#v4i32]]
-  ; CHECK: OpSRem %[[#v4i32]]
-  ; CHECK: OpSRem %[[#v4i32]]
-  ; CHECK: OpSRem %[[#v4i32]]
+  ; CHECK: %[[#SRem1:]] = OpSRem %[[#v4i32]] %[[#UDiv1]]
+  ; CHECK: %[[#SRem2:]] = OpSRem %[[#v4i32]] %[[#UDiv2]]
+  ; CHECK: %[[#SRem3:]] = OpSRem %[[#v4i32]] %[[#UDiv3]]
+  ; CHECK: %[[#SRem4:]] = OpSRem %[[#v4i32]] %[[#UDiv4]]
   %11 = srem <16 x i32> %10, splat (i32 3)
 
-  ; CHECK: OpUMod %[[#v4i32]]
-  ; CHECK: OpUMod %[[#v4i32]]
-  ; CHECK: OpUMod %[[#v4i32]]
-  ; CHECK: OpUMod %[[#v4i32]]
+  ; CHECK: %[[#UMod1:]] = OpUMod %[[#v4i32]] %[[#SRem1]]
+  ; CHECK: %[[#UMod2:]] = OpUMod %[[#v4i32]] %[[#SRem2]]
+  ; CHECK: %[[#UMod3:]] = OpUMod %[[#v4i32]] %[[#SRem3]]
+  ; CHECK: %[[#UMod4:]] = OpUMod %[[#v4i32]] %[[#SRem4]]
   %12 = urem <16 x i32> %11, splat (i32 3)
 
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod1]] 0
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod1]] 1
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod1]] 2
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod1]] 3
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod2]] 0
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod2]] 1
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod2]] 2
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod2]] 3
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod3]] 0
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod3]] 1
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod3]] 2
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod3]] 3
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod4]] 0
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod4]] 1
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod4]] 2
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod4]] 3
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+
   %13 = getelementptr [4 x [16 x i32] ], ptr addrspace(10) @i2, i32 0, i32 0
   store <16 x i32> %12, ptr addrspace(10) %13, align 4
   ret void
@@ -123,27 +183,60 @@ entry:
   %4 = getelementptr [4 x [16 x float] ], ptr addrspace(10) @f1, i32 0, i32 1
   %5 = load <16 x float>, ptr addrspace(10) %4, align 4
 
-  ; CHECK: OpFDiv %[[#v4f32]]
-  ; CHECK: OpFDiv %[[#v4f32]]
-  ; CHECK: OpFDiv %[[#v4f32]]
-  ; CHECK: OpFDiv %[[#v4f32]]
+  ; CHECK: %[[#FDiv1:]] = OpFDiv %[[#v4f32]]
+  ; CHECK: %[[#FDiv2:]] = OpFDiv %[[#v4f32]]
+  ; CHECK: %[[#FDiv3:]] = OpFDiv %[[#v4f32]]
+  ; CHECK: %[[#FDiv4:]] = OpFDiv %[[#v4f32]]
   %6 = fdiv reassoc nnan ninf nsz arcp afn <16 x float> %3, splat (float 2.000000e+00)
 
-  ; CHECK: OpFRem %[[#v4f32]]
-  ; CHECK: OpFRem %[[#v4f32]]
-  ; CHECK: OpFRem %[[#v4f32]]
-  ; CHECK: OpFRem %[[#v4f32]]
+  ; CHECK: %[[#FRem1:]] = OpFRem %[[#v4f32]] %[[#FDiv1]]
+  ; CHECK: %[[#FRem2:]] = OpFRem %[[#v4f32]] %[[#FDiv2]]
+  ; CHECK: %[[#FRem3:]] = OpFRem %[[#v4f32]] %[[#FDiv3]]
+  ; CHECK: %[[#FRem4:]] = OpFRem %[[#v4f32]] %[[#FDiv4]]
   %7 = frem reassoc nnan ninf nsz arcp afn <16 x float> %6, splat (float 3.000000e+00)
 
-  ; CHECK: OpExtInst %[[#v4f32]] {{.*}} Fma
-  ; CHECK: OpExtInst %[[#v4f32]] {{.*}} Fma
-  ; CHECK: OpExtInst %[[#v4f32]] {{.*}} Fma
-  ; CHECK: OpExtInst %[[#v4f32]] {{.*}} Fma
+  ; CHECK: %[[#Fma1:]] = OpExtInst %[[#v4f32]] {{.*}} Fma {{.*}} %[[#FDiv1]] %[[#FRem1]]
+  ; CHECK: %[[#Fma2:]] = OpExtInst %[[#v4f32]] {{.*}} Fma {{.*}} %[[#FDiv2]] %[[#FRem2]]
+  ; CHECK: %[[#Fma3:]] = OpExtInst %[[#v4f32]] {{.*}} Fma {{.*}} %[[#FDiv3]] %[[#FRem3]]
+  ; CHECK: %[[#Fma4:]] = OpExtInst %[[#v4f32]] {{.*}} Fma {{.*}} %[[#FDiv4]] %[[#FRem4]]
   %8 = call reassoc nnan ninf nsz arcp afn <16 x float> @llvm.fma.v16f32(<16 x float> %5, <16 x float> %6, <16 x float> %7)
 
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 0
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 1
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 2
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 3
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma2]] 0
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma2]] 1
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma2]] 2
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma2]] 3
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma3]] 0
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma3]] 1
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma3]] 2
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma3]] 3
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma4]] 0
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma4]] 1
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma4]] 2
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma4]] 3
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+
   %9 = getelementptr [4 x [16 x float] ], ptr addrspace(10) @f2, i32 0, i32 0
   store <16 x float> %8, ptr addrspace(10) %9, align 4
   ret void
 }
 
-attributes #0 = { "hlsl.numthreads"="1,1,1" "hlsl.shader"="compute" }
\ No newline at end of file
+attributes #0 = { "hlsl.numthreads"="1,1,1" "hlsl.shader"="compute" }

>From 64afb6c91d4e01cdc3a570ed6c9be13f1b6e0a3f Mon Sep 17 00:00:00 2001
From: Steven Perron <stevenperron at google.com>
Date: Wed, 10 Dec 2025 14:51:37 -0500
Subject: [PATCH 06/10] Handle G_STRICT_FMA

---
 llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp  |  5 +-
 llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp  |  1 +
 .../SPIRV/legalization/vector-arithmetic-6.ll | 35 ++++++++++++
 .../SPIRV/legalization/vector-arithmetic.ll   | 57 +++++++++++++++++++
 4 files changed, 94 insertions(+), 4 deletions(-)

diff --git a/llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp b/llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp
index a3ad5d7fd40c0..8bcc295208f67 100644
--- a/llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp
@@ -184,7 +184,7 @@ SPIRVLegalizerInfo::SPIRVLegalizerInfo(const SPIRVSubtarget &ST) {
           .custom();
   }
 
-  getActionDefinitionsBuilder(TargetOpcode::G_FMA)
+  getActionDefinitionsBuilder({G_FMA, G_STRICT_FMA})
       .legalFor(allScalars)
       .legalFor(allowedVectorTypes)
       .moreElementsToNextPow2(0)
@@ -295,9 +295,6 @@ SPIRVLegalizerInfo::SPIRVLegalizerInfo(const SPIRVSubtarget &ST) {
       .legalFor(allIntScalarsAndVectors)
       .legalIf(extendedScalarsAndVectors);
 
-  getActionDefinitionsBuilder({G_FMA, G_STRICT_FMA})
-      .legalFor(allFloatScalarsAndVectors);
-
   getActionDefinitionsBuilder(G_STRICT_FLDEXP)
       .legalForCartesianProduct(allFloatScalarsAndVectors, allIntScalars);
 
diff --git a/llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp b/llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp
index 1170a7a2f3270..99edb937c3daa 100644
--- a/llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp
@@ -179,6 +179,7 @@ static SPIRVType *deduceTypeFromUses(Register Reg, MachineFunction &MF,
     case TargetOpcode::G_FDIV:
     case TargetOpcode::G_FREM:
     case TargetOpcode::G_FMA:
+    case TargetOpcode::G_STRICT_FMA:
       ResType = deduceTypeFromResultRegister(&Use, Reg, GR, MIB);
       break;
     case TargetOpcode::G_INTRINSIC_W_SIDE_EFFECTS:
diff --git a/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic-6.ll b/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic-6.ll
index 4f889a3043efc..131255396834c 100644
--- a/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic-6.ll
+++ b/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic-6.ll
@@ -153,4 +153,39 @@ entry:
   ret void
 }
 
+; Test constrained fma vector arithmetic operations
+define void @test_constrained_fma_vector() local_unnamed_addr #0 {
+; CHECK: OpFunction
+entry:
+  %2 = getelementptr [4 x [6 x float] ], ptr addrspace(10) @f1, i32 0, i32 0
+  %3 = load <6 x float>, ptr addrspace(10) %2, align 4
+  %4 = getelementptr [4 x [6 x float] ], ptr addrspace(10) @f1, i32 0, i32 1
+  %5 = load <6 x float>, ptr addrspace(10) %4, align 4
+  %6 = getelementptr [4 x [6 x float] ], ptr addrspace(10) @f1, i32 0, i32 2
+  %7 = load <6 x float>, ptr addrspace(10) %6, align 4
+
+  ; CHECK: %[[#Fma1:]] = OpExtInst %[[#v4f32]] {{.*}} Fma
+  ; CHECK: %[[#Fma2:]] = OpExtInst %[[#v4f32]] {{.*}} Fma
+  %8 = call <6 x float> @llvm.experimental.constrained.fma.v6f32(<6 x float> %3, <6 x float> %5, <6 x float> %7, metadata !"round.dynamic", metadata !"fpexcept.strict")
+
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 0
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 1
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 2
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 3
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma2]] 0
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma2]] 1
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+
+  %9 = getelementptr [4 x [6 x float] ], ptr addrspace(10) @f2, i32 0, i32 0
+  store <6 x float> %8, ptr addrspace(10) %9, align 4
+  ret void
+}
+
+declare <6 x float> @llvm.experimental.constrained.fma.v6f32(<6 x float>, <6 x float>, <6 x float>, metadata, metadata)
+
 attributes #0 = { "hlsl.numthreads"="1,1,1" "hlsl.shader"="compute" }
\ No newline at end of file
diff --git a/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic.ll b/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic.ll
index e5476f5b15055..dd1e3d60a52bf 100644
--- a/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic.ll
+++ b/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic.ll
@@ -239,4 +239,61 @@ entry:
   ret void
 }
 
+; Test constrained fma vector arithmetic operations
+define void @test_constrained_fma_vector() local_unnamed_addr #0 {
+; CHECK: OpFunction
+entry:
+  %2 = getelementptr [4 x [16 x float] ], ptr addrspace(10) @f1, i32 0, i32 0
+  %3 = load <16 x float>, ptr addrspace(10) %2, align 4
+  %4 = getelementptr [4 x [16 x float] ], ptr addrspace(10) @f1, i32 0, i32 1
+  %5 = load <16 x float>, ptr addrspace(10) %4, align 4
+  %6 = getelementptr [4 x [16 x float] ], ptr addrspace(10) @f1, i32 0, i32 2
+  %7 = load <16 x float>, ptr addrspace(10) %6, align 4
+
+  ; CHECK: %[[#Fma1:]] = OpExtInst %[[#v4f32]] {{.*}} Fma
+  ; CHECK: %[[#Fma2:]] = OpExtInst %[[#v4f32]] {{.*}} Fma
+  ; CHECK: %[[#Fma3:]] = OpExtInst %[[#v4f32]] {{.*}} Fma
+  ; CHECK: %[[#Fma4:]] = OpExtInst %[[#v4f32]] {{.*}} Fma
+  %8 = call <16 x float> @llvm.experimental.constrained.fma.v16f32(<16 x float> %3, <16 x float> %5, <16 x float> %7, metadata !"round.dynamic", metadata !"fpexcept.strict")
+
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 0
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 1
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 2
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 3
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma2]] 0
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma2]] 1
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma2]] 2
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma2]] 3
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma3]] 0
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma3]] 1
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma3]] 2
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma3]] 3
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma4]] 0
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma4]] 1
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma4]] 2
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma4]] 3
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+
+  %9 = getelementptr [4 x [16 x float] ], ptr addrspace(10) @f2, i32 0, i32 0
+  store <16 x float> %8, ptr addrspace(10) %9, align 4
+  ret void
+}
+
+declare <16 x float> @llvm.experimental.constrained.fma.v16f32(<16 x float>, <16 x float>, <16 x float>, metadata, metadata)
+
 attributes #0 = { "hlsl.numthreads"="1,1,1" "hlsl.shader"="compute" }

>From 9ef180b05e883779720bfa54e1aebf8be60af979 Mon Sep 17 00:00:00 2001
From: Steven Perron <stevenperron at google.com>
Date: Wed, 10 Dec 2025 15:26:50 -0500
Subject: [PATCH 07/10] Scalarize rem and div with illegal vector size that is
 not a power of 2.

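The rule added below scalarizes rem/div when the element count is not a power
of 2, then regroups the scalar results into max-width vectors. The element-count
math can be sketched as follows (an illustrative standalone sketch, not the
in-tree LegalizerInfo API; the struct and function names are invented for this
example):

```cpp
#include <cassert>

// Illustrative sketch: for a vector of NumElts elements with a target maximum
// of MaxVectorSize lanes, a non-power-of-2 count is first scalarized into
// NumElts scalar ops; the results are then regrouped into
// ceil(NumElts / MaxVectorSize) vectors, padding the last one with undef lanes.
struct SplitPlan {
  unsigned ScalarOps;  // one op per element after scalarization
  unsigned Groups;     // vectors rebuilt from the scalar results
  unsigned UndefLanes; // padding lanes in the final group
};

static bool isPow2(unsigned N) { return N != 0 && (N & (N - 1)) == 0; }

SplitPlan planNonPow2Split(unsigned NumElts, unsigned MaxVectorSize) {
  assert(!isPow2(NumElts) && "only non-power-of-2 counts are scalarized");
  unsigned Groups = (NumElts + MaxVectorSize - 1) / MaxVectorSize;
  return {NumElts, Groups, Groups * MaxVectorSize - NumElts};
}
```

For a `<6 x i32>` urem with the shader maximum of 4 lanes this gives 6 scalar
ops regrouped into 2 vectors with 2 undef padding lanes, which matches the six
OpUMod instructions and the two OpCompositeConstruct results (the second padded
with two undefs) checked in the updated test.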
---
 llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp  | 21 +++++-
 .../SPIRV/legalization/vector-arithmetic-6.ll | 69 ++++++++++++++-----
 2 files changed, 71 insertions(+), 19 deletions(-)

diff --git a/llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp b/llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp
index 8bcc295208f67..55f17b13fd07f 100644
--- a/llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp
@@ -173,7 +173,15 @@ SPIRVLegalizerInfo::SPIRVLegalizerInfo(const SPIRVSubtarget &ST) {
   uint32_t MaxVectorSize = ST.isShader() ? 4 : 16;
 
   for (auto Opc : getTypeFoldingSupportedOpcodes()) {
-    if (Opc != G_EXTRACT_VECTOR_ELT)
+    switch (Opc) {
+    case G_EXTRACT_VECTOR_ELT:
+    case G_UREM:
+    case G_SREM:
+    case G_UDIV:
+    case G_SDIV:
+    case G_FREM:
+      break;
+    default:
       getActionDefinitionsBuilder(Opc)
           .customFor(allScalars)
           .customFor(allowedVectorTypes)
@@ -182,8 +190,19 @@ SPIRVLegalizerInfo::SPIRVLegalizerInfo(const SPIRVSubtarget &ST) {
                            LegalizeMutations::changeElementCountTo(
                                0, ElementCount::getFixed(MaxVectorSize)))
           .custom();
+      break;
+    }
   }
 
+  getActionDefinitionsBuilder({G_UREM, G_SREM, G_SDIV, G_UDIV, G_FREM})
+      .customFor(allScalars)
+      .customFor(allowedVectorTypes)
+      .scalarizeIf(numElementsNotPow2(0), 0)
+      .fewerElementsIf(vectorElementCountIsGreaterThan(0, MaxVectorSize),
+                       LegalizeMutations::changeElementCountTo(
+                           0, ElementCount::getFixed(MaxVectorSize)))
+      .custom();
+
   getActionDefinitionsBuilder({G_FMA, G_STRICT_FMA})
       .legalFor(allScalars)
       .legalFor(allowedVectorTypes)
diff --git a/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic-6.ll b/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic-6.ll
index 131255396834c..d1cbfd4811c30 100644
--- a/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic-6.ll
+++ b/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic-6.ll
@@ -10,6 +10,7 @@
 ; CHECK-DAG: %[[#v6i32:]] = OpTypeArray %[[#int]] %[[#c6]]
 ; CHECK-DAG: %[[#ptr_ssbo_v6i32:]] = OpTypePointer Private %[[#v6i32]]
 ; CHECK-DAG: %[[#v4i32:]] = OpTypeVector %[[#int]] 4
+; CHECK-DAG: %[[#UNDEF:]] = OpUndef %[[#int]]
 
 @f1 = internal addrspace(10) global [4 x [6 x float] ] zeroinitializer
 @f2 = internal addrspace(10) global [4 x [6 x float] ] zeroinitializer
@@ -80,33 +81,61 @@ entry:
   ; CHECK: %[[#Mul2:]] = OpIMul %[[#v4i32]] %[[#Sub2]]
   %8 = mul <6 x i32> %7, splat (i32 2)
 
-  ; CHECK: %[[#SDiv1:]] = OpSDiv %[[#v4i32]] %[[#Mul1]]
-  ; CHECK: %[[#SDiv2:]] = OpSDiv %[[#v4i32]] %[[#Mul2]]
+  ; CHECK-DAG: %[[#E1:]] = OpCompositeExtract %[[#int]] %[[#Mul1]] 0
+  ; CHECK-DAG: %[[#E2:]] = OpCompositeExtract %[[#int]] %[[#Mul1]] 1
+  ; CHECK-DAG: %[[#E3:]] = OpCompositeExtract %[[#int]] %[[#Mul1]] 2
+  ; CHECK-DAG: %[[#E4:]] = OpCompositeExtract %[[#int]] %[[#Mul1]] 3
+  ; CHECK-DAG: %[[#E5:]] = OpCompositeExtract %[[#int]] %[[#Mul2]] 0
+  ; CHECK-DAG: %[[#E6:]] = OpCompositeExtract %[[#int]] %[[#Mul2]] 1
+  ; CHECK: %[[#SDiv1:]] = OpSDiv %[[#int]] %[[#E1]]
+  ; CHECK: %[[#SDiv2:]] = OpSDiv %[[#int]] %[[#E2]]
+  ; CHECK: %[[#SDiv3:]] = OpSDiv %[[#int]] %[[#E3]]
+  ; CHECK: %[[#SDiv4:]] = OpSDiv %[[#int]] %[[#E4]]
+  ; CHECK: %[[#SDiv5:]] = OpSDiv %[[#int]] %[[#E5]]
+  ; CHECK: %[[#SDiv6:]] = OpSDiv %[[#int]] %[[#E6]]
   %9 = sdiv <6 x i32> %8, splat (i32 2)
 
-  ; CHECK: %[[#UDiv1:]] = OpUDiv %[[#v4i32]] %[[#SDiv1]]
-  ; CHECK: %[[#UDiv2:]] = OpUDiv %[[#v4i32]] %[[#SDiv2]]
+  ; CHECK: %[[#UDiv1:]] = OpUDiv %[[#int]] %[[#SDiv1]]
+  ; CHECK: %[[#UDiv2:]] = OpUDiv %[[#int]] %[[#SDiv2]]
+  ; CHECK: %[[#UDiv3:]] = OpUDiv %[[#int]] %[[#SDiv3]]
+  ; CHECK: %[[#UDiv4:]] = OpUDiv %[[#int]] %[[#SDiv4]]
+  ; CHECK: %[[#UDiv5:]] = OpUDiv %[[#int]] %[[#SDiv5]]
+  ; CHECK: %[[#UDiv6:]] = OpUDiv %[[#int]] %[[#SDiv6]]
   %10 = udiv <6 x i32> %9, splat (i32 1)
 
-  ; CHECK: %[[#SRem1:]] = OpSRem %[[#v4i32]] %[[#UDiv1]]
-  ; CHECK: %[[#SRem2:]] = OpSRem %[[#v4i32]] %[[#UDiv2]]
+  ; CHECK: %[[#SRem1:]] = OpSRem %[[#int]] %[[#UDiv1]]
+  ; CHECK: %[[#SRem2:]] = OpSRem %[[#int]] %[[#UDiv2]]
+  ; CHECK: %[[#SRem3:]] = OpSRem %[[#int]] %[[#UDiv3]]
+  ; CHECK: %[[#SRem4:]] = OpSRem %[[#int]] %[[#UDiv4]]
+  ; CHECK: %[[#SRem5:]] = OpSRem %[[#int]] %[[#UDiv5]]
+  ; CHECK: %[[#SRem6:]] = OpSRem %[[#int]] %[[#UDiv6]]
   %11 = srem <6 x i32> %10, splat (i32 3)
 
-  ; CHECK: %[[#UMod1:]] = OpUMod %[[#v4i32]] %[[#SRem1]]
-  ; CHECK: %[[#UMod2:]] = OpUMod %[[#v4i32]] %[[#SRem2]]
+  ; CHECK: %[[#UMod1:]] = OpUMod %[[#int]] %[[#SRem1]]
+  ; CHECK: %[[#UMod2:]] = OpUMod %[[#int]] %[[#SRem2]]
+  ; CHECK: %[[#UMod3:]] = OpUMod %[[#int]] %[[#SRem3]]
+  ; CHECK: %[[#UMod4:]] = OpUMod %[[#int]] %[[#SRem4]]
+  ; CHECK: %[[#UMod5:]] = OpUMod %[[#int]] %[[#SRem5]]
+  ; CHECK: %[[#UMod6:]] = OpUMod %[[#int]] %[[#SRem6]]
   %12 = urem <6 x i32> %11, splat (i32 3)
 
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod1]] 0
+  ; CHECK: %[[#Construct1:]] = OpCompositeConstruct %[[#v4i32]] %[[#UMod1]] %[[#UMod2]] %[[#UMod3]] %[[#UMod4]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#Construct1]] 0
   ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod1]] 1
+  ; CHECK: %[[#Construct2:]] = OpCompositeConstruct %[[#v4i32]] %[[#UMod1]] %[[#UMod2]] %[[#UMod3]] %[[#UMod4]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#Construct2]] 1
   ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod1]] 2
+  ; CHECK: %[[#Construct3:]] = OpCompositeConstruct %[[#v4i32]] %[[#UMod1]] %[[#UMod2]] %[[#UMod3]] %[[#UMod4]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#Construct3]] 2
   ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod1]] 3
+  ; CHECK: %[[#Construct4:]] = OpCompositeConstruct %[[#v4i32]] %[[#UMod1]] %[[#UMod2]] %[[#UMod3]] %[[#UMod4]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#Construct4]] 3
   ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod2]] 0
+  ; CHECK: %[[#Construct5:]] = OpCompositeConstruct %[[#v4i32]] %[[#UMod5]] %[[#UMod6]] %[[#UNDEF]] %[[#UNDEF]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#Construct5]] 0
   ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#UMod2]] 1
+  ; CHECK: %[[#Construct6:]] = OpCompositeConstruct %[[#v4i32]] %[[#UMod5]] %[[#UMod6]] %[[#UNDEF]] %[[#UNDEF]]
+  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#Construct6]] 1
   ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
 
   %13 = getelementptr [4 x [6 x i32] ], ptr addrspace(10) @i2, i32 0, i32 0
@@ -127,12 +156,16 @@ entry:
   ; CHECK: %[[#FDiv2:]] = OpFDiv %[[#v4f32]]
   %6 = fdiv reassoc nnan ninf nsz arcp afn <6 x float> %3, splat (float 2.000000e+00)
 
-  ; CHECK: %[[#FRem1:]] = OpFRem %[[#v4f32]] %[[#FDiv1]]
-  ; CHECK: %[[#FRem2:]] = OpFRem %[[#v4f32]] %[[#FDiv2]]
+  ; CHECK: OpFRem %[[#float]]
+  ; CHECK: OpFRem %[[#float]]
+  ; CHECK: OpFRem %[[#float]]
+  ; CHECK: OpFRem %[[#float]]
+  ; CHECK: OpFRem %[[#float]]
+  ; CHECK: OpFRem %[[#float]]
   %7 = frem reassoc nnan ninf nsz arcp afn <6 x float> %6, splat (float 3.000000e+00)
 
-  ; CHECK: %[[#Fma1:]] = OpExtInst %[[#v4f32]] {{.*}} Fma {{.*}} %[[#FDiv1]] %[[#FRem1]]
-  ; CHECK: %[[#Fma2:]] = OpExtInst %[[#v4f32]] {{.*}} Fma {{.*}} %[[#FDiv2]] %[[#FRem2]]
+  ; CHECK: %[[#Fma1:]] = OpExtInst %[[#v4f32]] {{.*}} Fma
+  ; CHECK: %[[#Fma2:]] = OpExtInst %[[#v4f32]] {{.*}} Fma
   %8 = call reassoc nnan ninf nsz arcp afn <6 x float> @llvm.fma.v6f32(<6 x float> %5, <6 x float> %6, <6 x float> %7)
 
   ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 0

>From 09fe8000156a6f48b26b441861097d2139dcdcd6 Mon Sep 17 00:00:00 2001
From: Steven Perron <stevenperron at google.com>
Date: Thu, 4 Dec 2025 13:56:52 -0500
Subject: [PATCH 08/10] [SPIRV] Restrict OpName generation to major values

Refines OpName emission to only target Global Variables, Functions,
Function Parameters, Local Variables (allocas/phis), and Basic Blocks.
This reduces binary size and clutter by avoiding OpName for every
intermediate instruction (arithmetic, casts, etc.), while preserving
readability for interfaces and program structure.

Also updates the test suite to align with this change:
- Removes OpName checks for intermediate instructions.
- Adds side-effects (e.g., volatile stores) to tests where instructions
  were previously kept alive solely by their OpName usage.
- Updates checks to use generic ID matching where specific names are no
  longer available.
- Adds debug-info/opname-filtering.ll to verify the new policy.
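The filtering policy described above can be sketched as a simple predicate
over value kinds (an illustrative sketch only — the real check in
SPIRVEmitIntrinsics.cpp inspects llvm::Instruction kinds; the enum and function
names here are invented for this example):

```cpp
// Names are kept only for values that define the program's interface or
// storage; intermediate computations get no OpName.
enum class ValueKind {
  GlobalVariable,
  Function,
  FunctionParameter,
  LocalVariable, // allocas / phis
  BasicBlock,
  IntermediateInstruction // arithmetic, casts, ...
};

bool shouldEmitOpName(ValueKind K) {
  switch (K) {
  case ValueKind::GlobalVariable:
  case ValueKind::Function:
  case ValueKind::FunctionParameter:
  case ValueKind::LocalVariable:
  case ValueKind::BasicBlock:
    return true;
  case ValueKind::IntermediateInstruction:
    return false;
  }
  return false;
}
```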
---
 llvm/lib/Target/SPIRV/SPIRVEmitIntrinsics.cpp |  24 +-
 .../CodeGen/SPIRV/FOrdGreaterThanEqual_int.ll |   3 +-
 .../SPIRV/GlobalISel/fn-ptr-addrspacecast.ll  |   1 +
 .../CodeGen/SPIRV/GlobalISel/fn-ptr-load.ll   |   3 +-
 .../SpecConstants/bool-spirv-specconstant.ll  |   1 +
 llvm/test/CodeGen/SPIRV/TruncToBool.ll        |   3 +-
 .../SPIRV/debug-info/opname-filtering.ll      |  46 +++
 .../CodeGen/SPIRV/entry-point-interfaces.ll   |   2 +-
 .../decorate-prefetch-w-cache-controls.ll     |  11 +-
 .../IntelFPMaxErrorFPMath.ll                  |  13 +-
 .../SPV_KHR_float_controls2/decoration.ll     | 208 ++++++--------
 .../disabled-on-amd.ll                        |   3 +-
 .../SPV_KHR_float_controls2/exec_mode3.ll     |  19 +-
 .../SPV_KHR_float_controls2/replacements.ll   |  26 +-
 .../SPV_KHR_no_integer_wrap_decoration.ll     |  19 +-
 llvm/test/CodeGen/SPIRV/freeze.ll             |  18 +-
 .../CodeGen/SPIRV/instructions/bitwise-i1.ll  |  20 +-
 .../SPIRV/instructions/integer-casts.ll       |  44 ++-
 .../CodeGen/SPIRV/llvm-intrinsics/assume.ll   |   4 +-
 .../llvm-intrinsics/constrained-arithmetic.ll |  24 +-
 llvm/test/CodeGen/SPIRV/opencl/vload2.ll      |  23 +-
 .../SPIRV/optimizations/add-check-overflow.ll |  13 +-
 .../SPIRV/passes/translate-aggregate-uaddo.ll |  10 +-
 .../CodeGen/SPIRV/pointers/phi-chain-types.ll |  14 +-
 .../pointers/type-deduce-by-call-chain.ll     |   3 +-
 llvm/test/CodeGen/SPIRV/select-builtin.ll     |   3 +-
 llvm/test/CodeGen/SPIRV/transcoding/OpDot.ll  |   8 +-
 .../transcoding/OpVectorExtractDynamic.ll     |   3 +-
 .../transcoding/OpVectorInsertDynamic_i16.ll  |   5 +-
 llvm/test/CodeGen/SPIRV/transcoding/fadd.ll   |  77 ++---
 llvm/test/CodeGen/SPIRV/transcoding/fcmp.ll   | 270 ++++++------------
 llvm/test/CodeGen/SPIRV/transcoding/fdiv.ll   |  32 +--
 llvm/test/CodeGen/SPIRV/transcoding/fmul.ll   |  32 +--
 llvm/test/CodeGen/SPIRV/transcoding/fneg.ll   |  30 +-
 .../fp_contract_reassoc_fast_mode.ll          |   6 +-
 llvm/test/CodeGen/SPIRV/transcoding/frem.ll   |  32 +--
 llvm/test/CodeGen/SPIRV/transcoding/fsub.ll   |  33 +--
 .../CodeGen/SPIRV/trunc-nonstd-bitwidth.ll    |  16 +-
 llvm/test/CodeGen/SPIRV/uitofp-with-bool.ll   |  76 ++---
 llvm/test/CodeGen/SPIRV/zero-length-array.ll  |   3 -
 40 files changed, 544 insertions(+), 637 deletions(-)
 create mode 100644 llvm/test/CodeGen/SPIRV/debug-info/opname-filtering.ll

diff --git a/llvm/lib/Target/SPIRV/SPIRVEmitIntrinsics.cpp b/llvm/lib/Target/SPIRV/SPIRVEmitIntrinsics.cpp
index 4f754021f5544..d129e8f963178 100644
--- a/llvm/lib/Target/SPIRV/SPIRVEmitIntrinsics.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVEmitIntrinsics.cpp
@@ -361,16 +361,20 @@ static void emitAssignName(Instruction *I, IRBuilder<> &B) {
       expectIgnoredInIRTranslation(I))
     return;
 
-  if (isa<CallBase>(I)) {
-    // TODO: this is a temporary workaround meant to prevent inserting internal
-    //       noise into the generated binary; remove once we rework the entire
-    //       aggregate removal machinery.
-    StringRef Name = I->getName();
-    if (Name.starts_with("spv.mutated_callsite"))
-      return;
-    if (Name.starts_with("spv.named_mutated_callsite"))
-      I->setName(Name.substr(Name.rfind('.') + 1));
-  }
+  // We want to be conservative when adding the names because they can interfere
+  // with later optimizations.
+  bool KeepName = false;
+  if (isa<AllocaInst>(I)) {
+    KeepName = true;
+  } else if (auto *CI = dyn_cast<CallBase>(I)) {
+    Function *F = CI->getCalledFunction();
+    if (F && F->getName().starts_with("llvm.spv.alloca"))
+      KeepName = true;
+  }
+
+  if (!KeepName)
+    return;
+
   reportFatalOnTokenType(I);
   setInsertPointAfterDef(B, I);
   LLVMContext &Ctx = I->getContext();
diff --git a/llvm/test/CodeGen/SPIRV/FOrdGreaterThanEqual_int.ll b/llvm/test/CodeGen/SPIRV/FOrdGreaterThanEqual_int.ll
index 86581a5468405..79ca18bb70a20 100644
--- a/llvm/test/CodeGen/SPIRV/FOrdGreaterThanEqual_int.ll
+++ b/llvm/test/CodeGen/SPIRV/FOrdGreaterThanEqual_int.ll
@@ -5,9 +5,10 @@
 
 ;; LLVM IR was generated with -cl-std=c++ option
 
-define spir_kernel void @test(float %op1, float %op2) {
+define spir_kernel void @test(float %op1, float %op2, i32 addrspace(1)* %out) {
 entry:
   %call = call spir_func i32 @_Z14isgreaterequalff(float %op1, float %op2)
+  store i32 %call, i32 addrspace(1)* %out
   ret void
 }
 
diff --git a/llvm/test/CodeGen/SPIRV/GlobalISel/fn-ptr-addrspacecast.ll b/llvm/test/CodeGen/SPIRV/GlobalISel/fn-ptr-addrspacecast.ll
index 58638578bb3f0..0c6f6c9a53b1d 100644
--- a/llvm/test/CodeGen/SPIRV/GlobalISel/fn-ptr-addrspacecast.ll
+++ b/llvm/test/CodeGen/SPIRV/GlobalISel/fn-ptr-addrspacecast.ll
@@ -4,5 +4,6 @@ target triple = "spirv64"
 define void @addrspacecast(ptr addrspace(9) %a) {
 ; CHECK: unable to legalize instruction: %{{.*}}:pid(p4) = G_ADDRSPACE_CAST %{{.*}}:pid(p9)
   %res1 = addrspacecast ptr addrspace(9) %a to ptr addrspace(4)
+  store i8 0, ptr addrspace(4) %res1
   ret void
 }
diff --git a/llvm/test/CodeGen/SPIRV/GlobalISel/fn-ptr-load.ll b/llvm/test/CodeGen/SPIRV/GlobalISel/fn-ptr-load.ll
index 229f2234220ab..e7a9decf0c327 100644
--- a/llvm/test/CodeGen/SPIRV/GlobalISel/fn-ptr-load.ll
+++ b/llvm/test/CodeGen/SPIRV/GlobalISel/fn-ptr-load.ll
@@ -1,8 +1,9 @@
 ; RUN: not llc --global-isel %s -filetype=null 2>&1 | FileCheck %s
 target triple = "spirv64"
 
-define void @do_load(ptr addrspace(9) %a) {
+define void @do_load(ptr addrspace(9) %a, ptr addrspace(1) %out) {
 ; CHECK: unable to legalize instruction: %{{.*}}:iid(s32) = G_LOAD %{{.*}}:pid(p9) 
   %val = load i32, ptr addrspace(9) %a
+  store i32 %val, ptr addrspace(1) %out
   ret void
 }
diff --git a/llvm/test/CodeGen/SPIRV/SpecConstants/bool-spirv-specconstant.ll b/llvm/test/CodeGen/SPIRV/SpecConstants/bool-spirv-specconstant.ll
index 6e414f79bdde5..125cc6137e78b 100644
--- a/llvm/test/CodeGen/SPIRV/SpecConstants/bool-spirv-specconstant.ll
+++ b/llvm/test/CodeGen/SPIRV/SpecConstants/bool-spirv-specconstant.ll
@@ -24,6 +24,7 @@ entry:
   %selected = select i1 %3, i8 0, i8 1
   %frombool.i = zext i1 %3 to i8
   %sum = add i8 %frombool.i, %selected
+  store volatile i8 %sum, i8 addrspace(4)* %ptridx.ascast.i.i, align 1
   store i8 %selected, i8 addrspace(4)* %ptridx.ascast.i.i, align 1
   ret void
 }
diff --git a/llvm/test/CodeGen/SPIRV/TruncToBool.ll b/llvm/test/CodeGen/SPIRV/TruncToBool.ll
index 6682c23f34e96..8e23f4eb06914 100644
--- a/llvm/test/CodeGen/SPIRV/TruncToBool.ll
+++ b/llvm/test/CodeGen/SPIRV/TruncToBool.ll
@@ -3,10 +3,11 @@
 ; CHECK-SPIRV:      OpBitwiseAnd
 ; CHECK-SPIRV-NEXT: OpINotEqual
 
-define spir_kernel void @test(i32 %op1, i32 %op2, i8 %op3) {
+define spir_kernel void @test(i32 %op1, i32 %op2, i8 %op3, i32 addrspace(1)* %out) {
 entry:
   %0 = trunc i8 %op3 to i1
   %call = call spir_func i32 @_Z14__spirv_Selectbii(i1 zeroext %0, i32 %op1, i32 %op2)
+  store i32 %call, i32 addrspace(1)* %out
   ret void
 }
 
diff --git a/llvm/test/CodeGen/SPIRV/debug-info/opname-filtering.ll b/llvm/test/CodeGen/SPIRV/debug-info/opname-filtering.ll
new file mode 100644
index 0000000000000..bfe25188662f4
--- /dev/null
+++ b/llvm/test/CodeGen/SPIRV/debug-info/opname-filtering.ll
@@ -0,0 +1,46 @@
+; RUN: llc -O0 -mtriple=spirv64-unknown-unknown %s -o - | FileCheck %s
+; RUN: %if spirv-tools %{ llc -O0 -mtriple=spirv64-unknown-unknown %s -o - -filetype=obj | spirv-val %}
+
+; Verify that OpName is generated for Global Variables, Functions, Parameters,
+; Local Variables (Alloca), and Basic Blocks (Labels).
+; We preserve these names because they significantly improve the readability of
+; the generated SPIR-V binary and are unlikely to inhibit optimizations (like
+; Dead Code Elimination) since they define the interface or storage of the program.
+
+; 1. Global variables ("GlobalVar")
+; CHECK-DAG: OpName %[[#GlobalVar:]] "GlobalVar"
+
+; 2. Functions ("test_names")
+; CHECK-DAG: OpName %[[#Func:]] "test_names"
+
+; 3. Function parameters ("param")
+; CHECK-DAG: OpName %[[#Param:]] "param"
+
+; 4. Local variables (AllocaInst) ("localVar")
+; CHECK-DAG: OpName %[[#LocalVar:]] "localVar"
+
+; 5. Basic Blocks ("entry", "body")
+; CHECK-DAG: OpName %[[#Entry:]] "entry"
+; CHECK-DAG: OpName %[[#Body:]] "body"
+
+; Verify that OpName is NOT generated for intermediate instructions
+; (arithmetic, etc.). This reduces file size and noise, and prevents
+; potential interference with optimizations where the presence of a Name
+; (user) might incorrectly keep a dead instruction alive in some test scenarios.
+
+; CHECK-NOT: OpName %{{.*}} "add"
+; CHECK-NOT: OpName %{{.*}} "sub"
+
+@GlobalVar = global i32 0
+
+define spir_func void @test_names(i32 %param) {
+entry:
+  %localVar = alloca i32
+  br label %body
+
+body:
+  %add = add i32 %param, 1
+  %sub = sub i32 %add, 1
+  store i32 %sub, i32* %localVar
+  ret void
+}
diff --git a/llvm/test/CodeGen/SPIRV/entry-point-interfaces.ll b/llvm/test/CodeGen/SPIRV/entry-point-interfaces.ll
index f1e092732558e..019e60dad41ae 100644
--- a/llvm/test/CodeGen/SPIRV/entry-point-interfaces.ll
+++ b/llvm/test/CodeGen/SPIRV/entry-point-interfaces.ll
@@ -1,7 +1,7 @@
 ; RUN: llc -verify-machineinstrs -O0 -mtriple=spirv32-unknown-unknown %s -o - | FileCheck %s
 ; RUN: %if spirv-tools %{ llc -O0 -mtriple=spirv32-unknown-unknown %s -o - -filetype=obj | spirv-val %}
 
-; CHECK: OpEntryPoint Kernel %[[#Func:]] "test" %[[#Interface1:]] %[[#Interface2:]] %[[#Interface3:]] %[[#Interface4:]]
+; CHECK: OpEntryPoint Kernel %[[#Func:]] "test" %[[#Interface3:]] %[[#Interface4:]] %[[#Interface1:]] %[[#Interface2:]]
 ; CHECK-DAG: OpName %[[#Func]] "test"
 ; CHECK-DAG: OpName %[[#Interface1]] "var"
 ; CHECK-DAG: OpName %[[#Interface3]] "var2"
diff --git a/llvm/test/CodeGen/SPIRV/extensions/SPV_INTEL_cache_controls/decorate-prefetch-w-cache-controls.ll b/llvm/test/CodeGen/SPIRV/extensions/SPV_INTEL_cache_controls/decorate-prefetch-w-cache-controls.ll
index 4428a2049f9ce..0b39ee853b057 100644
--- a/llvm/test/CodeGen/SPIRV/extensions/SPV_INTEL_cache_controls/decorate-prefetch-w-cache-controls.ll
+++ b/llvm/test/CodeGen/SPIRV/extensions/SPV_INTEL_cache_controls/decorate-prefetch-w-cache-controls.ll
@@ -1,17 +1,14 @@
 ; Adapted from https://github.com/KhronosGroup/SPIRV-LLVM-Translator/tree/main/test/extensions/INTEL/SPV_INTEL_cache_controls
 
 ; RUN: llc -verify-machineinstrs -O0 -mtriple=spirv64-unknown-unknown --spirv-ext=+SPV_INTEL_cache_controls %s -o - | FileCheck %s --check-prefixes=CHECK-SPIRV
-; TODO: %if spirv-tools %{ llc -O0 -mtriple=spirv64-unknown-unknown --spirv-ext=+SPV_INTEL_cache_controls %s -o - -filetype=obj | spirv-val %}
+; RUN: %if spirv-tools %{ llc -verify-machineinstrs -O0 -mtriple=spirv64-unknown-unknown --spirv-ext=+SPV_INTEL_cache_controls %s -o - -filetype=obj | spirv-val %}
 
 ; CHECK-SPIRV: Capability CacheControlsINTEL
 ; CHECK-SPIRV: Extension "SPV_INTEL_cache_controls"
 
-; CHECK-SPIRV-DAG: OpName %[[#Ptr1:]] "ptr1"
-; CHECK-SPIRV-DAG: OpName %[[#Ptr2:]] "ptr2"
-; CHECK-SPIRV-DAG: OpName %[[#Ptr3:]] "ptr3"
-; CHECK-SPIRV-DAG: OpDecorate %[[#Ptr1]] CacheControlLoadINTEL 0 1
-; CHECK-SPIRV-DAG: OpDecorate %[[#Ptr2]] CacheControlLoadINTEL 1 1
-; CHECK-SPIRV-DAG: OpDecorate %[[#Ptr3]] CacheControlStoreINTEL 2 3
+; CHECK-SPIRV-DAG: OpDecorate %[[#Ptr1:]] CacheControlLoadINTEL 0 1
+; CHECK-SPIRV-DAG: OpDecorate %[[#Ptr2:]] CacheControlLoadINTEL 1 1
+; CHECK-SPIRV-DAG: OpDecorate %[[#Ptr3:]] CacheControlStoreINTEL 2 3
 ; CHECK-SPIRV: OpExtInst %[[#]] %[[#]] prefetch %[[#Ptr1]] %[[#]]
 ; CHECK-SPIRV: OpExtInst %[[#]] %[[#]] prefetch %[[#Ptr2]] %[[#]]
 ; CHECK-SPIRV: OpExtInst %[[#]] %[[#]] prefetch %[[#Ptr3]] %[[#]]
diff --git a/llvm/test/CodeGen/SPIRV/extensions/SPV_INTEL_fp_max_error/IntelFPMaxErrorFPMath.ll b/llvm/test/CodeGen/SPIRV/extensions/SPV_INTEL_fp_max_error/IntelFPMaxErrorFPMath.ll
index 635015c970d3e..34c3741cfc6ef 100644
--- a/llvm/test/CodeGen/SPIRV/extensions/SPV_INTEL_fp_max_error/IntelFPMaxErrorFPMath.ll
+++ b/llvm/test/CodeGen/SPIRV/extensions/SPV_INTEL_fp_max_error/IntelFPMaxErrorFPMath.ll
@@ -9,22 +9,25 @@
 ; CHECK: OpExtension "SPV_INTEL_fp_max_error"
 
 ; CHECK: OpName %[[#CalleeName:]] "callee"
-; CHECK: OpName %[[#F3:]] "f3"
-; CHECK: OpDecorate %[[#F3]] FPMaxErrorDecorationINTEL 1075838976
+; CHECK: OpDecorate %[[#F3:]] FPMaxErrorDecorationINTEL 1075838976
 ; CHECK: OpDecorate %[[#Callee:]] FPMaxErrorDecorationINTEL 1065353216
 
 ; CHECK: %[[#FloatTy:]] = OpTypeFloat 32
-; CHECK: %[[#Callee]] = OpFunctionCall %[[#FloatTy]] %[[#CalleeName]]
 
 define float @callee(float %f1, float %f2) {
 entry:
 ret float %f1
 }
 
-define void @test_fp_max_error_decoration(float %f1, float %f2) {
+; CHECK: %[[#F3]] = OpFDiv %[[#FloatTy]]
+; CHECK: %[[#Callee]] = OpFunctionCall %[[#FloatTy]] %[[#CalleeName]]
+
+define void @test_fp_max_error_decoration(float %f1, float %f2, float* %out) {
 entry:
 %f3 = fdiv float %f1, %f2, !fpmath !0
-call float @callee(float %f1, float %f2), !fpmath !1
+store volatile float %f3, float* %out
+%call = call float @callee(float %f1, float %f2), !fpmath !1
+store volatile float %call, float* %out
 ret void
 }
 
diff --git a/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_float_controls2/decoration.ll b/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_float_controls2/decoration.ll
index 81497f26f1aef..c7326cd7b4cfa 100644
--- a/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_float_controls2/decoration.ll
+++ b/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_float_controls2/decoration.ll
@@ -4,80 +4,34 @@
 ; CHECK-DAG: Capability FloatControls2
 ; CHECK: Extension "SPV_KHR_float_controls2"
 
-; CHECK: OpName %[[#addRes:]] "addRes"
-; CHECK: OpName %[[#subRes:]] "subRes"
-; CHECK: OpName %[[#mulRes:]] "mulRes"
-; CHECK: OpName %[[#divRes:]] "divRes"
-; CHECK: OpName %[[#remRes:]] "remRes"
-; CHECK: OpName %[[#negRes:]] "negRes"
-; CHECK: OpName %[[#oeqRes:]] "oeqRes"
-; CHECK: OpName %[[#oneRes:]] "oneRes"
-; CHECK: OpName %[[#oltRes:]] "oltRes"
-; CHECK: OpName %[[#ogtRes:]] "ogtRes"
-; CHECK: OpName %[[#oleRes:]] "oleRes"
-; CHECK: OpName %[[#ogeRes:]] "ogeRes"
-; CHECK: OpName %[[#ordRes:]] "ordRes"
-; CHECK: OpName %[[#ueqRes:]] "ueqRes"
-; CHECK: OpName %[[#uneRes:]] "uneRes"
-; CHECK: OpName %[[#ultRes:]] "ultRes"
-; CHECK: OpName %[[#ugtRes:]] "ugtRes"
-; CHECK: OpName %[[#uleRes:]] "uleRes"
-; CHECK: OpName %[[#ugeRes:]] "ugeRes"
-; CHECK: OpName %[[#unoRes:]] "unoRes"
-; CHECK: OpName %[[#modRes:]] "modRes"
-; CHECK: OpName %[[#maxRes:]] "maxRes"
-; CHECK: OpName %[[#maxCommonRes:]] "maxCommonRes"
-; CHECK: OpName %[[#addResV:]] "addResV"
-; CHECK: OpName %[[#subResV:]] "subResV"
-; CHECK: OpName %[[#mulResV:]] "mulResV"
-; CHECK: OpName %[[#divResV:]] "divResV"
-; CHECK: OpName %[[#remResV:]] "remResV"
-; CHECK: OpName %[[#negResV:]] "negResV"
-; CHECK: OpName %[[#oeqResV:]] "oeqResV"
-; CHECK: OpName %[[#oneResV:]] "oneResV"
-; CHECK: OpName %[[#oltResV:]] "oltResV"
-; CHECK: OpName %[[#ogtResV:]] "ogtResV"
-; CHECK: OpName %[[#oleResV:]] "oleResV"
-; CHECK: OpName %[[#ogeResV:]] "ogeResV"
-; CHECK: OpName %[[#ordResV:]] "ordResV"
-; CHECK: OpName %[[#ueqResV:]] "ueqResV"
-; CHECK: OpName %[[#uneResV:]] "uneResV"
-; CHECK: OpName %[[#ultResV:]] "ultResV"
-; CHECK: OpName %[[#ugtResV:]] "ugtResV"
-; CHECK: OpName %[[#uleResV:]] "uleResV"
-; CHECK: OpName %[[#ugeResV:]] "ugeResV"
-; CHECK: OpName %[[#unoResV:]] "unoResV"
-; CHECK: OpName %[[#modResV:]] "modResV"
-; CHECK: OpName %[[#maxResV:]] "maxResV"
-; CHECK: OpName %[[#maxCommonResV:]] "maxCommonResV"
-; CHECK: OpDecorate %[[#subRes]] FPFastMathMode NotNaN
-; CHECK: OpDecorate %[[#mulRes]] FPFastMathMode NotInf
-; CHECK: OpDecorate %[[#divRes]] FPFastMathMode NSZ
-; CHECK: OpDecorate %[[#remRes]] FPFastMathMode AllowRecip
-; CHECK: OpDecorate %[[#negRes]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|AllowContract|AllowReassoc|AllowTransform
-; CHECK: OpDecorate %[[#oeqRes]] FPFastMathMode NotNaN|NotInf
-; CHECK: OpDecorate %[[#oltRes]] FPFastMathMode NotNaN
-; CHECK: OpDecorate %[[#ogtRes]] FPFastMathMode NotInf
-; CHECK: OpDecorate %[[#oleRes]] FPFastMathMode NSZ
-; CHECK: OpDecorate %[[#ogeRes]] FPFastMathMode AllowRecip
-; CHECK: OpDecorate %[[#ordRes]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|AllowContract|AllowReassoc|AllowTransform
-; CHECK: OpDecorate %[[#ueqRes]] FPFastMathMode NotNaN|NotInf
-; CHECK: OpDecorate %[[#maxRes]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|AllowContract|AllowReassoc|AllowTransform
-; CHECK: OpDecorate %[[#maxCommonRes]] FPFastMathMode NotNaN|NotInf
-; CHECK: OpDecorate %[[#subResV]] FPFastMathMode NotNaN
-; CHECK: OpDecorate %[[#mulResV]] FPFastMathMode NotInf
-; CHECK: OpDecorate %[[#divResV]] FPFastMathMode NSZ
-; CHECK: OpDecorate %[[#remResV]] FPFastMathMode AllowRecip
-; CHECK: OpDecorate %[[#negResV]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|AllowContract|AllowReassoc|AllowTransform
-; CHECK: OpDecorate %[[#oeqResV]] FPFastMathMode NotNaN|NotInf
-; CHECK: OpDecorate %[[#oltResV]] FPFastMathMode NotNaN
-; CHECK: OpDecorate %[[#ogtResV]] FPFastMathMode NotInf
-; CHECK: OpDecorate %[[#oleResV]] FPFastMathMode NSZ
-; CHECK: OpDecorate %[[#ogeResV]] FPFastMathMode AllowRecip
-; CHECK: OpDecorate %[[#ordResV]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|AllowContract|AllowReassoc|AllowTransform
-; CHECK: OpDecorate %[[#ueqResV]] FPFastMathMode NotNaN|NotInf
-; CHECK: OpDecorate %[[#maxResV]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|AllowContract|AllowReassoc|AllowTransform
-; CHECK: OpDecorate %[[#maxCommonResV]] FPFastMathMode NotNaN|NotInf
+; CHECK: OpDecorate %[[#subRes:]] FPFastMathMode NotNaN
+; CHECK: OpDecorate %[[#mulRes:]] FPFastMathMode NotInf
+; CHECK: OpDecorate %[[#divRes:]] FPFastMathMode NSZ
+; CHECK: OpDecorate %[[#remRes:]] FPFastMathMode AllowRecip
+; CHECK: OpDecorate %[[#negRes:]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|AllowContract|AllowReassoc|AllowTransform
+; CHECK: OpDecorate %[[#oeqRes:]] FPFastMathMode NotNaN|NotInf
+; CHECK: OpDecorate %[[#oltRes:]] FPFastMathMode NotNaN
+; CHECK: OpDecorate %[[#ogtRes:]] FPFastMathMode NotInf
+; CHECK: OpDecorate %[[#oleRes:]] FPFastMathMode NSZ
+; CHECK: OpDecorate %[[#ogeRes:]] FPFastMathMode AllowRecip
+; CHECK: OpDecorate %[[#ordRes:]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|AllowContract|AllowReassoc|AllowTransform
+; CHECK: OpDecorate %[[#ueqRes:]] FPFastMathMode NotNaN|NotInf
+; CHECK: OpDecorate %[[#maxRes:]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|AllowContract|AllowReassoc|AllowTransform
+; CHECK: OpDecorate %[[#maxCommonRes:]] FPFastMathMode NotNaN|NotInf
+; CHECK: OpDecorate %[[#subResV:]] FPFastMathMode NotNaN
+; CHECK: OpDecorate %[[#mulResV:]] FPFastMathMode NotInf
+; CHECK: OpDecorate %[[#divResV:]] FPFastMathMode NSZ
+; CHECK: OpDecorate %[[#remResV:]] FPFastMathMode AllowRecip
+; CHECK: OpDecorate %[[#negResV:]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|AllowContract|AllowReassoc|AllowTransform
+; CHECK: OpDecorate %[[#oeqResV:]] FPFastMathMode NotNaN|NotInf
+; CHECK: OpDecorate %[[#oltResV:]] FPFastMathMode NotNaN
+; CHECK: OpDecorate %[[#ogtResV:]] FPFastMathMode NotInf
+; CHECK: OpDecorate %[[#oleResV:]] FPFastMathMode NSZ
+; CHECK: OpDecorate %[[#ogeResV:]] FPFastMathMode AllowRecip
+; CHECK: OpDecorate %[[#ordResV:]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|AllowContract|AllowReassoc|AllowTransform
+; CHECK: OpDecorate %[[#ueqResV:]] FPFastMathMode NotNaN|NotInf
+; CHECK: OpDecorate %[[#maxResV:]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|AllowContract|AllowReassoc|AllowTransform
+; CHECK: OpDecorate %[[#maxCommonResV:]] FPFastMathMode NotNaN|NotInf
 
 @G_addRes = global float 0.0
 @G_subRes = global float 0.0
@@ -139,101 +93,115 @@ declare dso_local spir_func noundef nofpclass(nan inf) <2 x float> @_Z23__spirv_
 define weak_odr dso_local spir_kernel void @foo(float %1, float %2) {
 entry:
   %addRes = fadd float %1,  %2
-  store float %addRes, float* @G_addRes
+  store volatile float %addRes, float* @G_addRes
+  ; CHECK: %[[#subRes]] = OpFSub
   %subRes = fsub nnan float %1,  %2
-  store float %subRes, float* @G_subRes
+  store volatile float %subRes, float* @G_subRes
+  ; CHECK: %[[#mulRes]] = OpFMul
   %mulRes = fmul ninf float %1,  %2
-  store float %mulRes, float* @G_mulRes
+  store volatile float %mulRes, float* @G_mulRes
+  ; CHECK: %[[#divRes]] = OpFDiv
   %divRes = fdiv nsz float %1,  %2
-  store float %divRes, float* @G_divRes
+  store volatile float %divRes, float* @G_divRes
+  ; CHECK: %[[#remRes]] = OpFRem
   %remRes = frem arcp float %1,  %2
-  store float %remRes, float* @G_remRes
+  store volatile float %remRes, float* @G_remRes
+  ; CHECK: %[[#negRes]] = OpFNegate
   %negRes = fneg fast float %1
-  store float %negRes, float* @G_negRes
+  store volatile float %negRes, float* @G_negRes
+  ; CHECK: %[[#oeqRes]] = OpFOrdEqual
   %oeqRes = fcmp nnan ninf oeq float %1,  %2
-  store i1 %oeqRes, i1* @G_oeqRes
+  store volatile i1 %oeqRes, i1* @G_oeqRes
   %oneRes = fcmp one float %1,  %2, !spirv.Decorations !3
-  store i1 %oneRes, i1* @G_oneRes
+  store volatile i1 %oneRes, i1* @G_oneRes
+  ; CHECK: %[[#oltRes]] = OpFOrdLessThan
   %oltRes = fcmp nnan olt float %1,  %2, !spirv.Decorations !3
-  store i1 %oltRes, i1* @G_oltRes
+  store volatile i1 %oltRes, i1* @G_oltRes
+  ; CHECK: %[[#ogtRes]] = OpFOrdGreaterThan
   %ogtRes = fcmp ninf ogt float %1,  %2, !spirv.Decorations !3
-  store i1 %ogtRes, i1* @G_ogtRes
+  store volatile i1 %ogtRes, i1* @G_ogtRes
+  ; CHECK: %[[#oleRes]] = OpFOrdLessThanEqual
   %oleRes = fcmp nsz ole float %1,  %2, !spirv.Decorations !3
-  store i1 %oleRes, i1* @G_oleRes
+  store volatile i1 %oleRes, i1* @G_oleRes
+  ; CHECK: %[[#ogeRes]] = OpFOrdGreaterThanEqual
   %ogeRes = fcmp arcp oge float %1,  %2, !spirv.Decorations !3
-  store i1 %ogeRes, i1* @G_ogeRes
+  store volatile i1 %ogeRes, i1* @G_ogeRes
+  ; CHECK: %[[#ordRes]] = OpOrdered
   %ordRes = fcmp fast ord float %1,  %2, !spirv.Decorations !3
-  store i1 %ordRes, i1* @G_ordRes
+  store volatile i1 %ordRes, i1* @G_ordRes
+  ; CHECK: %[[#ueqRes]] = OpFUnordEqual
   %ueqRes = fcmp nnan ninf ueq float %1,  %2, !spirv.Decorations !3
-  store i1 %ueqRes, i1* @G_ueqRes
+  store volatile i1 %ueqRes, i1* @G_ueqRes
   %uneRes = fcmp une float %1,  %2, !spirv.Decorations !3
-  store i1 %uneRes, i1* @G_uneRes
+  store volatile i1 %uneRes, i1* @G_uneRes
   %ultRes = fcmp ult float %1,  %2, !spirv.Decorations !3
-  store i1 %ultRes, i1* @G_ultRes
+  store volatile i1 %ultRes, i1* @G_ultRes
   %ugtRes = fcmp ugt float %1,  %2, !spirv.Decorations !3
-  store i1 %ugtRes, i1* @G_ugtRes
+  store volatile i1 %ugtRes, i1* @G_ugtRes
   %uleRes = fcmp ule float %1,  %2, !spirv.Decorations !3
-  store i1 %uleRes, i1* @G_uleRes
+  store volatile i1 %uleRes, i1* @G_uleRes
   %ugeRes = fcmp uge float %1,  %2, !spirv.Decorations !3
-  store i1 %ugeRes, i1* @G_ugeRes
+  store volatile i1 %ugeRes, i1* @G_ugeRes
   %unoRes = fcmp uno float %1,  %2, !spirv.Decorations !3
-  store i1 %unoRes, i1* @G_unoRes
+  store volatile i1 %unoRes, i1* @G_unoRes
   %modRes = call spir_func float @_Z4fmodff(float %1, float %2)
-  store float %modRes, float* @G_modRes
+  store volatile float %modRes, float* @G_modRes
+  ; CHECK: %[[#maxRes]] = OpExtInst %[[#]] %[[#]] fmax
   %maxRes = tail call fast spir_func noundef nofpclass(nan inf) float @_Z16__spirv_ocl_fmaxff(float noundef nofpclass(nan inf) %1, float noundef nofpclass(nan inf) %2)
-  store float %maxRes, float* @G_maxRes
+  store volatile float %maxRes, float* @G_maxRes
+   ; CHECK: %[[#maxCommonRes]] = OpExtInst %[[#]] %[[#]] fmax
    %maxCommonRes = tail call spir_func noundef float @_Z23__spirv_ocl_fmax_commonff(float noundef nofpclass(nan inf) %1, float noundef nofpclass(nan inf) %2)
-  store float %maxCommonRes, float* @G_maxCommonRes
+  store volatile float %maxCommonRes, float* @G_maxCommonRes
   ret void
 }
 
 define weak_odr dso_local spir_kernel void @fooV(<2 x float> %v1, <2 x float> %v2) {
   %addResV = fadd <2 x float> %v1,  %v2
-  store <2 x float> %addResV, <2 x float>* @G_addResV
+  store volatile <2 x float> %addResV, <2 x float>* @G_addResV
   %subResV = fsub nnan <2 x float> %v1,  %v2
-  store <2 x float> %subResV, <2 x float>* @G_subResV
+  store volatile <2 x float> %subResV, <2 x float>* @G_subResV
   %mulResV = fmul ninf <2 x float> %v1,  %v2
-  store <2 x float> %mulResV, <2 x float>* @G_mulResV
+  store volatile <2 x float> %mulResV, <2 x float>* @G_mulResV
   %divResV = fdiv nsz <2 x float> %v1,  %v2
-  store <2 x float> %divResV, <2 x float>* @G_divResV
+  store volatile <2 x float> %divResV, <2 x float>* @G_divResV
   %remResV = frem arcp <2 x float> %v1,  %v2
-  store <2 x float> %remResV, <2 x float>* @G_remResV
+  store volatile <2 x float> %remResV, <2 x float>* @G_remResV
   %negResV = fneg fast <2 x float> %v1
-  store <2 x float> %negResV, <2 x float>* @G_negResV
+  store volatile <2 x float> %negResV, <2 x float>* @G_negResV
   %oeqResV = fcmp nnan ninf oeq <2 x float> %v1,  %v2
-  store <2 x i1> %oeqResV, <2 x i1>* @G_oeqResV
+  store volatile <2 x i1> %oeqResV, <2 x i1>* @G_oeqResV
   %oneResV = fcmp one <2 x float> %v1,  %v2, !spirv.Decorations !3
-  store <2 x i1> %oneResV, <2 x i1>* @G_oneResV
+  store volatile <2 x i1> %oneResV, <2 x i1>* @G_oneResV
   %oltResV = fcmp nnan olt <2 x float> %v1,  %v2, !spirv.Decorations !3
-  store <2 x i1> %oltResV, <2 x i1>* @G_oltResV
+  store volatile <2 x i1> %oltResV, <2 x i1>* @G_oltResV
   %ogtResV = fcmp ninf ogt <2 x float> %v1,  %v2, !spirv.Decorations !3
-  store <2 x i1> %ogtResV, <2 x i1>* @G_ogtResV
+  store volatile <2 x i1> %ogtResV, <2 x i1>* @G_ogtResV
   %oleResV = fcmp nsz ole <2 x float> %v1,  %v2, !spirv.Decorations !3
-  store <2 x i1> %oleResV, <2 x i1>* @G_oleResV
+  store volatile <2 x i1> %oleResV, <2 x i1>* @G_oleResV
   %ogeResV = fcmp arcp oge <2 x float> %v1,  %v2, !spirv.Decorations !3
-  store <2 x i1> %ogeResV, <2 x i1>* @G_ogeResV
+  store volatile <2 x i1> %ogeResV, <2 x i1>* @G_ogeResV
   %ordResV = fcmp fast ord <2 x float> %v1,  %v2, !spirv.Decorations !3
-  store <2 x i1> %ordResV, <2 x i1>* @G_ordResV
+  store volatile <2 x i1> %ordResV, <2 x i1>* @G_ordResV
   %ueqResV = fcmp nnan ninf ueq <2 x float> %v1,  %v2, !spirv.Decorations !3
-  store <2 x i1> %ueqResV, <2 x i1>* @G_ueqResV
+  store volatile <2 x i1> %ueqResV, <2 x i1>* @G_ueqResV
   %uneResV = fcmp une <2 x float> %v1,  %v2, !spirv.Decorations !3
-  store <2 x i1> %uneResV, <2 x i1>* @G_uneResV
+  store volatile <2 x i1> %uneResV, <2 x i1>* @G_uneResV
   %ultResV = fcmp ult <2 x float> %v1,  %v2, !spirv.Decorations !3
-  store <2 x i1> %ultResV, <2 x i1>* @G_ultResV
+  store volatile <2 x i1> %ultResV, <2 x i1>* @G_ultResV
   %ugtResV = fcmp ugt <2 x float> %v1,  %v2, !spirv.Decorations !3
-  store <2 x i1> %ugtResV, <2 x i1>* @G_ugtResV
+  store volatile <2 x i1> %ugtResV, <2 x i1>* @G_ugtResV
   %uleResV = fcmp ule <2 x float> %v1,  %v2, !spirv.Decorations !3
-  store <2 x i1> %uleResV, <2 x i1>* @G_uleResV
+  store volatile <2 x i1> %uleResV, <2 x i1>* @G_uleResV
   %ugeResV = fcmp uge <2 x float> %v1,  %v2, !spirv.Decorations !3
-  store <2 x i1> %ugeResV, <2 x i1>* @G_ugeResV
+  store volatile <2 x i1> %ugeResV, <2 x i1>* @G_ugeResV
   %unoResV = fcmp uno <2 x float> %v1,  %v2, !spirv.Decorations !3
-  store <2 x i1> %unoResV, <2 x i1>* @G_unoResV
+  store volatile <2 x i1> %unoResV, <2 x i1>* @G_unoResV
   %modResV = call spir_func <2 x float> @_Z4fmodDv2_fDv2_f(<2 x float> %v1, <2 x float> %v2)
-  store <2 x float> %modResV, <2 x float>* @G_modResV
+  store volatile <2 x float> %modResV, <2 x float>* @G_modResV
   %maxResV = tail call fast spir_func noundef nofpclass(nan inf) <2 x float> @_Z16__spirv_ocl_fmaxDv2_fDv2_f(<2 x float> noundef nofpclass(nan inf) %v1, <2 x float> noundef nofpclass(nan inf) %v2)
-  store <2 x float> %maxResV, <2 x float>* @G_maxResV
+  store volatile <2 x float> %maxResV, <2 x float>* @G_maxResV
    %maxCommonResV = tail call spir_func noundef <2 x float> @_Z23__spirv_ocl_fmax_commonDv2_fDv2_f(<2 x float> noundef nofpclass(nan inf) %v1, <2 x float> noundef nofpclass(nan inf) %v2)
-  store <2 x float> %maxCommonResV, <2 x float>* @G_maxCommonResV
+  store volatile <2 x float> %maxCommonResV, <2 x float>* @G_maxCommonResV
   ret void
 }
 
diff --git a/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_float_controls2/disabled-on-amd.ll b/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_float_controls2/disabled-on-amd.ll
index 879aab4de4808..8619ee9881300 100644
--- a/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_float_controls2/disabled-on-amd.ll
+++ b/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_float_controls2/disabled-on-amd.ll
@@ -14,9 +14,10 @@
 ; CHECK: SPV_KHR_float_controls2
 ; CHECK-AMD-NOT: SPV_KHR_float_controls2
 
-define spir_kernel void @foo(float %a, float %b) {
+define spir_kernel void @foo(float %a, float %b, float addrspace(1)* %out) {
 entry:
   ; Use contract to trigger a use of SPV_KHR_float_controls2
   %r1 = fadd contract float %a, %b
+  store volatile float %r1, float addrspace(1)* %out
   ret void
 }
diff --git a/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_float_controls2/exec_mode3.ll b/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_float_controls2/exec_mode3.ll
index 1d09187b7f6a1..9cc9870c2cfe1 100644
--- a/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_float_controls2/exec_mode3.ll
+++ b/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_float_controls2/exec_mode3.ll
@@ -19,8 +19,8 @@
 ; CHECK-DAG: OpExecutionModeId %[[#KERNEL_FLOAT_V]] FPFastMathDefault %[[#FLOAT_TYPE:]] %[[#CONST131079]]
 ; CHECK-DAG: OpExecutionModeId %[[#KERNEL_ALL_V]] FPFastMathDefault %[[#FLOAT_TYPE:]] %[[#CONST131079]]
 ; We expect 0 for the rest of types because it's SignedZeroInfNanPreserve.
-; CHECK-DAG: OpExecutionModeId %[[#KERNEL_ALL_V]] FPFastMathDefault %[[#HALF_TYPE:]] %[[#CONST0]]
 ; CHECK-DAG: OpExecutionModeId %[[#KERNEL_ALL_V]] FPFastMathDefault %[[#DOUBLE_TYPE:]] %[[#CONST0]]
+; CHECK-DAG: OpExecutionModeId %[[#KERNEL_ALL_V]] FPFastMathDefault %[[#HALF_TYPE:]] %[[#CONST0]]
 
 ; CHECK-DAG: OpDecorate %[[#addRes:]] FPFastMathMode NotNaN|NotInf|NSZ|AllowReassoc
 ; CHECK-DAG: OpDecorate %[[#addResH:]] FPFastMathMode None
@@ -42,10 +42,20 @@
 ; CHECK-DAG: %[[#FLOAT_V_TYPE:]] = OpTypeVector %[[#FLOAT_TYPE]]
 ; CHECK-DAG: %[[#DOUBLE_V_TYPE:]] = OpTypeVector %[[#DOUBLE_TYPE]]
 
+ at G_addRes = global float 0.0
+ at G_addResH = global half 0.0
+ at G_addResF = global float 0.0
+ at G_addResD = global double 0.0
+ at G_addResV = global <2 x float> zeroinitializer
+ at G_addResH_V = global <2 x half> zeroinitializer
+ at G_addResF_V = global <2 x float> zeroinitializer
+ at G_addResD_V = global <2 x double> zeroinitializer
+
 define dso_local dllexport spir_kernel void @k_float_controls_float(float %f) {
 entry:
 ; CHECK-DAG: %[[#addRes]] = OpFAdd %[[#FLOAT_TYPE]]
   %addRes = fadd float %f,  %f
+  store volatile float %addRes, ptr @G_addRes
   ret void
 }
 
@@ -55,8 +65,11 @@ entry:
 ; CHECK-DAG: %[[#addResF]] = OpFAdd %[[#FLOAT_TYPE]]
 ; CHECK-DAG: %[[#addResD]] = OpFAdd %[[#DOUBLE_TYPE]]
   %addResH = fadd half %h,  %h
+  store volatile half %addResH, ptr @G_addResH
   %addResF = fadd float %f,  %f
+  store volatile float %addResF, ptr @G_addResF
   %addResD = fadd double %d,  %d
+  store volatile double %addResD, ptr @G_addResD
   ret void
 }
 
@@ -64,6 +77,7 @@ define dso_local dllexport spir_kernel void @k_float_controls_float_v(<2 x float
 entry:
 ; CHECK-DAG: %[[#addRes_V]] = OpFAdd %[[#FLOAT_V_TYPE]]
   %addRes = fadd <2 x float> %f,  %f
+  store volatile <2 x float> %addRes, ptr @G_addResV
   ret void
 }
 
@@ -73,8 +87,11 @@ entry:
 ; CHECK-DAG: %[[#addResF_V]] = OpFAdd %[[#FLOAT_V_TYPE]]
 ; CHECK-DAG: %[[#addResD_V]] = OpFAdd %[[#DOUBLE_V_TYPE]]
   %addResH = fadd <2 x half> %h,  %h
+  store volatile <2 x half> %addResH, ptr @G_addResH_V
   %addResF = fadd <2 x float> %f,  %f
+  store volatile <2 x float> %addResF, ptr @G_addResF_V
   %addResD = fadd <2 x double> %d,  %d
+  store volatile <2 x double> %addResD, ptr @G_addResD_V
   ret void
 }
 
diff --git a/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_float_controls2/replacements.ll b/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_float_controls2/replacements.ll
index bba1c93a7e78d..e7929b2f7f3d8 100644
--- a/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_float_controls2/replacements.ll
+++ b/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_float_controls2/replacements.ll
@@ -1,27 +1,19 @@
 ; RUN: llc -verify-machineinstrs -O0 -mtriple=spirv64-unknown-unknown --spirv-ext=+SPV_KHR_float_controls2 %s -o - | FileCheck %s
-; RUN: %if spirv-tools %{ llc -O0 -mtriple=spirv64-unknown-unknown --spirv-ext=+SPV_KHR_float_controls2 %s -o - -filetype=obj | spirv-val %}
+; RUN: %if spirv-tools %{ llc -verify-machineinstrs -O0 -mtriple=spirv64-unknown-unknown --spirv-ext=+SPV_KHR_float_controls2 %s -o - -filetype=obj | spirv-val %}
 
 ;; This test checks that the OpenCL.std instructions fmin_common, fmax_common are replaced with fmin, fmax with NInf and NNaN instead.
 
 ; CHECK-DAG: Capability FloatControls2
 ; CHECK: Extension "SPV_KHR_float_controls2"
 
-; CHECK: OpName %[[#maxRes:]] "maxRes"
-; CHECK: OpName %[[#maxCommonRes:]] "maxCommonRes"
-; CHECK: OpName %[[#minRes:]] "minRes"
-; CHECK: OpName %[[#minCommonRes:]] "minCommonRes"
-; CHECK: OpName %[[#maxResV:]] "maxResV"
-; CHECK: OpName %[[#maxCommonResV:]] "maxCommonResV"
-; CHECK: OpName %[[#minResV:]] "minResV"
-; CHECK: OpName %[[#minCommonResV:]] "minCommonResV"
-; CHECK: OpDecorate %[[#maxRes]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|AllowContract|AllowReassoc|AllowTransform
-; CHECK: OpDecorate %[[#maxCommonRes]] FPFastMathMode NotNaN|NotInf
-; CHECK: OpDecorate %[[#minRes]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|AllowContract|AllowReassoc|AllowTransform
-; CHECK: OpDecorate %[[#minCommonRes]] FPFastMathMode NotNaN|NotInf
-; CHECK: OpDecorate %[[#maxResV]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|AllowContract|AllowReassoc|AllowTransform
-; CHECK: OpDecorate %[[#maxCommonResV]] FPFastMathMode NotNaN|NotInf
-; CHECK: OpDecorate %[[#minResV]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|AllowContract|AllowReassoc|AllowTransform
-; CHECK: OpDecorate %[[#minCommonResV]] FPFastMathMode NotNaN|NotInf
+; CHECK: OpDecorate %[[#maxRes:]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|AllowContract|AllowReassoc|AllowTransform
+; CHECK: OpDecorate %[[#maxCommonRes:]] FPFastMathMode NotNaN|NotInf
+; CHECK: OpDecorate %[[#minRes:]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|AllowContract|AllowReassoc|AllowTransform
+; CHECK: OpDecorate %[[#minCommonRes:]] FPFastMathMode NotNaN|NotInf
+; CHECK: OpDecorate %[[#maxResV:]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|AllowContract|AllowReassoc|AllowTransform
+; CHECK: OpDecorate %[[#maxCommonResV:]] FPFastMathMode NotNaN|NotInf
+; CHECK: OpDecorate %[[#minResV:]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|AllowContract|AllowReassoc|AllowTransform
+; CHECK: OpDecorate %[[#minCommonResV:]] FPFastMathMode NotNaN|NotInf
 ; CHECK: %[[#maxRes]] = OpExtInst {{.*}} fmax
 ; CHECK: %[[#maxCommonRes]] = OpExtInst {{.*}} fmax
 ; CHECK: %[[#minRes]] = OpExtInst {{.*}} fmin
diff --git a/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_no_integer_wrap_decoration.ll b/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_no_integer_wrap_decoration.ll
index dac22c0d84c2e..e52a750466d98 100644
--- a/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_no_integer_wrap_decoration.ll
+++ b/llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_no_integer_wrap_decoration.ll
@@ -4,18 +4,11 @@
 
 ; CHECK-NOT: DAG-FENCE
 
-; CHECK-DAG: OpName %[[#NO_WRAP_TEST:]] "no_wrap_test"
-; CHECK-DAG: OpName %[[#A:]] "a"
-; CHECK-DAG: OpName %[[#B:]] "b"
-; CHECK-DAG: OpName %[[#C:]] "c"
-; CHECK-DAG: OpName %[[#D:]] "d"
-; CHECK-DAG: OpName %[[#E:]] "e"
-
 ; CHECK-NOT: DAG-FENCE
 
-; CHECK-DAG: OpDecorate %[[#C]] NoUnsignedWrap
-; CHECK-DAG: OpDecorate %[[#D]] NoSignedWrap
-; CHECK-DAG: OpDecorate %[[#E]] NoUnsignedWrap
+; CHECK-DAG: OpDecorate %[[#C:]] NoUnsignedWrap
+; CHECK-DAG: OpDecorate %[[#D:]] NoSignedWrap
+; CHECK-DAG: OpDecorate %[[#E:]] NoUnsignedWrap
 ; CHECK-DAG: OpDecorate %[[#E]] NoSignedWrap
 
 ; CHECK-NOT: DAG-FENCE
@@ -32,9 +25,9 @@ define i32 @no_wrap_test(i32 %a, i32 %b) {
     ret i32 %e
 }
 
-; CHECK:      %[[#NO_WRAP_TEST]] = OpFunction %[[#I32]] None %[[#FN]]
-; CHECK-NEXT: %[[#A]] = OpFunctionParameter %[[#I32]]
-; CHECK-NEXT: %[[#B]] = OpFunctionParameter %[[#I32]]
+; CHECK:      OpFunction %[[#I32]] None %[[#FN]]
+; CHECK-NEXT: %[[#A:]] = OpFunctionParameter %[[#I32]]
+; CHECK-NEXT: %[[#B:]] = OpFunctionParameter %[[#I32]]
 ; CHECK:      OpLabel
 ; CHECK:      %[[#C]] = OpIMul %[[#I32]] %[[#A]] %[[#B]]
 ; CHECK:      %[[#D]] = OpIMul %[[#I32]] %[[#A]] %[[#B]]
diff --git a/llvm/test/CodeGen/SPIRV/freeze.ll b/llvm/test/CodeGen/SPIRV/freeze.ll
index 4f7e7794ed03b..762dcbd803e05 100644
--- a/llvm/test/CodeGen/SPIRV/freeze.ll
+++ b/llvm/test/CodeGen/SPIRV/freeze.ll
@@ -1,27 +1,19 @@
 ; RUN: llc -O0 -mtriple=spirv32-unknown-unknown %s -o - | FileCheck %s
 ; TODO: %if spirv-tools %{ llc -O0 -mtriple=spirv32-unknown-unknown %s -o - -filetype=obj | spirv-val %}
 
-; CHECK-DAG: OpName %[[Arg1:.*]] "arg1"
-; CHECK-DAG: OpName %[[Arg2:.*]] "arg2"
-; CHECK-DAG: OpName %[[NotAStaticPoison:.*]] "poison1"
-; CHECK-DAG: OpName %[[NotAStaticPoison]] "nil0"
-; CHECK-DAG: OpName %[[StaticPoisonIntFreeze:.*]] "nil1"
-; CHECK-DAG: OpName %[[StaticPoisonFloatFreeze:.*]] "nil2"
-; CHECK-DAG: OpName %[[Arg1]] "val1"
-; CHECK-DAG: OpName %[[Const100:.*]] "val2"
-; CHECK-DAG: OpName %[[Const100]] "val3"
-; CHECK: OpDecorate
 ; CHECK-DAG: %[[FloatTy:.*]] = OpTypeFloat 32
 ; CHECK-DAG: %[[ShortTy:.*]] = OpTypeInt 16 0
 ; CHECK-DAG: %[[IntTy:.*]] = OpTypeInt 32 0
 ; CHECK-DAG: %[[Undef16:.*]] = OpUndef %[[ShortTy]]
 ; CHECK-DAG: %[[Undef32:.*]] = OpUndef %[[IntTy]]
 ; CHECK-DAG: %[[UndefFloat:.*]] = OpUndef %[[FloatTy]]
-; CHECK-DAG: %[[Const100]] = OpConstant %[[IntTy]] 100
+; CHECK-DAG: %[[Const100:.*]] = OpConstant %[[IntTy]] 100
 
 define spir_func i16 @test_nil0(i16 %arg2) {
 entry:
-; CHECK: %[[NotAStaticPoison]] = OpIAdd %[[ShortTy]] %[[Arg2]] %[[Undef16]]
+; CHECK: %[[Arg2:.*]] = OpFunctionParameter
+; CHECK: %[[NotAStaticPoison:.*]] = OpIAdd %[[ShortTy]] %[[Arg2]] %[[Undef16]]
+; CHECK: OpReturnValue %[[NotAStaticPoison]]
   %poison1 = add i16 %arg2, undef
   %nil0 = freeze i16 %poison1
   ret i16 %nil0
@@ -41,7 +33,7 @@ entry:
 
 define spir_func float @freeze_float(float %arg1) {
 entry:
-; CHECK: %[[Arg1]] = OpFunctionParameter %[[FloatTy]]
+; CHECK: %[[Arg1:.*]] = OpFunctionParameter %[[FloatTy]]
   %val1 = freeze float %arg1
   ret float %val1
 }
diff --git a/llvm/test/CodeGen/SPIRV/instructions/bitwise-i1.ll b/llvm/test/CodeGen/SPIRV/instructions/bitwise-i1.ll
index 8d3657b36454b..792f7b9c194ba 100644
--- a/llvm/test/CodeGen/SPIRV/instructions/bitwise-i1.ll
+++ b/llvm/test/CodeGen/SPIRV/instructions/bitwise-i1.ll
@@ -25,17 +25,23 @@
 ; CHECK: OpLogicalOr %[[#Vec2Bool]]
 ; CHECK: OpLogicalNotEqual %[[#Vec2Bool]]
 
-define void @test1(i8 noundef %arg1, i8 noundef %arg2) {
+define void @test1(i8 noundef %arg1, i8 noundef %arg2, i8 addrspace(1)* %out) {
   %cond1 = and i8 %arg1, %arg2
+  store volatile i8 %cond1, i8 addrspace(1)* %out
   %cond2 = or i8 %arg1, %arg2
+  store volatile i8 %cond2, i8 addrspace(1)* %out
   %cond3 = xor i8 %arg1, %arg2
+  store volatile i8 %cond3, i8 addrspace(1)* %out
   ret void
 }
 
-define void @test1v(<2 x i8> noundef %arg1, <2 x i8> noundef %arg2) {
+define void @test1v(<2 x i8> noundef %arg1, <2 x i8> noundef %arg2, <2 x i8> addrspace(1)* %out) {
   %cond1 = and <2 x i8> %arg1, %arg2
+  store volatile <2 x i8> %cond1, <2 x i8> addrspace(1)* %out
   %cond2 = or <2 x i8> %arg1, %arg2
+  store volatile <2 x i8> %cond2, <2 x i8> addrspace(1)* %out
   %cond3 = xor <2 x i8> %arg1, %arg2
+  store volatile <2 x i8> %cond3, <2 x i8> addrspace(1)* %out
   ret void
 }
 
@@ -52,17 +58,23 @@ cleanup:
   ret void
 }
 
-define void @test3(i1 noundef %arg1, i1 noundef %arg2) {
+define void @test3(i1 noundef %arg1, i1 noundef %arg2, i1 addrspace(1)* %out) {
   %cond1 = and i1 %arg1, %arg2
+  store volatile i1 %cond1, i1 addrspace(1)* %out
   %cond2 = or i1 %arg1, %arg2
+  store volatile i1 %cond2, i1 addrspace(1)* %out
   %cond3 = xor i1 %arg1, %arg2
+  store volatile i1 %cond3, i1 addrspace(1)* %out
   ret void
 }
 
-define void @test3v(<2 x i1> noundef %arg1, <2 x i1> noundef %arg2) {
+define void @test3v(<2 x i1> noundef %arg1, <2 x i1> noundef %arg2, <2 x i1> addrspace(1)* %out) {
   %cond1 = and <2 x i1> %arg1, %arg2
+  store volatile <2 x i1> %cond1, <2 x i1> addrspace(1)* %out
   %cond2 = or <2 x i1> %arg1, %arg2
+  store volatile <2 x i1> %cond2, <2 x i1> addrspace(1)* %out
   %cond3 = xor <2 x i1> %arg1, %arg2
+  store volatile <2 x i1> %cond3, <2 x i1> addrspace(1)* %out
   ret void
 }
 
diff --git a/llvm/test/CodeGen/SPIRV/instructions/integer-casts.ll b/llvm/test/CodeGen/SPIRV/instructions/integer-casts.ll
index 5fe2cc883ceb9..52246ca8eaba8 100644
--- a/llvm/test/CodeGen/SPIRV/instructions/integer-casts.ll
+++ b/llvm/test/CodeGen/SPIRV/instructions/integer-casts.ll
@@ -14,12 +14,6 @@
 ; CHECK-DAG: OpName [[ZEXT8_16:%.*]] "u8tou16"
 ; CHECK-DAG: OpName [[ZEXT16_32:%.*]] "u16tou32"
 
-; CHECK-DAG: OpName %[[#R16:]] "r16"
-; CHECK-DAG: OpName %[[#R17:]] "r17"
-; CHECK-DAG: OpName %[[#R18:]] "r18"
-; CHECK-DAG: OpName %[[#R19:]] "r19"
-; CHECK-DAG: OpName %[[#R20:]] "r20"
-
 ; CHECK-DAG: OpName [[TRUNC32_16v4:%.*]] "i32toi16v4"
 ; CHECK-DAG: OpName [[TRUNC32_8v4:%.*]] "i32toi8v4"
 ; CHECK-DAG: OpName [[TRUNC16_8v4:%.*]] "i16toi8v4"
@@ -30,11 +24,13 @@
 ; CHECK-DAG: OpName [[ZEXT8_16v4:%.*]] "u8tou16v4"
 ; CHECK-DAG: OpName [[ZEXT16_32v4:%.*]] "u16tou32v4"
 
-; CHECK-DAG: OpDecorate %[[#R16]] FPRoundingMode RTZ
-; CHECK-DAG: OpDecorate %[[#R17]] FPRoundingMode RTE
-; CHECK-DAG: OpDecorate %[[#R18]] FPRoundingMode RTP
-; CHECK-DAG: OpDecorate %[[#R19]] FPRoundingMode RTN
-; CHECK-DAG: OpDecorate %[[#R20]] SaturatedConversion
+; CHECK-DAG: OpDecorate %[[#R14:]] FPRoundingMode RTZ
+; CHECK-DAG: OpDecorate %[[#R15:]] FPRoundingMode RTZ
+; CHECK-DAG: OpDecorate %[[#R16:]] FPRoundingMode RTZ
+; CHECK-DAG: OpDecorate %[[#R17:]] FPRoundingMode RTE
+; CHECK-DAG: OpDecorate %[[#R18:]] FPRoundingMode RTP
+; CHECK-DAG: OpDecorate %[[#R19:]] FPRoundingMode RTN
+; CHECK-DAG: OpDecorate %[[#R20:]] SaturatedConversion
 
 ; CHECK-DAG: [[F32:%.*]] = OpTypeFloat 32
 ; CHECK-DAG: [[F16:%.*]] = OpTypeFloat 16
@@ -264,35 +260,55 @@ define <4 x i32>  @u16tou32v4(<4 x i16> %a) {
 ; CHECK: %[[#]] = OpConvertUToPtr %[[#]] [[Arg2]]
 ; CHECK: %[[#]] = OpUConvert [[U32v4]] %[[#]]
 ; CHECK: %[[#]] = OpSConvert [[U32v4]] %[[#]]
-; CHECK: %[[#]] = OpConvertUToF [[F32]] %[[#]]
-; CHECK: %[[#]] = OpConvertUToF [[F32]] %[[#]]
+; CHECK: %[[#R14]] = OpConvertUToF [[F32]] %[[#]]
+; CHECK: %[[#R15]] = OpConvertUToF [[F32]] %[[#]]
 ; CHECK: %[[#R16]] = OpFConvert [[F32v2]] %[[#]]
 ; CHECK: %[[#R17]] = OpFConvert [[F32v2]] %[[#]]
 ; CHECK: %[[#R18]] = OpFConvert [[F32v2]] %[[#]]
 ; CHECK: %[[#R19]] = OpFConvert [[F32v2]] %[[#]]
 ; CHECK: %[[#R20]] = OpConvertFToU [[U8]] %[[#]]
 ; CHECK: OpFunctionEnd
-define dso_local spir_kernel void @test_wrappers(ptr addrspace(4) %arg, i64 %arg_ptr, <4 x i8> %arg_v2) {
+define dso_local spir_kernel void @test_wrappers(ptr addrspace(4) %arg, i64 %arg_ptr, <4 x i8> %arg_v2, ptr addrspace(1) %out_i32, ptr addrspace(1) %out_float, ptr addrspace(1) %out_half, ptr addrspace(1) %out_i64, ptr addrspace(1) %out_ptr, ptr addrspace(1) %out_v4i32, ptr addrspace(1) %out_v2f32, ptr addrspace(1) %out_i8) {
   %r1 = call spir_func i32 @__spirv_ConvertFToU(float 0.000000e+00)
+  store volatile i32 %r1, ptr addrspace(1) %out_i32
   %r2 = call spir_func i32 @__spirv_ConvertFToS(float 0.000000e+00)
+  store volatile i32 %r2, ptr addrspace(1) %out_i32
   %r3 = call spir_func float @__spirv_ConvertSToF(i32 1)
+  store volatile float %r3, ptr addrspace(1) %out_float
   %r4 = call spir_func float @__spirv_ConvertUToF(i32 1)
+  store volatile float %r4, ptr addrspace(1) %out_float
   %r5 = call spir_func i32 @__spirv_UConvert(i64 1)
+  store volatile i32 %r5, ptr addrspace(1) %out_i32
   %r6 = call spir_func i32 @__spirv_SConvert(i64 1)
+  store volatile i32 %r6, ptr addrspace(1) %out_i32
   %r7 = call spir_func half @__spirv_FConvert(float 0.000000e+00)
+  store volatile half %r7, ptr addrspace(1) %out_half
   %r8 = call spir_func i64 @__spirv_SatConvertSToU(i64 1)
+  store volatile i64 %r8, ptr addrspace(1) %out_i64
   %r9 = call spir_func i64 @__spirv_SatConvertUToS(i64 1)
+  store volatile i64 %r9, ptr addrspace(1) %out_i64
   %r10 = call spir_func i64 @__spirv_ConvertPtrToU(ptr addrspace(4) %arg)
+  store volatile i64 %r10, ptr addrspace(1) %out_i64
   %r11 = call spir_func ptr addrspace(4) @__spirv_ConvertUToPtr(i64 %arg_ptr)
+  store volatile ptr addrspace(4) %r11, ptr addrspace(1) %out_ptr
   %r12 = call spir_func <4 x i32> @_Z22__spirv_UConvert_Rint2Dv2_a(<4 x i8> %arg_v2)
+  store volatile <4 x i32> %r12, ptr addrspace(1) %out_v4i32
   %r13 = call spir_func <4 x i32> @_Z22__spirv_SConvert_Rint2Dv2_a(<4 x i8> %arg_v2)
+  store volatile <4 x i32> %r13, ptr addrspace(1) %out_v4i32
   %r14 = call spir_func float @_Z30__spirv_ConvertUToF_Rfloat_rtz(i64 %arg_ptr)
+  store volatile float %r14, ptr addrspace(1) %out_float
   %r15 = call spir_func float @__spirv_ConvertUToF_Rfloat_rtz(i64 %arg_ptr)
+  store volatile float %r15, ptr addrspace(1) %out_float
   %r16 = call spir_func <2 x float> @_Z28__spirv_FConvert_Rfloat2_rtzDv2_DF16_(<2 x half> noundef <half 0xH409A, half 0xH439A>)
+  store volatile <2 x float> %r16, ptr addrspace(1) %out_v2f32
   %r17 = call spir_func <2 x float> @_Z28__spirv_FConvert_Rfloat2_rteDv2_DF16_(<2 x half> noundef <half 0xH409A, half 0xH439A>)
+  store volatile <2 x float> %r17, ptr addrspace(1) %out_v2f32
   %r18 = call spir_func <2 x float> @_Z28__spirv_FConvert_Rfloat2_rtpDv2_DF16_(<2 x half> noundef <half 0xH409A, half 0xH439A>)
+  store volatile <2 x float> %r18, ptr addrspace(1) %out_v2f32
   %r19 = call spir_func <2 x float> @_Z28__spirv_FConvert_Rfloat2_rtnDv2_DF16_(<2 x half> noundef <half 0xH409A, half 0xH439A>)
+  store volatile <2 x float> %r19, ptr addrspace(1) %out_v2f32
   %r20 = call spir_func i8 @_Z30__spirv_ConvertFToU_Ruchar_satf(float noundef 42.0)
+  store volatile i8 %r20, ptr addrspace(1) %out_i8
   ret void
 }
 
diff --git a/llvm/test/CodeGen/SPIRV/llvm-intrinsics/assume.ll b/llvm/test/CodeGen/SPIRV/llvm-intrinsics/assume.ll
index 691325251f11d..32a86ed13b4ff 100644
--- a/llvm/test/CodeGen/SPIRV/llvm-intrinsics/assume.ll
+++ b/llvm/test/CodeGen/SPIRV/llvm-intrinsics/assume.ll
@@ -2,9 +2,7 @@
 
 ; CHECK-SPIRV-NOT: OpCapability ExpectAssumeKHR
 ; CHECK-SPIRV-NOT: OpExtension "SPV_KHR_expect_assume"
-; CHECK-SPIRV:     OpName %[[#COMPARE:]] "cmp"
-; CHECK-SPIRV:     %[[#COMPARE]] = OpINotEqual %[[#]] %[[#]] %[[#]]
-; CHECK-SPIRV-NOT: OpAssumeTrueKHR %[[#COMPARE]]
+; CHECK-SPIRV-NOT: OpAssumeTrueKHR
 
 %class.anon = type { i8 }
 
diff --git a/llvm/test/CodeGen/SPIRV/llvm-intrinsics/constrained-arithmetic.ll b/llvm/test/CodeGen/SPIRV/llvm-intrinsics/constrained-arithmetic.ll
index 8e8e4df8fabc6..14d25ae9b686e 100644
--- a/llvm/test/CodeGen/SPIRV/llvm-intrinsics/constrained-arithmetic.ll
+++ b/llvm/test/CodeGen/SPIRV/llvm-intrinsics/constrained-arithmetic.ll
@@ -1,27 +1,21 @@
 ; RUN: llc -verify-machineinstrs -O0 -mtriple=spirv64-unknown-unknown %s -o - | FileCheck %s
 ; RUN: %if spirv-tools %{ llc -O0 -mtriple=spirv64-unknown-unknown %s -o - -filetype=obj | spirv-val %}
 
-; CHECK-DAG: OpName %[[#r1:]] "r1"
-; CHECK-DAG: OpName %[[#r2:]] "r2"
-; CHECK-DAG: OpName %[[#r3:]] "r3"
-; CHECK-DAG: OpName %[[#r4:]] "r4"
-; CHECK-DAG: OpName %[[#r5:]] "r5"
-; CHECK-DAG: OpName %[[#r6:]] "r6"
-
-; CHECK-NOT: OpDecorate %[[#r5]] FPRoundingMode
-; CHECK-NOT: OpDecorate %[[#r6]] FPRoundingMode
+; CHECK-DAG: %[[#r1:]] = OpFAdd %[[#]] %[[#]]
+; CHECK-DAG: %[[#r2:]] = OpFDiv %[[#]] %[[#]]
+; CHECK-DAG: %[[#r3:]] = OpFSub %[[#]] %[[#]]
+; CHECK-DAG: %[[#r4:]] = OpFMul %[[#]] %[[#]]
+; CHECK-DAG: %[[#r5:]] = OpExtInst %[[#]] %[[#]] fma %[[#]] %[[#]] %[[#]]
+; CHECK-DAG: %[[#r6:]] = OpFRem
 
 ; CHECK-DAG: OpDecorate %[[#r1]] FPRoundingMode RTE
 ; CHECK-DAG: OpDecorate %[[#r2]] FPRoundingMode RTZ
 ; CHECK-DAG: OpDecorate %[[#r4]] FPRoundingMode RTN
 ; CHECK-DAG: OpDecorate %[[#r3]] FPRoundingMode RTP
 
-; CHECK: OpFAdd %[[#]] %[[#]]
-; CHECK: OpFDiv %[[#]] %[[#]]
-; CHECK: OpFSub %[[#]] %[[#]]
-; CHECK: OpFMul %[[#]] %[[#]]
-; CHECK: OpExtInst %[[#]] %[[#]] fma %[[#]] %[[#]] %[[#]]
-; CHECK: OpFRem
+; CHECK-NOT: OpDecorate %[[#r5]] FPRoundingMode
+; CHECK-NOT: OpDecorate %[[#r6]] FPRoundingMode
+
 
 @G_r1 = global float 0.0
 @G_r2 = global float 0.0
diff --git a/llvm/test/CodeGen/SPIRV/opencl/vload2.ll b/llvm/test/CodeGen/SPIRV/opencl/vload2.ll
index 1a1b6c484e74f..926c88fc2b632 100644
--- a/llvm/test/CodeGen/SPIRV/opencl/vload2.ll
+++ b/llvm/test/CodeGen/SPIRV/opencl/vload2.ll
@@ -3,12 +3,6 @@
 
 ; CHECK-DAG: %[[#IMPORT:]] = OpExtInstImport "OpenCL.std"
 
-; CHECK-DAG: OpName %[[#CALL1:]] "call1"
-; CHECK-DAG: OpName %[[#CALL2:]] "call2"
-; CHECK-DAG: OpName %[[#CALL3:]] "call3"
-; CHECK-DAG: OpName %[[#CALL4:]] "call4"
-; CHECK-DAG: OpName %[[#CALL5:]] "call5"
-
 ; CHECK-DAG: %[[#INT8:]] = OpTypeInt 8 0
 ; CHECK-DAG: %[[#INT16:]] = OpTypeInt 16 0
 ; CHECK-DAG: %[[#INT32:]] = OpTypeInt 32 0
@@ -27,22 +21,27 @@
 
 ; CHECK: %[[#OFFSET:]] = OpFunctionParameter %[[#INT64]]
 
-define spir_kernel void @test_fn(i64 %offset, ptr addrspace(1) %address) {
+define spir_kernel void @test_fn(i64 %offset, ptr addrspace(1) %address, ptr addrspace(1) %out) {
 ; CHECK-DAG: %[[#CASTorPARAMofPTRI8:]] = {{OpBitcast|OpFunctionParameter}}{{.*}}%[[#PTRINT8]]{{.*}}
-; CHECK-DAG: %[[#CALL1]] = OpExtInst %[[#VINT8]] %[[#IMPORT]] vloadn %[[#OFFSET]] %[[#CASTorPARAMofPTRI8]] 2
+; CHECK-DAG: %[[#CALL1:]] = OpExtInst %[[#VINT8]] %[[#IMPORT]] vloadn %[[#OFFSET]] %[[#CASTorPARAMofPTRI8]] 2
   %call1 = call spir_func <2 x i8> @_Z6vload2mPU3AS1Kc(i64 %offset, ptr addrspace(1) %address)
+  store volatile <2 x i8> %call1, ptr addrspace(1) %out
 ; CHECK-DAG: %[[#CASTorPARAMofPTRI16:]] = {{OpBitcast|OpFunctionParameter}}{{.*}}%[[#PTRINT16]]{{.*}}
-; CHECK-DAG: %[[#CALL2]] = OpExtInst %[[#VINT16]] %[[#IMPORT]] vloadn %[[#OFFSET]] %[[#CASTorPARAMofPTRI16]] 2
+; CHECK-DAG: %[[#CALL2:]] = OpExtInst %[[#VINT16]] %[[#IMPORT]] vloadn %[[#OFFSET]] %[[#CASTorPARAMofPTRI16]] 2
   %call2 = call spir_func <2 x i16> @_Z6vload2mPU3AS1Ks(i64 %offset, ptr addrspace(1) %address)
+  store volatile <2 x i16> %call2, ptr addrspace(1) %out
 ; CHECK-DAG: %[[#CASTorPARAMofPTRI32:]] = {{OpBitcast|OpFunctionParameter}}{{.*}}%[[#PTRINT32]]{{.*}}
-; CHECK-DAG: %[[#CALL3]] = OpExtInst %[[#VINT32]] %[[#IMPORT]] vloadn %[[#OFFSET]] %[[#CASTorPARAMofPTRI32]] 2
+; CHECK-DAG: %[[#CALL3:]] = OpExtInst %[[#VINT32]] %[[#IMPORT]] vloadn %[[#OFFSET]] %[[#CASTorPARAMofPTRI32]] 2
   %call3 = call spir_func <2 x i32> @_Z6vload2mPU3AS1Ki(i64 %offset, ptr addrspace(1) %address)
+  store volatile <2 x i32> %call3, ptr addrspace(1) %out
 ; CHECK-DAG: %[[#CASTorPARAMofPTRI64:]] = {{OpBitcast|OpFunctionParameter}}{{.*}}%[[#PTRINT64]]{{.*}}
-; CHECK-DAG: %[[#CALL4]] = OpExtInst %[[#VINT64]] %[[#IMPORT]] vloadn %[[#OFFSET]] %[[#CASTorPARAMofPTRI64]] 2
+; CHECK-DAG: %[[#CALL4:]] = OpExtInst %[[#VINT64]] %[[#IMPORT]] vloadn %[[#OFFSET]] %[[#CASTorPARAMofPTRI64]] 2
   %call4 = call spir_func <2 x i64> @_Z6vload2mPU3AS1Kl(i64 %offset, ptr addrspace(1) %address)
+  store volatile <2 x i64> %call4, ptr addrspace(1) %out
 ; CHECK-DAG: %[[#CASTorPARAMofPTRFLOAT:]] = {{OpBitcast|OpFunctionParameter}}{{.*}}%[[#PTRFLOAT]]{{.*}}
-; CHECK-DAG: %[[#CALL5]] = OpExtInst %[[#VFLOAT]] %[[#IMPORT]] vloadn %[[#OFFSET]] %[[#CASTorPARAMofPTRFLOAT]] 2
+; CHECK-DAG: %[[#CALL5:]] = OpExtInst %[[#VFLOAT]] %[[#IMPORT]] vloadn %[[#OFFSET]] %[[#CASTorPARAMofPTRFLOAT]] 2
   %call5 = call spir_func <2 x float> @_Z6vload2mPU3AS1Kf(i64 %offset, ptr addrspace(1) %address)
+  store volatile <2 x float> %call5, ptr addrspace(1) %out
   ret void
 }
 
diff --git a/llvm/test/CodeGen/SPIRV/optimizations/add-check-overflow.ll b/llvm/test/CodeGen/SPIRV/optimizations/add-check-overflow.ll
index 2db620dab8801..d257a73aa19d3 100644
--- a/llvm/test/CodeGen/SPIRV/optimizations/add-check-overflow.ll
+++ b/llvm/test/CodeGen/SPIRV/optimizations/add-check-overflow.ll
@@ -16,9 +16,6 @@
 ; RUN: llc -O3 -disable-lsr -mtriple=spirv64-unknown-unknown %s -o - | FileCheck --check-prefix=NOLSR %s
 ; RUN: %if spirv-tools %{ llc -O3 -disable-lsr -mtriple=spirv64-unknown-unknown %s -o - -filetype=obj | spirv-val %}
 
-; CHECK-DAG: OpName %[[PhiRes:.*]] "lsr.iv"
-; CHECK-DAG: OpName %[[IsOver:.*]] "fl"
-; CHECK-DAG: OpName %[[Val:.*]] "lsr.iv.next"
 ; CHECK-DAG: %[[Int:.*]] = OpTypeInt 32 0
 ; CHECK-DAG: %[[Char:.*]] = OpTypeInt 8 0
 ; CHECK-DAG: %[[PtrChar:.*]] = OpTypePointer Generic %[[Char]]
@@ -33,8 +30,8 @@
 ; CHECK: %[[APlusOne:.*]] = OpIAdd %[[Int]] %[[A]] %[[Const1]]
 ;	CHECK: OpBranch %[[#]]
 ;	CHECK: [[#]] = OpLabel
-;	CHECK: %[[PhiRes]] = OpPhi %[[Int]] %[[Val]] %[[#]] %[[APlusOne]] %[[#]]
-;	CHECK: %[[IsOver]] = OpIEqual %[[Bool]] %[[#]] %[[#]]
+;	CHECK: %[[PhiRes:.*]] = OpPhi %[[Int]] %[[Val:.*]] %[[#]] %[[APlusOne]] %[[#]]
+;	CHECK: %[[IsOver:.*]] = OpIEqual %[[Bool]] %[[#]] %[[#]]
 ;	CHECK: OpBranchConditional %[[IsOver]] %[[#]] %[[#]]
 ;	CHECK: [[#]] = OpLabel
 ;	CHECK: OpStore %[[Ptr]] %[[Const42]] Aligned 1
@@ -43,8 +40,6 @@
 ;	CHECK: [[#]] = OpLabel
 ;	OpReturnValue %[[PhiRes]]
 
-; NOLSR-DAG: OpName %[[Val:.*]] "math"
-; NOLSR-DAG: OpName %[[IsOver:.*]] "ov"
 ; NOLSR-DAG: %[[Int:.*]] = OpTypeInt 32 0
 ; NOLSR-DAG: %[[Char:.*]] = OpTypeInt 8 0
 ; NOLSR-DAG: %[[PtrChar:.*]] = OpTypePointer Generic %[[Char]]
@@ -60,11 +55,11 @@
 ; NOLSR: %[[#]] = OpLabel
 ; NOLSR: OpBranch %[[#]]
 ; NOLSR: %[[#]] = OpLabel
-; NOLSR: %[[PhiRes:.*]] = OpPhi %[[Int]] %[[A]] %[[#]] %[[Val]] %[[#]]
+; NOLSR: %[[PhiRes:.*]] = OpPhi %[[Int]] %[[A]] %[[#]] %[[Val:.*]] %[[#]]
 ; NOLSR: %[[AggRes:.*]] = OpIAddCarry %[[Struct]] %[[PhiRes]] %[[Const1]]
 ; NOLSR: %[[Val]] = OpCompositeExtract %[[Int]] %[[AggRes]] 0
 ; NOLSR: %[[Over:.*]] = OpCompositeExtract %[[Int]] %[[AggRes]] 1
-; NOLSR: %[[IsOver]] = OpINotEqual %[[Bool:.*]] %[[Over]] %[[Zero]]
+; NOLSR: %[[IsOver:.*]] = OpINotEqual %[[Bool:.*]] %[[Over]] %[[Zero]]
 ; NOLSR: OpBranchConditional %[[IsOver]] %[[#]] %[[#]]
 ; NOLSR: OpStore %[[Ptr]] %[[Const42]] Aligned 1
 ; NOLSR: OpBranch %[[#]]
diff --git a/llvm/test/CodeGen/SPIRV/passes/translate-aggregate-uaddo.ll b/llvm/test/CodeGen/SPIRV/passes/translate-aggregate-uaddo.ll
index 6baac7877a4a8..59a0668df445c 100644
--- a/llvm/test/CodeGen/SPIRV/passes/translate-aggregate-uaddo.ll
+++ b/llvm/test/CodeGen/SPIRV/passes/translate-aggregate-uaddo.ll
@@ -9,20 +9,20 @@
 ; Aggregate data are wrapped into @llvm.fake.use(),
 ; and their attributes are packed into a metadata for @llvm.spv.value.md().
 ; CHECK-IR: %[[R1:.*]] = call { i32, i1 } @llvm.uadd.with.overflow.i32
-; CHECK-IR: call void @llvm.spv.value.md(metadata !1)
+; CHECK-IR: call void @llvm.spv.value.md(metadata ![[#MD1:]])
 ; CHECK-IR: call void (...) @llvm.fake.use({ i32, i1 } %[[R1]])
 ; CHECK-IR: %math = extractvalue { i32, i1 } %[[R1]], 0
 ; CHECK-IR: %ov = extractvalue { i32, i1 } %[[R1]], 1
 ; Type/Name attributes of the value.
-; CHECK-IR: !1 = !{{[{]}}!2, !""{{[}]}}
+; CHECK-IR: ![[#MD1]] = !{{[{]}}![[#MD2:]], !""{{[}]}}
 ; Origin data type of the value.
-; CHECK-IR: !2 = !{{[{]}}{{[{]}} i32, i1 {{[}]}} poison{{[}]}}
+; CHECK-IR: ![[#MD2]] = !{{[{]}}{{[{]}} i32, i1 {{[}]}} poison{{[}]}}
 
 ; RUN: llc -O0 -mtriple=spirv64-unknown-unknown %s -o - -print-after=irtranslator 2>&1 | FileCheck %s  --check-prefix=CHECK-GMIR
 ; Required info succeeded to get through IRTranslator.
 ; CHECK-GMIR: %[[phires:.*]]:_(s32) = G_PHI
 ; CHECK-GMIR: %[[math:.*]]:id(s32), %[[ov:.*]]:_(s1) = G_UADDO %[[phires]]:_, %[[#]]:_
-; CHECK-GMIR: G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.spv.value.md), !1
+; CHECK-GMIR: G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.spv.value.md), ![[#MD1:]]
 ; CHECK-GMIR: FAKE_USE %[[math]]:id(s32), %[[ov]]:_(s1)
 
 ; RUN: llc -O0 -mtriple=spirv64-unknown-unknown %s -o - -print-after=spirv-prelegalizer 2>&1 | FileCheck %s  --check-prefix=CHECK-PRE
@@ -41,8 +41,6 @@
 ; CHECK-ISEL-DAG: %[[math:.*]]:id = OpCompositeExtract %[[int32]]:type, %[[res]]:iid, 0
 ; CHECK-ISEL-DAG: %[[ov32:.*]]:iid = OpCompositeExtract %[[int32]]:type, %[[res]]:iid, 1
 ; CHECK-ISEL-DAG: %[[ov:.*]]:iid = OpINotEqual %[[bool]]:type, %[[ov32]]:iid, %[[zero32:.*]]:iid
-; CHECK-ISEL-DAG: OpName %[[math]]:id, 1752457581, 0
-; CHECK-ISEL-DAG: OpName %[[ov]]:iid, 30319
 
 define spir_func i32 @foo(i32 %a, ptr addrspace(4) %p) {
 entry:
diff --git a/llvm/test/CodeGen/SPIRV/pointers/phi-chain-types.ll b/llvm/test/CodeGen/SPIRV/pointers/phi-chain-types.ll
index 44134f83cfec3..452e6a1026b74 100644
--- a/llvm/test/CodeGen/SPIRV/pointers/phi-chain-types.ll
+++ b/llvm/test/CodeGen/SPIRV/pointers/phi-chain-types.ll
@@ -5,13 +5,7 @@
 ; RUN: llc -O0 -mtriple=spirv64-unknown-unknown %s -o - | FileCheck %s
 
 ; CHECK-DAG: OpName %[[#Foo:]] "foo"
-; CHECK-DAG: OpName %[[#FooVal1:]] "val1"
-; CHECK-DAG: OpName %[[#FooVal2:]] "val2"
-; CHECK-DAG: OpName %[[#FooVal3:]] "val3"
 ; CHECK-DAG: OpName %[[#Bar:]] "bar"
-; CHECK-DAG: OpName %[[#BarVal1:]] "val1"
-; CHECK-DAG: OpName %[[#BarVal2:]] "val2"
-; CHECK-DAG: OpName %[[#BarVal3:]] "val3"
 
 ; CHECK-DAG: %[[#Short:]] = OpTypeInt 16 0
 ; CHECK-DAG: %[[#ShortGenPtr:]] = OpTypePointer Generic %[[#Short]]
@@ -24,8 +18,8 @@
 ; CHECK: OpFunctionParameter
 ; CHECK: OpFunctionParameter
 ; CHECK: %[[#FooG1:]] = OpPtrCastToGeneric %[[#ShortGenPtr]] %[[#G1]]
-; CHECK: %[[#FooVal2]] = OpPhi %[[#ShortGenPtr]] %[[#FooArgP]] %[[#]] %[[#FooVal3]] %[[#]]
-; CHECK: %[[#FooVal1]] = OpPhi %[[#ShortGenPtr]] %[[#FooG1]] %[[#]] %[[#FooVal2]] %[[#]]
+; CHECK: %[[#FooVal2:]] = OpPhi %[[#ShortGenPtr]] %[[#FooArgP]] %[[#]] %[[#FooVal3:]] %[[#]]
+; CHECK: %[[#FooVal1:]] = OpPhi %[[#ShortGenPtr]] %[[#FooG1]] %[[#]] %[[#FooVal2]] %[[#]]
 ; CHECK: %[[#FooVal3]] = OpLoad %[[#ShortGenPtr]] %[[#]]
 
 ; CHECK: %[[#Bar:]] = OpFunction %[[#]] None %[[#]]
@@ -33,9 +27,9 @@
 ; CHECK: OpFunctionParameter
 ; CHECK: OpFunctionParameter
 ; CHECK: OpFunctionParameter
-; CHECK: %[[#BarVal3]] = OpLoad %[[#ShortGenPtr]] %[[#]]
+; CHECK: %[[#BarVal3:]] = OpLoad %[[#ShortGenPtr]] %[[#]]
 ; CHECK: %[[#BarG1:]] = OpPtrCastToGeneric %[[#ShortGenPtr]] %[[#G1]]
-; CHECK: %[[#BarVal1]] = OpPhi %[[#ShortGenPtr]] %[[#BarG1]] %[[#]] %[[#BarVal2]] %[[#]]
+; CHECK: %[[#BarVal1:]] = OpPhi %[[#ShortGenPtr]] %[[#BarG1]] %[[#]] %[[#BarVal2:]] %[[#]]
 ; CHECK: %[[#BarVal2]] = OpPhi %[[#ShortGenPtr]] %[[#BarArgP]] %[[#]] %[[#BarVal3]] %[[#]]
 
 @G1 = internal addrspace(3) global i16 undef, align 8
diff --git a/llvm/test/CodeGen/SPIRV/pointers/type-deduce-by-call-chain.ll b/llvm/test/CodeGen/SPIRV/pointers/type-deduce-by-call-chain.ll
index dbc88cd1a7859..979fe48511972 100644
--- a/llvm/test/CodeGen/SPIRV/pointers/type-deduce-by-call-chain.ll
+++ b/llvm/test/CodeGen/SPIRV/pointers/type-deduce-by-call-chain.ll
@@ -3,7 +3,6 @@
 
 ; CHECK-SPIRV-DAG: OpName %[[ArgCum:.*]] "_arg_cum"
 ; CHECK-SPIRV-DAG: OpName %[[FunTest:.*]] "test"
-; CHECK-SPIRV-DAG: OpName %[[Addr:.*]] "addr"
 ; CHECK-SPIRV-DAG: OpName %[[StubObj:.*]] "stub_object"
 ; CHECK-SPIRV-DAG: OpName %[[MemOrder:.*]] "mem_order"
 ; CHECK-SPIRV-DAG: OpName %[[FooStub:.*]] "foo_stub"
@@ -24,7 +23,7 @@
 ; CHECK-SPIRV: %[[FunTest]] = OpFunction %[[TyVoid]] None %[[TyFunPtrLong]]
 ; CHECK-SPIRV: %[[ArgCum]] = OpFunctionParameter %[[TyPtrLong]]
 
-; CHECK-SPIRV: OpFunctionCall %[[TyVoid]] %[[FooFunc]] %[[Addr]] %[[Const3]]
+; CHECK-SPIRV: OpFunctionCall %[[TyVoid]] %[[FooFunc]] %[[Addr:.*]] %[[Const3]]
 
 ; CHECK-SPIRV: %[[HalfAddr:.*]] = OpPtrCastToGeneric
 ; CHECK-SPIRV-NEXT: %[[HalfAddrCasted:.*]] = OpBitcast %[[TyGenPtrLong]] %[[HalfAddr]]
diff --git a/llvm/test/CodeGen/SPIRV/select-builtin.ll b/llvm/test/CodeGen/SPIRV/select-builtin.ll
index 6717970d160fc..b4601d33dd38f 100644
--- a/llvm/test/CodeGen/SPIRV/select-builtin.ll
+++ b/llvm/test/CodeGen/SPIRV/select-builtin.ll
@@ -6,10 +6,11 @@
 
 ;; LLVM IR was generated with -cl-std=c++ option
 
-define spir_kernel void @test(i32 %op1, i32 %op2) {
+define spir_kernel void @test(i32 %op1, i32 %op2, i32 addrspace(1)* %out) {
 entry:
   %0 = trunc i8 undef to i1
   %call = call spir_func i32 @_Z14__spirv_Selectbii(i1 zeroext %0, i32 %op1, i32 %op2)
+  store i32 %call, i32 addrspace(1)* %out
   ret void
 }
 
diff --git a/llvm/test/CodeGen/SPIRV/transcoding/OpDot.ll b/llvm/test/CodeGen/SPIRV/transcoding/OpDot.ll
index 58fcc3688c89d..c761e7729d25e 100644
--- a/llvm/test/CodeGen/SPIRV/transcoding/OpDot.ll
+++ b/llvm/test/CodeGen/SPIRV/transcoding/OpDot.ll
@@ -14,9 +14,10 @@
 ; CHECK-SPIRV-NOT:   %[[#]] = OpDot %[[#]] %[[#]] %[[#]]
 ; CHECK-SPIRV:       OpFunctionEnd
 
-define spir_kernel void @testScalar(float %f) {
+define spir_kernel void @testScalar(float %f, float addrspace(1)* %out) {
 entry:
   %call = tail call spir_func float @_Z3dotff(float %f, float %f)
+  store float %call, float addrspace(1)* %out
   ret void
 }
 
@@ -28,11 +29,14 @@ entry:
 ; CHECK-SPIRV:       %[[#]] = OpDot %[[#TyHalf]] %[[#]] %[[#]]
 ; CHECK-SPIRV:       OpFunctionEnd
 
-define spir_kernel void @testVector(<2 x float> %f, <2 x half> %h) {
+define spir_kernel void @testVector(<2 x float> %f, <2 x half> %h, float addrspace(1)* %out, half addrspace(1)* %outh) {
 entry:
   %call = tail call spir_func float @_Z3dotDv2_fS_(<2 x float> %f, <2 x float> %f)
+  store float %call, float addrspace(1)* %out
   %call2 = tail call spir_func float @__spirv_Dot(<2 x float> %f, <2 x float> %f)
+  store float %call2, float addrspace(1)* %out
   %call3 = tail call spir_func half @_Z11__spirv_DotDv2_DF16_S_(<2 x half> %h, <2 x half> %h)
+  store half %call3, half addrspace(1)* %outh
   ret void
 }
 
diff --git a/llvm/test/CodeGen/SPIRV/transcoding/OpVectorExtractDynamic.ll b/llvm/test/CodeGen/SPIRV/transcoding/OpVectorExtractDynamic.ll
index 03731b6d67565..ef92c4418d21d 100644
--- a/llvm/test/CodeGen/SPIRV/transcoding/OpVectorExtractDynamic.ll
+++ b/llvm/test/CodeGen/SPIRV/transcoding/OpVectorExtractDynamic.ll
@@ -5,12 +5,11 @@
 
 ; CHECK-SPIRV: OpName %[[#vec:]] "vec"
 ; CHECK-SPIRV: OpName %[[#index:]] "index"
-; CHECK-SPIRV: OpName %[[#res:]] "res"
 
 ; CHECK-SPIRV: %[[#float:]] = OpTypeFloat 32
 ; CHECK-SPIRV: %[[#float2:]] = OpTypeVector %[[#float]] 2
 
-; CHECK-SPIRV: %[[#res]] = OpVectorExtractDynamic %[[#float]] %[[#vec]] %[[#index]]
+; CHECK-SPIRV: %[[#res:]] = OpVectorExtractDynamic %[[#float]] %[[#vec]] %[[#index]]
 
 define spir_kernel void @test(float addrspace(1)* nocapture %out, <2 x float> %vec, i32 %index) {
 entry:
diff --git a/llvm/test/CodeGen/SPIRV/transcoding/OpVectorInsertDynamic_i16.ll b/llvm/test/CodeGen/SPIRV/transcoding/OpVectorInsertDynamic_i16.ll
index c939f827e2bba..76f990ed6806a 100644
--- a/llvm/test/CodeGen/SPIRV/transcoding/OpVectorInsertDynamic_i16.ll
+++ b/llvm/test/CodeGen/SPIRV/transcoding/OpVectorInsertDynamic_i16.ll
@@ -4,9 +4,6 @@
 ; RUN: llc -verify-machineinstrs -O0 -mtriple=spirv32-unknown-unknown %s -o - | FileCheck %s
 ; RUN: %if spirv-tools %{ llc -O0 -mtriple=spirv32-unknown-unknown %s -o - -filetype=obj | spirv-val %}
 
-; CHECK:     OpName %[[#v:]] "v"
-; CHECK:     OpName %[[#index:]] "index"
-; CHECK:     OpName %[[#res:]] "res"
 ; CHECK-DAG: %[[#int16:]] = OpTypeInt 16
 ; CHECK-DAG: %[[#int32:]] = OpTypeInt 32
 ; CHECK-DAG: %[[#int16_2:]] = OpTypeVector %[[#int16]] 2
@@ -17,7 +14,7 @@
 ; CHECK-NOT: %[[#idx2:]] = OpConstant %[[#int32]] 1{{$}}
 ; CHECK:     %[[#vec1:]] = OpCompositeInsert %[[#int16_2]] %[[#const1]] %[[#undef]] 0
 ; CHECK:     %[[#vec2:]] = OpCompositeInsert %[[#int16_2]] %[[#const2]] %[[#vec1]] 1
-; CHECK:     %[[#res]] = OpVectorInsertDynamic %[[#int16_2]] %[[#vec2]] %[[#v]] %[[#index]]
+; CHECK:     %[[#res:]] = OpVectorInsertDynamic %[[#int16_2]] %[[#vec2]] %[[#v:]] %[[#index:]]
 
 define spir_kernel void @test(<2 x i16>* nocapture %out, i16 %v, i32 %index) {
 entry:
diff --git a/llvm/test/CodeGen/SPIRV/transcoding/fadd.ll b/llvm/test/CodeGen/SPIRV/transcoding/fadd.ll
index d84fd492a86a8..ff73eb4c2eb5d 100644
--- a/llvm/test/CodeGen/SPIRV/transcoding/fadd.ll
+++ b/llvm/test/CodeGen/SPIRV/transcoding/fadd.ll
@@ -1,64 +1,65 @@
 ; RUN: llc -O0 -mtriple=spirv32-unknown-unknown %s -o - | FileCheck %s --check-prefix=CHECK-SPIRV
 ; RUN: %if spirv-tools %{ llc -O0 -mtriple=spirv32-unknown-unknown %s -o - -filetype=obj | spirv-val %}
 
-; CHECK-SPIRV:     OpName %[[#r1:]] "r1"
-; CHECK-SPIRV:     OpName %[[#r2:]] "r2"
-; CHECK-SPIRV:     OpName %[[#r3:]] "r3"
-; CHECK-SPIRV:     OpName %[[#r4:]] "r4"
-; CHECK-SPIRV:     OpName %[[#r5:]] "r5"
-; CHECK-SPIRV:     OpName %[[#r6:]] "r6"
-; CHECK-SPIRV:     OpName %[[#r7:]] "r7"
-; CHECK-SPIRV:     OpName %[[#r1d:]] "r1"
-; CHECK-SPIRV:     OpName %[[#r2d:]] "r2"
-; CHECK-SPIRV:     OpName %[[#r3d:]] "r3"
-; CHECK-SPIRV:     OpName %[[#r4d:]] "r4"
-; CHECK-SPIRV:     OpName %[[#r5d:]] "r5"
-; CHECK-SPIRV:     OpName %[[#r6d:]] "r6"
-; CHECK-SPIRV:     OpName %[[#r7d:]] "r7"
-; CHECK-SPIRV-NOT: OpDecorate %[[#r1]] FPFastMathMode
-; CHECK-SPIRV-DAG: OpDecorate %[[#r2]] FPFastMathMode NotNaN
-; CHECK-SPIRV-DAG: OpDecorate %[[#r3]] FPFastMathMode NotInf
-; CHECK-SPIRV-DAG: OpDecorate %[[#r4]] FPFastMathMode NSZ
-; CHECK-SPIRV-DAG: OpDecorate %[[#r5]] FPFastMathMode AllowRecip
-; CHECK-SPIRV-DAG: OpDecorate %[[#r6]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|Fast
-; CHECK-SPIRV-DAG: OpDecorate %[[#r7]] FPFastMathMode NotNaN|NotInf
+; CHECK-SPIRV-NOT: OpDecorate %[[#]] FPFastMathMode
+; CHECK-SPIRV-DAG: OpDecorate %[[#r2:]] FPFastMathMode NotNaN
+; CHECK-SPIRV-DAG: OpDecorate %[[#r3:]] FPFastMathMode NotInf
+; CHECK-SPIRV-DAG: OpDecorate %[[#r4:]] FPFastMathMode NSZ
+; CHECK-SPIRV-DAG: OpDecorate %[[#r5:]] FPFastMathMode AllowRecip
+; CHECK-SPIRV-DAG: OpDecorate %[[#r6:]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|Fast
+; CHECK-SPIRV-DAG: OpDecorate %[[#r7:]] FPFastMathMode NotNaN|NotInf
 ; CHECK-SPIRV-DAG: %[[#float:]] = OpTypeFloat 32
 ; CHECK-SPIRV-DAG: %[[#double:]] = OpTypeFloat 64
 
-; CHECK-SPIRV:     %[[#r1]] = OpFAdd %[[#float]]
+; CHECK-SPIRV:     %[[#r1:]] = OpFAdd %[[#float]]
 ; CHECK-SPIRV:     %[[#r2]] = OpFAdd %[[#float]]
 ; CHECK-SPIRV:     %[[#r3]] = OpFAdd %[[#float]]
 ; CHECK-SPIRV:     %[[#r4]] = OpFAdd %[[#float]]
 ; CHECK-SPIRV:     %[[#r5]] = OpFAdd %[[#float]]
 ; CHECK-SPIRV:     %[[#r6]] = OpFAdd %[[#float]]
 ; CHECK-SPIRV:     %[[#r7]] = OpFAdd %[[#float]]
-define spir_kernel void @testFAdd_float(float %a, float %b) {
+define spir_kernel void @testFAdd_float(float %a, float %b, ptr addrspace(1) %out) {
 entry:
   %r1 = fadd float %a, %b
+  store volatile float %r1, float addrspace(1)* %out
   %r2 = fadd nnan float %a, %b
+  store volatile float %r2, float addrspace(1)* %out
   %r3 = fadd ninf float %a, %b
+  store volatile float %r3, float addrspace(1)* %out
   %r4 = fadd nsz float %a, %b
+  store volatile float %r4, float addrspace(1)* %out
   %r5 = fadd arcp float %a, %b
+  store volatile float %r5, float addrspace(1)* %out
   %r6 = fadd fast float %a, %b
+  store volatile float %r6, float addrspace(1)* %out
   %r7 = fadd nnan ninf float %a, %b
+  store volatile float %r7, float addrspace(1)* %out
   ret void
 }
 
-; CHECK-SPIRV:     %[[#r1d]] = OpFAdd %[[#double]]
-; CHECK-SPIRV:     %[[#r2d]] = OpFAdd %[[#double]]
-; CHECK-SPIRV:     %[[#r3d]] = OpFAdd %[[#double]]
-; CHECK-SPIRV:     %[[#r4d]] = OpFAdd %[[#double]]
-; CHECK-SPIRV:     %[[#r5d]] = OpFAdd %[[#double]]
-; CHECK-SPIRV:     %[[#r6d]] = OpFAdd %[[#double]]
-; CHECK-SPIRV:     %[[#r7d]] = OpFAdd %[[#double]]
-define spir_kernel void @testFAdd_double(double %a, double %b) {
+; CHECK-SPIRV:     %[[#]] = OpFAdd %[[#double]]
+; CHECK-SPIRV:     %[[#]] = OpFAdd %[[#double]]
+; CHECK-SPIRV:     %[[#]] = OpFAdd %[[#double]]
+; CHECK-SPIRV:     %[[#]] = OpFAdd %[[#double]]
+; CHECK-SPIRV:     %[[#]] = OpFAdd %[[#double]]
+; CHECK-SPIRV:     %[[#]] = OpFAdd %[[#double]]
+; CHECK-SPIRV:     %[[#]] = OpFAdd %[[#double]]
+
+define spir_kernel void @testFAdd_double(double %a, double %b, double addrspace(1)* %out) local_unnamed_addr {
 entry:
-  %r1 = fadd double %a, %b
-  %r2 = fadd nnan double %a, %b
-  %r3 = fadd ninf double %a, %b
-  %r4 = fadd nsz double %a, %b
-  %r5 = fadd arcp double %a, %b
-  %r6 = fadd fast double %a, %b
-  %r7 = fadd nnan ninf double %a, %b
+  %r11 = fadd double %a, %b
+  store volatile double %r11, double addrspace(1)* %out
+  %r12 = fadd nnan double %a, %b
+  store volatile double %r12, double addrspace(1)* %out
+  %r13 = fadd ninf double %a, %b
+  store volatile double %r13, double addrspace(1)* %out
+  %r14 = fadd nsz double %a, %b
+  store volatile double %r14, double addrspace(1)* %out
+  %r15 = fadd arcp double %a, %b
+  store volatile double %r15, double addrspace(1)* %out
+  %r16 = fadd fast double %a, %b
+  store volatile double %r16, double addrspace(1)* %out
+  %r17 = fadd nnan ninf double %a, %b
+  store volatile double %r17, double addrspace(1)* %out
   ret void
 }
diff --git a/llvm/test/CodeGen/SPIRV/transcoding/fcmp.ll b/llvm/test/CodeGen/SPIRV/transcoding/fcmp.ll
index c752e278927a9..1423b970b1e08 100644
--- a/llvm/test/CodeGen/SPIRV/transcoding/fcmp.ll
+++ b/llvm/test/CodeGen/SPIRV/transcoding/fcmp.ll
@@ -1,188 +1,98 @@
 ; RUN: llc -O0 -mtriple=spirv32-unknown-unknown %s -o - | FileCheck %s --check-prefix=CHECK-SPIRV
 ; RUN: %if spirv-tools %{ llc -O0 -mtriple=spirv32-unknown-unknown %s -o - -filetype=obj | spirv-val %}
 
-; CHECK-SPIRV: OpName %[[#r1:]] "r1"
-; CHECK-SPIRV: OpName %[[#r2:]] "r2"
-; CHECK-SPIRV: OpName %[[#r3:]] "r3"
-; CHECK-SPIRV: OpName %[[#r4:]] "r4"
-; CHECK-SPIRV: OpName %[[#r5:]] "r5"
-; CHECK-SPIRV: OpName %[[#r6:]] "r6"
-; CHECK-SPIRV: OpName %[[#r7:]] "r7"
-; CHECK-SPIRV: OpName %[[#r8:]] "r8"
-; CHECK-SPIRV: OpName %[[#r9:]] "r9"
-; CHECK-SPIRV: OpName %[[#r10:]] "r10"
-; CHECK-SPIRV: OpName %[[#r11:]] "r11"
-; CHECK-SPIRV: OpName %[[#r12:]] "r12"
-; CHECK-SPIRV: OpName %[[#r13:]] "r13"
-; CHECK-SPIRV: OpName %[[#r14:]] "r14"
-; CHECK-SPIRV: OpName %[[#r15:]] "r15"
-; CHECK-SPIRV: OpName %[[#r16:]] "r16"
-; CHECK-SPIRV: OpName %[[#r17:]] "r17"
-; CHECK-SPIRV: OpName %[[#r18:]] "r18"
-; CHECK-SPIRV: OpName %[[#r19:]] "r19"
-; CHECK-SPIRV: OpName %[[#r20:]] "r20"
-; CHECK-SPIRV: OpName %[[#r21:]] "r21"
-; CHECK-SPIRV: OpName %[[#r22:]] "r22"
-; CHECK-SPIRV: OpName %[[#r23:]] "r23"
-; CHECK-SPIRV: OpName %[[#r24:]] "r24"
-; CHECK-SPIRV: OpName %[[#r25:]] "r25"
-; CHECK-SPIRV: OpName %[[#r26:]] "r26"
-; CHECK-SPIRV: OpName %[[#r27:]] "r27"
-; CHECK-SPIRV: OpName %[[#r28:]] "r28"
-; CHECK-SPIRV: OpName %[[#r29:]] "r29"
-; CHECK-SPIRV: OpName %[[#r30:]] "r30"
-; CHECK-SPIRV: OpName %[[#r31:]] "r31"
-; CHECK-SPIRV: OpName %[[#r32:]] "r32"
-; CHECK-SPIRV: OpName %[[#r33:]] "r33"
-; CHECK-SPIRV: OpName %[[#r34:]] "r34"
-; CHECK-SPIRV: OpName %[[#r35:]] "r35"
-; CHECK-SPIRV: OpName %[[#r36:]] "r36"
-; CHECK-SPIRV: OpName %[[#r37:]] "r37"
-; CHECK-SPIRV: OpName %[[#r38:]] "r38"
-; CHECK-SPIRV: OpName %[[#r39:]] "r39"
-; CHECK-SPIRV: OpName %[[#r40:]] "r40"
-; CHECK-SPIRV: OpName %[[#r41:]] "r41"
-; CHECK-SPIRV: OpName %[[#r42:]] "r42"
-; CHECK-SPIRV: OpName %[[#r43:]] "r43"
-; CHECK-SPIRV: OpName %[[#r44:]] "r44"
-; CHECK-SPIRV: OpName %[[#r45:]] "r45"
-; CHECK-SPIRV: OpName %[[#r46:]] "r46"
-; CHECK-SPIRV: OpName %[[#r47:]] "r47"
-; CHECK-SPIRV: OpName %[[#r48:]] "r48"
-; CHECK-SPIRV: OpName %[[#r49:]] "r49"
-; CHECK-SPIRV: OpName %[[#r50:]] "r50"
-; CHECK-SPIRV: OpName %[[#r51:]] "r51"
-; CHECK-SPIRV: OpName %[[#r52:]] "r52"
-; CHECK-SPIRV: OpName %[[#r53:]] "r53"
-; CHECK-SPIRV: OpName %[[#r54:]] "r54"
-; CHECK-SPIRV: OpName %[[#r55:]] "r55"
-; CHECK-SPIRV: OpName %[[#r56:]] "r56"
-; CHECK-SPIRV: OpName %[[#r57:]] "r57"
-; CHECK-SPIRV: OpName %[[#r58:]] "r58"
-; CHECK-SPIRV: OpName %[[#r59:]] "r59"
-; CHECK-SPIRV: OpName %[[#r60:]] "r60"
-; CHECK-SPIRV: OpName %[[#r61:]] "r61"
-; CHECK-SPIRV: OpName %[[#r62:]] "r62"
-; CHECK-SPIRV: OpName %[[#r63:]] "r63"
-; CHECK-SPIRV: OpName %[[#r64:]] "r64"
-; CHECK-SPIRV: OpName %[[#r65:]] "r65"
-; CHECK-SPIRV: OpName %[[#r66:]] "r66"
-; CHECK-SPIRV: OpName %[[#r67:]] "r67"
-; CHECK-SPIRV: OpName %[[#r68:]] "r68"
-; CHECK-SPIRV: OpName %[[#r69:]] "r69"
-; CHECK-SPIRV: OpName %[[#r70:]] "r70"
-; CHECK-SPIRV: OpName %[[#r71:]] "r71"
-; CHECK-SPIRV: OpName %[[#r72:]] "r72"
-; CHECK-SPIRV: OpName %[[#r73:]] "r73"
-; CHECK-SPIRV: OpName %[[#r74:]] "r74"
-; CHECK-SPIRV: OpName %[[#r75:]] "r75"
-; CHECK-SPIRV: OpName %[[#r76:]] "r76"
-; CHECK-SPIRV: OpName %[[#r77:]] "r77"
-; CHECK-SPIRV: OpName %[[#r78:]] "r78"
-; CHECK-SPIRV: OpName %[[#r79:]] "r79"
-; CHECK-SPIRV: OpName %[[#r80:]] "r80"
-; CHECK-SPIRV: OpName %[[#r81:]] "r81"
-; CHECK-SPIRV: OpName %[[#r82:]] "r82"
-; CHECK-SPIRV: OpName %[[#r83:]] "r83"
-; CHECK-SPIRV: OpName %[[#r84:]] "r84"
-; CHECK-SPIRV: OpName %[[#r85:]] "r85"
-; CHECK-SPIRV: OpName %[[#r86:]] "r86"
-; CHECK-SPIRV: OpName %[[#r87:]] "r87"
-; CHECK-SPIRV: OpName %[[#r88:]] "r88"
-; CHECK-SPIRV: OpName %[[#r89:]] "r89"
-; CHECK-SPIRV: OpName %[[#r90:]] "r90"
 ; CHECK-SPIRV-NOT: OpDecorate %{{.*}} FPFastMathMode
 ; CHECK-SPIRV: %[[#bool:]] = OpTypeBool
-; CHECK-SPIRV: %[[#r1]] = OpFOrdEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r2]] = OpFOrdEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r3]] = OpFOrdEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r4]] = OpFOrdEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r5]] = OpFOrdEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r6]] = OpFOrdEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r7]] = OpFOrdEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r8]] = OpFOrdNotEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r9]] = OpFOrdNotEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r10]] = OpFOrdNotEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r11]] = OpFOrdNotEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r12]] = OpFOrdNotEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r13]] = OpFOrdNotEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r14]] = OpFOrdNotEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r15]] = OpFOrdLessThan %[[#bool]]
-; CHECK-SPIRV: %[[#r16]] = OpFOrdLessThan %[[#bool]]
-; CHECK-SPIRV: %[[#r17]] = OpFOrdLessThan %[[#bool]]
-; CHECK-SPIRV: %[[#r18]] = OpFOrdLessThan %[[#bool]]
-; CHECK-SPIRV: %[[#r19]] = OpFOrdLessThan %[[#bool]]
-; CHECK-SPIRV: %[[#r20]] = OpFOrdLessThan %[[#bool]]
-; CHECK-SPIRV: %[[#r21]] = OpFOrdLessThan %[[#bool]]
-; CHECK-SPIRV: %[[#r22]] = OpFOrdGreaterThan %[[#bool]]
-; CHECK-SPIRV: %[[#r23]] = OpFOrdGreaterThan %[[#bool]]
-; CHECK-SPIRV: %[[#r24]] = OpFOrdGreaterThan %[[#bool]]
-; CHECK-SPIRV: %[[#r25]] = OpFOrdGreaterThan %[[#bool]]
-; CHECK-SPIRV: %[[#r26]] = OpFOrdGreaterThan %[[#bool]]
-; CHECK-SPIRV: %[[#r27]] = OpFOrdGreaterThan %[[#bool]]
-; CHECK-SPIRV: %[[#r28]] = OpFOrdGreaterThan %[[#bool]]
-; CHECK-SPIRV: %[[#r29]] = OpFOrdLessThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r30]] = OpFOrdLessThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r31]] = OpFOrdLessThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r32]] = OpFOrdLessThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r33]] = OpFOrdLessThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r34]] = OpFOrdLessThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r35]] = OpFOrdLessThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r36]] = OpFOrdGreaterThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r37]] = OpFOrdGreaterThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r38]] = OpFOrdGreaterThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r39]] = OpFOrdGreaterThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r40]] = OpFOrdGreaterThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r41]] = OpFOrdGreaterThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r42]] = OpFOrdGreaterThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r43]] = OpOrdered %[[#bool]]
-; CHECK-SPIRV: %[[#r44]] = OpOrdered %[[#bool]]
-; CHECK-SPIRV: %[[#r45]] = OpOrdered %[[#bool]]
-; CHECK-SPIRV: %[[#r46]] = OpFUnordEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r47]] = OpFUnordEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r48]] = OpFUnordEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r49]] = OpFUnordEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r50]] = OpFUnordEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r51]] = OpFUnordEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r52]] = OpFUnordEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r53]] = OpFUnordNotEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r54]] = OpFUnordNotEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r55]] = OpFUnordNotEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r56]] = OpFUnordNotEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r57]] = OpFUnordNotEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r58]] = OpFUnordNotEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r59]] = OpFUnordNotEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r60]] = OpFUnordLessThan %[[#bool]]
-; CHECK-SPIRV: %[[#r61]] = OpFUnordLessThan %[[#bool]]
-; CHECK-SPIRV: %[[#r62]] = OpFUnordLessThan %[[#bool]]
-; CHECK-SPIRV: %[[#r63]] = OpFUnordLessThan %[[#bool]]
-; CHECK-SPIRV: %[[#r64]] = OpFUnordLessThan %[[#bool]]
-; CHECK-SPIRV: %[[#r65]] = OpFUnordLessThan %[[#bool]]
-; CHECK-SPIRV: %[[#r66]] = OpFUnordLessThan %[[#bool]]
-; CHECK-SPIRV: %[[#r67]] = OpFUnordGreaterThan %[[#bool]]
-; CHECK-SPIRV: %[[#r68]] = OpFUnordGreaterThan %[[#bool]]
-; CHECK-SPIRV: %[[#r69]] = OpFUnordGreaterThan %[[#bool]]
-; CHECK-SPIRV: %[[#r70]] = OpFUnordGreaterThan %[[#bool]]
-; CHECK-SPIRV: %[[#r71]] = OpFUnordGreaterThan %[[#bool]]
-; CHECK-SPIRV: %[[#r72]] = OpFUnordGreaterThan %[[#bool]]
-; CHECK-SPIRV: %[[#r73]] = OpFUnordGreaterThan %[[#bool]]
-; CHECK-SPIRV: %[[#r74]] = OpFUnordLessThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r75]] = OpFUnordLessThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r76]] = OpFUnordLessThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r77]] = OpFUnordLessThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r78]] = OpFUnordLessThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r79]] = OpFUnordLessThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r80]] = OpFUnordLessThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r81]] = OpFUnordGreaterThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r82]] = OpFUnordGreaterThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r83]] = OpFUnordGreaterThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r84]] = OpFUnordGreaterThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r85]] = OpFUnordGreaterThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r86]] = OpFUnordGreaterThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r87]] = OpFUnordGreaterThanEqual %[[#bool]]
-; CHECK-SPIRV: %[[#r88]] = OpUnordered %[[#bool]]
-; CHECK-SPIRV: %[[#r89]] = OpUnordered %[[#bool]]
-; CHECK-SPIRV: %[[#r90]] = OpUnordered %[[#bool]]
+; CHECK-SPIRV: %[[#r1:]] = OpFOrdEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r2:]] = OpFOrdEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r3:]] = OpFOrdEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r4:]] = OpFOrdEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r5:]] = OpFOrdEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r6:]] = OpFOrdEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r7:]] = OpFOrdEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r8:]] = OpFOrdNotEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r9:]] = OpFOrdNotEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r10:]] = OpFOrdNotEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r11:]] = OpFOrdNotEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r12:]] = OpFOrdNotEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r13:]] = OpFOrdNotEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r14:]] = OpFOrdNotEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r15:]] = OpFOrdLessThan %[[#bool]]
+; CHECK-SPIRV: %[[#r16:]] = OpFOrdLessThan %[[#bool]]
+; CHECK-SPIRV: %[[#r17:]] = OpFOrdLessThan %[[#bool]]
+; CHECK-SPIRV: %[[#r18:]] = OpFOrdLessThan %[[#bool]]
+; CHECK-SPIRV: %[[#r19:]] = OpFOrdLessThan %[[#bool]]
+; CHECK-SPIRV: %[[#r20:]] = OpFOrdLessThan %[[#bool]]
+; CHECK-SPIRV: %[[#r21:]] = OpFOrdLessThan %[[#bool]]
+; CHECK-SPIRV: %[[#r22:]] = OpFOrdGreaterThan %[[#bool]]
+; CHECK-SPIRV: %[[#r23:]] = OpFOrdGreaterThan %[[#bool]]
+; CHECK-SPIRV: %[[#r24:]] = OpFOrdGreaterThan %[[#bool]]
+; CHECK-SPIRV: %[[#r25:]] = OpFOrdGreaterThan %[[#bool]]
+; CHECK-SPIRV: %[[#r26:]] = OpFOrdGreaterThan %[[#bool]]
+; CHECK-SPIRV: %[[#r27:]] = OpFOrdGreaterThan %[[#bool]]
+; CHECK-SPIRV: %[[#r28:]] = OpFOrdGreaterThan %[[#bool]]
+; CHECK-SPIRV: %[[#r29:]] = OpFOrdLessThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r30:]] = OpFOrdLessThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r31:]] = OpFOrdLessThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r32:]] = OpFOrdLessThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r33:]] = OpFOrdLessThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r34:]] = OpFOrdLessThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r35:]] = OpFOrdLessThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r36:]] = OpFOrdGreaterThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r37:]] = OpFOrdGreaterThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r38:]] = OpFOrdGreaterThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r39:]] = OpFOrdGreaterThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r40:]] = OpFOrdGreaterThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r41:]] = OpFOrdGreaterThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r42:]] = OpFOrdGreaterThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r43:]] = OpOrdered %[[#bool]]
+; CHECK-SPIRV: %[[#r44:]] = OpOrdered %[[#bool]]
+; CHECK-SPIRV: %[[#r45:]] = OpOrdered %[[#bool]]
+; CHECK-SPIRV: %[[#r46:]] = OpFUnordEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r47:]] = OpFUnordEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r48:]] = OpFUnordEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r49:]] = OpFUnordEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r50:]] = OpFUnordEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r51:]] = OpFUnordEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r52:]] = OpFUnordEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r53:]] = OpFUnordNotEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r54:]] = OpFUnordNotEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r55:]] = OpFUnordNotEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r56:]] = OpFUnordNotEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r57:]] = OpFUnordNotEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r58:]] = OpFUnordNotEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r59:]] = OpFUnordNotEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r60:]] = OpFUnordLessThan %[[#bool]]
+; CHECK-SPIRV: %[[#r61:]] = OpFUnordLessThan %[[#bool]]
+; CHECK-SPIRV: %[[#r62:]] = OpFUnordLessThan %[[#bool]]
+; CHECK-SPIRV: %[[#r63:]] = OpFUnordLessThan %[[#bool]]
+; CHECK-SPIRV: %[[#r64:]] = OpFUnordLessThan %[[#bool]]
+; CHECK-SPIRV: %[[#r65:]] = OpFUnordLessThan %[[#bool]]
+; CHECK-SPIRV: %[[#r66:]] = OpFUnordLessThan %[[#bool]]
+; CHECK-SPIRV: %[[#r67:]] = OpFUnordGreaterThan %[[#bool]]
+; CHECK-SPIRV: %[[#r68:]] = OpFUnordGreaterThan %[[#bool]]
+; CHECK-SPIRV: %[[#r69:]] = OpFUnordGreaterThan %[[#bool]]
+; CHECK-SPIRV: %[[#r70:]] = OpFUnordGreaterThan %[[#bool]]
+; CHECK-SPIRV: %[[#r71:]] = OpFUnordGreaterThan %[[#bool]]
+; CHECK-SPIRV: %[[#r72:]] = OpFUnordGreaterThan %[[#bool]]
+; CHECK-SPIRV: %[[#r73:]] = OpFUnordGreaterThan %[[#bool]]
+; CHECK-SPIRV: %[[#r74:]] = OpFUnordLessThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r75:]] = OpFUnordLessThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r76:]] = OpFUnordLessThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r77:]] = OpFUnordLessThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r78:]] = OpFUnordLessThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r79:]] = OpFUnordLessThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r80:]] = OpFUnordLessThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r81:]] = OpFUnordGreaterThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r82:]] = OpFUnordGreaterThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r83:]] = OpFUnordGreaterThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r84:]] = OpFUnordGreaterThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r85:]] = OpFUnordGreaterThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r86:]] = OpFUnordGreaterThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r87:]] = OpFUnordGreaterThanEqual %[[#bool]]
+; CHECK-SPIRV: %[[#r88:]] = OpUnordered %[[#bool]]
+; CHECK-SPIRV: %[[#r89:]] = OpUnordered %[[#bool]]
+; CHECK-SPIRV: %[[#r90:]] = OpUnordered %[[#bool]]
 
 @G = global [90 x i1] zeroinitializer
 
diff --git a/llvm/test/CodeGen/SPIRV/transcoding/fdiv.ll b/llvm/test/CodeGen/SPIRV/transcoding/fdiv.ll
index 79b786814c716..1d04abd4d6508 100644
--- a/llvm/test/CodeGen/SPIRV/transcoding/fdiv.ll
+++ b/llvm/test/CodeGen/SPIRV/transcoding/fdiv.ll
@@ -1,22 +1,15 @@
 ; RUN: llc -O0 -mtriple=spirv32-unknown-unknown %s -o - | FileCheck %s --check-prefix=CHECK-SPIRV
 ; RUN: %if spirv-tools %{ llc -O0 -mtriple=spirv32-unknown-unknown %s -o - -filetype=obj | spirv-val %}
 
-; CHECK-SPIRV:     OpName %[[#r1:]] "r1"
-; CHECK-SPIRV:     OpName %[[#r2:]] "r2"
-; CHECK-SPIRV:     OpName %[[#r3:]] "r3"
-; CHECK-SPIRV:     OpName %[[#r4:]] "r4"
-; CHECK-SPIRV:     OpName %[[#r5:]] "r5"
-; CHECK-SPIRV:     OpName %[[#r6:]] "r6"
-; CHECK-SPIRV:     OpName %[[#r7:]] "r7"
-; CHECK-SPIRV-NOT: OpDecorate %[[#r1]] FPFastMathMode
-; CHECK-SPIRV-DAG: OpDecorate %[[#r2]] FPFastMathMode NotNaN
-; CHECK-SPIRV-DAG: OpDecorate %[[#r3]] FPFastMathMode NotInf
-; CHECK-SPIRV-DAG: OpDecorate %[[#r4]] FPFastMathMode NSZ
-; CHECK-SPIRV-DAG: OpDecorate %[[#r5]] FPFastMathMode AllowRecip
-; CHECK-SPIRV-DAG: OpDecorate %[[#r6]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|Fast
-; CHECK-SPIRV-DAG: OpDecorate %[[#r7]] FPFastMathMode NotNaN|NotInf
+; CHECK-SPIRV-NOT: OpDecorate %[[#]] FPFastMathMode
+; CHECK-SPIRV-DAG: OpDecorate %[[#r2:]] FPFastMathMode {{NotNaN(\||$)}}
+; CHECK-SPIRV-DAG: OpDecorate %[[#r3:]] FPFastMathMode {{NotInf(\||$)}}
+; CHECK-SPIRV-DAG: OpDecorate %[[#r4:]] FPFastMathMode {{NSZ(\||$)}}
+; CHECK-SPIRV-DAG: OpDecorate %[[#r5:]] FPFastMathMode {{AllowRecip(\||$)}}
+; CHECK-SPIRV-DAG: OpDecorate %[[#r6:]] FPFastMathMode {{NotNaN\|NotInf\|NSZ\|AllowRecip\|Fast(\||$)}}
+; CHECK-SPIRV-DAG: OpDecorate %[[#r7:]] FPFastMathMode {{NotNaN\|NotInf(\||$)}}
 ; CHECK-SPIRV:     %[[#float:]] = OpTypeFloat 32
-; CHECK-SPIRV:     %[[#r1]] = OpFDiv %[[#float]]
+; CHECK-SPIRV:     %[[#r1:]] = OpFDiv %[[#float]]
 ; CHECK-SPIRV:     %[[#r2]] = OpFDiv %[[#float]]
 ; CHECK-SPIRV:     %[[#r3]] = OpFDiv %[[#float]]
 ; CHECK-SPIRV:     %[[#r4]] = OpFDiv %[[#float]]
@@ -24,14 +17,21 @@
 ; CHECK-SPIRV:     %[[#r6]] = OpFDiv %[[#float]]
 ; CHECK-SPIRV:     %[[#r7]] = OpFDiv %[[#float]]
 
-define spir_kernel void @testFDiv(float %a, float %b) local_unnamed_addr {
+define spir_kernel void @testFDiv(float %a, float %b, float addrspace(1)* %out) local_unnamed_addr {
 entry:
   %r1 = fdiv float %a, %b
+  store volatile float %r1, float addrspace(1)* %out
   %r2 = fdiv nnan float %a, %b
+  store volatile float %r2, float addrspace(1)* %out
   %r3 = fdiv ninf float %a, %b
+  store volatile float %r3, float addrspace(1)* %out
   %r4 = fdiv nsz float %a, %b
+  store volatile float %r4, float addrspace(1)* %out
   %r5 = fdiv arcp float %a, %b
+  store volatile float %r5, float addrspace(1)* %out
   %r6 = fdiv fast float %a, %b
+  store volatile float %r6, float addrspace(1)* %out
   %r7 = fdiv nnan ninf float %a, %b
+  store volatile float %r7, float addrspace(1)* %out
   ret void
 }
diff --git a/llvm/test/CodeGen/SPIRV/transcoding/fmul.ll b/llvm/test/CodeGen/SPIRV/transcoding/fmul.ll
index fdab29c9041cb..4745124802d96 100644
--- a/llvm/test/CodeGen/SPIRV/transcoding/fmul.ll
+++ b/llvm/test/CodeGen/SPIRV/transcoding/fmul.ll
@@ -1,22 +1,15 @@
 ; RUN: llc -O0 -mtriple=spirv32-unknown-unknown %s -o - | FileCheck %s --check-prefix=CHECK-SPIRV
 ; RUN: %if spirv-tools %{ llc -O0 -mtriple=spirv32-unknown-unknown %s -o - -filetype=obj | spirv-val %}
 
-; CHECK-SPIRV:     OpName %[[#r1:]] "r1"
-; CHECK-SPIRV:     OpName %[[#r2:]] "r2"
-; CHECK-SPIRV:     OpName %[[#r3:]] "r3"
-; CHECK-SPIRV:     OpName %[[#r4:]] "r4"
-; CHECK-SPIRV:     OpName %[[#r5:]] "r5"
-; CHECK-SPIRV:     OpName %[[#r6:]] "r6"
-; CHECK-SPIRV:     OpName %[[#r7:]] "r7"
-; CHECK-SPIRV-NOT: OpDecorate %[[#r1]] FPFastMathMode
-; CHECK-SPIRV-DAG: OpDecorate %[[#r2]] FPFastMathMode NotNaN
-; CHECK-SPIRV-DAG: OpDecorate %[[#r3]] FPFastMathMode NotInf
-; CHECK-SPIRV-DAG: OpDecorate %[[#r4]] FPFastMathMode NSZ
-; CHECK-SPIRV-DAG: OpDecorate %[[#r5]] FPFastMathMode AllowRecip
-; CHECK-SPIRV-DAG: OpDecorate %[[#r6]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|Fast
-; CHECK-SPIRV-DAG: OpDecorate %[[#r7]] FPFastMathMode NotNaN|NotInf
+; CHECK-SPIRV-NOT: OpDecorate %[[#]] FPFastMathMode
+; CHECK-SPIRV-DAG: OpDecorate %[[#r2:]] FPFastMathMode {{NotNaN(\||$)}}
+; CHECK-SPIRV-DAG: OpDecorate %[[#r3:]] FPFastMathMode {{NotInf(\||$)}}
+; CHECK-SPIRV-DAG: OpDecorate %[[#r4:]] FPFastMathMode {{NSZ(\||$)}}
+; CHECK-SPIRV-DAG: OpDecorate %[[#r5:]] FPFastMathMode {{AllowRecip(\||$)}}
+; CHECK-SPIRV-DAG: OpDecorate %[[#r6:]] FPFastMathMode {{NotNaN\|NotInf\|NSZ\|AllowRecip\|Fast(\||$)}}
+; CHECK-SPIRV-DAG: OpDecorate %[[#r7:]] FPFastMathMode {{NotNaN\|NotInf(\||$)}}
 ; CHECK-SPIRV:     %[[#float:]] = OpTypeFloat 32
-; CHECK-SPIRV:     %[[#r1]] = OpFMul %[[#float]]
+; CHECK-SPIRV:     %[[#r1:]] = OpFMul %[[#float]]
 ; CHECK-SPIRV:     %[[#r2]] = OpFMul %[[#float]]
 ; CHECK-SPIRV:     %[[#r3]] = OpFMul %[[#float]]
 ; CHECK-SPIRV:     %[[#r4]] = OpFMul %[[#float]]
@@ -24,14 +17,21 @@
 ; CHECK-SPIRV:     %[[#r6]] = OpFMul %[[#float]]
 ; CHECK-SPIRV:     %[[#r7]] = OpFMul %[[#float]]
 
-define spir_kernel void @testFMul(float %a, float %b) local_unnamed_addr {
+define spir_kernel void @testFMul(float %a, float %b, float addrspace(1)* %out) local_unnamed_addr {
 entry:
   %r1 = fmul float %a, %b
+  store volatile float %r1, float addrspace(1)* %out
   %r2 = fmul nnan float %a, %b
+  store volatile float %r2, float addrspace(1)* %out
   %r3 = fmul ninf float %a, %b
+  store volatile float %r3, float addrspace(1)* %out
   %r4 = fmul nsz float %a, %b
+  store volatile float %r4, float addrspace(1)* %out
   %r5 = fmul arcp float %a, %b
+  store volatile float %r5, float addrspace(1)* %out
   %r6 = fmul fast float %a, %b
+  store volatile float %r6, float addrspace(1)* %out
   %r7 = fmul nnan ninf float %a, %b
+  store volatile float %r7, float addrspace(1)* %out
   ret void
 }
diff --git a/llvm/test/CodeGen/SPIRV/transcoding/fneg.ll b/llvm/test/CodeGen/SPIRV/transcoding/fneg.ll
index 60bbfe6b7f393..21947517694f2 100644
--- a/llvm/test/CodeGen/SPIRV/transcoding/fneg.ll
+++ b/llvm/test/CodeGen/SPIRV/transcoding/fneg.ll
@@ -1,31 +1,31 @@
 ; RUN: llc -O0 -mtriple=spirv32-unknown-unknown %s -o - | FileCheck %s --check-prefix=CHECK-SPIRV
 ; RUN: %if spirv-tools %{ llc -O0 -mtriple=spirv32-unknown-unknown %s -o - -filetype=obj | spirv-val %}
 
-; CHECK-SPIRV: OpName %[[#r1:]] "r1"
-; CHECK-SPIRV: OpName %[[#r2:]] "r2"
-; CHECK-SPIRV: OpName %[[#r3:]] "r3"
-; CHECK-SPIRV: OpName %[[#r4:]] "r4"
-; CHECK-SPIRV: OpName %[[#r5:]] "r5"
-; CHECK-SPIRV: OpName %[[#r6:]] "r6"
-; CHECK-SPIRV: OpName %[[#r7:]] "r7"
 ; CHECK-SPIRV-NOT: OpDecorate %{{.*}} FPFastMathMode
 ; CHECK-SPIRV: %[[#float:]] = OpTypeFloat 32
-; CHECK-SPIRV: %[[#r1]] = OpFNegate %[[#float]]
-; CHECK-SPIRV: %[[#r2]] = OpFNegate %[[#float]]
-; CHECK-SPIRV: %[[#r3]] = OpFNegate %[[#float]]
-; CHECK-SPIRV: %[[#r4]] = OpFNegate %[[#float]]
-; CHECK-SPIRV: %[[#r5]] = OpFNegate %[[#float]]
-; CHECK-SPIRV: %[[#r6]] = OpFNegate %[[#float]]
-; CHECK-SPIRV: %[[#r7]] = OpFNegate %[[#float]]
+; CHECK-SPIRV: %[[#r1:]] = OpFNegate %[[#float]]
+; CHECK-SPIRV: %[[#r2:]] = OpFNegate %[[#float]]
+; CHECK-SPIRV: %[[#r3:]] = OpFNegate %[[#float]]
+; CHECK-SPIRV: %[[#r4:]] = OpFNegate %[[#float]]
+; CHECK-SPIRV: %[[#r5:]] = OpFNegate %[[#float]]
+; CHECK-SPIRV: %[[#r6:]] = OpFNegate %[[#float]]
+; CHECK-SPIRV: %[[#r7:]] = OpFNegate %[[#float]]
 
-define spir_kernel void @testFNeg(float %a) local_unnamed_addr {
+define spir_kernel void @testFNeg(float %a, float addrspace(1)* %out) local_unnamed_addr {
 entry:
   %r1 = fneg float %a
+  store volatile float %r1, float addrspace(1)* %out
   %r2 = fneg nnan float %a
+  store volatile float %r2, float addrspace(1)* %out
   %r3 = fneg ninf float %a
+  store volatile float %r3, float addrspace(1)* %out
   %r4 = fneg nsz float %a
+  store volatile float %r4, float addrspace(1)* %out
   %r5 = fneg arcp float %a
+  store volatile float %r5, float addrspace(1)* %out
   %r6 = fneg fast float %a
+  store volatile float %r6, float addrspace(1)* %out
   %r7 = fneg nnan ninf float %a
+  store volatile float %r7, float addrspace(1)* %out
   ret void
 }
diff --git a/llvm/test/CodeGen/SPIRV/transcoding/fp_contract_reassoc_fast_mode.ll b/llvm/test/CodeGen/SPIRV/transcoding/fp_contract_reassoc_fast_mode.ll
index 974043c11991f..307fc1e49ecbc 100644
--- a/llvm/test/CodeGen/SPIRV/transcoding/fp_contract_reassoc_fast_mode.ll
+++ b/llvm/test/CodeGen/SPIRV/transcoding/fp_contract_reassoc_fast_mode.ll
@@ -2,10 +2,8 @@
 ; RUN: %if spirv-tools %{ llc -O0 -mtriple=spirv32-unknown-unknown %s -o - -filetype=obj | spirv-val %}
 
 ; CHECK-SPIRV-NOT: OpCapability FPFastMathModeINTEL
-; CHECK-SPIRV:     OpName %[[#mu:]] "mul"
-; CHECK-SPIRV:     OpName %[[#su:]] "sub"
-; CHECK-SPIRV-NOT: OpDecorate %[[#mu]] FPFastMathMode AllowContractFastINTEL
-; CHECK-SPIRV-NOT: OpDecorate %[[#su]] FPFastMathMode AllowReassocINTEL
+; CHECK-SPIRV-NOT: OpDecorate %[[#]] FPFastMathMode AllowContractFastINTEL
+; CHECK-SPIRV-NOT: OpDecorate %[[#]] FPFastMathMode AllowReassocINTEL
 
 define spir_kernel void @test(float %a, float %b) {
 entry:
diff --git a/llvm/test/CodeGen/SPIRV/transcoding/frem.ll b/llvm/test/CodeGen/SPIRV/transcoding/frem.ll
index d36ba7f70e453..f07a3a2d6f075 100644
--- a/llvm/test/CodeGen/SPIRV/transcoding/frem.ll
+++ b/llvm/test/CodeGen/SPIRV/transcoding/frem.ll
@@ -1,22 +1,15 @@
 ; RUN: llc -O0 -mtriple=spirv32-unknown-unknown %s -o - | FileCheck %s --check-prefix=CHECK-SPIRV
 ; RUN: %if spirv-tools %{ llc -O0 -mtriple=spirv32-unknown-unknown %s -o - -filetype=obj | spirv-val %}
 
-; CHECK-SPIRV:     OpName %[[#r1:]] "r1"
-; CHECK-SPIRV:     OpName %[[#r2:]] "r2"
-; CHECK-SPIRV:     OpName %[[#r3:]] "r3"
-; CHECK-SPIRV:     OpName %[[#r4:]] "r4"
-; CHECK-SPIRV:     OpName %[[#r5:]] "r5"
-; CHECK-SPIRV:     OpName %[[#r6:]] "r6"
-; CHECK-SPIRV:     OpName %[[#r7:]] "r7"
-; CHECK-SPIRV-NOT: OpDecorate %[[#r1]] FPFastMathMode
-; CHECK-SPIRV-DAG: OpDecorate %[[#r2]] FPFastMathMode NotNaN
-; CHECK-SPIRV-DAG: OpDecorate %[[#r3]] FPFastMathMode NotInf
-; CHECK-SPIRV-DAG: OpDecorate %[[#r4]] FPFastMathMode NSZ
-; CHECK-SPIRV-DAG: OpDecorate %[[#r5]] FPFastMathMode AllowRecip
-; CHECK-SPIRV-DAG: OpDecorate %[[#r6]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|Fast
-; CHECK-SPIRV-DAG: OpDecorate %[[#r7]] FPFastMathMode NotNaN|NotInf
+; CHECK-SPIRV-NOT: OpDecorate %[[#]] FPFastMathMode
+; CHECK-SPIRV-DAG: OpDecorate %[[#r2:]] FPFastMathMode NotNaN{{$}}
+; CHECK-SPIRV-DAG: OpDecorate %[[#r3:]] FPFastMathMode NotInf{{$}}
+; CHECK-SPIRV-DAG: OpDecorate %[[#r4:]] FPFastMathMode NSZ{{$}}
+; CHECK-SPIRV-DAG: OpDecorate %[[#r5:]] FPFastMathMode AllowRecip{{$}}
+; CHECK-SPIRV-DAG: OpDecorate %[[#r6:]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|Fast{{$}}
+; CHECK-SPIRV-DAG: OpDecorate %[[#r7:]] FPFastMathMode NotNaN|NotInf{{$}}
 ; CHECK-SPIRV:     %[[#float:]] = OpTypeFloat 32
-; CHECK-SPIRV:     %[[#r1]] = OpFRem %[[#float]]
+; CHECK-SPIRV:     %[[#r1:]] = OpFRem %[[#float]]
 ; CHECK-SPIRV:     %[[#r2]] = OpFRem %[[#float]]
 ; CHECK-SPIRV:     %[[#r3]] = OpFRem %[[#float]]
 ; CHECK-SPIRV:     %[[#r4]] = OpFRem %[[#float]]
@@ -24,14 +17,21 @@
 ; CHECK-SPIRV:     %[[#r6]] = OpFRem %[[#float]]
 ; CHECK-SPIRV:     %[[#r7]] = OpFRem %[[#float]]
 
-define spir_kernel void @testFRem(float %a, float %b) local_unnamed_addr {
+define spir_kernel void @testFRem(float %a, float %b, float addrspace(1)* %out) local_unnamed_addr {
 entry:
   %r1 = frem float %a, %b
+  store volatile float %r1, float addrspace(1)* %out
   %r2 = frem nnan float %a, %b
+  store volatile float %r2, float addrspace(1)* %out
   %r3 = frem ninf float %a, %b
+  store volatile float %r3, float addrspace(1)* %out
   %r4 = frem nsz float %a, %b
+  store volatile float %r4, float addrspace(1)* %out
   %r5 = frem arcp float %a, %b
+  store volatile float %r5, float addrspace(1)* %out
   %r6 = frem fast float %a, %b
+  store volatile float %r6, float addrspace(1)* %out
   %r7 = frem nnan ninf float %a, %b
+  store volatile float %r7, float addrspace(1)* %out
   ret void
 }
diff --git a/llvm/test/CodeGen/SPIRV/transcoding/fsub.ll b/llvm/test/CodeGen/SPIRV/transcoding/fsub.ll
index 3677c00405626..3f980b1646142 100644
--- a/llvm/test/CodeGen/SPIRV/transcoding/fsub.ll
+++ b/llvm/test/CodeGen/SPIRV/transcoding/fsub.ll
@@ -1,22 +1,16 @@
 ; RUN: llc -O0 -mtriple=spirv32-unknown-unknown %s -o - | FileCheck %s --check-prefix=CHECK-SPIRV
 ; RUN: %if spirv-tools %{ llc -O0 -mtriple=spirv32-unknown-unknown %s -o - -filetype=obj | spirv-val %}
 
-; CHECK-SPIRV:     OpName %[[#r1:]] "r1"
-; CHECK-SPIRV:     OpName %[[#r2:]] "r2"
-; CHECK-SPIRV:     OpName %[[#r3:]] "r3"
-; CHECK-SPIRV:     OpName %[[#r4:]] "r4"
-; CHECK-SPIRV:     OpName %[[#r5:]] "r5"
-; CHECK-SPIRV:     OpName %[[#r6:]] "r6"
-; CHECK-SPIRV:     OpName %[[#r7:]] "r7"
-; CHECK-SPIRV-NOT: OpDecorate %[[#r1]] FPFastMathMode
-; CHECK-SPIRV-DAG: OpDecorate %[[#r2]] FPFastMathMode NotNaN
-; CHECK-SPIRV-DAG: OpDecorate %[[#r3]] FPFastMathMode NotInf
-; CHECK-SPIRV-DAG: OpDecorate %[[#r4]] FPFastMathMode NSZ
-; CHECK-SPIRV-DAG: OpDecorate %[[#r5]] FPFastMathMode AllowRecip
-; CHECK-SPIRV-DAG: OpDecorate %[[#r6]] FPFastMathMode NotNaN|NotInf|NSZ|AllowRecip|Fast
-; CHECK-SPIRV-DAG: OpDecorate %[[#r7]] FPFastMathMode NotNaN|NotInf
+; CHECK-SPIRV-NOT: OpDecorate {{.*}} FPFastMathMode
+; CHECK-SPIRV-DAG: OpDecorate %[[#r2:]] FPFastMathMode {{NotNaN(\||$)}}
+; CHECK-SPIRV-DAG: OpDecorate %[[#r3:]] FPFastMathMode {{NotInf(\||$)}}
+; CHECK-SPIRV-DAG: OpDecorate %[[#r4:]] FPFastMathMode {{NSZ(\||$)}}
+; CHECK-SPIRV-DAG: OpDecorate %[[#r5:]] FPFastMathMode {{AllowRecip(\||$)}}
+; CHECK-SPIRV-DAG: OpDecorate %[[#r6:]] FPFastMathMode {{NotNaN\|NotInf\|NSZ\|AllowRecip\|Fast(\||$)}}
+; CHECK-SPIRV-DAG: OpDecorate %[[#r7:]] FPFastMathMode {{NotNaN\|NotInf(\||$)}}
+; CHECK-SPIRV-NOT: OpDecorate {{.*}} FPFastMathMode
 ; CHECK-SPIRV:     %[[#float:]] = OpTypeFloat 32
-; CHECK-SPIRV:     %[[#r1]] = OpFSub %[[#float]]
+; CHECK-SPIRV:     %[[#r1:]] = OpFSub %[[#float]]
 ; CHECK-SPIRV:     %[[#r2]] = OpFSub %[[#float]]
 ; CHECK-SPIRV:     %[[#r3]] = OpFSub %[[#float]]
 ; CHECK-SPIRV:     %[[#r4]] = OpFSub %[[#float]]
@@ -24,14 +18,21 @@
 ; CHECK-SPIRV:     %[[#r6]] = OpFSub %[[#float]]
 ; CHECK-SPIRV:     %[[#r7]] = OpFSub %[[#float]]
 
-define spir_kernel void @testFSub(float %a, float %b) local_unnamed_addr {
+define spir_kernel void @testFSub(float %a, float %b, float addrspace(1)* %out) local_unnamed_addr {
 entry:
   %r1 = fsub float %a, %b
+  store volatile float %r1, float addrspace(1)* %out
   %r2 = fsub nnan float %a, %b
+  store volatile float %r2, float addrspace(1)* %out
   %r3 = fsub ninf float %a, %b
+  store volatile float %r3, float addrspace(1)* %out
   %r4 = fsub nsz float %a, %b
+  store volatile float %r4, float addrspace(1)* %out
   %r5 = fsub arcp float %a, %b
+  store volatile float %r5, float addrspace(1)* %out
   %r6 = fsub fast float %a, %b
+  store volatile float %r6, float addrspace(1)* %out
   %r7 = fsub nnan ninf float %a, %b
+  store volatile float %r7, float addrspace(1)* %out
   ret void
 }
diff --git a/llvm/test/CodeGen/SPIRV/trunc-nonstd-bitwidth.ll b/llvm/test/CodeGen/SPIRV/trunc-nonstd-bitwidth.ll
index 16cd00b7180a7..6b669007f544b 100644
--- a/llvm/test/CodeGen/SPIRV/trunc-nonstd-bitwidth.ll
+++ b/llvm/test/CodeGen/SPIRV/trunc-nonstd-bitwidth.ll
@@ -12,12 +12,6 @@
 ; XFAIL: expensive_checks
 
 ; CHECK-DAG: OpName %[[#Struct:]] "struct"
-; CHECK-DAG: OpName %[[#Arg:]] "arg"
-; CHECK-DAG: OpName %[[#QArg:]] "qarg"
-; CHECK-DAG: OpName %[[#R:]] "r"
-; CHECK-DAG: OpName %[[#Q:]] "q"
-; CHECK-DAG: OpName %[[#Tr:]] "tr"
-; CHECK-DAG: OpName %[[#Tq:]] "tq"
 ; CHECK-DAG: %[[#Struct]] = OpTypeStruct %[[#]] %[[#]] %[[#]]
 ; CHECK-DAG: %[[#PtrStruct:]] = OpTypePointer CrossWorkgroup %[[#Struct]]
 ; CHECK-EXT-DAG: %[[#Int40:]] = OpTypeInt 40 0
@@ -26,19 +20,21 @@
 ; CHECK-DAG: %[[#PtrInt40:]] = OpTypePointer CrossWorkgroup %[[#Int40]]
 
 ; CHECK: OpFunction
-
-; CHECK-EXT: %[[#Tr]] = OpUConvert %[[#Int40]] %[[#R]]
+; CHECK: %[[#Arg:]] = OpFunctionParameter
+; CHECK-EXT: %[[#Tr:]] = OpUConvert %[[#Int40]] %[[#R:]]
 ; CHECK-EXT: %[[#Store:]] = OpInBoundsPtrAccessChain %[[#PtrStruct]] %[[#Arg]] %[[#]]
 ; CHECK-EXT: %[[#StoreAsInt40:]] = OpBitcast %[[#PtrInt40]] %[[#Store]]
 ; CHECK-EXT: OpStore %[[#StoreAsInt40]] %[[#Tr]]
 
 ; CHECK-NOEXT: %[[#Store:]] = OpInBoundsPtrAccessChain %[[#PtrStruct]] %[[#Arg]] %[[#]]
 ; CHECK-NOEXT: %[[#StoreAsInt40:]] = OpBitcast %[[#PtrInt40]] %[[#Store]]
-; CHECK-NOEXT: OpStore %[[#StoreAsInt40]] %[[#R]]
+; CHECK-NOEXT: OpStore %[[#StoreAsInt40]] %[[#R:]]
 
 ; CHECK: OpFunction
 
-; CHECK-EXT: %[[#Tq]] = OpUConvert %[[#Int40]] %[[#Q]]
+; CHECK: %[[#QArg:]] = OpFunctionParameter
+; CHECK: %[[#Q:]] = OpFunctionParameter
+; CHECK-EXT: %[[#Tq:]] = OpUConvert %[[#Int40]] %[[#Q]]
 ; CHECK-EXT: OpStore %[[#QArg]] %[[#Tq]]
 
 ; CHECK-NOEXT: OpStore %[[#QArg]] %[[#Q]]
diff --git a/llvm/test/CodeGen/SPIRV/uitofp-with-bool.ll b/llvm/test/CodeGen/SPIRV/uitofp-with-bool.ll
index 9c8b4070d834d..aed759ba843c3 100644
--- a/llvm/test/CodeGen/SPIRV/uitofp-with-bool.ll
+++ b/llvm/test/CodeGen/SPIRV/uitofp-with-bool.ll
@@ -9,26 +9,6 @@
 ;; clang -x cl -cl-std=CL2.0 -target spir64 -emit-llvm -S -c test.cl
 
 
-; SPV-DAG: OpName %[[#s1:]] "s1"
-; SPV-DAG: OpName %[[#s2:]] "s2"
-; SPV-DAG: OpName %[[#s3:]] "s3"
-; SPV-DAG: OpName %[[#s4:]] "s4"
-; SPV-DAG: OpName %[[#s5:]] "s5"
-; SPV-DAG: OpName %[[#s6:]] "s6"
-; SPV-DAG: OpName %[[#s7:]] "s7"
-; SPV-DAG: OpName %[[#s8:]] "s8"
-; SPV-DAG: OpName %[[#z1:]] "z1"
-; SPV-DAG: OpName %[[#z2:]] "z2"
-; SPV-DAG: OpName %[[#z3:]] "z3"
-; SPV-DAG: OpName %[[#z4:]] "z4"
-; SPV-DAG: OpName %[[#z5:]] "z5"
-; SPV-DAG: OpName %[[#z6:]] "z6"
-; SPV-DAG: OpName %[[#z7:]] "z7"
-; SPV-DAG: OpName %[[#z8:]] "z8"
-; SPV-DAG: OpName %[[#ufp1:]] "ufp1"
-; SPV-DAG: OpName %[[#ufp2:]] "ufp2"
-; SPV-DAG: OpName %[[#sfp1:]] "sfp1"
-; SPV-DAG: OpName %[[#sfp2:]] "sfp2"
 ; SPV-DAG: %[[#int_32:]] = OpTypeInt 32 0
 ; SPV-DAG: %[[#int_8:]] = OpTypeInt 8 0
 ; SPV-DAG: %[[#int_16:]] = OpTypeInt 16 0
@@ -98,76 +78,76 @@
 define dso_local spir_kernel void @K(float addrspace(1)* nocapture %A, i32 %B, i1 %i1s, <2 x i1> %i1v) local_unnamed_addr {
 entry:
 
-; SPV-DAG: %[[#cmp_res:]] = OpSGreaterThan %[[#bool]] %[[#B]] %[[#zero_32]]
+; SPV: %[[#cmp_res:]] = OpSGreaterThan %[[#bool]] %[[#B]] %[[#zero_32]]
   %cmp = icmp sgt i32 %B, 0
-; SPV-DAG: %[[#select_res:]] = OpSelect %[[#int_32]] %[[#cmp_res]] %[[#one_32]] %[[#zero_32]]
-; SPV-DAG: %[[#utof_res:]] = OpConvertUToF %[[#float]] %[[#select_res]]
+; SPV: %[[#select_res:]] = OpSelect %[[#int_32]] %[[#cmp_res]] %[[#one_32]] %[[#zero_32]]
+; SPV: %[[#utof_res:]] = OpConvertUToF %[[#float]] %[[#select_res]]
   %conv = uitofp i1 %cmp to float
-; SPV-DAG: OpStore %[[#A]] %[[#utof_res]]
+; SPV: OpStore %[[#A]] %[[#utof_res]]
   store float %conv, float addrspace(1)* %A, align 4;
 
-; SPV-DAG: %[[#s1]] = OpSelect %[[#int_8]] %[[#i1s]] %[[#mone_8]] %[[#zero_8]]
+; SPV: %[[#s1:]] = OpSelect %[[#int_8]] %[[#i1s]] %[[#mone_8]] %[[#zero_8]]
   %s1 = sext i1 %i1s to i8
   store i8 %s1, ptr @G_s1
-; SPV-DAG: %[[#s2]] = OpSelect %[[#int_16]] %[[#i1s]] %[[#mone_16]] %[[#zero_16]]
+; SPV: %[[#s2:]] = OpSelect %[[#int_16]] %[[#i1s]] %[[#mone_16]] %[[#zero_16]]
   %s2 = sext i1 %i1s to i16
   store i16 %s2, ptr @G_s2
-; SPV-DAG: %[[#s3]] = OpSelect %[[#int_32]] %[[#i1s]] %[[#mone_32]] %[[#zero_32]]
+; SPV: %[[#s3:]] = OpSelect %[[#int_32]] %[[#i1s]] %[[#mone_32]] %[[#zero_32]]
   %s3 = sext i1 %i1s to i32
   store i32 %s3, ptr @G_s3
-; SPV-DAG: %[[#s4]] = OpSelect %[[#int_64]] %[[#i1s]] %[[#mone_64]] %[[#zero_64]]
+; SPV: %[[#s4:]] = OpSelect %[[#int_64]] %[[#i1s]] %[[#mone_64]] %[[#zero_64]]
   %s4 = sext i1 %i1s to i64
   store i64 %s4, ptr @G_s4
-; SPV-DAG: %[[#s5]] = OpSelect %[[#vec_8]] %[[#i1v]] %[[#mones_8]] %[[#zeros_8]]
+; SPV: %[[#s5:]] = OpSelect %[[#vec_8]] %[[#i1v]] %[[#mones_8]] %[[#zeros_8]]
   %s5 = sext <2 x i1> %i1v to <2 x i8>
   store <2 x i8> %s5, ptr @G_s5
-; SPV-DAG: %[[#s6]] = OpSelect %[[#vec_16]] %[[#i1v]] %[[#mones_16]] %[[#zeros_16]]
+; SPV: %[[#s6:]] = OpSelect %[[#vec_16]] %[[#i1v]] %[[#mones_16]] %[[#zeros_16]]
   %s6 = sext <2 x i1> %i1v to <2 x i16>
   store <2 x i16> %s6, ptr @G_s6
-; SPV-DAG: %[[#s7]] = OpSelect %[[#vec_32]] %[[#i1v]] %[[#mones_32]] %[[#zeros_32]]
+; SPV: %[[#s7:]] = OpSelect %[[#vec_32]] %[[#i1v]] %[[#mones_32]] %[[#zeros_32]]
   %s7 = sext <2 x i1> %i1v to <2 x i32>
   store <2 x i32> %s7, ptr @G_s7
-; SPV-DAG: %[[#s8]] = OpSelect %[[#vec_64]] %[[#i1v]] %[[#mones_64]] %[[#zeros_64]]
+; SPV: %[[#s8:]] = OpSelect %[[#vec_64]] %[[#i1v]] %[[#mones_64]] %[[#zeros_64]]
   %s8 = sext <2 x i1> %i1v to <2 x i64>
   store <2 x i64> %s8, ptr @G_s8
-; SPV-DAG: %[[#z1]] = OpSelect %[[#int_8]] %[[#i1s]] %[[#one_8]] %[[#zero_8]]
+; SPV: %[[#z1:]] = OpSelect %[[#int_8]] %[[#i1s]] %[[#one_8]] %[[#zero_8]]
   %z1 = zext i1 %i1s to i8
   store i8 %z1, ptr @G_z1
-; SPV-DAG: %[[#z2]] = OpSelect %[[#int_16]] %[[#i1s]] %[[#one_16]] %[[#zero_16]]
+; SPV: %[[#z2:]] = OpSelect %[[#int_16]] %[[#i1s]] %[[#one_16]] %[[#zero_16]]
   %z2 = zext i1 %i1s to i16
   store i16 %z2, ptr @G_z2
-; SPV-DAG: %[[#z3]] = OpSelect %[[#int_32]] %[[#i1s]] %[[#one_32]] %[[#zero_32]]
+; SPV: %[[#z3:]] = OpSelect %[[#int_32]] %[[#i1s]] %[[#one_32]] %[[#zero_32]]
   %z3 = zext i1 %i1s to i32
   store i32 %z3, ptr @G_z3
-; SPV-DAG: %[[#z4]] = OpSelect %[[#int_64]] %[[#i1s]] %[[#one_64]] %[[#zero_64]]
+; SPV: %[[#z4:]] = OpSelect %[[#int_64]] %[[#i1s]] %[[#one_64]] %[[#zero_64]]
   %z4 = zext i1 %i1s to i64
   store i64 %z4, ptr @G_z4
-; SPV-DAG: %[[#z5]] = OpSelect %[[#vec_8]] %[[#i1v]] %[[#ones_8]] %[[#zeros_8]]
+; SPV: %[[#z5:]] = OpSelect %[[#vec_8]] %[[#i1v]] %[[#ones_8]] %[[#zeros_8]]
   %z5 = zext <2 x i1> %i1v to <2 x i8>
   store <2 x i8> %z5, ptr @G_z5
-; SPV-DAG: %[[#z6]] = OpSelect %[[#vec_16]] %[[#i1v]] %[[#ones_16]] %[[#zeros_16]]
+; SPV: %[[#z6:]] = OpSelect %[[#vec_16]] %[[#i1v]] %[[#ones_16]] %[[#zeros_16]]
   %z6 = zext <2 x i1> %i1v to <2 x i16>
   store <2 x i16> %z6, ptr @G_z6
-; SPV-DAG: %[[#z7]] = OpSelect %[[#vec_32]] %[[#i1v]] %[[#ones_32]] %[[#zeros_32]]
+; SPV: %[[#z7:]] = OpSelect %[[#vec_32]] %[[#i1v]] %[[#ones_32]] %[[#zeros_32]]
   %z7 = zext <2 x i1> %i1v to <2 x i32>
   store <2 x i32> %z7, ptr @G_z7
-; SPV-DAG: %[[#z8]] = OpSelect %[[#vec_64]] %[[#i1v]] %[[#ones_64]] %[[#zeros_64]]
+; SPV: %[[#z8:]] = OpSelect %[[#vec_64]] %[[#i1v]] %[[#ones_64]] %[[#zeros_64]]
   %z8 = zext <2 x i1> %i1v to <2 x i64>
   store <2 x i64> %z8, ptr @G_z8
-; SPV-DAG: %[[#ufp1_res:]] = OpSelect %[[#int_32]] %[[#i1s]] %[[#one_32]] %[[#zero_32]]
-; SPV-DAG: %[[#ufp1]] = OpConvertUToF %[[#float]] %[[#ufp1_res]]
+; SPV: %[[#ufp1_res:]] = OpSelect %[[#int_32]] %[[#i1s]] %[[#one_32]] %[[#zero_32]]
+; SPV: %[[#ufp1:]] = OpConvertUToF %[[#float]] %[[#ufp1_res]]
   %ufp1 = uitofp i1 %i1s to float
   store float %ufp1, ptr @G_ufp1
-; SPV-DAG: %[[#ufp2_res:]] = OpSelect %[[#vec_32]] %[[#i1v]] %[[#ones_32]] %[[#zeros_32]]
-; SPV-DAG: %[[#ufp2]] = OpConvertUToF %[[#vec_float]] %[[#ufp2_res]]
+; SPV: %[[#ufp2_res:]] = OpSelect %[[#vec_32]] %[[#i1v]] %[[#ones_32]] %[[#zeros_32]]
+; SPV: %[[#ufp2:]] = OpConvertUToF %[[#vec_float]] %[[#ufp2_res]]
   %ufp2 = uitofp <2 x i1> %i1v to <2 x float>
   store <2 x float> %ufp2, ptr @G_ufp2
-; SPV-DAG: %[[#sfp1_res:]] = OpSelect %[[#int_32]] %[[#i1s]] %[[#one_32]] %[[#zero_32]]
-; SPV-DAG: %[[#sfp1]] = OpConvertSToF %[[#float]] %[[#sfp1_res]]
+; SPV: %[[#sfp1_res:]] = OpSelect %[[#int_32]] %[[#i1s]] %[[#one_32]] %[[#zero_32]]
+; SPV: %[[#sfp1:]] = OpConvertSToF %[[#float]] %[[#sfp1_res]]
   %sfp1 = sitofp i1 %i1s to float
   store float %sfp1, ptr @G_sfp1
-; SPV-DAG: %[[#sfp2_res:]] = OpSelect %[[#vec_32]] %[[#i1v]] %[[#ones_32]] %[[#zeros_32]]
-; SPV-DAG: %[[#sfp2]] = OpConvertSToF %[[#vec_float]] %[[#sfp2_res]]
+; SPV: %[[#sfp2_res:]] = OpSelect %[[#vec_32]] %[[#i1v]] %[[#ones_32]] %[[#zeros_32]]
+; SPV: %[[#sfp2:]] = OpConvertSToF %[[#vec_float]] %[[#sfp2_res]]
   %sfp2 = sitofp <2 x i1> %i1v to <2 x float>
   store <2 x float> %sfp2, ptr @G_sfp2
   ret void
diff --git a/llvm/test/CodeGen/SPIRV/zero-length-array.ll b/llvm/test/CodeGen/SPIRV/zero-length-array.ll
index cb34529ebfecd..53cdcb5518397 100644
--- a/llvm/test/CodeGen/SPIRV/zero-length-array.ll
+++ b/llvm/test/CodeGen/SPIRV/zero-length-array.ll
@@ -5,9 +5,6 @@
 
 ; For compute, nothing is generated, but compilation doesn't crash.
 ; CHECK: OpName %[[#FOO:]] "foo"
-; CHECK: OpName %[[#RTM:]] "reg2mem alloca point"
-; CHECK: %[[#INT:]] = OpTypeInt 32 0
-; CHECK: %[[#RTM]] = OpConstant %[[#INT]] 0
 ; CHECK: %[[#FOO]] = OpFunction
 ; CHECK-NEXT: = OpLabel
 ; CHECK-NEXT: OpReturn

From f9d9c4e6bb9743805ce1f279990a23aba9695092 Mon Sep 17 00:00:00 2001
From: Steven Perron <stevenperron at google.com>
Date: Tue, 2 Dec 2025 16:07:14 -0500
Subject: [PATCH 09/10] [SPIRV] Implement lowering for llvm.matrix.transpose
 and llvm.matrix.multiply

This patch implements the lowering for the llvm.matrix.transpose and
llvm.matrix.multiply intrinsics in the SPIR-V backend.

- llvm.matrix.transpose is lowered to a G_SHUFFLE_VECTOR with a
  mask calculated to transpose the elements.
- llvm.matrix.multiply is lowered by decomposing the operation into
  dot products of rows and columns:
  - Rows and columns are extracted using G_UNMERGE_VALUES or shuffles.
  - Dot products are computed using OpDot for floating point vectors
    or standard arithmetic for scalars/integers.
  - The result is reconstructed using G_BUILD_VECTOR.

This change also updates SPIRVPostLegalizer to improve type deduction
for G_UNMERGE_VALUES, enabling correct type assignment for the
intermediate virtual registers generated during lowering.

New tests are added to verify support for various matrix sizes and
element types (float and int).
---
 llvm/lib/Target/SPIRV/SPIRVCombine.td         |  18 +-
 llvm/lib/Target/SPIRV/SPIRVCombinerHelper.cpp | 189 ++++++++++++++++++
 llvm/lib/Target/SPIRV/SPIRVCombinerHelper.h   |  21 ++
 llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp  |   7 +-
 llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp  |  95 +++++----
 .../SPIRV/llvm-intrinsics/matrix-multiply.ll  | 168 ++++++++++++++++
 .../SPIRV/llvm-intrinsics/matrix-transpose.ll | 124 ++++++++++++
 7 files changed, 567 insertions(+), 55 deletions(-)
 create mode 100644 llvm/test/CodeGen/SPIRV/llvm-intrinsics/matrix-multiply.ll
 create mode 100644 llvm/test/CodeGen/SPIRV/llvm-intrinsics/matrix-transpose.ll

diff --git a/llvm/lib/Target/SPIRV/SPIRVCombine.td b/llvm/lib/Target/SPIRV/SPIRVCombine.td
index 991a5de1c4e83..7d69465de4ffb 100644
--- a/llvm/lib/Target/SPIRV/SPIRVCombine.td
+++ b/llvm/lib/Target/SPIRV/SPIRVCombine.td
@@ -22,8 +22,22 @@ def vector_select_to_faceforward_lowering : GICombineRule <
   (apply [{ Helper.applySPIRVFaceForward(*${root}); }])
 >;
 
+def matrix_transpose_lowering
+    : GICombineRule<(defs root:$root),
+                    (match (wip_match_opcode G_INTRINSIC):$root,
+                        [{ return Helper.matchMatrixTranspose(*${root}); }]),
+                    (apply [{ Helper.applyMatrixTranspose(*${root}); }])>;
+
+def matrix_multiply_lowering
+    : GICombineRule<(defs root:$root),
+                    (match (wip_match_opcode G_INTRINSIC):$root,
+                        [{ return Helper.matchMatrixMultiply(*${root}); }]),
+                    (apply [{ Helper.applyMatrixMultiply(*${root}); }])>;
+
 def SPIRVPreLegalizerCombiner
     : GICombiner<"SPIRVPreLegalizerCombinerImpl",
-                       [vector_length_sub_to_distance_lowering, vector_select_to_faceforward_lowering]> {
-    let CombineAllMethodName = "tryCombineAllImpl";
+                 [vector_length_sub_to_distance_lowering,
+                  vector_select_to_faceforward_lowering,
+                  matrix_transpose_lowering, matrix_multiply_lowering]> {
+  let CombineAllMethodName = "tryCombineAllImpl";
 }
diff --git a/llvm/lib/Target/SPIRV/SPIRVCombinerHelper.cpp b/llvm/lib/Target/SPIRV/SPIRVCombinerHelper.cpp
index fad2b676fee04..693b74c1e06d7 100644
--- a/llvm/lib/Target/SPIRV/SPIRVCombinerHelper.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVCombinerHelper.cpp
@@ -7,9 +7,13 @@
 //===----------------------------------------------------------------------===//
 
 #include "SPIRVCombinerHelper.h"
+#include "SPIRVGlobalRegistry.h"
+#include "SPIRVUtils.h"
 #include "llvm/CodeGen/GlobalISel/GenericMachineInstrs.h"
 #include "llvm/CodeGen/GlobalISel/MIPatternMatch.h"
+#include "llvm/IR/DerivedTypes.h"
 #include "llvm/IR/IntrinsicsSPIRV.h"
+#include "llvm/IR/LLVMContext.h"
 #include "llvm/Target/TargetMachine.h"
 
 using namespace llvm;
@@ -209,3 +213,188 @@ void SPIRVCombinerHelper::applySPIRVFaceForward(MachineInstr &MI) const {
   GR->invalidateMachineInstr(FalseInstr);
   FalseInstr->eraseFromParent();
 }
+
+bool SPIRVCombinerHelper::matchMatrixTranspose(MachineInstr &MI) const {
+  return MI.getOpcode() == TargetOpcode::G_INTRINSIC &&
+         cast<GIntrinsic>(MI).getIntrinsicID() == Intrinsic::matrix_transpose;
+}
+
+void SPIRVCombinerHelper::applyMatrixTranspose(MachineInstr &MI) const {
+  Register ResReg = MI.getOperand(0).getReg();
+  Register InReg = MI.getOperand(2).getReg();
+  uint32_t Rows = MI.getOperand(3).getImm();
+  uint32_t Cols = MI.getOperand(4).getImm();
+
+  Builder.setInstrAndDebugLoc(MI);
+
+  if (Rows == 1 && Cols == 1) {
+    Builder.buildCopy(ResReg, InReg);
+    MI.eraseFromParent();
+    return;
+  }
+
+  SmallVector<int, 16> Mask;
+  for (uint32_t K = 0; K < Rows * Cols; ++K) {
+    uint32_t R = K / Cols;
+    uint32_t C = K % Cols;
+    Mask.push_back(C * Rows + R);
+  }
+
+  Builder.buildShuffleVector(ResReg, InReg, InReg, Mask);
+  MI.eraseFromParent();
+}
+
+bool SPIRVCombinerHelper::matchMatrixMultiply(MachineInstr &MI) const {
+  return MI.getOpcode() == TargetOpcode::G_INTRINSIC &&
+         cast<GIntrinsic>(MI).getIntrinsicID() == Intrinsic::matrix_multiply;
+}
+
+SmallVector<Register, 4>
+SPIRVCombinerHelper::extractColumns(Register MatrixReg, uint32_t NumberOfCols,
+                                    SPIRVType *SpvColType,
+                                    SPIRVGlobalRegistry *GR) const {
+  // If the matrix is a single column, return that column.
+  if (NumberOfCols == 1)
+    return {MatrixReg};
+
+  SmallVector<Register, 4> Cols;
+  LLT ColTy = GR->getRegType(SpvColType);
+  for (uint32_t J = 0; J < NumberOfCols; ++J)
+    Cols.push_back(MRI.createGenericVirtualRegister(ColTy));
+  Builder.buildUnmerge(Cols, MatrixReg);
+  for (Register R : Cols) {
+    setRegClassType(R, SpvColType, GR, &MRI, Builder.getMF());
+  }
+  return Cols;
+}
+
+SmallVector<Register, 4>
+SPIRVCombinerHelper::extractRows(Register MatrixReg, uint32_t NumRows,
+                                 uint32_t NumCols, SPIRVType *SpvRowType,
+                                 SPIRVGlobalRegistry *GR) const {
+  SmallVector<Register, 4> Rows;
+  LLT VecTy = GR->getRegType(SpvRowType);
+
+  // If there is only one column, then each row is a scalar that needs
+  // to be extracted.
+  if (NumCols == 1) {
+    assert(SpvRowType->getOpcode() != SPIRV::OpTypeVector);
+    for (uint32_t I = 0; I < NumRows; ++I)
+      Rows.push_back(MRI.createGenericVirtualRegister(VecTy));
+    Builder.buildUnmerge(Rows, MatrixReg);
+    for (Register R : Rows) {
+      setRegClassType(R, SpvRowType, GR, &MRI, Builder.getMF());
+    }
+    return Rows;
+  }
+
+  // If the matrix is a single row, return that row.
+  if (NumRows == 1) {
+    return {MatrixReg};
+  }
+
+  for (uint32_t I = 0; I < NumRows; ++I) {
+    SmallVector<int, 4> Mask;
+    for (uint32_t k = 0; k < NumCols; ++k)
+      Mask.push_back(k * NumRows + I);
+    Rows.push_back(Builder.buildShuffleVector(VecTy, MatrixReg, MatrixReg, Mask)
+                       .getReg(0));
+  }
+  for (Register R : Rows) {
+    setRegClassType(R, SpvRowType, GR, &MRI, Builder.getMF());
+  }
+  return Rows;
+}
+
+Register SPIRVCombinerHelper::computeDotProduct(Register RowA, Register ColB,
+                                                SPIRVType *SpvVecType,
+                                                SPIRVGlobalRegistry *GR) const {
+  bool IsVectorOp = SpvVecType->getOpcode() == SPIRV::OpTypeVector;
+  SPIRVType *SpvScalarType = GR->getScalarOrVectorComponentType(SpvVecType);
+  bool IsFloatOp = SpvScalarType->getOpcode() == SPIRV::OpTypeFloat;
+  LLT VecTy = GR->getRegType(SpvVecType);
+
+  Register DotRes;
+  if (IsVectorOp) {
+    LLT ScalarTy = VecTy.getElementType();
+    Intrinsic::SPVIntrinsics DotIntrinsic =
+        (IsFloatOp ? Intrinsic::spv_fdot : Intrinsic::spv_udot);
+    DotRes = Builder.buildIntrinsic(DotIntrinsic, {ScalarTy})
+                 .addUse(RowA)
+                 .addUse(ColB)
+                 .getReg(0);
+  } else {
+    if (IsFloatOp)
+      DotRes = Builder.buildFMul(VecTy, RowA, ColB).getReg(0);
+    else
+      DotRes = Builder.buildMul(VecTy, RowA, ColB).getReg(0);
+  }
+  setRegClassType(DotRes, SpvScalarType, GR, &MRI, Builder.getMF());
+  return DotRes;
+}
+
+SmallVector<Register, 16>
+SPIRVCombinerHelper::computeDotProducts(const SmallVector<Register, 4> &RowsA,
+                                        const SmallVector<Register, 4> &ColsB,
+                                        SPIRVType *SpvVecType,
+                                        SPIRVGlobalRegistry *GR) const {
+  SmallVector<Register, 16> ResultScalars;
+  for (uint32_t J = 0; J < ColsB.size(); ++J) {
+    for (uint32_t I = 0; I < RowsA.size(); ++I) {
+      ResultScalars.push_back(
+          computeDotProduct(RowsA[I], ColsB[J], SpvVecType, GR));
+    }
+  }
+  return ResultScalars;
+}
+
+SPIRVType *
+SPIRVCombinerHelper::getDotProductVectorType(Register ResReg, uint32_t K,
+                                             SPIRVGlobalRegistry *GR) const {
+  // Scan the uses of ResReg for a spv_assign_type intrinsic to recover the
+  // scalar result type.
+  Type *ScalarResType = nullptr;
+  for (auto &UseMI : MRI.use_instructions(ResReg)) {
+    if (UseMI.getOpcode() != TargetOpcode::G_INTRINSIC_W_SIDE_EFFECTS)
+      continue;
+
+    if (!isSpvIntrinsic(UseMI, Intrinsic::spv_assign_type))
+      continue;
+
+    Type *Ty = getMDOperandAsType(UseMI.getOperand(2).getMetadata(), 0);
+    if (Ty->isVectorTy())
+      ScalarResType = cast<VectorType>(Ty)->getElementType();
+    else
+      ScalarResType = Ty;
+    assert(ScalarResType->isIntegerTy() || ScalarResType->isFloatingPointTy());
+    break;
+  }
+  Type *VecType =
+      (K > 1 ? FixedVectorType::get(ScalarResType, K) : ScalarResType);
+  return GR->getOrCreateSPIRVType(VecType, Builder,
+                                  SPIRV::AccessQualifier::None, false);
+}
+
+void SPIRVCombinerHelper::applyMatrixMultiply(MachineInstr &MI) const {
+  Register ResReg = MI.getOperand(0).getReg();
+  Register AReg = MI.getOperand(2).getReg();
+  Register BReg = MI.getOperand(3).getReg();
+  uint32_t NumRowsA = MI.getOperand(4).getImm();
+  uint32_t NumColsA = MI.getOperand(5).getImm();
+  uint32_t NumColsB = MI.getOperand(6).getImm();
+
+  Builder.setInstrAndDebugLoc(MI);
+
+  SPIRVGlobalRegistry *GR =
+      MI.getMF()->getSubtarget<SPIRVSubtarget>().getSPIRVGlobalRegistry();
+
+  SPIRVType *SpvVecType = getDotProductVectorType(ResReg, NumColsA, GR);
+  SmallVector<Register, 4> ColsB =
+      extractColumns(BReg, NumColsB, SpvVecType, GR);
+  SmallVector<Register, 4> RowsA =
+      extractRows(AReg, NumRowsA, NumColsA, SpvVecType, GR);
+  SmallVector<Register, 16> ResultScalars =
+      computeDotProducts(RowsA, ColsB, SpvVecType, GR);
+
+  Builder.buildBuildVector(ResReg, ResultScalars);
+  MI.eraseFromParent();
+}
\ No newline at end of file
diff --git a/llvm/lib/Target/SPIRV/SPIRVCombinerHelper.h b/llvm/lib/Target/SPIRV/SPIRVCombinerHelper.h
index 3118cdc744b8f..b6b3b36f03ade 100644
--- a/llvm/lib/Target/SPIRV/SPIRVCombinerHelper.h
+++ b/llvm/lib/Target/SPIRV/SPIRVCombinerHelper.h
@@ -33,6 +33,27 @@ class SPIRVCombinerHelper : public CombinerHelper {
   void applySPIRVDistance(MachineInstr &MI) const;
   bool matchSelectToFaceForward(MachineInstr &MI) const;
   void applySPIRVFaceForward(MachineInstr &MI) const;
+  bool matchMatrixTranspose(MachineInstr &MI) const;
+  void applyMatrixTranspose(MachineInstr &MI) const;
+  bool matchMatrixMultiply(MachineInstr &MI) const;
+  void applyMatrixMultiply(MachineInstr &MI) const;
+
+private:
+  SPIRVType *getDotProductVectorType(Register ResReg, uint32_t K,
+                                     SPIRVGlobalRegistry *GR) const;
+  SmallVector<Register, 4> extractColumns(Register BReg, uint32_t N,
+                                          SPIRVType *SpvVecType,
+                                          SPIRVGlobalRegistry *GR) const;
+  SmallVector<Register, 4> extractRows(Register AReg, uint32_t NumRows,
+                                       uint32_t NumCols, SPIRVType *SpvRowType,
+                                       SPIRVGlobalRegistry *GR) const;
+  SmallVector<Register, 16>
+  computeDotProducts(const SmallVector<Register, 4> &RowsA,
+                     const SmallVector<Register, 4> &ColsB,
+                     SPIRVType *SpvVecType, SPIRVGlobalRegistry *GR) const;
+  Register computeDotProduct(Register RowA, Register ColB,
+                             SPIRVType *SpvVecType,
+                             SPIRVGlobalRegistry *GR) const;
 };
 
 } // end namespace llvm
diff --git a/llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp b/llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp
index 40cb7dda53ba4..a7626c3d43c34 100644
--- a/llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp
@@ -173,6 +173,7 @@ SPIRVLegalizerInfo::SPIRVLegalizerInfo(const SPIRVSubtarget &ST) {
   // non-shader contexts, vector sizes of 8 and 16 are also permitted, but
   // arbitrary sizes (e.g., 6 or 11) are not.
   uint32_t MaxVectorSize = ST.isShader() ? 4 : 16;
+  LLVM_DEBUG(dbgs() << "MaxVectorSize: " << MaxVectorSize << "\n");
 
   for (auto Opc : getTypeFoldingSupportedOpcodes()) {
     switch (Opc) {
@@ -221,8 +222,7 @@ SPIRVLegalizerInfo::SPIRVLegalizerInfo(const SPIRVSubtarget &ST) {
       .moreElementsToNextPow2(0)
       .lowerIf(vectorElementCountIsGreaterThan(0, MaxVectorSize))
       .moreElementsToNextPow2(1)
-      .lowerIf(vectorElementCountIsGreaterThan(1, MaxVectorSize))
-      .alwaysLegal();
+      .lowerIf(vectorElementCountIsGreaterThan(1, MaxVectorSize));
 
   getActionDefinitionsBuilder(G_EXTRACT_VECTOR_ELT)
       .moreElementsToNextPow2(1)
@@ -263,8 +263,7 @@ SPIRVLegalizerInfo::SPIRVLegalizerInfo(const SPIRVSubtarget &ST) {
 
   // If the result is still illegal, the combiner should be able to remove it.
   getActionDefinitionsBuilder(G_CONCAT_VECTORS)
-      .legalForCartesianProduct(allowedVectorTypes, allowedVectorTypes)
-      .moreElementsToNextPow2(0);
+      .legalForCartesianProduct(allowedVectorTypes, allowedVectorTypes);
 
   getActionDefinitionsBuilder(G_SPLAT_VECTOR)
       .legalFor(allowedVectorTypes)
diff --git a/llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp b/llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp
index 99edb937c3daa..5f52f60da37e1 100644
--- a/llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp
@@ -51,54 +51,6 @@ static SPIRVType *deduceIntTypeFromResult(Register ResVReg,
   return GR->getOrCreateSPIRVIntegerType(Ty.getScalarSizeInBits(), MIB);
 }
 
-static bool deduceAndAssignTypeForGUnmerge(MachineInstr *I, MachineFunction &MF,
-                                           SPIRVGlobalRegistry *GR) {
-  MachineRegisterInfo &MRI = MF.getRegInfo();
-  Register SrcReg = I->getOperand(I->getNumOperands() - 1).getReg();
-  SPIRVType *ScalarType = nullptr;
-  if (SPIRVType *DefType = GR->getSPIRVTypeForVReg(SrcReg)) {
-    assert(DefType->getOpcode() == SPIRV::OpTypeVector);
-    ScalarType = GR->getSPIRVTypeForVReg(DefType->getOperand(1).getReg());
-  }
-
-  if (!ScalarType) {
-    // If we could not deduce the type from the source, try to deduce it from
-    // the uses of the results.
-    for (unsigned i = 0; i < I->getNumDefs() && !ScalarType; ++i) {
-      for (const auto &Use :
-           MRI.use_nodbg_instructions(I->getOperand(i).getReg())) {
-        if (Use.getOpcode() != TargetOpcode::G_BUILD_VECTOR)
-          continue;
-
-        if (auto *VecType =
-                GR->getSPIRVTypeForVReg(Use.getOperand(0).getReg())) {
-          ScalarType = GR->getScalarOrVectorComponentType(VecType);
-          break;
-        }
-      }
-    }
-  }
-
-  if (!ScalarType)
-    return false;
-
-  for (unsigned i = 0; i < I->getNumDefs(); ++i) {
-    Register DefReg = I->getOperand(i).getReg();
-    if (GR->getSPIRVTypeForVReg(DefReg))
-      continue;
-
-    LLT DefLLT = MRI.getType(DefReg);
-    SPIRVType *ResType =
-        DefLLT.isVector()
-            ? GR->getOrCreateSPIRVVectorType(
-                  ScalarType, DefLLT.getNumElements(), *I,
-                  *MF.getSubtarget<SPIRVSubtarget>().getInstrInfo())
-            : ScalarType;
-    setRegClassType(DefReg, ResType, GR, &MRI, MF);
-  }
-  return true;
-}
-
 static SPIRVType *deduceTypeFromSingleOperand(MachineInstr *I,
                                               MachineIRBuilder &MIB,
                                               SPIRVGlobalRegistry *GR,
@@ -179,6 +131,7 @@ static SPIRVType *deduceTypeFromUses(Register Reg, MachineFunction &MF,
     case TargetOpcode::G_FDIV:
     case TargetOpcode::G_FREM:
     case TargetOpcode::G_FMA:
+    case TargetOpcode::COPY:
     case TargetOpcode::G_STRICT_FMA:
       ResType = deduceTypeFromResultRegister(&Use, Reg, GR, MIB);
       break;
@@ -223,6 +176,50 @@ static SPIRVType *deduceResultTypeFromOperands(MachineInstr *I,
   }
 }
 
+static bool deduceAndAssignTypeForGUnmerge(MachineInstr *I, MachineFunction &MF,
+                                           SPIRVGlobalRegistry *GR,
+                                           MachineIRBuilder &MIB) {
+  MachineRegisterInfo &MRI = MF.getRegInfo();
+  Register SrcReg = I->getOperand(I->getNumOperands() - 1).getReg();
+  SPIRVType *ScalarType = nullptr;
+  if (SPIRVType *DefType = GR->getSPIRVTypeForVReg(SrcReg)) {
+    assert(DefType->getOpcode() == SPIRV::OpTypeVector);
+    ScalarType = GR->getSPIRVTypeForVReg(DefType->getOperand(1).getReg());
+  }
+
+  if (!ScalarType) {
+    // If we could not deduce the type from the source, try to deduce it from
+    // the uses of the results.
+    for (unsigned i = 0; i < I->getNumDefs(); ++i) {
+      Register DefReg = I->getOperand(i).getReg();
+      ScalarType = deduceTypeFromUses(DefReg, MF, GR, MIB);
+      if (ScalarType) {
+        ScalarType = GR->getScalarOrVectorComponentType(ScalarType);
+        break;
+      }
+    }
+  }
+
+  if (!ScalarType)
+    return false;
+
+  for (unsigned i = 0; i < I->getNumOperands(); ++i) {
+    Register DefReg = I->getOperand(i).getReg();
+    if (GR->getSPIRVTypeForVReg(DefReg))
+      continue;
+
+    LLT DefLLT = MRI.getType(DefReg);
+    SPIRVType *ResType =
+        DefLLT.isVector()
+            ? GR->getOrCreateSPIRVVectorType(
+                  ScalarType, DefLLT.getNumElements(), *I,
+                  *MF.getSubtarget<SPIRVSubtarget>().getInstrInfo())
+            : ScalarType;
+    setRegClassType(DefReg, ResType, GR, &MRI, MF);
+  }
+  return true;
+}
+
 static bool deduceAndAssignSpirvType(MachineInstr *I, MachineFunction &MF,
                                      SPIRVGlobalRegistry *GR,
                                      MachineIRBuilder &MIB) {
@@ -234,7 +231,7 @@ static bool deduceAndAssignSpirvType(MachineInstr *I, MachineFunction &MF,
   // unlike the other instructions which have a single result register. The main
   // deduction logic is designed for the single-definition case.
   if (I->getOpcode() == TargetOpcode::G_UNMERGE_VALUES)
-    return deduceAndAssignTypeForGUnmerge(I, MF, GR);
+    return deduceAndAssignTypeForGUnmerge(I, MF, GR, MIB);
 
   LLVM_DEBUG(dbgs() << "Inferring type from operands\n");
   SPIRVType *ResType = deduceResultTypeFromOperands(I, GR, MIB);
diff --git a/llvm/test/CodeGen/SPIRV/llvm-intrinsics/matrix-multiply.ll b/llvm/test/CodeGen/SPIRV/llvm-intrinsics/matrix-multiply.ll
new file mode 100644
index 0000000000000..4f8dfd0494009
--- /dev/null
+++ b/llvm/test/CodeGen/SPIRV/llvm-intrinsics/matrix-multiply.ll
@@ -0,0 +1,168 @@
+; RUN: llc -O0 -mtriple=spirv1.5-unknown-vulkan1.2 %s -o - | FileCheck %s --check-prefixes=CHECK,VK1_1
+; RUN: %if spirv-tools %{ llc -O0 -mtriple=spirv1.5-unknown-vulkan1.2 %s -o - -filetype=obj | spirv-val --target-env vulkan1.2 %}
+
+; RUN: llc -O0 -mtriple=spirv1.6-unknown-vulkan1.3 %s -o - | FileCheck %s --check-prefixes=CHECK,VK1_3
+; RUN: %if spirv-tools %{ llc -O0 -mtriple=spirv1.6-unknown-vulkan1.3 %s -o - -filetype=obj | spirv-val --target-env vulkan1.3 %}
+
+@private_v4f32 = internal addrspace(10) global [4 x float] poison
+@private_v4i32 = internal addrspace(10) global [4 x i32] poison
+@private_v6f32 = internal addrspace(10) global [6 x float] poison
+@private_v2f32 = internal addrspace(10) global [2 x float] poison
+@private_v1f32 = internal addrspace(10) global [1 x float] poison
+
+
+; CHECK-DAG: %[[Float_ID:[0-9]+]] = OpTypeFloat 32
+; CHECK-DAG: %[[V2F32_ID:[0-9]+]] = OpTypeVector %[[Float_ID]] 2
+; CHECK-DAG: %[[V3F32_ID:[0-9]+]] = OpTypeVector %[[Float_ID]] 3
+; CHECK-DAG: %[[V4F32_ID:[0-9]+]] = OpTypeVector %[[Float_ID]] 4
+; CHECK-DAG: %[[Int_ID:[0-9]+]] = OpTypeInt 32 0
+; CHECK-DAG: %[[V2I32_ID:[0-9]+]] = OpTypeVector %[[Int_ID]] 2
+; CHECK-DAG: %[[V4I32_ID:[0-9]+]] = OpTypeVector %[[Int_ID]] 4
+
+; Test Matrix Multiply 2x2 * 2x2 float
+; CHECK-LABEL: ; -- Begin function test_matrix_multiply_f32_2x2_2x2
+; CHECK:       %[[A:[0-9]+]] = OpCompositeInsert %[[V4F32_ID]] {{.*}} {{.*}} 3
+; CHECK:       %[[B:[0-9]+]] = OpCompositeInsert %[[V4F32_ID]] {{.*}} {{.*}} 3
+; CHECK-DAG:   %[[B_Col0:[0-9]+]] = OpVectorShuffle %[[V2F32_ID]] %[[B]] %[[#]] 0 1
+; CHECK-DAG:   %[[B_Col1:[0-9]+]] = OpVectorShuffle %[[V2F32_ID]] %[[B]] %[[#]] 2 3
+; CHECK-DAG:   %[[A_Row0:[0-9]+]] = OpVectorShuffle %[[V2F32_ID]] %[[A]] %[[A]] 0 2
+; CHECK-DAG:   %[[A_Row1:[0-9]+]] = OpVectorShuffle %[[V2F32_ID]] %[[A]] %[[A]] 1 3
+; CHECK-DAG:   %[[C00:[0-9]+]] = OpDot %[[Float_ID]] %[[A_Row0]] %[[B_Col0]]
+; CHECK-DAG:   %[[C10:[0-9]+]] = OpDot %[[Float_ID]] %[[A_Row1]] %[[B_Col0]]
+; CHECK-DAG:   %[[C01:[0-9]+]] = OpDot %[[Float_ID]] %[[A_Row0]] %[[B_Col1]]
+; CHECK-DAG:   %[[C11:[0-9]+]] = OpDot %[[Float_ID]] %[[A_Row1]] %[[B_Col1]]
+; CHECK:       OpCompositeConstruct %[[V4F32_ID]] %[[C00]] %[[C10]] %[[C01]] %[[C11]]
+define internal void @test_matrix_multiply_f32_2x2_2x2() {
+  %1 = load <4 x float>, ptr addrspace(10) @private_v4f32
+  %2 = load <4 x float>, ptr addrspace(10) @private_v4f32
+  %3 = call <4 x float> @llvm.matrix.multiply.v4f32.v4f32.v4f32(<4 x float> %1, <4 x float> %2, i32 2, i32 2, i32 2)
+  store <4 x float> %3, ptr addrspace(10) @private_v4f32
+  ret void
+}
+
+; Test Matrix Multiply 2x2 * 2x2 int
+; CHECK-LABEL: ; -- Begin function test_matrix_multiply_i32_2x2_2x2
+; CHECK:       %[[A:[0-9]+]] = OpCompositeInsert %[[V4I32_ID]] {{.*}} {{.*}} 3
+; CHECK:       %[[B:[0-9]+]] = OpCompositeInsert %[[V4I32_ID]] {{.*}} {{.*}} 3
+; CHECK-DAG:   %[[B_Col0:[0-9]+]] = OpVectorShuffle %[[V2I32_ID]] %[[B]] %[[#]] 0 1
+; CHECK-DAG:   %[[B_Col1:[0-9]+]] = OpVectorShuffle %[[V2I32_ID]] %[[B]] %[[#]] 2 3
+; CHECK-DAG:   %[[A_Row0:[0-9]+]] = OpVectorShuffle %[[V2I32_ID]] %[[A]] %[[A]] 0 2
+; CHECK-DAG:   %[[A_Row1:[0-9]+]] = OpVectorShuffle %[[V2I32_ID]] %[[A]] %[[A]] 1 3
+;
+; -- C00 = dot(A_Row0, B_Col0)
+; VK1_1-DAG:   %[[Mul00:[0-9]+]] = OpIMul %[[V2I32_ID]] %[[A_Row0]] %[[B_Col0]]
+; VK1_1-DAG:   %[[E00_0:[0-9]+]] = OpCompositeExtract %[[Int_ID]] %[[Mul00]] 0
+; VK1_1-DAG:   %[[E00_1:[0-9]+]] = OpCompositeExtract %[[Int_ID]] %[[Mul00]] 1
+; VK1_1-DAG:   %[[C00:[0-9]+]] = OpIAdd %[[Int_ID]] %[[E00_0]] %[[E00_1]]
+; VK1_3-DAG:   %[[C00:[0-9]+]] = OpUDot %[[Int_ID]] %[[A_Row0]] %[[B_Col0]]
+;
+; -- C10 = dot(A_Row1, B_Col0)
+; VK1_1-DAG:   %[[Mul10:[0-9]+]] = OpIMul %[[V2I32_ID]] %[[A_Row1]] %[[B_Col0]]
+; VK1_1-DAG:   %[[E10_0:[0-9]+]] = OpCompositeExtract %[[Int_ID]] %[[Mul10]] 0
+; VK1_1-DAG:   %[[E10_1:[0-9]+]] = OpCompositeExtract %[[Int_ID]] %[[Mul10]] 1
+; VK1_1-DAG:   %[[C10:[0-9]+]] = OpIAdd %[[Int_ID]] %[[E10_0]] %[[E10_1]]
+; VK1_3-DAG:   %[[C10:[0-9]+]] = OpUDot %[[Int_ID]] %[[A_Row1]] %[[B_Col0]]
+;
+; -- C11 = dot(A_Row1, B_Col1)
+; VK1_1-DAG:   %[[Mul11:[0-9]+]] = OpIMul %[[V2I32_ID]] %[[A_Row1]] %[[B_Col1]]
+; VK1_1-DAG:   %[[E11_0:[0-9]+]] = OpCompositeExtract %[[Int_ID]] %[[Mul11]] 0
+; VK1_1-DAG:   %[[E11_1:[0-9]+]] = OpCompositeExtract %[[Int_ID]] %[[Mul11]] 1
+; VK1_1-DAG:   %[[C11:[0-9]+]] = OpIAdd %[[Int_ID]] %[[E11_0]] %[[E11_1]]
+; VK1_3-DAG:   %[[C11:[0-9]+]] = OpUDot %[[Int_ID]] %[[A_Row1]] %[[B_Col1]]
+;
+; -- C01 = dot(A_Row0, B_Col1)
+; VK1_1-DAG:   %[[Mul01:[0-9]+]] = OpIMul %[[V2I32_ID]] %[[A_Row0]] %[[B_Col1]]
+; VK1_1-DAG:   %[[E01_0:[0-9]+]] = OpCompositeExtract %[[Int_ID]] %[[Mul01]] 0
+; VK1_1-DAG:   %[[E01_1:[0-9]+]] = OpCompositeExtract %[[Int_ID]] %[[Mul01]] 1
+; VK1_1-DAG:   %[[C01:[0-9]+]] = OpIAdd %[[Int_ID]] %[[E01_0]] %[[E01_1]]
+; VK1_3-DAG:   %[[C01:[0-9]+]] = OpUDot %[[Int_ID]] %[[A_Row0]] %[[B_Col1]]
+;
+; CHECK:       OpCompositeConstruct %[[V4I32_ID]] %[[C00]] %[[C10]] %[[C01]] %[[C11]]
+define internal void @test_matrix_multiply_i32_2x2_2x2() {
+  %1 = load <4 x i32>, ptr addrspace(10) @private_v4i32
+  %2 = load <4 x i32>, ptr addrspace(10) @private_v4i32
+  %3 = call <4 x i32> @llvm.matrix.multiply.v4i32.v4i32.v4i32(<4 x i32> %1, <4 x i32> %2, i32 2, i32 2, i32 2)
+  store <4 x i32> %3, ptr addrspace(10) @private_v4i32
+  ret void
+}
+
+; Test Matrix Multiply 2x3 * 3x2 float (Result 2x2 float)
+; CHECK-LABEL: ; -- Begin function test_matrix_multiply_f32_2x3_3x2
+; CHECK-DAG:   %[[B:[0-9]+]] = OpCompositeInsert %[[V4F32_ID]]
+; CHECK-DAG:   %[[A:[0-9]+]] = OpCompositeInsert %[[V4F32_ID]]
+;
+; CHECK-DAG:   %[[B_Col0:[0-9]+]] = OpCompositeConstruct %[[V3F32_ID]]
+; CHECK-DAG:   %[[B_Col1:[0-9]+]] = OpCompositeConstruct %[[V3F32_ID]]
+; CHECK-DAG:   %[[A_Row0:[0-9]+]] = OpCompositeConstruct %[[V3F32_ID]]
+; CHECK-DAG:   %[[A_Row1:[0-9]+]] = OpCompositeConstruct %[[V3F32_ID]]
+;
+; CHECK-DAG:   %[[C00:[0-9]+]] = OpDot %[[Float_ID]] %[[A_Row0]] %[[B_Col0]]
+; CHECK-DAG:   %[[C10:[0-9]+]] = OpDot %[[Float_ID]] %[[A_Row1]] %[[B_Col0]]
+; CHECK-DAG:   %[[C01:[0-9]+]] = OpDot %[[Float_ID]] %[[A_Row0]] %[[B_Col1]]
+; CHECK-DAG:   %[[C11:[0-9]+]] = OpDot %[[Float_ID]] %[[A_Row1]] %[[B_Col1]]
+; CHECK:       OpCompositeConstruct %[[V4F32_ID]] %[[C00]] %[[C10]] %[[C01]] %[[C11]]
+define internal void @test_matrix_multiply_f32_2x3_3x2() {
+  %1 = load <6 x float>, ptr addrspace(10) @private_v6f32
+  %2 = load <6 x float>, ptr addrspace(10) @private_v6f32
+  %3 = call <4 x float> @llvm.matrix.multiply.v4f32.v6f32.v6f32(<6 x float> %1, <6 x float> %2, i32 2, i32 3, i32 2)
+  store <4 x float> %3, ptr addrspace(10) @private_v4f32
+  ret void
+}
+
+; Test Matrix Multiply 2x2 * 2x1 float (Result 2x1 vector)
+; CHECK-LABEL: ; -- Begin function test_matrix_multiply_f32_2x2_2x1_vec
+; CHECK:       %[[A:[0-9]+]] = OpCompositeInsert %[[V4F32_ID]] {{.*}} {{.*}} 3
+; CHECK:       %[[B:[0-9]+]] = OpCompositeInsert %[[V2F32_ID]] {{.*}} {{.*}} 1
+; CHECK-DAG:   %[[A_Row0:[0-9]+]] = OpVectorShuffle %[[V2F32_ID]] %[[A]] %[[A]] 0 2
+; CHECK-DAG:   %[[A_Row1:[0-9]+]] = OpVectorShuffle %[[V2F32_ID]] %[[A]] %[[A]] 1 3
+; CHECK-DAG:   %[[C00:[0-9]+]] = OpDot %[[Float_ID]] %[[A_Row0]] %[[B]]
+; CHECK-DAG:   %[[C10:[0-9]+]] = OpDot %[[Float_ID]] %[[A_Row1]] %[[B]]
+; CHECK:       OpCompositeConstruct %[[V2F32_ID]] %[[C00]] %[[C10]]
+define internal void @test_matrix_multiply_f32_2x2_2x1_vec() {
+  %1 = load <4 x float>, ptr addrspace(10) @private_v4f32
+  %2 = load <2 x float>, ptr addrspace(10) @private_v2f32
+  %3 = call <2 x float> @llvm.matrix.multiply.v2f32.v4f32.v2f32(<4 x float> %1, <2 x float> %2, i32 2, i32 2, i32 1)
+  store <2 x float> %3, ptr addrspace(10) @private_v2f32
+  ret void
+}
+
+; Test Matrix Multiply 1x2 * 2x2 float (Result 1x2 vector)
+; CHECK-LABEL: ; -- Begin function test_matrix_multiply_f32_1x2_2x2_vec
+; CHECK:       %[[A:[0-9]+]] = OpCompositeInsert %[[V2F32_ID]] {{.*}} {{.*}} 1
+; CHECK:       %[[B:[0-9]+]] = OpCompositeInsert %[[V4F32_ID]] {{.*}} {{.*}} 3
+; CHECK-DAG:   %[[B_Col0:[0-9]+]] = OpVectorShuffle %[[V2F32_ID]] %[[B]] %[[#]] 0 1
+; CHECK-DAG:   %[[B_Col1:[0-9]+]] = OpVectorShuffle %[[V2F32_ID]] %[[B]] %[[#]] 2 3
+; CHECK-DAG:   %[[C00:[0-9]+]] = OpDot %[[Float_ID]] %[[A]] %[[B_Col0]]
+; CHECK-DAG:   %[[C01:[0-9]+]] = OpDot %[[Float_ID]] %[[A]] %[[B_Col1]]
+; CHECK:       OpCompositeConstruct %[[V2F32_ID]] %[[C00]] %[[C01]]
+define internal void @test_matrix_multiply_f32_1x2_2x2_vec() {
+  %1 = load <2 x float>, ptr addrspace(10) @private_v2f32
+  %2 = load <4 x float>, ptr addrspace(10) @private_v4f32
+  %3 = call <2 x float> @llvm.matrix.multiply.v2f32.v2f32.v4f32(<2 x float> %1, <4 x float> %2, i32 1, i32 2, i32 2)
+  store <2 x float> %3, ptr addrspace(10) @private_v2f32
+  ret void
+}
+
+; Test Matrix Multiply 1x2 * 2x1 float (Result 1x1 scalar - OpDot)
+; TODO(171175): The SPIR-V backend does not legalize single element vectors.
+; CHECK-DISABLE: ; -- Begin function test_matrix_multiply_f32_1x2_2x1_scalar
+; define internal void @test_matrix_multiply_f32_1x2_2x1_scalar() {
+;   %1 = load <2 x float>, ptr addrspace(10) @private_v2f32
+;   %2 = load <2 x float>, ptr addrspace(10) @private_v2f32
+;   %3 = call <1 x float> @llvm.matrix.multiply.v1f32.v2f32.v2f32(<2 x float> %1, <2 x float> %2, i32 1, i32 2, i32 1)
+;   store <1 x float> %3, ptr addrspace(10) @private_v1f32
+;   ret void
+; }
+
+define void @main() #0 {
+  ret void
+}
+
+declare <4 x float> @llvm.matrix.multiply.v4f32.v4f32.v4f32(<4 x float>, <4 x float>, i32, i32, i32)
+declare <4 x i32> @llvm.matrix.multiply.v4i32.v4i32.v4i32(<4 x i32>, <4 x i32>, i32, i32, i32)
+declare <4 x float> @llvm.matrix.multiply.v4f32.v6f32.v6f32(<6 x float>, <6 x float>, i32, i32, i32)
+declare <2 x float> @llvm.matrix.multiply.v2f32.v4f32.v2f32(<4 x float>, <2 x float>, i32, i32, i32)
+declare <2 x float> @llvm.matrix.multiply.v2f32.v2f32.v4f32(<2 x float>, <4 x float>, i32, i32, i32)
+; declare <1 x float> @llvm.matrix.multiply.v1f32.v2f32.v2f32(<2 x float>, <2 x float>, i32, i32, i32)
+
+attributes #0 = { "hlsl.numthreads"="1,1,1" "hlsl.shader"="compute" }
diff --git a/llvm/test/CodeGen/SPIRV/llvm-intrinsics/matrix-transpose.ll b/llvm/test/CodeGen/SPIRV/llvm-intrinsics/matrix-transpose.ll
new file mode 100644
index 0000000000000..3474fecae9957
--- /dev/null
+++ b/llvm/test/CodeGen/SPIRV/llvm-intrinsics/matrix-transpose.ll
@@ -0,0 +1,124 @@
+; RUN: llc -O0 -mtriple=spirv1.6-unknown-vulkan1.3 %s -o - | FileCheck %s
+; RUN: %if spirv-tools %{ llc -O0 -mtriple=spirv1.6-unknown-vulkan1.3 %s -o - -filetype=obj | spirv-val --target-env vulkan1.3 %}
+
+@private_v4f32 = internal addrspace(10) global [4 x float] poison
+@private_v6f32 = internal addrspace(10) global [6 x float] poison
+@private_v1f32 = internal addrspace(10) global [1 x float] poison
+
+; CHECK-DAG: %[[Float_ID:[0-9]+]] = OpTypeFloat 32
+; CHECK-DAG: %[[V4F32_ID:[0-9]+]] = OpTypeVector %[[Float_ID]] 4
+
+; Test Transpose 2x2 float
+; CHECK-LABEL: ; -- Begin function test_transpose_f32_2x2
+; CHECK: %[[Shuffle:[0-9]+]] = OpVectorShuffle %[[V4F32_ID]] {{.*}} 0 2 1 3
+define internal void @test_transpose_f32_2x2() {
+ %1 = load <4 x float>, ptr addrspace(10) @private_v4f32
+ %2 = call <4 x float> @llvm.matrix.transpose.v4f32.i32(<4 x float> %1, i32 2, i32 2)
+ store <4 x float> %2, ptr addrspace(10) @private_v4f32
+ ret void
+}
+
+; Test Transpose 2x3 float (Result is 3x2 float)
+; Note: More combines should be added to the pre-legalizer combiner so that it can remove the inserts and extracts.
+;       This test should then reduce to a series of access chains, loads, and stores.
+; CHECK-LABEL: ; -- Begin function test_transpose_f32_2x3
+define internal void @test_transpose_f32_2x3() {
+; -- Load input 2x3 matrix elements
+; CHECK: %[[AccessChain1:[0-9]+]] = OpAccessChain %[[_ptr_Float_ID:[0-9]+]] %[[private_v6f32:[0-9]+]] %[[int_0:[0-9]+]]
+; CHECK: %[[Load1:[0-9]+]] = OpLoad %[[Float_ID]] %[[AccessChain1]]
+; CHECK: %[[AccessChain2:[0-9]+]] = OpAccessChain %[[_ptr_Float_ID]] %[[private_v6f32]] %[[int_1:[0-9]+]]
+; CHECK: %[[Load2:[0-9]+]] = OpLoad %[[Float_ID]] %[[AccessChain2]]
+; CHECK: %[[AccessChain3:[0-9]+]] = OpAccessChain %[[_ptr_Float_ID]] %[[private_v6f32]] %[[int_2:[0-9]+]]
+; CHECK: %[[Load3:[0-9]+]] = OpLoad %[[Float_ID]] %[[AccessChain3]]
+; CHECK: %[[AccessChain4:[0-9]+]] = OpAccessChain %[[_ptr_Float_ID]] %[[private_v6f32]] %[[int_3:[0-9]+]]
+; CHECK: %[[Load4:[0-9]+]] = OpLoad %[[Float_ID]] %[[AccessChain4]]
+; CHECK: %[[AccessChain5:[0-9]+]] = OpAccessChain %[[_ptr_Float_ID]] %[[private_v6f32]] %[[int_4:[0-9]+]]
+; CHECK: %[[Load5:[0-9]+]] = OpLoad %[[Float_ID]] %[[AccessChain5]]
+; CHECK: %[[AccessChain6:[0-9]+]] = OpAccessChain %[[_ptr_Float_ID]] %[[private_v6f32]] %[[int_5:[0-9]+]]
+; CHECK: %[[Load6:[0-9]+]] = OpLoad %[[Float_ID]] %[[AccessChain6]]
+;
+; -- Construct intermediate vectors
+; CHECK: %[[CompositeInsert1:[0-9]+]] = OpCompositeInsert %[[V4F32_ID]] %[[Load1]] %[[undef_V4F32_ID:[0-9]+]] 0
+; CHECK: %[[CompositeInsert2:[0-9]+]] = OpCompositeInsert %[[V4F32_ID]] %[[Load2]] %[[CompositeInsert1]] 1
+; CHECK: %[[CompositeInsert3:[0-9]+]] = OpCompositeInsert %[[V4F32_ID]] %[[Load3]] %[[CompositeInsert2]] 2
+; CHECK: %[[CompositeInsert4:[0-9]+]] = OpCompositeInsert %[[V4F32_ID]] %[[Load4]] %[[CompositeInsert3]] 3
+; CHECK: %[[CompositeInsert5:[0-9]+]] = OpCompositeInsert %[[V4F32_ID]] %[[Load5]] %[[undef_V4F32_ID]] 0
+; CHECK: %[[CompositeInsert6:[0-9]+]] = OpCompositeInsert %[[V4F32_ID]] %[[Load6]] %[[CompositeInsert5]] 1
+  %1 = load <6 x float>, ptr addrspace(10) @private_v6f32
+
+; -- Extract elements for transposition
+; CHECK: %[[Extract1:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[CompositeInsert4]] 0
+; CHECK: %[[Extract2:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[CompositeInsert4]] 2
+; CHECK: %[[Extract3:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[CompositeInsert6]] 0
+; CHECK: %[[Extract4:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[CompositeInsert4]] 1
+; CHECK: %[[Extract5:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[CompositeInsert4]] 3
+; CHECK: %[[Extract6:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[CompositeInsert6]] 1
+  %2 = call <6 x float> @llvm.matrix.transpose.v6f32.i32(<6 x float> %1, i32 2, i32 3)
+
+; -- Store output 3x2 matrix elements
+; CHECK: %[[AccessChain7:[0-9]+]] = OpAccessChain %[[_ptr_Float_ID]] %[[private_v6f32]] %[[int_0]]
+; CHECK: %[[CompositeConstruct1:[0-9]+]] = OpCompositeConstruct %[[V4F32_ID]] %[[Extract1]] %[[Extract2]] %[[Extract3]] %[[Extract4]]
+; CHECK: %[[Extract7:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[CompositeConstruct1]] 0
+; CHECK: OpStore %[[AccessChain7]] %[[Extract7]]
+; CHECK: %[[AccessChain8:[0-9]+]] = OpAccessChain %[[_ptr_Float_ID]] %[[private_v6f32]] %[[int_1]]
+; CHECK: %[[CompositeConstruct2:[0-9]+]] = OpCompositeConstruct %[[V4F32_ID]] %[[Extract1]] %[[Extract2]] %[[Extract3]] %[[Extract4]]
+; CHECK: %[[Extract8:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[CompositeConstruct2]] 1
+; CHECK: OpStore %[[AccessChain8]] %[[Extract8]]
+; CHECK: %[[AccessChain9:[0-9]+]] = OpAccessChain %[[_ptr_Float_ID]] %[[private_v6f32]] %[[int_2]]
+; CHECK: %[[CompositeConstruct3:[0-9]+]] = OpCompositeConstruct %[[V4F32_ID]] %[[Extract1]] %[[Extract2]] %[[Extract3]] %[[Extract4]]
+; CHECK: %[[Extract9:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[CompositeConstruct3]] 2
+; CHECK: OpStore %[[AccessChain9]] %[[Extract9]]
+; CHECK: %[[AccessChain10:[0-9]+]] = OpAccessChain %[[_ptr_Float_ID]] %[[private_v6f32]] %[[int_3]]
+; CHECK: %[[CompositeConstruct4:[0-9]+]] = OpCompositeConstruct %[[V4F32_ID]] %[[Extract1]] %[[Extract2]] %[[Extract3]] %[[Extract4]]
+; CHECK: %[[Extract10:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[CompositeConstruct4]] 3
+; CHECK: OpStore %[[AccessChain10]] %[[Extract10]]
+; CHECK: %[[AccessChain11:[0-9]+]] = OpAccessChain %[[_ptr_Float_ID]] %[[private_v6f32]] %[[int_4]]
+; CHECK: %[[CompositeConstruct5:[0-9]+]] = OpCompositeConstruct %[[V4F32_ID]] %[[Extract5]] %[[Extract6]] %[[undef_Float_ID:[0-9]+]] %[[undef_Float_ID]]
+; CHECK: %[[Extract11:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[CompositeConstruct5]] 0
+; CHECK: OpStore %[[AccessChain11]] %[[Extract11]]
+; CHECK: %[[AccessChain12:[0-9]+]] = OpAccessChain %[[_ptr_Float_ID]] %[[private_v6f32]] %[[int_5]]
+; CHECK: %[[CompositeConstruct6:[0-9]+]] = OpCompositeConstruct %[[V4F32_ID]] %[[Extract5]] %[[Extract6]] %[[undef_Float_ID]] %[[undef_Float_ID]]
+; CHECK: %[[Extract12:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[CompositeConstruct6]] 1
+; CHECK: OpStore %[[AccessChain12]] %[[Extract12]]
+  store <6 x float> %2, ptr addrspace(10) @private_v6f32
+  ret void
+}
+
+; Test Transpose 1x4 float (Result is 4x1 float), should be a copy (vector of 4 floats)
+; CHECK-LABEL: ; -- Begin function test_transpose_f32_1x4_to_4x1
+; CHECK: %[[Shuffle:[0-9]+]] = OpVectorShuffle %[[V4F32_ID]] {{.*}} 0 1 2 3
+define internal void @test_transpose_f32_1x4_to_4x1() {
+ %1 = load <4 x float>, ptr addrspace(10) @private_v4f32
+ %2 = call <4 x float> @llvm.matrix.transpose.v4f32.i32(<4 x float> %1, i32 1, i32 4)
+ store <4 x float> %2, ptr addrspace(10) @private_v4f32
+ ret void
+}
+
+; Test Transpose 4x1 float (Result is 1x4 float), should be a copy (vector of 4 floats)
+; CHECK-LABEL: ; -- Begin function test_transpose_f32_4x1_to_1x4
+; CHECK: %[[Shuffle:[0-9]+]] = OpVectorShuffle %[[V4F32_ID]] {{.*}} 0 1 2 3
+define internal void @test_transpose_f32_4x1_to_1x4() {
+ %1 = load <4 x float>, ptr addrspace(10) @private_v4f32
+ %2 = call <4 x float> @llvm.matrix.transpose.v4f32.i32(<4 x float> %1, i32 4, i32 1)
+ store <4 x float> %2, ptr addrspace(10) @private_v4f32
+ ret void
+}
+
+; Test Transpose 1x1 float (Result is 1x1 float), should be a copy (scalar float)
+; TODO(171175): The SPIR-V backend does not seem to be legalizing single element vectors.
+; define internal void @test_transpose_f32_1x1() {
+;   %1 = load <1 x float>, ptr addrspace(10) @private_v1f32
+;   %2 = call <1 x float> @llvm.matrix.transpose.v1f32.i32(<1 x float> %1, i32 1, i32 1)
+;   store <1 x float> %2, ptr addrspace(10) @private_v1f32
+;   ret void
+; }
+
+define void @main() #0 {
+  ret void
+}
+
+declare <4 x float> @llvm.matrix.transpose.v4f32.i32(<4 x float>, i32, i32)
+declare <6 x float> @llvm.matrix.transpose.v6f32.i32(<6 x float>, i32, i32)
+; declare <1 x float> @llvm.matrix.transpose.v1f32.i32(<1 x float>, i32, i32)
+
+attributes #0 = { "hlsl.numthreads"="1,1,1" "hlsl.shader"="compute" }

>From 3276155a621b96b800ce86fba57575fbc88b85b1 Mon Sep 17 00:00:00 2001
From: Steven Perron <stevenperron at google.com>
Date: Mon, 15 Dec 2025 10:34:34 -0500
Subject: [PATCH 10/10] [SPIRV] Support non-constant indices for vector
 insert/extract

This patch updates the legalization of spv_insertelt and spv_extractelt to
handle non-constant (dynamic) indices. When a dynamic index is encountered, the
vector is spilled to the stack, and the element is accessed via OpAccessChain
(lowered from spv_gep).

This patch also adds custom legalization for G_STORE to scalarize vector stores
and refines the legalization rules for G_LOAD, G_STORE, and G_BUILD_VECTOR.

Fixes https://github.com/llvm/llvm-project/issues/170534
---
 llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp  | 215 +++++++++++++++++-
 llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp  |  43 +++-
 .../SPIRV/legalization/load-store-global.ll   |  96 +++-----
 .../spv-extractelt-legalization.ll            |  66 ++++++
 .../SPIRV/legalization/vector-arithmetic-6.ll |  99 ++++----
 .../SPIRV/llvm-intrinsics/matrix-multiply.ll  |  19 +-
 .../SPIRV/llvm-intrinsics/matrix-transpose.ll |  48 ++--
 7 files changed, 415 insertions(+), 171 deletions(-)
 create mode 100644 llvm/test/CodeGen/SPIRV/legalization/spv-extractelt-legalization.ll

diff --git a/llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp b/llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp
index a7626c3d43c34..f3e8112de242d 100644
--- a/llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVLegalizerInfo.cpp
@@ -241,10 +241,7 @@ SPIRVLegalizerInfo::SPIRVLegalizerInfo(const SPIRVSubtarget &ST) {
   // Illegal G_UNMERGE_VALUES instructions should be handled
   // during the combine phase.
   getActionDefinitionsBuilder(G_BUILD_VECTOR)
-      .legalIf(vectorElementCountIsLessThanOrEqualTo(0, MaxVectorSize))
-      .fewerElementsIf(vectorElementCountIsGreaterThan(0, MaxVectorSize),
-                       LegalizeMutations::changeElementCountTo(
-                           0, ElementCount::getFixed(MaxVectorSize)));
+      .legalIf(vectorElementCountIsLessThanOrEqualTo(0, MaxVectorSize));
 
   // When entering the legalizer, there should be no G_BITCAST instructions.
   // They should all be calls to the `spv_bitcast` intrinsic. The call to
@@ -305,9 +302,14 @@ SPIRVLegalizerInfo::SPIRVLegalizerInfo(const SPIRVSubtarget &ST) {
                                   all(typeIsNot(0, p9), typeIs(1, p9))))
       .legalForCartesianProduct(allPtrs, allPtrs);
 
+  // TODO: Should we legalize unsupported scalar sizes like s5 here instead
+  // of handling them in the instruction selector?
   getActionDefinitionsBuilder({G_LOAD, G_STORE})
       .unsupportedIf(typeIs(1, p9))
-      .legalIf(typeInSet(1, allPtrs));
+      .legalForCartesianProduct(allowedVectorTypes, allPtrs)
+      .legalForCartesianProduct(allPtrs, allPtrs)
+      .legalIf(isScalar(0))
+      .custom();
 
   getActionDefinitionsBuilder({G_SMIN, G_SMAX, G_UMIN, G_UMAX, G_ABS,
                                G_BITREVERSE, G_SADDSAT, G_UADDSAT, G_SSUBSAT,
@@ -530,6 +532,59 @@ static Register convertPtrToInt(Register Reg, LLT ConvTy, SPIRVType *SpvType,
   return ConvReg;
 }
 
+static bool legalizeLoad(LegalizerHelper &Helper, MachineInstr &MI,
+                         SPIRVGlobalRegistry *GR) {
+  return true;
+}
+
+static bool legalizeStore(LegalizerHelper &Helper, MachineInstr &MI,
+                          SPIRVGlobalRegistry *GR) {
+  MachineRegisterInfo &MRI = MI.getMF()->getRegInfo();
+  MachineIRBuilder &MIRBuilder = Helper.MIRBuilder;
+  Register ValReg = MI.getOperand(0).getReg();
+  Register PtrReg = MI.getOperand(1).getReg();
+  LLT ValTy = MRI.getType(ValReg);
+
+  assert(ValTy.isVector() && "Expected vector store");
+
+  SmallVector<Register, 8> SplitRegs;
+  LLT EltTy = ValTy.getElementType();
+  unsigned NumElts = ValTy.getNumElements();
+
+  for (unsigned i = 0; i < NumElts; ++i)
+    SplitRegs.push_back(MRI.createGenericVirtualRegister(EltTy));
+
+  MIRBuilder.buildUnmerge(SplitRegs, ValReg);
+
+  LLT PtrTy = MRI.getType(PtrReg);
+  auto Zero = MIRBuilder.buildConstant(LLT::scalar(32), 0);
+
+  for (unsigned i = 0; i < NumElts; ++i) {
+    auto Idx = MIRBuilder.buildConstant(LLT::scalar(32), i);
+    Register EltPtr = MRI.createGenericVirtualRegister(PtrTy);
+
+    MIRBuilder.buildIntrinsic(Intrinsic::spv_gep, ArrayRef<Register>{EltPtr})
+        .addImm(1) // InBounds
+        .addUse(PtrReg)
+        .addUse(Zero.getReg(0))
+        .addUse(Idx.getReg(0));
+
+    MachinePointerInfo EltPtrInfo;
+    Align EltAlign = Align(1);
+    if (!MI.memoperands_empty()) {
+      MachineMemOperand *MMO = *MI.memoperands_begin();
+      EltPtrInfo =
+          MMO->getPointerInfo().getWithOffset(i * EltTy.getSizeInBytes());
+      EltAlign = commonAlignment(MMO->getAlign(), i * EltTy.getSizeInBytes());
+    }
+
+    MIRBuilder.buildStore(SplitRegs[i], EltPtr, EltPtrInfo, EltAlign);
+  }
+
+  MI.eraseFromParent();
+  return true;
+}
+
 bool SPIRVLegalizerInfo::legalizeCustom(
     LegalizerHelper &Helper, MachineInstr &MI,
     LostDebugLocObserver &LocObserver) const {
@@ -570,6 +625,10 @@ bool SPIRVLegalizerInfo::legalizeCustom(
     }
     return true;
   }
+  case TargetOpcode::G_LOAD:
+    return legalizeLoad(Helper, MI, GR);
+  case TargetOpcode::G_STORE:
+    return legalizeStore(Helper, MI, GR);
   }
 }
 
@@ -612,11 +671,61 @@ bool SPIRVLegalizerInfo::legalizeIntrinsic(LegalizerHelper &Helper,
     LLT DstTy = MRI.getType(DstReg);
 
     if (needsVectorLegalization(DstTy, ST)) {
       Register SrcReg = MI.getOperand(2).getReg();
       Register ValReg = MI.getOperand(3).getReg();
-      Register IdxReg = MI.getOperand(4).getReg();
-      MIRBuilder.buildInsertVectorElement(DstReg, SrcReg, ValReg, IdxReg);
+      LLT SrcTy = MRI.getType(SrcReg);
+      MachineOperand &IdxOperand = MI.getOperand(4);
+
+      if (getImm(IdxOperand, &MRI)) {
+        uint64_t IdxVal = foldImm(IdxOperand, &MRI);
+        if (IdxVal < SrcTy.getNumElements()) {
+          SmallVector<Register, 8> Regs;
+          SPIRVType *ElementType = GR->getScalarOrVectorComponentType(
+              GR->getSPIRVTypeForVReg(DstReg));
+          LLT ElementLLTTy = GR->getRegType(ElementType);
+          for (unsigned I = 0, E = SrcTy.getNumElements(); I < E; ++I) {
+            Register Reg = MRI.createGenericVirtualRegister(ElementLLTTy);
+            MRI.setRegClass(Reg, GR->getRegClass(ElementType));
+            GR->assignSPIRVTypeToVReg(ElementType, Reg, *MI.getMF());
+            Regs.push_back(Reg);
+          }
+          MIRBuilder.buildUnmerge(Regs, SrcReg);
+          Regs[IdxVal] = ValReg;
+          MIRBuilder.buildBuildVector(DstReg, Regs);
+          MI.eraseFromParent();
+          return true;
+        }
+      }
+
+      LLT EltTy = SrcTy.getElementType();
+      Align VecAlign = Helper.getStackTemporaryAlignment(SrcTy);
+
+      MachinePointerInfo PtrInfo;
+      auto StackTemp = Helper.createStackTemporary(
+          TypeSize::getFixed(SrcTy.getSizeInBytes()), VecAlign, PtrInfo);
+
+      MIRBuilder.buildStore(SrcReg, StackTemp, PtrInfo, VecAlign);
+
+      Register IdxReg = IdxOperand.getReg();
+      LLT PtrTy = MRI.getType(StackTemp.getReg(0));
+      Register EltPtr = MRI.createGenericVirtualRegister(PtrTy);
+      auto Zero = MIRBuilder.buildConstant(LLT::scalar(32), 0);
+
+      MIRBuilder.buildIntrinsic(Intrinsic::spv_gep, ArrayRef<Register>{EltPtr})
+          .addImm(1) // InBounds
+          .addUse(StackTemp.getReg(0))
+          .addUse(Zero.getReg(0))
+          .addUse(IdxReg);
+
+      MachinePointerInfo EltPtrInfo =
+          MachinePointerInfo(PtrTy.getAddressSpace());
+      Align EltAlign = Helper.getStackTemporaryAlignment(EltTy);
+      MIRBuilder.buildStore(ValReg, EltPtr, EltPtrInfo, EltAlign);
+
+      MIRBuilder.buildLoad(DstReg, StackTemp, PtrInfo, VecAlign);
       MI.eraseFromParent();
+      return true;
     }
     return true;
   } else if (IntrinsicID == Intrinsic::spv_extractelt) {
@@ -625,9 +734,97 @@ bool SPIRVLegalizerInfo::legalizeIntrinsic(LegalizerHelper &Helper,
 
     if (needsVectorLegalization(SrcTy, ST)) {
       Register DstReg = MI.getOperand(0).getReg();
-      Register IdxReg = MI.getOperand(3).getReg();
-      MIRBuilder.buildExtractVectorElement(DstReg, SrcReg, IdxReg);
+      MachineOperand &IdxOperand = MI.getOperand(3);
+
+      if (getImm(IdxOperand, &MRI)) {
+        uint64_t IdxVal = foldImm(IdxOperand, &MRI);
+        if (IdxVal < SrcTy.getNumElements()) {
+          LLT DstTy = MRI.getType(DstReg);
+          SmallVector<Register, 8> Regs;
+          SPIRVType *DstSpvTy = GR->getSPIRVTypeForVReg(DstReg);
+          for (unsigned I = 0, E = SrcTy.getNumElements(); I < E; ++I) {
+            if (I == IdxVal) {
+              Regs.push_back(DstReg);
+            } else {
+              Register Reg = MRI.createGenericVirtualRegister(DstTy);
+              MRI.setRegClass(Reg, GR->getRegClass(DstSpvTy));
+              GR->assignSPIRVTypeToVReg(DstSpvTy, Reg, *MI.getMF());
+              Regs.push_back(Reg);
+            }
+          }
+          MIRBuilder.buildUnmerge(Regs, SrcReg);
+          MI.eraseFromParent();
+          return true;
+        }
+      }
+
+      LLT EltTy = SrcTy.getElementType();
+      Align VecAlign = Helper.getStackTemporaryAlignment(SrcTy);
+
+      MachinePointerInfo PtrInfo;
+      auto StackTemp = Helper.createStackTemporary(
+          TypeSize::getFixed(SrcTy.getSizeInBytes()), VecAlign, PtrInfo);
+
+      // Set the type of StackTemp to a pointer to an array of the element type.
+      SPIRVType *SpvSrcTy = GR->getSPIRVTypeForVReg(SrcReg);
+      SPIRVType *EltSpvTy = GR->getScalarOrVectorComponentType(SpvSrcTy);
+      const Type *LLVMEltTy = GR->getTypeForSPIRVType(EltSpvTy);
+      const Type *LLVMArrTy =
+          ArrayType::get(const_cast<Type *>(LLVMEltTy), SrcTy.getNumElements());
+      SPIRVType *ArrSpvTy = GR->getOrCreateSPIRVType(
+          LLVMArrTy, MIRBuilder, SPIRV::AccessQualifier::ReadWrite, true);
+      SPIRVType *PtrToArrSpvTy = GR->getOrCreateSPIRVPointerType(
+          ArrSpvTy, MIRBuilder, SPIRV::StorageClass::Function);
+      setRegClassType(StackTemp.getReg(0), PtrToArrSpvTy, GR, &MRI,
+                      MIRBuilder.getMF());
+
+      // Store the vector elements one by one.
+      SmallVector<Register, 8> Regs;
+      for (unsigned I = 0, E = SrcTy.getNumElements(); I < E; ++I) {
+        Register Reg = MRI.createGenericVirtualRegister(EltTy);
+        MRI.setRegClass(Reg, GR->getRegClass(EltSpvTy));
+        GR->assignSPIRVTypeToVReg(EltSpvTy, Reg, *MI.getMF());
+        Regs.push_back(Reg);
+      }
+      MIRBuilder.buildUnmerge(Regs, SrcReg);
+
+      auto ZeroNew = MIRBuilder.buildConstant(LLT::scalar(32), 0);
+      LLT PtrTyNew = MRI.getType(StackTemp.getReg(0));
+
+      for (unsigned I = 0, E = SrcTy.getNumElements(); I < E; ++I) {
+        auto Idx = MIRBuilder.buildConstant(LLT::scalar(32), I);
+        Register EltPtr = MRI.createGenericVirtualRegister(PtrTyNew);
+        MIRBuilder
+            .buildIntrinsic(Intrinsic::spv_gep, ArrayRef<Register>{EltPtr})
+            .addImm(1) // InBounds
+            .addUse(StackTemp.getReg(0))
+            .addUse(ZeroNew.getReg(0))
+            .addUse(Idx.getReg(0));
+
+        MachinePointerInfo EltPtrInfo =
+            PtrInfo.getWithOffset(I * EltTy.getSizeInBytes());
+        Align EltAlign = commonAlignment(VecAlign, I * EltTy.getSizeInBytes());
+        MIRBuilder.buildStore(Regs[I], EltPtr, EltPtrInfo, EltAlign);
+      }
+
+      Register IdxReg = IdxOperand.getReg();
+      LLT PtrTy = MRI.getType(StackTemp.getReg(0));
+      Register EltPtr = MRI.createGenericVirtualRegister(PtrTy);
+      auto Zero = MIRBuilder.buildConstant(LLT::scalar(32), 0);
+
+      MIRBuilder.buildIntrinsic(Intrinsic::spv_gep, ArrayRef<Register>{EltPtr})
+          .addImm(1) // InBounds
+          .addUse(StackTemp.getReg(0))
+          .addUse(Zero.getReg(0))
+          .addUse(IdxReg);
+
+      MachinePointerInfo EltPtrInfo =
+          MachinePointerInfo(PtrTy.getAddressSpace());
+      Align EltAlign = Helper.getStackTemporaryAlignment(EltTy);
+      MIRBuilder.buildLoad(DstReg, EltPtr, EltPtrInfo, EltAlign);
+
       MI.eraseFromParent();
+      return true;
     }
     return true;
   }
diff --git a/llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp b/llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp
index 5f52f60da37e1..5b4ddc267c9b8 100644
--- a/llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp
+++ b/llvm/lib/Target/SPIRV/SPIRVPostLegalizer.cpp
@@ -17,6 +17,7 @@
 #include "SPIRVSubtarget.h"
 #include "SPIRVUtils.h"
 #include "llvm/CodeGen/GlobalISel/GenericMachineInstrs.h"
+#include "llvm/CodeGen/MachineFrameInfo.h"
 #include "llvm/IR/IntrinsicsSPIRV.h"
 #include "llvm/Support/Debug.h"
 #include <stack>
@@ -107,6 +108,37 @@ static SPIRVType *deduceTypeFromResultRegister(MachineInstr *Use,
   return nullptr;
 }
 
+static SPIRVType *deducePointerTypeFromResultRegister(MachineInstr *Use,
+                                                      Register UseRegister,
+                                                      SPIRVGlobalRegistry *GR,
+                                                      MachineIRBuilder &MIB) {
+  assert(Use->getOpcode() == TargetOpcode::G_LOAD ||
+         Use->getOpcode() == TargetOpcode::G_STORE);
+
+  Register ValueReg = Use->getOperand(0).getReg();
+  SPIRVType *ValueType = GR->getSPIRVTypeForVReg(ValueReg);
+  if (!ValueType)
+    return nullptr;
+
+  return GR->getOrCreateSPIRVPointerType(ValueType, MIB,
+                                         SPIRV::StorageClass::Function);
+}
+
+static SPIRVType *deduceTypeFromPointerOperand(MachineInstr *Use,
+                                               Register UseRegister,
+                                               SPIRVGlobalRegistry *GR,
+                                               MachineIRBuilder &MIB) {
+  assert(Use->getOpcode() == TargetOpcode::G_LOAD ||
+         Use->getOpcode() == TargetOpcode::G_STORE);
+
+  Register PtrReg = Use->getOperand(1).getReg();
+  SPIRVType *PtrType = GR->getSPIRVTypeForVReg(PtrReg);
+  if (!PtrType)
+    return nullptr;
+
+  return GR->getPointeeType(PtrType);
+}
+
 static SPIRVType *deduceTypeFromUses(Register Reg, MachineFunction &MF,
                                      SPIRVGlobalRegistry *GR,
                                      MachineIRBuilder &MIB) {
@@ -135,6 +167,13 @@ static SPIRVType *deduceTypeFromUses(Register Reg, MachineFunction &MF,
     case TargetOpcode::G_STRICT_FMA:
       ResType = deduceTypeFromResultRegister(&Use, Reg, GR, MIB);
       break;
+    case TargetOpcode::G_LOAD:
+    case TargetOpcode::G_STORE:
+      if (Reg == Use.getOperand(1).getReg())
+        ResType = deducePointerTypeFromResultRegister(&Use, Reg, GR, MIB);
+      else
+        ResType = deduceTypeFromPointerOperand(&Use, Reg, GR, MIB);
+      break;
     case TargetOpcode::G_INTRINSIC_W_SIDE_EFFECTS:
     case TargetOpcode::G_INTRINSIC: {
       auto IntrinsicID = cast<GIntrinsic>(Use).getIntrinsicID();
@@ -323,6 +362,7 @@ static void registerSpirvTypeForNewInstructions(MachineFunction &MF,
 
   for (auto *I : Worklist) {
     MachineIRBuilder MIB(*I);
+    LLVM_DEBUG(dbgs() << "Assigning default type to results in " << *I);
     for (unsigned Idx = 0; Idx < I->getNumDefs(); ++Idx) {
       Register ResVReg = I->getOperand(Idx).getReg();
       if (GR->getSPIRVTypeForVReg(ResVReg))
@@ -337,9 +377,6 @@ static void registerSpirvTypeForNewInstructions(MachineFunction &MF,
       } else {
         ResType = GR->getOrCreateSPIRVIntegerType(ResLLT.getSizeInBits(), MIB);
       }
-      LLVM_DEBUG(dbgs() << "Could not determine type for " << ResVReg
-                        << ", defaulting to " << *ResType << "\n");
-
       setRegClassType(ResVReg, ResType, GR, &MRI, MF, true);
     }
   }
diff --git a/llvm/test/CodeGen/SPIRV/legalization/load-store-global.ll b/llvm/test/CodeGen/SPIRV/legalization/load-store-global.ll
index 19b39ff59809a..39e72fb9b1f51 100644
--- a/llvm/test/CodeGen/SPIRV/legalization/load-store-global.ll
+++ b/llvm/test/CodeGen/SPIRV/legalization/load-store-global.ll
@@ -67,72 +67,40 @@ entry:
 ; CHECK-DAG: %[[#VAL14:]] = OpLoad %[[#int]] %[[#PTR14]] Aligned 4
 ; CHECK-DAG: %[[#PTR15:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C15]]
 ; CHECK-DAG: %[[#VAL15:]] = OpLoad %[[#int]] %[[#PTR15]] Aligned 4
-; CHECK-DAG: %[[#INS0:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL0]] %[[#UNDEF:]] 0
-; CHECK-DAG: %[[#INS1:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL1]] %[[#INS0]] 1
-; CHECK-DAG: %[[#INS2:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL2]] %[[#INS1]] 2
-; CHECK-DAG: %[[#INS3:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL3]] %[[#INS2]] 3
-; CHECK-DAG: %[[#INS4:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL4]] %[[#UNDEF]] 0
-; CHECK-DAG: %[[#INS5:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL5]] %[[#INS4]] 1
-; CHECK-DAG: %[[#INS6:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL6]] %[[#INS5]] 2
-; CHECK-DAG: %[[#INS7:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL7]] %[[#INS6]] 3
-; CHECK-DAG: %[[#INS8:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL8]] %[[#UNDEF]] 0
-; CHECK-DAG: %[[#INS9:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL9]] %[[#INS8]] 1
-; CHECK-DAG: %[[#INS10:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL10]] %[[#INS9]] 2
-; CHECK-DAG: %[[#INS11:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL11]] %[[#INS10]] 3
-; CHECK-DAG: %[[#INS12:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL12]] %[[#UNDEF]] 0
-; CHECK-DAG: %[[#INS13:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL13]] %[[#INS12]] 1
-; CHECK-DAG: %[[#INS14:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL14]] %[[#INS13]] 2
-; CHECK-DAG: %[[#INS15:]] = OpCompositeInsert %[[#v4i32]] %[[#VAL15]] %[[#INS14]] 3
   %0 = load <16 x i32>, ptr addrspace(10) @G_16, align 64
  
-; CHECK-DAG: %[[#PTR0_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C0]]
-; CHECK-DAG: %[[#VAL0_S:]] = OpCompositeExtract %[[#int]] %[[#INS3]] 0
-; CHECK-DAG: OpStore %[[#PTR0_S]] %[[#VAL0_S]] Aligned 64
-; CHECK-DAG: %[[#PTR1_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C1]]
-; CHECK-DAG: %[[#VAL1_S:]] = OpCompositeExtract %[[#int]] %[[#INS3]] 1
-; CHECK-DAG: OpStore %[[#PTR1_S]] %[[#VAL1_S]] Aligned 4
-; CHECK-DAG: %[[#PTR2_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C2]]
-; CHECK-DAG: %[[#VAL2_S:]] = OpCompositeExtract %[[#int]] %[[#INS3]] 2
-; CHECK-DAG: OpStore %[[#PTR2_S]] %[[#VAL2_S]] Aligned 8
-; CHECK-DAG: %[[#PTR3_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C3]]
-; CHECK-DAG: %[[#VAL3_S:]] = OpCompositeExtract %[[#int]] %[[#INS3]] 3
-; CHECK-DAG: OpStore %[[#PTR3_S]] %[[#VAL3_S]] Aligned 4
-; CHECK-DAG: %[[#PTR4_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C4]]
-; CHECK-DAG: %[[#VAL4_S:]] = OpCompositeExtract %[[#int]] %[[#INS7]] 0
-; CHECK-DAG: OpStore %[[#PTR4_S]] %[[#VAL4_S]] Aligned 16
-; CHECK-DAG: %[[#PTR5_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C5]]
-; CHECK-DAG: %[[#VAL5_S:]] = OpCompositeExtract %[[#int]] %[[#INS7]] 1
-; CHECK-DAG: OpStore %[[#PTR5_S]] %[[#VAL5_S]] Aligned 4
-; CHECK-DAG: %[[#PTR6_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C6]]
-; CHECK-DAG: %[[#VAL6_S:]] = OpCompositeExtract %[[#int]] %[[#INS7]] 2
-; CHECK-DAG: OpStore %[[#PTR6_S]] %[[#VAL6_S]] Aligned 8
-; CHECK-DAG: %[[#PTR7_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C7]]
-; CHECK-DAG: %[[#VAL7_S:]] = OpCompositeExtract %[[#int]] %[[#INS7]] 3
-; CHECK-DAG: OpStore %[[#PTR7_S]] %[[#VAL7_S]] Aligned 4
-; CHECK-DAG: %[[#PTR8_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C8]]
-; CHECK-DAG: %[[#VAL8_S:]] = OpCompositeExtract %[[#int]] %[[#INS11]] 0
-; CHECK-DAG: OpStore %[[#PTR8_S]] %[[#VAL8_S]] Aligned 32
-; CHECK-DAG: %[[#PTR9_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C9]]
-; CHECK-DAG: %[[#VAL9_S:]] = OpCompositeExtract %[[#int]] %[[#INS11]] 1
-; CHECK-DAG: OpStore %[[#PTR9_S]] %[[#VAL9_S]] Aligned 4
-; CHECK-DAG: %[[#PTR10_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C10]]
-; CHECK-DAG: %[[#VAL10_S:]] = OpCompositeExtract %[[#int]] %[[#INS11]] 2
-; CHECK-DAG: OpStore %[[#PTR10_S]] %[[#VAL10_S]] Aligned 8
-; CHECK-DAG: %[[#PTR11_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C11]]
-; CHECK-DAG: %[[#VAL11_S:]] = OpCompositeExtract %[[#int]] %[[#INS11]] 3
-; CHECK-DAG: OpStore %[[#PTR11_S]] %[[#VAL11_S]] Aligned 4
-; CHECK-DAG: %[[#PTR12_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C12]]
-; CHECK-DAG: %[[#VAL12_S:]] = OpCompositeExtract %[[#int]] %[[#INS15]] 0
-; CHECK-DAG: OpStore %[[#PTR12_S]] %[[#VAL12_S]] Aligned 16
-; CHECK-DAG: %[[#PTR13_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C13]]
-; CHECK-DAG: %[[#VAL13_S:]] = OpCompositeExtract %[[#int]] %[[#INS15]] 1
-; CHECK-DAG: OpStore %[[#PTR13_S]] %[[#VAL13_S]] Aligned 4
-; CHECK-DAG: %[[#PTR14_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C14]]
-; CHECK-DAG: %[[#VAL14_S:]] = OpCompositeExtract %[[#int]] %[[#INS15]] 2
-; CHECK-DAG: OpStore %[[#PTR14_S]] %[[#VAL14_S]] Aligned 8
-; CHECK-DAG: %[[#PTR15_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C15]]
-; CHECK-DAG: %[[#VAL15_S:]] = OpCompositeExtract %[[#int]] %[[#INS15]] 3
-; CHECK-DAG: OpStore %[[#PTR15_S]] %[[#VAL15_S]] Aligned 4
+; CHECK: %[[#PTR0_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C0]]
+; CHECK: OpStore %[[#PTR0_S]] %[[#VAL0]] Aligned 64
+; CHECK: %[[#PTR1_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C1]]
+; CHECK: OpStore %[[#PTR1_S]] %[[#VAL1]] Aligned 4
+; CHECK: %[[#PTR2_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C2]]
+; CHECK: OpStore %[[#PTR2_S]] %[[#VAL2]] Aligned 8
+; CHECK: %[[#PTR3_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C3]]
+; CHECK: OpStore %[[#PTR3_S]] %[[#VAL3]] Aligned 4
+; CHECK: %[[#PTR4_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C4]]
+; CHECK: OpStore %[[#PTR4_S]] %[[#VAL4]] Aligned 16
+; CHECK: %[[#PTR5_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C5]]
+; CHECK: OpStore %[[#PTR5_S]] %[[#VAL5]] Aligned 4
+; CHECK: %[[#PTR6_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C6]]
+; CHECK: OpStore %[[#PTR6_S]] %[[#VAL6]] Aligned 8
+; CHECK: %[[#PTR7_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C7]]
+; CHECK: OpStore %[[#PTR7_S]] %[[#VAL7]] Aligned 4
+; CHECK: %[[#PTR8_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C8]]
+; CHECK: OpStore %[[#PTR8_S]] %[[#VAL8]] Aligned 32
+; CHECK: %[[#PTR9_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C9]]
+; CHECK: OpStore %[[#PTR9_S]] %[[#VAL9]] Aligned 4
+; CHECK: %[[#PTR10_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C10]]
+; CHECK: OpStore %[[#PTR10_S]] %[[#VAL10]] Aligned 8
+; CHECK: %[[#PTR11_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C11]]
+; CHECK: OpStore %[[#PTR11_S]] %[[#VAL11]] Aligned 4
+; CHECK: %[[#PTR12_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C12]]
+; CHECK: OpStore %[[#PTR12_S]] %[[#VAL12]] Aligned 16
+; CHECK: %[[#PTR13_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C13]]
+; CHECK: OpStore %[[#PTR13_S]] %[[#VAL13]] Aligned 4
+; CHECK: %[[#PTR14_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C14]]
+; CHECK: OpStore %[[#PTR14_S]] %[[#VAL14]] Aligned 8
+; CHECK: %[[#PTR15_S:]] = OpAccessChain %[[#ptr_int]] %[[#G16]] %[[#C15]]
+; CHECK: OpStore %[[#PTR15_S]] %[[#VAL15]] Aligned 4
   store <16 x i32> %0, ptr addrspace(10) @G_16, align 64
   ret void
 }
diff --git a/llvm/test/CodeGen/SPIRV/legalization/spv-extractelt-legalization.ll b/llvm/test/CodeGen/SPIRV/legalization/spv-extractelt-legalization.ll
new file mode 100644
index 0000000000000..3188f8b31aac5
--- /dev/null
+++ b/llvm/test/CodeGen/SPIRV/legalization/spv-extractelt-legalization.ll
@@ -0,0 +1,66 @@
+; RUN: llc -O0 -mtriple=spirv-unknown-vulkan-compute %s -o - | FileCheck %s
+
+; CHECK-DAG: %[[#Int:]] = OpTypeInt 32 0
+; CHECK-DAG: %[[#Const0:]] = OpConstant %[[#Int]] 0
+; CHECK-DAG: %[[#Const1:]] = OpConstant %[[#Int]] 1
+; CHECK-DAG: %[[#Const2:]] = OpConstant %[[#Int]] 2
+; CHECK-DAG: %[[#Const3:]] = OpConstant %[[#Int]] 3
+; CHECK-DAG: %[[#Const4:]] = OpConstant %[[#Int]] 4
+; CHECK-DAG: %[[#Const5:]] = OpConstant %[[#Int]] 5
+; CHECK-DAG: %[[#Const10:]] = OpConstant %[[#Int]] 10
+; CHECK-DAG: %[[#Const20:]] = OpConstant %[[#Int]] 20
+; CHECK-DAG: %[[#Const30:]] = OpConstant %[[#Int]] 30
+; CHECK-DAG: %[[#Const40:]] = OpConstant %[[#Int]] 40
+; CHECK-DAG: %[[#Const50:]] = OpConstant %[[#Int]] 50
+; CHECK-DAG: %[[#Const60:]] = OpConstant %[[#Int]] 60
+; CHECK-DAG: %[[#Arr:]] = OpTypeArray %[[#Int]] %[[#]]
+; CHECK-DAG: %[[#PtrArr:]] = OpTypePointer Function %[[#Arr]]
+
+ at G = addrspace(1) global i32 0, align 4
+
+define void @main() #0 {
+entry:
+; CHECK: %[[#Var:]] = OpVariable %[[#PtrArr]] Function
+
+; CHECK: %[[#Idx:]] = OpLoad %[[#Int]]
+  %idx = load i32, ptr addrspace(1) @G, align 4
+
+; CHECK: %[[#PtrElt0:]] = OpInBoundsAccessChain %[[#]] %[[#Var]] %[[#Const0]]
+; CHECK: OpStore %[[#PtrElt0]] %[[#Const10]]
+  %vec = insertelement <6 x i32> poison, i32 10, i64 0
+
+; CHECK: %[[#PtrElt1:]] = OpInBoundsAccessChain %[[#]] %[[#Var]] %[[#Const1]]
+; CHECK: OpStore %[[#PtrElt1]] %[[#Const20]]
+  %vec2 = insertelement <6 x i32> %vec, i32 20, i64 1
+
+; CHECK: %[[#PtrElt2:]] = OpInBoundsAccessChain %[[#]] %[[#Var]] %[[#Const2]]
+; CHECK: OpStore %[[#PtrElt2]] %[[#Const30]]
+  %vec3 = insertelement <6 x i32> %vec2, i32 30, i64 2
+
+; CHECK: %[[#PtrElt3:]] = OpInBoundsAccessChain %[[#]] %[[#Var]] %[[#Const3]]
+; CHECK: OpStore %[[#PtrElt3]] %[[#Const40]]
+  %vec4 = insertelement <6 x i32> %vec3, i32 40, i64 3
+
+; CHECK: %[[#PtrElt4:]] = OpInBoundsAccessChain %[[#]] %[[#Var]] %[[#Const4]]
+; CHECK: OpStore %[[#PtrElt4]] %[[#Const50]]
+  %vec5 = insertelement <6 x i32> %vec4, i32 50, i64 4
+
+; CHECK: %[[#PtrElt5:]] = OpInBoundsAccessChain %[[#]] %[[#Var]] %[[#Const5]]
+; CHECK: OpStore %[[#PtrElt5]] %[[#Const60]]
+  %vec6 = insertelement <6 x i32> %vec5, i32 60, i64 5
+
+; CHECK: %[[#Ptr:]] = OpInBoundsAccessChain %[[#]] %[[#Var]] %[[#Idx]]
+; CHECK: %[[#Ld:]] = OpLoad %[[#Int]] %[[#Ptr]]
+  %res = extractelement <6 x i32> %vec6, i32 %idx
+
+; CHECK: OpStore {{.*}} %[[#Ld]]
+  store i32 %res, ptr addrspace(1) @G, align 4
+  ret void
+}
+
+attributes #0 = { "hlsl.numthreads"="1,1,1" "hlsl.shader"="compute" }
diff --git a/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic-6.ll b/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic-6.ll
index d1cbfd4811c30..028495b10bee2 100644
--- a/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic-6.ll
+++ b/llvm/test/CodeGen/SPIRV/legalization/vector-arithmetic-6.ll
@@ -42,18 +42,19 @@ entry:
   ; CHECK: %[[#Sub2:]] = OpFSub %[[#v4f32]] %[[#Add2]]
   %13 = fsub reassoc nnan ninf nsz arcp afn <6 x float> %11, %9
 
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub1]] 0
-  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub1]] 1
-  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub1]] 2
-  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub1]] 3
-  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub2]] 0
-  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Sub2]] 1
-  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT0:]] = OpCompositeExtract %[[#float]] %[[#Sub1]] 0
+  ; CHECK: %[[#EXTRACT1:]] = OpCompositeExtract %[[#float]] %[[#Sub1]] 1
+  ; CHECK: %[[#EXTRACT2:]] = OpCompositeExtract %[[#float]] %[[#Sub1]] 2
+  ; CHECK: %[[#EXTRACT3:]] = OpCompositeExtract %[[#float]] %[[#Sub1]] 3
+  ; CHECK: %[[#EXTRACT4:]] = OpCompositeExtract %[[#float]] %[[#Sub2]] 0
+  ; CHECK: %[[#EXTRACT5:]] = OpCompositeExtract %[[#float]] %[[#Sub2]] 1
+
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT0]]
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT1]]
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT2]]
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT3]]
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT4]]
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT5]]
   
   %14 = getelementptr [4 x [6 x float] ], ptr addrspace(10) @f2, i32 0, i32 0
   store <6 x float> %13, ptr addrspace(10) %14, align 4
@@ -119,24 +120,12 @@ entry:
   ; CHECK: %[[#UMod6:]] = OpUMod %[[#int]] %[[#SRem6]]
   %12 = urem <6 x i32> %11, splat (i32 3)
 
-  ; CHECK: %[[#Construct1:]] = OpCompositeConstruct %[[#v4i32]] %[[#UMod1]] %[[#UMod2]] %[[#UMod3]] %[[#UMod4]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#Construct1]] 0
-  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#Construct2:]] = OpCompositeConstruct %[[#v4i32]] %[[#UMod1]] %[[#UMod2]] %[[#UMod3]] %[[#UMod4]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#Construct2]] 1
-  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#Construct3:]] = OpCompositeConstruct %[[#v4i32]] %[[#UMod1]] %[[#UMod2]] %[[#UMod3]] %[[#UMod4]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#Construct3]] 2
-  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#Construct4:]] = OpCompositeConstruct %[[#v4i32]] %[[#UMod1]] %[[#UMod2]] %[[#UMod3]] %[[#UMod4]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#Construct4]] 3
-  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#Construct5:]] = OpCompositeConstruct %[[#v4i32]] %[[#UMod5]] %[[#UMod6]] %[[#UNDEF]] %[[#UNDEF]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#Construct5]] 0
-  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#Construct6:]] = OpCompositeConstruct %[[#v4i32]] %[[#UMod5]] %[[#UMod6]] %[[#UNDEF]] %[[#UNDEF]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#int]] %[[#Construct6]] 1
-  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: OpStore {{.*}} %[[#UMod1]]
+  ; CHECK: OpStore {{.*}} %[[#UMod2]]
+  ; CHECK: OpStore {{.*}} %[[#UMod3]]
+  ; CHECK: OpStore {{.*}} %[[#UMod4]]
+  ; CHECK: OpStore {{.*}} %[[#UMod5]]
+  ; CHECK: OpStore {{.*}} %[[#UMod6]]
 
   %13 = getelementptr [4 x [6 x i32] ], ptr addrspace(10) @i2, i32 0, i32 0
   store <6 x i32> %12, ptr addrspace(10) %13, align 4
@@ -168,18 +157,19 @@ entry:
   ; CHECK: %[[#Fma2:]] = OpExtInst %[[#v4f32]] {{.*}} Fma
   %8 = call reassoc nnan ninf nsz arcp afn <6 x float> @llvm.fma.v6f32(<6 x float> %5, <6 x float> %6, <6 x float> %7)
 
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 0
-  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 1
-  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 2
-  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 3
-  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma2]] 0
-  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma2]] 1
-  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT0:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 0
+  ; CHECK: %[[#EXTRACT1:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 1
+  ; CHECK: %[[#EXTRACT2:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 2
+  ; CHECK: %[[#EXTRACT3:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 3
+  ; CHECK: %[[#EXTRACT4:]] = OpCompositeExtract %[[#float]] %[[#Fma2]] 0
+  ; CHECK: %[[#EXTRACT5:]] = OpCompositeExtract %[[#float]] %[[#Fma2]] 1
+
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT0]]
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT1]]
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT2]]
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT3]]
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT4]]
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT5]]
 
   %9 = getelementptr [4 x [6 x float] ], ptr addrspace(10) @f2, i32 0, i32 0
   store <6 x float> %8, ptr addrspace(10) %9, align 4
@@ -201,18 +191,19 @@ entry:
   ; CHECK: %[[#Fma2:]] = OpExtInst %[[#v4f32]] {{.*}} Fma
   %8 = call <6 x float> @llvm.experimental.constrained.fma.v6f32(<6 x float> %3, <6 x float> %5, <6 x float> %7, metadata !"round.dynamic", metadata !"fpexcept.strict")
 
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 0
-  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 1
-  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 2
-  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 3
-  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma2]] 0
-  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
-  ; CHECK: %[[#EXTRACT:]] = OpCompositeExtract %[[#float]] %[[#Fma2]] 1
-  ; CHECK: OpStore {{.*}} %[[#EXTRACT]]
+  ; CHECK: %[[#EXTRACT0:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 0
+  ; CHECK: %[[#EXTRACT1:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 1
+  ; CHECK: %[[#EXTRACT2:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 2
+  ; CHECK: %[[#EXTRACT3:]] = OpCompositeExtract %[[#float]] %[[#Fma1]] 3
+  ; CHECK: %[[#EXTRACT4:]] = OpCompositeExtract %[[#float]] %[[#Fma2]] 0
+  ; CHECK: %[[#EXTRACT5:]] = OpCompositeExtract %[[#float]] %[[#Fma2]] 1
+
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT0]]
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT1]]
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT2]]
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT3]]
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT4]]
+  ; CHECK: OpStore {{.*}} %[[#EXTRACT5]]
 
   %9 = getelementptr [4 x [6 x float] ], ptr addrspace(10) @f2, i32 0, i32 0
   store <6 x float> %8, ptr addrspace(10) %9, align 4
diff --git a/llvm/test/CodeGen/SPIRV/llvm-intrinsics/matrix-multiply.ll b/llvm/test/CodeGen/SPIRV/llvm-intrinsics/matrix-multiply.ll
index 4f8dfd0494009..cb41c8a1ce2f7 100644
--- a/llvm/test/CodeGen/SPIRV/llvm-intrinsics/matrix-multiply.ll
+++ b/llvm/test/CodeGen/SPIRV/llvm-intrinsics/matrix-multiply.ll
@@ -88,18 +88,15 @@ define internal void @test_matrix_multiply_i32_2x2_2x2() {
 
 ; Test Matrix Multiply 2x3 * 3x2 float (Result 2x2 float)
 ; CHECK-LABEL: ; -- Begin function test_matrix_multiply_f32_2x3_3x2
-; CHECK-DAG:   %[[B:[0-9]+]] = OpCompositeInsert %[[V4F32_ID]]
-; CHECK-DAG:   %[[A:[0-9]+]] = OpCompositeInsert %[[V4F32_ID]]
+; CHECK:       %[[Col0B:[0-9]+]] = OpCompositeConstruct %[[V3F32_ID]] {{.*}} {{.*}} {{.*}}
+; CHECK:       %[[Col1B:[0-9]+]] = OpCompositeConstruct %[[V3F32_ID]] {{.*}} {{.*}} {{.*}}
+; CHECK:       %[[Row0A:[0-9]+]] = OpCompositeConstruct %[[V3F32_ID]] {{.*}} {{.*}} {{.*}}
+; CHECK:       %[[Row1A:[0-9]+]] = OpCompositeConstruct %[[V3F32_ID]] {{.*}} {{.*}} {{.*}}
 ;
-; CHECK-DAG:   %[[B_Col0:[0-9]+]] = OpCompositeConstruct %[[V3F32_ID]]
-; CHECK-DAG:   %[[B_Col1:[0-9]+]] = OpCompositeConstruct %[[V3F32_ID]]
-; CHECK-DAG:   %[[A_Row0:[0-9]+]] = OpCompositeConstruct %[[V3F32_ID]]
-; CHECK-DAG:   %[[A_Row1:[0-9]+]] = OpCompositeConstruct %[[V3F32_ID]]
-;
-; CHECK-DAG:   %[[C00:[0-9]+]] = OpDot %[[Float_ID]] %[[A_Row0]] %[[B_Col0]]
-; CHECK-DAG:   %[[C10:[0-9]+]] = OpDot %[[Float_ID]] %[[A_Row1]] %[[B_Col0]]
-; CHECK-DAG:   %[[C01:[0-9]+]] = OpDot %[[Float_ID]] %[[A_Row0]] %[[B_Col1]]
-; CHECK-DAG:   %[[C11:[0-9]+]] = OpDot %[[Float_ID]] %[[A_Row1]] %[[B_Col1]]
+; CHECK-DAG:   %[[C00:[0-9]+]] = OpDot %[[Float_ID]] %[[Row0A]] %[[Col0B]]
+; CHECK-DAG:   %[[C10:[0-9]+]] = OpDot %[[Float_ID]] %[[Row1A]] %[[Col0B]]
+; CHECK-DAG:   %[[C01:[0-9]+]] = OpDot %[[Float_ID]] %[[Row0A]] %[[Col1B]]
+; CHECK-DAG:   %[[C11:[0-9]+]] = OpDot %[[Float_ID]] %[[Row1A]] %[[Col1B]]
 ; CHECK:       OpCompositeConstruct %[[V4F32_ID]] %[[C00]] %[[C10]] %[[C01]] %[[C11]]
 define internal void @test_matrix_multiply_f32_2x3_3x2() {
   %1 = load <6 x float>, ptr addrspace(10) @private_v6f32
diff --git a/llvm/test/CodeGen/SPIRV/llvm-intrinsics/matrix-transpose.ll b/llvm/test/CodeGen/SPIRV/llvm-intrinsics/matrix-transpose.ll
index 3474fecae9957..3106d5d55ef77 100644
--- a/llvm/test/CodeGen/SPIRV/llvm-intrinsics/matrix-transpose.ll
+++ b/llvm/test/CodeGen/SPIRV/llvm-intrinsics/matrix-transpose.ll
@@ -38,48 +38,36 @@ define internal void @test_transpose_f32_2x3() {
 ; CHECK: %[[Load6:[0-9]+]] = OpLoad %[[Float_ID]] %[[AccessChain6]]
 ;
 ; -- Construct intermediate vectors
-; CHECK: %[[CompositeInsert1:[0-9]+]] = OpCompositeInsert %[[V4F32_ID]] %[[Load1]] %[[undef_V4F32_ID:[0-9]+]] 0
-; CHECK: %[[CompositeInsert2:[0-9]+]] = OpCompositeInsert %[[V4F32_ID]] %[[Load2]] %[[CompositeInsert1]] 1
-; CHECK: %[[CompositeInsert3:[0-9]+]] = OpCompositeInsert %[[V4F32_ID]] %[[Load3]] %[[CompositeInsert2]] 2
-; CHECK: %[[CompositeInsert4:[0-9]+]] = OpCompositeInsert %[[V4F32_ID]] %[[Load4]] %[[CompositeInsert3]] 3
-; CHECK: %[[CompositeInsert5:[0-9]+]] = OpCompositeInsert %[[V4F32_ID]] %[[Load5]] %[[undef_V4F32_ID]] 0
-; CHECK: %[[CompositeInsert6:[0-9]+]] = OpCompositeInsert %[[V4F32_ID]] %[[Load6]] %[[CompositeInsert5]] 1
+; CHECK: %[[Construct1:[0-9]+]] = OpCompositeConstruct %[[V4F32_ID]] %[[Load1]] %[[Load2]] %[[Load3]] %[[Load4]]
+; CHECK: %[[Construct2:[0-9]+]] = OpCompositeConstruct %[[V4F32_ID]] %[[Load1]] %[[Load2]] %[[Load3]] %[[Load4]]
+; CHECK: %[[Construct3:[0-9]+]] = OpCompositeConstruct %[[V4F32_ID]] %[[Load5]] %[[Load6]] {{.*}} {{.*}}
+; CHECK: %[[Construct4:[0-9]+]] = OpCompositeConstruct %[[V4F32_ID]] %[[Load1]] %[[Load2]] %[[Load3]] %[[Load4]]
+; CHECK: %[[Construct5:[0-9]+]] = OpCompositeConstruct %[[V4F32_ID]] %[[Load1]] %[[Load2]] %[[Load3]] %[[Load4]]
+; CHECK: %[[Construct6:[0-9]+]] = OpCompositeConstruct %[[V4F32_ID]] %[[Load5]] %[[Load6]] {{.*}} {{.*}}
   %1 = load <6 x float>, ptr addrspace(10) @private_v6f32
 
 ; -- Extract elements for transposition
-; CHECK: %[[Extract1:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[CompositeInsert4]] 0
-; CHECK: %[[Extract2:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[CompositeInsert4]] 2
-; CHECK: %[[Extract3:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[CompositeInsert6]] 0
-; CHECK: %[[Extract4:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[CompositeInsert4]] 1
-; CHECK: %[[Extract5:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[CompositeInsert4]] 3
-; CHECK: %[[Extract6:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[CompositeInsert6]] 1
+; CHECK: %[[Extract1:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[Construct1]] 0
+; CHECK: %[[Extract2:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[Construct2]] 2
+; CHECK: %[[Extract3:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[Construct3]] 0
+; CHECK: %[[Extract4:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[Construct4]] 1
+; CHECK: %[[Extract5:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[Construct5]] 3
+; CHECK: %[[Extract6:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[Construct6]] 1
   %2 = call <6 x float> @llvm.matrix.transpose.v6f32.i32(<6 x float> %1, i32 2, i32 3)
 
 ; -- Store output 3x2 matrix elements
 ; CHECK: %[[AccessChain7:[0-9]+]] = OpAccessChain %[[_ptr_Float_ID]] %[[private_v6f32]] %[[int_0]]
-; CHECK: %[[CompositeConstruct1:[0-9]+]] = OpCompositeConstruct %[[V4F32_ID]] %[[Extract1]] %[[Extract2]] %[[Extract3]] %[[Extract4]]
-; CHECK: %[[Extract7:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[CompositeConstruct1]] 0
-; CHECK: OpStore %[[AccessChain7]] %[[Extract7]]
+; CHECK: OpStore %[[AccessChain7]] %[[Extract1]]
 ; CHECK: %[[AccessChain8:[0-9]+]] = OpAccessChain %[[_ptr_Float_ID]] %[[private_v6f32]] %[[int_1]]
-; CHECK: %[[CompositeConstruct2:[0-9]+]] = OpCompositeConstruct %[[V4F32_ID]] %[[Extract1]] %[[Extract2]] %[[Extract3]] %[[Extract4]]
-; CHECK: %[[Extract8:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[CompositeConstruct2]] 1
-; CHECK: OpStore %[[AccessChain8]] %[[Extract8]]
+; CHECK: OpStore %[[AccessChain8]] %[[Extract2]]
 ; CHECK: %[[AccessChain9:[0-9]+]] = OpAccessChain %[[_ptr_Float_ID]] %[[private_v6f32]] %[[int_2]]
-; CHECK: %[[CompositeConstruct3:[0-9]+]] = OpCompositeConstruct %[[V4F32_ID]] %[[Extract1]] %[[Extract2]] %[[Extract3]] %[[Extract4]]
-; CHECK: %[[Extract9:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[CompositeConstruct3]] 2
-; CHECK: OpStore %[[AccessChain9]] %[[Extract9]]
+; CHECK: OpStore %[[AccessChain9]] %[[Extract3]]
 ; CHECK: %[[AccessChain10:[0-9]+]] = OpAccessChain %[[_ptr_Float_ID]] %[[private_v6f32]] %[[int_3]]
-; CHECK: %[[CompositeConstruct4:[0-9]+]] = OpCompositeConstruct %[[V4F32_ID]] %[[Extract1]] %[[Extract2]] %[[Extract3]] %[[Extract4]]
-; CHECK: %[[Extract10:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[CompositeConstruct4]] 3
-; CHECK: OpStore %[[AccessChain10]] %[[Extract10]]
+; CHECK: OpStore %[[AccessChain10]] %[[Extract4]]
 ; CHECK: %[[AccessChain11:[0-9]+]] = OpAccessChain %[[_ptr_Float_ID]] %[[private_v6f32]] %[[int_4]]
-; CHECK: %[[CompositeConstruct5:[0-9]+]] = OpCompositeConstruct %[[V4F32_ID]] %[[Extract5]] %[[Extract6]] %[[undef_Float_ID:[0-9]+]] %[[undef_Float_ID]]
-; CHECK: %[[Extract11:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[CompositeConstruct5]] 0
-; CHECK: OpStore %[[AccessChain11]] %[[Extract11]]
+; CHECK: OpStore %[[AccessChain11]] %[[Extract5]]
 ; CHECK: %[[AccessChain12:[0-9]+]] = OpAccessChain %[[_ptr_Float_ID]] %[[private_v6f32]] %[[int_5]]
-; CHECK: %[[CompositeConstruct6:[0-9]+]] = OpCompositeConstruct %[[V4F32_ID]] %[[Extract5]] %[[Extract6]] %[[undef_Float_ID]] %[[undef_Float_ID]]
-; CHECK: %[[Extract12:[0-9]+]] = OpCompositeExtract %[[Float_ID]] %[[CompositeConstruct6]] 1
-; CHECK: OpStore %[[AccessChain12]] %[[Extract12]]
+; CHECK: OpStore %[[AccessChain12]] %[[Extract6]]
   store <6 x float> %2, ptr addrspace(10) @private_v6f32
   ret void
 }

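For readers skimming the CHECK lines in the non-constant-index test above: the lowering spills the long vector into a per-function scratch array (one OpStore per element through an OpInBoundsAccessChain) and then replaces the dynamic extractelement with a single indexed load. The following is a conceptual sketch of that pattern in plain Python, not backend code; the function name and structure are illustrative only.

```python
def extract_dynamic(vec, idx):
    """Model the dynamic-index extractelement lowering checked above."""
    # Spill each vector element into a scratch array, mirroring the
    # per-element OpStore sequence through OpInBoundsAccessChain.
    scratch = [None] * len(vec)
    for i, elt in enumerate(vec):
        scratch[i] = elt
    # A single indexed load stands in for OpInBoundsAccessChain + OpLoad
    # with the runtime index, replacing the illegal dynamic extract.
    return scratch[idx]

print(extract_dynamic([10, 20, 30, 40, 50, 60], 3))  # -> 40
```

The same spill-to-array shape explains why the test declares an OpTypeArray/OpTypePointer Function pair rather than a vector type: SPIR-V shader vectors cap at 4 elements, so a 6-element vector with a runtime index must live in indexable memory.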

