[llvm] [RISCV][GlobalISel] Legalize Scalable Vector Loads (PR #84965)

Jiahan Xie via llvm-commits llvm-commits at lists.llvm.org
Tue Mar 12 11:01:30 PDT 2024


https://github.com/jiahanxie353 created https://github.com/llvm/llvm-project/pull/84965

Hi @michaelmaitland, I'm starting to look at legalizing vector loads for RVV.

Using the simplest test case:
```llvm
define <vscale x 1 x i8> @vload_nx1i8(ptr %pa) {
  %va = load <vscale x 1 x i8>, ptr %pa
  ret <vscale x 1 x i8> %va
}
```
It reports `UnableToLegalize` here: https://github.com/llvm/llvm-project/blob/a4aac22683a44264bb3883242b1c6b711f534e8b/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp#L3406-L3407
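
For reference, the rough direction I have in mind for this PR is to mark these loads as legal in `RISCVLegalizerInfo` when the V extension is enabled. A minimal sketch, written from memory against the style of the existing scalar rules (the type list is abbreviated and the exact rule may end up looking different):

```cpp
// Sketch only: inside the RISCVLegalizerInfo constructor, where ST is the
// RISCVSubtarget and the TargetOpcode namespace is already in scope.
const LLT s8 = LLT::scalar(8);
const LLT s64 = LLT::scalar(64);
const LLT p0 = LLT::pointer(0, ST.getXLen());

// Scalable-vector LLTs for a few RVV-legal types (full list omitted here).
const LLT nxv1s8 = LLT::scalable_vector(1, s8);
const LLT nxv8s8 = LLT::scalable_vector(8, s8);
const LLT nxv1s64 = LLT::scalable_vector(1, s64);

auto &LoadStoreActions = getActionDefinitionsBuilder({G_LOAD, G_STORE});
if (ST.hasVInstructions())
  // Use the element size in bits as the minimum alignment, matching the MMOs
  // in the tests below.
  LoadStoreActions.legalForTypesWithMemDesc({{nxv1s8, p0, nxv1s8, 8},
                                             {nxv8s8, p0, nxv8s8, 8},
                                             {nxv1s64, p0, nxv1s64, 64}});
```

Before getting there, though, I'm trying to understand why the current lowering path bails.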

I looked at the documentation and definition of `allowsMemoryAccess`:
https://github.com/llvm/llvm-project/blob/0ebf511ad011a83022edb171e044c98d9d16b1fa/llvm/include/llvm/CodeGen/TargetLowering.h#L1901-L1909
> Return true if the target supports a memory access of this type for the given address space and alignment.
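
As far as I can tell, the generic implementation boils down to an ABI-alignment check followed by the target's misaligned-access hook. Paraphrasing from memory (not the verbatim upstream code), it behaves roughly like:

```cpp
// Rough paraphrase of what TargetLoweringBase does for an access of memory
// type VT with the given alignment; Ctx/DL/TLI come from the function being
// legalized.
bool allowsMemoryAccessSketch(const TargetLowering &TLI, LLVMContext &Ctx,
                              const DataLayout &DL, EVT VT, unsigned AddrSpace,
                              Align Alignment,
                              MachineMemOperand::Flags Flags) {
  // Accesses that meet the ABI alignment of the memory type are always fine.
  if (Alignment >= DL.getABITypeAlign(VT.getTypeForEVT(Ctx)))
    return true;
  // Anything less aligned is delegated to the target hook, which on RISC-V
  // depends on the unaligned scalar/vector memory features.
  unsigned Fast = 0;
  return TLI.allowsMisalignedMemoryAccesses(VT, AddrSpace, Alignment, Flags,
                                            &Fast);
}
```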

Then I looked at existing test cases, such as `load_i16_unaligned`: https://github.com/llvm/llvm-project/blob/683a9ac803a56f6dda9b783a6e2d6d92a5d0626c/llvm/test/CodeGen/RISCV/GlobalISel/legalizer/legalize-load-rv32.mir#L163-L196

I wonder why all these existing test cases *do not* pass `allowsMemoryAccess`, yet it *does* pass for the vector case.
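
To see which branch each case takes, I've been poking at the hook directly with something like the following (a throwaway debugging snippet, not part of the patch; it assumes `TLI`, `DL`, and `Ctx` for the function being legalized):

```cpp
// The scalar case from load_i16_unaligned: an i16 load with align 1, which is
// below the 2-byte ABI alignment of i16.
EVT ScalarVT = MVT::i16;
// The vector case from my test: <vscale x 1 x i8>, whose ABI alignment is
// only 1 byte.
EVT ScalableVT = EVT::getVectorVT(Ctx, MVT::i8, ElementCount::getScalable(1));

unsigned Fast = 0;
bool ScalarAllowed = TLI.allowsMemoryAccess(Ctx, DL, ScalarVT, /*AddrSpace=*/0,
                                            Align(1),
                                            MachineMemOperand::MOLoad, &Fast);
bool VectorAllowed = TLI.allowsMemoryAccess(Ctx, DL, ScalableVT,
                                            /*AddrSpace=*/0, Align(1),
                                            MachineMemOperand::MOLoad, &Fast);
```

If my reading above is right, the under-aligned scalar load falls through to `allowsMisalignedMemoryAccesses` and gets rejected (hence the splitting in those tests), while the byte-aligned scalable load is already allowed.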


>From 709f7fcf7f6c97491dba2e27768c593917a74630 Mon Sep 17 00:00:00 2001
From: jiahanxie353 <jx353 at cornell.edu>
Date: Sun, 14 Jan 2024 22:29:10 -0500
Subject: [PATCH 01/16] GISel support is in progress for G_LOAD

---
 llvm/lib/Target/RISCV/RISCVISelLowering.cpp | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
index 08678a859ae2b6..a7a3da27c0c0bc 100644
--- a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
@@ -20923,7 +20923,7 @@ bool RISCVTargetLowering::fallBackToDAGISel(const Instruction &Inst) const {
   if (Op == Instruction::Add || Op == Instruction::Sub ||
       Op == Instruction::And || Op == Instruction::Or ||
       Op == Instruction::Xor || Op == Instruction::InsertElement ||
-      Op == Instruction::Xor || Op == Instruction::ShuffleVector)
+      Op == Instruction::ShuffleVector || Op == Instruction::Load)
     return false;
 
   if (Inst.getType()->isScalableTy())

>From a263084b7c77727336ac1c778cf22ad195aedab7 Mon Sep 17 00:00:00 2001
From: jiahanxie353 <jx353 at cornell.edu>
Date: Mon, 29 Jan 2024 20:40:01 -0500
Subject: [PATCH 02/16] change the type of StoreSize to be TypeSize

---
 llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp b/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp
index 365870f540daeb..0512138df6b4be 100644
--- a/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp
@@ -1361,9 +1361,8 @@ static bool isSwiftError(const Value *V) {
 
 bool IRTranslator::translateLoad(const User &U, MachineIRBuilder &MIRBuilder) {
   const LoadInst &LI = cast<LoadInst>(U);
-
-  unsigned StoreSize = DL->getTypeStoreSize(LI.getType());
-  if (StoreSize == 0)
+  TypeSize StoreSize = DL->getTypeStoreSize(LI.getType());
+  if (StoreSize.isZero())
     return true;
 
   ArrayRef<Register> Regs = getOrCreateVRegs(LI);

>From 9415ee25e755175c1186068c86a0ad238b490e55 Mon Sep 17 00:00:00 2001
From: jiahanxie353 <jx353 at cornell.edu>
Date: Mon, 29 Jan 2024 20:43:42 -0500
Subject: [PATCH 03/16] MMO can be scalable vector type so we use TypeSize
 isKnownGT to compare load memory size with result size

---
 llvm/lib/CodeGen/MachineVerifier.cpp | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/llvm/lib/CodeGen/MachineVerifier.cpp b/llvm/lib/CodeGen/MachineVerifier.cpp
index 90cbf097370de2..b4e9882a871ad0 100644
--- a/llvm/lib/CodeGen/MachineVerifier.cpp
+++ b/llvm/lib/CodeGen/MachineVerifier.cpp
@@ -1198,7 +1198,7 @@ void MachineVerifier::verifyPreISelGenericInstruction(const MachineInstr *MI) {
         if (MMO.getSizeInBits() >= ValTy.getSizeInBits())
           report("Generic extload must have a narrower memory type", MI);
       } else if (MI->getOpcode() == TargetOpcode::G_LOAD) {
-        if (MMO.getSize() > ValTy.getSizeInBytes())
+        if (TypeSize::isKnownGT(MMO.getMemoryType().getSizeInBytes(), ValTy.getSizeInBytes()))
           report("load memory size cannot exceed result size", MI);
       } else if (MI->getOpcode() == TargetOpcode::G_STORE) {
         if (ValTy.getSizeInBytes() < MMO.getSize())
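
For reference, my understanding of the `TypeSize` comparison semantics this check relies on (worth double-checking against `llvm/include/llvm/Support/TypeSize.h`): a comparison is only "known" when it holds for every possible vscale, so the verifier only complains when the memory size provably exceeds the result size.

```cpp
// Illustrative values, not taken from the patch.
TypeSize Fixed16 = TypeSize::getFixed(16);    // 16 bytes
TypeSize Scal16 = TypeSize::getScalable(16);  // vscale x 16 bytes
TypeSize Scal32 = TypeSize::getScalable(32);  // vscale x 32 bytes

bool A = TypeSize::isKnownGT(Scal32, Fixed16); // true: 32 * vscale > 16 for any vscale >= 1
bool B = TypeSize::isKnownGT(Scal16, Fixed16); // false: equal when vscale == 1
bool C = TypeSize::isKnownGT(Fixed16, Scal16); // false: never provable, vscale is unbounded
```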

>From ef873090c1a1b4e5513a029cedf6736c39531fd8 Mon Sep 17 00:00:00 2001
From: jiahanxie353 <jx353 at cornell.edu>
Date: Tue, 30 Jan 2024 08:57:38 -0500
Subject: [PATCH 04/16] simple test case for scalable vector load

---
 llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll | 7 +++++++
 1 file changed, 7 insertions(+)
 create mode 100644 llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll

diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll
new file mode 100644
index 00000000000000..5f98c6a7066c70
--- /dev/null
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll
@@ -0,0 +1,7 @@
+; NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py
+; RUN: llc -mtriple=riscv32 -mattr=+v -global-isel -stop-after=irtranslator -verify-machineinstrs < %s | FileCheck  -check-prefixes=RV32I %s
+
+define void @vload_vint8m1(ptr %pa) {
+  %va = load <vscale x 8 x i8>, ptr %pa
+	ret void
+}

>From 49d6e93b0c1da4c7346d5d1b034e435605051a57 Mon Sep 17 00:00:00 2001
From: jiahanxie353 <jx353 at cornell.edu>
Date: Tue, 30 Jan 2024 10:42:03 -0500
Subject: [PATCH 05/16] add test check for this simplest test case

---
 .../CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll   | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll
index 5f98c6a7066c70..faf360a5a97db4 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll
@@ -2,6 +2,13 @@
 ; RUN: llc -mtriple=riscv32 -mattr=+v -global-isel -stop-after=irtranslator -verify-machineinstrs < %s | FileCheck  -check-prefixes=RV32I %s
 
 define void @vload_vint8m1(ptr %pa) {
-  %va = load <vscale x 8 x i8>, ptr %pa
-	ret void
+  ; RV32I-LABEL: name: vload_vint8m1
+  ; RV32I: bb.1 (%ir-block.0):
+  ; RV32I-NEXT:   liveins: $x10
+  ; RV32I-NEXT: {{  $}}
+  ; RV32I-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32I-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 8 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 8 x s8>) from %ir.pa)
+  ; RV32I-NEXT:   PseudoRET
+  %va = load <vscale x 8 x i8>, ptr %pa, align 8
+  ret void
 }

>From 4ac79522c957fc94dd89f84e75091e3a2baa2b98 Mon Sep 17 00:00:00 2001
From: jiahanxie353 <jx353 at cornell.edu>
Date: Tue, 30 Jan 2024 10:42:49 -0500
Subject: [PATCH 06/16] replace getSize with get size in bytes min value to
 accommodate scalable vectors; and have comprehensive test for all vector types
 and both rv32/64

---
 llvm/lib/CodeGen/MachineOperand.cpp           |   3 +-
 .../RISCV/GlobalISel/irtranslator/vec-ld.ll   | 498 +++++++++++++++++-
 2 files changed, 488 insertions(+), 13 deletions(-)

diff --git a/llvm/lib/CodeGen/MachineOperand.cpp b/llvm/lib/CodeGen/MachineOperand.cpp
index c7c0a1c20d57f4..24a93e83a1d36b 100644
--- a/llvm/lib/CodeGen/MachineOperand.cpp
+++ b/llvm/lib/CodeGen/MachineOperand.cpp
@@ -1240,7 +1240,8 @@ void MachineMemOperand::print(raw_ostream &OS, ModuleSlotTracker &MST,
        << "unknown-address";
   }
   MachineOperand::printOperandOffset(OS, getOffset());
-  if (getSize() > 0 && getAlign() != getSize())
+  unsigned MinSize = getType().getSizeInBytes().getKnownMinValue();
+  if (MinSize > 0 && getAlign() != MinSize)
     OS << ", align " << getAlign().value();
   if (getAlign() != getBaseAlign())
     OS << ", basealign " << getBaseAlign().value();
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll
index faf360a5a97db4..c90572d04e30c8 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll
@@ -1,14 +1,488 @@
 ; NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py
-; RUN: llc -mtriple=riscv32 -mattr=+v -global-isel -stop-after=irtranslator -verify-machineinstrs < %s | FileCheck  -check-prefixes=RV32I %s
-
-define void @vload_vint8m1(ptr %pa) {
-  ; RV32I-LABEL: name: vload_vint8m1
-  ; RV32I: bb.1 (%ir-block.0):
-  ; RV32I-NEXT:   liveins: $x10
-  ; RV32I-NEXT: {{  $}}
-  ; RV32I-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32I-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 8 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 8 x s8>) from %ir.pa)
-  ; RV32I-NEXT:   PseudoRET
-  %va = load <vscale x 8 x i8>, ptr %pa, align 8
-  ret void
+; RUN: llc -mtriple=riscv32 -mattr=+v -global-isel -stop-after=irtranslator -verify-machineinstrs < %s | FileCheck  -check-prefixes=RV32 %s
+; RUN: llc -mtriple=riscv64 -mattr=+v -global-isel -stop-after=irtranslator -verify-machineinstrs < %s | FileCheck  -check-prefixes=RV64 %s
+
+define <vscale x 1 x i8>  @vload_nx1i8(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx1i8
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s8>) from %ir.pa)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s8>)
+  ; RV32-NEXT:   PseudoRET implicit $v8
+  ;
+  ; RV64-LABEL: name: vload_nx1i8
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s8>) from %ir.pa)
+  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s8>)
+  ; RV64-NEXT:   PseudoRET implicit $v8
+  %va = load <vscale x 1 x i8>, ptr %pa
+	ret <vscale x 1 x i8> %va
+}
+
+define <vscale x 2 x i8>  @vload_nx2i8(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx2i8
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s8>) from %ir.pa)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s8>)
+  ; RV32-NEXT:   PseudoRET implicit $v8
+  ;
+  ; RV64-LABEL: name: vload_nx2i8
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s8>) from %ir.pa)
+  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s8>)
+  ; RV64-NEXT:   PseudoRET implicit $v8
+  %va = load <vscale x 2 x i8>, ptr %pa
+	ret <vscale x 2 x i8> %va
+}
+
+define <vscale x 4 x i8>  @vload_nx4i8(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx4i8
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s8>) from %ir.pa)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s8>)
+  ; RV32-NEXT:   PseudoRET implicit $v8
+  ;
+  ; RV64-LABEL: name: vload_nx4i8
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s8>) from %ir.pa)
+  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s8>)
+  ; RV64-NEXT:   PseudoRET implicit $v8
+  %va = load <vscale x 4 x i8>, ptr %pa
+	ret <vscale x 4 x i8> %va
+}
+
+define <vscale x 8 x i8>  @vload_nx8i8(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx8i8
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 8 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 8 x s8>) from %ir.pa)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 8 x s8>)
+  ; RV32-NEXT:   PseudoRET implicit $v8
+  ;
+  ; RV64-LABEL: name: vload_nx8i8
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 8 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 8 x s8>) from %ir.pa)
+  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 8 x s8>)
+  ; RV64-NEXT:   PseudoRET implicit $v8
+  %va = load <vscale x 8 x i8>, ptr %pa
+	ret <vscale x 8 x i8> %va
+}
+
+define <vscale x 16 x i8>  @vload_nx16i8(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx16i8
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 16 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 16 x s8>) from %ir.pa)
+  ; RV32-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 16 x s8>)
+  ; RV32-NEXT:   PseudoRET implicit $v8m2
+  ;
+  ; RV64-LABEL: name: vload_nx16i8
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 16 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 16 x s8>) from %ir.pa)
+  ; RV64-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 16 x s8>)
+  ; RV64-NEXT:   PseudoRET implicit $v8m2
+  %va = load <vscale x 16 x i8>, ptr %pa
+	ret <vscale x 16 x i8> %va
+}
+
+define <vscale x 32 x i8>  @vload_nx32i8(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx32i8
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 32 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 32 x s8>) from %ir.pa)
+  ; RV32-NEXT:   $v8m4 = COPY [[LOAD]](<vscale x 32 x s8>)
+  ; RV32-NEXT:   PseudoRET implicit $v8m4
+  ;
+  ; RV64-LABEL: name: vload_nx32i8
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 32 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 32 x s8>) from %ir.pa)
+  ; RV64-NEXT:   $v8m4 = COPY [[LOAD]](<vscale x 32 x s8>)
+  ; RV64-NEXT:   PseudoRET implicit $v8m4
+  %va = load <vscale x 32 x i8>, ptr %pa
+	ret <vscale x 32 x i8> %va
+}
+
+define <vscale x 64 x i8>  @vload_nx64i8(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx64i8
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 64 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 64 x s8>) from %ir.pa)
+  ; RV32-NEXT:   $v8m8 = COPY [[LOAD]](<vscale x 64 x s8>)
+  ; RV32-NEXT:   PseudoRET implicit $v8m8
+  ;
+  ; RV64-LABEL: name: vload_nx64i8
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 64 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 64 x s8>) from %ir.pa)
+  ; RV64-NEXT:   $v8m8 = COPY [[LOAD]](<vscale x 64 x s8>)
+  ; RV64-NEXT:   PseudoRET implicit $v8m8
+  %va = load <vscale x 64 x i8>, ptr %pa
+	ret <vscale x 64 x i8> %va
+}
+
+define <vscale x 1 x i16>  @vload_nx1i16(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx1i16
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s16>) from %ir.pa, align 2)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s16>)
+  ; RV32-NEXT:   PseudoRET implicit $v8
+  ;
+  ; RV64-LABEL: name: vload_nx1i16
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s16>) from %ir.pa, align 2)
+  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s16>)
+  ; RV64-NEXT:   PseudoRET implicit $v8
+  %va = load <vscale x 1 x i16>, ptr %pa
+	ret <vscale x 1 x i16> %va
+}
+
+define <vscale x 2 x i16>  @vload_nx2i16(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx2i16
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s16>) from %ir.pa, align 4)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s16>)
+  ; RV32-NEXT:   PseudoRET implicit $v8
+  ;
+  ; RV64-LABEL: name: vload_nx2i16
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s16>) from %ir.pa, align 4)
+  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s16>)
+  ; RV64-NEXT:   PseudoRET implicit $v8
+  %va = load <vscale x 2 x i16>, ptr %pa
+	ret <vscale x 2 x i16> %va
+}
+
+define <vscale x 4 x i16>  @vload_nx4i16(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx4i16
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa, align 8)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s16>)
+  ; RV32-NEXT:   PseudoRET implicit $v8
+  ;
+  ; RV64-LABEL: name: vload_nx4i16
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa, align 8)
+  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s16>)
+  ; RV64-NEXT:   PseudoRET implicit $v8
+  %va = load <vscale x 4 x i16>, ptr %pa
+	ret <vscale x 4 x i16> %va
+}
+
+define <vscale x 8 x i16>  @vload_nx8i16(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx8i16
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 8 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 8 x s16>) from %ir.pa, align 16)
+  ; RV32-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 8 x s16>)
+  ; RV32-NEXT:   PseudoRET implicit $v8m2
+  ;
+  ; RV64-LABEL: name: vload_nx8i16
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 8 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 8 x s16>) from %ir.pa, align 16)
+  ; RV64-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 8 x s16>)
+  ; RV64-NEXT:   PseudoRET implicit $v8m2
+  %va = load <vscale x 8 x i16>, ptr %pa
+	ret <vscale x 8 x i16> %va
+}
+
+define <vscale x 16 x i16>  @vload_nx16i16(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx16i16
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 16 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 16 x s16>) from %ir.pa, align 32)
+  ; RV32-NEXT:   $v8m4 = COPY [[LOAD]](<vscale x 16 x s16>)
+  ; RV32-NEXT:   PseudoRET implicit $v8m4
+  ;
+  ; RV64-LABEL: name: vload_nx16i16
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 16 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 16 x s16>) from %ir.pa, align 32)
+  ; RV64-NEXT:   $v8m4 = COPY [[LOAD]](<vscale x 16 x s16>)
+  ; RV64-NEXT:   PseudoRET implicit $v8m4
+  %va = load <vscale x 16 x i16>, ptr %pa
+	ret <vscale x 16 x i16> %va
+}
+
+define <vscale x 32 x i16>  @vload_nx32i16(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx32i16
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 32 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 32 x s16>) from %ir.pa, align 64)
+  ; RV32-NEXT:   $v8m8 = COPY [[LOAD]](<vscale x 32 x s16>)
+  ; RV32-NEXT:   PseudoRET implicit $v8m8
+  ;
+  ; RV64-LABEL: name: vload_nx32i16
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 32 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 32 x s16>) from %ir.pa, align 64)
+  ; RV64-NEXT:   $v8m8 = COPY [[LOAD]](<vscale x 32 x s16>)
+  ; RV64-NEXT:   PseudoRET implicit $v8m8
+  %va = load <vscale x 32 x i16>, ptr %pa
+	ret <vscale x 32 x i16> %va
+}
+
+define <vscale x 1 x i32>  @vload_nx1i32(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx1i32
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s32>) from %ir.pa, align 4)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s32>)
+  ; RV32-NEXT:   PseudoRET implicit $v8
+  ;
+  ; RV64-LABEL: name: vload_nx1i32
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s32>) from %ir.pa, align 4)
+  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s32>)
+  ; RV64-NEXT:   PseudoRET implicit $v8
+  %va = load <vscale x 1 x i32>, ptr %pa
+	ret <vscale x 1 x i32> %va
+}
+
+define <vscale x 2 x i32>  @vload_nx2i32(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx2i32
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa, align 8)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s32>)
+  ; RV32-NEXT:   PseudoRET implicit $v8
+  ;
+  ; RV64-LABEL: name: vload_nx2i32
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa, align 8)
+  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s32>)
+  ; RV64-NEXT:   PseudoRET implicit $v8
+  %va = load <vscale x 2 x i32>, ptr %pa
+	ret <vscale x 2 x i32> %va
+}
+
+define <vscale x 4 x i32>  @vload_nx4i32(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx4i32
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s32>) from %ir.pa, align 16)
+  ; RV32-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 4 x s32>)
+  ; RV32-NEXT:   PseudoRET implicit $v8m2
+  ;
+  ; RV64-LABEL: name: vload_nx4i32
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s32>) from %ir.pa, align 16)
+  ; RV64-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 4 x s32>)
+  ; RV64-NEXT:   PseudoRET implicit $v8m2
+  %va = load <vscale x 4 x i32>, ptr %pa
+	ret <vscale x 4 x i32> %va
+}
+
+define <vscale x 8 x i32>  @vload_nx8i32(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx8i32
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 8 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 8 x s32>) from %ir.pa, align 32)
+  ; RV32-NEXT:   $v8m4 = COPY [[LOAD]](<vscale x 8 x s32>)
+  ; RV32-NEXT:   PseudoRET implicit $v8m4
+  ;
+  ; RV64-LABEL: name: vload_nx8i32
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 8 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 8 x s32>) from %ir.pa, align 32)
+  ; RV64-NEXT:   $v8m4 = COPY [[LOAD]](<vscale x 8 x s32>)
+  ; RV64-NEXT:   PseudoRET implicit $v8m4
+  %va = load <vscale x 8 x i32>, ptr %pa
+	ret <vscale x 8 x i32> %va
 }
+
+define <vscale x 16 x i32>  @vload_nx16i32(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx16i32
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 16 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 16 x s32>) from %ir.pa, align 64)
+  ; RV32-NEXT:   $v8m8 = COPY [[LOAD]](<vscale x 16 x s32>)
+  ; RV32-NEXT:   PseudoRET implicit $v8m8
+  ;
+  ; RV64-LABEL: name: vload_nx16i32
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 16 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 16 x s32>) from %ir.pa, align 64)
+  ; RV64-NEXT:   $v8m8 = COPY [[LOAD]](<vscale x 16 x s32>)
+  ; RV64-NEXT:   PseudoRET implicit $v8m8
+  %va = load <vscale x 16 x i32>, ptr %pa
+	ret <vscale x 16 x i32> %va
+}
+
+define <vscale x 1 x i64>  @vload_nx1i64(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx1i64
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s64>) from %ir.pa, align 8)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s64>)
+  ; RV32-NEXT:   PseudoRET implicit $v8
+  ;
+  ; RV64-LABEL: name: vload_nx1i64
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s64>) from %ir.pa, align 8)
+  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s64>)
+  ; RV64-NEXT:   PseudoRET implicit $v8
+  %va = load <vscale x 1 x i64>, ptr %pa
+	ret <vscale x 1 x i64> %va
+}
+
+define <vscale x 2 x i64>  @vload_nx2i64(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx2i64
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 16)
+  ; RV32-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 2 x s64>)
+  ; RV32-NEXT:   PseudoRET implicit $v8m2
+  ;
+  ; RV64-LABEL: name: vload_nx2i64
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 16)
+  ; RV64-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 2 x s64>)
+  ; RV64-NEXT:   PseudoRET implicit $v8m2
+  %va = load <vscale x 2 x i64>, ptr %pa
+	ret <vscale x 2 x i64> %va
+}
+
+define <vscale x 4 x i64>  @vload_nx4i64(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx4i64
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s64>) from %ir.pa, align 32)
+  ; RV32-NEXT:   $v8m4 = COPY [[LOAD]](<vscale x 4 x s64>)
+  ; RV32-NEXT:   PseudoRET implicit $v8m4
+  ;
+  ; RV64-LABEL: name: vload_nx4i64
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s64>) from %ir.pa, align 32)
+  ; RV64-NEXT:   $v8m4 = COPY [[LOAD]](<vscale x 4 x s64>)
+  ; RV64-NEXT:   PseudoRET implicit $v8m4
+  %va = load <vscale x 4 x i64>, ptr %pa
+	ret <vscale x 4 x i64> %va
+}
+
+define <vscale x 8 x i64>  @vload_nx8i64(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx8i64
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 8 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 8 x s64>) from %ir.pa, align 64)
+  ; RV32-NEXT:   $v8m8 = COPY [[LOAD]](<vscale x 8 x s64>)
+  ; RV32-NEXT:   PseudoRET implicit $v8m8
+  ;
+  ; RV64-LABEL: name: vload_nx8i64
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 8 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 8 x s64>) from %ir.pa, align 64)
+  ; RV64-NEXT:   $v8m8 = COPY [[LOAD]](<vscale x 8 x s64>)
+  ; RV64-NEXT:   PseudoRET implicit $v8m8
+  %va = load <vscale x 8 x i64>, ptr %pa
+	ret <vscale x 8 x i64> %va
+}
+

>From 7b303ad083406dd797c28bdc1e46414f94611cde Mon Sep 17 00:00:00 2001
From: jiahanxie353 <jx353 at cornell.edu>
Date: Wed, 28 Feb 2024 12:10:17 -0500
Subject: [PATCH 07/16] MMO getType

---
 llvm/lib/CodeGen/MachineVerifier.cpp | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/llvm/lib/CodeGen/MachineVerifier.cpp b/llvm/lib/CodeGen/MachineVerifier.cpp
index b4e9882a871ad0..cfd3aa8fc68d6c 100644
--- a/llvm/lib/CodeGen/MachineVerifier.cpp
+++ b/llvm/lib/CodeGen/MachineVerifier.cpp
@@ -1198,7 +1198,7 @@ void MachineVerifier::verifyPreISelGenericInstruction(const MachineInstr *MI) {
         if (MMO.getSizeInBits() >= ValTy.getSizeInBits())
           report("Generic extload must have a narrower memory type", MI);
       } else if (MI->getOpcode() == TargetOpcode::G_LOAD) {
-        if (TypeSize::isKnownGT(MMO.getMemoryType().getSizeInBytes(), ValTy.getSizeInBytes()))
+        if (TypeSize::isKnownGT(MMO.getType().getSizeInBytes(), ValTy.getSizeInBytes()))
           report("load memory size cannot exceed result size", MI);
       } else if (MI->getOpcode() == TargetOpcode::G_STORE) {
         if (ValTy.getSizeInBytes() < MMO.getSize())

>From 01cf82428a090ecba2c3ccf93dfab57527848d76 Mon Sep 17 00:00:00 2001
From: jiahanxie353 <jx353 at cornell.edu>
Date: Wed, 28 Feb 2024 13:38:31 -0500
Subject: [PATCH 08/16] git clang format

---
 llvm/lib/CodeGen/MachineVerifier.cpp | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/llvm/lib/CodeGen/MachineVerifier.cpp b/llvm/lib/CodeGen/MachineVerifier.cpp
index cfd3aa8fc68d6c..698fe869a99609 100644
--- a/llvm/lib/CodeGen/MachineVerifier.cpp
+++ b/llvm/lib/CodeGen/MachineVerifier.cpp
@@ -1198,7 +1198,8 @@ void MachineVerifier::verifyPreISelGenericInstruction(const MachineInstr *MI) {
         if (MMO.getSizeInBits() >= ValTy.getSizeInBits())
           report("Generic extload must have a narrower memory type", MI);
       } else if (MI->getOpcode() == TargetOpcode::G_LOAD) {
-        if (TypeSize::isKnownGT(MMO.getType().getSizeInBytes(), ValTy.getSizeInBytes()))
+        if (TypeSize::isKnownGT(MMO.getType().getSizeInBytes(),
+                                ValTy.getSizeInBytes()))
           report("load memory size cannot exceed result size", MI);
       } else if (MI->getOpcode() == TargetOpcode::G_STORE) {
         if (ValTy.getSizeInBytes() < MMO.getSize())

>From bd1f3a7158c06fbff7598abb0052319794d17722 Mon Sep 17 00:00:00 2001
From: jiahanxie353 <jx353 at cornell.edu>
Date: Wed, 28 Feb 2024 13:41:30 -0500
Subject: [PATCH 09/16] check isValid for the memory type before comparing
 sizes

---
 llvm/lib/CodeGen/MachineOperand.cpp | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/llvm/lib/CodeGen/MachineOperand.cpp b/llvm/lib/CodeGen/MachineOperand.cpp
index 24a93e83a1d36b..2d20c987f1fcff 100644
--- a/llvm/lib/CodeGen/MachineOperand.cpp
+++ b/llvm/lib/CodeGen/MachineOperand.cpp
@@ -1240,7 +1240,8 @@ void MachineMemOperand::print(raw_ostream &OS, ModuleSlotTracker &MST,
        << "unknown-address";
   }
   MachineOperand::printOperandOffset(OS, getOffset());
-  unsigned MinSize = getType().getSizeInBytes().getKnownMinValue();
+  uint64_t MinSize = MemoryType.isValid() ? getType().getSizeInBytes().getKnownMinValue() : ~UINT64_C(0);
+  // TODO: getSize should return TypeSize
   if (MinSize > 0 && getAlign() != MinSize)
     OS << ", align " << getAlign().value();
   if (getAlign() != getBaseAlign())

>From 4580537b7b2bf28056de51291582fdd18c6eac4c Mon Sep 17 00:00:00 2001
From: jiahanxie353 <jx353 at cornell.edu>
Date: Wed, 28 Feb 2024 13:41:44 -0500
Subject: [PATCH 10/16] update test cases

---
 .../RISCV/GlobalISel/irtranslator/vec-ld.ll   | 60 +++++++++----------
 1 file changed, 30 insertions(+), 30 deletions(-)

diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll
index c90572d04e30c8..84e747f8957b29 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll
@@ -162,7 +162,7 @@ define <vscale x 1 x i16>  @vload_nx1i16(ptr %pa) {
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s16>) from %ir.pa, align 2)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s16>) from %ir.pa)
   ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s16>)
   ; RV32-NEXT:   PseudoRET implicit $v8
   ;
@@ -171,7 +171,7 @@ define <vscale x 1 x i16>  @vload_nx1i16(ptr %pa) {
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s16>) from %ir.pa, align 2)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s16>) from %ir.pa)
   ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s16>)
   ; RV64-NEXT:   PseudoRET implicit $v8
   %va = load <vscale x 1 x i16>, ptr %pa
@@ -184,7 +184,7 @@ define <vscale x 2 x i16>  @vload_nx2i16(ptr %pa) {
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s16>) from %ir.pa, align 4)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s16>) from %ir.pa)
   ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s16>)
   ; RV32-NEXT:   PseudoRET implicit $v8
   ;
@@ -193,7 +193,7 @@ define <vscale x 2 x i16>  @vload_nx2i16(ptr %pa) {
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s16>) from %ir.pa, align 4)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s16>) from %ir.pa)
   ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s16>)
   ; RV64-NEXT:   PseudoRET implicit $v8
   %va = load <vscale x 2 x i16>, ptr %pa
@@ -206,7 +206,7 @@ define <vscale x 4 x i16>  @vload_nx4i16(ptr %pa) {
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa, align 8)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa)
   ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s16>)
   ; RV32-NEXT:   PseudoRET implicit $v8
   ;
@@ -215,7 +215,7 @@ define <vscale x 4 x i16>  @vload_nx4i16(ptr %pa) {
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa, align 8)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa)
   ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s16>)
   ; RV64-NEXT:   PseudoRET implicit $v8
   %va = load <vscale x 4 x i16>, ptr %pa
@@ -228,7 +228,7 @@ define <vscale x 8 x i16>  @vload_nx8i16(ptr %pa) {
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 8 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 8 x s16>) from %ir.pa, align 16)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 8 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 8 x s16>) from %ir.pa)
   ; RV32-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 8 x s16>)
   ; RV32-NEXT:   PseudoRET implicit $v8m2
   ;
@@ -237,7 +237,7 @@ define <vscale x 8 x i16>  @vload_nx8i16(ptr %pa) {
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 8 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 8 x s16>) from %ir.pa, align 16)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 8 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 8 x s16>) from %ir.pa)
   ; RV64-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 8 x s16>)
   ; RV64-NEXT:   PseudoRET implicit $v8m2
   %va = load <vscale x 8 x i16>, ptr %pa
@@ -250,7 +250,7 @@ define <vscale x 16 x i16>  @vload_nx16i16(ptr %pa) {
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 16 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 16 x s16>) from %ir.pa, align 32)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 16 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 16 x s16>) from %ir.pa)
   ; RV32-NEXT:   $v8m4 = COPY [[LOAD]](<vscale x 16 x s16>)
   ; RV32-NEXT:   PseudoRET implicit $v8m4
   ;
@@ -259,7 +259,7 @@ define <vscale x 16 x i16>  @vload_nx16i16(ptr %pa) {
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 16 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 16 x s16>) from %ir.pa, align 32)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 16 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 16 x s16>) from %ir.pa)
   ; RV64-NEXT:   $v8m4 = COPY [[LOAD]](<vscale x 16 x s16>)
   ; RV64-NEXT:   PseudoRET implicit $v8m4
   %va = load <vscale x 16 x i16>, ptr %pa
@@ -272,7 +272,7 @@ define <vscale x 32 x i16>  @vload_nx32i16(ptr %pa) {
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 32 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 32 x s16>) from %ir.pa, align 64)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 32 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 32 x s16>) from %ir.pa)
   ; RV32-NEXT:   $v8m8 = COPY [[LOAD]](<vscale x 32 x s16>)
   ; RV32-NEXT:   PseudoRET implicit $v8m8
   ;
@@ -281,7 +281,7 @@ define <vscale x 32 x i16>  @vload_nx32i16(ptr %pa) {
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 32 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 32 x s16>) from %ir.pa, align 64)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 32 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 32 x s16>) from %ir.pa)
   ; RV64-NEXT:   $v8m8 = COPY [[LOAD]](<vscale x 32 x s16>)
   ; RV64-NEXT:   PseudoRET implicit $v8m8
   %va = load <vscale x 32 x i16>, ptr %pa
@@ -294,7 +294,7 @@ define <vscale x 1 x i32>  @vload_nx1i32(ptr %pa) {
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s32>) from %ir.pa, align 4)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s32>) from %ir.pa)
   ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s32>)
   ; RV32-NEXT:   PseudoRET implicit $v8
   ;
@@ -303,7 +303,7 @@ define <vscale x 1 x i32>  @vload_nx1i32(ptr %pa) {
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s32>) from %ir.pa, align 4)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s32>) from %ir.pa)
   ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s32>)
   ; RV64-NEXT:   PseudoRET implicit $v8
   %va = load <vscale x 1 x i32>, ptr %pa
@@ -316,7 +316,7 @@ define <vscale x 2 x i32>  @vload_nx2i32(ptr %pa) {
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa, align 8)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa)
   ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s32>)
   ; RV32-NEXT:   PseudoRET implicit $v8
   ;
@@ -325,7 +325,7 @@ define <vscale x 2 x i32>  @vload_nx2i32(ptr %pa) {
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa, align 8)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa)
   ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s32>)
   ; RV64-NEXT:   PseudoRET implicit $v8
   %va = load <vscale x 2 x i32>, ptr %pa
@@ -338,7 +338,7 @@ define <vscale x 4 x i32>  @vload_nx4i32(ptr %pa) {
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s32>) from %ir.pa, align 16)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s32>) from %ir.pa)
   ; RV32-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 4 x s32>)
   ; RV32-NEXT:   PseudoRET implicit $v8m2
   ;
@@ -347,7 +347,7 @@ define <vscale x 4 x i32>  @vload_nx4i32(ptr %pa) {
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s32>) from %ir.pa, align 16)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s32>) from %ir.pa)
   ; RV64-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 4 x s32>)
   ; RV64-NEXT:   PseudoRET implicit $v8m2
   %va = load <vscale x 4 x i32>, ptr %pa
@@ -360,7 +360,7 @@ define <vscale x 8 x i32>  @vload_nx8i32(ptr %pa) {
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 8 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 8 x s32>) from %ir.pa, align 32)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 8 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 8 x s32>) from %ir.pa)
   ; RV32-NEXT:   $v8m4 = COPY [[LOAD]](<vscale x 8 x s32>)
   ; RV32-NEXT:   PseudoRET implicit $v8m4
   ;
@@ -369,7 +369,7 @@ define <vscale x 8 x i32>  @vload_nx8i32(ptr %pa) {
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 8 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 8 x s32>) from %ir.pa, align 32)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 8 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 8 x s32>) from %ir.pa)
   ; RV64-NEXT:   $v8m4 = COPY [[LOAD]](<vscale x 8 x s32>)
   ; RV64-NEXT:   PseudoRET implicit $v8m4
   %va = load <vscale x 8 x i32>, ptr %pa
@@ -382,7 +382,7 @@ define <vscale x 16 x i32>  @vload_nx16i32(ptr %pa) {
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 16 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 16 x s32>) from %ir.pa, align 64)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 16 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 16 x s32>) from %ir.pa)
   ; RV32-NEXT:   $v8m8 = COPY [[LOAD]](<vscale x 16 x s32>)
   ; RV32-NEXT:   PseudoRET implicit $v8m8
   ;
@@ -391,7 +391,7 @@ define <vscale x 16 x i32>  @vload_nx16i32(ptr %pa) {
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 16 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 16 x s32>) from %ir.pa, align 64)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 16 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 16 x s32>) from %ir.pa)
   ; RV64-NEXT:   $v8m8 = COPY [[LOAD]](<vscale x 16 x s32>)
   ; RV64-NEXT:   PseudoRET implicit $v8m8
   %va = load <vscale x 16 x i32>, ptr %pa
@@ -404,7 +404,7 @@ define <vscale x 1 x i64>  @vload_nx1i64(ptr %pa) {
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s64>) from %ir.pa, align 8)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s64>) from %ir.pa)
   ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s64>)
   ; RV32-NEXT:   PseudoRET implicit $v8
   ;
@@ -413,7 +413,7 @@ define <vscale x 1 x i64>  @vload_nx1i64(ptr %pa) {
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s64>) from %ir.pa, align 8)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s64>) from %ir.pa)
   ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s64>)
   ; RV64-NEXT:   PseudoRET implicit $v8
   %va = load <vscale x 1 x i64>, ptr %pa
@@ -426,7 +426,7 @@ define <vscale x 2 x i64>  @vload_nx2i64(ptr %pa) {
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 16)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa)
   ; RV32-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 2 x s64>)
   ; RV32-NEXT:   PseudoRET implicit $v8m2
   ;
@@ -435,7 +435,7 @@ define <vscale x 2 x i64>  @vload_nx2i64(ptr %pa) {
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 16)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa)
   ; RV64-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 2 x s64>)
   ; RV64-NEXT:   PseudoRET implicit $v8m2
   %va = load <vscale x 2 x i64>, ptr %pa
@@ -448,7 +448,7 @@ define <vscale x 4 x i64>  @vload_nx4i64(ptr %pa) {
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s64>) from %ir.pa, align 32)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s64>) from %ir.pa)
   ; RV32-NEXT:   $v8m4 = COPY [[LOAD]](<vscale x 4 x s64>)
   ; RV32-NEXT:   PseudoRET implicit $v8m4
   ;
@@ -457,7 +457,7 @@ define <vscale x 4 x i64>  @vload_nx4i64(ptr %pa) {
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s64>) from %ir.pa, align 32)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s64>) from %ir.pa)
   ; RV64-NEXT:   $v8m4 = COPY [[LOAD]](<vscale x 4 x s64>)
   ; RV64-NEXT:   PseudoRET implicit $v8m4
   %va = load <vscale x 4 x i64>, ptr %pa
@@ -470,7 +470,7 @@ define <vscale x 8 x i64>  @vload_nx8i64(ptr %pa) {
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 8 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 8 x s64>) from %ir.pa, align 64)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 8 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 8 x s64>) from %ir.pa)
   ; RV32-NEXT:   $v8m8 = COPY [[LOAD]](<vscale x 8 x s64>)
   ; RV32-NEXT:   PseudoRET implicit $v8m8
   ;
@@ -479,7 +479,7 @@ define <vscale x 8 x i64>  @vload_nx8i64(ptr %pa) {
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 8 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 8 x s64>) from %ir.pa, align 64)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 8 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 8 x s64>) from %ir.pa)
   ; RV64-NEXT:   $v8m8 = COPY [[LOAD]](<vscale x 8 x s64>)
   ; RV64-NEXT:   PseudoRET implicit $v8m8
   %va = load <vscale x 8 x i64>, ptr %pa

>From 97ee5285d01bf7d1e179dc4e20a11d2b5d10ca9e Mon Sep 17 00:00:00 2001
From: jiahanxie353 <jx353 at cornell.edu>
Date: Wed, 28 Feb 2024 13:42:55 -0500
Subject: [PATCH 11/16] clang format

---
 llvm/lib/CodeGen/MachineOperand.cpp | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/llvm/lib/CodeGen/MachineOperand.cpp b/llvm/lib/CodeGen/MachineOperand.cpp
index 2d20c987f1fcff..4f248351ca35ca 100644
--- a/llvm/lib/CodeGen/MachineOperand.cpp
+++ b/llvm/lib/CodeGen/MachineOperand.cpp
@@ -1240,7 +1240,9 @@ void MachineMemOperand::print(raw_ostream &OS, ModuleSlotTracker &MST,
        << "unknown-address";
   }
   MachineOperand::printOperandOffset(OS, getOffset());
-  uint64_t MinSize = MemoryType.isValid() ? getType().getSizeInBytes().getKnownMinValue() : ~UINT64_C(0);
+  uint64_t MinSize = MemoryType.isValid()
+                         ? getType().getSizeInBytes().getKnownMinValue()
+                         : ~UINT64_C(0);
   // TODO: getSize should return TypeSize
   if (MinSize > 0 && getAlign() != MinSize)
     OS << ", align " << getAlign().value();

>From d010b2e4c7e8c47c5dd596a39acbe447d61779be Mon Sep 17 00:00:00 2001
From: jiahanxie353 <jx353 at cornell.edu>
Date: Wed, 28 Feb 2024 14:37:59 -0500
Subject: [PATCH 12/16] add align argument

---
 .../RISCV/GlobalISel/irtranslator/vec-ld.ll   | 374 ++++++++++++++++--
 1 file changed, 352 insertions(+), 22 deletions(-)

diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll
index 84e747f8957b29..f9a9ed9ae44990 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll
@@ -21,7 +21,7 @@ define <vscale x 1 x i8>  @vload_nx1i8(ptr %pa) {
   ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s8>)
   ; RV64-NEXT:   PseudoRET implicit $v8
   %va = load <vscale x 1 x i8>, ptr %pa
-	ret <vscale x 1 x i8> %va
+  ret <vscale x 1 x i8> %va
 }
 
 define <vscale x 2 x i8>  @vload_nx2i8(ptr %pa) {
@@ -43,7 +43,7 @@ define <vscale x 2 x i8>  @vload_nx2i8(ptr %pa) {
   ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s8>)
   ; RV64-NEXT:   PseudoRET implicit $v8
   %va = load <vscale x 2 x i8>, ptr %pa
-	ret <vscale x 2 x i8> %va
+  ret <vscale x 2 x i8> %va
 }
 
 define <vscale x 4 x i8>  @vload_nx4i8(ptr %pa) {
@@ -65,7 +65,7 @@ define <vscale x 4 x i8>  @vload_nx4i8(ptr %pa) {
   ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s8>)
   ; RV64-NEXT:   PseudoRET implicit $v8
   %va = load <vscale x 4 x i8>, ptr %pa
-	ret <vscale x 4 x i8> %va
+  ret <vscale x 4 x i8> %va
 }
 
 define <vscale x 8 x i8>  @vload_nx8i8(ptr %pa) {
@@ -87,7 +87,7 @@ define <vscale x 8 x i8>  @vload_nx8i8(ptr %pa) {
   ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 8 x s8>)
   ; RV64-NEXT:   PseudoRET implicit $v8
   %va = load <vscale x 8 x i8>, ptr %pa
-	ret <vscale x 8 x i8> %va
+  ret <vscale x 8 x i8> %va
 }
 
 define <vscale x 16 x i8>  @vload_nx16i8(ptr %pa) {
@@ -109,7 +109,7 @@ define <vscale x 16 x i8>  @vload_nx16i8(ptr %pa) {
   ; RV64-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 16 x s8>)
   ; RV64-NEXT:   PseudoRET implicit $v8m2
   %va = load <vscale x 16 x i8>, ptr %pa
-	ret <vscale x 16 x i8> %va
+  ret <vscale x 16 x i8> %va
 }
 
 define <vscale x 32 x i8>  @vload_nx32i8(ptr %pa) {
@@ -131,7 +131,7 @@ define <vscale x 32 x i8>  @vload_nx32i8(ptr %pa) {
   ; RV64-NEXT:   $v8m4 = COPY [[LOAD]](<vscale x 32 x s8>)
   ; RV64-NEXT:   PseudoRET implicit $v8m4
   %va = load <vscale x 32 x i8>, ptr %pa
-	ret <vscale x 32 x i8> %va
+  ret <vscale x 32 x i8> %va
 }
 
 define <vscale x 64 x i8>  @vload_nx64i8(ptr %pa) {
@@ -153,7 +153,7 @@ define <vscale x 64 x i8>  @vload_nx64i8(ptr %pa) {
   ; RV64-NEXT:   $v8m8 = COPY [[LOAD]](<vscale x 64 x s8>)
   ; RV64-NEXT:   PseudoRET implicit $v8m8
   %va = load <vscale x 64 x i8>, ptr %pa
-	ret <vscale x 64 x i8> %va
+  ret <vscale x 64 x i8> %va
 }
 
 define <vscale x 1 x i16>  @vload_nx1i16(ptr %pa) {
@@ -175,7 +175,7 @@ define <vscale x 1 x i16>  @vload_nx1i16(ptr %pa) {
   ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s16>)
   ; RV64-NEXT:   PseudoRET implicit $v8
   %va = load <vscale x 1 x i16>, ptr %pa
-	ret <vscale x 1 x i16> %va
+  ret <vscale x 1 x i16> %va
 }
 
 define <vscale x 2 x i16>  @vload_nx2i16(ptr %pa) {
@@ -197,7 +197,7 @@ define <vscale x 2 x i16>  @vload_nx2i16(ptr %pa) {
   ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s16>)
   ; RV64-NEXT:   PseudoRET implicit $v8
   %va = load <vscale x 2 x i16>, ptr %pa
-	ret <vscale x 2 x i16> %va
+  ret <vscale x 2 x i16> %va
 }
 
 define <vscale x 4 x i16>  @vload_nx4i16(ptr %pa) {
@@ -219,7 +219,7 @@ define <vscale x 4 x i16>  @vload_nx4i16(ptr %pa) {
   ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s16>)
   ; RV64-NEXT:   PseudoRET implicit $v8
   %va = load <vscale x 4 x i16>, ptr %pa
-	ret <vscale x 4 x i16> %va
+  ret <vscale x 4 x i16> %va
 }
 
 define <vscale x 8 x i16>  @vload_nx8i16(ptr %pa) {
@@ -241,7 +241,7 @@ define <vscale x 8 x i16>  @vload_nx8i16(ptr %pa) {
   ; RV64-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 8 x s16>)
   ; RV64-NEXT:   PseudoRET implicit $v8m2
   %va = load <vscale x 8 x i16>, ptr %pa
-	ret <vscale x 8 x i16> %va
+  ret <vscale x 8 x i16> %va
 }
 
 define <vscale x 16 x i16>  @vload_nx16i16(ptr %pa) {
@@ -263,7 +263,7 @@ define <vscale x 16 x i16>  @vload_nx16i16(ptr %pa) {
   ; RV64-NEXT:   $v8m4 = COPY [[LOAD]](<vscale x 16 x s16>)
   ; RV64-NEXT:   PseudoRET implicit $v8m4
   %va = load <vscale x 16 x i16>, ptr %pa
-	ret <vscale x 16 x i16> %va
+  ret <vscale x 16 x i16> %va
 }
 
 define <vscale x 32 x i16>  @vload_nx32i16(ptr %pa) {
@@ -285,7 +285,7 @@ define <vscale x 32 x i16>  @vload_nx32i16(ptr %pa) {
   ; RV64-NEXT:   $v8m8 = COPY [[LOAD]](<vscale x 32 x s16>)
   ; RV64-NEXT:   PseudoRET implicit $v8m8
   %va = load <vscale x 32 x i16>, ptr %pa
-	ret <vscale x 32 x i16> %va
+  ret <vscale x 32 x i16> %va
 }
 
 define <vscale x 1 x i32>  @vload_nx1i32(ptr %pa) {
@@ -307,7 +307,7 @@ define <vscale x 1 x i32>  @vload_nx1i32(ptr %pa) {
   ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s32>)
   ; RV64-NEXT:   PseudoRET implicit $v8
   %va = load <vscale x 1 x i32>, ptr %pa
-	ret <vscale x 1 x i32> %va
+  ret <vscale x 1 x i32> %va
 }
 
 define <vscale x 2 x i32>  @vload_nx2i32(ptr %pa) {
@@ -329,7 +329,7 @@ define <vscale x 2 x i32>  @vload_nx2i32(ptr %pa) {
   ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s32>)
   ; RV64-NEXT:   PseudoRET implicit $v8
   %va = load <vscale x 2 x i32>, ptr %pa
-	ret <vscale x 2 x i32> %va
+  ret <vscale x 2 x i32> %va
 }
 
 define <vscale x 4 x i32>  @vload_nx4i32(ptr %pa) {
@@ -351,7 +351,7 @@ define <vscale x 4 x i32>  @vload_nx4i32(ptr %pa) {
   ; RV64-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 4 x s32>)
   ; RV64-NEXT:   PseudoRET implicit $v8m2
   %va = load <vscale x 4 x i32>, ptr %pa
-	ret <vscale x 4 x i32> %va
+  ret <vscale x 4 x i32> %va
 }
 
 define <vscale x 8 x i32>  @vload_nx8i32(ptr %pa) {
@@ -373,7 +373,7 @@ define <vscale x 8 x i32>  @vload_nx8i32(ptr %pa) {
   ; RV64-NEXT:   $v8m4 = COPY [[LOAD]](<vscale x 8 x s32>)
   ; RV64-NEXT:   PseudoRET implicit $v8m4
   %va = load <vscale x 8 x i32>, ptr %pa
-	ret <vscale x 8 x i32> %va
+  ret <vscale x 8 x i32> %va
 }
 
 define <vscale x 16 x i32>  @vload_nx16i32(ptr %pa) {
@@ -395,7 +395,7 @@ define <vscale x 16 x i32>  @vload_nx16i32(ptr %pa) {
   ; RV64-NEXT:   $v8m8 = COPY [[LOAD]](<vscale x 16 x s32>)
   ; RV64-NEXT:   PseudoRET implicit $v8m8
   %va = load <vscale x 16 x i32>, ptr %pa
-	ret <vscale x 16 x i32> %va
+  ret <vscale x 16 x i32> %va
 }
 
 define <vscale x 1 x i64>  @vload_nx1i64(ptr %pa) {
@@ -417,7 +417,7 @@ define <vscale x 1 x i64>  @vload_nx1i64(ptr %pa) {
   ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s64>)
   ; RV64-NEXT:   PseudoRET implicit $v8
   %va = load <vscale x 1 x i64>, ptr %pa
-	ret <vscale x 1 x i64> %va
+  ret <vscale x 1 x i64> %va
 }
 
 define <vscale x 2 x i64>  @vload_nx2i64(ptr %pa) {
@@ -439,7 +439,7 @@ define <vscale x 2 x i64>  @vload_nx2i64(ptr %pa) {
   ; RV64-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 2 x s64>)
   ; RV64-NEXT:   PseudoRET implicit $v8m2
   %va = load <vscale x 2 x i64>, ptr %pa
-	ret <vscale x 2 x i64> %va
+  ret <vscale x 2 x i64> %va
 }
 
 define <vscale x 4 x i64>  @vload_nx4i64(ptr %pa) {
@@ -461,7 +461,7 @@ define <vscale x 4 x i64>  @vload_nx4i64(ptr %pa) {
   ; RV64-NEXT:   $v8m4 = COPY [[LOAD]](<vscale x 4 x s64>)
   ; RV64-NEXT:   PseudoRET implicit $v8m4
   %va = load <vscale x 4 x i64>, ptr %pa
-	ret <vscale x 4 x i64> %va
+  ret <vscale x 4 x i64> %va
 }
 
 define <vscale x 8 x i64>  @vload_nx8i64(ptr %pa) {
@@ -483,6 +483,336 @@ define <vscale x 8 x i64>  @vload_nx8i64(ptr %pa) {
   ; RV64-NEXT:   $v8m8 = COPY [[LOAD]](<vscale x 8 x s64>)
   ; RV64-NEXT:   PseudoRET implicit $v8m8
   %va = load <vscale x 8 x i64>, ptr %pa
-	ret <vscale x 8 x i64> %va
+  ret <vscale x 8 x i64> %va
+}
+
+define <vscale x 1 x i8>  @vload_nx1i8_align2(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx1i8_align2
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s8>) from %ir.pa, align 2)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s8>)
+  ; RV32-NEXT:   PseudoRET implicit $v8
+  ;
+  ; RV64-LABEL: name: vload_nx1i8_align2
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s8>) from %ir.pa, align 2)
+  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s8>)
+  ; RV64-NEXT:   PseudoRET implicit $v8
+  %va = load <vscale x 1 x i8>, ptr %pa, align 2
+  ret <vscale x 1 x i8> %va
+}
+
+define <vscale x 1 x i8>  @vload_nx1i8_align8(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx1i8_align8
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s8>) from %ir.pa, align 8)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s8>)
+  ; RV32-NEXT:   PseudoRET implicit $v8
+  ;
+  ; RV64-LABEL: name: vload_nx1i8_align8
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s8>) from %ir.pa, align 8)
+  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s8>)
+  ; RV64-NEXT:   PseudoRET implicit $v8
+  %va = load <vscale x 1 x i8>, ptr %pa, align 8
+  ret <vscale x 1 x i8> %va
+}
+
+define <vscale x 1 x i8>  @vload_nx1i8_align32(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx1i8_align32
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s8>) from %ir.pa, align 32)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s8>)
+  ; RV32-NEXT:   PseudoRET implicit $v8
+  ;
+  ; RV64-LABEL: name: vload_nx1i8_align32
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s8>) from %ir.pa, align 32)
+  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s8>)
+  ; RV64-NEXT:   PseudoRET implicit $v8
+  %va = load <vscale x 1 x i8>, ptr %pa, align 32
+  ret <vscale x 1 x i8> %va
+}
+
+define <vscale x 4 x i16>  @vload_nx4i16_align8(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx4i16_align8
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s16>)
+  ; RV32-NEXT:   PseudoRET implicit $v8
+  ;
+  ; RV64-LABEL: name: vload_nx4i16_align8
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa)
+  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s16>)
+  ; RV64-NEXT:   PseudoRET implicit $v8
+  %va = load <vscale x 4 x i16>, ptr %pa, align 8
+  ret <vscale x 4 x i16> %va
+}
+
+define <vscale x 4 x i16>  @vload_nx4i16_align16(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx4i16_align16
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa, align 16)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s16>)
+  ; RV32-NEXT:   PseudoRET implicit $v8
+  ;
+  ; RV64-LABEL: name: vload_nx4i16_align16
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa, align 16)
+  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s16>)
+  ; RV64-NEXT:   PseudoRET implicit $v8
+  %va = load <vscale x 4 x i16>, ptr %pa, align 16
+  ret <vscale x 4 x i16> %va
+}
+
+define <vscale x 4 x i16>  @vload_nx4i16_align64(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx4i16_align64
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa, align 64)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s16>)
+  ; RV32-NEXT:   PseudoRET implicit $v8
+  ;
+  ; RV64-LABEL: name: vload_nx4i16_align64
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa, align 64)
+  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s16>)
+  ; RV64-NEXT:   PseudoRET implicit $v8
+  %va = load <vscale x 4 x i16>, ptr %pa, align 64
+  ret <vscale x 4 x i16> %va
+}
+
+define <vscale x 4 x i16>  @vload_nx4i16_align128(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx4i16_align128
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa, align 128)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s16>)
+  ; RV32-NEXT:   PseudoRET implicit $v8
+  ;
+  ; RV64-LABEL: name: vload_nx4i16_align128
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa, align 128)
+  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s16>)
+  ; RV64-NEXT:   PseudoRET implicit $v8
+  %va = load <vscale x 4 x i16>, ptr %pa, align 128
+  ret <vscale x 4 x i16> %va
+}
+
+define <vscale x 2 x i32>  @vload_nx2i32_align4(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx2i32_align4
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa, align 4)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s32>)
+  ; RV32-NEXT:   PseudoRET implicit $v8
+  ;
+  ; RV64-LABEL: name: vload_nx2i32_align4
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa, align 4)
+  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s32>)
+  ; RV64-NEXT:   PseudoRET implicit $v8
+  %va = load <vscale x 2 x i32>, ptr %pa, align 4
+  ret <vscale x 2 x i32> %va
+}
+
+define <vscale x 2 x i32>  @vload_nx2i32_align32(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx2i32_align32
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa, align 32)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s32>)
+  ; RV32-NEXT:   PseudoRET implicit $v8
+  ;
+  ; RV64-LABEL: name: vload_nx2i32_align32
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa, align 32)
+  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s32>)
+  ; RV64-NEXT:   PseudoRET implicit $v8
+  %va = load <vscale x 2 x i32>, ptr %pa, align 32
+  ret <vscale x 2 x i32> %va
+}
+
+define <vscale x 2 x i32>  @vload_nx2i32_align64(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx2i32_align64
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa, align 64)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s32>)
+  ; RV32-NEXT:   PseudoRET implicit $v8
+  ;
+  ; RV64-LABEL: name: vload_nx2i32_align64
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa, align 64)
+  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s32>)
+  ; RV64-NEXT:   PseudoRET implicit $v8
+  %va = load <vscale x 2 x i32>, ptr %pa, align 64
+  ret <vscale x 2 x i32> %va
+}
+
+define <vscale x 2 x i32>  @vload_nx2i32_align128(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx2i32_align128
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa, align 128)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s32>)
+  ; RV32-NEXT:   PseudoRET implicit $v8
+  ;
+  ; RV64-LABEL: name: vload_nx2i32_align128
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa, align 128)
+  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s32>)
+  ; RV64-NEXT:   PseudoRET implicit $v8
+  %va = load <vscale x 2 x i32>, ptr %pa, align 128
+  ret <vscale x 2 x i32> %va
+}
+
+define <vscale x 2 x i64>  @vload_nx2i64_align32(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx2i64_align32
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 32)
+  ; RV32-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 2 x s64>)
+  ; RV32-NEXT:   PseudoRET implicit $v8m2
+  ;
+  ; RV64-LABEL: name: vload_nx2i64_align32
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 32)
+  ; RV64-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 2 x s64>)
+  ; RV64-NEXT:   PseudoRET implicit $v8m2
+  %va = load <vscale x 2 x i64>, ptr %pa, align 32
+  ret <vscale x 2 x i64> %va
+}
+
+define <vscale x 2 x i64>  @vload_nx2i64_align64(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx2i64_align64
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 64)
+  ; RV32-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 2 x s64>)
+  ; RV32-NEXT:   PseudoRET implicit $v8m2
+  ;
+  ; RV64-LABEL: name: vload_nx2i64_align64
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 64)
+  ; RV64-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 2 x s64>)
+  ; RV64-NEXT:   PseudoRET implicit $v8m2
+  %va = load <vscale x 2 x i64>, ptr %pa, align 64
+  ret <vscale x 2 x i64> %va
+}
+
+define <vscale x 2 x i64>  @vload_nx2i64_align128(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx2i64_align128
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 128)
+  ; RV32-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 2 x s64>)
+  ; RV32-NEXT:   PseudoRET implicit $v8m2
+  ;
+  ; RV64-LABEL: name: vload_nx2i64_align128
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 128)
+  ; RV64-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 2 x s64>)
+  ; RV64-NEXT:   PseudoRET implicit $v8m2
+  %va = load <vscale x 2 x i64>, ptr %pa, align 128
+  ret <vscale x 2 x i64> %va
+}
+
+define <vscale x 2 x i64>  @vload_nx2i64_align256(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx2i64_align256
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 256)
+  ; RV32-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 2 x s64>)
+  ; RV32-NEXT:   PseudoRET implicit $v8m2
+  ;
+  ; RV64-LABEL: name: vload_nx2i64_align256
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 256)
+  ; RV64-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 2 x s64>)
+  ; RV64-NEXT:   PseudoRET implicit $v8m2
+  %va = load <vscale x 2 x i64>, ptr %pa, align 256
+  ret <vscale x 2 x i64> %va
 }
 

>From a669217b43f8f8261c22807534df02b847692e7b Mon Sep 17 00:00:00 2001
From: jiahanxie353 <jx353 at cornell.edu>
Date: Tue, 5 Mar 2024 17:11:53 -0500
Subject: [PATCH 13/16] alignment can be smaller than, equal to, or larger
 than both the individual element size and the total vector size

---
 .../RISCV/GlobalISel/irtranslator/vec-ld.ll   | 262 +++++++++++-------
 1 file changed, 163 insertions(+), 99 deletions(-)

diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll
index f9a9ed9ae44990..5650e7422bec5d 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll
@@ -486,160 +486,225 @@ define <vscale x 8 x i64>  @vload_nx8i64(ptr %pa) {
   ret <vscale x 8 x i64> %va
 }
 
-define <vscale x 1 x i8>  @vload_nx1i8_align2(ptr %pa) {
-  ; RV32-LABEL: name: vload_nx1i8_align2
+define <vscale x 16 x i8>  @vload_nx16i8_align1(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx16i8_align1
   ; RV32: bb.1 (%ir-block.0):
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s8>) from %ir.pa, align 2)
-  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s8>)
-  ; RV32-NEXT:   PseudoRET implicit $v8
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 16 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 16 x s8>) from %ir.pa, align 1)
+  ; RV32-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 16 x s8>)
+  ; RV32-NEXT:   PseudoRET implicit $v8m2
   ;
-  ; RV64-LABEL: name: vload_nx1i8_align2
+  ; RV64-LABEL: name: vload_nx16i8_align1
   ; RV64: bb.1 (%ir-block.0):
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s8>) from %ir.pa, align 2)
-  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s8>)
-  ; RV64-NEXT:   PseudoRET implicit $v8
-  %va = load <vscale x 1 x i8>, ptr %pa, align 2
-  ret <vscale x 1 x i8> %va
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 16 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 16 x s8>) from %ir.pa, align 1)
+  ; RV64-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 16 x s8>)
+  ; RV64-NEXT:   PseudoRET implicit $v8m2
+  %va = load <vscale x 16 x i8>, ptr %pa, align 1
+  ret <vscale x 16 x i8> %va
 }
 
-define <vscale x 1 x i8>  @vload_nx1i8_align8(ptr %pa) {
-  ; RV32-LABEL: name: vload_nx1i8_align8
+define <vscale x 16 x i8>  @vload_nx16i8_align2(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx16i8_align2
   ; RV32: bb.1 (%ir-block.0):
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s8>) from %ir.pa, align 8)
-  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s8>)
-  ; RV32-NEXT:   PseudoRET implicit $v8
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 16 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 16 x s8>) from %ir.pa, align 2)
+  ; RV32-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 16 x s8>)
+  ; RV32-NEXT:   PseudoRET implicit $v8m2
   ;
-  ; RV64-LABEL: name: vload_nx1i8_align8
+  ; RV64-LABEL: name: vload_nx16i8_align2
   ; RV64: bb.1 (%ir-block.0):
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s8>) from %ir.pa, align 8)
-  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s8>)
-  ; RV64-NEXT:   PseudoRET implicit $v8
-  %va = load <vscale x 1 x i8>, ptr %pa, align 8
-  ret <vscale x 1 x i8> %va
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 16 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 16 x s8>) from %ir.pa, align 2)
+  ; RV64-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 16 x s8>)
+  ; RV64-NEXT:   PseudoRET implicit $v8m2
+  %va = load <vscale x 16 x i8>, ptr %pa, align 2
+  ret <vscale x 16 x i8> %va
 }
 
-define <vscale x 1 x i8>  @vload_nx1i8_align32(ptr %pa) {
-  ; RV32-LABEL: name: vload_nx1i8_align32
+define <vscale x 16 x i8>  @vload_nx16i8_align16(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx16i8_align16
   ; RV32: bb.1 (%ir-block.0):
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s8>) from %ir.pa, align 32)
-  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s8>)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 16 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 16 x s8>) from %ir.pa)
+  ; RV32-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 16 x s8>)
+  ; RV32-NEXT:   PseudoRET implicit $v8m2
+  ;
+  ; RV64-LABEL: name: vload_nx16i8_align16
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 16 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 16 x s8>) from %ir.pa)
+  ; RV64-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 16 x s8>)
+  ; RV64-NEXT:   PseudoRET implicit $v8m2
+  %va = load <vscale x 16 x i8>, ptr %pa, align 16
+  ret <vscale x 16 x i8> %va
+}
+
+define <vscale x 16 x i8>  @vload_nx16i8_align64(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx16i8_align64
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 16 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 16 x s8>) from %ir.pa, align 64)
+  ; RV32-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 16 x s8>)
+  ; RV32-NEXT:   PseudoRET implicit $v8m2
+  ;
+  ; RV64-LABEL: name: vload_nx16i8_align64
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 16 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 16 x s8>) from %ir.pa, align 64)
+  ; RV64-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 16 x s8>)
+  ; RV64-NEXT:   PseudoRET implicit $v8m2
+  %va = load <vscale x 16 x i8>, ptr %pa, align 64
+  ret <vscale x 16 x i8> %va
+}
+
+define <vscale x 4 x i16>  @vload_nx4i16_align1(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx4i16_align1
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa, align 1)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s16>)
   ; RV32-NEXT:   PseudoRET implicit $v8
   ;
-  ; RV64-LABEL: name: vload_nx1i8_align32
+  ; RV64-LABEL: name: vload_nx4i16_align1
   ; RV64: bb.1 (%ir-block.0):
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x s8>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x s8>) from %ir.pa, align 32)
-  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x s8>)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa, align 1)
+  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s16>)
   ; RV64-NEXT:   PseudoRET implicit $v8
-  %va = load <vscale x 1 x i8>, ptr %pa, align 32
-  ret <vscale x 1 x i8> %va
+  %va = load <vscale x 4 x i16>, ptr %pa, align 1
+  ret <vscale x 4 x i16> %va
 }
 
-define <vscale x 4 x i16>  @vload_nx4i16_align8(ptr %pa) {
-  ; RV32-LABEL: name: vload_nx4i16_align8
+define <vscale x 4 x i16>  @vload_nx4i16_align2(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx4i16_align2
   ; RV32: bb.1 (%ir-block.0):
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa, align 2)
   ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s16>)
   ; RV32-NEXT:   PseudoRET implicit $v8
   ;
-  ; RV64-LABEL: name: vload_nx4i16_align8
+  ; RV64-LABEL: name: vload_nx4i16_align2
   ; RV64: bb.1 (%ir-block.0):
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa, align 2)
   ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s16>)
   ; RV64-NEXT:   PseudoRET implicit $v8
-  %va = load <vscale x 4 x i16>, ptr %pa, align 8
+  %va = load <vscale x 4 x i16>, ptr %pa, align 2
   ret <vscale x 4 x i16> %va
 }
 
-define <vscale x 4 x i16>  @vload_nx4i16_align16(ptr %pa) {
-  ; RV32-LABEL: name: vload_nx4i16_align16
+define <vscale x 4 x i16>  @vload_nx4i16_align4(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx4i16_align4
   ; RV32: bb.1 (%ir-block.0):
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa, align 16)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa, align 4)
   ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s16>)
   ; RV32-NEXT:   PseudoRET implicit $v8
   ;
-  ; RV64-LABEL: name: vload_nx4i16_align16
+  ; RV64-LABEL: name: vload_nx4i16_align4
   ; RV64: bb.1 (%ir-block.0):
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa, align 16)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa, align 4)
   ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s16>)
   ; RV64-NEXT:   PseudoRET implicit $v8
-  %va = load <vscale x 4 x i16>, ptr %pa, align 16
+  %va = load <vscale x 4 x i16>, ptr %pa, align 4
   ret <vscale x 4 x i16> %va
 }
-
-define <vscale x 4 x i16>  @vload_nx4i16_align64(ptr %pa) {
-  ; RV32-LABEL: name: vload_nx4i16_align64
+define <vscale x 4 x i16>  @vload_nx4i16_align8(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx4i16_align8
   ; RV32: bb.1 (%ir-block.0):
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa, align 64)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa)
   ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s16>)
   ; RV32-NEXT:   PseudoRET implicit $v8
   ;
-  ; RV64-LABEL: name: vload_nx4i16_align64
+  ; RV64-LABEL: name: vload_nx4i16_align8
   ; RV64: bb.1 (%ir-block.0):
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa, align 64)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa)
   ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s16>)
   ; RV64-NEXT:   PseudoRET implicit $v8
-  %va = load <vscale x 4 x i16>, ptr %pa, align 64
+  %va = load <vscale x 4 x i16>, ptr %pa, align 8
   ret <vscale x 4 x i16> %va
 }
 
-define <vscale x 4 x i16>  @vload_nx4i16_align128(ptr %pa) {
-  ; RV32-LABEL: name: vload_nx4i16_align128
+define <vscale x 4 x i16>  @vload_nx4i16_align16(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx4i16_align16
   ; RV32: bb.1 (%ir-block.0):
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa, align 128)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa, align 16)
   ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s16>)
   ; RV32-NEXT:   PseudoRET implicit $v8
   ;
-  ; RV64-LABEL: name: vload_nx4i16_align128
+  ; RV64-LABEL: name: vload_nx4i16_align16
   ; RV64: bb.1 (%ir-block.0):
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa, align 128)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 4 x s16>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 4 x s16>) from %ir.pa, align 16)
   ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 4 x s16>)
   ; RV64-NEXT:   PseudoRET implicit $v8
-  %va = load <vscale x 4 x i16>, ptr %pa, align 128
+  %va = load <vscale x 4 x i16>, ptr %pa, align 16
   ret <vscale x 4 x i16> %va
 }
 
+define <vscale x 2 x i32>  @vload_nx2i32_align2(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx2i32_align2
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa, align 2)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s32>)
+  ; RV32-NEXT:   PseudoRET implicit $v8
+  ;
+  ; RV64-LABEL: name: vload_nx2i32_align2
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa, align 2)
+  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s32>)
+  ; RV64-NEXT:   PseudoRET implicit $v8
+  %va = load <vscale x 2 x i32>, ptr %pa, align 2
+  ret <vscale x 2 x i32> %va
+}
+
 define <vscale x 2 x i32>  @vload_nx2i32_align4(ptr %pa) {
   ; RV32-LABEL: name: vload_nx2i32_align4
   ; RV32: bb.1 (%ir-block.0):
@@ -662,157 +727,156 @@ define <vscale x 2 x i32>  @vload_nx2i32_align4(ptr %pa) {
   ret <vscale x 2 x i32> %va
 }
 
-define <vscale x 2 x i32>  @vload_nx2i32_align32(ptr %pa) {
-  ; RV32-LABEL: name: vload_nx2i32_align32
+define <vscale x 2 x i32>  @vload_nx2i32_align8(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx2i32_align8
   ; RV32: bb.1 (%ir-block.0):
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa, align 32)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa)
   ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s32>)
   ; RV32-NEXT:   PseudoRET implicit $v8
   ;
-  ; RV64-LABEL: name: vload_nx2i32_align32
+  ; RV64-LABEL: name: vload_nx2i32_align8
   ; RV64: bb.1 (%ir-block.0):
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa, align 32)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa)
   ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s32>)
   ; RV64-NEXT:   PseudoRET implicit $v8
-  %va = load <vscale x 2 x i32>, ptr %pa, align 32
+  %va = load <vscale x 2 x i32>, ptr %pa, align 8
   ret <vscale x 2 x i32> %va
 }
 
-define <vscale x 2 x i32>  @vload_nx2i32_align64(ptr %pa) {
-  ; RV32-LABEL: name: vload_nx2i32_align64
+define <vscale x 2 x i32>  @vload_nx2i32_align16(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx2i32_align16
   ; RV32: bb.1 (%ir-block.0):
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa, align 64)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa, align 16)
   ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s32>)
   ; RV32-NEXT:   PseudoRET implicit $v8
   ;
-  ; RV64-LABEL: name: vload_nx2i32_align64
+  ; RV64-LABEL: name: vload_nx2i32_align16
   ; RV64: bb.1 (%ir-block.0):
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa, align 64)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa, align 16)
   ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s32>)
   ; RV64-NEXT:   PseudoRET implicit $v8
-  %va = load <vscale x 2 x i32>, ptr %pa, align 64
+  %va = load <vscale x 2 x i32>, ptr %pa, align 16
   ret <vscale x 2 x i32> %va
 }
 
-define <vscale x 2 x i32>  @vload_nx2i32_align128(ptr %pa) {
-  ; RV32-LABEL: name: vload_nx2i32_align128
+define <vscale x 2 x i32>  @vload_nx2i32_align256(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx2i32_align256
   ; RV32: bb.1 (%ir-block.0):
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa, align 128)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa, align 256)
   ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s32>)
   ; RV32-NEXT:   PseudoRET implicit $v8
   ;
-  ; RV64-LABEL: name: vload_nx2i32_align128
+  ; RV64-LABEL: name: vload_nx2i32_align256
   ; RV64: bb.1 (%ir-block.0):
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa, align 128)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s32>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s32>) from %ir.pa, align 256)
   ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x s32>)
   ; RV64-NEXT:   PseudoRET implicit $v8
-  %va = load <vscale x 2 x i32>, ptr %pa, align 128
+  %va = load <vscale x 2 x i32>, ptr %pa, align 256
   ret <vscale x 2 x i32> %va
 }
-
-define <vscale x 2 x i64>  @vload_nx2i64_align32(ptr %pa) {
-  ; RV32-LABEL: name: vload_nx2i64_align32
+define <vscale x 2 x i64>  @vload_nx2i64_align4(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx2i64_align4
   ; RV32: bb.1 (%ir-block.0):
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 32)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 4)
   ; RV32-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 2 x s64>)
   ; RV32-NEXT:   PseudoRET implicit $v8m2
   ;
-  ; RV64-LABEL: name: vload_nx2i64_align32
+  ; RV64-LABEL: name: vload_nx2i64_align4
   ; RV64: bb.1 (%ir-block.0):
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 32)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 4)
   ; RV64-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 2 x s64>)
   ; RV64-NEXT:   PseudoRET implicit $v8m2
-  %va = load <vscale x 2 x i64>, ptr %pa, align 32
+  %va = load <vscale x 2 x i64>, ptr %pa, align 4
   ret <vscale x 2 x i64> %va
 }
 
-define <vscale x 2 x i64>  @vload_nx2i64_align64(ptr %pa) {
-  ; RV32-LABEL: name: vload_nx2i64_align64
+define <vscale x 2 x i64>  @vload_nx2i64_align8(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx2i64_align8
   ; RV32: bb.1 (%ir-block.0):
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 64)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 8)
   ; RV32-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 2 x s64>)
   ; RV32-NEXT:   PseudoRET implicit $v8m2
   ;
-  ; RV64-LABEL: name: vload_nx2i64_align64
+  ; RV64-LABEL: name: vload_nx2i64_align8
   ; RV64: bb.1 (%ir-block.0):
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 64)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 8)
   ; RV64-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 2 x s64>)
   ; RV64-NEXT:   PseudoRET implicit $v8m2
-  %va = load <vscale x 2 x i64>, ptr %pa, align 64
+  %va = load <vscale x 2 x i64>, ptr %pa, align 8
   ret <vscale x 2 x i64> %va
 }
 
-define <vscale x 2 x i64>  @vload_nx2i64_align128(ptr %pa) {
-  ; RV32-LABEL: name: vload_nx2i64_align128
+define <vscale x 2 x i64>  @vload_nx2i64_align16(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx2i64_align16
   ; RV32: bb.1 (%ir-block.0):
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 128)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa)
   ; RV32-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 2 x s64>)
   ; RV32-NEXT:   PseudoRET implicit $v8m2
   ;
-  ; RV64-LABEL: name: vload_nx2i64_align128
+  ; RV64-LABEL: name: vload_nx2i64_align16
   ; RV64: bb.1 (%ir-block.0):
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 128)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa)
   ; RV64-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 2 x s64>)
   ; RV64-NEXT:   PseudoRET implicit $v8m2
-  %va = load <vscale x 2 x i64>, ptr %pa, align 128
+  %va = load <vscale x 2 x i64>, ptr %pa, align 16
   ret <vscale x 2 x i64> %va
 }
 
-define <vscale x 2 x i64>  @vload_nx2i64_align256(ptr %pa) {
-  ; RV32-LABEL: name: vload_nx2i64_align256
+define <vscale x 2 x i64>  @vload_nx2i64_align32(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx2i64_align32
   ; RV32: bb.1 (%ir-block.0):
   ; RV32-NEXT:   liveins: $x10
   ; RV32-NEXT: {{  $}}
   ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 256)
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 32)
   ; RV32-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 2 x s64>)
   ; RV32-NEXT:   PseudoRET implicit $v8m2
   ;
-  ; RV64-LABEL: name: vload_nx2i64_align256
+  ; RV64-LABEL: name: vload_nx2i64_align32
   ; RV64: bb.1 (%ir-block.0):
   ; RV64-NEXT:   liveins: $x10
   ; RV64-NEXT: {{  $}}
   ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
-  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 256)
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x s64>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x s64>) from %ir.pa, align 32)
   ; RV64-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 2 x s64>)
   ; RV64-NEXT:   PseudoRET implicit $v8m2
-  %va = load <vscale x 2 x i64>, ptr %pa, align 256
+  %va = load <vscale x 2 x i64>, ptr %pa, align 32
   ret <vscale x 2 x i64> %va
 }
 

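A note on the updated CHECK lines above: the MMO only prints an explicit `, align N` when the IR alignment differs from the known-minimum store size of the vector type, which is why e.g. `vload_nx4i16_align8` and `vload_nx16i8_align16` carry no `align` annotation while the under- and over-aligned cases do. Below is a minimal sketch of that rule; the helper name and signature are made up for illustration, the real logic lives in `MachineMemOperand::print` (touched later in this series):

```cpp
#include <cstdint>

#include "llvm/Support/Alignment.h"
#include "llvm/Support/TypeSize.h"
#include "llvm/Support/raw_ostream.h"

// Sketch only: mirrors the condition in MachineMemOperand::print that the
// CHECK lines above rely on; printAlignIfInteresting is a made-up name.
static void printAlignIfInteresting(llvm::raw_ostream &OS, llvm::Align A,
                                    llvm::TypeSize StoreSize) {
  // <vscale x 4 x s16> has a known-minimum store size of 8 bytes, so
  // "align 8" is omitted while "align 1", "align 2", and "align 16" print.
  uint64_t MinSize = StoreSize.getKnownMinValue();
  if (MinSize > 0 && A.value() != MinSize)
    OS << ", align " << A.value();
}
```
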
>From 2330c3e3a6735ddef98ec9976bf63ac773e1d982 Mon Sep 17 00:00:00 2001
From: jiahanxie353 <jx353 at cornell.edu>
Date: Tue, 5 Mar 2024 17:34:22 -0500
Subject: [PATCH 14/16] load a vector of pointers

---
 .../RISCV/GlobalISel/irtranslator/vec-ld.ll   | 66 +++++++++++++++++++
 1 file changed, 66 insertions(+)

diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll
index 5650e7422bec5d..31b3c3fe3c5be8 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ld.ll
@@ -880,3 +880,69 @@ define <vscale x 2 x i64>  @vload_nx2i64_align32(ptr %pa) {
   ret <vscale x 2 x i64> %va
 }
 
+define <vscale x 1 x ptr>  @vload_nx1ptr(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx1ptr
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x p0>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x p0>) from %ir.pa)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x p0>)
+  ; RV32-NEXT:   PseudoRET implicit $v8
+  ;
+  ; RV64-LABEL: name: vload_nx1ptr
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 1 x p0>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 1 x p0>) from %ir.pa)
+  ; RV64-NEXT:   $v8 = COPY [[LOAD]](<vscale x 1 x p0>)
+  ; RV64-NEXT:   PseudoRET implicit $v8
+  %va = load <vscale x 1 x ptr>, ptr %pa
+  ret <vscale x 1 x ptr> %va
+}
+
+define <vscale x 2 x ptr>  @vload_nx2ptr(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx2ptr
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x p0>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x p0>) from %ir.pa)
+  ; RV32-NEXT:   $v8 = COPY [[LOAD]](<vscale x 2 x p0>)
+  ; RV32-NEXT:   PseudoRET implicit $v8
+  ;
+  ; RV64-LABEL: name: vload_nx2ptr
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 2 x p0>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 2 x p0>) from %ir.pa)
+  ; RV64-NEXT:   $v8m2 = COPY [[LOAD]](<vscale x 2 x p0>)
+  ; RV64-NEXT:   PseudoRET implicit $v8m2
+  %va = load <vscale x 2 x ptr>, ptr %pa
+  ret <vscale x 2 x ptr> %va
+}
+
+define <vscale x 8 x ptr>  @vload_nx8ptr(ptr %pa) {
+  ; RV32-LABEL: name: vload_nx8ptr
+  ; RV32: bb.1 (%ir-block.0):
+  ; RV32-NEXT:   liveins: $x10
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV32-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 8 x p0>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 8 x p0>) from %ir.pa)
+  ; RV32-NEXT:   $v8m4 = COPY [[LOAD]](<vscale x 8 x p0>)
+  ; RV32-NEXT:   PseudoRET implicit $v8m4
+  ;
+  ; RV64-LABEL: name: vload_nx8ptr
+  ; RV64: bb.1 (%ir-block.0):
+  ; RV64-NEXT:   liveins: $x10
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(p0) = COPY $x10
+  ; RV64-NEXT:   [[LOAD:%[0-9]+]]:_(<vscale x 8 x p0>) = G_LOAD [[COPY]](p0) :: (load (<vscale x 8 x p0>) from %ir.pa)
+  ; RV64-NEXT:   $v8m8 = COPY [[LOAD]](<vscale x 8 x p0>)
+  ; RV64-NEXT:   PseudoRET implicit $v8m8
+  %va = load <vscale x 8 x ptr>, ptr %pa
+  ret <vscale x 8 x ptr> %va
+}
+

>From 2e9cbd71feeaaf7d6189cffb3727d2e407312bdf Mon Sep 17 00:00:00 2001
From: jiahanxie353 <jx353 at cornell.edu>
Date: Wed, 6 Mar 2024 16:32:20 -0500
Subject: [PATCH 15/16] move the TODO that getSize should return TypeSize
 into the header file

---
 llvm/include/llvm/CodeGen/MachineMemOperand.h | 1 +
 llvm/lib/CodeGen/MachineOperand.cpp           | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/llvm/include/llvm/CodeGen/MachineMemOperand.h b/llvm/include/llvm/CodeGen/MachineMemOperand.h
index 12c18aaea5b26c..11576ec4456818 100644
--- a/llvm/include/llvm/CodeGen/MachineMemOperand.h
+++ b/llvm/include/llvm/CodeGen/MachineMemOperand.h
@@ -235,6 +235,7 @@ class MachineMemOperand {
   LLT getMemoryType() const { return MemoryType; }
 
   /// Return the size in bytes of the memory reference.
+  // TODO: should return TypeSize
   uint64_t getSize() const {
     return MemoryType.isValid() ? MemoryType.getSizeInBytes() : ~UINT64_C(0);
   }
diff --git a/llvm/lib/CodeGen/MachineOperand.cpp b/llvm/lib/CodeGen/MachineOperand.cpp
index 4f248351ca35ca..6d03ee5180111e 100644
--- a/llvm/lib/CodeGen/MachineOperand.cpp
+++ b/llvm/lib/CodeGen/MachineOperand.cpp
@@ -1243,7 +1243,7 @@ void MachineMemOperand::print(raw_ostream &OS, ModuleSlotTracker &MST,
   uint64_t MinSize = MemoryType.isValid()
                          ? getType().getSizeInBytes().getKnownMinValue()
                          : ~UINT64_C(0);
-  // TODO: getSize should return TypeSize
+
   if (MinSize > 0 && getAlign() != MinSize)
     OS << ", align " << getAlign().value();
   if (getAlign() != getBaseAlign())

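For context on the TODO above: `LLT::getSizeInBytes()` already returns a `TypeSize`, and for a scalable memory type only its known-minimum value is a compile-time constant, so squeezing it into `getSize()`'s `uint64_t` drops the `vscale` factor. A small standalone sketch of the distinction (not part of the patch; the function is only an illustration):

```cpp
#include "llvm/Support/TypeSize.h"
using namespace llvm;

// Standalone illustration of fixed vs. scalable TypeSize values.
void typeSizeExample() {
  TypeSize Fixed = TypeSize::getFixed(16);       // exactly 16 bytes
  TypeSize Scalable = TypeSize::getScalable(16); // 16 * vscale bytes
  (void)Fixed.getFixedValue();       // 16
  (void)Scalable.getKnownMinValue(); // 16, the guaranteed minimum
  // Scalable.getFixedValue() would assert: the total is not a compile-time
  // constant, which is exactly what a plain uint64_t getSize() cannot say.
}
```
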
>From 8c854705f1ed9f3b4b91c190e54ac5e226ff53a9 Mon Sep 17 00:00:00 2001
From: jiahanxie353 <jx353 at cornell.edu>
Date: Tue, 12 Mar 2024 13:48:34 -0400
Subject: [PATCH 16/16] take vectors into account when calculating the
 memory size

---
 llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp b/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp
index bd3ff7265d51f9..756d181acf8fa1 100644
--- a/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp
@@ -3330,8 +3330,12 @@ LegalizerHelper::LegalizeResult LegalizerHelper::lowerLoad(GAnyLoad &LoadMI) {
   LLT MemTy = MMO.getMemoryType();
   MachineFunction &MF = MIRBuilder.getMF();
 
-  unsigned MemSizeInBits = MemTy.getSizeInBits();
-  unsigned MemStoreSizeInBits = 8 * MemTy.getSizeInBytes();
+  unsigned MemSizeInBits = MemTy.isVector()
+                               ? MemTy.getSizeInBits().getKnownMinValue()
+                               : MemTy.getSizeInBits();
+  unsigned MemStoreSizeInBits =
+      MemTy.isVector() ? 8 * MemTy.getSizeInBytes().getKnownMinValue()
+                       : 8 * MemTy.getSizeInBytes();
 
   if (MemSizeInBits != MemStoreSizeInBits) {
     if (MemTy.isVector())
@@ -3406,6 +3410,7 @@ LegalizerHelper::LegalizeResult LegalizerHelper::lowerLoad(GAnyLoad &LoadMI) {
     if (TLI.allowsMemoryAccess(Ctx, MIRBuilder.getDataLayout(), MemTy, MMO))
       return UnableToLegalize;
 
+    llvm::errs() << "able to legalize\n";
     SmallSplitSize = LargeSplitSize = MemSizeInBits / 2;
   }
 
@@ -3637,7 +3642,7 @@ void LegalizerHelper::changeOpcode(MachineInstr &MI, unsigned NewOpcode) {
 LegalizerHelper::LegalizeResult
 LegalizerHelper::lower(MachineInstr &MI, unsigned TypeIdx, LLT LowerHintTy) {
   using namespace TargetOpcode;
-
+  llvm::errs() << "Helper\n";
   switch(MI.getOpcode()) {
   default:
     return UnableToLegalize;


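To summarize the last hunk: `MemTy.getSizeInBits()` and `getSizeInBytes()` are `TypeSize`s, so for vector (including scalable) memory types `lowerLoad` now compares their known-minimum values; when the two disagree the load is not byte-sized and the vector case still bails out. A hedged sketch of that bookkeeping, with an illustrative helper name rather than the actual LLVM API:

```cpp
#include "llvm/Support/TypeSize.h"

// Sketch of the size computation added above; isByteSizedMem is made up.
static bool isByteSizedMem(llvm::TypeSize SizeInBits,
                           llvm::TypeSize SizeInBytes, bool IsVector) {
  unsigned MemSizeInBits = IsVector ? SizeInBits.getKnownMinValue()
                                    : SizeInBits.getFixedValue();
  unsigned MemStoreSizeInBits =
      IsVector ? 8 * SizeInBytes.getKnownMinValue()
               : 8 * SizeInBytes.getFixedValue();
  // When these differ (e.g. an s1 or s7 load), lowerLoad widens the access;
  // for vector memory types it returns UnableToLegalize, as in the hunk
  // above.
  return MemSizeInBits == MemStoreSizeInBits;
}
```
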

More information about the llvm-commits mailing list