[llvm] [RISCV][GISel] Instruction select for vector G_ADD, G_SUB (PR #74114)
Craig Topper via llvm-commits
llvm-commits at lists.llvm.org
Mon Dec 11 09:29:57 PST 2023
================
@@ -0,0 +1,511 @@
+# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py
+# RUN: llc -mtriple=riscv32 -mattr=+m,+v -run-pass=regbankselect \
+# RUN:   -disable-gisel-legality-check -simplify-mir -verify-machineinstrs %s \
+# RUN:   -o - | FileCheck -check-prefix=RV32I %s
+
+---
+name: add_nxv1s8
+legalized: true
+tracksRegLiveness: true
+body: |
+  bb.0.entry:
+    liveins: $v10, $v11
+
+    ; RV32I-LABEL: name: add_nxv1s8
+    ; RV32I: liveins: $v10, $v11
+    ; RV32I-NEXT: {{  $}}
+    ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 1 x s8>) = COPY $v10
+    ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 1 x s8>) = COPY $v11
+    ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 1 x s8>) = G_ADD [[COPY]], [[COPY1]]
+    ; RV32I-NEXT: $v10 = COPY [[ADD]](<vscale x 1 x s8>)
+    ; RV32I-NEXT: PseudoRET implicit $v10
+    %0:_(<vscale x 1 x s8>) = COPY $v10
+    %1:_(<vscale x 1 x s8>) = COPY $v11
+    %2:_(<vscale x 1 x s8>) = G_ADD %0, %1
+    $v10 = COPY %2(<vscale x 1 x s8>)
+    PseudoRET implicit $v10
+
+...
+---
+name: sub_nxv2s8
+legalized: true
+tracksRegLiveness: true
+body: |
+  bb.0.entry:
+    liveins: $v10, $v11
+
+    ; RV32I-LABEL: name: sub_nxv2s8
+    ; RV32I: liveins: $v10, $v11
+    ; RV32I-NEXT: {{  $}}
+    ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 2 x s8>) = COPY $v10
+    ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 2 x s8>) = COPY $v11
+    ; RV32I-NEXT: [[SUB:%[0-9]+]]:vrb(<vscale x 2 x s8>) = G_SUB [[COPY]], [[COPY1]]
+    ; RV32I-NEXT: $v10 = COPY [[SUB]](<vscale x 2 x s8>)
+    ; RV32I-NEXT: PseudoRET implicit $v10
+    %0:_(<vscale x 2 x s8>) = COPY $v10
+    %1:_(<vscale x 2 x s8>) = COPY $v11
+    %2:_(<vscale x 2 x s8>) = G_SUB %0, %1
+    $v10 = COPY %2(<vscale x 2 x s8>)
+    PseudoRET implicit $v10
+
+...
+---
+name: add_nxv4s8
+legalized: true
+tracksRegLiveness: true
+body: |
+  bb.0.entry:
+    liveins: $v10, $v11
+
+    ; RV32I-LABEL: name: add_nxv4s8
+    ; RV32I: liveins: $v10, $v11
+    ; RV32I-NEXT: {{  $}}
+    ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 4 x s8>) = COPY $v10
+    ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 4 x s8>) = COPY $v11
+    ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 4 x s8>) = G_ADD [[COPY]], [[COPY1]]
+    ; RV32I-NEXT: $v10 = COPY [[ADD]](<vscale x 4 x s8>)
+    ; RV32I-NEXT: PseudoRET implicit $v10
+    %0:_(<vscale x 4 x s8>) = COPY $v10
+    %1:_(<vscale x 4 x s8>) = COPY $v11
+    %2:_(<vscale x 4 x s8>) = G_ADD %0, %1
+    $v10 = COPY %2(<vscale x 4 x s8>)
+    PseudoRET implicit $v10
+
+...
+---
+name: sub_nxv8s8
+legalized: true
+tracksRegLiveness: true
+body: |
+  bb.0.entry:
+    liveins: $v10, $v11
+
+    ; RV32I-LABEL: name: sub_nxv8s8
+    ; RV32I: liveins: $v10, $v11
+    ; RV32I-NEXT: {{  $}}
+    ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 8 x s8>) = COPY $v10
+    ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 8 x s8>) = COPY $v11
+    ; RV32I-NEXT: [[SUB:%[0-9]+]]:vrb(<vscale x 8 x s8>) = G_SUB [[COPY]], [[COPY1]]
+    ; RV32I-NEXT: $v10 = COPY [[SUB]](<vscale x 8 x s8>)
+    ; RV32I-NEXT: PseudoRET implicit $v10
+    %0:_(<vscale x 8 x s8>) = COPY $v10
+    %1:_(<vscale x 8 x s8>) = COPY $v11
+    %2:_(<vscale x 8 x s8>) = G_SUB %0, %1
+    $v10 = COPY %2(<vscale x 8 x s8>)
+    PseudoRET implicit $v10
+
+...
+---
+name: add_nxv16s8
+legalized: true
+tracksRegLiveness: true
+body: |
+  bb.0.entry:
+    liveins: $v10, $v11
+
+    ; RV32I-LABEL: name: add_nxv16s8
+    ; RV32I: liveins: $v10, $v11
+    ; RV32I-NEXT: {{  $}}
+    ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 16 x s8>) = COPY $v10
+    ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 16 x s8>) = COPY $v11
+    ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 16 x s8>) = G_ADD [[COPY]], [[COPY1]]
+    ; RV32I-NEXT: $v10 = COPY [[ADD]](<vscale x 16 x s8>)
+    ; RV32I-NEXT: PseudoRET implicit $v10
+    %0:_(<vscale x 16 x s8>) = COPY $v10
+    %1:_(<vscale x 16 x s8>) = COPY $v11
+    %2:_(<vscale x 16 x s8>) = G_ADD %0, %1
+    $v10 = COPY %2(<vscale x 16 x s8>)
+    PseudoRET implicit $v10
+
+...
+---
+name: sub_nxv32s8
+legalized: true
+tracksRegLiveness: true
+body: |
+  bb.0.entry:
+    liveins: $v10, $v11
+
+    ; RV32I-LABEL: name: sub_nxv32s8
+    ; RV32I: liveins: $v10, $v11
+    ; RV32I-NEXT: {{  $}}
+    ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 32 x s8>) = COPY $v10
+    ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 32 x s8>) = COPY $v11
+    ; RV32I-NEXT: [[SUB:%[0-9]+]]:vrb(<vscale x 32 x s8>) = G_SUB [[COPY]], [[COPY1]]
+    ; RV32I-NEXT: $v10 = COPY [[SUB]](<vscale x 32 x s8>)
+    ; RV32I-NEXT: PseudoRET implicit $v10
+    %0:_(<vscale x 32 x s8>) = COPY $v10
+    %1:_(<vscale x 32 x s8>) = COPY $v11
+    %2:_(<vscale x 32 x s8>) = G_SUB %0, %1
+    $v10 = COPY %2(<vscale x 32 x s8>)
+    PseudoRET implicit $v10
+
+...
+---
+name: add_nxv64s8
+legalized: true
+tracksRegLiveness: true
+body: |
+  bb.0.entry:
+    liveins: $v10, $v11
+
+    ; RV32I-LABEL: name: add_nxv64s8
+    ; RV32I: liveins: $v10, $v11
+    ; RV32I-NEXT: {{  $}}
+    ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 64 x s8>) = COPY $v10
+    ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 64 x s8>) = COPY $v11
+    ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 64 x s8>) = G_ADD [[COPY]], [[COPY1]]
+    ; RV32I-NEXT: $v10 = COPY [[ADD]](<vscale x 64 x s8>)
+    ; RV32I-NEXT: PseudoRET implicit $v10
+    %0:_(<vscale x 64 x s8>) = COPY $v10
+    %1:_(<vscale x 64 x s8>) = COPY $v11
+    %2:_(<vscale x 64 x s8>) = G_ADD %0, %1
+    $v10 = COPY %2(<vscale x 64 x s8>)
+    PseudoRET implicit $v10
+
+...
+---
+name: add_nxv1s16
+legalized: true
+tracksRegLiveness: true
+body: |
+  bb.0.entry:
+    liveins: $v10, $v11
+
+    ; RV32I-LABEL: name: add_nxv1s16
+    ; RV32I: liveins: $v10, $v11
+    ; RV32I-NEXT: {{  $}}
+    ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 1 x s16>) = COPY $v10
+    ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 1 x s16>) = COPY $v11
+    ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 1 x s16>) = G_ADD [[COPY]], [[COPY1]]
+    ; RV32I-NEXT: $v10 = COPY [[ADD]](<vscale x 1 x s16>)
+    ; RV32I-NEXT: PseudoRET implicit $v10
+    %0:_(<vscale x 1 x s16>) = COPY $v10
+    %1:_(<vscale x 1 x s16>) = COPY $v11
+    %2:_(<vscale x 1 x s16>) = G_ADD %0, %1
+    $v10 = COPY %2(<vscale x 1 x s16>)
+    PseudoRET implicit $v10
+
+...
+---
+name: sub_nxv2s16
+legalized: true
+tracksRegLiveness: true
+body: |
+  bb.0.entry:
+    liveins: $v10, $v11
+
+    ; RV32I-LABEL: name: sub_nxv2s16
+    ; RV32I: liveins: $v10, $v11
+    ; RV32I-NEXT: {{  $}}
+    ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 2 x s16>) = COPY $v10
+    ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 2 x s16>) = COPY $v11
+    ; RV32I-NEXT: [[SUB:%[0-9]+]]:vrb(<vscale x 2 x s16>) = G_SUB [[COPY]], [[COPY1]]
+    ; RV32I-NEXT: $v10 = COPY [[SUB]](<vscale x 2 x s16>)
+    ; RV32I-NEXT: PseudoRET implicit $v10
+    %0:_(<vscale x 2 x s16>) = COPY $v10
+    %1:_(<vscale x 2 x s16>) = COPY $v11
+    %2:_(<vscale x 2 x s16>) = G_SUB %0, %1
+    $v10 = COPY %2(<vscale x 2 x s16>)
+    PseudoRET implicit $v10
+
+...
+---
+name: add_nxv4s16
+legalized: true
+tracksRegLiveness: true
+body: |
+  bb.0.entry:
+    liveins: $v10, $v11
+
+    ; RV32I-LABEL: name: add_nxv4s16
+    ; RV32I: liveins: $v10, $v11
+    ; RV32I-NEXT: {{  $}}
+    ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 4 x s16>) = COPY $v10
+    ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 4 x s16>) = COPY $v11
+    ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 4 x s16>) = G_ADD [[COPY]], [[COPY1]]
+    ; RV32I-NEXT: $v10 = COPY [[ADD]](<vscale x 4 x s16>)
+    ; RV32I-NEXT: PseudoRET implicit $v10
+    %0:_(<vscale x 4 x s16>) = COPY $v10
+    %1:_(<vscale x 4 x s16>) = COPY $v11
+    %2:_(<vscale x 4 x s16>) = G_ADD %0, %1
+    $v10 = COPY %2(<vscale x 4 x s16>)
+    PseudoRET implicit $v10
+
+...
+---
+name: sub_nxv8s16
+legalized: true
+tracksRegLiveness: true
+body: |
+  bb.0.entry:
+    liveins: $v10, $v11
+
+    ; RV32I-LABEL: name: sub_nxv8s16
+    ; RV32I: liveins: $v10, $v11
+    ; RV32I-NEXT: {{  $}}
+    ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 8 x s16>) = COPY $v10
+    ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 8 x s16>) = COPY $v11
+    ; RV32I-NEXT: [[SUB:%[0-9]+]]:vrb(<vscale x 8 x s16>) = G_SUB [[COPY]], [[COPY1]]
+    ; RV32I-NEXT: $v10 = COPY [[SUB]](<vscale x 8 x s16>)
+    ; RV32I-NEXT: PseudoRET implicit $v10
+    %0:_(<vscale x 8 x s16>) = COPY $v10
+    %1:_(<vscale x 8 x s16>) = COPY $v11
+    %2:_(<vscale x 8 x s16>) = G_SUB %0, %1
+    $v10 = COPY %2(<vscale x 8 x s16>)
+    PseudoRET implicit $v10
+
+...
+---
+name: add_nxv16s16
+legalized: true
+tracksRegLiveness: true
+body: |
+  bb.0.entry:
+    liveins: $v10, $v11
+
+    ; RV32I-LABEL: name: add_nxv16s16
+    ; RV32I: liveins: $v10, $v11
+    ; RV32I-NEXT: {{  $}}
+    ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 16 x s16>) = COPY $v10
+    ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 16 x s16>) = COPY $v11
+    ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 16 x s16>) = G_ADD [[COPY]], [[COPY1]]
+    ; RV32I-NEXT: $v10 = COPY [[ADD]](<vscale x 16 x s16>)
+    ; RV32I-NEXT: PseudoRET implicit $v10
+    %0:_(<vscale x 16 x s16>) = COPY $v10
+    %1:_(<vscale x 16 x s16>) = COPY $v11
+    %2:_(<vscale x 16 x s16>) = G_ADD %0, %1
+    $v10 = COPY %2(<vscale x 16 x s16>)
+    PseudoRET implicit $v10
+
+...
+---
+name: sub_nxv32s16
+legalized: true
+tracksRegLiveness: true
+body: |
+  bb.0.entry:
+    liveins: $v10, $v11
+
+    ; RV32I-LABEL: name: sub_nxv32s16
+    ; RV32I: liveins: $v10, $v11
+    ; RV32I-NEXT: {{  $}}
+    ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 32 x s16>) = COPY $v10
+    ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 32 x s16>) = COPY $v11
+    ; RV32I-NEXT: [[SUB:%[0-9]+]]:vrb(<vscale x 32 x s16>) = G_SUB [[COPY]], [[COPY1]]
+    ; RV32I-NEXT: $v10 = COPY [[SUB]](<vscale x 32 x s16>)
+    ; RV32I-NEXT: PseudoRET implicit $v10
+    %0:_(<vscale x 32 x s16>) = COPY $v10
+    %1:_(<vscale x 32 x s16>) = COPY $v11
+    %2:_(<vscale x 32 x s16>) = G_SUB %0, %1
+    $v10 = COPY %2(<vscale x 32 x s16>)
+    PseudoRET implicit $v10
+
+...
+---
+name: add_nxv1s32
+legalized: true
+tracksRegLiveness: true
+body: |
+  bb.0.entry:
+    liveins: $v10, $v11
+
+    ; RV32I-LABEL: name: add_nxv1s32
+    ; RV32I: liveins: $v10, $v11
+    ; RV32I-NEXT: {{  $}}
+    ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 1 x s32>) = COPY $v10
+    ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 1 x s32>) = COPY $v11
+    ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 1 x s32>) = G_ADD [[COPY]], [[COPY1]]
+    ; RV32I-NEXT: $v10 = COPY [[ADD]](<vscale x 1 x s32>)
+    ; RV32I-NEXT: PseudoRET implicit $v10
+    %0:_(<vscale x 1 x s32>) = COPY $v10
+    %1:_(<vscale x 1 x s32>) = COPY $v11
+    %2:_(<vscale x 1 x s32>) = G_ADD %0, %1
+    $v10 = COPY %2(<vscale x 1 x s32>)
+    PseudoRET implicit $v10
+
+...
+---
+name: sub_nxv2s32
+legalized: true
+tracksRegLiveness: true
+body: |
+  bb.0.entry:
+    liveins: $v10, $v11
+
+    ; RV32I-LABEL: name: sub_nxv2s32
+    ; RV32I: liveins: $v10, $v11
+    ; RV32I-NEXT: {{  $}}
+    ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 2 x s32>) = COPY $v10
+    ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 2 x s32>) = COPY $v11
+    ; RV32I-NEXT: [[SUB:%[0-9]+]]:vrb(<vscale x 2 x s32>) = G_SUB [[COPY]], [[COPY1]]
+    ; RV32I-NEXT: $v10 = COPY [[SUB]](<vscale x 2 x s32>)
+    ; RV32I-NEXT: PseudoRET implicit $v10
+    %0:_(<vscale x 2 x s32>) = COPY $v10
+    %1:_(<vscale x 2 x s32>) = COPY $v11
+    %2:_(<vscale x 2 x s32>) = G_SUB %0, %1
+    $v10 = COPY %2(<vscale x 2 x s32>)
+    PseudoRET implicit $v10
+
+...
+---
+name: add_nxv4s32
+legalized: true
+tracksRegLiveness: true
+body: |
+  bb.0.entry:
+    liveins: $v10, $v11
+
+    ; RV32I-LABEL: name: add_nxv4s32
+    ; RV32I: liveins: $v10, $v11
+    ; RV32I-NEXT: {{  $}}
+    ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 4 x s32>) = COPY $v10
+    ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 4 x s32>) = COPY $v11
+    ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 4 x s32>) = G_ADD [[COPY]], [[COPY1]]
+    ; RV32I-NEXT: $v10 = COPY [[ADD]](<vscale x 4 x s32>)
+    ; RV32I-NEXT: PseudoRET implicit $v10
+    %0:_(<vscale x 4 x s32>) = COPY $v10
+    %1:_(<vscale x 4 x s32>) = COPY $v11
+    %2:_(<vscale x 4 x s32>) = G_ADD %0, %1
+    $v10 = COPY %2(<vscale x 4 x s32>)
+    PseudoRET implicit $v10
+
+...
+---
+name: sub_nxv8s32
+legalized: true
+tracksRegLiveness: true
+body: |
+  bb.0.entry:
+    liveins: $v10, $v11
+
+    ; RV32I-LABEL: name: sub_nxv8s32
+    ; RV32I: liveins: $v10, $v11
+    ; RV32I-NEXT: {{  $}}
+    ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 8 x s32>) = COPY $v10
+    ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 8 x s32>) = COPY $v11
+    ; RV32I-NEXT: [[SUB:%[0-9]+]]:vrb(<vscale x 8 x s32>) = G_SUB [[COPY]], [[COPY1]]
+    ; RV32I-NEXT: $v10 = COPY [[SUB]](<vscale x 8 x s32>)
+    ; RV32I-NEXT: PseudoRET implicit $v10
+    %0:_(<vscale x 8 x s32>) = COPY $v10
+    %1:_(<vscale x 8 x s32>) = COPY $v11
+    %2:_(<vscale x 8 x s32>) = G_SUB %0, %1
+    $v10 = COPY %2(<vscale x 8 x s32>)
+    PseudoRET implicit $v10
+
+...
+---
+name: add_nxv16s32
+legalized: true
+tracksRegLiveness: true
+body: |
+  bb.0.entry:
+    liveins: $v10, $v11
+
+    ; RV32I-LABEL: name: add_nxv16s32
+    ; RV32I: liveins: $v10, $v11
+    ; RV32I-NEXT: {{  $}}
+    ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 16 x s32>) = COPY $v10
+    ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 16 x s32>) = COPY $v11
+    ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 16 x s32>) = G_ADD [[COPY]], [[COPY1]]
+    ; RV32I-NEXT: $v10 = COPY [[ADD]](<vscale x 16 x s32>)
+    ; RV32I-NEXT: PseudoRET implicit $v10
+    %0:_(<vscale x 16 x s32>) = COPY $v10
+    %1:_(<vscale x 16 x s32>) = COPY $v11
+    %2:_(<vscale x 16 x s32>) = G_ADD %0, %1
+    $v10 = COPY %2(<vscale x 16 x s32>)
+    PseudoRET implicit $v10
+
+...
+---
+name: add_nxv1s64
+legalized: true
+tracksRegLiveness: true
+body: |
+  bb.0.entry:
+    liveins: $v10, $v11
+
+    ; RV32I-LABEL: name: add_nxv1s64
+    ; RV32I: liveins: $v10, $v11
+    ; RV32I-NEXT: {{  $}}
+    ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 1 x s64>) = COPY $v10
+    ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 1 x s64>) = COPY $v11
+    ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 1 x s64>) = G_ADD [[COPY]], [[COPY1]]
+    ; RV32I-NEXT: $v10 = COPY [[ADD]](<vscale x 1 x s64>)
+    ; RV32I-NEXT: PseudoRET implicit $v10
+    %0:_(<vscale x 1 x s64>) = COPY $v10
+    %1:_(<vscale x 1 x s64>) = COPY $v11
+    %2:_(<vscale x 1 x s64>) = G_ADD %0, %1
+    $v10 = COPY %2(<vscale x 1 x s64>)
+    PseudoRET implicit $v10
+
+...
+---
+name: sub_nxv2s64
+legalized: true
+tracksRegLiveness: true
+body: |
+  bb.0.entry:
+    liveins: $v10, $v11
+
+    ; RV32I-LABEL: name: sub_nxv2s64
+    ; RV32I: liveins: $v10, $v11
+    ; RV32I-NEXT: {{  $}}
+    ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 2 x s64>) = COPY $v10
+    ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 2 x s64>) = COPY $v11
+    ; RV32I-NEXT: [[SUB:%[0-9]+]]:vrb(<vscale x 2 x s64>) = G_SUB [[COPY]], [[COPY1]]
+    ; RV32I-NEXT: $v10 = COPY [[SUB]](<vscale x 2 x s64>)
+    ; RV32I-NEXT: PseudoRET implicit $v10
+    %0:_(<vscale x 2 x s64>) = COPY $v10
+    %1:_(<vscale x 2 x s64>) = COPY $v11
+    %2:_(<vscale x 2 x s64>) = G_SUB %0, %1
+    $v10 = COPY %2(<vscale x 2 x s64>)
+    PseudoRET implicit $v10
+
+...
+---
+name: add_nxv4s64
+legalized: true
+tracksRegLiveness: true
+body: |
+  bb.0.entry:
+    liveins: $v10, $v11
+
+    ; RV32I-LABEL: name: add_nxv4s64
+    ; RV32I: liveins: $v10, $v11
+    ; RV32I-NEXT: {{  $}}
+    ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 4 x s64>) = COPY $v10
+    ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 4 x s64>) = COPY $v11
+    ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 4 x s64>) = G_ADD [[COPY]], [[COPY1]]
+    ; RV32I-NEXT: $v10 = COPY [[ADD]](<vscale x 4 x s64>)
+    ; RV32I-NEXT: PseudoRET implicit $v10
+    %0:_(<vscale x 4 x s64>) = COPY $v10
+    %1:_(<vscale x 4 x s64>) = COPY $v11
+    %2:_(<vscale x 4 x s64>) = G_ADD %0, %1
+    $v10 = COPY %2(<vscale x 4 x s64>)
+    PseudoRET implicit $v10
+
+...
+---
+name: sub_nxv8s64
+legalized: true
+tracksRegLiveness: true
+body: |
+  bb.0.entry:
+    liveins: $v10, $v11
----------------
topperc wrote:
> Be careful which vector registers you are passing as liveins. M8 operations will be using `v10, v11, v12, v13, v14, v15, v16, v17` as a single register grouping. That means you should probably use the next available register for the next livein.
>
> I am calling out this specific LMUL 8 case in this test, but make sure this holds for all of your test cases at their respective LMULs. If you are not sure, it can be helpful to see which registers the IR translator chooses for a test that lowers to this MIR test case.
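For illustration, an LMUL 8 case needs its two sources in separate, properly aligned register groups. A minimal sketch of what that could look like (the `$v8m8`/`$v16m8` choices here are an assumption, not necessarily what the calling convention picks; double-check against the IR translator output):

```
name: add_nxv64s8
legalized: true
tracksRegLiveness: true
body: |
  bb.0.entry:
    ; v8m8 covers v8-v15 and v16m8 covers v16-v23, so the two
    ; liveins occupy non-overlapping LMUL 8 register groups.
    liveins: $v8m8, $v16m8

    %0:_(<vscale x 64 x s8>) = COPY $v8m8
    %1:_(<vscale x 64 x s8>) = COPY $v16m8
    %2:_(<vscale x 64 x s8>) = G_ADD %0, %1
    $v8m8 = COPY %2(<vscale x 64 x s8>)
    PseudoRET implicit $v8m8
```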
Why hand-construct the test at all? Write the IR test and use `-stop-before` to get the MIR.
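For example, something like this (a sketch; `regbankselect` is the same pass name this test already uses with `-run-pass`):

```llvm
; add.ll
define <vscale x 1 x i8> @add_nxv1i8(<vscale x 1 x i8> %a, <vscale x 1 x i8> %b) {
  %c = add <vscale x 1 x i8> %a, %b
  ret <vscale x 1 x i8> %c
}
```

```
llc -mtriple=riscv32 -mattr=+m,+v -global-isel -stop-before=regbankselect add.ll -o add.mir
```

The dumped MIR is already legalized and uses whatever argument registers the IR translator assigned, so the liveins come out right by construction.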
https://github.com/llvm/llvm-project/pull/74114