[llvm] [RISCV][GISEL] Legalize G_BITCAST for scalable vectors (PR #85970)
Michael Maitland via llvm-commits
llvm-commits at lists.llvm.org
Wed Mar 20 10:17:26 PDT 2024
https://github.com/michaelmaitland created https://github.com/llvm/llvm-project/pull/85970
SelectionDAG marks ISD::BITCAST between scalable vector types as legal, and ISelDAGToDAG deletes them.
We mark G_BITCAST between scalable vectors as legal in GISel. A future patch will handle what to do with them after the legalizer (likely either drop them in an isel-preprocess step or convert them to COPYs).
G_BITCAST is needed for the legalization of G_INSERT and G_EXTRACT; this patch is a precommit for that work.
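For context, the heart of the change is one new legality rule plus the isScalableVector predicate it relies on. The snippet below is only an excerpt of what the diff adds (see RISCVLegalizerInfo.cpp and LegalityPredicates.cpp in the patch), not additional code:

  // RISCVLegalizerInfo.cpp: G_BITCAST is legal iff both the destination
  // (type index 0) and the source (type index 1) are scalable vectors.
  getActionDefinitionsBuilder(G_BITCAST).legalIf(
      all(isScalableVector(0), isScalableVector(1)));

In the autogenerated MIR tests below, a G_BITCAST of a G_IMPLICIT_DEF folds into a G_IMPLICIT_DEF of the destination type, so no G_BITCAST remains in the checked output.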
From d0ee13fc39bced506fe08cf9627722ac2f750823 Mon Sep 17 00:00:00 2001
From: Michael Maitland <michaeltmaitland at gmail.com>
Date: Wed, 20 Mar 2024 08:41:31 -0700
Subject: [PATCH] [RISCV][GISEL] Legalize G_BITCAST for scalable vectors
SelectionDAG marks ISD::BITCAST as legal between scalable vector types
and ISelDAGToDAG deletes them.
We mark G_BITCAST between scalable vectors as legal in GISel. A future
patch will handle what to do with them after the legalizer (likely either drop
them in an isel-preprocess or convert them to COPYs).
---
.../llvm/CodeGen/GlobalISel/LegalizerInfo.h | 2 +
.../CodeGen/GlobalISel/LegalityPredicates.cpp | 6 +
.../Target/RISCV/GISel/RISCVLegalizerInfo.cpp | 3 +
.../legalizer/rvv/legalize-bitcast.mir | 761 ++++++++++++++++++
4 files changed, 772 insertions(+)
create mode 100644 llvm/test/CodeGen/RISCV/GlobalISel/legalizer/rvv/legalize-bitcast.mir
diff --git a/llvm/include/llvm/CodeGen/GlobalISel/LegalizerInfo.h b/llvm/include/llvm/CodeGen/GlobalISel/LegalizerInfo.h
index 6afaea3f3fc5c6..5b7e7a5a28c0a4 100644
--- a/llvm/include/llvm/CodeGen/GlobalISel/LegalizerInfo.h
+++ b/llvm/include/llvm/CodeGen/GlobalISel/LegalizerInfo.h
@@ -282,6 +282,8 @@ LegalityPredicate typePairAndMemDescInSet(
LegalityPredicate isScalar(unsigned TypeIdx);
/// True iff the specified type index is a vector.
LegalityPredicate isVector(unsigned TypeIdx);
+/// True iff the specified type index is a scalable vector.
+LegalityPredicate isScalableVector(unsigned TypeIdx);
/// True iff the specified type index is a pointer (with any address space).
LegalityPredicate isPointer(unsigned TypeIdx);
/// True iff the specified type index is a pointer with the specified address
diff --git a/llvm/lib/CodeGen/GlobalISel/LegalityPredicates.cpp b/llvm/lib/CodeGen/GlobalISel/LegalityPredicates.cpp
index 2c77ed8b060088..41c1494e12538e 100644
--- a/llvm/lib/CodeGen/GlobalISel/LegalityPredicates.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/LegalityPredicates.cpp
@@ -76,6 +76,12 @@ LegalityPredicate LegalityPredicates::isVector(unsigned TypeIdx) {
};
}
+LegalityPredicate LegalityPredicates::isScalableVector(unsigned TypeIdx) {
+ return [=](const LegalityQuery &Query) {
+ return Query.Types[TypeIdx].isScalableVector();
+ };
+}
+
LegalityPredicate LegalityPredicates::isPointer(unsigned TypeIdx) {
return [=](const LegalityQuery &Query) {
return Query.Types[TypeIdx].isPointer();
diff --git a/llvm/lib/Target/RISCV/GISel/RISCVLegalizerInfo.cpp b/llvm/lib/Target/RISCV/GISel/RISCVLegalizerInfo.cpp
index 64ae4e94a8c929..e5c2eda8e84ad2 100644
--- a/llvm/lib/Target/RISCV/GISel/RISCVLegalizerInfo.cpp
+++ b/llvm/lib/Target/RISCV/GISel/RISCVLegalizerInfo.cpp
@@ -155,6 +155,9 @@ RISCVLegalizerInfo::RISCVLegalizerInfo(const RISCVSubtarget &ST)
getActionDefinitionsBuilder(G_BITREVERSE).maxScalar(0, sXLen).lower();
+ getActionDefinitionsBuilder(G_BITCAST).legalIf(
+ all(isScalableVector(0), isScalableVector(1)));
+
auto &BSWAPActions = getActionDefinitionsBuilder(G_BSWAP);
if (ST.hasStdExtZbb() || ST.hasStdExtZbkb())
BSWAPActions.legalFor({sXLen}).clampScalar(0, sXLen, sXLen);
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/legalizer/rvv/legalize-bitcast.mir b/llvm/test/CodeGen/RISCV/GlobalISel/legalizer/rvv/legalize-bitcast.mir
new file mode 100644
index 00000000000000..3309614006fcb1
--- /dev/null
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/legalizer/rvv/legalize-bitcast.mir
@@ -0,0 +1,761 @@
+# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py
+# RUN: llc -mtriple=riscv32 -mattr=+v -run-pass=legalizer %s -o - | FileCheck %s
+# RUN: llc -mtriple=riscv64 -mattr=+v -run-pass=legalizer %s -o - | FileCheck %s
+
+# Extend from s1 element vectors
+---
+name: bitcastnxv1i8_nxv1i1
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv1i8_nxv1i1
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 1 x s8>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8 = COPY [[DEF]](<vscale x 1 x s8>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
+ %1:_(<vscale x 1 x s1>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 1 x s8>) = G_BITCAST %1(<vscale x 1 x s1>)
+ $v8 = COPY %0(<vscale x 1 x s8>)
+ PseudoRET implicit $v8
+...
+---
+name: bitcastnxv1i16_nxv1i1
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv1i16_nxv1i1
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 1 x s16>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8 = COPY [[DEF]](<vscale x 1 x s16>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
+ %1:_(<vscale x 1 x s1>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 1 x s16>) = G_BITCAST %1(<vscale x 1 x s1>)
+ $v8 = COPY %0(<vscale x 1 x s16>)
+ PseudoRET implicit $v8
+...
+---
+name: bitcastnxv1i32_nxv1i1
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv1i32_nxv1i1
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 1 x s32>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8 = COPY [[DEF]](<vscale x 1 x s32>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
+ %1:_(<vscale x 1 x s1>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 1 x s32>) = G_BITCAST %1(<vscale x 1 x s1>)
+ $v8 = COPY %0(<vscale x 1 x s32>)
+ PseudoRET implicit $v8
+...
+---
+name: bitcastnxv1i64_nxv1i1
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv1i64_nxv1i1
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 1 x s64>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8 = COPY [[DEF]](<vscale x 1 x s64>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
+ %1:_(<vscale x 1 x s1>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 1 x s64>) = G_BITCAST %1(<vscale x 1 x s1>)
+ $v8 = COPY %0(<vscale x 1 x s64>)
+ PseudoRET implicit $v8
+...
+---
+name: bitcastnxv2i8_nxv2i1
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv2i8_nxv2i1
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 2 x s8>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8 = COPY [[DEF]](<vscale x 2 x s8>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
+ %1:_(<vscale x 2 x s1>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 2 x s8>) = G_BITCAST %1(<vscale x 2 x s1>)
+ $v8 = COPY %0(<vscale x 2 x s8>)
+ PseudoRET implicit $v8
+...
+---
+name: bitcastnxv2i16_nxv2i1
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv2i16_nxv2i1
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 2 x s16>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8 = COPY [[DEF]](<vscale x 2 x s16>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
+ %1:_(<vscale x 2 x s1>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 2 x s16>) = G_BITCAST %1(<vscale x 2 x s1>)
+ $v8 = COPY %0(<vscale x 2 x s16>)
+ PseudoRET implicit $v8
+...
+---
+name: bitcastnxv2i32_nxv2i1
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv2i32_nxv2i1
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 2 x s32>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8 = COPY [[DEF]](<vscale x 2 x s32>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
+ %1:_(<vscale x 2 x s1>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 2 x s32>) = G_BITCAST %1(<vscale x 2 x s1>)
+ $v8 = COPY %0(<vscale x 2 x s32>)
+ PseudoRET implicit $v8
+...
+---
+name: bitcastnxv2i64_nxv2i1
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv2i64_nxv2i1
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 2 x s64>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m2 = COPY [[DEF]](<vscale x 2 x s64>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m2
+ %1:_(<vscale x 2 x s1>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 2 x s64>) = G_BITCAST %1(<vscale x 2 x s1>)
+ $v8m2 = COPY %0(<vscale x 2 x s64>)
+ PseudoRET implicit $v8m2
+...
+---
+name: bitcastnxv4i8_nxv4i1
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv4i8_nxv4i1
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 4 x s8>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8 = COPY [[DEF]](<vscale x 4 x s8>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
+ %1:_(<vscale x 4 x s1>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 4 x s8>) = G_BITCAST %1(<vscale x 4 x s1>)
+ $v8 = COPY %0(<vscale x 4 x s8>)
+ PseudoRET implicit $v8
+...
+---
+name: bitcastnxv4i16_nxv4i1
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv4i16_nxv4i1
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 4 x s16>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8 = COPY [[DEF]](<vscale x 4 x s16>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
+ %1:_(<vscale x 4 x s1>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 4 x s16>) = G_BITCAST %1(<vscale x 4 x s1>)
+ $v8 = COPY %0(<vscale x 4 x s16>)
+ PseudoRET implicit $v8
+...
+---
+name: bitcastnxv4i32_nxv4i1
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv4i32_nxv4i1
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 4 x s32>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m2 = COPY [[DEF]](<vscale x 4 x s32>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m2
+ %1:_(<vscale x 4 x s1>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 4 x s32>) = G_BITCAST %1(<vscale x 4 x s1>)
+ $v8m2 = COPY %0(<vscale x 4 x s32>)
+ PseudoRET implicit $v8m2
+...
+---
+name: bitcastnxv4i64_nxv4i1
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv4i64_nxv4i1
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 4 x s64>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m4 = COPY [[DEF]](<vscale x 4 x s64>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m4
+ %1:_(<vscale x 4 x s1>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 4 x s64>) = G_BITCAST %1(<vscale x 4 x s1>)
+ $v8m4 = COPY %0(<vscale x 4 x s64>)
+ PseudoRET implicit $v8m4
+...
+---
+name: bitcastnxv8i8_nxv8i1
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv8i8_nxv8i1
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 8 x s8>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8 = COPY [[DEF]](<vscale x 8 x s8>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
+ %1:_(<vscale x 8 x s1>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 8 x s8>) = G_BITCAST %1(<vscale x 8 x s1>)
+ $v8 = COPY %0(<vscale x 8 x s8>)
+ PseudoRET implicit $v8
+...
+---
+name: bitcastnxv8i16_nxv8i1
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv8i16_nxv8i1
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 8 x s16>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m2 = COPY [[DEF]](<vscale x 8 x s16>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m2
+ %1:_(<vscale x 8 x s1>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 8 x s16>) = G_BITCAST %1(<vscale x 8 x s1>)
+ $v8m2 = COPY %0(<vscale x 8 x s16>)
+ PseudoRET implicit $v8m2
+...
+---
+name: bitcastnxv8i32_nxv8i1
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv8i32_nxv8i1
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 8 x s32>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m4 = COPY [[DEF]](<vscale x 8 x s32>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m4
+ %1:_(<vscale x 8 x s1>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 8 x s32>) = G_BITCAST %1(<vscale x 8 x s1>)
+ $v8m4 = COPY %0(<vscale x 8 x s32>)
+ PseudoRET implicit $v8m4
+...
+---
+name: bitcastnxv8i64_nxv8i1
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv8i64_nxv8i1
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 8 x s64>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m8 = COPY [[DEF]](<vscale x 8 x s64>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m8
+ %1:_(<vscale x 8 x s1>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 8 x s64>) = G_BITCAST %1(<vscale x 8 x s1>)
+ $v8m8 = COPY %0(<vscale x 8 x s64>)
+ PseudoRET implicit $v8m8
+...
+---
+name: bitcastnxv16i8_nxv16i1
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv16i8_nxv16i1
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 16 x s8>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m2 = COPY [[DEF]](<vscale x 16 x s8>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m2
+ %1:_(<vscale x 16 x s1>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 16 x s8>) = G_BITCAST %1(<vscale x 16 x s1>)
+ $v8m2 = COPY %0(<vscale x 16 x s8>)
+ PseudoRET implicit $v8m2
+...
+---
+name: bitcastnxv16i16_nxv16i1
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv16i16_nxv16i1
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 16 x s16>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m4 = COPY [[DEF]](<vscale x 16 x s16>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m4
+ %1:_(<vscale x 16 x s1>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 16 x s16>) = G_BITCAST %1(<vscale x 16 x s1>)
+ $v8m4 = COPY %0(<vscale x 16 x s16>)
+ PseudoRET implicit $v8m4
+...
+---
+name: bitcastnxv16i32_nxv16i1
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv16i32_nxv16i1
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 16 x s32>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m8 = COPY [[DEF]](<vscale x 16 x s32>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m8
+ %1:_(<vscale x 16 x s1>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 16 x s32>) = G_BITCAST %1(<vscale x 16 x s1>)
+ $v8m8 = COPY %0(<vscale x 16 x s32>)
+ PseudoRET implicit $v8m8
+...
+---
+name: bitcastnxv32i8_nxv32i1
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv32i8_nxv32i1
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 32 x s8>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m4 = COPY [[DEF]](<vscale x 32 x s8>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m4
+ %1:_(<vscale x 32 x s1>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 32 x s8>) = G_BITCAST %1(<vscale x 32 x s1>)
+ $v8m4 = COPY %0(<vscale x 32 x s8>)
+ PseudoRET implicit $v8m4
+...
+---
+name: bitcastnxv32i16_nxv32i1
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv32i16_nxv32i1
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 32 x s16>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m8 = COPY [[DEF]](<vscale x 32 x s16>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m8
+ %1:_(<vscale x 32 x s1>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 32 x s16>) = G_BITCAST %1(<vscale x 32 x s1>)
+ $v8m8 = COPY %0(<vscale x 32 x s16>)
+ PseudoRET implicit $v8m8
+...
+---
+name: bitcastnxv64i8_nxv64i1
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv64i8_nxv64i1
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 64 x s8>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m8 = COPY [[DEF]](<vscale x 64 x s8>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m8
+ %1:_(<vscale x 64 x s1>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 64 x s8>) = G_BITCAST %1(<vscale x 64 x s1>)
+ $v8m8 = COPY %0(<vscale x 64 x s8>)
+ PseudoRET implicit $v8m8
+...
+
+# Extend from s8 element vectors
+---
+name: bitcastnxv1i16_nxv1i8
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv1i16_nxv1i8
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 1 x s16>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8 = COPY [[DEF]](<vscale x 1 x s16>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
+ %1:_(<vscale x 1 x s8>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 1 x s16>) = G_BITCAST %1(<vscale x 1 x s8>)
+ $v8 = COPY %0(<vscale x 1 x s16>)
+ PseudoRET implicit $v8
+...
+---
+name: bitcastnxv1i32_nxv1i8
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv1i32_nxv1i8
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 1 x s32>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8 = COPY [[DEF]](<vscale x 1 x s32>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
+ %1:_(<vscale x 1 x s8>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 1 x s32>) = G_BITCAST %1(<vscale x 1 x s8>)
+ $v8 = COPY %0(<vscale x 1 x s32>)
+ PseudoRET implicit $v8
+...
+---
+name: bitcastnxv1i64_nxv1i8
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv1i64_nxv1i8
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 1 x s64>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8 = COPY [[DEF]](<vscale x 1 x s64>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
+ %1:_(<vscale x 1 x s8>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 1 x s64>) = G_BITCAST %1(<vscale x 1 x s8>)
+ $v8 = COPY %0(<vscale x 1 x s64>)
+ PseudoRET implicit $v8
+...
+---
+name: bitcastnxv2i16_nxv2i8
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv2i16_nxv2i8
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 2 x s16>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8 = COPY [[DEF]](<vscale x 2 x s16>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
+ %1:_(<vscale x 2 x s8>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 2 x s16>) = G_BITCAST %1(<vscale x 2 x s8>)
+ $v8 = COPY %0(<vscale x 2 x s16>)
+ PseudoRET implicit $v8
+...
+---
+name: bitcastnxv2i32_nxv2i8
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv2i32_nxv2i8
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 2 x s32>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8 = COPY [[DEF]](<vscale x 2 x s32>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
+ %1:_(<vscale x 2 x s8>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 2 x s32>) = G_BITCAST %1(<vscale x 2 x s8>)
+ $v8 = COPY %0(<vscale x 2 x s32>)
+ PseudoRET implicit $v8
+...
+---
+name: bitcastnxv2i64_nxv2i8
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv2i64_nxv2i8
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 2 x s64>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m2 = COPY [[DEF]](<vscale x 2 x s64>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m2
+ %1:_(<vscale x 2 x s8>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 2 x s64>) = G_BITCAST %1(<vscale x 2 x s8>)
+ $v8m2 = COPY %0(<vscale x 2 x s64>)
+ PseudoRET implicit $v8m2
+...
+---
+name: bitcastnxv4i16_nxv4i8
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv4i16_nxv4i8
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 4 x s16>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8 = COPY [[DEF]](<vscale x 4 x s16>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
+ %1:_(<vscale x 4 x s8>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 4 x s16>) = G_BITCAST %1(<vscale x 4 x s8>)
+ $v8 = COPY %0(<vscale x 4 x s16>)
+ PseudoRET implicit $v8
+...
+---
+name: bitcastnxv4i32_nxv4i8
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv4i32_nxv4i8
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 4 x s32>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m2 = COPY [[DEF]](<vscale x 4 x s32>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m2
+ %1:_(<vscale x 4 x s8>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 4 x s32>) = G_BITCAST %1(<vscale x 4 x s8>)
+ $v8m2 = COPY %0(<vscale x 4 x s32>)
+ PseudoRET implicit $v8m2
+...
+---
+name: bitcastnxv4i64_nxv4i8
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv4i64_nxv4i8
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 4 x s64>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m4 = COPY [[DEF]](<vscale x 4 x s64>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m4
+ %1:_(<vscale x 4 x s8>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 4 x s64>) = G_BITCAST %1(<vscale x 4 x s8>)
+ $v8m4 = COPY %0(<vscale x 4 x s64>)
+ PseudoRET implicit $v8m4
+...
+---
+name: bitcastnxv8i16_nxv8i8
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv8i16_nxv8i8
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 8 x s16>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m2 = COPY [[DEF]](<vscale x 8 x s16>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m2
+ %1:_(<vscale x 8 x s8>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 8 x s16>) = G_BITCAST %1(<vscale x 8 x s8>)
+ $v8m2 = COPY %0(<vscale x 8 x s16>)
+ PseudoRET implicit $v8m2
+...
+---
+name: bitcastnxv8i32_nxv8i8
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv8i32_nxv8i8
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 8 x s32>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m4 = COPY [[DEF]](<vscale x 8 x s32>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m4
+ %1:_(<vscale x 8 x s8>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 8 x s32>) = G_BITCAST %1(<vscale x 8 x s8>)
+ $v8m4 = COPY %0(<vscale x 8 x s32>)
+ PseudoRET implicit $v8m4
+...
+---
+name: bitcastnxv8i64_nxv8i8
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv8i64_nxv8i8
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 8 x s64>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m8 = COPY [[DEF]](<vscale x 8 x s64>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m8
+ %1:_(<vscale x 8 x s8>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 8 x s64>) = G_BITCAST %1(<vscale x 8 x s8>)
+ $v8m8 = COPY %0(<vscale x 8 x s64>)
+ PseudoRET implicit $v8m8
+...
+---
+name: bitcastnxv16i16_nxv16i8
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv16i16_nxv16i8
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 16 x s16>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m4 = COPY [[DEF]](<vscale x 16 x s16>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m4
+ %1:_(<vscale x 16 x s8>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 16 x s16>) = G_BITCAST %1(<vscale x 16 x s8>)
+ $v8m4 = COPY %0(<vscale x 16 x s16>)
+ PseudoRET implicit $v8m4
+...
+---
+name: bitcastnxv16i32_nxv16i8
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv16i32_nxv16i8
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 16 x s32>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m8 = COPY [[DEF]](<vscale x 16 x s32>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m8
+ %1:_(<vscale x 16 x s8>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 16 x s32>) = G_BITCAST %1(<vscale x 16 x s8>)
+ $v8m8 = COPY %0(<vscale x 16 x s32>)
+ PseudoRET implicit $v8m8
+...
+---
+name: bitcastnxv32i16_nxv32i8
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv32i16_nxv32i8
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 32 x s16>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m8 = COPY [[DEF]](<vscale x 32 x s16>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m8
+ %1:_(<vscale x 32 x s8>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 32 x s16>) = G_BITCAST %1(<vscale x 32 x s8>)
+ $v8m8 = COPY %0(<vscale x 32 x s16>)
+ PseudoRET implicit $v8m8
+...
+
+# Extend from s16 element vectors
+---
+name: bitcastnxv1i32_nxv1i16
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv1i32_nxv1i16
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 1 x s32>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8 = COPY [[DEF]](<vscale x 1 x s32>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
+ %1:_(<vscale x 1 x s16>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 1 x s32>) = G_BITCAST %1(<vscale x 1 x s16>)
+ $v8 = COPY %0(<vscale x 1 x s32>)
+ PseudoRET implicit $v8
+...
+---
+name: bitcastnxv1i64_nxv1i16
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv1i64_nxv1i16
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 1 x s64>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8 = COPY [[DEF]](<vscale x 1 x s64>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
+ %1:_(<vscale x 1 x s16>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 1 x s64>) = G_BITCAST %1(<vscale x 1 x s16>)
+ $v8 = COPY %0(<vscale x 1 x s64>)
+ PseudoRET implicit $v8
+...
+---
+name: bitcastnxv2i32_nxv2i16
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv2i32_nxv2i16
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 2 x s32>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8 = COPY [[DEF]](<vscale x 2 x s32>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
+ %1:_(<vscale x 2 x s16>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 2 x s32>) = G_BITCAST %1(<vscale x 2 x s16>)
+ $v8 = COPY %0(<vscale x 2 x s32>)
+ PseudoRET implicit $v8
+...
+---
+name: bitcastnxv2i64_nxv2i16
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv2i64_nxv2i16
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 2 x s64>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m2 = COPY [[DEF]](<vscale x 2 x s64>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m2
+ %1:_(<vscale x 2 x s16>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 2 x s64>) = G_BITCAST %1(<vscale x 2 x s16>)
+ $v8m2 = COPY %0(<vscale x 2 x s64>)
+ PseudoRET implicit $v8m2
+...
+---
+name: bitcastnxv4i32_nxv4i16
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv4i32_nxv4i16
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 4 x s32>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m2 = COPY [[DEF]](<vscale x 4 x s32>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m2
+ %1:_(<vscale x 4 x s16>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 4 x s32>) = G_BITCAST %1(<vscale x 4 x s16>)
+ $v8m2 = COPY %0(<vscale x 4 x s32>)
+ PseudoRET implicit $v8m2
+...
+---
+name: bitcastnxv4i64_nxv4i16
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv4i64_nxv4i16
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 4 x s64>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m4 = COPY [[DEF]](<vscale x 4 x s64>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m4
+ %1:_(<vscale x 4 x s16>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 4 x s64>) = G_BITCAST %1(<vscale x 4 x s16>)
+ $v8m4 = COPY %0(<vscale x 4 x s64>)
+ PseudoRET implicit $v8m4
+...
+---
+name: bitcastnxv8i32_nxv8i16
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv8i32_nxv8i16
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 8 x s32>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m4 = COPY [[DEF]](<vscale x 8 x s32>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m4
+ %1:_(<vscale x 8 x s16>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 8 x s32>) = G_BITCAST %1(<vscale x 8 x s16>)
+ $v8m4 = COPY %0(<vscale x 8 x s32>)
+ PseudoRET implicit $v8m4
+...
+---
+name: bitcastnxv8i64_nxv8i16
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv8i64_nxv8i16
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 8 x s64>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m8 = COPY [[DEF]](<vscale x 8 x s64>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m8
+ %1:_(<vscale x 8 x s16>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 8 x s64>) = G_BITCAST %1(<vscale x 8 x s16>)
+ $v8m8 = COPY %0(<vscale x 8 x s64>)
+ PseudoRET implicit $v8m8
+...
+---
+name: bitcastnxv16i32_nxv16i16
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv16i32_nxv16i16
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 16 x s32>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m8 = COPY [[DEF]](<vscale x 16 x s32>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m8
+ %1:_(<vscale x 16 x s16>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 16 x s32>) = G_BITCAST %1(<vscale x 16 x s16>)
+ $v8m8 = COPY %0(<vscale x 16 x s32>)
+ PseudoRET implicit $v8m8
+...
+
+# Extend from s32 element vectors
+---
+name: bitcastnxv1i64_nxv1i32
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv1i64_nxv1i32
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 1 x s64>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8 = COPY [[DEF]](<vscale x 1 x s64>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
+ %1:_(<vscale x 1 x s32>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 1 x s64>) = G_BITCAST %1(<vscale x 1 x s32>)
+ $v8 = COPY %0(<vscale x 1 x s64>)
+ PseudoRET implicit $v8
+...
+---
+name: bitcastnxv2i64_nxv2i32
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv2i64_nxv2i32
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 2 x s64>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m2 = COPY [[DEF]](<vscale x 2 x s64>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m2
+ %1:_(<vscale x 2 x s32>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 2 x s64>) = G_BITCAST %1(<vscale x 2 x s32>)
+ $v8m2 = COPY %0(<vscale x 2 x s64>)
+ PseudoRET implicit $v8m2
+...
+---
+name: bitcastnxv4i64_nxv4i32
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv4i64_nxv4i32
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 4 x s64>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m4 = COPY [[DEF]](<vscale x 4 x s64>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m4
+ %1:_(<vscale x 4 x s32>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 4 x s64>) = G_BITCAST %1(<vscale x 4 x s32>)
+ $v8m4 = COPY %0(<vscale x 4 x s64>)
+ PseudoRET implicit $v8m4
+...
+---
+name: bitcastnxv8i64_nxv8i32
+legalized: false
+tracksRegLiveness: true
+body: |
+ bb.0.entry:
+ ; CHECK-LABEL: name: bitcastnxv8i64_nxv8i32
+ ; CHECK: [[DEF:%[0-9]+]]:_(<vscale x 8 x s64>) = G_IMPLICIT_DEF
+ ; CHECK-NEXT: $v8m8 = COPY [[DEF]](<vscale x 8 x s64>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m8
+ %1:_(<vscale x 8 x s32>) = G_IMPLICIT_DEF
+ %0:_(<vscale x 8 x s64>) = G_BITCAST %1(<vscale x 8 x s32>)
+ $v8m8 = COPY %0(<vscale x 8 x s64>)
+ PseudoRET implicit $v8m8
+...