[llvm] r331601 - [globalisel] Update GlobalISel emitter to match new representation of extending loads
Daniel Sanders via llvm-commits
llvm-commits at lists.llvm.org
Sat May 5 13:53:24 PDT 2018
Author: dsanders
Date: Sat May 5 13:53:24 2018
New Revision: 331601
URL: http://llvm.org/viewvc/llvm-project?rev=331601&view=rev
Log:
[globalisel] Update GlobalISel emitter to match new representation of extending loads
Summary:
Previously, an extending load was represented as (G_*EXT (G_LOAD x)).
This had a few drawbacks:
* G_LOAD had to be legal for all sizes you could extend from, even if
registers didn't naturally hold those sizes.
* All sizes you could extend from had to be allocatable just in case the
extend went missing (e.g. by optimization).
* At minimum, G_*EXT and G_TRUNC had to be legal for these sizes. As we
improve optimization of extends and truncates, this legality requirement
would spread unless considerable care was taken w.r.t. when certain
combines were permitted.
* The SelectionDAG importer required some ugly and fragile pattern
rewriting to translate patterns into this style.
This patch changes the representation to:
* (G_[SZ]EXTLOAD x)
* (G_LOAD x) any-extends when MMO.getSize() * 8 < ResultTy.getSizeInBits()
which resolves these issues by allowing targets to work entirely in their
native register sizes, and by having a more direct translation from
SelectionDAG patterns.
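As an illustrative sketch (mirroring the legalize-sextload.mir changes further
down in this patch), a sign-extending load that was previously a G_LOAD/G_SEXT
pair becomes a single opcode:

```
; Before: (G_SEXT (G_LOAD x))
%0:_(p0) = COPY $x0
%1:_(s8) = G_LOAD %0 :: (load 1 from %ir.addr)
%2:_(s32) = G_SEXT %1(s8)

; After: (G_SEXTLOAD x)
%0:_(p0) = COPY $x0
%1:_(s32) = G_SEXTLOAD %0 :: (load 1 from %ir.addr)
```

Note that the memory size now comes from the MMO (load 1) while the result
type (s32) is a natively supported register size.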
Each extending load can be lowered by the legalizer into separate extends
and loads, however a target that supports s1 will need the any-extending
load to extend to at least s8 since LLVM does not represent memory accesses
smaller than 8 bits. The legalizer can widenScalar G_LOAD into an
any-extending load but sign/zero-extending loads need help from something
else like a combiner pass. A follow-up patch will add combiner helpers
for this.
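For example (a sketch based on the legalize-extload.mir change in this patch),
widening an extending load no longer needs a separate G_ANYEXT; the G_LOAD
itself any-extends because its MMO size is smaller than its result type:

```
; Any-extending G_LOAD after widenScalar: loads 1 byte, produces s32.
%0:_(p0) = COPY $x0
%1:_(s32) = G_LOAD %0 :: (load 1 from %ir.addr)
$w0 = COPY %1(s32)
```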
The new representation requires that the MMO correctly reflect the memory
access so this has been corrected in a couple tests. I've also moved the
extending loads to their own tests since they are (mostly) separate opcodes
now. Additionally, the rewrite appears to have invalidated two tests from
select-with-no-legality-check.mir since the matcher table no longer contains
loads that result in s1 values and those are no longer legal on AArch64.
Depends on D45540
Reviewers: ab, aditya_nandakumar, bogner, rtereshin, volkan, rovka, javed.absar
Reviewed By: rtereshin
Subscribers: javed.absar, llvm-commits, kristof.beyls
Differential Revision: https://reviews.llvm.org/D45541
Added:
llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-extload.mir
llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-sextload.mir
llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-zextload.mir
Modified:
llvm/trunk/include/llvm/CodeGen/GlobalISel/InstructionSelector.h
llvm/trunk/include/llvm/CodeGen/GlobalISel/InstructionSelectorImpl.h
llvm/trunk/include/llvm/Target/GlobalISel/SelectionDAGCompat.td
llvm/trunk/lib/Target/AArch64/AArch64InstructionSelector.cpp
llvm/trunk/lib/Target/AArch64/AArch64LegalizerInfo.cpp
llvm/trunk/test/CodeGen/AArch64/GlobalISel/legalize-extload.mir
llvm/trunk/test/CodeGen/AArch64/GlobalISel/legalize-sextload.mir
llvm/trunk/test/CodeGen/AArch64/GlobalISel/legalize-zextload.mir
llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-atomicrmw.mir
llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-cmpxchg.mir
llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-load.mir
llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-with-no-legality-check.mir
llvm/trunk/test/TableGen/GlobalISelEmitter.td
llvm/trunk/utils/TableGen/GlobalISelEmitter.cpp
Modified: llvm/trunk/include/llvm/CodeGen/GlobalISel/InstructionSelector.h
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/CodeGen/GlobalISel/InstructionSelector.h?rev=331601&r1=331600&r2=331601&view=diff
==============================================================================
--- llvm/trunk/include/llvm/CodeGen/GlobalISel/InstructionSelector.h (original)
+++ llvm/trunk/include/llvm/CodeGen/GlobalISel/InstructionSelector.h Sat May 5 13:53:24 2018
@@ -117,6 +117,19 @@ enum {
GIM_CheckAtomicOrdering,
GIM_CheckAtomicOrderingOrStrongerThan,
GIM_CheckAtomicOrderingWeakerThan,
+ /// Check the size of the memory access for the given machine memory operand.
+ /// - InsnID - Instruction ID
+ /// - MMOIdx - MMO index
+ /// - Size - The size in bytes of the memory access
+ GIM_CheckMemorySizeEqualTo,
+ /// Check the size of the memory access for the given machine memory operand
+ /// against the size of an operand.
+ /// - InsnID - Instruction ID
+ /// - MMOIdx - MMO index
+ /// - OpIdx - The operand index to compare the MMO against
+ GIM_CheckMemorySizeEqualToLLT,
+ GIM_CheckMemorySizeLessThanLLT,
+ GIM_CheckMemorySizeGreaterThanLLT,
/// Check the type for the specified operand
/// - InsnID - Instruction ID
Modified: llvm/trunk/include/llvm/CodeGen/GlobalISel/InstructionSelectorImpl.h
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/CodeGen/GlobalISel/InstructionSelectorImpl.h?rev=331601&r1=331600&r2=331601&view=diff
==============================================================================
--- llvm/trunk/include/llvm/CodeGen/GlobalISel/InstructionSelectorImpl.h (original)
+++ llvm/trunk/include/llvm/CodeGen/GlobalISel/InstructionSelectorImpl.h Sat May 5 13:53:24 2018
@@ -72,7 +72,8 @@ bool InstructionSelector::executeMatchTa
while (true) {
assert(CurrentIdx != ~0u && "Invalid MatchTable index");
- switch (MatchTable[CurrentIdx++]) {
+ int64_t MatcherOpcode = MatchTable[CurrentIdx++];
+ switch (MatcherOpcode) {
case GIM_Try: {
DEBUG_WITH_TYPE(TgtInstructionSelector::getName(),
dbgs() << CurrentIdx << ": Begin try-block\n");
@@ -284,6 +285,87 @@ bool InstructionSelector::executeMatchTa
return false;
break;
}
+ case GIM_CheckMemorySizeEqualTo: {
+ int64_t InsnID = MatchTable[CurrentIdx++];
+ int64_t MMOIdx = MatchTable[CurrentIdx++];
+ uint64_t Size = MatchTable[CurrentIdx++];
+
+ DEBUG_WITH_TYPE(TgtInstructionSelector::getName(),
+ dbgs() << CurrentIdx
+ << ": GIM_CheckMemorySizeEqual(MIs[" << InsnID
+ << "]->memoperands() + " << MMOIdx
+ << ", Size=" << Size << ")\n");
+ assert(State.MIs[InsnID] != nullptr && "Used insn before defined");
+
+ if (State.MIs[InsnID]->getNumMemOperands() <= MMOIdx) {
+ if (handleReject() == RejectAndGiveUp)
+ return false;
+ break;
+ }
+
+ MachineMemOperand *MMO = *(State.MIs[InsnID]->memoperands_begin() + MMOIdx);
+
+ DEBUG_WITH_TYPE(TgtInstructionSelector::getName(),
+ dbgs() << MMO->getSize() << " bytes vs " << Size
+ << " bytes\n");
+ if (MMO->getSize() != Size)
+ if (handleReject() == RejectAndGiveUp)
+ return false;
+
+ break;
+ }
+ case GIM_CheckMemorySizeEqualToLLT:
+ case GIM_CheckMemorySizeLessThanLLT:
+ case GIM_CheckMemorySizeGreaterThanLLT: {
+ int64_t InsnID = MatchTable[CurrentIdx++];
+ int64_t MMOIdx = MatchTable[CurrentIdx++];
+ int64_t OpIdx = MatchTable[CurrentIdx++];
+
+ DEBUG_WITH_TYPE(
+ TgtInstructionSelector::getName(),
+ dbgs() << CurrentIdx << ": GIM_CheckMemorySize"
+ << (MatcherOpcode == GIM_CheckMemorySizeEqualToLLT
+ ? "EqualTo"
+ : MatcherOpcode == GIM_CheckMemorySizeGreaterThanLLT
+ ? "GreaterThan"
+ : "LessThan")
+ << "LLT(MIs[" << InsnID << "]->memoperands() + " << MMOIdx
+ << ", OpIdx=" << OpIdx << ")\n");
+ assert(State.MIs[InsnID] != nullptr && "Used insn before defined");
+
+ MachineOperand &MO = State.MIs[InsnID]->getOperand(OpIdx);
+ if (!MO.isReg()) {
+ DEBUG_WITH_TYPE(TgtInstructionSelector::getName(),
+ dbgs() << CurrentIdx << ": Not a register\n");
+ if (handleReject() == RejectAndGiveUp)
+ return false;
+ break;
+ }
+
+ if (State.MIs[InsnID]->getNumMemOperands() <= MMOIdx) {
+ if (handleReject() == RejectAndGiveUp)
+ return false;
+ break;
+ }
+
+ MachineMemOperand *MMO = *(State.MIs[InsnID]->memoperands_begin() + MMOIdx);
+
+ unsigned Size = MRI.getType(MO.getReg()).getSizeInBits();
+ if (MatcherOpcode == GIM_CheckMemorySizeEqualToLLT &&
+ MMO->getSize() * 8 != Size) {
+ if (handleReject() == RejectAndGiveUp)
+ return false;
+ } else if (MatcherOpcode == GIM_CheckMemorySizeLessThanLLT &&
+ MMO->getSize() * 8 >= Size) {
+ if (handleReject() == RejectAndGiveUp)
+ return false;
+ } else if (MatcherOpcode == GIM_CheckMemorySizeGreaterThanLLT &&
+ MMO->getSize() * 8 <= Size)
+ if (handleReject() == RejectAndGiveUp)
+ return false;
+
+ break;
+ }
case GIM_CheckType: {
int64_t InsnID = MatchTable[CurrentIdx++];
int64_t OpIdx = MatchTable[CurrentIdx++];
Modified: llvm/trunk/include/llvm/Target/GlobalISel/SelectionDAGCompat.td
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/Target/GlobalISel/SelectionDAGCompat.td?rev=331601&r1=331600&r2=331601&view=diff
==============================================================================
--- llvm/trunk/include/llvm/Target/GlobalISel/SelectionDAGCompat.td (original)
+++ llvm/trunk/include/llvm/Target/GlobalISel/SelectionDAGCompat.td Sat May 5 13:53:24 2018
@@ -28,6 +28,13 @@ class GINodeEquiv<Instruction i, SDNode
// (ISD::LOAD, ISD::ATOMIC_LOAD, ISD::STORE, ISD::ATOMIC_STORE) but GlobalISel
// stores this information in the MachineMemoryOperand.
bit CheckMMOIsNonAtomic = 0;
+
+ // SelectionDAG has one node for all loads and uses predicates to
+ // differentiate them. GlobalISel on the other hand uses separate opcodes.
+ // When this is true, the resulting opcode is G_LOAD/G_SEXTLOAD/G_ZEXTLOAD
+ // depending on the predicates on the node.
+ Instruction IfSignExtend = ?;
+ Instruction IfZeroExtend = ?;
}
// These are defined in the same order as the G_* instructions.
@@ -80,11 +87,15 @@ def : GINodeEquiv<G_BSWAP, bswap>;
// Broadly speaking G_LOAD is equivalent to ISD::LOAD but there are some
// complications that tablegen must take care of. For example, Predicates such
// as isSignExtLoad require that this is not a perfect 1:1 mapping since a
-// sign-extending load is (G_SEXT (G_LOAD x)) in GlobalISel. Additionally,
+// sign-extending load is (G_SEXTLOAD x) in GlobalISel. Additionally,
// G_LOAD handles both atomic and non-atomic loads where as SelectionDAG had
// separate nodes for them. This GINodeEquiv maps the non-atomic loads to
// G_LOAD with a non-atomic MachineMemOperand.
-def : GINodeEquiv<G_LOAD, ld> { let CheckMMOIsNonAtomic = 1; }
+def : GINodeEquiv<G_LOAD, ld> {
+ let CheckMMOIsNonAtomic = 1;
+ let IfSignExtend = G_SEXTLOAD;
+ let IfZeroExtend = G_ZEXTLOAD;
+}
// Broadly speaking G_STORE is equivalent to ISD::STORE but there are some
// complications that tablegen must take care of. For example, predicates such
// as isTruncStore require that this is not a perfect 1:1 mapping since a
Modified: llvm/trunk/lib/Target/AArch64/AArch64InstructionSelector.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64InstructionSelector.cpp?rev=331601&r1=331600&r2=331601&view=diff
==============================================================================
--- llvm/trunk/lib/Target/AArch64/AArch64InstructionSelector.cpp (original)
+++ llvm/trunk/lib/Target/AArch64/AArch64InstructionSelector.cpp Sat May 5 13:53:24 2018
@@ -977,7 +977,6 @@ bool AArch64InstructionSelector::select(
case TargetOpcode::G_LOAD:
case TargetOpcode::G_STORE: {
- LLT MemTy = Ty;
LLT PtrTy = MRI.getType(I.getOperand(1).getReg());
if (PtrTy != LLT::pointer(0, 64)) {
@@ -991,6 +990,7 @@ bool AArch64InstructionSelector::select(
DEBUG(dbgs() << "Atomic load/store not supported yet\n");
return false;
}
+ unsigned MemSizeInBits = MemOp.getSize() * 8;
// FIXME: PR36018: Volatile loads in some cases are incorrectly selected by
// folding with an extend. Until we have a G_SEXTLOAD solution bail out if
@@ -1012,7 +1012,7 @@ bool AArch64InstructionSelector::select(
const RegisterBank &RB = *RBI.getRegBank(ValReg, MRI, TRI);
const unsigned NewOpc =
- selectLoadStoreUIOp(I.getOpcode(), RB.getID(), MemTy.getSizeInBits());
+ selectLoadStoreUIOp(I.getOpcode(), RB.getID(), MemSizeInBits);
if (NewOpc == I.getOpcode())
return false;
@@ -1025,7 +1025,7 @@ bool AArch64InstructionSelector::select(
if (PtrMI->getOpcode() == TargetOpcode::G_GEP) {
if (auto COff = getConstantVRegVal(PtrMI->getOperand(2).getReg(), MRI)) {
int64_t Imm = *COff;
- const unsigned Size = MemTy.getSizeInBits() / 8;
+ const unsigned Size = MemSizeInBits / 8;
const unsigned Scale = Log2_32(Size);
if ((Imm & (Size - 1)) == 0 && Imm >= 0 && Imm < (0x1000 << Scale)) {
unsigned Ptr2Reg = PtrMI->getOperand(1).getReg();
Modified: llvm/trunk/lib/Target/AArch64/AArch64LegalizerInfo.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/AArch64/AArch64LegalizerInfo.cpp?rev=331601&r1=331600&r2=331601&view=diff
==============================================================================
--- llvm/trunk/lib/Target/AArch64/AArch64LegalizerInfo.cpp (original)
+++ llvm/trunk/lib/Target/AArch64/AArch64LegalizerInfo.cpp Sat May 5 13:53:24 2018
@@ -136,20 +136,53 @@ AArch64LegalizerInfo::AArch64LegalizerIn
.widenScalarToNextPow2(0);
getActionDefinitionsBuilder({G_SEXTLOAD, G_ZEXTLOAD})
+ .legalForTypesWithMemSize({{s32, p0, 8},
+ {s32, p0, 16},
+ {s32, p0, 32},
+ {s64, p0, 64},
+ {p0, p0, 64},
+ {v2s32, p0, 64}})
+ .clampScalar(0, s32, s64)
+ .widenScalarToNextPow2(0)
+ // TODO: We could support sum-of-pow2's but the lowering code doesn't know
+ // how to do that yet.
+ .unsupportedIfMemSizeNotPow2()
+ // Lower anything left over into G_*EXT and G_LOAD
.lower();
- getActionDefinitionsBuilder({G_LOAD, G_STORE})
+ getActionDefinitionsBuilder(G_LOAD)
.legalForTypesWithMemSize({{s8, p0, 8},
{s16, p0, 16},
{s32, p0, 32},
{s64, p0, 64},
{p0, p0, 64},
{v2s32, p0, 64}})
+ // These extends are also legal
+ .legalForTypesWithMemSize({{s32, p0, 8},
+ {s32, p0, 16}})
+ .clampScalar(0, s8, s64)
+ .widenScalarToNextPow2(0)
// TODO: We could support sum-of-pow2's but the lowering code doesn't know
// how to do that yet.
.unsupportedIfMemSizeNotPow2()
+ // Lower any any-extending loads left into G_ANYEXT and G_LOAD
+ .lowerIf([=](const LegalityQuery &Query) {
+ return Query.Types[0].getSizeInBits() != Query.MMODescrs[0].Size * 8;
+ })
+ .clampNumElements(0, v2s32, v2s32);
+
+ getActionDefinitionsBuilder(G_STORE)
+ .legalForTypesWithMemSize({{s8, p0, 8},
+ {s16, p0, 16},
+ {s32, p0, 32},
+ {s64, p0, 64},
+ {p0, p0, 64},
+ {v2s32, p0, 64}})
.clampScalar(0, s8, s64)
.widenScalarToNextPow2(0)
+ // TODO: We could support sum-of-pow2's but the lowering code doesn't know
+ // how to do that yet.
+ .unsupportedIfMemSizeNotPow2()
.lowerIf([=](const LegalityQuery &Query) {
return Query.Types[0].isScalar() &&
Query.Types[0].getSizeInBits() != Query.MMODescrs[0].Size * 8;
Modified: llvm/trunk/test/CodeGen/AArch64/GlobalISel/legalize-extload.mir
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/GlobalISel/legalize-extload.mir?rev=331601&r1=331600&r2=331601&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/GlobalISel/legalize-extload.mir (original)
+++ llvm/trunk/test/CodeGen/AArch64/GlobalISel/legalize-extload.mir Sat May 5 13:53:24 2018
@@ -16,9 +16,8 @@ body: |
liveins: $x0
; CHECK-LABEL: name: test_extload
; CHECK: [[T0:%[0-9]+]]:_(p0) = COPY $x0
- ; CHECK: [[T1:%[0-9]+]]:_(s8) = G_LOAD [[T0]](p0) :: (load 1 from %ir.addr)
- ; CHECK: [[T2:%[0-9]+]]:_(s32) = G_ANYEXT [[T1]](s8)
- ; CHECK: $w0 = COPY [[T2]](s32)
+ ; CHECK: [[T1:%[0-9]+]]:_(s32) = G_LOAD [[T0]](p0) :: (load 1 from %ir.addr)
+ ; CHECK: $w0 = COPY [[T1]](s32)
%0:_(p0) = COPY $x0
%1:_(s32) = G_LOAD %0 :: (load 1 from %ir.addr)
$w0 = COPY %1
Modified: llvm/trunk/test/CodeGen/AArch64/GlobalISel/legalize-sextload.mir
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/GlobalISel/legalize-sextload.mir?rev=331601&r1=331600&r2=331601&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/GlobalISel/legalize-sextload.mir (original)
+++ llvm/trunk/test/CodeGen/AArch64/GlobalISel/legalize-sextload.mir Sat May 5 13:53:24 2018
@@ -16,9 +16,8 @@ body: |
liveins: $x0
; CHECK-LABEL: name: test_zextload
; CHECK: [[T0:%[0-9]+]]:_(p0) = COPY $x0
- ; CHECK: [[T1:%[0-9]+]]:_(s8) = G_LOAD [[T0]](p0) :: (load 1 from %ir.addr)
- ; CHECK: [[T2:%[0-9]+]]:_(s32) = G_SEXT [[T1]](s8)
- ; CHECK: $w0 = COPY [[T2]](s32)
+ ; CHECK: [[T1:%[0-9]+]]:_(s32) = G_SEXTLOAD [[T0]](p0) :: (load 1 from %ir.addr)
+ ; CHECK: $w0 = COPY [[T1]](s32)
%0:_(p0) = COPY $x0
%1:_(s32) = G_SEXTLOAD %0 :: (load 1 from %ir.addr)
$w0 = COPY %1
Modified: llvm/trunk/test/CodeGen/AArch64/GlobalISel/legalize-zextload.mir
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/GlobalISel/legalize-zextload.mir?rev=331601&r1=331600&r2=331601&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/GlobalISel/legalize-zextload.mir (original)
+++ llvm/trunk/test/CodeGen/AArch64/GlobalISel/legalize-zextload.mir Sat May 5 13:53:24 2018
@@ -16,9 +16,8 @@ body: |
liveins: $x0
; CHECK-LABEL: name: test_sextload
; CHECK: [[T0:%[0-9]+]]:_(p0) = COPY $x0
- ; CHECK: [[T1:%[0-9]+]]:_(s8) = G_LOAD [[T0]](p0) :: (load 1 from %ir.addr)
- ; CHECK: [[T2:%[0-9]+]]:_(s32) = G_ZEXT [[T1]](s8)
- ; CHECK: $w0 = COPY [[T2]](s32)
+ ; CHECK: [[T1:%[0-9]+]]:_(s32) = G_ZEXTLOAD [[T0]](p0) :: (load 1 from %ir.addr)
+ ; CHECK: $w0 = COPY [[T1]](s32)
%0:_(p0) = COPY $x0
%1:_(s32) = G_ZEXTLOAD %0 :: (load 1 from %ir.addr)
$w0 = COPY %1
Modified: llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-atomicrmw.mir
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-atomicrmw.mir?rev=331601&r1=331600&r2=331601&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-atomicrmw.mir (original)
+++ llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-atomicrmw.mir Sat May 5 13:53:24 2018
@@ -68,11 +68,11 @@ body: |
; CHECK-LABEL: name: atomicrmw_add_i32
; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
; CHECK: [[CST:%[0-9]+]]:gpr32 = MOVi32imm 1
- ; CHECK: [[RES:%[0-9]+]]:gpr32 = LDADDALW [[CST]], [[COPY]] :: (load store seq_cst 8 on %ir.addr)
+ ; CHECK: [[RES:%[0-9]+]]:gpr32 = LDADDALW [[CST]], [[COPY]] :: (load store seq_cst 4 on %ir.addr)
; CHECK: $w0 = COPY [[RES]]
%0:gpr(p0) = COPY $x0
%1:gpr(s32) = G_CONSTANT i32 1
- %2:gpr(s32) = G_ATOMICRMW_ADD %0, %1 :: (load store seq_cst 8 on %ir.addr)
+ %2:gpr(s32) = G_ATOMICRMW_ADD %0, %1 :: (load store seq_cst 4 on %ir.addr)
$w0 = COPY %2(s32)
...
@@ -88,11 +88,11 @@ body: |
; CHECK-LABEL: name: atomicrmw_sub_i32
; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
; CHECK: [[CST:%[0-9]+]]:gpr32 = MOVi32imm 1
- ; CHECK: [[RES:%[0-9]+]]:gpr32 = LDADDALW [[CST]], [[COPY]] :: (load store seq_cst 8 on %ir.addr)
+ ; CHECK: [[RES:%[0-9]+]]:gpr32 = LDADDALW [[CST]], [[COPY]] :: (load store seq_cst 4 on %ir.addr)
; CHECK: $w0 = COPY [[RES]]
%0:gpr(p0) = COPY $x0
%1:gpr(s32) = G_CONSTANT i32 1
- %2:gpr(s32) = G_ATOMICRMW_ADD %0, %1 :: (load store seq_cst 8 on %ir.addr)
+ %2:gpr(s32) = G_ATOMICRMW_ADD %0, %1 :: (load store seq_cst 4 on %ir.addr)
$w0 = COPY %2(s32)
...
@@ -109,11 +109,11 @@ body: |
; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
; CHECK: [[CST:%[0-9]+]]:gpr32 = MOVi32imm 1
; CHECK: [[CST2:%[0-9]+]]:gpr32 = ORNWrr $wzr, [[CST]]
- ; CHECK: [[RES:%[0-9]+]]:gpr32 = LDCLRAW [[CST2]], [[COPY]] :: (load store acquire 8 on %ir.addr)
+ ; CHECK: [[RES:%[0-9]+]]:gpr32 = LDCLRAW [[CST2]], [[COPY]] :: (load store acquire 4 on %ir.addr)
; CHECK: $w0 = COPY [[RES]]
%0:gpr(p0) = COPY $x0
%1:gpr(s32) = G_CONSTANT i32 1
- %2:gpr(s32) = G_ATOMICRMW_AND %0, %1 :: (load store acquire 8 on %ir.addr)
+ %2:gpr(s32) = G_ATOMICRMW_AND %0, %1 :: (load store acquire 4 on %ir.addr)
$w0 = COPY %2(s32)
...
@@ -129,11 +129,11 @@ body: |
; CHECK-LABEL: name: atomicrmw_or_i32
; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
; CHECK: [[CST:%[0-9]+]]:gpr32 = MOVi32imm 1
- ; CHECK: [[RES:%[0-9]+]]:gpr32 = LDSETLW [[CST]], [[COPY]] :: (load store release 8 on %ir.addr)
+ ; CHECK: [[RES:%[0-9]+]]:gpr32 = LDSETLW [[CST]], [[COPY]] :: (load store release 4 on %ir.addr)
; CHECK: $w0 = COPY [[RES]]
%0:gpr(p0) = COPY $x0
%1:gpr(s32) = G_CONSTANT i32 1
- %2:gpr(s32) = G_ATOMICRMW_OR %0, %1 :: (load store release 8 on %ir.addr)
+ %2:gpr(s32) = G_ATOMICRMW_OR %0, %1 :: (load store release 4 on %ir.addr)
$w0 = COPY %2(s32)
...
@@ -149,11 +149,11 @@ body: |
; CHECK-LABEL: name: atomicrmw_xor_i32
; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
; CHECK: [[CST:%[0-9]+]]:gpr32 = MOVi32imm 1
- ; CHECK: [[RES:%[0-9]+]]:gpr32 = LDEORALW [[CST]], [[COPY]] :: (load store acq_rel 8 on %ir.addr)
+ ; CHECK: [[RES:%[0-9]+]]:gpr32 = LDEORALW [[CST]], [[COPY]] :: (load store acq_rel 4 on %ir.addr)
; CHECK: $w0 = COPY [[RES]]
%0:gpr(p0) = COPY $x0
%1:gpr(s32) = G_CONSTANT i32 1
- %2:gpr(s32) = G_ATOMICRMW_XOR %0, %1 :: (load store acq_rel 8 on %ir.addr)
+ %2:gpr(s32) = G_ATOMICRMW_XOR %0, %1 :: (load store acq_rel 4 on %ir.addr)
$w0 = COPY %2(s32)
...
@@ -169,11 +169,11 @@ body: |
; CHECK-LABEL: name: atomicrmw_min_i32
; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
; CHECK: [[CST:%[0-9]+]]:gpr32 = MOVi32imm 1
- ; CHECK: [[RES:%[0-9]+]]:gpr32 = LDSMINALW [[CST]], [[COPY]] :: (load store acq_rel 8 on %ir.addr)
+ ; CHECK: [[RES:%[0-9]+]]:gpr32 = LDSMINALW [[CST]], [[COPY]] :: (load store acq_rel 4 on %ir.addr)
; CHECK: $w0 = COPY [[RES]]
%0:gpr(p0) = COPY $x0
%1:gpr(s32) = G_CONSTANT i32 1
- %2:gpr(s32) = G_ATOMICRMW_MIN %0, %1 :: (load store acq_rel 8 on %ir.addr)
+ %2:gpr(s32) = G_ATOMICRMW_MIN %0, %1 :: (load store acq_rel 4 on %ir.addr)
$w0 = COPY %2(s32)
...
@@ -189,11 +189,11 @@ body: |
; CHECK-LABEL: name: atomicrmw_max_i32
; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
; CHECK: [[CST:%[0-9]+]]:gpr32 = MOVi32imm 1
- ; CHECK: [[RES:%[0-9]+]]:gpr32 = LDSMAXALW [[CST]], [[COPY]] :: (load store acq_rel 8 on %ir.addr)
+ ; CHECK: [[RES:%[0-9]+]]:gpr32 = LDSMAXALW [[CST]], [[COPY]] :: (load store acq_rel 4 on %ir.addr)
; CHECK: $w0 = COPY [[RES]]
%0:gpr(p0) = COPY $x0
%1:gpr(s32) = G_CONSTANT i32 1
- %2:gpr(s32) = G_ATOMICRMW_MAX %0, %1 :: (load store acq_rel 8 on %ir.addr)
+ %2:gpr(s32) = G_ATOMICRMW_MAX %0, %1 :: (load store acq_rel 4 on %ir.addr)
$w0 = COPY %2(s32)
...
@@ -209,11 +209,11 @@ body: |
; CHECK-LABEL: name: atomicrmw_umin_i32
; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
; CHECK: [[CST:%[0-9]+]]:gpr32 = MOVi32imm 1
- ; CHECK: [[RES:%[0-9]+]]:gpr32 = LDUMINALW [[CST]], [[COPY]] :: (load store acq_rel 8 on %ir.addr)
+ ; CHECK: [[RES:%[0-9]+]]:gpr32 = LDUMINALW [[CST]], [[COPY]] :: (load store acq_rel 4 on %ir.addr)
; CHECK: $w0 = COPY [[RES]]
%0:gpr(p0) = COPY $x0
%1:gpr(s32) = G_CONSTANT i32 1
- %2:gpr(s32) = G_ATOMICRMW_UMIN %0, %1 :: (load store acq_rel 8 on %ir.addr)
+ %2:gpr(s32) = G_ATOMICRMW_UMIN %0, %1 :: (load store acq_rel 4 on %ir.addr)
$w0 = COPY %2(s32)
...
@@ -229,10 +229,10 @@ body: |
; CHECK-LABEL: name: atomicrmw_umax_i32
; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
; CHECK: [[CST:%[0-9]+]]:gpr32 = MOVi32imm 1
- ; CHECK: [[RES:%[0-9]+]]:gpr32 = LDUMAXALW [[CST]], [[COPY]] :: (load store acq_rel 8 on %ir.addr)
+ ; CHECK: [[RES:%[0-9]+]]:gpr32 = LDUMAXALW [[CST]], [[COPY]] :: (load store acq_rel 4 on %ir.addr)
; CHECK: $w0 = COPY [[RES]]
%0:gpr(p0) = COPY $x0
%1:gpr(s32) = G_CONSTANT i32 1
- %2:gpr(s32) = G_ATOMICRMW_UMAX %0, %1 :: (load store acq_rel 8 on %ir.addr)
+ %2:gpr(s32) = G_ATOMICRMW_UMAX %0, %1 :: (load store acq_rel 4 on %ir.addr)
$w0 = COPY %2(s32)
...
Modified: llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-cmpxchg.mir
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-cmpxchg.mir?rev=331601&r1=331600&r2=331601&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-cmpxchg.mir (original)
+++ llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-cmpxchg.mir Sat May 5 13:53:24 2018
@@ -21,12 +21,12 @@ body: |
; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
; CHECK: [[CMP:%[0-9]+]]:gpr32 = MOVi32imm 0
; CHECK: [[CST:%[0-9]+]]:gpr32 = MOVi32imm 1
- ; CHECK: [[RES:%[0-9]+]]:gpr32 = CASW [[CMP]], [[CST]], [[COPY]] :: (load store monotonic 8 on %ir.addr)
+ ; CHECK: [[RES:%[0-9]+]]:gpr32 = CASW [[CMP]], [[CST]], [[COPY]] :: (load store monotonic 4 on %ir.addr)
; CHECK: $w0 = COPY [[RES]]
%0:gpr(p0) = COPY $x0
%1:gpr(s32) = G_CONSTANT i32 0
%2:gpr(s32) = G_CONSTANT i32 1
- %3:gpr(s32) = G_ATOMIC_CMPXCHG %0, %1, %2 :: (load store monotonic 8 on %ir.addr)
+ %3:gpr(s32) = G_ATOMIC_CMPXCHG %0, %1, %2 :: (load store monotonic 4 on %ir.addr)
$w0 = COPY %3(s32)
...
Added: llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-extload.mir
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-extload.mir?rev=331601&view=auto
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-extload.mir (added)
+++ llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-extload.mir Sat May 5 13:53:24 2018
@@ -0,0 +1,48 @@
+# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py
+# RUN: llc -mtriple=aarch64-- -run-pass=instruction-select -verify-machineinstrs -global-isel %s -o - | FileCheck %s
+
+--- |
+ target datalayout = "e-m:o-i64:64-i128:128-n32:64-S128"
+
+ define void @aextload_s32_from_s16(i16 *%addr) { ret void }
+
+ define void @aextload_s32_from_s16_not_combined(i16 *%addr) { ret void }
+...
+
+---
+name: aextload_s32_from_s16
+legalized: true
+regBankSelected: true
+
+body: |
+ bb.0:
+ liveins: $x0
+
+ ; CHECK-LABEL: name: aextload_s32_from_s16
+ ; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
+ ; CHECK: [[T0:%[0-9]+]]:gpr32 = LDRHHui [[COPY]], 0 :: (load 2 from %ir.addr)
+ ; CHECK: $w0 = COPY [[T0]]
+ %0:gpr(p0) = COPY $x0
+ %1:gpr(s32) = G_LOAD %0 :: (load 2 from %ir.addr)
+ $w0 = COPY %1(s32)
+...
+
+---
+name: aextload_s32_from_s16_not_combined
+legalized: true
+regBankSelected: true
+
+body: |
+ bb.0:
+ liveins: $x0
+
+ ; CHECK-LABEL: name: aextload_s32_from_s16
+ ; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
+ ; CHECK: [[T0:%[0-9]+]]:gpr32 = LDRHHui [[COPY]], 0 :: (load 2 from %ir.addr)
+ ; CHECK: [[T1:%[0-9]+]]:gpr32all = COPY [[T0]]
+ ; CHECK: $w0 = COPY [[T1]]
+ %0:gpr(p0) = COPY $x0
+ %1:gpr(s16) = G_LOAD %0 :: (load 2 from %ir.addr)
+ %2:gpr(s32) = G_ANYEXT %1
+ $w0 = COPY %2(s32)
+...
Modified: llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-load.mir
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-load.mir?rev=331601&r1=331600&r2=331601&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-load.mir (original)
+++ llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-load.mir Sat May 5 13:53:24 2018
@@ -6,7 +6,9 @@
define void @load_s64_gpr(i64* %addr) { ret void }
define void @load_s32_gpr(i32* %addr) { ret void }
+ define void @load_s16_gpr_anyext(i16* %addr) { ret void }
define void @load_s16_gpr(i16* %addr) { ret void }
+ define void @load_s8_gpr_anyext(i8* %addr) { ret void }
define void @load_s8_gpr(i8* %addr) { ret void }
define void @load_fi_s64_gpr() {
@@ -30,10 +32,6 @@
define void @load_gep_32_s8_fpr(i8* %addr) { ret void }
define void @load_v2s32(i64 *%addr) { ret void }
-
- define void @sextload_s32_from_s16(i16 *%addr) { ret void }
- define void @zextload_s32_from_s16(i16 *%addr) { ret void }
- define void @aextload_s32_from_s16(i16 *%addr) { ret void }
...
---
@@ -81,6 +79,24 @@ body: |
...
---
+name: load_s16_gpr_anyext
+legalized: true
+regBankSelected: true
+
+body: |
+ bb.0:
+ liveins: $x0
+
+ ; CHECK-LABEL: name: load_s16_gpr_anyext
+ ; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
+ ; CHECK: [[LDRHHui:%[0-9]+]]:gpr32 = LDRHHui [[COPY]], 0 :: (load 2 from %ir.addr)
+ ; CHECK: $w0 = COPY [[LDRHHui]]
+ %0:gpr(p0) = COPY $x0
+ %1:gpr(s32) = G_LOAD %0 :: (load 2 from %ir.addr)
+ $w0 = COPY %1(s32)
+...
+
+---
name: load_s16_gpr
legalized: true
regBankSelected: true
@@ -96,7 +112,8 @@ body: |
; CHECK-LABEL: name: load_s16_gpr
; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
; CHECK: [[LDRHHui:%[0-9]+]]:gpr32 = LDRHHui [[COPY]], 0 :: (load 2 from %ir.addr)
- ; CHECK: $w0 = COPY [[LDRHHui]]
+ ; CHECK: [[T0:%[0-9]+]]:gpr32all = COPY [[LDRHHui]]
+ ; CHECK: $w0 = COPY [[T0]]
%0(p0) = COPY $x0
%1(s16) = G_LOAD %0 :: (load 2 from %ir.addr)
%2:gpr(s32) = G_ANYEXT %1
@@ -104,6 +121,24 @@ body: |
...
---
+name: load_s8_gpr_anyext
+legalized: true
+regBankSelected: true
+
+body: |
+ bb.0:
+ liveins: $x0
+
+ ; CHECK-LABEL: name: load_s8_gpr
+ ; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
+ ; CHECK: [[LDRBBui:%[0-9]+]]:gpr32 = LDRBBui [[COPY]], 0 :: (load 1 from %ir.addr)
+ ; CHECK: $w0 = COPY [[LDRBBui]]
+ %0:gpr(p0) = COPY $x0
+ %1:gpr(s32) = G_LOAD %0 :: (load 1 from %ir.addr)
+ $w0 = COPY %1(s32)
+...
+
+---
name: load_s8_gpr
legalized: true
regBankSelected: true
@@ -119,7 +154,8 @@ body: |
; CHECK-LABEL: name: load_s8_gpr
; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
; CHECK: [[LDRBBui:%[0-9]+]]:gpr32 = LDRBBui [[COPY]], 0 :: (load 1 from %ir.addr)
- ; CHECK: $w0 = COPY [[LDRBBui]]
+ ; CHECK: [[T0:%[0-9]+]]:gpr32all = COPY [[LDRBBui]]
+ ; CHECK: $w0 = COPY [[T0]]
%0(p0) = COPY $x0
%1(s8) = G_LOAD %0 :: (load 1 from %ir.addr)
%2:gpr(s32) = G_ANYEXT %1
@@ -220,7 +256,8 @@ body: |
; CHECK-LABEL: name: load_gep_64_s16_gpr
; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
; CHECK: [[LDRHHui:%[0-9]+]]:gpr32 = LDRHHui [[COPY]], 32 :: (load 2 from %ir.addr)
- ; CHECK: $w0 = COPY [[LDRHHui]]
+ ; CHECK: [[T0:%[0-9]+]]:gpr32all = COPY [[LDRHHui]]
+ ; CHECK: $w0 = COPY [[T0]]
%0(p0) = COPY $x0
%1(s64) = G_CONSTANT i64 64
%2(p0) = G_GEP %0, %1
@@ -247,7 +284,8 @@ body: |
; CHECK-LABEL: name: load_gep_1_s8_gpr
; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
; CHECK: [[LDRBBui:%[0-9]+]]:gpr32 = LDRBBui [[COPY]], 1 :: (load 1 from %ir.addr)
- ; CHECK: $w0 = COPY [[LDRBBui]]
+ ; CHECK: [[T0:%[0-9]+]]:gpr32all = COPY [[LDRBBui]]
+ ; CHECK: $w0 = COPY [[T0]]
%0(p0) = COPY $x0
%1(s64) = G_CONSTANT i64 1
%2(p0) = G_GEP %0, %1
@@ -468,59 +506,3 @@ body: |
%1(<2 x s32>) = G_LOAD %0 :: (load 8 from %ir.addr)
$d0 = COPY %1(<2 x s32>)
...
----
-name: sextload_s32_from_s16
-legalized: true
-regBankSelected: true
-
-body: |
- bb.0:
- liveins: $w0
-
- ; CHECK-LABEL: name: sextload_s32_from_s16
- ; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
- ; CHECK: [[T0:%[0-9]+]]:gpr32 = LDRSHWui [[COPY]], 0 :: (load 2 from %ir.addr)
- ; CHECK: $w0 = COPY [[T0]]
- %0:gpr(p0) = COPY $x0
- %1:gpr(s16) = G_LOAD %0 :: (load 2 from %ir.addr)
- %2:gpr(s32) = G_SEXT %1
- $w0 = COPY %2(s32)
-...
-
----
-name: zextload_s32_from_s16
-legalized: true
-regBankSelected: true
-
-body: |
- bb.0:
- liveins: $w0
-
- ; CHECK-LABEL: name: zextload_s32_from_s16
- ; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
- ; CHECK: [[T0:%[0-9]+]]:gpr32 = LDRHHui [[COPY]], 0 :: (load 2 from %ir.addr)
- ; CHECK: $w0 = COPY [[T0]]
- %0:gpr(p0) = COPY $x0
- %1:gpr(s16) = G_LOAD %0 :: (load 2 from %ir.addr)
- %2:gpr(s32) = G_ZEXT %1
- $w0 = COPY %2(s32)
-...
-
----
-name: aextload_s32_from_s16
-legalized: true
-regBankSelected: true
-
-body: |
- bb.0:
- liveins: $w0
-
- ; CHECK-LABEL: name: aextload_s32_from_s16
- ; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
- ; CHECK: [[T0:%[0-9]+]]:gpr32 = LDRHHui [[COPY]], 0 :: (load 2 from %ir.addr)
- ; CHECK: $w0 = COPY [[T0]]
- %0:gpr(p0) = COPY $x0
- %1:gpr(s16) = G_LOAD %0 :: (load 2 from %ir.addr)
- %2:gpr(s32) = G_ANYEXT %1
- $w0 = COPY %2(s32)
-...
Added: llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-sextload.mir
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-sextload.mir?rev=331601&view=auto
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-sextload.mir (added)
+++ llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-sextload.mir Sat May 5 13:53:24 2018
@@ -0,0 +1,47 @@
+# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py
+# RUN: llc -mtriple=aarch64-- -run-pass=instruction-select -verify-machineinstrs -global-isel %s -o - | FileCheck %s
+
+--- |
+ target datalayout = "e-m:o-i64:64-i128:128-n32:64-S128"
+
+ define void @sextload_s32_from_s16(i16 *%addr) { ret void }
+ define void @sextload_s32_from_s16_not_combined(i16 *%addr) { ret void }
+...
+
+---
+name: sextload_s32_from_s16
+legalized: true
+regBankSelected: true
+
+body: |
+ bb.0:
+ liveins: $x0
+
+ ; CHECK-LABEL: name: sextload_s32_from_s16
+ ; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
+ ; CHECK: [[T0:%[0-9]+]]:gpr32 = LDRSHWui [[COPY]], 0 :: (load 2 from %ir.addr)
+ ; CHECK: $w0 = COPY [[T0]]
+ %0:gpr(p0) = COPY $x0
+ %1:gpr(s32) = G_SEXTLOAD %0 :: (load 2 from %ir.addr)
+ $w0 = COPY %1(s32)
+...
+
+---
+name: sextload_s32_from_s16_not_combined
+legalized: true
+regBankSelected: true
+
+body: |
+ bb.0:
+ liveins: $x0
+
+ ; CHECK-LABEL: name: sextload_s32_from_s16_not_combined
+ ; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
+ ; CHECK: [[T0:%[0-9]+]]:gpr32 = LDRHHui [[COPY]], 0 :: (load 2 from %ir.addr)
+ ; CHECK: [[T1:%[0-9]+]]:gpr32 = SBFMWri [[T0]], 0, 15
+ ; CHECK: $w0 = COPY [[T1]]
+ %0:gpr(p0) = COPY $x0
+ %1:gpr(s16) = G_LOAD %0 :: (load 2 from %ir.addr)
+ %2:gpr(s32) = G_SEXT %1
+ $w0 = COPY %2(s32)
+...
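To summarize the tests above: the selector no longer matches a (G_SEXT (G_LOAD)) pair; extension is either explicit in the opcode (G_SEXTLOAD/G_ZEXTLOAD) or implicit in G_LOAD when the memory access is narrower than the result. A minimal sketch of that classification rule, in Python for illustration only (the function and names here are hypothetical, not LLVM API):

```python
def classify_load(opcode, mem_size_bytes, result_bits):
    """Classify a GlobalISel load under the new representation (sketch).

    Sign/zero extension is explicit in the opcode, while G_LOAD
    any-extends whenever the loaded memory is narrower than the result
    type, i.e. MMO.getSize() * 8 < ResultTy.getSizeInBits().
    """
    if opcode == "G_SEXTLOAD":
        return "sign-extending"
    if opcode == "G_ZEXTLOAD":
        return "zero-extending"
    assert opcode == "G_LOAD"
    mem_bits = mem_size_bytes * 8
    return "any-extending" if mem_bits < result_bits else "non-extending"


# e.g. the sextload_s32_from_s16 tests load 2 bytes into an s32:
print(classify_load("G_SEXTLOAD", 2, 32))  # sign-extending
print(classify_load("G_LOAD", 4, 32))      # non-extending
```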
Modified: llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-with-no-legality-check.mir
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-with-no-legality-check.mir?rev=331601&r1=331600&r2=331601&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-with-no-legality-check.mir (original)
+++ llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-with-no-legality-check.mir Sat May 5 13:53:24 2018
@@ -231,6 +231,7 @@ body: |
$noreg = PATCHABLE_RET
...
+# The rules that generated this test have changed. The generator should be rerun.
---
name: test_rule92_id2150_at_idx7770
alignment: 2
@@ -238,7 +239,7 @@ legalized: true
regBankSelected: true
tracksRegLiveness: true
registers:
- - { id: 0, class: fpr }
+ - { id: 0, class: gpr }
- { id: 1, class: gpr }
- { id: 2, class: gpr }
liveins:
@@ -253,11 +254,11 @@ body: |
; CHECK: [[LDRBBui:%[0-9]+]]:gpr32 = LDRBBui [[COPY]], 0 :: (load 1)
; CHECK: $noreg = PATCHABLE_RET [[LDRBBui]]
%2:gpr(p0) = COPY $x0
- %0:fpr(s1) = G_LOAD %2(p0) :: (load 1)
- %1:gpr(s32) = G_ANYEXT %0(s1)
- $noreg = PATCHABLE_RET %1(s32)
+ %0:gpr(s32) = G_LOAD %2(p0) :: (load 1)
+ $noreg = PATCHABLE_RET %0(s32)
...
+# The rules that generated this test have changed. The generator should be rerun.
---
name: test_rule96_id2146_at_idx8070
alignment: 2
@@ -277,8 +278,10 @@ body: |
; CHECK-LABEL: name: test_rule96_id2146_at_idx8070
; CHECK: liveins: $x0
; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
- ; CHECK: [[LDRBBui:%[0-9]+]]:gpr32 = LDRBBui [[COPY]], 0 :: (load 1)
- ; CHECK: $noreg = PATCHABLE_RET [[LDRBBui]]
+ ; CHECK: [[LDRBui:%[0-9]+]]:fpr8 = LDRBui [[COPY]], 0 :: (load 1)
+ ; CHECK: [[COPY2:%[0-9]+]]:gpr32 = COPY [[LDRBui]]
+ ; CHECK: [[UBFMWri:%[0-9]+]]:gpr32 = UBFMWri [[COPY2]], 0, 0
+ ; CHECK: $noreg = PATCHABLE_RET [[UBFMWri]]
%2:gpr(p0) = COPY $x0
%0:fpr(s1) = G_LOAD %2(p0) :: (load 1)
%1:gpr(s32) = G_ZEXT %0(s1)
Added: llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-zextload.mir
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-zextload.mir?rev=331601&view=auto
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-zextload.mir (added)
+++ llvm/trunk/test/CodeGen/AArch64/GlobalISel/select-zextload.mir Sat May 5 13:53:24 2018
@@ -0,0 +1,46 @@
+# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py
+# RUN: llc -mtriple=aarch64-- -run-pass=instruction-select -verify-machineinstrs -global-isel %s -o - | FileCheck %s
+
+--- |
+ target datalayout = "e-m:o-i64:64-i128:128-n32:64-S128"
+
+ define void @zextload_s32_from_s16(i16 *%addr) { ret void }
+ define void @zextload_s32_from_s16_not_combined(i16 *%addr) { ret void }
+...
+
+---
+name: zextload_s32_from_s16
+legalized: true
+regBankSelected: true
+
+body: |
+ bb.0:
+ liveins: $x0
+
+ ; CHECK-LABEL: name: zextload_s32_from_s16
+ ; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
+ ; CHECK: [[T0:%[0-9]+]]:gpr32 = LDRHHui [[COPY]], 0 :: (load 2 from %ir.addr)
+ ; CHECK: $w0 = COPY [[T0]]
+ %0:gpr(p0) = COPY $x0
+ %1:gpr(s32) = G_ZEXTLOAD %0 :: (load 2 from %ir.addr)
+ $w0 = COPY %1(s32)
+...
+---
+name: zextload_s32_from_s16_not_combined
+legalized: true
+regBankSelected: true
+
+body: |
+ bb.0:
+ liveins: $x0
+
+ ; CHECK-LABEL: name: zextload_s32_from_s16_not_combined
+ ; CHECK: [[COPY:%[0-9]+]]:gpr64sp = COPY $x0
+ ; CHECK: [[T0:%[0-9]+]]:gpr32 = LDRHHui [[COPY]], 0 :: (load 2 from %ir.addr)
+ ; CHECK: [[T1:%[0-9]+]]:gpr32 = UBFMWri [[T0]], 0, 15
+ ; CHECK: $w0 = COPY [[T1]]
+ %0:gpr(p0) = COPY $x0
+ %1:gpr(s16) = G_LOAD %0 :: (load 2 from %ir.addr)
+ %2:gpr(s32) = G_ZEXT %1
+ $w0 = COPY %2(s32)
+...
Modified: llvm/trunk/test/TableGen/GlobalISelEmitter.td
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/TableGen/GlobalISelEmitter.td?rev=331601&r1=331600&r2=331601&view=diff
==============================================================================
--- llvm/trunk/test/TableGen/GlobalISelEmitter.td (original)
+++ llvm/trunk/test/TableGen/GlobalISelEmitter.td Sat May 5 13:53:24 2018
@@ -112,11 +112,9 @@ def HasC : Predicate<"Subtarget->hasC()"
// CHECK-LABEL: // LLT Objects.
// CHECK-NEXT: enum {
-// CHECK-NEXT: GILLT_s16,
// CHECK-NEXT: GILLT_s32,
// CHECK-NEXT: }
// CHECK-NEXT: const static LLT TypeObjects[] = {
-// CHECK-NEXT: LLT::scalar(16),
// CHECK-NEXT: LLT::scalar(32),
// CHECK-NEXT: };
@@ -265,6 +263,7 @@ def HasC : Predicate<"Subtarget->hasC()"
// CHECK-NEXT: GIM_CheckComplexPattern, /*MI*/1, /*Op*/3, /*Renderer*/2, GICP_gi_complex,
// CHECK-NEXT: GIM_CheckIsSafeToFold, /*InsnID*/1,
// CHECK-NEXT: // (select:{ *:[i32] } GPR32:{ *:[i32] }:$src1, (complex_rr:{ *:[i32] } GPR32:{ *:[i32] }:$src2a, GPR32:{ *:[i32] }:$src2b), (select:{ *:[i32] } GPR32:{ *:[i32] }:$src3, complex:{ *:[i32] }:$src4, (complex:{ *:[i32] } i32imm:{ *:[i32] }:$src5a, i32imm:{ *:[i32] }:$src5b))) => (INSN3:{ *:[i32] } GPR32:{ *:[i32] }:$src1, GPR32:{ *:[i32] }:$src2b, GPR32:{ *:[i32] }:$src2a, (INSN4:{ *:[i32] } GPR32:{ *:[i32] }:$src3, complex:{ *:[i32] }:$src4, i32imm:{ *:[i32] }:$src5a, i32imm:{ *:[i32] }:$src5b))
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_MakeTempReg, /*TempRegID*/0, /*TypeID*/GILLT_s32,
// CHECK-NEXT: GIR_BuildMI, /*InsnID*/1, /*Opcode*/MyTarget::INSN4,
// CHECK-NEXT: GIR_AddTempRegister, /*InsnID*/1, /*TempRegID*/0, /*TempRegFlags*/RegState::Define,
@@ -313,6 +312,7 @@ def : Pat<(select GPR32:$src1, (complex_
// CHECK-NEXT: GIM_CheckType, /*MI*/0, /*Op*/3, /*Type*/GILLT_s32,
// CHECK-NEXT: GIM_CheckComplexPattern, /*MI*/0, /*Op*/3, /*Renderer*/1, GICP_gi_complex,
// CHECK-NEXT: // (select:{ *:[i32] } GPR32:{ *:[i32] }:$src1, complex:{ *:[i32] }:$src2, complex:{ *:[i32] }:$src3) => (INSN2:{ *:[i32] } GPR32:{ *:[i32] }:$src1, complex:{ *:[i32] }:$src3, complex:{ *:[i32] }:$src2)
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_BuildMI, /*InsnID*/0, /*Opcode*/MyTarget::INSN2,
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/0, /*OpIdx*/0, // dst
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/0, /*OpIdx*/1, // src1
@@ -348,6 +348,7 @@ def : Pat<(select GPR32:$src1, (complex_
// CHECK-NEXT: GIM_CheckType, /*MI*/0, /*Op*/2, /*Type*/GILLT_s32,
// CHECK-NEXT: GIM_CheckComplexPattern, /*MI*/0, /*Op*/2, /*Renderer*/0, GICP_gi_complex,
// CHECK-NEXT: // (sub:{ *:[i32] } GPR32:{ *:[i32] }:$src1, complex:{ *:[i32] }:$src2) => (INSN1:{ *:[i32] } GPR32:{ *:[i32] }:$src1, complex:{ *:[i32] }:$src2)
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_BuildMI, /*InsnID*/0, /*Opcode*/MyTarget::INSN1,
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/0, /*OpIdx*/0, // dst
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/0, /*OpIdx*/1, // src1
@@ -411,6 +412,7 @@ def : Pat<(select GPR32:$src1, complex:$
// CHECK-NEXT: GIM_CheckIsSafeToFold, /*InsnID*/1,
// CHECK-NEXT: GIM_CheckIsSafeToFold, /*InsnID*/2,
// CHECK-NEXT: // (sub:{ *:[i32] } (sub:{ *:[i32] } GPR32:{ *:[i32] }:$src1, GPR32:{ *:[i32] }:$src2), (sub:{ *:[i32] } GPR32:{ *:[i32] }:$src3, GPR32:{ *:[i32] }:$src4)) => (INSNBOB:{ *:[i32] } GPR32:{ *:[i32] }:$src1, GPR32:{ *:[i32] }:$src2, GPR32:{ *:[i32] }:$src3, GPR32:{ *:[i32] }:$src4)
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_BuildMI, /*InsnID*/0, /*Opcode*/MyTarget::INSNBOB,
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/0, /*OpIdx*/0, // dst
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/1, /*OpIdx*/1, // src1
@@ -449,6 +451,7 @@ def INSNBOB : I<(outs GPR32:$dst), (ins
// CHECK-NEXT: GIM_CheckType, /*MI*/0, /*Op*/2, /*Type*/GILLT_s32,
// CHECK-NEXT: GIM_CheckRegBankForClass, /*MI*/0, /*Op*/2, /*RC*/MyTarget::GPR32RegClassID,
// CHECK-NEXT: // (intrinsic_wo_chain:{ *:[i32] } [[ID:[0-9]+]]:{ *:[iPTR] }, GPR32:{ *:[i32] }:$src1) => (MOV:{ *:[i32] } GPR32:{ *:[i32] }:$src1)
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_BuildMI, /*InsnID*/0, /*Opcode*/MyTarget::MOV,
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/0, /*OpIdx*/0, // dst
@@ -485,6 +488,7 @@ def MOV : I<(outs GPR32:$dst), (ins GPR3
// CHECK-NEXT: GIM_CheckType, /*MI*/0, /*Op*/2, /*Type*/GILLT_s32,
// CHECK-NEXT: GIM_CheckConstantInt, /*MI*/0, /*Op*/2, -2
// CHECK-NEXT: // (xor:{ *:[i32] } GPR32:{ *:[i32] }:$src1, -2:{ *:[i32] }) => (XORI:{ *:[i32] } GPR32:{ *:[i32] }:$src1)
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_BuildMI, /*InsnID*/0, /*Opcode*/MyTarget::XORI,
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/0, /*OpIdx*/0, // dst
// CHECK-NEXT: GIR_AddImm, /*InsnID*/0, /*Imm*/-1,
@@ -515,6 +519,7 @@ def XORI : I<(outs GPR32:$dst), (ins m1:
// CHECK-NEXT: GIM_CheckType, /*MI*/0, /*Op*/2, /*Type*/GILLT_s32,
// CHECK-NEXT: GIM_CheckConstantInt, /*MI*/0, /*Op*/2, -3
// CHECK-NEXT: // (xor:{ *:[i32] } GPR32:{ *:[i32] }:$src1, -3:{ *:[i32] }) => (XOR:{ *:[i32] } GPR32:{ *:[i32] }:$src1)
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_BuildMI, /*InsnID*/0, /*Opcode*/MyTarget::XOR,
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/0, /*OpIdx*/0, // dst
// CHECK-NEXT: GIR_AddRegister, /*InsnID*/0, MyTarget::R0,
@@ -545,6 +550,7 @@ def XOR : I<(outs GPR32:$dst), (ins Z:$s
// CHECK-NEXT: GIM_CheckType, /*MI*/0, /*Op*/2, /*Type*/GILLT_s32,
// CHECK-NEXT: GIM_CheckConstantInt, /*MI*/0, /*Op*/2, -4
// CHECK-NEXT: // (xor:{ *:[i32] } GPR32:{ *:[i32] }:$src1, -4:{ *:[i32] }) => (XORlike:{ *:[i32] } GPR32:{ *:[i32] }:$src1)
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_BuildMI, /*InsnID*/0, /*Opcode*/MyTarget::XORlike,
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/0, /*OpIdx*/0, // dst
// CHECK-NEXT: GIR_AddImm, /*InsnID*/0, /*Imm*/-1,
@@ -576,6 +582,7 @@ def XORlike : I<(outs GPR32:$dst), (ins
// CHECK-NEXT: GIM_CheckType, /*MI*/0, /*Op*/2, /*Type*/GILLT_s32,
// CHECK-NEXT: GIM_CheckConstantInt, /*MI*/0, /*Op*/2, -5,
// CHECK-NEXT: // (xor:{ *:[i32] } GPR32:{ *:[i32] }:$src1, -5:{ *:[i32] }) => (XORManyDefaults:{ *:[i32] } GPR32:{ *:[i32] }:$src1)
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_BuildMI, /*InsnID*/0, /*Opcode*/MyTarget::XORManyDefaults,
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/0, /*OpIdx*/0, // dst
// CHECK-NEXT: GIR_AddImm, /*InsnID*/0, /*Imm*/-1,
@@ -610,6 +617,7 @@ def XORManyDefaults : I<(outs GPR32:$dst
// CHECK-NEXT: GIM_CheckType, /*MI*/0, /*Op*/2, /*Type*/GILLT_s32,
// CHECK-NEXT: GIM_CheckConstantInt, /*MI*/0, /*Op*/2, -1,
// CHECK-NEXT: // (xor:{ *:[i32] } GPR32:{ *:[i32] }:$Wm, -1:{ *:[i32] }) => (ORN:{ *:[i32] } R0:{ *:[i32] }, GPR32:{ *:[i32] }:$Wm)
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_BuildMI, /*InsnID*/0, /*Opcode*/MyTarget::ORN,
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/0, /*OpIdx*/0, // dst
// CHECK-NEXT: GIR_AddRegister, /*InsnID*/0, MyTarget::R0,
@@ -627,47 +635,6 @@ def ORN : I<(outs GPR32:$dst), (ins GPR3
def : Pat<(not GPR32:$Wm), (ORN R0, GPR32:$Wm)>;
-//===- Test a simple pattern with a sextload -------------------------------===//
-
-// OPT-NEXT: GIM_Try, /*On fail goto*//*Label [[GRP_LABEL_NUM:[0-9]+]]*/ [[GRP_LABEL:[0-9]+]],
-// OPT-NEXT: GIM_CheckOpcode, /*MI*/0, TargetOpcode::G_SEXT,
-// CHECK-NEXT: GIM_Try, /*On fail goto*//*Label [[LABEL_NUM:[0-9]+]]*/ [[LABEL:[0-9]+]],
-// CHECK-NEXT: GIM_CheckNumOperands, /*MI*/0, /*Expected*/2,
-// CHECK-NEXT: GIM_RecordInsn, /*DefineMI*/1, /*MI*/0, /*OpIdx*/1, // MIs[1]
-// CHECK-NEXT: GIM_CheckNumOperands, /*MI*/1, /*Expected*/2,
-// NOOPT-NEXT: GIM_CheckOpcode, /*MI*/0, TargetOpcode::G_SEXT,
-// OPT-NEXT: // No instruction predicates
-// CHECK-NEXT: // MIs[0] dst
-// CHECK-NEXT: GIM_CheckType, /*MI*/0, /*Op*/0, /*Type*/GILLT_s32,
-// CHECK-NEXT: GIM_CheckRegBankForClass, /*MI*/0, /*Op*/0, /*RC*/MyTarget::GPR32RegClassID,
-// CHECK-NEXT: // MIs[0] Operand 1
-// CHECK-NEXT: GIM_CheckType, /*MI*/0, /*Op*/1, /*Type*/GILLT_s16,
-// CHECK-NEXT: GIM_CheckOpcode, /*MI*/1, TargetOpcode::G_LOAD,
-// CHECK-NEXT: GIM_CheckAtomicOrdering, /*MI*/1, /*Order*/(int64_t)AtomicOrdering::NotAtomic,
-// CHECK-NEXT: // MIs[1] Operand 0
-// CHECK-NEXT: GIM_CheckType, /*MI*/1, /*Op*/0, /*Type*/GILLT_s16,
-// CHECK-NEXT: // MIs[1] src1
-// CHECK-NEXT: GIM_CheckPointerToAny, /*MI*/1, /*Op*/1, /*SizeInBits*/32,
-// CHECK-NEXT: GIM_CheckRegBankForClass, /*MI*/1, /*Op*/1, /*RC*/MyTarget::GPR32RegClassID,
-// CHECK-NEXT: GIM_CheckIsSafeToFold, /*InsnID*/1,
-// CHECK-NEXT: // (sext:{ *:[i32] } (ld:{ *:[i16] } GPR32:{ *:[i32] }:$src1)<<P:Predicate_unindexedload>>) => (SEXTLOAD:{ *:[i32] } GPR32:{ *:[i32] }:$src1)
-// CHECK-NEXT: GIR_BuildMI, /*InsnID*/0, /*Opcode*/MyTarget::SEXTLOAD,
-// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/0, /*OpIdx*/0, // dst
-// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/1, /*OpIdx*/1, // src1
-// CHECK-NEXT: GIR_MergeMemOperands, /*InsnID*/0, /*MergeInsnID's*/0, 1, GIU_MergeMemOperands_EndOfList,
-// CHECK-NEXT: GIR_EraseFromParent, /*InsnID*/0,
-// CHECK-NEXT: GIR_ConstrainSelectedInstOperands, /*InsnID*/0,
-// CHECK-NEXT: GIR_Done,
-// CHECK-NEXT: // Label [[LABEL_NUM]]: @[[LABEL]]
-// Closing the G_SEXT group.
-// OPT-NEXT: GIM_Reject,
-// OPT-NEXT: GIR_Done,
-// OPT-NEXT: // Label [[GRP_LABEL_NUM]]: @[[GRP_LABEL]]
-
-def SEXTLOAD : I<(outs GPR32:$dst), (ins GPR32:$src1),
- [(set GPR32:$dst, (sextloadi16 GPR32:$src1))]>;
-
-
//===- Test a nested instruction match. -----------------------------------===//
// OPT-NEXT: GIM_Try, /*On fail goto*//*Label [[GRP_LABEL_NUM:[0-9]+]]*/ [[GRP_LABEL:[0-9]+]],
@@ -698,6 +665,7 @@ def SEXTLOAD : I<(outs GPR32:$dst), (ins
// CHECK-NEXT: GIM_CheckRegBankForClass, /*MI*/0, /*Op*/2, /*RC*/MyTarget::GPR32RegClassID,
// CHECK-NEXT: GIM_CheckIsSafeToFold, /*InsnID*/1,
// CHECK-NEXT: // (mul:{ *:[i32] } (add:{ *:[i32] } GPR32:{ *:[i32] }:$src1, GPR32:{ *:[i32] }:$src2), GPR32:{ *:[i32] }:$src3) => (MULADD:{ *:[i32] } GPR32:{ *:[i32] }:$src1, GPR32:{ *:[i32] }:$src2, GPR32:{ *:[i32] }:$src3)
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_BuildMI, /*InsnID*/0, /*Opcode*/MyTarget::MULADD,
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/0, /*OpIdx*/0, // dst
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/1, /*OpIdx*/1, // src1
@@ -735,6 +703,7 @@ def SEXTLOAD : I<(outs GPR32:$dst), (ins
// CHECK-NEXT: GIM_CheckRegBankForClass, /*MI*/1, /*Op*/2, /*RC*/MyTarget::GPR32RegClassID,
// CHECK-NEXT: GIM_CheckIsSafeToFold, /*InsnID*/1,
// CHECK-NEXT: // (mul:{ *:[i32] } GPR32:{ *:[i32] }:$src3, (add:{ *:[i32] } GPR32:{ *:[i32] }:$src1, GPR32:{ *:[i32] }:$src2)) => (MULADD:{ *:[i32] } GPR32:{ *:[i32] }:$src1, GPR32:{ *:[i32] }:$src2, GPR32:{ *:[i32] }:$src3)
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_BuildMI, /*InsnID*/0, /*Opcode*/MyTarget::MULADD,
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/0, /*OpIdx*/0, // dst
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/1, /*OpIdx*/1, // src1
@@ -768,6 +737,7 @@ def MULADD : I<(outs GPR32:$dst), (ins G
// CHECK-NEXT: // MIs[0] Operand 1
// CHECK-NEXT: GIM_CheckLiteralInt, /*MI*/0, /*Op*/1, 1,
// CHECK-NEXT: // 1:{ *:[i32] } => (MOV1:{ *:[i32] })
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_BuildMI, /*InsnID*/0, /*Opcode*/MyTarget::MOV1,
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/0, /*OpIdx*/0, // dst
// CHECK-NEXT: GIR_EraseFromParent, /*InsnID*/0,
@@ -789,6 +759,7 @@ def MOV1 : I<(outs GPR32:$dst), (ins), [
// CHECK-NEXT: // MIs[0] Operand 1
// CHECK-NEXT: // No operand predicates
// CHECK-NEXT: // (imm:{ *:[i32] })<<P:Predicate_simm8>>:$imm => (MOVimm8:{ *:[i32] } (imm:{ *:[i32] }):$imm)
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_BuildMI, /*InsnID*/0, /*Opcode*/MyTarget::MOVimm8,
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/0, /*OpIdx*/0, // dst
// CHECK-NEXT: GIR_CopyConstantAsSImm, /*NewInsnID*/0, /*OldInsnID*/0, // imm
@@ -812,6 +783,7 @@ def MOVimm8 : I<(outs GPR32:$dst), (ins
// CHECK-NEXT: // MIs[0] Operand 1
// CHECK-NEXT: // No operand predicates
// CHECK-NEXT: // (imm:{ *:[i32] })<<P:Predicate_simm9>>:$imm => (MOVimm9:{ *:[i32] } (imm:{ *:[i32] }):$imm)
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_BuildMI, /*InsnID*/0, /*Opcode*/MyTarget::MOVimm9,
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/0, /*OpIdx*/0, // dst
// CHECK-NEXT: GIR_CopyConstantAsSImm, /*NewInsnID*/0, /*OldInsnID*/0, // imm
@@ -834,6 +806,7 @@ def MOVimm9 : I<(outs GPR32:$dst), (ins
// CHECK-NEXT: // MIs[0] Operand 1
// CHECK-NEXT: // No operand predicates
// CHECK-NEXT: // (imm:{ *:[i32] })<<P:Predicate_cimm8>><<X:cimm8_xform>>:$imm => (MOVcimm8:{ *:[i32] } (cimm8_xform:{ *:[i32] } (imm:{ *:[i32] }):$imm))
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_BuildMI, /*InsnID*/0, /*Opcode*/MyTarget::MOVcimm8,
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/0, /*OpIdx*/0, // dst
// CHECK-NEXT: GIR_CustomRenderer, /*InsnID*/0, /*OldInsnID*/0, /*Renderer*/GICR_renderImm8, // imm
@@ -861,6 +834,7 @@ def MOVcimm8 : I<(outs GPR32:$dst), (ins
// CHECK-NEXT: // MIs[0] Operand 1
// CHECK-NEXT: // No operand predicates
// CHECK-NEXT: // (fpimm:{ *:[f32] })<<P:Predicate_fpimmz>>:$imm => (MOVfpimmz:{ *:[f32] } (fpimm:{ *:[f32] }):$imm)
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_BuildMI, /*InsnID*/0, /*Opcode*/MyTarget::MOVfpimmz,
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/0, /*OpIdx*/0, // dst
// CHECK-NEXT: GIR_CopyFConstantAsFPImm, /*NewInsnID*/0, /*OldInsnID*/0, // imm
@@ -880,6 +854,7 @@ def MOVcimm8 : I<(outs GPR32:$dst), (ins
// CHECK-NEXT: GIM_Try, /*On fail goto*//*Label [[LABEL_NUM:[0-9]+]]*/ [[LABEL:[0-9]+]],
// CHECK-NEXT: GIM_CheckNumOperands, /*MI*/0, /*Expected*/2,
// NOOPT-NEXT: GIM_CheckOpcode, /*MI*/0, TargetOpcode::G_LOAD,
+// CHECK-NEXT: GIM_CheckMemorySizeEqualToLLT, /*MI*/0, /*MMO*/0, /*OpIdx*/0,
// CHECK-NEXT: GIM_CheckAtomicOrdering, /*MI*/0, /*Order*/(int64_t)AtomicOrdering::NotAtomic,
// CHECK-NEXT: // MIs[0] dst
// CHECK-NEXT: GIM_CheckType, /*MI*/0, /*Op*/0, /*Type*/GILLT_s32,
@@ -888,6 +863,7 @@ def MOVcimm8 : I<(outs GPR32:$dst), (ins
// CHECK-NEXT: GIM_CheckPointerToAny, /*MI*/0, /*Op*/1, /*SizeInBits*/32,
// CHECK-NEXT: GIM_CheckRegBankForClass, /*MI*/0, /*Op*/1, /*RC*/MyTarget::GPR32RegClassID,
// CHECK-NEXT: // (ld:{ *:[i32] } GPR32:{ *:[i32] }:$src1)<<P:Predicate_unindexedload>><<P:Predicate_load>> => (LOAD:{ *:[i32] } GPR32:{ *:[i32] }:$src1)
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_MutateOpcode, /*InsnID*/0, /*RecycleInsnID*/0, /*Opcode*/MyTarget::LOAD,
// CHECK-NEXT: GIR_ConstrainSelectedInstOperands, /*InsnID*/0,
// CHECK-NEXT: GIR_Done,
@@ -900,6 +876,36 @@ def MOVcimm8 : I<(outs GPR32:$dst), (ins
def LOAD : I<(outs GPR32:$dst), (ins GPR32:$src1),
[(set GPR32:$dst, (load GPR32:$src1))]>;
+//===- Test a simple pattern with a sextload -------------------------------===//
+
+// OPT-NEXT: GIM_Try, /*On fail goto*//*Label [[GRP_LABEL_NUM:[0-9]+]]*/ [[GRP_LABEL:[0-9]+]],
+// OPT-NEXT: GIM_CheckOpcode, /*MI*/0, TargetOpcode::G_SEXTLOAD,
+// CHECK-NEXT: GIM_Try, /*On fail goto*//*Label [[LABEL_NUM:[0-9]+]]*/ [[LABEL:[0-9]+]],
+// CHECK-NEXT: GIM_CheckNumOperands, /*MI*/0, /*Expected*/2,
+// NOOPT-NEXT: GIM_CheckOpcode, /*MI*/0, TargetOpcode::G_SEXTLOAD,
+// CHECK-NEXT: GIM_CheckMemorySizeEqualTo, /*MI*/0, /*MMO*/0, /*Size*/2,
+// CHECK-NEXT: GIM_CheckAtomicOrdering, /*MI*/0, /*Order*/(int64_t)AtomicOrdering::NotAtomic,
+// CHECK-NEXT: // MIs[0] dst
+// CHECK-NEXT: GIM_CheckType, /*MI*/0, /*Op*/0, /*Type*/GILLT_s32,
+// CHECK-NEXT: GIM_CheckRegBankForClass, /*MI*/0, /*Op*/0, /*RC*/MyTarget::GPR32RegClassID,
+// CHECK-NEXT: // MIs[0] src1
+// CHECK-NEXT: GIM_CheckPointerToAny, /*MI*/0, /*Op*/1, /*SizeInBits*/32,
+// CHECK-NEXT: GIM_CheckRegBankForClass, /*MI*/0, /*Op*/1, /*RC*/MyTarget::GPR32RegClassID,
+// CHECK-NEXT: // (ld:{ *:[i32] } GPR32:{ *:[i32] }:$src1)<<P:Predicate_unindexedload>><<P:Predicate_sextload>><<P:Predicate_sextloadi16>> => (SEXTLOAD:{ *:[i32] } GPR32:{ *:[i32] }:$src1)
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
+// CHECK-NEXT: GIR_MutateOpcode, /*InsnID*/0, /*RecycleInsnID*/0, /*Opcode*/MyTarget::SEXTLOAD,
+// CHECK-NEXT: GIR_ConstrainSelectedInstOperands, /*InsnID*/0,
+// CHECK-NEXT: GIR_Done,
+// CHECK-NEXT: // Label [[LABEL_NUM]]: @[[LABEL]]
+// Closing the G_SEXTLOAD group.
+// OPT-NEXT: GIM_Reject,
+// OPT-NEXT: GIR_Done,
+// OPT-NEXT: // Label [[GRP_LABEL_NUM]]: @[[GRP_LABEL]]
+
+def SEXTLOAD : I<(outs GPR32:$dst), (ins GPR32:$src1),
+ [(set GPR32:$dst, (sextloadi16 GPR32:$src1))]>;
+
+
//===- Test a simple pattern with regclass operands. ----------------------===//
// OPT-NEXT: GIM_Try, /*On fail goto*//*Label [[GRP_LABEL_NUM:[0-9]+]]*/ [[GRP_LABEL:[0-9]+]],
@@ -918,6 +924,7 @@ def LOAD : I<(outs GPR32:$dst), (ins GPR
// CHECK-NEXT: GIM_CheckType, /*MI*/0, /*Op*/2, /*Type*/GILLT_s32,
// CHECK-NEXT: GIM_CheckRegBankForClass, /*MI*/0, /*Op*/2, /*RC*/MyTarget::GPR32RegClassID,
// CHECK-NEXT: // (add:{ *:[i32] } GPR32:{ *:[i32] }:$src1, GPR32:{ *:[i32] }:$src2) => (ADD:{ *:[i32] } GPR32:{ *:[i32] }:$src1, GPR32:{ *:[i32] }:$src2)
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_MutateOpcode, /*InsnID*/0, /*RecycleInsnID*/0, /*Opcode*/MyTarget::ADD,
// CHECK-NEXT: GIR_ConstrainSelectedInstOperands, /*InsnID*/0,
// CHECK-NEXT: GIR_Done,
@@ -942,6 +949,7 @@ def ADD : I<(outs GPR32:$dst), (ins GPR3
// CHECK-NEXT: // MIs[0] src{{$}}
// CHECK-NEXT: GIM_CheckIsSameOperand, /*MI*/0, /*OpIdx*/2, /*OtherMI*/0, /*OtherOpIdx*/1,
// CHECK-NEXT: // (add:{ *:[i32] } GPR32:{ *:[i32] }:$src, GPR32:{ *:[i32] }:$src) => (DOUBLE:{ *:[i32] } GPR32:{ *:[i32] }:$src)
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_BuildMI, /*InsnID*/0, /*Opcode*/MyTarget::DOUBLE,
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/0, /*OpIdx*/0, // dst
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/0, /*OpIdx*/1, // src
@@ -966,6 +974,7 @@ def DOUBLE : I<(outs GPR32:$dst), (ins G
// CHECK-NEXT: // MIs[0] src2
// CHECK-NEXT: GIM_CheckType, /*MI*/0, /*Op*/2, /*Type*/GILLT_s32,
// CHECK-NEXT: // (add:{ *:[i32] } i32:{ *:[i32] }:$src1, i32:{ *:[i32] }:$src2) => (ADD:{ *:[i32] } i32:{ *:[i32] }:$src1, i32:{ *:[i32] }:$src2)
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_MutateOpcode, /*InsnID*/0, /*RecycleInsnID*/0, /*Opcode*/MyTarget::ADD,
// CHECK-NEXT: GIR_ConstrainSelectedInstOperands, /*InsnID*/0,
// CHECK-NEXT: GIR_Done,
@@ -999,6 +1008,7 @@ def : Pat<(add i32:$src1, i32:$src2),
// CHECK-NEXT: GIM_CheckType, /*MI*/0, /*Op*/2, /*Type*/GILLT_s32,
// CHECK-NEXT: GIM_CheckRegBankForClass, /*MI*/0, /*Op*/2, /*RC*/MyTarget::GPR32RegClassID,
// CHECK-NEXT: // (mul:{ *:[i32] } GPR32:{ *:[i32] }:$src1, GPR32:{ *:[i32] }:$src2) => (MUL:{ *:[i32] } GPR32:{ *:[i32] }:$src2, GPR32:{ *:[i32] }:$src1)
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_BuildMI, /*InsnID*/0, /*Opcode*/MyTarget::MUL,
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/0, /*OpIdx*/0, // dst
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/0, /*OpIdx*/2, // src2
@@ -1034,6 +1044,7 @@ def MUL : I<(outs GPR32:$dst), (ins GPR3
// CHECK-NEXT: GIM_CheckType, /*MI*/0, /*Op*/1, /*Type*/GILLT_s32,
// CHECK-NEXT: GIM_CheckRegBankForClass, /*MI*/0, /*Op*/1, /*RC*/MyTarget::FPR32RegClassID,
// CHECK-NEXT: // (bitconvert:{ *:[i32] } FPR32:{ *:[f32] }:$src1) => (COPY_TO_REGCLASS:{ *:[i32] } FPR32:{ *:[f32] }:$src1, GPR32:{ *:[i32] })
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_MutateOpcode, /*InsnID*/0, /*RecycleInsnID*/0, /*Opcode*/TargetOpcode::COPY,
// CHECK-NEXT: GIR_ConstrainOperandRC, /*InsnID*/0, /*Op*/0, /*RC GPR32*/1,
// CHECK-NEXT: GIR_Done,
@@ -1062,6 +1073,7 @@ def : Pat<(i32 (bitconvert FPR32:$src1))
// CHECK-NEXT: // MIs[0] Operand 1
// CHECK-NEXT: // No operand predicates
// CHECK-NEXT: // (imm:{ *:[i32] }):$imm => (MOVimm:{ *:[i32] } (imm:{ *:[i32] }):$imm)
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_BuildMI, /*InsnID*/0, /*Opcode*/MyTarget::MOVimm,
// CHECK-NEXT: GIR_Copy, /*NewInsnID*/0, /*OldInsnID*/0, /*OpIdx*/0, // dst
// CHECK-NEXT: GIR_CopyConstantAsSImm, /*NewInsnID*/0, /*OldInsnID*/0, // imm
@@ -1093,6 +1105,7 @@ def MOVfpimmz : I<(outs FPR32:$dst), (in
// CHECK-NEXT: // MIs[0] target
// CHECK-NEXT: GIM_CheckIsMBB, /*MI*/0, /*Op*/0,
// CHECK-NEXT: // (br (bb:{ *:[Other] }):$target) => (BR (bb:{ *:[Other] }):$target)
+// CHECK-NEXT: // Rule ID {{[0-9]+}}
// CHECK-NEXT: GIR_MutateOpcode, /*InsnID*/0, /*RecycleInsnID*/0, /*Opcode*/MyTarget::BR,
// CHECK-NEXT: GIR_ConstrainSelectedInstOperands, /*InsnID*/0,
// CHECK-NEXT: GIR_Done,
Modified: llvm/trunk/utils/TableGen/GlobalISelEmitter.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/utils/TableGen/GlobalISelEmitter.cpp?rev=331601&r1=331600&r2=331601&view=diff
==============================================================================
--- llvm/trunk/utils/TableGen/GlobalISelEmitter.cpp (original)
+++ llvm/trunk/utils/TableGen/GlobalISelEmitter.cpp Sat May 5 13:53:24 2018
@@ -176,6 +176,9 @@ public:
bool operator==(const LLTCodeGen &B) const { return Ty == B.Ty; }
};
+// Track all types that are used so we can emit the corresponding enum.
+std::set<LLTCodeGen> KnownTypes;
+
class InstructionMatcher;
/// Convert an MVT to an equivalent LLT if possible, or the invalid LLT() for
/// MVTs that don't map cleanly to an LLT (e.g., iPTR, *any, ...).
@@ -285,12 +288,16 @@ static Error isTrivialOperatorNode(const
if (Predicate.isImmediatePattern())
continue;
- if (Predicate.isNonExtLoad())
+ if (Predicate.isNonExtLoad() || Predicate.isAnyExtLoad() ||
+ Predicate.isSignExtLoad() || Predicate.isZeroExtLoad())
continue;
if (Predicate.isNonTruncStore())
continue;
+ if (Predicate.isLoad() && Predicate.getMemoryVT())
+ continue;
+
if (Predicate.isLoad() || Predicate.isStore()) {
if (Predicate.isUnindexed())
continue;
@@ -863,6 +870,8 @@ public:
IPM_Opcode,
IPM_ImmPredicate,
IPM_AtomicOrderingMMO,
+ IPM_MemoryLLTSize,
+ IPM_MemoryVsLLTSize,
OPM_SameOperand,
OPM_ComplexPattern,
OPM_IntrinsicID,
@@ -964,8 +973,6 @@ protected:
LLTCodeGen Ty;
public:
- static std::set<LLTCodeGen> KnownTypes;
-
LLTOperandMatcher(unsigned InsnVarID, unsigned OpIdx, const LLTCodeGen &Ty)
: OperandPredicateMatcher(OPM_LLT, InsnVarID, OpIdx), Ty(Ty) {
KnownTypes.insert(Ty);
@@ -989,8 +996,6 @@ public:
}
};
-std::set<LLTCodeGen> LLTOperandMatcher::KnownTypes;
-
/// Generates code to check that an operand is a pointer to any address space.
///
/// In SelectionDAG, the types did not describe pointers or address spaces. As a
@@ -1518,6 +1523,82 @@ public:
}
};
+/// Generates code to check that the size of an MMO is exactly N bytes.
+class MemorySizePredicateMatcher : public InstructionPredicateMatcher {
+protected:
+ unsigned MMOIdx;
+ uint64_t Size;
+
+public:
+ MemorySizePredicateMatcher(unsigned InsnVarID, unsigned MMOIdx, unsigned Size)
+ : InstructionPredicateMatcher(IPM_MemoryLLTSize, InsnVarID),
+ MMOIdx(MMOIdx), Size(Size) {}
+
+ static bool classof(const PredicateMatcher *P) {
+ return P->getKind() == IPM_MemoryLLTSize;
+ }
+ bool isIdentical(const PredicateMatcher &B) const override {
+ return InstructionPredicateMatcher::isIdentical(B) &&
+ MMOIdx == cast<MemorySizePredicateMatcher>(&B)->MMOIdx &&
+ Size == cast<MemorySizePredicateMatcher>(&B)->Size;
+ }
+
+ void emitPredicateOpcodes(MatchTable &Table,
+ RuleMatcher &Rule) const override {
+ Table << MatchTable::Opcode("GIM_CheckMemorySizeEqualTo")
+ << MatchTable::Comment("MI") << MatchTable::IntValue(InsnVarID)
+ << MatchTable::Comment("MMO") << MatchTable::IntValue(MMOIdx)
+ << MatchTable::Comment("Size") << MatchTable::IntValue(Size)
+ << MatchTable::LineBreak;
+ }
+};
+
+/// Generates code to check that the size of an MMO is less-than, equal-to, or
+/// greater than a given LLT.
+class MemoryVsLLTSizePredicateMatcher : public InstructionPredicateMatcher {
+public:
+ enum RelationKind {
+ GreaterThan,
+ EqualTo,
+ LessThan,
+ };
+
+protected:
+ unsigned MMOIdx;
+ RelationKind Relation;
+ unsigned OpIdx;
+
+public:
+ MemoryVsLLTSizePredicateMatcher(unsigned InsnVarID, unsigned MMOIdx,
+ enum RelationKind Relation,
+ unsigned OpIdx)
+ : InstructionPredicateMatcher(IPM_MemoryVsLLTSize, InsnVarID),
+ MMOIdx(MMOIdx), Relation(Relation), OpIdx(OpIdx) {}
+
+ static bool classof(const PredicateMatcher *P) {
+ return P->getKind() == IPM_MemoryVsLLTSize;
+ }
+ bool isIdentical(const PredicateMatcher &B) const override {
+ return InstructionPredicateMatcher::isIdentical(B) &&
+ MMOIdx == cast<MemoryVsLLTSizePredicateMatcher>(&B)->MMOIdx &&
+ Relation == cast<MemoryVsLLTSizePredicateMatcher>(&B)->Relation &&
+ OpIdx == cast<MemoryVsLLTSizePredicateMatcher>(&B)->OpIdx;
+ }
+
+ void emitPredicateOpcodes(MatchTable &Table,
+ RuleMatcher &Rule) const override {
+ Table << MatchTable::Opcode(Relation == EqualTo
+ ? "GIM_CheckMemorySizeEqualToLLT"
+ : Relation == GreaterThan
+ ? "GIM_CheckMemorySizeGreaterThanLLT"
+ : "GIM_CheckMemorySizeLessThanLLT")
+ << MatchTable::Comment("MI") << MatchTable::IntValue(InsnVarID)
+ << MatchTable::Comment("MMO") << MatchTable::IntValue(MMOIdx)
+ << MatchTable::Comment("OpIdx") << MatchTable::IntValue(OpIdx)
+ << MatchTable::LineBreak;
+ }
+};
+
/// Generates code to check that a set of predicates and operands match for a
/// particular instruction.
///
@@ -2621,6 +2702,8 @@ private:
void gatherNodeEquivs();
Record *findNodeEquiv(Record *N) const;
+ const CodeGenInstruction *getEquivNode(Record &Equiv,
+ const TreePatternNode *N) const;
Error importRulePredicates(RuleMatcher &M, ArrayRef<Predicate> Predicates);
Expected<InstructionMatcher &> createAndImportSelDAGMatcher(
@@ -2667,9 +2750,6 @@ private:
void declareSubtargetFeature(Record *Predicate);
- TreePatternNode *fixupPatternNode(TreePatternNode *N);
- void fixupPatternTrees(TreePattern *P);
-
/// Takes a sequence of \p Rules and group them based on the predicates
/// they share. \p StorageGroupMatcher is used as a memory container
/// for the group that are created as part of this process.
@@ -2734,9 +2814,22 @@ Record *GlobalISelEmitter::findNodeEquiv
return NodeEquivs.lookup(N);
}
+const CodeGenInstruction *
+GlobalISelEmitter::getEquivNode(Record &Equiv, const TreePatternNode *N) const {
+ for (const auto &Predicate : N->getPredicateFns()) {
+ if (!Equiv.isValueUnset("IfSignExtend") && Predicate.isLoad() &&
+ Predicate.isSignExtLoad())
+ return &Target.getInstruction(Equiv.getValueAsDef("IfSignExtend"));
+ if (!Equiv.isValueUnset("IfZeroExtend") && Predicate.isLoad() &&
+ Predicate.isZeroExtLoad())
+ return &Target.getInstruction(Equiv.getValueAsDef("IfZeroExtend"));
+ }
+ return &Target.getInstruction(Equiv.getValueAsDef("I"));
+}
+
GlobalISelEmitter::GlobalISelEmitter(RecordKeeper &RK)
- : RK(RK), CGP(RK, [&](TreePattern *P) { fixupPatternTrees(P); }),
- Target(CGP.getTargetInfo()), CGRegs(RK, Target.getHwModes()) {}
+ : RK(RK), CGP(RK), Target(CGP.getTargetInfo()),
+ CGRegs(RK, Target.getHwModes()) {}
//===- Emitter ------------------------------------------------------------===//
@@ -2776,7 +2869,7 @@ Expected<InstructionMatcher &> GlobalISe
if (!SrcGIEquivOrNull)
return failedImport("Pattern operator lacks an equivalent Instruction" +
explainOperator(Src->getOperator()));
- SrcGIOrNull = &Target.getInstruction(SrcGIEquivOrNull->getValueAsDef("I"));
+ SrcGIOrNull = getEquivNode(*SrcGIEquivOrNull, Src);
// The operators look good: match the opcode
InsnMatcher.addPredicate<InstructionOpcodeMatcher>(SrcGIOrNull);
@@ -2801,8 +2894,26 @@ Expected<InstructionMatcher &> GlobalISe
continue;
}
- // No check required. G_LOAD by itself is a non-extending load.
- if (Predicate.isNonExtLoad())
+ // G_LOAD is used for both non-extending and any-extending loads.
+ if (Predicate.isLoad() && Predicate.isNonExtLoad()) {
+ InsnMatcher.addPredicate<MemoryVsLLTSizePredicateMatcher>(
+ 0, MemoryVsLLTSizePredicateMatcher::EqualTo, 0);
+ continue;
+ }
+ if (Predicate.isLoad() && Predicate.isAnyExtLoad()) {
+ InsnMatcher.addPredicate<MemoryVsLLTSizePredicateMatcher>(
+ 0, MemoryVsLLTSizePredicateMatcher::LessThan, 0);
+ continue;
+ }
+
+ // No check required. We already did it by swapping the opcode.
+ if (!SrcGIEquivOrNull->isValueUnset("IfSignExtend") &&
+ Predicate.isSignExtLoad())
+ continue;
+
+ // No check required. We already did it by swapping the opcode.
+ if (!SrcGIEquivOrNull->isValueUnset("IfZeroExtend") &&
+ Predicate.isZeroExtLoad())
continue;
// No check required. G_STORE by itself is a non-extending store.
@@ -2817,8 +2928,13 @@ Expected<InstructionMatcher &> GlobalISe
if (!MemTyOrNone)
return failedImport("MemVT could not be converted to LLT");
- OperandMatcher &OM = InsnMatcher.getOperand(0);
- OM.addPredicate<LLTOperandMatcher>(MemTyOrNone.getValue());
+      // MMOs work in bytes, so take care that unusual types like i1
+      // round up to a whole byte rather than down to zero.
+ unsigned MemSizeInBits =
+ llvm::alignTo(MemTyOrNone->get().getSizeInBits(), 8);
+
+ InsnMatcher.addPredicate<MemorySizePredicateMatcher>(
+ 0, MemSizeInBits / 8);
continue;
}
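The byte-size computation in the hunk above matters for sub-byte types: since MMOs record sizes in bytes, an i1 memory VT must round up to one byte, not truncate to zero. A small sketch of that rounding (reimplementing the effect of llvm::alignTo(Bits, 8) for illustration):

```cpp
#include <cassert>
#include <cstdint>

// Round a memory type's size in bits up to whole bytes, as the emitter does
// with llvm::alignTo(SizeInBits, 8) before dividing by 8. This keeps i1
// loads from being recorded as zero-byte accesses.
uint64_t memSizeInBytes(uint64_t MemTySizeInBits) {
  uint64_t RoundedBits = (MemTySizeInBits + 7) / 8 * 8;
  return RoundedBits / 8;
}
```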
}
@@ -3406,6 +3522,7 @@ Expected<RuleMatcher> GlobalISelEmitter:
M.addAction<DebugCommentAction>(llvm::to_string(*P.getSrcPattern()) +
" => " +
llvm::to_string(*P.getDstPattern()));
+ M.addAction<DebugCommentAction>("Rule ID " + llvm::to_string(M.getRuleID()));
if (auto Error = importRulePredicates(M, P.getPredicates()))
return std::move(Error);
@@ -3846,7 +3963,7 @@ void GlobalISelEmitter::run(raw_ostream
// Emit a table containing the LLT objects needed by the matcher and an enum
// for the matcher to reference them with.
std::vector<LLTCodeGen> TypeObjects;
- for (const auto &Ty : LLTOperandMatcher::KnownTypes)
+ for (const auto &Ty : KnownTypes)
TypeObjects.push_back(Ty);
llvm::sort(TypeObjects.begin(), TypeObjects.end());
OS << "// LLT Objects.\n"
@@ -4035,81 +4152,6 @@ void GlobalISelEmitter::declareSubtarget
Predicate, SubtargetFeatureInfo(Predicate, SubtargetFeatures.size()));
}
-TreePatternNode *GlobalISelEmitter::fixupPatternNode(TreePatternNode *N) {
- if (!N->isLeaf()) {
- for (unsigned I = 0, E = N->getNumChildren(); I < E; ++I) {
- TreePatternNode *OrigChild = N->getChild(I);
- TreePatternNode *NewChild = fixupPatternNode(OrigChild);
- if (OrigChild != NewChild)
- N->setChild(I, NewChild);
- }
-
- if (N->getOperator()->getName() == "ld") {
- // If it's a signext-load we need to adapt the pattern slightly. We need
- // to split the node into (sext (ld ...)), remove the <<signext>> predicate,
- // and then apply the <<signextTY>> predicate by updating the result type
- // of the load.
- //
- // For example:
- // (ld:[i32] [iPTR])<<unindexed>><<signext>><<signexti16>>
- // must be transformed into:
- // (sext:[i32] (ld:[i16] [iPTR])<<unindexed>>)
- //
- // Likewise for zeroext-load and anyext-load.
-
- std::vector<TreePredicateFn> Predicates;
- bool IsSignExtLoad = false;
- bool IsZeroExtLoad = false;
- bool IsAnyExtLoad = false;
- Record *MemVT = nullptr;
- for (const auto &P : N->getPredicateFns()) {
- if (P.isLoad() && P.isSignExtLoad()) {
- IsSignExtLoad = true;
- continue;
- }
- if (P.isLoad() && P.isZeroExtLoad()) {
- IsZeroExtLoad = true;
- continue;
- }
- if (P.isLoad() && P.isAnyExtLoad()) {
- IsAnyExtLoad = true;
- continue;
- }
- if (P.isLoad() && P.getMemoryVT()) {
- MemVT = P.getMemoryVT();
- continue;
- }
- Predicates.push_back(P);
- }
-
- if ((IsSignExtLoad || IsZeroExtLoad || IsAnyExtLoad) && MemVT) {
- assert((IsSignExtLoad + IsZeroExtLoad + IsAnyExtLoad) == 1 &&
- "IsSignExtLoad, IsZeroExtLoad, IsAnyExtLoad are mutually exclusive");
- TreePatternNode *Ext = new TreePatternNode(
- RK.getDef(IsSignExtLoad ? "sext"
- : IsZeroExtLoad ? "zext" : "anyext"),
- {N}, 1);
- Ext->setType(0, N->getType(0));
- N->clearPredicateFns();
- N->setPredicateFns(Predicates);
- N->setType(0, getValueType(MemVT));
- return Ext;
- }
- }
- }
-
- return N;
-}
-
-void GlobalISelEmitter::fixupPatternTrees(TreePattern *P) {
- for (unsigned I = 0, E = P->getNumTrees(); I < E; ++I) {
- TreePatternNode *OrigTree = P->getTree(I);
- TreePatternNode *NewTree = fixupPatternNode(OrigTree);
- if (OrigTree != NewTree)
- P->setTree(I, NewTree);
- }
-}
-
std::unique_ptr<PredicateMatcher> RuleMatcher::forgetFirstCondition() {
assert(!insnmatchers_empty() &&
"Trying to forget something that does not exist");