[llvm] cd59458 - [AArch64][GlobalISel] Make LR livein to entry in llvm.returnaddress selection

Jessica Paquette via llvm-commits llvm-commits at lists.llvm.org
Mon May 11 11:33:03 PDT 2020


Author: Jessica Paquette
Date: 2020-05-11T11:32:12-07:00
New Revision: cd59458f27af9f441a8e3646e985ffb92700fba5

URL: https://github.com/llvm/llvm-project/commit/cd59458f27af9f441a8e3646e985ffb92700fba5
DIFF: https://github.com/llvm/llvm-project/commit/cd59458f27af9f441a8e3646e985ffb92700fba5.diff

LOG: [AArch64][GlobalISel] Make LR livein to entry in llvm.returnaddress selection

This fixes a couple of verifier failures on this bot:

http://green.lab.llvm.org/green/job/test-suite-verify-machineinstrs-aarch64-globalisel-O0-g/

The failures show up in eeprof-1.c and pr17377.c in the GCC C Torture Suite.

Specifically:

*** Bad machine code: MBB has allocatable live-in, but isn't entry or landing-pad. ***
- function:    foo
- basic block: %bb.3 if.end (0x7fac7106dfc8)
- p. register: $lr

and

*** Bad machine code: Using an undefined physical register ***
- function:    f
- basic block: %bb.1 entry (0x7f8941092588)
- instruction: %18:gpr64 = COPY $lr
- operand 1:   $lr

Unlike SDAG, we were setting LR as a live-in to the block containing the
returnaddress, rather than to the entry block.

Also, this ensures that we don't add LR as a livein to the entry block twice.
In MachineBasicBlock.h there's a comment saying

"Note that it is an error to add the same register to the same set more than
once unless the intention is to call sortUniqueLiveIns after all registers are
added."

so it's probably good to avoid adding LR twice.

Surprisingly, the verifier doesn't complain about that. Maybe it should.

Differential Revision: https://reviews.llvm.org/D79657

Added: 
    llvm/test/CodeGen/AArch64/GlobalISel/select-returnaddress-liveins.mir

Modified: 
    llvm/lib/Target/AArch64/AArch64InstructionSelector.cpp

Removed: 
    


################################################################################
diff --git a/llvm/lib/Target/AArch64/AArch64InstructionSelector.cpp b/llvm/lib/Target/AArch64/AArch64InstructionSelector.cpp
index 5d04481e2b4a..fdb7a969ba45 100644
--- a/llvm/lib/Target/AArch64/AArch64InstructionSelector.cpp
+++ b/llvm/lib/Target/AArch64/AArch64InstructionSelector.cpp
@@ -4666,11 +4666,13 @@ bool AArch64InstructionSelector::selectIntrinsic(MachineInstr &I,
       }
       MFI.setReturnAddressIsTaken(true);
       MF.addLiveIn(AArch64::LR, &AArch64::GPR64spRegClass);
-      I.getParent()->addLiveIn(AArch64::LR);
       // Insert the copy from LR/X30 into the entry block, before it can be
       // clobbered by anything.
+      MachineBasicBlock &EntryBlock = *MF.begin();
+      if (!EntryBlock.isLiveIn(AArch64::LR))
+        EntryBlock.addLiveIn(AArch64::LR);
       MachineIRBuilder EntryBuilder(MF);
-      EntryBuilder.setInstr(*MF.begin()->begin());
+      EntryBuilder.setInstr(*EntryBlock.begin());
       EntryBuilder.buildCopy({DstReg}, {Register(AArch64::LR)});
       MFReturnAddr = DstReg;
       I.eraseFromParent();

diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/select-returnaddress-liveins.mir b/llvm/test/CodeGen/AArch64/GlobalISel/select-returnaddress-liveins.mir
new file mode 100644
index 000000000000..433c7848433f
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/select-returnaddress-liveins.mir
@@ -0,0 +1,61 @@
+# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py
+# RUN: llc -mtriple=aarch64 -global-isel -run-pass=instruction-select -verify-machineinstrs %s -o - | FileCheck %s
+
+--- |
+  define void @lr_other_block() { ret void }
+  define void @already_live_in() { ret void }
+  declare i8* @llvm.returnaddress(i32 immarg)
+...
+---
+name:            lr_other_block
+alignment:       4
+legalized:       true
+regBankSelected: true
+tracksRegLiveness: true
+body:             |
+  ; CHECK-LABEL: name: lr_other_block
+  ; CHECK: bb.0:
+  ; CHECK:   successors: %bb.1(0x80000000)
+  ; CHECK:   liveins: $w0, $x0, $lr
+  ; CHECK:   [[COPY:%[0-9]+]]:gpr64 = COPY $lr
+  ; CHECK:   B %bb.1
+  ; CHECK: bb.1:
+  ; CHECK:   $x0 = COPY [[COPY]]
+  ; CHECK:   RET_ReallyLR implicit $x0
+  ; LR should be added as a livein to the entry block.
+
+  bb.0:
+    ; We should have lr as a livein to bb.0, and a copy from LR.
+    liveins: $w0, $x0
+    G_BR %bb.1
+  bb.1:
+    %4:gpr(p0) = G_INTRINSIC intrinsic(@llvm.returnaddress), 0
+    $x0 = COPY %4
+    RET_ReallyLR implicit $x0
+...
+---
+name:            already_live_in
+alignment:       4
+legalized:       true
+regBankSelected: true
+tracksRegLiveness: true
+body:             |
+  ; CHECK-LABEL: name: already_live_in
+  ; CHECK: bb.0:
+  ; CHECK:   successors: %bb.1(0x80000000)
+  ; CHECK:   liveins: $w0, $x0, $lr
+  ; CHECK:   [[COPY:%[0-9]+]]:gpr64 = COPY $lr
+  ; CHECK:   B %bb.1
+  ; CHECK: bb.1:
+  ; CHECK:   $x0 = COPY [[COPY]]
+  ; CHECK:   RET_ReallyLR implicit $x0
+  ; We should not have LR listed as a livein twice.
+
+  bb.0:
+    liveins: $w0, $x0, $lr
+    G_BR %bb.1
+  bb.1:
+    %4:gpr(p0) = G_INTRINSIC intrinsic(@llvm.returnaddress), 0
+    $x0 = COPY %4
+    RET_ReallyLR implicit $x0
+...

More information about the llvm-commits mailing list