[llvm] r186226 - Fix ARM paired GPR COPY lowering
JF Bastien
jfb at google.com
Fri Jul 12 16:33:03 PDT 2013
Author: jfb
Date: Fri Jul 12 18:33:03 2013
New Revision: 186226
URL: http://llvm.org/viewvc/llvm-project?rev=186226&view=rev
Log:
Fix ARM paired GPR COPY lowering
ARM paired GPR COPY was being lowered to two MOVr instructions without the
CC operand. This patch puts the CC back.
My test is a reduction of the case where I encountered the issue:
64-bit atomics use paired GPRs.
The issue only occurs with SelectionDAG; FastISel doesn't encounter it,
so the test doesn't exercise it.
Added:
llvm/trunk/test/CodeGen/ARM/copy-paired-reg.ll
Modified:
llvm/trunk/lib/Target/ARM/ARMBaseInstrInfo.cpp
Modified: llvm/trunk/lib/Target/ARM/ARMBaseInstrInfo.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/ARM/ARMBaseInstrInfo.cpp?rev=186226&r1=186225&r2=186226&view=diff
==============================================================================
--- llvm/trunk/lib/Target/ARM/ARMBaseInstrInfo.cpp (original)
+++ llvm/trunk/lib/Target/ARM/ARMBaseInstrInfo.cpp Fri Jul 12 18:33:03 2013
@@ -745,6 +745,9 @@ void ARMBaseInstrInfo::copyPhysReg(Machi
if (Opc == ARM::VORRq)
Mov.addReg(Src);
Mov = AddDefaultPred(Mov);
+ // MOVr can set CC.
+ if (Opc == ARM::MOVr)
+ Mov = AddDefaultCC(Mov);
}
// Add implicit super-register defs and kills to the last instruction.
Mov->addRegisterDefined(DestReg, TRI);
Added: llvm/trunk/test/CodeGen/ARM/copy-paired-reg.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/ARM/copy-paired-reg.ll?rev=186226&view=auto
==============================================================================
--- llvm/trunk/test/CodeGen/ARM/copy-paired-reg.ll (added)
+++ llvm/trunk/test/CodeGen/ARM/copy-paired-reg.ll Fri Jul 12 18:33:03 2013
@@ -0,0 +1,17 @@
+; RUN: llc < %s -mtriple=armv7-apple-ios -verify-machineinstrs
+; RUN: llc < %s -mtriple=armv7-linux-gnueabi -verify-machineinstrs
+
+define void @f() {
+ %a = alloca i8, i32 8, align 8
+ %b = alloca i8, i32 8, align 8
+
+ %c = bitcast i8* %a to i64*
+ %d = bitcast i8* %b to i64*
+
+ store atomic i64 0, i64* %c seq_cst, align 8
+ store atomic i64 0, i64* %d seq_cst, align 8
+
+ %e = load atomic i64* %d seq_cst, align 8
+
+ ret void
+}