[llvm-commits] [llvm] r143023 - in /llvm/trunk: lib/Target/ARM/ARMISelLowering.cpp test/CodeGen/ARM/2011-10-18-DisableMovtSize.ll
James Molloy
james.molloy@arm.com
Wed Oct 26 01:53:19 PDT 2011
Author: jamesm
Date: Wed Oct 26 03:53:19 2011
New Revision: 143023
URL: http://llvm.org/viewvc/llvm-project?rev=143023&view=rev
Log:
Revert r142530, at least temporarily, while a discussion is held on llvm-commits regarding exactly how much optsize should optimize for size over performance.
Removed:
llvm/trunk/test/CodeGen/ARM/2011-10-18-DisableMovtSize.ll
Modified:
llvm/trunk/lib/Target/ARM/ARMISelLowering.cpp
Modified: llvm/trunk/lib/Target/ARM/ARMISelLowering.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/ARM/ARMISelLowering.cpp?rev=143023&r1=143022&r2=143023&view=diff
==============================================================================
--- llvm/trunk/lib/Target/ARM/ARMISelLowering.cpp (original)
+++ llvm/trunk/lib/Target/ARM/ARMISelLowering.cpp Wed Oct 26 03:53:19 2011
@@ -2104,11 +2104,8 @@
}
// If we have T2 ops, we can materialize the address directly via movt/movw
- // pair. This is always cheaper in terms of performance, but uses at least 2
- // extra bytes.
- MachineFunction &MF = DAG.getMachineFunction();
- if (Subtarget->useMovt() &&
- !MF.getFunction()->hasFnAttr(Attribute::OptimizeForSize)) {
+ // pair. This is always cheaper.
+ if (Subtarget->useMovt()) {
++NumMovwMovt;
// FIXME: Once remat is capable of dealing with instructions with register
// operands, expand this into two nodes.
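For readers skimming the hunk, here is a hedged reconstruction of its net effect (a sketch, not part of the patch; the names Subtarget, DAG, MF, NumMovwMovt and Attribute::OptimizeForSize are taken from the lines above):

  // Before this revert (i.e. with r142530 applied): the movw/movt
  // materialization was skipped for functions marked optsize, so a
  // literal-pool load was emitted instead.
  MachineFunction &MF = DAG.getMachineFunction();
  if (Subtarget->useMovt() &&
      !MF.getFunction()->hasFnAttr(Attribute::OptimizeForSize)) {
    ++NumMovwMovt;
    // ... emit the movw/movt-based address materialization ...
  }

  // After this revert: the movw/movt pair is used whenever the
  // subtarget supports it, regardless of the optsize attribute.
  if (Subtarget->useMovt()) {
    ++NumMovwMovt;
    // ... emit the movw/movt-based address materialization ...
  }

The removed comment and the deleted test below record the tradeoff under discussion: the movw/movt pair avoids a load and is faster, but costs at least two extra bytes of code compared with a literal-pool load.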
Removed: llvm/trunk/test/CodeGen/ARM/2011-10-18-DisableMovtSize.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/ARM/2011-10-18-DisableMovtSize.ll?rev=143022&view=auto
==============================================================================
--- llvm/trunk/test/CodeGen/ARM/2011-10-18-DisableMovtSize.ll (original)
+++ llvm/trunk/test/CodeGen/ARM/2011-10-18-DisableMovtSize.ll (removed)
@@ -1,26 +0,0 @@
-; RUN: llc < %s -mtriple=armv7-unknown-linux-eabi | FileCheck %s
-
-; Check that when optimizing for size, a literal pool load is used
-; instead of the (potentially faster) movw/movt pair when loading
-; a large constant.
-
-@x = global i32* inttoptr (i32 305419888 to i32*), align 4
-
-define i32 @f() optsize {
- ; CHECK: f:
- ; CHECK: ldr r{{.}}, {{.?}}LCPI{{.}}_{{.}}
- ; CHECK: ldr r{{.}}, [{{(pc, )?}}r{{.}}]
- ; CHECK: ldr r{{.}}, [r{{.}}]
- %1 = load i32** @x, align 4
- %2 = load i32* %1
- ret i32 %2
-}
-
-define i32 @g() {
- ; CHECK: g:
- ; CHECK: movw
- ; CHECK: movt
- %1 = load i32** @x, align 4
- %2 = load i32* %1
- ret i32 %2
-}