[llvm-commits] [llvm] r142530 - in /llvm/trunk: lib/Target/ARM/ARMISelLowering.cpp test/CodeGen/ARM/2011-10-18-DisableMovtSize.ll

James Molloy james.molloy at arm.com
Wed Oct 19 07:11:07 PDT 2011


Author: jamesm
Date: Wed Oct 19 09:11:07 2011
New Revision: 142530

URL: http://llvm.org/viewvc/llvm-project?rev=142530&view=rev
Log:
Use literal pool loads instead of MOVW/MOVT for materializing global addresses when optimizing for size.

On spec/gcc this gives a code size improvement of ~1.9% in ARM mode and ~4.9% in Thumb(2) mode,
measured including the literal pools themselves.

The pools themselves doubled in size in ARM mode and quintupled in Thumb mode, suggesting that there
is still redundancy in LLVM's use of constant pools that could be reduced by sharing entries.

Fixes PR11087.
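
As a rough illustration of the tradeoff (a hand-written sketch, not llc output; register choices,
label names and assembler syntax are invented for this example), the two ways of materializing the
address of a global x are:

    @ movw/movt pair: 8 bytes of code per use, no literal pool entry.
          movw    r0, :lower16:x
          movt    r0, :upper16:x

    @ Literal pool load: a 4-byte ldr in ARM mode (often 2 bytes in Thumb2),
    @ plus a 4-byte pool entry that can in principle be shared between uses.
          ldr     r0, .LCPI0_0
          ...
    .LCPI0_0:
          .long   x

How much the pool form actually saves depends on how often entries end up being shared, which is
what the pool-growth numbers above hint at.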


Added:
    llvm/trunk/test/CodeGen/ARM/2011-10-18-DisableMovtSize.ll   (with props)
Modified:
    llvm/trunk/lib/Target/ARM/ARMISelLowering.cpp

Modified: llvm/trunk/lib/Target/ARM/ARMISelLowering.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/ARM/ARMISelLowering.cpp?rev=142530&r1=142529&r2=142530&view=diff
==============================================================================
--- llvm/trunk/lib/Target/ARM/ARMISelLowering.cpp (original)
+++ llvm/trunk/lib/Target/ARM/ARMISelLowering.cpp Wed Oct 19 09:11:07 2011
@@ -2103,8 +2103,10 @@
   }
 
   // If we have T2 ops, we can materialize the address directly via movt/movw
-  // pair. This is always cheaper.
-  if (Subtarget->useMovt()) {
+  // pair. This is always cheaper in terms of performance, but uses at least 2
+  // extra bytes.
+  if (Subtarget->useMovt() &&
+      !DAG.getMachineFunction().getFunction()->hasFnAttr(Attribute::OptimizeForSize)) {
     ++NumMovwMovt;
     // FIXME: Once remat is capable of dealing with instructions with register
     // operands, expand this into two nodes.
@@ -2129,7 +2131,8 @@
   ARMFunctionInfo *AFI = MF.getInfo<ARMFunctionInfo>();
 
   // FIXME: Enable this for static codegen when tool issues are fixed.
-  if (Subtarget->useMovt() && RelocM != Reloc::Static) {
+  if (Subtarget->useMovt() && RelocM != Reloc::Static &&
+      !DAG.getMachineFunction().getFunction()->hasFnAttr(Attribute::OptimizeForSize)) {
     ++NumMovwMovt;
     // FIXME: Once remat is capable of dealing with instructions with register
     // operands, expand this into two nodes.

Added: llvm/trunk/test/CodeGen/ARM/2011-10-18-DisableMovtSize.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/ARM/2011-10-18-DisableMovtSize.ll?rev=142530&view=auto
==============================================================================
--- llvm/trunk/test/CodeGen/ARM/2011-10-18-DisableMovtSize.ll (added)
+++ llvm/trunk/test/CodeGen/ARM/2011-10-18-DisableMovtSize.ll Wed Oct 19 09:11:07 2011
@@ -0,0 +1,27 @@
+; RUN: llc < %s -mtriple=armv7-apple-darwin  | FileCheck %s
+; RUN: llc < %s -mtriple=armv7-unknown-linux-eabi | FileCheck %s
+
+; Check that when optimizing for size, a literal pool load is used
+; instead of the (potentially faster) movw/movt pair when loading
+; a large constant.
+
+@x = global i32* inttoptr (i32 305419888 to i32*), align 4
+
+define i32 @f() optsize {
+  ; CHECK: f:
+  ; CHECK: ldr  r{{.}}, {{.?}}LCPI{{.}}_{{.}}
+  ; CHECK: ldr  r{{.}}, [{{(pc, )?}}r{{.}}]
+  ; CHECK: ldr  r{{.}}, [r{{.}}]
+  %1 = load i32** @x, align 4
+  %2 = load i32* %1
+  ret i32 %2
+}
+
+define i32 @g() {
+  ; CHECK: g:
+  ; CHECK: movw
+  ; CHECK: movt
+  %1 = load i32** @x, align 4
+  %2 = load i32* %1
+  ret i32 %2
+}

Propchange: llvm/trunk/test/CodeGen/ARM/2011-10-18-DisableMovtSize.ll
------------------------------------------------------------------------------
    svn:eol-style = native




