[PATCH] D97208: [X86] Always use rip-relative addressing on 64-bit when rematerializing all zeros/ones registers using a folded load.

Craig Topper via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Mon Mar 29 10:06:50 PDT 2021


This revision was landed with ongoing or failed builds.
This revision was automatically updated to reflect the committed changes.
Closed by commit rG54bacaf31127: [X86] Always use rip-relative addressing on 64-bit when rematerializing all… (authored by craig.topper).

Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D97208/new/

https://reviews.llvm.org/D97208

Files:
  llvm/lib/Target/X86/X86InstrInfo.cpp
  llvm/test/CodeGen/X86/avx-cmp.ll
  llvm/test/CodeGen/X86/mmx-fold-zero.ll

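For context on the "smaller encoding" note in the new X86InstrInfo.cpp
comment: in 64-bit mode the old 32-bit absolute-disp32 encoding
(ModRM mod=00, r/m=101) was repurposed for RIP-relative addressing, so an
absolute disp32 operand now needs an extra SIB byte (r/m=100, base=101,
no index), making the RIP-relative form one byte shorter per folded load.
A sketch of the two forms (AT&T syntax; opcode bytes omitted, label name
illustrative):

  # 64-bit absolute disp32: ModRM + SIB + disp32 = 6 address bytes
  paddw .LCPI0_0, %mm0
  # RIP-relative:           ModRM + disp32       = 5 address bytes
  paddw .LCPI0_0(%rip), %mm0

And since this path is only reached for the Small and Kernel code models
(per the new comment), the constant pool is guaranteed to be within
+/-2 GiB of RIP, so the RIP-relative form is always legal here, not just
under PIC.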

Index: llvm/test/CodeGen/X86/mmx-fold-zero.ll
===================================================================
--- llvm/test/CodeGen/X86/mmx-fold-zero.ll
+++ llvm/test/CodeGen/X86/mmx-fold-zero.ll
@@ -1,4 +1,4 @@
-; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --no_x86_scrub_rip
 ; RUN: llc < %s -mtriple=i686-unknown-unknown -mattr=+mmx,+sse2 | FileCheck %s --check-prefix=X86
 ; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=+mmx,+sse2 | FileCheck %s --check-prefix=X64
 
@@ -70,7 +70,7 @@
 ; X64-NEXT:    paddw %mm2, %mm0
 ; X64-NEXT:    paddw %mm6, %mm0
 ; X64-NEXT:    pmuludq %mm3, %mm0
-; X64-NEXT:    paddw {{\.LCPI[0-9]+_[0-9]+}}, %mm0
+; X64-NEXT:    paddw {{\.LCPI[0-9]+_[0-9]+}}(%rip), %mm0
 ; X64-NEXT:    paddw %mm1, %mm0
 ; X64-NEXT:    pmuludq %mm7, %mm0
 ; X64-NEXT:    pmuludq {{[-0-9]+}}(%r{{[sb]}}p), %mm0 # 8-byte Folded Reload
Index: llvm/test/CodeGen/X86/avx-cmp.ll
===================================================================
--- llvm/test/CodeGen/X86/avx-cmp.ll
+++ llvm/test/CodeGen/X86/avx-cmp.ll
@@ -1,4 +1,4 @@
-; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --no_x86_scrub_rip
 ; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=+avx | FileCheck %s
 
 define <8 x i32> @cmp00(<8 x float> %a, <8 x float> %b) nounwind {
@@ -49,7 +49,7 @@
 ; CHECK-NEXT:    # in Loop: Header=BB2_2 Depth=1
 ; CHECK-NEXT:    vmovsd (%rsp), %xmm0 # 8-byte Reload
 ; CHECK-NEXT:    # xmm0 = mem[0],zero
-; CHECK-NEXT:    vucomisd {{\.LCPI[0-9]+_[0-9]+}}, %xmm0
+; CHECK-NEXT:    vucomisd {{\.LCPI[0-9]+_[0-9]+}}(%rip), %xmm0
 ; CHECK-NEXT:    jne .LBB2_5
 ; CHECK-NEXT:    jnp .LBB2_2
 ; CHECK-NEXT:  .LBB2_5: # %if.then
Index: llvm/lib/Target/X86/X86InstrInfo.cpp
===================================================================
--- llvm/lib/Target/X86/X86InstrInfo.cpp
+++ llvm/lib/Target/X86/X86InstrInfo.cpp
@@ -6085,15 +6085,16 @@
 
     // x86-32 PIC requires a PIC base register for constant pools.
     unsigned PICBase = 0;
-    if (MF.getTarget().isPositionIndependent()) {
-      if (Subtarget.is64Bit())
-        PICBase = X86::RIP;
-      else
-        // FIXME: PICBase = getGlobalBaseReg(&MF);
-        // This doesn't work for several reasons.
-        // 1. GlobalBaseReg may have been spilled.
-        // 2. It may not be live at MI.
-        return nullptr;
+    // Since we're using Small or Kernel code model, we can always use
+    // RIP-relative addressing for a smaller encoding.
+    if (Subtarget.is64Bit()) {
+      PICBase = X86::RIP;
+    } else if (MF.getTarget().isPositionIndependent()) {
+      // FIXME: PICBase = getGlobalBaseReg(&MF);
+      // This doesn't work for several reasons.
+      // 1. GlobalBaseReg may have been spilled.
+      // 2. It may not be live at MI.
+      return nullptr;
     }
 
     // Create a constant-pool entry.

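A quick way to observe the change (a hedged sketch: llc defaults to the
static relocation model for this triple, and the exact .LCPI label number
depends on the function's position in the file):

  $ llc -mtriple=x86_64-unknown-unknown -mattr=+avx \
      llvm/test/CodeGen/X86/avx-cmp.ll -o - | grep vucomisd
  vucomisd .LCPI2_0(%rip), %xmm0

Before this patch the same non-PIC build emitted the absolute form
"vucomisd .LCPI2_0, %xmm0". The tests pass --no_x86_scrub_rip to
update_llc_test_checks.py so the regenerated CHECK lines spell out the
RIP-relative constant-pool operand instead of a scrubbed generic pattern.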
