[llvm] 9283681 - [CriticalAntiDepBreaker] Teach the regmask clobber check to check if any subregister is preserved before considering the super register clobbered

Craig Topper via llvm-commits <llvm-commits at lists.llvm.org>
Wed Nov 27 11:21:27 PST 2019


Author: Craig Topper
Date: 2019-11-27T11:20:58-08:00
New Revision: 9283681e168141bab9a883e48ce1da80b86afca3

URL: https://github.com/llvm/llvm-project/commit/9283681e168141bab9a883e48ce1da80b86afca3
DIFF: https://github.com/llvm/llvm-project/commit/9283681e168141bab9a883e48ce1da80b86afca3.diff

LOG: [CriticalAntiDepBreaker] Teach the regmask clobber check to check if any subregister is preserved before considering the super register clobbered

X86 has some calling conventions where bits 127:0 of a vector register are callee saved, but the upper bits aren't. Previously we would treat the full ymm register as clobbered even though the xmm portion was actually preserved. This patch checks the subregisters and only considers the super register clobbered if none of its subregisters are preserved.

Fixes PR44140

Differential Revision: https://reviews.llvm.org/D70699

Added: 
    

Modified: 
    llvm/lib/CodeGen/CriticalAntiDepBreaker.cpp
    llvm/test/CodeGen/X86/pr44140.ll

Removed: 
    


################################################################################
diff --git a/llvm/lib/CodeGen/CriticalAntiDepBreaker.cpp b/llvm/lib/CodeGen/CriticalAntiDepBreaker.cpp
index 702e7e244bce..8d9d48402b31 100644
--- a/llvm/lib/CodeGen/CriticalAntiDepBreaker.cpp
+++ b/llvm/lib/CodeGen/CriticalAntiDepBreaker.cpp
@@ -261,15 +261,25 @@ void CriticalAntiDepBreaker::ScanInstruction(MachineInstr &MI, unsigned Count) {
     for (unsigned i = 0, e = MI.getNumOperands(); i != e; ++i) {
       MachineOperand &MO = MI.getOperand(i);
 
-      if (MO.isRegMask())
-        for (unsigned i = 0, e = TRI->getNumRegs(); i != e; ++i)
-          if (MO.clobbersPhysReg(i)) {
+      if (MO.isRegMask()) {
+        auto ClobbersPhysRegAndSubRegs = [&](unsigned PhysReg) {
+          for (MCSubRegIterator SRI(PhysReg, TRI, true); SRI.isValid(); ++SRI)
+            if (!MO.clobbersPhysReg(*SRI))
+              return false;
+
+          return true;
+        };
+
+        for (unsigned i = 0, e = TRI->getNumRegs(); i != e; ++i) {
+          if (ClobbersPhysRegAndSubRegs(i)) {
             DefIndices[i] = Count;
             KillIndices[i] = ~0u;
             KeepRegs.reset(i);
             Classes[i] = nullptr;
             RegRefs.erase(i);
           }
+        }
+      }
 
       if (!MO.isReg()) continue;
       Register Reg = MO.getReg();

diff --git a/llvm/test/CodeGen/X86/pr44140.ll b/llvm/test/CodeGen/X86/pr44140.ll
index 9916252e6c49..941f45d2d99a 100644
--- a/llvm/test/CodeGen/X86/pr44140.ll
+++ b/llvm/test/CodeGen/X86/pr44140.ll
@@ -10,7 +10,6 @@ define win64cc void @opaque() {
 
 ; We need xmm6 to be live from the loop header across all iterations of the loop.
 ; We shouldn't clobber ymm6 inside the loop.
-; FIXME: We currently clobber ymm6
 define i32 @main() {
 ; CHECK-LABEL: main:
 ; CHECK:       # %bb.0: # %start
@@ -23,7 +22,7 @@ define i32 @main() {
 ; CHECK-NEXT:    # =>This Inner Loop Header: Depth=1
 ; CHECK-NEXT:    vmovups {{[0-9]+}}(%rsp), %ymm0
 ; CHECK-NEXT:    vmovups {{[0-9]+}}(%rsp), %ymm1
-; CHECK-NEXT:    vmovups {{[0-9]+}}(%rsp), %ymm6
+; CHECK-NEXT:    vmovups {{[0-9]+}}(%rsp), %ymm7
 ; CHECK-NEXT:    vmovups {{[0-9]+}}(%rsp), %ymm2
 ; CHECK-NEXT:    vmovups {{[0-9]+}}(%rsp), %ymm3
 ; CHECK-NEXT:    vmovups %ymm0, {{[0-9]+}}(%rsp)
@@ -31,10 +30,10 @@ define i32 @main() {
 ; CHECK-NEXT:    vmovups {{[0-9]+}}(%rsp), %ymm1
 ; CHECK-NEXT:    vmovups %ymm3, {{[0-9]+}}(%rsp)
 ; CHECK-NEXT:    vmovups %ymm2, {{[0-9]+}}(%rsp)
-; CHECK-NEXT:    vmovups %ymm6, {{[0-9]+}}(%rsp)
+; CHECK-NEXT:    vmovups %ymm7, {{[0-9]+}}(%rsp)
 ; CHECK-NEXT:    vmovups %ymm3, {{[0-9]+}}(%rsp)
 ; CHECK-NEXT:    vmovups %ymm2, {{[0-9]+}}(%rsp)
-; CHECK-NEXT:    vmovups %ymm6, {{[0-9]+}}(%rsp)
+; CHECK-NEXT:    vmovups %ymm7, {{[0-9]+}}(%rsp)
 ; CHECK-NEXT:    vmovups %ymm1, {{[0-9]+}}(%rsp)
 ; CHECK-NEXT:    vmovups %ymm1, {{[0-9]+}}(%rsp)
 ; CHECK-NEXT:    vmovups {{[0-9]+}}(%rsp), %ymm5
