[llvm] 9933015 - [X86] Fold MMX_MOVD64from64rr + store to MMX_MOVQ64mr instead of MMX_MOVD64from64mr.

Craig Topper via llvm-commits llvm-commits at lists.llvm.org
Tue Mar 22 14:22:31 PDT 2022


Author: Craig Topper
Date: 2022-03-22T14:21:55-07:00
New Revision: 9933015fdd761adc1883ce87708a6aaf988031a0

URL: https://github.com/llvm/llvm-project/commit/9933015fdd761adc1883ce87708a6aaf988031a0
DIFF: https://github.com/llvm/llvm-project/commit/9933015fdd761adc1883ce87708a6aaf988031a0.diff

LOG: [X86] Fold MMX_MOVD64from64rr + store to MMX_MOVQ64mr instead of MMX_MOVD64from64mr.

MMX_MOVD64from64rr moves an MMX register to a 64-bit GPR.

MMX_MOVD64from64mr is the memory form of that move: it stores an MMX
register to a 64-bit memory location. It requires the REX.W bit to be
set. There are no isel patterns that use this instruction.

MMX_MOVQ64mr is the MMX register store instruction. It doesn't
require a REX.W prefix, which makes it one byte shorter to encode
than MMX_MOVD64from64mr in many cases.

Both store instructions print the same mnemonic string, and the
assembler would choose MMX_MOVQ64mr if it were to parse that output,
which is another reason using it is the correct thing to do.

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D122241

Added: 
    

Modified: 
    llvm/lib/Target/X86/X86InstrFoldTables.cpp
    llvm/test/CodeGen/X86/stack-folding-mmx.ll

Removed: 
    


################################################################################
diff --git a/llvm/lib/Target/X86/X86InstrFoldTables.cpp b/llvm/lib/Target/X86/X86InstrFoldTables.cpp
index bd3360b01c955..b8dff70f96128 100644
--- a/llvm/lib/Target/X86/X86InstrFoldTables.cpp
+++ b/llvm/lib/Target/X86/X86InstrFoldTables.cpp
@@ -292,7 +292,7 @@ static const X86MemoryFoldTableEntry MemoryFoldTable0[] = {
   { X86::JMP32r_NT,           X86::JMP32m_NT,           TB_FOLDED_LOAD },
   { X86::JMP64r,              X86::JMP64m,              TB_FOLDED_LOAD },
   { X86::JMP64r_NT,           X86::JMP64m_NT,           TB_FOLDED_LOAD },
-  { X86::MMX_MOVD64from64rr,  X86::MMX_MOVD64from64mr,  TB_FOLDED_STORE | TB_NO_REVERSE },
+  { X86::MMX_MOVD64from64rr,  X86::MMX_MOVQ64mr,        TB_FOLDED_STORE | TB_NO_REVERSE },
   { X86::MMX_MOVD64grr,       X86::MMX_MOVD64mr,        TB_FOLDED_STORE | TB_NO_REVERSE },
   { X86::MOV16ri,             X86::MOV16mi,             TB_FOLDED_STORE },
   { X86::MOV16rr,             X86::MOV16mr,             TB_FOLDED_STORE },

diff --git a/llvm/test/CodeGen/X86/stack-folding-mmx.ll b/llvm/test/CodeGen/X86/stack-folding-mmx.ll
index 8b6c2e687941a..11ca9e2a547ee 100644
--- a/llvm/test/CodeGen/X86/stack-folding-mmx.ll
+++ b/llvm/test/CodeGen/X86/stack-folding-mmx.ll
@@ -140,7 +140,7 @@ define i64 @stack_fold_movq_store(x86_mmx %a0) nounwind {
 ; CHECK-NEXT:    pushq %r12
 ; CHECK-NEXT:    pushq %rbx
 ; CHECK-NEXT:    paddb %mm0, %mm0
-; CHECK-NEXT:    movq %mm0, {{[-0-9]+}}(%r{{[sb]}}p) # 8-byte Folded Spill
+; CHECK-NEXT:    movq %mm0, {{[-0-9]+}}(%r{{[sb]}}p) # 8-byte Spill
 ; CHECK-NEXT:    #APP
 ; CHECK-NEXT:    nop
 ; CHECK-NEXT:    #NO_APP


        


More information about the llvm-commits mailing list