[PATCH] D61048: [X86] Remove dead nodes left after ReplaceAllUsesWith calls during address matching

Craig Topper via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue Apr 23 18:05:24 PDT 2019


craig.topper marked 2 inline comments as done.
craig.topper added inline comments.


================
Comment at: llvm/test/CodeGen/X86/fold-and-shift.ll:9
+; CHECK-NEXT:    movl $255, %ecx
+; CHECK-NEXT:    andl {{[0-9]+}}(%esp), %ecx
 ; CHECK-NEXT:    movl (%eax,%ecx,4), %eax
----------------
Looks like we were previously selecting movzx because load folding was being suppressed by an artificial extra use on the load. Now we favor folding the load over folding the immediate, as we normally would. We could probably hack IsProfitableToFold to restore the old behavior. The best option would be to narrow the load and use movzx from memory, but that's only valid if the load isn't volatile. I suppose we want the previous behavior in the volatile-load case?
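For readers following along, a minimal C sketch of the pattern this hunk exercises (the function name and comments are illustrative, not taken from fold-and-shift.ll):

```c
/* Hypothetical reproducer: an indexed load whose index is a 32-bit value
 * masked down to 8 bits, as in the first hunk of fold-and-shift.ll. */
int load_masked8(const int *table, unsigned i) {
    /* Roughly, the codegen choice under discussion on 32-bit x86 is:
     *   old:  zero-extend folds the mask   -> movzx, then movl (%eax,%ecx,4)
     *   new:  load folds into the and      -> movl $255, %ecx
     *                                         andl i(%esp), %ecx
     *                                         movl (%eax,%ecx,4), %eax
     * Narrowing the 32-bit load of `i` and doing movzx straight from
     * memory would be best, but it changes the access width, so it is
     * only legal when the load is not volatile. */
    return table[i & 255u];
}
```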


================
Comment at: llvm/test/CodeGen/X86/fold-and-shift.ll:26
 ; CHECK-NEXT:    movl {{[0-9]+}}(%esp), %eax
-; CHECK-NEXT:    movl {{[0-9]+}}(%esp), %ecx
-; CHECK-NEXT:    movzwl %cx, %ecx
+; CHECK-NEXT:    movl $65535, %ecx # imm = 0xFFFF
+; CHECK-NEXT:    andl {{[0-9]+}}(%esp), %ecx
----------------
Similar to above
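The same pattern with the 16-bit mask from this hunk; again the function name is illustrative, not from the test file:

```c
/* Hypothetical reproducer for the second hunk: the 0xFFFF mask was
 * previously folded as a register movzwl (movzwl %cx, %ecx); now the
 * mask is materialized as an immediate and the stack load folds into
 * the andl instead. Narrowing to a 16-bit load from memory has the
 * same volatile-load caveat as the 8-bit case above. */
int load_masked16(const int *table, unsigned i) {
    return table[i & 0xFFFFu];
}
```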


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D61048/new/

https://reviews.llvm.org/D61048
