[PATCH] D99029: [RISCV] Don't form MULW for (sext_inreg (mul X, Y), i32) if the mul has another use.

Craig Topper via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Sat Mar 20 18:23:19 PDT 2021


craig.topper added inline comments.


================
Comment at: llvm/test/CodeGen/RISCV/xaluo.ll:703
+; RV64-NEXT:    mul a1, a0, a1
+; RV64-NEXT:    sext.w a0, a1
+; RV64-NEXT:    xor a0, a0, a1
----------------
This is an interesting case. The mul is used by both the sext.w and the sw, and each consumes only the lower 32 bits. So it would be better to use MULW for the sext_inreg+mul and pass that result to the sw.

Pretty sure anything we do to try to make that happen in DAG combine or lowering will be defeated by SimplifyDemandedBits, unless we add a RISCVISD::MULW node to hide it.
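
To illustrate the idea (hypothetical only; there is no RISCVISD::MULW node in tree), such a combine on SIGN_EXTEND_INREG could look roughly like this:

static SDValue combineSExtInRegMul(SDNode *N, SelectionDAG &DAG) {
  // Hypothetical: fold (sext_inreg (mul X, Y), i32) to RISCVISD::MULW so
  // SimplifyDemandedBits can no longer look through the multiply; target
  // nodes are opaque to it unless the target opts in.
  SDValue Src = N->getOperand(0);
  if (Src.getOpcode() != ISD::MUL ||
      cast<VTSDNode>(N->getOperand(1))->getVT() != MVT::i32)
    return SDValue();
  return DAG.getNode(RISCVISD::MULW, SDLoc(N), N->getValueType(0),
                     Src.getOperand(0), Src.getOperand(1));
}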

We could probably catch this specific case with a PreprocessISelDAG peephole that would look for a sext_inreg+mul. We could examine all other users of the mul and see if they only need the lower 32 bits; see the sketch below. We might not be able to catch many cases without recursively checking the users of the users, though. Ideally we'd be able to see that the user was an ADDW/SUBW, but that requires knowing the use is an add that is itself used by a sext_inreg.
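
A rough sketch of that peephole, written as if it were the whole of RISCVDAGToDAGISel::PreprocessISelDAG (none of this is in-tree code, and the "only needs the low 32 bits" check is illustrative: it only handles the truncating-store case from this test, not the recursive walk mentioned above):

void RISCVDAGToDAGISel::PreprocessISelDAG() {
  for (SDNode &N : CurDAG->allnodes()) {
    // Match (sext_inreg (mul X, Y), i32).
    if (N.getOpcode() != ISD::SIGN_EXTEND_INREG ||
        cast<VTSDNode>(N.getOperand(1))->getVT() != MVT::i32)
      continue;
    SDValue Mul = N.getOperand(0);
    if (Mul.getOpcode() != ISD::MUL || Mul.getValueType() != MVT::i64)
      continue;

    // Collect the other users of the mul. Give up unless each one is known
    // to read only bits 31:0; here only a truncating i32 store qualifies.
    SmallVector<SDNode *, 4> LowUsers;
    bool AllLow32 = true;
    for (SDNode *U : Mul->uses()) {
      if (U == &N)
        continue;
      auto *ST = dyn_cast<StoreSDNode>(U);
      if (ST && ST->isTruncatingStore() && ST->getMemoryVT() == MVT::i32 &&
          ST->getValue() == Mul) {
        LowUsers.push_back(U);
        continue;
      }
      AllLow32 = false;
      break;
    }
    if (!AllLow32 || LowUsers.empty())
      continue;

    // Redirect those users to the sext_inreg result. The mul then has a
    // single use again, so isel can fold the pair into MULW. (A real
    // implementation would need to worry about CSE and iterator
    // invalidation when mutating nodes mid-walk.)
    for (SDNode *U : LowUsers) {
      SmallVector<SDValue, 8> Ops(U->op_begin(), U->op_end());
      for (SDValue &Op : Ops)
        if (Op == Mul)
          Op = SDValue(&N, 0);
      CurDAG->UpdateNodeOperands(U, Ops);
    }
  }
}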


================
Comment at: llvm/test/CodeGen/RISCV/xaluo.ll:1050
 ; RV64-NEXT:    mulhu a0, a0, a1
-; RV64-NEXT:    srli a0, a0, 32
-; RV64-NEXT:    snez a1, a0
-; RV64-NEXT:    mulw a0, a4, a3
+; RV64-NEXT:    srli a1, a0, 32
+; RV64-NEXT:    snez a1, a1
----------------
This demonstrates that we now get the full benefit of the (mul (and X, 0xffffffff), (and Y, 0xffffffff)) optimization: the mulhu result feeds both the overflow check and the low 32 bits of the result, so the separate mulw is gone.
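
For reference, the identity behind that optimization: masking both operands to 32 bits and multiplying gives the same 64-bit product as taking the high half of (X << 32) * (Y << 32), which maps to slli+slli+mulhu on RV64. A standalone C++ check (illustrative only, not LLVM code; uses the GCC/Clang __int128 extension):

#include <cassert>
#include <cstdint>

// RV64 MULHU: high 64 bits of the full 128-bit unsigned product.
static uint64_t mulhu64(uint64_t A, uint64_t B) {
  return (uint64_t)(((unsigned __int128)A * B) >> 64);
}

int main() {
  uint64_t X = 0x123456789abcdef0ULL, Y = 0x0fedcba987654321ULL;
  // (X & 0xffffffff) * (Y & 0xffffffff) fits in 64 bits and equals the
  // high half of (X << 32) * (Y << 32): the shifts discard the upper 32
  // bits of each operand and scale the product by 2^64.
  uint64_t Expected = (X & 0xffffffff) * (Y & 0xffffffff);
  assert(mulhu64(X << 32, Y << 32) == Expected);
  return 0;
}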


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D99029/new/

https://reviews.llvm.org/D99029
