[all-commits] [llvm/llvm-project] b0d882: [RISCV] Add isel pattern to optimize (mul (and X, ...

Craig Topper via All-commits all-commits at lists.llvm.org
Sat Mar 20 14:56:11 PDT 2021


  Branch: refs/heads/main
  Home:   https://github.com/llvm/llvm-project
  Commit: b0d8823a8a440549f303f9ba45aaa5550e1dc536
      https://github.com/llvm/llvm-project/commit/b0d8823a8a440549f303f9ba45aaa5550e1dc536
  Author: Craig Topper <craig.topper at sifive.com>
  Date:   2021-03-20 (Sat, 20 Mar 2021)

  Changed paths:
    M llvm/lib/Target/RISCV/RISCVInstrInfoM.td
    M llvm/test/CodeGen/RISCV/rv64i-w-insts-legalization.ll
    M llvm/test/CodeGen/RISCV/xaluo.ll

  Log Message:
  -----------
  [RISCV] Add isel pattern to optimize (mul (and X, 0xffffffff), (and Y, 0xffffffff)) on RV64

This pattern computes the full 64-bit product of a 32x32 unsigned
multiply. It requires two pairs of SLLI+SRLI to zero the
upper 32 bits of the inputs.

We can do better than this by using two SLLIs to move the lower
32 bits of each input into the upper 32 bits and then using MULHU,
which computes the high half of a full 64x64 product. Since the lower
32 bits of each input are now zero, the 128-bit product has zeros in its
lower 64 bits. So the upper 64 bits, which MULHU computes, contain
the original 64-bit product we were after.
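For illustration only (not part of the patch), here is a minimal C sketch of the
arithmetic identity the new pattern relies on. The mulhu() helper below is a
hypothetical stand-in for what the RISC-V MULHU instruction computes (the upper
64 bits of the full 128-bit unsigned product), and it assumes a compiler that
supports the unsigned __int128 extension:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Model of MULHU: upper 64 bits of the 128-bit product of two
   unsigned 64-bit values (uses the GCC/Clang __int128 extension). */
static uint64_t mulhu(uint64_t a, uint64_t b) {
    return (uint64_t)(((unsigned __int128)a * b) >> 64);
}

int main(void) {
    uint64_t x = 0x123456789abcdef0, y = 0xfedcba9876543210;

    /* Original pattern: zero the upper 32 bits of each input
       (SLLI+SRLI per input), then take the low 64 bits with MUL. */
    uint64_t via_mul = (x & 0xffffffffu) * (y & 0xffffffffu);

    /* Optimized pattern: shift the low 32 bits into the high half
       (one SLLI per input), then take the high 64 bits with MULHU.
       The lower 64 bits of the 128-bit product are all zero, so the
       high half is exactly the 64-bit product we wanted. */
    uint64_t via_mulhu = mulhu(x << 32, y << 32);

    assert(via_mul == via_mulhu);
    printf("%llx\n", (unsigned long long)via_mulhu);
    return 0;
}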

The same trick would work for (mul (sext_inreg X, i32), (sext_inreg Y, i32))
using MULHS, but sext_inreg is sext.w, which is already a single instruction,
so we wouldn't save anything.

Differential Revision: https://reviews.llvm.org/D99026
