[all-commits] [llvm/llvm-project] 22f789: [RISCV] Use vmv.v.x for any rv32 e64 splat with eq...

Philip Reames via All-commits all-commits at lists.llvm.org
Mon Mar 10 11:12:15 PDT 2025


  Branch: refs/heads/main
  Home:   https://github.com/llvm/llvm-project
  Commit: 22f7897374a1d0964e36c6fefccc3e3630a89465
      https://github.com/llvm/llvm-project/commit/22f7897374a1d0964e36c6fefccc3e3630a89465
  Author: Philip Reames <preames at rivosinc.com>
  Date:   2025-03-10 (Mon, 10 Mar 2025)

  Changed paths:
    M llvm/lib/Target/RISCV/RISCVISelLowering.cpp
    M llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse-vp.ll
    M llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctlz-vp.ll
    M llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctpop-vp.ll
    M llvm/test/CodeGen/RISCV/rvv/fixed-vectors-cttz-vp.ll

  Log Message:
  -----------
  [RISCV] Use vmv.v.x for any rv32 e64 splat with equal halves (#130530)

The prior logic was reasoning in terms of vsetivli immediates, but using
the vmv.v.x is strongly profitable for high LMUL cases. The key
difference is that the vmv.v.x form is rematerializable during register
allocation, and the vlse form is not.
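For illustration, a minimal sketch (hypothetical, not taken from the
commit's tests) of the pattern in question: an rv32 e64 splat whose two
32-bit halves are equal, compiled with something like
llc -mtriple=riscv32 -mattr=+v. The instruction sequences in the
comments are approximations of the expected shape, not verbatim
compiler output.

  ; Hypothetical example: an rv32 e64 splat of 0x0000000100000001,
  ; whose 32-bit halves are both 1.
  define <4 x i64> @splat_equal_halves() {
    ret <4 x i64> <i64 4294967297, i64 4294967297, i64 4294967297, i64 4294967297>
  }
  ; With equal halves, the splat can be materialized at SEW=32 with a
  ; doubled VL, roughly:
  ;   li        a0, 1
  ;   vsetivli  zero, 8, e32, m2, ta, ma   ; 2 x 4 elements fits the uimm5
  ;   vmv.v.x   v8, a0
  ; Register allocation can rematerialize this form, unlike a
  ; constant-pool splat loaded with a zero-strided vlse64.v.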

This change uses the vlmax form of the vsetvli for all cases where the
doubled VL (2 x the e64 element count) can't be encoded as a vsetivli
immediate. This has the effect of increasing VL more than necessary
across the vmv.v.x, which could in theory be problematic
performance-wise on some hardware. We can revisit (or add a tune flag)
if this turns out to be noteworthy.
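A second hypothetical sketch of the case described just above: once the
doubled VL exceeds 31, it no longer fits the 5-bit vsetivli immediate
(again, the sequence in the comments is an approximation, not verbatim
output).

  ; Hypothetical example: 2 x 16 = 32 e32 elements, which exceeds the
  ; vsetivli immediate range (0-31).
  define <16 x i64> @splat_large_equal_halves() {
    ret <16 x i64> <i64 4294967297, i64 4294967297, i64 4294967297, i64 4294967297,
                    i64 4294967297, i64 4294967297, i64 4294967297, i64 4294967297,
                    i64 4294967297, i64 4294967297, i64 4294967297, i64 4294967297,
                    i64 4294967297, i64 4294967297, i64 4294967297, i64 4294967297>
  }
  ; Expected shape with this change, roughly:
  ;   li       a0, 1
  ;   vsetvli  a1, zero, e32, m8, ta, ma   ; VLMAX, possibly larger than needed
  ;   vmv.v.x  v8, a0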




