[llvm] [RegAlloc][RISCV] Increase the spill weight by target factor (PR #113675)

Philip Reames via llvm-commits llvm-commits at lists.llvm.org
Wed Nov 27 13:21:51 PST 2024


================
@@ -2307,36 +2307,38 @@ define <vscale x 7 x i64> @vp_bitreverse_nxv7i64(<vscale x 7 x i64> %va, <vscale
 ; RV32-NEXT:    vsll.vx v24, v24, a3, v0.t
 ; RV32-NEXT:    vor.vv v16, v16, v24, v0.t
 ; RV32-NEXT:    csrr a4, vlenb
-; RV32-NEXT:    slli a4, a4, 4
+; RV32-NEXT:    slli a4, a4, 3
 ; RV32-NEXT:    add a4, sp, a4
 ; RV32-NEXT:    addi a4, a4, 16
 ; RV32-NEXT:    vs8r.v v16, (a4) # Unknown-size Folded Spill
 ; RV32-NEXT:    addi a4, sp, 8
 ; RV32-NEXT:    vsetvli a5, zero, e64, m8, ta, ma
 ; RV32-NEXT:    vlse64.v v16, (a4), zero
 ; RV32-NEXT:    csrr a4, vlenb
-; RV32-NEXT:    slli a4, a4, 3
----------------
preames wrote:

Off topic for this review, but just an observation.  

If I'm reading this code right, we're spilling the result of a zero-strided load of a constant that lives on the stack.  I think this is coming from SPLAT_VECTOR_SPLIT_I64_VL.
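
For reference, the pattern I'm describing looks roughly like this (illustrative constants, not the exact code in this test):

```
li       a0, 0x0f0f0f0f          # low 32 bits of the 64-bit constant
li       a1, 0x0f0f0f0f          # high 32 bits
sw       a0, 8(sp)               # split the constant into a stack slot
sw       a1, 12(sp)
addi     a4, sp, 8
vsetvli  a5, zero, e64, m8, ta, ma
vlse64.v v16, (a4), zero         # stride-zero load broadcasts the 64-bit value
```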

This is a missed rematerialization opportunity if we can prove the stack slot is constant.
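
Concretely, if we knew the slot at sp+8 only ever holds that constant, the allocator could re-execute the broadcast load at the use point instead of spilling and reloading the whole group, roughly (sketch only):

```
# today: spill + reload of the splatted group
vs8r.v   v16, (a4)               # Unknown-size Folded Spill
# ...
vl8re8.v v16, (a4)               # Unknown-size Folded Reload

# with remat (assuming the stack slot is provably constant):
# ...
addi     a4, sp, 8
vsetvli  a5, zero, e64, m8, ta, ma
vlse64.v v16, (a4), zero         # re-materialize the splat at the use
```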

We could probably also just use a vector select between two constants (i.e. vmv.v.x + vmerge.vxm) here instead.  This form wouldn't be easy to remat.
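
Something along these lines, at SEW=32 so both halves of the 64-bit constant come straight from scalar registers (untested sketch; it also needs an alternating 0101... mask in v0, which would have to be juggled with the mask already live here):

```
vsetvli    a5, zero, e32, m8, ta, ma
vmv.v.x    v16, a0               # splat the low 32 bits into every lane
vmerge.vxm v16, v16, a1, v0      # odd 32-bit lanes take the high 32 bits
```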

This is also a case where we have a splat of one vreg value to a whole register group.  (We don't model it that way today.)  We could spill only one vreg here and "remat" it via a chain of whole-register moves.  This pattern comes up with e.g. vrgather.vi and a few other shuffles, so it might be interesting to explore as a way to decrease register pressure in these cases.
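
i.e. spill a single vreg and rebuild the register group with whole-register moves, along the lines of (sketch only):

```
vs1r.v   v16, (a4)               # spill one vreg instead of the whole m8 group
# ...
vl1r.v   v16, (a4)               # reload it
vmv1r.v  v17, v16                # "remat" the rest of the group by copying
vmv1r.v  v18, v16
# ... and so on through v23
```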

https://github.com/llvm/llvm-project/pull/113675

