[PATCH] D106601: [RISCV] Teach vsetvli insertion pass that it doesn't need to insert vsetvli for unit strided stores in some cases.

Roger Ferrer Ibanez via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Wed Jul 28 03:46:42 PDT 2021


rogfer01 added a comment.

I'm curious: why can't we apply a similar approach to loads as well? Don't they compute EEW and EMUL in the same way?

Also, I think this could be applied to non-unit-strided accesses as well. I understand indexed memory accesses are the odd ones out here.

  vsetivli zero, 2, e32, mf2, ta, mu
  vle32.v v25, (a0)
  vfwcvt.rtz.xu.f.v v26, v25
  vsetvli zero, zero, e64, m1, ta, mu
  # The previous vsetvli can be removed: a vle64 executed under
  # SEW=32 and LMUL=1/2 already uses
  #   EEW=64
  #   EMUL=(EEW/SEW)*LMUL=(64/32)*(1/2)=1
  # which is exactly the e64, m1 that the vsetvli requests, and its
  # "zero, zero" operands leave VL unchanged anyway.
  vle64.v v26, (a1)
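
To make the EEW/EMUL arithmetic above concrete, here is a small standalone C++ sketch of the redundancy check for unit-strided memory ops. This is not code from the patch; the names Frac, computeEMUL and vsetvliIsRedundantForUnitStrideMemOp are made up for illustration, and the sketch assumes the vsetvli uses "zero, zero" (so VL is unchanged) and ignores the other vtype fields such as ta/ma.

  // Sketch only: for unit-strided loads/stores the EEW is encoded in the
  // instruction and EMUL = (EEW / SEW) * LMUL. If the (EEW, EMUL) pair the
  // instruction gets under the incoming vtype already matches what the
  // explicit "vsetvli zero, zero, eX, mY" would request, that vsetvli is
  // redundant for this memory operation.
  #include <cassert>
  #include <cstdio>

  // LMUL/EMUL as a fraction Num/Den, e.g. {1, 2} for mf2, {1, 1} for m1.
  struct Frac {
    unsigned Num, Den;
    bool operator==(const Frac &O) const { return Num * O.Den == O.Num * Den; }
  };

  // EMUL = (EEW / SEW) * LMUL, kept as a fraction to avoid rounding.
  static Frac computeEMUL(unsigned EEW, unsigned SEW, Frac LMUL) {
    return {EEW * LMUL.Num, SEW * LMUL.Den};
  }

  // Is an explicit "vsetvli zero, zero, <NewSEW>, <NewLMUL>" redundant for a
  // unit-strided memory op with element width EEW, given the incoming vtype?
  static bool vsetvliIsRedundantForUnitStrideMemOp(unsigned EEW, unsigned CurSEW,
                                                   Frac CurLMUL, unsigned NewSEW,
                                                   Frac NewLMUL) {
    // The memory op only cares about the (EEW, EMUL) pair it executes with.
    return computeEMUL(EEW, CurSEW, CurLMUL) == computeEMUL(EEW, NewSEW, NewLMUL);
  }

  int main() {
    // The example above: vtype is SEW=32, LMUL=1/2; the vle64 has EEW=64,
    // so EMUL = (64/32) * (1/2) = 1, exactly what e64/m1 would give.
    bool Redundant = vsetvliIsRedundantForUnitStrideMemOp(
        /*EEW=*/64, /*CurSEW=*/32, /*CurLMUL=*/{1, 2},
        /*NewSEW=*/64, /*NewLMUL=*/{1, 1});
    printf("vsetvli redundant: %s\n", Redundant ? "yes" : "no");
    assert(Redundant);
    return 0;
  }

Representing LMUL as a fraction keeps the check exact for the fractional LMULs (mf2, mf4, mf8).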


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D106601/new/

https://reviews.llvm.org/D106601


