[all-commits] [llvm/llvm-project] 38318d: [RISCV][LoopVectorize] Use DataWithEVL as the pref...

Luke Lau via All-commits all-commits at lists.llvm.org
Tue Jul 22 06:03:20 PDT 2025


  Branch: refs/heads/main
  Home:   https://github.com/llvm/llvm-project
  Commit: 38318dd05615a2f38abdeeae99e7423165308902
      https://github.com/llvm/llvm-project/commit/38318dd05615a2f38abdeeae99e7423165308902
  Author: Luke Lau <luke at igalia.com>
  Date:   2025-07-22 (Tue, 22 Jul 2025)

  Changed paths:
    M llvm/lib/Target/RISCV/RISCVTargetTransformInfo.h
    M llvm/test/Transforms/LoopVectorize/RISCV/low-trip-count.ll
    M llvm/test/Transforms/LoopVectorize/RISCV/pr88802.ll
    M llvm/test/Transforms/LoopVectorize/RISCV/scalable-tailfold.ll
    M llvm/test/Transforms/LoopVectorize/RISCV/truncate-to-minimal-bitwidth-cost.ll
    M llvm/test/Transforms/LoopVectorize/RISCV/uniform-load-store.ll

  Log Message:
  -----------
  [RISCV][LoopVectorize] Use DataWithEVL as the preferred tail folding style (#148686)

In preparation for eventually making EVL tail folding the default, this
patch sets DataWithEVL as the preferred tail folding style for RISC-V,
but doesn't enable tail folding by default.
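
For context, the mechanism is the getPreferredTailFoldingStyle hook in
RISCVTargetTransformInfo.h, which the loop vectorizer queries to decide
how to predicate a folded tail. A minimal sketch of the kind of override
involved follows (not the literal diff; the exact signature and any
subtarget guard here are assumptions):

  // Sketch only: the real code in RISCVTargetTransformInfo.h may guard
  // this on subtarget features and may use a slightly different
  // signature.
  TailFoldingStyle
  getPreferredTailFoldingStyle(bool IVUpdateMayOverflow) const {
    // Prefer vector-length (EVL) predication over an active-lane mask.
    // This choice alone does not turn tail folding on by default.
    return TailFoldingStyle::DataWithEVL;
  }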

Although tail folding isn't enabled by default, the loop vectorizer
will still tail fold loops with a small trip count, so this change will
cause some EVL-vectorized loops to be generated in the default
configuration.
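
As a concrete (hypothetical) illustration, a loop like the one below has
a trip count too small to vectorize without folding the tail, so with
this patch the vectorizer can emit an EVL-predicated body for it in the
default configuration (the function itself is made up for illustration):

  // Hypothetical source loop: the constant trip count of 3 is below a
  // full vector factor, so the vectorizer folds the tail and, with
  // DataWithEVL preferred, uses vector-length predication to do so.
  void small_trip_count(float *a, const float *b) {
    for (int i = 0; i < 3; ++i)
      a[i] += b[i];
  }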

The EVL tail folding work is still not complete; for example, interleave
groups still need to be handled (see #123069). However, many of the
missing features also apply to the data (masked) tail folding strategy,
which is the current default anyway.

The overall performance picture is much better: on TSVC, EVL tail
folding is faster than data tail folding on every benchmark on the
spacemit-x60[^1]:
https://lnt.lukelau.me/db_default/v4/nts/755?compare_to=756
On SPEC CPU 2017 we also see a geomean improvement[^2]:
https://lnt.lukelau.me/db_default/v4/nts/751?compare_to=753

This is likely because masked instructions are generally less performant
on the spacemit-x60, in some cases up to twice as slow:
https://camel-cdr.github.io/rvv-bench-results/bpi_f3/index.html

[^1]: These benchmark runs don't give exactly the same performance
numbers as this patch, but they are a good indicator that EVL tail
folding is generally faster than masked tail folding.
[^2]: The large code size increase in 505.mcf_r is due to a function
now being inlined.
