[llvm] [SLP]Initial FMAD support (PR #149102)
David Green via llvm-commits
llvm-commits at lists.llvm.org
Fri Aug 8 10:04:25 PDT 2025
davemgreen wrote:
A note: we saw some fallout from this in internal performance testing too. An example like this one, doing a fast-math reduction over fmuls, is no longer vectorized by the SLP vectorizer: https://godbolt.org/z/rYWM7dxEj. On AArch64 it wasn't helped by a different set of cost calculations that mark an fmul the same cost as an fma, but that example is x86. The original fadds in the reduction can be combined into fmas, and the expanded reduction will still become fma for part of it.
https://github.com/llvm/llvm-project/pull/149102