[llvm] [SLP]Initial FMAD support (PR #149102)
Alexey Bataev via llvm-commits
llvm-commits at lists.llvm.org
Fri Aug 8 10:19:08 PDT 2025
alexey-bataev wrote:
> A note: we saw some fallout from this in internal performance testing too. Something like this example, doing a reduction of an fmul under fast-math, no longer vectorizes via the SLP vectorizer: https://godbolt.org/z/rYWM7dxEj. On AArch64 it was not helped by a different set of cost calculations that marked an fmul the same cost as an fma, but that example is x86. The original fadds in a reduction can be combined into an fma, but the expanded reduction will still become an fma for part of it.
There should be a follow-up patch to support fma-based reductions.
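For reference, a minimal sketch of the kind of source pattern being discussed (hypothetical, modeled on the linked godbolt example, not the exact reproducer): an fmul feeding an fadd reduction, which under fast-math can be contracted into fma and interacts with the SLP vectorizer's reduction matching.

```c
/* Hypothetical dot-product reduction: with -ffast-math each
 * a[i] * b[i] feeding the running sum is a candidate for fma
 * contraction, which is the pattern affected by this change. */
float dot(const float *a, const float *b, int n) {
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
        sum += a[i] * b[i];  /* fmul feeding an fadd reduction */
    return sum;
}
```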
https://github.com/llvm/llvm-project/pull/149102