[PATCH] D95690: [LoopVectorize] improve IR fast-math-flags propagation in reductions
Sanjay Patel via Phabricator via llvm-commits
llvm-commits at lists.llvm.org
Mon Feb 1 08:45:28 PST 2021
spatel added a comment.
In D95690#2533764 <https://reviews.llvm.org/D95690#2533764>, @dmgreen wrote:
> It looks like we don't expand non-fast vecreduce_fmax in ExpandReductions:
>
> // FIXME: We only expand 'fast' reductions here because the underlying
> // code in createMinMaxOp() assumes that comparisons use 'fast'
> // semantics.
>
> And otherwise it expands VECREDUCE_FMAX to (I think) FMAXNUM. Not using ExpandReductions sounds fine to me, but do we also need to fix those assumptions during lowering, if we are fixing the assumptions in the vectorizer? I guess we are still requiring NoNaN, so FMAXNUM should be fine? I didn't see anything normally requiring nsz for that.
There's still a bug in ExpandReductions, and we do need to fix it; it's preventing expected vectorization in SLP, as noted here:
https://llvm.org/PR23116
But yes, since we are still requiring NoNaN here, I think we are safe (this patch can't make things worse unless I've missed some loophole in lowering).
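To illustrate why the NoNaN requirement matters: a compare+select max (the pattern createMinMaxOp emits under 'fast' semantics) is order-sensitive when a NaN is present, because every ordered comparison with NaN is false, whereas IEEE-754 maxNum (what FMAXNUM models) returns the non-NaN operand. A minimal Python sketch of that discrepancy (illustrative only; not LLVM code, and `cmp_select_max`/`reduce_max` are hypothetical names):

```python
import math

def cmp_select_max(a, b):
    # Compare+select max: (a > b) ? a : b.
    # Any ordered comparison with NaN is false, so b wins when a is NaN.
    return a if a > b else b

def reduce_max(xs):
    # Sequential reduction over the list, as a scalar loop would do.
    acc = xs[0]
    for x in xs[1:]:
        acc = cmp_select_max(acc, x)
    return acc

nan = float("nan")
# With a NaN present, the result depends on operand order:
print(reduce_max([nan, 1.0, 2.0]))  # 2.0 -- the NaN is lost at the first compare
print(reduce_max([1.0, 2.0, nan]))  # nan -- the NaN wins at the last compare
# IEEE-754 maxNum would return 2.0 in both cases, preferring the non-NaN operand.
```

Under nnan, both forms are free to assume no NaN operands, so the compare+select expansion and FMAXNUM agree; without that flag the two lowerings can produce different results.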
There was also this example:
https://llvm.org/PR43574
...but we either solved that one or made it invisible with the changes so far. I need to investigate the IR after each pass.
CHANGES SINCE LAST ACTION
https://reviews.llvm.org/D95690/new/
https://reviews.llvm.org/D95690