[llvm] [LV][VPlan] Add initial support for CSA vectorization (PR #121222)

Florian Hahn via llvm-commits llvm-commits at lists.llvm.org
Sun Feb 9 14:13:39 PST 2025


https://github.com/fhahn commented:

> @ayalz thanks for the review!
> 
> > Continuing with this argument, better to also sink the loading of a[i] to after the loop, instead of loading vectorized va with mask inside the loop?
> 
> I agree for the example you bring up. But consider a case where `a[i]` is needed unconditionally in the loop, for example if `cond[i]` is replaced with `a[i]`. Then doing the load after the loop duplicates it. Additionally, if `t = f(a, b, c)`, then we need to keep `a`, `b`, `c` live out of the loop, which may be tricky.
> 
> I think there is some room to do such an optimization, but I prefer to leave this as future work. WDYT?
> 
> > The reduction becomes a "FindLast" reduction once this function is sunk. Sound reasonable?

IIUC the current patch will always load a wide value on each vector iteration, versus doing at most one extra scalar load outside the loop, so the latter seems like it would be more profitable in general?
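
For context, a minimal sketch of the conditional scalar assignment (CSA) pattern under discussion; the function and variable names here are illustrative, not taken from the patch:

```c
/* Sketch of a CSA loop: t carries the value of a[i] from the last
 * iteration where cond[i] was true. */
int csa_example(int init, const int *a, const int *cond, int n) {
  int t = init;
  for (int i = 0; i < n; ++i)
    if (cond[i])
      t = a[i];
  return t;
}
```

The two strategies being weighed are, roughly: keep a masked wide load of `a[i]` plus a select inside the vector loop, versus only tracking the last active lane/index in the loop and issuing a single scalar load of `a[i]` after it.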

@ayalz's suggestion may also help to avoid additional recipes by re-using existing ones (an example is sinking stores for reductions to an invariant address).
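
To illustrate the existing optimization referenced above (again illustrative code, not from the patch): a store of a reduction value to a loop-invariant address can be sunk out of the loop, so only the final value is stored:

```c
/* Hypothetical reduction storing to an invariant address. The vectorizer
 * can sink the store: the loop computes the running sum in a register and
 * a single store of the final value is emitted after the loop. */
void sum_to(int *dst, const int *a, int n) {
  int s = 0;
  for (int i = 0; i < n; ++i) {
    s += a[i];
    *dst = s; /* invariant address; sinkable to after the loop */
  }
}
```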

I added some comments inline to clarify some of the added recipes in the patch; perhaps some of them can be replaced by existing recipes to reduce the complexity.

Recipes should also avoid holding references to underlying IR where possible, and avoid modifying existing IR.

> 
> According to [#106560 (comment)](https://github.com/llvm/llvm-project/pull/106560#issuecomment-2419166743), there is no plan to support `FindLast` at the moment. Above I describe a scenario where it is not beneficial to do this sinking, so we would not be able to rely on `FindLast` in all instances.
> 

Now that @Mel-Chen's recent changes landed, I'd hope that adding support for `FindLast` may be slightly easier, but I am not sure. Maybe @Mel-Chen has additional thoughts?
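
For reference, my reading of the proposed sinking (hypothetical code): the in-loop CSA update becomes a FindLast-style index reduction, with the actual load of `a[i]` done once after the loop:

```c
/* Sketch of the CSA loop rewritten as a FindLast reduction plus a
 * post-loop scalar load. Names are illustrative, not from the patch. */
int csa_as_find_last(int init, const int *a, const int *cond, int n) {
  int last = -1;                 /* FindLast reduction over the index */
  for (int i = 0; i < n; ++i)
    if (cond[i])
      last = i;
  return last >= 0 ? a[last] : init; /* single scalar load after loop */
}
```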

https://github.com/llvm/llvm-project/pull/121222

