[llvm] [VPlan] Impl VPlan-based pattern match for ExtendedRed and MulAccRed (NFCI) (PR #113903)

Elvis Wang via llvm-commits llvm-commits at lists.llvm.org
Fri Nov 1 10:16:37 PDT 2024


ElvisWang123 wrote:

> This might be a case where gradual lowering would help. We could have a more abstract recipe early on which combines mul-extend in a single recipe, facilitating simple cost-computation. Before code-gen, we can replace the recipe with wide recipes for the adds and extends, so there is no need to duplicate codegen for those, similar to how things are sketched for scalar phis in #114305

Thanks for the advice; I'm working in this direction.
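
To make sure I understand the direction, here is a rough, self-contained C++ sketch of what I have in mind (toy, hypothetical names, not the real VPlan classes): the abstract recipe carries the whole ext/mul/reduce chain so its cost can be queried as one fused operation, and it is expanded into the existing widen/reduction recipes right before codegen, so no new codegen path is needed.

```
// Toy sketch only -- illustrates "cost as one abstract recipe, lower to
// existing recipes before codegen"; none of these classes exist in VPlan.
#include <memory>
#include <vector>

struct ToyRecipe {
  virtual ~ToyRecipe() = default;
  virtual unsigned cost() const = 0; // stand-in for computeCost(VF, Ctx)
};

struct ToyWidenCast : ToyRecipe { unsigned cost() const override { return 1; } };
struct ToyWidenMul  : ToyRecipe { unsigned cost() const override { return 1; } };
struct ToyReduction : ToyRecipe { unsigned cost() const override { return 2; } };

// Abstract combined recipe: costed as a single fused mul-add-reduce.
struct ToyMulAccReduction : ToyRecipe {
  unsigned cost() const override { return 2; }
  // Expanded before codegen into recipes that already know how to generate
  // code for the extends, the multiply and the reduction.
  std::vector<std::unique_ptr<ToyRecipe>> lower() const {
    std::vector<std::unique_ptr<ToyRecipe>> Out;
    Out.push_back(std::make_unique<ToyWidenCast>());
    Out.push_back(std::make_unique<ToyWidenCast>());
    Out.push_back(std::make_unique<ToyWidenMul>());
    Out.push_back(std::make_unique<ToyReduction>());
    return Out;
  }
};
```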


> Hello. I believe the basic patterns are the ones listed in the summary, which are an extended-add-reduction or an extended mla reduction:
> 
> ```
> reduce(ext(...))
> reduce.add(mul(...))
> reduce.add(mul(ext(...), ext(...)))
> ```
Thanks, I will model only these three patterns for reductions.
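
For reference, these are roughly the scalar loops I have in mind for those three patterns (element types chosen only for illustration):

```
#include <cstddef>
#include <cstdint>

// reduce(ext(...)): widening sum.
int32_t sum_ext(const int8_t *A, size_t N) {
  int32_t Sum = 0;
  for (size_t I = 0; I < N; ++I)
    Sum += static_cast<int32_t>(A[I]);
  return Sum;
}

// reduce.add(mul(...)): plain multiply-accumulate.
int32_t sum_mul(const int32_t *A, const int32_t *B, size_t N) {
  int32_t Sum = 0;
  for (size_t I = 0; I < N; ++I)
    Sum += A[I] * B[I];
  return Sum;
}

// reduce.add(mul(ext(...), ext(...))): widening dot product (e.g. sdot/udot).
int32_t sum_mul_ext(const int8_t *A, const int8_t *B, size_t N) {
  int32_t Sum = 0;
  for (size_t I = 0; I < N; ++I)
    Sum += static_cast<int32_t>(A[I]) * static_cast<int32_t>(B[I]);
  return Sum;
}
```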


> There are also other patterns that come up too. The first I believe should be equivalent to `vecreduce(mul(ext, ext))`, providing the ext nodes are the correct types. I don't remember about the second.
> 
> ```
> reduce.add(ext(mul(ext(...), ext(...))))
> reduce.add(ext(mul(...)))
> ```
Thanks for catching that. In the original patch I misunderstood how many patterns can be folded into MVE reduction-like instructions.
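
As a quick, illustrative check of that equivalence (assuming i8 inputs extended to i16 for the mul and to i32 for the reduction): multiplying in i16 and extending the product to i32 matches extending both operands straight to i32 and multiplying there, because an i8*i8 product always fits in i16. That is the "correct types" condition for folding `reduce.add(ext(mul(ext, ext)))` into `reduce.add(mul(ext, ext))`.

```
// Exhaustive check for signed i8 inputs that ext(mul(ext, ext)) to i32
// equals mul(ext-to-i32, ext-to-i32).
#include <cassert>
#include <cstdint>

int main() {
  for (int A = -128; A <= 127; ++A)
    for (int B = -128; B <= 127; ++B) {
      int16_t Narrow = static_cast<int16_t>(static_cast<int8_t>(A)) *
                       static_cast<int16_t>(static_cast<int8_t>(B));
      int32_t ViaI16 = static_cast<int32_t>(Narrow);                  // ext(mul(ext, ext))
      int32_t ViaI32 = static_cast<int32_t>(static_cast<int8_t>(A)) *
                       static_cast<int32_t>(static_cast<int8_t>(B));  // mul(ext, ext)
      assert(ViaI16 == ViaI32);
    }
  return 0;
}
```
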
> AArch64 has a stand-alone umull instruction (both for scalar and for vector, although the type sizes differ), that performs a `mul(ext, ext)`. Sometimes it might be better to fold towards `ext(load)` though, depending on the types.

I think we already model the instruction cost for `ext(load)` in `VPWidenCastRecipe::computeCost()`: we compute the `CastContextHint` for the `ext` instruction, which depends on the load/store it consumes. But I am not quite sure whether ARMTTI handles this pattern correctly.

In summary, I think we only need two new recipes for reductions: `reduce(ext)` and `reduce.add(mul(<optional>(ext), <optional>(ext)))`?

If you have any questions, please let me know.


https://github.com/llvm/llvm-project/pull/113903

