[llvm] [LV][AArch64] Prefer Fixed over Scalable if cost-model is equal (Neoverse V2) (PR #95819)

Sjoerd Meijer via llvm-commits llvm-commits at lists.llvm.org
Tue Jun 18 13:59:07 PDT 2024


sjoerdmeijer wrote:

> > The fact that Neon has LDP and postinc would feel like enough of a reason to favour Neon on a tie.
> 
> Whereas I'd say "The fact that Neon has LDP and postinc would feel like enough of a reason for Neon to have a lower cost when comparing equivalent vector lengths."

I don't think tweaking the cost of load instructions is going to cut it here. It is too fragile and would have too many side effects, so it probably wouldn't achieve what we want. I don't really see that as a way forward.

> I'd just rather see the compiler make decisions based on data from the IR rather than implement a switch that effectively means "ignore the data". 

I see what you mean, but I both agree and disagree with that statement. First of all, we unleash the cost-model on fixed and scalable VFs and all sorts of combinations, and if there is a tie we happen to default to scalable. So the status quo is that we make an arbitrary decision that is similarly not based on the IR or on any empirical data. The "arbitration" in case of a tie, and the proposal to change it to fixed, *is* based on empirical data, which is clearly an improvement over the status quo. However, perhaps we agree on this point now, and the remaining point of discussion is that we could make the arbitration a somewhat better informed decision:
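To make the arbitration concrete, here is a minimal, self-contained sketch, not the actual LoopVectorize code; the types and the hook name `preferFixedOverScalableOnTie` are illustrative assumptions, showing how a target hook could break a cost tie instead of hard-coding the scalable preference:

```cpp
#include <cstdio>

// Illustrative stand-ins for LLVM's VectorizationFactor / InstructionCost.
struct VFCandidate {
  unsigned MinLanes; // known-minimum number of lanes
  bool Scalable;     // true for <vscale x N x ty>
  unsigned Cost;     // cost-model estimate for this VF
};

// Hypothetical target hook: e.g. Neoverse V2 would return true.
static bool preferFixedOverScalableOnTie() { return true; }

// Returns true if candidate A should be chosen over candidate B.
static bool isMoreProfitable(const VFCandidate &A, const VFCandidate &B) {
  if (A.Cost != B.Cost)
    return A.Cost < B.Cost;
  // Tie: today scalable wins unconditionally; the proposal lets the
  // target arbitrate, based on empirical data for the core.
  if (A.Scalable != B.Scalable && preferFixedOverScalableOnTie())
    return !A.Scalable;
  return A.MinLanes > B.MinLanes;
}

int main() {
  VFCandidate Fixed = {4, false, 10};
  VFCandidate Scalable = {4, true, 10};
  std::printf("prefer fixed: %d\n", isMoreProfitable(Fixed, Scalable));
}
```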

> I appreciate this might not always be clean, for example take a look at AArch64TTIImpl::preferPredicateOverEpilogue. 

As the original author of that hook I am very well aware of it, and I am happy with this suggestion, and with this:

> but at least that's showing a decision based on the input IR that's concrete (i.e. we think predication is bad for small loops) 

That is similar, I think, to the heuristic I sketched earlier: a scan of the loop body to check that a small loop doesn't require predication. So I am happy to go down that route to progress this. Please let me know what you think.
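For illustration only, here is a rough standalone sketch of such a scan; the real heuristic would live in AArch64TTIImpl and walk the loop's IR, so the types, names, and the threshold below are all assumptions:

```cpp
#include <vector>

// Simplified stand-ins for IR instructions inside the loop body.
enum class OpKind { Load, Store, Arith, Call };

struct LoopInfoLite {
  std::vector<OpKind> Body;
  bool HasKnownSmallTripCount; // e.g. trip count known to be <= 16
};

// Hypothetical heuristic: for small loops with a simple body (no calls,
// few memory ops), predicated scalable vectorization buys little, so on
// a cost tie prefer fixed-width (NEON) code.
static bool preferFixedForLoop(const LoopInfoLite &L) {
  if (!L.HasKnownSmallTripCount)
    return false;
  unsigned MemOps = 0;
  for (OpKind Op : L.Body) {
    if (Op == OpKind::Call)
      return false; // calls may need masked/predicated lowering
    if (Op == OpKind::Load || Op == OpKind::Store)
      ++MemOps;
  }
  return MemOps <= 4; // arbitrary illustrative threshold
}
```

The point of shaping it this way is that the decision is driven by concrete properties of the input IR (trip count, body contents), in the spirit of preferPredicateOverEpilogue, rather than by an unconditional switch.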

https://github.com/llvm/llvm-project/pull/95819