[llvm] [RFC][LV] VPlan-based cost model (PR #67647)

Kolya Panchenko via llvm-commits llvm-commits at lists.llvm.org
Tue Apr 30 11:13:57 PDT 2024


nikolaypanchenko wrote:

> At the moment, computing costs is framed in terms of the IR instructions we generate (i.e. what `::execute` will generate) and target-specific info is pulled in via TTI only. In that sense, I think it makes sense to keep them defined in the same place. Do you by any chance have some examples where TTI would not be sufficient for downstream customizations?

Not sure I follow the TTI reasoning. `llvm::Instruction` does not have `getCost` or `execute` methods, unlike `VPRecipe`, so I see TTI/BasicTTI as a dedicated object for computing the cost of instruction(s). Unlike the vectorizer's cost model, TTI was designed to be as caller-agnostic as possible, so it is the caller's responsibility to supply any other context-related information.
Regarding TTI's downstream customization, it's important to note first that X86, ARM, RISC-V, etc. have their own TTI implementations. At least for RISC-V we didn't find a reason to have our own TTI, but a vendor-specific TTI is possible.
My points in favor of a separate object for the cost model are:
* Separation of concerns: a VPlan represents a possible vectorization; the cost model estimates the cost of that vectorization.
* A single, clearly identified place to estimate the cost. Since context does matter, `VPRecipe::getCost` has to accept some "state" anyway, so I don't see how putting the cost on the recipe is beneficial from an encapsulation point of view.

Regarding vendor-specific heuristics, as I mentioned earlier, a separate object to evaluate the cost does not rule out a default cost model that a vendor can use as a base class, just as BasicTTI serves as a base for RISCVTTI.

The same goes for `execute` and SelectionDAG.

> In general, I think `llvm` tries to limit target-specific customizations in the middle-end to TTI and possibly via intrinsics to encourage as much code-sharing upstream as possible. I am not aware of other areas in the middle-end where other means are provided to customize for downstream users explicitly, although I may be missing something and I don't think it is an official policy that's spelled out anywhere.

Yeah, I can understand why "downstream has to deal with it" is beneficial for upstream. My biggest worry is that, with the variety of different hardware targets, this might become a big concern for everyone in the long run.

https://github.com/llvm/llvm-project/pull/67647
