[PATCH] D104751: Unpack the CostEstimate feature in ML inlining models.

Jacob Hegna via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue Jun 29 10:39:19 PDT 2021


jacobhegna added inline comments.


================
Comment at: llvm/lib/Analysis/InlineCost.cpp:943
+  InlineCostFeaturesArray Cost = {};
+  int LoadEliminationCost = 0;
+  unsigned SROACostSavings = 0;
----------------
mtrofin wrote:
> Where is this used, other than in onDisableLoadElimination, where it's set to 0?
It is meant to ensure that InlineCostFeatures::LoadElimination is only ever set once, but there is a better way to express that. It was also never set to a nonzero value in the first place, which is just a bug.
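For illustration, a minimal standalone sketch of the intent (the names here are hypothetical, not the actual patch): latch the feature so it is only recorded on the first call.

  // Hypothetical sketch, not the patch itself.
  struct FeatureCollector {
    int LoadEliminationCost = 0;
    bool LoadEliminationSet = false;

    // Record the accumulated cost only the first time load elimination
    // is disabled; later calls are ignored.
    void onDisableLoadElimination(int AccumulatedCost) {
      if (LoadEliminationSet)
        return;
      LoadEliminationCost = AccumulatedCost;
      LoadEliminationSet = true;
    }
  };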


================
Comment at: llvm/lib/Analysis/InlineCost.cpp:947
+  int SingleBBBonus = 0;
+  int Threshold = 5;
+
----------------
mtrofin wrote:
> Is this really a threshold, or can it have a name that better communicates what it accumulates? (Apologies, I can't right now figure out what that may be.)
> 
> Same for the two bonuses above.
These are copied from the heuristic inline visitor and used in almost the same way; there is some complexity in the other visitor that isn't replicated here. I'll keep the names the same to reflect that duplication. In the future, there should be some abstraction so that the two visitors can share these values and the logic for updating them.


================
Comment at: llvm/lib/Analysis/InlineCost.cpp:2716
+                                 ORE, *Call.getCalledFunction(), Call);
+  auto R = CFA.analyze();
+  (void)R; // Explicitly ignore the result. In any case where we are computing
----------------
mtrofin wrote:
> Not sure; what's the value of CFA.features() if !R?
> 
> It may be clearer to handle the error here and return {} or something like that. It creates less coupling between deep implementation details and their consumers; it also simplifies the state space of the values this API can produce.
> 
> Better yet: return an Optional<InlineCostFeaturesArray>, because a 0-valued feature vector isn't necessarily equivalent to CFA.analyze() saying "no". Returning None in that case is a clearer API.
I wasn't returning an optional for a very specific reason: std::array cannot be efficiently moved from, so if I return an optional, we can't unpack it later without eating another copy of the array. I can swap it back now and avoid the copy in MLInlineAdvisor.cpp by taking a reference to the array inside the optional. In a later patch, I want to switch the array to a vector so that we can efficiently std::move it; then returning an optional here wouldn't come with a performance hit.
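As a standalone sketch of the trade-off (using std::optional for brevity; the same reasoning applies to LLVM's Optional, and the names below are illustrative only):

  #include <array>
  #include <optional>
  #include <utility>
  #include <vector>

  std::optional<std::array<int, 4>> getArrayFeatures() {
    return std::array<int, 4>{1, 2, 3, 4};
  }

  std::optional<std::vector<int>> getVectorFeatures() {
    return std::vector<int>{1, 2, 3, 4};
  }

  void consume() {
    // std::array has no cheap move: moving out of the optional still copies
    // every element, so the cheap option is to bind a reference into it.
    if (auto MaybeArr = getArrayFeatures()) {
      const auto &Features = *MaybeArr; // no copy
      (void)Features;
    }
    // std::vector moves by stealing the heap buffer, so unpacking the
    // optional with std::move is O(1).
    if (auto MaybeVec = getVectorFeatures()) {
      std::vector<int> Features = std::move(*MaybeVec);
      (void)Features;
    }
  }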


================
Comment at: llvm/lib/Analysis/MLInlineAdvisor.cpp:46
 
+// clang-format off
 const std::array<std::string, NumberOfFeatures> llvm::FeatureNameMap{
----------------
mtrofin wrote:
> why
clang-format doesn't seem to understand these back-to-back macros and tries to add another level of indentation to each line, which makes it hard to read.
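For context, a made-up miniature of the pattern (the macro and feature names here are illustrative, not the actual file); the directive just keeps clang-format from re-indenting the macro-built table:

  #include <array>
  #include <string>

  // clang-format off
  #define POPULATE_NAMES(NAME) NAME,
  const std::array<std::string, 2> FeatureNameMap{
      POPULATE_NAMES("sroa_savings")
      POPULATE_NAMES("load_elimination")
  };
  #undef POPULATE_NAMES
  // clang-format on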


================
Comment at: llvm/lib/Analysis/MLInlineAdvisor.cpp:257
   ModelRunner->setFeature(FeatureIndex::CalleeUsers, CalleeBefore.Uses);
+  ModelRunner->setFeature(FeatureIndex::CostEstimate, CostEstimate);
+
----------------
mtrofin wrote:
> do we still want this here?
Yes - it increases stability in training. I'm still looking into how to completely remove the heuristics from the ML model without sacrificing training stability.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D104751/new/

https://reviews.llvm.org/D104751


