[PATCH] D104751: Unpack the CostEstimate feature in ML inlining models.

Mircea Trofin via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue Jun 29 09:33:39 PDT 2021


mtrofin added inline comments.


================
Comment at: llvm/include/llvm/Analysis/InlineAdvisor.h:40
 /// training.
-enum class InliningAdvisorMode : int {
-  Default,
-  Release,
-  Development
-};
+enum class InliningAdvisorMode : int { Default, Release, Development };
 
----------------
unrelated change - if the linter did this, group it (and others like it) upfront, commit that as an NFC change, and then rebase this patch on top of it


================
Comment at: llvm/include/llvm/Analysis/InlineCost.h:276
+/// even if inlining is impossible.
+InlineCostFeaturesArray getInliningCostFeatures(
+    CallBase &Call, TargetTransformInfo &CalleeTTI,
----------------
I'd drop 'array' from the name; it suggests too much implementation detail.


================
Comment at: llvm/include/llvm/Analysis/InlineModelFeatureMaps.h:49
+// clang-format off
+enum class InlineCostFeatures : size_t {
+#define POPULATE_INDICES(INDEX_NAME, NAME) INDEX_NAME,
----------------
Aah! I see why InlineCostFeaturesArray.

How about making this enum InlineCostFeatureIndex, and then calling the collection InlineCostFeatures?
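
Roughly what I have in mind - a sketch only, where the feature-list macro below is a stand-in for whatever InlineModelFeatureMaps.h actually uses; the point is just the naming split:

  #include <array>
  #include <cstddef>

  // Stand-in feature list; the real one lives in InlineModelFeatureMaps.h.
  #define INLINE_COST_FEATURE_ITERATOR(M)                                      \
    M(SROASavings, "sroa_savings")                                             \
    M(LoadElimination, "load_elimination")

  // The enum names an *index* into the feature vector...
  enum class InlineCostFeatureIndex : size_t {
  #define POPULATE_INDICES(INDEX_NAME, NAME) INDEX_NAME,
    INLINE_COST_FEATURE_ITERATOR(POPULATE_INDICES)
  #undef POPULATE_INDICES
    NumberOfFeatures
  };

  // ...and the collection keeps the concept name, hiding the std::array detail.
  using InlineCostFeatures =
      std::array<int,
                 static_cast<size_t>(InlineCostFeatureIndex::NumberOfFeatures)>;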


================
Comment at: llvm/lib/Analysis/CMakeLists.txt:8
   # discoverability.
-  set(LLVM_INLINER_MODEL_CURRENT_URL "https://github.com/google/ml-compiler-opt/releases/download/inlining-Oz-v0.1/inlining-Oz-acabaf6-v0.1.tar.gz")
+  set(LLVM_INLINER_MODEL_CURRENT_URL "asedf")
 
----------------
It would be clearer to say "<TO BE UPDATED>".


================
Comment at: llvm/lib/Analysis/InlineCost.cpp:132
 namespace {
+
 class InlineCostCallAnalyzer;
----------------
remove spurious change


================
Comment at: llvm/lib/Analysis/InlineCost.cpp:414
 public:
-  CallAnalyzer(
-      Function &Callee, CallBase &Call, const TargetTransformInfo &TTI,
-      function_ref<AssumptionCache &(Function &)> GetAssumptionCache,
-      function_ref<BlockFrequencyInfo &(Function &)> GetBFI = nullptr,
-      ProfileSummaryInfo *PSI = nullptr,
-      OptimizationRemarkEmitter *ORE = nullptr)
+  CallAnalyzer(Function &Callee, CallBase &Call, const TargetTransformInfo &TTI,
+               function_ref<AssumptionCache &(Function &)> GetAssumptionCache,
----------------
formatting change - see previous comment


================
Comment at: llvm/lib/Analysis/InlineCost.cpp:425
 
-  Optional<Constant*> getSimplifiedValue(Instruction *I) {
+  Optional<Constant *> getSimplifiedValue(Instruction *I) {
     if (SimplifiedValues.find(I) != SimplifiedValues.end())
----------------
formatting change


================
Comment at: llvm/lib/Analysis/InlineCost.cpp:943
+  InlineCostFeaturesArray Cost = {};
+  int LoadEliminationCost = 0;
+  unsigned SROACostSavings = 0;
----------------
Where is this used, other than in onDisableLoadElimination, where it's set to 0?


================
Comment at: llvm/lib/Analysis/InlineCost.cpp:944
+  int LoadEliminationCost = 0;
+  unsigned SROACostSavings = 0;
+  int VectorBonus = 0;
----------------
This is counting the opportunities for SROA cost savings, right? Maybe rename it to SROACostSavingOpportunities?


================
Comment at: llvm/lib/Analysis/InlineCost.cpp:947
+  int SingleBBBonus = 0;
+  int Threshold = 5;
+
----------------
Is this really a threshold, or could it have a name that better communicates what it accumulates? (Apologies, I can't come up with a better name right now.)

Same for the two bonuses above.
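
Something along these lines, purely illustrative - I'm not attached to any of these names, and the better name for Threshold is still the open question:

  // Illustrative only: the accumulators this analyzer tracks, with the SROA
  // counter renamed to say it counts opportunities rather than a cost delta.
  struct FeatureAccumulators {
    int LoadEliminationCost = 0;
    unsigned SROACostSavingOpportunities = 0; // was: SROACostSavings
    int VectorBonus = 0;   // TODO: a name that says what this accumulates
    int SingleBBBonus = 0; // TODO: likewise
    int Threshold = 5;     // TODO: likewise
  };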


================
Comment at: llvm/lib/Analysis/InlineCost.cpp:1156
 
-void InlineCostAnnotationWriter::emitInstructionAnnot(const Instruction *I,
-                                                formatted_raw_ostream &OS) {
+void InlineCostAnnotationWriter::emitInstructionAnnot(
+    const Instruction *I, formatted_raw_ostream &OS) {
----------------
formatting


================
Comment at: llvm/lib/Analysis/InlineCost.cpp:1262
       AllocatedSize = SaturatingMultiplyAdd(
-          AllocSize->getLimitedValue(), DL.getTypeAllocSize(Ty).getKnownMinSize(),
-          AllocatedSize);
+          AllocSize->getLimitedValue(),
+          DL.getTypeAllocSize(Ty).getKnownMinSize(), AllocatedSize);
----------------
formatting


================
Comment at: llvm/lib/Analysis/InlineCost.cpp:1417
+          SmallVector<Constant *, 2> Indices;
+          for (unsigned int Index = 1; Index < COps.size(); ++Index)
             Indices.push_back(COps[Index]);
----------------
formatting


================
Comment at: llvm/lib/Analysis/InlineCost.cpp:2155
   // Select condition is a constant.
-  Value *SelectedV = CondC->isAllOnesValue()
-                         ? TrueVal
-                         : (CondC->isNullValue()) ? FalseVal : nullptr;
+  Value *SelectedV = CondC->isAllOnesValue()  ? TrueVal
+                     : (CondC->isNullValue()) ? FalseVal
----------------
formatting


================
Comment at: llvm/lib/Analysis/InlineCost.cpp:2622
 /// Dump stats about this call's analysis.
-LLVM_DUMP_METHOD void InlineCostCallAnalyzer::dump() {
-  print();
-}
+LLVM_DUMP_METHOD void InlineCostCallAnalyzer::dump() { print(); }
 #endif
----------------
formatting


================
Comment at: llvm/lib/Analysis/InlineCost.cpp:2716
+                                 ORE, *Call.getCalledFunction(), Call);
+  auto R = CFA.analyze();
+  (void)R; // Explicitly ignore the result. In any case where we are computing
----------------
Not sure - what's the value of CFA.features() if !R?

It may be clearer to handle the error here and return {} or something like that. That creates less coupling between deep implementation details and their consumers; it also simplifies the state space of the values this API can produce.

Better yet: return an Optional<InlineCostFeaturesArray>, because a 0-valued feature vector isn't necessarily equivalent to CFA.analyze() saying "no". Returning None in that case makes for a clearer API.
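
Sketching the Optional shape - the analyzer class name and the trailing parameters are my guesses, since the declaration in InlineCost.h is cut off in this hunk:

  #include "llvm/ADT/Optional.h"
  #include "llvm/Analysis/InlineCost.h"
  using namespace llvm;

  Optional<InlineCostFeaturesArray> llvm::getInliningCostFeatures(
      CallBase &Call, TargetTransformInfo &CalleeTTI,
      function_ref<AssumptionCache &(Function &)> GetAssumptionCache,
      ProfileSummaryInfo *PSI, OptimizationRemarkEmitter *ORE) {
    // Hypothetical analyzer name - whatever class CFA is an instance of above.
    InlineCostFeaturesAnalyzer CFA(Call, CalleeTTI, GetAssumptionCache, PSI,
                                   ORE, *Call.getCalledFunction(), Call);
    // If the analysis itself fails, report "no features" instead of handing
    // the caller a zeroed vector that looks like a real result.
    if (!CFA.analyze().isSuccess())
      return None;
    return CFA.features();
  }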


================
Comment at: llvm/lib/Analysis/InlineCost.cpp:2990
   PrintInstructionComments = true;
-  std::function<AssumptionCache &(Function &)> GetAssumptionCache = [&](
-      Function &F) -> AssumptionCache & {
+  std::function<AssumptionCache &(Function &)> GetAssumptionCache =
+      [&](Function &F) -> AssumptionCache & {
----------------
formatting


================
Comment at: llvm/lib/Analysis/MLInlineAdvisor.cpp:46
 
+// clang-format off
 const std::array<std::string, NumberOfFeatures> llvm::FeatureNameMap{
----------------
Why the clang-format off here?


================
Comment at: llvm/lib/Analysis/MLInlineAdvisor.cpp:257
   ModelRunner->setFeature(FeatureIndex::CalleeUsers, CalleeBefore.Uses);
+  ModelRunner->setFeature(FeatureIndex::CostEstimate, CostEstimate);
+
----------------
do we still want this here?


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D104751/new/

https://reviews.llvm.org/D104751


