[PATCH] D83733: [llvm] Development-mode InlineAdvisor

Mircea Trofin via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Fri Jul 17 09:20:09 PDT 2020


mtrofin marked 2 inline comments as not done.
mtrofin added inline comments.


================
Comment at: llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp:30
+
+static cl::opt<std::string> TFTrainedModelPath(
+    "ml-inliner-trained-model", cl::Hidden,
----------------
davidxl wrote:
> TrainedModel sounds misleading as if it has been trained. Perhaps just TrainModel or ModelInTrain? Or TrainingModel?
The name is actually correct: the model has been trained. It is a model obtained from a previous iteration of the reinforcement learning training loop. We aren't mutating it here.


================
Comment at: llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp:131
+/// and can default to the default policy when we need to produce training logs
+/// from it.
+class DevelopmentModeMLInlineAdvisor : public MLInlineAdvisor {
----------------
davidxl wrote:
> May be documenting like this:
> 
> The advisor operates in two modes: 1) log collection, and 2) inference (during training). In the first mode, the default advisor is used while in 2), the MLInlineAdvisor is used.
Logging may happen with either advisor, actually. To bootstrap, we use the default advisor; then, to improve the model under training, we use the MLInlineAdvisor.

I expanded the comment to further clarify.


================
Comment at: llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp:132
+/// from it.
+class DevelopmentModeMLInlineAdvisor : public MLInlineAdvisor {
+public:
----------------
davidxl wrote:
> Naming: MLInlineAdvisorForTraining or TrainingModeInlineAdvisor
We use the term 'development' everywhere else, though - e.g. the command line flag is called 'development'. 


================
Comment at: llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp:256
+/// DynamicModelRunner - training mode implementation. It uses TF C APIs to
+/// dynamically load and evaluate a TF SavedModel. Runtime performance is
+/// sacrificed for ease of use while training.
----------------
davidxl wrote:
> SavedModel --> TrainedModel or ModelBeingTrained
SavedModel is a tensorflow term. Added a link in the comment.


================
Comment at: llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp:258
+/// sacrificed for ease of use while training.
+class DynamicModelRunner final : public MLModelRunner {
+public:
----------------
davidxl wrote:
> Perhaps change DynamicModel name to be TrainModel to make the name consistent with the option.
I don't want to suggest it's being trained, though. How about LoadableModelRunner? The key aspect of this ModelRunner is that the model may be loaded from a path given on the command line.

Or, to contrast with the release/AOT case ('ReleaseModeModelRunner'), DevelopmentModeModelRunner.

wdyt?


================
Comment at: llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp:356
+    CallBase &CB, OptimizationRemarkEmitter &ORE) {
+  if (IsDoingInference && !isLogging())
+    return MLInlineAdvisor::getAdviceFromModel(CB, ORE);
----------------
davidxl wrote:
> can Inference and Logging both be on?
Yes. I updated the comment earlier on, so this should now be clearer, I think.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D83733/new/

https://reviews.llvm.org/D83733
