[PATCH] D85451: [NFC][MLInliner] Refactor logging implementation
Mircea Trofin via Phabricator via llvm-commits
llvm-commits at lists.llvm.org
Fri Aug 7 09:33:45 PDT 2020
mtrofin marked 3 inline comments as done.
mtrofin added inline comments.
================
Comment at: llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp:107
+ const T *TensorData, size_t TensorCount,
+ Optional<StringRef> LoggingName = None) const {
+ writeRawTensorsAsFeatureLists(OutFile, Spec,
----------------
yundiqian wrote:
> When will we be using this?
This is what the typed cases call - e.g. when we log the feature tensors, whose type we know at compile time. It saves us from having to typecast the data buffer every time.
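For illustration only, a minimal sketch of what such a typed wrapper could look like (a member of the logger class; the stream type, the forwarding of LoggingName, and the exact signatures are my assumptions, not necessarily what the patch has; includes elided):

  template <typename T>
  void writeTensorsAsFeatureLists(raw_ostream &OutFile, const TensorSpec &Spec,
                                  const T *TensorData, size_t TensorCount,
                                  Optional<StringRef> LoggingName = None) const {
    // The caller knows T statically, so the cast to the raw buffer happens
    // exactly once, here, instead of at every call site.
    writeRawTensorsAsFeatureLists(OutFile, Spec,
                                  reinterpret_cast<const char *>(TensorData),
                                  TensorCount, LoggingName);
  }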
================
Comment at: llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp:123-138
+  if (Spec.isElementType<int64_t>()) {
+    FieldName = "int64_list";
+    ValueWriter = [&](const char *Data) {
+      writeTensorValues<int64_t>(OutFile, Data, Spec.getElementCount());
+    };
+  } else if (Spec.isElementType<int32_t>()) {
+    FieldName = "int64_list";
----------------
yundiqian wrote:
> Is it possible to template this, as you did in the last two diffs?
I don't think it'll add much value; there are just the 3 cases, because of the limitation in what the Feature protobuf supports. Also, there's the int32_t special-casing.
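For context: the Feature proto only has int64_list, float_list and bytes_list fields, and int32 has no field of its own, so the int32_t case is widened into an int64_list. Roughly, the dispatch looks like the sketch below; the float branch and the llvm_unreachable fallback are my extrapolation from the quoted (truncated) snippet, not necessarily the patch's exact code:

  StringRef FieldName;
  std::function<void(const char *)> ValueWriter;
  if (Spec.isElementType<int64_t>()) {
    FieldName = "int64_list";
    ValueWriter = [&](const char *Data) {
      writeTensorValues<int64_t>(OutFile, Data, Spec.getElementCount());
    };
  } else if (Spec.isElementType<int32_t>()) {
    // No int32 field in the Feature proto, so int32 tensors are logged
    // into an int64_list.
    FieldName = "int64_list";
    ValueWriter = [&](const char *Data) {
      writeTensorValues<int32_t>(OutFile, Data, Spec.getElementCount());
    };
  } else if (Spec.isElementType<float>()) {
    FieldName = "float_list";
    ValueWriter = [&](const char *Data) {
      writeTensorValues<float>(OutFile, Data, Spec.getElementCount());
    };
  } else {
    llvm_unreachable("Unsupported tensor element type");
  }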
================
Comment at: llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp:162
+  std::vector<int64_t> Decisions;
   std::vector<bool> Effects;
   std::vector<int64_t> Rewards;
----------------
yundiqian wrote:
> Should Effects also become int64_t if Decisions are int64_t?
No, and we may actually yank out Effects - we don't currently use it.
================
Comment at: llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp:382-384
+  writeTensorsAsFeatureLists(OutFile,
+                             TensorSpec::createSpec<int64_t>(DecisionName, {1}),
+                             Decisions.data(), NumberOfRecords);
----------------
yundiqian wrote:
> Shall we log Effect instead of Decision?
Maybe, but we need to evaluate that alternative. Effects should be the same as Decisions, except when inlining somehow failed for a correctness reason we didn't detect up front. So training, currently, may end up being confused by this aspect.
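Purely to illustrate that point, a hypothetical fragment (CB, IFI, Advice and the surrounding bookkeeping are made up here for the example, not code from the patch):

  // The advisor's recommendation is what gets logged as the Decision.
  bool Decision = Advice->isInliningRecommended();
  bool Effect = false;
  if (Decision)
    // Inlining can still fail for a correctness reason not detected up
    // front; only in that case would Effect differ from Decision.
    Effect = InlineFunction(CB, IFI).isSuccess();
  Decisions.push_back(Decision); // std::vector<int64_t>
  Effects.push_back(Effect);     // std::vector<bool>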
Repository:
rG LLVM Github Monorepo
CHANGES SINCE LAST ACTION
https://reviews.llvm.org/D85451/new/
https://reviews.llvm.org/D85451