[PATCH] D107800: [CSSPGO][llvm-profgen] Cap context stack to reduce memory usage
Wenlei He via Phabricator via llvm-commits
llvm-commits at lists.llvm.org
Tue Aug 10 17:05:04 PDT 2021
wenlei added inline comments.
================
Comment at: llvm/tools/llvm-profgen/ProfileGenerator.cpp:53
+cl::opt<int> CSProfCtxStackCap(
+ "csprof-ctx-stack-cap", cl::init(20), cl::ZeroOrMore,
+ cl::desc("Cap context stack at a given depth. No cap if the input is -1."));
----------------
I think we could unify the switch names, e.g. `csprof-max-context-depth` and `csprof-max-cold-context-depth`?
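To be concrete, a rough sketch of the renamed switch (same semantics and default as in the quoted patch, only the name changes; the variable name here is just my guess, and the cold counterpart would follow the same pattern):

  #include "llvm/Support/CommandLine.h"
  using namespace llvm;

  // Sketch only: the existing switch from the patch under the suggested
  // name. csprof-max-cold-context-depth would be declared the same way.
  static cl::opt<int> CSProfMaxContextDepth(
      "csprof-max-context-depth", cl::init(20), cl::ZeroOrMore,
      cl::desc("Cap context stack at a given depth. No cap if the input is -1."));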
================
Comment at: llvm/tools/llvm-profgen/ProfileGenerator.cpp:615
CSProfileGenerator::compressRecursionContext(ContextStrStack);
+  CSProfileGenerator::capContextStack(ContextStrStack, CSProfCtxStackCap);
----------------
Since `getExpandedContextStr` only covers the line-based profile, for probes we rely on the trimming here in profile generation, which is later than where we do the trimming for the line-based profile. Do we still see a peak memory drop if we trim the context in profile generation instead of in the unwinder?
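To illustrate the memory angle (a simplified sketch using plain STL containers, not the actual unwinder or ProfileGenerator code): trimming while the context is being built means the deep caller tail is never copied per sample, whereas trimming in profile generation first materializes the full-depth context and only then cuts it down.

  #include <algorithm>
  #include <cstddef>
  #include <string>
  #include <vector>

  // Illustration only. Keeps the innermost MaxDepth frames and never
  // materializes the deep caller tail; trimming later would build the
  // full-depth context first, so peak memory per context stays higher.
  static std::vector<std::string>
  makeTrimmedContext(const std::vector<std::string> &Frames, int MaxDepth) {
    size_t Keep = MaxDepth < 0
                      ? Frames.size()
                      : std::min(Frames.size(), static_cast<size_t>(MaxDepth));
    return std::vector<std::string>(
        Frames.end() - static_cast<std::ptrdiff_t>(Keep), Frames.end());
  }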
================
Comment at: llvm/tools/llvm-profgen/ProfileGenerator.h:73
+  // Cap the context stack by cutting off from the bottom at a given depth.
+  template <typename T>
----------------
nit: bottom-up order in a stack is usually caller-to-callee order; "from the bottom" can be confusing as it reads like we trim callees, which is not the case.
Also suggest renaming capContextStack to trimContext, roughly along the lines of the sketch below.
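Roughly what I have in mind for trimContext (just a sketch; the SmallVectorImpl container and caller-first frame order are assumptions on my part):

  #include "llvm/ADT/SmallVector.h"

  // Sketch of the suggested trimContext: keep only the innermost Depth
  // frames (the callee side), dropping the outermost callers. A negative
  // Depth means no trimming. Container type and ordering are assumptions.
  template <typename T>
  static void trimContext(llvm::SmallVectorImpl<T> &Context, int Depth) {
    if (Depth < 0 || static_cast<size_t>(Depth) >= Context.size())
      return;
    // Frames assumed ordered outermost caller first, leaf callee last.
    Context.erase(Context.begin(), Context.end() - Depth);
  }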
Repository:
rG LLVM Github Monorepo
CHANGES SINCE LAST ACTION
https://reviews.llvm.org/D107800/new/
https://reviews.llvm.org/D107800