[PATCH] D120430: [memprof] Symbolize and cache stack frames.

Snehasish Kumar via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Wed Mar 2 18:27:40 PST 2022


snehasish marked an inline comment as done.
snehasish added a comment.

Updated the patch to address the comments. PTAL, thanks!

In D120430#3355934 <https://reviews.llvm.org/D120430#3355934>, @tejohnson wrote:

> What is the memory impact of caching? Hopefully not too onerous since this is a big speedup!

There is hardly any change in peak memory consumption: valgrind massif shows 8.618G (now) vs 8.615G (before). This patch introduces a layer of indirection, where the indexing keys (in the map of PC->Frame) account for the small increase in memory.



================
Comment at: llvm/lib/ProfileData/RawMemProfReader.cpp:355
   for (const uint64_t Address : CallStack) {
-    Expected<DIInliningInfo> DIOr = Symbolizer->symbolizeInlinedCode(
-        getModuleOffset(Address), Specifier, /*UseSymbolTable=*/false);
----------------
tejohnson wrote:
> davidxl wrote:
> > Using a map from addr to Frames for caching can also avoid redundant symbolization computation. Is there an advantage of doing eager symbolization?
> I suspect because it makes it easier for adding additional records, such as identifying interior call stack nodes within their functions (we need to mark these with metadata as well).
Caching the frames outside the local context allows us to decouple record generation from symbolization, and it makes the code easier to extend for pruning (D120860) and for marking interior callstack nodes (as Teresa noted). Note that the symbolization here isn't eager in the sense that more work is performed; the number of unique addresses symbolized remains the same before and after this patch.
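
For readers following along, a minimal sketch of the PC -> Frame caching idea being discussed; this is illustrative only and not the actual RawMemProfReader code. The Frame layout and the symbolizeAddress hook below are assumptions standing in for the real symbolizer interface:

  #include <cstdint>
  #include <map>
  #include <string>
  #include <vector>

  // Illustrative frame record; the real patch stores richer debug info.
  struct Frame {
    std::string Function;
    uint32_t Line = 0;
    uint32_t Column = 0;
    bool IsInlineFrame = false;
  };

  class FrameCache {
  public:
    // Returns the cached frames for Address, symbolizing only on first lookup,
    // so each unique address is symbolized exactly once regardless of how many
    // call stacks contain it.
    const std::vector<Frame> &getFrames(uint64_t Address) {
      auto It = Cache.find(Address);
      if (It != Cache.end())
        return It->second;
      return Cache.emplace(Address, symbolizeAddress(Address)).first->second;
    }

  private:
    // Hypothetical placeholder: a real implementation would query the DWARF
    // symbolizer for the (possibly inlined) frames at this address.
    std::vector<Frame> symbolizeAddress(uint64_t Address) {
      return {Frame{"func_at_" + std::to_string(Address), 1, 1, false}};
    }

    std::map<uint64_t, std::vector<Frame>> Cache;
  };

Keeping the map keyed by PC is what decouples record generation from symbolization: records only need to reference addresses, and the frames can be looked up (or pruned, or annotated) in a separate pass.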


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D120430/new/

https://reviews.llvm.org/D120430


