[all-commits] [llvm/llvm-project] 02d6aa: [MemProf] Reduce unnecessary context id computatio...

Teresa Johnson via All-commits all-commits at lists.llvm.org
Tue Sep 24 16:19:09 PDT 2024


  Branch: refs/heads/main
  Home:   https://github.com/llvm/llvm-project
  Commit: 02d6aad5cc940f17904c1288dfabc3fd2d439279
      https://github.com/llvm/llvm-project/commit/02d6aad5cc940f17904c1288dfabc3fd2d439279
  Author: Teresa Johnson <tejohnson at google.com>
  Date:   2024-09-24 (Tue, 24 Sep 2024)

  Changed paths:
    M llvm/lib/Transforms/IPO/MemProfContextDisambiguation.cpp

  Log Message:
  -----------
  [MemProf] Reduce unnecessary context id computation (NFC) (#109857)

One of the memory reduction techniques was to compute node context ids
on the fly. This reduced memory at the expense of some increase in
compile time.

For a large binary we were spending a lot of time invoking getContextIds
on the node during assignStackNodesPostOrder, because we were iterating
through the stack ids for a call from leaf to root (first to last node
in the parlance used in that code). However, all calls for a given entry
in the StackIdToMatchingCalls map share the same last node. We can
therefore borrow the approach used by similar code in updateStackNodes:
compute the context ids on the last node once, then iterate each call's
stack ids in reverse order while reusing the last node's context ids.
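
To make the reuse concrete, here is a minimal, self-contained C++ sketch of
the idea under simplified assumptions; the types and names below (Node,
CallInfo, processEntry, the explicit intersection step) are illustrative
stand-ins, not the actual code in MemProfContextDisambiguation.cpp.

  #include <cstdint>
  #include <iterator>
  #include <set>
  #include <utility>
  #include <vector>

  using ContextIdSet = std::set<uint32_t>;

  struct Node {
    // Stand-in for ids the real pass may compute on the fly.
    ContextIdSet Ids;
    const ContextIdSet &getContextIds() const { return Ids; }
  };

  struct CallInfo {
    std::vector<Node *> StackNodes; // ordered leaf (first) to root (last)
  };

  // All calls for one map entry share the same last node, so compute its
  // context ids once and reuse them for every call, walking each call's
  // nodes in reverse (root towards leaf) instead of recomputing per node.
  void processEntry(std::vector<CallInfo> &Calls) {
    if (Calls.empty() || Calls.front().StackNodes.empty())
      return;
    const ContextIdSet &LastNodeIds =
        Calls.front().StackNodes.back()->getContextIds();
    for (CallInfo &Call : Calls) {
      if (Call.StackNodes.empty())
        continue;
      ContextIdSet Current = LastNodeIds; // start from the cached ids
      for (auto It = std::next(Call.StackNodes.rbegin());
           It != Call.StackNodes.rend(); ++It) {
        // Narrow Current to ids also present on this node (set intersection).
        ContextIdSet Narrowed;
        for (uint32_t Id : (*It)->getContextIds())
          if (Current.count(Id))
            Narrowed.insert(Id);
        Current = std::move(Narrowed);
        // ... the real pass would assign/propagate Current for this node ...
      }
    }
  }

The saving in this sketch comes from computing LastNodeIds once per map
entry rather than recomputing context ids at every node of every call.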

This reduced the thin link time by 43% for a large target. It isn't
clear why a corresponding increase wasn't measured when the on-the-fly
node context id computation was originally introduced, but the compile
time was already longer at that point.


