[all-commits] [llvm/llvm-project] dc3f8c: [memprof] Improve deserialization performance in V...

Kazu Hirata via All-commits all-commits at lists.llvm.org
Fri Jun 7 17:26:20 PDT 2024


  Branch: refs/heads/main
  Home:   https://github.com/llvm/llvm-project
  Commit: dc3f8c2f587e9647d1ce86426c2cf317c98f453c
      https://github.com/llvm/llvm-project/commit/dc3f8c2f587e9647d1ce86426c2cf317c98f453c
  Author: Kazu Hirata <kazu at google.com>
  Date:   2024-06-07 (Fri, 07 Jun 2024)

  Changed paths:
    M llvm/include/llvm/ProfileData/MemProf.h
    M llvm/lib/ProfileData/InstrProfWriter.cpp
    M llvm/lib/ProfileData/MemProf.cpp
    M llvm/unittests/ProfileData/MemProfTest.cpp

  Log Message:
  -----------
  [memprof] Improve deserialization performance in V3 (#94787)

We call llvm::sort in two places in the V3 encoding:

- We sort Frames by FrameIds for stability of the output.

- We sort call stacks in dictionary order to maximize the length of
  the common prefix between adjacent call stacks.

It turns out that we can improve deserialization performance by
modifying these comparison functions -- without changing the format at
all.  Both comparisons take advantage of a histogram of Frames -- how
many times each Frame occurs across the call stacks.
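
As a rough, self-contained sketch (plain standard C++ rather than the
actual code in MemProf.h; the function name and container shapes are
assumptions), the histogram is just a count of how many times each
FrameId occurs across all the call stacks:

  #include <cstdint>
  #include <unordered_map>
  #include <vector>

  using FrameId = uint64_t; // stand-in for llvm::memprof::FrameId

  // Count how many times each Frame occurs across all call stacks.
  std::unordered_map<FrameId, uint64_t>
  computeFrameHistogram(const std::vector<std::vector<FrameId>> &CallStacks) {
    std::unordered_map<FrameId, uint64_t> Histogram;
    for (const std::vector<FrameId> &CS : CallStacks)
      for (FrameId F : CS)
        ++Histogram[F];
    return Histogram;
  }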

- Frames: We serialize Frames in descending order of popularity for
  improved cache locality.  For two equally popular Frames, we break
  the tie by serializing the one that tends to appear earlier in call
  stacks.  Here, "earlier" means a smaller index within
  llvm::SmallVector<FrameId>.

- Call Stacks: We sort the call stacks to reduce the number of times
  we follow pointers to parents during deserialization.  Specifically,
  instead of comparing two call stacks strcmp-style with integer
  comparisons of FrameIds, we compare the FrameIds F1 and F2 at
  corresponding indexes with Histogram[F1] < Histogram[F2].  Since we
  encode from the end of the sorted list of call stacks, we tend to
  encode popular call stacks first.  Both comparison functions are
  sketched after this list.
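
For illustration only (the names, the FrameStat bookkeeping, and the
container shapes below are simplified assumptions, not the code in
MemProf.cpp or InstrProfWriter.cpp), the two comparison functions
described above look roughly like this:

  #include <algorithm>
  #include <cstdint>
  #include <unordered_map>
  #include <vector>

  using FrameId = uint64_t; // stand-in for llvm::memprof::FrameId

  // Per-Frame statistics derived from the call stacks.  How "tends to
  // appear earlier" is measured is an assumption: here it is the sum of
  // the indexes at which the Frame occurs (for equal counts, a smaller
  // sum means the Frame appears earlier on average).
  struct FrameStat {
    uint64_t Count = 0;       // occurrences across all call stacks
    uint64_t PositionSum = 0; // sum of the indexes of those occurrences
  };

  // Frames: descending popularity; ties go to the Frame that tends to
  // appear earlier in call stacks.
  void sortFrames(std::vector<FrameId> &Frames,
                  const std::unordered_map<FrameId, FrameStat> &Stats) {
    std::sort(Frames.begin(), Frames.end(), [&](FrameId L, FrameId R) {
      const FrameStat &SL = Stats.at(L);
      const FrameStat &SR = Stats.at(R);
      if (SL.Count != SR.Count)
        return SL.Count > SR.Count;           // more popular Frames first
      return SL.PositionSum < SR.PositionSum; // earlier Frames first
    });
  }

  // Call stacks: lexicographic comparison, but a pair of FrameIds F1 and
  // F2 is ordered by Histogram[F1] < Histogram[F2] rather than F1 < F2.
  void sortCallStacks(std::vector<std::vector<FrameId>> &CallStacks,
                      const std::unordered_map<FrameId, FrameStat> &Stats) {
    std::sort(CallStacks.begin(), CallStacks.end(),
              [&](const std::vector<FrameId> &L,
                  const std::vector<FrameId> &R) {
                return std::lexicographical_compare(
                    L.begin(), L.end(), R.begin(), R.end(),
                    [&](FrameId F1, FrameId F2) {
                      return Stats.at(F1).Count < Stats.at(F2).Count;
                    });
              });
  }

Ordering by histogram counts rather than raw FrameIds still tends to
place call stacks that share prefixes near each other, while biasing
the order so that, since encoding proceeds from the end of the sorted
list, popular call stacks are encoded first.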

Since both comparisons use the same histogram, we compute it once and
share it between them.

Sorting the call stacks reduces the number of "jumps" by 74% when we
deserialize all MemProfRecords.  The cycle and instruction counts go
down by 10% and 1.5%, respectively.

If we sort the Frames in addition to the call stacks, then the cycle
and instruction counts go down by 14% and 1.6%, respectively, relative
to the same baseline (that is, without this patch).


