[llvm-dev] RFC: Sanitizer-based Heap Profiler

Xinliang David Li via llvm-dev llvm-dev at lists.llvm.org
Thu Jul 8 09:54:33 PDT 2021


On Thu, Jul 8, 2021 at 8:03 AM Andrey Bokhanko <andreybokhanko at gmail.com>
wrote:

> Hi Teresa,
>
> One more thing, if you don't mind.
>
> On Tue, Jul 6, 2021 at 12:54 AM Teresa Johnson <tejohnson at google.com>
> wrote:
>
>> We initially plan to use the profile information to provide guidance to
>> the dynamic allocation runtime on data allocation and placement. We'll send
>> more details on that when it is fleshed out too.
>>
>
> I played with the current implementation, and became a bit concerned about
> whether the current data profile is sufficient for an efficient data
> allocation optimization.
>
> First, there is no information on temporal locality -- only the
> total_lifetime of an allocation block is recorded, not its start / end
> times -- let alone timestamps of actual memory accesses. I wonder what
> criteria a data profile-based allocation runtime would use to decide to
> allocate two blocks from the same memory chunk?
>

First, I agree that per-allocation start/end times should be added to
approximate temporal locality.
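
To make that concrete, here is a minimal sketch (with made-up values, not
part of the current profile format) of how recorded start/end times could
approximate temporal locality via lifetime-interval overlap:

```python
def overlap(a_start, a_end, b_start, b_end):
    """Length of time during which two allocations are simultaneously live."""
    return max(0, min(a_end, b_end) - max(a_start, b_start))

# Two blocks whose lifetimes overlap heavily are candidates for
# co-allocation; blocks with disjoint lifetimes are not.
print(overlap(0, 100, 50, 120))  # overlap of 50
print(overlap(0, 10, 20, 30))    # disjoint -> 0
```

Total lifetime alone cannot distinguish these two cases, which is why the
start/end times matter.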

Not tracking detailed temporal locality information is by design, for
various reasons:

1.  This can be done with static analysis instead. The idea is for the
compiler to instrument a potentially hot access region and profile the start
and end addresses of the accessed memory regions. This information can be
combined with the regular heap profile data. In the profile-use phase, the
compiler can perform access-pattern analysis and produce an affinity graph.

2.  We try to make use of the existing allocator runtime (tcmalloc) for
locality optimization. The runtime has been tuned for years to have the
most efficient code for fast-path allocation.  For hot allocation sites,
adding extra work (e.g. via wrappers) can introduce overhead that totally
eats up the gains from the locality optimization;

3. tcmalloc currently uses size-class-based partitioning, which makes
co-allocation of small objects from different size classes impossible. Even
for objects of the same type/size, due to the use of free lists, there is
no guarantee that consecutively allocated objects are placed together.

4. A bump-pointer allocator has its own set of problems -- when not used
carefully, it can lead to huge memory waste due to fragmentation.  In
reality it only helps group the initial set of allocations, while the
pointer bumps continuously -- in steady state, allocations will again be
all over the place and no contiguity can be guaranteed.
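
To illustrate point 3, here is a minimal sketch of size-class rounding (the
class table below is made up for illustration, not tcmalloc's actual one):

```python
# Hypothetical size classes, in bytes (NOT tcmalloc's real table).
SIZE_CLASSES = [8, 16, 32, 48, 64, 96, 128]

def size_class(n):
    """Round a request up to the smallest class that fits it."""
    for c in SIZE_CLASSES:
        if n <= c:
            return c
    raise ValueError("large allocation, handled separately")

# A 20-byte and a 40-byte object fall into different size classes, so
# they are served from different spans and can never end up in the
# same memory chunk, regardless of how they are accessed.
print(size_class(20))  # 32
print(size_class(40))  # 48
```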

This is why we initially focus on coarser-grained locality optimizations --
1) co-placement to improve DTLB performance, and 2) improving dcache
utilization using only lifetime and hotness information.

Longer term, we need to beef up compiler-based analysis -- objects with
exactly the same lifetimes can be safely co-allocated via compiler-based
transformation. Also, objects with similar lifetimes can be co-allocated
without introducing too much fragmentation.
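
One way the similar-lifetime grouping could look in practice -- site names,
lifetimes, and the similarity threshold below are all illustrative, not
from the actual profile format:

```python
# Hypothetical per-site profile: (allocation site, avg lifetime in ms).
profile = [("A", 5), ("B", 6), ("C", 500), ("D", 520), ("E", 40)]

def group_by_lifetime(sites, ratio=2.0):
    """Greedily bucket sites whose average lifetimes are within a
    factor of `ratio` of the bucket's shortest-lived member."""
    buckets = []
    for site, life in sorted(sites, key=lambda s: s[1]):
        for b in buckets:
            if life <= b[0][1] * ratio:
                b.append((site, life))
                break
        else:
            buckets.append([(site, life)])
    return buckets

# Sites A/B and C/D have similar lifetimes and end up co-allocated;
# E's lifetime matches neither group, so it gets its own bucket.
for b in group_by_lifetime(profile):
    print([s for s, _ in b])
```

Keeping objects with widely different lifetimes apart is what bounds the
fragmentation cost of this kind of grouping.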


Thanks,

David


>
> Second, according to the data from [Savage'20], memory-access affinity
> (= spatial distance between temporally close memory accesses from two
> different allocated blocks) is crucial: figure #12 demonstrates that this
> is vital for the omnetpp benchmark from SPEC CPU 2017.
>
> That said, my concerns are based essentially on a single paper that
> employs specific algorithms to guide memory allocation and measures their
> impact on a specific set of benchmarks. I wonder if you have preliminary
> data that validates the sufficiency of the implemented data profile for
> efficient optimization of heap memory allocations?
>
> References:
> [Savage'20] Savage, J., & Jones, T. M. (2020). HALO: Post-Link Heap-Layout
> Optimisation. CGO 2020: Proceedings of the 18th ACM/IEEE International
> Symposium on Code Generation and Optimization,
> https://doi.org/10.1145/3368826.3377914
>
> Yours,
> Andrey
>
>

