<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Jul 12, 2021 at 6:40 PM Wenlei He <<a href="mailto:wenlei@fb.com">wenlei@fb.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div lang="EN-US" style="overflow-wrap: break-word;">
<div class="gmail-m_3100698701968559501WordSection1">
<p class="MsoNormal">This is a big undertaking with good potential but also some uncertainty about how effective such optimizations are for larger workloads, so I really appreciate the pioneering effort in LLVM.<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">We (facebook) are very interested in this too. I reached out to David and Teresa a while ago about this, and was going to wait for the RFC before having more detailed discussions. But now that we're discussing it, here’s my two cents
about the responsibility division between compiler and allocator, and the API.<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">I think it'd be beneficial to let the compiler do more of the heavy lifting instead of relying heavily on the allocator. If we rely on less magic inside the allocator, we will likely benefit more users, who may use different allocators. Otherwise there's
a risk that the compiler part may be too coupled with a specific allocator, which limits the overall effectiveness of PGHO outside of that allocator.</p></div></div></blockquote><div><br></div><div>I agree with this in general. At a high level, there are multiple different types of locality optimizations, each requiring different treatment for peak performance:</div><div><br></div><div>1) Locality policies fully enforced by the compiler. Examples include co-allocation of different types of small objects with the same lifetime (which is statically determinable). </div><div>2) Locality policies determined by the compiler but provided as hints to the allocator(s). Examples include coarse-grained locality optimizations as well as co-allocations for objects whose lifetimes cannot be statically proved to be identical. The allocator provides APIs to pass the hints, but can choose to ignore them. Due to free lists and fragmentation, there is no guarantee that two small objects in the same locality group (compiler hint) will be allocated in the same cache line.</div><div>3) A hybrid of the two -- some compiler transformations are needed, but they also rely on collaboration from the allocator. An example is struct splitting: blindly splitting the struct without a hint to the runtime won't prevent the cold parts from polluting the cache.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div lang="EN-US" style="overflow-wrap: break-word;"><div class="gmail-m_3100698701968559501WordSection1"><p class="MsoNormal">
<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">This also affects what we want to expose in the new API for hinting the allocator (e.g. providing a grouping or arena-like hint computed by the compiler vs. passing a number of factors through the API that would help compute the hint inside the allocator).
With a general, stable API, hopefully we won't need to change the API when we want to take more runtime information (temporal, etc., even just for experiments) into account, or when we improve and leverage more from compiler analysis (I agree that in the long run we should
improve compiler analysis).</p></div></div></blockquote><div><br></div><div>Agree. </div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div lang="EN-US" style="overflow-wrap: break-word;"><div class="gmail-m_3100698701968559501WordSection1"><p class="MsoNormal"><u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">I've talked with the jemalloc folks on our side, and we're flexible about API changes. In this case, it makes sense to avoid abstraction overhead from wrappers.
<u></u><u></u></p>
<p class="MsoNormal"><u></u> </p></div></div></blockquote><div><br></div><div>Yes. A mature allocator is tuned over years to reduce allocation overhead as well as to minimize memory fragmentation. Duplicating that in wrappers would be a huge task -- except for special cases.</div><div><br></div><div>thanks,</div><div><br></div><div>David</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div lang="EN-US" style="overflow-wrap: break-word;"><div class="gmail-m_3100698701968559501WordSection1"><p class="MsoNormal"><u></u></p>
<p class="MsoNormal">Looking forward to the RFC and more discussions on this.<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">Thanks,<u></u><u></u></p>
<p class="MsoNormal">Wenlei<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<div style="border-right:none;border-bottom:none;border-left:none;border-top:1pt solid rgb(181,196,223);padding:3pt 0in 0in">
<p class="MsoNormal" style="margin-bottom:12pt"><b><span style="font-size:12pt;color:black">From:
</span></b><span style="font-size:12pt;color:black">llvm-dev <<a href="mailto:llvm-dev-bounces@lists.llvm.org" target="_blank">llvm-dev-bounces@lists.llvm.org</a>> on behalf of Xinliang David Li via llvm-dev <<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>><br>
<b>Date: </b>Thursday, July 8, 2021 at 9:55 AM<br>
<b>To: </b>Andrey Bokhanko <<a href="mailto:andreybokhanko@gmail.com" target="_blank">andreybokhanko@gmail.com</a>><br>
<b>Cc: </b>llvm-dev <<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>><br>
<b>Subject: </b>Re: [llvm-dev] RFC: Sanitizer-based Heap Profiler<u></u><u></u></span></p>
</div>
<div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<p class="MsoNormal"><u></u> <u></u></p>
<div>
<div>
<p class="MsoNormal">On Thu, Jul 8, 2021 at 8:03 AM Andrey Bokhanko <<a href="mailto:andreybokhanko@gmail.com" target="_blank">andreybokhanko@gmail.com</a>> wrote:<u></u><u></u></p>
</div>
<blockquote style="border-top:none;border-right:none;border-bottom:none;border-left:1pt solid rgb(204,204,204);padding:0in 0in 0in 6pt;margin-left:4.8pt;margin-right:0in">
<div>
<div>
<p class="MsoNormal">Hi Teresa,<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">One more thing, if you don't mind.<u></u><u></u></p>
</div>
<p class="MsoNormal"><u></u> <u></u></p>
<div>
<div>
<p class="MsoNormal">On Tue, Jul 6, 2021 at 12:54 AM Teresa Johnson <<a href="mailto:tejohnson@google.com" target="_blank">tejohnson@google.com</a>> wrote:<u></u><u></u></p>
</div>
<blockquote style="border-top:none;border-right:none;border-bottom:none;border-left:1pt solid rgb(204,204,204);padding:0in 0in 0in 6pt;margin-left:4.8pt;margin-right:0in">
<div>
<div>
<p class="MsoNormal">We initially plan to use the profile information to provide guidance to the dynamic allocation runtime on data allocation and placement. We'll send more details on that when it is fleshed out too. <u></u><u></u></p>
</div>
</div>
</blockquote>
<div>
<p class="MsoNormal"> <u></u><u></u></p>
</div>
<div>
<p class="MsoNormal">I played with the current implementation, and became a bit concerned about whether the current data profile is sufficient for efficient data allocation optimization.<u></u><u></u></p>
</div>
</div>
</div>
</blockquote>
<div>
<p class="MsoNormal"> <u></u><u></u></p>
</div>
<blockquote style="border-top:none;border-right:none;border-bottom:none;border-left:1pt solid rgb(204,204,204);padding:0in 0in 0in 6pt;margin-left:4.8pt;margin-right:0in">
<div>
<div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">First, there is no information on temporal locality -- only the total_lifetime of an allocation block is recorded, not start/end times -- let alone timestamps of actual memory accesses. I wonder what criteria would be used by a data profile-based
allocation runtime to decide to allocate two blocks from the same memory chunk?<u></u><u></u></p>
</div>
</div>
</div>
</blockquote>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">First, I think per-allocation start/end times should be added to approximate temporal locality. <u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">Not tracking detailed temporal locality information is by design, for a variety of reasons:<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">1. This can be done with static analysis. The idea is for the compiler to instrument a potentially hot access region and profile the start and end addresses of the accessed memory regions. This information can be combined with the regular
heap profile data. In the profile-use phase, the compiler can perform access pattern analysis and produce an affinity graph.<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">2. We try to make use of the existing allocator runtime (tcmalloc) for locality optimization. The runtime has been tuned for years to have the most efficient code for fast-path allocation. For hot allocation sites, adding too much overhead
(e.g. via wrappers) can totally eat up the gains from the locality optimization;<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">3. tcmalloc currently uses size-class-based partitioning, which makes co-allocation of small objects of different size classes impossible. Even for objects with the same type/size, due to the use of free lists, there is no guarantee that
consecutively allocated objects are placed together.<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">4. A bump-pointer allocator has its own set of problems -- when not used carefully, it can lead to huge memory waste due to fragmentation. In reality it only helps grouping for the initial set of allocations, while the pointer bumps continuously
-- in the steady state, allocations will be scattered all over the place and no contiguity can be guaranteed. <u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">This is why we initially focus on more coarse-grained locality optimizations -- 1) co-placement to improve DTLB performance, and 2) improving dcache utilization using only lifetime and hotness information.<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">Longer term, we need to beef up compiler-based analysis -- objects with exactly the same lifetimes can be safely co-allocated via compiler-based transformation. Objects with merely similar lifetimes can also be co-allocated without introducing too much
fragmentation.<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">Thanks,<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">David<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"> <u></u><u></u></p>
</div>
<blockquote style="border-top:none;border-right:none;border-bottom:none;border-left:1pt solid rgb(204,204,204);padding:0in 0in 0in 6pt;margin-left:4.8pt;margin-right:0in">
<div>
<div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">Second, according to the data from [Savage'20], memory access affinity (= spatial distance between temporally close memory accesses from two different allocated blocks) is crucial: figure #12 demonstrates that this is vital for the omnetpp benchmark
from SPEC CPU 2017.<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">That said, my concerns are essentially based on a single paper that employs specific algorithms to guide memory allocation and measures their impact on a specific set of benchmarks. I wonder if you have preliminary data that validates the sufficiency of
the implemented data profile for efficient optimization of heap memory allocations?<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">References:<u></u><u></u></p>
</div>
<div>
<div>
<p class="MsoNormal">[Savage'20] Savage, J., & Jones, T. M. (2020). HALO: Post-Link Heap-Layout Optimisation. CGO 2020: Proceedings of the 18th ACM/IEEE International Symposium on Code Generation and Optimization, <a href="https://doi.org/10.1145/3368826.3377914" target="_blank">https://doi.org/10.1145/3368826.3377914</a><u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
</div>
<div>
<p class="MsoNormal">Yours,<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal">Andrey<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</div>
</blockquote></div></div>