[PATCH] D87120: [HeapProf] Heap profiling runtime support

Vitaly Buka via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Fri Sep 4 05:02:49 PDT 2020


vitalybuka added inline comments.


================
Comment at: compiler-rt/lib/heapprof/heapprof_allocator.cpp:62-113
+// The memory chunk allocated from the underlying allocator looks like this:
+// H H U U U U U U
+//   H -- ChunkHeader (32 bytes)
+//   U -- user memory.
+
+// If there is left padding before the ChunkHeader (due to use of memalign),
+// we store a magic value in the first uptr word of the memory block and
----------------
All this ChunkHeader stuff is an ASan optimization hack to reuse redzones.
There is metadata support in the underlying allocators.
Please take a look at hwasan_allocator.
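
Roughly, the hwasan-style shape would be the following. Sketch only, not the patch's code: ChunkMetadata, GetAllocatorCache() and the field names are made up for illustration, and the usual CombinedAllocator globals from the sanitizer allocator boilerplate are assumed.

  // Metadata lives in the kMetadataSize bytes the underlying allocator
  // reserves next to each chunk, instead of a header carved out of the
  // user block / redzone as ASan does.
  struct ChunkMetadata {        // hypothetical name
    u64 requested_size;         // size the user asked for
    u32 alloc_context_id;       // stack depot id of the allocation stack
  };

  static void *Allocate(uptr size, uptr alignment, BufferedStackTrace *stack) {
    void *p = allocator.Allocate(GetAllocatorCache(), size, alignment);
    if (UNLIKELY(!p))
      return nullptr;
    // The allocator hands back the metadata slot for this chunk; no magic
    // values or left-padding tricks are needed for memalign'ed blocks.
    ChunkMetadata *meta =
        reinterpret_cast<ChunkMetadata *>(allocator.GetMetaData(p));
    meta->requested_size = size;
    meta->alloc_context_id = StackDepotPut(*stack);
    return p;
  }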


================
Comment at: compiler-rt/lib/heapprof/heapprof_allocator.cpp:561
+    }
+    if (using_primary_allocator) {
+      CHECK(size);
----------------
There is also an ASan-specific inconsistency here: the primary/secondary difference exists because the primary allocator reuses redzones and the secondary does not.
Unfortunately ASan is the worst example to follow; msan, hwasan and lsan have less unneeded stuff.
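
With allocator-side metadata there is no primary/secondary branch on the free path either; something like the following (same hypothetical ChunkMetadata as in the sketch above, RecordFree is a made-up profiling hook):

  static void Deallocate(void *p) {
    // Primary and secondary chunks carry the same metadata block, so the
    // requested size is read the same way for either allocator.
    ChunkMetadata *meta =
        reinterpret_cast<ChunkMetadata *>(allocator.GetMetaData(p));
    uptr requested_size = meta->requested_size;
    meta->requested_size = 0;
    RecordFree(p, requested_size);  // hypothetical profiling hook
    allocator.Deallocate(GetAllocatorCache(), p);
  }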


================
Comment at: compiler-rt/lib/heapprof/heapprof_allocator.h:44-95
+#if SANITIZER_CAN_USE_ALLOCATOR64
+const uptr kAllocatorSpace = 0x600000000000ULL;
+const uptr kAllocatorSize = 0x40000000000ULL; // 4T.
+typedef DefaultSizeClassMap SizeClassMap;
+template <typename AddressSpaceViewTy>
+struct AP64 { // Allocator64 parameters. Deliberately using a short name.
+  static const uptr kSpaceBeg = kAllocatorSpace;
----------------
This can probably be moved into the .cpp file.
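
E.g. keep only what other TUs actually need in the header and park the parameter structs in an unnamed namespace in the .cpp. Sketch only: the field list below just follows the usual AP64 shape from sanitizer_common, not necessarily the patch's exact values, and if the header needs AllocatorCache for the TLS storage, that part has to stay.

  // heapprof_allocator.cpp
  namespace {

  const uptr kAllocatorSpace = 0x600000000000ULL;
  const uptr kAllocatorSize = 0x40000000000ULL;  // 4T.

  template <typename AddressSpaceViewTy>
  struct AP64 {  // Allocator64 parameters.
    static const uptr kSpaceBeg = kAllocatorSpace;
    static const uptr kSpaceSize = kAllocatorSize;
    static const uptr kMetadataSize = 0;  // or sizeof the chunk metadata
    typedef DefaultSizeClassMap SizeClassMap;
    typedef NoOpMapUnmapCallback MapUnmapCallback;
    static const uptr kFlags = 0;
    using AddressSpaceView = AddressSpaceViewTy;
  };

  template <typename AddressSpaceView>
  using PrimaryAllocatorASVT = SizeClassAllocator64<AP64<AddressSpaceView>>;

  }  // namespace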


================
Comment at: compiler-rt/lib/heapprof/heapprof_allocator.h:63-76
+typedef CompactSizeClassMap SizeClassMap;
+template <typename AddressSpaceViewTy> struct AP32 {
+  static const uptr kSpaceBeg = 0;
+  static const u64 kSpaceSize = SANITIZER_MMAP_RANGE_SIZE;
+  static const uptr kMetadataSize = 16;
+  typedef __heapprof::SizeClassMap SizeClassMap;
+  static const uptr kRegionSizeLog = 20;
----------------
If this is only targeted at __x86_64__, you don't need AP32.
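
I.e. if x86_64 is the only supported target for now, the 32-bit path can simply be compiled out; a sketch (the #error text is only illustrative):

  #if SANITIZER_CAN_USE_ALLOCATOR64
  // AP64 / SizeClassAllocator64 definitions as above.
  #else
  #error "heapprof currently only supports 64-bit allocator targets (x86_64)"
  #endif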


================
Comment at: compiler-rt/lib/sanitizer_common/sanitizer_stackdepotbase.h:182
+    uptr v = atomic_load(p, memory_order_relaxed);
+    Node *s = (Node *)(v & ~1UL);
+    for (; s; s = s->link) {
----------------
We should probably lock/unlock here.
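
E.g. something along these lines, assuming the existing per-bucket lock()/unlock() helpers in StackDepotBase (which own the low bit this code is masking off by hand) are the right tool here:

  // Hold the bucket lock while walking the chain so a concurrent
  // StackDepotPut can't splice in a node mid-iteration.
  Node *s = lock(p);
  for (Node *t = s; t; t = t->link) {
    // ... visit t ...
  }
  unlock(p, s);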


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D87120/new/

https://reviews.llvm.org/D87120


