[PATCH] D87120: [HeapProf] Heap profiling runtime support
Vitaly Buka via Phabricator via llvm-commits
llvm-commits at lists.llvm.org
Sun Sep 6 21:00:40 PDT 2020
vitalybuka added inline comments.
================
Comment at: compiler-rt/lib/heapprof/heapprof_allocator.cpp:78-95
+struct ChunkHeader {
+ // 1-st 4 bytes.
+ u32 from_memalign : 1;
+ // This field is used for small sizes. For large sizes it is equal to
+ // SizeClassMap::kMaxSize and the actual size is stored in the
+ // SecondaryAllocator's metadata.
+ u32 user_requested_size : 29;
----------------
If you have dummy bits, you can use them to widen user_requested_size and avoid the special case for SizeClassMap::kMaxSize.
You can probably also steal 32 bits from alloc_context_id and use them as an offset;
this way you don't need metadata at all:
if the offset is 0, the chunk starts with the header; otherwise add the offset in bytes.
If I understand correctly, the offset is a function of (alloc_beg, alignment),
and if the alignment is 2^x, then just one byte for alignmentLog2 is enough.
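Roughly, a packed header along those lines could look like this (just a sketch; the field names and widths are hypothetical, and u32/u64 come from sanitizer_internal_defs.h):

  #include "sanitizer_common/sanitizer_internal_defs.h"
  using namespace __sanitizer;

  struct ChunkHeader {
    // 1st 8 bytes.
    u32 alloc_context_id;
    // Offset in bytes from alloc_beg to this header; 0 means the chunk
    // starts with the header. If the alignment is always a power of two,
    // a single u8 alignment_log2 byte would be enough instead.
    u32 offset_to_alloc_beg;
    // 2nd 8 bytes: wide enough that large sizes need no special case and
    // no SecondaryAllocator metadata.
    u64 user_requested_size : 54;
    u64 from_memalign : 1;
    u64 unused : 9;
  };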
================
Comment at: compiler-rt/lib/heapprof/heapprof_allocator.cpp:62-113
+// The memory chunk allocated from the underlying allocator looks like this:
+// H H U U U U U U
+// H -- ChunkHeader (32 bytes)
+// U -- user memory.
+
+// If there is left padding before the ChunkHeader (due to use of memalign),
+// we store a magic value in the first uptr word of the memory block and
----------------
tejohnson wrote:
> vitalybuka wrote:
> > All this ChunkHeader stuff is an ASan optimization hack to reuse the redzones.
> > There is metadata support in the underlying allocators.
> > Please take a look at hwasan_allocator.
> Note that this serves a specific purpose in the heap profiling allocator as well (in fact, having a 32-byte header was part of the original design): it allows us to support a 64-byte shadow counter granularity with only 32-byte alignment, so there is less overhead needed to avoid counter aliasing.
As I understand it, this is just an optimization: the trick lets ASan put the header into the redzone (in your case, into the alignment padding).
I am not sure how important it is for ASan, or why @eugenis decided not to use it and just used GetMetaData instead (D40935).
HWASan uses 64-byte alignment as well.
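For comparison, the HWASan-style scheme keeps the per-chunk data in the combined allocator's metadata rather than in an inline header. A loose sketch (assuming Allocator/AllocatorCache typedefs set up as in hwasan_allocator.h and kMetadataSize >= sizeof(Metadata); the Metadata fields here are placeholders, not the actual heapprof ones):

  struct Metadata {
    u64 requested_size;
    u64 alloc_context_id;
  };

  static Allocator allocator;  // CombinedAllocator over the primary

  static void *Allocate(AllocatorCache *cache, uptr size, uptr alignment,
                        u64 alloc_context_id) {
    void *allocated = allocator.Allocate(cache, size, alignment);
    if (UNLIKELY(!allocated))
      return nullptr;
    Metadata *meta =
        reinterpret_cast<Metadata *>(allocator.GetMetaData(allocated));
    meta->requested_size = size;
    meta->alloc_context_id = alloc_context_id;
    return allocated;
  }

  static uptr AllocationSize(const void *p) {
    Metadata *meta = reinterpret_cast<Metadata *>(allocator.GetMetaData(p));
    return meta ? meta->requested_size : 0;
  }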
================
Comment at: compiler-rt/lib/heapprof/heapprof_allocator.cpp:561
+ }
+ if (using_primary_allocator) {
+ CHECK(size);
----------------
tejohnson wrote:
> vitalybuka wrote:
> > There is also an ASan-specific inconsistency here:
> > the primary/secondary difference exists because the primary reuses the redzones and the secondary does not.
> > Unfortunately, ASan is the worst example to copy from;
> > MSan, HWASan and LSan have less unneeded stuff.
> Can you clarify why this isn't needed here? I do see references to the primary/secondary allocator in both hwasan and lsan. In fact lsan has code that conditionally memsets to 0 if it is from primary (similar to what is further below here in asan).
The primary vs. secondary split in ASan is the result of:
1. The secondary needs a right redzone - irrelevant here.
2. meta[1] contains chunk_beg, but I am already removing that part from ASan in D86932, and I explained above how to avoid using GetMetaData here as well.
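That is, with an offset (or alignment_log2) field in the ChunkHeader as sketched in the first comment, chunk_beg/alloc_beg can be recovered from the header itself, without a GetMetaData call (hypothetical field name again):

  static void *GetAllocBeg(ChunkHeader *h) {
    // An offset of 0 means the chunk starts with the header.
    return reinterpret_cast<char *>(h) - h->offset_to_alloc_beg;
  }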
================
Comment at: compiler-rt/lib/heapprof/heapprof_allocator.h:67
+ static const u64 kSpaceSize = SANITIZER_MMAP_RANGE_SIZE;
+ static const uptr kMetadataSize = 16;
+ typedef __heapprof::SizeClassMap SizeClassMap;
----------------
That was a typo in ASan; neither ASan nor heapprof uses metadata with any primary allocator.
But I believe you don't need AP32 at all if 32-bit is not a target.
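For example, a 64-bit-only parameter block without primary-allocator metadata could look roughly like this (the kSpaceBeg/kSpaceSize values and the MapUnmapCallback name are placeholders, not taken from the patch):

  template <typename AddressSpaceViewTy>
  struct AP64 {  // Allocator params for SizeClassAllocator64.
    static const uptr kSpaceBeg = ~(uptr)0;           // choose the region dynamically
    static const uptr kSpaceSize = 0x40000000000ULL;  // placeholder
    static const uptr kMetadataSize = 0;              // no per-chunk metadata needed
    typedef __heapprof::SizeClassMap SizeClassMap;
    typedef HeapprofMapUnmapCallback MapUnmapCallback;  // placeholder name
    static const uptr kFlags = 0;
    using AddressSpaceView = AddressSpaceViewTy;
  };
  template <typename AddressSpaceView>
  using PrimaryAllocatorASVT = SizeClassAllocator64<AP64<AddressSpaceView>>;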
================
Comment at: compiler-rt/lib/sanitizer_common/sanitizer_linux.cpp:752
+
int internal_dlinfo(void *handle, int request, void *p) {
#if SANITIZER_FREEBSD
----------------
Would you like to extract the sanitizer_common changes into a separate patch?
To my taste, I'd cut this patch into a dozen smaller ones.
Repository:
rG LLVM Github Monorepo
CHANGES SINCE LAST ACTION
https://reviews.llvm.org/D87120/new/
https://reviews.llvm.org/D87120
More information about the llvm-commits mailing list