[PATCH] D38245: [Sanitizers] Allocator: new "release memory to OS" implementation

Aleksey Shlyapnikov via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Mon Sep 25 11:30:16 PDT 2017


alekseyshl created this revision.
Herald added subscribers: mehdi_amini, kubamracek.

The current implementation of the allocator returning freed memory
back to the OS (controlled by the allocator_release_to_os_interval_ms
flag) requires sorting the free chunks list, which has two major
issues: first, when the free list grows to millions of chunks, sorting,
even the fastest one, is simply too slow; and second, sorting chunks in
place is unacceptable for the Scudo allocator, as it makes allocations
more predictable and therefore less secure.

The proposed approach is linear in complexity (although it requires
quite a bit more temporary memory). The idea is to count the number of
free chunks on each memory page and release only the pages consisting
entirely of free chunks. It takes one iteration over the free list of
chunks and one iteration over the array of page counters. The obvious
disadvantage is the allocation of that counter array, but even in the
worst case we support (4T of allocator space and 64 buckets give a
64GB region per bucket, i.e. ~17M 4K page counters; with a 16-byte
bucket size and a full free list, up to 256 free chunks can land on a
single page, so each counter needs 2 bytes), it requires only about
34MB for the intermediate buffer (compared to ~64GB of actually
allocated chunks), and usually it stays under 100K and is released
after each use. Releasing memory back to the OS is expected to be a
relatively rare event, so keeping the buffer between those runs, with
the added complexity of the bookkeeping, seems unnecessary here (it
can always be improved later, though, never say never).
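
Here is a minimal sketch of the two passes in C++; the layout
assumption (chunks packed back to back from the start of the region)
and the names, the vector of free chunk offsets and the returned page
list are illustrative, not the actual patch code:

#include <cstdint>
#include <vector>

// Pass 1: count the free chunks touching each page. Pass 2: a page
// whose free count equals the total number of chunks intersecting it
// holds no live data and can be returned to the OS (the real allocator
// would madvise() it away).
std::vector<uint64_t> FullyFreePages(const std::vector<uint64_t> &free_offsets,
                                     uint64_t page_size, uint64_t chunk_size,
                                     uint64_t num_pages) {
  std::vector<uint16_t> counters(num_pages, 0);  // the temporary buffer
  for (uint64_t off : free_offsets) {
    uint64_t first = off / page_size;                    // first page touched
    uint64_t last = (off + chunk_size - 1) / page_size;  // last page touched
    for (uint64_t p = first; p <= last && p < num_pages; p++)
      counters[p]++;
  }
  std::vector<uint64_t> releasable;
  for (uint64_t p = 0; p < num_pages; p++) {
    uint64_t begin = p * page_size, end = begin + page_size;
    // Chunks intersecting page p: from the chunk covering its first
    // byte to the chunk covering its last byte.
    uint64_t chunks_on_page = (end - 1) / chunk_size - begin / chunk_size + 1;
    if (counters[p] == chunks_on_page)
      releasable.push_back(p);
  }
  return releasable;
}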

The most interesting problem here is how to calculate the number of chunks
falling into each memory page in the bucket. Skipping all the details,
there are three cases when the number of chunks per page is constant:

1. P >= C, P % C == 0 --> N = P / C
2. C > P , C % P == 0 --> N = 1
3. C <= P, P % C != 0 && C % (P % C) == 0 --> N = P / C + 1

where P is the page size, C is the chunk size and N is the number of
chunks per page. In all the remaining cases the number of chunks per
page is not constant and is calculated on the go, during the iteration
over the page counter array.
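
A small sketch of that classification in C++; the function name is
hypothetical, the conditions are exactly the three cases listed above:

#include <cstdint>
#include <cstdio>

// Returns true and sets *n when every page of the bucket intersects
// the same number of chunks.
bool ConstantChunksPerPage(uint64_t P, uint64_t C, uint64_t *n) {
  if (P >= C && P % C == 0) {  // case 1: whole chunks tile the page
    *n = P / C;
    return true;
  }
  if (C > P && C % P == 0) {  // case 2: each page lies inside one chunk
    *n = 1;
    return true;
  }
  if (C <= P && P % C != 0 && C % (P % C) == 0) {  // case 3: regular straddle
    *n = P / C + 1;
    return true;
  }
  return false;  // irregular: count on the go during the iteration
}

int main() {
  uint64_t n;
  // Example: 4K pages, 3072-byte chunks fall into case 3, so two
  // chunks touch every page.
  if (ConstantChunksPerPage(4096, 3072, &n))
    printf("chunks per page: %llu\n", (unsigned long long)n);
}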

Among the remaining cases there are still ones where N can be deduced
from the page index, but they would save not that much computation per
page compared to the current "brute force" way, and 2/3 of the buckets
fall into the first three categories anyway, so, for the sake of
simplicity, it was decided to stick to those two variations. It can
always be refined and improved later, should we see that the brute
force way slows us down unacceptably.
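
For illustration, a worked example of a bucket that escapes all three
cases, again assuming 4K pages and back-to-back chunk layout (the
per-page division here stands in for whatever incremental scheme the
patch actually uses):

#include <cstdint>
#include <cstdio>

int main() {
  // 96-byte chunks: 4096 % 96 == 64 and 96 % 64 != 0, so none of the
  // three constant cases apply.
  const uint64_t P = 4096, C = 96;
  for (uint64_t p = 0; p < 4; p++) {
    uint64_t begin = p * P, end = begin + P;
    uint64_t n = (end - 1) / C - begin / C + 1;  // chunks touching page p
    printf("page %llu: %llu chunks\n", (unsigned long long)p,
           (unsigned long long)n);
  }
  // Prints 43, 44, 43, 43: the count varies from page to page.
}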


https://reviews.llvm.org/D38245

Files:
  lib/sanitizer_common/sanitizer_allocator_primary64.h
  lib/sanitizer_common/tests/sanitizer_allocator_test.cc

-------------- next part --------------
A non-text attachment was scrubbed...
Name: D38245.116579.patch
Type: text/x-patch
Size: 24881 bytes
Desc: not available
URL: <http://lists.llvm.org/pipermail/llvm-commits/attachments/20170925/dad09fe0/attachment.bin>

