[PATCH] D33454: [sanitizer] Change the 32-bit Primary AllocateRegion to reduce fragmentation

Kostya Kortchinsky via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue May 23 11:29:03 PDT 2017


cryptoad created this revision.
Herald added a subscriber: kubamracek.

Currently, `AllocateRegion` has a tendency to fragment memory: it maps
`2*kRegionSize` bytes, and if the resulting address happens to be aligned,
unmaps the trailing `kRegionSize` bytes, creating a hole that cannot itself
be reused for another region, since every region request maps `2*kRegionSize`
bytes. This is exacerbated by the fact that if two regions are allocated back
to back, without any other `mmap` in between, the second one will be aligned,
as mappings are generally contiguous.
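
For reference, the current logic boils down to the following (a simplified
sketch of what `MmapAlignedOrDie` does for `size == alignment == kRegionSize`,
reusing the existing `MmapOrDie`/`UnmapOrDie`/`IsAligned` helpers; this is not
the verbatim source):

  // Simplified sketch of the pre-patch path; not the actual sanitizer code.
  uptr AllocateRegionCurrent() {
    uptr map_size = 2 * kRegionSize;
    uptr map_res = (uptr)MmapOrDie(map_size, "SizeClassAllocator32");
    uptr map_end = map_res + map_size;
    uptr res = map_res;
    if (!IsAligned(res, kRegionSize))
      // Not aligned: round up to the next kRegionSize boundary.
      res = (map_res + kRegionSize) & ~(kRegionSize - 1);
    if (res != map_res)
      // Trim the misaligned head of the mapping.
      UnmapOrDie((void *)map_res, res - map_res);
    uptr end = res + kRegionSize;
    if (end != map_end)
      // Trim the tail; when the mapping was already aligned, this punches
      // a kRegionSize hole that no later 2*kRegionSize request can reuse.
      UnmapOrDie((void *)end, map_end - end);
    return res;
  }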

An idea, suggested by @alekseyshl, to prevent this behavior is to keep a
stash of regions: if the `2*kRegionSize` allocation happens to be properly
aligned, split it in two, and stash the second half to be returned the next
time a region is requested.
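
As a concrete illustration, assuming (hypothetically) a `kRegionSize` of 1 MiB:

  mmap(2 MiB) returns 0x40000000, which is already aligned:
    region returned to the caller: [0x40000000, 0x40100000)
    stashed for the next request:  [0x40100000, 0x40200000)

The current code would instead munmap [0x40100000, 0x40200000), leaving a
1 MiB hole that no subsequent 2 MiB mapping can land in.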

At this point, I considered a couple of ways to implement this:

- an `IntrusiveList` of region candidates, storing the `next` pointer at the beginning of each region;
- a small array of region candidates living directly in the Primary.

While the second option is more constrained in size, it offers several
advantages:

- security-wise, it avoids storing a pointer inside the region candidate itself, which could otherwise be overflowed into and abused when an element is popped (see the sketch after this list);
- we do not dirty the first page of a stashed region by storing a link in it;
- unless several threads request regions from different size classes simultaneously, the stash rarely grows beyond one entry.
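
To make the first two points concrete, here is a hypothetical sketch of what
the rejected `IntrusiveList` variant would boil down to (all names below are
made up for illustration; this is not code from the patch):

  // Option 1 (NOT what this patch implements): the freelist link lives in
  // the first word of the stashed region itself.
  struct RegionCandidate {
    RegionCandidate *next;
  };
  static RegionCandidate *region_candidates = nullptr;

  static void StashRegion(uptr region) {
    // Writing the link dirties the first page of an otherwise untouched
    // region.
    RegionCandidate *c = reinterpret_cast<RegionCandidate *>(region);
    c->next = region_candidates;
    region_candidates = c;
  }

  static uptr PopRegion() {
    if (!region_candidates) return 0;
    RegionCandidate *c = region_candidates;
    // If an overflow from adjacent memory corrupted c->next, the allocator
    // would later hand out an attacker-controlled "region" address.
    region_candidates = c->next;
    return reinterpret_cast<uptr>(c);
  }

The fixed-size array avoids both issues at the cost of a hard cap
(`kMaxStashedRegions` below), which the last point argues is rarely reached
in practice.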

I am not certain about the Windows impact of this change, as `sanitizer_win.cc`
has its own version of `MmapAlignedOrDie`; maybe someone could chime in on this.

`MmapAlignedOrDie` is effectively unused after this change and could be removed
at a later point. I didn't notice any sizable performance gain, even though we
save a few `mmap`/`munmap` syscalls.


https://reviews.llvm.org/D33454

Files:
  lib/sanitizer_common/sanitizer_allocator_primary32.h


Index: lib/sanitizer_common/sanitizer_allocator_primary32.h
===================================================================
--- lib/sanitizer_common/sanitizer_allocator_primary32.h
+++ lib/sanitizer_common/sanitizer_allocator_primary32.h
@@ -106,6 +106,7 @@
   void Init(s32 release_to_os_interval_ms) {
     possible_regions.TestOnlyInit();
     internal_memset(size_class_info_array, 0, sizeof(size_class_info_array));
+    num_stashed_regions = 0;
   }
 
   s32 ReleaseToOSIntervalMs() const {
@@ -277,10 +278,41 @@
 
   uptr AllocateRegion(AllocatorStats *stat, uptr class_id) {
     CHECK_LT(class_id, kNumClasses);
-    uptr res = reinterpret_cast<uptr>(MmapAlignedOrDie(kRegionSize, kRegionSize,
-                                      "SizeClassAllocator32"));
-    MapUnmapCallback().OnMap(res, kRegionSize);
-    stat->Add(AllocatorStatMapped, kRegionSize);
+    uptr res = 0;
+    {
+      SpinMutexLock l(&region_stash_mutex);
+      if (num_stashed_regions > 0)
+        res = regions_stash[--num_stashed_regions];
+    }
+    if (!res) {
+      uptr map_size = 2 * kRegionSize;
+      uptr map_res = (uptr)MmapOrDie(map_size, "SizeClassAllocator32");
+      uptr map_end = map_res + map_size;
+      res = map_res;
+      bool extra_region = false;
+      // If the result of the allocation is aligned, split it in two, and
+      // attempt to store the second part into the stash. If it's full, just
+      // unmap it.
+      if (IsAligned(res, kRegionSize)) {
+        SpinMutexLock l(&region_stash_mutex);
+        if (num_stashed_regions < kMaxStashedRegions) {
+          regions_stash[num_stashed_regions++] = res + kRegionSize;
+          extra_region = true;
+        }
+      } else {
+        // We are not aligned, trim the memory in front of us.
+        res = (map_res + kRegionSize) & ~(kRegionSize - 1);
+        UnmapOrDie((void*)map_res, res - map_res);
+      }
+      if (!extra_region) {
+        // We have extra memory behind us, trim it.
+        uptr end = res + kRegionSize;
+        UnmapOrDie((void*)end, map_end - end);
+        map_size -= kRegionSize;
+      }
+      MapUnmapCallback().OnMap(res, map_size);
+      stat->Add(AllocatorStatMapped, map_size);
+    }
     CHECK_EQ(0U, (res & (kRegionSize - 1)));
     possible_regions.set(ComputeRegionId(res), static_cast<u8>(class_id));
     return res;
@@ -316,8 +348,11 @@
     }
   }
 
+  static const uptr kMaxStashedRegions = 8;
+  SpinMutex region_stash_mutex;
+  uptr num_stashed_regions;
+  uptr regions_stash[kMaxStashedRegions];
+
   ByteMap possible_regions;
   SizeClassInfo size_class_info_array[kNumClasses];
 };
-
-

