[compiler-rt] 436ea54 - [scudo] Mitigate page releasing thrashing

Chia-hung Duan via llvm-commits llvm-commits at lists.llvm.org
Mon Mar 6 08:39:44 PST 2023


Author: Chia-hung Duan
Date: 2023-03-06T16:38:18Z
New Revision: 436ea5485d02c529e26a7a1007b82d581be016c4

URL: https://github.com/llvm/llvm-project/commit/436ea5485d02c529e26a7a1007b82d581be016c4
DIFF: https://github.com/llvm/llvm-project/commit/436ea5485d02c529e26a7a1007b82d581be016c4.diff

LOG: [scudo] Mitigate page releasing thrashing

We have a heuristic to determine the threshold for releasing pages of
smaller size classes. However, when memory usage bounces around that
threshold, we may attempt page releases frequently while returning very
little memory.

This CL adds another heuristic to mitigate the problem by raising the
minimum number of pages that can potentially be released. Note that this
heuristic is only applied to SizeClassAllocator64; SizeClassAllocator32
uses a smaller group size, so the overhead there is lower than on 64-bit
platforms.
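
The setup-time heuristic can be sketched in isolation as below. The page
size, group size, and smallest size class are illustrative assumptions
(common defaults), not values fixed by this patch:

```cpp
#include <cstdint>

using uptr = uintptr_t;

// Illustrative assumptions (not fixed by this patch): 4 KiB pages,
// 1 MiB memory groups (GroupSizeLog = 20), and 32 bytes as the
// smallest size class.
constexpr uptr PageSize = 4096;
constexpr uptr GroupSizeLog = 20;
constexpr uptr MinSizeClass = 32;

constexpr uptr GroupSize = 1U << GroupSizeLog;
constexpr uptr PagesInGroup = GroupSize / PageSize; // 256 pages per group

// The in-use share of the smallest class is (1 + MinSizeClass / 16)% = 3%,
// so roughly 3% of a group's pages must be newly pushed between two
// releaseToOSMaybe() calls before another release attempt is worthwhile.
constexpr uptr SmallerBlockReleasePageDelta =
    PagesInGroup * (1 + MinSizeClass / 16U) / 100; // 256 * 3 / 100 = 7

static_assert(SmallerBlockReleasePageDelta == 7,
              "about 7 pages, matching the commit message");
```

With these assumed values, the delta comes out to 7 pages, which matches
the "about 7 pages" figure in the comment added by the patch.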

Differential Revision: https://reviews.llvm.org/D144768

Added: 
    

Modified: 
    compiler-rt/lib/scudo/standalone/primary64.h

Removed: 
    


################################################################################
diff --git a/compiler-rt/lib/scudo/standalone/primary64.h b/compiler-rt/lib/scudo/standalone/primary64.h
index ec11824ffbda8..0612883f8aa83 100644
--- a/compiler-rt/lib/scudo/standalone/primary64.h
+++ b/compiler-rt/lib/scudo/standalone/primary64.h
@@ -85,6 +85,43 @@ template <typename Config> class SizeClassAllocator64 {
       Region->ReleaseInfo.LastReleaseAtNs = Time;
     }
     setOption(Option::ReleaseInterval, static_cast<sptr>(ReleaseToOsInterval));
+
+    const uptr GroupSize = (1U << GroupSizeLog);
+    const uptr PagesInGroup = GroupSize / PageSize;
+    const uptr MinSizeClass = getSizeByClassId(1);
+    // When trying to release pages back to the OS, visiting smaller size
+    // classes is expensive. Therefore, we only try to release smaller size
+    // classes when the amount of free blocks exceeds a certain threshold (see
+    // the comment in releaseToOSMaybe() for more details). For example, for
+    // size class 32, we only do the release when the size of free blocks is
+    // greater than 97% of pages in a group. However, this may introduce
+    // another issue: the number of free blocks may bounce between 97% and
+    // 100%, which means we may attempt many page releases but only release
+    // very little memory (less than 3% of a group). Even though
+    // `ReleaseToOsIntervalMs` slightly reduces the frequency of these calls,
+    // it is better to have another guard to mitigate this issue.
+    //
+    // Here we add another constraint on the minimum size requirement. The
+    // constraint is determined by the size of in-use blocks in the minimal size
+    // class. Take size class 32 as an example,
+    //
+    //   +-     one memory group      -+
+    //   +----------------------+------+
+    //   |  97% of free blocks  |      |
+    //   +----------------------+------+
+    //                           \    /
+    //                      3% in-use blocks
+    //
+    //   * The release size threshold is 97%.
+    //
+    // The 3% size in a group is about 7 pages. For two consecutive calls to
+    // releaseToOSMaybe(), we require the difference in `PushedBlocks` to be
+    // greater than 7 pages. This mitigates the page-release thrashing caused
+    // by memory usage bouncing around the threshold. The smallest size class
+    // takes the longest to release, so we use its in-use block size as the
+    // heuristic.
+    SmallerBlockReleasePageDelta =
+        PagesInGroup * (1 + MinSizeClass / 16U) / 100;
   }
 
   void unmapTestOnly() NO_THREAD_SAFETY_ANALYSIS {
@@ -413,6 +450,9 @@ template <typename Config> class SizeClassAllocator64 {
   static_assert(sizeof(RegionInfo) % SCUDO_CACHE_LINE_SIZE == 0, "");
 
   uptr PrimaryBase = 0;
+  // The minimum number of pages that pushed blocks need to cover before we
+  // attempt another page release for a smaller size class.
+  uptr SmallerBlockReleasePageDelta = 1;
   MapPlatformData Data = {};
   atomic_s32 ReleaseToOsIntervalMs = {};
   alignas(SCUDO_CACHE_LINE_SIZE) RegionInfo RegionInfoArray[NumClasses];
@@ -772,7 +812,7 @@ template <typename Config> class SizeClassAllocator64 {
     if (BytesPushed < PageSize)
       return 0; // Nothing new to release.
 
-    bool CheckDensity = BlockSize < PageSize / 16U;
+    const bool CheckDensity = BlockSize < PageSize / 16U;
     // Releasing smaller blocks is expensive, so we want to make sure that a
     // significant amount of bytes are free, and that there has been a good
     // amount of batches pushed to the freelist before attempting to release.
@@ -873,11 +913,23 @@ template <typename Config> class SizeClassAllocator64 {
       // bytes used by free blocks exceed certain proportion of group size. Note
       // that this heuristic only applies when all the spaces in a BatchGroup
       // are allocated.
-      if (CheckDensity && (BytesInBG * 100U) / AllocatedGroupSize <
-                              (100U - 1U - BlockSize / 16U)) {
-        Prev = BG;
-        BG = BG->Next;
-        continue;
+      if (CheckDensity) {
+        const bool HighDensity = (BytesInBG * 100U) / AllocatedGroupSize >=
+                                 (100U - 1U - BlockSize / 16U);
+        const bool MayHaveReleasedAll = NumBlocks >= (GroupSize / BlockSize);
+        // If all blocks in the group are released, we will do range marking
+        // which is fast. Otherwise, we will wait until we have accumulated
+        // a certain amount of free memory.
+        const bool ReachReleaseDelta =
+            MayHaveReleasedAll ? true
+                               : PushedBytesDelta * BlockSize >=
+                                     PageSize * SmallerBlockReleasePageDelta;
+
+        if (!HighDensity || !ReachReleaseDelta) {
+          Prev = BG;
+          BG = BG->Next;
+          continue;
+        }
       }
 
       // If `BG` is the first BatchGroup in the list, we only need to advance

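
The gate added in releaseToOSMaybe() can be modeled standalone.
shouldScanGroup() below is a hypothetical helper, not part of the patch;
the group size and page delta are illustrative assumptions (1 MiB groups,
a 7-page delta), and `PushedBlocksDelta` stands in for the patch's
pushed-block difference between consecutive release attempts:

```cpp
#include <cstdint>

using uptr = uintptr_t;

// Hypothetical standalone model of the density/delta gate in
// releaseToOSMaybe(); the constants below are illustrative assumptions.
constexpr uptr PageSize = 4096;
constexpr uptr GroupSize = 1U << 20;             // assuming 1 MiB groups
constexpr uptr SmallerBlockReleasePageDelta = 7; // from the setup heuristic

// Returns true when a BatchGroup is worth scanning for page release.
bool shouldScanGroup(uptr BlockSize, uptr BytesInBG, uptr AllocatedGroupSize,
                     uptr NumBlocks, uptr PushedBlocksDelta) {
  // Density only matters for small blocks (more than 16 per page).
  const bool CheckDensity = BlockSize < PageSize / 16U;
  if (!CheckDensity)
    return true;
  // E.g. for BlockSize == 32, free blocks must cover at least 97% of the
  // allocated group space.
  const bool HighDensity =
      (BytesInBG * 100U) / AllocatedGroupSize >= (100U - 1U - BlockSize / 16U);
  // A fully freed group is released via fast range marking, so the delta
  // requirement is waived in that case.
  const bool MayHaveReleasedAll = NumBlocks >= (GroupSize / BlockSize);
  const bool ReachReleaseDelta =
      MayHaveReleasedAll ||
      PushedBlocksDelta * BlockSize >= PageSize * SmallerBlockReleasePageDelta;
  return HighDensity && ReachReleaseDelta;
}
```

Under these assumptions, a group of 32-byte blocks that is about 98% free
is still skipped until roughly 7 pages' worth of blocks have been pushed
since the last release attempt, which is exactly the thrashing case the
patch targets.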

        

