[compiler-rt] 6130d67 - [scudo] Fix the calculation of memory group usage
Chia-hung Duan via llvm-commits
llvm-commits at lists.llvm.org
Fri Oct 28 14:13:18 PDT 2022
Author: Chia-hung Duan
Date: 2022-10-28T20:29:17Z
New Revision: 6130d67f70ad2e063b4407b6ee31c7db19464fee
URL: https://github.com/llvm/llvm-project/commit/6130d67f70ad2e063b4407b6ee31c7db19464fee
DIFF: https://github.com/llvm/llvm-project/commit/6130d67f70ad2e063b4407b6ee31c7db19464fee.diff
LOG: [scudo] Fix the calculation of memory group usage
In SizeClassAllocator64, the boundary of a memory group may not be aligned
with the region begin, which means the begin address of a memory group may be
smaller than the region begin. This led to an incorrect calculation of memory
group usage.
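
To make the miscalculation concrete, below is a minimal standalone sketch
(hypothetical names and addresses, not the Scudo API) comparing the old and
the fixed AllocatedGroupSize computation when the group base falls before
RegionBeg:

#include <algorithm>
#include <cstdint>
#include <cstdio>

using uptr = uintptr_t;

int main() {
  // Hypothetical layout: a 64 KiB group whose base lies before RegionBeg.
  const uptr GroupSize = 0x10000;         // group granularity (example value)
  const uptr GroupBase = 0x200000;        // group containing RegionBeg
  const uptr RegionBeg = 0x204000;        // region start, not group-aligned
  const uptr AllocatedUserEnd = 0x220000; // allocation high-water mark
  const uptr GroupEnd = GroupBase + GroupSize;

  // Before the fix: the whole GroupSize is counted even though the range
  // [GroupBase, RegionBeg) can never hold blocks of this region.
  const uptr OldSize =
      AllocatedUserEnd >= GroupEnd ? GroupSize : AllocatedUserEnd - GroupEnd;

  // After the fix: clamp the group's begin to RegionBeg and measure from
  // there, in both the fully and the partially allocated cases.
  const uptr GroupBeg = std::max(GroupBase, RegionBeg);
  const uptr NewSize = AllocatedUserEnd >= GroupEnd
                           ? GroupEnd - GroupBeg
                           : AllocatedUserEnd - GroupBeg;

  printf("old = 0x%zx, fixed = 0x%zx\n", (size_t)OldSize, (size_t)NewSize);
  // Prints: old = 0x10000, fixed = 0xc000 -- only [RegionBeg, GroupEnd)
  // actually belongs to this region's usage.
  return 0;
}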
Differential Revision: https://reviews.llvm.org/D136898
Added:
Modified:
compiler-rt/lib/scudo/standalone/primary32.h
compiler-rt/lib/scudo/standalone/primary64.h
Removed:
################################################################################
diff --git a/compiler-rt/lib/scudo/standalone/primary32.h b/compiler-rt/lib/scudo/standalone/primary32.h
index 6e791a127df66..a3d908cee9e52 100644
--- a/compiler-rt/lib/scudo/standalone/primary32.h
+++ b/compiler-rt/lib/scudo/standalone/primary32.h
@@ -708,8 +708,10 @@ template <typename Config> class SizeClassAllocator32 {
if (AllocatedGroupSize == 0)
continue;
+ // TransferBatches are pushed in front of BG.Batches. The first one may
+ // not have all caches used.
const uptr NumBlocks = (BG.Batches.size() - 1) * BG.MaxCachedPerBatch +
- BG.Batches.back()->getCount();
+ BG.Batches.front()->getCount();
const uptr BytesInBG = NumBlocks * BlockSize;
// Given the randomness property, we try to release the pages only if the
// bytes used by free blocks exceed certain proportion of allocated
diff --git a/compiler-rt/lib/scudo/standalone/primary64.h b/compiler-rt/lib/scudo/standalone/primary64.h
index a3684c9d45864..c64f9b8e062f7 100644
--- a/compiler-rt/lib/scudo/standalone/primary64.h
+++ b/compiler-rt/lib/scudo/standalone/primary64.h
@@ -702,16 +702,49 @@ template <typename Config> class SizeClassAllocator64 {
BG.PushedBlocks - BG.PushedBlocksAtLastCheckpoint;
if (PushedBytesDelta * BlockSize < PageSize)
continue;
+
+ // A group boundary does not necessarily have the same alignment as the
+ // Region; it may sit across a Region boundary, which means we may have
+ // the following two cases:
+ //
+ // 1. Group boundary sits before RegionBeg.
+ //
+ // (BatchGroupBeg)
+ // batchGroupBase RegionBeg BatchGroupEnd
+ // | | |
+ // v v v
+ // +------------+----------------+
+ // \ /
+ // ------ GroupSize ------
+ //
+ // 2. Group boundary sits after RegionBeg.
+ //
+ // (BatchGroupBeg)
+ // RegionBeg batchGroupBase BatchGroupEnd
+ // | | |
+ // v v v
+ // +-----------+-----------------------------+
+ // \ /
+ // ------ GroupSize ------
+ //
+ // Note that in the first case, the group range before RegionBeg is never
+ // used. Therefore, while calculating the used group size, we should
+ // exclude that part to get the correct size.
+ const uptr BatchGroupBeg =
+ Max(batchGroupBase(BG.GroupId, CompactPtrBase), Region->RegionBeg);
+ DCHECK_GE(AllocatedUserEnd, BatchGroupBeg);
const uptr BatchGroupEnd =
batchGroupBase(BG.GroupId, CompactPtrBase) + GroupSize;
const uptr AllocatedGroupSize = AllocatedUserEnd >= BatchGroupEnd
- ? GroupSize
- : AllocatedUserEnd - BatchGroupEnd;
+ ? BatchGroupEnd - BatchGroupBeg
+ : AllocatedUserEnd - BatchGroupBeg;
if (AllocatedGroupSize == 0)
continue;
+ // TransferBatches are pushed in front of BG.Batches. The first one may
+ // not have all caches used.
const uptr NumBlocks = (BG.Batches.size() - 1) * BG.MaxCachedPerBatch +
- BG.Batches.back()->getCount();
+ BG.Batches.front()->getCount();
const uptr BytesInBG = NumBlocks * BlockSize;
// Given the randomness property, we try to release the pages only if the
// bytes used by free blocks exceed certain proportion of group size. Note
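
As a side note on the back()/front() change in both hunks above: TransferBatches
are pushed at the front of BG.Batches, so only the front batch may be partially
filled. A small worked example (hypothetical counts, not Scudo code) of the
per-group free-block count:

#include <cstdio>

int main() {
  // A batch group with three TransferBatches and MaxCachedPerBatch = 14.
  // Batches are pushed at the front, so only the front one may be partial.
  const unsigned NumBatches = 3, MaxCachedPerBatch = 14;
  const unsigned FrontCount = 5;  // most recently pushed, partially filled
  const unsigned BackCount = 14;  // oldest batch, assumed full here

  // Before the fix: the partial count was taken from the wrong end, so a
  // full back batch made the group look like it held more free blocks.
  const unsigned OldNumBlocks =
      (NumBatches - 1) * MaxCachedPerBatch + BackCount;  // 2 * 14 + 14 = 42
  // After the fix: the front batch supplies the partial count.
  const unsigned NewNumBlocks =
      (NumBatches - 1) * MaxCachedPerBatch + FrontCount; // 2 * 14 + 5 = 33

  printf("old = %u, fixed = %u\n", OldNumBlocks, NewNumBlocks);
  return 0;
}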