[compiler-rt] [scudo] Mitigate commitbase exploit (PR #75295)

via llvm-commits llvm-commits at lists.llvm.org
Tue Dec 12 23:52:52 PST 2023


https://github.com/SIELA1915 created https://github.com/llvm/llvm-project/pull/75295

This is a proposed mitigation for an attack that uses a leaked chunk header together with a free where the attacker can overwrite the `0x40` bytes before the freed location (the primary + secondary headers) in order to make the next secondary chunk allocation land at an arbitrary address. To achieve that, the attacker sets the ClassId in the chunk header to 0, sets the CommitBase in the secondary chunk header to the address where the next chunk should land, sets the CommitSize to something close to the size of the next secondary allocation, and then recomputes the header checksum with the cookie brute-forced from the header leak.
When the free happens, the chunk is put into the secondary cache; when it is later retrieved from the cache, a chunk is allocated at the CommitBase address, which the attacker was able to choose freely.

This mitigation tracks, in a separate memory region, the addresses of all currently allocated secondary chunks, and adds freed secondary chunks to the cache only if they were recorded in that dedicated region.
The tracking uses a simple list of addresses, which is traversed on allocation and deallocation. Each contiguous block of memory stores a compile-time-configurable number of addresses, plus one additional address pointing to the next block in case more addresses are needed.
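The chained-block structure can be sketched as follows. This is a simplified illustration: the patch maps the blocks with `ReservedMemoryT` and guards them with the allocator mutex, whereas this sketch uses plain `new[]` (and never frees the blocks), and `BlockSize` stands in for `InUseBlocksSize`:

```cpp
#include <cassert>
#include <cstddef>

// Number of address slots per tracking block (compile-time constant,
// standing in for the patch's InUseBlocksSize).
constexpr size_t BlockSize = 4;

struct Tracker {
  // Each block holds BlockSize slots plus one trailing next-block pointer.
  // new[]() value-initializes every entry to nullptr.
  static void **newBlock() { return new void *[BlockSize + 1](); }

  void **Head = newBlock();

  // On allocation: record the chunk address in the first empty slot,
  // growing the chain with a fresh block when every slot is taken.
  void record(void *Ptr) {
    void **Block = Head;
    for (;;) {
      for (size_t I = 0; I < BlockSize; I++)
        if (Block[I] == nullptr) {
          Block[I] = Ptr;
          return;
        }
      if (Block[BlockSize] == nullptr) // chain exhausted, append a block
        Block[BlockSize] = newBlock();
      Block = static_cast<void **>(Block[BlockSize]);
    }
  }

  // On free: only an address that was previously recorded is legitimate;
  // clear its slot and report whether it was found.
  bool verifyAndRemove(void *Ptr) {
    for (void **Block = Head; Block;
         Block = static_cast<void **>(Block[BlockSize]))
      for (size_t I = 0; I < BlockSize; I++)
        if (Block[I] == Ptr) {
          Block[I] = nullptr;
          return true;
        }
    return false;
  }
};
```

A forged free, whose address was never recorded, fails `verifyAndRemove`, so the deallocation path can bail out before the chunk ever reaches the secondary cache; this mirrors the early return in the patch's `deallocate`.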

Two options were added to the allocator config: `InUseBlocksSize`, which specifies the number of header addresses stored in one contiguous memory region, and `VerifyInUseAddresses`, which can be set to false to disable the mitigation entirely.

Performance impact was measured with custom benchmarks, available at [https://github.com/SIELA1915/house-of-scudo/tree/master/benchmark](https://github.com/SIELA1915/house-of-scudo/tree/master/benchmark).

| benchmark name           | baseline (in s) | with mitigation (in s) |
|--------------------------|----------------|------------------------|
| single-block-used        | 5.8379         | 5.8170                 |
| many-blocks-used         | 7.0976         | 9.8458                 |
| random-blocks-order      | 3.8206         | 4.38057                |
| random-half-blocks-order | 3.7602         | 4.5997                 |
| many-many-blocks-used    | 6.4697         | 9.487                  |

While in the worst case (many secondary allocations, with many secondary chunks live at the same time) the performance impact is quite high, in typical usage it should not matter much. This also seems to be confirmed by the following runs of [mimalloc-bench](https://github.com/daanx/mimalloc-bench), where scudo is the unmodified version and scudo_fixed the version with the mitigation:

|   test         | alloc       | time    | rss    | user  | sys  | page-faults | page-reclaims|
|----------------|-------------|---------|--------|-------|------|-------------|--------------|
| cfrac          | scudo       | 08.82   | 4632   | 8.59  | 0.05 | 0           | 614          |
| cfrac          | scudo_fixed | 08.52   | 4644   | 8.43  | 0.03 | 0           | 618          |
| espresso       | scudo       | 05.37   | 4704   | 5.27  | 0.03 | 0           | 640          |
| espresso       | scudo_fixed | 05.44   | 4708   | 5.39  | 0.05 | 0           | 643          |
| barnes         | scudo       | 03.03   | 62200  | 2.90  | 0.06 | 0           | 4329         |
| barnes         | scudo_fixed | 03.25   | 63364  | 3.19  | 0.01 | 0           | 2901         |
| redis          | scudo       | 5.159   | 9660   | 0.19  | 0.04 | 0           | 1474         |
| redis          | scudo_fixed | 4.820   | 9684   | 0.22  | 0.01 | 0           | 1481         |
| larsonN-sized  | scudo       | 82.708  | 14688  | 5.51  | 0.14 | 6           | 2521         |
| larsonN-sized  | scudo_fixed | 67.638  | 15396  | 8.41  | 0.18 | 0           | 2692         |
| mstressN       | scudo       | 00.14   | 12008  | 0.07  | 0.04 | 1           | 27306        |
| mstressN       | scudo_fixed | 00.13   | 12028  | 0.07  | 0.03 | 0           | 27410        |
| rptestN        | scudo       | 2.211   | 23092  | 0.36  | 0.40 | 3           | 59795        |
| rptestN        | scudo_fixed | 1.522   | 16392  | 0.25  | 0.38 | 0           | 59246        |
| gs             | scudo       | 01.38   | 40072  | 1.18  | 0.04 | 252         | 29345        |
| gs             | scudo_fixed | 01.26   | 40152  | 1.13  | 0.07 | 0           | 29696        |
| lua            | scudo       | 05.85   | 71428  | 4.75  | 0.46 | 926         | 178671       |
| lua            | scudo_fixed | 05.38   | 71848  | 4.76  | 0.40 | 0           | 179362       |
| alloc-test1    | scudo       | 04.82   | 15660  | 4.67  | 0.02 | 2           | 3011         |
| alloc-test1    | scudo_fixed | 04.89   | 15712  | 4.68  | 0.03 | 0           | 3013         |
| alloc-testN    | scudo       | 05.42   | 15236  | 9.40  | 0.05 | 0           | 3757         |
| alloc-testN    | scudo_fixed | 05.34   | 15148  | 9.38  | 0.06 | 0           | 3937         |
| sh6benchN      | scudo       | 06.67   | 478560 | 10.67 | 0.70 | 1           | 134372       |
| sh6benchN      | scudo_fixed | 06.58   | 478860 | 10.84 | 0.51 | 0           | 135097       |
| sh8benchN      | scudo       | 43.33   | 151272 | 67.93 | 3.43 | 1           | 293981       |
| sh8benchN      | scudo_fixed | 41.48   | 151816 | 66.44 | 3.05 | 0           | 298107       |
| xmalloc-testN  | scudo       | 10.060  | 48904  | 7.78  | 0.34 | 3           | 25004        |
| xmalloc-testN  | scudo_fixed | 9.826   | 49136  | 7.78  | 0.33 | 0           | 28887        |
| cache-scratch1 | scudo       | 01.40   | 4016   | 1.32  | 0.00 | 1           | 253          |
| cache-scratch1 | scudo_fixed | 01.33   | 3832   | 1.28  | 0.00 | 0           | 249          |
| cache-scratchN | scudo       | 00.79   | 3944   | 1.40  | 0.00 | 0           | 254          |
| cache-scratchN | scudo_fixed | 00.76   | 3924   | 1.32  | 0.01 | 0           | 256          |
| glibc-simple   | scudo       | 04.57   | 3472   | 4.38  | 0.02 | 1           | 331          |
| glibc-simple   | scudo_fixed | 03.76   | 3720   | 3.68  | 0.01 | 0           | 335          |
| glibc-thread   | scudo       | 14.607  | 3932   | 3.49  | 0.01 | 1           | 401          |
| glibc-thread   | scudo_fixed | 14.650  | 3964   | 3.51  | 0.00 | 0           | 401          |


From 70f5ca3fb598137b4300f277c9c0bbda5929ba1e Mon Sep 17 00:00:00 2001
From: Elias Boschung <siela1915 at gmail.com>
Date: Wed, 13 Dec 2023 08:13:59 +0100
Subject: [PATCH] [scudo] verify free'd secondary was allocated before

---
 .../lib/scudo/standalone/allocator_config.h   |   2 +
 compiler-rt/lib/scudo/standalone/secondary.h  | 116 ++++++++++++++++++
 2 files changed, 118 insertions(+)

diff --git a/compiler-rt/lib/scudo/standalone/allocator_config.h b/compiler-rt/lib/scudo/standalone/allocator_config.h
index 3c6aa3acb0e45b..b796b07b261dcf 100644
--- a/compiler-rt/lib/scudo/standalone/allocator_config.h
+++ b/compiler-rt/lib/scudo/standalone/allocator_config.h
@@ -153,6 +153,8 @@ struct DefaultConfig {
       static const s32 MinReleaseToOsIntervalMs = INT32_MIN;
       static const s32 MaxReleaseToOsIntervalMs = INT32_MAX;
     };
+      static const bool VerifyInUseAddresses = true;
+      static const u32 InUseBlocksSize = 1000U;
     template <typename Config> using CacheT = MapAllocatorCache<Config>;
   };
 
diff --git a/compiler-rt/lib/scudo/standalone/secondary.h b/compiler-rt/lib/scudo/standalone/secondary.h
index f52a4188bcf3a6..202dfd5897a1de 100644
--- a/compiler-rt/lib/scudo/standalone/secondary.h
+++ b/compiler-rt/lib/scudo/standalone/secondary.h
@@ -11,6 +11,7 @@
 
 #include "chunk.h"
 #include "common.h"
+#include "internal_defs.h"
 #include "list.h"
 #include "mem_map.h"
 #include "memtag.h"
@@ -476,6 +477,17 @@ template <typename Config> class MapAllocator {
     DCHECK_EQ(FreedBytes, 0U);
     Cache.init(ReleaseToOsInterval);
     Stats.init();
+
+    if (Config::Secondary::VerifyInUseAddresses) {
+        ReservedMemoryT InUseReserved;
+        InUseReserved.create(0U, sizeof(void *)*(Config::Secondary::InUseBlocksSize+1), "scudo:secondary_integrity");
+        DCHECK_NE(InUseReserved.getBase(), 0U);
+        
+        InUseAddresses = InUseReserved.dispatch(InUseReserved.getBase(), sizeof(void *)*(Config::Secondary::InUseBlocksSize+1));
+        CHECK(InUseAddresses.isAllocated());
+        InUseAddresses.setMemoryPermission(InUseAddresses.getBase(), sizeof(void *)*(Config::Secondary::InUseBlocksSize+1), 0);
+    }
+    
     if (LIKELY(S))
       S->link(&Stats);
   }
@@ -544,6 +556,9 @@ template <typename Config> class MapAllocator {
   u32 NumberOfAllocs GUARDED_BY(Mutex) = 0;
   u32 NumberOfFrees GUARDED_BY(Mutex) = 0;
   LocalStats Stats GUARDED_BY(Mutex);
+
+  MemMapT InUseAddresses GUARDED_BY(Mutex) = {};
+
 };
 
 // As with the Primary, the size passed to this function includes any desired
@@ -588,6 +603,43 @@ void *MapAllocator<Config>::allocate(const Options &Options, uptr Size,
                BlockEnd - PtrInt);
       {
         ScopedLock L(Mutex);
+
+        if (Config::Secondary::VerifyInUseAddresses) {
+            void **IntegrityList = reinterpret_cast<void **>(InUseAddresses.getBase());
+            bool isFull = true;
+            bool reachedEnd = false;
+            
+            while (!reachedEnd) {
+                for (u32 I = 0; I < Config::Secondary::InUseBlocksSize; I++) {
+                    if (IntegrityList[I] == nullptr) {
+                        isFull = false;
+                        IntegrityList[I] = Ptr;
+                        break;
+                    }
+                }
+                if (isFull && IntegrityList[Config::Secondary::InUseBlocksSize] != nullptr) {
+                    IntegrityList = static_cast<void **>(IntegrityList[Config::Secondary::InUseBlocksSize]);
+                } else {
+                    reachedEnd = true;
+                }
+            }
+            
+            if (isFull) {
+                ReservedMemoryT InUseReserved;
+                InUseReserved.create(0U, sizeof(void *)*(Config::Secondary::InUseBlocksSize+1), "scudo:secondary_integrity");
+                DCHECK_NE(InUseReserved.getBase(), 0U);
+                
+                MemMapT NewAddresses = InUseReserved.dispatch(InUseReserved.getBase(), sizeof(void *)*(Config::Secondary::InUseBlocksSize+1));
+                CHECK(NewAddresses.isAllocated());
+                NewAddresses.setMemoryPermission(NewAddresses.getBase(), sizeof(void *)*(Config::Secondary::InUseBlocksSize+1), 0);
+                
+                IntegrityList[Config::Secondary::InUseBlocksSize] = reinterpret_cast<void *>(NewAddresses.getBase());
+                
+                IntegrityList = static_cast<void **>(IntegrityList[Config::Secondary::InUseBlocksSize]);
+                IntegrityList[0] = Ptr;
+            }
+        }
+      
         InUseBlocks.push_back(H);
         AllocatedBytes += H->CommitSize;
         FragmentedBytes += H->MemMap.getCapacity() - H->CommitSize;
@@ -662,6 +714,43 @@ void *MapAllocator<Config>::allocate(const Options &Options, uptr Size,
     *BlockEndPtr = CommitBase + CommitSize;
   {
     ScopedLock L(Mutex);
+
+    if (Config::Secondary::VerifyInUseAddresses) {
+        void **IntegrityList = reinterpret_cast<void **>(InUseAddresses.getBase());
+        bool isFull = true;
+        bool reachedEnd = false;
+        
+        while (!reachedEnd) {
+            for (u32 I = 0; I < Config::Secondary::InUseBlocksSize; I++) {
+                if (IntegrityList[I] == nullptr) {
+                    isFull = false;
+                    IntegrityList[I] = reinterpret_cast<void *>(HeaderPos + LargeBlock::getHeaderSize());
+                    break;
+                }
+            }
+            if (isFull && IntegrityList[Config::Secondary::InUseBlocksSize] != nullptr) {
+                IntegrityList = static_cast<void **>(IntegrityList[Config::Secondary::InUseBlocksSize]);
+            } else {
+                reachedEnd = true;
+            }
+        }
+        
+        if (isFull) {
+            ReservedMemoryT InUseReserved;
+            InUseReserved.create(0U, sizeof(void *)*(Config::Secondary::InUseBlocksSize+1), "scudo:secondary_integrity");
+            DCHECK_NE(InUseReserved.getBase(), 0U);
+            
+            MemMapT NewAddresses = InUseReserved.dispatch(InUseReserved.getBase(), sizeof(void *)*(Config::Secondary::InUseBlocksSize+1));
+            CHECK(NewAddresses.isAllocated());
+            NewAddresses.setMemoryPermission(NewAddresses.getBase(), sizeof(void *)*(Config::Secondary::InUseBlocksSize+1), 0);
+            
+            IntegrityList[Config::Secondary::InUseBlocksSize] = reinterpret_cast<void *>(NewAddresses.getBase());
+            
+            IntegrityList = static_cast<void **>(IntegrityList[Config::Secondary::InUseBlocksSize]);
+            IntegrityList[0] = reinterpret_cast<void *>(HeaderPos + LargeBlock::getHeaderSize());
+        }
+    }
+
     InUseBlocks.push_back(H);
     AllocatedBytes += CommitSize;
     FragmentedBytes += H->MemMap.getCapacity() - CommitSize;
@@ -681,6 +770,33 @@ void MapAllocator<Config>::deallocate(const Options &Options, void *Ptr)
   const uptr CommitSize = H->CommitSize;
   {
     ScopedLock L(Mutex);
+
+
+    if (Config::Secondary::VerifyInUseAddresses) {
+        void **IntegrityList = reinterpret_cast<void **>(InUseAddresses.getBase());
+        bool isValid = false;
+        bool reachedEnd = false;
+        
+        while (!reachedEnd) {
+            for (u32 I = 0; I < Config::Secondary::InUseBlocksSize; I++) {
+                if (IntegrityList[I] == Ptr) {
+                    isValid = true;
+                    IntegrityList[I] = nullptr;
+                    break;
+                }
+            }
+            if (!isValid && IntegrityList[Config::Secondary::InUseBlocksSize] != nullptr) {
+                IntegrityList = static_cast<void **>(IntegrityList[Config::Secondary::InUseBlocksSize]);
+            } else {
+                reachedEnd = true;
+            }
+        }
+        
+        if (!isValid) {
+            return;
+        }
+    }
+    
     InUseBlocks.remove(H);
     FreedBytes += CommitSize;
     FragmentedBytes -= H->MemMap.getCapacity() - CommitSize;


