[PATCH] D60593: [GwpAsan] Introduce GWP-ASan.

Vlad Tsyrklevich via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Fri May 3 12:04:45 PDT 2019


vlad.tsyrklevich added inline comments.


================
Comment at: compiler-rt/lib/gwp_asan/guarded_pool_allocator.cpp:426
+  if (!gwp_asan::SingletonPtr)
+    return (void *)123;
+  return gwp_asan::SingletonPtr->allocate(Size);
----------------
Use `nullptr` here instead of the placeholder value.


================
Comment at: compiler-rt/lib/gwp_asan/guarded_pool_allocator.cpp:431
+void __gwp_asan_deallocate(void *Ptr) {
+  gwp_asan::SingletonPtr->deallocate(Ptr);
+}
----------------
`if (!gwp_asan::SingletonPtr) return;` to match the other functions?


================
Comment at: compiler-rt/test/gwp_asan/no_reuse_ahead_of_time.cpp:35
+
+    // CAREFUL: We always use a sampled allocation to initialise GWP-ASan
+    // in the interface helpers. For this case, we already have called malloc
----------------
Why does anything other than allocate() need to care about initializing the GPA ahead of time? The underlying methods should return the correct values. And even for allocate() we can sidestep this edge case by calling allocate(PageSize + 1).


================
Comment at: compiler-rt/test/gwp_asan/num_usable_slots.cpp:12
+// RUN: %env_gwp_asan_options=NumUsableGuardedSlots=128 %run %t 128 2>&1
+// RUN: %env_gwp_asan_options=NumUsableGuardedSlots=129 %run %t 129 2>&1
+
----------------
hctim wrote:
> vlad.tsyrklevich wrote:
> > vlad.tsyrklevich wrote:
> > > Add a negative test or two (e.g. do we actually fail when invoked with 8 slots but we iterate 9 times? Do we fail when we have 8 slots but iterate 7 times?)
> > We test +1 case but not the -1 case.
> I'm confused why we would need to test the -1 case (assuming that -1 means `NumIterations == MaxSimultaneousAllocations - 1`), given that `NumIterations == MSA` provides this coverage. Is my assumption correct?
This looks fine as is.


================
Comment at: compiler-rt/test/gwp_asan/num_usable_slots.cpp:60
+  char *Ptr = reinterpret_cast<char*>(malloc(PageSize));
+  // The system allocator doesn't have use-after-free READ detection. If this
+  // was a sampled allocation, this would be caught.
----------------
hctim wrote:
> vlad.tsyrklevich wrote:
> > I'm not sure what a better test is, but this isn't always true (e.g. the allocator could actually take back that page if all the allocations on it are gone). These tests could make stronger assertions and be simpler if there were a `__gwp_asan_pointer_is_mine()` or something like that.
> Changed it to use the same mechanism from above to detect if it's in the pool.
> 
> I think the current assumption is strong enough, but may be broken if `atoi()` or `alloca()` changes to call malloc (unlikely), or if a shared library calls `malloc()` during dynamic loading. WDYT?
> 
> Edit: On further thought, this also will spuriously pass if the implementing allocator uses the page immediately after the allocation pool to back the allocation. I've added interface definitions and started a cleanup so that we can exercise the allocator internals directly.
If we want to be safe, since I can definitely see runtimes hitting malloc() before main() in normal code paths, we could make SampleRate high and use the GPA allocate() directly for this test.


================
Comment at: compiler-rt/test/gwp_asan/pattern_malloc_free.cpp:1
+// REQUIRES: gwp_asan
+// This test ensures that normal allocation/memory access/deallocation works
----------------
I would collapse the pattern_* tests into two tests:
1) Test that malloc, calloc, realloc, new, and new[] all satisfy `__gwp_asan_pointer_is_mine()`.
2) This test.

Less code, and it should have equal coverage.


================
Comment at: compiler-rt/test/gwp_asan/reuse_quarantine.cpp:8
+// RUN: %env_gwp_asan_options=MaxSimultaneousAllocations=2 %run %t
+// RUN: %env_gwp_asan_options=MaxSimultaneousAllocations=127 %run %t
+// RUN: %env_gwp_asan_options=MaxSimultaneousAllocations=128 %run %t
----------------
Maybe delete the 1 and 2 cases (they will fail on runtimes that call malloc() before main()) and just run it with some other, higher value that should always work, like 32.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D60593/new/

https://reviews.llvm.org/D60593
