[compiler-rt] [compiler-rt][ASan] Add function copying annotations (PR #91702)
via llvm-commits
llvm-commits at lists.llvm.org
Tue Oct 8 03:26:13 PDT 2024
https://github.com/AdvenamTacet updated https://github.com/llvm/llvm-project/pull/91702
From 97cbd5988383d82f7ba8e1251d5bfe28737d4389 Mon Sep 17 00:00:00 2001
From: Advenam Tacet <advenam.tacet at trailofbits.com>
Date: Fri, 10 May 2024 07:27:17 +0200
Subject: [PATCH 01/25] [compiler-rt][ASan] Add function moving annotations
This PR adds a `__sanitizer_move_contiguous_container_annotations` function, which moves annotations from one memory area to another.
The old area is unpoisoned at the end. The new area ends up annotated the same way the old area was at the beginning (within the limitations of ASan).
```cpp
void __sanitizer_move_contiguous_container_annotations(
    const void *old_storage_beg_p, const void *old_storage_end_p,
    const void *new_storage_beg_p, const void *new_storage_end_p);
```
This function aims to help with short string annotations and similar container annotations.
Right now we change trait types of `std::basic_string` when compiling with ASan.
https://github.com/llvm/llvm-project/blob/87f3407856e61a73798af4e41b28bc33b5bf4ce6/libcxx/include/string#L738-L751
The goal is to not change `__trivially_relocatable` when compiling with ASan.
If this function is accepted and upstreamed, the next step is creating a function like `__memcpy_with_asan`, which moves memory together with its ASan state.
That function could then be used instead of `__builtin_memcpy` when moving trivially relocatable objects.
NOTICE: I did not test it yet, so it's probably not compiling, but I don't expect big changes. PR is WIP until I test it.
---
I'm wondering whether there is a good way to address the fact that in a container the new buffer is usually bigger than the previous one.
We could add two more arguments to the function to address it (the beginning and the end of the whole buffer).
Another potential change is removing `new_storage_end_p`, as it's redundant: we require both areas to have the same size.
---
.../include/sanitizer/common_interface_defs.h | 24 +++
compiler-rt/lib/asan/asan_errors.cpp | 14 ++
compiler-rt/lib/asan/asan_errors.h | 19 +++
compiler-rt/lib/asan/asan_poisoning.cpp | 128 +++++++++++++++
compiler-rt/lib/asan/asan_report.cpp | 10 ++
compiler-rt/lib/asan/asan_report.h | 3 +
.../sanitizer_common_interface.inc | 1 +
.../sanitizer_interface_internal.h | 4 +
.../TestCases/move_container_annotations.cpp | 155 ++++++++++++++++++
9 files changed, 358 insertions(+)
create mode 100644 compiler-rt/test/asan/TestCases/move_container_annotations.cpp
diff --git a/compiler-rt/include/sanitizer/common_interface_defs.h b/compiler-rt/include/sanitizer/common_interface_defs.h
index f9fce595b37bb8..a3c91a9be6439b 100644
--- a/compiler-rt/include/sanitizer/common_interface_defs.h
+++ b/compiler-rt/include/sanitizer/common_interface_defs.h
@@ -193,6 +193,30 @@ void SANITIZER_CDECL __sanitizer_annotate_double_ended_contiguous_container(
const void *old_container_beg, const void *old_container_end,
const void *new_container_beg, const void *new_container_end);
+/// Moves annotations from one storage region to another.
+/// At the end, the new buffer is annotated the same way the old buffer was
+/// at the very beginning. The old buffer is fully unpoisoned.
+/// The main purpose of this function is to help move trivially relocatable
+/// objects whose memory may be poisoned (thus not trivially movable with ASan).
+///
+/// A contiguous container is a container that keeps all of its elements
+/// in a contiguous region of memory. The container owns the region of memory
+/// <c>[old_storage_beg, old_storage_end)</c> and
+/// <c>[new_storage_beg, new_storage_end)</c>;
+/// There is no requirement on where objects are kept within the region.
+/// Poisoned and non-poisoned memory areas can alternate;
+/// there are no shadow memory restrictions.
+///
+/// Argument requirements:
+/// The new container has to have the same size as the old container.
+/// \param old_storage_beg Beginning of the old container region.
+/// \param old_storage_end End of the old container region.
+/// \param new_storage_beg Beginning of the new container region.
+/// \param new_storage_end End of the new container region.
+void SANITIZER_CDECL __sanitizer_move_contiguous_container_annotations(
+ const void *old_storage_beg, const void *old_storage_end,
+ const void *new_storage_beg, const void *new_storage_end);
+
/// Returns true if the contiguous container <c>[beg, end)</c> is properly
/// poisoned.
///
diff --git a/compiler-rt/lib/asan/asan_errors.cpp b/compiler-rt/lib/asan/asan_errors.cpp
index 6f2fd28bfdf11a..11d67f4ace1064 100644
--- a/compiler-rt/lib/asan/asan_errors.cpp
+++ b/compiler-rt/lib/asan/asan_errors.cpp
@@ -348,6 +348,20 @@ void ErrorBadParamsToAnnotateDoubleEndedContiguousContainer::Print() {
ReportErrorSummary(scariness.GetDescription(), stack);
}
+void ErrorBadParamsToMoveContiguousContainerAnnotations::Print() {
+ Report(
+ "ERROR: AddressSanitizer: bad parameters to "
+ "__sanitizer_move_contiguous_container_annotations:\n"
+ " old_storage_beg : %p\n"
+ " old_storage_end : %p\n"
+ " new_storage_beg : %p\n"
+ " new_storage_end : %p\n",
+ (void *)old_storage_beg, (void *)old_storage_end, (void *)new_storage_beg,
+ (void *)new_storage_end);
+ stack->Print();
+ ReportErrorSummary(scariness.GetDescription(), stack);
+}
+
void ErrorODRViolation::Print() {
Decorator d;
Printf("%s", d.Error());
diff --git a/compiler-rt/lib/asan/asan_errors.h b/compiler-rt/lib/asan/asan_errors.h
index 634f6da5443552..40526039fbc76f 100644
--- a/compiler-rt/lib/asan/asan_errors.h
+++ b/compiler-rt/lib/asan/asan_errors.h
@@ -353,6 +353,24 @@ struct ErrorBadParamsToAnnotateDoubleEndedContiguousContainer : ErrorBase {
void Print();
};
+struct ErrorBadParamsToMoveContiguousContainerAnnotations : ErrorBase {
+ const BufferedStackTrace *stack;
+ uptr old_storage_beg, old_storage_end, new_storage_beg, new_storage_end;
+
+ ErrorBadParamsToMoveContiguousContainerAnnotations() = default; // (*)
+ ErrorBadParamsToMoveContiguousContainerAnnotations(
+ u32 tid, BufferedStackTrace *stack_, uptr old_storage_beg_,
+ uptr old_storage_end_, uptr new_storage_beg_, uptr new_storage_end_)
+ : ErrorBase(tid, 10,
+ "bad-__sanitizer_move_contiguous_container_annotations"),
+ stack(stack_),
+ old_storage_beg(old_storage_beg_),
+ old_storage_end(old_storage_end_),
+ new_storage_beg(new_storage_beg_),
+ new_storage_end(new_storage_end_) {}
+ void Print();
+};
+
struct ErrorODRViolation : ErrorBase {
__asan_global global1, global2;
u32 stack_id1, stack_id2;
@@ -421,6 +439,7 @@ struct ErrorGeneric : ErrorBase {
macro(StringFunctionSizeOverflow) \
macro(BadParamsToAnnotateContiguousContainer) \
macro(BadParamsToAnnotateDoubleEndedContiguousContainer) \
+ macro(BadParamsToMoveContiguousContainerAnnotations) \
macro(ODRViolation) \
macro(InvalidPointerPair) \
macro(Generic)
diff --git a/compiler-rt/lib/asan/asan_poisoning.cpp b/compiler-rt/lib/asan/asan_poisoning.cpp
index d600b1a0c241fa..10f5e97f8a5aee 100644
--- a/compiler-rt/lib/asan/asan_poisoning.cpp
+++ b/compiler-rt/lib/asan/asan_poisoning.cpp
@@ -576,6 +576,134 @@ void __sanitizer_annotate_double_ended_contiguous_container(
}
}
+// This function moves annotation from one buffer to another.
+// Old buffer is unpoisoned at the end.
+void __sanitizer_move_contiguous_container_annotations(
+ const void *old_storage_beg_p, const void *old_storage_end_p,
+ const void *new_storage_beg_p, const void *new_storage_end_p) {
+ if (!flags()->detect_container_overflow)
+ return;
+
+ VPrintf(2, "contiguous_container_old: %p %p\n", old_storage_beg_p,
+ old_storage_end_p);
+ VPrintf(2, "contiguous_container_new: %p %p\n", new_storage_beg_p,
+ new_storage_end_p);
+
+ uptr old_storage_beg = reinterpret_cast<uptr>(old_storage_beg_p);
+ uptr old_storage_end = reinterpret_cast<uptr>(old_storage_end_p);
+ uptr new_storage_beg = reinterpret_cast<uptr>(new_storage_beg_p);
+ uptr new_storage_end = reinterpret_cast<uptr>(new_storage_end_p);
+
+ constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
+
+ if (!(old_storage_beg <= old_storage_end) ||
+ !(new_storage_beg <= new_storage_end) ||
+ (old_storage_end - old_storage_beg) !=
+ (new_storage_end - new_storage_beg)) {
+ GET_STACK_TRACE_FATAL_HERE;
+ ReportBadParamsToMoveContiguousContainerAnnotations(
+ old_storage_beg, old_storage_end, new_storage_beg, new_storage_end,
+ &stack);
+ }
+
+ uptr new_internal_beg = RoundUpTo(new_storage_beg, granularity);
+ uptr old_internal_beg = RoundUpTo(old_storage_beg, granularity);
+ uptr new_external_beg = RoundDownTo(new_storage_beg, granularity);
+ uptr old_external_beg = RoundDownTo(old_storage_beg, granularity);
+ uptr new_internal_end = RoundDownTo(new_storage_end, granularity);
+ uptr old_internal_end = RoundDownTo(old_storage_end, granularity);
+
+ // At the very beginning we poison the whole buffer.
+ // Later we unpoison what is necessary.
+ PoisonShadow(new_internal_beg, new_internal_end - new_internal_beg,
+ kAsanContiguousContainerOOBMagic);
+ if (new_internal_beg != new_storage_beg) {
+ uptr new_unpoisoned = *(u8 *)MemToShadow(new_external_beg);
+ if (new_unpoisoned > (new_storage_beg - new_external_beg)) {
+ *(u8 *)MemToShadow(new_external_beg) =
+ static_cast<u8>(new_storage_beg - new_external_beg);
+ }
+ }
+ if (new_internal_end != new_storage_end) {
+ uptr new_unpoisoned = *(u8 *)MemToShadow(new_internal_end);
+ if (new_unpoisoned <= (new_storage_end - new_internal_end)) {
+ *(u8 *)MemToShadow(new_external_beg) =
+ static_cast<u8>(kAsanContiguousContainerOOBMagic);
+ }
+ }
+
+ // There are two cases.
+ // 1) Distance between buffers is granule-aligned.
+ // 2) It's not aligned, that case is slower.
+ if (old_storage_beg % granularity == new_storage_beg % granularity) {
+ // When buffers are aligned in the same way, we can just copy shadow memory,
+ // except first and last granule.
+ __builtin_memcpy((u8 *)MemToShadow(new_internal_beg),
+ (u8 *)MemToShadow(old_internal_beg),
+ (new_internal_end - new_internal_beg) / granularity);
+ // In first granule we cannot poison anything before beginning of the
+ // container.
+ if (new_internal_beg != new_storage_beg) {
+ uptr old_unpoisoned = *(u8 *)MemToShadow(old_external_beg);
+ uptr new_unpoisoned = *(u8 *)MemToShadow(new_external_beg);
+
+ if (old_unpoisoned > old_storage_beg - old_external_beg) {
+ *(u8 *)MemToShadow(new_external_beg) = old_unpoisoned;
+ } else if (new_unpoisoned > new_storage_beg - new_external_beg) {
+ *(u8 *)MemToShadow(new_external_beg) =
+ new_storage_beg - new_external_beg;
+ }
+ }
+ // In last granule we cannot poison anything after the end of the container.
+ if (new_internal_end != new_storage_end) {
+ uptr old_unpoisoned = *(u8 *)MemToShadow(old_internal_end);
+ uptr new_unpoisoned = *(u8 *)MemToShadow(new_internal_end);
+ if (new_unpoisoned <= new_storage_end - new_internal_end &&
+ old_unpoisoned < new_unpoisoned) {
+ *(u8 *)MemToShadow(new_internal_end) = old_unpoisoned;
+ }
+ }
+ } else {
+ // If buffers are not aligned, we have to go byte by byte.
+ uptr old_ptr = old_storage_beg;
+ uptr new_ptr = new_storage_beg;
+ uptr next_new;
+ for (; new_ptr + granularity <= new_storage_end;) {
+ next_new = RoundUpTo(new_ptr + 1, granularity);
+ uptr unpoison_to = 0;
+ for (; new_ptr != next_new; ++new_ptr, ++old_ptr) {
+ if (!AddressIsPoisoned(old_ptr)) {
+ unpoison_to = new_ptr + 1;
+ }
+ }
+ if (unpoison_to != 0) {
+ uptr granule_beg = new_ptr - granularity;
+ uptr value = unpoison_to - granule_beg;
+ *(u8 *)MemToShadow(granule_beg) = static_cast<u8>(value);
+ }
+ }
+ // Only case left is the end of the container in the middle of a granule.
+ // If memory after the end is unpoisoned, we cannot change anything.
+ // But if it's poisoned, we should unpoison as little as possible.
+ if (new_ptr != new_storage_end && AddressIsPoisoned(new_storage_end)) {
+ uptr unpoison_to = 0;
+ for (; new_ptr != new_storage_end; ++new_ptr, ++old_ptr) {
+ if (!AddressIsPoisoned(old_ptr)) {
+ unpoison_to = new_ptr + 1;
+ }
+ }
+ if (unpoison_to != 0) {
+ uptr granule_beg = RoundDownTo(new_storage_end, granularity);
+ uptr value = unpoison_to - granule_beg;
+ *(u8 *)MemToShadow(granule_beg) = static_cast<u8>(value);
+ }
+ }
+ }
+
+ __asan_unpoison_memory_region((void *)old_storage_beg,
+ old_storage_end - old_storage_beg);
+}
+
static const void *FindBadAddress(uptr begin, uptr end, bool poisoned) {
CHECK_LE(begin, end);
constexpr uptr kMaxRangeToCheck = 32;
diff --git a/compiler-rt/lib/asan/asan_report.cpp b/compiler-rt/lib/asan/asan_report.cpp
index fd590e401f67fe..00ef334bb88ffa 100644
--- a/compiler-rt/lib/asan/asan_report.cpp
+++ b/compiler-rt/lib/asan/asan_report.cpp
@@ -367,6 +367,16 @@ void ReportBadParamsToAnnotateDoubleEndedContiguousContainer(
in_report.ReportError(error);
}
+void ReportBadParamsToMoveContiguousContainerAnnotations(
+ uptr old_storage_beg, uptr old_storage_end, uptr new_storage_beg,
+ uptr new_storage_end, BufferedStackTrace *stack) {
+ ScopedInErrorReport in_report;
+ ErrorBadParamsToMoveContiguousContainerAnnotations error(
+ GetCurrentTidOrInvalid(), stack, old_storage_beg, old_storage_end,
+ new_storage_beg, new_storage_end);
+ in_report.ReportError(error);
+}
+
void ReportODRViolation(const __asan_global *g1, u32 stack_id1,
const __asan_global *g2, u32 stack_id2) {
ScopedInErrorReport in_report;
diff --git a/compiler-rt/lib/asan/asan_report.h b/compiler-rt/lib/asan/asan_report.h
index 3540b3b4b1bfe0..d98c87e5a6930b 100644
--- a/compiler-rt/lib/asan/asan_report.h
+++ b/compiler-rt/lib/asan/asan_report.h
@@ -88,6 +88,9 @@ void ReportBadParamsToAnnotateDoubleEndedContiguousContainer(
uptr storage_beg, uptr storage_end, uptr old_container_beg,
uptr old_container_end, uptr new_container_beg, uptr new_container_end,
BufferedStackTrace *stack);
+void ReportBadParamsToMoveContiguousContainerAnnotations(
+ uptr old_storage_beg, uptr old_storage_end, uptr new_storage_beg,
+ uptr new_storage_end, BufferedStackTrace *stack);
void ReportODRViolation(const __asan_global *g1, u32 stack_id1,
const __asan_global *g2, u32 stack_id2);
diff --git a/compiler-rt/lib/sanitizer_common/sanitizer_common_interface.inc b/compiler-rt/lib/sanitizer_common/sanitizer_common_interface.inc
index 66744aa021e62b..e8963b9ca5ebf3 100644
--- a/compiler-rt/lib/sanitizer_common/sanitizer_common_interface.inc
+++ b/compiler-rt/lib/sanitizer_common/sanitizer_common_interface.inc
@@ -10,6 +10,7 @@
INTERFACE_FUNCTION(__sanitizer_acquire_crash_state)
INTERFACE_FUNCTION(__sanitizer_annotate_contiguous_container)
INTERFACE_FUNCTION(__sanitizer_annotate_double_ended_contiguous_container)
+INTERFACE_FUNCTION(__sanitizer_move_contiguous_container_annotations)
INTERFACE_FUNCTION(__sanitizer_contiguous_container_find_bad_address)
INTERFACE_FUNCTION(
__sanitizer_double_ended_contiguous_container_find_bad_address)
diff --git a/compiler-rt/lib/sanitizer_common/sanitizer_interface_internal.h b/compiler-rt/lib/sanitizer_common/sanitizer_interface_internal.h
index c424ab1cecf94f..8c818d3c4c24ae 100644
--- a/compiler-rt/lib/sanitizer_common/sanitizer_interface_internal.h
+++ b/compiler-rt/lib/sanitizer_common/sanitizer_interface_internal.h
@@ -76,6 +76,10 @@ void __sanitizer_annotate_double_ended_contiguous_container(
const void *old_container_beg, const void *old_container_end,
const void *new_container_beg, const void *new_container_end);
SANITIZER_INTERFACE_ATTRIBUTE
+void __sanitizer_move_contiguous_container_annotations(
+ const void *old_storage_beg, const void *old_storage_end,
+ const void *new_storage_beg, const void *new_storage_end);
+SANITIZER_INTERFACE_ATTRIBUTE
int __sanitizer_verify_contiguous_container(const void *beg, const void *mid,
const void *end);
SANITIZER_INTERFACE_ATTRIBUTE
diff --git a/compiler-rt/test/asan/TestCases/move_container_annotations.cpp b/compiler-rt/test/asan/TestCases/move_container_annotations.cpp
new file mode 100644
index 00000000000000..f21826ca0b7939
--- /dev/null
+++ b/compiler-rt/test/asan/TestCases/move_container_annotations.cpp
@@ -0,0 +1,155 @@
+// RUN: %clangxx_asan -fexceptions -O %s -o %t && %env_asan_opts=detect_stack_use_after_return=0 %run %t
+//
+// Test __sanitizer_move_contiguous_container_annotations.
+
+#include <algorithm>
+#include <deque>
+#include <numeric>
+
+#include <assert.h>
+#include <sanitizer/asan_interface.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+static constexpr size_t kGranularity = 8;
+
+template <class T> static constexpr T RoundDown(T x) {
+ return reinterpret_cast<T>(reinterpret_cast<uintptr_t>(x) &
+ ~(kGranularity - 1));
+}
+template <class T> static constexpr T RoundUp(T x) {
+ return (x == RoundDown(x))
+ ? x
+ : reinterpret_cast<T>(reinterpret_cast<uintptr_t>(RoundDown(x)) +
+ kGranularity);
+}
+
+static std::deque<int> GetPoisonedState(char *begin, char *end) {
+ std::deque<int> result;
+ for (; begin != end; ++begin) {
+ result.push_back(__asan_address_is_poisoned(begin));
+ }
+ return result;
+}
+
+static void RandomPoison(char *beg, char *end) {
+ if (beg != RoundDown(beg) && (rand() % 2 == 1)) {
+ __asan_poison_memory_region(beg, RoundUp(beg) - beg);
+ __asan_unpoison_memory_region(beg, rand() % (RoundUp(beg) - beg + 1));
+ }
+ for (beg = RoundUp(beg); beg + kGranularity <= end; beg += kGranularity) {
+ __asan_poison_memory_region(beg, kGranularity);
+ __asan_unpoison_memory_region(beg, rand() % (kGranularity + 1));
+ }
+ if (end > beg && __asan_address_is_poisoned(end)) {
+ __asan_poison_memory_region(beg, kGranularity);
+ __asan_unpoison_memory_region(beg, rand() % (end - beg + 1));
+ }
+}
+
+static size_t count_unpoisoned(std::deque<int> &poison_states, size_t n) {
+ size_t result = 0;
+ for (size_t i = 0; i < n && !poison_states.empty(); ++i) {
+ if (!poison_states.front()) {
+ result = i + 1;
+ }
+ poison_states.pop_front();
+ }
+
+ return result;
+}
+
+void TestMove(size_t capacity, size_t off_old, size_t off_new,
+ int poison_buffers) {
+ size_t old_buffer_size = capacity + off_old + kGranularity * 2;
+ size_t new_buffer_size = capacity + off_new + kGranularity * 2;
+ char *old_buffer = new char[old_buffer_size];
+ char *new_buffer = new char[new_buffer_size];
+ char *old_buffer_end = old_buffer + old_buffer_size;
+ char *new_buffer_end = new_buffer + new_buffer_size;
+ bool poison_old = poison_buffers % 2 == 1;
+ bool poison_new = poison_buffers / 2 == 1;
+ if (poison_old)
+ __asan_poison_memory_region(old_buffer, old_buffer_size);
+ if (poison_new)
+ __asan_poison_memory_region(new_buffer, new_buffer_size);
+ char *old_beg = old_buffer + off_old;
+ char *new_beg = new_buffer + off_new;
+ char *old_end = old_beg + capacity;
+ char *new_end = new_beg + capacity;
+
+ for (int i = 0; i < 1000; i++) {
+ RandomPoison(old_beg, old_end);
+ std::deque<int> poison_states(old_beg, old_end);
+ __sanitizer_move_contiguous_container_annotations(old_beg, old_end, new_beg,
+ new_end);
+
+ // If old_buffer were poisoned, expected state of memory before old_beg
+ // is undetermined.
+ // If old buffer were not poisoned, that memory should still be unpoisoned.
+ // Area between old_beg and old_end should never be poisoned.
+ char *cur = poison_old ? old_beg : old_buffer;
+ for (; cur < old_end; ++cur) {
+ assert(!__asan_address_is_poisoned(cur));
+ }
+ // Memory after old_beg should be the same as at the beginning.
+ for (; cur < old_buffer_end; ++cur) {
+ assert(__asan_address_is_poisoned(cur) == poison_old);
+ }
+
+ // If new_buffer were not poisoned, memory before new_beg should never
+ // be poisoned. Otherwise, its state is undetermined.
+ if (!poison_new) {
+ for (cur = new_buffer; cur < new_beg; ++cur) {
+ assert(!__asan_address_is_poisoned(cur));
+ }
+ }
+ // In every granule, poisoned memory should come after the last expected unpoisoned byte.
+ char *next;
+ for (cur = new_beg; cur + kGranularity <= new_end; cur = next) {
+ next = RoundUp(cur + 1);
+ size_t unpoisoned = count_unpoisoned(poison_states, next - cur);
+ if (unpoisoned > 0) {
+ assert(!__asan_address_is_poisoned(cur + unpoisoned - 1));
+ }
+ if (cur + unpoisoned < next) {
+ assert(__asan_address_is_poisoned(cur + unpoisoned));
+ }
+ }
+ // [cur; new_end) is not checked yet.
+ // If new_buffer was not poisoned, it cannot be poisoned now, so we can skip the check.
+ // If new_buffer was poisoned, it should be the same as before.
+ if (cur < new_end && poison_new) {
+ size_t unpoisoned = count_unpoisoned(poison_states, new_end - cur);
+ if (unpoisoned > 0) {
+ assert(!__asan_address_is_poisoned(cur + unpoisoned - 1));
+ }
+ if (cur + unpoisoned < new_end) {
+ assert(__asan_address_is_poisoned(cur + unpoisoned));
+ }
+ }
+ // Memory annotations after new_end should be unchanged.
+ for (cur = new_end; cur < new_buffer_end; ++cur) {
+ assert(__asan_address_is_poisoned(cur) == poison_new);
+ }
+ }
+
+ __asan_unpoison_memory_region(old_buffer, old_buffer_size);
+ __asan_unpoison_memory_region(new_buffer, new_buffer_size);
+ delete[] old_buffer;
+ delete[] new_buffer;
+}
+
+int main(int argc, char **argv) {
+ int n = argc == 1 ? 64 : atoi(argv[1]);
+ for (int i = 0; i <= n; i++) {
+ for (int j = 0; j < kGranularity * 2; j++) {
+ for (int k = 0; k < kGranularity * 2; k++) {
+ for (int poison = 0; poison < 4; ++poison) {
+ TestMove(i, j, k, poison);
+ }
+ }
+ }
+ }
+}
From b120349cc49c6084de4c33e5a937ca2b33293871 Mon Sep 17 00:00:00 2001
From: Advenam Tacet <advenam.tacet at gmail.com>
Date: Tue, 20 Aug 2024 04:12:54 +0200
Subject: [PATCH 02/25] Not overlapping containers
Implemented a correct container annotation copying function (non-overlapping containers only)
Refactored the initial PoC into a fully functional implementation for copying container annotations without moving/copying the memory contents themselves. The function currently supports only cases where the source and destination containers do not overlap; handling overlapping containers would add complexity.
The function is designed to work irrespective of whether the buffers themselves, or the distance between them, are granule-aligned. However, such scenarios may have an impact on performance.
A test case has been included to verify the correctness of the implementation.
Removed the unpoisoning of the original buffer at the end; users can do it themselves if they want to.
---
compiler-rt/lib/asan/asan_poisoning.cpp | 176 +++++++++++-------
.../TestCases/move_container_annotations.cpp | 45 +++--
2 files changed, 135 insertions(+), 86 deletions(-)
diff --git a/compiler-rt/lib/asan/asan_poisoning.cpp b/compiler-rt/lib/asan/asan_poisoning.cpp
index 10f5e97f8a5aee..a079f0ea607910 100644
--- a/compiler-rt/lib/asan/asan_poisoning.cpp
+++ b/compiler-rt/lib/asan/asan_poisoning.cpp
@@ -576,8 +576,46 @@ void __sanitizer_annotate_double_ended_contiguous_container(
}
}
-// This function moves annotation from one buffer to another.
-// Old buffer is unpoisoned at the end.
+static bool WithinOneGranule(uptr p, uptr q) {
+ if (p == q)
+ return true;
+ return RoundDownTo(p, ASAN_SHADOW_GRANULARITY) ==
+ RoundDownTo(q - 1, ASAN_SHADOW_GRANULARITY);
+}
+
+static void PoisonContainer(uptr storage_beg, uptr storage_end) {
+ constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
+ uptr internal_beg = RoundUpTo(storage_beg, granularity);
+ uptr external_beg = RoundDownTo(storage_beg, granularity);
+ uptr internal_end = RoundDownTo(storage_end, granularity);
+
+ if (internal_end > internal_beg)
+ PoisonShadow(internal_beg, internal_end - internal_beg,
+ kAsanContiguousContainerOOBMagic);
+ // The new buffer may start in the middle of a granule.
+ if (internal_beg != storage_beg && internal_beg < internal_end &&
+ !AddressIsPoisoned(storage_beg)) {
+ *(u8 *)MemToShadow(external_beg) =
+ static_cast<u8>(storage_beg - external_beg);
+ }
+ // The new buffer may end in the middle of a granule.
+ if (internal_end != storage_end && AddressIsPoisoned(storage_end)) {
+ *(u8 *)MemToShadow(internal_end) =
+ static_cast<u8>(kAsanContiguousContainerOOBMagic);
+ }
+}
+
+// This function copies ASan memory annotations (poisoned/unpoisoned states)
+// from one buffer to another.
+// Its main purpose is to help with relocating trivially relocatable objects
+// whose memory may be poisoned, without calling the copy constructor.
+// However, it does not move the memory contents themselves, only annotations.
+// If the buffers aren't aligned (i.e., the distance between them isn't
+// granule-aligned:
+//     old_storage_beg % granularity != new_storage_beg % granularity
+// ), the function handles this by going byte by byte, which slows it down.
+// The old buffer annotations are not removed. If necessary, the user can
+// unpoison the old buffer with __asan_unpoison_memory_region.
void __sanitizer_move_contiguous_container_annotations(
const void *old_storage_beg_p, const void *old_storage_end_p,
const void *new_storage_beg_p, const void *new_storage_end_p) {
@@ -606,6 +644,9 @@ void __sanitizer_move_contiguous_container_annotations(
&stack);
}
+ if (old_storage_beg == old_storage_end)
+ return;
+
uptr new_internal_beg = RoundUpTo(new_storage_beg, granularity);
uptr old_internal_beg = RoundUpTo(old_storage_beg, granularity);
uptr new_external_beg = RoundDownTo(new_storage_beg, granularity);
@@ -615,52 +656,62 @@ void __sanitizer_move_contiguous_container_annotations(
// At the very beginning we poison the whole buffer.
// Later we unpoison what is necessary.
- PoisonShadow(new_internal_beg, new_internal_end - new_internal_beg,
- kAsanContiguousContainerOOBMagic);
- if (new_internal_beg != new_storage_beg) {
- uptr new_unpoisoned = *(u8 *)MemToShadow(new_external_beg);
- if (new_unpoisoned > (new_storage_beg - new_external_beg)) {
- *(u8 *)MemToShadow(new_external_beg) =
- static_cast<u8>(new_storage_beg - new_external_beg);
- }
- }
- if (new_internal_end != new_storage_end) {
- uptr new_unpoisoned = *(u8 *)MemToShadow(new_internal_end);
- if (new_unpoisoned <= (new_storage_end - new_internal_end)) {
- *(u8 *)MemToShadow(new_external_beg) =
- static_cast<u8>(kAsanContiguousContainerOOBMagic);
- }
- }
+ PoisonContainer(new_storage_beg, new_storage_end);
// There are two cases.
// 1) Distance between buffers is granule-aligned.
- // 2) It's not aligned, that case is slower.
+ // 2) It's not aligned, and therefore requires going byte by byte.
if (old_storage_beg % granularity == new_storage_beg % granularity) {
// When buffers are aligned in the same way, we can just copy shadow memory,
- // except first and last granule.
- __builtin_memcpy((u8 *)MemToShadow(new_internal_beg),
- (u8 *)MemToShadow(old_internal_beg),
- (new_internal_end - new_internal_beg) / granularity);
- // In first granule we cannot poison anything before beginning of the
- // container.
- if (new_internal_beg != new_storage_beg) {
- uptr old_unpoisoned = *(u8 *)MemToShadow(old_external_beg);
- uptr new_unpoisoned = *(u8 *)MemToShadow(new_external_beg);
-
- if (old_unpoisoned > old_storage_beg - old_external_beg) {
- *(u8 *)MemToShadow(new_external_beg) = old_unpoisoned;
- } else if (new_unpoisoned > new_storage_beg - new_external_beg) {
- *(u8 *)MemToShadow(new_external_beg) =
- new_storage_beg - new_external_beg;
- }
- }
- // In last granule we cannot poison anything after the end of the container.
- if (new_internal_end != new_storage_end) {
- uptr old_unpoisoned = *(u8 *)MemToShadow(old_internal_end);
- uptr new_unpoisoned = *(u8 *)MemToShadow(new_internal_end);
- if (new_unpoisoned <= new_storage_end - new_internal_end &&
- old_unpoisoned < new_unpoisoned) {
- *(u8 *)MemToShadow(new_internal_end) = old_unpoisoned;
+ // except the first and the last granule.
+ if (new_internal_end > new_internal_beg)
+ __builtin_memcpy((u8 *)MemToShadow(new_internal_beg),
+ (u8 *)MemToShadow(old_internal_beg),
+ (new_internal_end - new_internal_beg) / granularity);
+ // If the beginning and the end of the storage are aligned, we are done.
+ // Otherwise, we have to handle remaining granules.
+ if (new_internal_beg != new_storage_beg ||
+ new_internal_end != new_storage_end) {
+ if (WithinOneGranule(new_storage_beg, new_storage_end)) {
+ if (new_internal_end == new_storage_end) {
+ if (!AddressIsPoisoned(old_storage_beg)) {
+ *(u8 *)MemToShadow(new_external_beg) =
+ *(u8 *)MemToShadow(old_external_beg);
+ } else if (!AddressIsPoisoned(new_storage_beg)) {
+ *(u8 *)MemToShadow(new_external_beg) =
+ new_storage_beg - new_external_beg;
+ }
+ } else if (AddressIsPoisoned(new_storage_end)) {
+ if (!AddressIsPoisoned(old_storage_beg)) {
+ *(u8 *)MemToShadow(new_external_beg) =
+ AddressIsPoisoned(old_storage_end)
+ ? *(u8 *)MemToShadow(old_internal_end)
+ : new_storage_end - new_external_beg;
+ } else if (!AddressIsPoisoned(new_storage_beg)) {
+ *(u8 *)MemToShadow(new_external_beg) =
+ (new_storage_beg == new_external_beg)
+ ? static_cast<u8>(kAsanContiguousContainerOOBMagic)
+ : new_storage_beg - new_external_beg;
+ }
+ }
+ } else {
+ // Buffer is not within one granule!
+ if (new_internal_beg != new_storage_beg) {
+ if (!AddressIsPoisoned(old_storage_beg)) {
+ *(u8 *)MemToShadow(new_external_beg) =
+ *(u8 *)MemToShadow(old_external_beg);
+ } else if (!AddressIsPoisoned(new_storage_beg)) {
+ *(u8 *)MemToShadow(new_external_beg) =
+ new_storage_beg - new_external_beg;
+ }
+ }
+ if (new_internal_end != new_storage_end &&
+ AddressIsPoisoned(new_storage_end)) {
+ *(u8 *)MemToShadow(new_internal_end) =
+ AddressIsPoisoned(old_storage_end)
+ ? *(u8 *)MemToShadow(old_internal_end)
+ : old_storage_end - old_internal_end;
+ }
}
}
} else {
@@ -668,40 +719,31 @@ void __sanitizer_move_contiguous_container_annotations(
uptr old_ptr = old_storage_beg;
uptr new_ptr = new_storage_beg;
uptr next_new;
- for (; new_ptr + granularity <= new_storage_end;) {
+ for (; new_ptr < new_storage_end;) {
next_new = RoundUpTo(new_ptr + 1, granularity);
uptr unpoison_to = 0;
- for (; new_ptr != next_new; ++new_ptr, ++old_ptr) {
+ for (; new_ptr != next_new && new_ptr != new_storage_end;
+ ++new_ptr, ++old_ptr) {
if (!AddressIsPoisoned(old_ptr)) {
unpoison_to = new_ptr + 1;
}
}
- if (unpoison_to != 0) {
- uptr granule_beg = new_ptr - granularity;
- uptr value = unpoison_to - granule_beg;
- *(u8 *)MemToShadow(granule_beg) = static_cast<u8>(value);
- }
- }
- // Only case left is the end of the container in the middle of a granule.
- // If memory after the end is unpoisoned, we cannot change anything.
- // But if it's poisoned, we should unpoison as little as possible.
- if (new_ptr != new_storage_end && AddressIsPoisoned(new_storage_end)) {
- uptr unpoison_to = 0;
- for (; new_ptr != new_storage_end; ++new_ptr, ++old_ptr) {
- if (!AddressIsPoisoned(old_ptr)) {
- unpoison_to = new_ptr + 1;
+ if (new_ptr < new_storage_end || new_ptr == new_internal_end ||
+ AddressIsPoisoned(new_storage_end)) {
+ uptr granule_beg = RoundDownTo(new_ptr - 1, granularity);
+ if (unpoison_to != 0) {
+ uptr value =
+ (unpoison_to == next_new) ? 0 : unpoison_to - granule_beg;
+ *(u8 *)MemToShadow(granule_beg) = static_cast<u8>(value);
+ } else {
+ *(u8 *)MemToShadow(granule_beg) =
+ (granule_beg >= new_storage_beg)
+ ? static_cast<u8>(kAsanContiguousContainerOOBMagic)
+ : new_storage_beg - granule_beg;
}
}
- if (unpoison_to != 0) {
- uptr granule_beg = RoundDownTo(new_storage_end, granularity);
- uptr value = unpoison_to - granule_beg;
- *(u8 *)MemToShadow(granule_beg) = static_cast<u8>(value);
- }
}
}
-
- __asan_unpoison_memory_region((void *)old_storage_beg,
- old_storage_end - old_storage_beg);
}
static const void *FindBadAddress(uptr begin, uptr end, bool poisoned) {
diff --git a/compiler-rt/test/asan/TestCases/move_container_annotations.cpp b/compiler-rt/test/asan/TestCases/move_container_annotations.cpp
index f21826ca0b7939..cbbcfe89b0d682 100644
--- a/compiler-rt/test/asan/TestCases/move_container_annotations.cpp
+++ b/compiler-rt/test/asan/TestCases/move_container_annotations.cpp
@@ -27,14 +27,15 @@ template <class T> static constexpr T RoundUp(T x) {
static std::deque<int> GetPoisonedState(char *begin, char *end) {
std::deque<int> result;
- for (; begin != end; ++begin) {
- result.push_back(__asan_address_is_poisoned(begin));
+ for (char *ptr = begin; ptr != end; ++ptr) {
+ result.push_back(__asan_address_is_poisoned(ptr));
}
return result;
}
static void RandomPoison(char *beg, char *end) {
- if (beg != RoundDown(beg) && (rand() % 2 == 1)) {
+ if (beg != RoundDown(beg) && RoundDown(beg) != RoundDown(end) &&
+ rand() % 2 == 1) {
__asan_poison_memory_region(beg, RoundUp(beg) - beg);
__asan_unpoison_memory_region(beg, rand() % (RoundUp(beg) - beg + 1));
}
@@ -70,31 +71,36 @@ void TestMove(size_t capacity, size_t off_old, size_t off_new,
char *new_buffer_end = new_buffer + new_buffer_size;
bool poison_old = poison_buffers % 2 == 1;
bool poison_new = poison_buffers / 2 == 1;
- if (poison_old)
- __asan_poison_memory_region(old_buffer, old_buffer_size);
- if (poison_new)
- __asan_poison_memory_region(new_buffer, new_buffer_size);
char *old_beg = old_buffer + off_old;
char *new_beg = new_buffer + off_new;
char *old_end = old_beg + capacity;
char *new_end = new_beg + capacity;
- for (int i = 0; i < 1000; i++) {
+ for (int i = 0; i < 75; i++) {
+ if (poison_old)
+ __asan_poison_memory_region(old_buffer, old_buffer_size);
+ if (poison_new)
+ __asan_poison_memory_region(new_buffer, new_buffer_size);
+
RandomPoison(old_beg, old_end);
- std::deque<int> poison_states(old_beg, old_end);
+ std::deque<int> poison_states = GetPoisonedState(old_beg, old_end);
__sanitizer_move_contiguous_container_annotations(old_beg, old_end, new_beg,
new_end);
// If old_buffer were poisoned, expected state of memory before old_beg
// is undetermined.
// If old buffer were not poisoned, that memory should still be unpoisoned.
- // Area between old_beg and old_end should never be poisoned.
- char *cur = poison_old ? old_beg : old_buffer;
- for (; cur < old_end; ++cur) {
- assert(!__asan_address_is_poisoned(cur));
+ char *cur;
+ if (!poison_old) {
+ for (cur = old_buffer; cur < old_beg; ++cur) {
+ assert(!__asan_address_is_poisoned(cur));
+ }
+ }
+ for (size_t i = 0; i < poison_states.size(); ++i) {
+ assert(__asan_address_is_poisoned(&old_beg[i]) == poison_states[i]);
}
- // Memory after old_beg should be the same as at the beginning.
- for (; cur < old_buffer_end; ++cur) {
+ // Memory after old_end should be the same as at the beginning.
+ for (cur = old_end; cur < old_buffer_end; ++cur) {
assert(__asan_address_is_poisoned(cur) == poison_old);
}
@@ -118,7 +124,8 @@ void TestMove(size_t capacity, size_t off_old, size_t off_new,
}
}
// [cur; new_end) is not checked yet.
- // If new_buffer were not poisoned, it cannot be poisoned and we can ignore check.
+ // If new_buffer were not poisoned, it cannot be poisoned and we can ignore
+ // a separate check.
// If new_buffer were poisoned, it should be same as earlier.
if (cur < new_end && poison_new) {
size_t unpoisoned = count_unpoisoned(poison_states, new_end - cur);
@@ -143,9 +150,9 @@ void TestMove(size_t capacity, size_t off_old, size_t off_new,
int main(int argc, char **argv) {
int n = argc == 1 ? 64 : atoi(argv[1]);
- for (int i = 0; i <= n; i++) {
- for (int j = 0; j < kGranularity * 2; j++) {
- for (int k = 0; k < kGranularity * 2; k++) {
+ for (size_t j = 0; j < kGranularity * 2; j++) {
+ for (size_t k = 0; k < kGranularity * 2; k++) {
+ for (int i = 0; i <= n; i++) {
for (int poison = 0; poison < 4; ++poison) {
TestMove(i, j, k, poison);
}
>From 9aee9a41049c25f450c6a6008a85bf01edb8db58 Mon Sep 17 00:00:00 2001
From: Advenam Tacet <advenam.tacet at gmail.com>
Date: Tue, 27 Aug 2024 07:22:38 +0200
Subject: [PATCH 03/25] Simplify and improve readability
Added helper functions, removed unnecessary branches, simplified code.
---
.../include/sanitizer/common_interface_defs.h | 2 +-
compiler-rt/lib/asan/asan_errors.cpp | 4 +-
compiler-rt/lib/asan/asan_errors.h | 8 +-
compiler-rt/lib/asan/asan_poisoning.cpp | 199 ++++++++----------
compiler-rt/lib/asan/asan_report.cpp | 4 +-
compiler-rt/lib/asan/asan_report.h | 2 +-
.../sanitizer_common_interface.inc | 2 +-
.../sanitizer_interface_internal.h | 2 +-
.../TestCases/move_container_annotations.cpp | 16 +-
9 files changed, 108 insertions(+), 131 deletions(-)
diff --git a/compiler-rt/include/sanitizer/common_interface_defs.h b/compiler-rt/include/sanitizer/common_interface_defs.h
index a3c91a9be6439b..e9101e39d72ddd 100644
--- a/compiler-rt/include/sanitizer/common_interface_defs.h
+++ b/compiler-rt/include/sanitizer/common_interface_defs.h
@@ -213,7 +213,7 @@ void SANITIZER_CDECL __sanitizer_annotate_double_ended_contiguous_container(
/// \param old_storage_end End of the old container region.
/// \param new_storage_beg Beginning of the new container region.
/// \param new_storage_end End of the new container region.
-void SANITIZER_CDECL __sanitizer_move_contiguous_container_annotations(
+void SANITIZER_CDECL __sanitizer_copy_contiguous_container_annotations(
const void *old_storage_beg, const void *old_storage_end,
const void *new_storage_beg, const void *new_storage_end);
diff --git a/compiler-rt/lib/asan/asan_errors.cpp b/compiler-rt/lib/asan/asan_errors.cpp
index 11d67f4ace1064..8b712c70029f76 100644
--- a/compiler-rt/lib/asan/asan_errors.cpp
+++ b/compiler-rt/lib/asan/asan_errors.cpp
@@ -348,10 +348,10 @@ void ErrorBadParamsToAnnotateDoubleEndedContiguousContainer::Print() {
ReportErrorSummary(scariness.GetDescription(), stack);
}
-void ErrorBadParamsToMoveContiguousContainerAnnotations::Print() {
+void ErrorBadParamsToCopyContiguousContainerAnnotations::Print() {
Report(
"ERROR: AddressSanitizer: bad parameters to "
- "__sanitizer_move_contiguous_container_annotations:\n"
+ "__sanitizer_copy_contiguous_container_annotations:\n"
" old_storage_beg : %p\n"
" old_storage_end : %p\n"
" new_storage_beg : %p\n"
diff --git a/compiler-rt/lib/asan/asan_errors.h b/compiler-rt/lib/asan/asan_errors.h
index 40526039fbc76f..b3af655e66639a 100644
--- a/compiler-rt/lib/asan/asan_errors.h
+++ b/compiler-rt/lib/asan/asan_errors.h
@@ -353,12 +353,12 @@ struct ErrorBadParamsToAnnotateDoubleEndedContiguousContainer : ErrorBase {
void Print();
};
-struct ErrorBadParamsToMoveContiguousContainerAnnotations : ErrorBase {
+struct ErrorBadParamsToCopyContiguousContainerAnnotations : ErrorBase {
const BufferedStackTrace *stack;
uptr old_storage_beg, old_storage_end, new_storage_beg, new_storage_end;
- ErrorBadParamsToMoveContiguousContainerAnnotations() = default; // (*)
- ErrorBadParamsToMoveContiguousContainerAnnotations(
+ ErrorBadParamsToCopyContiguousContainerAnnotations() = default; // (*)
+ ErrorBadParamsToCopyContiguousContainerAnnotations(
u32 tid, BufferedStackTrace *stack_, uptr old_storage_beg_,
uptr old_storage_end_, uptr new_storage_beg_, uptr new_storage_end_)
: ErrorBase(tid, 10,
@@ -439,7 +439,7 @@ struct ErrorGeneric : ErrorBase {
macro(StringFunctionSizeOverflow) \
macro(BadParamsToAnnotateContiguousContainer) \
macro(BadParamsToAnnotateDoubleEndedContiguousContainer) \
- macro(BadParamsToMoveContiguousContainerAnnotations) \
+ macro(BadParamsToCopyContiguousContainerAnnotations) \
macro(ODRViolation) \
macro(InvalidPointerPair) \
macro(Generic)
diff --git a/compiler-rt/lib/asan/asan_poisoning.cpp b/compiler-rt/lib/asan/asan_poisoning.cpp
index a079f0ea607910..e661b907642726 100644
--- a/compiler-rt/lib/asan/asan_poisoning.cpp
+++ b/compiler-rt/lib/asan/asan_poisoning.cpp
@@ -576,6 +576,7 @@ void __sanitizer_annotate_double_ended_contiguous_container(
}
}
+// Checks if two pointers fall within the same memory granule.
static bool WithinOneGranule(uptr p, uptr q) {
if (p == q)
return true;
@@ -583,25 +584,58 @@ static bool WithinOneGranule(uptr p, uptr q) {
RoundDownTo(q - 1, ASAN_SHADOW_GRANULARITY);
}
-static void PoisonContainer(uptr storage_beg, uptr storage_end) {
+// Copies ASan memory annotation (a shadow memory value)
+// from one granule to another.
+static void CopyGranuleAnnotation(uptr dst, uptr src) {
+ *(u8 *)MemToShadow(dst) = *(u8 *)MemToShadow(src);
+}
+
+// Marks the specified number of bytes in a granule as accessible or
+// poisons the whole granule with the kAsanContiguousContainerOOBMagic value.
+static void AnnotateContainerGranuleAccessibleBytes(uptr ptr, u8 n) {
constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
- uptr internal_beg = RoundUpTo(storage_beg, granularity);
- uptr external_beg = RoundDownTo(storage_beg, granularity);
- uptr internal_end = RoundDownTo(storage_end, granularity);
-
- if (internal_end > internal_beg)
- PoisonShadow(internal_beg, internal_end - internal_beg,
- kAsanContiguousContainerOOBMagic);
- // The new buffer may start in the middle of a granule.
- if (internal_beg != storage_beg && internal_beg < internal_end &&
- !AddressIsPoisoned(storage_beg)) {
- *(u8 *)MemToShadow(external_beg) =
- static_cast<u8>(storage_beg - external_beg);
+ if (n == granularity) {
+ *(u8 *)MemToShadow(ptr) = 0;
+ } else if (n == 0) {
+ *(u8 *)MemToShadow(ptr) = static_cast<u8>(kAsanContiguousContainerOOBMagic);
+ } else {
+ *(u8 *)MemToShadow(ptr) = n;
}
- // The new buffer may end in the middle of a granule.
- if (internal_end != storage_end && AddressIsPoisoned(storage_end)) {
- *(u8 *)MemToShadow(internal_end) =
- static_cast<u8>(kAsanContiguousContainerOOBMagic);
+}
+
+// Performs a byte-by-byte copy of ASan annotations (shadow memory values).
+// The result may differ from the original due to ASan limitations, but it
+// cannot lead to false positives (more memory than requested may get unpoisoned).
+static void SlowCopyContainerAnnotations(uptr old_storage_beg,
+ uptr old_storage_end,
+ uptr new_storage_beg,
+ uptr new_storage_end) {
+ constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
+ uptr new_internal_end = RoundDownTo(new_storage_end, granularity);
+ uptr old_ptr = old_storage_beg;
+ uptr new_ptr = new_storage_beg;
+
+ while (new_ptr < new_storage_end) {
+ uptr next_new = RoundUpTo(new_ptr + 1, granularity);
+ uptr granule_begin = next_new - granularity;
+ uptr unpoisoned_bytes = 0;
+
+ for (; new_ptr != next_new && new_ptr != new_storage_end;
+ ++new_ptr, ++old_ptr) {
+ if (!AddressIsPoisoned(old_ptr)) {
+ unpoisoned_bytes = new_ptr - granule_begin + 1;
+ }
+ }
+ if (new_ptr < new_storage_end || new_ptr == new_internal_end ||
+ AddressIsPoisoned(new_storage_end)) {
+ if (unpoisoned_bytes != 0 || granule_begin >= new_storage_beg) {
+ AnnotateContainerGranuleAccessibleBytes(granule_begin,
+ unpoisoned_bytes);
+ } else if (!AddressIsPoisoned(new_storage_beg)) {
+ AnnotateContainerGranuleAccessibleBytes(
+ granule_begin, new_storage_beg - granule_begin);
+ }
+ }
}
}
@@ -616,7 +650,7 @@ static void PoisonContainer(uptr storage_beg, uptr storage_end) {
// the function handles this by going byte by byte, slowing down performance.
// The old buffer annotations are not removed. If necessary,
// user can unpoison old buffer with __asan_unpoison_memory_region.
-void __sanitizer_move_contiguous_container_annotations(
+void __sanitizer_copy_contiguous_container_annotations(
const void *old_storage_beg_p, const void *old_storage_end_p,
const void *new_storage_beg_p, const void *new_storage_end_p) {
if (!flags()->detect_container_overflow)
@@ -639,7 +673,7 @@ void __sanitizer_move_contiguous_container_annotations(
(old_storage_end - old_storage_beg) !=
(new_storage_end - new_storage_beg)) {
GET_STACK_TRACE_FATAL_HERE;
- ReportBadParamsToMoveContiguousContainerAnnotations(
+ ReportBadParamsToCopyContiguousContainerAnnotations(
old_storage_beg, old_storage_end, new_storage_beg, new_storage_end,
&stack);
}
@@ -647,101 +681,44 @@ void __sanitizer_move_contiguous_container_annotations(
if (old_storage_beg == old_storage_end)
return;
+ if (old_storage_beg % granularity != new_storage_beg % granularity ||
+ WithinOneGranule(new_storage_beg, new_storage_end)) {
+ SlowCopyContainerAnnotations(old_storage_beg, old_storage_end,
+ new_storage_beg, new_storage_end);
+ return;
+ }
+
uptr new_internal_beg = RoundUpTo(new_storage_beg, granularity);
- uptr old_internal_beg = RoundUpTo(old_storage_beg, granularity);
- uptr new_external_beg = RoundDownTo(new_storage_beg, granularity);
- uptr old_external_beg = RoundDownTo(old_storage_beg, granularity);
uptr new_internal_end = RoundDownTo(new_storage_end, granularity);
- uptr old_internal_end = RoundDownTo(old_storage_end, granularity);
-
- // At the very beginning we poison the whole buffer.
- // Later we unpoison what is necessary.
- PoisonContainer(new_storage_beg, new_storage_end);
-
- // There are two cases.
- // 1) Distance between buffers is granule-aligned.
- // 2) It's not aligned, and therefore requires going byte by byte.
- if (old_storage_beg % granularity == new_storage_beg % granularity) {
- // When buffers are aligned in the same way, we can just copy shadow memory,
- // except the first and the last granule.
- if (new_internal_end > new_internal_beg)
- __builtin_memcpy((u8 *)MemToShadow(new_internal_beg),
- (u8 *)MemToShadow(old_internal_beg),
- (new_internal_end - new_internal_beg) / granularity);
- // If the beginning and the end of the storage are aligned, we are done.
- // Otherwise, we have to handle remaining granules.
- if (new_internal_beg != new_storage_beg ||
- new_internal_end != new_storage_end) {
- if (WithinOneGranule(new_storage_beg, new_storage_end)) {
- if (new_internal_end == new_storage_end) {
- if (!AddressIsPoisoned(old_storage_beg)) {
- *(u8 *)MemToShadow(new_external_beg) =
- *(u8 *)MemToShadow(old_external_beg);
- } else if (!AddressIsPoisoned(new_storage_beg)) {
- *(u8 *)MemToShadow(new_external_beg) =
- new_storage_beg - new_external_beg;
- }
- } else if (AddressIsPoisoned(new_storage_end)) {
- if (!AddressIsPoisoned(old_storage_beg)) {
- *(u8 *)MemToShadow(new_external_beg) =
- AddressIsPoisoned(old_storage_end)
- ? *(u8 *)MemToShadow(old_internal_end)
- : new_storage_end - new_external_beg;
- } else if (!AddressIsPoisoned(new_storage_beg)) {
- *(u8 *)MemToShadow(new_external_beg) =
- (new_storage_beg == new_external_beg)
- ? static_cast<u8>(kAsanContiguousContainerOOBMagic)
- : new_storage_beg - new_external_beg;
- }
- }
- } else {
- // Buffer is not within one granule!
- if (new_internal_beg != new_storage_beg) {
- if (!AddressIsPoisoned(old_storage_beg)) {
- *(u8 *)MemToShadow(new_external_beg) =
- *(u8 *)MemToShadow(old_external_beg);
- } else if (!AddressIsPoisoned(new_storage_beg)) {
- *(u8 *)MemToShadow(new_external_beg) =
- new_storage_beg - new_external_beg;
- }
- }
- if (new_internal_end != new_storage_end &&
- AddressIsPoisoned(new_storage_end)) {
- *(u8 *)MemToShadow(new_internal_end) =
- AddressIsPoisoned(old_storage_end)
- ? *(u8 *)MemToShadow(old_internal_end)
- : old_storage_end - old_internal_end;
- }
- }
+ if (new_internal_end > new_internal_beg) {
+ uptr old_internal_beg = RoundUpTo(old_storage_beg, granularity);
+ __builtin_memcpy((u8 *)MemToShadow(new_internal_beg),
+ (u8 *)MemToShadow(old_internal_beg),
+ (new_internal_end - new_internal_beg) / granularity);
+ }
+ // The only remaining cases involve edge granules when the container starts or
+ // ends within a granule. We already know that the container's start and end
+ // points lie in different granules.
+ if (new_internal_beg != new_storage_beg) {
+ // First granule
+ uptr new_external_beg = RoundDownTo(new_storage_beg, granularity);
+ uptr old_external_beg = RoundDownTo(old_storage_beg, granularity);
+ if (!AddressIsPoisoned(old_storage_beg)) {
+ CopyGranuleAnnotation(new_external_beg, old_external_beg);
+ } else if (!AddressIsPoisoned(new_storage_beg)) {
+ AnnotateContainerGranuleAccessibleBytes(
+ new_external_beg, new_storage_beg - new_external_beg);
}
- } else {
- // If buffers are not aligned, we have to go byte by byte.
- uptr old_ptr = old_storage_beg;
- uptr new_ptr = new_storage_beg;
- uptr next_new;
- for (; new_ptr < new_storage_end;) {
- next_new = RoundUpTo(new_ptr + 1, granularity);
- uptr unpoison_to = 0;
- for (; new_ptr != next_new && new_ptr != new_storage_end;
- ++new_ptr, ++old_ptr) {
- if (!AddressIsPoisoned(old_ptr)) {
- unpoison_to = new_ptr + 1;
- }
- }
- if (new_ptr < new_storage_end || new_ptr == new_internal_end ||
- AddressIsPoisoned(new_storage_end)) {
- uptr granule_beg = RoundDownTo(new_ptr - 1, granularity);
- if (unpoison_to != 0) {
- uptr value =
- (unpoison_to == next_new) ? 0 : unpoison_to - granule_beg;
- *(u8 *)MemToShadow(granule_beg) = static_cast<u8>(value);
- } else {
- *(u8 *)MemToShadow(granule_beg) =
- (granule_beg >= new_storage_beg)
- ? static_cast<u8>(kAsanContiguousContainerOOBMagic)
- : new_storage_beg - granule_beg;
- }
- }
+ }
+ if (new_internal_end != new_storage_end &&
+ AddressIsPoisoned(new_storage_end)) {
+ // Last granule
+ uptr old_internal_end = RoundDownTo(old_storage_end, granularity);
+ if (AddressIsPoisoned(old_storage_end)) {
+ CopyGranuleAnnotation(new_internal_end, old_internal_end);
+ } else {
+ AnnotateContainerGranuleAccessibleBytes(
+ new_internal_end, old_storage_end - old_internal_end);
}
}
}
diff --git a/compiler-rt/lib/asan/asan_report.cpp b/compiler-rt/lib/asan/asan_report.cpp
index 00ef334bb88ffa..45aa607dcda07f 100644
--- a/compiler-rt/lib/asan/asan_report.cpp
+++ b/compiler-rt/lib/asan/asan_report.cpp
@@ -367,11 +367,11 @@ void ReportBadParamsToAnnotateDoubleEndedContiguousContainer(
in_report.ReportError(error);
}
-void ReportBadParamsToMoveContiguousContainerAnnotations(
+void ReportBadParamsToCopyContiguousContainerAnnotations(
uptr old_storage_beg, uptr old_storage_end, uptr new_storage_beg,
uptr new_storage_end, BufferedStackTrace *stack) {
ScopedInErrorReport in_report;
- ErrorBadParamsToMoveContiguousContainerAnnotations error(
+ ErrorBadParamsToCopyContiguousContainerAnnotations error(
GetCurrentTidOrInvalid(), stack, old_storage_beg, old_storage_end,
new_storage_beg, new_storage_end);
in_report.ReportError(error);
diff --git a/compiler-rt/lib/asan/asan_report.h b/compiler-rt/lib/asan/asan_report.h
index d98c87e5a6930b..3143d83abe390d 100644
--- a/compiler-rt/lib/asan/asan_report.h
+++ b/compiler-rt/lib/asan/asan_report.h
@@ -88,7 +88,7 @@ void ReportBadParamsToAnnotateDoubleEndedContiguousContainer(
uptr storage_beg, uptr storage_end, uptr old_container_beg,
uptr old_container_end, uptr new_container_beg, uptr new_container_end,
BufferedStackTrace *stack);
-void ReportBadParamsToMoveContiguousContainerAnnotations(
+void ReportBadParamsToCopyContiguousContainerAnnotations(
uptr old_storage_beg, uptr old_storage_end, uptr new_storage_beg,
uptr new_storage_end, BufferedStackTrace *stack);
diff --git a/compiler-rt/lib/sanitizer_common/sanitizer_common_interface.inc b/compiler-rt/lib/sanitizer_common/sanitizer_common_interface.inc
index e8963b9ca5ebf3..4ea75cdd67cb93 100644
--- a/compiler-rt/lib/sanitizer_common/sanitizer_common_interface.inc
+++ b/compiler-rt/lib/sanitizer_common/sanitizer_common_interface.inc
@@ -10,7 +10,7 @@
INTERFACE_FUNCTION(__sanitizer_acquire_crash_state)
INTERFACE_FUNCTION(__sanitizer_annotate_contiguous_container)
INTERFACE_FUNCTION(__sanitizer_annotate_double_ended_contiguous_container)
-INTERFACE_FUNCTION(__sanitizer_move_contiguous_container_annotations)
+INTERFACE_FUNCTION(__sanitizer_copy_contiguous_container_annotations)
INTERFACE_FUNCTION(__sanitizer_contiguous_container_find_bad_address)
INTERFACE_FUNCTION(
__sanitizer_double_ended_contiguous_container_find_bad_address)
diff --git a/compiler-rt/lib/sanitizer_common/sanitizer_interface_internal.h b/compiler-rt/lib/sanitizer_common/sanitizer_interface_internal.h
index 8c818d3c4c24ae..d9fe1567b475b5 100644
--- a/compiler-rt/lib/sanitizer_common/sanitizer_interface_internal.h
+++ b/compiler-rt/lib/sanitizer_common/sanitizer_interface_internal.h
@@ -76,7 +76,7 @@ void __sanitizer_annotate_double_ended_contiguous_container(
const void *old_container_beg, const void *old_container_end,
const void *new_container_beg, const void *new_container_end);
SANITIZER_INTERFACE_ATTRIBUTE
-void __sanitizer_move_contiguous_container_annotations(
+void __sanitizer_copy_contiguous_container_annotations(
const void *old_storage_beg, const void *old_storage_end,
const void *new_storage_beg, const void *new_storage_end);
SANITIZER_INTERFACE_ATTRIBUTE
diff --git a/compiler-rt/test/asan/TestCases/move_container_annotations.cpp b/compiler-rt/test/asan/TestCases/move_container_annotations.cpp
index cbbcfe89b0d682..b98140fceb2557 100644
--- a/compiler-rt/test/asan/TestCases/move_container_annotations.cpp
+++ b/compiler-rt/test/asan/TestCases/move_container_annotations.cpp
@@ -1,6 +1,6 @@
// RUN: %clangxx_asan -fexceptions -O %s -o %t && %env_asan_opts=detect_stack_use_after_return=0 %run %t
//
-// Test __sanitizer_move_contiguous_container_annotations.
+// Test __sanitizer_copy_contiguous_container_annotations.
#include <algorithm>
#include <deque>
@@ -61,8 +61,8 @@ static size_t count_unpoisoned(std::deque<int> &poison_states, size_t n) {
return result;
}
-void TestMove(size_t capacity, size_t off_old, size_t off_new,
- int poison_buffers) {
+void TestNonOverlappingContainers(size_t capacity, size_t off_old,
+ size_t off_new, int poison_buffers) {
size_t old_buffer_size = capacity + off_old + kGranularity * 2;
size_t new_buffer_size = capacity + off_new + kGranularity * 2;
char *old_buffer = new char[old_buffer_size];
@@ -76,7 +76,7 @@ void TestMove(size_t capacity, size_t off_old, size_t off_new,
char *old_end = old_beg + capacity;
char *new_end = new_beg + capacity;
- for (int i = 0; i < 75; i++) {
+ for (int i = 0; i < 35; i++) {
if (poison_old)
__asan_poison_memory_region(old_buffer, old_buffer_size);
if (poison_new)
@@ -84,7 +84,7 @@ void TestMove(size_t capacity, size_t off_old, size_t off_new,
RandomPoison(old_beg, old_end);
std::deque<int> poison_states = GetPoisonedState(old_beg, old_end);
- __sanitizer_move_contiguous_container_annotations(old_beg, old_end, new_beg,
+ __sanitizer_copy_contiguous_container_annotations(old_beg, old_end, new_beg,
new_end);
// If old_buffer were poisoned, expected state of memory before old_beg
@@ -150,11 +150,11 @@ void TestMove(size_t capacity, size_t off_old, size_t off_new,
int main(int argc, char **argv) {
int n = argc == 1 ? 64 : atoi(argv[1]);
- for (size_t j = 0; j < kGranularity * 2; j++) {
- for (size_t k = 0; k < kGranularity * 2; k++) {
+ for (size_t j = 0; j < kGranularity + 2; j++) {
+ for (size_t k = 0; k < kGranularity + 2; k++) {
for (int i = 0; i <= n; i++) {
for (int poison = 0; poison < 4; ++poison) {
- TestMove(i, j, k, poison);
+ TestNonOverlappingContainers(i, j, k, poison);
}
}
}
>From a0cadc483b4a0527578717d23afafb3d94bb373b Mon Sep 17 00:00:00 2001
From: Advenam Tacet <advenam.tacet at gmail.com>
Date: Wed, 28 Aug 2024 03:28:51 +0200
Subject: [PATCH 04/25] Add support for overlapping containers
Adds support for overlapping containers.
Adds test case for that.
---
compiler-rt/lib/asan/asan_poisoning.cpp | 109 ++++++++++++++++--
.../TestCases/move_container_annotations.cpp | 80 ++++++++++++-
2 files changed, 173 insertions(+), 16 deletions(-)
diff --git a/compiler-rt/lib/asan/asan_poisoning.cpp b/compiler-rt/lib/asan/asan_poisoning.cpp
index e661b907642726..fd83b918ac02aa 100644
--- a/compiler-rt/lib/asan/asan_poisoning.cpp
+++ b/compiler-rt/lib/asan/asan_poisoning.cpp
@@ -639,6 +639,42 @@ static void SlowCopyContainerAnnotations(uptr old_storage_beg,
}
}
+// This function is basically the same as SlowCopyContainerAnnotations,
+// but goes through elements in reverse order.
+static void SlowRCopyContainerAnnotations(uptr old_storage_beg,
+ uptr old_storage_end,
+ uptr new_storage_beg,
+ uptr new_storage_end) {
+ constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
+ uptr new_internal_beg = RoundDownTo(new_storage_beg, granularity);
+ uptr new_internal_end = RoundDownTo(new_storage_end, granularity);
+ uptr old_ptr = old_storage_end;
+ uptr new_ptr = new_storage_end;
+
+ while (new_ptr > new_storage_beg) {
+ uptr granule_begin = RoundDownTo(new_ptr - 1, granularity);
+ uptr unpoisoned_bytes = 0;
+
+ for (; new_ptr != granule_begin && new_ptr != new_storage_beg;
+ --new_ptr, --old_ptr) {
+ if (unpoisoned_bytes == 0 && !AddressIsPoisoned(old_ptr - 1)) {
+ unpoisoned_bytes = new_ptr - granule_begin;
+ }
+ }
+
+ if (new_ptr >= new_internal_end && !AddressIsPoisoned(new_storage_end)) {
+ continue;
+ }
+
+ if (granule_begin == new_ptr || unpoisoned_bytes != 0) {
+ AnnotateContainerGranuleAccessibleBytes(granule_begin, unpoisoned_bytes);
+ } else if (!AddressIsPoisoned(new_storage_beg)) {
+ AnnotateContainerGranuleAccessibleBytes(granule_begin,
+ new_storage_beg - granule_begin);
+ }
+ }
+}
+
// This function copies ASan memory annotations (poisoned/unpoisoned states)
// from one buffer to another.
// Its main purpose is to help with relocating trivially relocatable objects,
@@ -678,9 +714,61 @@ void __sanitizer_copy_contiguous_container_annotations(
&stack);
}
- if (old_storage_beg == old_storage_end)
+ if (old_storage_beg == old_storage_end || old_storage_beg == new_storage_beg)
return;
+ // The only edge cases involve edge granules when the container starts or
+ // ends within a granule. We already know that the container's start and end
+ // points lie in different granules.
+ uptr old_external_end = RoundUpTo(old_storage_end, granularity);
+ if (old_storage_beg < new_storage_beg &&
+ new_storage_beg <= old_external_end) {
+ // In this case, we have to copy elements in reversed order, because
+ // destination buffer starts in the middle of the source buffer (or shares
+    // the first granule with it).
+    // It may still be possible to optimize this, but the reverse order has to be kept.
+ if (old_storage_beg % granularity != new_storage_beg % granularity ||
+ WithinOneGranule(new_storage_beg, new_storage_end)) {
+ SlowRCopyContainerAnnotations(old_storage_beg, old_storage_end,
+ new_storage_beg, new_storage_end);
+ return;
+ }
+ uptr new_internal_end = RoundDownTo(new_storage_end, granularity);
+ if (new_internal_end != new_storage_end &&
+ AddressIsPoisoned(new_storage_end)) {
+ // Last granule
+ uptr old_internal_end = RoundDownTo(old_storage_end, granularity);
+ if (AddressIsPoisoned(old_storage_end)) {
+ CopyGranuleAnnotation(new_internal_end, old_internal_end);
+ } else {
+ AnnotateContainerGranuleAccessibleBytes(
+ new_internal_end, old_storage_end - old_internal_end);
+ }
+ }
+
+ uptr new_internal_beg = RoundUpTo(new_storage_beg, granularity);
+ if (new_internal_end > new_internal_beg) {
+ uptr old_internal_beg = RoundUpTo(old_storage_beg, granularity);
+ __builtin_memmove((u8 *)MemToShadow(new_internal_beg),
+ (u8 *)MemToShadow(old_internal_beg),
+ (new_internal_end - new_internal_beg) / granularity);
+ }
+
+ if (new_internal_beg != new_storage_beg) {
+ // First granule
+ uptr new_external_beg = RoundDownTo(new_storage_beg, granularity);
+ uptr old_external_beg = RoundDownTo(old_storage_beg, granularity);
+ if (!AddressIsPoisoned(old_storage_beg)) {
+ CopyGranuleAnnotation(new_external_beg, old_external_beg);
+ } else if (!AddressIsPoisoned(new_storage_beg)) {
+ AnnotateContainerGranuleAccessibleBytes(
+ new_external_beg, new_storage_beg - new_external_beg);
+ }
+ }
+ return;
+ }
+
+ // Simple copy of annotations of all internal granules.
if (old_storage_beg % granularity != new_storage_beg % granularity ||
WithinOneGranule(new_storage_beg, new_storage_end)) {
SlowCopyContainerAnnotations(old_storage_beg, old_storage_end,
@@ -689,16 +777,6 @@ void __sanitizer_copy_contiguous_container_annotations(
}
uptr new_internal_beg = RoundUpTo(new_storage_beg, granularity);
- uptr new_internal_end = RoundDownTo(new_storage_end, granularity);
- if (new_internal_end > new_internal_beg) {
- uptr old_internal_beg = RoundUpTo(old_storage_beg, granularity);
- __builtin_memcpy((u8 *)MemToShadow(new_internal_beg),
- (u8 *)MemToShadow(old_internal_beg),
- (new_internal_end - new_internal_beg) / granularity);
- }
- // The only remaining cases involve edge granules when the container starts or
- // ends within a granule. We already know that the container's start and end
- // points lie in different granules.
if (new_internal_beg != new_storage_beg) {
// First granule
uptr new_external_beg = RoundDownTo(new_storage_beg, granularity);
@@ -710,6 +788,15 @@ void __sanitizer_copy_contiguous_container_annotations(
new_external_beg, new_storage_beg - new_external_beg);
}
}
+
+ uptr new_internal_end = RoundDownTo(new_storage_end, granularity);
+ if (new_internal_end > new_internal_beg) {
+ uptr old_internal_beg = RoundUpTo(old_storage_beg, granularity);
+ __builtin_memmove((u8 *)MemToShadow(new_internal_beg),
+ (u8 *)MemToShadow(old_internal_beg),
+ (new_internal_end - new_internal_beg) / granularity);
+ }
+
if (new_internal_end != new_storage_end &&
AddressIsPoisoned(new_storage_end)) {
// Last granule
diff --git a/compiler-rt/test/asan/TestCases/move_container_annotations.cpp b/compiler-rt/test/asan/TestCases/move_container_annotations.cpp
index b98140fceb2557..db3b3e96968cec 100644
--- a/compiler-rt/test/asan/TestCases/move_container_annotations.cpp
+++ b/compiler-rt/test/asan/TestCases/move_container_annotations.cpp
@@ -111,7 +111,7 @@ void TestNonOverlappingContainers(size_t capacity, size_t off_old,
assert(!__asan_address_is_poisoned(cur));
}
}
- //In every granule, poisoned memory should be after last expected unpoisoned.
+
char *next;
for (cur = new_beg; cur + kGranularity <= new_end; cur = next) {
next = RoundUp(cur + 1);
@@ -124,15 +124,14 @@ void TestNonOverlappingContainers(size_t capacity, size_t off_old,
}
}
// [cur; new_end) is not checked yet.
- // If new_buffer were not poisoned, it cannot be poisoned and we can ignore
- // a separate check.
+ // If new_buffer were not poisoned, it cannot be poisoned.
// If new_buffer were poisoned, it should be same as earlier.
- if (cur < new_end && poison_new) {
+ if (cur < new_end) {
size_t unpoisoned = count_unpoisoned(poison_states, new_end - cur);
if (unpoisoned > 0) {
assert(!__asan_address_is_poisoned(cur + unpoisoned - 1));
}
- if (cur + unpoisoned < new_end) {
+ if (cur + unpoisoned < new_end && poison_new) {
assert(__asan_address_is_poisoned(cur + unpoisoned));
}
}
@@ -148,6 +147,76 @@ void TestNonOverlappingContainers(size_t capacity, size_t off_old,
delete[] new_buffer;
}
+void TestOverlappingContainers(size_t capacity, size_t off_old, size_t off_new,
+ int poison_buffers) {
+ size_t buffer_size = capacity + off_old + off_new + kGranularity * 3;
+ char *buffer = new char[buffer_size];
+ char *buffer_end = buffer + buffer_size;
+ bool poison_whole = poison_buffers % 2 == 1;
+ bool poison_new = poison_buffers / 2 == 1;
+ char *old_beg = buffer + kGranularity + off_old;
+ char *new_beg = buffer + kGranularity + off_new;
+ char *old_end = old_beg + capacity;
+ char *new_end = new_beg + capacity;
+
+ for (int i = 0; i < 35; i++) {
+ if (poison_whole)
+ __asan_poison_memory_region(buffer, buffer_size);
+ if (poison_new)
+ __asan_poison_memory_region(new_beg, new_end - new_beg);
+
+ RandomPoison(old_beg, old_end);
+ std::deque<int> poison_states = GetPoisonedState(old_beg, old_end);
+ __sanitizer_copy_contiguous_container_annotations(old_beg, old_end, new_beg,
+ new_end);
+    // This variable is used only when the buffer ends in the middle of a granule.
+ bool can_modify_last_granule = __asan_address_is_poisoned(new_end);
+
+    // If the whole buffer was poisoned, the expected state of memory before
+    // the first container is undetermined.
+    // If the buffer was not poisoned, that memory should still be unpoisoned.
+ char *cur;
+ if (!poison_whole) {
+ for (cur = buffer; cur < old_beg && cur < new_beg; ++cur) {
+ assert(!__asan_address_is_poisoned(cur));
+ }
+ }
+
+    // Memory after the end of both containers should be the same as at the beginning.
+ for (cur = (old_end > new_end) ? old_end : new_end; cur < buffer_end;
+ ++cur) {
+ assert(__asan_address_is_poisoned(cur) == poison_whole);
+ }
+
+ char *next;
+ for (cur = new_beg; cur + kGranularity <= new_end; cur = next) {
+ next = RoundUp(cur + 1);
+ size_t unpoisoned = count_unpoisoned(poison_states, next - cur);
+ if (unpoisoned > 0) {
+ assert(!__asan_address_is_poisoned(cur + unpoisoned - 1));
+ }
+ if (cur + unpoisoned < next) {
+ assert(__asan_address_is_poisoned(cur + unpoisoned));
+ }
+ }
+    // [cur; new_end) is not checked yet if the container ends in the middle of a granule.
+    // It can be poisoned only if non-container bytes in that granule were poisoned.
+ // Otherwise, it should be unpoisoned.
+ if (cur < new_end) {
+ size_t unpoisoned = count_unpoisoned(poison_states, new_end - cur);
+ if (unpoisoned > 0) {
+ assert(!__asan_address_is_poisoned(cur + unpoisoned - 1));
+ }
+ if (cur + unpoisoned < new_end && can_modify_last_granule) {
+ assert(__asan_address_is_poisoned(cur + unpoisoned));
+ }
+ }
+ }
+
+ __asan_unpoison_memory_region(buffer, buffer_size);
+ delete[] buffer;
+}
+
int main(int argc, char **argv) {
int n = argc == 1 ? 64 : atoi(argv[1]);
for (size_t j = 0; j < kGranularity + 2; j++) {
@@ -155,6 +224,7 @@ int main(int argc, char **argv) {
for (int i = 0; i <= n; i++) {
for (int poison = 0; poison < 4; ++poison) {
TestNonOverlappingContainers(i, j, k, poison);
+ TestOverlappingContainers(i, j, k, poison);
}
}
}
>From ec709a37b823496bafae05d9d66255cad2b92764 Mon Sep 17 00:00:00 2001
From: Advenam Tacet <advenam.tacet at gmail.com>
Date: Wed, 28 Aug 2024 11:59:29 +0200
Subject: [PATCH 05/25] Rename test
---
...e_container_annotations.cpp => copy_container_annotations.cpp} | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename compiler-rt/test/asan/TestCases/{move_container_annotations.cpp => copy_container_annotations.cpp} (100%)
diff --git a/compiler-rt/test/asan/TestCases/move_container_annotations.cpp b/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
similarity index 100%
rename from compiler-rt/test/asan/TestCases/move_container_annotations.cpp
rename to compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
>From ae6d646c5abf5163b71f2eaaf748ace976840f6e Mon Sep 17 00:00:00 2001
From: Advenam Tacet <advenam.tacet at gmail.com>
Date: Fri, 30 Aug 2024 15:36:35 +0200
Subject: [PATCH 06/25] Refactor
Update comments, simplify flow, improve readability.
---
.../include/sanitizer/common_interface_defs.h | 45 +--
compiler-rt/lib/asan/asan_errors.cpp | 6 +-
compiler-rt/lib/asan/asan_poisoning.cpp | 273 +++++++++---------
.../sanitizer_interface_internal.h | 6 +-
.../TestCases/copy_container_annotations.cpp | 8 +-
5 files changed, 172 insertions(+), 166 deletions(-)
diff --git a/compiler-rt/include/sanitizer/common_interface_defs.h b/compiler-rt/include/sanitizer/common_interface_defs.h
index e9101e39d72ddd..0ee641bcaeaf02 100644
--- a/compiler-rt/include/sanitizer/common_interface_defs.h
+++ b/compiler-rt/include/sanitizer/common_interface_defs.h
@@ -193,29 +193,40 @@ void SANITIZER_CDECL __sanitizer_annotate_double_ended_contiguous_container(
const void *old_container_beg, const void *old_container_end,
const void *new_container_beg, const void *new_container_end);
-/// Moves annotation from one storage to another.
-/// At the end, new buffer is annotated in the same way as old buffer at
-/// the very beginning. Old buffer is fully unpoisoned.
-/// Main purpose of that function is use while moving trivially relocatable
-/// objects, which memory may be poisoned (therefore not trivially with ASan).
+/// Copies memory annotations from a source storage region to a destination
+/// storage region. After the operation, the destination region has the same
+/// memory annotations as the source region, as long as ASan limitations allow
+/// it (more bytes may be unpoisoned than in the source region, resulting in
+/// more false negatives, but never false positives).
+/// If the source and destination regions overlap, only the minimal required
+/// changes are made to preserve the correct annotations. Old storage bytes
+/// that are not in the new storage retain their annotations, as long as ASan
+/// limitations allow it.
+///
+/// This function is primarily designed to be used when moving trivially
+/// relocatable objects that may have poisoned memory, making direct copying
+/// problematic under AddressSanitizer (ASan).
+/// However, this function does not move memory content itself,
+/// only annotations.
///
/// A contiguous container is a container that keeps all of its elements
/// in a contiguous region of memory. The container owns the region of memory
-/// <c>[old_storage_beg, old_storage_end)</c> and
-/// <c>[new_storage_beg, new_storage_end)</c>;
-/// There is no requirement where objects are kept.
-/// Poisoned and non-poisoned memory areas can alternate,
-/// there are no shadow memory restrictions.
+/// <c>[src_begin, src_end)</c> and <c>[dst_begin, dst_end)</c>.
+/// The memory within these regions may be alternately poisoned and
+/// non-poisoned, with possibly smaller poisoned and unpoisoned regions.
+///
+/// If this function fully poisons a granule, it is marked as "container
+/// overflow".
///
/// Argument requirements:
-/// New containert has to have the same size as the old container.
-/// \param old_storage_beg Beginning of the old container region.
-/// \param old_storage_end End of the old container region.
-/// \param new_storage_beg Beginning of the new container region.
-/// \param new_storage_end End of the new container region.
+/// The destination container must have the same size as the source container,
+/// which is inferred from the beginning and end of the source region.
+/// Addresses may be granule-unaligned, but this may affect performance.
+/// \param src_begin Beginning of the source container region.
+/// \param src_end End of the source container region.
+/// \param dst_begin Beginning of the destination container region.
void SANITIZER_CDECL __sanitizer_copy_contiguous_container_annotations(
- const void *old_storage_beg, const void *old_storage_end,
- const void *new_storage_beg, const void *new_storage_end);
+ const void *src_begin, const void *src_end, const void *dst_begin);
/// Returns true if the contiguous container <c>[beg, end)</c> is properly
/// poisoned.
diff --git a/compiler-rt/lib/asan/asan_errors.cpp b/compiler-rt/lib/asan/asan_errors.cpp
index 8b712c70029f76..4f112cc5d1bca5 100644
--- a/compiler-rt/lib/asan/asan_errors.cpp
+++ b/compiler-rt/lib/asan/asan_errors.cpp
@@ -352,9 +352,9 @@ void ErrorBadParamsToCopyContiguousContainerAnnotations::Print() {
Report(
"ERROR: AddressSanitizer: bad parameters to "
"__sanitizer_copy_contiguous_container_annotations:\n"
- " old_storage_beg : %p\n"
- " old_storage_end : %p\n"
- " new_storage_beg : %p\n"
+ " src_storage_beg : %p\n"
+ " src_storage_end : %p\n"
+ " dst_storage_beg : %p\n"
" new_storage_end : %p\n",
(void *)old_storage_beg, (void *)old_storage_end, (void *)new_storage_beg,
(void *)new_storage_end);
diff --git a/compiler-rt/lib/asan/asan_poisoning.cpp b/compiler-rt/lib/asan/asan_poisoning.cpp
index fd83b918ac02aa..ee8354ae5c23b4 100644
--- a/compiler-rt/lib/asan/asan_poisoning.cpp
+++ b/compiler-rt/lib/asan/asan_poisoning.cpp
@@ -576,12 +576,12 @@ void __sanitizer_annotate_double_ended_contiguous_container(
}
}
-// Checks if two pointers fall within the same memory granule.
+// Checks if the buffer [p, q) falls into a single granule.
static bool WithinOneGranule(uptr p, uptr q) {
+ constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
if (p == q)
return true;
- return RoundDownTo(p, ASAN_SHADOW_GRANULARITY) ==
- RoundDownTo(q - 1, ASAN_SHADOW_GRANULARITY);
+ return RoundDownTo(p, granularity) == RoundDownTo(q - 1, granularity);
}
// Copies ASan memory annotation (a shadow memory value)
@@ -606,75 +606,110 @@ static void AnnotateContainerGranuleAccessibleBytes(uptr ptr, u8 n) {
// Performs a byte-by-byte copy of ASan annotations (shadow memory values).
// Result may be different due to ASan limitations, but result cannot lead
// to false positives (more memory than requested may get unpoisoned).
-static void SlowCopyContainerAnnotations(uptr old_storage_beg,
- uptr old_storage_end,
- uptr new_storage_beg,
- uptr new_storage_end) {
+static void SlowCopyContainerAnnotations(uptr src_storage_beg,
+ uptr src_storage_end,
+ uptr dst_storage_beg,
+ uptr dst_storage_end) {
constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
- uptr new_internal_end = RoundDownTo(new_storage_end, granularity);
- uptr old_ptr = old_storage_beg;
- uptr new_ptr = new_storage_beg;
+ uptr dst_internal_end = RoundDownTo(dst_storage_end, granularity);
+ uptr src_ptr = src_storage_beg;
+ uptr dst_ptr = dst_storage_beg;
- while (new_ptr < new_storage_end) {
- uptr next_new = RoundUpTo(new_ptr + 1, granularity);
+ while (dst_ptr < dst_storage_end) {
+ uptr next_new = RoundUpTo(dst_ptr + 1, granularity);
uptr granule_begin = next_new - granularity;
uptr unpoisoned_bytes = 0;
- for (; new_ptr != next_new && new_ptr != new_storage_end;
- ++new_ptr, ++old_ptr) {
- if (!AddressIsPoisoned(old_ptr)) {
- unpoisoned_bytes = new_ptr - granule_begin + 1;
+ for (; dst_ptr != next_new && dst_ptr != dst_storage_end;
+ ++dst_ptr, ++src_ptr) {
+ if (!AddressIsPoisoned(src_ptr)) {
+ unpoisoned_bytes = dst_ptr - granule_begin + 1;
}
}
- if (new_ptr < new_storage_end || new_ptr == new_internal_end ||
- AddressIsPoisoned(new_storage_end)) {
- if (unpoisoned_bytes != 0 || granule_begin >= new_storage_beg) {
+ if (dst_ptr < dst_storage_end || dst_ptr == dst_internal_end ||
+ AddressIsPoisoned(dst_storage_end)) {
+ if (unpoisoned_bytes != 0 || granule_begin >= dst_storage_beg) {
AnnotateContainerGranuleAccessibleBytes(granule_begin,
unpoisoned_bytes);
- } else if (!AddressIsPoisoned(new_storage_beg)) {
+ } else if (!AddressIsPoisoned(dst_storage_beg)) {
AnnotateContainerGranuleAccessibleBytes(
- granule_begin, new_storage_beg - granule_begin);
+ granule_begin, dst_storage_beg - granule_begin);
}
}
}
}
-// This function is basically the same as SlowCopyContainerAnnotations,
-// but goes through elements in reversed order
-static void SlowRCopyContainerAnnotations(uptr old_storage_beg,
- uptr old_storage_end,
- uptr new_storage_beg,
- uptr new_storage_end) {
+// Performs a byte-by-byte copy of ASan annotations (shadow memory values),
+// going through bytes in reversed order, but not reversing annotations.
+// The result may differ due to ASan limitations, but it cannot lead
+// to false positives (more memory than requested may get unpoisoned).
+static void SlowReversedCopyContainerAnnotations(uptr src_storage_beg,
+ uptr src_storage_end,
+ uptr dst_storage_beg,
+ uptr dst_storage_end) {
constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
- uptr new_internal_beg = RoundDownTo(new_storage_beg, granularity);
- uptr new_internal_end = RoundDownTo(new_storage_end, granularity);
- uptr old_ptr = old_storage_end;
- uptr new_ptr = new_storage_end;
+ uptr dst_internal_beg = RoundDownTo(dst_storage_beg, granularity);
+ uptr dst_internal_end = RoundDownTo(dst_storage_end, granularity);
+ uptr src_ptr = src_storage_end;
+ uptr dst_ptr = dst_storage_end;
- while (new_ptr > new_storage_beg) {
- uptr granule_begin = RoundDownTo(new_ptr - 1, granularity);
+ while (dst_ptr > dst_storage_beg) {
+ uptr granule_begin = RoundDownTo(dst_ptr - 1, granularity);
uptr unpoisoned_bytes = 0;
- for (; new_ptr != granule_begin && new_ptr != new_storage_beg;
- --new_ptr, --old_ptr) {
- if (unpoisoned_bytes == 0 && !AddressIsPoisoned(old_ptr - 1)) {
- unpoisoned_bytes = new_ptr - granule_begin;
+ for (; dst_ptr != granule_begin && dst_ptr != dst_storage_beg;
+ --dst_ptr, --src_ptr) {
+ if (unpoisoned_bytes == 0 && !AddressIsPoisoned(src_ptr - 1)) {
+ unpoisoned_bytes = dst_ptr - granule_begin;
}
}
- if (new_ptr >= new_internal_end && !AddressIsPoisoned(new_storage_end)) {
+ if (dst_ptr >= dst_internal_end && !AddressIsPoisoned(dst_storage_end)) {
continue;
}
- if (granule_begin == new_ptr || unpoisoned_bytes != 0) {
+ if (granule_begin == dst_ptr || unpoisoned_bytes != 0) {
AnnotateContainerGranuleAccessibleBytes(granule_begin, unpoisoned_bytes);
- } else if (!AddressIsPoisoned(new_storage_beg)) {
+ } else if (!AddressIsPoisoned(dst_storage_beg)) {
AnnotateContainerGranuleAccessibleBytes(granule_begin,
- new_storage_beg - granule_begin);
+ dst_storage_beg - granule_begin);
}
}
}
+// A helper function for __sanitizer_copy_contiguous_container_annotations;
+// it makes assumptions about the beginning and end of the container.
+// Should not be used standalone.
+static void CopyContainerFirstGranuleAnnotation(uptr src_storage_begin,
+ uptr dst_storage_begin) {
+ constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
+ // First granule
+ uptr dst_external_begin = RoundDownTo(dst_storage_begin, granularity);
+ uptr src_external_begin = RoundDownTo(src_storage_begin, granularity);
+ if (!AddressIsPoisoned(src_storage_begin)) {
+ CopyGranuleAnnotation(dst_external_begin, src_external_begin);
+ } else if (!AddressIsPoisoned(dst_storage_begin)) {
+ AnnotateContainerGranuleAccessibleBytes(
+ dst_external_begin, dst_storage_begin - dst_external_begin);
+ }
+}
+
+// A helper function for __sanitizer_copy_contiguous_container_annotations;
+// it makes assumptions about the beginning and end of the container.
+// Should not be used standalone.
+static void CopyContainerLastGranuleAnnotation(uptr src_storage_end,
+ uptr dst_internal_end) {
+ constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
+ // Last granule
+ uptr src_internal_end = RoundDownTo(src_storage_end, granularity);
+ if (AddressIsPoisoned(src_storage_end)) {
+ CopyGranuleAnnotation(dst_internal_end, src_internal_end);
+ } else {
+ AnnotateContainerGranuleAccessibleBytes(dst_internal_end,
+ src_storage_end - src_internal_end);
+ }
+}
+
// This function copies ASan memory annotations (poisoned/unpoisoned states)
// from one buffer to another.
// It's main purpose is to help with relocating trivially relocatable objects,
@@ -682,130 +717,90 @@ static void SlowRCopyContainerAnnotations(uptr old_storage_beg,
// However, it does not move memory content itself, only annotations.
// If the buffers aren't aligned (the distance between buffers isn't
// granule-aligned)
-// // old_storage_beg % granularity != new_storage_beg % granularity
+// // src_storage_beg % granularity != dst_storage_beg % granularity
// the function handles this by going byte by byte, slowing down performance.
// The old buffer annotations are not removed. If necessary,
// user can unpoison old buffer with __asan_unpoison_memory_region.
void __sanitizer_copy_contiguous_container_annotations(
- const void *old_storage_beg_p, const void *old_storage_end_p,
- const void *new_storage_beg_p, const void *new_storage_end_p) {
+ const void *src_begin_p, const void *src_end_p, const void *dst_begin_p) {
if (!flags()->detect_container_overflow)
return;
- VPrintf(2, "contiguous_container_old: %p %p\n", old_storage_beg_p,
- old_storage_end_p);
- VPrintf(2, "contiguous_container_new: %p %p\n", new_storage_beg_p,
- new_storage_end_p);
+ VPrintf(2, "contiguous_container_src: %p %p\n", src_begin_p, src_end_p);
+ VPrintf(2, "contiguous_container_dst: %p\n", dst_begin_p);
- uptr old_storage_beg = reinterpret_cast<uptr>(old_storage_beg_p);
- uptr old_storage_end = reinterpret_cast<uptr>(old_storage_end_p);
- uptr new_storage_beg = reinterpret_cast<uptr>(new_storage_beg_p);
- uptr new_storage_end = reinterpret_cast<uptr>(new_storage_end_p);
+ uptr src_storage_begin = reinterpret_cast<uptr>(src_begin_p);
+ uptr src_storage_end = reinterpret_cast<uptr>(src_end_p);
+ uptr dst_storage_begin = reinterpret_cast<uptr>(dst_begin_p);
+ uptr dst_storage_end =
+ dst_storage_begin + (src_storage_end - src_storage_begin);
constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
- if (!(old_storage_beg <= old_storage_end) ||
- !(new_storage_beg <= new_storage_end) ||
- (old_storage_end - old_storage_beg) !=
- (new_storage_end - new_storage_beg)) {
+ if (src_storage_begin > src_storage_end) {
GET_STACK_TRACE_FATAL_HERE;
ReportBadParamsToCopyContiguousContainerAnnotations(
- old_storage_beg, old_storage_end, new_storage_beg, new_storage_end,
+ src_storage_begin, src_storage_end, dst_storage_begin, dst_storage_end,
&stack);
}
- if (old_storage_beg == old_storage_end || old_storage_beg == new_storage_beg)
+ if (src_storage_begin == src_storage_end ||
+ src_storage_begin == dst_storage_begin)
return;
- // The only edge cases involve edge granules when the container starts or
- // ends within a granule. We already know that the container's start and end
- // points lie in different granules.
- uptr old_external_end = RoundUpTo(old_storage_end, granularity);
- if (old_storage_beg < new_storage_beg &&
- new_storage_beg <= old_external_end) {
- // In this case, we have to copy elements in reversed order, because
- // destination buffer starts in the middle of the source buffer (or shares
- // first granule with it).
- // It still may be possible to optimize, but reversed order has to be kept.
- if (old_storage_beg % granularity != new_storage_beg % granularity ||
- WithinOneGranule(new_storage_beg, new_storage_end)) {
- SlowRCopyContainerAnnotations(old_storage_beg, old_storage_end,
- new_storage_beg, new_storage_end);
- return;
- }
-
- uptr new_internal_end = RoundDownTo(new_storage_end, granularity);
- if (new_internal_end != new_storage_end &&
- AddressIsPoisoned(new_storage_end)) {
- // Last granule
- uptr old_internal_end = RoundDownTo(old_storage_end, granularity);
- if (AddressIsPoisoned(old_storage_end)) {
- CopyGranuleAnnotation(new_internal_end, old_internal_end);
- } else {
- AnnotateContainerGranuleAccessibleBytes(
- new_internal_end, old_storage_end - old_internal_end);
- }
- }
-
- uptr new_internal_beg = RoundUpTo(new_storage_beg, granularity);
- if (new_internal_end > new_internal_beg) {
- uptr old_internal_beg = RoundUpTo(old_storage_beg, granularity);
- __builtin_memmove((u8 *)MemToShadow(new_internal_beg),
- (u8 *)MemToShadow(old_internal_beg),
- (new_internal_end - new_internal_beg) / granularity);
- }
-
- if (new_internal_beg != new_storage_beg) {
- // First granule
- uptr new_external_beg = RoundDownTo(new_storage_beg, granularity);
- uptr old_external_beg = RoundDownTo(old_storage_beg, granularity);
- if (!AddressIsPoisoned(old_storage_beg)) {
- CopyGranuleAnnotation(new_external_beg, old_external_beg);
- } else if (!AddressIsPoisoned(new_storage_beg)) {
- AnnotateContainerGranuleAccessibleBytes(
- new_external_beg, new_storage_beg - new_external_beg);
- }
+ // Due to support for overlapping buffers, we may have to copy annotations
+ // in reversed order when the destination buffer starts in the middle of
+ // the source buffer (or shares its first granule with it).
+ //
+ // When the buffers (or, to be specific, the distance between them) are
+ // not granule-aligned, annotations have to be copied byte by byte.
+ //
+ // The only remaining edge cases involve edge granules,
+ // when the container starts or ends within a granule.
+ uptr src_external_end = RoundUpTo(src_storage_end, granularity);
+ bool copy_in_reversed_order = src_storage_begin < dst_storage_begin &&
+ dst_storage_begin <= src_external_end;
+ if (src_storage_begin % granularity != dst_storage_begin % granularity ||
+ WithinOneGranule(dst_storage_begin, dst_storage_end)) {
+ if (copy_in_reversed_order) {
+ SlowReversedCopyContainerAnnotations(src_storage_begin, src_storage_end,
+ dst_storage_begin, dst_storage_end);
+ } else {
+ SlowCopyContainerAnnotations(src_storage_begin, src_storage_end,
+ dst_storage_begin, dst_storage_end);
}
return;
}
- // Simple copy of annotations of all internal granules.
- if (old_storage_beg % granularity != new_storage_beg % granularity ||
- WithinOneGranule(new_storage_beg, new_storage_end)) {
- SlowCopyContainerAnnotations(old_storage_beg, old_storage_end,
- new_storage_beg, new_storage_end);
- return;
- }
-
- uptr new_internal_beg = RoundUpTo(new_storage_beg, granularity);
- if (new_internal_beg != new_storage_beg) {
- // First granule
- uptr new_external_beg = RoundDownTo(new_storage_beg, granularity);
- uptr old_external_beg = RoundDownTo(old_storage_beg, granularity);
- if (!AddressIsPoisoned(old_storage_beg)) {
- CopyGranuleAnnotation(new_external_beg, old_external_beg);
- } else if (!AddressIsPoisoned(new_storage_beg)) {
- AnnotateContainerGranuleAccessibleBytes(
- new_external_beg, new_storage_beg - new_external_beg);
+ // As buffers are granule-aligned, we can just copy annotations of granules
+ // from the middle.
+ uptr dst_internal_begin = RoundUpTo(dst_storage_begin, granularity);
+ uptr dst_internal_end = RoundDownTo(dst_storage_end, granularity);
+ if (copy_in_reversed_order) {
+ if (dst_internal_end != dst_storage_end &&
+ AddressIsPoisoned(dst_storage_end)) {
+ CopyContainerLastGranuleAnnotation(src_storage_end, dst_internal_end);
+ }
+ } else {
+ if (dst_internal_begin != dst_storage_begin) {
+ CopyContainerFirstGranuleAnnotation(src_storage_begin, dst_storage_begin);
}
}
- uptr new_internal_end = RoundDownTo(new_storage_end, granularity);
- if (new_internal_end > new_internal_beg) {
- uptr old_internal_beg = RoundUpTo(old_storage_beg, granularity);
- __builtin_memmove((u8 *)MemToShadow(new_internal_beg),
- (u8 *)MemToShadow(old_internal_beg),
- (new_internal_end - new_internal_beg) / granularity);
+ if (dst_internal_end > dst_internal_begin) {
+ uptr src_internal_begin = RoundUpTo(src_storage_begin, granularity);
+ __builtin_memmove((u8 *)MemToShadow(dst_internal_begin),
+ (u8 *)MemToShadow(src_internal_begin),
+ (dst_internal_end - dst_internal_begin) / granularity);
}
- if (new_internal_end != new_storage_end &&
- AddressIsPoisoned(new_storage_end)) {
- // Last granule
- uptr old_internal_end = RoundDownTo(old_storage_end, granularity);
- if (AddressIsPoisoned(old_storage_end)) {
- CopyGranuleAnnotation(new_internal_end, old_internal_end);
- } else {
- AnnotateContainerGranuleAccessibleBytes(
- new_internal_end, old_storage_end - old_internal_end);
+ if (copy_in_reversed_order) {
+ if (dst_internal_begin != dst_storage_begin) {
+ CopyContainerFirstGranuleAnnotation(src_storage_begin, dst_storage_begin);
+ }
+ } else {
+ if (dst_internal_end != dst_storage_end &&
+ AddressIsPoisoned(dst_storage_end)) {
+ CopyContainerLastGranuleAnnotation(src_storage_end, dst_internal_end);
}
}
}
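The granule-aligned fast path above copies whole shadow granules with `__builtin_memmove`, while misaligned buffers fall back to a byte-by-byte scan. A toy model of that per-granule scan (a simplified stand-in, not ASan's real shadow encoding; `kGranularity`, `ByteIsPoisoned`, and `SlowCopyModel` are invented names for illustration):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy shadow model: shadow[g] = number of addressable leading bytes in
// granule g, from 0 (fully poisoned) to kGranularity (fully addressable).
constexpr size_t kGranularity = 8;

bool ByteIsPoisoned(const std::vector<uint8_t> &shadow, size_t addr) {
  return addr % kGranularity >= shadow[addr / kGranularity];
}

// Per-granule scan in the spirit of SlowCopyContainerAnnotations: each
// destination granule records the offset past the *last* unpoisoned source
// byte, because a shadow value can only describe an addressable prefix.
// This is why the copy may unpoison more bytes than the source had (false
// negatives), but never poisons a byte that was addressable.
void SlowCopyModel(const std::vector<uint8_t> &src_shadow, size_t src_beg,
                   std::vector<uint8_t> &dst_shadow, size_t dst_beg,
                   size_t n) {
  assert(dst_beg % kGranularity == 0 && n % kGranularity == 0);
  for (size_t g = 0; g < n / kGranularity; ++g) {
    uint8_t unpoisoned = 0;
    for (size_t b = 0; b < kGranularity; ++b)
      if (!ByteIsPoisoned(src_shadow, src_beg + g * kGranularity + b))
        unpoisoned = static_cast<uint8_t>(b + 1);
    dst_shadow[dst_beg / kGranularity + g] = unpoisoned;
  }
}
```

With a source offset of 4 into a granule whose shadow is 3 (bytes 4..7 poisoned) followed by a fully addressable granule, the destination granule ends up fully addressable: the prefix-only encoding cannot express "poisoned start, addressable tail", so the poisoned leading bytes are lost as false negatives.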
diff --git a/compiler-rt/lib/sanitizer_common/sanitizer_interface_internal.h b/compiler-rt/lib/sanitizer_common/sanitizer_interface_internal.h
index d9fe1567b475b5..beb46df5172ebd 100644
--- a/compiler-rt/lib/sanitizer_common/sanitizer_interface_internal.h
+++ b/compiler-rt/lib/sanitizer_common/sanitizer_interface_internal.h
@@ -76,9 +76,9 @@ void __sanitizer_annotate_double_ended_contiguous_container(
const void *old_container_beg, const void *old_container_end,
const void *new_container_beg, const void *new_container_end);
SANITIZER_INTERFACE_ATTRIBUTE
-void __sanitizer_copy_contiguous_container_annotations(
- const void *old_storage_beg, const void *old_storage_end,
- const void *new_storage_beg, const void *new_storage_end);
+void __sanitizer_copy_contiguous_container_annotations(const void *src_begin,
+ const void *src_end,
+ const void *dst_begin);
SANITIZER_INTERFACE_ATTRIBUTE
int __sanitizer_verify_contiguous_container(const void *beg, const void *mid,
const void *end);
diff --git a/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp b/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
index db3b3e96968cec..2cb7f33f53145b 100644
--- a/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
+++ b/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
@@ -84,8 +84,8 @@ void TestNonOverlappingContainers(size_t capacity, size_t off_old,
RandomPoison(old_beg, old_end);
std::deque<int> poison_states = GetPoisonedState(old_beg, old_end);
- __sanitizer_copy_contiguous_container_annotations(old_beg, old_end, new_beg,
- new_end);
+ __sanitizer_copy_contiguous_container_annotations(old_beg, old_end,
+ new_beg);
// If old_buffer were poisoned, expected state of memory before old_beg
// is undetermined.
@@ -167,8 +167,8 @@ void TestOverlappingContainers(size_t capacity, size_t off_old, size_t off_new,
RandomPoison(old_beg, old_end);
std::deque<int> poison_states = GetPoisonedState(old_beg, old_end);
- __sanitizer_copy_contiguous_container_annotations(old_beg, old_end, new_beg,
- new_end);
+ __sanitizer_copy_contiguous_container_annotations(old_beg, old_end,
+ new_beg);
// This variable is used only when buffer ends in the middle of a granule.
bool can_modify_last_granule = __asan_address_is_poisoned(new_end);
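Patch 06's `copy_in_reversed_order` flag exists for the same reason `memmove` copies backward when ranges overlap: with the destination starting inside the source, a forward pass would read annotations it has already overwritten. A plain-array sketch of that rule (`CopyOverlapAware` is an invented illustrative helper, not the shadow-memory code):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Overlap-aware copy within one array: when the destination starts inside
// the source range (dst > src && dst < src + n), a forward pass would
// re-read bytes it has already overwritten, so the copy runs back to front.
// Otherwise a plain forward copy is safe.
void CopyOverlapAware(uint8_t *base, size_t src, size_t dst, size_t n) {
  if (dst > src && dst < src + n) {
    for (size_t i = n; i-- > 0;)
      base[dst + i] = base[src + i];
  } else {
    for (size_t i = 0; i < n; ++i)
      base[dst + i] = base[src + i];
  }
}
```

The same dichotomy shows up in the patch: the reversed path handles the last granule before the `memmove` of internal granules and the first granule after it, while the forward path does the opposite order.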
>From 03dffefc300f7bfef4ada7890d87543b2b9046a7 Mon Sep 17 00:00:00 2001
From: Advenam Tacet <advenam.tacet at gmail.com>
Date: Mon, 16 Sep 2024 09:22:35 +0200
Subject: [PATCH 07/25] Code review fixes
- VPrintf level changed to 3.
- unique_ptr used in the test example.
---
compiler-rt/lib/asan/asan_poisoning.cpp | 4 +-
.../TestCases/copy_container_annotations.cpp | 49 ++++++++++---------
2 files changed, 29 insertions(+), 24 deletions(-)
diff --git a/compiler-rt/lib/asan/asan_poisoning.cpp b/compiler-rt/lib/asan/asan_poisoning.cpp
index ee8354ae5c23b4..c99853540268f1 100644
--- a/compiler-rt/lib/asan/asan_poisoning.cpp
+++ b/compiler-rt/lib/asan/asan_poisoning.cpp
@@ -726,8 +726,8 @@ void __sanitizer_copy_contiguous_container_annotations(
if (!flags()->detect_container_overflow)
return;
- VPrintf(2, "contiguous_container_src: %p %p\n", src_begin_p, src_end_p);
- VPrintf(2, "contiguous_container_dst: %p\n", dst_begin_p);
+ VPrintf(3, "contiguous_container_src: %p %p\n", src_begin_p, src_end_p);
+ VPrintf(3, "contiguous_container_dst: %p\n", dst_begin_p);
uptr src_storage_begin = reinterpret_cast<uptr>(src_begin_p);
uptr src_storage_end = reinterpret_cast<uptr>(src_end_p);
diff --git a/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp b/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
index 2cb7f33f53145b..b2635c79a1a940 100644
--- a/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
+++ b/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
@@ -5,6 +5,7 @@
#include <algorithm>
#include <deque>
#include <numeric>
+#include <memory>
#include <assert.h>
#include <sanitizer/asan_interface.h>
@@ -65,22 +66,25 @@ void TestNonOverlappingContainers(size_t capacity, size_t off_old,
size_t off_new, int poison_buffers) {
size_t old_buffer_size = capacity + off_old + kGranularity * 2;
size_t new_buffer_size = capacity + off_new + kGranularity * 2;
- char *old_buffer = new char[old_buffer_size];
- char *new_buffer = new char[new_buffer_size];
- char *old_buffer_end = old_buffer + old_buffer_size;
- char *new_buffer_end = new_buffer + new_buffer_size;
+
+ // Use unique_ptr to manage the buffers
+ std::unique_ptr<char[]> old_buffer = std::make_unique<char[]>(old_buffer_size);
+ std::unique_ptr<char[]> new_buffer = std::make_unique<char[]>(new_buffer_size);
+
+ char *old_buffer_end = old_buffer.get() + old_buffer_size;
+ char *new_buffer_end = new_buffer.get() + new_buffer_size;
bool poison_old = poison_buffers % 2 == 1;
bool poison_new = poison_buffers / 2 == 1;
- char *old_beg = old_buffer + off_old;
- char *new_beg = new_buffer + off_new;
+ char *old_beg = old_buffer.get() + off_old;
+ char *new_beg = new_buffer.get() + off_new;
char *old_end = old_beg + capacity;
char *new_end = new_beg + capacity;
for (int i = 0; i < 35; i++) {
if (poison_old)
- __asan_poison_memory_region(old_buffer, old_buffer_size);
+ __asan_poison_memory_region(old_buffer.get(), old_buffer_size);
if (poison_new)
- __asan_poison_memory_region(new_buffer, new_buffer_size);
+ __asan_poison_memory_region(new_buffer.get(), new_buffer_size);
RandomPoison(old_beg, old_end);
std::deque<int> poison_states = GetPoisonedState(old_beg, old_end);
@@ -92,7 +96,7 @@ void TestNonOverlappingContainers(size_t capacity, size_t off_old,
// If old buffer were not poisoned, that memory should still be unpoisoned.
char *cur;
if (!poison_old) {
- for (cur = old_buffer; cur < old_beg; ++cur) {
+ for (cur = old_buffer.get(); cur < old_beg; ++cur) {
assert(!__asan_address_is_poisoned(cur));
}
}
@@ -107,7 +111,7 @@ void TestNonOverlappingContainers(size_t capacity, size_t off_old,
// If new_buffer were not poisoned, memory before new_beg should never
// be poisoned. Otherwise, its state is undetermined.
if (!poison_new) {
- for (cur = new_buffer; cur < new_beg; ++cur) {
+ for (cur = new_buffer.get(); cur < new_beg; ++cur) {
assert(!__asan_address_is_poisoned(cur));
}
}
@@ -141,27 +145,29 @@ void TestNonOverlappingContainers(size_t capacity, size_t off_old,
}
}
- __asan_unpoison_memory_region(old_buffer, old_buffer_size);
- __asan_unpoison_memory_region(new_buffer, new_buffer_size);
- delete[] old_buffer;
- delete[] new_buffer;
+ __asan_unpoison_memory_region(old_buffer.get(), old_buffer_size);
+ __asan_unpoison_memory_region(new_buffer.get(), new_buffer_size);
}
+
void TestOverlappingContainers(size_t capacity, size_t off_old, size_t off_new,
int poison_buffers) {
size_t buffer_size = capacity + off_old + off_new + kGranularity * 3;
- char *buffer = new char[buffer_size];
- char *buffer_end = buffer + buffer_size;
+
+ // Use unique_ptr to manage the buffer
+ std::unique_ptr<char[]> buffer = std::make_unique<char[]>(buffer_size);
+
+ char *buffer_end = buffer.get() + buffer_size;
bool poison_whole = poison_buffers % 2 == 1;
bool poison_new = poison_buffers / 2 == 1;
- char *old_beg = buffer + kGranularity + off_old;
- char *new_beg = buffer + kGranularity + off_new;
+ char *old_beg = buffer.get() + kGranularity + off_old;
+ char *new_beg = buffer.get() + kGranularity + off_new;
char *old_end = old_beg + capacity;
char *new_end = new_beg + capacity;
for (int i = 0; i < 35; i++) {
if (poison_whole)
- __asan_poison_memory_region(buffer, buffer_size);
+ __asan_poison_memory_region(buffer.get(), buffer_size);
if (poison_new)
__asan_poison_memory_region(new_beg, new_end - new_beg);
@@ -177,7 +183,7 @@ void TestOverlappingContainers(size_t capacity, size_t off_old, size_t off_new,
// If old buffer were not poisoned, that memory should still be unpoisoned.
char *cur;
if (!poison_whole) {
- for (cur = buffer; cur < old_beg && cur < new_beg; ++cur) {
+ for (cur = buffer.get(); cur < old_beg && cur < new_beg; ++cur) {
assert(!__asan_address_is_poisoned(cur));
}
}
@@ -213,8 +219,7 @@ void TestOverlappingContainers(size_t capacity, size_t off_old, size_t off_new,
}
}
- __asan_unpoison_memory_region(buffer, buffer_size);
- delete[] buffer;
+ __asan_unpoison_memory_region(buffer.get(), buffer_size);
}
int main(int argc, char **argv) {
>From 3cfd0635061876d4efaf3aab40fe5c2c959d19a5 Mon Sep 17 00:00:00 2001
From: Advenam Tacet <advenam.tacet at gmail.com>
Date: Mon, 16 Sep 2024 09:49:06 +0200
Subject: [PATCH 08/25] Bring back destination container end argument
For safety.
Fix code formatting.
---
compiler-rt/lib/asan/asan_poisoning.cpp | 13 ++++++++-----
.../TestCases/copy_container_annotations.cpp | 17 +++++++++--------
2 files changed, 17 insertions(+), 13 deletions(-)
diff --git a/compiler-rt/lib/asan/asan_poisoning.cpp b/compiler-rt/lib/asan/asan_poisoning.cpp
index c99853540268f1..03f07e04df92bd 100644
--- a/compiler-rt/lib/asan/asan_poisoning.cpp
+++ b/compiler-rt/lib/asan/asan_poisoning.cpp
@@ -721,8 +721,10 @@ static void CopyContainerLastGranuleAnnotation(uptr src_storage_end,
// the function handles this by going byte by byte, slowing down performance.
// The old buffer annotations are not removed. If necessary,
// user can unpoison old buffer with __asan_unpoison_memory_region.
-void __sanitizer_copy_contiguous_container_annotations(
- const void *src_begin_p, const void *src_end_p, const void *dst_begin_p) {
+void __sanitizer_copy_contiguous_container_annotations(const void *src_begin_p,
+ const void *src_end_p,
+ const void *dst_begin_p,
+ const void *dst_end_p) {
if (!flags()->detect_container_overflow)
return;
@@ -732,12 +734,13 @@ void __sanitizer_copy_contiguous_container_annotations(
uptr src_storage_begin = reinterpret_cast<uptr>(src_begin_p);
uptr src_storage_end = reinterpret_cast<uptr>(src_end_p);
uptr dst_storage_begin = reinterpret_cast<uptr>(dst_begin_p);
- uptr dst_storage_end =
- dst_storage_begin + (src_storage_end - src_storage_begin);
+ uptr dst_storage_end = reinterpret_cast<uptr>(dst_end_p);
constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
- if (src_storage_begin > src_storage_end) {
+ if (src_storage_begin > src_storage_end ||
+ dst_storage_end !=
+ (dst_storage_begin + (src_storage_end - src_storage_begin))) {
GET_STACK_TRACE_FATAL_HERE;
ReportBadParamsToCopyContiguousContainerAnnotations(
src_storage_begin, src_storage_end, dst_storage_begin, dst_storage_end,
diff --git a/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp b/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
index b2635c79a1a940..e686cdc1206ea8 100644
--- a/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
+++ b/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
@@ -4,8 +4,8 @@
#include <algorithm>
#include <deque>
-#include <numeric>
#include <memory>
+#include <numeric>
#include <assert.h>
#include <sanitizer/asan_interface.h>
@@ -68,8 +68,10 @@ void TestNonOverlappingContainers(size_t capacity, size_t off_old,
size_t new_buffer_size = capacity + off_new + kGranularity * 2;
// Use unique_ptr with a custom deleter to manage the buffers
- std::unique_ptr<char[]> old_buffer = std::make_unique<char[]>(old_buffer_size);
- std::unique_ptr<char[]> new_buffer = std::make_unique<char[]>(new_buffer_size);
+ std::unique_ptr<char[]> old_buffer =
+ std::make_unique<char[]>(old_buffer_size);
+ std::unique_ptr<char[]> new_buffer =
+ std::make_unique<char[]>(new_buffer_size);
char *old_buffer_end = old_buffer.get() + old_buffer_size;
char *new_buffer_end = new_buffer.get() + new_buffer_size;
@@ -88,8 +90,8 @@ void TestNonOverlappingContainers(size_t capacity, size_t off_old,
RandomPoison(old_beg, old_end);
std::deque<int> poison_states = GetPoisonedState(old_beg, old_end);
- __sanitizer_copy_contiguous_container_annotations(old_beg, old_end,
- new_beg);
+ __sanitizer_copy_contiguous_container_annotations(old_beg, old_end, new_beg,
+ new_end);
// If old_buffer were poisoned, expected state of memory before old_beg
// is undetermined.
@@ -149,7 +151,6 @@ void TestNonOverlappingContainers(size_t capacity, size_t off_old,
__asan_unpoison_memory_region(new_buffer.get(), new_buffer_size);
}
-
void TestOverlappingContainers(size_t capacity, size_t off_old, size_t off_new,
int poison_buffers) {
size_t buffer_size = capacity + off_old + off_new + kGranularity * 3;
@@ -173,8 +174,8 @@ void TestOverlappingContainers(size_t capacity, size_t off_old, size_t off_new,
RandomPoison(old_beg, old_end);
std::deque<int> poison_states = GetPoisonedState(old_beg, old_end);
- __sanitizer_copy_contiguous_container_annotations(old_beg, old_end,
- new_beg);
+ __sanitizer_copy_contiguous_container_annotations(old_beg, old_end, new_beg,
+ new_end);
// This variable is used only when buffer ends in the middle of a granule.
bool can_modify_last_granule = __asan_address_is_poisoned(new_end);
>From c220d42906591dd81f68e3b3d46e8e175397a0cc Mon Sep 17 00:00:00 2001
From: Vitaly Buka <vitalybuka at google.com>
Date: Tue, 1 Oct 2024 21:10:58 -0700
Subject: [PATCH 09/25] Fix doc, it's not asan specific API
---
.../include/sanitizer/common_interface_defs.h | 42 +++++++++----------
1 file changed, 20 insertions(+), 22 deletions(-)
diff --git a/compiler-rt/include/sanitizer/common_interface_defs.h b/compiler-rt/include/sanitizer/common_interface_defs.h
index 0ee641bcaeaf02..220ca3b915c53a 100644
--- a/compiler-rt/include/sanitizer/common_interface_defs.h
+++ b/compiler-rt/include/sanitizer/common_interface_defs.h
@@ -195,36 +195,34 @@ void SANITIZER_CDECL __sanitizer_annotate_double_ended_contiguous_container(
/// Copies memory annotations from a source storage region to a destination
/// storage region. After the operation, the destination region has the same
-/// memory annotations as the source region,as long as ASan limitations allow it
-/// (more bytes may be unpoisoned than in the source region, resulting in more
-/// false negatives, but never false positives).
-/// If the source and destination regions overlap, only the minimal required
-/// changes are made to preserve the correct annotations. Old storage bytes that
-/// are not in the new storage should have the same annotations, as long as ASan
-/// limitations allow it.
+/// memory annotations as the source region, as long as sanitizer limitations
+/// allow it (more bytes may be unpoisoned than in the source region, resulting
+/// in more false negatives, but never false positives). If the source and
+/// destination regions overlap, only the minimal required changes are made to
+/// preserve the correct annotations. Old storage bytes that are not in the new
+/// storage should have the same annotations, as long as sanitizer limitations
+/// allow it.
///
/// This function is primarily designed to be used when moving trivially
/// relocatable objects that may have poisoned memory, making direct copying
-/// problematic under AddressSanitizer (ASan).
-/// However, this function does not move memory content itself,
-/// only annotations.
+/// problematic under sanitizer. However, this function does not move memory
+/// content itself, only annotations.
///
-/// A contiguous container is a container that keeps all of its elements
-/// in a contiguous region of memory. The container owns the region of memory
-/// <c>[src_begin, src_end)</c> and <c>[dst_begin, dst_end)</c>.
-/// The memory within these regions may be alternately poisoned and
-/// non-poisoned, with possibly smaller poisoned and unpoisoned regions.
+/// A contiguous container is a container that keeps all of its elements in a
+/// contiguous region of memory. The container owns the region of memory
+/// <c>[src_begin, src_end)</c> and <c>[dst_begin, dst_end)</c>. The memory
+/// within these regions may be alternately poisoned and non-poisoned, with
+/// possibly smaller poisoned and unpoisoned regions.
///
/// If this function fully poisons a granule, it is marked as "container
/// overflow".
///
-/// Argument requirements:
-/// The destination container must have the same size as the source container,
-/// which is inferred from the beginning and end of the source region. Addresses
-/// may be granule-unaligned, but this may affect performance. \param src_begin
-/// Beginning of the source container region. \param src_end End of the source
-/// container region. \param dst_begin Beginning of the destination container
-/// region.
+/// Argument requirements: The destination container must have the same size as
+/// the source container, which is inferred from the beginning and end of the
+/// source region. Addresses may be granule-unaligned, but this may affect
+/// performance. \param src_begin Beginning of the source container region.
+/// \param src_end End of the source container region. \param dst_begin
+/// Beginning of the destination container region.
void SANITIZER_CDECL __sanitizer_copy_contiguous_container_annotations(
const void *src_begin, const void *src_end, const void *dst_begin);
>From 995179240a89f9a10a4c3cf5acd8930e52202dc7 Mon Sep 17 00:00:00 2001
From: Vitaly Buka <vitalybuka at google.com>
Date: Tue, 1 Oct 2024 21:33:05 -0700
Subject: [PATCH 10/25] RoundUp
---
.../test/asan/TestCases/copy_container_annotations.cpp | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp b/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
index e686cdc1206ea8..66b53417703513 100644
--- a/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
+++ b/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
@@ -20,10 +20,8 @@ template <class T> static constexpr T RoundDown(T x) {
~(kGranularity - 1));
}
template <class T> static constexpr T RoundUp(T x) {
- return (x == RoundDown(x))
- ? x
- : reinterpret_cast<T>(reinterpret_cast<uintptr_t>(RoundDown(x)) +
- kGranularity);
+ return reinterpret_cast<T>(
+ RoundDown(reinterpret_cast<uintptr_t>(x) + kGranularity - 1));
}
static std::deque<int> GetPoisonedState(char *begin, char *end) {
@@ -67,7 +65,6 @@ void TestNonOverlappingContainers(size_t capacity, size_t off_old,
size_t old_buffer_size = capacity + off_old + kGranularity * 2;
size_t new_buffer_size = capacity + off_new + kGranularity * 2;
- // Use unique_ptr with a custom deleter to manage the buffers
std::unique_ptr<char[]> old_buffer =
std::make_unique<char[]>(old_buffer_size);
std::unique_ptr<char[]> new_buffer =
>From 043f74f2a36552253afb97f9d34c4e4ca2013209 Mon Sep 17 00:00:00 2001
From: Vitaly Buka <vitalybuka at google.com>
Date: Tue, 1 Oct 2024 21:51:34 -0700
Subject: [PATCH 11/25] interface
---
.../include/sanitizer/common_interface_defs.h | 12 ++++++++----
.../sanitizer_common/sanitizer_interface_internal.h | 3 ++-
2 files changed, 10 insertions(+), 5 deletions(-)
diff --git a/compiler-rt/include/sanitizer/common_interface_defs.h b/compiler-rt/include/sanitizer/common_interface_defs.h
index 220ca3b915c53a..57313f9bc80e67 100644
--- a/compiler-rt/include/sanitizer/common_interface_defs.h
+++ b/compiler-rt/include/sanitizer/common_interface_defs.h
@@ -220,11 +220,15 @@ void SANITIZER_CDECL __sanitizer_annotate_double_ended_contiguous_container(
/// Argument requirements: The destination container must have the same size as
/// the source container, which is inferred from the beginning and end of the
/// source region. Addresses may be granule-unaligned, but this may affect
-/// performance. \param src_begin Beginning of the source container region.
-/// \param src_end End of the source container region. \param dst_begin
-/// Beginning of the destination container region.
+/// performance.
+///
+/// \param src_begin Beginning of the source container region.
+/// \param src_end End of the source container region.
+/// \param dst_begin Beginning of the destination container region.
+/// \param dst_end End of the destination container region.
void SANITIZER_CDECL __sanitizer_copy_contiguous_container_annotations(
- const void *src_begin, const void *src_end, const void *dst_begin);
+ const void *src_begin, const void *src_end, const void *dst_begin,
+ const void *dst_end);
/// Returns true if the contiguous container <c>[beg, end)</c> is properly
/// poisoned.
diff --git a/compiler-rt/lib/sanitizer_common/sanitizer_interface_internal.h b/compiler-rt/lib/sanitizer_common/sanitizer_interface_internal.h
index beb46df5172ebd..387a4d87d97bf9 100644
--- a/compiler-rt/lib/sanitizer_common/sanitizer_interface_internal.h
+++ b/compiler-rt/lib/sanitizer_common/sanitizer_interface_internal.h
@@ -78,7 +78,8 @@ void __sanitizer_annotate_double_ended_contiguous_container(
SANITIZER_INTERFACE_ATTRIBUTE
void __sanitizer_copy_contiguous_container_annotations(const void *src_begin,
const void *src_end,
- const void *dst_begin);
+ const void *dst_begin,
+ const void *dst_end);
SANITIZER_INTERFACE_ATTRIBUTE
int __sanitizer_verify_contiguous_container(const void *beg, const void *mid,
const void *end);
>From 2607ba402e4a9b31ec7325ef67654a30eeca1da0 Mon Sep 17 00:00:00 2001
From: Vitaly Buka <vitalybuka at google.com>
Date: Tue, 1 Oct 2024 22:23:01 -0700
Subject: [PATCH 12/25] log message
---
compiler-rt/lib/asan/asan_poisoning.cpp | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/compiler-rt/lib/asan/asan_poisoning.cpp b/compiler-rt/lib/asan/asan_poisoning.cpp
index 03f07e04df92bd..4e11d107610ef0 100644
--- a/compiler-rt/lib/asan/asan_poisoning.cpp
+++ b/compiler-rt/lib/asan/asan_poisoning.cpp
@@ -729,7 +729,7 @@ void __sanitizer_copy_contiguous_container_annotations(const void *src_begin_p,
return;
VPrintf(3, "contiguous_container_src: %p %p\n", src_begin_p, src_end_p);
- VPrintf(3, "contiguous_container_dst: %p\n", dst_begin_p);
+ VPrintf(3, "contiguous_container_dst: %p %p\n", dst_begin_p, dst_end_p);
uptr src_storage_begin = reinterpret_cast<uptr>(src_begin_p);
uptr src_storage_end = reinterpret_cast<uptr>(src_end_p);
>From f8f91c4799a456ef299a9653c1331791ff269727 Mon Sep 17 00:00:00 2001
From: Vitaly Buka <vitalybuka at google.com>
Date: Tue, 1 Oct 2024 22:25:29 -0700
Subject: [PATCH 13/25] simplify condition
---
compiler-rt/lib/asan/asan_poisoning.cpp | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/compiler-rt/lib/asan/asan_poisoning.cpp b/compiler-rt/lib/asan/asan_poisoning.cpp
index 4e11d107610ef0..f0a196b07b75bf 100644
--- a/compiler-rt/lib/asan/asan_poisoning.cpp
+++ b/compiler-rt/lib/asan/asan_poisoning.cpp
@@ -739,8 +739,8 @@ void __sanitizer_copy_contiguous_container_annotations(const void *src_begin_p,
constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
if (src_storage_begin > src_storage_end ||
- dst_storage_end !=
- (dst_storage_begin + (src_storage_end - src_storage_begin))) {
+ (dst_storage_end - dst_storage_begin) !=
+ (src_storage_end - src_storage_begin)) {
GET_STACK_TRACE_FATAL_HERE;
ReportBadParamsToCopyContiguousContainerAnnotations(
src_storage_begin, src_storage_end, dst_storage_begin, dst_storage_end,
>From c73606efea40515590d3524500962d8853c71ba3 Mon Sep 17 00:00:00 2001
From: Vitaly Buka <vitalybuka at google.com>
Date: Wed, 2 Oct 2024 11:12:10 -0700
Subject: [PATCH 14/25] renames in test
---
.../TestCases/copy_container_annotations.cpp | 37 ++++++++++---------
1 file changed, 19 insertions(+), 18 deletions(-)
diff --git a/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp b/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
index 66b53417703513..c260983b6c208c 100644
--- a/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
+++ b/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
@@ -48,7 +48,7 @@ static void RandomPoison(char *beg, char *end) {
}
}
-static size_t count_unpoisoned(std::deque<int> &poison_states, size_t n) {
+static size_t CountUnpoisoned(std::deque<int> &poison_states, size_t n) {
size_t result = 0;
for (size_t i = 0; i < n && !poison_states.empty(); ++i) {
if (!poison_states.front()) {
@@ -61,7 +61,8 @@ static size_t count_unpoisoned(std::deque<int> &poison_states, size_t n) {
}
void TestNonOverlappingContainers(size_t capacity, size_t off_old,
- size_t off_new, int poison_buffers) {
+ size_t off_new, bool poison_old,
+ bool poison_new) {
size_t old_buffer_size = capacity + off_old + kGranularity * 2;
size_t new_buffer_size = capacity + off_new + kGranularity * 2;
@@ -72,8 +73,6 @@ void TestNonOverlappingContainers(size_t capacity, size_t off_old,
char *old_buffer_end = old_buffer.get() + old_buffer_size;
char *new_buffer_end = new_buffer.get() + new_buffer_size;
- bool poison_old = poison_buffers % 2 == 1;
- bool poison_new = poison_buffers / 2 == 1;
char *old_beg = old_buffer.get() + off_old;
char *new_beg = new_buffer.get() + off_new;
char *old_end = old_beg + capacity;
@@ -118,7 +117,7 @@ void TestNonOverlappingContainers(size_t capacity, size_t off_old,
char *next;
for (cur = new_beg; cur + kGranularity <= new_end; cur = next) {
next = RoundUp(cur + 1);
- size_t unpoisoned = count_unpoisoned(poison_states, next - cur);
+ size_t unpoisoned = CountUnpoisoned(poison_states, next - cur);
if (unpoisoned > 0) {
assert(!__asan_address_is_poisoned(cur + unpoisoned - 1));
}
@@ -130,7 +129,7 @@ void TestNonOverlappingContainers(size_t capacity, size_t off_old,
// If new_buffer were not poisoned, it cannot be poisoned.
// If new_buffer were poisoned, it should be same as earlier.
if (cur < new_end) {
- size_t unpoisoned = count_unpoisoned(poison_states, new_end - cur);
+ size_t unpoisoned = CountUnpoisoned(poison_states, new_end - cur);
if (unpoisoned > 0) {
assert(!__asan_address_is_poisoned(cur + unpoisoned - 1));
}
@@ -149,15 +148,13 @@ void TestNonOverlappingContainers(size_t capacity, size_t off_old,
}
void TestOverlappingContainers(size_t capacity, size_t off_old, size_t off_new,
- int poison_buffers) {
+ bool poison_whole, bool poison_new) {
size_t buffer_size = capacity + off_old + off_new + kGranularity * 3;
// Use unique_ptr with a custom deleter to manage the buffer
std::unique_ptr<char[]> buffer = std::make_unique<char[]>(buffer_size);
char *buffer_end = buffer.get() + buffer_size;
- bool poison_whole = poison_buffers % 2 == 1;
- bool poison_new = poison_buffers / 2 == 1;
char *old_beg = buffer.get() + kGranularity + off_old;
char *new_beg = buffer.get() + kGranularity + off_new;
char *old_end = old_beg + capacity;
@@ -170,7 +167,7 @@ void TestOverlappingContainers(size_t capacity, size_t off_old, size_t off_new,
__asan_poison_memory_region(new_beg, new_end - new_beg);
RandomPoison(old_beg, old_end);
- std::deque<int> poison_states = GetPoisonedState(old_beg, old_end);
+ auto poison_states = GetPoisonedState(old_beg, old_end);
__sanitizer_copy_contiguous_container_annotations(old_beg, old_end, new_beg,
new_end);
// This variable is used only when buffer ends in the middle of a granule.
@@ -195,7 +192,7 @@ void TestOverlappingContainers(size_t capacity, size_t off_old, size_t off_new,
char *next;
for (cur = new_beg; cur + kGranularity <= new_end; cur = next) {
next = RoundUp(cur + 1);
- size_t unpoisoned = count_unpoisoned(poison_states, next - cur);
+ size_t unpoisoned = CountUnpoisoned(poison_states, next - cur);
if (unpoisoned > 0) {
assert(!__asan_address_is_poisoned(cur + unpoisoned - 1));
}
@@ -207,7 +204,7 @@ void TestOverlappingContainers(size_t capacity, size_t off_old, size_t off_new,
// It can be poisoned, only if non-container bytes in that granule were poisoned.
// Otherwise, it should be unpoisoned.
if (cur < new_end) {
- size_t unpoisoned = count_unpoisoned(poison_states, new_end - cur);
+ size_t unpoisoned = CountUnpoisoned(poison_states, new_end - cur);
if (unpoisoned > 0) {
assert(!__asan_address_is_poisoned(cur + unpoisoned - 1));
}
@@ -222,12 +219,16 @@ void TestOverlappingContainers(size_t capacity, size_t off_old, size_t off_new,
int main(int argc, char **argv) {
int n = argc == 1 ? 64 : atoi(argv[1]);
- for (size_t j = 0; j < kGranularity + 2; j++) {
- for (size_t k = 0; k < kGranularity + 2; k++) {
- for (int i = 0; i <= n; i++) {
- for (int poison = 0; poison < 4; ++poison) {
- TestNonOverlappingContainers(i, j, k, poison);
- TestOverlappingContainers(i, j, k, poison);
+ for (size_t off_old = 0; off_old < kGranularity + 2; off_old++) {
+ for (size_t off_new = 0; off_new < kGranularity + 2; off_new++) {
+ for (int capacity = 0; capacity <= n; capacity++) {
+ for (int poison_old = 0; poison_old < 2; ++poison_old) {
+ for (int poison_new = 0; poison_new < 2; ++poison_new) {
+ TestNonOverlappingContainers(capacity, off_old, off_new, poison_old,
+ poison_new);
+ TestOverlappingContainers(capacity, off_old, off_new, poison_old,
+ poison_new);
+ }
}
}
}
>From a888e0765b5ec8561ede5c892370aea56b690ddb Mon Sep 17 00:00:00 2001
From: Vitaly Buka <vitalybuka at google.com>
Date: Wed, 2 Oct 2024 12:23:31 -0700
Subject: [PATCH 15/25] rename old/new -> src/dst
---
.../TestCases/copy_container_annotations.cpp | 152 +++++++++---------
1 file changed, 76 insertions(+), 76 deletions(-)
diff --git a/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp b/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
index c260983b6c208c..fa9a418710d1b9 100644
--- a/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
+++ b/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
@@ -60,62 +60,62 @@ static size_t CountUnpoisoned(std::deque<int> &poison_states, size_t n) {
return result;
}
-void TestNonOverlappingContainers(size_t capacity, size_t off_old,
- size_t off_new, bool poison_old,
- bool poison_new) {
- size_t old_buffer_size = capacity + off_old + kGranularity * 2;
- size_t new_buffer_size = capacity + off_new + kGranularity * 2;
-
- std::unique_ptr<char[]> old_buffer =
- std::make_unique<char[]>(old_buffer_size);
- std::unique_ptr<char[]> new_buffer =
- std::make_unique<char[]>(new_buffer_size);
-
- char *old_buffer_end = old_buffer.get() + old_buffer_size;
- char *new_buffer_end = new_buffer.get() + new_buffer_size;
- char *old_beg = old_buffer.get() + off_old;
- char *new_beg = new_buffer.get() + off_new;
- char *old_end = old_beg + capacity;
- char *new_end = new_beg + capacity;
+void TestNonOverlappingContainers(size_t capacity, size_t off_src,
+ size_t off_dst, bool poison_src,
+ bool poison_dst) {
+ size_t src_buffer_size = capacity + off_src + kGranularity * 2;
+ size_t dst_buffer_size = capacity + off_dst + kGranularity * 2;
+
+ std::unique_ptr<char[]> src_buffer =
+ std::make_unique<char[]>(src_buffer_size);
+ std::unique_ptr<char[]> dst_buffer =
+ std::make_unique<char[]>(dst_buffer_size);
+
+ char *src_buffer_end = src_buffer.get() + src_buffer_size;
+ char *dst_buffer_end = dst_buffer.get() + dst_buffer_size;
+ char *src_beg = src_buffer.get() + off_src;
+ char *dst_beg = dst_buffer.get() + off_dst;
+ char *src_end = src_beg + capacity;
+ char *dst_end = dst_beg + capacity;
for (int i = 0; i < 35; i++) {
- if (poison_old)
- __asan_poison_memory_region(old_buffer.get(), old_buffer_size);
- if (poison_new)
- __asan_poison_memory_region(new_buffer.get(), new_buffer_size);
+ if (poison_src)
+ __asan_poison_memory_region(src_buffer.get(), src_buffer_size);
+ if (poison_dst)
+ __asan_poison_memory_region(dst_buffer.get(), dst_buffer_size);
- RandomPoison(old_beg, old_end);
- std::deque<int> poison_states = GetPoisonedState(old_beg, old_end);
- __sanitizer_copy_contiguous_container_annotations(old_beg, old_end, new_beg,
- new_end);
+ RandomPoison(src_beg, src_end);
+ std::deque<int> poison_states = GetPoisonedState(src_beg, src_end);
+ __sanitizer_copy_contiguous_container_annotations(src_beg, src_end, dst_beg,
+ dst_end);
- // If old_buffer were poisoned, expected state of memory before old_beg
+ // If src_buffer were poisoned, expected state of memory before src_beg
// is undetermined.
// If old buffer were not poisoned, that memory should still be unpoisoned.
char *cur;
- if (!poison_old) {
- for (cur = old_buffer.get(); cur < old_beg; ++cur) {
+ if (!poison_src) {
+ for (cur = src_buffer.get(); cur < src_beg; ++cur) {
assert(!__asan_address_is_poisoned(cur));
}
}
for (size_t i = 0; i < poison_states.size(); ++i) {
- assert(__asan_address_is_poisoned(&old_beg[i]) == poison_states[i]);
+ assert(__asan_address_is_poisoned(&src_beg[i]) == poison_states[i]);
}
- // Memory after old_end should be the same as at the beginning.
- for (cur = old_end; cur < old_buffer_end; ++cur) {
- assert(__asan_address_is_poisoned(cur) == poison_old);
+ // Memory after src_end should be the same as at the beginning.
+ for (cur = src_end; cur < src_buffer_end; ++cur) {
+ assert(__asan_address_is_poisoned(cur) == poison_src);
}
- // If new_buffer were not poisoned, memory before new_beg should never
+ // If dst_buffer were not poisoned, memory before dst_beg should never
// be poisoned. Otherwise, its state is undetermined.
- if (!poison_new) {
- for (cur = new_buffer.get(); cur < new_beg; ++cur) {
+ if (!poison_dst) {
+ for (cur = dst_buffer.get(); cur < dst_beg; ++cur) {
assert(!__asan_address_is_poisoned(cur));
}
}
char *next;
- for (cur = new_beg; cur + kGranularity <= new_end; cur = next) {
+ for (cur = dst_beg; cur + kGranularity <= dst_end; cur = next) {
next = RoundUp(cur + 1);
size_t unpoisoned = CountUnpoisoned(poison_states, next - cur);
if (unpoisoned > 0) {
@@ -125,72 +125,72 @@ void TestNonOverlappingContainers(size_t capacity, size_t off_old,
assert(__asan_address_is_poisoned(cur + unpoisoned));
}
}
- // [cur; new_end) is not checked yet.
- // If new_buffer were not poisoned, it cannot be poisoned.
- // If new_buffer were poisoned, it should be same as earlier.
- if (cur < new_end) {
- size_t unpoisoned = CountUnpoisoned(poison_states, new_end - cur);
+ // [cur; dst_end) is not checked yet.
+ // If dst_buffer were not poisoned, it cannot be poisoned.
+ // If dst_buffer were poisoned, it should be same as earlier.
+ if (cur < dst_end) {
+ size_t unpoisoned = CountUnpoisoned(poison_states, dst_end - cur);
if (unpoisoned > 0) {
assert(!__asan_address_is_poisoned(cur + unpoisoned - 1));
}
- if (cur + unpoisoned < new_end && poison_new) {
+ if (cur + unpoisoned < dst_end && poison_dst) {
assert(__asan_address_is_poisoned(cur + unpoisoned));
}
}
- // Memory annotations after new_end should be unchanged.
- for (cur = new_end; cur < new_buffer_end; ++cur) {
- assert(__asan_address_is_poisoned(cur) == poison_new);
+ // Memory annotations after dst_end should be unchanged.
+ for (cur = dst_end; cur < dst_buffer_end; ++cur) {
+ assert(__asan_address_is_poisoned(cur) == poison_dst);
}
}
- __asan_unpoison_memory_region(old_buffer.get(), old_buffer_size);
- __asan_unpoison_memory_region(new_buffer.get(), new_buffer_size);
+ __asan_unpoison_memory_region(src_buffer.get(), src_buffer_size);
+ __asan_unpoison_memory_region(dst_buffer.get(), dst_buffer_size);
}
-void TestOverlappingContainers(size_t capacity, size_t off_old, size_t off_new,
- bool poison_whole, bool poison_new) {
- size_t buffer_size = capacity + off_old + off_new + kGranularity * 3;
+void TestOverlappingContainers(size_t capacity, size_t off_src, size_t off_dst,
+ bool poison_whole, bool poison_dst) {
+ size_t buffer_size = capacity + off_src + off_dst + kGranularity * 3;
// Use unique_ptr with a custom deleter to manage the buffer
std::unique_ptr<char[]> buffer = std::make_unique<char[]>(buffer_size);
char *buffer_end = buffer.get() + buffer_size;
- char *old_beg = buffer.get() + kGranularity + off_old;
- char *new_beg = buffer.get() + kGranularity + off_new;
- char *old_end = old_beg + capacity;
- char *new_end = new_beg + capacity;
+ char *src_beg = buffer.get() + kGranularity + off_src;
+ char *dst_beg = buffer.get() + kGranularity + off_dst;
+ char *src_end = src_beg + capacity;
+ char *dst_end = dst_beg + capacity;
for (int i = 0; i < 35; i++) {
if (poison_whole)
__asan_poison_memory_region(buffer.get(), buffer_size);
- if (poison_new)
- __asan_poison_memory_region(new_beg, new_end - new_beg);
+ if (poison_dst)
+ __asan_poison_memory_region(dst_beg, dst_end - dst_beg);
- RandomPoison(old_beg, old_end);
- auto poison_states = GetPoisonedState(old_beg, old_end);
- __sanitizer_copy_contiguous_container_annotations(old_beg, old_end, new_beg,
- new_end);
+ RandomPoison(src_beg, src_end);
+ auto poison_states = GetPoisonedState(src_beg, src_end);
+ __sanitizer_copy_contiguous_container_annotations(src_beg, src_end, dst_beg,
+ dst_end);
// This variable is used only when buffer ends in the middle of a granule.
- bool can_modify_last_granule = __asan_address_is_poisoned(new_end);
+ bool can_modify_last_granule = __asan_address_is_poisoned(dst_end);
// If whole buffer were poisoned, expected state of memory before first container
// is undetermined.
// If old buffer were not poisoned, that memory should still be unpoisoned.
char *cur;
if (!poison_whole) {
- for (cur = buffer.get(); cur < old_beg && cur < new_beg; ++cur) {
+ for (cur = buffer.get(); cur < src_beg && cur < dst_beg; ++cur) {
assert(!__asan_address_is_poisoned(cur));
}
}
// Memory after end of both containers should be the same as at the beginning.
- for (cur = (old_end > new_end) ? old_end : new_end; cur < buffer_end;
+ for (cur = (src_end > dst_end) ? src_end : dst_end; cur < buffer_end;
++cur) {
assert(__asan_address_is_poisoned(cur) == poison_whole);
}
char *next;
- for (cur = new_beg; cur + kGranularity <= new_end; cur = next) {
+ for (cur = dst_beg; cur + kGranularity <= dst_end; cur = next) {
next = RoundUp(cur + 1);
size_t unpoisoned = CountUnpoisoned(poison_states, next - cur);
if (unpoisoned > 0) {
@@ -200,15 +200,15 @@ void TestOverlappingContainers(size_t capacity, size_t off_old, size_t off_new,
assert(__asan_address_is_poisoned(cur + unpoisoned));
}
}
- // [cur; new_end) is not checked yet, if container ends in the middle of a granule.
+ // [cur; dst_end) is not checked yet, if container ends in the middle of a granule.
// It can be poisoned, only if non-container bytes in that granule were poisoned.
// Otherwise, it should be unpoisoned.
- if (cur < new_end) {
- size_t unpoisoned = CountUnpoisoned(poison_states, new_end - cur);
+ if (cur < dst_end) {
+ size_t unpoisoned = CountUnpoisoned(poison_states, dst_end - cur);
if (unpoisoned > 0) {
assert(!__asan_address_is_poisoned(cur + unpoisoned - 1));
}
- if (cur + unpoisoned < new_end && can_modify_last_granule) {
+ if (cur + unpoisoned < dst_end && can_modify_last_granule) {
assert(__asan_address_is_poisoned(cur + unpoisoned));
}
}
@@ -219,15 +219,15 @@ void TestOverlappingContainers(size_t capacity, size_t off_old, size_t off_new,
int main(int argc, char **argv) {
int n = argc == 1 ? 64 : atoi(argv[1]);
- for (size_t off_old = 0; off_old < kGranularity + 2; off_old++) {
- for (size_t off_new = 0; off_new < kGranularity + 2; off_new++) {
+ for (size_t off_src = 0; off_src < kGranularity + 2; off_src++) {
+ for (size_t off_dst = 0; off_dst < kGranularity + 2; off_dst++) {
for (int capacity = 0; capacity <= n; capacity++) {
- for (int poison_old = 0; poison_old < 2; ++poison_old) {
- for (int poison_new = 0; poison_new < 2; ++poison_new) {
- TestNonOverlappingContainers(capacity, off_old, off_new, poison_old,
- poison_new);
- TestOverlappingContainers(capacity, off_old, off_new, poison_old,
- poison_new);
+ for (int poison_src = 0; poison_src < 2; ++poison_src) {
+ for (int poison_dst = 0; poison_dst < 2; ++poison_dst) {
+ TestNonOverlappingContainers(capacity, off_src, off_dst, poison_src,
+ poison_dst);
+ TestOverlappingContainers(capacity, off_src, off_dst, poison_src,
+ poison_dst);
}
}
}
>From b08a90c57f5057ae6919f62b0d8b24557fbb5ebd Mon Sep 17 00:00:00 2001
From: Vitaly Buka <vitalybuka at google.com>
Date: Wed, 2 Oct 2024 16:56:35 -0700
Subject: [PATCH 16/25] Simplify the test
---
.../TestCases/copy_container_annotations.cpp | 247 ++++++------------
1 file changed, 84 insertions(+), 163 deletions(-)
diff --git a/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp b/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
index fa9a418710d1b9..ecf61be7f602b4 100644
--- a/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
+++ b/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
@@ -3,9 +3,10 @@
// Test __sanitizer_copy_contiguous_container_annotations.
#include <algorithm>
-#include <deque>
+#include <iostream>
#include <memory>
#include <numeric>
+#include <vector>
#include <assert.h>
#include <sanitizer/asan_interface.h>
@@ -24,8 +25,8 @@ template <class T> static constexpr T RoundUp(T x) {
RoundDown(reinterpret_cast<uintptr_t>(x) + kGranularity - 1));
}
-static std::deque<int> GetPoisonedState(char *begin, char *end) {
- std::deque<int> result;
+static std::vector<int> GetPoisonedState(char *begin, char *end) {
+ std::vector<int> result;
for (char *ptr = begin; ptr != end; ++ptr) {
result.push_back(__asan_address_is_poisoned(ptr));
}
@@ -33,203 +34,123 @@ static std::deque<int> GetPoisonedState(char *begin, char *end) {
}
static void RandomPoison(char *beg, char *end) {
- if (beg != RoundDown(beg) && RoundDown(beg) != RoundDown(end) &&
- rand() % 2 == 1) {
- __asan_poison_memory_region(beg, RoundUp(beg) - beg);
- __asan_unpoison_memory_region(beg, rand() % (RoundUp(beg) - beg + 1));
- }
+ assert(beg == RoundDown(beg));
+ assert(end == RoundDown(end));
for (beg = RoundUp(beg); beg + kGranularity <= end; beg += kGranularity) {
__asan_poison_memory_region(beg, kGranularity);
__asan_unpoison_memory_region(beg, rand() % (kGranularity + 1));
}
- if (end > beg && __asan_address_is_poisoned(end)) {
- __asan_poison_memory_region(beg, kGranularity);
- __asan_unpoison_memory_region(beg, rand() % (end - beg + 1));
- }
}
-static size_t CountUnpoisoned(std::deque<int> &poison_states, size_t n) {
- size_t result = 0;
- for (size_t i = 0; i < n && !poison_states.empty(); ++i) {
- if (!poison_states.front()) {
- result = i + 1;
- }
- poison_states.pop_front();
+static void Test(size_t capacity, size_t off_src, size_t off_dst,
+ char *src_buffer_beg, char *src_buffer_end,
+ char *dst_buffer_beg, char *dst_buffer_end) {
+ size_t dst_buffer_size = dst_buffer_end - dst_buffer_beg;
+ char *src_beg = src_buffer_beg + off_src;
+ char *src_end = src_beg + capacity;
+
+ char *dst_beg = dst_buffer_beg + off_dst;
+ char *dst_end = dst_beg + capacity;
+
+ std::vector<int> src_poison_states =
+ GetPoisonedState(src_buffer_beg, src_buffer_end);
+ std::vector<int> dst_poison_before =
+ GetPoisonedState(dst_buffer_beg, dst_buffer_end);
+ __sanitizer_copy_contiguous_container_annotations(src_beg, src_end, dst_beg,
+ dst_end);
+ std::vector<int> dst_poison_after =
+ GetPoisonedState(dst_buffer_beg, dst_buffer_end);
+
+ // Create ideal copy of src over dst.
+ std::vector<int> dst_poison_exp = dst_poison_before;
+ for (size_t cur = 0; cur < capacity; ++cur)
+ dst_poison_exp[off_dst + cur] = src_poison_states[off_src + cur];
+
+ // Unpoison prefixes of ASan granules.
+ for (size_t cur = dst_buffer_size - 1; cur > 0; --cur) {
+ if (cur % kGranularity != 0 && !dst_poison_exp[cur])
+ dst_poison_exp[cur - 1] = 0;
}
- return result;
+ if (dst_poison_after != dst_poison_exp) {
+ std::cerr << "[" << off_dst << ", " << off_dst + capacity << ")\n";
+ for (size_t i = 0; i < dst_poison_after.size(); ++i) {
+ std::cerr << i << ":\t" << dst_poison_before[i] << "\t"
+ << dst_poison_after[i] << "\t" << dst_poison_exp[i] << "\n";
+ }
+ std::cerr << "----------\n";
+
+ assert(dst_poison_after == dst_poison_exp);
+ }
}
-void TestNonOverlappingContainers(size_t capacity, size_t off_src,
- size_t off_dst, bool poison_src,
- bool poison_dst) {
- size_t src_buffer_size = capacity + off_src + kGranularity * 2;
- size_t dst_buffer_size = capacity + off_dst + kGranularity * 2;
+static void TestNonOverlappingContainers(size_t capacity, size_t off_src,
+ size_t off_dst) {
+ // Test will copy [off_src, off_src + capacity) to [off_dst, off_dst + capacity).
+ // Allocate buffers with an additional granule before and after the tested ranges.
+ off_src += kGranularity;
+ off_dst += kGranularity;
+ size_t src_buffer_size = RoundUp(off_src + capacity) + kGranularity;
+ size_t dst_buffer_size = RoundUp(off_dst + capacity) + kGranularity;
std::unique_ptr<char[]> src_buffer =
std::make_unique<char[]>(src_buffer_size);
std::unique_ptr<char[]> dst_buffer =
std::make_unique<char[]>(dst_buffer_size);
- char *src_buffer_end = src_buffer.get() + src_buffer_size;
- char *dst_buffer_end = dst_buffer.get() + dst_buffer_size;
- char *src_beg = src_buffer.get() + off_src;
- char *dst_beg = dst_buffer.get() + off_dst;
- char *src_end = src_beg + capacity;
- char *dst_end = dst_beg + capacity;
+ char *src_buffer_beg = src_buffer.get();
+ char *src_buffer_end = src_buffer_beg + src_buffer_size;
+ assert(RoundDown(src_buffer_beg) == src_buffer_beg);
- for (int i = 0; i < 35; i++) {
- if (poison_src)
- __asan_poison_memory_region(src_buffer.get(), src_buffer_size);
- if (poison_dst)
- __asan_poison_memory_region(dst_buffer.get(), dst_buffer_size);
-
- RandomPoison(src_beg, src_end);
- std::deque<int> poison_states = GetPoisonedState(src_beg, src_end);
- __sanitizer_copy_contiguous_container_annotations(src_beg, src_end, dst_beg,
- dst_end);
-
- // If src_buffer were poisoned, expected state of memory before src_beg
- // is undetermined.
- // If old buffer were not poisoned, that memory should still be unpoisoned.
- char *cur;
- if (!poison_src) {
- for (cur = src_buffer.get(); cur < src_beg; ++cur) {
- assert(!__asan_address_is_poisoned(cur));
- }
- }
- for (size_t i = 0; i < poison_states.size(); ++i) {
- assert(__asan_address_is_poisoned(&src_beg[i]) == poison_states[i]);
- }
- // Memory after src_end should be the same as at the beginning.
- for (cur = src_end; cur < src_buffer_end; ++cur) {
- assert(__asan_address_is_poisoned(cur) == poison_dst);
- }
+ char *dst_buffer_beg = dst_buffer.get();
+ char *dst_buffer_end = dst_buffer_beg + dst_buffer_size;
+ assert(RoundDown(dst_buffer_beg) == dst_buffer_beg);
- // If dst_buffer were not poisoned, memory before dst_beg should never
- // be poisoned. Otherwise, its state is undetermined.
- if (!poison_dst) {
- for (cur = dst_buffer.get(); cur < dst_beg; ++cur) {
- assert(!__asan_address_is_poisoned(cur));
- }
- }
+ for (int i = 0; i < 35; i++) {
+ RandomPoison(src_buffer_beg, src_buffer_end);
+ RandomPoison(dst_buffer_beg, dst_buffer_end);
- char *next;
- for (cur = dst_beg; cur + kGranularity <= dst_end; cur = next) {
- next = RoundUp(cur + 1);
- size_t unpoisoned = CountUnpoisoned(poison_states, next - cur);
- if (unpoisoned > 0) {
- assert(!__asan_address_is_poisoned(cur + unpoisoned - 1));
- }
- if (cur + unpoisoned < next) {
- assert(__asan_address_is_poisoned(cur + unpoisoned));
- }
- }
- // [cur; dst_end) is not checked yet.
- // If dst_buffer were not poisoned, it cannot be poisoned.
- // If dst_buffer were poisoned, it should be same as earlier.
- if (cur < dst_end) {
- size_t unpoisoned = CountUnpoisoned(poison_states, dst_end - cur);
- if (unpoisoned > 0) {
- assert(!__asan_address_is_poisoned(cur + unpoisoned - 1));
- }
- if (cur + unpoisoned < dst_end && poison_dst) {
- assert(__asan_address_is_poisoned(cur + unpoisoned));
- }
- }
- // Memory annotations after dst_end should be unchanged.
- for (cur = dst_end; cur < dst_buffer_end; ++cur) {
- assert(__asan_address_is_poisoned(cur) == poison_dst);
- }
+ Test(capacity, off_src, off_dst, src_buffer_beg, src_buffer_end,
+ dst_buffer_beg, dst_buffer_end);
}
- __asan_unpoison_memory_region(src_buffer.get(), src_buffer_size);
- __asan_unpoison_memory_region(dst_buffer.get(), dst_buffer_size);
+ __asan_unpoison_memory_region(src_buffer_beg, src_buffer_size);
+ __asan_unpoison_memory_region(dst_buffer_beg, dst_buffer_size);
}
-void TestOverlappingContainers(size_t capacity, size_t off_src, size_t off_dst,
- bool poison_whole, bool poison_dst) {
- size_t buffer_size = capacity + off_src + off_dst + kGranularity * 3;
+static void TestOverlappingContainers(size_t capacity, size_t off_src,
+ size_t off_dst) {
+ // Test will copy [off_src, off_src + capacity) to [off_dst, off_dst + capacity).
+ // Allocate buffers to have additional granule before and after tested ranges.
+ off_src += kGranularity;
+ off_dst += kGranularity;
+ size_t buffer_size =
+ RoundUp(std::max(off_src, off_dst) + capacity) + kGranularity;
// Use unique_ptr to manage the buffer
std::unique_ptr<char[]> buffer = std::make_unique<char[]>(buffer_size);
- char *buffer_end = buffer.get() + buffer_size;
- char *src_beg = buffer.get() + kGranularity + off_src;
- char *dst_beg = buffer.get() + kGranularity + off_dst;
- char *src_end = src_beg + capacity;
- char *dst_end = dst_beg + capacity;
+ char *buffer_beg = buffer.get();
+ char *buffer_end = buffer_beg + buffer_size;
+ assert(RoundDown(buffer_beg) == buffer_beg);
for (int i = 0; i < 35; i++) {
- if (poison_whole)
- __asan_poison_memory_region(buffer.get(), buffer_size);
- if (poison_dst)
- __asan_poison_memory_region(dst_beg, dst_end - dst_beg);
-
- RandomPoison(src_beg, src_end);
- auto poison_states = GetPoisonedState(src_beg, src_end);
- __sanitizer_copy_contiguous_container_annotations(src_beg, src_end, dst_beg,
- dst_end);
- // This variable is used only when buffer ends in the middle of a granule.
- bool can_modify_last_granule = __asan_address_is_poisoned(dst_end);
-
- // If whole buffer were poisoned, expected state of memory before first container
- // is undetermined.
- // If old buffer were not poisoned, that memory should still be unpoisoned.
- char *cur;
- if (!poison_whole) {
- for (cur = buffer.get(); cur < src_beg && cur < dst_beg; ++cur) {
- assert(!__asan_address_is_poisoned(cur));
- }
- }
+ RandomPoison(buffer_beg, buffer_end);
- // Memory after end of both containers should be the same as at the beginning.
- for (cur = (src_end > dst_end) ? src_end : dst_end; cur < buffer_end;
- ++cur) {
- assert(__asan_address_is_poisoned(cur) == poison_whole);
- }
-
- char *next;
- for (cur = dst_beg; cur + kGranularity <= dst_end; cur = next) {
- next = RoundUp(cur + 1);
- size_t unpoisoned = CountUnpoisoned(poison_states, next - cur);
- if (unpoisoned > 0) {
- assert(!__asan_address_is_poisoned(cur + unpoisoned - 1));
- }
- if (cur + unpoisoned < next) {
- assert(__asan_address_is_poisoned(cur + unpoisoned));
- }
- }
- // [cur; dst_end) is not checked yet, if container ends in the middle of a granule.
- // It can be poisoned, only if non-container bytes in that granule were poisoned.
- // Otherwise, it should be unpoisoned.
- if (cur < dst_end) {
- size_t unpoisoned = CountUnpoisoned(poison_states, dst_end - cur);
- if (unpoisoned > 0) {
- assert(!__asan_address_is_poisoned(cur + unpoisoned - 1));
- }
- if (cur + unpoisoned < dst_end && can_modify_last_granule) {
- assert(__asan_address_is_poisoned(cur + unpoisoned));
- }
- }
+ Test(capacity, off_src, off_dst, buffer_beg, buffer_end, buffer_beg,
+ buffer_end);
}
- __asan_unpoison_memory_region(buffer.get(), buffer_size);
+ __asan_unpoison_memory_region(buffer_beg, buffer_size);
}
int main(int argc, char **argv) {
int n = argc == 1 ? 64 : atoi(argv[1]);
- for (size_t off_src = 0; off_src < kGranularity + 2; off_src++) {
- for (size_t off_dst = 0; off_dst < kGranularity + 2; off_dst++) {
+ for (size_t off_src = 0; off_src < kGranularity; off_src++) {
+ for (size_t off_dst = 0; off_dst < kGranularity; off_dst++) {
for (int capacity = 0; capacity <= n; capacity++) {
- for (int poison_dst = 0; poison_dst < 2; ++poison_dst) {
- for (int poison_dst = 0; poison_dst < 2; ++poison_dst) {
- TestNonOverlappingContainers(capacity, off_src, off_dst, poison_dst,
- poison_dst);
- TestOverlappingContainers(capacity, off_src, off_dst, poison_dst,
- poison_dst);
- }
- }
+ TestNonOverlappingContainers(capacity, off_src, off_dst);
+ TestOverlappingContainers(capacity, off_src, off_dst);
}
}
}
>From 74bf44a21c927d44c834899b6563d86f6726930b Mon Sep 17 00:00:00 2001
From: Vitaly Buka <vitalybuka at google.com>
Date: Wed, 2 Oct 2024 20:44:04 -0700
Subject: [PATCH 17/25] benchmark mode in test
---
.../TestCases/copy_container_annotations.cpp | 41 +++++++++++++------
1 file changed, 28 insertions(+), 13 deletions(-)
diff --git a/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp b/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
index ecf61be7f602b4..ed20dc3e80d44e 100644
--- a/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
+++ b/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
@@ -36,12 +36,13 @@ static std::vector<int> GetPoisonedState(char *begin, char *end) {
static void RandomPoison(char *beg, char *end) {
assert(beg == RoundDown(beg));
assert(end == RoundDown(end));
- for (beg = RoundUp(beg); beg + kGranularity <= end; beg += kGranularity) {
- __asan_poison_memory_region(beg, kGranularity);
+ __asan_poison_memory_region(beg, end - beg);
+ for (beg = RoundUp(beg); beg < end; beg += kGranularity) {
__asan_unpoison_memory_region(beg, rand() % (kGranularity + 1));
}
}
+template <bool benchmark>
static void Test(size_t capacity, size_t off_src, size_t off_dst,
char *src_buffer_beg, char *src_buffer_end,
char *dst_buffer_beg, char *dst_buffer_end) {
@@ -51,6 +52,11 @@ static void Test(size_t capacity, size_t off_src, size_t off_dst,
char *dst_beg = dst_buffer_beg + off_dst;
char *dst_end = dst_beg + capacity;
+ if (benchmark) {
+ __sanitizer_copy_contiguous_container_annotations(src_beg, src_end, dst_beg,
+ dst_end);
+ return;
+ }
std::vector<int> src_poison_states =
GetPoisonedState(src_buffer_beg, src_buffer_end);
@@ -84,6 +90,7 @@ static void Test(size_t capacity, size_t off_src, size_t off_dst,
}
}
+template <bool benchmark>
static void TestNonOverlappingContainers(size_t capacity, size_t off_src,
size_t off_dst) {
// Test will copy [off_src, off_src + capacity) to [off_dst, off_dst + capacity).
@@ -107,17 +114,20 @@ static void TestNonOverlappingContainers(size_t capacity, size_t off_src,
assert(RoundDown(dst_buffer_beg) == dst_buffer_beg);
for (int i = 0; i < 35; i++) {
- RandomPoison(src_buffer_beg, src_buffer_end);
- RandomPoison(dst_buffer_beg, dst_buffer_end);
+ if (!benchmark || !i) {
+ RandomPoison(src_buffer_beg, src_buffer_end);
+ RandomPoison(dst_buffer_beg, dst_buffer_end);
+ }
- Test(capacity, off_src, off_dst, src_buffer_beg, src_buffer_end,
- dst_buffer_beg, dst_buffer_end);
+ Test<benchmark>(capacity, off_src, off_dst, src_buffer_beg, src_buffer_end,
+ dst_buffer_beg, dst_buffer_end);
}
__asan_unpoison_memory_region(src_buffer_beg, src_buffer_size);
__asan_unpoison_memory_region(dst_buffer_beg, dst_buffer_size);
}
+template <bool benchmark>
static void TestOverlappingContainers(size_t capacity, size_t off_src,
size_t off_dst) {
// Test will copy [off_src, off_src + capacity) to [off_dst, off_dst + capacity).
@@ -135,10 +145,10 @@ static void TestOverlappingContainers(size_t capacity, size_t off_src,
assert(RoundDown(buffer_beg) == buffer_beg);
for (int i = 0; i < 35; i++) {
- RandomPoison(buffer_beg, buffer_end);
-
- Test(capacity, off_src, off_dst, buffer_beg, buffer_end, buffer_beg,
- buffer_end);
+ if (!benchmark || !i)
+ RandomPoison(buffer_beg, buffer_end);
+ Test<benchmark>(capacity, off_src, off_dst, buffer_beg, buffer_end,
+ buffer_beg, buffer_end);
}
__asan_unpoison_memory_region(buffer_beg, buffer_size);
@@ -149,9 +159,14 @@ int main(int argc, char **argv) {
for (size_t off_src = 0; off_src < kGranularity; off_src++) {
for (size_t off_dst = 0; off_dst < kGranularity; off_dst++) {
for (int capacity = 0; capacity <= n; capacity++) {
- TestNonOverlappingContainers(capacity, off_src, off_dst);
- TestOverlappingContainers(capacity, off_src, off_dst);
+ if (n < 1024) {
+ TestNonOverlappingContainers<false>(capacity, off_src, off_dst);
+ TestOverlappingContainers<false>(capacity, off_src, off_dst);
+ } else {
+ TestNonOverlappingContainers<true>(capacity, off_src, off_dst);
+ TestOverlappingContainers<true>(capacity, off_src, off_dst);
+ }
}
}
}
-}
+}
\ No newline at end of file
>From 271fbb010258a1b7e8d7b163255fe3a9c8a62ad2 Mon Sep 17 00:00:00 2001
From: Vitaly Buka <vitalybuka at google.com>
Date: Wed, 2 Oct 2024 21:30:10 -0700
Subject: [PATCH 18/25] Shorten function name
---
compiler-rt/lib/asan/asan_poisoning.cpp | 49 +++++++------------------
1 file changed, 14 insertions(+), 35 deletions(-)
diff --git a/compiler-rt/lib/asan/asan_poisoning.cpp b/compiler-rt/lib/asan/asan_poisoning.cpp
index f0a196b07b75bf..d1df41ad6fda2d 100644
--- a/compiler-rt/lib/asan/asan_poisoning.cpp
+++ b/compiler-rt/lib/asan/asan_poisoning.cpp
@@ -576,31 +576,13 @@ void __sanitizer_annotate_double_ended_contiguous_container(
}
}
-// Checks if a buffer [p; q) falls into a single granule.
-static bool WithinOneGranule(uptr p, uptr q) {
- constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
- if (p == q)
- return true;
- return RoundDownTo(p, granularity) == RoundDownTo(q - 1, granularity);
-}
-
-// Copies ASan memory annotation (a shadow memory value)
-// from one granule to another.
-static void CopyGranuleAnnotation(uptr dst, uptr src) {
- *(u8 *)MemToShadow(dst) = *(u8 *)MemToShadow(src);
-}
// Marks the specified number of bytes in a granule as accessible or
// poisons the whole granule with the kAsanContiguousContainerOOBMagic value.
-static void AnnotateContainerGranuleAccessibleBytes(uptr ptr, u8 n) {
+static void SetContainerGranule(uptr ptr, u8 n) {
constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
- if (n == granularity) {
- *(u8 *)MemToShadow(ptr) = 0;
- } else if (n == 0) {
- *(u8 *)MemToShadow(ptr) = static_cast<u8>(kAsanContiguousContainerOOBMagic);
- } else {
- *(u8 *)MemToShadow(ptr) = n;
- }
+ u8 s = (n == granularity) ? 0 : (n ? n : kAsanContiguousContainerOOBMagic);
+ *(u8 *)MemToShadow(ptr) = s;
}
// Performs a byte-by-byte copy of ASan annotations (shadow memory values).
@@ -629,11 +611,9 @@ static void SlowCopyContainerAnnotations(uptr src_storage_beg,
if (dst_ptr < dst_storage_end || dst_ptr == dst_internal_end ||
AddressIsPoisoned(dst_storage_end)) {
if (unpoisoned_bytes != 0 || granule_begin >= dst_storage_beg) {
- AnnotateContainerGranuleAccessibleBytes(granule_begin,
- unpoisoned_bytes);
+ SetContainerGranule(granule_begin, unpoisoned_bytes);
} else if (!AddressIsPoisoned(dst_storage_beg)) {
- AnnotateContainerGranuleAccessibleBytes(
- granule_begin, dst_storage_beg - granule_begin);
+ SetContainerGranule(granule_begin, dst_storage_beg - granule_begin);
}
}
}
@@ -669,10 +649,9 @@ static void SlowReversedCopyContainerAnnotations(uptr src_storage_beg,
}
if (granule_begin == dst_ptr || unpoisoned_bytes != 0) {
- AnnotateContainerGranuleAccessibleBytes(granule_begin, unpoisoned_bytes);
+ SetContainerGranule(granule_begin, unpoisoned_bytes);
} else if (!AddressIsPoisoned(dst_storage_beg)) {
- AnnotateContainerGranuleAccessibleBytes(granule_begin,
- dst_storage_beg - granule_begin);
+ SetContainerGranule(granule_begin, dst_storage_beg - granule_begin);
}
}
}
@@ -687,10 +666,11 @@ static void CopyContainerFirstGranuleAnnotation(uptr src_storage_begin,
uptr dst_external_begin = RoundDownTo(dst_storage_begin, granularity);
uptr src_external_begin = RoundDownTo(src_storage_begin, granularity);
if (!AddressIsPoisoned(src_storage_begin)) {
- CopyGranuleAnnotation(dst_external_begin, src_external_begin);
+ *(u8 *)MemToShadow(dst_external_begin) =
+ *(u8 *)MemToShadow(src_external_begin);
} else if (!AddressIsPoisoned(dst_storage_begin)) {
- AnnotateContainerGranuleAccessibleBytes(
- dst_external_begin, dst_storage_begin - dst_external_begin);
+ SetContainerGranule(dst_external_begin,
+ dst_storage_begin - dst_external_begin);
}
}
@@ -703,10 +683,9 @@ static void CopyContainerLastGranuleAnnotation(uptr src_storage_end,
// Last granule
uptr src_internal_end = RoundDownTo(src_storage_end, granularity);
if (AddressIsPoisoned(src_storage_end)) {
- CopyGranuleAnnotation(dst_internal_end, src_internal_end);
+ *(u8 *)MemToShadow(dst_internal_end) = *(u8 *)MemToShadow(src_internal_end);
} else {
- AnnotateContainerGranuleAccessibleBytes(dst_internal_end,
- src_storage_end - src_internal_end);
+ SetContainerGranule(dst_internal_end, src_storage_end - src_internal_end);
}
}
@@ -763,7 +742,7 @@ void __sanitizer_copy_contiguous_container_annotations(const void *src_begin_p,
bool copy_in_reversed_order = src_storage_begin < dst_storage_begin &&
dst_storage_begin <= src_external_end;
if (src_storage_begin % granularity != dst_storage_begin % granularity ||
- WithinOneGranule(dst_storage_begin, dst_storage_end)) {
+ RoundDownTo(dst_storage_end - 1, granularity) <= dst_storage_begin) {
if (copy_in_reversed_order) {
SlowReversedCopyContainerAnnotations(src_storage_begin, src_storage_end,
dst_storage_begin, dst_storage_end);
>From c04bd495032b47adb1379ca08a07ed7691b1005d Mon Sep 17 00:00:00 2001
From: Vitaly Buka <vitalybuka at google.com>
Date: Wed, 2 Oct 2024 21:58:46 -0700
Subject: [PATCH 19/25] rename variables
---
compiler-rt/lib/asan/asan_poisoning.cpp | 175 +++++++++++-------------
1 file changed, 78 insertions(+), 97 deletions(-)
diff --git a/compiler-rt/lib/asan/asan_poisoning.cpp b/compiler-rt/lib/asan/asan_poisoning.cpp
index d1df41ad6fda2d..7c7e0f38e57892 100644
--- a/compiler-rt/lib/asan/asan_poisoning.cpp
+++ b/compiler-rt/lib/asan/asan_poisoning.cpp
@@ -16,6 +16,7 @@
#include "asan_report.h"
#include "asan_stack.h"
#include "sanitizer_common/sanitizer_atomic.h"
+#include "sanitizer_common/sanitizer_common.h"
#include "sanitizer_common/sanitizer_flags.h"
#include "sanitizer_common/sanitizer_interface_internal.h"
#include "sanitizer_common/sanitizer_libc.h"
@@ -588,32 +589,28 @@ static void SetContainerGranule(uptr ptr, u8 n) {
// Performs a byte-by-byte copy of ASan annotations (shadow memory values).
// The result may differ due to ASan limitations, but it cannot lead
// to false positives (more memory than requested may get unpoisoned).
-static void SlowCopyContainerAnnotations(uptr src_storage_beg,
- uptr src_storage_end,
- uptr dst_storage_beg,
- uptr dst_storage_end) {
+static void SlowCopyContainerAnnotations(uptr src_beg, uptr src_end,
+ uptr dst_beg, uptr dst_end) {
constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
- uptr dst_internal_end = RoundDownTo(dst_storage_end, granularity);
- uptr src_ptr = src_storage_beg;
- uptr dst_ptr = dst_storage_beg;
+ uptr dst_end_down = RoundDownTo(dst_end, granularity);
+ uptr src_ptr = src_beg;
+ uptr dst_ptr = dst_beg;
- while (dst_ptr < dst_storage_end) {
- uptr next_new = RoundUpTo(dst_ptr + 1, granularity);
- uptr granule_begin = next_new - granularity;
+ while (dst_ptr < dst_end) {
+ uptr granule_beg = RoundDownTo(dst_ptr, granularity);
+ uptr granule_end = granule_beg + granularity;
uptr unpoisoned_bytes = 0;
- for (; dst_ptr != next_new && dst_ptr != dst_storage_end;
- ++dst_ptr, ++src_ptr) {
- if (!AddressIsPoisoned(src_ptr)) {
- unpoisoned_bytes = dst_ptr - granule_begin + 1;
- }
+ for (; dst_ptr != granule_end && dst_ptr != dst_end; ++dst_ptr, ++src_ptr) {
+ if (!AddressIsPoisoned(src_ptr))
+ unpoisoned_bytes = dst_ptr - granule_beg + 1;
}
- if (dst_ptr < dst_storage_end || dst_ptr == dst_internal_end ||
- AddressIsPoisoned(dst_storage_end)) {
- if (unpoisoned_bytes != 0 || granule_begin >= dst_storage_beg) {
- SetContainerGranule(granule_begin, unpoisoned_bytes);
- } else if (!AddressIsPoisoned(dst_storage_beg)) {
- SetContainerGranule(granule_begin, dst_storage_beg - granule_begin);
+ if (dst_ptr < dst_end || dst_ptr == dst_end_down ||
+ AddressIsPoisoned(dst_end)) {
+ if (unpoisoned_bytes != 0 || granule_beg >= dst_beg) {
+ SetContainerGranule(granule_beg, unpoisoned_bytes);
+ } else if (!AddressIsPoisoned(dst_beg)) {
+ SetContainerGranule(granule_beg, dst_beg - granule_beg);
}
}
}
@@ -623,35 +620,31 @@ static void SlowCopyContainerAnnotations(uptr src_storage_beg,
// going through bytes in reversed order, but not reversing annotations.
// The result may differ due to ASan limitations, but it cannot lead
// to false positives (more memory than requested may get unpoisoned).
-static void SlowReversedCopyContainerAnnotations(uptr src_storage_beg,
- uptr src_storage_end,
- uptr dst_storage_beg,
- uptr dst_storage_end) {
+static void SlowReversedCopyContainerAnnotations(uptr src_beg, uptr src_end,
+ uptr dst_beg, uptr dst_end) {
constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
- uptr dst_internal_beg = RoundDownTo(dst_storage_beg, granularity);
- uptr dst_internal_end = RoundDownTo(dst_storage_end, granularity);
- uptr src_ptr = src_storage_end;
- uptr dst_ptr = dst_storage_end;
+ uptr dst_end_down = RoundDownTo(dst_end, granularity);
+ uptr src_ptr = src_end;
+ uptr dst_ptr = dst_end;
- while (dst_ptr > dst_storage_beg) {
- uptr granule_begin = RoundDownTo(dst_ptr - 1, granularity);
+ while (dst_ptr > dst_beg) {
+ uptr granule_beg = RoundDownTo(dst_ptr - 1, granularity);
uptr unpoisoned_bytes = 0;
- for (; dst_ptr != granule_begin && dst_ptr != dst_storage_beg;
- --dst_ptr, --src_ptr) {
+ for (; dst_ptr != granule_beg && dst_ptr != dst_beg; --dst_ptr, --src_ptr) {
if (unpoisoned_bytes == 0 && !AddressIsPoisoned(src_ptr - 1)) {
- unpoisoned_bytes = dst_ptr - granule_begin;
+ unpoisoned_bytes = dst_ptr - granule_beg;
}
}
- if (dst_ptr >= dst_internal_end && !AddressIsPoisoned(dst_storage_end)) {
+ if (dst_ptr >= dst_end_down && !AddressIsPoisoned(dst_end)) {
continue;
}
- if (granule_begin == dst_ptr || unpoisoned_bytes != 0) {
- SetContainerGranule(granule_begin, unpoisoned_bytes);
- } else if (!AddressIsPoisoned(dst_storage_beg)) {
- SetContainerGranule(granule_begin, dst_storage_beg - granule_begin);
+ if (granule_beg == dst_ptr || unpoisoned_bytes != 0) {
+ SetContainerGranule(granule_beg, unpoisoned_bytes);
+ } else if (!AddressIsPoisoned(dst_beg)) {
+ SetContainerGranule(granule_beg, dst_beg - granule_beg);
}
}
}
@@ -659,33 +652,30 @@ static void SlowReversedCopyContainerAnnotations(uptr src_storage_beg,
// A helper function for __sanitizer_copy_contiguous_container_annotations,
// which makes assumptions about the begin and end of the container.
// It should not be used standalone.
-static void CopyContainerFirstGranuleAnnotation(uptr src_storage_begin,
- uptr dst_storage_begin) {
+static void CopyContainerFirstGranuleAnnotation(uptr src_beg, uptr dst_beg) {
constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
// First granule
- uptr dst_external_begin = RoundDownTo(dst_storage_begin, granularity);
- uptr src_external_begin = RoundDownTo(src_storage_begin, granularity);
- if (!AddressIsPoisoned(src_storage_begin)) {
- *(u8 *)MemToShadow(dst_external_begin) =
- *(u8 *)MemToShadow(src_external_begin);
- } else if (!AddressIsPoisoned(dst_storage_begin)) {
- SetContainerGranule(dst_external_begin,
- dst_storage_begin - dst_external_begin);
+ uptr dst_external_beg = RoundDownTo(dst_beg, granularity);
+ uptr src_external_beg = RoundDownTo(src_beg, granularity);
+ if (!AddressIsPoisoned(src_beg)) {
+ *(u8 *)MemToShadow(dst_external_beg) = *(u8 *)MemToShadow(src_external_beg);
+ } else if (!AddressIsPoisoned(dst_beg)) {
+ SetContainerGranule(dst_external_beg, dst_beg - dst_external_beg);
}
}
// A helper function for __sanitizer_copy_contiguous_container_annotations,
// which makes assumptions about the begin and end of the container.
// It should not be used standalone.
-static void CopyContainerLastGranuleAnnotation(uptr src_storage_end,
- uptr dst_internal_end) {
+static void CopyContainerLastGranuleAnnotation(uptr src_end,
+ uptr dst_end_down) {
constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
// Last granule
- uptr src_internal_end = RoundDownTo(src_storage_end, granularity);
- if (AddressIsPoisoned(src_storage_end)) {
- *(u8 *)MemToShadow(dst_internal_end) = *(u8 *)MemToShadow(src_internal_end);
+ uptr src_internal_end = RoundDownTo(src_end, granularity);
+ if (AddressIsPoisoned(src_end)) {
+ *(u8 *)MemToShadow(dst_end_down) = *(u8 *)MemToShadow(src_internal_end);
} else {
- SetContainerGranule(dst_internal_end, src_storage_end - src_internal_end);
+ SetContainerGranule(dst_end_down, src_end - src_internal_end);
}
}
@@ -696,38 +686,34 @@ static void CopyContainerLastGranuleAnnotation(uptr src_storage_end,
// However, it does not move memory content itself, only annotations.
// If the buffers aren't aligned (the distance between buffers isn't
// granule-aligned)
-// // src_storage_beg % granularity != dst_storage_beg % granularity
+// // src_beg % granularity != dst_beg % granularity
// the function handles this by going byte by byte, slowing down performance.
// The old buffer annotations are not removed. If necessary,
// user can unpoison old buffer with __asan_unpoison_memory_region.
-void __sanitizer_copy_contiguous_container_annotations(const void *src_begin_p,
+void __sanitizer_copy_contiguous_container_annotations(const void *src_beg_p,
const void *src_end_p,
- const void *dst_begin_p,
+ const void *dst_beg_p,
const void *dst_end_p) {
if (!flags()->detect_container_overflow)
return;
- VPrintf(3, "contiguous_container_src: %p %p\n", src_begin_p, src_end_p);
- VPrintf(3, "contiguous_container_dst: %p %p\n", dst_begin_p, dst_end_p);
+ VPrintf(3, "contiguous_container_src: %p %p\n", src_beg_p, src_end_p);
+ VPrintf(3, "contiguous_container_dst: %p %p\n", dst_beg_p, dst_end_p);
- uptr src_storage_begin = reinterpret_cast<uptr>(src_begin_p);
- uptr src_storage_end = reinterpret_cast<uptr>(src_end_p);
- uptr dst_storage_begin = reinterpret_cast<uptr>(dst_begin_p);
- uptr dst_storage_end = reinterpret_cast<uptr>(dst_end_p);
+ uptr src_beg = reinterpret_cast<uptr>(src_beg_p);
+ uptr src_end = reinterpret_cast<uptr>(src_end_p);
+ uptr dst_beg = reinterpret_cast<uptr>(dst_beg_p);
+ uptr dst_end = reinterpret_cast<uptr>(dst_end_p);
constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
- if (src_storage_begin > src_storage_end ||
- (dst_storage_end - dst_storage_begin) !=
- (src_storage_end - src_storage_begin)) {
+ if (src_beg > src_end || (dst_end - dst_beg) != (src_end - src_beg)) {
GET_STACK_TRACE_FATAL_HERE;
ReportBadParamsToCopyContiguousContainerAnnotations(
- src_storage_begin, src_storage_end, dst_storage_begin, dst_storage_end,
- &stack);
+ src_beg, src_end, dst_beg, dst_end, &stack);
}
- if (src_storage_begin == src_storage_end ||
- src_storage_begin == dst_storage_begin)
+ if (src_beg == src_end || src_beg == dst_beg)
return;
// Due to support for overlapping buffers, we may have to copy elements
// in reversed order, when destination buffer starts in the middle of
@@ -738,51 +724,46 @@ void __sanitizer_copy_contiguous_container_annotations(const void *src_begin_p,
//
// The only remaining edge cases involve edge granules,
// when the container starts or ends within a granule.
- uptr src_external_end = RoundUpTo(src_storage_end, granularity);
- bool copy_in_reversed_order = src_storage_begin < dst_storage_begin &&
- dst_storage_begin <= src_external_end;
- if (src_storage_begin % granularity != dst_storage_begin % granularity ||
- RoundDownTo(dst_storage_end - 1, granularity) <= dst_storage_begin) {
+ uptr src_end_up = RoundUpTo(src_end, granularity);
+ bool copy_in_reversed_order = src_beg < dst_beg && dst_beg <= src_end_up;
+ if (src_beg % granularity != dst_beg % granularity ||
+ RoundDownTo(dst_end - 1, granularity) <= dst_beg) {
if (copy_in_reversed_order) {
- SlowReversedCopyContainerAnnotations(src_storage_begin, src_storage_end,
- dst_storage_begin, dst_storage_end);
+ SlowReversedCopyContainerAnnotations(src_beg, src_end, dst_beg, dst_end);
} else {
- SlowCopyContainerAnnotations(src_storage_begin, src_storage_end,
- dst_storage_begin, dst_storage_end);
+ SlowCopyContainerAnnotations(src_beg, src_end, dst_beg, dst_end);
}
return;
}
// As buffers are granule-aligned, we can just copy annotations of granules
// from the middle.
- uptr dst_internal_begin = RoundUpTo(dst_storage_begin, granularity);
- uptr dst_internal_end = RoundDownTo(dst_storage_end, granularity);
+ uptr dst_beg_up = RoundUpTo(dst_beg, granularity);
+ uptr dst_end_down = RoundDownTo(dst_end, granularity);
if (copy_in_reversed_order) {
- if (dst_internal_end != dst_storage_end &&
- AddressIsPoisoned(dst_storage_end)) {
- CopyContainerLastGranuleAnnotation(src_storage_end, dst_internal_end);
+ if (dst_end_down != dst_end && AddressIsPoisoned(dst_end)) {
+ CopyContainerLastGranuleAnnotation(src_end, dst_end_down);
}
} else {
- if (dst_internal_begin != dst_storage_begin) {
- CopyContainerFirstGranuleAnnotation(src_storage_begin, dst_storage_begin);
+ if (dst_beg_up != dst_beg) {
+ CopyContainerFirstGranuleAnnotation(src_beg, dst_beg);
}
}
- if (dst_internal_end > dst_internal_begin) {
- uptr src_internal_begin = RoundUpTo(src_storage_begin, granularity);
- __builtin_memmove((u8 *)MemToShadow(dst_internal_begin),
- (u8 *)MemToShadow(src_internal_begin),
- (dst_internal_end - dst_internal_begin) / granularity);
+ if (dst_end_down > dst_beg_up) {
+ uptr src_internal_beg = RoundUpTo(src_beg, granularity);
+ __builtin_memmove((u8 *)MemToShadow(dst_beg_up),
+ (u8 *)MemToShadow(src_internal_beg),
+ (dst_end_down - dst_beg_up) / granularity);
}
if (copy_in_reversed_order) {
- if (dst_internal_begin != dst_storage_begin) {
- CopyContainerFirstGranuleAnnotation(src_storage_begin, dst_storage_begin);
+ if (dst_beg_up != dst_beg) {
+ CopyContainerFirstGranuleAnnotation(src_beg, dst_beg);
}
} else {
- if (dst_internal_end != dst_storage_end &&
- AddressIsPoisoned(dst_storage_end)) {
- CopyContainerLastGranuleAnnotation(src_storage_end, dst_internal_end);
+ if (dst_end_down != dst_end && AddressIsPoisoned(dst_end)) {
+ CopyContainerLastGranuleAnnotation(src_end, dst_end_down);
}
}
}
>From 970dfd9b4a16340d8b31cc5280eebf143c8726fa Mon Sep 17 00:00:00 2001
From: Vitaly Buka <vitalybuka at google.com>
Date: Wed, 2 Oct 2024 22:08:16 -0700
Subject: [PATCH 20/25] rename variables
---
compiler-rt/lib/asan/asan_poisoning.cpp | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/compiler-rt/lib/asan/asan_poisoning.cpp b/compiler-rt/lib/asan/asan_poisoning.cpp
index 7c7e0f38e57892..14ee1b148e07f4 100644
--- a/compiler-rt/lib/asan/asan_poisoning.cpp
+++ b/compiler-rt/lib/asan/asan_poisoning.cpp
@@ -655,12 +655,12 @@ static void SlowReversedCopyContainerAnnotations(uptr src_beg, uptr src_end,
static void CopyContainerFirstGranuleAnnotation(uptr src_beg, uptr dst_beg) {
constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
// First granule
- uptr dst_external_beg = RoundDownTo(dst_beg, granularity);
- uptr src_external_beg = RoundDownTo(src_beg, granularity);
+ uptr dst_beg_down = RoundDownTo(dst_beg, granularity);
+ uptr src_beg_down = RoundDownTo(src_beg, granularity);
if (!AddressIsPoisoned(src_beg)) {
- *(u8 *)MemToShadow(dst_external_beg) = *(u8 *)MemToShadow(src_external_beg);
+ *(u8 *)MemToShadow(dst_beg_down) = *(u8 *)MemToShadow(src_beg_down);
} else if (!AddressIsPoisoned(dst_beg)) {
- SetContainerGranule(dst_external_beg, dst_beg - dst_external_beg);
+ SetContainerGranule(dst_beg_down, dst_beg - dst_beg_down);
}
}
@@ -671,11 +671,11 @@ static void CopyContainerLastGranuleAnnotation(uptr src_end,
uptr dst_end_down) {
constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
// Last granule
- uptr src_internal_end = RoundDownTo(src_end, granularity);
+ uptr src_end_down = RoundDownTo(src_end, granularity);
if (AddressIsPoisoned(src_end)) {
- *(u8 *)MemToShadow(dst_end_down) = *(u8 *)MemToShadow(src_internal_end);
+ *(u8 *)MemToShadow(dst_end_down) = *(u8 *)MemToShadow(src_end_down);
} else {
- SetContainerGranule(dst_end_down, src_end - src_internal_end);
+ SetContainerGranule(dst_end_down, src_end - src_end_down);
}
}
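The renamed variables all come from the two rounding helpers that the patch series leans on. Since `ASAN_SHADOW_GRANULARITY` is a power of two (8 on most targets), both round operations reduce to bit masking; a hedged sketch of the semantics assumed throughout these hunks:

```cpp
#include <cassert>
#include <cstdint>
using uptr = uintptr_t;

// Sketch of the rounding helpers used by the patch. ASan's shadow
// granularity is a power of two, so rounding down is a mask and rounding
// up is "bump to the last byte of the granule, then round down".
constexpr uptr RoundDownTo(uptr x, uptr boundary) {
  return x & ~(boundary - 1);
}
constexpr uptr RoundUpTo(uptr x, uptr boundary) {
  return RoundDownTo(x + boundary - 1, boundary);
}
```

With granularity 8, `x_down = RoundDownTo(x, 8)` is the start of the granule containing `x`, and `x_up = RoundUpTo(x, 8)` is the first granule boundary at or after `x`; `x_down != x` is exactly the "starts mid-granule" test used for the edge cases.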
From 88f094c77b37216e4058db02993d7f655e4d6c44 Mon Sep 17 00:00:00 2001
From: Vitaly Buka <vitalybuka at google.com>
Date: Wed, 2 Oct 2024 22:18:47 -0700
Subject: [PATCH 21/25] drop optional {}
---
compiler-rt/lib/asan/asan_poisoning.cpp | 52 +++++++++----------------
1 file changed, 19 insertions(+), 33 deletions(-)
diff --git a/compiler-rt/lib/asan/asan_poisoning.cpp b/compiler-rt/lib/asan/asan_poisoning.cpp
index 14ee1b148e07f4..c08a029c42a85b 100644
--- a/compiler-rt/lib/asan/asan_poisoning.cpp
+++ b/compiler-rt/lib/asan/asan_poisoning.cpp
@@ -601,17 +601,15 @@ static void SlowCopyContainerAnnotations(uptr src_beg, uptr src_end,
uptr granule_end = granule_beg + granularity;
uptr unpoisoned_bytes = 0;
- for (; dst_ptr != granule_end && dst_ptr != dst_end; ++dst_ptr, ++src_ptr) {
+ for (; dst_ptr != granule_end && dst_ptr != dst_end; ++dst_ptr, ++src_ptr)
if (!AddressIsPoisoned(src_ptr))
unpoisoned_bytes = dst_ptr - granule_beg + 1;
- }
if (dst_ptr < dst_end || dst_ptr == dst_end_down ||
AddressIsPoisoned(dst_end)) {
- if (unpoisoned_bytes != 0 || granule_beg >= dst_beg) {
+ if (unpoisoned_bytes != 0 || granule_beg >= dst_beg)
SetContainerGranule(granule_beg, unpoisoned_bytes);
- } else if (!AddressIsPoisoned(dst_beg)) {
+ else if (!AddressIsPoisoned(dst_beg))
SetContainerGranule(granule_beg, dst_beg - granule_beg);
- }
}
}
}
@@ -631,21 +629,17 @@ static void SlowReversedCopyContainerAnnotations(uptr src_beg, uptr src_end,
uptr granule_beg = RoundDownTo(dst_ptr - 1, granularity);
uptr unpoisoned_bytes = 0;
- for (; dst_ptr != granule_beg && dst_ptr != dst_beg; --dst_ptr, --src_ptr) {
- if (unpoisoned_bytes == 0 && !AddressIsPoisoned(src_ptr - 1)) {
+ for (; dst_ptr != granule_beg && dst_ptr != dst_beg; --dst_ptr, --src_ptr)
+ if (unpoisoned_bytes == 0 && !AddressIsPoisoned(src_ptr - 1))
unpoisoned_bytes = dst_ptr - granule_beg;
- }
- }
- if (dst_ptr >= dst_end_down && !AddressIsPoisoned(dst_end)) {
+ if (dst_ptr >= dst_end_down && !AddressIsPoisoned(dst_end))
continue;
- }
- if (granule_beg == dst_ptr || unpoisoned_bytes != 0) {
+ if (granule_beg == dst_ptr || unpoisoned_bytes != 0)
SetContainerGranule(granule_beg, unpoisoned_bytes);
- } else if (!AddressIsPoisoned(dst_beg)) {
+ else if (!AddressIsPoisoned(dst_beg))
SetContainerGranule(granule_beg, dst_beg - granule_beg);
- }
}
}
@@ -657,11 +651,10 @@ static void CopyContainerFirstGranuleAnnotation(uptr src_beg, uptr dst_beg) {
// First granule
uptr dst_beg_down = RoundDownTo(dst_beg, granularity);
uptr src_beg_down = RoundDownTo(src_beg, granularity);
- if (!AddressIsPoisoned(src_beg)) {
+ if (!AddressIsPoisoned(src_beg))
*(u8 *)MemToShadow(dst_beg_down) = *(u8 *)MemToShadow(src_beg_down);
- } else if (!AddressIsPoisoned(dst_beg)) {
+ else if (!AddressIsPoisoned(dst_beg))
SetContainerGranule(dst_beg_down, dst_beg - dst_beg_down);
- }
}
// A helper function for __sanitizer_copy_contiguous_container_annotations,
@@ -672,11 +665,10 @@ static void CopyContainerLastGranuleAnnotation(uptr src_end,
constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
// Last granule
uptr src_end_down = RoundDownTo(src_end, granularity);
- if (AddressIsPoisoned(src_end)) {
+ if (AddressIsPoisoned(src_end))
*(u8 *)MemToShadow(dst_end_down) = *(u8 *)MemToShadow(src_end_down);
- } else {
+ else
SetContainerGranule(dst_end_down, src_end - src_end_down);
- }
}
// This function copies ASan memory annotations (poisoned/unpoisoned states)
@@ -728,11 +720,10 @@ void __sanitizer_copy_contiguous_container_annotations(const void *src_beg_p,
bool copy_in_reversed_order = src_beg < dst_beg && dst_beg <= src_end_up;
if (src_beg % granularity != dst_beg % granularity ||
RoundDownTo(dst_end - 1, granularity) <= dst_beg) {
- if (copy_in_reversed_order) {
+ if (copy_in_reversed_order)
SlowReversedCopyContainerAnnotations(src_beg, src_end, dst_beg, dst_end);
- } else {
+ else
SlowCopyContainerAnnotations(src_beg, src_end, dst_beg, dst_end);
- }
return;
}
@@ -741,13 +732,11 @@ void __sanitizer_copy_contiguous_container_annotations(const void *src_beg_p,
uptr dst_beg_up = RoundUpTo(dst_beg, granularity);
uptr dst_end_down = RoundDownTo(dst_end, granularity);
if (copy_in_reversed_order) {
- if (dst_end_down != dst_end && AddressIsPoisoned(dst_end)) {
+ if (dst_end_down != dst_end && AddressIsPoisoned(dst_end))
CopyContainerLastGranuleAnnotation(src_end, dst_end_down);
- }
} else {
- if (dst_beg_up != dst_beg) {
+ if (dst_beg_up != dst_beg)
CopyContainerFirstGranuleAnnotation(src_beg, dst_beg);
- }
}
if (dst_end_down > dst_beg_up) {
@@ -758,13 +747,10 @@ void __sanitizer_copy_contiguous_container_annotations(const void *src_beg_p,
}
if (copy_in_reversed_order) {
- if (dst_beg_up != dst_beg) {
+ if (dst_beg_up != dst_beg)
CopyContainerFirstGranuleAnnotation(src_beg, dst_beg);
- }
- } else {
- if (dst_end_down != dst_end && AddressIsPoisoned(dst_end)) {
- CopyContainerLastGranuleAnnotation(src_end, dst_end_down);
- }
+ } else if (dst_end_down != dst_end && AddressIsPoisoned(dst_end)) {
+ CopyContainerLastGranuleAnnotation(src_end, dst_end_down);
}
}
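The fast path in the function above relies on one invariant: each shadow byte describes exactly one granule of application memory, so once both edges are granule-aligned, the whole interior is a single `memmove` of `(dst_end_down - dst_beg_up) / granularity` shadow bytes. A small sketch of that size computation (names are the patch's, the helper itself is illustrative):

```cpp
#include <cassert>
#include <cstdint>
using uptr = uintptr_t;

// Sketch of the fast-path interior size: between the rounded-up begin and
// the rounded-down end, every granule of application memory maps to one
// shadow byte, so the bulk copy moves length / granularity shadow bytes.
constexpr uptr kGranularity = 8;  // ASAN_SHADOW_GRANULARITY on most targets

constexpr uptr InteriorShadowBytes(uptr dst_beg_up, uptr dst_end_down) {
  return dst_end_down > dst_beg_up
             ? (dst_end_down - dst_beg_up) / kGranularity
             : 0;
}
```

This is also why the slow path exists at all: when `src_beg % granularity != dst_beg % granularity`, source and destination granules are not in one-to-one correspondence and no single shadow `memmove` can be correct.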
From 50435f045c0dd792ed749f882f4cfad8bdb84228 Mon Sep 17 00:00:00 2001
From: Vitaly Buka <vitalybuka at google.com>
Date: Wed, 2 Oct 2024 22:53:17 -0700
Subject: [PATCH 22/25] simplify a few loops and branches
---
compiler-rt/lib/asan/asan_poisoning.cpp | 16 +++++++++-------
1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/compiler-rt/lib/asan/asan_poisoning.cpp b/compiler-rt/lib/asan/asan_poisoning.cpp
index c08a029c42a85b..40f6d0d3ce6856 100644
--- a/compiler-rt/lib/asan/asan_poisoning.cpp
+++ b/compiler-rt/lib/asan/asan_poisoning.cpp
@@ -601,7 +601,8 @@ static void SlowCopyContainerAnnotations(uptr src_beg, uptr src_end,
uptr granule_end = granule_beg + granularity;
uptr unpoisoned_bytes = 0;
- for (; dst_ptr != granule_end && dst_ptr != dst_end; ++dst_ptr, ++src_ptr)
+ uptr end = Min(granule_end, dst_end);
+ for (; dst_ptr != end; ++dst_ptr, ++src_ptr)
if (!AddressIsPoisoned(src_ptr))
unpoisoned_bytes = dst_ptr - granule_beg + 1;
if (dst_ptr < dst_end || dst_ptr == dst_end_down ||
@@ -629,7 +630,8 @@ static void SlowReversedCopyContainerAnnotations(uptr src_beg, uptr src_end,
uptr granule_beg = RoundDownTo(dst_ptr - 1, granularity);
uptr unpoisoned_bytes = 0;
- for (; dst_ptr != granule_beg && dst_ptr != dst_beg; --dst_ptr, --src_ptr)
+ uptr end = Max(granule_beg, dst_beg);
+ for (; dst_ptr != end; --dst_ptr, --src_ptr)
if (unpoisoned_bytes == 0 && !AddressIsPoisoned(src_ptr - 1))
unpoisoned_bytes = dst_ptr - granule_beg;
@@ -716,6 +718,7 @@ void __sanitizer_copy_contiguous_container_annotations(const void *src_beg_p,
//
// The only remaining edge cases involve edge granules,
// when the container starts or ends within a granule.
+ uptr src_beg_up = RoundUpTo(src_beg, granularity);
uptr src_end_up = RoundUpTo(src_end, granularity);
bool copy_in_reversed_order = src_beg < dst_beg && dst_beg <= src_end_up;
if (src_beg % granularity != dst_beg % granularity ||
@@ -739,11 +742,10 @@ void __sanitizer_copy_contiguous_container_annotations(const void *src_beg_p,
CopyContainerFirstGranuleAnnotation(src_beg, dst_beg);
}
- if (dst_end_down > dst_beg_up) {
- uptr src_internal_beg = RoundUpTo(src_beg, granularity);
- __builtin_memmove((u8 *)MemToShadow(dst_beg_up),
- (u8 *)MemToShadow(src_internal_beg),
- (dst_end_down - dst_beg_up) / granularity);
+ if (dst_beg_up < dst_end_down) {
+ internal_memmove((u8 *)MemToShadow(dst_beg_up),
+ (u8 *)MemToShadow(src_beg_up),
+ (dst_end_down - dst_beg_up) / granularity);
}
if (copy_in_reversed_order) {
From 88b5069dbf81d3c89521f2154feb6176ef3a3a5d Mon Sep 17 00:00:00 2001
From: Vitaly Buka <vitalybuka at google.com>
Date: Wed, 2 Oct 2024 23:31:18 -0700
Subject: [PATCH 23/25] continue in SlowCopyContainerAnnotations
---
compiler-rt/lib/asan/asan_poisoning.cpp | 16 +++++++++-------
1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/compiler-rt/lib/asan/asan_poisoning.cpp b/compiler-rt/lib/asan/asan_poisoning.cpp
index 40f6d0d3ce6856..a0f6ac268cdbfc 100644
--- a/compiler-rt/lib/asan/asan_poisoning.cpp
+++ b/compiler-rt/lib/asan/asan_poisoning.cpp
@@ -605,13 +605,15 @@ static void SlowCopyContainerAnnotations(uptr src_beg, uptr src_end,
for (; dst_ptr != end; ++dst_ptr, ++src_ptr)
if (!AddressIsPoisoned(src_ptr))
unpoisoned_bytes = dst_ptr - granule_beg + 1;
- if (dst_ptr < dst_end || dst_ptr == dst_end_down ||
- AddressIsPoisoned(dst_end)) {
- if (unpoisoned_bytes != 0 || granule_beg >= dst_beg)
- SetContainerGranule(granule_beg, unpoisoned_bytes);
- else if (!AddressIsPoisoned(dst_beg))
- SetContainerGranule(granule_beg, dst_beg - granule_beg);
- }
+
+ if (dst_ptr == dst_end && dst_end != dst_end_down &&
+ !AddressIsPoisoned(dst_end))
+ continue;
+
+ if (unpoisoned_bytes != 0 || granule_beg >= dst_beg)
+ SetContainerGranule(granule_beg, unpoisoned_bytes);
+ else if (!AddressIsPoisoned(dst_beg))
+ SetContainerGranule(granule_beg, dst_beg - granule_beg);
}
}
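The loop this patch restructures computes the value an ASan container granule is annotated with: the number of addressable bytes counted from the granule start through the last unpoisoned byte seen. A standalone sketch of that scan, with a plain `std::vector<bool>` standing in for `AddressIsPoisoned` (a hypothetical substitution, since the real check reads shadow memory):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of the per-granule count from SlowCopyContainerAnnotations: keep
// updating unpoisoned_bytes at every unpoisoned position, so the final
// value covers everything up to and including the LAST unpoisoned byte.
// `poisoned` is a stand-in for AddressIsPoisoned over source addresses.
inline size_t UnpoisonedPrefixLen(const std::vector<bool> &poisoned,
                                  size_t granule_beg, size_t granule_end) {
  size_t unpoisoned_bytes = 0;
  for (size_t p = granule_beg; p != granule_end; ++p)
    if (!poisoned[p])
      unpoisoned_bytes = p - granule_beg + 1;
  return unpoisoned_bytes;
}
```

The `continue` introduced by the patch then skips writing this value only in one case: the scan stopped exactly at a mid-granule `dst_end` whose byte is already unpoisoned, meaning the destination's last granule extends beyond the container and must keep its current annotation.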
From d60e7dd3189ca221f74e52ad3be9ce32ca505bc4 Mon Sep 17 00:00:00 2001
From: Vitaly Buka <vitalybuka at google.com>
Date: Wed, 2 Oct 2024 23:40:18 -0700
Subject: [PATCH 24/25] Move conditions into
CopyContainerFirstGranuleAnnotation/CopyContainerLastGranuleAnnotation
---
compiler-rt/lib/asan/asan_poisoning.cpp | 35 ++++++++++++-------------
1 file changed, 17 insertions(+), 18 deletions(-)
diff --git a/compiler-rt/lib/asan/asan_poisoning.cpp b/compiler-rt/lib/asan/asan_poisoning.cpp
index a0f6ac268cdbfc..b1ca84b15ca8cb 100644
--- a/compiler-rt/lib/asan/asan_poisoning.cpp
+++ b/compiler-rt/lib/asan/asan_poisoning.cpp
@@ -653,8 +653,10 @@ static void SlowReversedCopyContainerAnnotations(uptr src_beg, uptr src_end,
static void CopyContainerFirstGranuleAnnotation(uptr src_beg, uptr dst_beg) {
constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
// First granule
- uptr dst_beg_down = RoundDownTo(dst_beg, granularity);
uptr src_beg_down = RoundDownTo(src_beg, granularity);
+ uptr dst_beg_down = RoundDownTo(dst_beg, granularity);
+ if (dst_beg_down == dst_beg)
+ return;
if (!AddressIsPoisoned(src_beg))
*(u8 *)MemToShadow(dst_beg_down) = *(u8 *)MemToShadow(src_beg_down);
else if (!AddressIsPoisoned(dst_beg))
@@ -664,11 +666,13 @@ static void CopyContainerFirstGranuleAnnotation(uptr src_beg, uptr dst_beg) {
// A helper function for __sanitizer_copy_contiguous_container_annotations,
// has assumption about begin and end of the container.
// Should not be used stand alone.
-static void CopyContainerLastGranuleAnnotation(uptr src_end,
- uptr dst_end_down) {
+static void CopyContainerLastGranuleAnnotation(uptr src_end, uptr dst_end) {
constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
// Last granule
uptr src_end_down = RoundDownTo(src_end, granularity);
+ uptr dst_end_down = RoundDownTo(dst_end, granularity);
+ if (dst_end_down == dst_end || !AddressIsPoisoned(dst_end))
+ return;
if (AddressIsPoisoned(src_end))
*(u8 *)MemToShadow(dst_end_down) = *(u8 *)MemToShadow(src_end_down);
else
@@ -736,26 +740,21 @@ void __sanitizer_copy_contiguous_container_annotations(const void *src_beg_p,
// from the middle.
uptr dst_beg_up = RoundUpTo(dst_beg, granularity);
uptr dst_end_down = RoundDownTo(dst_end, granularity);
- if (copy_in_reversed_order) {
- if (dst_end_down != dst_end && AddressIsPoisoned(dst_end))
- CopyContainerLastGranuleAnnotation(src_end, dst_end_down);
- } else {
- if (dst_beg_up != dst_beg)
- CopyContainerFirstGranuleAnnotation(src_beg, dst_beg);
- }
+ if (copy_in_reversed_order)
+ CopyContainerLastGranuleAnnotation(src_end, dst_end);
+ else
+ CopyContainerFirstGranuleAnnotation(src_beg, dst_beg);
if (dst_beg_up < dst_end_down) {
internal_memmove((u8 *)MemToShadow(dst_beg_up),
- (u8 *)MemToShadow(src_beg_up),
- (dst_end_down - dst_beg_up) / granularity);
+ (u8 *)MemToShadow(src_beg_up),
+ (dst_end_down - dst_beg_up) / granularity);
}
- if (copy_in_reversed_order) {
- if (dst_beg_up != dst_beg)
- CopyContainerFirstGranuleAnnotation(src_beg, dst_beg);
- } else if (dst_end_down != dst_end && AddressIsPoisoned(dst_end)) {
- CopyContainerLastGranuleAnnotation(src_end, dst_end_down);
- }
+ if (copy_in_reversed_order)
+ CopyContainerFirstGranuleAnnotation(src_beg, dst_beg);
+ else
+ CopyContainerLastGranuleAnnotation(src_end, dst_end);
}
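The refactoring in this patch hoists each call site's guard (`dst_end_down != dst_end && AddressIsPoisoned(dst_end)` and `dst_beg_up != dst_beg`) into the helper itself, so both branches of the caller collapse to bare calls. A sketch of the last-granule precondition after the move (illustrative values, not ASan's real shadow encoding; the poison check is passed in as a flag here instead of reading shadow memory):

```cpp
#include <cassert>
#include <cstdint>
using uptr = uintptr_t;

constexpr uptr kGranularity = 8;  // ASAN_SHADOW_GRANULARITY on most targets

// Sketch: the last-granule helper has work to do only when the destination
// ends mid-granule AND that trailing byte is poisoned; otherwise the
// granule is fully covered (or fully addressable) and must stay untouched.
constexpr bool LastGranuleNeedsCopy(uptr dst_end, bool dst_end_poisoned) {
  uptr dst_end_down = dst_end & ~(kGranularity - 1);
  return dst_end_down != dst_end && dst_end_poisoned;
}
```

Pushing the check into the helper trades one extra `RoundDownTo` per call for noticeably simpler control flow at the two call sites, which is the point of this commit.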
static const void *FindBadAddress(uptr begin, uptr end, bool poisoned) {
From b31757b466b2a8a8b3639f99d37e0f1a260ba289 Mon Sep 17 00:00:00 2001
From: Advenam Tacet <advenam.tacet at gmail.com>
Date: Tue, 8 Oct 2024 12:24:54 +0200
Subject: [PATCH 25/25] Fix formating
---
compiler-rt/lib/asan/asan_poisoning.cpp | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/compiler-rt/lib/asan/asan_poisoning.cpp b/compiler-rt/lib/asan/asan_poisoning.cpp
index b1ca84b15ca8cb..762670632f4e0f 100644
--- a/compiler-rt/lib/asan/asan_poisoning.cpp
+++ b/compiler-rt/lib/asan/asan_poisoning.cpp
@@ -577,7 +577,6 @@ void __sanitizer_annotate_double_ended_contiguous_container(
}
}
-
// Marks the specified number of bytes in a granule as accessible or
// poisones the whole granule with kAsanContiguousContainerOOBMagic value.
static void SetContainerGranule(uptr ptr, u8 n) {
@@ -729,9 +728,9 @@ void __sanitizer_copy_contiguous_container_annotations(const void *src_beg_p,
bool copy_in_reversed_order = src_beg < dst_beg && dst_beg <= src_end_up;
if (src_beg % granularity != dst_beg % granularity ||
RoundDownTo(dst_end - 1, granularity) <= dst_beg) {
- if (copy_in_reversed_order)
+ if (copy_in_reversed_order)
SlowReversedCopyContainerAnnotations(src_beg, src_end, dst_beg, dst_end);
- else
+ else
SlowCopyContainerAnnotations(src_beg, src_end, dst_beg, dst_end);
return;
}
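After this series, the final shape of `__sanitizer_copy_contiguous_container_annotations` is: fix up one edge granule, bulk-`memmove` the interior shadow bytes, fix up the other edge granule, with the copy direction deciding which edge goes first. A hedged sketch of just that ordering decision (the enum and helper are illustrative, not part of the patch):

```cpp
#include <cassert>
#include <cstdint>
using uptr = uintptr_t;

// Sketch of the edge-granule ordering: a reversed copy (destination
// overlapping, or sharing a granule with, the source tail) must patch the
// LAST granule before the interior memmove and the FIRST one after it;
// a forward copy does the opposite.
enum class EdgeOrder { FirstThenLast, LastThenFirst };

constexpr EdgeOrder ChooseEdgeOrder(uptr src_beg, uptr dst_beg,
                                    uptr src_end_up) {
  bool reversed = src_beg < dst_beg && dst_beg <= src_end_up;
  return reversed ? EdgeOrder::LastThenFirst : EdgeOrder::FirstThenLast;
}
```

Note that the test uses `src_end_up` (the rounded-up end) rather than `src_end`, so the reversed path is also taken when the ranges merely share the source's last granule, where a forward edge fix-up could clobber shadow state still needed for the copy.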