[compiler-rt] [compiler-rt][ASan] Add function moving annotations (PR #91702)

via llvm-commits llvm-commits at lists.llvm.org
Wed Aug 28 02:59:43 PDT 2024


https://github.com/AdvenamTacet updated https://github.com/llvm/llvm-project/pull/91702

>From 2db13f5923191de806aea6dc0f9161f137a33c6e Mon Sep 17 00:00:00 2001
From: Advenam Tacet <advenam.tacet at trailofbits.com>
Date: Fri, 10 May 2024 07:27:17 +0200
Subject: [PATCH 1/5] [compiler-rt][ASan] Add function moving annotations

This PR adds a `__sanitizer_move_contiguous_container_annotations` function, which moves annotations from one memory area to another.
When the function returns, the old area is fully unpoisoned and the new area carries the same annotations the old area had on entry (within the limitations of ASan).

```cpp
void __sanitizer_move_contiguous_container_annotations(
    const void *old_storage_beg_p, const void *old_storage_end_p,
    const void *new_storage_beg_p, const void *new_storage_end_p) {
```

This function aims to help with short string annotations and similar container annotations.
Right now we change the trait types of `std::basic_string` when compiling with ASan.

https://github.com/llvm/llvm-project/blob/87f3407856e61a73798af4e41b28bc33b5bf4ce6/libcxx/include/string#L738-L751

The goal is to not change `__trivially_relocatable` when compiling with ASan.
If this function is accepted and upstreamed, the next step is creating a function like `__memcpy_with_asan`, which moves memory together with its ASan annotations.
That function could then be used instead of `__builtin_memcpy` when moving trivially relocatable objects.

NOTICE: I have not tested this yet, so it may not even compile, but I don't expect big changes. The PR is WIP until I test it.

---

I'm wondering whether there is a good way to address the fact that in a container the new buffer is usually bigger than the previous one.
We may add two more arguments to the function to address this (the beginning and the end of the whole buffer).

Another potential change is removing `new_storage_end_p`, as it is redundant given that both regions are required to have the same size.
---
 .../include/sanitizer/common_interface_defs.h |  24 +++
 compiler-rt/lib/asan/asan_errors.cpp          |  14 ++
 compiler-rt/lib/asan/asan_errors.h            |  19 +++
 compiler-rt/lib/asan/asan_poisoning.cpp       | 128 +++++++++++++++
 compiler-rt/lib/asan/asan_report.cpp          |  10 ++
 compiler-rt/lib/asan/asan_report.h            |   3 +
 .../sanitizer_common_interface.inc            |   1 +
 .../sanitizer_interface_internal.h            |   4 +
 .../TestCases/move_container_annotations.cpp  | 155 ++++++++++++++++++
 9 files changed, 358 insertions(+)
 create mode 100644 compiler-rt/test/asan/TestCases/move_container_annotations.cpp

diff --git a/compiler-rt/include/sanitizer/common_interface_defs.h b/compiler-rt/include/sanitizer/common_interface_defs.h
index f9fce595b37bb8..a3c91a9be6439b 100644
--- a/compiler-rt/include/sanitizer/common_interface_defs.h
+++ b/compiler-rt/include/sanitizer/common_interface_defs.h
@@ -193,6 +193,30 @@ void SANITIZER_CDECL __sanitizer_annotate_double_ended_contiguous_container(
     const void *old_container_beg, const void *old_container_end,
     const void *new_container_beg, const void *new_container_end);
 
+/// Moves annotations from one storage region to another.
+/// On return, the new buffer is annotated in the same way as the old buffer
+/// was on entry, and the old buffer is fully unpoisoned.
+/// The main purpose of this function is moving trivially relocatable
+/// objects whose memory may be poisoned (hence not trivially under ASan).
+///
+/// A contiguous container is a container that keeps all of its elements
+/// in a contiguous region of memory. The container owns the region of memory
+/// <c>[old_storage_beg, old_storage_end)</c> and
+/// <c>[new_storage_beg, new_storage_end)</c>.
+/// There is no requirement on where the objects are kept within the storage.
+/// Poisoned and non-poisoned memory areas can alternate;
+/// there are no shadow memory restrictions.
+///
+/// Argument requirements:
+/// The new container must have the same size as the old container.
+/// \param old_storage_beg Beginning of the old container region.
+/// \param old_storage_end End of the old container region.
+/// \param new_storage_beg Beginning of the new container region.
+/// \param new_storage_end End of the new container region.
+void SANITIZER_CDECL __sanitizer_move_contiguous_container_annotations(
+    const void *old_storage_beg, const void *old_storage_end,
+    const void *new_storage_beg, const void *new_storage_end);
+
 /// Returns true if the contiguous container <c>[beg, end)</c> is properly
 /// poisoned.
 ///
diff --git a/compiler-rt/lib/asan/asan_errors.cpp b/compiler-rt/lib/asan/asan_errors.cpp
index 6f2fd28bfdf11a..11d67f4ace1064 100644
--- a/compiler-rt/lib/asan/asan_errors.cpp
+++ b/compiler-rt/lib/asan/asan_errors.cpp
@@ -348,6 +348,20 @@ void ErrorBadParamsToAnnotateDoubleEndedContiguousContainer::Print() {
   ReportErrorSummary(scariness.GetDescription(), stack);
 }
 
+void ErrorBadParamsToMoveContiguousContainerAnnotations::Print() {
+  Report(
+      "ERROR: AddressSanitizer: bad parameters to "
+      "__sanitizer_move_contiguous_container_annotations:\n"
+      "      old_storage_beg : %p\n"
+      "      old_storage_end : %p\n"
+      "      new_storage_beg : %p\n"
+      "      new_storage_end : %p\n",
+      (void *)old_storage_beg, (void *)old_storage_end, (void *)new_storage_beg,
+      (void *)new_storage_end);
+  stack->Print();
+  ReportErrorSummary(scariness.GetDescription(), stack);
+}
+
 void ErrorODRViolation::Print() {
   Decorator d;
   Printf("%s", d.Error());
diff --git a/compiler-rt/lib/asan/asan_errors.h b/compiler-rt/lib/asan/asan_errors.h
index 634f6da5443552..40526039fbc76f 100644
--- a/compiler-rt/lib/asan/asan_errors.h
+++ b/compiler-rt/lib/asan/asan_errors.h
@@ -353,6 +353,24 @@ struct ErrorBadParamsToAnnotateDoubleEndedContiguousContainer : ErrorBase {
   void Print();
 };
 
+struct ErrorBadParamsToMoveContiguousContainerAnnotations : ErrorBase {
+  const BufferedStackTrace *stack;
+  uptr old_storage_beg, old_storage_end, new_storage_beg, new_storage_end;
+
+  ErrorBadParamsToMoveContiguousContainerAnnotations() = default;  // (*)
+  ErrorBadParamsToMoveContiguousContainerAnnotations(
+      u32 tid, BufferedStackTrace *stack_, uptr old_storage_beg_,
+      uptr old_storage_end_, uptr new_storage_beg_, uptr new_storage_end_)
+      : ErrorBase(tid, 10,
+                  "bad-__sanitizer_move_contiguous_container_annotations"),
+        stack(stack_),
+        old_storage_beg(old_storage_beg_),
+        old_storage_end(old_storage_end_),
+        new_storage_beg(new_storage_beg_),
+        new_storage_end(new_storage_end_) {}
+  void Print();
+};
+
 struct ErrorODRViolation : ErrorBase {
   __asan_global global1, global2;
   u32 stack_id1, stack_id2;
@@ -421,6 +439,7 @@ struct ErrorGeneric : ErrorBase {
   macro(StringFunctionSizeOverflow)                        \
   macro(BadParamsToAnnotateContiguousContainer)            \
   macro(BadParamsToAnnotateDoubleEndedContiguousContainer) \
+  macro(BadParamsToMoveContiguousContainerAnnotations)     \
   macro(ODRViolation)                                      \
   macro(InvalidPointerPair)                                \
   macro(Generic)
diff --git a/compiler-rt/lib/asan/asan_poisoning.cpp b/compiler-rt/lib/asan/asan_poisoning.cpp
index d600b1a0c241fa..10f5e97f8a5aee 100644
--- a/compiler-rt/lib/asan/asan_poisoning.cpp
+++ b/compiler-rt/lib/asan/asan_poisoning.cpp
@@ -576,6 +576,134 @@ void __sanitizer_annotate_double_ended_contiguous_container(
   }
 }
 
+// This function moves annotation from one buffer to another.
+// Old buffer is unpoisoned at the end.
+void __sanitizer_move_contiguous_container_annotations(
+    const void *old_storage_beg_p, const void *old_storage_end_p,
+    const void *new_storage_beg_p, const void *new_storage_end_p) {
+  if (!flags()->detect_container_overflow)
+    return;
+
+  VPrintf(2, "contiguous_container_old: %p %p\n", old_storage_beg_p,
+          old_storage_end_p);
+  VPrintf(2, "contiguous_container_new: %p %p\n", new_storage_beg_p,
+          new_storage_end_p);
+
+  uptr old_storage_beg = reinterpret_cast<uptr>(old_storage_beg_p);
+  uptr old_storage_end = reinterpret_cast<uptr>(old_storage_end_p);
+  uptr new_storage_beg = reinterpret_cast<uptr>(new_storage_beg_p);
+  uptr new_storage_end = reinterpret_cast<uptr>(new_storage_end_p);
+
+  constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
+
+  if (!(old_storage_beg <= old_storage_end) ||
+      !(new_storage_beg <= new_storage_end) ||
+      (old_storage_end - old_storage_beg) !=
+          (new_storage_end - new_storage_beg)) {
+    GET_STACK_TRACE_FATAL_HERE;
+    ReportBadParamsToMoveContiguousContainerAnnotations(
+        old_storage_beg, old_storage_end, new_storage_beg, new_storage_end,
+        &stack);
+  }
+
+  uptr new_internal_beg = RoundUpTo(new_storage_beg, granularity);
+  uptr old_internal_beg = RoundUpTo(old_storage_beg, granularity);
+  uptr new_external_beg = RoundDownTo(new_storage_beg, granularity);
+  uptr old_external_beg = RoundDownTo(old_storage_beg, granularity);
+  uptr new_internal_end = RoundDownTo(new_storage_end, granularity);
+  uptr old_internal_end = RoundDownTo(old_storage_end, granularity);
+
+  // At the very beginning we poison the whole buffer.
+  // Later we unpoison what is necessary.
+  PoisonShadow(new_internal_beg, new_internal_end - new_internal_beg,
+               kAsanContiguousContainerOOBMagic);
+  if (new_internal_beg != new_storage_beg) {
+    uptr new_unpoisoned = *(u8 *)MemToShadow(new_external_beg);
+    if (new_unpoisoned > (new_storage_beg - new_external_beg)) {
+      *(u8 *)MemToShadow(new_external_beg) =
+          static_cast<u8>(new_storage_beg - new_external_beg);
+    }
+  }
+  if (new_internal_end != new_storage_end) {
+    uptr new_unpoisoned = *(u8 *)MemToShadow(new_internal_end);
+    if (new_unpoisoned <= (new_storage_end - new_internal_end)) {
+      *(u8 *)MemToShadow(new_external_beg) =
+          static_cast<u8>(kAsanContiguousContainerOOBMagic);
+    }
+  }
+
+  // There are two cases.
+  // 1) Distance between buffers is granule-aligned.
+  // 2) It's not aligned, that case is slower.
+  if (old_storage_beg % granularity == new_storage_beg % granularity) {
+    // When buffers are aligned in the same way, we can just copy shadow memory,
+    // except first and last granule.
+    __builtin_memcpy((u8 *)MemToShadow(new_internal_beg),
+                     (u8 *)MemToShadow(old_internal_beg),
+                     (new_internal_end - new_internal_beg) / granularity);
+    // In first granule we cannot poison anything before beginning of the
+    // container.
+    if (new_internal_beg != new_storage_beg) {
+      uptr old_unpoisoned = *(u8 *)MemToShadow(old_external_beg);
+      uptr new_unpoisoned = *(u8 *)MemToShadow(new_external_beg);
+
+      if (old_unpoisoned > old_storage_beg - old_external_beg) {
+        *(u8 *)MemToShadow(new_external_beg) = old_unpoisoned;
+      } else if (new_unpoisoned > new_storage_beg - new_external_beg) {
+        *(u8 *)MemToShadow(new_external_beg) =
+            new_storage_beg - new_external_beg;
+      }
+    }
+    // In last granule we cannot poison anything after the end of the container.
+    if (new_internal_end != new_storage_end) {
+      uptr old_unpoisoned = *(u8 *)MemToShadow(old_internal_end);
+      uptr new_unpoisoned = *(u8 *)MemToShadow(new_internal_end);
+      if (new_unpoisoned <= new_storage_end - new_internal_end &&
+          old_unpoisoned < new_unpoisoned) {
+        *(u8 *)MemToShadow(new_internal_end) = old_unpoisoned;
+      }
+    }
+  } else {
+    // If buffers are not aligned, we have to go byte by byte.
+    uptr old_ptr = old_storage_beg;
+    uptr new_ptr = new_storage_beg;
+    uptr next_new;
+    for (; new_ptr + granularity <= new_storage_end;) {
+      next_new = RoundUpTo(new_ptr + 1, granularity);
+      uptr unpoison_to = 0;
+      for (; new_ptr != next_new; ++new_ptr, ++old_ptr) {
+        if (!AddressIsPoisoned(old_ptr)) {
+          unpoison_to = new_ptr + 1;
+        }
+      }
+      if (unpoison_to != 0) {
+        uptr granule_beg = new_ptr - granularity;
+        uptr value = unpoison_to - granule_beg;
+        *(u8 *)MemToShadow(granule_beg) = static_cast<u8>(value);
+      }
+    }
+    // Only case left is the end of the container in the middle of a granule.
+    // If memory after the end is unpoisoned, we cannot change anything.
+    // But if it's poisoned, we should unpoison as little as possible.
+    if (new_ptr != new_storage_end && AddressIsPoisoned(new_storage_end)) {
+      uptr unpoison_to = 0;
+      for (; new_ptr != new_storage_end; ++new_ptr, ++old_ptr) {
+        if (!AddressIsPoisoned(old_ptr)) {
+          unpoison_to = new_ptr + 1;
+        }
+      }
+      if (unpoison_to != 0) {
+        uptr granule_beg = RoundDownTo(new_storage_end, granularity);
+        uptr value = unpoison_to - granule_beg;
+        *(u8 *)MemToShadow(granule_beg) = static_cast<u8>(value);
+      }
+    }
+  }
+
+  __asan_unpoison_memory_region((void *)old_storage_beg,
+                                old_storage_end - old_storage_beg);
+}
+
 static const void *FindBadAddress(uptr begin, uptr end, bool poisoned) {
   CHECK_LE(begin, end);
   constexpr uptr kMaxRangeToCheck = 32;
diff --git a/compiler-rt/lib/asan/asan_report.cpp b/compiler-rt/lib/asan/asan_report.cpp
index fd590e401f67fe..00ef334bb88ffa 100644
--- a/compiler-rt/lib/asan/asan_report.cpp
+++ b/compiler-rt/lib/asan/asan_report.cpp
@@ -367,6 +367,16 @@ void ReportBadParamsToAnnotateDoubleEndedContiguousContainer(
   in_report.ReportError(error);
 }
 
+void ReportBadParamsToMoveContiguousContainerAnnotations(
+    uptr old_storage_beg, uptr old_storage_end, uptr new_storage_beg,
+    uptr new_storage_end, BufferedStackTrace *stack) {
+  ScopedInErrorReport in_report;
+  ErrorBadParamsToMoveContiguousContainerAnnotations error(
+      GetCurrentTidOrInvalid(), stack, old_storage_beg, old_storage_end,
+      new_storage_beg, new_storage_end);
+  in_report.ReportError(error);
+}
+
 void ReportODRViolation(const __asan_global *g1, u32 stack_id1,
                         const __asan_global *g2, u32 stack_id2) {
   ScopedInErrorReport in_report;
diff --git a/compiler-rt/lib/asan/asan_report.h b/compiler-rt/lib/asan/asan_report.h
index 3540b3b4b1bfe0..d98c87e5a6930b 100644
--- a/compiler-rt/lib/asan/asan_report.h
+++ b/compiler-rt/lib/asan/asan_report.h
@@ -88,6 +88,9 @@ void ReportBadParamsToAnnotateDoubleEndedContiguousContainer(
     uptr storage_beg, uptr storage_end, uptr old_container_beg,
     uptr old_container_end, uptr new_container_beg, uptr new_container_end,
     BufferedStackTrace *stack);
+void ReportBadParamsToMoveContiguousContainerAnnotations(
+    uptr old_storage_beg, uptr old_storage_end, uptr new_storage_beg,
+    uptr new_storage_end, BufferedStackTrace *stack);
 
 void ReportODRViolation(const __asan_global *g1, u32 stack_id1,
                         const __asan_global *g2, u32 stack_id2);
diff --git a/compiler-rt/lib/sanitizer_common/sanitizer_common_interface.inc b/compiler-rt/lib/sanitizer_common/sanitizer_common_interface.inc
index d5981a38ff2921..0a413202b9368b 100644
--- a/compiler-rt/lib/sanitizer_common/sanitizer_common_interface.inc
+++ b/compiler-rt/lib/sanitizer_common/sanitizer_common_interface.inc
@@ -10,6 +10,7 @@
 INTERFACE_FUNCTION(__sanitizer_acquire_crash_state)
 INTERFACE_FUNCTION(__sanitizer_annotate_contiguous_container)
 INTERFACE_FUNCTION(__sanitizer_annotate_double_ended_contiguous_container)
+INTERFACE_FUNCTION(__sanitizer_move_contiguous_container_annotations)
 INTERFACE_FUNCTION(__sanitizer_contiguous_container_find_bad_address)
 INTERFACE_FUNCTION(
     __sanitizer_double_ended_contiguous_container_find_bad_address)
diff --git a/compiler-rt/lib/sanitizer_common/sanitizer_interface_internal.h b/compiler-rt/lib/sanitizer_common/sanitizer_interface_internal.h
index cd0d45e2f3fab1..fa1d2b562f33fc 100644
--- a/compiler-rt/lib/sanitizer_common/sanitizer_interface_internal.h
+++ b/compiler-rt/lib/sanitizer_common/sanitizer_interface_internal.h
@@ -71,6 +71,10 @@ void __sanitizer_annotate_double_ended_contiguous_container(
     const void *old_container_beg, const void *old_container_end,
     const void *new_container_beg, const void *new_container_end);
 SANITIZER_INTERFACE_ATTRIBUTE
+void __sanitizer_move_contiguous_container_annotations(
+    const void *old_storage_beg, const void *old_storage_end,
+    const void *new_storage_beg, const void *new_storage_end);
+SANITIZER_INTERFACE_ATTRIBUTE
 int __sanitizer_verify_contiguous_container(const void *beg, const void *mid,
                                             const void *end);
 SANITIZER_INTERFACE_ATTRIBUTE
diff --git a/compiler-rt/test/asan/TestCases/move_container_annotations.cpp b/compiler-rt/test/asan/TestCases/move_container_annotations.cpp
new file mode 100644
index 00000000000000..f21826ca0b7939
--- /dev/null
+++ b/compiler-rt/test/asan/TestCases/move_container_annotations.cpp
@@ -0,0 +1,155 @@
+// RUN: %clangxx_asan -fexceptions -O %s -o %t && %env_asan_opts=detect_stack_use_after_return=0 %run %t
+//
+// Test __sanitizer_move_contiguous_container_annotations.
+
+#include <algorithm>
+#include <deque>
+#include <numeric>
+
+#include <assert.h>
+#include <sanitizer/asan_interface.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+static constexpr size_t kGranularity = 8;
+
+template <class T> static constexpr T RoundDown(T x) {
+  return reinterpret_cast<T>(reinterpret_cast<uintptr_t>(x) &
+                             ~(kGranularity - 1));
+}
+template <class T> static constexpr T RoundUp(T x) {
+  return (x == RoundDown(x))
+             ? x
+             : reinterpret_cast<T>(reinterpret_cast<uintptr_t>(RoundDown(x)) +
+                                   kGranularity);
+}
+
+static std::deque<int> GetPoisonedState(char *begin, char *end) {
+  std::deque<int> result;
+  for (; begin != end; ++begin) {
+    result.push_back(__asan_address_is_poisoned(begin));
+  }
+  return result;
+}
+
+static void RandomPoison(char *beg, char *end) {
+  if (beg != RoundDown(beg) && (rand() % 2 == 1)) {
+    __asan_poison_memory_region(beg, RoundUp(beg) - beg);
+    __asan_unpoison_memory_region(beg, rand() % (RoundUp(beg) - beg + 1));
+  }
+  for (beg = RoundUp(beg); beg + kGranularity <= end; beg += kGranularity) {
+    __asan_poison_memory_region(beg, kGranularity);
+    __asan_unpoison_memory_region(beg, rand() % (kGranularity + 1));
+  }
+  if (end > beg && __asan_address_is_poisoned(end)) {
+    __asan_poison_memory_region(beg, kGranularity);
+    __asan_unpoison_memory_region(beg, rand() % (end - beg + 1));
+  }
+}
+
+static size_t count_unpoisoned(std::deque<int> &poison_states, size_t n) {
+  size_t result = 0;
+  for (size_t i = 0; i < n && !poison_states.empty(); ++i) {
+    if (!poison_states.front()) {
+      result = i + 1;
+    }
+    poison_states.pop_front();
+  }
+
+  return result;
+}
+
+void TestMove(size_t capacity, size_t off_old, size_t off_new,
+              int poison_buffers) {
+  size_t old_buffer_size = capacity + off_old + kGranularity * 2;
+  size_t new_buffer_size = capacity + off_new + kGranularity * 2;
+  char *old_buffer = new char[old_buffer_size];
+  char *new_buffer = new char[new_buffer_size];
+  char *old_buffer_end = old_buffer + old_buffer_size;
+  char *new_buffer_end = new_buffer + new_buffer_size;
+  bool poison_old = poison_buffers % 2 == 1;
+  bool poison_new = poison_buffers / 2 == 1;
+  if (poison_old)
+    __asan_poison_memory_region(old_buffer, old_buffer_size);
+  if (poison_new)
+    __asan_poison_memory_region(new_buffer, new_buffer_size);
+  char *old_beg = old_buffer + off_old;
+  char *new_beg = new_buffer + off_new;
+  char *old_end = old_beg + capacity;
+  char *new_end = new_beg + capacity;
+
+  for (int i = 0; i < 1000; i++) {
+    RandomPoison(old_beg, old_end);
+    std::deque<int> poison_states = GetPoisonedState(old_beg, old_end);
+    __sanitizer_move_contiguous_container_annotations(old_beg, old_end, new_beg,
+                                                      new_end);
+
+    // If old_buffer was poisoned, the expected state of memory before
+    // old_beg is undetermined.
+    // If old_buffer was not poisoned, that memory should still be unpoisoned.
+    // Area between old_beg and old_end should never be poisoned.
+    char *cur = poison_old ? old_beg : old_buffer;
+    for (; cur < old_end; ++cur) {
+      assert(!__asan_address_is_poisoned(cur));
+    }
+    // Memory after old_end should be the same as at the beginning.
+    for (; cur < old_buffer_end; ++cur) {
+      assert(__asan_address_is_poisoned(cur) == poison_old);
+    }
+
+    // If new_buffer were not poisoned, memory before new_beg should never
+    // be poisoned. Otherwise, its state is undetermined.
+    if (!poison_new) {
+      for (cur = new_buffer; cur < new_beg; ++cur) {
+        assert(!__asan_address_is_poisoned(cur));
+      }
+    }
+    // In every granule, poisoned memory should come after the last expected unpoisoned byte.
+    char *next;
+    for (cur = new_beg; cur + kGranularity <= new_end; cur = next) {
+      next = RoundUp(cur + 1);
+      size_t unpoisoned = count_unpoisoned(poison_states, next - cur);
+      if (unpoisoned > 0) {
+        assert(!__asan_address_is_poisoned(cur + unpoisoned - 1));
+      }
+      if (cur + unpoisoned < next) {
+        assert(__asan_address_is_poisoned(cur + unpoisoned));
+      }
+    }
+    // [cur, new_end) is not checked yet.
+    // If new_buffer was not poisoned, this area cannot be poisoned, so the check can be skipped.
+    // If new_buffer was poisoned, this area should be the same as before the call.
+    if (cur < new_end && poison_new) {
+      size_t unpoisoned = count_unpoisoned(poison_states, new_end - cur);
+      if (unpoisoned > 0) {
+        assert(!__asan_address_is_poisoned(cur + unpoisoned - 1));
+      }
+      if (cur + unpoisoned < new_end) {
+        assert(__asan_address_is_poisoned(cur + unpoisoned));
+      }
+    }
+    // Memory annotations after new_end should be unchanged.
+    for (cur = new_end; cur < new_buffer_end; ++cur) {
+      assert(__asan_address_is_poisoned(cur) == poison_new);
+    }
+  }
+
+  __asan_unpoison_memory_region(old_buffer, old_buffer_size);
+  __asan_unpoison_memory_region(new_buffer, new_buffer_size);
+  delete[] old_buffer;
+  delete[] new_buffer;
+}
+
+int main(int argc, char **argv) {
+  int n = argc == 1 ? 64 : atoi(argv[1]);
+  for (int i = 0; i <= n; i++) {
+    for (int j = 0; j < kGranularity * 2; j++) {
+      for (int k = 0; k < kGranularity * 2; k++) {
+        for (int poison = 0; poison < 4; ++poison) {
+          TestMove(i, j, k, poison);
+        }
+      }
+    }
+  }
+}

>From 0ab9216ab9f2f3b60584070bb0847d3331d1ef08 Mon Sep 17 00:00:00 2001
From: Advenam Tacet <advenam.tacet at gmail.com>
Date: Tue, 20 Aug 2024 04:12:54 +0200
Subject: [PATCH 2/5] Not overlapping containers

Implemented a correct container-annotation copying function (non-overlapping containers only).

Refactored the initial PoC to a fully functional implementation for copying container annotations without moving/copying the memory content itself. This function currently supports cases where the source and destination containers do not overlap. Handling overlapping containers would add complexity.

The function is designed to work irrespective of whether the buffers are granule-aligned or the distance between them is granule-aligned. However, such scenarios may have an impact on performance.

A test case has been included to verify the correctness of the implementation.

Removed the unpoisoning of the original buffer at the end; users can do it themselves if they want to.
---
 compiler-rt/lib/asan/asan_poisoning.cpp       | 176 +++++++++++-------
 .../TestCases/move_container_annotations.cpp  |  45 +++--
 2 files changed, 135 insertions(+), 86 deletions(-)

diff --git a/compiler-rt/lib/asan/asan_poisoning.cpp b/compiler-rt/lib/asan/asan_poisoning.cpp
index 10f5e97f8a5aee..a079f0ea607910 100644
--- a/compiler-rt/lib/asan/asan_poisoning.cpp
+++ b/compiler-rt/lib/asan/asan_poisoning.cpp
@@ -576,8 +576,46 @@ void __sanitizer_annotate_double_ended_contiguous_container(
   }
 }
 
-// This function moves annotation from one buffer to another.
-// Old buffer is unpoisoned at the end.
+static bool WithinOneGranule(uptr p, uptr q) {
+  if (p == q)
+    return true;
+  return RoundDownTo(p, ASAN_SHADOW_GRANULARITY) ==
+         RoundDownTo(q - 1, ASAN_SHADOW_GRANULARITY);
+}
+
+static void PoisonContainer(uptr storage_beg, uptr storage_end) {
+  constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
+  uptr internal_beg = RoundUpTo(storage_beg, granularity);
+  uptr external_beg = RoundDownTo(storage_beg, granularity);
+  uptr internal_end = RoundDownTo(storage_end, granularity);
+
+  if (internal_end > internal_beg)
+    PoisonShadow(internal_beg, internal_end - internal_beg,
+                 kAsanContiguousContainerOOBMagic);
+  // The new buffer may start in the middle of a granule.
+  if (internal_beg != storage_beg && internal_beg < internal_end &&
+      !AddressIsPoisoned(storage_beg)) {
+    *(u8 *)MemToShadow(external_beg) =
+        static_cast<u8>(storage_beg - external_beg);
+  }
+  // The new buffer may end in the middle of a granule.
+  if (internal_end != storage_end && AddressIsPoisoned(storage_end)) {
+    *(u8 *)MemToShadow(internal_end) =
+        static_cast<u8>(kAsanContiguousContainerOOBMagic);
+  }
+}
+
+// This function copies ASan memory annotations (poisoned/unpoisoned states)
+// from one buffer to another.
+// Its main purpose is to help with relocating trivially relocatable objects
+// whose memory may be poisoned, without calling the copy constructor.
+// Note that it does not move the memory content itself, only the annotations.
+// If the buffers are not aligned (i.e. the distance between them is not
+// granule-aligned:
+//     old_storage_beg % granularity != new_storage_beg % granularity
+// ), the function falls back to a byte-by-byte path, which is slower.
+// The old buffer's annotations are not removed. If necessary, the
+// user can unpoison the old buffer with __asan_unpoison_memory_region.
 void __sanitizer_move_contiguous_container_annotations(
     const void *old_storage_beg_p, const void *old_storage_end_p,
     const void *new_storage_beg_p, const void *new_storage_end_p) {
@@ -606,6 +644,9 @@ void __sanitizer_move_contiguous_container_annotations(
         &stack);
   }
 
+  if (old_storage_beg == old_storage_end)
+    return;
+
   uptr new_internal_beg = RoundUpTo(new_storage_beg, granularity);
   uptr old_internal_beg = RoundUpTo(old_storage_beg, granularity);
   uptr new_external_beg = RoundDownTo(new_storage_beg, granularity);
@@ -615,52 +656,62 @@ void __sanitizer_move_contiguous_container_annotations(
 
   // At the very beginning we poison the whole buffer.
   // Later we unpoison what is necessary.
-  PoisonShadow(new_internal_beg, new_internal_end - new_internal_beg,
-               kAsanContiguousContainerOOBMagic);
-  if (new_internal_beg != new_storage_beg) {
-    uptr new_unpoisoned = *(u8 *)MemToShadow(new_external_beg);
-    if (new_unpoisoned > (new_storage_beg - new_external_beg)) {
-      *(u8 *)MemToShadow(new_external_beg) =
-          static_cast<u8>(new_storage_beg - new_external_beg);
-    }
-  }
-  if (new_internal_end != new_storage_end) {
-    uptr new_unpoisoned = *(u8 *)MemToShadow(new_internal_end);
-    if (new_unpoisoned <= (new_storage_end - new_internal_end)) {
-      *(u8 *)MemToShadow(new_external_beg) =
-          static_cast<u8>(kAsanContiguousContainerOOBMagic);
-    }
-  }
+  PoisonContainer(new_storage_beg, new_storage_end);
 
   // There are two cases.
   // 1) Distance between buffers is granule-aligned.
-  // 2) It's not aligned, that case is slower.
+  // 2) It's not aligned, and therefore requires going byte by byte.
   if (old_storage_beg % granularity == new_storage_beg % granularity) {
     // When buffers are aligned in the same way, we can just copy shadow memory,
-    // except first and last granule.
-    __builtin_memcpy((u8 *)MemToShadow(new_internal_beg),
-                     (u8 *)MemToShadow(old_internal_beg),
-                     (new_internal_end - new_internal_beg) / granularity);
-    // In first granule we cannot poison anything before beginning of the
-    // container.
-    if (new_internal_beg != new_storage_beg) {
-      uptr old_unpoisoned = *(u8 *)MemToShadow(old_external_beg);
-      uptr new_unpoisoned = *(u8 *)MemToShadow(new_external_beg);
-
-      if (old_unpoisoned > old_storage_beg - old_external_beg) {
-        *(u8 *)MemToShadow(new_external_beg) = old_unpoisoned;
-      } else if (new_unpoisoned > new_storage_beg - new_external_beg) {
-        *(u8 *)MemToShadow(new_external_beg) =
-            new_storage_beg - new_external_beg;
-      }
-    }
-    // In last granule we cannot poison anything after the end of the container.
-    if (new_internal_end != new_storage_end) {
-      uptr old_unpoisoned = *(u8 *)MemToShadow(old_internal_end);
-      uptr new_unpoisoned = *(u8 *)MemToShadow(new_internal_end);
-      if (new_unpoisoned <= new_storage_end - new_internal_end &&
-          old_unpoisoned < new_unpoisoned) {
-        *(u8 *)MemToShadow(new_internal_end) = old_unpoisoned;
+    // except the first and the last granule.
+    if (new_internal_end > new_internal_beg)
+      __builtin_memcpy((u8 *)MemToShadow(new_internal_beg),
+                       (u8 *)MemToShadow(old_internal_beg),
+                       (new_internal_end - new_internal_beg) / granularity);
+    // If the beginning and the end of the storage are aligned, we are done.
+    // Otherwise, we have to handle remaining granules.
+    if (new_internal_beg != new_storage_beg ||
+        new_internal_end != new_storage_end) {
+      if (WithinOneGranule(new_storage_beg, new_storage_end)) {
+        if (new_internal_end == new_storage_end) {
+          if (!AddressIsPoisoned(old_storage_beg)) {
+            *(u8 *)MemToShadow(new_external_beg) =
+                *(u8 *)MemToShadow(old_external_beg);
+          } else if (!AddressIsPoisoned(new_storage_beg)) {
+            *(u8 *)MemToShadow(new_external_beg) =
+                new_storage_beg - new_external_beg;
+          }
+        } else if (AddressIsPoisoned(new_storage_end)) {
+          if (!AddressIsPoisoned(old_storage_beg)) {
+            *(u8 *)MemToShadow(new_external_beg) =
+                AddressIsPoisoned(old_storage_end)
+                    ? *(u8 *)MemToShadow(old_internal_end)
+                    : new_storage_end - new_external_beg;
+          } else if (!AddressIsPoisoned(new_storage_beg)) {
+            *(u8 *)MemToShadow(new_external_beg) =
+                (new_storage_beg == new_external_beg)
+                    ? static_cast<u8>(kAsanContiguousContainerOOBMagic)
+                    : new_storage_beg - new_external_beg;
+          }
+        }
+      } else {
+        // Buffer is not within one granule!
+        if (new_internal_beg != new_storage_beg) {
+          if (!AddressIsPoisoned(old_storage_beg)) {
+            *(u8 *)MemToShadow(new_external_beg) =
+                *(u8 *)MemToShadow(old_external_beg);
+          } else if (!AddressIsPoisoned(new_storage_beg)) {
+            *(u8 *)MemToShadow(new_external_beg) =
+                new_storage_beg - new_external_beg;
+          }
+        }
+        if (new_internal_end != new_storage_end &&
+            AddressIsPoisoned(new_storage_end)) {
+          *(u8 *)MemToShadow(new_internal_end) =
+              AddressIsPoisoned(old_storage_end)
+                  ? *(u8 *)MemToShadow(old_internal_end)
+                  : old_storage_end - old_internal_end;
+        }
       }
     }
   } else {
@@ -668,40 +719,31 @@ void __sanitizer_move_contiguous_container_annotations(
     uptr old_ptr = old_storage_beg;
     uptr new_ptr = new_storage_beg;
     uptr next_new;
-    for (; new_ptr + granularity <= new_storage_end;) {
+    for (; new_ptr < new_storage_end;) {
       next_new = RoundUpTo(new_ptr + 1, granularity);
       uptr unpoison_to = 0;
-      for (; new_ptr != next_new; ++new_ptr, ++old_ptr) {
+      for (; new_ptr != next_new && new_ptr != new_storage_end;
+           ++new_ptr, ++old_ptr) {
         if (!AddressIsPoisoned(old_ptr)) {
           unpoison_to = new_ptr + 1;
         }
       }
-      if (unpoison_to != 0) {
-        uptr granule_beg = new_ptr - granularity;
-        uptr value = unpoison_to - granule_beg;
-        *(u8 *)MemToShadow(granule_beg) = static_cast<u8>(value);
-      }
-    }
-    // Only case left is the end of the container in the middle of a granule.
-    // If memory after the end is unpoisoned, we cannot change anything.
-    // But if it's poisoned, we should unpoison as little as possible.
-    if (new_ptr != new_storage_end && AddressIsPoisoned(new_storage_end)) {
-      uptr unpoison_to = 0;
-      for (; new_ptr != new_storage_end; ++new_ptr, ++old_ptr) {
-        if (!AddressIsPoisoned(old_ptr)) {
-          unpoison_to = new_ptr + 1;
+      if (new_ptr < new_storage_end || new_ptr == new_internal_end ||
+          AddressIsPoisoned(new_storage_end)) {
+        uptr granule_beg = RoundDownTo(new_ptr - 1, granularity);
+        if (unpoison_to != 0) {
+          uptr value =
+              (unpoison_to == next_new) ? 0 : unpoison_to - granule_beg;
+          *(u8 *)MemToShadow(granule_beg) = static_cast<u8>(value);
+        } else {
+          *(u8 *)MemToShadow(granule_beg) =
+              (granule_beg >= new_storage_beg)
+                  ? static_cast<u8>(kAsanContiguousContainerOOBMagic)
+                  : new_storage_beg - granule_beg;
         }
       }
-      if (unpoison_to != 0) {
-        uptr granule_beg = RoundDownTo(new_storage_end, granularity);
-        uptr value = unpoison_to - granule_beg;
-        *(u8 *)MemToShadow(granule_beg) = static_cast<u8>(value);
-      }
     }
   }
-
-  __asan_unpoison_memory_region((void *)old_storage_beg,
-                                old_storage_end - old_storage_beg);
 }
 
 static const void *FindBadAddress(uptr begin, uptr end, bool poisoned) {
diff --git a/compiler-rt/test/asan/TestCases/move_container_annotations.cpp b/compiler-rt/test/asan/TestCases/move_container_annotations.cpp
index f21826ca0b7939..cbbcfe89b0d682 100644
--- a/compiler-rt/test/asan/TestCases/move_container_annotations.cpp
+++ b/compiler-rt/test/asan/TestCases/move_container_annotations.cpp
@@ -27,14 +27,15 @@ template <class T> static constexpr T RoundUp(T x) {
 
 static std::deque<int> GetPoisonedState(char *begin, char *end) {
   std::deque<int> result;
-  for (; begin != end; ++begin) {
-    result.push_back(__asan_address_is_poisoned(begin));
+  for (char *ptr = begin; ptr != end; ++ptr) {
+    result.push_back(__asan_address_is_poisoned(ptr));
   }
   return result;
 }
 
 static void RandomPoison(char *beg, char *end) {
-  if (beg != RoundDown(beg) && (rand() % 2 == 1)) {
+  if (beg != RoundDown(beg) && RoundDown(beg) != RoundDown(end) &&
+      rand() % 2 == 1) {
     __asan_poison_memory_region(beg, RoundUp(beg) - beg);
     __asan_unpoison_memory_region(beg, rand() % (RoundUp(beg) - beg + 1));
   }
@@ -70,31 +71,36 @@ void TestMove(size_t capacity, size_t off_old, size_t off_new,
   char *new_buffer_end = new_buffer + new_buffer_size;
   bool poison_old = poison_buffers % 2 == 1;
   bool poison_new = poison_buffers / 2 == 1;
-  if (poison_old)
-    __asan_poison_memory_region(old_buffer, old_buffer_size);
-  if (poison_new)
-    __asan_poison_memory_region(new_buffer, new_buffer_size);
   char *old_beg = old_buffer + off_old;
   char *new_beg = new_buffer + off_new;
   char *old_end = old_beg + capacity;
   char *new_end = new_beg + capacity;
 
-  for (int i = 0; i < 1000; i++) {
+  for (int i = 0; i < 75; i++) {
+    if (poison_old)
+      __asan_poison_memory_region(old_buffer, old_buffer_size);
+    if (poison_new)
+      __asan_poison_memory_region(new_buffer, new_buffer_size);
+
     RandomPoison(old_beg, old_end);
-    std::deque<int> poison_states(old_beg, old_end);
+    std::deque<int> poison_states = GetPoisonedState(old_beg, old_end);
     __sanitizer_move_contiguous_container_annotations(old_beg, old_end, new_beg,
                                                       new_end);
 
     // If old_buffer were poisoned, expected state of memory before old_beg
     // is undetermined.
     // If old buffer were not poisoned, that memory should still be unpoisoned.
-    // Area between old_beg and old_end should never be poisoned.
-    char *cur = poison_old ? old_beg : old_buffer;
-    for (; cur < old_end; ++cur) {
-      assert(!__asan_address_is_poisoned(cur));
+    char *cur;
+    if (!poison_old) {
+      for (cur = old_buffer; cur < old_beg; ++cur) {
+        assert(!__asan_address_is_poisoned(cur));
+      }
+    }
+    for (size_t i = 0; i < poison_states.size(); ++i) {
+      assert(__asan_address_is_poisoned(&old_beg[i]) == poison_states[i]);
     }
-    // Memory after old_beg should be the same as at the beginning.
-    for (; cur < old_buffer_end; ++cur) {
+    // Memory after old_end should be the same as at the beginning.
+    for (cur = old_end; cur < old_buffer_end; ++cur) {
       assert(__asan_address_is_poisoned(cur) == poison_old);
     }
 
@@ -118,7 +124,8 @@ void TestMove(size_t capacity, size_t off_old, size_t off_new,
       }
     }
     // [cur; new_end) is not checked yet.
-    // If new_buffer were not poisoned, it cannot be poisoned and we can ignore check.
+    // If new_buffer was not poisoned, this memory cannot be poisoned now and
+    // a separate check is unnecessary.
     // If new_buffer were poisoned, it should be same as earlier.
     if (cur < new_end && poison_new) {
       size_t unpoisoned = count_unpoisoned(poison_states, new_end - cur);
@@ -143,9 +150,9 @@ void TestMove(size_t capacity, size_t off_old, size_t off_new,
 
 int main(int argc, char **argv) {
   int n = argc == 1 ? 64 : atoi(argv[1]);
-  for (int i = 0; i <= n; i++) {
-    for (int j = 0; j < kGranularity * 2; j++) {
-      for (int k = 0; k < kGranularity * 2; k++) {
+  for (size_t j = 0; j < kGranularity * 2; j++) {
+    for (size_t k = 0; k < kGranularity * 2; k++) {
+      for (int i = 0; i <= n; i++) {
         for (int poison = 0; poison < 4; ++poison) {
           TestMove(i, j, k, poison);
         }

>From 1c89e0f40a754f2d5fffbc631fc7515dbfbe7b03 Mon Sep 17 00:00:00 2001
From: Advenam Tacet <advenam.tacet at gmail.com>
Date: Tue, 27 Aug 2024 07:22:38 +0200
Subject: [PATCH 3/5] Simplify and improve readability

Added helper functions, removed unnecessary branches, simplified code.
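For reference, the granule encoding that `AnnotateContainerGranuleAccessibleBytes`
relies on can be sketched outside the runtime roughly like this (a plain C++ model
with illustrative names, not the actual shadow-memory implementation):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Model of the ASan shadow encoding for container granules:
// one shadow byte per kGranularity-byte granule, where
//   0         -> the whole granule is addressable
//   1..g-1    -> only the first n bytes are addressable
//   kOOBMagic -> the whole granule is poisoned (container OOB)
constexpr std::size_t kGranularity = 8;
constexpr std::uint8_t kOOBMagic = 0xfc;

// Mirrors AnnotateContainerGranuleAccessibleBytes: mark the first n bytes
// of the granule at `index` as accessible, or poison it entirely if n == 0.
void AnnotateGranule(std::vector<std::uint8_t> &shadow, std::size_t index,
                     std::size_t n) {
  if (n == kGranularity)
    shadow[index] = 0;          // fully addressable
  else if (n == 0)
    shadow[index] = kOOBMagic;  // fully poisoned
  else
    shadow[index] = static_cast<std::uint8_t>(n);
}

// Is the byte at `offset` within a granule addressable under this encoding?
bool ByteIsAddressable(std::uint8_t shadow_value, std::size_t offset) {
  if (shadow_value == 0)
    return true;
  if (shadow_value >= kGranularity)
    return false;  // poison values, including kOOBMagic
  return offset < shadow_value;
}
```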
---
 .../include/sanitizer/common_interface_defs.h |   2 +-
 compiler-rt/lib/asan/asan_errors.cpp          |   4 +-
 compiler-rt/lib/asan/asan_errors.h            |   8 +-
 compiler-rt/lib/asan/asan_poisoning.cpp       | 199 ++++++++----------
 compiler-rt/lib/asan/asan_report.cpp          |   4 +-
 compiler-rt/lib/asan/asan_report.h            |   2 +-
 .../sanitizer_common_interface.inc            |   2 +-
 .../sanitizer_interface_internal.h            |   2 +-
 .../TestCases/move_container_annotations.cpp  |  16 +-
 9 files changed, 108 insertions(+), 131 deletions(-)

diff --git a/compiler-rt/include/sanitizer/common_interface_defs.h b/compiler-rt/include/sanitizer/common_interface_defs.h
index a3c91a9be6439b..e9101e39d72ddd 100644
--- a/compiler-rt/include/sanitizer/common_interface_defs.h
+++ b/compiler-rt/include/sanitizer/common_interface_defs.h
@@ -213,7 +213,7 @@ void SANITIZER_CDECL __sanitizer_annotate_double_ended_contiguous_container(
 /// \param old_storage_end End of the old container region.
 /// \param new_storage_beg Beginning of the new container region.
 /// \param new_storage_end End of the new container region.
-void SANITIZER_CDECL __sanitizer_move_contiguous_container_annotations(
+void SANITIZER_CDECL __sanitizer_copy_contiguous_container_annotations(
     const void *old_storage_beg, const void *old_storage_end,
     const void *new_storage_beg, const void *new_storage_end);
 
diff --git a/compiler-rt/lib/asan/asan_errors.cpp b/compiler-rt/lib/asan/asan_errors.cpp
index 11d67f4ace1064..8b712c70029f76 100644
--- a/compiler-rt/lib/asan/asan_errors.cpp
+++ b/compiler-rt/lib/asan/asan_errors.cpp
@@ -348,10 +348,10 @@ void ErrorBadParamsToAnnotateDoubleEndedContiguousContainer::Print() {
   ReportErrorSummary(scariness.GetDescription(), stack);
 }
 
-void ErrorBadParamsToMoveContiguousContainerAnnotations::Print() {
+void ErrorBadParamsToCopyContiguousContainerAnnotations::Print() {
   Report(
       "ERROR: AddressSanitizer: bad parameters to "
-      "__sanitizer_move_contiguous_container_annotations:\n"
+      "__sanitizer_copy_contiguous_container_annotations:\n"
       "      old_storage_beg : %p\n"
       "      old_storage_end : %p\n"
       "      new_storage_beg : %p\n"
diff --git a/compiler-rt/lib/asan/asan_errors.h b/compiler-rt/lib/asan/asan_errors.h
index 40526039fbc76f..b3af655e66639a 100644
--- a/compiler-rt/lib/asan/asan_errors.h
+++ b/compiler-rt/lib/asan/asan_errors.h
@@ -353,12 +353,12 @@ struct ErrorBadParamsToAnnotateDoubleEndedContiguousContainer : ErrorBase {
   void Print();
 };
 
-struct ErrorBadParamsToMoveContiguousContainerAnnotations : ErrorBase {
+struct ErrorBadParamsToCopyContiguousContainerAnnotations : ErrorBase {
   const BufferedStackTrace *stack;
   uptr old_storage_beg, old_storage_end, new_storage_beg, new_storage_end;
 
-  ErrorBadParamsToMoveContiguousContainerAnnotations() = default;  // (*)
-  ErrorBadParamsToMoveContiguousContainerAnnotations(
+  ErrorBadParamsToCopyContiguousContainerAnnotations() = default;  // (*)
+  ErrorBadParamsToCopyContiguousContainerAnnotations(
       u32 tid, BufferedStackTrace *stack_, uptr old_storage_beg_,
       uptr old_storage_end_, uptr new_storage_beg_, uptr new_storage_end_)
       : ErrorBase(tid, 10,
@@ -439,7 +439,7 @@ struct ErrorGeneric : ErrorBase {
   macro(StringFunctionSizeOverflow)                        \
   macro(BadParamsToAnnotateContiguousContainer)            \
   macro(BadParamsToAnnotateDoubleEndedContiguousContainer) \
-  macro(BadParamsToMoveContiguousContainerAnnotations)     \
+  macro(BadParamsToCopyContiguousContainerAnnotations)     \
   macro(ODRViolation)                                      \
   macro(InvalidPointerPair)                                \
   macro(Generic)
diff --git a/compiler-rt/lib/asan/asan_poisoning.cpp b/compiler-rt/lib/asan/asan_poisoning.cpp
index a079f0ea607910..e661b907642726 100644
--- a/compiler-rt/lib/asan/asan_poisoning.cpp
+++ b/compiler-rt/lib/asan/asan_poisoning.cpp
@@ -576,6 +576,7 @@ void __sanitizer_annotate_double_ended_contiguous_container(
   }
 }
 
+// Checks if two pointers fall within the same memory granule.
 static bool WithinOneGranule(uptr p, uptr q) {
   if (p == q)
     return true;
@@ -583,25 +584,58 @@ static bool WithinOneGranule(uptr p, uptr q) {
          RoundDownTo(q - 1, ASAN_SHADOW_GRANULARITY);
 }
 
-static void PoisonContainer(uptr storage_beg, uptr storage_end) {
+// Copies ASan memory annotation (a shadow memory value)
+// from one granule to another.
+static void CopyGranuleAnnotation(uptr dst, uptr src) {
+  *(u8 *)MemToShadow(dst) = *(u8 *)MemToShadow(src);
+}
+
+// Marks the specified number of bytes in a granule as accessible, or
+// poisons the whole granule with the kAsanContiguousContainerOOBMagic value.
+static void AnnotateContainerGranuleAccessibleBytes(uptr ptr, u8 n) {
   constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
-  uptr internal_beg = RoundUpTo(storage_beg, granularity);
-  uptr external_beg = RoundDownTo(storage_beg, granularity);
-  uptr internal_end = RoundDownTo(storage_end, granularity);
-
-  if (internal_end > internal_beg)
-    PoisonShadow(internal_beg, internal_end - internal_beg,
-                 kAsanContiguousContainerOOBMagic);
-  // The new buffer may start in the middle of a granule.
-  if (internal_beg != storage_beg && internal_beg < internal_end &&
-      !AddressIsPoisoned(storage_beg)) {
-    *(u8 *)MemToShadow(external_beg) =
-        static_cast<u8>(storage_beg - external_beg);
+  if (n == granularity) {
+    *(u8 *)MemToShadow(ptr) = 0;
+  } else if (n == 0) {
+    *(u8 *)MemToShadow(ptr) = static_cast<u8>(kAsanContiguousContainerOOBMagic);
+  } else {
+    *(u8 *)MemToShadow(ptr) = n;
   }
-  // The new buffer may end in the middle of a granule.
-  if (internal_end != storage_end && AddressIsPoisoned(storage_end)) {
-    *(u8 *)MemToShadow(internal_end) =
-        static_cast<u8>(kAsanContiguousContainerOOBMagic);
+}
+
+// Performs a byte-by-byte copy of ASan annotations (shadow memory values).
+// The result may differ from the source due to ASan granularity limits, but
+// it cannot cause false positives (at most, extra memory gets unpoisoned).
+static void SlowCopyContainerAnnotations(uptr old_storage_beg,
+                                         uptr old_storage_end,
+                                         uptr new_storage_beg,
+                                         uptr new_storage_end) {
+  constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
+  uptr new_internal_end = RoundDownTo(new_storage_end, granularity);
+  uptr old_ptr = old_storage_beg;
+  uptr new_ptr = new_storage_beg;
+
+  while (new_ptr < new_storage_end) {
+    uptr next_new = RoundUpTo(new_ptr + 1, granularity);
+    uptr granule_begin = next_new - granularity;
+    uptr unpoisoned_bytes = 0;
+
+    for (; new_ptr != next_new && new_ptr != new_storage_end;
+         ++new_ptr, ++old_ptr) {
+      if (!AddressIsPoisoned(old_ptr)) {
+        unpoisoned_bytes = new_ptr - granule_begin + 1;
+      }
+    }
+    if (new_ptr < new_storage_end || new_ptr == new_internal_end ||
+        AddressIsPoisoned(new_storage_end)) {
+      if (unpoisoned_bytes != 0 || granule_begin >= new_storage_beg) {
+        AnnotateContainerGranuleAccessibleBytes(granule_begin,
+                                                unpoisoned_bytes);
+      } else if (!AddressIsPoisoned(new_storage_beg)) {
+        AnnotateContainerGranuleAccessibleBytes(
+            granule_begin, new_storage_beg - granule_begin);
+      }
+    }
   }
 }
 
@@ -616,7 +650,7 @@ static void PoisonContainer(uptr storage_beg, uptr storage_end) {
 // the function handles this by going byte by byte, slowing down performance.
 // The old buffer annotations are not removed. If necessary, the user can
 // unpoison the old buffer with __asan_unpoison_memory_region.
-void __sanitizer_move_contiguous_container_annotations(
+void __sanitizer_copy_contiguous_container_annotations(
     const void *old_storage_beg_p, const void *old_storage_end_p,
     const void *new_storage_beg_p, const void *new_storage_end_p) {
   if (!flags()->detect_container_overflow)
@@ -639,7 +673,7 @@ void __sanitizer_move_contiguous_container_annotations(
       (old_storage_end - old_storage_beg) !=
           (new_storage_end - new_storage_beg)) {
     GET_STACK_TRACE_FATAL_HERE;
-    ReportBadParamsToMoveContiguousContainerAnnotations(
+    ReportBadParamsToCopyContiguousContainerAnnotations(
         old_storage_beg, old_storage_end, new_storage_beg, new_storage_end,
         &stack);
   }
@@ -647,101 +681,44 @@ void __sanitizer_move_contiguous_container_annotations(
   if (old_storage_beg == old_storage_end)
     return;
 
+  if (old_storage_beg % granularity != new_storage_beg % granularity ||
+      WithinOneGranule(new_storage_beg, new_storage_end)) {
+    SlowCopyContainerAnnotations(old_storage_beg, old_storage_end,
+                                 new_storage_beg, new_storage_end);
+    return;
+  }
+
   uptr new_internal_beg = RoundUpTo(new_storage_beg, granularity);
-  uptr old_internal_beg = RoundUpTo(old_storage_beg, granularity);
-  uptr new_external_beg = RoundDownTo(new_storage_beg, granularity);
-  uptr old_external_beg = RoundDownTo(old_storage_beg, granularity);
   uptr new_internal_end = RoundDownTo(new_storage_end, granularity);
-  uptr old_internal_end = RoundDownTo(old_storage_end, granularity);
-
-  // At the very beginning we poison the whole buffer.
-  // Later we unpoison what is necessary.
-  PoisonContainer(new_storage_beg, new_storage_end);
-
-  // There are two cases.
-  // 1) Distance between buffers is granule-aligned.
-  // 2) It's not aligned, and therefore requires going byte by byte.
-  if (old_storage_beg % granularity == new_storage_beg % granularity) {
-    // When buffers are aligned in the same way, we can just copy shadow memory,
-    // except the first and the last granule.
-    if (new_internal_end > new_internal_beg)
-      __builtin_memcpy((u8 *)MemToShadow(new_internal_beg),
-                       (u8 *)MemToShadow(old_internal_beg),
-                       (new_internal_end - new_internal_beg) / granularity);
-    // If the beginning and the end of the storage are aligned, we are done.
-    // Otherwise, we have to handle remaining granules.
-    if (new_internal_beg != new_storage_beg ||
-        new_internal_end != new_storage_end) {
-      if (WithinOneGranule(new_storage_beg, new_storage_end)) {
-        if (new_internal_end == new_storage_end) {
-          if (!AddressIsPoisoned(old_storage_beg)) {
-            *(u8 *)MemToShadow(new_external_beg) =
-                *(u8 *)MemToShadow(old_external_beg);
-          } else if (!AddressIsPoisoned(new_storage_beg)) {
-            *(u8 *)MemToShadow(new_external_beg) =
-                new_storage_beg - new_external_beg;
-          }
-        } else if (AddressIsPoisoned(new_storage_end)) {
-          if (!AddressIsPoisoned(old_storage_beg)) {
-            *(u8 *)MemToShadow(new_external_beg) =
-                AddressIsPoisoned(old_storage_end)
-                    ? *(u8 *)MemToShadow(old_internal_end)
-                    : new_storage_end - new_external_beg;
-          } else if (!AddressIsPoisoned(new_storage_beg)) {
-            *(u8 *)MemToShadow(new_external_beg) =
-                (new_storage_beg == new_external_beg)
-                    ? static_cast<u8>(kAsanContiguousContainerOOBMagic)
-                    : new_storage_beg - new_external_beg;
-          }
-        }
-      } else {
-        // Buffer is not within one granule!
-        if (new_internal_beg != new_storage_beg) {
-          if (!AddressIsPoisoned(old_storage_beg)) {
-            *(u8 *)MemToShadow(new_external_beg) =
-                *(u8 *)MemToShadow(old_external_beg);
-          } else if (!AddressIsPoisoned(new_storage_beg)) {
-            *(u8 *)MemToShadow(new_external_beg) =
-                new_storage_beg - new_external_beg;
-          }
-        }
-        if (new_internal_end != new_storage_end &&
-            AddressIsPoisoned(new_storage_end)) {
-          *(u8 *)MemToShadow(new_internal_end) =
-              AddressIsPoisoned(old_storage_end)
-                  ? *(u8 *)MemToShadow(old_internal_end)
-                  : old_storage_end - old_internal_end;
-        }
-      }
+  if (new_internal_end > new_internal_beg) {
+    uptr old_internal_beg = RoundUpTo(old_storage_beg, granularity);
+    __builtin_memcpy((u8 *)MemToShadow(new_internal_beg),
+                     (u8 *)MemToShadow(old_internal_beg),
+                     (new_internal_end - new_internal_beg) / granularity);
+  }
+  // The only remaining cases involve edge granules when the container starts or
+  // ends within a granule. We already know that the container's start and end
+  // points lie in different granules.
+  if (new_internal_beg != new_storage_beg) {
+    // First granule
+    uptr new_external_beg = RoundDownTo(new_storage_beg, granularity);
+    uptr old_external_beg = RoundDownTo(old_storage_beg, granularity);
+    if (!AddressIsPoisoned(old_storage_beg)) {
+      CopyGranuleAnnotation(new_external_beg, old_external_beg);
+    } else if (!AddressIsPoisoned(new_storage_beg)) {
+      AnnotateContainerGranuleAccessibleBytes(
+          new_external_beg, new_storage_beg - new_external_beg);
     }
-  } else {
-    // If buffers are not aligned, we have to go byte by byte.
-    uptr old_ptr = old_storage_beg;
-    uptr new_ptr = new_storage_beg;
-    uptr next_new;
-    for (; new_ptr < new_storage_end;) {
-      next_new = RoundUpTo(new_ptr + 1, granularity);
-      uptr unpoison_to = 0;
-      for (; new_ptr != next_new && new_ptr != new_storage_end;
-           ++new_ptr, ++old_ptr) {
-        if (!AddressIsPoisoned(old_ptr)) {
-          unpoison_to = new_ptr + 1;
-        }
-      }
-      if (new_ptr < new_storage_end || new_ptr == new_internal_end ||
-          AddressIsPoisoned(new_storage_end)) {
-        uptr granule_beg = RoundDownTo(new_ptr - 1, granularity);
-        if (unpoison_to != 0) {
-          uptr value =
-              (unpoison_to == next_new) ? 0 : unpoison_to - granule_beg;
-          *(u8 *)MemToShadow(granule_beg) = static_cast<u8>(value);
-        } else {
-          *(u8 *)MemToShadow(granule_beg) =
-              (granule_beg >= new_storage_beg)
-                  ? static_cast<u8>(kAsanContiguousContainerOOBMagic)
-                  : new_storage_beg - granule_beg;
-        }
-      }
+  }
+  if (new_internal_end != new_storage_end &&
+      AddressIsPoisoned(new_storage_end)) {
+    // Last granule
+    uptr old_internal_end = RoundDownTo(old_storage_end, granularity);
+    if (AddressIsPoisoned(old_storage_end)) {
+      CopyGranuleAnnotation(new_internal_end, old_internal_end);
+    } else {
+      AnnotateContainerGranuleAccessibleBytes(
+          new_internal_end, old_storage_end - old_internal_end);
     }
   }
 }
diff --git a/compiler-rt/lib/asan/asan_report.cpp b/compiler-rt/lib/asan/asan_report.cpp
index 00ef334bb88ffa..45aa607dcda07f 100644
--- a/compiler-rt/lib/asan/asan_report.cpp
+++ b/compiler-rt/lib/asan/asan_report.cpp
@@ -367,11 +367,11 @@ void ReportBadParamsToAnnotateDoubleEndedContiguousContainer(
   in_report.ReportError(error);
 }
 
-void ReportBadParamsToMoveContiguousContainerAnnotations(
+void ReportBadParamsToCopyContiguousContainerAnnotations(
     uptr old_storage_beg, uptr old_storage_end, uptr new_storage_beg,
     uptr new_storage_end, BufferedStackTrace *stack) {
   ScopedInErrorReport in_report;
-  ErrorBadParamsToMoveContiguousContainerAnnotations error(
+  ErrorBadParamsToCopyContiguousContainerAnnotations error(
       GetCurrentTidOrInvalid(), stack, old_storage_beg, old_storage_end,
       new_storage_beg, new_storage_end);
   in_report.ReportError(error);
diff --git a/compiler-rt/lib/asan/asan_report.h b/compiler-rt/lib/asan/asan_report.h
index d98c87e5a6930b..3143d83abe390d 100644
--- a/compiler-rt/lib/asan/asan_report.h
+++ b/compiler-rt/lib/asan/asan_report.h
@@ -88,7 +88,7 @@ void ReportBadParamsToAnnotateDoubleEndedContiguousContainer(
     uptr storage_beg, uptr storage_end, uptr old_container_beg,
     uptr old_container_end, uptr new_container_beg, uptr new_container_end,
     BufferedStackTrace *stack);
-void ReportBadParamsToMoveContiguousContainerAnnotations(
+void ReportBadParamsToCopyContiguousContainerAnnotations(
     uptr old_storage_beg, uptr old_storage_end, uptr new_storage_beg,
     uptr new_storage_end, BufferedStackTrace *stack);
 
diff --git a/compiler-rt/lib/sanitizer_common/sanitizer_common_interface.inc b/compiler-rt/lib/sanitizer_common/sanitizer_common_interface.inc
index 0a413202b9368b..d6b0662d80b940 100644
--- a/compiler-rt/lib/sanitizer_common/sanitizer_common_interface.inc
+++ b/compiler-rt/lib/sanitizer_common/sanitizer_common_interface.inc
@@ -10,7 +10,7 @@
 INTERFACE_FUNCTION(__sanitizer_acquire_crash_state)
 INTERFACE_FUNCTION(__sanitizer_annotate_contiguous_container)
 INTERFACE_FUNCTION(__sanitizer_annotate_double_ended_contiguous_container)
-INTERFACE_FUNCTION(__sanitizer_move_contiguous_container_annotations)
+INTERFACE_FUNCTION(__sanitizer_copy_contiguous_container_annotations)
 INTERFACE_FUNCTION(__sanitizer_contiguous_container_find_bad_address)
 INTERFACE_FUNCTION(
     __sanitizer_double_ended_contiguous_container_find_bad_address)
diff --git a/compiler-rt/lib/sanitizer_common/sanitizer_interface_internal.h b/compiler-rt/lib/sanitizer_common/sanitizer_interface_internal.h
index fa1d2b562f33fc..07d6b238301c27 100644
--- a/compiler-rt/lib/sanitizer_common/sanitizer_interface_internal.h
+++ b/compiler-rt/lib/sanitizer_common/sanitizer_interface_internal.h
@@ -71,7 +71,7 @@ void __sanitizer_annotate_double_ended_contiguous_container(
     const void *old_container_beg, const void *old_container_end,
     const void *new_container_beg, const void *new_container_end);
 SANITIZER_INTERFACE_ATTRIBUTE
-void __sanitizer_move_contiguous_container_annotations(
+void __sanitizer_copy_contiguous_container_annotations(
     const void *old_storage_beg, const void *old_storage_end,
     const void *new_storage_beg, const void *new_storage_end);
 SANITIZER_INTERFACE_ATTRIBUTE
diff --git a/compiler-rt/test/asan/TestCases/move_container_annotations.cpp b/compiler-rt/test/asan/TestCases/move_container_annotations.cpp
index cbbcfe89b0d682..b98140fceb2557 100644
--- a/compiler-rt/test/asan/TestCases/move_container_annotations.cpp
+++ b/compiler-rt/test/asan/TestCases/move_container_annotations.cpp
@@ -1,6 +1,6 @@
 // RUN: %clangxx_asan -fexceptions -O %s -o %t && %env_asan_opts=detect_stack_use_after_return=0 %run %t
 //
-// Test __sanitizer_move_contiguous_container_annotations.
+// Test __sanitizer_copy_contiguous_container_annotations.
 
 #include <algorithm>
 #include <deque>
@@ -61,8 +61,8 @@ static size_t count_unpoisoned(std::deque<int> &poison_states, size_t n) {
   return result;
 }
 
-void TestMove(size_t capacity, size_t off_old, size_t off_new,
-              int poison_buffers) {
+void TestNonOverlappingContainers(size_t capacity, size_t off_old,
+                                  size_t off_new, int poison_buffers) {
   size_t old_buffer_size = capacity + off_old + kGranularity * 2;
   size_t new_buffer_size = capacity + off_new + kGranularity * 2;
   char *old_buffer = new char[old_buffer_size];
@@ -76,7 +76,7 @@ void TestMove(size_t capacity, size_t off_old, size_t off_new,
   char *old_end = old_beg + capacity;
   char *new_end = new_beg + capacity;
 
-  for (int i = 0; i < 75; i++) {
+  for (int i = 0; i < 35; i++) {
     if (poison_old)
       __asan_poison_memory_region(old_buffer, old_buffer_size);
     if (poison_new)
@@ -84,7 +84,7 @@ void TestMove(size_t capacity, size_t off_old, size_t off_new,
 
     RandomPoison(old_beg, old_end);
     std::deque<int> poison_states = GetPoisonedState(old_beg, old_end);
-    __sanitizer_move_contiguous_container_annotations(old_beg, old_end, new_beg,
+    __sanitizer_copy_contiguous_container_annotations(old_beg, old_end, new_beg,
                                                       new_end);
 
     // If old_buffer were poisoned, expected state of memory before old_beg
@@ -150,11 +150,11 @@ void TestMove(size_t capacity, size_t off_old, size_t off_new,
 
 int main(int argc, char **argv) {
   int n = argc == 1 ? 64 : atoi(argv[1]);
-  for (size_t j = 0; j < kGranularity * 2; j++) {
-    for (size_t k = 0; k < kGranularity * 2; k++) {
+  for (size_t j = 0; j < kGranularity + 2; j++) {
+    for (size_t k = 0; k < kGranularity + 2; k++) {
       for (int i = 0; i <= n; i++) {
         for (int poison = 0; poison < 4; ++poison) {
-          TestMove(i, j, k, poison);
+          TestNonOverlappingContainers(i, j, k, poison);
         }
       }
     }

>From 93ab376a6a82151b48b99a74ef74269609c5c3a5 Mon Sep 17 00:00:00 2001
From: Advenam Tacet <advenam.tacet at gmail.com>
Date: Wed, 28 Aug 2024 03:28:51 +0200
Subject: [PATCH 4/5] Add support for overlapping containers

Adds support for overlapping source and destination buffers
and a test case exercising them.
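The overlap handling follows the same rule as `memmove`: when the destination
starts inside the source, the copy must run back to front so source granules are
read before being overwritten. A byte-level sketch of that direction choice
(plain C++ over a modeled poison bitmap; names are illustrative, not ASan APIs):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Copy per-byte poison states from [old_beg, old_beg+len) to
// [new_beg, new_beg+len) within one bitmap, picking the copy direction so
// that overlapping ranges are handled correctly (the memmove rule).
void CopyPoisonStates(std::vector<bool> &poisoned, std::size_t old_beg,
                      std::size_t new_beg, std::size_t len) {
  if (new_beg > old_beg && new_beg < old_beg + len) {
    // Destination overlaps the tail of the source: copy back to front.
    for (std::size_t i = len; i-- > 0;)
      poisoned[new_beg + i] = poisoned[old_beg + i];
  } else {
    // No overlap, or destination precedes source: copy front to back.
    for (std::size_t i = 0; i < len; ++i)
      poisoned[new_beg + i] = poisoned[old_beg + i];
  }
}
```

The real implementation makes the same decision per granule rather than per
byte, which is why the patch adds a reversed variant of the slow copy path.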
---
 compiler-rt/lib/asan/asan_poisoning.cpp       | 109 ++++++++++++++++--
 .../TestCases/move_container_annotations.cpp  |  80 ++++++++++++-
 2 files changed, 173 insertions(+), 16 deletions(-)

diff --git a/compiler-rt/lib/asan/asan_poisoning.cpp b/compiler-rt/lib/asan/asan_poisoning.cpp
index e661b907642726..fd83b918ac02aa 100644
--- a/compiler-rt/lib/asan/asan_poisoning.cpp
+++ b/compiler-rt/lib/asan/asan_poisoning.cpp
@@ -639,6 +639,42 @@ static void SlowCopyContainerAnnotations(uptr old_storage_beg,
   }
 }
 
+// This function is basically the same as SlowCopyContainerAnnotations,
+// but it walks the elements in reverse order (needed for overlapping buffers).
+static void SlowRCopyContainerAnnotations(uptr old_storage_beg,
+                                          uptr old_storage_end,
+                                          uptr new_storage_beg,
+                                          uptr new_storage_end) {
+  constexpr uptr granularity = ASAN_SHADOW_GRANULARITY;
+  uptr new_internal_beg = RoundDownTo(new_storage_beg, granularity);
+  uptr new_internal_end = RoundDownTo(new_storage_end, granularity);
+  uptr old_ptr = old_storage_end;
+  uptr new_ptr = new_storage_end;
+
+  while (new_ptr > new_storage_beg) {
+    uptr granule_begin = RoundDownTo(new_ptr - 1, granularity);
+    uptr unpoisoned_bytes = 0;
+
+    for (; new_ptr != granule_begin && new_ptr != new_storage_beg;
+         --new_ptr, --old_ptr) {
+      if (unpoisoned_bytes == 0 && !AddressIsPoisoned(old_ptr - 1)) {
+        unpoisoned_bytes = new_ptr - granule_begin;
+      }
+    }
+
+    if (new_ptr >= new_internal_end && !AddressIsPoisoned(new_storage_end)) {
+      continue;
+    }
+
+    if (granule_begin == new_ptr || unpoisoned_bytes != 0) {
+      AnnotateContainerGranuleAccessibleBytes(granule_begin, unpoisoned_bytes);
+    } else if (!AddressIsPoisoned(new_storage_beg)) {
+      AnnotateContainerGranuleAccessibleBytes(granule_begin,
+                                              new_storage_beg - granule_begin);
+    }
+  }
+}
+
 // This function copies ASan memory annotations (poisoned/unpoisoned states)
 // from one buffer to another.
+// Its main purpose is to help with relocating trivially relocatable objects,
@@ -678,9 +714,61 @@ void __sanitizer_copy_contiguous_container_annotations(
         &stack);
   }
 
-  if (old_storage_beg == old_storage_end)
+  if (old_storage_beg == old_storage_end || old_storage_beg == new_storage_beg)
     return;
+  // If the destination buffer starts inside the source buffer (or shares its
+  // first granule), the copy below proceeds back to front; otherwise the
+  // forward fast path further down is used.
+  uptr old_external_end = RoundUpTo(old_storage_end, granularity);
+  if (old_storage_beg < new_storage_beg &&
+      new_storage_beg <= old_external_end) {
+    // In this case, we have to copy elements in reverse order, because the
+    // destination buffer starts in the middle of the source buffer (or shares
+    // its first granule with it).
+    // It may still be possible to optimize this, but reverse order is required.
+    if (old_storage_beg % granularity != new_storage_beg % granularity ||
+        WithinOneGranule(new_storage_beg, new_storage_end)) {
+      SlowRCopyContainerAnnotations(old_storage_beg, old_storage_end,
+                                    new_storage_beg, new_storage_end);
+      return;
+    }
 
+    uptr new_internal_end = RoundDownTo(new_storage_end, granularity);
+    if (new_internal_end != new_storage_end &&
+        AddressIsPoisoned(new_storage_end)) {
+      // Last granule
+      uptr old_internal_end = RoundDownTo(old_storage_end, granularity);
+      if (AddressIsPoisoned(old_storage_end)) {
+        CopyGranuleAnnotation(new_internal_end, old_internal_end);
+      } else {
+        AnnotateContainerGranuleAccessibleBytes(
+            new_internal_end, old_storage_end - old_internal_end);
+      }
+    }
+
+    uptr new_internal_beg = RoundUpTo(new_storage_beg, granularity);
+    if (new_internal_end > new_internal_beg) {
+      uptr old_internal_beg = RoundUpTo(old_storage_beg, granularity);
+      __builtin_memmove((u8 *)MemToShadow(new_internal_beg),
+                        (u8 *)MemToShadow(old_internal_beg),
+                        (new_internal_end - new_internal_beg) / granularity);
+    }
+
+    if (new_internal_beg != new_storage_beg) {
+      // First granule
+      uptr new_external_beg = RoundDownTo(new_storage_beg, granularity);
+      uptr old_external_beg = RoundDownTo(old_storage_beg, granularity);
+      if (!AddressIsPoisoned(old_storage_beg)) {
+        CopyGranuleAnnotation(new_external_beg, old_external_beg);
+      } else if (!AddressIsPoisoned(new_storage_beg)) {
+        AnnotateContainerGranuleAccessibleBytes(
+            new_external_beg, new_storage_beg - new_external_beg);
+      }
+    }
+    return;
+  }
+
+  // Simple copy of annotations of all internal granules.
   if (old_storage_beg % granularity != new_storage_beg % granularity ||
       WithinOneGranule(new_storage_beg, new_storage_end)) {
     SlowCopyContainerAnnotations(old_storage_beg, old_storage_end,
@@ -689,16 +777,6 @@ void __sanitizer_copy_contiguous_container_annotations(
   }
 
   uptr new_internal_beg = RoundUpTo(new_storage_beg, granularity);
-  uptr new_internal_end = RoundDownTo(new_storage_end, granularity);
-  if (new_internal_end > new_internal_beg) {
-    uptr old_internal_beg = RoundUpTo(old_storage_beg, granularity);
-    __builtin_memcpy((u8 *)MemToShadow(new_internal_beg),
-                     (u8 *)MemToShadow(old_internal_beg),
-                     (new_internal_end - new_internal_beg) / granularity);
-  }
-  // The only remaining cases involve edge granules when the container starts or
-  // ends within a granule. We already know that the container's start and end
-  // points lie in different granules.
   if (new_internal_beg != new_storage_beg) {
     // First granule
     uptr new_external_beg = RoundDownTo(new_storage_beg, granularity);
@@ -710,6 +788,15 @@ void __sanitizer_copy_contiguous_container_annotations(
           new_external_beg, new_storage_beg - new_external_beg);
     }
   }
+
+  uptr new_internal_end = RoundDownTo(new_storage_end, granularity);
+  if (new_internal_end > new_internal_beg) {
+    uptr old_internal_beg = RoundUpTo(old_storage_beg, granularity);
+    __builtin_memmove((u8 *)MemToShadow(new_internal_beg),
+                      (u8 *)MemToShadow(old_internal_beg),
+                      (new_internal_end - new_internal_beg) / granularity);
+  }
+
   if (new_internal_end != new_storage_end &&
       AddressIsPoisoned(new_storage_end)) {
     // Last granule
diff --git a/compiler-rt/test/asan/TestCases/move_container_annotations.cpp b/compiler-rt/test/asan/TestCases/move_container_annotations.cpp
index b98140fceb2557..db3b3e96968cec 100644
--- a/compiler-rt/test/asan/TestCases/move_container_annotations.cpp
+++ b/compiler-rt/test/asan/TestCases/move_container_annotations.cpp
@@ -111,7 +111,7 @@ void TestNonOverlappingContainers(size_t capacity, size_t off_old,
         assert(!__asan_address_is_poisoned(cur));
       }
     }
-    //In every granule, poisoned memory should be after last expected unpoisoned.
+
     char *next;
     for (cur = new_beg; cur + kGranularity <= new_end; cur = next) {
       next = RoundUp(cur + 1);
@@ -124,15 +124,14 @@ void TestNonOverlappingContainers(size_t capacity, size_t off_old,
       }
     }
     // [cur; new_end) is not checked yet.
-    // If new_buffer were not poisoned, it cannot be poisoned and we can ignore
-    // a separate check.
+    // If new_buffer was not poisoned, it cannot be poisoned now.
     // If new_buffer was poisoned, it should be the same as earlier.
-    if (cur < new_end && poison_new) {
+    if (cur < new_end) {
       size_t unpoisoned = count_unpoisoned(poison_states, new_end - cur);
       if (unpoisoned > 0) {
         assert(!__asan_address_is_poisoned(cur + unpoisoned - 1));
       }
-      if (cur + unpoisoned < new_end) {
+      if (cur + unpoisoned < new_end && poison_new) {
         assert(__asan_address_is_poisoned(cur + unpoisoned));
       }
     }
@@ -148,6 +147,76 @@ void TestNonOverlappingContainers(size_t capacity, size_t off_old,
   delete[] new_buffer;
 }
 
+void TestOverlappingContainers(size_t capacity, size_t off_old, size_t off_new,
+                               int poison_buffers) {
+  size_t buffer_size = capacity + off_old + off_new + kGranularity * 3;
+  char *buffer = new char[buffer_size];
+  char *buffer_end = buffer + buffer_size;
+  bool poison_whole = poison_buffers % 2 == 1;
+  bool poison_new = poison_buffers / 2 == 1;
+  char *old_beg = buffer + kGranularity + off_old;
+  char *new_beg = buffer + kGranularity + off_new;
+  char *old_end = old_beg + capacity;
+  char *new_end = new_beg + capacity;
+
+  for (int i = 0; i < 35; i++) {
+    if (poison_whole)
+      __asan_poison_memory_region(buffer, buffer_size);
+    if (poison_new)
+      __asan_poison_memory_region(new_beg, new_end - new_beg);
+
+    RandomPoison(old_beg, old_end);
+    std::deque<int> poison_states = GetPoisonedState(old_beg, old_end);
+    __sanitizer_copy_contiguous_container_annotations(old_beg, old_end, new_beg,
+                                                      new_end);
+    // This variable is used only when the buffer ends in the middle of a granule.
+    bool can_modify_last_granule = __asan_address_is_poisoned(new_end);
+
+    // If the whole buffer was poisoned, the expected state of memory before
+    // the first container is undetermined.
+    // If the old buffer was not poisoned, that memory should still be unpoisoned.
+    char *cur;
+    if (!poison_whole) {
+      for (cur = buffer; cur < old_beg && cur < new_beg; ++cur) {
+        assert(!__asan_address_is_poisoned(cur));
+      }
+    }
+
+    // Memory after the end of both containers should be the same as at the beginning.
+    for (cur = (old_end > new_end) ? old_end : new_end; cur < buffer_end;
+         ++cur) {
+      assert(__asan_address_is_poisoned(cur) == poison_whole);
+    }
+
+    char *next;
+    for (cur = new_beg; cur + kGranularity <= new_end; cur = next) {
+      next = RoundUp(cur + 1);
+      size_t unpoisoned = count_unpoisoned(poison_states, next - cur);
+      if (unpoisoned > 0) {
+        assert(!__asan_address_is_poisoned(cur + unpoisoned - 1));
+      }
+      if (cur + unpoisoned < next) {
+        assert(__asan_address_is_poisoned(cur + unpoisoned));
+      }
+    }
+    // [cur; new_end) is not checked yet if the container ends in the middle of
+    // a granule. It can be poisoned only if non-container bytes in that
+    // granule were poisoned. Otherwise, it should be unpoisoned.
+    if (cur < new_end) {
+      size_t unpoisoned = count_unpoisoned(poison_states, new_end - cur);
+      if (unpoisoned > 0) {
+        assert(!__asan_address_is_poisoned(cur + unpoisoned - 1));
+      }
+      if (cur + unpoisoned < new_end && can_modify_last_granule) {
+        assert(__asan_address_is_poisoned(cur + unpoisoned));
+      }
+    }
+  }
+
+  __asan_unpoison_memory_region(buffer, buffer_size);
+  delete[] buffer;
+}
+
 int main(int argc, char **argv) {
   int n = argc == 1 ? 64 : atoi(argv[1]);
   for (size_t j = 0; j < kGranularity + 2; j++) {
@@ -155,6 +224,7 @@ int main(int argc, char **argv) {
       for (int i = 0; i <= n; i++) {
         for (int poison = 0; poison < 4; ++poison) {
           TestNonOverlappingContainers(i, j, k, poison);
+          TestOverlappingContainers(i, j, k, poison);
         }
       }
     }

>From 8caef12e2d9c294d11c7b4d53017e89f1409a212 Mon Sep 17 00:00:00 2001
From: Advenam Tacet <advenam.tacet at gmail.com>
Date: Wed, 28 Aug 2024 11:59:29 +0200
Subject: [PATCH 5/5] Rename test

---
 ...e_container_annotations.cpp => copy_container_annotations.cpp} | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename compiler-rt/test/asan/TestCases/{move_container_annotations.cpp => copy_container_annotations.cpp} (100%)

diff --git a/compiler-rt/test/asan/TestCases/move_container_annotations.cpp b/compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
similarity index 100%
rename from compiler-rt/test/asan/TestCases/move_container_annotations.cpp
rename to compiler-rt/test/asan/TestCases/copy_container_annotations.cpp
