[compiler-rt] [llvm] [TSan] Add escape analysis for redundant instrumentation elimination (PR #169896)

Alexey Paznikov via llvm-commits llvm-commits at lists.llvm.org
Fri Nov 28 02:41:58 PST 2025


https://github.com/apaznikov created https://github.com/llvm/llvm-project/pull/169896

## Summary
This PR implements a new static analysis pass, `EscapeAnalysis`, to identify and eliminate redundant memory access instrumentation in ThreadSanitizer. The pass determines whether a memory allocation (stack or heap) is confined to the current thread's execution scope. Accesses to such "non-escaping" objects cannot participate in data races and are therefore safe to exclude from instrumentation.

This work is part of a broader research effort on optimizing dynamic race detectors [1].

## Implementation Details
The core logic is implemented in `EscapeAnalysis.h/cpp` and integrated into `ThreadSanitizer.cpp`. The analysis operates intra-procedurally and heavily leverages **MemorySSA** (an illustrative sketch follows the list):

1.  **MemorySSA-driven Dataflow:** The analysis uses MemorySSA to trace the lifecycle of pointers through memory operations. This allows it to:
    *   **Look through loads:** Identify the underlying allocation base object even if the pointer is loaded from stack slots or aggregates.
    *   **Track indirect escapes:** Detect scenarios where an address is stored into another object (e.g., a struct or array) and verify if that container object allows the pointer to leak (via a subsequent load or escape of the container itself).
2.  **Escape Criteria:** An object is considered escaping if:
    *   It is stored to a global variable or an escaping object.
    *   It is returned from the function.
    *   It is passed as an argument to a function call that captures the pointer (checked via `CaptureTracking`).
    *   It is involved in complex casts (e.g., `ptrtoint`) or volatile operations.
3.  **Heap Support:** Unlike simple stack-only analyses, this implementation identifies non-escaping heap allocations (e.g., a buffer allocated via `malloc` and used within the same function).
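
For illustration, here is a hedged C sketch of the distinction the pass draws (function names and shapes are hypothetical, not taken from the patch): `buf` never leaves its function, so under this analysis its accesses need no TSan instrumentation, while `shared` escapes through a global store and must stay instrumented.

```c
#include <stdlib.h>
#include <string.h>

int *g_ptr; /* storing a pointer here makes its pointee escape */

int local_sum(const int *src, size_t n) {
  /* Heap allocation confined to this function: non-escaping, so the
     loads/stores on it are race-free and need no TSan checks. */
  int *buf = malloc(n * sizeof(int));
  if (!buf)
    return 0;
  memcpy(buf, src, n * sizeof(int)); /* memcpy's pointer args are nocapture */
  int sum = 0;
  for (size_t i = 0; i < n; ++i)
    sum += buf[i];
  free(buf);
  return sum;
}

void publish(void) {
  int *shared = malloc(sizeof(int)); /* escapes: stored to a global */
  g_ptr = shared;
}
```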

## Impact
*   **Runtime Performance:** Eliminating checks for thread-local data significantly reduces overhead, especially in functions that make heavy use of temporary buffers or local aggregates.
*   **Memory Overhead:** By not instrumenting local allocations, TSan avoids allocating shadow memory for them. This leads to a measurable reduction in memory consumption.

## Motivation & Potential Impact
This work is based on our research [1] into optimizing dynamic race detectors. Our experiments show that Escape Analysis (EA) is highly effective for specific workloads and complementary to other techniques.

**Runtime Speedup (EA Only):**
In our research prototype, isolating Escape Analysis yielded the following speedups:
*   **SQLite:** ~1.43x speedup (significant gain due to heavy use of local structures).
*   **FFmpeg:** ~1.11x speedup.
*   **MySQL:** ~1.10x - 1.14x speedup.

**Memory Overhead Reduction:**
A key advantage of EA is reducing the memory footprint of the sanitizer by preventing shadow memory allocation for thread-local objects.
In our experiments with the **full optimization suite** (TSan+AllOpt) [1], we observed significant memory overhead reductions:
*   **FFmpeg:** ~10% reduction.
*   **Chromium:** ~7% reduction.
*   **Redis/SQLite:** ~4-6% reduction.
*As noted in the research, Escape Analysis is the **primary driver** of these memory savings.*

**Compilation Overhead:**
While the full research prototype (involving inter-procedural analyses) incurred a moderate build-time overhead (~6-15%), this specific **intra-procedural** implementation is lightweight and is expected to have a **negligible impact** on compilation time.

### Note on this PR
This patch implements a **conservative, intra-procedural** version of the algorithm described in [1]. While the full research prototype uses Inter-Procedural Analysis (IPA) to track escapes across function boundaries, this upstream version is restricted to the function scope to ensure maximum stability, fast compilation, and compatibility with the existing LLVM pipeline. The analysis is also control-flow insensitive. Despite these limitations, it effectively handles common patterns such as local buffers and temporary objects passed to `nocapture` helpers, as sketched below.
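
As a concrete sketch of the `nocapture` pattern (illustrative IR, not a test from this patch): the callee cannot retain `%buf`, so the alloca is provably thread-local and its accesses can be skipped.

```llvm
declare void @fill(ptr nocapture)

define i32 @use_temp() {
entry:
  %buf = alloca [16 x i32], align 16 ; never escapes this function
  call void @fill(ptr %buf)          ; nocapture: the pointer does not leak
  %v = load i32, ptr %buf, align 4   ; provably thread-local access
  ret i32 %v
}
```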

## Usage
The optimization is currently opt-in.
**Flag:** `-mllvm -tsan-use-escape-analysis`
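
Illustrative invocations (the flag and the `print<escape-analysis>` pass name are added by this patch; file names are placeholders):

```sh
# Build with TSan and enable escape-analysis filtering
clang -fsanitize=thread -O2 -mllvm -tsan-use-escape-analysis foo.c -o foo

# Inspect per-function escape results via the printer pass
opt -passes='print<escape-analysis>' -disable-output foo.ll
```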

## Attribution & Status
**Implementation:**
This patch was implemented by **Alexey Paznikov**.

**Research & Algorithm Design:**
The underlying algorithms and performance validation were conducted by the research team: **Alexey Paznikov**, **Andrey Kogutenko**, **Yaroslav Osipov**, **Michael Schwarz**, and **Umang Mathur**.

This work is based on research currently **under review** for publication [1].

[1] "Optimizing Instrumentation for Data Race Detectors" (Under Review, 2025).

From 73f6d9ccaa695d5e77e01d652f498e60f4badd53 Mon Sep 17 00:00:00 2001
From: Alexey Paznikov <apaznikov at gmail.com>
Date: Thu, 25 Sep 2025 15:32:29 +0800
Subject: [PATCH] [TSan] Add escape analysis for redundant instrumentation
 elimination

This patch introduces a new intra-procedural Escape Analysis pass and integrates it into ThreadSanitizer to eliminate redundant instrumentation of memory accesses.

The optimization identifies memory allocations (both stack and heap) that are provably thread-local. If an object does not escape the function scope (i.e., is not stored to a global, returned, or passed to a capturing function), it cannot be accessed by another thread, making data races impossible. Consequently, runtime checks for such objects can be safely omitted.

The analysis relies on MemorySSA for precise dataflow tracking. Key features include:
1. Heap & Stack Support: Handles both `alloca` and standard heap allocations (`malloc`, `calloc`, `new`, etc.) via TargetLibraryInfo.
2. MemorySSA-based Tracking: Uses MemorySSA to trace pointer flow through loads and stores. This enables:
   - "Looking through" intermediate memory operations to correctly identify the underlying allocation.
   - Detecting indirect escapes, where a pointer is stored into another local object which is subsequently read or leaked.
3. Conservative Safety: Bails out on unknown flow, complex pointer arithmetic, or when the step limit is exceeded to ensure soundness.

To ensure correctness:
- The analysis is strictly intra-procedural in this patch.
- Volatile accesses and atomic operations are never optimized out.
- It respects TSan's requirements by checking whether the base object (not just the pointer) escapes.

This optimization is disabled by default and can be enabled via `-tsan-use-escape-analysis`.

This implementation is based on the research algorithm designed by:
Alexey Paznikov, Andrey Kogutenko, Yaroslav Osipov, Michael Schwarz, and Umang Mathur.

Complete and refactor the first simple EscapeAnalysis implementation and integrate it into the PassRegistry.

Improve allocation detection logic and update function signature. Integrate `TargetLibraryInfo` and utilize `isAllocationFn` for enhanced analysis. Use getUnderlyingObjectAggressive instead of getUnderlyingObject.

WIP: Refactor (maybe not correct version)

WIP:getUnderlyingObjectsThroughLoads - first raw version (not ready!)

WIP:getUnderlyingObjectsThroughLoads - semi-ready version (verified through LLM)

Fix getUnderlyingObjectsThroughLoads: streamline terminal conditions and address handling of memory intrinsics.

WIP (TMP): Draft of EscapeAnalysis as an implementation independent of CaptureTracking

WIP (TMP): Draft of CaptureTracking-based implementation

WIP: fix captured (mainly), cleanup

WIP: minor fixes in the logic, make more concise

WIP: add early vector filter, fix possible recursion issue in doesStoreDestinationEscape

WIP: remove unnecessary conservatism, light refactoring

WIP: make printing EA results more idiomatic

WIP: make isHeapAllocation a fallback when MemoryBuiltins doesn't work

WIP: add first set of regression tests for Escape Analysis

WIP: add isAllocationSite check to isEscaping

WIP: remove unnecessary getUnderlyingObject in isEscaping, add store_to_unknown_ret_escape test

WIP: implement doesStoredPointerEscapeViaLoads

Consider case:
void esc() {
    int x;
    int *p = &x;
    GPtr = p;
}

define dso_local void @esc() #0 {
entry:
  %x = alloca i32, align 4
  %p = alloca ptr, align 8
  store ptr %x, ptr %p, align 8
  %0 = load ptr, ptr %p, align 8
  store ptr %0, ptr @GPtr, align 8
  ret void
}
--> x must escape

WIP: large refactoring of getUnderlyingObjectsThroughLoads, mainly consider isLiveOnEntryDef to get real base objects

* Better strip in MSSA->isLiveOnEntryDef(CurrClobber)
* Consider aliasing via if (!isModSet(AA->getModRefInfo(Store, Loc)))
* Fix appendIncomingMAs, get clobber in-place
* Renaming
* Extracting functions, minor readability refactoring

WIP: make x <- GPtr (store from global) and x -> GPtr (store to global) both escape, resolve issue with cyclic dependency

Add more tests, treat p <-- GPtr as not escaping.

Break escape-analysis.ll into sections

WIP: add check-tsan-escape-analysis

WIP: add check-tsan-escape-analysis, introduce tsan_generate_arch_suites to avoid code duplication

WIP: add check-tsan-escape-analysis-dynamic

Cleanup EscapeAnalysis.cpp

Enable EscapeAnalysis in ThreadSanitizer to filter non-escaping objects.

Delete no-escape.ll

Redesign doesStoredPointerEscapeViaLoads, cleanup getUnderlyingObjectsThroughLoads

Add walkEdgeClobbers to traverse clobbers. Use it in collectLoadsReadingFromStore

Use walkEdgeClobbers in getUnderlyingObjectsThroughLoads, cleanup, function reordering

Add comparison of EA and standard CaptureTracking

Use getUnderlyingObjectsThroughLoads to get allocas in ThreadSanitizer.cpp, fix a bug with isLiveOnEntryDef in getUnderlyingObjectsThroughLoads

Make walkEdgeClobbers return void, use isSimple() for atomic+volatile

Fix critical bug in getUnderlyingObjectsThroughLoads (in isLiveOnEntryDef), fix Cycle logic error, simplify and cleanup. Add double-pointer tests.

Slightly simplified `ThreadSanitizerPass::run()`.

One "if" instead of several ternary operators.

Tidy up analyses initialization

Move escape-analysis.ll to ThreadSanitizer root folder

Change EA flag to tsan-use-escape-analysis

Simplify, make analysis initialization lazy, update invalidate()

Add heap call sites to terminal objects (getUnderlyingObjectsThroughLoads)

Fix doesStoredPointerEscapeViaLoads: loads of non-pointer values cannot cause an escape

Add escape-analysis-tsan.ll to test TSan with EA enabled

Add tests for escaping arrays and structures

Add captures(none) argument support, add support for storing local objects to heap memory

1. Some functions (such as memory intrinsics) have captures(none) arguments. Pointers passed to these arguments do not escape, but the content behind them may; we account for this in findStoreReadersAndExports.
2. Also handle stores of local objects into heap memory: a local object escapes if the heap object it is stored into escapes.
3. Simplify tests, remove redundant ones.

Add more tests: multiple_stores_last_escapes, two_dimensional_array_escape, varargs_escape

Cleanup: improve debug messages, fix typos, and remove unused code
---
 compiler-rt/test/tsan/CMakeLists.txt          | 111 ++-
 compiler-rt/test/tsan/lit.cfg.py              |   7 +
 compiler-rt/test/tsan/lit.site.cfg.py.in      |   3 +
 .../Instrumentation/EscapeAnalysis.h          | 191 ++++
 llvm/lib/Passes/PassBuilder.cpp               |   1 +
 llvm/lib/Passes/PassRegistry.def              |   2 +
 .../Transforms/Instrumentation/CMakeLists.txt |   1 +
 .../Instrumentation/EscapeAnalysis.cpp        | 742 +++++++++++++++
 .../Instrumentation/ThreadSanitizer.cpp       | 107 ++-
 .../ThreadSanitizer/escape-analysis-tsan.ll   | 250 ++++++
 .../ThreadSanitizer/escape-analysis.ll        | 843 ++++++++++++++++++
 11 files changed, 2208 insertions(+), 50 deletions(-)
 create mode 100644 llvm/include/llvm/Transforms/Instrumentation/EscapeAnalysis.h
 create mode 100644 llvm/lib/Transforms/Instrumentation/EscapeAnalysis.cpp
 create mode 100644 llvm/test/Instrumentation/ThreadSanitizer/escape-analysis-tsan.ll
 create mode 100644 llvm/test/Instrumentation/ThreadSanitizer/escape-analysis.ll

diff --git a/compiler-rt/test/tsan/CMakeLists.txt b/compiler-rt/test/tsan/CMakeLists.txt
index 163355d68ebc2..828e6478d540c 100644
--- a/compiler-rt/test/tsan/CMakeLists.txt
+++ b/compiler-rt/test/tsan/CMakeLists.txt
@@ -18,6 +18,7 @@ endif()
 set(TSAN_DYNAMIC_TEST_DEPS ${TSAN_TEST_DEPS})
 set(TSAN_TESTSUITES)
 set(TSAN_DYNAMIC_TESTSUITES)
+set(TSAN_ENABLE_ESCAPE_ANALYSIS "False") # Disable escape analysis by default
 
 if (NOT DEFINED TSAN_TEST_DEFLAKE_THRESHOLD)
   set(TSAN_TEST_DEFLAKE_THRESHOLD "10")
@@ -28,45 +29,77 @@ if(APPLE)
   darwin_filter_host_archs(TSAN_SUPPORTED_ARCH TSAN_TEST_ARCH)
 endif()
 
-foreach(arch ${TSAN_TEST_ARCH})
-  set(TSAN_TEST_APPLE_PLATFORM "osx")
-  set(TSAN_TEST_MIN_DEPLOYMENT_TARGET_FLAG "${DARWIN_osx_MIN_VER_FLAG}")
+# Unified function for generating TSan test suites per architecture.
+# Arguments:
+#   OUT_LIST_VAR    - name of the output list (for example, TSAN_TESTSUITES or TSAN_EA_TESTSUITES)
+#   SUFFIX_KIND     - string appended to the config suffix after "-${arch}" (for example, "" or "-escape")
+#   CONFIG_KIND     - string appended to the config name after "Config" (for example, "" or "Escape")
+#   ENABLE_EA       - whether to enable escape analysis ("True"/"False")
+function(tsan_generate_arch_suites OUT_LIST_VAR SUFFIX_KIND CONFIG_KIND ENABLE_EA)
+  foreach(arch ${TSAN_TEST_ARCH})
+    set(TSAN_ENABLE_ESCAPE_ANALYSIS "${ENABLE_EA}")
 
-  set(TSAN_TEST_TARGET_ARCH ${arch})
-  string(TOLOWER "-${arch}" TSAN_TEST_CONFIG_SUFFIX)
-  get_test_cc_for_arch(${arch} TSAN_TEST_TARGET_CC TSAN_TEST_TARGET_CFLAGS)
+    set(TSAN_TEST_APPLE_PLATFORM "osx")
+    set(TSAN_TEST_MIN_DEPLOYMENT_TARGET_FLAG "${DARWIN_osx_MIN_VER_FLAG}")
 
-  string(REPLACE ";" " " LIBDISPATCH_CFLAGS_STRING " ${COMPILER_RT_TEST_LIBDISPATCH_CFLAGS}")
-  string(APPEND TSAN_TEST_TARGET_CFLAGS ${LIBDISPATCH_CFLAGS_STRING})
+    set(TSAN_TEST_TARGET_ARCH ${arch})
+    string(TOLOWER "-${arch}${SUFFIX_KIND}" TSAN_TEST_CONFIG_SUFFIX)
+    get_test_cc_for_arch(${arch} TSAN_TEST_TARGET_CC TSAN_TEST_TARGET_CFLAGS)
 
-  if (COMPILER_RT_HAS_MSSE4_2_FLAG)
-    string(APPEND TSAN_TEST_TARGET_CFLAGS " -msse4.2 ")
-  endif()
+    string(REPLACE ";" " " LIBDISPATCH_CFLAGS_STRING " ${COMPILER_RT_TEST_LIBDISPATCH_CFLAGS}")
+    string(APPEND TSAN_TEST_TARGET_CFLAGS ${LIBDISPATCH_CFLAGS_STRING})
 
-  string(TOUPPER ${arch} ARCH_UPPER_CASE)
-  set(CONFIG_NAME ${ARCH_UPPER_CASE}Config)
+    if (COMPILER_RT_HAS_MSSE4_2_FLAG)
+      string(APPEND TSAN_TEST_TARGET_CFLAGS " -msse4.2 ")
+    endif()
 
-  configure_lit_site_cfg(
-    ${CMAKE_CURRENT_SOURCE_DIR}/lit.site.cfg.py.in
-    ${CMAKE_CURRENT_BINARY_DIR}/${CONFIG_NAME}/lit.site.cfg.py
-    MAIN_CONFIG
-    ${CMAKE_CURRENT_SOURCE_DIR}/lit.cfg.py
-    )
-  list(APPEND TSAN_TESTSUITES ${CMAKE_CURRENT_BINARY_DIR}/${CONFIG_NAME})
+    string(TOUPPER ${arch} ARCH_UPPER_CASE)
+    set(CONFIG_NAME ${ARCH_UPPER_CASE}Config${CONFIG_KIND})
 
-  if(COMPILER_RT_TSAN_HAS_STATIC_RUNTIME)
-    string(TOLOWER "-${arch}-${OS_NAME}-dynamic" TSAN_TEST_CONFIG_SUFFIX)
-    set(CONFIG_NAME ${ARCH_UPPER_CASE}${OS_NAME}DynamicConfig)
     configure_lit_site_cfg(
-      ${CMAKE_CURRENT_SOURCE_DIR}/lit.site.cfg.py.in
-      ${CMAKE_CURRENT_BINARY_DIR}/${CONFIG_NAME}/lit.site.cfg.py
-      MAIN_CONFIG
-      ${CMAKE_CURRENT_SOURCE_DIR}/lit.cfg.py
+            ${CMAKE_CURRENT_SOURCE_DIR}/lit.site.cfg.py.in
+            ${CMAKE_CURRENT_BINARY_DIR}/${CONFIG_NAME}/lit.site.cfg.py
+            MAIN_CONFIG
+            ${CMAKE_CURRENT_SOURCE_DIR}/lit.cfg.py
+    )
+    list(APPEND ${OUT_LIST_VAR} ${CMAKE_CURRENT_BINARY_DIR}/${CONFIG_NAME})
+
+    if(COMPILER_RT_TSAN_HAS_STATIC_RUNTIME)
+      # Dynamic runtime for the corresponding variant
+      if("${SUFFIX_KIND}" STREQUAL "")
+        string(TOLOWER "-${arch}-${OS_NAME}-dynamic" TSAN_TEST_CONFIG_SUFFIX)
+        set(CONFIG_NAME ${ARCH_UPPER_CASE}${OS_NAME}DynamicConfig${CONFIG_KIND})
+        list(APPEND TSAN_DYNAMIC_TESTSUITES ${CMAKE_CURRENT_BINARY_DIR}/${CONFIG_NAME})
+      else()
+        string(TOLOWER "-${arch}-${OS_NAME}-dynamic${SUFFIX_KIND}" TSAN_TEST_CONFIG_SUFFIX)
+        set(CONFIG_NAME ${ARCH_UPPER_CASE}${OS_NAME}DynamicConfig${CONFIG_KIND})
+        # Track dynamic escape-analysis suites separately for a dedicated target.
+        list(APPEND TSAN_EA_DYNAMIC_TESTSUITES ${CMAKE_CURRENT_BINARY_DIR}/${CONFIG_NAME})
+      endif()
+      configure_lit_site_cfg(
+              ${CMAKE_CURRENT_SOURCE_DIR}/lit.site.cfg.py.in
+              ${CMAKE_CURRENT_BINARY_DIR}/${CONFIG_NAME}/lit.site.cfg.py
+              MAIN_CONFIG
+              ${CMAKE_CURRENT_SOURCE_DIR}/lit.cfg.py
       )
-    list(APPEND TSAN_DYNAMIC_TESTSUITES
-      ${CMAKE_CURRENT_BINARY_DIR}/${CONFIG_NAME})
+      list(APPEND ${OUT_LIST_VAR} ${CMAKE_CURRENT_BINARY_DIR}/${CONFIG_NAME})
+    endif()
+  endforeach()
+
+  # Propagate the assembled list to the parent scope
+  set(${OUT_LIST_VAR} "${${OUT_LIST_VAR}}" PARENT_SCOPE)
+  if(DEFINED TSAN_EA_DYNAMIC_TESTSUITES)
+    set(TSAN_EA_DYNAMIC_TESTSUITES "${TSAN_EA_DYNAMIC_TESTSUITES}" PARENT_SCOPE)
   endif()
-endforeach()
+endfunction()
+
+# Default configuration
+set(TSAN_TESTSUITES)
+tsan_generate_arch_suites(TSAN_TESTSUITES "" "" "False")
+
+# Enable escape analysis (check-tsan-escape-analysis target)
+set(TSAN_EA_TESTSUITES)
+tsan_generate_arch_suites(TSAN_EA_TESTSUITES "-escape" "Escape" "True")
 
 # iOS and iOS simulator test suites
 # These are not added into "check-all", in order to run these tests, use
@@ -124,6 +157,10 @@ list(APPEND TSAN_TESTSUITES ${CMAKE_CURRENT_BINARY_DIR}/Unit)
 if(COMPILER_RT_TSAN_HAS_STATIC_RUNTIME)
   list(APPEND TSAN_DYNAMIC_TESTSUITES ${CMAKE_CURRENT_BINARY_DIR}/Unit/dynamic)
 endif()
+list(APPEND TSAN_EA_TESTSUITES ${CMAKE_CURRENT_BINARY_DIR}/Unit)
+if(COMPILER_RT_TSAN_HAS_STATIC_RUNTIME)
+  list(APPEND TSAN_EA_DYNAMIC_TESTSUITES ${CMAKE_CURRENT_BINARY_DIR}/Unit/dynamic)
+endif()
 
 add_lit_testsuite(check-tsan "Running ThreadSanitizer tests"
   ${TSAN_TESTSUITES}
@@ -136,3 +173,17 @@ if(COMPILER_RT_TSAN_HAS_STATIC_RUNTIME)
                     EXCLUDE_FROM_CHECK_ALL
                     DEPENDS ${TSAN_DYNAMIC_TEST_DEPS})
 endif()
+
+add_lit_testsuite(check-tsan-escape-analysis "Running ThreadSanitizer tests (escape analysis)"
+        ${TSAN_EA_TESTSUITES}
+        DEPENDS ${TSAN_TEST_DEPS})
+set_target_properties(check-tsan-escape-analysis PROPERTIES FOLDER "Compiler-RT Tests")
+
+# New target: dynamic + escape analysis
+if(COMPILER_RT_TSAN_HAS_STATIC_RUNTIME)
+  add_lit_testsuite(check-tsan-escape-analysis-dynamic "Running ThreadSanitizer tests (dynamic, escape analysis)"
+                    ${TSAN_EA_DYNAMIC_TESTSUITES}
+                    EXCLUDE_FROM_CHECK_ALL
+                    DEPENDS ${TSAN_DYNAMIC_TEST_DEPS})
+  set_target_properties(check-tsan-escape-analysis-dynamic PROPERTIES FOLDER "Compiler-RT Tests")
+endif()
\ No newline at end of file
diff --git a/compiler-rt/test/tsan/lit.cfg.py b/compiler-rt/test/tsan/lit.cfg.py
index 8803a7bda9aa5..829ee625499aa 100644
--- a/compiler-rt/test/tsan/lit.cfg.py
+++ b/compiler-rt/test/tsan/lit.cfg.py
@@ -56,6 +56,13 @@ def get_required_attr(config, attr_name):
     + extra_cflags
     + ["-I%s" % tsan_incdir]
 )
+# Set up escape analysis if enabled
+tsan_enable_escape = getattr(config, "tsan_enable_escape_analysis", "False") == "True"
+if tsan_enable_escape:
+    config.name += " (escape-analysis)"
+    ea_flags = ["-mllvm", "-tsan-use-escape-analysis"]
+    clang_tsan_cflags += ea_flags
+
 clang_tsan_cxxflags = (
     config.cxx_mode_flags + clang_tsan_cflags + ["-std=c++11"] + ["-I%s" % tsan_incdir]
 )
diff --git a/compiler-rt/test/tsan/lit.site.cfg.py.in b/compiler-rt/test/tsan/lit.site.cfg.py.in
index c6d453aaee26f..da38c9b7eb9b9 100644
--- a/compiler-rt/test/tsan/lit.site.cfg.py.in
+++ b/compiler-rt/test/tsan/lit.site.cfg.py.in
@@ -9,6 +9,9 @@ config.target_cflags = "@TSAN_TEST_TARGET_CFLAGS@"
 config.target_arch = "@TSAN_TEST_TARGET_ARCH@"
 config.deflake_threshold = "@TSAN_TEST_DEFLAKE_THRESHOLD@"
 
+# Whether escape analysis is enabled for this configuration.
+config.tsan_enable_escape_analysis = "@TSAN_ENABLE_ESCAPE_ANALYSIS@"
+
 # Load common config for all compiler-rt lit tests.
 lit_config.load_config(config, "@COMPILER_RT_BINARY_DIR@/test/lit.common.configured")
 
diff --git a/llvm/include/llvm/Transforms/Instrumentation/EscapeAnalysis.h b/llvm/include/llvm/Transforms/Instrumentation/EscapeAnalysis.h
new file mode 100644
index 0000000000000..a12b3ff374cda
--- /dev/null
+++ b/llvm/include/llvm/Transforms/Instrumentation/EscapeAnalysis.h
@@ -0,0 +1,191 @@
+//===- EscapeAnalysis.h - Intraprocedural Escape Analysis -------*- C++ -*-===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+//
+// This file defines the interface for a simple, conservative intraprocedural
+// escape analysis. It is designed as a helper utility for other passes, like
+// ThreadSanitizer, to determine if an allocation escapes the context of its
+// containing function.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_TRANSFORMS_INSTRUMENTATION_ESCAPEANALYSIS_H
+#define LLVM_TRANSFORMS_INSTRUMENTATION_ESCAPEANALYSIS_H
+
+#include "llvm/ADT/DenseMap.h"
+#include "llvm/ADT/SmallVector.h"
+#include "llvm/Analysis/CaptureTracking.h"
+#include "llvm/Analysis/LoopInfo.h"
+#include "llvm/Analysis/MemorySSA.h"
+#include "llvm/Analysis/TargetLibraryInfo.h"
+#include "llvm/IR/PassManager.h"
+
+namespace llvm {
+/// Find underlying base objects for a pointer possibly produced by loads.
+///
+/// This routine walks backwards through MemorySSA clobbering definitions of
+/// simple loads to find stores that defined the loaded pointer values, and
+/// collects their base objects. Additionally, it attempts ValueTracking
+/// `getUnderlyingObjects` to peel pointer casts/GEPs/phis where profitable.
+///
+/// Collected "base" objects are:
+///  - `AllocaInst` (stack, base=stack)
+///  - `Argument` (function argument, base=arg)
+///  - `GlobalVariable` and `GlobalAlias` (base=global|alias)
+///  - `ConstantPointerNull` (base=null)
+///  - Results of known heap-allocating calls (e.g. `malloc`, `calloc`,
+///    `realloc`, `aligned_alloc`, `strdup`, or C++ `new`) when recognized
+///    via `TargetLibraryInfo` (base=heap).
+///
+/// If the walk encounters an unrecognized defining write, a non-simple store,
+/// a memintrinsic as a defining write, or the step budget is exceeded, the
+/// analysis conservatively treats the current value as a terminal non-base
+/// and marks the result as incomplete.
+///
+/// Contract and guarantees:
+///  - If `MSSA` is null, the analysis immediately returns with
+///    `*IsComplete == false` (if provided).
+///  - If `TLI` is null, heap allocations cannot be recognized; terminals that
+///    are calls are treated as non-bases and lead to `*IsComplete == false`.
+///  - `Result` is a set of terminal values observed (may include non-bases if
+///    the analysis is incomplete). Use `*IsComplete` to know if all are bases.
+///  - `MaxSteps` is a per-query safety valve limiting the combined number of
+///    processed worklist nodes. When exceeded, the analysis bails out and
+///    sets `*IsComplete == false`.
+void getUnderlyingObjectsThroughLoads(const Value *Ptr, MemorySSA *MSSA,
+                                      SmallPtrSetImpl<const Value *> &Result,
+                                      const TargetLibraryInfo *TLI = nullptr,
+                                      LoopInfo *LI = nullptr,
+                                      bool *IsComplete = nullptr,
+                                      unsigned MaxSteps = 10000);
+
+/// Detect heap allocations. Complements isAllocationFn() by checking
+/// library functions directly when attributes might be missing.
+bool isHeapAllocation(const CallBase *CB, const TargetLibraryInfo &TLI);
+
+/// EscapeAnalysisInfo - This class implements the actual backward dataflow
+/// analysis for a function; queries are per allocation site.
+///
+/// This is a lightweight, intraprocedural and conservative analysis intended
+/// to help instrumentation passes (e.g. ThreadSanitizer) skip objects that do
+/// not escape the function scope. The main query is \c isEscaping(Value&),
+/// which answers whether an allocation site (alloca/malloc-like) may escape
+/// the current function. Results are memoized per underlying object.
+struct EscapeAnalysisInfo {
+  /// Constructs an escape analysis utility for a given function.
+  /// Requires a FunctionAnalysisManager to obtain other analyses like AA.
+  EscapeAnalysisInfo(Function &F, FunctionAnalysisManager &FAM) : F(F) {
+    TLI = &FAM.getResult<TargetLibraryAnalysis>(F);
+    MSSA = &FAM.getResult<MemorySSAAnalysis>(F).getMSSA();
+    LI = &FAM.getResult<LoopAnalysis>(F);
+  }
+  ~EscapeAnalysisInfo() = default;
+
+  /// Return true if \p Alloc may escape the function.
+  /// \param Alloc - Must be an allocation site (AllocaInst or heap allocation
+  ///                call). Passing GEPs/bitcasts is not supported; use the base
+  ///                allocation.
+  /// \returns true if the allocation escapes or if \p Alloc is not an
+  /// allocation site.
+  bool isEscaping(const Value &Alloc);
+
+  /// Print escape information for all allocations in the function
+  void print(raw_ostream &OS);
+
+  bool invalidate(Function &F, const PreservedAnalyses &PA,
+                  FunctionAnalysisManager::Invalidator &Inv);
+
+private:
+  Function &F;
+  DenseMap<const Value *, bool> Cache;
+
+  TargetLibraryInfo *TLI = nullptr;
+  MemorySSA *MSSA = nullptr;
+  LoopInfo *LI = nullptr;
+
+  /// Checks whether a base location is externally visible (thus escapes).
+  static bool isExternalObject(const Value *Base);
+
+  /// Custom CaptureTracker for escape analysis
+  class EscapeCaptureTracker : public CaptureTracker {
+  public:
+    EscapeCaptureTracker(EscapeAnalysisInfo &EAI,
+                         const SmallPtrSet<const Value *, 32> &ProcessingSet)
+        : EAI(EAI), ProcessingSet(ProcessingSet) {}
+
+    void tooManyUses() override { Escaped = true; }
+    bool shouldExplore(const Use *U) override;
+    Action captured(const Use *U, UseCaptureInfo CI) override;
+    bool hasEscaped() const { return Escaped; }
+
+  private:
+    EscapeAnalysisInfo &EAI;
+    SmallPtrSet<const Value *, 32> ProcessingSet;
+    bool Escaped = false;
+
+    /// Analyze if storing to destination causes escape
+    bool doesStoreDestEscape(const Value *Dest);
+
+    /// Get indices of pointer-typed arguments that are marked 'nocapture'
+    SmallVector<unsigned, 8>
+    getNoCapturePointerArgIndices(const CallBase *CB) const;
+
+    /// Check if any of the 'nocapture' arguments can reach the query object
+    bool canEscapeViaNocaptureArgs(
+        const CallBase &CB, ArrayRef<unsigned> NoCapPtrArgs,
+        SmallPtrSetImpl<const Value *> &StorePtrOpndBases) const;
+
+    /// Check if the given clobber stems from StartMDef
+    bool stemsFromStartStore(MemoryUseOrDef *MUOD, const MemoryDef *StartMDef,
+                             MemoryLocation Loc, bool &IsComplete,
+                             MemorySSAWalker *Walker) const;
+
+    /// Walk MemorySSA forward from StartStore and:
+    ///  - collect pointer-typed Loads that may read bytes written by StartStore
+    ///  - detect calls that may export those bytes via nocapture pointer args
+    /// Sets ContentMayEscape if any call may export the bytes.
+    SmallVector<const LoadInst *, 32>
+    findStoreReadersAndExports(const StoreInst *StartStore,
+                                 bool &ContentMayEscape, bool &IsComplete);
+
+    /// Analyze whether the pointer value stored by `Store` can escape
+    bool doesStoredPointerEscapeViaLoads(const StoreInst *Store);
+  };
+
+  /// Solve escape for a single allocation site using backward dataflow.
+  bool solveEscapeFor(const Value &Ptr,
+                      SmallPtrSet<const Value *, 32> &ProcessingSet);
+
+  /// Helper function to detect allocation sites (malloc/new-like)
+  /// Returns true if V is an Alloca or a call to a known heap alloc function.
+  bool isAllocationSite(const Value *V);
+};
+
+/// EscapeAnalysisInfo wrapper for the new pass manager.
+class EscapeAnalysis : public AnalysisInfoMixin<EscapeAnalysis> {
+  friend AnalysisInfoMixin<EscapeAnalysis>;
+  static AnalysisKey Key;
+
+public:
+  using Result = EscapeAnalysisInfo;
+  static Result run(Function &F, FunctionAnalysisManager &FAM);
+};
+
+/// Printer pass for the \c EscapeAnalysis results.
+class EscapeAnalysisPrinterPass
+    : public PassInfoMixin<EscapeAnalysisPrinterPass> {
+  raw_ostream &OS;
+
+public:
+  explicit EscapeAnalysisPrinterPass(raw_ostream &OS) : OS(OS) {}
+  PreservedAnalyses run(Function &F, FunctionAnalysisManager &FAM) const;
+  static bool isRequired() { return true; }
+};
+
+} // end namespace llvm
+
+#endif // LLVM_TRANSFORMS_INSTRUMENTATION_ESCAPEANALYSIS_H
\ No newline at end of file
diff --git a/llvm/lib/Passes/PassBuilder.cpp b/llvm/lib/Passes/PassBuilder.cpp
index f5281ea69b512..4007b46acfdb5 100644
--- a/llvm/lib/Passes/PassBuilder.cpp
+++ b/llvm/lib/Passes/PassBuilder.cpp
@@ -245,6 +245,7 @@
 #include "llvm/Transforms/Instrumentation/CGProfile.h"
 #include "llvm/Transforms/Instrumentation/ControlHeightReduction.h"
 #include "llvm/Transforms/Instrumentation/DataFlowSanitizer.h"
+#include "llvm/Transforms/Instrumentation/EscapeAnalysis.h"
 #include "llvm/Transforms/Instrumentation/GCOVProfiler.h"
 #include "llvm/Transforms/Instrumentation/HWAddressSanitizer.h"
 #include "llvm/Transforms/Instrumentation/InstrProfiling.h"
diff --git a/llvm/lib/Passes/PassRegistry.def b/llvm/lib/Passes/PassRegistry.def
index 074c328ef0931..c72655d75bcfb 100644
--- a/llvm/lib/Passes/PassRegistry.def
+++ b/llvm/lib/Passes/PassRegistry.def
@@ -356,6 +356,7 @@ FUNCTION_ANALYSIS("demanded-bits", DemandedBitsAnalysis())
 FUNCTION_ANALYSIS("domfrontier", DominanceFrontierAnalysis())
 FUNCTION_ANALYSIS("domtree", DominatorTreeAnalysis())
 FUNCTION_ANALYSIS("ephemerals", EphemeralValuesAnalysis())
+FUNCTION_ANALYSIS("escape-analysis", EscapeAnalysis())
 FUNCTION_ANALYSIS("func-properties", FunctionPropertiesAnalysis())
 FUNCTION_ANALYSIS("machine-function-info", MachineFunctionAnalysis(*TM))
 FUNCTION_ANALYSIS("gc-function", GCFunctionAnalysis())
@@ -512,6 +513,7 @@ FUNCTION_PASS("print<delinearization>", DelinearizationPrinterPass(errs()))
 FUNCTION_PASS("print<demanded-bits>", DemandedBitsPrinterPass(errs()))
 FUNCTION_PASS("print<domfrontier>", DominanceFrontierPrinterPass(errs()))
 FUNCTION_PASS("print<domtree>", DominatorTreePrinterPass(errs()))
+FUNCTION_PASS("print<escape-analysis>", EscapeAnalysisPrinterPass(errs()))
 FUNCTION_PASS("print<func-properties>", FunctionPropertiesPrinterPass(errs()))
 FUNCTION_PASS("print<inline-cost>", InlineCostAnnotationPrinterPass(errs()))
 FUNCTION_PASS("print<lazy-value-info>", LazyValueInfoPrinterPass(errs()))
diff --git a/llvm/lib/Transforms/Instrumentation/CMakeLists.txt b/llvm/lib/Transforms/Instrumentation/CMakeLists.txt
index 80576c61fd80c..4369dfddbd439 100644
--- a/llvm/lib/Transforms/Instrumentation/CMakeLists.txt
+++ b/llvm/lib/Transforms/Instrumentation/CMakeLists.txt
@@ -24,6 +24,7 @@ add_llvm_component_library(LLVMInstrumentation
   SanitizerBinaryMetadata.cpp
   ValueProfileCollector.cpp
   ThreadSanitizer.cpp
+  EscapeAnalysis.cpp
   TypeSanitizer.cpp
   HWAddressSanitizer.cpp
   RealtimeSanitizer.cpp
diff --git a/llvm/lib/Transforms/Instrumentation/EscapeAnalysis.cpp b/llvm/lib/Transforms/Instrumentation/EscapeAnalysis.cpp
new file mode 100644
index 0000000000000..c72ca2431a9ee
--- /dev/null
+++ b/llvm/lib/Transforms/Instrumentation/EscapeAnalysis.cpp
@@ -0,0 +1,742 @@
+//===- EscapeAnalysis.cpp - Intraprocedural Escape Analysis Implementation ===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+//
+// This file implements the EscapeAnalysis helper class. It uses a worklist-
+// based, backward dataflow analysis to determine if an allocation can escape.
+//
+//===----------------------------------------------------------------------===//
+
+#include "llvm/Transforms/Instrumentation/EscapeAnalysis.h"
+#include "llvm/ADT/SmallString.h"
+#include "llvm/ADT/Statistic.h"
+#include "llvm/Analysis/AliasAnalysis.h"
+#include "llvm/Analysis/MemoryBuiltins.h"
+#include "llvm/Analysis/MemorySSA.h"
+#include "llvm/Analysis/ValueTracking.h"
+#include "llvm/IR/InstIterator.h"
+#include "llvm/Support/CommandLine.h"
+#include "llvm/Support/Debug.h"
+
+#define DEBUG_TYPE "escape-analysis"
+
+using namespace llvm;
+
+STATISTIC(NumAllocationsAnalyzed, "Number of allocation sites analyzed");
+STATISTIC(NumAllocationsEscaped, "Number of allocation sites found to escape");
+
+/// Per-allocation worklist cap (safety valve). If the number of processed
+/// worklist nodes exceeds this limit, the analysis bails out conservatively and
+/// considers the allocation as escaping.
+static cl::opt<unsigned>
+WorklistLimit("escape-analysis-worklist-limit", cl::init(1000), cl::Hidden,
+              cl::desc("Max number of worklist nodes processed per allocation; "
+                       "if exceeded, assume the allocation escapes"));
+
+// getUnderlyingObjects(..., MaxLookup = 0) is assumed to mean "unbounded".
+// If upstream changes semantics, this must be revisited.
+static const unsigned VTMaxLookup = 0;
+
+//===----------------------------------------------------------------------===//
+// MemorySSA-related utils
+//===----------------------------------------------------------------------===//
+
+namespace llvm {
+/// Add P to Worklist if it is not already in Seen.
+template <typename PtrT, typename SetT, typename WorklistT>
+static bool tryEnqueueIfNew(PtrT *P, SetT &Seen, WorklistT &Worklist) {
+  if (P && Seen.insert(P).second) {
+    Worklist.push_back(P);
+    return true;
+  }
+  return false;
+}
+
+/// Add incoming unvisited MemoryAccesses of a MemoryPhi to MAWorkList.
+static void appendIncomingMAs(const MemoryPhi *MPhi,
+                              SmallPtrSetImpl<MemoryAccess *> &VisitedMA,
+                              SmallVectorImpl<MemoryAccess *> &MAWorkList,
+                              MemoryLocation Loc, MemorySSAWalker *Walker,
+                              bool &IsComplete) {
+  for (unsigned i = 0, N = MPhi->getNumIncomingValues(); i != N; ++i) {
+    MemoryAccess *InMA = MPhi->getIncomingValue(i);
+    MemoryAccess *EdgeCl = Walker->getClobberingMemoryAccess(InMA, Loc);
+    if (!EdgeCl) {
+      IsComplete = false;
+      continue;
+    }
+    tryEnqueueIfNew(EdgeCl, VisitedMA, MAWorkList);
+  }
+}
+
+enum class EdgeWalkStep { Recurse, SkipSuccessors, Stop };
+
+/// Walk edge clobbering definitions starting from Start MemoryAccess.
+template <typename VisitT>
+static void walkEdgeClobbers(MemoryAccess *Start, MemorySSAWalker *Walker,
+                             MemoryLocation Loc, unsigned Limit,
+                             const VisitT &Visit, bool &IsComplete) {
+  IsComplete = true;
+  if (!Start) {
+    IsComplete = false;
+    return;
+  }
+
+  SmallVector<MemoryAccess *, 32> MAWorklist;
+  SmallPtrSet<MemoryAccess *, 32> MAVisited;
+  tryEnqueueIfNew(Start, MAVisited, MAWorklist);
+  unsigned Steps = 0;
+
+  while (!MAWorklist.empty()) {
+    if (++Steps > Limit) {
+      IsComplete = false;
+      return;
+    }
+
+    MemoryAccess *MA = MAWorklist.pop_back_val();
+
+    const EdgeWalkStep Act = Visit(MA);
+    if (Act == EdgeWalkStep::Stop)
+      return;
+    if (Act == EdgeWalkStep::SkipSuccessors)
+      continue;
+
+    if (auto *MDef = dyn_cast<MemoryDef>(MA)) {
+      MemoryAccess *EdgeCl = Walker->getClobberingMemoryAccess(MDef, Loc);
+      if (!EdgeCl) {
+        IsComplete = false;
+        return;
+      }
+      tryEnqueueIfNew(EdgeCl, MAVisited, MAWorklist);
+    } else if (const auto *MPhi = dyn_cast<MemoryPhi>(MA)) {
+      appendIncomingMAs(MPhi, MAVisited, MAWorklist, Loc, Walker, IsComplete);
+      if (!IsComplete)
+        return;
+    } else {
+      llvm_unreachable("Unexpected MemoryAccess kind");
+    }
+  }
+}
+
+/// Try to use ValueTracking to find underlying objects.
+static bool tryValueTracking(const Value *V, LoopInfo *LI,
+                             SmallVectorImpl<const Value *> &Work,
+                             SmallPtrSetImpl<const Value *> &Enqueued) {
+  SmallVector<const Value *, 4> Bases;
+  if (!V->getType()->isPointerTy())
+    return false; // Only pointers have underlying objects.
+
+  getUnderlyingObjects(V, Bases, LI, VTMaxLookup);
+
+  if (Bases.empty() || (Bases.size() == 1 && Bases[0] == V))
+    return false;
+
+  for (const Value *B : Bases)
+    tryEnqueueIfNew(B, Enqueued, Work);
+  return true;
+}
+
+bool isHeapAllocation(const CallBase *CB, const TargetLibraryInfo &TLI) {
+  // Try standard path first (works for C++ new and modern IR with allockind)
+  if (isAllocationFn(CB, &TLI) || isNewLikeFn(CB, &TLI))
+    return true;
+
+  // Fallback: check directly via TLI for malloc/calloc/etc
+  const Function *Callee = CB->getCalledFunction();
+  if (!Callee || !Callee->getReturnType()->isPointerTy())
+    return false;
+
+  LibFunc Func;
+  if (!TLI.getLibFunc(*Callee, Func) || !TLI.has(Func))
+    return false;
+
+  // List of known heap allocation functions from libc
+  switch (Func) {
+  case LibFunc_malloc:
+  case LibFunc_calloc:
+  case LibFunc_realloc:
+  case LibFunc_reallocf:
+  case LibFunc_reallocarray:
+  case LibFunc_valloc:
+  case LibFunc_pvalloc:
+  case LibFunc_aligned_alloc:
+  case LibFunc_memalign:
+  case LibFunc_vec_malloc:
+  case LibFunc_vec_calloc:
+  case LibFunc_vec_realloc:
+  case LibFunc_strdup:
+  case LibFunc_strndup:
+    return true;
+  default:
+    return false;
+  }
+}
+
+void getUnderlyingObjectsThroughLoads(const Value *Ptr, MemorySSA *MSSA,
+                                      SmallPtrSetImpl<const Value *> &Result,
+                                      const TargetLibraryInfo *TLI,
+                                      LoopInfo *LI, bool *IsComplete,
+                                      unsigned MaxSteps) {
+  LLVM_DEBUG(dbgs() << "getUnderlyingObjectsThroughLoads: " << Ptr->getName()
+                    << "\n");
+
+  if (!Ptr->getType()->isPointerTy()) {
+    LLVM_DEBUG(dbgs() << "Input is not a pointer: " << *Ptr << "\n");
+    return; // Only pointers have underlying objects.
+  }
+
+  if (!MSSA) {
+    LLVM_DEBUG(dbgs() << "MSSA is null, marking analysis as incomplete\n");
+    if (IsComplete)
+      *IsComplete = false;
+    return;
+  }
+
+  auto addTerminal = [&](const Value *Term,
+                         bool MarkIncompleteIfNotBase = true) {
+    if (!Term || !Term->getType()->isPointerTy())
+      return;
+    bool IsBase = isa<AllocaInst>(Term) || isa<Argument>(Term) ||
+                  isa<GlobalVariable>(Term) || isa<GlobalAlias>(Term) ||
+                  isa<ConstantPointerNull>(Term);
+    if (!IsBase && TLI) { // Check if it's heap allocation call
+      if (const auto *CB = dyn_cast<CallBase>(Term))
+        IsBase = isHeapAllocation(CB, *TLI);
+    }
+    LLVM_DEBUG(dbgs() << "Mark terminal: " << *Term << " IsBase="
+                      << (IsBase ? "yes" : "no") << "\n");
+    Result.insert(Term);
+    if (IsComplete && !IsBase && MarkIncompleteIfNotBase) {
+      *IsComplete = false;
+      LLVM_DEBUG(dbgs() << "Marking incomplete due to non-base\n");
+    }
+  };
+
+  SmallPtrSet<const Value *, 32> ValueTrackingSeen;
+  SmallPtrSet<const Value *, 32> Seen;
+  SmallVector<const Value *, 32> Worklist;
+
+  auto bail = [&]() {
+    if (IsComplete)
+      *IsComplete = false;
+    for (const Value *WV : Worklist)
+      addTerminal(WV);
+  };
+
+  tryEnqueueIfNew(Ptr, Seen, Worklist);
+
+  unsigned Step = 0;
+  if (IsComplete)
+    *IsComplete = true;
+
+  MemorySSAWalker *Walker = MSSA->getSkipSelfWalker();
+
+  while (!Worklist.empty()) {
+    const Value *CurrPtr = Worklist.pop_back_val();
+
+    // Safety valve: if we exceed MaxSteps, bail out conservatively.
+    if (++Step > MaxSteps) {
+      LLVM_DEBUG(dbgs() << "MaxSteps exceeded at: " << *CurrPtr << "\n");
+      addTerminal(CurrPtr);
+      bail();
+      return;
+    }
+
+    // Try ValueTracking first (only once per value)
+    if (!isa<LoadInst>(CurrPtr) && ValueTrackingSeen.insert(CurrPtr).second &&
+        tryValueTracking(CurrPtr, LI, Worklist, Seen))
+      continue; // Successfully expanded via ValueTracking
+
+    const auto *Load = dyn_cast<LoadInst>(CurrPtr);
+    if (!Load || !Load->isSimple()) {
+      addTerminal(CurrPtr);
+      continue;
+    }
+
+    // Use MemorySSA's API to get the clobbering MemoryAccess.
+    MemoryAccess *Clobber = Walker->getClobberingMemoryAccess(Load);
+    const auto LoadLoc = MemoryLocation::get(Load);
+
+    // Local accumulators for Load
+    SmallVector<const Value *, 8> LocalWorklist;
+    SmallPtrSet<const Value *, 8> LocalSeen;
+
+    LocalSeen.insert(Load);
+    bool Fallback = false;
+    bool MAWalkComplete = false;
+    // Limit MemorySSA walk to half of the budget
+    const unsigned MAIterationLimit = std::max(1u, MaxSteps / 2);
+
+    walkEdgeClobbers(
+        Clobber, Walker, LoadLoc, MAIterationLimit,
+        [&](MemoryAccess *MA) -> EdgeWalkStep {
+          if (MSSA->isLiveOnEntryDef(MA)) {
+            LLVM_DEBUG(dbgs() << "LiveOnEntryDef reached, fallback\n");
+            Fallback = true;
+            return EdgeWalkStep::Stop;
+          }
+
+          if (const auto *MDef = dyn_cast<MemoryDef>(MA)) {
+            const Instruction *I = MDef->getMemoryInst();
+            assert(I && "MemoryDef must have an instruction");
+
+            if (const auto *Store = dyn_cast<StoreInst>(I)) {
+              if (!Store->isSimple()) {
+                Fallback = true;
+                return EdgeWalkStep::Stop;
+              }
+              const Value *SV = Store->getValueOperand();
+              if (SV->getType()->isPointerTy()) {
+                tryEnqueueIfNew(SV, LocalSeen, LocalWorklist);
+                // Reached defining store for LoadLoc — stop this path here.
+                return EdgeWalkStep::SkipSuccessors;
+              }
+              LLVM_DEBUG(dbgs() << "Non-pointer store: " << *Store << "\n");
+              Fallback = true;
+              return EdgeWalkStep::Stop;
+            }
+            // NOTE: We intentionally don't consider the source in memintrinsics
+            // (memmove/memcpy): they are not semantically underlying objects.
+            // Conservatively assume escape.
+            LLVM_DEBUG(dbgs() << "Unrecognized defining write, fallback\n");
+            Fallback = true;
+            return EdgeWalkStep::Stop;
+          }
+          return EdgeWalkStep::Recurse;
+        },
+        MAWalkComplete);
+
+    if (!MAWalkComplete) Fallback = true;
+
+    if (Fallback) {
+      LLVM_DEBUG(dbgs() << "Fallback: mark Load as term: " << *Load << "\n");
+      addTerminal(Load);
+    } else {
+      for (const auto *WV : LocalWorklist)
+        tryEnqueueIfNew(WV, Seen, Worklist);
+    }
+  } // end while for Work
+  LLVM_DEBUG(dbgs() << "getUnderlyingObjectsThroughLoads: " << Ptr->getName()
+                    << " -- end\n");
+}
+
+} // end namespace llvm
+
+//===----------------------------------------------------------------------===//
+// EscapeCaptureTracker Implementation
+//===----------------------------------------------------------------------===//
+
+bool EscapeAnalysisInfo::EscapeCaptureTracker::shouldExplore(const Use *U) {
+  // Always explore, but we can add optimizations here later
+  return true;
+}
+
+bool EscapeAnalysisInfo::isExternalObject(const Value *Base) {
+  return isa<GlobalVariable>(Base) || isa<GlobalAlias>(Base) ||
+         isa<Argument>(Base);
+}
+
+bool EscapeAnalysisInfo::EscapeCaptureTracker::doesStoreDestEscape(
+    const Value *Dest) {
+  // Find base objects for the storage location
+  SmallPtrSet<const Value *, 8> BaseObjects;
+  bool IsComplete = false;
+  getUnderlyingObjectsThroughLoads(Dest, EAI.MSSA, BaseObjects, EAI.TLI, EAI.LI,
+                                   &IsComplete);
+
+  // If bases are unknown or the walk is incomplete, be conservative.
+  if (BaseObjects.empty() || !IsComplete) {
+    LLVM_DEBUG(dbgs() << "  Store destination unknown/incomplete, escapes\n");
+    return true;
+  }
+
+  for (const Value *Base : BaseObjects) {
+    LLVM_DEBUG(dbgs() << "  Store destination Base: " << *Base << "\n");
+    if (isExternalObject(Base)) {
+      LLVM_DEBUG(dbgs() << "  Stored to external object, escapes\n");
+      return true;
+    }
+
+    // If storing to another local allocation, recursively check if it escapes
+    if (const auto *Alloca = dyn_cast<AllocaInst>(Base)) {
+      // Recurse to decide whether the target alloca itself escapes.
+      if (EAI.solveEscapeFor(*Alloca, ProcessingSet)) {
+        LLVM_DEBUG(dbgs() << "  Stored to escaping alloca, escapes\n");
+        return true;
+      }
+    } else if (const auto *CB = dyn_cast<CallBase>(Base)) {
+      // Store to malloc/new call result (heap allocation).
+      if (isHeapAllocation(CB, *EAI.TLI)) {
+        if (EAI.solveEscapeFor(*CB, ProcessingSet)) {
+          LLVM_DEBUG(dbgs() << "  Stored into escaping heap alloc, escapes\n");
+          return true; // Stored into escaping heap allocation — escapes.
+        }
+        continue; // Stored into non-escaping heap allocation — OK.
+      }
+      LLVM_DEBUG(dbgs() << "  Stored to unknown call result, escapes\n");
+      return true; // Unknown call result — escapes.
+    } else {
+      // Any other/unknown terminal means the destination is not proven local.
+      LLVM_DEBUG(dbgs() << "  Stored to unknown location, escapes\n");
+      return true;
+    }
+  }
+  return false;
+}
+
+SmallVector<unsigned, 8>
+EscapeAnalysisInfo::EscapeCaptureTracker::getNoCapturePointerArgIndices(
+    const CallBase *CB) const {
+  SmallVector<unsigned, 8> Indices;
+  for (unsigned ArgNo = 0, End = CB->arg_size(); ArgNo != End; ++ArgNo) {
+    const auto *Opnd = CB->getArgOperand(ArgNo);
+    if (!Opnd || !Opnd->getType()->isPointerTy() ||
+        CB->paramHasAttr(ArgNo, Attribute::WriteOnly))
+      continue;
+    CaptureInfo CI = CB->getCaptureInfo(ArgNo);
+    if (capturesNothing(
+            UseCaptureInfo(CI.getOtherComponents(), CI.getRetComponents())))
+      Indices.push_back(ArgNo);
+  }
+  return Indices;
+}
+
+bool EscapeAnalysisInfo::EscapeCaptureTracker::canEscapeViaNocaptureArgs(
+    const CallBase &CB, ArrayRef<unsigned> NoCapPtrArgs,
+    SmallPtrSetImpl<const Value *> &StorePtrOpndBases) const {
+  // For each nocapture arg, compute bases and check if it is the QueryObj.
+  for (unsigned ArgIdx : NoCapPtrArgs) {
+    const Value *OpV = CB.getArgOperand(ArgIdx);
+    SmallPtrSet<const Value *, 8> ArgBases;
+    bool Complete = false;
+    getUnderlyingObjectsThroughLoads(OpV, EAI.MSSA, ArgBases, EAI.TLI, EAI.LI,
+                                     &Complete);
+    if (!Complete)
+      return true;
+    for (const Value *StoreBaseObj : StorePtrOpndBases) {
+      if (ArgBases.contains(StoreBaseObj))
+        return true;
+    }
+  }
+  return false;
+}
+
+bool EscapeAnalysisInfo::EscapeCaptureTracker::stemsFromStartStore(
+    MemoryUseOrDef *MUOD, const MemoryDef *StartMDef, MemoryLocation Loc,
+    bool &IsComplete, MemorySSAWalker *Walker) const {
+  LLVM_DEBUG(dbgs() << "  stemsFromStartStore: clobber " << *MUOD << "\n");
+  MemoryAccess *Clobber =
+      Walker->getClobberingMemoryAccess(MUOD->getDefiningAccess(), Loc);
+  if (!Clobber) {
+    IsComplete = false;
+    return true;
+  }
+  bool WalkComplete = false;
+  bool Found = false;
+
+  walkEdgeClobbers(
+      Clobber, Walker, Loc, WorklistLimit,
+      [&](MemoryAccess *MA) {
+        LLVM_DEBUG(dbgs() << "    inspect MA: " << *MA << "\n");
+        if (MA == StartMDef) {
+          Found = true;
+          return EdgeWalkStep::Stop;
+        }
+        return EdgeWalkStep::Recurse;
+      },
+      WalkComplete);
+
+  if (!WalkComplete)
+    IsComplete = false;
+  return Found || !WalkComplete; // conservative on incompleteness
+}
+
+SmallVector<const LoadInst *, 32>
+EscapeAnalysisInfo::EscapeCaptureTracker::findStoreReadersAndExports(
+    const StoreInst *StartStore, bool &ContentMayEscape, bool &IsComplete) {
+  IsComplete = true;
+  ContentMayEscape = false;
+  auto *StartMDef = cast<MemoryDef>(EAI.MSSA->getMemoryAccess(StartStore));
+  LLVM_DEBUG(dbgs() << "Collecting Loads reading from Store: " << *StartStore
+                    << "\t" << *StartMDef << "\n");
+  const MemoryLocation LocDest = MemoryLocation::get(StartStore);
+  MemorySSAWalker *Walker = EAI.MSSA->getSkipSelfWalker();
+
+  // A call that may read memory can export the stored bytes (escape).
+  auto mayReadMemory = [](const CallBase *CB) {
+    if (!CB || CB->doesNotAccessMemory() || CB->onlyWritesMemory())
+      return false;
+    return true;
+  };
+
+  // Needed for the nocapture-argument query below.
+  SmallPtrSet<const Value *, 8> StorePtrBases;
+  getUnderlyingObjectsThroughLoads(StartStore->getPointerOperand(),
+                                   EAI.MSSA, StorePtrBases, EAI.TLI,
+                                   EAI.LI, &IsComplete);
+
+  SmallVector<const LoadInst *, 32> ResLoads;
+  SmallVector<MemoryAccess *, 32> MAWorklist;
+  SmallPtrSet<MemoryAccess *, 32> MAVisited;
+  tryEnqueueIfNew(StartMDef, MAVisited, MAWorklist);
+
+  unsigned Steps = 0;
+  while (!MAWorklist.empty()) {
+    if (++Steps > WorklistLimit) {
+      IsComplete = false;
+      return {};
+    }
+
+    MemoryAccess *MA = MAWorklist.pop_back_val();
+    for (User *U : MA->users()) {
+      if (auto *MUOD = dyn_cast<MemoryUseOrDef>(U)) {
+        if (const Instruction *I = MUOD->getMemoryInst()) {
+          LLVM_DEBUG(dbgs() << "  UseOrDef: " << *I << "\n");
+          // Consider functions which can export the content behind LocDest
+          if (const auto *CB = dyn_cast<CallBase>(I); CB && mayReadMemory(CB)) {
+            if (auto NoCapArgs = getNoCapturePointerArgIndices(CB);
+                canEscapeViaNocaptureArgs(*CB, NoCapArgs, StorePtrBases) &&
+                stemsFromStartStore(MUOD, StartMDef, LocDest, IsComplete,
+                                    Walker)) {
+              LLVM_DEBUG(dbgs() << "  Call may export bytes: " << *CB << "\n");
+              ContentMayEscape = true;
+              return {};
+            }
+          } else if (auto *MU = dyn_cast<MemoryUse>(MUOD)) {
+            if (const auto *Load = dyn_cast<LoadInst>(I);
+                Load && stemsFromStartStore(MU, StartMDef, LocDest, IsComplete,
+                                            Walker)) {
+              LLVM_DEBUG(dbgs() << "  Load read from Store: " << *Load << "\n");
+              ResLoads.push_back(Load);
+            }
+            continue; // No need to enqueue further for MemoryUse
+          }
+        }
+      }
+      tryEnqueueIfNew(cast<MemoryAccess>(U), MAVisited, MAWorklist);
+    }
+  }
+  return ResLoads;
+}
+
+bool EscapeAnalysisInfo::EscapeCaptureTracker::
+    doesStoredPointerEscapeViaLoads(const StoreInst *Store) {
+  LLVM_DEBUG(dbgs() << "\n---- doesStoredPointerEscapeViaLoads " << *Store
+                    << " ----\n");
+  bool IsComplete = false;
+  bool ContentMayEscape = false;
+  const auto Loads =
+      findStoreReadersAndExports(Store, ContentMayEscape, IsComplete);
+  if (!IsComplete) {
+    LLVM_DEBUG(dbgs() << "  Incomplete load collection, escapes\n");
+    return true; // We may have missed loads -> conservatively escape
+  }
+  if (ContentMayEscape) {
+    LLVM_DEBUG(dbgs() << "  Bytes exported by call, escapes\n");
+    return true; // Bytes stored may have been exported -> conservatively escape
+  }
+
+  for (const LoadInst *Load : Loads) {
+    if (!Load->isSimple())
+      return true;
+    if (!Load->getType()->isPointerTy())
+      continue; // Loading non-pointer cannot cause escape
+    if (EAI.solveEscapeFor(*Load, ProcessingSet)) {
+      LLVM_DEBUG(dbgs() << "  -> escapes via load\n");
+      return true;
+    }
+  }
+  LLVM_DEBUG(dbgs() << "  -> does not escape via loads\n");
+  return false;
+}
+
+CaptureTracker::Action
+EscapeAnalysisInfo::EscapeCaptureTracker::captured(const Use *U,
+                                                   UseCaptureInfo CI) {
+  LLVM_DEBUG(dbgs() << "\n--> Analyzing capture use: " << *U->get() << " in "
+                    << *U->getUser() << "\n");
+  const auto *I = cast<Instruction>(U->getUser());
+
+  // If CaptureTracking says this use does not capture, continue exploring.
+  if (capturesNothing(CI.UseCC)) {
+    LLVM_DEBUG(dbgs() << "    Use doesn't capture, continue\n");
+    return Continue; // CaptureTracking says it's not captured, continue
+  }
+
+  // Passthrough ops (gep/bitcast/select/phi..) should be explored transitively.
+  if (CI.isPassthrough()) {
+    LLVM_DEBUG(dbgs() << "    Passthrough operation, continue to result\n");
+    return Continue;
+  }
+
+  // Now handle special cases where CaptureTracking says it's captured,
+  // but we need more sophisticated escape analysis
+
+  if (const auto *Store = dyn_cast<StoreInst>(I)) {
+    // Check if we're storing the pointer (not storing to it)
+    if (Store->getValueOperand() == U->get()) {
+      LLVM_DEBUG(dbgs() << "==> Storing pointer value, analyze destination "
+                        << *Store->getPointerOperand() << "\n");
+      if (!Store->isSimple() ||
+          doesStoreDestEscape(Store->getPointerOperand()) ||
+          doesStoredPointerEscapeViaLoads(Store)) {
+        LLVM_DEBUG(dbgs() << "  Store to escaping destination, escapes\n");
+        Escaped = true;
+        return Stop;
+      }
+      return ContinueIgnoringReturn;
+    }
+    // If we are the destination pointer, this use does not capture the value.
+    LLVM_DEBUG(dbgs() << "    Used as store destination, doesn't escape\n");
+    return ContinueIgnoringReturn;
+  }
+
+  if (isa<ICmpInst>(I)) { // Pure comparisons of addresses do not cause escape.
+    LLVM_DEBUG(dbgs() << "    Pointer comparison, doesn't escape\n");
+    return ContinueIgnoringReturn;
+  }
+
+  // Default: if CaptureTracking still indicates capture, treat as escape.
+  if (capturesAnything(CI.UseCC)) {
+    LLVM_DEBUG(dbgs() << "  Captured by: " << *I << "\n");
+    Escaped = true;
+    return Stop;
+  }
+
+  llvm_unreachable("Unhandled case in EscapeCaptureTracker::captured");
+}
+
+bool EscapeAnalysisInfo::solveEscapeFor(
+    const Value &Ptr, SmallPtrSet<const Value *, 32> &ProcessingSet) {
+  LLVM_DEBUG(dbgs() << "Solving escape for "
+                    << (Ptr.hasName() ? Ptr.getName() : "Load") << "\n");
+  if (const auto CacheIt = Cache.find(&Ptr); CacheIt != Cache.end()) {
+    LLVM_DEBUG(dbgs() << "  Cached result: "
+                    << (CacheIt->second ? "escaped" : "not escaped") << "\n");
+    return CacheIt->second;
+  }
+
+  if (ProcessingSet.contains(&Ptr)) { // Cycle
+    LLVM_DEBUG(dbgs() << "  Cycle detected for " << Ptr.getName()
+                      << ", assume escapes\n");
+    return true;
+  }
+  ProcessingSet.insert(&Ptr);
+
+  EscapeCaptureTracker Tracker(*this, ProcessingSet);
+
+  // Use the CaptureTracking infrastructure to analyze the allocation
+  PointerMayBeCaptured(&Ptr, &Tracker, /*MaxUsesToExplore=*/WorklistLimit);
+  Cache[&Ptr] = Tracker.hasEscaped();
+
+  LLVM_DEBUG(dbgs() << "  Result: "
+                    << (Tracker.hasEscaped() ? "escaped" : "not escaped") << "\n");
+  return Tracker.hasEscaped();
+}
+
+//===----------------------------------------------------------------------===//
+// EscapeAnalysis Core Implementation
+//===----------------------------------------------------------------------===//
+
+bool EscapeAnalysisInfo::isAllocationSite(const Value *V) {
+  if (isa<AllocaInst>(V))
+    return true;
+  if (const auto *CB = dyn_cast<CallBase>(V))
+    return isHeapAllocation(CB, *TLI);
+  return false;
+}
+
+bool EscapeAnalysisInfo::isEscaping(const Value &Alloc) {
+  if (!isAllocationSite(&Alloc)) { // Validate input
+    LLVM_DEBUG(dbgs() << "EscapeAnalysis: Not an allocation: " << Alloc
+                      << "\n");
+    return true; // Conservative: unknown things "escape"
+  }
+
+  LLVM_DEBUG(dbgs() << "EscapeAnalysis: Analyzing " << Alloc << "\n");
+  NumAllocationsAnalyzed++;
+
+  // Track allocations being processed to detect cycles
+  SmallPtrSet<const Value *, 32> ProcessingSet;
+  const bool IsEscaped = solveEscapeFor(Alloc, ProcessingSet);
+
+  if (IsEscaped)
+    NumAllocationsEscaped++;
+
+  return IsEscaped;
+}
+
+void EscapeAnalysisInfo::print(raw_ostream &OS) {
+  bool Any = false;
+  unsigned UnnamedCount = 0;
+
+  for (Instruction &I : instructions(F)) {
+    LLVM_DEBUG(dbgs() << "\nI: " << I << "\n");
+    if (!isAllocationSite(&I))
+      continue;
+
+    Any = true;
+
+    // Stable symbol: use the SSA name if present, otherwise "unnamed#N".
+    StringRef Name = I.hasName() ? I.getName() : StringRef();
+    SmallString<32> Gen;
+    if (Name.empty()) {
+      ++UnnamedCount;
+      (Twine("unnamed#") + Twine(UnnamedCount)).toVector(Gen);
+      Name = Gen;
+    }
+
+    const bool Esc = isEscaping(I);
+    OS << "  " << Name << " escapes: " << (Esc ? "yes" : "no") << "\n";
+  }
+
+  if (!Any)
+    OS << "  none\n";
+  OS << "\n";
+}
+
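+// The cached escape results embed pointers into MemorySSA, LoopInfo and
+// TargetLibraryInfo; if any of those analyses is invalidated, drop the cache
+// and the stale analysis pointers along with it.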
+bool EscapeAnalysisInfo::invalidate(Function &F, const PreservedAnalyses &PA,
+                                    FunctionAnalysisManager::Invalidator &Inv) {
+  if (auto PAC = PA.getChecker<EscapeAnalysis>();
+      PAC.preserved() || PAC.preservedSet<AllAnalysesOn<Function>>())
+    return false;
+
+  if (Inv.invalidate<MemorySSAAnalysis>(F, PA) ||
+      Inv.invalidate<LoopAnalysis>(F, PA) ||
+      Inv.invalidate<TargetLibraryAnalysis>(F, PA)) {
+    Cache.clear();
+    MSSA = nullptr;
+    LI = nullptr;
+    TLI = nullptr;
+    return true;
+  }
+
+  return false;
+}
+
+AnalysisKey EscapeAnalysis::Key;
+
+EscapeAnalysis::Result EscapeAnalysis::run(Function &F,
+                                           FunctionAnalysisManager &FAM) {
+  EscapeAnalysisInfo EAI(F, FAM);
+  return EAI;
+}
+
+//===----------------------------------------------------------------------===//
+// Printing Pass for Verification
+//===----------------------------------------------------------------------===//
+
+PreservedAnalyses
+EscapeAnalysisPrinterPass::run(Function &F, FunctionAnalysisManager &FAM) const {
+  if (F.isDeclaration())
+    return PreservedAnalyses::all();
+  OS << "Printing analysis 'Escape Analysis' for function '" << F.getName()
+     << "':\n";
+  FAM.getResult<EscapeAnalysis>(F).print(OS);
+  return PreservedAnalyses::all();
+}
\ No newline at end of file
diff --git a/llvm/lib/Transforms/Instrumentation/ThreadSanitizer.cpp b/llvm/lib/Transforms/Instrumentation/ThreadSanitizer.cpp
index fd0e9f18b61c9..78c343621bfd8 100644
--- a/llvm/lib/Transforms/Instrumentation/ThreadSanitizer.cpp
+++ b/llvm/lib/Transforms/Instrumentation/ThreadSanitizer.cpp
@@ -41,6 +41,7 @@
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
+#include "llvm/Transforms/Instrumentation/EscapeAnalysis.h"
 #include "llvm/Transforms/Utils/EscapeEnumerator.h"
 #include "llvm/Transforms/Utils/Instrumentation.h"
 #include "llvm/Transforms/Utils/Local.h"
@@ -84,6 +85,11 @@ static cl::opt<bool>
     ClOmitNonCaptured("tsan-omit-by-pointer-capturing", cl::init(true),
                       cl::desc("Omit accesses due to pointer capturing"),
                       cl::Hidden);
+static cl::opt<bool> ClUseEscapeAnalysis(
+    "tsan-use-escape-analysis", cl::init(false),
+    cl::desc("Use EscapeAnalysis to filter memory accesses to non-escaping "
+             "objects"),
+    cl::Hidden);
 
 STATISTIC(NumInstrumentedReads, "Number of instrumented reads");
 STATISTIC(NumInstrumentedWrites, "Number of instrumented writes");
@@ -96,6 +102,8 @@ STATISTIC(NumOmittedReadsFromConstantGlobals,
           "Number of reads from constant globals");
 STATISTIC(NumOmittedReadsFromVtable, "Number of vtable reads");
 STATISTIC(NumOmittedNonCaptured, "Number of accesses ignored due to capturing");
+STATISTIC(NumOmittedByEscapeAnalysis,
+          "Number of accesses ignored due to EscapeAnalysis");
 
 const char kTsanModuleCtorName[] = "tsan.module_ctor";
 const char kTsanInitName[] = "__tsan_init";
@@ -109,7 +117,9 @@ namespace {
 /// ensures the __tsan_init function is in the list of global constructors for
 /// the module.
 struct ThreadSanitizer {
-  ThreadSanitizer() {
+  ThreadSanitizer(const TargetLibraryInfo &TLI, EscapeAnalysisInfo *EAI,
+                  MemorySSA *MSSA, LoopInfo *LI)
+      : TLI(TLI), EAI(EAI), MSSA(MSSA), LI(LI) {
     // Check options and warn user.
     if (ClInstrumentReadBeforeWrite && ClCompoundReadBeforeWrite) {
       errs()
@@ -118,7 +128,7 @@ struct ThreadSanitizer {
     }
   }
 
-  bool sanitizeFunction(Function &F, const TargetLibraryInfo &TLI);
+  bool sanitizeFunction(Function &F, FunctionAnalysisManager *FAM);
 
 private:
   // Internal Instruction wrapper that contains more information about the
@@ -139,12 +149,16 @@ struct ThreadSanitizer {
   bool instrumentAtomic(Instruction *I, const DataLayout &DL);
   bool instrumentMemIntrinsic(Instruction *I);
   void chooseInstructionsToInstrument(SmallVectorImpl<Instruction *> &Local,
-                                      SmallVectorImpl<InstructionInfo> &All,
-                                      const DataLayout &DL);
+                                      SmallVectorImpl<InstructionInfo> &All);
   bool addrPointsToConstantData(Value *Addr);
   int getMemoryAccessFuncIndex(Type *OrigTy, Value *Addr, const DataLayout &DL);
   void InsertRuntimeIgnores(Function &F);
 
+  const TargetLibraryInfo &TLI;
+  EscapeAnalysisInfo *EAI = nullptr;
+  MemorySSA *MSSA = nullptr;
+  LoopInfo *LI = nullptr;
+
   Type *IntptrTy;
   FunctionCallee TsanFuncEntry;
   FunctionCallee TsanFuncExit;
@@ -186,8 +200,19 @@ void insertModuleCtor(Module &M) {
 
 PreservedAnalyses ThreadSanitizerPass::run(Function &F,
                                            FunctionAnalysisManager &FAM) {
-  ThreadSanitizer TSan;
-  if (TSan.sanitizeFunction(F, FAM.getResult<TargetLibraryAnalysis>(F)))
+
+  MemorySSA *MSSA = nullptr;
+  LoopInfo *LI = nullptr;
+  EscapeAnalysisInfo *EAI = nullptr;
+
+  if (ClUseEscapeAnalysis) {
+    MSSA = &FAM.getResult<MemorySSAAnalysis>(F).getMSSA();
+    LI = &FAM.getResult<LoopAnalysis>(F);
+    EAI = &FAM.getResult<EscapeAnalysis>(F);
+  }
+
+  ThreadSanitizer TSan(FAM.getResult<TargetLibraryAnalysis>(F), EAI, MSSA, LI);
+  if (TSan.sanitizeFunction(F, &FAM))
     return PreservedAnalyses::none();
   return PreservedAnalyses::all();
 }
@@ -200,6 +225,7 @@ PreservedAnalyses ModuleThreadSanitizerPass::run(Module &M,
   insertModuleCtor(M);
   return PreservedAnalyses::none();
 }
+
 void ThreadSanitizer::initialize(Module &M, const TargetLibraryInfo &TLI) {
   const DataLayout &DL = M.getDataLayout();
   LLVMContext &Ctx = M.getContext();
@@ -407,6 +433,7 @@ bool ThreadSanitizer::addrPointsToConstantData(Value *Addr) {
 // Currently handled:
 //  - read-before-write (within same BB, no calls between)
 //  - not captured variables
+//  - non-escaping allocations (if -tsan-use-escape-analysis)
 //
 // We do not handle some of the patterns that should not survive
 // after the classic compiler optimizations.
@@ -417,7 +444,7 @@ bool ThreadSanitizer::addrPointsToConstantData(Value *Addr) {
 // 'All' is a vector of insns that will be instrumented.
 void ThreadSanitizer::chooseInstructionsToInstrument(
     SmallVectorImpl<Instruction *> &Local,
-    SmallVectorImpl<InstructionInfo> &All, const DataLayout &DL) {
+    SmallVectorImpl<InstructionInfo> &All) {
   DenseMap<Value *, size_t> WriteTargets; // Map of addresses to index in All
   // Iterate from the end.
   for (Instruction *I : reverse(Local)) {
@@ -452,15 +479,55 @@ void ThreadSanitizer::chooseInstructionsToInstrument(
       }
     }
 
-    const AllocaInst *AI = findAllocaForValue(Addr);
-    // Instead of Addr, we should check whether its base pointer is captured.
-    if (AI && !PointerMayBeCaptured(AI, /*ReturnCaptures=*/true) &&
-        ClOmitNonCaptured) {
-      // The variable is addressable but not captured, so it cannot be
-      // referenced from a different thread and participate in a data race
-      // (see llvm/Analysis/CaptureTracking.h for details).
-      NumOmittedNonCaptured++;
-      continue;
+    if (!ClUseEscapeAnalysis) {
+      // Instead of Addr, we should check whether its base pointer is captured.
+      if (const AllocaInst *AI = findAllocaForValue(Addr);
+          ClOmitNonCaptured && AI &&
+          !PointerMayBeCaptured(AI, /*ReturnCaptures=*/true)) {
+        // The variable is addressable but not captured, so it cannot be
+        // referenced from a different thread and participate in a data race
+        // (see llvm/Analysis/CaptureTracking.h for details).
+        NumOmittedNonCaptured++;
+        continue;
+      }
+    }
+
+    // Use escape analysis if enabled
+    if (ClUseEscapeAnalysis) {
+      LLVM_DEBUG(dbgs() << "[TSan][EA] Analyzing access: " << *I << "\n");
+      SmallPtrSet<const Value *, 8> BaseObjs;
+      bool IsComplete = false;
+      getUnderlyingObjectsThroughLoads(Addr, MSSA, BaseObjs, &TLI, LI,
+                                       &IsComplete);
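+      // If the base-object set is incomplete (the MemorySSA walk bailed out),
+      // conservatively treat the access as escaping; otherwise the access can
+      // be omitted only if every possible base object is non-escaping.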
+      bool IsEscaped = false;
+      if (!IsComplete) {
+        IsEscaped = true;
+      } else {
+        for (const Value *Obj : BaseObjs) {
+          LLVM_DEBUG(dbgs() << "[TSan][EA] Base obj: " << *Obj << "\n");
+          if (EAI->isEscaping(*Obj)) {
+            IsEscaped = true;
+            break;
+          }
+        }
+      }
+      LLVM_DEBUG(
+          dbgs() << "[TSan][EA] Access to " << *Addr
+                 << (IsEscaped ? " ESCAPES\n\n" : " does NOT escape\n\n"));
+
+#ifndef NDEBUG
+      // Consistency check: anything CaptureTracking proves non-captured must
+      // also be proven non-escaping, i.e. escape analysis only refines the
+      // capture-based check above.
+      if (const AllocaInst *AI = findAllocaForValue(Addr);
+          IsEscaped && AI && !PointerMayBeCaptured(AI, /*ReturnCaptures=*/true)) {
+        LLVM_DEBUG(dbgs() << "[TSan][EA] Mismatch: escaped but not captured "
+                          << *AI << " in " << I->getFunction()->getName()
+                          << "\n");
+        report_fatal_error("TSan EA mismatch: non-captured object reported "
+                           "as escaping");
+      }
+#endif
+      if (!IsEscaped) {
+        NumOmittedByEscapeAnalysis++;
+        continue;
+      }
     }
 
     // Instrument this instruction.
@@ -496,7 +563,8 @@ void ThreadSanitizer::InsertRuntimeIgnores(Function &F) {
 }
 
 bool ThreadSanitizer::sanitizeFunction(Function &F,
-                                       const TargetLibraryInfo &TLI) {
+                                       FunctionAnalysisManager *FAM) {
+  LLVM_DEBUG(dbgs() << "\n[TSan] Function: " << F.getName() << "\n");
   // This is required to prevent instrumenting call to __tsan_init from within
   // the module constructor.
   if (F.getName() == kTsanModuleCtorName)
@@ -538,11 +606,10 @@ bool ThreadSanitizer::sanitizeFunction(Function &F,
         if (isa<MemIntrinsic>(Inst))
           MemIntrinCalls.push_back(&Inst);
         HasCalls = true;
-        chooseInstructionsToInstrument(LocalLoadsAndStores, AllLoadsAndStores,
-                                       DL);
+        chooseInstructionsToInstrument(LocalLoadsAndStores, AllLoadsAndStores);
       }
     }
-    chooseInstructionsToInstrument(LocalLoadsAndStores, AllLoadsAndStores, DL);
+    chooseInstructionsToInstrument(LocalLoadsAndStores, AllLoadsAndStores);
   }
 
   // We have collected all loads and stores.
diff --git a/llvm/test/Instrumentation/ThreadSanitizer/escape-analysis-tsan.ll b/llvm/test/Instrumentation/ThreadSanitizer/escape-analysis-tsan.ll
new file mode 100644
index 0000000000000..e75f60c93199a
--- /dev/null
+++ b/llvm/test/Instrumentation/ThreadSanitizer/escape-analysis-tsan.ll
@@ -0,0 +1,250 @@
+; RUN: opt -passes=tsan -tsan-use-escape-analysis < %s -S | FileCheck %s
+
+; This file contains tests for TSan with escape analysis enabled.
+; Goal: skip instrumentation for accesses to provably non-escaping locals, and
+; instrument accesses when the object may escape (returned, stored to a global,
+; reachable via loaded global pointers, double-pointer cycles, atomic/volatile
+; stores, or complex/bail-out paths such as ptrtoint).
+
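+; Instrumented accesses appear as calls into the TSan runtime, e.g.
+;   call void @__tsan_write4(ptr %a)
+; and the CHECK/CHECK-NOT lines below key off these calls.
+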
+target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"
+
+ at g = global i32 0
+ at GPtr = global ptr null
+
+declare void @external(ptr)
+declare noalias ptr @malloc(i64)
+declare void @opaque_call()
+declare void @llvm.donothing() readnone
+
+; =============================================================================
+; LOCAL ALLOCAS THAT DO NOT ESCAPE
+; =============================================================================
+
+; Local alloca: not returned, not stored outside -> no __tsan_* calls
+define void @local_no_escape() nounwind uwtable sanitize_thread {
+  %a = alloca i32, align 4
+  store i32 1, ptr %a, align 4
+  %v = load i32, ptr %a, align 4
+  ret void
+}
+; CHECK-LABEL: define void @local_no_escape
+; CHECK-NOT: call void @__tsan_write4
+; CHECK-NOT: call void @__tsan_read4
+
+; Chain through local pointer (double indirection) remains local
+define void @double_ptr_local_ok() nounwind uwtable sanitize_thread {
+  %x  = alloca i32, align 4
+  %p  = alloca ptr, align 8
+  %pp = alloca ptr, align 8
+  store ptr %x, ptr %p
+  store ptr %p, ptr %pp
+  store i32 1, ptr %x
+  %lv = load i32, ptr %x
+  ret void
+}
+; CHECK-LABEL: define void @double_ptr_local_ok
+; CHECK-NOT: call void @__tsan_write4(ptr %x)
+; CHECK-NOT: call void @__tsan_read4(ptr %x)
+
+; Loaded destination remains local (two-store memphi pattern)
+define void @loaded_dest_memphi_local_ok(i1 %c) nounwind uwtable sanitize_thread {
+  %x  = alloca i32, align 4
+  %p  = alloca ptr, align 8
+  %s1 = alloca ptr, align 8
+  %s2 = alloca ptr, align 8
+  br i1 %c, label %T, label %F
+T:
+  store ptr %s1, ptr %p
+  br label %M
+F:
+  store ptr %s2, ptr %p
+  br label %M
+M:
+  %l = load ptr, ptr %p
+  store ptr %x, ptr %l
+  store i32 9, ptr %x
+  %rv = load i32, ptr %x
+  ret void
+}
+; CHECK-LABEL: define void @loaded_dest_memphi_local_ok
+; CHECK-NOT: call void @__tsan_write4(ptr %x)
+; CHECK-NOT: call void @__tsan_read4(ptr %x)
+
+; Simple local store via intermediate pointer remains local
+define void @store_via_local_ptr_ok() nounwind uwtable sanitize_thread {
+  %x = alloca i32, align 4
+  %slot = alloca ptr, align 8
+  store ptr %x, ptr %slot
+  %l = load ptr, ptr %slot
+  store i32 100, ptr %l
+  %r = load i32, ptr %x
+  ret void
+}
+; CHECK-LABEL: define void @store_via_local_ptr_ok
+; CHECK-NOT: call void @__tsan_write4(ptr %x)
+; CHECK-NOT: call void @__tsan_read4(ptr %x)
+
+; =============================================================================
+; ESCAPES VIA GLOBALS, RETURNS, CALLS
+; =============================================================================
+
+; Address stored to global -> escape, accesses should be instrumented
+define void @store_to_global_escape() nounwind uwtable sanitize_thread {
+  %a = alloca i32, align 4
+  store ptr %a, ptr @GPtr
+  store i32 2, ptr %a, align 4
+  %v = load i32, ptr %a, align 4
+  ret void
+}
+; CHECK-LABEL: define void @store_to_global_escape
+; CHECK: call void @__tsan_write4(ptr %a)
+; CHECK: call void @__tsan_read4(ptr %a)
+
+; Returning alloca pointer -> escape
+define ptr @return_alloca_escape() nounwind uwtable sanitize_thread {
+  %a = alloca i32, align 4
+  store i32 5, ptr %a, align 4
+  ret ptr %a
+}
+; CHECK-LABEL: define ptr @return_alloca_escape
+; CHECK: call void @__tsan_write4(ptr %a)
+
+; Chain becomes escaping: the final node is stored to global
+define void @double_ptr_escape() nounwind uwtable sanitize_thread {
+  %x  = alloca i32, align 4
+  %p  = alloca ptr, align 8
+  %pp = alloca ptr, align 8
+  store ptr %x, ptr %p
+  store ptr %p, ptr %pp
+  store ptr %pp, ptr @GPtr
+  store i32 7, ptr %x
+  %r = load i32, ptr %x
+  ret void
+}
+; CHECK-LABEL: define void @double_ptr_escape
+; CHECK: call void @__tsan_write4(ptr %x)
+; CHECK: call void @__tsan_read4(ptr %x)
+
+; Loaded destination points to global -> escape
+define void @loaded_dest_global_escape() nounwind uwtable sanitize_thread {
+  %x = alloca i32, align 4
+  %p = alloca ptr, align 8
+  store ptr @GPtr, ptr %p
+  %l = load ptr, ptr %p
+  store ptr %x, ptr %l
+  store i32 3, ptr %x
+  %v = load i32, ptr %x
+  ret void
+}
+; CHECK-LABEL: define void @loaded_dest_global_escape
+; CHECK: call void @__tsan_write4(ptr %x)
+; CHECK: call void @__tsan_read4(ptr %x)
+
+; Passing address outside via call: argument escape
+define void @call_external_escape() nounwind uwtable sanitize_thread {
+  %x = alloca i32, align 4
+  call void @external(ptr %x)
+  store i32 300, ptr %x
+  %r = load i32, ptr %x
+  ret void
+}
+; CHECK-LABEL: define void @call_external_escape
+; CHECK: call void @__tsan_write4(ptr %x)
+; CHECK: call void @__tsan_read4(ptr %x)
+
+; Store to global after intermediate slot -> escape
+define void @store_via_local_then_global_escape() nounwind uwtable sanitize_thread {
+  %x = alloca i32, align 4
+  %slot = alloca ptr, align 8
+  store ptr %x, ptr %slot
+  %tmp = load ptr, ptr %slot
+  store ptr %tmp, ptr @GPtr
+  store i32 200, ptr %x
+  %r = load i32, ptr %x
+  ret void
+}
+; CHECK-LABEL: define void @store_via_local_then_global_escape
+; CHECK: call void @__tsan_write4(ptr %x)
+; CHECK: call void @__tsan_read4(ptr %x)
+
+; =============================================================================
+; ATOMICS, VOLATILE, AND BAIL-OUTS
+; =============================================================================
+
+; Atomic store of local address -> considered escape
+define void @atomic_store_escape() nounwind uwtable sanitize_thread {
+  %a = alloca i32, align 4
+  %p = alloca ptr, align 8
+  store atomic ptr %a, ptr %p seq_cst, align 8
+  store i32 11, ptr %a
+  %v = load i32, ptr %a
+  ret void
+}
+; CHECK-LABEL: define void @atomic_store_escape
+; CHECK: call void @__tsan_write4(ptr %a)
+; CHECK: call void @__tsan_read4(ptr %a)
+
+; Volatile store of local address -> escape
+define void @volatile_store_escape() nounwind uwtable sanitize_thread {
+  %a = alloca i32, align 4
+  %p = alloca ptr, align 8
+  store volatile ptr %a, ptr %p
+  store i32 13, ptr %a
+  %v = load i32, ptr %a
+  ret void
+}
+; CHECK-LABEL: define void @volatile_store_escape
+; CHECK: call void @__tsan_write4(ptr %a)
+; CHECK: call void @__tsan_read4(ptr %a)
+
+; Bail-out on complex path: ptrtoint -> conservatively escape
+define void @ptrtoint_bailout_escape() nounwind uwtable sanitize_thread {
+  %a = alloca i32, align 4
+  %c = ptrtoint ptr %a to i64
+  store i32 21, ptr %a
+  %v = load i32, ptr %a
+  ret void
+}
+; CHECK-LABEL: define void @ptrtoint_bailout_escape
+; CHECK: call void @__tsan_write4(ptr %a)
+; CHECK: call void @__tsan_read4(ptr %a)
+
+; =============================================================================
+; HEAP ALLOCATIONS
+; =============================================================================
+
+; Heap allocation stays local (its address is neither leaked nor returned)
+; -> accesses to the heap object are not instrumented
+define void @malloc_not_escape1() nounwind uwtable sanitize_thread {
+  %m = call noalias ptr @malloc(i64 16)
+  store i32 1, ptr %m
+  %v = load i32, ptr %m
+  ret void
+}
+; CHECK-LABEL: define void @malloc_not_escape1
+; CHECK-NOT: call void @__tsan_write4(ptr %m)
+; CHECK-NOT: call void @__tsan_read4(ptr %m)
+
+define void @malloc_not_escape2() nounwind uwtable sanitize_thread {
+entry:
+  %p = alloca ptr, align 8
+  %call = call noalias ptr @malloc(i64 noundef 400)
+  store ptr %call, ptr %p, align 8
+  %0 = load ptr, ptr %p, align 8
+  %arrayidx = getelementptr inbounds i32, ptr %0, i64 33
+  store i32 42, ptr %arrayidx, align 4
+  ret void
+}
+; CHECK-LABEL: define void @malloc_not_escape2
+; CHECK-NOT: call void @__tsan_write4(ptr %arrayidx)
+
+; Heap allocation stored to global -> instrumented
+define void @malloc_global_escape() nounwind uwtable sanitize_thread {
+  %m = call noalias ptr @malloc(i64 320)
+  store ptr %m, ptr @GPtr
+  store i32 2, ptr %m
+  %v = load i32, ptr %m
+  ret void
+}
+; CHECK-LABEL: define void @malloc_global_escape
+; CHECK: call void @__tsan_write8(ptr @GPtr)
+; CHECK: call void @__tsan_write4(ptr %m)
+; CHECK: call void @__tsan_read4(ptr %m)
diff --git a/llvm/test/Instrumentation/ThreadSanitizer/escape-analysis.ll b/llvm/test/Instrumentation/ThreadSanitizer/escape-analysis.ll
new file mode 100644
index 0000000000000..4ca59dfa52837
--- /dev/null
+++ b/llvm/test/Instrumentation/ThreadSanitizer/escape-analysis.ll
@@ -0,0 +1,843 @@
+; RUN: opt -passes='print<escape-analysis>' -disable-output %s 2>&1 | FileCheck %s
+
+; NOTE:
+; - The printer emits:
+;   "Printing analysis 'Escape Analysis' for function '<func>':"
+;   "  <alloc-name> escapes: yes|no" per allocation site (alloca/malloc-like),
+;   "  none" if the function contains no allocation sites.
+; - Names are taken from the SSA values. We avoid relying on "unnamed#N" in tests.
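+;
+; For illustration, for a hypothetical function 'foo' whose only allocation is
+; a non-escaping alloca %a, the output would be:
+;   Printing analysis 'Escape Analysis' for function 'foo':
+;     a escapes: no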
+
+target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"
+
+ at G = global ptr null
+ at GPtr = dso_local global ptr null, align 8
+ at GPtrPtr = dso_local global ptr null, align 8
+ at GPtrPtrPtr = dso_local global ptr null, align 8
+ at GAlias = alias ptr, ptr @GPtr
+
+%S = type { ptr, ptr }
+ at GS = dso_local global %S zeroinitializer, align 8
+ at GArr = dso_local global [2 x %S] zeroinitializer, align 8
+
+declare noalias ptr @malloc(i64)
+declare noalias ptr @external(ptr)
+
+; ============================================================================ ;
+; Basics and locals
+; ============================================================================ ;
+
+; No allocations
+define void @no_allocs() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'no_allocs':
+; CHECK-NEXT: none
+  ret void
+}
+
+; Using pointer in icmp -> no escape
+define void @icmp_no_escape() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'icmp_no_escape':
+; CHECK: a escapes: no
+  %a = alloca i8, align 1
+  %cmp = icmp eq ptr %a, null
+  ret void
+}
+
+; Passthrough via phi/select-like -> no escape
+define void @passthrough_phi_no_escape(i1 %c) {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'passthrough_phi_no_escape':
+; CHECK: a escapes: no
+entry:
+  %a = alloca i8, align 1
+  br i1 %c, label %t, label %f
+t:
+  br label %merge
+f:
+  br label %merge
+merge:
+  %p = phi ptr [ %a, %t ], [ %a, %f ]
+  ret void
+}
+
+; Safe store to local memory -> no escape.
+define void @store_to_local_ok() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'store_to_local_ok':
+; CHECK: a escapes: no
+; CHECK: p escapes: no
+  %a = alloca i8, align 1
+  %p = alloca ptr, align 8
+  store ptr %a, ptr %p
+  ret void
+}
+
+; Chain through local pointer (double indirection) remains local:
+; %x is stored in %p, %p in %pp -> no escape.
+define void @double_ptr_local_ok() sanitize_thread {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'double_ptr_local_ok':
+; CHECK:  x escapes: no
+; CHECK:  p escapes: no
+; CHECK:  pp escapes: no
+  %x  = alloca i32, align 4
+  %p  = alloca ptr, align 8
+  %pp = alloca ptr, align 8
+  store ptr %x, ptr %p
+  store ptr %p, ptr %pp
+  store i32 1, ptr %x
+  %lv = load i32, ptr %x
+  ret void
+}
+
+; ============================================================================ ;
+; Returns and heap allocations
+; ============================================================================ ;
+
+; Returning alloca pointer -> escape
+define ptr @return_alloca_escape() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'return_alloca_escape':
+; CHECK: a escapes: yes
+  %a = alloca i8, align 1
+  ret ptr %a
+}
+
+; Malloc-like allocations
+define ptr @malloc_local_no_escape() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'malloc_local_no_escape':
+; CHECK: m1 escapes: no
+; CHECK: m2 escapes: yes
+  %m1 = call ptr @malloc(i64 16)
+  %m2 = call ptr @malloc(i64 32)
+  ret ptr %m2
+}
+
+; Escaping malloc results: one pointer is stored directly to a global, the
+; other is leaked through a local slot
+define dso_local void @malloc_escape() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'malloc_escape':
+; CHECK:   p escapes: no
+; CHECK:   call escapes: yes
+; CHECK:   call1 escapes: yes
+entry:
+  %p = alloca ptr, align 8
+  %call = call noalias ptr @malloc(i64 noundef 4)
+  store ptr %call, ptr @GPtr, align 8
+  %call1 = call noalias ptr @malloc(i64 noundef 4)
+  store ptr %call1, ptr %p, align 8
+  %0 = load ptr, ptr %p, align 8
+  store ptr %0, ptr @GPtr, align 8
+  ret void
+}
+
+; Store to malloc'ed array element, which escapes -> local escapes
+define dso_local void @escape_through_malloc() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'escape_through_malloc':
+; CHECK:   a escapes: yes
+; CHECK:   x escapes: yes
+; CHECK:   b escapes: no
+; CHECK:   y escapes: yes
+; CHECK:   call escapes: yes
+; CHECK:   call1 escapes: yes
+entry:
+  %a = alloca ptr, align 8
+  %x = alloca i32, align 4
+  %b = alloca ptr, align 8
+  %y = alloca i32, align 4
+  %call = call noalias ptr @malloc(i64 noundef 80)
+  store ptr %call, ptr %a, align 8
+  store ptr %a, ptr @GPtrPtrPtr, align 8
+  %0 = load ptr, ptr %a, align 8
+  %arrayidx = getelementptr inbounds ptr, ptr %0, i64 5
+  store ptr %x, ptr %arrayidx, align 8
+  %call1 = call noalias ptr @malloc(i64 noundef 80)
+  store ptr %call1, ptr %b, align 8
+  %1 = load ptr, ptr %b, align 8
+  store ptr %1, ptr @GPtrPtr, align 8
+  %2 = load ptr, ptr %b, align 8
+  %arrayidx2 = getelementptr inbounds ptr, ptr %2, i64 5
+  store ptr %y, ptr %arrayidx2, align 8
+  ret void
+}
+
+; Store to malloc'ed array element, which does not escape -> no escape
+define dso_local void @no_escape_through_malloc() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'no_escape_through_malloc':
+; CHECK:   a escapes: no
+; CHECK:   x escapes: no
+; CHECK:   b escapes: no
+; CHECK:   y escapes: no
+; CHECK:   call escapes: no
+; CHECK:   call1 escapes: no
+entry:
+  %a = alloca ptr, align 8
+  %x = alloca i32, align 4
+  %b = alloca ptr, align 8
+  %y = alloca i32, align 4
+  %call = call noalias ptr @malloc(i64 noundef 80)
+  store ptr %call, ptr %a, align 8
+  %0 = load ptr, ptr %a, align 8
+  %arrayidx = getelementptr inbounds ptr, ptr %0, i64 5
+  store ptr %x, ptr %arrayidx, align 8
+  %call1 = call noalias ptr @malloc(i64 noundef 80)
+  store ptr %call1, ptr %b, align 8
+  %1 = load ptr, ptr %b, align 8
+  %arrayidx2 = getelementptr inbounds ptr, ptr %1, i64 5
+  store ptr %y, ptr %arrayidx2, align 8
+  ret void
+}
+
+; ============================================================================ ;
+; Globals, arguments, and mixed destinations
+; ============================================================================ ;
+
+; Stores to a global, a global alias, and a global struct field -> escape
+define void @store_to_global_escape() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'store_to_global_escape':
+; CHECK: a escapes: yes
+; CHECK: b escapes: yes
+; CHECK: c escapes: yes
+  %a = alloca i8, align 1
+  %b = alloca i8, align 1
+  %c = alloca i8, align 1
+  store ptr %a, ptr @G
+  store ptr %b, ptr @GAlias
+  %f0 = getelementptr inbounds %S, ptr @GS, i64 0, i32 0
+  store ptr %c, ptr %f0, align 8
+  ret void
+}
+
+; Store to argument -> escape
+define void @store_to_arg_escape(ptr %out) {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'store_to_arg_escape':
+; CHECK: a escapes: yes
+  %a = alloca i8, align 1
+  store ptr %a, ptr %out
+  ret void
+}
+
+; Store to the pointer returned by an external function -> escape
+define void @store_to_unknown_ret_escape() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'store_to_unknown_ret_escape':
+; CHECK: a escapes: yes
+; CHECK: b escapes: yes
+  %a = alloca i8, align 1
+  %b = alloca i8, align 1
+  %p = call ptr @external(ptr %b)
+  store ptr %a, ptr %p
+  ret void
+}
+
+; Cyclic dependency between local allocas, one stored to global -> both escape
+define void @cycle_allocas_escape() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'cycle_allocas_escape':
+; CHECK: a escapes: yes
+; CHECK: b escapes: yes
+  %a = alloca ptr, align 8
+  %b = alloca ptr, align 8
+  store ptr %a, ptr %b
+  store ptr %b, ptr %a
+  store ptr %a, ptr @G
+  ret void
+}
+
+; Destination via phi mixing local and global -> escape
+define void @phi_mixed_dest_escape(i1 %c) {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'phi_mixed_dest_escape':
+; CHECK: a escapes: yes
+; CHECK: p escapes: no
+  %a = alloca i8, align 1
+  %p = alloca ptr, align 8
+  br i1 %c, label %t, label %f
+t:
+  br label %m
+f:
+  br label %m
+m:
+  %dst = phi ptr [ %p, %t ], [ @GPtr, %f ]
+  store ptr %a, ptr %dst, align 8
+  ret void
+}
+
+; Store a local pointer into a slot, then overwrite the slot with a pointer
+; loaded from a global -> nothing escapes
+define dso_local void @store_ptr_store_from_global_no_escape() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'store_ptr_store_from_global_no_escape':
+; CHECK: x escapes: no
+; CHECK: p escapes: no
+  %x = alloca i32, align 4
+  %p = alloca ptr, align 8
+  store ptr %x, ptr %p, align 8
+  %1 = load ptr, ptr @GPtr, align 8
+  store ptr %1, ptr %p, align 8
+  ret void
+}
+
+declare void @varargs_func(ptr, ...)
+
+; Passing pointer to varargs -> escape
+define void @varargs_escape() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'varargs_escape':
+; CHECK: a escapes: yes
+  %a = alloca i32, align 4
+  call void (ptr, ...) @varargs_func(ptr null, ptr %a)
+  ret void
+}
+
+; ============================================================================ ;
+; Loaded destination patterns
+; ============================================================================ ;
+
+; Store through pointer loaded from argument (LiveOnEntry) -> escape
+define void @store_through_loaded_arg_escape(ptr %out) {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'store_through_loaded_arg_escape':
+; CHECK: a escapes: yes
+  %a = alloca i8, align 1
+  %l = load ptr, ptr %out, align 8
+  store ptr %a, ptr %l, align 8
+  ret void
+}
+
+; Multiple stores to the same slot: the last stored pointer escapes, the
+; overwritten one does not
+define void @multiple_stores_last_escapes(i1 %c) {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'multiple_stores_last_escapes':
+; CHECK: x1 escapes: no
+; CHECK: x2 escapes: yes
+; CHECK: p escapes: no
+  %x1 = alloca i32, align 4
+  %x2 = alloca i32, align 4
+  %p = alloca ptr, align 8
+
+  store ptr %x1, ptr %p, align 8
+  store ptr %x2, ptr %p, align 8
+
+  %loaded = load ptr, ptr %p, align 8
+  store ptr %loaded, ptr @GPtr, align 8
+  ret void
+}
+
+; Loaded destination with MemoryPhi (two stores) -> no escape
+define void @loaded_dest_memphi_local_ok(i1 %c) {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'loaded_dest_memphi_local_ok':
+; CHECK: x escapes: no
+; CHECK: p escapes: no
+; CHECK: s1 escapes: no
+; CHECK: s2 escapes: no
+entry:
+  %x = alloca i32, align 4
+  %p = alloca ptr, align 8
+  %s1 = alloca ptr, align 8
+  %s2 = alloca ptr, align 8
+  br i1 %c, label %t, label %f
+t:
+  store ptr %s1, ptr %p, align 8
+  br label %m
+f:
+  store ptr %s2, ptr %p, align 8
+  br label %m
+m:
+  %l = load ptr, ptr %p, align 8
+  store ptr %x, ptr %l, align 8
+  ret void
+}
+
+; ============================================================================ ;
+; Arrays and GEP
+; ============================================================================ ;
+
+; Store to local array element via GEP -> no escape
+define void @store_to_gep_local_ok() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'store_to_gep_local_ok':
+; CHECK: a escapes: no
+; CHECK: arr escapes: no
+  %a = alloca i8, align 1
+  %arr = alloca [2 x ptr], align 8
+  %elem = getelementptr inbounds [2 x ptr], ptr %arr, i64 0, i64 1
+  store ptr %a, ptr %elem, align 8
+  ret void
+}
+
+; Escape through heap array element: store pointer to local into malloc'ed array
+; element, then read back and store to global -> local escapes; the malloc escapes.
+define dso_local void @escape_through_heap_array_element() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'escape_through_heap_array_element':
+; CHECK:  x escapes: yes
+; CHECK:  p escapes: no
+; CHECK:  call escapes: yes
+entry:
+  %x = alloca i32, align 4
+  %p = alloca ptr, align 8
+  %call = call noalias ptr @malloc(i64 noundef 800)
+  store ptr %call, ptr %p, align 8
+  %0 = load ptr, ptr %p, align 8
+  %arrayidx = getelementptr inbounds ptr, ptr %0, i64 33
+  store ptr %x, ptr %arrayidx, align 8
+  %1 = load ptr, ptr %p, align 8
+  %arrayidx1 = getelementptr inbounds ptr, ptr %1, i64 11
+  %2 = load ptr, ptr %arrayidx1, align 8
+  store ptr %2, ptr @GPtr, align 8
+  ret void
+}
+
+; Escape through stack array element read from another index: local escapes,
+; array remains local (element copied out to global).
+define dso_local void @escape_through_array_element() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'escape_through_array_element':
+; CHECK:   x escapes: yes
+; CHECK:   p escapes: no
+entry:
+  %x = alloca i32, align 4
+  %p = alloca [100 x ptr], align 16
+  %arrayidx = getelementptr inbounds [100 x ptr], ptr %p, i64 0, i64 33
+  store ptr %x, ptr %arrayidx, align 8
+  %arrayidx1 = getelementptr inbounds [100 x ptr], ptr %p, i64 0, i64 0
+  %0 = load ptr, ptr %arrayidx1, align 16
+  store ptr %0, ptr @GPtr, align 8
+  ret void
+}
+
+; Whole array (stack): leak address of an element itself -> the array escapes
+define dso_local void @escape_whole_array() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'escape_whole_array':
+; CHECK:   a1 escapes: yes
+entry:
+  %a1 = alloca [100 x i32], align 16
+  %arrayidx = getelementptr inbounds [100 x i32], ptr %a1, i64 0, i64 33
+  store ptr %arrayidx, ptr @GPtr, align 8
+  ret void
+}
+
+; Whole array (heap): leak the address of the local slot holding the malloc
+; pointer -> the slot escapes, and the heap object is reachable through it.
+define dso_local void @escape_whole_array_heap() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'escape_whole_array_heap':
+; CHECK:   a2 escapes: yes
+; CHECK:   call escapes: yes
+entry:
+  %a2 = alloca ptr, align 8
+  %call = call noalias ptr @malloc(i64 noundef 400)
+  store ptr %call, ptr %a2, align 8
+  store ptr %a2, ptr @GPtrPtr, align 8
+  ret void
+}
+
+; Struct (stack): leak address of a field -> the struct itself escapes
+define void @struct_stack_self_escape_via_field_addr() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'struct_stack_self_escape_via_field_addr':
+; CHECK: s escapes: yes
+  %s = alloca %S, align 8
+  %f0 = getelementptr inbounds %S, ptr %s, i64 0, i32 0
+  store ptr %f0, ptr @GPtr, align 8
+  ret void
+}
+
+; Struct (heap, via malloc): leak address of a field -> the heap object escapes
+define void @struct_heap_self_escape_via_field_addr() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'struct_heap_self_escape_via_field_addr':
+; CHECK: m escapes: yes
+  %m = call ptr @malloc(i64 16)
+  %f0 = getelementptr inbounds %S, ptr %m, i64 0, i32 0
+  store ptr %f0, ptr @GPtr, align 8
+  ret void
+}
+
+; Two-dimensional heap array of pointer-containing structs: store a local into
+; an element, then leak an element address to a global -> the local and both
+; mallocs escape; the stack slots stay local.
+define dso_local void @two_dimensional_array_escape() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'two_dimensional_array_escape':
+; CHECK:   x escapes: yes
+; CHECK:   arr escapes: no
+; CHECK:   i escapes: no
+; CHECK:   call escapes: yes
+; CHECK:   call1 escapes: yes
+entry:
+  %x = alloca i32, align 4
+  %arr = alloca ptr, align 8
+  %i = alloca i32, align 4
+  %call = call noalias ptr @malloc(i64 noundef 80)
+  store ptr %call, ptr %arr, align 8
+  store i32 0, ptr %i, align 4
+  br label %for.cond
+
+for.cond:                                         ; preds = %for.inc, %entry
+  %0 = load i32, ptr %i, align 4
+  %cmp = icmp slt i32 %0, 10
+  br i1 %cmp, label %for.body, label %for.end
+
+for.body:                                         ; preds = %for.cond
+  %call1 = call noalias ptr @malloc(i64 noundef 160)
+  %1 = load ptr, ptr %arr, align 8
+  %2 = load i32, ptr %i, align 4
+  %idxprom = sext i32 %2 to i64
+  %arrayidx = getelementptr inbounds ptr, ptr %1, i64 %idxprom
+  store ptr %call1, ptr %arrayidx, align 8
+  br label %for.inc
+
+for.inc:                                          ; preds = %for.body
+  %3 = load i32, ptr %i, align 4
+  %inc = add nsw i32 %3, 1
+  store i32 %inc, ptr %i, align 4
+  br label %for.cond
+
+for.end:                                          ; preds = %for.cond
+  %4 = load ptr, ptr %arr, align 8
+  %arrayidx2 = getelementptr inbounds ptr, ptr %4, i64 5
+  %5 = load ptr, ptr %arrayidx2, align 8
+  %arrayidx3 = getelementptr inbounds %S, ptr %5, i64 5
+  %f1 = getelementptr inbounds nuw %S, ptr %arrayidx3, i32 0, i32 0
+  store ptr %x, ptr %f1, align 8
+  %6 = load ptr, ptr %arr, align 8
+  %arrayidx4 = getelementptr inbounds ptr, ptr %6, i64 0
+  %7 = load ptr, ptr %arrayidx4, align 8
+  %arrayidx5 = getelementptr inbounds %S, ptr %7, i64 0
+  store ptr %arrayidx5, ptr @GPtr, align 8
+  ret void
+}
+
+; ============================================================================ ;
+; Structs and struct fields
+; ============================================================================ ;
+
+; Store into field of a local struct -> no escape
+define void @struct_field_local_ok() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'struct_field_local_ok':
+; CHECK: x escapes: no
+; CHECK: s escapes: no
+  %x = alloca i8, align 1
+  %s = alloca %S, align 8
+  %f0 = getelementptr inbounds %S, ptr %s, i64 0, i32 0
+  store ptr %x, ptr %f0, align 8
+  ret void
+}
+
+; Loaded-dest via field of a local struct -> no escape
+define void @loaded_dest_struct_local_ok() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'loaded_dest_struct_local_ok':
+; CHECK: x escapes: no
+; CHECK: q escapes: no
+; CHECK: s escapes: no
+  %x = alloca i8, align 1
+  %q = alloca ptr, align 8
+  %s = alloca %S, align 8
+  %f1 = getelementptr inbounds %S, ptr %s, i64 0, i32 1
+  store ptr %q, ptr %f1, align 8
+  %l = load ptr, ptr %f1, align 8
+  store ptr %x, ptr %l, align 8
+  ret void
+}
+
+; Local struct holds pointer to local; then the pointer is stored to global
+; -> local escapes, struct remains local.
+define void @struct_field_local_escape_via_global() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'struct_field_local_escape_via_global':
+; CHECK: x escapes: yes
+; CHECK: s escapes: no
+  %x = alloca i8, align 1
+  %s = alloca %S, align 8
+  %f0 = getelementptr inbounds %S, ptr %s, i64 0, i32 0
+  store ptr %x, ptr %f0, align 8
+  %loaded = load ptr, ptr %f0, align 8
+  store ptr %loaded, ptr @GPtr, align 8
+  ret void
+}
+
+; The address of a local slot is stored into a global struct field ->
+; q escapes, and x escapes through the escaping slot
+define void @loaded_dest_struct_global_escape() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'loaded_dest_struct_global_escape':
+; CHECK: x escapes: yes
+; CHECK: q escapes: yes
+  %x = alloca i8, align 1
+  %q = alloca ptr, align 8
+  store ptr %q, ptr getelementptr inbounds (%S, ptr @GS, i32 0, i32 1), align 8
+  %l = load ptr, ptr %q, align 8
+  store ptr %x, ptr %l, align 8
+  ret void
+}
+
+; Select between local and global struct as container -> escape
+define void @select_struct_dest_mixed_escape(i1 %c) {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'select_struct_dest_mixed_escape':
+; CHECK: x escapes: yes
+; CHECK: s escapes: no
+  %x = alloca i8, align 1
+  %s = alloca %S, align 8
+  %dst = select i1 %c, ptr %s, ptr @GS
+  %f0 = getelementptr inbounds %S, ptr %dst, i64 0, i32 0
+  store ptr %x, ptr %f0, align 8
+  ret void
+}
+
+; Return a struct containing a pointer to a local -> escape
+define %S @return_struct_containing_ptr_escape() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'return_struct_containing_ptr_escape':
+; CHECK: x escapes: yes
+  %x = alloca i8, align 1
+  %u = insertvalue %S undef, ptr %x, 0
+  %u2 = insertvalue %S %u, ptr null, 1
+  ret %S %u2
+}
+
+; Deep hierarchy of nested structures
+%struct.Parent = type { i32, %struct.Child }
+%struct.Child = type { i32, i32, ptr }
+%struct.GrandParent = type { i32, %struct.Parent }
+define dso_local void @escape_through_nested_struct() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'escape_through_nested_struct':
+; CHECK:  sp escapes: yes
+; CHECK:  sg escapes: yes
+; CHECK:  x escapes: yes
+entry:
+  %sp = alloca %struct.Parent, align 8
+  %sg = alloca %struct.GrandParent, align 8
+  %x = alloca i32, align 4
+  %s = getelementptr inbounds nuw %struct.Parent, ptr %sp, i32 0, i32 1
+  %a = getelementptr inbounds nuw %struct.Child, ptr %s, i32 0, i32 0
+  store ptr %a, ptr @GPtr, align 8
+  %sp1 = getelementptr inbounds nuw %struct.GrandParent, ptr %sg, i32 0, i32 1
+  %s2 = getelementptr inbounds nuw %struct.Parent, ptr %sp1, i32 0, i32 1
+  %b = getelementptr inbounds nuw %struct.Child, ptr %s2, i32 0, i32 1
+  store ptr %b, ptr @GPtr, align 8
+  %sp3 = getelementptr inbounds nuw %struct.GrandParent, ptr %sg, i32 0, i32 1
+  %s4 = getelementptr inbounds nuw %struct.Parent, ptr %sp3, i32 0, i32 1
+  %p = getelementptr inbounds nuw %struct.Child, ptr %s4, i32 0, i32 2
+  store ptr %x, ptr %p, align 8
+  ret void
+}
+
+; ============================================================================ ;
+; Atomics and volatile
+; ============================================================================ ;
+
+; Atomic store of pointer -> treated as escape
+define void @atomic_store_escape() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'atomic_store_escape':
+; CHECK: a escapes: yes
+; CHECK: p escapes: no
+  %a = alloca i8, align 1
+  %p = alloca ptr, align 8
+  store atomic ptr %a, ptr %p seq_cst, align 8
+  ret void
+}
+
+; Volatile store of pointer -> treated as escape
+define void @volatile_store_escape() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'volatile_store_escape':
+; CHECK: a escapes: yes
+; CHECK: p escapes: no
+  %a = alloca i8, align 1
+  %p = alloca ptr, align 8
+  store volatile ptr %a, ptr %p
+  ret void
+}
+
+; ============================================================================ ;
+; Casts
+; ============================================================================ ;
+
+; PtrToInt cast (reached through a passthrough select chain) -> conservatively
+; treated as escape
+define void @ptrtoint_chain_escape() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'ptrtoint_chain_escape':
+; CHECK: a escapes: yes
+  %a = alloca i8, align 1
+  %c1 = icmp ne ptr %a, null
+  %c2 = icmp eq ptr %a, null
+  %sel = select i1 %c1, ptr %a, ptr %a
+  %use = ptrtoint ptr %sel to i64
+  ret void
+}
+
+; ============================================================================ ;
+; Escape through double pointers
+; ============================================================================ ;
+
+; Store pp to global triple pointer -> all three allocations escape
+define dso_local void @esc_through_double_ptr1() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'esc_through_double_ptr1':
+; CHECK:  x escapes: yes
+; CHECK:  p escapes: yes
+; CHECK:  pp escapes: yes
+entry:
+  %x = alloca i32, align 4
+  %p = alloca ptr, align 8
+  %pp = alloca ptr, align 8
+  store ptr %x, ptr %p, align 8
+  store ptr %p, ptr %pp, align 8
+  store ptr %pp, ptr @GPtrPtrPtr, align 8
+  ret void
+}
+
+; Store p (loaded from pp) to global double pointer
+; -> x and p escape, pp stays local
+define dso_local void @esc_through_double_ptr2() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'esc_through_double_ptr2':
+; CHECK:  x escapes: yes
+; CHECK:  p escapes: yes
+; CHECK:  pp escapes: no
+entry:
+  %x = alloca i32, align 4
+  %p = alloca ptr, align 8
+  %pp = alloca ptr, align 8
+  store ptr %x, ptr %p, align 8
+  store ptr %p, ptr %pp, align 8
+  %0 = load ptr, ptr %pp, align 8
+  store ptr %0, ptr @GPtrPtr, align 8
+  ret void
+}
+
+; ============================================================================ ;
+; Loops
+; ============================================================================ ;
+
+; Store a pointer in a loop; the escape through the loop-carried element store
+; is still detected
+define void @store_in_loop_escape(i32 %n) {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'store_in_loop_escape':
+; CHECK: x escapes: yes
+; CHECK: arr escapes: no
+entry:
+  %x = alloca i32, align 4
+  %arr = alloca [10 x ptr], align 8
+  br label %loop
+
+loop:
+  %i = phi i32 [ 0, %entry ], [ %next, %loop ]
+  %gep = getelementptr inbounds [10 x ptr], ptr %arr, i64 0, i32 %i
+  store ptr %x, ptr %gep, align 8
+  %next = add i32 %i, 1
+  %cond = icmp slt i32 %next, %n
+  br i1 %cond, label %loop, label %exit
+
+exit:
+  %load_gep = getelementptr inbounds [10 x ptr], ptr %arr, i64 0, i32 5
+  %loaded = load ptr, ptr %load_gep, align 8
+  store ptr %loaded, ptr @GPtr, align 8
+  ret void
+}
+
+; ============================================================================ ;
+; Calls with nocapture arguments (e.g. memory intrinsics)
+; ============================================================================ ;
+
+declare void @memintr_like_func(ptr noalias readonly captures(none))
+declare void @memintr_like_func_writeonly(ptr noalias readonly captures(none)) writeonly
+declare void @memintr_like_func_arg_writeonly(ptr noalias writeonly captures(none))
+declare void @memintr_like_func_readonly(ptr noalias readonly captures(none)) readonly
+
+; Object stored into an array that is then passed to a nocapture argument ->
+; escape, unless the callee or the argument is writeonly
+define dso_local void @pass_nocapture_arg() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'pass_nocapture_arg':
+; Case 0: store followed by nocapture argument, but writeonly -> no escape
+; CHECK:  x01 escapes: no
+; CHECK:  arr01 escapes: no
+; CHECK:  x02 escapes: no
+; CHECK:  arr02 escapes: no
+; Case 1: stored pointer is read by a nocapture-like call -> escape
+; CHECK:  x1 escapes: yes
+; CHECK:  arr1 escapes: no
+; Case 2: another independent case -> escape
+; CHECK:  x2 escapes: yes
+; CHECK:  arr2 escapes: no
+; Overwrite case below: first store is overwritten before the call -> x3 does not escape, x4 escapes
+; CHECK:  x3 escapes: no
+; CHECK:  x4 escapes: yes
+; CHECK:  arr3 escapes: no
+entry:
+  ; Case 0-1: store followed by nocapture argument, but the function is writeonly -> no escape
+  %x01 = alloca i32, align 4
+  %arr01 = alloca [10 x ptr], align 16
+  %arrayidx0 = getelementptr inbounds [10 x ptr], ptr %arr01, i64 0, i64 5
+  store ptr %x01, ptr %arrayidx0, align 8
+  %arraydecay0 = getelementptr inbounds [10 x ptr], ptr %arr01, i64 0, i64 0
+  call void @memintr_like_func_writeonly(ptr align 16 %arraydecay0)
+
+  ; Case 0-2: store followed by nocapture argument,
+  ; but the function argument is writeonly -> no escape
+  %x02 = alloca i32, align 4
+  %arr02 = alloca [10 x ptr], align 16
+  %arrayidx02 = getelementptr inbounds [10 x ptr], ptr %arr02, i64 0, i64 5
+  store ptr %x02, ptr %arrayidx02, align 8
+  %arraydecay02 = getelementptr inbounds [10 x ptr], ptr %arr02, i64 0, i64 0
+  call void @memintr_like_func_arg_writeonly(ptr align 16 %arraydecay02)
+
+  ; Case 1: stored pointer is read by a nocapture-like call -> escape
+  %x1 = alloca i32, align 4
+  %arr1 = alloca [10 x ptr], align 16
+  %arrayidx1 = getelementptr inbounds [10 x ptr], ptr %arr1, i64 0, i64 5
+  store ptr %x1, ptr %arrayidx1, align 8
+  %arraydecay1 = getelementptr inbounds [10 x ptr], ptr %arr1, i64 0, i64 0
+  call void @memintr_like_func(ptr align 16 %arraydecay1)
+
+  ; Case 2: another independent case -> escape
+  %x2 = alloca i32, align 4
+  %arr2 = alloca [10 x ptr], align 16
+  %arrayidx2 = getelementptr inbounds [10 x ptr], ptr %arr2, i64 0, i64 5
+  store ptr %x2, ptr %arrayidx2, align 8
+  %arraydecay2 = getelementptr inbounds [10 x ptr], ptr %arr2, i64 0, i64 0
+  call void @memintr_like_func_readonly(ptr align 16 %arraydecay2)
+
+  ; Case 3: Overwrite before the nocapture call:
+  ; first store (%x3) is overwritten by second store (%x4) before the call.
+  ; Expected: x3 does not escape; x4 escapes; the array itself does not escape.
+  %x3 = alloca i32, align 4
+  %x4 = alloca i32, align 4
+  %arr3 = alloca [10 x ptr], align 16
+  %slot3 = getelementptr inbounds [10 x ptr], ptr %arr3, i64 0, i64 5
+  store ptr %x3, ptr %slot3, align 8
+  store ptr %x4, ptr %slot3, align 8
+  %decay3 = getelementptr inbounds [10 x ptr], ptr %arr3, i64 0, i64 0
+  call void @memintr_like_func(ptr align 16 %decay3)
+
+  ret void
+}
+
+; Calls with nocapture arguments (heap): passing a heap pointer itself to a nocapture
+; consumer does NOT make the heap allocation escape; however, objects stored inside
+; that heap memory may escape. Includes overwrite cases for both heap pointers and x's.
+define dso_local void @pass_nocapture_arg_heap() {
+; CHECK-LABEL: Printing analysis 'Escape Analysis' for function 'pass_nocapture_arg_heap':
+; Case 1: Store local x into heap memory, then nocapture reads -> x escapes
+; CHECK:  x1 escapes: yes
+; CHECK:  arr1 escapes: no
+; CHECK:  call1 escapes: no
+; Case 2: Readonly variant -> x still escapes
+; CHECK:  x2 escapes: yes
+; CHECK:  arr2 escapes: no
+; CHECK:  call2 escapes: no
+; Case 3: Overwrite scenario for x on heap:
+; CHECK:  x3 escapes: no
+; CHECK:  x4 escapes: yes
+; CHECK:  arr3 escapes: no
+; CHECK:  call3 escapes: no
+entry:
+  ; Case 1: Store local x into heap memory, then nocapture reads -> x escapes
+  %x1 = alloca i32, align 4
+  %arr1 = alloca ptr, align 8
+  %call1 = call noalias ptr @malloc(i64 noundef 80)
+  store ptr %call1, ptr %arr1, align 8
+  %hp1 = load ptr, ptr %arr1, align 8
+  %slot1 = getelementptr inbounds ptr, ptr %hp1, i64 5
+  store ptr %x1, ptr %slot1, align 8
+  %base1 = load ptr, ptr %arr1, align 8
+  call void @memintr_like_func(ptr align 8 %base1)
+
+  ; Case 2: Readonly variant -> x still escapes
+  %x2 = alloca i32, align 4
+  %arr2 = alloca ptr, align 8
+  %call2 = call noalias ptr @malloc(i64 noundef 80)
+  store ptr %call2, ptr %arr2, align 8
+  %hp2 = load ptr, ptr %arr2, align 8
+  %slot2 = getelementptr inbounds ptr, ptr %hp2, i64 5
+  store ptr %x2, ptr %slot2, align 8
+  %base2 = load ptr, ptr %arr2, align 8
+  call void @memintr_like_func_readonly(ptr align 8 %base2)
+
+  ; Case 3: Overwrite scenario for x on heap:
+  %x3 = alloca i32, align 4
+  %x4 = alloca i32, align 4
+  %arr3 = alloca ptr, align 8
+  %call3 = call noalias ptr @malloc(i64 noundef 80)
+  store ptr %call3, ptr %arr3, align 8
+  %hp3 = load ptr, ptr %arr3, align 8
+  %slot3 = getelementptr inbounds ptr, ptr %hp3, i64 5
+  store ptr %x3, ptr %slot3, align 8
+  store ptr %x4, ptr %slot3, align 8
+  %base3 = load ptr, ptr %arr3, align 8
+  call void @memintr_like_func(ptr align 8 %base3)
+
+  ret void
+}
+
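
For reviewers who want to exercise this locally, the two RUN lines in the new tests can be invoked directly; a minimal sketch, assuming `opt` is built from a tree with this patch applied:

```sh
# Instrument with TSan plus the (off-by-default) escape-analysis filter:
opt -passes=tsan -tsan-use-escape-analysis -S input.ll

# Print the per-allocation escape verdicts for inspection:
opt -passes='print<escape-analysis>' -disable-output input.ll 2>&1
```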



More information about the llvm-commits mailing list