[llvm-bugs] [Bug 47181] New: [libunwind] FrameHeaderCache broken/segfaulting in multithreaded environments (e.g. lld)
via llvm-bugs
llvm-bugs at lists.llvm.org
Sat Aug 15 12:36:22 PDT 2020
https://bugs.llvm.org/show_bug.cgi?id=47181
Bug ID: 47181
Summary: [libunwind] FrameHeaderCache broken/segfaulting in
multithreaded environments (e.g. lld)
Product: new-bugs
Version: trunk
Hardware: PC
OS: All
Status: NEW
Severity: enhancement
Priority: P
Component: new bugs
Assignee: unassignedbugs at nondot.org
Reporter: dimitry at andric.com
CC: htmldeveloper at gmail.com, llvm-bugs at lists.llvm.org
Linking binaries with lld 11, which uses libunwind as its unwinder, segfaults
semi-randomly as lld exits and unwinds many threads in quick succession.
This is apparently caused by the static global FrameHeaderCache introduced in
https://reviews.llvm.org/rGc53c2058ffb8, which has no locking whatsoever.
When multiple threads call into FrameHeaderCache::add() concurrently, a race
can leave both FrameHeaderCache::MostRecentlyUsed and FrameHeaderCache::Unused
as nullptr; when a thread then enters the loop at line 131:
122   void add(const UnwindInfoSections *UIS) {
123     CacheEntry *Current = nullptr;
124
125     if (Unused != nullptr) {
126       Current = Unused;
127       Unused = Unused->Next;
128     } else {
129       Current = MostRecentlyUsed;
130       CacheEntry *Previous = nullptr;
131       while (Current->Next != nullptr) {
132         Previous = Current;
133         Current = Current->Next;
134       }
135       Previous->Next = nullptr;
136       _LIBUNWIND_FRAMEHEADERCACHE_TRACE("FrameHeaderCache evict [%lx - %lx)",
137                                         Current->LowPC(), Current->HighPC());
138     }
the value of Current will be nullptr, and evaluating Current->Next in the loop
condition at line 131 dereferences a null pointer, causing the segfault.
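
To make the failure mode easier to see outside of libunwind, here is a small
standalone toy reduction (my own sketch, not libunwind code; the Entry and
UnsafeCache names are made up). It models the same unsynchronized list
manipulation with the same add() shape; the data race is intentional, so
ThreadSanitizer will flag it, and the nullptr case that the real code would
dereference is counted instead of crashing:

// Toy reduction (not libunwind code): same list shapes and add() structure
// as FrameHeaderCache, no locking. The data race is deliberate; it models
// the report, and building with -fsanitize=thread will flag it.
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

struct Entry {
  Entry *Next = nullptr;
};

struct UnsafeCache {
  Entry Entries[8];
  Entry *MostRecentlyUsed = nullptr;
  Entry *Unused = nullptr;

  UnsafeCache() {
    for (int I = 0; I < 7; ++I)
      Entries[I].Next = &Entries[I + 1];
    Unused = &Entries[0];
  }

  // Same shape as FrameHeaderCache::add(): take an entry from Unused,
  // otherwise walk MostRecentlyUsed to evict the last entry.
  bool add() {
    Entry *Current = nullptr;
    if (Unused != nullptr) {
      Current = Unused;
      Unused = Unused->Next;
    } else {
      Current = MostRecentlyUsed;
      if (Current == nullptr)   // the case the report describes; the real
        return false;           // code dereferences Current->Next and crashes
      Entry *Previous = nullptr;
      while (Current->Next != nullptr) {
        Previous = Current;
        Current = Current->Next;
      }
      if (Previous != nullptr)
        Previous->Next = nullptr;
      else
        MostRecentlyUsed = nullptr;
    }
    Current->Next = MostRecentlyUsed;
    MostRecentlyUsed = Current;
    return true;
  }
};

int main() {
  UnsafeCache Cache;
  std::atomic<int> WouldHaveCrashed{0};
  std::vector<std::thread> Threads;
  for (int T = 0; T < 16; ++T)
    Threads.emplace_back([&] {
      for (int I = 0; I < 100000; ++I)
        if (!Cache.add())
          ++WouldHaveCrashed;
    });
  for (auto &Th : Threads)
    Th.join();
  std::printf("Current seen as nullptr %d times\n", WouldHaveCrashed.load());
  return 0;
}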
This code should be made thread-safe, either by adding locking or by some
other mechanism, such as a per-thread cache (which avoids locking); a sketch
of both options follows.
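
For illustration only, a minimal sketch of the locking option applied to the
toy cache above, assuming a plain std::mutex is acceptable in this context
(libunwind may prefer its own locking primitives); the per-thread alternative
is shown in the trailing comment:

#include <mutex>

// Wrapper that serializes every add() on the toy UnsafeCache from the sketch
// above, so Unused/MostRecentlyUsed can no longer be observed half-updated.
struct LockedCache {
  UnsafeCache Inner;
  std::mutex Lock;

  bool add() {
    std::lock_guard<std::mutex> Guard(Lock);
    return Inner.add();
  }
};

// The per-thread variant mentioned above would avoid the lock entirely:
//   thread_local UnsafeCache PerThreadCache;
// at the cost of keeping one cache per thread.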
--
You are receiving this mail because:
You are on the CC list for the bug.