[all-commits] [llvm/llvm-project] ca271f: [lldb-server/linux] Fix waitpid for multithreaded ...

Pavel Labath via All-commits all-commits at lists.llvm.org
Mon Jan 3 05:28:41 PST 2022


  Branch: refs/heads/main
  Home:   https://github.com/llvm/llvm-project
  Commit: ca271f4ef5a2a4bf115ac11ada70bbd7c737d77d
      https://github.com/llvm/llvm-project/commit/ca271f4ef5a2a4bf115ac11ada70bbd7c737d77d
  Author: Pavel Labath <pavel at labath.sk>
  Date:   2022-01-03 (Mon, 03 Jan 2022)

  Changed paths:
    M lldb/source/Plugins/Process/Linux/NativeProcessLinux.cpp
    M lldb/source/Plugins/Process/Linux/NativeProcessLinux.h
    M lldb/test/API/tools/lldb-server/TestGdbRemoteFork.py

  Log Message:
  -----------
  [lldb-server/linux] Fix waitpid for multithreaded forks

The lldb-server code is currently set up so that each NativeProcess
instance does its own waitpid handling. This works fine on the BSDs,
where the code can do a waitpid(process_id) and get information for all
threads in that process.

The situation is trickier on Linux, because waitpid(pid) will only
return information for the main thread of the process (the one whose
tid == pid). For this reason, the Linux code does a waitpid(-1) to get
information for all threads. This was fine while we supported just a
single process, but it becomes a problem when we have multiple
processes, as they end up stealing each other's events.
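
To make the difference concrete, here is a minimal sketch (not LLDB
code; the function names are made up) of the two call patterns, using
the standard Linux waitpid interface:

    #include <sys/types.h>
    #include <sys/wait.h>

    // BSD-style: a single waitpid on the process id reports events for
    // every thread in that process.
    pid_t WaitForProcess(pid_t pid, int *status) {
      return waitpid(pid, status, 0);
    }

    // On Linux the call above would only ever report the main thread,
    // so a tracer managing a single process can use the wildcard form
    // instead. With more than one traced process, this is where events
    // get stolen.
    pid_t WaitForAnyTask(int *status) {
      return waitpid(-1, status, __WALL); // __WALL: include clone()d threads
    }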

There are two possible solutions to this problem:
- call waitpid(-1) centrally, and then dispatch the events to the
  appropriate process
- have each process call waitpid(tid) for all the threads it manages
  (sketched just below)
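
A rough sketch of what the second approach could look like
(hypothetical names; the real logic lives in NativeProcessLinux.cpp):

    #include <sys/types.h>
    #include <sys/wait.h>

    #include <cstdio>
    #include <vector>

    // Poll every thread this (hypothetical) process object owns.
    // Because each tid is named explicitly, the call can never consume
    // an event belonging to a different process.
    void PollProcessEvents(const std::vector<pid_t> &tids) {
      for (pid_t tid : tids) {
        int status;
        // __WALL: also wait on clone()d threads; WNOHANG: return 0 for
        // threads with no pending event instead of blocking.
        if (waitpid(tid, &status, __WALL | WNOHANG) == tid)
          std::printf("event on tid %d, status 0x%x\n", tid,
                      static_cast<unsigned>(status));
      }
    }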

This patch implements the second approach. Besides fitting better into
the existing design, it has the added benefit of ensuring a predictable
ordering for thread/process creation events (which come in pairs -- one
for the parent and one for the child). The first approach, OTOH, would
make this ordering even more complicated, since we would have to keep
the half-threads hanging in mid-air until we find the process we should
attach them to.
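
To illustrate why the pairing becomes deterministic: when a traced
thread forks, the parent stops with PTRACE_EVENT_FORK and the child's
pid is available via PTRACE_GETEVENTMSG, so the process that owns the
parent can then wait for that specific child. A hedged sketch
(illustrative names, not LLDB's):

    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    // Called when waitpid(parent_tid, ...) reports a PTRACE_EVENT_FORK
    // stop.
    pid_t HandleForkStop(pid_t parent_tid) {
      unsigned long child_pid = 0;
      // The parent-side stop carries the new child's pid.
      ptrace(PTRACE_GETEVENTMSG, parent_tid, nullptr, &child_pid);
      int status;
      // Waiting on the specific child pid pairs the two halves of the
      // event deterministically, instead of fishing them out of a
      // global waitpid(-1) stream in whatever order the kernel queued
      // them.
      waitpid(static_cast<pid_t>(child_pid), &status, __WALL);
      return static_cast<pid_t>(child_pid);
    }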

The downside of this approach is an increased number of syscalls (one
waitpid for each thread), but I think we're pretty far from needing to
optimize things like this, so the cleanliness of the design is worth it.

The included test reproduces the circumstances that should trigger the
bug (which manifests as a hung test), but I have not been able to get
it to fail. The only place I've seen this failure mode is in very rare
hangs in the thread sanitizer tests (tsan forks an addr2line process to
produce its error messages).

Differential Revision: https://reviews.llvm.org/D116372
