<div dir="ltr">I've tried to find a way to move the calls the way you mentioned, but it doesn't seem trivial.<div><br></div><div>Some more information:</div><div>- The invocation to the thread plan is done by Thread::ShouldStop, where it does</div><div><br></div><div>```</div><div>// We're starting from the base plan, so just let it decide;<br> if (current_plan->IsBasePlan()) {<br> should_stop = current_plan->ShouldStop(event_ptr);<br></div><div>```</div><div><br></div><div>The interesting part of this call-stack starts at Process::HandlePrivateEvent, which seems to be handling the Stop Event. Where I think there's some inconsistent code is in ThreadPlanBase::ShouldStop, because it tries to get the stop reason, which fails for Linux, and then it processes eStopReasonExec a few lines below. And see what it does:<br><br>```</div><div> case eStopReasonExec:<br> // If we crashed, discard thread plans and stop. Don't force the<br> // discard, however, since on rerun the target may clean up this<br> // exception and continue normally from there.<br> LLDB_LOGF(<br> log,<br> "Base plan discarding thread plans for thread tid = 0x%4.4" PRIx64<br> " (exec.)",<br> m_tid);<br> GetThread().DiscardThreadPlans(false);<br> return true;<br></div><div>```</div><div><br></div><div>It does discard the thread plans for that thread! This makes me believe that it should be fine to delete the thread plans in the first place. I wasn't able to figure out more, but I can dig deeper if you give me a few pointers. In any case, this last code of block makes me believe that deleting the thread plans or reseting the thread pointer in the base thread plan might be fine.</div><div><br></div><div><br></div><div>Btw, this is a bunch of logs I got, which start at a breakpoint, then there's a "continue", which triggers what I'm saying. 
The backtrace is below</div><div><br></div><div>(lldb) bt<br>* thread #1, name = 'runner', stop reason = instruction step into<br> * frame #0: 0x00007f1fe674838d libc.so.6`raise + 61<br> frame #1: 0x0000000000400a3c runner`main(argc=2, argv=0x00007fff4b78fc08) at runner.cpp:20<br> frame #2: 0x00007f1fe6734555 libc.so.6`__libc_start_main + 245<br> frame #3: 0x0000000000400919 runner`_start + 41<br>(lldb) thread plan list /////////////////////////////////////// See that only the base plan is there<br>thread #1: tid = 0x2b72f1:<br> Active plan stack:<br> Element 0: Base thread plan.<br> Completed plan stack:<br> Element 0: Stepping one instruction past 0x00007f1fe6748387 stepping into calls<br> Element 1: Stepping one instruction past 0x00007f1fe6748387 stepping into calls<br>(lldb) c<br>lldb Process::Resume -- locking run lock<br>lldb Process::PrivateResume() m_stop_id = 3, public state: stopped private state: stopped<br>lldb WillResume Thread #1 (0x0x7fcd30000da0): tid = 0x2b72f1, pc = 0x7f1fe674838d, sp = 0x7fff4b78fad8, fp = 0x7fff4b78fb20, plan = 'base plan', state = running, stop others = 0<br>lldb Resuming thread: 2b72f1 with state: running.<br>lldb 0x4e7020 Listener::Listener('gdb-remote.resume-packet-sent')<br>lldb 0x4e7020 Listener::StartListeningForEvents (broadcaster = 0x4787a8, mask = 0x00010000) acquired_mask = 0x00010000 for gdb-remote.resume-packet-sent<br>lldb 0x4e7020 Listener::StartListeningForEvents (broadcaster = 0x478ff8, mask = 0x00000004) acquired_mask = 0x00000004 for gdb-remote.resume-packet-sent<br>lldb 0x494ab0 Broadcaster("lldb.process.gdb-remote.async-broadcaster")::BroadcastEvent (event_sp = {0x7fcd40007cd0 Event: broadcaster = 0x478ff8 (lldb.process.gdb-remote.async-broadcaster), type = 0x00000001 (async thread continue), data = {"c"}}, unique =0) hijack = (nil)<br>lldb 0x494bf0 Listener('lldb.process.gdb-remote.async-listener')::AddEvent (event_sp = {0x7fcd40007cd0})<br>lldb this = 0x00000000004E7020, timeout = 5000000 us for 
gdb-remote.resume-packet-sent<br>b-remote.async> 0x494bf0 'lldb.process.gdb-remote.async-listener' Listener::FindNextEventInternal(broadcaster=(nil), broadcaster_names=(nil)[0], event_type_mask=0x00000000, remove=1) event 0x7fcd40007cd0<br>b-remote.async> Process::SetPrivateState (running)<br>b-remote.async> 0x483f30 Broadcaster("lldb.process.internal_state_broadcaster")::BroadcastEvent (event_sp = {0x7fcd40005380 Event: broadcaster = 0x477dd0 (lldb.process.internal_state_broadcaster), type = 0x00000001, data = { process = 0x477ce0 (pid = 2847473), state = running}}, unique =0) hijack = (nil)<br>b-remote.async> 0x4841b0 Listener('lldb.process.internal_state_listener')::AddEvent (event_sp = {0x7fcd40005380})<br>b-remote.async> 0x485cf0 Broadcaster("gdb-remote.client")::BroadcastEvent (event_sp = {0x7fcd40005500 Event: broadcaster = 0x4787a8 (gdb-remote.client), type = 0x00010000, data = <NULL>}, unique =0) hijack = (nil)<br>b-remote.async> 0x4e7020 Listener('gdb-remote.resume-packet-sent')::AddEvent (event_sp = {0x7fcd40005500})<br>intern-state 0x4841b0 'lldb.process.internal_state_listener' Listener::FindNextEventInternal(broadcaster=(nil), broadcaster_names=(nil)[0], event_type_mask=0x00000000, remove=1) event 0x7fcd40005380<br>lldb 0x4e7020 'gdb-remote.resume-packet-sent' Listener::FindNextEventInternal(broadcaster=(nil), broadcaster_names=(nil)[0], event_type_mask=0x00000000, remove=1) event 0x7fcd40005500<br>intern-state Current Plan for thread 1(0x7fcd30000da0) (0x2b72f1, running): base plan being asked whether we should report run.<br>lldb 0x4e7020 Listener::Clear('gdb-remote.resume-packet-sent')<br>intern-state Process::ShouldBroadcastEvent (0x7fcd40005380) => new state: running, last broadcast state: running - YES<br>lldb 0x4e7020 Listener::~Listener('gdb-remote.resume-packet-sent')<br>intern-state Process::HandlePrivateEvent (pid = 2847473) broadcasting new state running (old state stopped) to public<br>lldb Process thinks the process has 
resumed.<br>intern-state Process::HandlePrivateEvent updated m_iohandler_sync to 3<br>Process 2847473 resuming<br>intern-state 0x483df0 Broadcaster("lldb.process")::BroadcastEvent (event_sp = {0x7fcd40005380 Event: broadcaster = 0x477d18 (lldb.process), type = 0x00000001 (state-changed), data = { process = 0x477ce0 (pid = 2847473), state = running}}, unique =0) hijack = (nil)<br>(lldb) intern-state 0x358b00 Listener('lldb.Debugger')::AddEvent (event_sp = {0x7fcd40005380})<br>intern-state timeout = <infinite>, event_sp)...<br>intern-state this = 0x00000000004841B0, timeout = <infinite> for lldb.process.internal_state_listener<br>dbg.evt-handler 0x358b00 'lldb.Debugger' Listener::FindNextEventInternal(broadcaster=(nil), broadcaster_names=(nil)[0], event_type_mask=0x00000000, remove=1) event 0x7fcd40005380<br>dbg.evt-handler Process::SetPublicState (state = running, restarted = 0)<br>dbg.evt-handler this = 0x0000000000358B00, timeout = <infinite> for lldb.Debugger<br><br><div><br></div><div>/////////////////////////////////////////////////////////////// This is where we start handling the stop, see the ~Thread call below</div>b-remote.async> this = 0x00007FCD30000DA0, pid = 2847473, tid = 2847473<br>b-remote.async> 0x7fcd30000da0 Thread::~Thread(tid = 0x2b72f1)<br>b-remote.async> 0x00007FCD30000DD8 Broadcaster::~Broadcaster("lldb.thread")</div><div>b-remote.async> Process::SetPrivateState (stopped)<br>b-remote.async> Process::SetPrivateState (stopped) stop_id = 4<br>b-remote.async> 0x483f30 Broadcaster("lldb.process.internal_state_broadcaster")::BroadcastEvent (event_sp = {0x7fcd3c0736f0 Event: broadcaster = 0x477dd0 (lldb.process.internal_state_broadcaster), type = 0x00000001, data = { process = 0x477ce0 (pid = 2847473), state = stopped}}, unique =0) hijack = (nil)<br>b-remote.async> 0x4841b0 Listener('lldb.process.internal_state_listener')::AddEvent (event_sp = {0x7fcd3c0736f0})<br>b-remote.async> this = 0x0000000000494BF0, timeout = <infinite> for 
lldb.process.gdb-remote.async-listener<br>intern-state 0x4841b0 'lldb.process.internal_state_listener' Listener::FindNextEventInternal(broadcaster=(nil), broadcaster_names=(nil)[0], event_type_mask=0x00000000, remove=1) event 0x7fcd3c0736f0<br>intern-state 0x7fcd302ac330 Listener::Listener('Communication::SyncronizeWithReadThread')<br>intern-state 0x7fcd302ac330 Listener::StartListeningForEvents (broadcaster = 0x478228, mask = 0x00000020) acquired_mask = 0x00000020 for Communication::SyncronizeWithReadThread<br>intern-state 0x7fcd302ac330 Listener::Clear('Communication::SyncronizeWithReadThread')<br>intern-state 0x7fcd302ac330 Listener::~Listener('Communication::SyncronizeWithReadThread')<br>intern-state 0x00007FCD302BC738 Broadcaster::Broadcaster("lldb.thread")</div><div><br></div><div>///////////////////////////////////////////////////// This is where the new thread is created, but it points to the same tid<br>intern-state 0x7fcd302bc700 Thread::Thread(tid = 0x2b72f1) <br>intern-state 0x358b00 Listener::StartListeningForEvents (broadcaster = 0x7fcd302bc738, mask = 0x00000011) acquired_mask = 0x00000011 for lldb.Debugger<br>intern-state this = 0x00007FCD302BC700, pid = 2847473, tid = 2847473<br>intern-state 0x7fcd302bc700: tid = 0x2b72f1: stop info = <NULL> (stop_id = 4)<br>intern-state 0x7fcd302bc700: tid = 0x2b72f1: stop info = exec (stop_id = 4)<br>intern-state <br>intern-state ThreadList::ShouldStop: 1 threads, 1 unsuspended threads<br>intern-state Thread::ShouldStop(0x7fcd302bc700) for tid = 0x2b72f1 0x2b72f1, pc = 0x00007f3b751ce140<br>intern-state ^^^^^^^^ Thread::ShouldStop Begin ^^^^^^^^<br>intern-state Plan stack initial state:<br> thread #1: tid = 0x2b72f1:<br> Active plan stack:<br> Element 0: Base thread plan.<br><br>intern-state Plan base plan explains stop, auto-continue 0.<br>PLEASE submit a bug report to <a href="https://bugs.llvm.org/">https://bugs.llvm.org/</a> and include the crash backtrace.<br>Stack dump without symbol names (ensure you have 
llvm-symbolizer in your PATH or set the environment var `LLVM_SYMBOLIZER_PATH` to point to it):<br>./bin/lldb[0x298aca]<br>./bin/lldb[0x298ca4]<br>./bin/lldb[0x296ca0]<br>./bin/lldb[0x29a606]<br>/usr/local/fbcode/platform007/lib/libpthread.so.0(+0x12b80)[0x7fcd53167b80]<br>/home/wallace/llvm-sand/build/Debug/Linux-x86_64/llvm/bin/../lib/liblldb.so.12(+0x47e5d96)[0x7fcd4c955d96]<br>/home/wallace/llvm-sand/build/Debug/Linux-x86_64/llvm/bin/../lib/liblldb.so.12(+0x47e595a)[0x7fcd4c95595a]<br>/home/wallace/llvm-sand/build/Debug/Linux-x86_64/llvm/bin/../lib/liblldb.so.12(+0x47cf23b)[0x7fcd4c93f23b]<br>/home/wallace/llvm-sand/build/Debug/Linux-x86_64/llvm/bin/../lib/liblldb.so.12(+0x47e006c)[0x7fcd4c95006c]<br>/home/wallace/llvm-sand/build/Debug/Linux-x86_64/llvm/bin/../lib/liblldb.so.12(+0x47253f5)[0x7fcd4c8953f5]<br>/home/wallace/llvm-sand/build/Debug/Linux-x86_64/llvm/bin/../lib/liblldb.so.12(+0x4720f01)[0x7fcd4c890f01]<br>/home/wallace/llvm-sand/build/Debug/Linux-x86_64/llvm/bin/../lib/liblldb.so.12(+0x472676e)[0x7fcd4c89676e]<br>/home/wallace/llvm-sand/build/Debug/Linux-x86_64/llvm/bin/../lib/liblldb.so.12(+0x47257ed)[0x7fcd4c8957ed]<br>/home/wallace/llvm-sand/build/Debug/Linux-x86_64/llvm/bin/../lib/liblldb.so.12(+0x45856ec)[0x7fcd4c6f56ec]<br>/usr/local/fbcode/platform007/lib/libpthread.so.0(+0x76ce)[0x7fcd5315c6ce]<br>/usr/local/fbcode/platform007/lib/libc.so.6(clone+0x3f)[0x7fcd46d6135f]<br>Segmentation fault (core dumped)<br><br><br><br>//// This is the callstack I got from another run, but they are the same<br>* thread #4, name = 'intern-state', stop reason = signal SIGSEGV: invalid address (fault address: 0x218)<br> * frame #0: 0x00007f9983cccd96 liblldb.so.12`lldb_private::ThreadPlan::GetPrivateStopInfo(this=0x00007f99680014d0) at ThreadPlan.h:521:24<br> frame #1: 0x00007f9983ccc95a liblldb.so.12`lldb_private::ThreadPlanBase::ShouldStop(this=0x00007f99680014d0, event_ptr=0x00007f997404c310) at ThreadPlanBase.cpp:78:29<br> frame #2: 0x00007f9983cb623b 
liblldb.so.12`lldb_private::Thread::ShouldStop(this=0x00007f99682be2b0, event_ptr=0x00007f997404c310) at Thread.cpp:871:35<br> frame #3: 0x00007f9983cc706c liblldb.so.12`lldb_private::ThreadList::ShouldStop(this=0x0000000000477fa8, event_ptr=0x00007f997404c310) at ThreadList.cpp:330:48<br> frame #4: 0x00007f9983c0c3f5 liblldb.so.12`lldb_private::Process::ShouldBroadcastEvent(this=0x0000000000477ce0, event_ptr=0x00007f997404c310) at Process.cpp:3368:40<br> frame #5: 0x00007f9983c07f01 liblldb.so.12`lldb_private::Process::HandlePrivateEvent(this=0x0000000000477ce0, event_sp=std::__shared_ptr<lldb_private::Event, __gnu_cxx::_S_atomic>::element_type @ 0x00007f997404c310) at Process.cpp:3593:33<br> frame #6: 0x00007f9983c0d76e liblldb.so.12`lldb_private::Process::RunPrivateStateThread(this=0x0000000000477ce0, is_secondary_thread=false) at Process.cpp:3787:7<br> frame #7: 0x00007f9983c0c7ed liblldb.so.12`lldb_private::Process::PrivateStateThread(arg=0x0000000000494d80) at Process.cpp:3685:25<br> frame #8: 0x00007f9983a6c6ec liblldb.so.12`lldb_private::HostNativeThreadBase::ThreadCreateTrampoline(arg=0x00000000004982e0) at HostNativeThreadBase.cpp:68:10<br> frame #9: 0x00007f998a4d36ce libpthread.so.0`start_thread(arg=0x00007f996f7fe700) at pthread_create.c:465:7<br> frame #10: 0x00007f997e0d835f libc.so.6`__clone at clone.S:95<br>(lldb) ^D<br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Jan 21, 2021 at 2:37 PM Jim Ingham <<a href="mailto:jingham@apple.com">jingham@apple.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
<br>
> On Jan 21, 2021, at 2:33 PM, Jim Ingham <<a href="mailto:jingham@apple.com" target="_blank">jingham@apple.com</a>> wrote:<br>
> <br>
> <br>
> <br>
>> On Jan 21, 2021, at 12:51 PM, walter erquinigo via Phabricator <<a href="mailto:reviews@reviews.llvm.org" target="_blank">reviews@reviews.llvm.org</a>> wrote:<br>
>> <br>
>> wallace added a comment.<br>
>> <br>
>> Sorry for returning late to this diff, but I have some additional information. This is what's happening:<br>
>> <br>
>> - Before the exec, the base thread plan holds a reference to the thread pointer<br>
>> - During the exec, the original thread is destroyed in ProcessGDBRemote::SetLastStopPacket, and later a new Thread pointer is created for the same tid. Notice that the base thread plan is still holding a reference to the previous pointer even though there's a new Thread pointer for the same tid. On Linux, exec preserves the tid. This step is where I think there's a wrong invariant.<br>
>> - Then, at some point, the base thread plan is asked its stop reason, and it'll try to use the destroyed pointer, thus crashing.<br>
> <br>
> Thanks for looking at this some more.<br>
> <br>
> Sadly, I think I'm still missing something. Here's how I think things should happen:<br>
> <br>
> 1) The program was stopped somewhere in the original program (before exec), and the user continued the process<br>
> a) WillResume is called and all the thread plans clear their cached thread pointers. So they will have to look them up by ID when next asked a question.<br>
> b) Then the process resumes.<br>
> 2) We get a stop packet with a reason of "exec" from the stub.<br>
> a) We process the stop packet, and get the new thread list<br>
> b) We start to build the new thread list<br>
<br>
Ack, that was confusing. Item b) should be "We start to build the new ThreadList". The object in a) is the thread list the stub tells us about, either in the stop packet or through subsequent queries. The object in b) is the ThreadList in the Process that will describe the state at this particular stop point.<br>
<br>
> 1) From what you are saying, in the case of exec, regardless of whether the TID matches an extant one or not, we make a new Thread *, put it into the new ThreadList and delete the old Thread *.<br>
> c) Then we start figuring out what to do after the stop.<br>
> i) If the TID of the new thread is the same as any of the TID's in the old thread list (and thus of the ThreadPlanStack that was managing it) when the new thread gets asked what to do, it will route the question to the ThreadPlanStack for that TID.<br>
> ii) If this happens after the new thread list is in place, FindThreadByID will return the new, valid thread, and there should be no problem.<br>
> <br>
> So for this to fail as you described, somewhere between the time that we stopped for exec and the time we finalized the new thread list and discarded the old threads, somebody must have asked the thread plan stack for that TID a question that triggered it to cache the old thread's Thread *.<br>
> <br>
> That really shouldn't happen, precisely for the reason you see here. When we stop, we have to finish setting up the current ThreadList before we start asking questions of the ThreadPlanStack for threads, like ShouldStop or whatever. It makes no sense to ask the pre-stop thread list questions for the current stop, and we shouldn't do it.<br>
> <br>
> Do you know where we're calling ThreadPlan::GetThread while the thread list still has the old threads in it? I'd really like to stop that if we can. And if you can move those queries till after the new thread list is established, then we shouldn't need any of the alternatives you suggest.<br>
> <br>
> About the alternatives:<br>
> <br>
> First one: I'd rather not artificially clear out the ThreadPlanStack after exec. After all, if you knew that exec on Linux preserves the TID, you could write a thread plan that carried over the exec and did something interesting. Be a shame to prevent that for implementation reasons.<br>
> <br>
> Second one: Adding the IsValid check won't do any harm, but it implicitly lets something happen that I don't think should happen, so it doesn't feel right...<br>
> <br>
> Third one: If there's some good reason why you have to first check through the old thread list for stop reasons, then update the thread list, then ask again with no Resume in between, I think the cleanest solution is to announce that by explicitly clearing the ThreadPlans' cached thread pointer. So I think this one's the best option. But again, I'd like to understand why it was necessary.<br>
> <br>
> Jim<br>
> <br>
> <br>
>> <br>
>> I've found three possible fixes for this (I'm pretty sure that you can come up with more):<br>
>> <br>
>> - The first one is what this diff does, which is clearing all the thread plans once the gdbremote client knows there's an exec.<br>
>> - The second one is modifying the ThreadPlan::GetThread method, which currently looks like this<br>
>> <br>
>>   if (m_thread)<br>
>>     return *m_thread;<br>
>> <br>
>>   ThreadSP thread_sp = m_process.GetThreadList().FindThreadByID(m_tid);<br>
>>   m_thread = thread_sp.get();<br>
>>   return *m_thread;<br>
>> <br>
>> Changing it into<br>
>> <br>
>>   if (m_thread && m_thread->IsValid()) // IsValid will return false for a deleted thread<br>
>>     return *m_thread;<br>
>> <br>
>>   ThreadSP thread_sp = m_process.GetThreadList().FindThreadByID(m_tid);<br>
>>   m_thread = thread_sp.get();<br>
>>   return *m_thread;<br>
>> <br>
>> also works, as it will make the ThreadPlan grab the new thread pointer.<br>
>> <br>
>> - Another solution is to create a method ThreadPlan::PrepareForExec() which just clears the m_thread pointer in the thread plan, and call it from ProcessGDBRemote::SetLastStopPacket, where we handle the exec.<br>
>> <br>
>> I prefer the first solution, as the thread plans are all cleared in preparation for the new process.<br>
>> <br>
>> <br>
>> Repository:<br>
>> rG LLVM Github Monorepo<br>
>> <br>
>> CHANGES SINCE LAST ACTION<br>
>> <a href="https://reviews.llvm.org/D93874/new/" rel="noreferrer" target="_blank">https://reviews.llvm.org/D93874/new/</a><br>
>> <br>
>> <a href="https://reviews.llvm.org/D93874" rel="noreferrer" target="_blank">https://reviews.llvm.org/D93874</a><br>
>> <br>
> <br>
<br>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div>- Walter Erquínigo Pezo<br><br></div></div></div>