[lldb-dev] gdb-remote lldb prompt race condition

Matthew Gardiner mg11 at csr.com
Thu Aug 21 00:27:41 PDT 2014


Hi Greg,

If that is the case, then the main thread (i.e. the one running 
Debugger::ExecuteIOHandlers) needs to be blocked until the event arrives.

Why?

Because with the existing design, once the user has entered their 
"gdb-remote" command and the connect call has completed, the main thread 
goes back to the IOHandlerEditline::Run() loop, sees that IsActive() is 
true, prints the next prompt, and so on. When I debugged this I didn't see 
any call to Deactivate or SetIsDone, which would have made IsActive 
return false. (And then the async DefaultEventHandler wakes up and its 
output "Process 1 stopped" splats over the prompt.)

If the code is changed so that the edit line IOHandler's IsActive 
returns false while an asynchronous event is in flight, then I think 
the main thread would spin, since the reader_sp->Run() call in the loop below:

void
Debugger::ExecuteIOHandlers()
{
     while (1)
     {
         IOHandlerSP reader_sp(m_input_reader_stack.Top());
         if (!reader_sp)
             break;

         reader_sp->Activate();
         reader_sp->Run();
         reader_sp->Deactivate();

would immediately return. That's why I'm thinking the main thread 
probably should block until the last issued command has completed.
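
Something like the usual condition-variable gate is what I have in mind. 
A sketch only, with made-up names (CommandGate, WaitForCommandDone etc. 
are not existing lldb API):

    // Sketch: the main thread parks after issuing a command until some
    // other party (e.g. the event handler, after printing "Process 1
    // stopped") declares the command complete.
    #include <condition_variable>
    #include <mutex>

    class CommandGate
    {
    public:
        void CommandIssued()
        {
            std::lock_guard<std::mutex> guard(m_mutex);
            m_done = false;
        }

        // Called by whichever thread decides the command is "complete".
        void CommandDone()
        {
            std::lock_guard<std::mutex> guard(m_mutex);
            m_done = true;
            m_condition.notify_one();
        }

        // Called by the main thread before it prints the next prompt.
        void WaitForCommandDone()
        {
            std::unique_lock<std::mutex> lock(m_mutex);
            m_condition.wait(lock, [this] { return m_done; });
        }

    private:
        std::mutex m_mutex;
        std::condition_variable m_condition;
        bool m_done = true;  // no command outstanding initially
    };

Synchronous commands would signal completion themselves on the way out 
of HandleCommand; asynchronous ones like gdb-remote would defer that to 
whoever consumes the stop event.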

Out of interest, I did research your "If someone is grabbing the event 
manually by hijacking events" point. But when the stopped state is 
detected (i.e. the reply to ?) in GDBRemote and 
Broadcaster::PrivateBroadcastEvent is called, there is no 
hijacking_listener. The execution path taken is the one marked 
with ----> below:

     if (hijacking_listener)
     {
         if (unique && hijacking_listener->PeekAtNextEventForBroadcasterWithType (this, event_type))
             return;
         hijacking_listener->AddEvent (event_sp);
     }
     else
     {
         collection::iterator pos, end = m_listeners.end();
         // Iterate through all listener/mask pairs
         for (pos = m_listeners.begin(); pos != end; ++pos)
         {
             // If the listener's mask matches any bits that we just set, then
             // put the new event on its event queue.
             if (event_type & pos->second)
             {
                 if (unique && pos->first->PeekAtNextEventForBroadcasterWithType (this, event_type))
                     continue;
---->            pos->first->AddEvent (event_sp);

So my contention is that, in the gdb-remote case, the initial stop 
state should either be delivered/printed synchronously in 
Process::ConnectRemote (i.e. in the main thread's context), or the main 
thread should block until either the event arrives, or for some other 
reason the command last issued by the user is deemed to be "complete".
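
To make the first option concrete, here is roughly what synchronous 
delivery could look like. This is a sketch only: ConnectRemoteSynchronously 
is a made-up name, I haven't checked that hijacking is safe at this 
point, and the exact signatures may differ:

    // Sketch, not a patch: hijack process events around the connect, so
    // the initial stop is consumed and printed on the main thread before
    // control returns to the IOHandler and the prompt is reprinted.
    Error
    Process::ConnectRemoteSynchronously (Stream *strm, const char *remote_url)
    {
        Listener listener ("lldb.process.connect-remote.hijack");
        HijackProcessEvents (&listener);

        Error error = ConnectRemote (strm, remote_url);
        if (error.Success())
        {
            EventSP event_sp;
            TimeValue timeout = TimeValue::Now();
            timeout.OffsetWithSeconds (2);
            // Block the main thread until the initial stop event arrives
            // (or we give up), then report it here, before the prompt.
            if (listener.WaitForEvent (&timeout, event_sp))
            {
                StateType state = ProcessEventData::GetStateFromEvent (event_sp.get());
                if (state == eStateStopped && strm)
                    strm->Printf ("Process %" PRIu64 " stopped\n", GetID());
            }
        }

        RestoreProcessEvents ();
        return error;
    }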

thanks
Matt



Greg Clayton wrote:
> The event should get delivered to the Debugger thread that is waiting for events and it should coordinate with the top io handler when printing it. If someone is grabbing the event manually by hijacking events, then we need to fix that code to send the event on to the unhijacked main event loop.
>
>
>> On Aug 20, 2014, at 1:42 AM, Matthew Gardiner <mg11 at csr.com> wrote:
>>
>> Hi folks
>>
>> I have been seeing another issue with the display of the lldb prompt. This time it's when I do "target create elf-file", then "gdb-remote port-number". After the "gdb-remote" command I see the fact that my process is stopped, e.g.
>>
>> Process 1 stopped
>>
>> on the screen. But no (lldb) prompt.
>>
>> Some investigation revealed that what's *probably* happening is that the main thread, after processing the "gdb-remote", returns to its IOHandler, which then prints (lldb). However, the inferior's state changes seem to be delivered to stdout via a different thread, basically one which sits in Debugger::DefaultEventHandler. This subsequent output then, I think, overwrites the previous (lldb) prompt.
>>
>> Now in my (and presumably other people's) situation, this issue is compounded by the speed of the TCP/IP connection to the gdbserver stub, the "poll the hardware" nature of my stub, and the fact the hardware is actually simulated - yes over a TCP/IP socket.
>>
>> FWIW, I resolved this with a horrible (POSIX-only) hack of sleeping for 300ms at the bottom of the CommandInterpreter::HandleCommand function.
>>
>> @@ -9,6 +9,8 @@
>>
>> #include "lldb/lldb-python.h"
>>
>> +#include <poll.h> // MG for prompt bugs
>> +
>> #include <string>
>> #include <vector>
>> #include <stdlib.h>
>> @@ -1916,6 +1918,9 @@
>>      if (log)
>>        log->Printf ("HandleCommand, command %s", (result.Succeeded() ? "succeeded" : "did not succeed"));
>>
>> +    // MG wait for remote connects etc. to complete
>> +    poll(0,0,300);
>> +
>>      return result.Succeeded();
>> }
>>
>>
>> But this is horrid. Given this, and other prompt issues, I'm wondering whether we need some brave soul to redesign the current lldb IO handling mechanisms.
>>
>> Matt



