[lldb-dev] Is there support for command queuing/asynchronicity in lldb?

Kuba Ober kuba at mareimbrium.org
Sun Mar 16 08:44:13 PDT 2014


Thanks for bringing that up. I’m sure LLDB could be exposed as a TCF service,
and it also could have a plugin that consumes TCF services. I will look into that
at some point for sure.

I need LLDB since it is the only open-source debugger out there that will
know C/C++ syntax (via LLVM) and will compile and inject code during a debugging
session. Without the leverage of a full-blown compiler, this would be impossible.
LLDB being under LLVM umbrella makes it quite special - for me, at least.

The only reason I’m touching LLVM/LLDB at all is that I can leverage all that
stuff “at once”. In isolation, the vendor tools are “good enough”, but when
you consider how good the LLDB+LLVM *combo* is, there’s no beating that, IMHO.

Alas, at the moment I’m not really doing much with the debugger, so this was
all good food for thought. I’ll let it all simmer.

Cheers, Kuba

On Mar 15, 2014, at 2:34 PM, J.R. Heisey <jr at heisey.org> wrote:

> I thought I'd see if the LLDB group is familiar with the Target Communication Framework (TCF).
> It is used to enable embedded debugging.
> There is a version of Eclipse which supports TCF.
> 
> It defines a protocol which is asynchronous. The APIs are not asynchronous. There is a C API.
> 
> http://www.eclipse.org/tcf/
> http://wiki.eclipse.org/TCF
> 
> Companies involved are Intel Corporation, Wind River and Xilinx, Inc.
> 
> It was explained to me that you can eliminate GDB with Eclipse TCF for embedded debugging.
> I am not familiar enough with TCF to know whether it replaces all of GDB's capabilities; if it does, that would imply you would not need LLDB either. Still, it might be useful for remote debugging if the LLDB team provided an optional TCF protocol service in LLDB, or implemented various TCF services on top of LLDB's components.
> 
> On Mar 14, 2014, at 1:16 PM, jingham at apple.com wrote:
> 
>> BTW, to be clear, I would not at all be opposed to having another LLDBAsync library that implements one cut at the sort of thing Kuba suggests as part of the LLDB project.  That would be cool.  I just think this should live outside the SB API's both to keep those API's clean and simple and to keep us from cheating and not providing a good enough control through the SB API's for anybody to implement this sort of thing.
>> 
>> Jim
>> 
>> On Mar 14, 2014, at 11:56 AM, jingham at apple.com wrote:
>> 
>>> My take on this would be to keep the SB API's as simple as possible, and then add a library on top of the SB API's that would implement some flavor of asynchronous operation.  After all, lots of folks have opinions about how this sort of thing should be done, arguments which the core SB API's should stand to the side of if at all possible...
>>> 
>>> If there is anything about supporting various models of async execution that would need support not currently available through the SB API's as they now stand, then we should provide that.  That would keep us honest about supporting different flavors of using our API, and also keep orthogonal complexity out of the core of the debugger.  For instance, the SB API's don't currently have a way to say "I am going to be doing a whole bunch of operations, and I don't want anybody to be able to restart the process while that is going on."  That is a primitive that would be needed for async operation to be reliably coded up, and Greg and I have talked about ways to do this for quite some time, but haven't had time to code anything up yet.  But again, I'd much rather provide this sort of thing, and then be agnostic about how clients want to use our API's. 
>>> 
>>> Jim
>>> 
>>> On Mar 14, 2014, at 10:30 AM, Greg Clayton <gclayton at apple.com> wrote:
>>> 
>>>> 
>>>> On Mar 14, 2014, at 9:46 AM, Kuba Ober <kuba at mareimbrium.org> wrote:
>>>> 
>>>>> Do lldb internals support/make it possible to queue commands
>>>>> for remote debugging, and to have the results of the same
>>>>> reported asynchronously?
>>>> 
>>>> No, not currently.
>>>> 
>>>>> This is needed for good user experience on targets that
>>>>> are being debugged remotely or via JTAG and similar debugging
>>>>> dongles. Quite often such connections are slow and the
>>>>> commands can take very long to execute.
>>>>> 
>>>>> Ideally, the user or the IDE should not be expected to
>>>>> know to split long-running commands into smaller pieces.
>>>>> 
>>>>> Specifically, I’m thinking of the following features:
>>>>> 
>>>>> 1. Submit a command for execution asynchronously -
>>>>> it returns right away, and the result(s) are reported later.
>>>>> 
>>>>> 2. Get partial results from asynchronous command execution
>>>>> as it progresses. For example, during a long memory read
>>>>> it’d be nice to get periodic notifications as each “chunk”
>>>>> of data comes in.
>>>>> 
>>>>> 3. Specification of partial ordering between commands.
>>>>> The default would be to have totally ordered command execution
>>>>> as is the case right now, but sometimes this can be relaxed.
>>>>> 
>>>>> Again, think of a very long running memory dump - several
>>>>> megabytes of stuff being read, it can take dozens of seconds
>>>>> on slow dongles or slow network connections. If the user
>>>>> (or an IDE) wants, the subsequent commands can be given
>>>>> with relaxed ordering such that they don’t have to be delayed
>>>>> until the memory read finishes. Say that a user wants to change
>>>>> a register while the memory is dumped, or request a smaller
>>>>> read somewhere else that could finish much sooner.
>>>>> 
>>>>> This would provide for a good interactive user experience in IDEs.
>>>>> 
>>>>> 
>>>>> Any thoughts/hints/input?
>>>> 
>>>> I don't believe async should be built into our API as this would really change the entire nature of the API. It would also seriously affect our Python API bindings and all existing programs that currently link against LLDB.
>>>> 
>>>> But that isn't to say we can't add a "read/write memory in background" to our Process API. That would be really useful, and it would be great to be able to cancel such an operation. So I would vote to identify these long running operations and add support for them into our public API.
>>>> 
>>>> So we could add something like this to SBProcess:
>>>> 
>>>> class SBProcess {
>>>> 
>>>>     typedef bool (*MemoryProgressCallback)(
>>>>         addr_t orig_addr,   // Original start address of the memory read/write
>>>>         addr_t orig_size,   // Original size of the memory read/write
>>>>         addr_t curr_pos,    // The address just read/written, used for progress
>>>>         SBError &error,     // The error for the current memory read/write
>>>>         void *baton);       // The void * passed to the background read/write call
>>>> 
>>>>     void
>>>>     ReadMemoryInBackground (addr_t addr, void *buf, size_t size, lldb::SBError &error, MemoryProgressCallback callback, void *baton);
>>>> 
>>>>     void
>>>>     WriteMemoryInBackground (addr_t addr, const void *buf, size_t size, lldb::SBError &error, MemoryProgressCallback callback, void *baton);
>>>> 
>>>> };
>>>> 
>>>> When the MemoryProgressCallback is called, it can return "true" to keep going, or "false" to cancel.
>>>> 
>>>> So I would vote:
>>>> 1 - Identify things we know are going to take a long time and make sure we have an API for the common ones (like long memory reads/writes).
>>>> 2 - Use that API to build the things that aren't commonly needed by everyone.
>>>> 
>>>> Right now, an IDE can implement background memory reads/writes with progress using our public API: just spin off another thread and break the read/write up into smaller chunks yourself. Have your other thread run your GUI, and it will be able to read/write registers, etc., without interrupting your memory read/write thread.
>>>> 
>>>> Our API is multi-threaded aware and safe, so there might not even be anything we really need to do here; just use the existing API. Background memory read/write would still be a nice addition, though, so I would be happy to have it added so that not everyone has to code this up themselves.




