[lldb-dev] Inquiry for performance monitors

Greg Clayton via lldb-dev lldb-dev at lists.llvm.org
Thu Feb 4 12:58:44 PST 2016


> On Feb 4, 2016, at 2:24 AM, Pavel Labath via lldb-dev <lldb-dev at lists.llvm.org> wrote:
> 
> On 4 February 2016 at 10:04, Ravitheja Addepally
> <ravithejawork at gmail.com> wrote:
>> Hello Pavel,
>>                In the case of expression evaluation approach you mentioned
>> that:
>> 1. The data could be accessible only when the target is stopped. why is that
>> ?
> If I understand the approach correctly, the idea is to run all perf
> calls as expressions in the debugger. Something like
> lldb> expr perf_event_open(...)
> We need to stop the target to be able to do something like that, as we
> need to fiddle with its registers. I don't see any way around that...
> 
>> 2. What sort of noise were you referring to ?
> Since now all the perf calls will be expressions executed within the
> context of the process being traced, they themselves will show up in
> the trace. I am sure we could filter that out somehow, but it feels
> like an added complication..
> 
> Does that make it any clearer?
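[As a concrete sketch of what an expression like "expr perf_event_open(...)" would actually invoke in the target, here is a hedged ctypes approximation of the raw perf_event_open(2) system call. The syscall number and attr layout assume x86_64 Linux, and this is purely an illustration of the call being discussed, not the proposed implementation:]

```python
# Illustrative only: the raw Linux call that the evaluated expression
# would perform inside the target process.
import ctypes
import os
import struct

# Syscall number for perf_event_open on x86_64 Linux (architecture-specific).
SYS_perf_event_open = 298

PERF_TYPE_HARDWARE = 0
PERF_COUNT_HW_INSTRUCTIONS = 1

def perf_event_open_self():
    """Try to open a hardware instruction counter on the calling process.

    Returns the new file descriptor, or -1 (with errno set) if the kernel
    refuses, e.g. due to perf_event_paranoid settings or missing support.
    """
    libc = ctypes.CDLL(None, use_errno=True)
    # struct perf_event_attr starts with: __u32 type; __u32 size; __u64 config.
    # 112 bytes covers the fields we set; the rest stays zero-padded, which
    # the kernel accepts as long as `size` is filled in correctly.
    attr = ctypes.create_string_buffer(112)
    struct.pack_into("IIQ", attr, 0,
                     PERF_TYPE_HARDWARE, 112, PERF_COUNT_HW_INSTRUCTIONS)
    # Args: attr, pid (0 = self), cpu (-1 = any), group_fd (-1), flags (0).
    return libc.syscall(SYS_perf_event_open, attr, 0, -1, -1, 0)
```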

So a few questions: people seem worried about running something in the process if expressions are being used. Are you saying that if the process is on the local machine, process 1 can just open up a file descriptor to the trace data for process 2? If so, why pass this through lldb-server? I am not a big fan of making lldb-server the conduit for a ton of information. It just isn't built for high volumes of data coming in. It can be done, but that doesn't mean it should be. If everyone starts passing data like memory usage, CPU time, trace info, backtraces and more asynchronously through lldb-server, it will become a very crowded communication channel. 

You don't need python if you want to do this using the lldb API. If your IDE is already linking against the LLDB shared library, it can just run the expressions using the public LLDB API. This is how view debugging is implemented in Xcode. It runs complex expressions that gather all data about a view and its subviews and return all the layers in a blob of data that can be serialized by the expression, retrieved by Xcode (memory read from the process), and then de-serialized by the IDE into a format that can be used. If your IDE can access the trace data for another process, why not just read it from the IDE itself? Why get lldb-server involved? Granted, the remote debugging parts of this make an argument for including it in lldb-server. But if you go this route you need to make a base implementation for trace data that will work for any trace data, with trace data plug-ins that somehow know how to interpret the data and present it. 
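[The Xcode view-debugging pattern described above can be sketched in a few lines. This is a hypothetical illustration of the shape of the technique, not Xcode's actual wire format: an expression running inside the target flattens structured data into one length-prefixed blob, the debugger reads that memory, and the IDE deserializes it:]

```python
# Hypothetical sketch of the serialize-in-target / deserialize-in-IDE
# pattern; the JSON + length-prefix encoding here is an assumption.
import json
import struct

def serialize_in_target(records):
    """What the injected expression would do: flatten data into one blob."""
    payload = json.dumps(records).encode("utf-8")
    # Length-prefix so the IDE knows how many bytes to memory-read.
    return struct.pack("<I", len(payload)) + payload

def deserialize_in_ide(blob):
    """What the IDE does after reading the blob out of target memory."""
    (length,) = struct.unpack_from("<I", blob, 0)
    return json.loads(blob[4:4 + length].decode("utf-8"))
```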

How do you say "here is a blob of trace data I just got from some process; go find me a plug-in that can parse it"? You might have to say "here is a blob of data" and it is for the "intel" trace data plug-in. How are we going to know which trace data to ask for? Is the packet we send to lldb-server going to reply to "qGetTraceData" with something that says the type of data is "intel-IEEE-version-123.3.1" and the data is "xxxxxxx"? Then we would find a plug-in in LLDB for that trace data that can parse it? So you will need to think about completely abstracting the whole notion of trace data into some sensible API that gets exposed via SBProcess. 
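[The plug-in lookup being asked about could be sketched as a registry keyed by the format name that the qGetTraceData reply carries. The "intel-pt" name and the registry shape below are assumptions for illustration, not an existing LLDB API:]

```python
# Hypothetical sketch: map a trace-data format name (as reported by the
# remote stub) to a parser plug-in registered under that name.
_trace_parsers = {}

def register_trace_parser(fmt):
    """Decorator that registers a parser for one trace-data format name."""
    def wrap(fn):
        _trace_parsers[fmt] = fn
        return fn
    return wrap

@register_trace_parser("intel-pt")
def parse_intel_pt(blob):
    # Real Intel PT decoding needs a full decoder library; this stub
    # only reports the blob size to show the dispatch mechanism.
    return {"format": "intel-pt", "bytes": len(blob)}

def parse_trace_data(fmt, blob):
    """Find the plug-in for `fmt` and hand it the raw blob."""
    try:
        parser = _trace_parsers[fmt]
    except KeyError:
        raise ValueError("no trace-data plug-in for format %r" % fmt)
    return parser(blob)
```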

So yes, there are two approaches to take. Let me know which one is the way you want to go. But I really want to avoid the GDB remote protocol's async packets becoming the conduit for a boat load of information. 

Greg Clayton
