Ok, that is one option, but one of the aims of this activity is to make the data available for use by IDEs like Android Studio or Xcode, or any other that may want to display this information in its environment. Keeping that in mind, would the completely Python-based approach be useful, or would providing LLDB APIs to extract raw perf data from the target be useful?

On Thu, Jan 21, 2016 at 10:00 PM, Greg Clayton <gclayton@apple.com> wrote:

One thing to think about is that you can actually just run an expression in the program being debugged without needing to change anything in the GDB remote server, so this can all be done via Python commands and would require no changes to anything. You can run an expression to enable the buffer. Since LLDB supports multi-line expressions that can define their own local variables and local types, the expression could be something like:

int perf_fd = (int)perf_event_open(...);
struct PerfData
{
    void *data;
    size_t size;
};
PerfData result = read_perf_data(perf_fd);
result

The result is then a structure that you can access from your Python command (it will be an SBValue), and then you can read memory in order to get the perf data.
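For illustration, a Python command could drive this roughly as in the sketch below. This is only a sketch: the perf_event_open() arguments are left elided exactly as above, and read_perf_data() is a stand-in for whatever helper actually exposes the trace buffer in the inferior, not a real API.

import lldb

def fetch_perf_data(debugger):
    target = debugger.GetSelectedTarget()
    process = target.GetProcess()
    frame = process.GetSelectedThread().GetFrameAtIndex(0)

    # Multi-line expression run in the inferior; the last statement ("result")
    # becomes the value of the whole expression.
    expr = '''
    int perf_fd = (int)perf_event_open(...);   // arguments elided, as above
    struct PerfData { void *data; size_t size; };
    PerfData result = read_perf_data(perf_fd); // stand-in helper, not a real API
    result
    '''
    value = frame.EvaluateExpression(expr)     # SBValue describing 'result'
    if not value.GetError().Success():
        print(value.GetError())
        return None

    data_addr = value.GetChildMemberWithName("data").GetValueAsUnsigned()
    data_size = value.GetChildMemberWithName("size").GetValueAsUnsigned()

    # Pull the raw perf data out of the inferior over the debug connection.
    error = lldb.SBError()
    raw = process.ReadMemory(data_addr, data_size, error)
    return raw if error.Success() else None
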
You can also split things up into multiple calls where you can run perf_event_open() on its own and return the file descriptor:

(int)perf_event_open(...)

This expression will return the file descriptor.

Then you could allocate memory via the SBProcess:

(void *)malloc(1024);

The result of this expression will be the buffer that you use...

Then you can read 1024 bytes at a time into this newly created buffer.
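In Python, that multi-step flow might look roughly like the sketch below. Again, this is only a sketch: the perf_event_open() arguments stay elided, and copy_perf_data() is a hypothetical helper in the inferior that copies the next chunk of trace data into the malloc'ed buffer.

import lldb

CHUNK = 1024

def read_perf_buffer(frame, process, total_size):
    # Step 1: open the perf event in the inferior and keep the file descriptor.
    perf_fd = frame.EvaluateExpression("(int)perf_event_open(...)").GetValueAsSigned()

    # Step 2: allocate a scratch buffer inside the inferior.
    buf_addr = frame.EvaluateExpression("(void *)malloc(%d)" % CHUNK).GetValueAsUnsigned()

    # Step 3: repeatedly ask the inferior to fill the buffer (hypothetical
    # helper), then read it back 1024 bytes at a time over the debug connection.
    data = b""
    error = lldb.SBError()
    while len(data) < total_size:
        frame.EvaluateExpression(
            "(int)copy_perf_data(%d, (void *)%#x, %d)" % (perf_fd, buf_addr, CHUNK))
        chunk = process.ReadMemory(buf_addr, CHUNK, error)
        if not error.Success():
            break
        data += chunk
    return data
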
So a solution that is completely done in Python would be very attractive.

Greg

> On Jan 21, 2016, at 7:04 AM, Ravitheja Addepally <ravithejawork@gmail.com> wrote:
>
> Hello,
> Regarding the questions in this thread, please find the answers ->
>
> How are you going to present this information to the user? (I know
> debugserver can report some performance data... Have you looked into
> how that works? Do you plan to reuse some parts of that
> infrastructure?) and How will you get the information from the server to the client?
>
> Currently I plan to show a list of the instructions that have been executed so far. I saw the
> implementation suggested by Pavel; the existing infrastructure is a little bit lacking for the needs of this
> project, but I plan to follow a similar approach, i.e. to extract the raw trace data by querying the server (which can use
> perf_event_open to get the raw trace data from the kernel) and transport it through gdb packets (qXfer packets,
> https://sourceware.org/gdb/onlinedocs/gdb/Branch-Trace-Format.html#Branch-Trace-Format). At the client side the raw trace data
> could be passed on to a Python-based command that could decode the data. This also eliminates the dependency on libipt, since LLDB
> would not decode the data itself.
>
> There is also the question of this third party library. Do we take a hard dependency on libipt (probably a non-starter), or only use it if it's available (much better)?
>
> With the above-mentioned approach LLDB would not need the library; whoever wants to use the Python command would have to install it separately, but LLDB won't need it.
>
> With the performance counters, the interface would still be perf_event_open, so if there were a perf_wrapper in lldb-server then it could be reused to configure and use the
> software performance counters as well; you would just need to pass different attributes in the perf_event_open system call. I also think the perf_wrapper could be reused to
> get CoreSight information as well (see https://lwn.net/Articles/664236/).
>
>
> On Wed, Oct 21, 2015 at 8:57 PM, Greg Clayton <gclayton@apple.com> wrote:
> One main benefit to doing this externally is that it allows this to be done remotely over any debugger connection. If you can run expressions to enable/disable/set up the memory buffer and access the buffer contents, then you don't need to add code into the debugger to actually do this.
>
> Greg
>
> > On Oct 21, 2015, at 11:54 AM, Greg Clayton <gclayton@apple.com> wrote:
> >
> > IMHO the best way to provide this information is to implement reverse debugging packets in a GDB server (lldb-server). If you enable this feature via some packet to lldb-server, that enables the gathering of data that keeps the last N instructions run by all threads in some buffer that gets overwritten. The lldb-server enables it and gives a buffer to the perf_event_interface(). Then clients can ask the lldb-server to step back in any thread. Only when the data is requested do we actually use the data to implement the reverse stepping.
> >
> > Another way to do this would be to use a Python-based command that can be added to any target that supports this. The plug-in could install a set of LLDB commands. To see how to create new lldb command-line commands in Python, see the section named "CREATE A NEW LLDB COMMAND USING A PYTHON FUNCTION" on the http://lldb.llvm.org/python-reference.html web page.
> >
> > Then you can have some commands like:
> >
> > intel-pt-start
> > intel-pt-dump
> > intel-pt-stop
> >
> > Each command could have options and arguments as desired. The "intel-pt-start" command could make an expression call to enable the feature in the target by running an expression that makes some perf_event_interface calls that would allocate some memory and hand it to the Intel PT stuff. The "intel-pt-dump" could just give a raw dump of all history for one or more threads (again, add options and arguments as needed to this command). The Python code could bridge to C and use the Intel libraries that know how to process the data.
> >
> > If this all goes well we can think about building it into LLDB as a built-in command.
> >
> >
> >> On Oct 21, 2015, at 9:50 AM, Zachary Turner via lldb-dev <lldb-dev@lists.llvm.org> wrote:
> >>
> >> There are two different kinds of performance counters: OS performance counters and CPU performance counters. It sounds like you're talking about the latter, but it's worth considering whether this could be designed in a way that supports both (i.e. even if you don't do both yourself, at least make the machinery reusable and applicable to both for when someone else wants to come through and add OS perf counters).
> >>
> >> There is also the question of this third party library. Do we take a hard dependency on libipt (probably a non-starter), or only use it if it's available (much better)?
> >>
> >> As Pavel said, how are you planning to present the information to the user? Through some sort of top-level command like "perfcount instructions_retired"?
> >>
> >> On Wed, Oct 21, 2015 at 8:16 AM Pavel Labath via lldb-dev <lldb-dev@lists.llvm.org> wrote:
> >> [ Moving this discussion back to the list. I pressed the wrong button
> >> when replying.]
> >>
> >> Thanks for the explanation Ravi. It sounds like a very useful feature
> >> indeed. I've found a reference to the debugserver profile data in
> >> GDBRemoteCommunicationClient.cpp:1276, so maybe that will help with
> >> your investigation. Maybe also someone more knowledgeable can explain
> >> what those A packets are used for (?).
> >>
> >>
> >> On 21 October 2015 at 15:48, Ravitheja Addepally
> >> <ravithejawork@gmail.com> wrote:
> >>> Hi,
> >>> Thanks for your reply. Some of the future processors to be released by
> >>> Intel have hardware support for recording the instructions that were
> >>> executed by the processor, and this recording process is also quite fast and
> >>> does not add too much computational load. This hardware is made
> >>> accessible via the perf_event_interface, where one can map a region of
> >>> memory for this purpose by passing it as an argument to this
> >>> perf_event_interface. The recorded instructions are then written to the
> >>> assigned memory region. This is basically the raw information that can
> >>> be obtained from the hardware. It can be interpreted and presented to the
> >>> user in the following ways ->
> >>>
> >>> 1) Instruction History - where the user gets basically a list of all
> >>> instructions that were executed
> >>> 2) Function Call History - it is also possible to get a list of all the
> >>> functions called in the inferior
> >>> 3) Reverse Debugging with limited information - in GDB this is only the
> >>> functions executed.
> >>>
> >>> This raw information also needs to be decoded (even before you can disassemble
> >>> it); there is already a library released by Intel called libipt which can
> >>> do that. At the moment we plan to work with Instruction History.
> >>> I will look into the debugserver infrastructure and get back to you. I guess
> >>> for the server-client communication we would rely on packets only. In case
> >>> of concerns about too much data being transferred, we can limit the number
> >>> of entries we report, because the amount of data recorded is in any case too big
> >>> to present all at once, so we would have to resort to something like a
> >>> viewport.
> >>>
> >>> Since a lot of instructions can be recorded this way, the function call
> >>> history can be quite useful for debugging, especially since it is a lot
> >>> faster to collect function traces this way.
> >>>
> >>> -ravi
> >>>
> >>> On Wed, Oct 21, 2015 at 3:14 PM, Pavel Labath <labath@google.com> wrote:
> >>>>
> >>>> Hi,
> >>>>
> >>>> I am not really familiar with the perf_event interface (and I suspect
> >>>> others aren't either), so it might help if you explain what kind of
> >>>> information you plan to collect from there.
> >>>>
> >>>> As for the PtraceWrapper question, I think that really depends on
> >>>> bigger design decisions. My two main questions for a feature like this
> >>>> would be:
> >>>> - How are you going to present this information to the user? (I know
> >>>> debugserver can report some performance data... Have you looked into
> >>>> how that works? Do you plan to reuse some parts of that
> >>>> infrastructure?)
> >>>> - How will you get the information from the server to the client?
> >>>>
> >>>> pl
> >>>>
> >>>>
> >>>> On 21 October 2015 at 13:41, Ravitheja Addepally via lldb-dev
> >>>> <lldb-dev@lists.llvm.org> wrote:
> >>>>> Hello,
> >>>>> I want to implement support for reading performance measurement
> >>>>> information using the perf_event_open system calls. The motive is to add
> >>>>> support for the Intel PT hardware feature, which is available through the
> >>>>> perf_event interface. I was thinking of implementing a new wrapper like
> >>>>> PtraceWrapper in the NativeProcessLinux files. My query is: is this the
> >>>>> correct place to start or not? If not, could someone suggest
> >>>>> another place to begin with?
> >>>>>
> >>>>> BR,
> >>>>> A Ravi Theja
> >>>>>
> >>>>>