[LLVMdev] Profiling LLVM JIT code
andrew.kaylor at intel.com
Tue Mar 5 17:36:29 PST 2013
Profiling using oprofile should work just fine with the --enable-optimized option. If the function being JITed includes location metadata for a source file name and line number, that will be used. Otherwise, you'll just get function names and addresses.
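For reference, the build configuration being discussed would look roughly like this (a sketch of the autoconf invocation; depending on your LLVM version, --with-oprofile may also accept an installation prefix, e.g. --with-oprofile=/usr):

```shell
# Optimized build with the oprofile JIT event listener enabled.
./configure --enable-optimized --with-oprofile
make
```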
From: Priyendra Deshwal [mailto:deshwal at scaligent.com]
Sent: Monday, March 04, 2013 5:59 PM
To: Kaylor, Andrew
Cc: llvmdev at cs.uiuc.edu
Subject: Re: [LLVMdev] Profiling LLVM JIT code
Thanks for the info. I am using the old JIT, so that should not be a problem.
I will take a look at using oprofile. I have never used it before, so there will be a bit of a learning curve.
I notice that the configure script has a --with-oprofile option. In addition to enabling that, is there something else that also needs to be done? My copy of LLVM is compiled with --enable-optimized. Will --with-oprofile work fine with that or should I disable optimized?
On Mon, Mar 4, 2013 at 11:54 AM, Kaylor, Andrew <andrew.kaylor at intel.com<mailto:andrew.kaylor at intel.com>> wrote:
There is support for oprofile and Intel(r) VTune(tm) Performance Analyzer, but either one needs to be explicitly turned on during the build process. If you use MCJIT (as opposed to the older JIT) then oprofile support isn't in place yet.
Both of these work by providing a JITEventListener that receives notifications when new code is emitted and hooks them up to the profiling tool via some tool-specific notification API. I'm not familiar with pprof, but it probably wouldn't be very difficult to write a new event listener to add support for it.
You can find the oprofile code in 'llvm/lib/ExecutionEngine/OProfileJIT' to use as an example.
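The listener pattern described above can be sketched in plain C++. This is a simplified stand-in, not the actual LLVM API: the real interface is llvm::JITEventListener (see llvm/include/llvm/ExecutionEngine/JITEventListener.h), and the class and method names below are illustrative only.

```cpp
#include <cstddef>
#include <string>

// Simplified stand-in for llvm::JITEventListener: the JIT invokes the
// callback each time it emits machine code for a function, and a
// concrete listener forwards the symbol range to a profiler through
// that profiler's tool-specific agent API.
struct EventListener {
    virtual ~EventListener() {}
    // Called by the JIT after code for a function has been emitted.
    virtual void functionEmitted(const std::string &name,
                                 const void *code, std::size_t size) = 0;
};

// A listener that would register the emitted range with a profiler
// (oprofile, VTune, or hypothetically pprof). Here it only records
// the last event so the shape of the hook is visible.
struct ProfilerListener : EventListener {
    std::string lastName;
    std::size_t lastSize = 0;
    void functionEmitted(const std::string &name,
                         const void *code, std::size_t size) override {
        // A real listener would call the profiler's notification API
        // here, mapping [code, code+size) to `name`.
        (void)code;
        lastName = name;
        lastSize = size;
    }
};
```

A pprof listener would follow the same shape: implement the callback, then register the listener with the execution engine before JITing anything.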
From: llvmdev-bounces at cs.uiuc.edu<mailto:llvmdev-bounces at cs.uiuc.edu> [mailto:llvmdev-bounces at cs.uiuc.edu<mailto:llvmdev-bounces at cs.uiuc.edu>] On Behalf Of Priyendra Deshwal
Sent: Sunday, March 03, 2013 3:11 AM
To: llvmdev at cs.uiuc.edu<mailto:llvmdev at cs.uiuc.edu>
Subject: [LLVMdev] Profiling LLVM JIT code
I am currently working on a project that uses JIT compilation to compile incoming user requests to native code. Are there some best practices related to profiling the generated code?
My project uses gperftools pprof for profiling. Is there a way to hook the two up? Are there any other profiling methods that work? This page describes how to debug JIT code with GDB; I wonder if something similar could be done for gperftools/pprof?