[llvm-dev] RFC: EfficiencySanitizer

Craig, Ben via llvm-dev llvm-dev at lists.llvm.org
Mon Apr 18 11:15:01 PDT 2016


On 4/18/2016 1:02 PM, Derek Bruening wrote:
> On Mon, Apr 18, 2016 at 1:36 PM, Craig, Ben via llvm-dev 
> <llvm-dev at lists.llvm.org <mailto:llvm-dev at lists.llvm.org>> wrote:
>
>
>>     *Working set measurement*: this tool measures the data working
>>     set size of an application at each snapshot during execution.  It
>>     can help in understanding phased behavior as well as provide
>>     basic direction for further effort by the developer: e.g.,
>>     knowing whether the working set is close to fitting in current
>>     L3 caches or is many times larger can help determine where to
>>     spend effort.
>     I think my questions here are basically the reverse of my prior
>     questions.  I can imagine the presentation (a graph with time on
>     the X axis, working set size on the Y axis, with markers
>     highlighting key execution points).  I'm not sure how the data
>     collection works, though, or even what exactly is being measured.
>     Are you planning to count the number of data bytes / data cache
>     lines touched during each time period?  For the purposes of this
>     tool, when is data brought into the working set and when is it
>     evicted?
>
>
> The tool records which data cache lines were touched at least once 
> during a snapshot (basically just setting a shadow memory bit for 
> each load/store).  The shadow metadata is cleared after each snapshot 
> is recorded so that the next snapshot starts with a blank slate.  
> Snapshots can be combined via logical OR as execution time grows, to 
> adaptively handle varying total execution times.
So I guess a visualization of that information could show how much new 
memory was referenced per snapshot, how much was the same from a prior 
snapshot, and how much was "dropped".  Neat.
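
Just to make sure I follow the mechanics, here's a rough sketch of
what I'm picturing (all names and sizes are invented, and a real
implementation would presumably map shadow memory via a fixed address
translation rather than toy arrays):

    #include <stdint.h>
    #include <string.h>

    #define LINE_BITS 6                  /* 64-byte cache lines */
    #define SHADOW_BYTES (1u << 20)      /* toy size; one bit per line */
    #define MAX_SNAPSHOTS 64

    static uint8_t shadow[SHADOW_BYTES]; /* current snapshot interval */
    static uint8_t snaps[MAX_SNAPSHOTS][SHADOW_BYTES];
    static int nsnaps;

    /* Instrumentation inserted before every load and store. */
    static inline void on_access(uintptr_t addr) {
      uintptr_t line = addr >> LINE_BITS;
      /* '%' is just toy bounds handling for this sketch. */
      shadow[(line >> 3) % SHADOW_BYTES] |= (uint8_t)(1 << (line & 7));
    }

    /* At each snapshot boundary: save the interval's bits and clear
       the shadow so the next interval starts with a blank slate. */
    static void take_snapshot(void) {
      if (nsnaps == MAX_SNAPSHOTS) {
        /* Adaptively OR adjacent snapshots pairwise so that long
           executions still fit a fixed number of slots. */
        for (int i = 0; i < MAX_SNAPSHOTS / 2; ++i)
          for (size_t j = 0; j < SHADOW_BYTES; ++j)
            snaps[i][j] = snaps[2*i][j] | snaps[2*i + 1][j];
        nsnaps = MAX_SNAPSHOTS / 2;
      }
      memcpy(snaps[nsnaps++], shadow, SHADOW_BYTES);
      memset(shadow, 0, SHADOW_BYTES);
    }

If that's roughly right, then the "new vs. carried over vs. dropped"
numbers fall out of comparing adjacent snaps[] entries.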


Some other things that might be useful to look for (unrelated to the 
working set measurement tool):
* Missed restrict opportunities
With shadow memory, you should be able to track whether two pointers 
ever alias in practice for a given execution, and whether annotating 
them with restrict would reduce the number of loads and stores.
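A contrived example of the kind of thing I mean (function names are
made up):

    /* Without restrict, the compiler must assume sum may point into
       a[], so *sum is reloaded and re-stored on every iteration.  If
       instrumentation shows the pointers never alias at runtime, the
       tool could suggest the restrict form, which lets the compiler
       keep the accumulator in a register (and often vectorize). */
    void sum_into(float *sum, const float *a, int n) {
      for (int i = 0; i < n; ++i)
        *sum += a[i];
    }

    void sum_into_restrict(float *restrict sum,
                           const float *restrict a, int n) {
      for (int i = 0; i < n; ++i)
        *sum += a[i];
    }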

* Missed vectorization opportunities
I'm not exactly sure how you would instrument for this, but this blog 
post ( 
http://blog.llvm.org/2014/11/loop-vectorization-diagnostics-and.html ) 
describes diagnostics that are already present.  If instrumentation 
can also determine whether those missed optimizations would actually 
pay off, then it would be worth reporting them.
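For example (a made-up loop; the -Rpass-missed=loop-vectorize and
-Rpass-analysis=loop-vectorize flags from that post report why a loop
was skipped):

    /* The read of a[i - 1] is a loop-carried dependence, so the
       vectorizer reports this loop as unsafe to vectorize.  If
       instrumentation showed the loop is hot, reporting it would tell
       the developer this diagnostic is worth acting on. */
    void prefix_sum(float *a, int n) {
      for (int i = 1; i < n; ++i)
        a[i] += a[i - 1];
    }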

-- 
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project
