<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jun 8, 2016 at 10:20 PM, Hal Finkel <span dir="ltr"><<a href="mailto:hfinkel@anl.gov" target="_blank">hfinkel@anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div style="font-family:arial,helvetica,sans-serif;font-size:10pt;color:#000000"><br><hr><blockquote style="border-left:2px solid rgb(16,16,255);margin-left:5px;padding-left:5px;color:rgb(0,0,0);font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt"><b>From: </b>"Vikram TV via llvm-dev" <<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>><br><b>To: </b>"DEV" <<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>><br><b>Sent: </b>Wednesday, June 8, 2016 2:58:17 AM<br><b>Subject: </b>[llvm-dev] [Proposal][RFC] Cache aware Loop Cost Analysis<span><br><br><div dir="ltr">Hi,<div><br></div><div>This is a proposal about implementing an analysis that calculates loop cost based on cache data. The primary motivation for implementing this is to write profitability measures for cache-related optimizations like interchange, fusion, fission, pre-fetching, and others.</div><div><br></div><div>I have implemented a prototypical version at <a href="http://reviews.llvm.org/D21124" target="_blank">http://reviews.llvm.org/D21124</a>.</div></div></span></blockquote>A quick comment on your pass: You'll also want to add the ability to get the loop trip count from BFI's profiling information when available (by dividing the loop header frequency by the loop predecessor-block frequencies).</div></div></blockquote><div>Sure. I will add it. 
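For concreteness, a rough sketch of that estimate (the function name and plain-number frequencies here are illustrative assumptions; the real pass would query BlockFrequencyInfo for the loop header and the loop's out-of-loop predecessor blocks):

```python
# Sketch of the profile-based trip-count estimate suggested above.
# Hypothetical names; in the actual pass these frequencies would come
# from LLVM's BlockFrequencyInfo (BFI).

def estimate_trip_count(header_freq, entering_freqs):
    """Average trip count = loop-header frequency divided by the total
    frequency of the out-of-loop predecessor blocks (loop entries)."""
    entry_freq = sum(entering_freqs)
    if entry_freq == 0:
        return None  # loop never entered in this profile
    return header_freq / entry_freq

# Example: header executed 1000 times across 10 loop entries,
# so roughly 100 iterations per entry.
```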
<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div style="font-family:arial,helvetica,sans-serif;font-size:10pt;color:#000000"><span><br><blockquote style="border-left:2px solid rgb(16,16,255);margin-left:5px;padding-left:5px;color:rgb(0,0,0);font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt"><div dir="ltr"><div></div><div>The patch basically creates groups of references that would lie in the same cache line. Each group is then analysed with respect to innermost loops considering cache lines. The penalty for a reference is:</div><div>a. 1, if the reference is invariant with the innermost loop,</div><div>b. TripCount for a non-unit stride access,</div><div>c. TripCount / CacheLineSize for a unit-stride access.</div><div>Loop Cost is then calculated as the sum of the reference penalties times the product of the loop bounds of the outer loops. This loop cost can then be used as a profitability measure for cache-reuse-related optimizations. This is just a brief description; please refer to [1] for more details.</div></div></blockquote></span>This sounds like the right approach, and is certainly something we need. One question is how the various transformations you name above (loop fusion, interchange, etc.) will use this analysis to evaluate the effect of the transformation they are considering performing. Specifically, they likely need to do this without actually rewriting the IR (speculatively).</div></div></blockquote><div>For Interchange: Loop cost can be used to build a required ordering, and interchange can try to match the nearest possible ordering. A few internal kernels, such as matmul, work fine with this approach.</div><div>For Fission/Fusion: Loop cost can provide the cost before and after fusion. The loop cost for a fused loop can be obtained by building reference groups as if the loops were already fused. 
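For illustration, a fused-vs-separate comparison under the penalty rules quoted above might look like the following sketch (the helper names and the fixed cache-line size are assumptions for this example, not the D21124 API):

```python
# Illustrative sketch of the cache-aware loop cost model described above
# (after Carr-McKinley-Tseng) and of costing fusion without rewriting IR.
# Names and the cache-line constant are assumed, not taken from the patch.
from math import prod

CACHE_LINE_SIZE = 64  # elements per cache line; would come from TTI/tblgen

def ref_penalty(is_invariant, is_unit_stride, trip_count,
                line_size=CACHE_LINE_SIZE):
    """Penalty of one reference group w.r.t. the innermost loop, per the
    rules above: 1 if invariant, TripCount/CacheLineSize for unit stride,
    TripCount otherwise."""
    if is_invariant:
        return 1.0
    if is_unit_stride:
        return trip_count / line_size
    return float(trip_count)

def loop_cost(penalties, outer_trip_counts):
    """Sum of reference penalties times the product of outer loop bounds."""
    return sum(penalties) * prod(outer_trip_counts)

# Two adjacent loops over i (N = 1024) that each stream the same unit-stride
# array: costed separately, the array's cache lines are fetched twice; in a
# hypothetical fused loop both references fall into one reference group, so
# the lines are fetched once.
N = 1024
separate = 2 * loop_cost([ref_penalty(False, True, N)], [])
fused = loop_cost([ref_penalty(False, True, N)], [])
assert fused < separate  # fusion is profitable under this model
```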
The case is similar for fission, but with partitions instead.</div><div>These require no prior IR changes.</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div style="font-family:arial,helvetica,sans-serif;font-size:10pt;color:#000000"><span><br><blockquote style="border-left:2px solid rgb(16,16,255);margin-left:5px;padding-left:5px;color:rgb(0,0,0);font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt"><div dir="ltr"><div></div><div>A primary drawback in the above patch is the static use of Cache Line Size. I wish to get this data from tblgen/TTI, and I am happy to submit patches on it.</div></div></blockquote></span>Yes, this sounds like the right direction. The targets obviously need to provide this information.<span><br><blockquote style="border-left:2px solid rgb(16,16,255);margin-left:5px;padding-left:5px;color:rgb(0,0,0);font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt"><div dir="ltr"><div></div><div><br></div><div>Finally, if the community is interested in the above work, I have the following work-plan:</div><div>a. Update the loop cost patch to a consistent, commit-able state.</div><div>b. Add cache data information to tblgen/TTI.</div><div>c. Rewrite Loop Interchange profitability to start using Loop Cost.</div></div></blockquote></span>Great!<br><br>This does not affect loop interchange directly, but one thing we also need to consider is the number of prefetching streams supported by the hardware prefetcher on various platforms. This is generally some number around 5 to 10 per thread. Splitting loops that require more streams than supported, and preventing fusion from producing loops that require more streams than supported, are also important cost metrics. 
Of course, so are ILP and other factors.<br></div></div></blockquote><div>Cache-based cost computation is one such metric, and yes, other metrics need to be added.</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div style="font-family:arial,helvetica,sans-serif;font-size:10pt;color:#000000"><br> -Hal<br><blockquote style="border-left:2px solid rgb(16,16,255);margin-left:5px;padding-left:5px;color:rgb(0,0,0);font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt"><span><div dir="ltr"><div></div><div><br></div><div>Please share your valuable thoughts.</div><div><br></div><div>Thank you.</div><div><br></div><div>References:</div><div>[1] [Carr-McKinley-Tseng] <a href="http://www.cs.utexas.edu/users/mckinley/papers/asplos-1994.pdf" target="_blank">http://www.cs.utexas.edu/users/mckinley/papers/asplos-1994.pdf</a></div><div><br></div><div>-- </div><div><div><div dir="ltr"><div><div dir="ltr"><div><br></div><div>Good time...</div><div>Vikram TV</div><div>CompilerTree Technologies</div><div>Mysore, Karnataka, INDIA</div></div></div></div></div>
</div></div>
<br></span>_______________________________________________<br>LLVM Developers mailing list<br><a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a><br><a href="http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev" target="_blank">http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev</a><span><font color="#888888"><br></font></span></blockquote><span><font color="#888888"><br><br><br>-- <br><div><span name="x"></span>Hal Finkel<br>Assistant Computational Scientist<br>Leadership Computing Facility<br>Argonne National Laboratory<span name="x"></span><br></div></font></span></div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><br></div><div>Good time...</div><div>Vikram TV</div><div>CompilerTree Technologies</div><div>Mysore, Karnataka, INDIA</div></div></div></div></div>
</div></div>