<div dir="ltr"><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Mar 26, 2015 at 2:17 PM, Kostya Serebryany <span dir="ltr"><<a href="mailto:kcc@google.com" target="_blank">kcc@google.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">In <a href="http://reviews.llvm.org/D8639#147540" target="_blank">http://reviews.llvm.org/D8639#147540</a>, @zaks.anna wrote:<br>
<br>
> > This looks fine, but I would slightly prefer to compute the map beforehand instead of caching, if possible.<br>
><br>
> > I.e. instead of a map of ProcessedAllocas have a set of InterestingAllocas that is computed in visitAllocaInst<br>
><br>
><br>
> In the future, determining whether an alloca is interesting might depend on visiting all of its uses. For example, if an alloca is promotable but all of its uses access memory that is provably in range, it should be marked as non-interesting. The current approach seems to fit that model better than visiting allocas and building a list.<br>
<br>
<br>
</span>But you can visit all uses in a pre-compute step too.<br>
And having this pre-computed instead of cached will make the code simpler to understand.</blockquote></div><br>Each walk of the alloca's use list is relatively expensive -- it is a linked list and cache-hostile.</div></div>