<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On 5 May 2016 at 08:27, Artem Dergachev via cfe-dev <span dir="ltr"><<a href="mailto:cfe-dev@lists.llvm.org" target="_blank">cfe-dev@lists.llvm.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">> Can you explain what the "two pass analysis" does?<br>
> Ex: Does it loop through each of the TUs a second time and<br>
> “inline” every call from other TUs? In which order are the other<br>
> TUs loaded? In which order are the call sites processed? Do you<br>
> repeat until no change? Did you measure coverage in some way?<br>
> Did you perform path-sensitive checks? (An analysis time of 4X<br>
> seems much lower than what I would expect, given that we now<br>
> explore much deeper paths and the analyzer has exponential<br>
> running time.)<br></span></blockquote><div><br></div><div>Yes, path-sensitive checks were running. I forgot to mention: the number of reports was about 3X. I have not had time to evaluate them yet, though.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
<br></span>
On the first pass we get a bunch of -emit-ast dumps; on the second pass we go ahead and analyze each translation unit old-style, but whenever we find an inter-unit CallEvent during path-sensitive analysis, we import the section of the AST dump containing the function body and all dependent sections, and inline the call. The inlined call may later trigger more imports if there are inter-unit calls we'd end up wanting to model.<br>
<br>
Yeah, benchmarking is a bit more difficult than that. I think Alexey has some complicated numbers. I guess the slowdown is only 4x on path-sensitive checks simply because there are too many drops due to the -analyzer-config max-nodes= limit. The most practical measurement would probably be to increase the limits until the number of reports stops growing. It's also possible to count the number of limit drops, the number of exploded nodes constructed, and the number of bug reports with and without unification; we did some of this, but not all.<br></blockquote><div><br></div><div>It is not just the node limit. This solution also maintains a list of which functions were already analyzed (during a call from another TU). This implies that fewer functions are analyzed as top-level functions. <br> <br></div>
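<div>Just to make that bookkeeping concrete, below is a rough, self-contained model of it: the callee's body is imported on demand from the other TU's -emit-ast dump and cached, and every function covered that way is remembered so it is not analyzed again as a top-level entry point. All names here (FuncId, XTUAnalysisState and so on) are invented for illustration; this is not the actual prototype code.<br></div>
<pre>
// Hypothetical model of the second pass's bookkeeping; not the actual
// prototype code. A function is identified here by (defining TU, name).
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <tuple>

struct FuncId {
  std::string TU;
  std::string Name;
  bool operator<(const FuncId &O) const {
    return std::tie(TU, Name) < std::tie(O.TU, O.Name);
  }
};

struct XTUAnalysisState {
  // Function bodies already imported from the -emit-ast dumps, so each
  // body is deserialized at most once while analyzing the current TU.
  std::map<FuncId, std::string> ImportCache;
  // Functions that were already analyzed while inlined from another TU;
  // these are skipped later when they come up as top-level entry points.
  std::set<FuncId> AnalyzedViaInlining;

  // Stand-in for deserializing the relevant section of an AST dump.
  static std::string loadFromAstDump(const FuncId &F) {
    return "[body of " + F.Name + " from " + F.TU + ".ast]";
  }

  const std::string &importBody(const FuncId &F) {
    auto It = ImportCache.find(F);
    if (It == ImportCache.end())
      It = ImportCache.emplace(F, loadFromAstDump(F)).first;
    return It->second;
  }
};

// Called when path-sensitive analysis reaches an inter-unit CallEvent.
void inlineCrossTUCall(XTUAnalysisState &S, const FuncId &Callee) {
  const std::string &Body = S.importBody(Callee);
  // ... run the analyzer over Body in the caller's context ...
  S.AnalyzedViaInlining.insert(Callee);
  std::cout << "inlined " << Body << "\n";
}

int main() {
  XTUAnalysisState S;
  inlineCrossTUCall(S, {"foo.cpp", "helper"});
  // Later, in the top-level pass over foo.cpp, 'helper' is skipped: this
  // is why fewer functions end up analyzed as top-level functions.
  if (S.AnalyzedViaInlining.count({"foo.cpp", "helper"}))
    std::cout << "helper already covered via inlining\n";
}
</pre>
<div>In a real implementation the key would be whatever uniquely identifies a function across translation units (a mangled name or USR, for example); the plain strings above are only for the sketch.<br></div><div> </div>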
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
________<br>
<br>
My best idea on reducing AST loads is to relax "typedness" requirements on the SVal hierarchy. For example, if an inter-unit function references an inter-unit static global variable, this variable can probably be represented as some kind of "untyped VarRegion" (let's call this class "XTUVarRegion", and inherit it from SubRegion directly, rather than from TypedValueRegion), and then its type (which may be a complicated class or template-instantiation declaration) doesn't need to be imported. The XTUVarRegion should still be uniquely determined by the variable - we need to know that two different functions imported from that translation unit refer to the same variable. Not sure - maybe some MemSpace magic could be employed to control invalidation; maybe we could use separate memory spaces for different translation units.</blockquote><div><br></div><div>I do not like the idea of ignoring type information. First, it would be worth measuring what portion of the ASTs actually consists of type-related nodes rather than function bodies; something like the counter sketched below could give a rough picture.<br></div><div> </div>
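<div>This is only a sketch of such a counter (the class names are mine, it counts declarations rather than individual AST nodes, and the exact libTooling entry points vary a bit between Clang versions):<br></div>
<pre>
// Rough sketch: per-TU count of type-related declarations vs. function
// definitions with bodies. Illustrative only; tooling APIs differ
// slightly across Clang versions.
#include "clang/AST/ASTConsumer.h"
#include "clang/AST/RecursiveASTVisitor.h"
#include "clang/Frontend/CompilerInstance.h"
#include "clang/Frontend/FrontendAction.h"
#include "clang/Tooling/CommonOptionsParser.h"
#include "clang/Tooling/Tooling.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/raw_ostream.h"
#include <memory>

using namespace clang;

class DeclCounter : public RecursiveASTVisitor<DeclCounter> {
public:
  unsigned TypeDecls = 0;      // records, enums, typedefs, etc.
  unsigned FunctionBodies = 0; // function definitions with a body
  bool VisitTypeDecl(TypeDecl *D) { ++TypeDecls; return true; }
  bool VisitFunctionDecl(FunctionDecl *D) {
    if (D->doesThisDeclarationHaveABody())
      ++FunctionBodies;
    return true;
  }
};

class CountConsumer : public ASTConsumer {
  void HandleTranslationUnit(ASTContext &Ctx) override {
    DeclCounter C;
    C.TraverseDecl(Ctx.getTranslationUnitDecl());
    llvm::errs() << "type decls: " << C.TypeDecls
                 << ", function bodies: " << C.FunctionBodies << "\n";
  }
};

class CountAction : public ASTFrontendAction {
  std::unique_ptr<ASTConsumer> CreateASTConsumer(CompilerInstance &,
                                                 StringRef) override {
    return std::make_unique<CountConsumer>();
  }
};

static llvm::cl::OptionCategory Cat("decl-counter");

int main(int argc, const char **argv) {
  tooling::CommonOptionsParser Opts(argc, argv, Cat);
  tooling::ClangTool Tool(Opts.getCompilations(), Opts.getSourcePathList());
  return Tool.run(tooling::newFrontendActionFactory<CountAction>().get());
}
</pre>
<div>If the type-related share turns out to be small, importing the full types lazily may be cheap enough that an untyped region is not worth the loss of precision.<br></div>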
</div><br></div></div>