<div class="gmail_quote">On Tue, May 29, 2012 at 6:40 PM, James K. Lowden <span dir="ltr"><<a href="mailto:jklowden@schemamania.org" target="_blank">jklowden@schemamania.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On Fri, 25 May 2012 17:43:13 +0200<br>
<div class="im">Manuel Klimek <<a href="mailto:klimek@google.com">klimek@google.com</a>> wrote:<br>
<br>
> On Fri, May 25, 2012 at 5:19 PM, James K. Lowden<br>
> <<a href="mailto:jklowden@schemamania.org">jklowden@schemamania.org</a>>wrote:<br>
><br>
> > On Fri, 25 May 2012 07:42:46 +0200<br>
> > Manuel Klimek <<a href="mailto:klimek@google.com">klimek@google.com</a>> wrote:<br>
> ><br>
</div><div class="im">> > > > To solve the problem you describe -- to have Clang<br>
> > > > recompile "all files that use a specific symbol" -- one could<br>
</div>> > > > touch(1) all such files and then run make.<br>
<div class="im">> > ><br>
> > > Not all build systems work on system time. Some work on content<br>
> > > digests.<br>
> ><br>
</div><div class="im">> > That's clearly insufficient if a header file changes.<br>
><br>
</div><div class="im">> I don't understand how this is different to timestamps when it comes<br>
> to header files - the headers need to be in the dependency graph<br>
> anyways.<br>
<br>
</div>I see. There are two orthogonal issues: the dependency graph, and<br>
determining which files in that graph have changed. No matter how<br>
"change" is measured/defined, anything affected by it -- anything<br>
further down the graph -- needs recompilation or relinking.<br>
<br>
With make, we express the dependency graph directly in make's<br>
target:source syntax, and "change" is defined in terms of the files'<br>
mtimes. make is quite good at converting its source language into a<br>
DAG, but to determine what's changed it must rely on the underlying<br>
system.<br>
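The rule make applies to each target:source pair can be sketched in a few lines of Python (file names here are hypothetical, and this is only the "is changed" half of make, not the DAG evaluation):

```python
import os

def needs_rebuild(target, sources):
    """make's rule: rebuild if the target is missing, or if any
    source file's mtime is newer than the target's mtime."""
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(src) > target_mtime for src in sources)
```

Note that the answer depends entirely on what the underlying filesystem reports for mtime, which is exactly the coupling discussed above.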
<div class="im"><br>
> > > Even if this worked, the amount of stats that build systems<br>
> > > do in ReallyLargeCodeBases to figure out what to build can be a<br>
> > > huge cost, and can actually dwarf the time it takes to reparse 3<br>
> > > or 4 files you care about.<br>
> ><br>
> > I doubt that. Could you point me to research supporting that claim?<br>
><br>
> No. I can point you to my anecdotal experience with networked file<br>
> systems and code bases that are so large that you don't want to check<br>
> out everything at once.<br>
<br>
</div>I think I see where you're going.<br>
<br>
If the source control system (or, say, compilation database) maintains<br>
fingerprints, it's possible to determine what's changed a priori,<br>
without checking out the files to a local working directory. If it<br>
furthermore represents the dependency graph, it's also possible to<br>
check out only "what's changed" prior to rebuilding, and to federate<br>
that rebuilding. In effect, the fingerprints represent change directly,<br>
whereas inode mtime is only a proxy, and not one that scales well along<br>
a network.<br>
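A minimal sketch of the fingerprint idea, in Python; the "database" here is just a dict, where a real system would persist it alongside the source control metadata:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """A content digest: identical bytes yield identical fingerprints,
    no matter when or on which machine the file was written."""
    return hashlib.sha256(data).hexdigest()

def changed(db: dict, path: str, data: bytes) -> bool:
    """'Has this file changed?' answered by digest comparison,
    with no stat(2) and no local checkout required."""
    return db.get(path) != fingerprint(data)
```

Because the digest is a pure function of content, the question can be answered anywhere the database is reachable, which is what lets it scale across a network where mtime cannot.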
<br>
Each time make runs, it must stat the whole tree, meaning that its<br>
evaluation time is O(N) in the number of files. A database must also<br>
make as many comparisons, but those comparisons are orders of magnitude<br>
faster than a system call. If N is 100 or even 1000, none of that<br>
matters very much, but if N is 1,000,000 it will be measurable.<br>
<br>
It's interesting to think of make divorced from the filesystem, to<br>
imagine a make implementation that used a database of fingerprints to<br>
answer the "is changed" question. I'm guessing that is, in a way, what<br>
you've done or are doing, except that it isn't as general as make.<br>
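Such a make-like tool might answer "what needs rebuilding?" by propagating change down the dependency graph once the fingerprint database has flagged the changed files. A sketch, assuming a deps mapping from each target to its sources:

```python
from collections import defaultdict

def dirty_targets(deps, changed_files):
    """Given a target -> sources graph and the set of files whose
    fingerprints changed, return every target needing a rebuild."""
    rdeps = defaultdict(set)          # source -> targets built from it
    for target, sources in deps.items():
        for src in sources:
            rdeps[src].add(target)
    dirty, stack = set(), list(changed_files)
    while stack:
        node = stack.pop()
        for target in rdeps[node]:
            if target not in dirty:
                dirty.add(target)
                stack.append(target)
    return dirty
```

The traversal never touches the filesystem; the DAG plus the fingerprint database are enough.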
<div class="im"><br>
> > > One more reason: dependency order introduces artificial<br>
> > > serialization.<br>
> ><br>
> > Are you saying the dependencies of static analysis are lesser than<br>
> > for compilation? I would think that's ultimately untrue, given that<br>
> > eventually static analysis would have to be applied to the whole<br>
> > program, even if that's not the case today.<br>
><br>
> But that doesn't mean you can't apply it in parallel. We've had great<br>
> success using a MapReduce over ~100MLOC code to do very fast parallel<br>
> static analysis.<br>
<br>
</div>Yes, but part of that "win" is due to the fact that you're not<br>
linking. Static analysis today is comparable to compilation, the<br>
evaluation of a single translation unit.<br></blockquote><div><br></div><div>Yes. But if you want to statically analyze a whole code base for global information, you need to analyze every TU (just as you would need to compile every TU). Imposing a dependency-graph ordering here would introduce sequential bottlenecks (as make -j shows). On the other hand, global analysis of course still relies on the build, since generated sources need to be available. But given that we often want to run multiple iterations of static analysis at a single point of the code, this pays off big time.</div>
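The map/reduce split described here might look like the following toy sketch: each TU is analyzed in isolation (the map step, trivially parallelizable because no TU waits on another), and a reduce step merges per-TU results into a global view. The "analysis" is a stand-in line count, not a real checker:

```python
from multiprocessing.dummy import Pool  # thread pool; a real MapReduce distributes this

def analyze_tu(tu):
    """Map step: analyze one translation unit in isolation
    (stand-in analysis: count its lines)."""
    name, code = tu
    return (name, len(code.splitlines()))

def reduce_results(pairs):
    """Reduce step: merge the per-TU results into one global report."""
    return dict(pairs)

def analyze_codebase(tus):
    # No dependency ordering between TUs, so the map phase is
    # embarrassingly parallel.
    with Pool() as pool:
        return reduce_results(pool.map(analyze_tu, tus))
```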
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im">
> > Nothing against you trying; I'm hoping at least one of us will learn<br>
> > something. :-)<br>
<br>
</div>If I'm on target above, then our experiment yielded unexpected but<br>
useful results!</blockquote><div><br></div><div>More details about what I'm talking about can be found here:</div><div><a href="http://google-engtools.blogspot.de/">http://google-engtools.blogspot.de/</a></div><div>(use the left-side navigation bar to find everything that has "Build" in the title)</div>
<div><br></div><div>Cheers,</div><div>/Manuel</div><div> </div></div><br>