[LLVMdev] RFC: Upcoming Build System Changes
joerg at britannica.bec.de
Tue Nov 1 17:17:09 PDT 2011
On Tue, Nov 01, 2011 at 06:46:15PM -0500, David A. Greene wrote:
> The fact that the LLVM has to run through all of the directories, read
> Makefiles, check dependencies in each Makefile, etc. In essence, a
> recursive make adds implicit dependencies on all of the sub-Makefiles.
> Those Makefiles have to be processed before any real work can begin.
> That includes shell overhead, which can be significant.
Makefiles don't include shell overhead. You have to parse the build
rules at some point anyway. There are some systems that support more
aggressive caching for that part (e.g. Ninja), but that's beside the
point. Sub-directories do not have to introduce serialisation points.
Just as a test case, I copied a number of single file tools (apply, asa,
basename, bdes, biff, bthset, btpin, cal, cap_mkdb, cdplay, checknr)
into a separate subdirectory in my NetBSD src tree. I added a trivial
Makefile just listing those with SUBDIR and an include of
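A parent Makefile of the shape described might look like the sketch below. This is a hypothetical reconstruction: the include file name (NetBSD's standard <bsd.subdir.mk>) is an assumption, since the post truncates before naming it.

```make
# Hypothetical parent Makefile for the test described above,
# assuming NetBSD's standard <bsd.subdir.mk> subdirectory framework.
SUBDIR= apply asa basename bdes biff bthset btpin \
        cal cap_mkdb cdplay checknr

.include <bsd.subdir.mk>
```

With such a Makefile in place, the timings quoted below amount to running `time make` versus `time make -j4` in that directory, once from a clean tree and once after a full build.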
make without -j: 1.275s
make -j4: 0.607s
After a full build:
make without -j: 0.11s
make -j4: 0.073s
That's on a mobile (dual core) i7 running at 1.2GHz. In short, there is
no serialisation point here.
The main objection to the "recursive make is harmful" article is that it
completely ignores the advantages of such a setup. Small per-directory
build rules are easier to understand and, as a direct consequence, easier
to maintain. The second objection is that many of the perceived
performance issues are a result of GNU make as mentioned elsewhere in
the thread. Sure, there is overhead associated with creating ~370
processes. But if that is the bottleneck of your build system, you
should move to a platform with less process creation overhead.
Unlike e.g. the GCC build, LLVM scales well with the coarser granularity
since there is no single build action that takes as long as the
rest of the build combined.