[cfe-dev] RFC: Upcoming Build System Changes

Óscar Fuentes ofv at wanadoo.es
Tue Nov 1 15:49:51 PDT 2011


greened at obbligato.org (David A. Greene) writes:

>> That makes no sense. The automatic system we had generated an optimal
>> dependency graph: no missing or unnecessary edges. You can't do better
>> than optimal, can you?
>
> If you're using recursive make, it is by definition not optimal.

See, running `make -j4' here takes 1.3 seconds (not 1.7, as stated in my
previous e-mail: there were two processes pegging two cores.) How much
can you improve on that by getting rid of the recursive make
invocations? Let's suppose you can reduce it to 0.3 seconds, i.e. 1
second better. Is that worth caring about at all?

>> To think that no unnecessary dependencies will pop up on the build is
>> delusional, unless you introduce some sort of automatic check. And once
>> you have the automatic check, why not delegate to it the handling of the
>> dependencies?
>
> I never said they wouldn't show up.  I said they're not worth worrying
> about until you notice them.  You notice them when you notice your build
> isn't as parallel as it could be and that happens when the build takes
> too long.

See, some time ago there was a guy here complaining about lack of
parallelism in the cmake build. It was perfectly parallel on my
quad-core machine, but he was using a 16-thread cpu and, no doubt, part
of the time some threads were idle. So noticing that the build is
parallel depends on each case. And don't you have better things to do
than staring at the cpu graph and walking the build to locate unneeded
dependencies? And running the build again to test that it is right after
removing the suspicious edges? It's not fun, I can tell you that from
experience.

>> As mentioned elsewhere, what we want is to know when a source code
>> change creates a modification on the dependency graph, to decide if we
>> are okay with that. Manually keeping up to date the dependency files is
>> just an unnecessary bureaucratic burden.
>
> I agree.  Dependency files should not be necessary.  But CMake uses such
> files, doesn't it?

CMake, at first, used a scheme similar to the `make' build: library
dependencies were generated on the fly and used for building the
executables. That was on Unix. On Windows, it used a file with the
library dependencies as determined by a Unix build.

Later, someone "improved" it by removing the step where the
autogenerated dependencies were used in the same build. Instead, it used
the dependencies computed on the previous build and stored in the file
mentioned above. That introduced the problem of failing builds when a
source change required a new dependency.

Finally, the build was "improved" again by getting rid of the automatic
dependencies and switching to manual maintenance. Thus, highly paid,
top-notch software engineers devote part of their time to chasing
library dependencies and updating files, something that an automatic
system can do.
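Such an automatic system can be quite small. The sketch below is
hypothetical code, not LLVM's actual tooling, and the library names and
symbols in it are made up for illustration; the idea is the one a script
running `nm' over static archives would implement: collect, for each
library, the symbols it defines and the symbols it leaves undefined,
then draw a dependency edge to whichever library defines each undefined
symbol.

```python
# Hypothetical sketch of automatic library-dependency extraction.
# A real system would read the defined/undefined symbol tables out of
# the object files (e.g. via `nm -u`); here they are hard-coded toy
# data for illustration only.

def compute_deps(defined, undefined):
    """Map each library to the set of libraries defining its undefined symbols."""
    # Invert the tables: symbol -> defining library.
    definers = {sym: lib for lib, syms in defined.items() for sym in syms}
    deps = {}
    for lib, syms in undefined.items():
        deps[lib] = {definers[s] for s in syms
                     if s in definers and definers[s] != lib}
    return deps

# Toy symbol tables (assumed names, for illustration only).
defined = {
    "LLVMSupport": {"raw_ostream_write"},
    "LLVMCore": {"Module_ctor"},
    "LLVMCodeGen": {"SelectionDAG_run"},
}
undefined = {
    "LLVMSupport": set(),
    "LLVMCore": {"raw_ostream_write"},
    "LLVMCodeGen": {"Module_ctor", "raw_ostream_write"},
}

print(compute_deps(defined, undefined))
# LLVMCore depends on LLVMSupport; LLVMCodeGen on both.
```

Rerun after every build and the graph stays optimal with no engineer in
the loop.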

The cmake build, before the last "improvement", updated a versioned file
that contained the library dependencies, so `svn status' would show that
file as modified and you knew you had changed the dependencies. Knowing
exactly how those dependencies changed was an `svn diff' away. So the
system kept track of the dependencies without human intervention, *plus*
warned the developer about the dependency changes he introduced. Neat,
huh?
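The check behind that workflow, comparing the freshly computed
dependencies against the versioned file, might be sketched like this
(hypothetical code; the library names are assumptions for illustration,
not a real LLVM change):

```python
# Hypothetical sketch: diff two dependency graphs (old, as read from the
# versioned file; new, freshly computed by the build) and report which
# edges a source change added or removed.

def diff_deps(old, new):
    added, removed = [], []
    for lib in sorted(set(old) | set(new)):
        o, n = old.get(lib, set()), new.get(lib, set())
        added += [(lib, d) for d in sorted(n - o)]
        removed += [(lib, d) for d in sorted(o - n)]
    return added, removed

# Assumed example: a change makes LLVMCore pull in a new library.
old = {"LLVMCore": {"LLVMSupport"}}
new = {"LLVMCore": {"LLVMSupport", "LLVMTableGen"}}

added, removed = diff_deps(old, new)
print("added edges:", added)      # [('LLVMCore', 'LLVMTableGen')]
print("removed edges:", removed)  # []
```

Anything this reports is exactly what `svn diff' on the versioned file
would have shown the developer.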

> What's the difference between that and Daniel's
> solution?  All things being equal, I prefer Daniel's solution because
> CMake is just horrendous.

CMake will stay after Daniel's changes. Those changes will not make
cmake any less horrendous than it already is, but they will require
knowing cmake+make+python+daniel's-system to tweak the builds.



More information about the cfe-dev mailing list