[cfe-dev] RFC: Upcoming Build System Changes
Daniel Dunbar
daniel at zuster.org
Fri Oct 28 13:52:06 PDT 2011
On Fri, Oct 28, 2011 at 12:43 PM, Óscar Fuentes <ofv at wanadoo.es> wrote:
> Chris Lattner <clattner at apple.com> writes:
>
>> On Oct 28, 2011, at 12:54 AM, Óscar Fuentes wrote:
>>> A good measure of the speed of a set of Makefiles is to run the build
>>> with all targets up-to-date. Both builds take a few seconds (3 or so)
>>> on my Linux quad-core box. Whatever improvement can be achieved here
>>> seems pretty insignificant.
>>
>> There are different overheads in different scenarios. The makefiles
>> get really poor utilization out of an 8- or 16-way machine because of
>> implicit synchronization between different sublibraries.
>
> BTW, adding explicit library dependencies will make parallel builds
> worse: you know when a dependency is missing (the build fails), but you
> don't know when a dependency is superfluous, so stale entries accumulate
> and needlessly serialize the build.
>
>>> Furthermore, recursive make is necessary for automatic generation of
>>> header dependencies, among other things. The makefiles generated by
>>> cmake are "partially" recursive for that reason:
>>>
>>> http://www.cmake.org/Wiki/CMake_FAQ#Why_does_CMake_generate_recursive_Makefiles.3F
>>
>> I don't understand: what does having one makefile per directory have
>> to do with header file generation?
>
> It's not the one-makefile-per-directory part, it's recursive make.
> Automatic header dependencies are usually supported by recursively
> invoking `make', which is the slow part. Having one makefile per
> directory may or may not require recursive invocation, depending on how
> the build was written.
>
> [snip]
>
>>> Anyway, if you wish to avoid duplicating information in both the
>>> Makefile and CMakeLists.txt, there is a simple solution: read and
>>> parse the Makefile from the corresponding CMakeLists.txt. For
>>> instance, if you put the library dependencies in the Makefile like
>>> this:
>>>
>>> LLVMLIBDEPS := foo zoo bar
>>
>> I don't see how that is any better than what is being proposed.
>> You're just moving the complexity from one place to the other,
>
> What I propose requires about 20 lines in the cmake build. What Dan
> proposes will require changes to the cmake build as well, and I'm
> pretty sure those changes would be much more intrusive.
>
>> and blocking future improvements that will build on this.
>
> Dan proposes to have a file with the info and to process that file to
> generate stuff for cmake and `make' (after making the necessary changes
> to both builds). What I propose is to read that info from cmake itself.
> No Python required. How is that blocking future improvements?
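As an aside, the classic GNU Make idiom behind the "automatic header
dependencies" point quoted above looks roughly like the sketch below
(file and target names are purely illustrative, and recipe lines must
start with a tab):

    # Compute object and dependency-fragment names from the sources.
    SRCS := foo.c bar.c
    OBJS := $(SRCS:.c=.o)
    DEPS := $(SRCS:.c=.d)

    libdemo.a: $(OBJS)
            $(AR) rcs $@ $^

    %.o: %.c
            $(CC) $(CFLAGS) -c $< -o $@

    # Generate a .d fragment naming both the .o and the .d as targets,
    # so the fragment itself is refreshed when a header it lists changes.
    %.d: %.c
            $(CC) $(CFLAGS) -MM -MT '$(@:.d=.o) $@' $< > $@

    # Pull the fragments in; when one is out of date, GNU Make rebuilds
    # it and re-executes itself.
    include $(DEPS)

With plain `include' and a rule for the .d files, make regenerates any
out-of-date fragments and then restarts, which is one form of the extra
make invocations being referred to. This is a sketch of the general
idiom, not of what cmake actually generates.
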
FWIW, I'm fine with reading the library-dependency information from
CMake, but I imagine that over time, as more stuff moves into shared
files, the burden of implementing the CMake logic to deal with that data
just won't be worth it (as opposed to leveraging a shared tool).
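
To illustrate, the CMake side of the LLVMLIBDEPS example above might look
roughly like this (an untested sketch; the target name is only a
placeholder):

    file(STRINGS "${CMAKE_CURRENT_SOURCE_DIR}/Makefile" _makefile_lines)
    foreach(_line ${_makefile_lines})
      if(_line MATCHES "^LLVMLIBDEPS *:= *(.*)$")
        # Turn "foo zoo bar" into the CMake list "foo;zoo;bar".
        string(REPLACE " " ";" _deps "${CMAKE_MATCH_1}")
        # clangFoo stands in for whatever library this directory builds.
        target_link_libraries(clangFoo ${_deps})
      endif()
    endforeach()

Even this minimal version hard-codes the Makefile's variable name and
token conventions, and that is exactly the kind of per-build logic I'd
expect to keep growing.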
- Daniel