[cfe-dev] Why clang needs to fork into itself?
kremenek at apple.com
Wed Jan 29 10:10:52 PST 2014
On Jan 29, 2014, at 12:39 AM, Chandler Carruth <chandlerc at google.com> wrote:
> There is another advantage. More and more we are lifting the host-specific behavior into the driver rather than the compiler proper. The internal compiler invocation thus has a canonical set of flags rather than platform-specific ones, and it captures *numerous* behavioral reflections of the host system. This too is very useful in capturing bug reports accurately. While we might be able to successfully extract the exact internal flag state necessary to reproduce things and serialize it, exclusively using that serialization does help ensure this always holds true.
This is an excellent point. It creates a very useful separation of concerns.
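(As an aside, the canonical internal invocation described above can be inspected directly: the clang driver's -### flag prints the -cc1 job it would run without actually compiling anything. The file name and the exact flags shown in the comment below are illustrative and vary by host.)

```shell
# Ask the clang driver to print its computed jobs instead of running them.
# The dumped command line is the fully explicit "-cc1" flag set the driver
# synthesizes, with host- and platform-specific defaults made explicit.
clang -### -c hello.c
# Prints (abridged, host-dependent) something like:
#   ".../clang" "-cc1" "-triple" "..." "-emit-obj" ... "hello.c"
```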
> The problem I really have with all of this is that we are acting like we won't have bugs and problems with this clever signal-based crash handling solution. My experience with signal handling and running a wide variety of Unix-like operating systems is that this is absolutely not true. I think the maintenance cost of complex and subtle signal management will be extremely high, and to me it doesn't (yet) seem likely to be worth the cost. Currently, compiling C++ (or even moderately complex C, ObjC, etc.) is still massively slower than a subprocess invocation, even on Windows. I don't think we should even consider crossing this bridge until that changes.
This pretty much captures my sentiments entirely. Our experience with using a signal-based crash handling solution with libclang in Xcode has been mediocre. I'm not convinced that we could do something that approximates process isolation without doing process isolation. I also agree that the raw compilation time likely dwarfs the subprocess invocation overhead, even on Windows.