[llvm-dev] Compiler support in libc++

Louis Dionne via llvm-dev llvm-dev at lists.llvm.org
Wed Mar 3 17:51:54 PST 2021



> On Mar 3, 2021, at 14:17, Mehdi AMINI <joker.eph at gmail.com> wrote:
> 
> 
> 
> On Wed, Mar 3, 2021 at 10:32 AM David Blaikie <dblaikie at gmail.com> wrote:
> On Wed, Mar 3, 2021 at 9:31 AM Mehdi AMINI via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> Hi,
> 
> It seems to me that this would require one extra stage of bootstrap in CI for many buildbots.
> For example, today I have a Linux bot with a clang-8 host compiler and libstdc++. The goal is to ensure that MLIR (though this applies to any project) builds with clang and libc++ at the tip of the main branch.
> So the setup is:
> - stage1: build clang/libc++ with host clang-8/libstdc++
> - stage2: build & test "anything" using stage1 (`ninja check-all` in the monorepo for example, but applicable to any other external project)
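> 
> For concreteness, a rough sketch of those two stages as CMake invocations (paths and options here are illustrative, not the exact bot config):
> 
>     $ # stage1: clang + libc++ built with the host clang-8/libstdc++
>     $ cmake -S llvm-project/llvm -B stage1 \
>         -DCMAKE_C_COMPILER=clang-8 -DCMAKE_CXX_COMPILER=clang++-8 \
>         -DLLVM_ENABLE_PROJECTS="clang;libcxx;libcxxabi"
>     $ ninja -C stage1
>     $ # stage2: build & test the project of interest with the stage1 toolchain
>     $ cmake -S llvm-project/llvm -B stage2 \
>         -DCMAKE_C_COMPILER=$PWD/stage1/bin/clang \
>         -DCMAKE_CXX_COMPILER=$PWD/stage1/bin/clang++ \
>         -DLLVM_ENABLE_LIBCXX=ON -DLLVM_ENABLE_PROJECTS="mlir"
>     $ ninja -C stage2 check-all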
>  
> With this proposal, the setup would be:
> 
> - stage1: build just clang with host clang-8/libstdc++
> - stage2: build clang/libc++ with stage1 clang and host libstdc++
> - stage3: build & test "anything" using stage2 (`ninja check-all` in the monorepo for example, but applicable to any other external project)
> 
> Would it be possible to change the build system so that libc++ can be built like compiler-rt, using the just-built clang? That would then avoid the need for the extra stage? (though it would bottleneck the usual build a bit - not being able to start the libc++ build until after clang build)
> 
> That's a good point:
>  - stage1: build just clang with host clang-8/libstdc++
> - stage1.5: build libc++ with stage1 clang
> - stage2: assemble a toolchain with clang from stage1 and libc++ from stage1.5
> - stage3: build & test "anything" using stage2 (`ninja check-all` in the monorepo for example, but applicable to any other external project)
> 
> Since this "stage 2" is the new "stage1", I believe that this should be made completely straightforward to achieve. Ideally it should boil down to a single standard CMake invocation to produce this configuration.

I think the Runtimes build is exactly what you’re looking for. With it, you say:

    $ cmake -S "${MONOREPO_ROOT}/llvm" -B "${BUILD_DIR}" \
        -DLLVM_ENABLE_PROJECTS="clang" \
        -DLLVM_ENABLE_RUNTIMES="libcxx;libcxxabi" \
        -DLLVM_RUNTIME_TARGETS="x86_64-unknown-linux-gnu"

And then you can just do:

    $ make -C $BUILD_DIR cxx

That will bootstrap Clang and then build libc++ with the just-built Clang. I don’t know whether you consider that to be one or two stages, but it happens automatically in that single CMake invocation. And since building libc++ is basically trivial, this takes approximately the same time as building Clang only.
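
If you also want to run the libc++ test suite against that just-built Clang, the runtimes build exposes check targets for the runtimes too. A sketch, assuming the configuration above:

    $ make -C $BUILD_DIR check-cxx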

>  
> 
> & again, this isn't so much a proposal of change as one of documenting the current state of things - which reveals that the current configurations are sort of unsupported? (though that also goes against the claim that they're untested) - so I'll be curious to hear from the libc++ folks about this for sure.
> 
> Right: I'm absolutely not convinced by the "we're documenting the current state of things" framing, actually.
> In particular, my general take on what we call "supported" is a policy of "we revert if we break a supported configuration" and "we accept patches to fix a supported configuration". So the change here is that libc++ would no longer revert when it breaks an older toolchain, and would no longer accept patches to fix it.
> We don't necessarily have buildbots for every configuration that we claim LLVM supports, yet this is the policy, and I'm quite wary of defining the "current state of things" based exclusively on the current public buildbot setup.

To be clear, what we do today to “fix” older compilers is usually to mark failing tests in the test suite with XFAIL or UNSUPPORTED annotations. We don’t actually provide a good level of support for those compilers. There are also things we simply can’t fix, like the fact that a libc++ built with a compiler that doesn’t know about char8_t (for example) won’t produce the RTTI for char8_t in the dylib, and hence will produce a dylib where some random uses of char8_t will break. This is just an example, but my point is that it’s far better to clarify the support policy to something that *we know* will work, and that we can commit to supporting. There’s a small upfront cost for people running build bots right now, but once things are set up it’ll just be better for everyone.
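
To make the char8_t example concrete, here is a sketch of the failure mode (a hypothetical session; the exact diagnostics will differ, and _ZTIDu is simply the Itanium-mangled name of the char8_t RTTI):

    $ cat > u8.cpp <<'EOF'
    #include <typeinfo>
    #include <iostream>
    int main() {
        // References the char8_t type_info object that the C++ runtime
        // library is expected to export.
        std::cout << typeid(char8_t).name() << '\n';
    }
    EOF
    $ clang++ -std=c++20 -stdlib=libc++ u8.cpp
    # If the libc++/libc++abi being linked against was built with a compiler
    # that predates char8_t, the _ZTIDu symbol is missing from the dylib and
    # the link fails with an undefined-symbol error.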

>  
>  
> 
> The only way to avoid adding a stage to the bootstrap is to keep updating the bots with a very recent host clang (and I'm not convinced that increasing the maintenance cost of CI/infra is good in general).
> 
> We should aim for a better balance: it is possible that clang-5 is too old (I don't know?), but there are people (like me, and possibly others) who are testing HEAD with an older compiler (clang-8 here), and it does not seem broken at the moment (or over the recent years); I feel there should be a strong motivation to break it.

Libc++ on Clang 8 only looks unbroken because it builds. And it builds because you’ve been pinging us on Phabricator when we break you with a change, and we add a “workaround” that makes it build. But there’s no guarantee about the “quality” of the libc++ that you get in that case. That’s exactly what we want to avoid - you get something that “kinda works”, yet we still have to insert random workarounds in the code. It’s a lose/lose situation.

> Could we find something more intermediate here? Like time-based support (2 years?), or something based on the latest Ubuntu release. That would at least keep the cost of upgrading bots more controlled (and avoid a costly extra stage of bootstrap).

As I said above, I don’t think there’s any extra stage of bootstrap. The only difference is that you build your libc++ using the Clang you just built, instead of with the system compiler. In both cases you need to build both Clang and libc++ anyway.

Furthermore, we specifically support the last released Clang. If you were in a situation where you didn’t want to build Clang but wanted to build libc++, you’d just have to download a sufficiently recent Clang release and use that.
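
For example, something along these lines (a sketch; the install path is hypothetical, and any sufficiently recent release such as Clang 11 would do):

    $ cmake -S "${MONOREPO_ROOT}/llvm" -B "${BUILD_DIR}" \
        -DCMAKE_C_COMPILER=/opt/clang+llvm-11/bin/clang \
        -DCMAKE_CXX_COMPILER=/opt/clang+llvm-11/bin/clang++ \
        -DLLVM_ENABLE_PROJECTS="libcxx;libcxxabi"
    $ make -C "${BUILD_DIR}" cxx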

Louis


> 
> On Tue, Mar 2, 2021 at 7:10 AM Louis Dionne via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> 
> 
> > On Mar 1, 2021, at 15:41, Joerg Sonnenberger via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> > 
> > On Mon, Mar 01, 2021 at 12:40:36PM -0500, Louis Dionne via llvm-dev wrote:
> >> However, for a library like libc++, things are a bit different.
> > 
> > So how does this prevent the libstdc++ mess, where you need to lockstep
> > the RTL with the compiler and, more importantly, get constantly screwed
> > over when you need to upgrade or downgrade the compiler in a complex
> > environment like an actual operating system?
> 
> Could you please elaborate on what issue you’re thinking about here? As someone who ships libc++ as part of an operating system and SDK (which isn’t necessarily in perfect lockstep with the compiler), I don’t see any issues. The guarantee that you can still use a ~6-month-old Clang is specifically intended to allow for that use case, i.e. shipping libc++ as part of an OS instead of a toolchain.
> 
> 
> > I consider this proposal a major step backwards...
> 
> To be clear, we only want to make official the level of support that we already provide in reality. As I explained in my original email, if you’ve been relying on libc++ working on much older compilers, I would suggest that you stop doing so, because nobody is testing that and we don’t really support it, despite what the documentation says. So IMO this can’t be a step backwards: we already don’t support these compilers, we just pretend that we do.
> 
> Louis
> 
