[libcxx-dev] An extension of libcxx

Olivier Giroux via libcxx-dev libcxx-dev at lists.llvm.org
Mon Dec 10 21:23:11 PST 2018


Hello libc++-dev,

In the discussion of https://reviews.llvm.org/D55517, I mentioned that we are attempting a vendor variant of libcxx that uses _VSTD differently. Eric pointed out that I should have started here, so we could talk about design goals. He’s right, I’m sorry.

Not one to bury the lede, I’d like to talk about a CUDA C++ standard library.

The ultimate goal of something like that should be that most things in C++, as long as they aren’t bolted too tightly onto the operating system, can be passed and used freely between CPU and GPU. There’s no fundamental reason why we couldn’t have a big chunk of C++ working like this today, at least on contemporary HPC-friendly GPUs. The reason we don’t is that it’s a huge pile of work, and everyone has managed to avoid doing it so far.

One exploration vehicle was shown at CppCon in September (by me; see YouTube and https://github.com/ogiroux/freestanding), and we then prepared, but did not get to present, a more detailed poster at the LLVM dev meeting in October. And now we’re here. 😊

After making a few exploration vehicles (2 overall, 4 for <atomic>), we now think we’ll create version 1 this way:

  1.  Wrap select libcxx <*> headers with <cuda/*> wrappers that introduce symbols in cuda::* instead of std::*, and…
     *   These facilities are always heterogeneous, with RTTI and exceptions disabled.
     *   Enable users to include them on top of their host standard library (which remains CPU-only).
  2.  “Select” here means we prioritize headers that are in Freestanding now, or soon will be; basically, the header-only facilities.
  3.  Subsequently help maintain the intersection of libcxx and Freestanding.

In terms of libcxx design, we think that we could layer on this surface:

  *   A freestanding mode, say LIBCXX_FREESTANDING, whose design goal is to place low-OS-coupling variants of facilities under it, together with some agreement that Freestanding libraries have different ABI goals than their closest Hosted relatives.
     *   For example, in <atomic>, it would be preferable for Freestanding implementations (and users) if non-lock-free atomics used the lock-in-atomic strategy (instead of the sharded-lock-table strategy tucked inside __cxa_atomic_*), because that frees the program from a dependency on libatomic.
     *   It is my intention to contribute the code for this third strategy, along with other maintenance to <atomic>, some of which I’ve already made in my branch.
  *   An extension point that allows std::* symbols to be put into another namespace, both for ABI and to co-exist.
     *   This is in tension with Eric’s proposed change.
  *   An extension point that allows us to tune visibility control, e.g. add __device__ linkage to local-linkage symbols in those headers included in the subset (Freestanding minimum, or the implementation-defined choice).
     *   This was at one point in tension with changes Louis was making, but I think we’re Ok right now.
  *   And, generally speaking, good header inclusion hygiene that tries to minimize what’s pulled into a facility’s header.

That should isolate most of the ugly stuff in our code; version 1 will indeed be fairly ugly, no doubt about that. But then, hopefully, this all ends with libcxx gaining a new implementer!

Thanks for reading, I’ll try to answer your questions as best I can.

Sincerely,

Olivier

