[PATCH] D146973: [Clang] Implicitly include LLVM libc headers for the GPU

Aaron Ballman via Phabricator via cfe-commits cfe-commits at lists.llvm.org
Tue Mar 28 07:11:10 PDT 2023


aaron.ballman added a comment.

In D146973#4227266 <https://reviews.llvm.org/D146973#4227266>, @jhuber6 wrote:

> In D146973#4227114 <https://reviews.llvm.org/D146973#4227114>, @aaron.ballman wrote:
>
>> Hmmm, I've had experience with SYCL as to how it goes when you have differences between host and device; those kinds of bugs are incredibly hard to track down. Pointer sizes being compatible is a good start, but you need integer widths, floating point formats, structure layout decisions, macro definitions, etc. to all be the same as well. Having only one set of headers that can be used helps users avoid these sorts of problems.
>
> The problem is that we are trying to implement an actual library here. It is, in my opinion, completely unreasonable to try to implement a library based on another implementation's specification.

I am not asking you to implement a library based on another implementation's specification. I am relaying implementation experience with the design you've chosen for your implementation and how well it's worked in other, related projects. Given that two different technologies have both run into this same problem, I think the llvm-libc folks should carefully consider the design decisions here. If it turns out this is the best way forward, that's fine.

> What you are suggesting is that we implement a GPU library that copies every internal implementation detail that GNU has for that platform. So, let's just copy-paste their headers into our LLVM `libc` and make sure we copy all of their implementations too. Now what if someone wants to use `musl` instead? Do we copy that one as well and have everything surrounded by `ifdef`s? Do we just implement some meta libc that is compatible with every other `libc`? This is not going to create a usable library, and as the person who would presumably need to write it, I'm not going to spend my time copying other `libc` headers.

I'm not asking you to copy other libc headers. I'm pointing out that having two separate headers, one for host and one for device, is a recipe for problems in practice because these two will invariably get out of sync in really fascinating ways that are extremely hard for people to debug. But maybe there's a misunderstanding here: I am assuming we consider it to be unsupported to use glibc/musl/etc on the host and llvm-libc on the device, but maybe that's a faulty assumption.

> The important point is that any symbol or macro we provide in the GPU's headers has an implementation that is expected to be compatible with the host. It's understandable if the macros and functions map to something slightly different, as long as it does what we say it does.
>
>> So we're comfortable painting ourselves into a corner where llvm-libc is only usable with Clang, depending on the target?
>
> There might be somewhat of a misunderstanding here: I'm talking about the GPU implementation of `libc` using LLVM's `libc`. Expecting a specific toolchain is standard procedure for every single other offloading language. It's how we build ROCm device libraries, CUDA device libraries, the OpenMP device runtime, etc. LLVM's `libc` project is perfectly fine being compiled with `gcc`, but the GPU is such a special case that we don't have that luxury and need to use `clang`. This is the same approach we already take for OpenMP.

Ah, yes, this was a misunderstanding then; sorry for that.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D146973/new/

https://reviews.llvm.org/D146973
