[PATCH] D146973: [Clang] Implicitly include LLVM libc headers for the GPU

Joseph Huber via Phabricator via cfe-commits cfe-commits at lists.llvm.org
Tue Mar 28 06:24:04 PDT 2023


jhuber6 added a comment.

In D146973#4227114 <https://reviews.llvm.org/D146973#4227114>, @aaron.ballman wrote:

> Hmmm, I've had experience with SYCL as to how it goes when you have differences between host and device; those kinds of bugs are incredibly hard to track down. Pointer sizes being compatible is a good start, but you need integer widths, floating point formats, structure layout decisions, macro definitions, etc. to all be the same as well. Having only one set of headers that can be used helps users avoid these sorts of problems.

The problem is that we are trying to implement an actual library here. It is, in my opinion, completely unreasonable to try to implement a library based on another implementation's specification. What you are suggesting is that we implement a GPU library that copies every internal implementation detail that GNU has for that platform. So, let's just copy-paste their headers into our LLVM `libc` and make sure we copy all of their implementations too. Now what if someone wants to use `musl` instead? Do we copy that one as well and surround everything with `ifdef`s? Do we implement some meta-libc that is compatible with every other `libc`? This is not going to produce a usable library, and as the person who would presumably need to write it, I'm not going to spend my time copying other `libc` headers.

We need to provide fully custom headers. If such a custom header uses `#include_next` after we've verified that doing so doesn't break anything, that's fine. I'm not particularly concerned if a macro or function is undefined between the CPU and GPU. The important point is that any symbol or macro we provide in the GPU's headers has an implementation that is expected to be compatible with the host. It's understandable if the macros and functions map to something slightly different, as long as they do what we say they do.
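For context, the wrapper-header pattern under discussion looks roughly like this. This is a hypothetical GPU wrapper for `<stdlib.h>`, not the actual llvm-libc header; the guard macro and the set of declarations are illustrative assumptions:

```c
/* Hypothetical GPU wrapper for <stdlib.h> (illustrative sketch only). */
#ifndef __GPU_STDLIB_H__
#define __GPU_STDLIB_H__

/* Declarations the GPU libc actually implements for the device. */
void *malloc(__SIZE_TYPE__ __size);
void free(void *__ptr);

/* Optionally defer to the next <stdlib.h> on the include path for
 * anything not provided here, once verified not to break on the GPU. */
#if defined(__has_include_next)
#  if __has_include_next(<stdlib.h>)
#    include_next <stdlib.h>
#  endif
#endif

#endif /* __GPU_STDLIB_H__ */
```

With `-isystem` placing the GPU headers before the host ones, `#include <stdlib.h>` picks up this wrapper first, and `#include_next` continues the search from the next directory in the include path rather than restarting it.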

> So we're comfortable painting ourselves into a corner where llvm-libc is only usable with Clang, depending on the target?

There might be somewhat of a misunderstanding here: I'm talking about the GPU implementation of `libc` using LLVM's `libc`. Expecting a specific toolchain is standard procedure for every other offloading language. It's how we build the ROCm device libraries, the CUDA device libraries, the OpenMP device runtime, etc. LLVM's `libc` project is perfectly fine being compiled with `gcc`, but the GPU is such a special case that we don't have that luxury and need to use `clang`. This is the same approach we already take for OpenMP.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D146973/new/

https://reviews.llvm.org/D146973
