[libc-commits] [libc] [libc] Update GPU documentation pages (PR #84076)

Joseph Huber via libc-commits libc-commits at lists.llvm.org
Tue Mar 5 16:15:07 PST 2024


================
@@ -0,0 +1,243 @@
+.. _libc_gpu_building:
+
+======================
+Building libs for GPUs
+======================
+
+.. contents:: Table of Contents
+  :depth: 4
+  :local:
+
+Building the GPU C library
+==========================
+
+This document will present recipes to build the LLVM C library targeting a GPU 
+architecture. The GPU build uses the same :ref:`cross build<full_cross_build>` 
+support as the other targets. However, the GPU target has the restriction that 
+it *must* be built with an up-to-date ``clang`` compiler. This is because the 
+GPU target uses several compiler extensions to target GPU architectures.
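+
+A quick sanity check is to ask the compiler in use for its version (an
+illustrative check; any sufficiently recent, ideally trunk, build of
+``clang`` should work):
+
+.. code-block:: sh
+
+  $> clang --version # Expect a recent, ideally trunk, LLVM version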
+
+The LLVM C library currently supports two GPU targets:
+``nvptx64-nvidia-cuda`` for NVIDIA GPUs and ``amdgcn-amd-amdhsa`` for AMD GPUs.
+Targeting these architectures is done through ``clang``'s cross-compiling 
+support using the ``--target=<triple>`` flag. The following sections will 
+describe how to build the GPU support specifically.
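+
+For example, a single source file can be cross-compiled for a GPU triple
+directly (an illustrative invocation; ``hello.c`` is a placeholder input):
+
+.. code-block:: sh
+
+  $> clang hello.c --target=amdgcn-amd-amdhsa -flto -c # Compile for AMDGPU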
+
+Once you have finished building, refer to :ref:`libc_gpu_usage` to get started 
+with the newly built C library.
+
+Standard runtimes build
+-----------------------
+
+The simplest way to build the GPU libc is to use the existing LLVM runtimes 
+support. This will automatically handle bootstrapping an up-to-date ``clang`` 
+compiler and using it to build the C library. The following CMake invocation 
+will instruct it to build the ``libc`` runtime targeting both AMD and NVIDIA 
+GPUs.
+
+.. code-block:: sh
+
+  $> cd llvm-project  # The llvm-project checkout
+  $> mkdir build
+  $> cd build
+  $> # Set CMAKE_BUILD_TYPE to the desired build type and CMAKE_INSTALL_PREFIX
+  $> # to the location where the libraries will live.
+  $> cmake ../llvm -G Ninja                                                 \
+     -DLLVM_ENABLE_PROJECTS="clang;lld;compiler-rt"                         \
+     -DLLVM_ENABLE_RUNTIMES="openmp"                                        \
+     -DCMAKE_BUILD_TYPE=<Debug|Release>                                     \
+     -DCMAKE_INSTALL_PREFIX=<PATH>                                          \
+     -DRUNTIMES_nvptx64-nvidia-cuda_LLVM_ENABLE_RUNTIMES=libc               \
+     -DRUNTIMES_amdgcn-amd-amdhsa_LLVM_ENABLE_RUNTIMES=libc                 \
+     -DLLVM_RUNTIME_TARGETS="default;amdgcn-amd-amdhsa;nvptx64-nvidia-cuda"
+  $> ninja install
+
+
+Since we want to include ``clang``, ``lld`` and ``compiler-rt`` in our
+toolchain, we list them in ``LLVM_ENABLE_PROJECTS``. To ensure ``libc`` is
+built using a compatible compiler and to support ``openmp`` offloading, those
+runtimes are listed in ``LLVM_ENABLE_RUNTIMES`` so they are built after the
+enabled projects using the newly built compiler. The ``libc`` runtime is
+enabled only for the GPU architectures through the per-target
+``RUNTIMES_<triple>_LLVM_ENABLE_RUNTIMES`` variables. The ``lld`` linker is
+required to produce AMDGPU executables, while ``openmp`` has first-class
+support for the GPU libc.
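+
+After ``ninja install`` completes, a quick check is to look for the GPU
+archives under the install prefix (a sketch; the library names are described
+in the build overview below):
+
+.. code-block:: sh
+
+  $> ls <PATH>/lib | grep gpu # Expect libcgpu-*.a and libmgpu-*.a archives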
+
+Runtimes cross build
+--------------------
+
+For users wanting more direct control over the build process, the build steps 
+can be done manually instead. This build closely follows the instructions in the 
+:ref:`main documentation<runtimes_cross_build>` but is specialized for the GPU 
+build. We follow the same steps to first build the libc tools and a suitable 
+compiler. These tools must all be up-to-date with the libc source.
+
+.. code-block:: sh
+
+  $> cd llvm-project  # The llvm-project checkout
+  $> mkdir build-libc-tools # A different build directory for the build tools
+  $> cd build-libc-tools
+  $> HOST_C_COMPILER=<C compiler for the host> # For example "clang"
+  $> HOST_CXX_COMPILER=<C++ compiler for the host> # For example "clang++"
+  $> # LIBC_HDRGEN_ONLY=ON restricts the 'libc' build to the 'libc-hdrgen' tool.
+  $> cmake ../llvm                            \
+     -G Ninja                                 \
+     -DLLVM_ENABLE_PROJECTS="clang;libc"      \
+     -DCMAKE_C_COMPILER=$HOST_C_COMPILER      \
+     -DCMAKE_CXX_COMPILER=$HOST_CXX_COMPILER  \
+     -DLLVM_LIBC_FULL_BUILD=ON                \
+     -DLIBC_HDRGEN_ONLY=ON                    \
+     -DCMAKE_BUILD_TYPE=Release # Release suggested to make "clang" fast
+  $> ninja # Build the 'clang' compiler
+  $> ninja libc-hdrgen # Build the 'libc-hdrgen' tool
+
+Once this has finished, the build directory should contain the ``clang``
+compiler and the ``libc-hdrgen`` executable. We will use the ``clang`` compiler
+to build the GPU code and the ``libc-hdrgen`` tool to create the necessary
+headers. We then use these tools to bootstrap the build out of the runtimes
+directory, targeting the chosen GPU triple.
+
+.. code-block:: sh
+
+  $> cd llvm-project  # The llvm-project checkout
+  $> mkdir build # A different build directory for the runtimes build
+  $> cd build
+  $> TARGET_TRIPLE=<amdgcn-amd-amdhsa or nvptx64-nvidia-cuda>
+  $> TARGET_C_COMPILER=</path/to/clang>
+  $> TARGET_CXX_COMPILER=</path/to/clang++>
+  $> HDRGEN=</path/to/libc-hdrgen>
+  $> # Point CMake at the runtimes directory for a standalone runtimes build.
+  $> cmake ../runtimes                         \
+     -G Ninja                                  \
+     -DLLVM_ENABLE_RUNTIMES=libc               \
+     -DCMAKE_C_COMPILER=$TARGET_C_COMPILER     \
+     -DCMAKE_CXX_COMPILER=$TARGET_CXX_COMPILER \
+     -DLLVM_LIBC_FULL_BUILD=ON                 \
+     -DLLVM_RUNTIMES_TARGET=$TARGET_TRIPLE     \
+     -DLIBC_HDRGEN_EXE=$HDRGEN                 \
+     -DCMAKE_BUILD_TYPE=Release
+  $> ninja install
+
+The above steps will result in a build targeting one of the supported GPU 
+architectures. Building for multiple targets requires separate CMake 
+invocations.
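+
+For example, covering both architectures could be done with two separate build
+directories (a sketch; ``<other flags>`` stands in for the unchanged options
+from the invocation above):
+
+.. code-block:: sh
+
+  $> cmake -S ../runtimes -B build-amdgpu -G Ninja <other flags> \
+     -DLLVM_RUNTIMES_TARGET=amdgcn-amd-amdhsa
+  $> cmake -S ../runtimes -B build-nvptx -G Ninja <other flags>  \
+     -DLLVM_RUNTIMES_TARGET=nvptx64-nvidia-cuda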
+
+Standalone cross build
+----------------------
+
+The GPU build can also be targeted directly, as long as the compiler used is a
+supported ``clang`` compiler. This method is generally not recommended as it
+can only target a single GPU architecture.
+
+.. code-block:: sh
+
+  $> cd llvm-project  # The llvm-project checkout
+  $> mkdir build # A different build directory for the standalone build
+  $> cd build
+  $> CLANG_C_COMPILER=</path/to/clang> # Must be a trunk build
+  $> CLANG_CXX_COMPILER=</path/to/clang++> # Must be a trunk build
+  $> TARGET_TRIPLE=<amdgcn-amd-amdhsa or nvptx64-nvidia-cuda>
+  $> # Point CMake at the runtimes directory.
+  $> cmake ../runtimes                         \
+     -G Ninja                                 \
+     -DLLVM_ENABLE_RUNTIMES=libc              \
+     -DCMAKE_C_COMPILER=$CLANG_C_COMPILER     \
+     -DCMAKE_CXX_COMPILER=$CLANG_CXX_COMPILER \
+     -DLLVM_LIBC_FULL_BUILD=ON                \
+     -DLIBC_TARGET_TRIPLE=$TARGET_TRIPLE      \
+     -DCMAKE_BUILD_TYPE=Release
+  $> ninja install
+
+This will build and install the GPU C library for the single chosen GPU
+architecture.
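+
+One way to verify the result is to list the members of the installed C library
+archive with ``llvm-ar`` (a sketch; the exact install layout depends on the
+chosen prefix and target):
+
+.. code-block:: sh
+
+  $> llvm-ar t <PATH>/lib/libc.a | head # List the first few archive members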
+
+Build overview
+==============
+
+Once installed, the GPU build provides several files used for different
+targets. This section briefly describes their purpose.
+
+**lib/<host-triple>/libcgpu-amdgpu.a or lib/libcgpu-amdgpu.a**
+  A static library containing fat binaries that implement the standard C
+  library for AMD GPUs. These are built using the support described in the
+  `clang documentation <https://clang.llvm.org/docs/OffloadingDesign.html>`_
+  and are intended to be linked natively into offloading languages like CUDA,
+  HIP, or OpenMP; a linking sketch follows this list.
+
+**lib/<host-triple>/libmgpu-amdgpu.a or lib/libmgpu-amdgpu.a**
+  A static library containing fat binaries that implement the standard math
+  library for AMD GPUs.
+
+**lib/<host-triple>/libcgpu-nvptx.a or lib/libcgpu-nvptx.a**
+  A static library containing fat binaries that implement the standard C library 
+  for NVIDIA GPUs.
+
+**lib/<host-triple>/libmgpu-nvptx.a or lib/libmgpu-nvptx.a**
+  A static library containing fat binaries that implement the standard math
+  library for NVIDIA GPUs.
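+
+As an illustration of how these fat binary archives are used, a hypothetical
+OpenMP compile might link the AMD GPU C and math libraries directly
+(``app.c``, the offload architecture, and the library path are placeholders):
+
+.. code-block:: sh
+
+  $> clang app.c -fopenmp --offload-arch=gfx90a \
+     -L<PATH>/lib -lcgpu-amdgpu -lmgpu-amdgpu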
----------------
jhuber6 wrote:

You are correct, forgot to change it after copy paste. Thanks.

https://github.com/llvm/llvm-project/pull/84076


More information about the libc-commits mailing list