[all-commits] [llvm/llvm-project] fcfeb1: [mlir][gpu] Add GPU target support to `gpu-to-llvm`.

Fabian Mora via All-commits all-commits at lists.llvm.org
Fri Aug 11 17:27:46 PDT 2023


  Branch: refs/heads/main
  Home:   https://github.com/llvm/llvm-project
  Commit: fcfeb1e5b3cddfbc32f7fb8798c90d9c23d5de48
      https://github.com/llvm/llvm-project/commit/fcfeb1e5b3cddfbc32f7fb8798c90d9c23d5de48
  Author: Fabian Mora <fmora.dev at gmail.com>
  Date:   2023-08-12 (Sat, 12 Aug 2023)

  Changed paths:
    M mlir/include/mlir/Conversion/GPUCommon/GPUCommonPass.h
    M mlir/lib/Conversion/GPUCommon/GPUToLLVMConversion.cpp
    A mlir/test/Conversion/GPUCommon/lower-launch-func-bare-ptr.mlir
    M mlir/test/Conversion/GPUCommon/lower-launch-func-to-gpu-runtime-calls.mlir

  Log Message:
  -----------
  [mlir][gpu] Add GPU target support to `gpu-to-llvm`.

**For an explanation of these patches see D154153.**

This patch modifies the lowering of `gpu.module` and `gpu.launch_func` in the `gpu-to-llvm` pass,
enabling the use of the new GPU compilation mechanism introduced by the patch series ending in D154153.

Instead of removing `gpu.module` ops, this patch preserves modules that carry target attributes so that
the `gpu-module-to-binary` pass can later serialize them.
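
As a rough sketch (the target-attribute syntax comes from the other patches in this series, so treat
the exact form as an assumption), a module like the following would now survive `gpu-to-llvm` because
it carries a target attribute, while a module with no targets is still removed as before:

  gpu.module @kernels [#nvvm.target] {
    gpu.func @vec_add(%arg0: memref<8xf32>) kernel {
      // Kernel body elided; only the module/target structure matters here.
      gpu.return
    }
  }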

Instead of lowering kernel launches to the LLVM dialect, this patch primarily updates the operation's
arguments, leaving the job of converting the operation into LLVM IR to the translation stage.
The operation is not lowered to LLVM at this stage because kernel launches do not have a single
one-to-one representation in LLVM; for example, a kernel launch can be represented by a call to a
kernel stub, as in CUDA or HIP. Kernel launches are also intrinsically tied to the binary associated
with the call, and those binaries are only converted during translation.
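
As an illustrative sketch (the function and kernel names, launch sizes, and types are hypothetical),
a host-side launch like the one below keeps its `gpu.launch_func` op after `gpu-to-llvm`; the pass
rewrites its operands to LLVM-compatible types, and the translation stage later emits the actual
runtime or stub calls together with the serialized binary:

  func.func @main(%arg0: memref<8xf32>) {
    %c1 = arith.constant 1 : index
    // After the pass this op remains, referring to the kernel in the preserved module above.
    gpu.launch_func @kernels::@vec_add
        blocks in (%c1, %c1, %c1) threads in (%c1, %c1, %c1)
        args(%arg0 : memref<8xf32>)
    return
  }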

Depends on D154149

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D154152



