[llvm] [Offload] Implement the remaining initial Offload API (PR #122106)

Joseph Huber via llvm-commits llvm-commits at lists.llvm.org
Wed Mar 5 06:59:22 PST 2025


================
@@ -0,0 +1,48 @@
+//===-- Memory.td - Memory definitions for Offload ---------*- tablegen -*-===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+//
+// This file contains Offload API definitions related to memory allocations
+//
+//===----------------------------------------------------------------------===//
+
+def : Enum {
+  let name = "ol_alloc_type_t";
+  let desc = "Represents the type of allocation made with olMemAlloc.";
+  let etors = [
+    Etor<"HOST", "Host allocation">,
----------------
jhuber6 wrote:

These names suck; they roughly equate to CUDA's `host`, `managed`, and `device` memory. Honestly, we should take this opportunity to rename them to something more understandable.

In this context, I believe `host` is 'pinned' memory that always resides on the host, `managed` is memory that can migrate in a unified memory context, and `device` is memory that only exists on the GPU. HSA has `coarse-grained` and `fine-grained` memory: coarse-grained is accessible to only one 'agent' (i.e. the GPU), while fine-grained is likely pinned. HSA also has its `svm` API, which I believe is closer to `managed`. Naming things is hard, unfortunately.

https://github.com/llvm/llvm-project/pull/122106
