[llvm] [mlir] [mlir][gpu] Change GPU modules to globals (PR #135478)
Fabian Mora via llvm-commits
llvm-commits at lists.llvm.org
Fri Apr 25 07:47:38 PDT 2025
fabianmcg wrote:
> the loading/unloading of the module should be delegated to another operation that has this clear meaning so it would be more modular.
I think adding something like a `gpu.get_binary` op that returns a pointer to the embedded binary could provide that flexibility. That way, we also avoid the pitfalls of modeling against a particular runtime, and if a user needs more customization they can take the pointer and load it via a func call, as in the rough sketch below.
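Roughly the kind of IR I have in mind (a sketch only: `gpu.get_binary` does not exist, its syntax here is made up, and `@myLoadModule` stands in for whatever user runtime function does the loading):

```mlir
// Hypothetical op returning a pointer to the binary embedded by `gpu.binary @kernels`.
%binary = gpu.get_binary @kernels : !llvm.ptr
// The user stays in control of loading, e.g. through their own runtime call.
%module = llvm.call @myLoadModule(%binary) : (!llvm.ptr) -> !llvm.ptr
```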
> Or as an intermediate step we could have an attribute on the gpu.binary operation that indicates whether the translation should manage the loading/unloading or if it is left to another part of the pipeline.
Technically, `gpu.binary` already allows that: https://github.com/llvm/llvm-project/blob/main/mlir/include/mlir/Dialect/GPU/IR/GPUOps.td#L1467
One could add an attribute such as `#cuf.offloading_handler` implementing the interface, put it on the binary, and then translation would only do what the interface says, for example not loading a module.
This is what https://github.com/llvm/llvm-project/pull/78117 does. It adds a new offloading attribute which, during translation, emits a global constructor that registers the binaries with the CUDA runtime at startup, so launch becomes just a call to a runtime function. It would be up to the user to pick `gpu.select_object` or `gpu.offload_embedding` in `gpu.binary`.
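For illustration, a binary carrying a custom handler could look like this (a sketch: `#cuf.offloading_handler` is hypothetical and the object blob is elided):

```mlir
// The handler attribute implements the offloading translation attribute interface,
// so translation defers module handling to it instead of the default behavior.
gpu.binary @kernels <#cuf.offloading_handler> [#gpu.object<#nvvm.target, "...">]
```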
> I'm happy to help if you would like to move things in a particular direction
At the moment, I would argue it's better not to move things into `gpu-to-llvm`, because we would ultimately move them back out when offload comes around.
However, IMO a good cleanup would be merging `load`, `get_func`, and `launch` into `mgpuLaunchKernel` and handling everything inside that function, including lazily loading the modules and unloading them at program shutdown.
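A rough sketch of what that merged wrapper could look like (the signature and the caching scheme below are my assumptions, not the current `mgpuLaunchKernel` interface; error handling is elided):

```cpp
#include <cuda.h>
#include <cstdint>
#include <mutex>
#include <unordered_map>

// Cache of lazily loaded modules, keyed by the embedded binary pointer.
static std::mutex cacheMutex;
static std::unordered_map<const void *, CUmodule> &getModuleCache() {
  static std::unordered_map<const void *, CUmodule> cache;
  return cache;
}

// Unload every cached module at program shutdown.
static struct ModuleCacheCleanup {
  ~ModuleCacheCleanup() {
    for (auto &entry : getModuleCache())
      cuModuleUnload(entry.second);
  }
} moduleCacheCleanup;

// Hypothetical merged entry point: load (lazily), get_func, and launch in one call.
extern "C" void mgpuLaunchKernel(const void *binary, const char *kernelName,
                                 intptr_t gridX, intptr_t gridY, intptr_t gridZ,
                                 intptr_t blockX, intptr_t blockY, intptr_t blockZ,
                                 int32_t smem, CUstream stream, void **params) {
  CUmodule module;
  {
    std::lock_guard<std::mutex> lock(cacheMutex);
    auto &cache = getModuleCache();
    auto it = cache.find(binary);
    if (it == cache.end()) {
      // Lazy load: the module is loaded the first time one of its kernels runs.
      cuModuleLoadData(&module, binary);
      cache.emplace(binary, module);
    } else {
      module = it->second;
    }
  }
  CUfunction function;
  cuModuleGetFunction(&function, module, kernelName);
  cuLaunchKernel(function, gridX, gridY, gridZ, blockX, blockY, blockZ, smem,
                 stream, params, /*extra=*/nullptr);
}
```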
https://github.com/llvm/llvm-project/pull/135478