[llvm] [Offload] Allow CUDA Kernels to use arbitrarily large shared memory (PR #145963)
Johannes Doerfert via llvm-commits
llvm-commits at lists.llvm.org
Tue Jul 1 09:27:16 PDT 2025
================
@@ -1302,6 +1305,16 @@ Error CUDAKernelTy::launchImpl(GenericDeviceTy &GenericDevice,
if (GenericDevice.getRPCServer())
GenericDevice.Plugin.getRPCServer().Thread->notify();
+ // In case we require more memory than the current limit.
+ if (MaxDynCGroupMem >= MaxDynCGroupMemLimit) {
----------------
jdoerfert wrote:
They do. But OpenMP has no such mechanism, so we need to handle it transparently in the plugin.
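
For reference, a minimal sketch of what the transparent handling amounts to on the CUDA side (the helper name and the `Func` handle are hypothetical; the attribute and entry points are the standard CUDA Driver API for opting in to more than the default 48 KiB of dynamic shared memory):

    #include <cuda.h>

    // Raise the per-function dynamic shared memory cap if the kernel
    // requests more than the current limit allows.
    CUresult raiseDynSharedLimitIfNeeded(CUfunction Func, int RequiredBytes) {
      int CurrentLimit = 0;
      // Query the function's current dynamic shared memory cap.
      CUresult Res = cuFuncGetAttribute(
          &CurrentLimit, CU_FUNC_ATTRIBUTE_MAX_DYNAMIC_SHARED_SIZE_BYTES, Func);
      if (Res != CUDA_SUCCESS)
        return Res;

      // Only opt in to the larger carveout when the kernel actually needs it.
      if (RequiredBytes <= CurrentLimit)
        return CUDA_SUCCESS;

      return cuFuncSetAttribute(
          Func, CU_FUNC_ATTRIBUTE_MAX_DYNAMIC_SHARED_SIZE_BYTES, RequiredBytes);
    }

A CUDA user would call cuFuncSetAttribute themselves before launch; an OpenMP user has no handle on the underlying CUfunction, which is why the plugin must do this on their behalf.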
https://github.com/llvm/llvm-project/pull/145963