[llvm] [Offload] Allow CUDA Kernels to use arbitrarily large shared memory (PR #145963)
Shilei Tian via llvm-commits
llvm-commits at lists.llvm.org
Tue Jul 1 09:06:39 PDT 2025
================
@@ -1302,6 +1305,16 @@ Error CUDAKernelTy::launchImpl(GenericDeviceTy &GenericDevice,
   if (GenericDevice.getRPCServer())
     GenericDevice.Plugin.getRPCServer().Thread->notify();
 
+  // In case we require more memory than the current limit.
+  if (MaxDynCGroupMem >= MaxDynCGroupMemLimit) {
----------------
shiltian wrote:
Is this behavior the same as CUDA's? I thought that in CUDA, users have to set it explicitly?
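For reference, plain CUDA makes the client opt in before a launch may exceed the default dynamic shared memory limit (48 KiB on most GPUs). A minimal sketch against the driver API, where Func, Stream, and DynSharedBytes are hypothetical placeholders for the kernel handle, stream, and requested size:

    #include <cuda.h>

    // CUDA does not raise the per-function limit on the user's behalf;
    // the client must request it before launching.
    CUresult launchWithLargeSharedMem(CUfunction Func, CUstream Stream,
                                      int DynSharedBytes) {
      // Explicit opt-in to a larger dynamic shared memory size.
      CUresult Res = cuFuncSetAttribute(
          Func, CU_FUNC_ATTRIBUTE_MAX_DYNAMIC_SHARED_SIZE_BYTES,
          DynSharedBytes);
      if (Res != CUDA_SUCCESS)
        return Res;
      // Launch with the requested dynamic shared memory size.
      return cuLaunchKernel(Func, /*gridDim=*/1, 1, 1, /*blockDim=*/1, 1, 1,
                            DynSharedBytes, Stream, /*kernelParams=*/nullptr,
                            /*extra=*/nullptr);
    }

If the plugin raises the limit automatically whenever the requested size exceeds it, that would be more permissive than CUDA's own behavior.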
https://github.com/llvm/llvm-project/pull/145963