[llvm] [Offload] Allow CUDA Kernels to use arbitrarily large shared memory (PR #145963)
Johannes Doerfert via llvm-commits
llvm-commits at lists.llvm.org
Fri Jun 27 14:30:38 PDT 2025
================
@@ -160,6 +160,9 @@ struct CUDAKernelTy : public GenericKernelTy {
private:
/// The CUDA kernel function to execute.
CUfunction Func;
+ /// The maximum amount of dynamic shared memory per thread group. By default,
+ /// this is set to 48 KB.
+ mutable uint32_t MaxDynCGroupMemLimit = 49152;
----------------
jdoerfert wrote:
We don't want to check every time, and we can't fight the user; they could also change the context or other state, but that's not our concern. The only real question is whether we want to go back to a lower value or keep the high-water mark. Going back might benefit performance for those launches, but I'd stick with the high-water mark for now. Wrt. `mutable`, we need some solution and I'm open to suggestions; `mutable` isn't the worst option here, honestly.
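The high-water-mark policy described above can be sketched as follows. This is a minimal, self-contained illustration, not the actual plugin code: `setMaxDynSharedMem` is a hypothetical stand-in for the driver call (in CUDA this would be `cuFuncSetAttribute` with `CU_FUNC_ATTRIBUTE_MAX_DYNAMIC_SHARED_SIZE_BYTES`), and `ensureDynSharedMem` is an invented helper name. The `mutable` member mirrors the cached limit in the diff, updated from a `const` launch path.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical stand-in for the driver call (e.g. cuFuncSetAttribute);
// counts invocations so we can see the caching behavior.
static int DriverCalls = 0;
static void setMaxDynSharedMem(uint32_t /*Bytes*/) { ++DriverCalls; }

struct KernelSketch {
  // Cached high-water mark for dynamic shared memory. 49152 bytes
  // (48 KB) is the default per-block limit the diff starts from.
  mutable uint32_t MaxDynCGroupMemLimit = 49152;

  // Called from a const launch path (hence the mutable member above):
  // issue a driver call only when the request exceeds the cached
  // high-water mark, and never lower the mark afterwards.
  void ensureDynSharedMem(uint32_t Requested) const {
    if (Requested > MaxDynCGroupMemLimit) {
      setMaxDynSharedMem(Requested);
      MaxDynCGroupMemLimit = Requested;
    }
  }
};
```

With this policy, launching with 64 KB, then 32 KB, then 64 KB of dynamic shared memory issues a single driver call and leaves the 64 KB mark in place, which is the trade-off discussed above: subsequent smaller launches reuse the raised limit rather than paying another attribute update.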
https://github.com/llvm/llvm-project/pull/145963