[all-commits] [llvm/llvm-project] 5110ac: [Offload] Allow CUDA Kernels to use arbitrarily large shared memory
Giorgi Gvalia via All-commits
all-commits at lists.llvm.org
Mon Jul 7 12:26:38 PDT 2025
Branch: refs/heads/main
Home: https://github.com/llvm/llvm-project
Commit: 5110ac4113b5969315a38e0cffe7580a4ca847a1
https://github.com/llvm/llvm-project/commit/5110ac4113b5969315a38e0cffe7580a4ca847a1
Author: Giorgi Gvalia <49309634+gvalson at users.noreply.github.com>
Date: 2025-07-07 (Mon, 07 Jul 2025)
Changed paths:
M offload/plugins-nextgen/cuda/dynamic_cuda/cuda.cpp
M offload/plugins-nextgen/cuda/dynamic_cuda/cuda.h
M offload/plugins-nextgen/cuda/src/rtl.cpp
Log Message:
-----------
[Offload] Allow CUDA Kernels to use arbitrarily large shared memory (#145963)
Previously, the user could not use more than 48 KB of dynamic shared
memory on NVIDIA GPUs. Doing so requires setting the function attribute
`CU_FUNC_ATTRIBUTE_MAX_DYNAMIC_SHARED_SIZE_BYTES`, which the code base
did not support. This commit adds the ability to set this attribute,
allowing the user to utilize the GPU's full shared-memory capacity.
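For illustration, a minimal sketch of the underlying driver API call; the
helper name is hypothetical, not the plugin's actual code:

```cpp
#include <cuda.h>

// Hypothetical helper: opt a kernel into dynamic shared memory beyond
// the default 48 KB cap. Without this attribute, launches requesting
// more than 48 KB of dynamic shared memory fail on NVIDIA GPUs.
static CUresult allowLargeSharedMem(CUfunction Func, int RequestedBytes) {
  return cuFuncSetAttribute(
      Func, CU_FUNC_ATTRIBUTE_MAX_DYNAMIC_SHARED_SIZE_BYTES, RequestedBytes);
}
```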
To avoid resetting the function attribute on every launch of the same
kernel, we track the current limit (in the variable
`MaxDynCGroupMemLimit`) and only set the attribute when the requested
amount exceeds it. By default, this limit is 48 KB.
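A minimal sketch of that caching pattern, assuming a kernel wrapper type;
everything except the CUDA driver calls is illustrative:

```cpp
#include <cuda.h>

// Sketch of the caching idea; member and method names other than the
// CUDA driver calls are illustrative, not the plugin's real interface.
struct CachedLaunchKernel {
  CUfunction Func;
  // Cached attribute value; mutable so the const launch path can raise it.
  mutable int MaxDynCGroupMemLimit = 48 * 1024;

  CUresult launch(CUstream Stream, unsigned Grid, unsigned Block,
                  void **Args, int DynSharedBytes) const {
    // Only pay for cuFuncSetAttribute when the request exceeds the cap.
    if (DynSharedBytes > MaxDynCGroupMemLimit) {
      CUresult Res = cuFuncSetAttribute(
          Func, CU_FUNC_ATTRIBUTE_MAX_DYNAMIC_SHARED_SIZE_BYTES,
          DynSharedBytes);
      if (Res != CUDA_SUCCESS)
        return Res;
      MaxDynCGroupMemLimit = DynSharedBytes;
    }
    return cuLaunchKernel(Func, Grid, 1, 1, Block, 1, 1,
                          (unsigned)DynSharedBytes, Stream, Args,
                          /*extra=*/nullptr);
  }
};
```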
Feedback is greatly appreciated, especially on marking the new variable
`mutable`. I did this because the `launchImpl` method is const, so the
variable cannot be modified otherwise.
---------
Co-authored-by: Giorgi Gvalia <ggvalia at login33.chn.perlmutter.nersc.gov>
Co-authored-by: Giorgi Gvalia <ggvalia at login07.chn.perlmutter.nersc.gov>