[clang] [llvm] [clang][CUDA] Avoid accounting for tail padding in LLVM offloading (PR #156229)
Kevin Sala Penades via llvm-commits
llvm-commits at lists.llvm.org
Thu Sep 18 09:39:56 PDT 2025
kevinsala wrote:
@jhuber6 @Artem-B does the new approach look better? We now pass the same size for both NVIDIA and AMDGPU kernels.
https://github.com/llvm/llvm-project/pull/156229