[clang] [llvm] [clang][CUDA] Avoid accounting for tail padding in LLVM offloading (PR #156229)
Joseph Huber via llvm-commits
llvm-commits at lists.llvm.org
Fri Oct 3 16:18:26 PDT 2025
================
@@ -350,7 +351,15 @@ Address CGNVCUDARuntime::prepareKernelArgsLLVMOffload(CodeGenFunction &CGF,
KernelLaunchParamsTy, CharUnits::fromQuantity(16),
"kernel_launch_params");
- auto KernelArgsSize = CGM.getDataLayout().getTypeAllocSize(KernelArgsTy);
+ // Avoid accounting for the tail padding of the kernel arguments.
+ auto KernelArgsSize = llvm::TypeSize::getZero();
----------------
jhuber6 wrote:
That never occurs in OpenMP because we pad every argument to u64, so the struct is already a multiple of its alignment and there is no tail padding to trim, but I get what you mean.
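For reference, "tail padding" here is the padding the ABI appends after a struct's last field to round the struct up to its alignment; getTypeAllocSize includes it, which is what the PR wants to avoid. Below is a minimal sketch (not the PR's exact code; the helper name is hypothetical) of one way to measure a struct only up to the end of its last field. It assumes a non-empty struct with fixed-size (non-scalable) fields and a recent LLVM where StructLayout::getElementOffset returns a TypeSize:

#include "llvm/IR/DataLayout.h"
#include "llvm/IR/DerivedTypes.h"

// Hypothetical helper: size of a struct up to the end of its last
// field, i.e. excluding any tail padding added to round the struct
// up to its alignment. Inter-field padding is still included, since
// it is part of the last field's offset.
static llvm::TypeSize sizeWithoutTailPadding(const llvm::DataLayout &DL,
                                             llvm::StructType *Ty) {
  const llvm::StructLayout *SL = DL.getStructLayout(Ty);
  unsigned Last = Ty->getNumElements() - 1;
  // Offset of the last field plus its store size marks where the
  // meaningful bytes end; anything beyond that is tail padding.
  uint64_t End =
      SL->getElementOffset(Last).getFixedValue() +
      DL.getTypeStoreSize(Ty->getElementType(Last)).getFixedValue();
  return llvm::TypeSize::getFixed(End);
}

With every field padded to u64, as in the OpenMP case above, this would return the same value as getTypeAllocSize, which is why the distinction never mattered there.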
https://github.com/llvm/llvm-project/pull/156229