[clang] [llvm] [clang][CUDA] Avoid accounting for tail padding in LLVM offloading (PR #156229)

Kevin Sala Penades via cfe-commits cfe-commits at lists.llvm.org
Fri Oct 3 16:03:04 PDT 2025


================
@@ -350,7 +351,15 @@ Address CGNVCUDARuntime::prepareKernelArgsLLVMOffload(CodeGenFunction &CGF,
       KernelLaunchParamsTy, CharUnits::fromQuantity(16),
       "kernel_launch_params");
 
-  auto KernelArgsSize = CGM.getDataLayout().getTypeAllocSize(KernelArgsTy);
+  // Avoid accounting for the tail padding of the kernel arguments.
+  auto KernelArgsSize = llvm::TypeSize::getZero();
----------------
kevinsala wrote:

The PR title should probably be more generic. I don't think this issue only affects LLVM offloading; it could potentially affect any kernel in OpenMP offloading whose parameters have different alignments.

https://github.com/llvm/llvm-project/pull/156229
