[Mlir-commits] [mlir] [mlir] Let GPU ID bounds work on any FunctionOpInterfaces (PR #95166)

Krzysztof Drewniak llvmlistbot at llvm.org
Tue Jun 11 16:34:52 PDT 2024


================
@@ -73,12 +85,16 @@ static std::optional<uint64_t> getKnownLaunchDim(Op op, LaunchDims type) {
       return value.getZExtValue();
   }
 
-  if (auto func = op->template getParentOfType<GPUFuncOp>()) {
+  if (auto func = op->template getParentOfType<FunctionOpInterface>()) {
     switch (type) {
     case LaunchDims::Block:
-      return llvm::transformOptional(func.getKnownBlockSize(dim), zext);
+      return llvm::transformOptional(
+          getKnownLaunchAttr(func, GPUFuncOp::getKnownBlockSizeAttrName(), dim),
+          zext);
     case LaunchDims::Grid:
-      return llvm::transformOptional(func.getKnownGridSize(dim), zext);
+      return llvm::transformOptional(
+          getKnownLaunchAttr(func, GPUFuncOp::getKnownGridSizeAttrName(), dim),
+          zext);
----------------
krzysz00 wrote:

Yeah, honestly, I don't think `gpu.module` and `gpu.func` should exist.

I think `gpu.module` should be `builtin.module {target = #gpu.{nvvm,rocdl,...}_target<...>}` and `gpu.func` should be `func.func {gpu.kernel}`.
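
For illustration, a rough sketch of what that could look like in IR (the target attribute spelling and the `gpu.kernel` marker below follow the proposal and are hypothetical, not existing syntax):

```mlir
// Hypothetical layout under the proposal: a plain builtin module carrying a
// GPU target attribute, and an ordinary func.func marked as a kernel.
module attributes {target = #gpu.rocdl_target<chip = "gfx90a">} {
  func.func @kernel() attributes {gpu.kernel} {
    %tid = gpu.thread_id x
    return
  }
}
```

A layout like that would also fit the change in this hunk, since `func.func` implements `FunctionOpInterface` and could carry the known launch-size attributes directly.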

https://github.com/llvm/llvm-project/pull/95166

