[Mlir-commits] [mlir] [mlir] Let GPU ID bounds work on any FunctionOpInterfaces (PR #95166)

Mehdi Amini llvmlistbot at llvm.org
Tue Jun 11 16:30:14 PDT 2024


================
@@ -73,12 +85,16 @@ static std::optional<uint64_t> getKnownLaunchDim(Op op, LaunchDims type) {
       return value.getZExtValue();
   }
 
-  if (auto func = op->template getParentOfType<GPUFuncOp>()) {
+  if (auto func = op->template getParentOfType<FunctionOpInterface>()) {
     switch (type) {
     case LaunchDims::Block:
-      return llvm::transformOptional(func.getKnownBlockSize(dim), zext);
+      return llvm::transformOptional(
+          getKnownLaunchAttr(func, GPUFuncOp::getKnownBlockSizeAttrName(), dim),
+          zext);
     case LaunchDims::Grid:
-      return llvm::transformOptional(func.getKnownGridSize(dim), zext);
+      return llvm::transformOptional(
+          getKnownLaunchAttr(func, GPUFuncOp::getKnownGridSizeAttrName(), dim),
+          zext);
----------------
joker-eph wrote:

> Re getTargetInfo, my claim is that a user of the GPU dialect should not be (and, by current practice, is not) required to use a gpu.module as the container for their GPU compilations. They should be able to use a their_own_custom.module that's annotated with "this is targeting a GPU". 

Sure, but you're leaving out a key question here: how much of upstream can they expect to rely on when they are in this situation? The rubber hits the road when you try to write **all** the code without being able to assume anything about the structure or the operations used: having things compose is nice, but that does not mean the upstream lowerings will just work out of the box regardless.
(otherwise, why even bother introducing things like gpu.func and gpu.module at all?)
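For readers following the mechanics of the hunk above, here is a minimal sketch of what a getKnownLaunchAttr-style helper over FunctionOpInterface could look like. This is an illustration inferred from the call sites in the diff, not the patch's actual implementation; in particular the signature, the return type, and the assumption that the sizes are stored as a DenseI32ArrayAttr are guesses:

```c++
#include "mlir/IR/BuiltinAttributes.h"
#include "mlir/Interfaces/FunctionInterfaces.h"
#include <optional>

using namespace mlir;

// Hypothetical sketch: read one dimension of a known launch size from a
// discardable attribute attached to any FunctionOpInterface, so the
// enclosing function no longer has to be a gpu.func.
static std::optional<uint32_t>
getKnownLaunchAttr(FunctionOpInterface func, StringRef attrName,
                   unsigned dim) {
  // Assumption: the sizes are stored as a DenseI32ArrayAttr, e.g.
  // known_block_size = array<i32: 128, 1, 1>.
  auto array = func->getAttrOfType<DenseI32ArrayAttr>(attrName);
  if (!array)
    return std::nullopt;
  ArrayRef<int32_t> values = array.asArrayRef();
  if (dim >= values.size())
    return std::nullopt;
  return static_cast<uint32_t>(values[dim]);
}
```

The optional result would then feed llvm::transformOptional(..., zext) at the call sites shown in the hunk, widening the 32-bit attribute value to the 64-bit launch bound.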


https://github.com/llvm/llvm-project/pull/95166

