[Mlir-commits] [mlir] [mlir] Let GPU ID bounds work on any FunctionOpInterfaces (PR #95166)
Jakub Kuderski
llvmlistbot at llvm.org
Wed Jun 12 10:20:02 PDT 2024
================
@@ -73,12 +85,16 @@ static std::optional<uint64_t> getKnownLaunchDim(Op op, LaunchDims type) {
return value.getZExtValue();
}
- if (auto func = op->template getParentOfType<GPUFuncOp>()) {
+ if (auto func = op->template getParentOfType<FunctionOpInterface>()) {
switch (type) {
case LaunchDims::Block:
- return llvm::transformOptional(func.getKnownBlockSize(dim), zext);
+ return llvm::transformOptional(
+ getKnownLaunchAttr(func, GPUFuncOp::getKnownBlockSizeAttrName(), dim),
+ zext);
case LaunchDims::Grid:
- return llvm::transformOptional(func.getKnownGridSize(dim), zext);
+ return llvm::transformOptional(
+ getKnownLaunchAttr(func, GPUFuncOp::getKnownGridSizeAttrName(), dim),
+ zext);
----------------
kuhar wrote:
> My claim is that operations like gpu.thread_id and gpu.shuffle are in a third class: abstractions around platform-specific GPU intrinsics. That is, these are operations meant to abstract across what are almost inevitably platform-specific intrinsics, allowing people to write code generation schemes that can target "a GPU" (though somewhere in their context they're likely to know which).
+1, this has always been my understanding, and I'm not aware of any intended limitations as to where these ops have to live (`gpu.func`/`func.func`/`spirv.func`/etc.). None seem to be documented at the moment either, AFAICT.
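
For context, a minimal sketch of how a helper like `getKnownLaunchAttr` could be written, assuming the known block/grid sizes are stored as a `DenseI32ArrayAttr` on the surrounding function-like op; this is an illustration of the lookup the hunk above relies on, not necessarily the PR's actual implementation:

#include <optional>
#include "mlir/IR/BuiltinAttributes.h"
#include "mlir/Interfaces/FunctionInterfaces.h"

using namespace mlir;

// Hypothetical helper: read one dimension of a known launch bound from any
// FunctionOpInterface (gpu.func, func.func, ...), rather than from GPUFuncOp
// only. Assumes the bound is encoded as a dense i32 array attribute.
static std::optional<uint32_t>
getKnownLaunchAttr(FunctionOpInterface func, StringRef attrName, unsigned dim) {
  auto bounds = func->getAttrOfType<DenseI32ArrayAttr>(attrName);
  if (!bounds || dim >= static_cast<unsigned>(bounds.size()))
    return std::nullopt;
  return static_cast<uint32_t>(bounds[dim]);
}

At the call sites in the hunk, the resulting per-dimension bound is then zero-extended via `zext`, exactly as the previous `getKnownBlockSize`/`getKnownGridSize` results were.
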
https://github.com/llvm/llvm-project/pull/95166