[Mlir-commits] [mlir] [mlir] Let GPU ID bounds work on any FunctionOpInterfaces (PR #95166)
Krzysztof Drewniak
llvmlistbot at llvm.org
Tue Jun 11 16:24:44 PDT 2024
================
@@ -73,12 +85,16 @@ static std::optional<uint64_t> getKnownLaunchDim(Op op, LaunchDims type) {
       return value.getZExtValue();
   }
-  if (auto func = op->template getParentOfType<GPUFuncOp>()) {
+  if (auto func = op->template getParentOfType<FunctionOpInterface>()) {
     switch (type) {
     case LaunchDims::Block:
-      return llvm::transformOptional(func.getKnownBlockSize(dim), zext);
+      return llvm::transformOptional(
+          getKnownLaunchAttr(func, GPUFuncOp::getKnownBlockSizeAttrName(), dim),
+          zext);
     case LaunchDims::Grid:
-      return llvm::transformOptional(func.getKnownGridSize(dim), zext);
+      return llvm::transformOptional(
+          getKnownLaunchAttr(func, GPUFuncOp::getKnownGridSizeAttrName(), dim),
+          zext);
----------------
krzysz00 wrote:
That being said, you've got a point about the problems with discardable attributes, so how about ... we do both?
Known block/grid sizes would become optional inherent attributes of `gpu.func`, and lookups would use those values when they're present. However, when the parent of a `gpu.*_id` or `gpu.*_dim` op is some other kind of function, the data in question would live in `gpu.known_*_sizes`, a discardable attribute.
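For concreteness, a minimal sketch of that two-tier lookup could look like the following. It assumes the sizes are stored as `DenseI32ArrayAttr`; the helper name `lookupKnownBlockSize`, the generated accessor `getKnownBlockSizeAttr()` on `gpu.func`, and the discardable key `gpu.known_block_sizes` are illustrative placeholders, not settled API:

```cpp
#include <cstdint>
#include <optional>

#include "mlir/Dialect/GPU/IR/GPUDialect.h"
#include "mlir/Interfaces/FunctionInterfaces.h"

using namespace mlir;

// Sketch: prefer the inherent attribute when the surrounding function is a
// gpu.func, otherwise fall back to a discardable gpu.known_block_sizes
// attribute on whatever FunctionOpInterface we found. Names are placeholders
// for whatever the PR settles on.
static std::optional<uint64_t> lookupKnownBlockSize(FunctionOpInterface func,
                                                    unsigned dim) {
  auto extract = [&](DenseI32ArrayAttr sizes) -> std::optional<uint64_t> {
    ArrayRef<int32_t> values = sizes.asArrayRef();
    if (dim >= values.size())
      return std::nullopt;
    return static_cast<uint64_t>(values[dim]);
  };

  // Inherent-attribute path: the parent function is a gpu.func.
  if (auto gpuFunc = dyn_cast<gpu::GPUFuncOp>(func.getOperation())) {
    if (DenseI32ArrayAttr sizes = gpuFunc.getKnownBlockSizeAttr())
      return extract(sizes);
    return std::nullopt;
  }

  // Discardable-attribute path: any other kind of function op.
  if (auto sizes =
          func->getAttrOfType<DenseI32ArrayAttr>("gpu.known_block_sizes"))
    return extract(sizes);
  return std::nullopt;
}
```

The grid-size lookup would be identical modulo the attribute names, and the `zext` plumbing in the existing hunk stays unchanged.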
https://github.com/llvm/llvm-project/pull/95166