[Mlir-commits] [mlir] [mlir][gpu] Introduce `gpu.dynamic_shared_memory` Op (PR #71546)
Guray Ozen
llvmlistbot at llvm.org
Fri Nov 10 01:54:03 PST 2023
================
@@ -554,6 +557,101 @@ static IntegerAttr wrapNumericMemorySpace(MLIRContext *ctx, unsigned space) {
   return IntegerAttr::get(IntegerType::get(ctx, 64), space);
 }

+/// Generates a symbol with 0-sized array type for dynamic shared memory usage,
+/// or uses existing symbol.
+LLVM::GlobalOp
+getDynamicSharedMemorySymbol(ConversionPatternRewriter &rewriter,
+                             gpu::DynamicSharedMemoryOp op,
+                             const LLVMTypeConverter *typeConverter,
+                             MemRefType memrefType, unsigned alignmentBit) {
+  LLVM::LLVMFuncOp funcOp = op->getParentOfType<LLVM::LLVMFuncOp>();
+  assert(funcOp && "cannot find llvm.func op");
+
+  gpu::GPUModuleOp moduleOp = funcOp->getParentOfType<gpu::GPUModuleOp>();
+  assert(moduleOp && "cannot find gpu.module op");
----------------
grypp wrote:
I added the asserts with third-party apps in mind. For instance, if IREE adopted this new Op, it would hit them, since IREE has no gpu.func/gpu.module (due to its different GPU codegen). I thought a descriptive assert was better than an ICE, giving third-party apps a starting point for adapting the Op lowering to their needs.
In the context of MLIR upstream, these asserts may be overly cautious :) Consider the following IR:
```
gpu.module @modules {
  gpu.func @gpu_device_function() {
    %shmem = gpu.dynamic_shared_memory : memref<?xi8, 3>
    gpu.return
  }
  func.func @func_device_function() {
    %shmem = gpu.dynamic_shared_memory : memref<?xi8, 3>
    return
  }
}
```
The `assert(moduleOp && "cannot find gpu.module op")` is overkill, since the `convert-gpu-to-nvvm` pass operates on `gpu::GPUModuleOp`, which guarantees its presence.
Similarly, the `assert(funcOp && "cannot find llvm.func op")` is unnecessary, since during this lowering both `gpu.func` and `func.func` become `llvm.func`.
We can get rid of them both.
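If we do drop them, an alternative to crashing is to bail out of the pattern with a recoverable failure, so pipelines without a `gpu.module`/`llvm.func` ancestor get a diagnostic instead of an assert. A minimal sketch of that idea (the pattern name and messages are illustrative, not the exact code in this PR):
```
// Hypothetical sketch: replace the asserts with notifyMatchFailure so that
// out-of-tree pipelines (e.g. IREE's GPU codegen) see a recoverable failure
// rather than a crash. Names are illustrative only.
#include "mlir/Conversion/LLVMCommon/Pattern.h"
#include "mlir/Dialect/GPU/IR/GPUDialect.h"
#include "mlir/Dialect/LLVMIR/LLVMDialect.h"

using namespace mlir;

struct DynamicSharedMemoryOpLowering
    : public ConvertOpToLLVMPattern<gpu::DynamicSharedMemoryOp> {
  using ConvertOpToLLVMPattern::ConvertOpToLLVMPattern;

  LogicalResult
  matchAndRewrite(gpu::DynamicSharedMemoryOp op, OpAdaptor adaptor,
                  ConversionPatternRewriter &rewriter) const override {
    auto funcOp = op->getParentOfType<LLVM::LLVMFuncOp>();
    if (!funcOp)
      return rewriter.notifyMatchFailure(op, "expected llvm.func ancestor");

    auto moduleOp = funcOp->getParentOfType<gpu::GPUModuleOp>();
    if (!moduleOp)
      return rewriter.notifyMatchFailure(op, "expected gpu.module ancestor");

    // ... create or reuse the 0-sized global and rewrite the op as in the PR ...
    return success();
  }
};
```
That said, if the pass preconditions already guarantee both parents, simply deleting the asserts keeps the helper simpler.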
Thoughts?
https://github.com/llvm/llvm-project/pull/71546