[Mlir-commits] [mlir] [mlir][gpu] Introduce `gpu.dynamic_shared_memory` Op (PR #71546)

Guray Ozen llvmlistbot at llvm.org
Fri Nov 10 04:00:37 PST 2023


================
@@ -554,6 +557,104 @@ static IntegerAttr wrapNumericMemorySpace(MLIRContext *ctx, unsigned space) {
   return IntegerAttr::get(IntegerType::get(ctx, 64), space);
 }
 
+/// Generates a symbol with 0-sized array type for dynamic shared memory usage,
+/// or uses existing symbol.
+LLVM::GlobalOp getDynamicSharedMemorySymbol(
+    ConversionPatternRewriter &rewriter, gpu::DynamicSharedMemoryOp op,
+    const LLVMTypeConverter *typeConverter, MemRefType memrefType,
+    unsigned alignmentBit, unsigned addressSpace) {
+  LLVM::LLVMFuncOp funcOp = op->getParentOfType<LLVM::LLVMFuncOp>();
+  assert(funcOp && "cannot find llvm.func op");
+
+  gpu::GPUModuleOp moduleOp = funcOp->getParentOfType<gpu::GPUModuleOp>();
+  assert(moduleOp && "cannot find gpu.module op");
+
+  // Step 1. Return existing global op if it exists
+  uint64_t alignmentByte = alignmentBit / memrefType.getElementTypeBitWidth();
+  for (auto &innerOp : moduleOp->getRegions().front().front().getOperations()) {
+    if (auto globalOp = dyn_cast<LLVM::GlobalOp>(innerOp)) {
----------------
grypp wrote:

Not from what I've seen ([see the code](https://github.com/llvm/llvm-project/blob/main/mlir/include/mlir/IR/Block.h#L130))

I can add the code below as a follow-up PR if you want. But I'm not sure it's a good idea: `operations` is an intrusive linked list (`llvm::iplist<Operation>`), so I would need an extra `SmallVector<OpTy>` to hold the filtered ops.
```
template <typename OpTy>
SmallVector<OpTy> getOperations() const {
  SmallVector<OpTy> filteredOps;
  for (auto &op : operations) {
    if (auto cast = dyn_cast<OpTy>(op)) {
      filteredOps.push_back(cast);
    }
  }
  return filteredOps;
}
```

https://github.com/llvm/llvm-project/pull/71546

