[Mlir-commits] [mlir] [mlir][gpu] Introduce `gpu.dynamic_shared_memory` Op (PR #71546)
Mehdi Amini
llvmlistbot at llvm.org
Thu Nov 9 06:30:20 PST 2023
================
@@ -554,6 +557,101 @@ static IntegerAttr wrapNumericMemorySpace(MLIRContext *ctx, unsigned space) {
   return IntegerAttr::get(IntegerType::get(ctx, 64), space);
 }
+/// Generates a symbol with 0-sized array type for dynamic shared memory usage,
+/// or uses existing symbol.
+LLVM::GlobalOp
+getDynamicSharedMemorySymbol(ConversionPatternRewriter &rewriter,
+                             gpu::DynamicSharedMemoryOp op,
+                             const LLVMTypeConverter *typeConverter,
+                             MemRefType memrefType, unsigned alignmentBit) {
+  LLVM::LLVMFuncOp funcOp = op->getParentOfType<LLVM::LLVMFuncOp>();
+  assert(funcOp && "cannot find llvm.func op");
+
+  gpu::GPUModuleOp moduleOp = funcOp->getParentOfType<gpu::GPUModuleOp>();
+  assert(moduleOp && "cannot find gpu.module op");
+
+  uint64_t alignmentByte = alignmentBit / memrefType.getElementTypeBitWidth();
+
+  LLVM::GlobalOp existingGlobalOp;
+  moduleOp->walk([&](LLVM::GlobalOp globalOp) {
+    if (auto arrayType = dyn_cast<LLVM::LLVMArrayType>(globalOp.getType())) {
+      if (arrayType.getNumElements() == 0 &&
+          globalOp.getAlignment().value_or(0) == alignmentByte) {
----------------
joker-eph wrote:
I don't quite get why this is enough of a condition?
https://github.com/llvm/llvm-project/pull/71546
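For context on the check being questioned here, the symbol that getDynamicSharedMemorySymbol emits (and that the walk above tries to find and reuse) looks roughly like the sketch below; the symbol name, alignment, and address-space values are illustrative, not quoted from the patch:

  gpu.module @kernel_module {
    // Zero-element i8 array in the workgroup (shared) address space. The walk
    // treats any zero-element array global with a matching alignment as
    // reusable instead of creating a fresh symbol.
    llvm.mlir.global internal @__dynamic_shmem__0() {addr_space = 3 : i32, alignment = 16 : i64} : !llvm.array<0 x i8>
  }

Each gpu.dynamic_shared_memory op in the module is then lowered to an llvm.mlir.addressof of such a global.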