[Mlir-commits] [mlir] [MLIR][Linalg] Add more specialize patterns (PR #91153)

Javed Absar llvmlistbot at llvm.org
Mon May 6 14:40:38 PDT 2024


================
@@ -70,6 +70,105 @@ bool linalg::isaCopyOpInterface(LinalgOp linalgOp) {
   return llvm::hasSingleElement(linalgOp.getBlock()->getOperations());
 }
 
+//===----------------------------------------------------------------------===//
+// FillOpInterface implementation
+//===----------------------------------------------------------------------===//
+bool linalg::isaFillOpInterface(GenericOp genericOp) {
+  // Structural.
+  if (genericOp.getNumParallelLoops() != genericOp.getNumLoops())
+    return false;
+
+  if (genericOp.getNumDpsInputs() != 1 || genericOp.getNumDpsInits() != 1)
+    return false;
+
+  // Input should be referenced and init should not.
+  if (!genericOp.payloadUsesValueFromOperand(genericOp.getDpsInputOperand(0)) ||
+      genericOp.payloadUsesValueFromOperand(genericOp.getDpsInitOperand(0)))
+    return false;
+
+  OpOperand *value = genericOp.getDpsInputOperand(0);
+  if (!genericOp.isScalar(value))
+    return false;
+
+  Block *body = genericOp.getBody();
+  if (body->getOperations().size() != 1)
+    return false;
+
+  auto yieldOp = dyn_cast<linalg::YieldOp>(body->back());
+  if (!yieldOp || yieldOp.getNumOperands() != 1 ||
+      yieldOp->getOperand(0) != body->getArgument(0))
+    return false;
+  return true;
+}
+
+//===----------------------------------------------------------------------===//
+// Elementwise-Unary/Binary-OpInterface implementation
+//===----------------------------------------------------------------------===//
+static bool isaElementwiseUnaryOrBinaryOpInterface(linalg::GenericOp genericOp,
+                                                   unsigned arity) {
+  // Check all loops are parallel, and have only tensor semantics.
+  if (genericOp.getNumParallelLoops() != genericOp.getNumLoops() ||
+      genericOp.getNumLoops() < 1 || !genericOp.hasPureTensorSemantics())
+    return false;
+
+  // Check there are arity-inputs, 1-output and all are identity-maps.
+  if (genericOp.getNumDpsInputs() != arity || genericOp.getNumDpsInits() != 1 ||
+      !llvm::all_of(genericOp.getIndexingMapsArray(),
----------------
javedabsar1 wrote:

> I think this is related to https://discourse.llvm.org/t/notes-from-the-mlir-upstream-round-table-eurollvm-2024/78374/11?u=maheshravishankar . Please correct me if I am wrong, but IMO this is too restrictive. It is perfectly reasonable for binary operations to have some "explicit broadcasting support". Is this already an assumption of these ops, or is this being added here?

@MaheshRavishankar : Good point on broadcast. I hope I got your exact question right.
My answer, to the best of my understanding: a named op such as `linalg.add` cannot express an implicit broadcast via operand shapes alone, e.g.
` linalg.add ins(%arg0, %arg1 :  tensor<10x10xf32>,  tensor<10x100xf32>) outs(%arg2:  tensor<10x100xf32>) -> tensor<10x100xf32>`
`error: 'linalg.add' op inferred input/output operand #1 has shape's dimension #1 to be 10, but found 100`
But a `linalg.generic` can easily express the implicit broadcast through an explicit indexing map.
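To illustrate (a sketch I'm adding for clarity, not code from the PR; the value names `%mat`, `%vec`, `%init` are hypothetical): a `linalg.generic` can broadcast a 1-D operand across the rows of a 2-D operand simply by using an indexing map that drops a dimension:

```mlir
// Illustrative sketch: broadcast %vec along dim 0 of %mat via an indexing map.
#map2d = affine_map<(d0, d1) -> (d0, d1)>
#bcast = affine_map<(d0, d1) -> (d1)>   // drops d0: implicit broadcast of %vec
%res = linalg.generic
    {indexing_maps = [#map2d, #bcast, #map2d],
     iterator_types = ["parallel", "parallel"]}
    ins(%mat, %vec : tensor<10x100xf32>, tensor<100xf32>)
    outs(%init : tensor<10x100xf32>) {
  ^bb0(%a: f32, %b: f32, %out: f32):
    %sum = arith.addf %a, %b : f32
    linalg.yield %sum : f32
} -> tensor<10x100xf32>
```

Note that a generic like this would be rejected by an "all indexing maps are identity" check, which is exactly the restriction under discussion here.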

https://github.com/llvm/llvm-project/pull/91153
