[Mlir-commits] [mlir] [mlir][Interface] `DestinationStyleOpInterface`: Rename `hasTensor/BufferSemantics` (PR #77574)

llvmlistbot at llvm.org llvmlistbot at llvm.org
Wed Jan 10 02:01:41 PST 2024


llvmbot wrote:



@llvm/pr-subscribers-mlir-nvgpu

Author: Matthias Springer (matthias-springer)

Changes:

Rename interface functions as follows:
* `hasTensorSemantics` -> `hasPureTensorSemantics`
* `hasBufferSemantics` -> `hasPureBufferSemantics`

These two functions return "true" if the op has tensor operands but no buffer operands, or buffer operands but no tensor operands, respectively.
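
To make that concrete, here is a minimal sketch of what the renamed checks compute in terms of operand types, mirroring the implementations in the diff below; the free-standing helper name is hypothetical and not part of the patch:

```cpp
#include "mlir/Interfaces/DestinationStyleOpInterface.h"
#include "llvm/ADT/STLExtras.h"

// Hypothetical helper restating hasPureTensorSemantics(): "pure" means the
// op has at least one tensor operand and no memref operand at all.
static bool computePureTensorSemantics(mlir::DestinationStyleOpInterface op) {
  // At least one tensor operand...
  bool anyTensor = llvm::any_of(op->getOperands(), [](mlir::Value v) {
    return llvm::isa<mlir::TensorType>(v.getType());
  });
  // ...and no memref operand (note BaseMemRefType: ranked or unranked).
  bool anyMemref = llvm::any_of(op->getOperands(), [](mlir::Value v) {
    return llvm::isa<mlir::BaseMemRefType>(v.getType());
  });
  return anyTensor && !anyMemref;
}
```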

Add two new interface functions:
* `hasTensorSemantics`: Return "true" if the op has tensor operands, regardless of whether it also has buffer operands.
* `hasBufferSemantics`: Return "true" if the op has buffer operands, regardless of whether it also has tensor operands.
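
A minimal usage sketch of how the four predicates relate (the helper name `classifySemantics` is hypothetical): an op with both tensor and memref operands satisfies `hasTensorSemantics` and `hasBufferSemantics`, but neither of the "pure" variants.

```cpp
#include "mlir/Interfaces/DestinationStyleOpInterface.h"
#include "llvm/ADT/StringRef.h"

// Hypothetical helper: classify a DPS op using the four predicates.
static llvm::StringRef classifySemantics(mlir::DestinationStyleOpInterface op) {
  if (op.hasPureTensorSemantics())
    return "pure tensor"; // tensor operands, no memrefs
  if (op.hasPureBufferSemantics())
    return "pure buffer"; // memref operands, no tensors
  if (op.hasTensorSemantics()) // at this point, implies memref operands too
    return "mixed";
  return "scalar-only"; // neither tensor nor memref operands
}
```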

Also drop the "ranked" requirement from the interface, i.e., no longer distinguish between ranked and unranked types.

This change aligns the meaning of "tensor semantics" with the bufferization framework: an op is supposed to be bufferized if it has tensor operands, regardless of whether it also has memref operands. It is in preparation for #75273, which adds `BufferizableOpInterface::hasTensorSemantics`.
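
As a hedged illustration of that alignment (a sketch, not code from this patch or from #75273), a bufferization driver would now gate on the inclusive check:

```cpp
#include "mlir/Interfaces/DestinationStyleOpInterface.h"

// Sketch: with the new meaning, an op still needs bufferization iff it has
// any remaining tensor operand; already-bufferized (pure buffer) and
// scalar-only ops are skipped, and extra memref operands are irrelevant.
static bool needsBufferization(mlir::DestinationStyleOpInterface op) {
  return op.hasTensorSemantics();
}
```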

---

Patch is 36.28 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/77574.diff


25 Files Affected:

- (modified) mlir/include/mlir/Interfaces/DestinationStyleOpInterface.td (+36-39) 
- (modified) mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp (+5-5) 
- (modified) mlir/lib/Dialect/Linalg/Transforms/BubbleUpExtractSlice.cpp (+1-1) 
- (modified) mlir/lib/Dialect/Linalg/Transforms/BufferizableOpInterfaceImpl.cpp (+3-3) 
- (modified) mlir/lib/Dialect/Linalg/Transforms/ConstantFold.cpp (+1-1) 
- (modified) mlir/lib/Dialect/Linalg/Transforms/DecomposeLinalgOps.cpp (+1-1) 
- (modified) mlir/lib/Dialect/Linalg/Transforms/DropUnitDims.cpp (+1-1) 
- (modified) mlir/lib/Dialect/Linalg/Transforms/ElementwiseOpFusion.cpp (+8-8) 
- (modified) mlir/lib/Dialect/Linalg/Transforms/EraseUnusedOperandsAndResults.cpp (+2-2) 
- (modified) mlir/lib/Dialect/Linalg/Transforms/Generalization.cpp (+1-1) 
- (modified) mlir/lib/Dialect/Linalg/Transforms/InlineScalarOperands.cpp (+1-1) 
- (modified) mlir/lib/Dialect/Linalg/Transforms/Loops.cpp (+3-3) 
- (modified) mlir/lib/Dialect/Linalg/Transforms/NamedOpConversions.cpp (+1-1) 
- (modified) mlir/lib/Dialect/Linalg/Transforms/Padding.cpp (+2-2) 
- (modified) mlir/lib/Dialect/Linalg/Transforms/Promotion.cpp (+5-3) 
- (modified) mlir/lib/Dialect/Linalg/Transforms/TilingInterfaceImpl.cpp (+2-2) 
- (modified) mlir/lib/Dialect/Linalg/Transforms/Transforms.cpp (+3-3) 
- (modified) mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp (+1-1) 
- (modified) mlir/lib/Dialect/Linalg/Utils/Utils.cpp (+5-5) 
- (modified) mlir/lib/Dialect/NVGPU/Transforms/CreateAsyncGroups.cpp (+1-1) 
- (modified) mlir/lib/Dialect/SparseTensor/Transforms/SparseReinterpretMap.cpp (+2-2) 
- (modified) mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp (+5-5) 
- (modified) mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp (+1-1) 
- (modified) mlir/lib/Interfaces/DestinationStyleOpInterface.cpp (+4-4) 
- (modified) mlir/tools/mlir-linalg-ods-gen/mlir-linalg-ods-yaml-gen.cpp (+1-1) 


``````````diff
diff --git a/mlir/include/mlir/Interfaces/DestinationStyleOpInterface.td b/mlir/include/mlir/Interfaces/DestinationStyleOpInterface.td
index 4c52d803e11476..b1ea4c82a08c82 100644
--- a/mlir/include/mlir/Interfaces/DestinationStyleOpInterface.td
+++ b/mlir/include/mlir/Interfaces/DestinationStyleOpInterface.td
@@ -17,24 +17,26 @@ def DestinationStyleOpInterface : OpInterface<"DestinationStyleOpInterface"> {
     as initial tensor values for the results of the operation or the init
     buffers to which the results of the op will be written.
 
-    Init operands must be ranked tensors or ranked memrefs. Input operands can
-    have any type. All non-init operands are DPS inputs.
+    Init operands must be tensors or memrefs. Input operands can have any type.
+    All non-init operands are DPS inputs.
 
     The init operands of this op are specified by the MutableOperandRange that
     the `getDpsInitsMutable` interface methods returns. This implies that the
     init operands must be a consecutive range of operands.
 
-    If the op has "tensor semantics", then the input operands are either ranked
-    tensors or other non-tensor/memref types ("scalars"). The init operands are
-    ranked tensors and every tensor init is tied to a corresponding tensor
-    OpResult in a 1-to-1 fashion. The i-th init tensor is tied to the i-th
-    OpResult. The op may not have any additional OpResults. Init operands and
-    their tied OpResults have the same type. Dynamic dimension sizes also match
-    at runtime.
+    Each tensor init operand is tied to a corresponding tensor OpResult in a
+    1-to-1 fashion. The i-th init tensor is tied to the i-th OpResult. The op
+    may not have any additional OpResults. Init operands and their tied
+    OpResults have the same type. Dynamic dimension sizes also match at runtime.
 
-    If the op has "buffer semantics", then the input operands are either ranked
-    memrefs or other non-tensor/memref types ("scalar" types). Furthermore, the
-    init operands are ranked memrefs and the op has no results.
+    Note: This implies that a destination style op without any tensor inits must
+    not have any OpResults.
+
+    An op has "tensor semantics" if it has at least one tensor operand.
+    An op has "buffer semantics" if it has at least one buffer (memref) operand.
+    An op has "pure tensor semantics" if it has tensor semantics but not buffer
+    semantics. An op has "pure buffer semantics" if it has buffer semantics but
+    not tensor semantics.
 
     Destination-passing style abstraction makes certain transformations easier.
     For example, tiling implementation can extract/insert slices from/into the
@@ -148,7 +150,8 @@ def DestinationStyleOpInterface : OpInterface<"DestinationStyleOpInterface"> {
     /// neither a MemRef nor a tensor value.
     bool isScalar(::mlir::OpOperand *opOperand) {
       assert(opOperand->getOwner() == $_op && "invalid operand");
-      return !::llvm::isa<MemRefType, TensorType>(opOperand->get().getType());
+      return !::llvm::isa<BaseMemRefType, TensorType>(
+          opOperand->get().getType());
     }
 
     /// Return the OpResult that is tied to the given OpOperand.
@@ -169,36 +172,30 @@ def DestinationStyleOpInterface : OpInterface<"DestinationStyleOpInterface"> {
       return $_op.getDpsInitOperand(opResult.getResultNumber());
     }
 
-    /// Return whether the op has buffer semantics. That is the case if the op
-    /// has no ranked tensor operands and at least one memref operand.
+    /// Return whether the op has buffer semantics. That is the case if the
+    /// op has at least one memref operand.
     bool hasBufferSemantics() {
-      // No tensors.
-      auto isTensor = [](Value v){
-        return ::llvm::isa<::mlir::RankedTensorType>(v.getType());
-      };
-      if (::llvm::any_of($_op->getOperands(), isTensor))
-        return false;
-      // At least one memref.
-      auto isMemref = [](Value v){
-        return ::llvm::isa<::mlir::MemRefType>(v.getType());
-      };
-      return llvm::any_of($_op->getOperands(), isMemref);
+      return ::llvm::any_of($_op->getOperands(),
+          [](Value v) { return isa<BaseMemRefType>(v.getType()); });
     }
 
-    /// Return whether the op has tensor semantics. That is the case if the op
-    /// has no memref operands and at least one ranked tensor operand.
+    /// Return whether the op has tensor semantics. That is the case if the
+    /// op has at least one tensor operand.
     bool hasTensorSemantics() {
-      // No memrefs.
-      auto isMemref = [](Value v){
-        return ::llvm::isa<::mlir::MemRefType>(v.getType());
-      };
-      if (::llvm::any_of($_op->getOperands(), isMemref))
-        return false;
-      // At least one tensor.
-      auto isTensor = [](Value v){
-        return ::llvm::isa<::mlir::RankedTensorType>(v.getType());
-      };
-      return llvm::any_of($_op->getOperands(), isTensor);
+      return ::llvm::any_of($_op->getOperands(),
+          [](Value v) { return isa<TensorType>(v.getType()); });
+    }
+
+    /// Return whether the op has pure buffer semantics. That is the case if the
+    /// op has no tensor operands and at least one memref operand.
+    bool hasPureBufferSemantics() {
+      return hasBufferSemantics() && !hasTensorSemantics();
+    }
+
+    /// Return whether the op has pure tensor semantics. That is the case if the
+    /// op has no memref operands and at least one tensor operand.
+    bool hasPureTensorSemantics() {
+      return hasTensorSemantics() && !hasBufferSemantics();
     }
   }];
 
diff --git a/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp b/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
index b68aa77fd83a1c..828a140be75456 100644
--- a/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
+++ b/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
@@ -550,7 +550,7 @@ struct EraseSelfCopy : OpRewritePattern<CopyOp> {
                                 PatternRewriter &rewriter) const override {
     if (copyOp.getInputs() != copyOp.getOutputs())
       return rewriter.notifyMatchFailure(copyOp, "not a self copy");
-    if (copyOp.hasBufferSemantics())
+    if (copyOp.hasPureBufferSemantics())
       rewriter.eraseOp(copyOp);
     else
       rewriter.replaceOp(copyOp, copyOp.getInputs());
@@ -1112,7 +1112,7 @@ struct EraseIdentityGenericOp : public OpRewritePattern<GenericOp> {
       return failure();
 
     // In the buffer case, we need to check exact buffer equality.
-    if (genericOp.hasBufferSemantics()) {
+    if (genericOp.hasPureBufferSemantics()) {
       if (genericOp.getNumDpsInputs() == 1 && genericOp.getNumDpsInits() == 1 &&
           genericOp.getDpsInputOperand(0)->get() ==
               genericOp.getDpsInitOperand(0)->get()) {
@@ -1123,7 +1123,7 @@ struct EraseIdentityGenericOp : public OpRewritePattern<GenericOp> {
     }
 
     // Mixed semantics is not supported yet.
-    if (!genericOp.hasTensorSemantics())
+    if (!genericOp.hasPureTensorSemantics())
       return failure();
 
     // Get the argument number of the returned values. That is the operand
@@ -2257,7 +2257,7 @@ struct InferStaticShapeOfOperands : public OpInterfaceRewritePattern<LinalgOp> {
 
   LogicalResult matchAndRewrite(LinalgOp linalgOp,
                                 PatternRewriter &rewriter) const override {
-    if (!linalgOp.hasTensorSemantics())
+    if (!linalgOp.hasPureTensorSemantics())
       return failure();
 
     // Maps must be projected permutations.
@@ -2376,7 +2376,7 @@ SoftmaxOp::getTiledImplementation(OpBuilder &builder,
       getSlice(builder, getLoc(), getOutput(), offsets, sizes, strides));
 
   SmallVector<Type, 4> resultTypes;
-  if (hasTensorSemantics())
+  if (hasPureTensorSemantics())
     resultTypes.push_back(tiledOperands[1].getType());
   Operation *tiledOp =
       mlir::clone(builder, getOperation(), resultTypes, tiledOperands);
diff --git a/mlir/lib/Dialect/Linalg/Transforms/BubbleUpExtractSlice.cpp b/mlir/lib/Dialect/Linalg/Transforms/BubbleUpExtractSlice.cpp
index 5c4bc9137c10a8..428422e6e875a2 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/BubbleUpExtractSlice.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/BubbleUpExtractSlice.cpp
@@ -68,7 +68,7 @@ struct BubbleUpExtractSliceOpPattern
                                          "expected single output of linalg op");
     }
 
-    if (!linalgOp.hasTensorSemantics()) {
+    if (!linalgOp.hasPureTensorSemantics()) {
       return rewriter.notifyMatchFailure(sliceOp,
                                          "expected tensor of linalg op");
     }
diff --git a/mlir/lib/Dialect/Linalg/Transforms/BufferizableOpInterfaceImpl.cpp b/mlir/lib/Dialect/Linalg/Transforms/BufferizableOpInterfaceImpl.cpp
index 0577441bdd28d2..b232d56d4419f6 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/BufferizableOpInterfaceImpl.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/BufferizableOpInterfaceImpl.cpp
@@ -32,13 +32,13 @@ bufferizeDestinationStyleOpInterface(RewriterBase &rewriter,
   rewriter.setInsertionPoint(op);
 
   // Nothing to do. This op is already bufferized.
-  if (op.hasBufferSemantics())
+  if (op.hasPureBufferSemantics())
     return success();
 
   // Ensure op has only tensors. Allow mixed tensor-buffer mode on a per-need
   // basis.
-  if (!op.hasTensorSemantics())
-    return op->emitError() << "op does not have tensor semantics";
+  if (!op.hasPureTensorSemantics())
+    return op->emitError() << "op does not have pure tensor semantics";
 
   // New input operands for the cloned op.
   SmallVector<Value> newInputBuffers;
diff --git a/mlir/lib/Dialect/Linalg/Transforms/ConstantFold.cpp b/mlir/lib/Dialect/Linalg/Transforms/ConstantFold.cpp
index 062751552b3cc6..8fffabf11f3fdd 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/ConstantFold.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/ConstantFold.cpp
@@ -57,7 +57,7 @@ class FoldConstantBase : public OpRewritePattern<GenericOp> {
   LogicalResult matchAndRewrite(GenericOp genericOp,
                                 PatternRewriter &rewriter) const override {
     // Mixed and buffer sematics aren't supported.
-    if (!genericOp.hasTensorSemantics())
+    if (!genericOp.hasPureTensorSemantics())
       return failure();
 
     // Only support ops generating one output for now.
diff --git a/mlir/lib/Dialect/Linalg/Transforms/DecomposeLinalgOps.cpp b/mlir/lib/Dialect/Linalg/Transforms/DecomposeLinalgOps.cpp
index 28f4d8ac64431a..5cd6d4597affaf 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/DecomposeLinalgOps.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/DecomposeLinalgOps.cpp
@@ -258,7 +258,7 @@ DecomposeLinalgOp::matchAndRewrite(GenericOp genericOp,
   // TODO: this could be generalized to handle `linalg.generic` with buffer
   // operands too but requires allocation for intermediates. Punt on this for
   // now.
-  if (!genericOp.hasTensorSemantics()) {
+  if (!genericOp.hasPureTensorSemantics()) {
     return rewriter.notifyMatchFailure(
         genericOp, "only operations with tensor semantics are handled");
   }
diff --git a/mlir/lib/Dialect/Linalg/Transforms/DropUnitDims.cpp b/mlir/lib/Dialect/Linalg/Transforms/DropUnitDims.cpp
index c495956fa57702..e6f4ed5b51b1e6 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/DropUnitDims.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/DropUnitDims.cpp
@@ -83,7 +83,7 @@ struct MoveInitOperandsToInput : public OpRewritePattern<GenericOp> {
   using OpRewritePattern<GenericOp>::OpRewritePattern;
   LogicalResult matchAndRewrite(GenericOp genericOp,
                                 PatternRewriter &rewriter) const override {
-    if (!genericOp.hasTensorSemantics())
+    if (!genericOp.hasPureTensorSemantics())
       return failure();
     if (genericOp.getNumParallelLoops() != genericOp.getNumLoops())
       return failure();
diff --git a/mlir/lib/Dialect/Linalg/Transforms/ElementwiseOpFusion.cpp b/mlir/lib/Dialect/Linalg/Transforms/ElementwiseOpFusion.cpp
index 3eb91190751ef1..031f5c7a5d4783 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/ElementwiseOpFusion.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/ElementwiseOpFusion.cpp
@@ -105,7 +105,7 @@ bool mlir::linalg::areElementwiseOpsFusable(OpOperand *fusedOperand) {
   // Consumer can have mixed semantics, just check operand itself has tensor
   // type. Producer must have full tensor semantics to avoid potential
   // aliasing between producer and consumer memrefs.
-  if (!producer.hasTensorSemantics() ||
+  if (!producer.hasPureTensorSemantics() ||
       !isa<RankedTensorType>(fusedOperand->get().getType()))
     return false;
 
@@ -530,7 +530,7 @@ static bool isFusableWithReshapeByDimExpansion(GenericOp genericOp,
   //   permutations.
   // - The fused tensor is not a scalar.
   // - All the loops are parallel loops.
-  return genericOp.hasTensorSemantics() &&
+  return genericOp.hasPureTensorSemantics() &&
          llvm::all_of(genericOp.getIndexingMaps().getValue(),
                       [](Attribute attr) {
                         return cast<AffineMapAttr>(attr)
@@ -1124,7 +1124,7 @@ static SmallVector<ReassociationIndices>
 getCollapsableIterationSpaceDims(GenericOp genericOp, OpOperand *fusableOperand,
                                  ArrayRef<ReassociationIndices> reassociation) {
   // Some basic checks for this fusion to be valid.
-  if (!genericOp.hasTensorSemantics() || genericOp.getNumDpsInits() != 1)
+  if (!genericOp.hasPureTensorSemantics() || genericOp.getNumDpsInits() != 1)
     return {};
 
   if (!llvm::all_of(genericOp.getIndexingMapsArray(), [](AffineMap map) {
@@ -1476,7 +1476,7 @@ Operation *createCollapsedOp(LinalgType op,
     outputOperands.push_back(newOutput);
     // If the op has "buffer semantics", then the init operands are ranked
     // memrefs and the op has no results.
-    if (!op.hasBufferSemantics())
+    if (!op.hasPureBufferSemantics())
       resultTypes.push_back(newOutput.getType());
   }
 
@@ -1521,8 +1521,8 @@ FailureOr<SmallVector<Value>> mlir::linalg::collapseOpIterationDims(
       }))
     return failure();
 
-  bool hasBufferSemantics = op.hasBufferSemantics();
-  if (hasBufferSemantics &&
+  bool hasPureBufferSemantics = op.hasPureBufferSemantics();
+  if (hasPureBufferSemantics &&
       !llvm::all_of(op->getOperands(), [&](Value operand) -> bool {
         MemRefType memRefToCollapse = dyn_cast<MemRefType>(operand.getType());
         if (!memRefToCollapse)
@@ -1705,7 +1705,7 @@ class FoldScalarOrSplatConstant : public OpRewritePattern<GenericOp> {
 
   LogicalResult matchAndRewrite(GenericOp genericOp,
                                 PatternRewriter &rewriter) const override {
-    if (!genericOp.hasTensorSemantics())
+    if (!genericOp.hasPureTensorSemantics())
       return failure();
     for (OpOperand *opOperand : genericOp.getDpsInputOperands()) {
       Operation *def = opOperand->get().getDefiningOp();
@@ -1857,7 +1857,7 @@ struct FoldFillWithGenericOp : public OpRewritePattern<GenericOp> {
 
   LogicalResult matchAndRewrite(GenericOp genericOp,
                                 PatternRewriter &rewriter) const override {
-    if (!genericOp.hasTensorSemantics())
+    if (!genericOp.hasPureTensorSemantics())
       return failure();
     bool fillFound = false;
     Block &payload = genericOp.getRegion().front();
diff --git a/mlir/lib/Dialect/Linalg/Transforms/EraseUnusedOperandsAndResults.cpp b/mlir/lib/Dialect/Linalg/Transforms/EraseUnusedOperandsAndResults.cpp
index 4e54e48c914aeb..3378eda2bd6734 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/EraseUnusedOperandsAndResults.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/EraseUnusedOperandsAndResults.cpp
@@ -183,7 +183,7 @@ struct DeduplicateAndRemoveDeadOperandsAndResults
         dedupedOutpts;
     // If the op doesn't have tensor semantics or outputs should not be removed,
     // keep all the outputs as preserved.
-    if (!genericOp.hasTensorSemantics() || !removeOutputs) {
+    if (!genericOp.hasPureTensorSemantics() || !removeOutputs) {
       for (const auto &en : llvm::enumerate(genericOp.getDpsInitsMutable())) {
         origToNewPos[en.index()] = newOutputOperands.size();
         newOutputOperands.push_back(en.value().get());
@@ -317,7 +317,7 @@ struct RemoveUnusedCycleInGenericOp : public OpRewritePattern<GenericOp> {
                                 PatternRewriter &rewriter) const override {
 
     // If the op doesnt have tensor semantics, preserve the outputs as is.
-    if (!genericOp.hasTensorSemantics())
+    if (!genericOp.hasPureTensorSemantics())
       return failure();
 
     bool hasRemovedCycles = false;
diff --git a/mlir/lib/Dialect/Linalg/Transforms/Generalization.cpp b/mlir/lib/Dialect/Linalg/Transforms/Generalization.cpp
index 1d9ce4144f998d..d03d1f3a163c32 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/Generalization.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/Generalization.cpp
@@ -59,7 +59,7 @@ FailureOr<GenericOp> mlir::linalg::generalizeNamedOp(RewriterBase &rewriter,
   ValueRange outputs = linalgOp.getDpsInits();
   SmallVector<AffineMap> indexingMaps = linalgOp.getIndexingMapsArray();
   SmallVector<utils::IteratorType> iterators = linalgOp.getIteratorTypesArray();
-  SmallVector<Type> resultTypes = linalgOp.hasTensorSemantics()
+  SmallVector<Type> resultTypes = linalgOp.hasPureTensorSemantics()
                                       ? TypeRange(ValueRange(outputs))
                                       : TypeRange{};
 
diff --git a/mlir/lib/Dialect/Linalg/Transforms/InlineScalarOperands.cpp b/mlir/lib/Dialect/Linalg/Transforms/InlineScalarOperands.cpp
index cc39fe932c24bf..34db710b1721d6 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/InlineScalarOperands.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/InlineScalarOperands.cpp
@@ -35,7 +35,7 @@ struct InlineScalarOperands : public OpRewritePattern<GenericOp> {
   using OpRewritePattern<GenericOp>::OpRewritePattern;
   LogicalResult matchAndRewrite(GenericOp genericOp,
                                 PatternRewriter &rewriter) const override {
-    if (!genericOp.hasTensorSemantics())
+    if (!genericOp.hasPureTensorSemantics())
       return failure();
 
     SmallVector<size_t> scalarOperands;
diff --git a/mlir/lib/Dialect/Linalg/Transforms/Loops.cpp b/mlir/lib/Dialect/Linalg/Transforms/Loops.cpp
index 5a56e914ea4c77..4c93da6fe9253f 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/Loops.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/Loops.cpp
@@ -128,7 +128,7 @@ template <typename LoadOpTy, typename StoreOpTy>
 static void emitScalarImplementation(OpBuilder &b, Location loc,
                                      ArrayRef<Value> allIvs,
                                      LinalgOp linalgOp) {
-  assert(linalgOp.hasBufferSemantics() &&
+  assert(linalgOp.hasPureBufferSemantics() &&
          "expected linalg op with buffer semantics");
   SmallVector<Value> indexedValues;
   indexedValues.reserve(linalgOp->getNumOperands());
@@ -218,7 +218,7 @@ static FailureOr<LinalgLoops> linalgOpToLoopsImpl(RewriterBase &rewriter,
 
   // The flattened loopToOperandRangesMaps is expected to be an invertible
   // permutation map (which is asserted in the inverse calculation).
-  assert(linalgOp.hasBufferSemantics() &&
+  assert(linalgOp.hasPureBufferSemantics() &&
          "expected linalg op with buffer semantics");
 
   auto loopRanges = linalgOp.createLoopRanges(rewriter, linalgOp.getLoc());
@@ -264,7 +264,7 @@ class LinalgRewritePattern : public RewritePattern {
   LogicalResult matchAndRewrite(Operation *op,
                                 PatternRewriter &rewriter) const override {
     auto linalgOp = dyn_cast<LinalgOp>(op);
-    if (!isa<LinalgOp>(op) || !linalgOp.hasBufferSemantics()) {
+    if (!isa<LinalgOp>(op) || !linalgOp.hasPureBufferSemantics()) {
       return rewriter.notifyMatchFailure(
           op, "expected linalg op with buffer semantics");
     }
diff --git a/mlir/lib/Dialect/Linalg/Transforms/NamedOpConversions.cpp b/mlir/lib/Dialect/Linalg/Transforms/NamedOpConversions.cpp
index 93fa5ff24ac6a6..250360603fa5dd 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/NamedOpConversions.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/NamedOpConversions.cpp
@@ -39,7 +39,7 @@ matchAndReplaceDepthwiseConv(Operation *operation, Value input, Value kernel,
   Location loc = operation->getLoc();
   auto linalgOp = dyn_cast<LinalgOp>(operation);
   // Exit out on the memref version of this operation.
-  if (!linalgOp || !linalgOp.hasTensorSemantics())
+  if (!linalgOp || !linalgOp.hasPureTensorSemantics())
...
[truncated]

``````````



https://github.com/llvm/llvm-project/pull/77574


More information about the Mlir-commits mailing list