[Mlir-commits] [mlir] [mlir][bufferization] `MaterializeInDestinationOp`: Support memref destinations (PR #68074)
Nicolas Vasilache
llvmlistbot at llvm.org
Fri Oct 6 03:24:37 PDT 2023
================
@@ -216,33 +216,58 @@ def Bufferization_CloneOp : Bufferization_Op<"clone", [
def Bufferization_MaterializeInDestinationOp
: Bufferization_Op<"materialize_in_destination",
- [BufferizableOpInterface, SameOperandsAndResultType,
- DestinationStyleOpInterface,
+ [AllShapesMatch<["source", "dest"]>,
+ AllElementTypesMatch<["source", "dest"]>,
+ BufferizableOpInterface, DestinationStyleOpInterface,
DeclareOpInterfaceMethods<ReifyRankedShapedTypeOpInterface>,
DeclareOpInterfaceMethods<SubsetInsertionOpInterface,
["getSourceOperand", "getValuesNeededToBuildSubsetExtraction",
- "buildSubsetExtraction", "isEquivalentSubset"]>]> {
+ "buildSubsetExtraction", "isEquivalentSubset"]>,
+ DeclareOpInterfaceMethods<MemoryEffectsOpInterface, ["getEffects"]>]> {
let summary = "copy a tensor";
let description = [{
This op indicates that the data of the `source` tensor should materialize
- in the future buffer of the `dest` tensors. Both tensors must have the same
- shape and element type at runtime.
+ in `dest`, which can be a tensor or a memref. In the case of a tensor,
+ `source` should materialize in the future buffer of `dest` and the updated
+ destination tensor is returned. In the case of a memref, `source` should
+ materialize in `dest`, which is already a buffer. The op has no results in
+ that case.
+
+ `source`, `dest` and `result` (if present) must have the same shape and
+ element type. If the op has a result, the types of `result` and `dest` must
+ match exactly (e.g., including any tensor encodings).
By default, this op bufferizes to a memcpy from the future buffer of the
- `source` tensor to the future buffer of the `dest` tensor. However,
- transformations such as "empty tensor elimination" may rewrite IR such that
- a computation is performed directly in the future buffer of the `dest`
- tensor and no memcpy is needed.
-
- Note: "tensor.insert_slice" could be used for the same purpose, but since
- tensor dialect ops only indicate *what* should be computed but not *where*,
- it could fold away, causing the computation to materialize in a different
- buffer.
+ `source` tensor to the future buffer of the `dest` tensor or to the `dest`
+ buffer. However, transformations such as "empty tensor elimination" may
+ rewrite IR such that a computation is performed directly in `dest` and no
+ memcpy is needed.
+
+ If `dest` is a buffer, the `restrict` and `writable` attributes must be
+ specified. These attributes have the same meaning as the respective
+ attributes of `bufferization.to_tensor`. `writable` indicates that the
+ `dest` buffer is considered writable. It does not make sense to materialize
+ a computation in a read-only buffer, so `writable` is required. `restrict`
+ indicates that this op is the only way for the tensor IR to access `dest`
+ (or an alias thereof). E.g., there must be no other `to_tensor` ops with
+ `dest` or with an alias of `dest`. Such IR is not supported by
+ One-Shot Bufferize.
+
+ Note: `restrict` and `writable` could be removed from this op because they
+ must always be set for memref destinations. The op carries these attributes
+ to make the requirements on the `dest` operand explicit in the assembly
+ format. Moreover, these requirements may be relaxed at some point in the
+ future.
+
+ Note: If `dest` is a tensor, `tensor.insert_slice` could be used for the
+ same purpose, but since tensor dialect ops only indicate *what* should be
----------------
nicolasvasilache wrote:
Is this true?
If one uses `tensor.insert_slice`, the analysis will insert copies if required, whereas `materialize_in_destination` is prescriptive, IIUC.
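
For context, a minimal sketch of the two forms being discussed (the assembly syntax below is inferred from the description in the patch, not quoted from its tests):

```mlir
// Tensor destination: the op returns the updated destination tensor.
%r = bufferization.materialize_in_destination %src in %dest
    : (tensor<5xf32>, tensor<5xf32>) -> tensor<5xf32>

// Memref destination: no result; `restrict` and `writable` are required.
// By default this bufferizes to a copy into %buf, unless a rewrite such as
// empty tensor elimination materializes the computation directly in %buf.
bufferization.materialize_in_destination %src in restrict writable %buf
    : (tensor<5xf32>, memref<5xf32>) -> ()
```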
https://github.com/llvm/llvm-project/pull/68074