[Mlir-commits] [mlir] [mlir][tensor] Document `dest` operand (PR #71726)

Matthias Springer llvmlistbot at llvm.org
Thu Nov 9 19:46:09 PST 2023


================
@@ -18,31 +18,32 @@ def Tensor_Dialect : Dialect {
   let description = [{
     The `tensor` dialect is intended to hold core tensor creation and
     manipulation ops, which are not strongly associated with any particular
-    other dialect or domain abstraction. The primary smoke test of this is ops
-    that make sense for any tensor element type.
-
-    We leave it to other dialects to hold the vast swath of possible
-    computations one might want to do on a tensor.
-
-    The `tensor` type is (for better or for worse) used to represent all kinds
-    of things, and supports an open-ended set of element types. Examples:
+    other dialect or domain abstraction. The primary inclusion criteria for ops
+    in this dialect is that they make sense for any tensor element type. When
+    this is not the case, the op is left to live in other dialects. Examples of
+    element types that could be supported by the `tensor` dialect include:
 
     - representing large, dense aggregations of primitive types, suitable for
       high-performance numerical computing.
-    - representing shapes in the `shape` dialect, which consist of small
-      1D tensors of `index` data type.
+    - representing shapes in the `shape` dialect, which consist of small 1D
+      tensors of `index` data type.
     - representing aggregations of strings or “variant” types.
-    - representing large, sparse aggregations of primitive types, suitable
-      for high-performance numerical computing.
+    - representing large, sparse aggregations of primitive types, suitable for
+      high-performance numerical computing.
 
-    Thus, for the `tensor` dialect, we prefer for now to constrain the
-    scope as much as possible. The expectation is that at some point
+    Because of this broad element type support, we prefer for now to keep the
+    `tensor` dialect as small as possible. The expectation is that at some point
     in the future, the `tensor` dialect’s scope may be broadened through a
     careful discussion of the tradeoffs.
 
-    The `tensor` type is actually a builtin type (it lives in the builtin
-    dialect), and does not live in this dialect.
-
+    On the `tensor` type itself, note that it is actually a builtin type (it
+    lives in the builtin dialect), and does not live in this dialect. Furthermore,
+    a `tensor` is an immutable object. For example, this means that the `dest`
+    operand used by some ops in this dialect does not mean that the `tensor` is
+    mutated in place, but rather that the operand can be used as bufferization
+    hint. For more information, see the [Destination Passing Style](
----------------
matthias-springer wrote:

`linalg.fill` would be one. It could be rewritten as a splat `arith.constant`.
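For illustration, a minimal sketch of what such a DPS op looks like (op syntax as in upstream MLIR; the destination here is just a `tensor.empty`, and the result is a new SSA value, not an in-place mutation):
```
%cst = arith.constant 1.0 : f32
// The `outs` operand is the "dest": it provides the shape/type of the result.
%empty = tensor.empty() : tensor<4x8xf32>
%filled = linalg.fill ins(%cst : f32) outs(%empty : tensor<4x8xf32>) -> tensor<4x8xf32>
```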

But bufferization is indeed just one use case for DPS. It is also used in tiling.

This is from the documentation of the `DestinationStyleOpInterface`:
```
    Ops that are in destination style have designated "init" operands, which act
    as initial tensor values for the results of the operation or the init
    buffers to which the results of the op will be written.

    [...]

    Destination-passing style abstraction makes certain transformations easier.
    For example, tiling implementation can extract/insert slices from/into the
    destination of an op and use the resulting shaped value as an iter_arg in
    the surrounding loop structure. As another example, bufferization does not
    have to allocate new buffers for destinations (in case of in-place
    bufferization) and can directly reuse the existing destination buffer.
```
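To make the tiling point concrete, a rough sketch of the extract/insert pattern (not from the PR; `%dest` is assumed to be some pre-existing `tensor<8xf32>` destination value, and `linalg.fill` stands in for an arbitrary tiled computation):
```
%c0 = arith.constant 0 : index
%c4 = arith.constant 4 : index
%c8 = arith.constant 8 : index
%cst = arith.constant 0.0 : f32
// The destination is threaded through the loop as an iter_arg; each iteration
// extracts a tile, computes into it, and inserts it back.
%res = scf.for %iv = %c0 to %c8 step %c4 iter_args(%arg = %dest) -> (tensor<8xf32>) {
  %slice = tensor.extract_slice %arg[%iv] [4] [1] : tensor<8xf32> to tensor<4xf32>
  %filled = linalg.fill ins(%cst : f32) outs(%slice : tensor<4xf32>) -> tensor<4xf32>
  %inserted = tensor.insert_slice %filled into %arg[%iv] [4] [1] : tensor<4xf32> into tensor<8xf32>
  scf.yield %inserted : tensor<8xf32>
}
```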

Maybe we can just mention here that some tensor ops have "dest" operands and directly link to the interface documentation without mentioning bufferization?



https://github.com/llvm/llvm-project/pull/71726