[Mlir-commits] [mlir] [mlir][linalg] do not break outs from block argument (PR #73572)

Mehdi Amini llvmlistbot at llvm.org
Tue Nov 28 17:11:37 PST 2023


================
@@ -1818,6 +1818,11 @@ struct RemoveOutsDependency : public OpRewritePattern<GenericOp> {
         if (sparse_tensor::getSparseTensorEncoding(operandVal.getType()))
           continue;
 
+        // If outs is wired from a block argument, keep the dependency to
+        // prevent the argument from being optimized away.
----------------
joker-eph wrote:

> At 10,000 feet, I'd like to use a tensor operand as output.

To begin with, this is ill-defined: you mean as an "init" of the linalg.generic, right?
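
For concreteness, a minimal sketch of what "init" means here (illustrative IR, not taken from the PR): on tensors, the `outs` operand of a `linalg.generic` is its init, and the op's result is tied to that operand.

```mlir
func.func @copy(%src: tensor<8xf32>, %init: tensor<8xf32>) -> tensor<8xf32> {
  %r = linalg.generic {
      indexing_maps = [affine_map<(d0) -> (d0)>,
                       affine_map<(d0) -> (d0)>],
      iterator_types = [#linalg.iterator_type<parallel>]}
      ins(%src : tensor<8xf32>) outs(%init : tensor<8xf32>) {
  ^bb0(%in: f32, %out: f32):
    // %out carries the init value; this elementwise copy never reads it.
    linalg.yield %in : f32
  } -> tensor<8xf32>
  return %r : tensor<8xf32>
}
```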

> Here with iree.abi.output, we say that %out is used for output. So the function does not allocate buffer for %r and uses the caller-allocated storage for %out for the output storage.

Seems like a nice contract to provide to a bufferization optimization, but fundamentally it can't rely on the `init` of the linalg.generic being "by chance" the one you want: the bufferization should recover it on its own.

> But the rewriter breaks the existing dependency and blindly introduces tensor.empty() which makes %out unused, so it gets removed later.

Is your function "private"? Are you running some global optimization pass? If it is an upstream pass, yeah, it can't know about your ABI attribute, and you need another mechanism (mark these with ops in the function instead, like `iree.abi_output(%arg0)`?).
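
For illustration, a sketch of what `RemoveOutsDependency` does to the example above, together with the marker-op idea (the `iree.abi.output` attribute and the `iree.abi_output` op are downstream/hypothetical names from this thread, not upstream MLIR, and parsing them requires allowing unregistered dialects):

```mlir
// After RemoveOutsDependency fires: the payload never reads the init, so the
// pattern swaps the outs operand for a fresh tensor.empty. Nothing ties %out
// to the result anymore, and a later cleanup can drop the dead argument uses.
func.func @copy_after(%src: tensor<8xf32>,
                      %out: tensor<8xf32> {iree.abi.output}) -> tensor<8xf32> {
  %empty = tensor.empty() : tensor<8xf32>
  %r = linalg.generic {
      indexing_maps = [affine_map<(d0) -> (d0)>,
                       affine_map<(d0) -> (d0)>],
      iterator_types = [#linalg.iterator_type<parallel>]}
      ins(%src : tensor<8xf32>) outs(%empty : tensor<8xf32>) {
  ^bb0(%in: f32, %o: f32):
    linalg.yield %in : f32
  } -> tensor<8xf32>
  // Hypothetical marker op (unregistered, written in generic syntax): an
  // explicit use like this would keep %out live regardless of what the
  // linalg rewrites do.
  "iree.abi_output"(%out) : (tensor<8xf32>) -> ()
  return %r : tensor<8xf32>
}
```

With the check added in this PR, the pattern instead leaves `outs(%out)` intact whenever the outs value is a block argument, preserving the dependency.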

https://github.com/llvm/llvm-project/pull/73572

