[Mlir-commits] [mlir] [mlir][linalg] do not break outs from block argument (PR #73572)

Okwan Kwon llvmlistbot at llvm.org
Tue Nov 28 18:29:26 PST 2023


================
@@ -1818,6 +1818,11 @@ struct RemoveOutsDependency : public OpRewritePattern<GenericOp> {
         if (sparse_tensor::getSparseTensorEncoding(operandVal.getType()))
           continue;
 
+        // If outs is wired from a block argument, keep the dependency to
+        // prevent the argument from being optimized away.
----------------
okkwon wrote:

> To begin with, this is ill-defined: you mean as an "init" of the linalg.generic right?
> Or are you referring to the IREE ABI attribute? (which really can't have much bearing upstream...)

I'll admit the logic around `tensor.empty` and the linalg init is difficult for me to grasp too. I described the context with the IREE attribute, but it would be great if we could have a solution for using a function operand as an output.

My change here is pretty vanilla, a preparation step for the next steps; the main question for me was "Can I use a block argument for `outs`?" Actually, the change was suggested by @MaheshRavishankar in a different thread, and I also thought it was okay.
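
For concreteness, here is a minimal sketch of the situation (the shapes and names are made up for illustration): the `outs` operand of a `linalg.generic` is wired directly from a function (entry-block) argument instead of from a `tensor.empty`.

```mlir
func.func @example(%in: tensor<4xf32>, %out: tensor<4xf32>) -> tensor<4xf32> {
  // %out is an entry-block argument used directly as the init/outs
  // operand, rather than a fresh tensor.empty.
  %0 = linalg.generic
      {indexing_maps = [affine_map<(d0) -> (d0)>,
                        affine_map<(d0) -> (d0)>],
       iterator_types = [#linalg.iterator_type<parallel>]}
      ins(%in : tensor<4xf32>) outs(%out : tensor<4xf32>) {
  ^bb0(%a: f32, %b: f32):
    %r = arith.addf %a, %a : f32
    linalg.yield %r : f32
  } -> tensor<4xf32>
  return %0 : tensor<4xf32>
}
```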

Do you think it is ill-defined because a tensor value is not supposed to be written to? I had the same question about the tensor produced by `tensor.empty()`.

One solution IREE uses is converting the ABI operands into `iree.hal.buffer`, with support for importing and exporting. Generalizing this might be a good way to get tensor reference semantics.

Another solution is to use memref from the very early stages of the compilation process, but there may be people who prefer optimizing on the tensor representation before moving to the memref stages.

> Seems like a nice contract to provide to a bufferization optimization, but fundamentally that seems like it can't rely on the init of the linalg generic being "by chance" the one you want: the bufferization should recover.

What do you mean by "bufferization should recover"?

The storage semantics of `tensor.empty` used as "init" are also not very clear. Actually, a tensor as a value is confusing without a reference or pointer to storage. I thought `tensor.empty` followed allocation semantics, but we allow CSE on it when the shapes are the same.
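
A minimal example of that CSE behavior: `tensor.empty` has no memory effects, so CSE is free to deduplicate two of them with the same shape, which would be unsound if each one meant a distinct allocation.

```mlir
func.func @cse_example() -> (tensor<4xf32>, tensor<4xf32>) {
  %a = tensor.empty() : tensor<4xf32>
  %b = tensor.empty() : tensor<4xf32>
  // CSE may replace all uses of %b with %a: the two ops are identical
  // and side-effect free, so tensor.empty cannot be read as
  // "allocate distinct storage".
  return %a, %b : tensor<4xf32>, tensor<4xf32>
}
```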

> Is your function "private"? Are you running some global optimization pass? If it is an upstream pass, yeah it can't know about your ABI attribute and you need another mechanism (mark these with ops in the function instead? Like iree.abi_output(%arg0)?)

Yes, the function is private, and there is a pass (which I created, but which has not yet been reviewed) that traces the function's return value back to a `tensor.empty` and replaces that `tensor.empty` value with the argument carrying the attribute.
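
A hypothetical before/after sketch of what the pass does; the argument attribute `my.abi_output` is made up for illustration (a real one would come from the ABI, and parsing this as-is would need `-allow-unregistered-dialect`):

```mlir
func.func private @f(%in: tensor<4xf32>,
                     %out: tensor<4xf32> {my.abi_output}) -> tensor<4xf32> {
  // Before the pass: the returned value is computed into a fresh
  // tensor.empty.
  %init = tensor.empty() : tensor<4xf32>
  %0 = linalg.copy ins(%in : tensor<4xf32>)
                   outs(%init : tensor<4xf32>) -> tensor<4xf32>
  // After the pass: %init is replaced by the attributed argument, i.e.
  //   outs(%out : tensor<4xf32>)
  // tying the function result to the ABI output argument.
  return %0 : tensor<4xf32>
}
```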

For now, the problem is at the ABI level, but we can generalize the situation into the following questions:

- "How can we use a tensor operand as output?" 
  - Is it ill-formed?
  - If so, do we want to introduce an attribute to mark a tensor writable?
- Do we want to introduce type for tensor reference?
- Do we want to use memref directly along with tensor? (Sounds feasible.)
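
On the last question, a minimal sketch of mixing the two today using the existing `bufferization` cast op (just an illustration, not a proposal):

```mlir
func.func @mixed(%buf: memref<4xf32>) -> tensor<4xf32> {
  // Expose the incoming buffer as a tensor value so tensor-level
  // optimizations can run; bufferization resolves the cast later.
  %t = bufferization.to_tensor %buf : memref<4xf32>
  return %t : tensor<4xf32>
}
```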

Thanks for taking the time to raise these questions! It is good to have this discussion here.

@MaheshRavishankar, feel free to chime in with a comment!




https://github.com/llvm/llvm-project/pull/73572


More information about the Mlir-commits mailing list