[Mlir-commits] [mlir] [mlir][vector] Make `TransposeOpLowering` configurable (PR #73915)

llvmlistbot at llvm.org llvmlistbot at llvm.org
Thu Nov 30 14:44:55 PST 2023


MaheshRavishankar wrote:

> > Just to clarify, this has nothing to do with IREE. This is an issue hit in IREE because `shape_cast` cannot be handled on conversion to SPIR-V in MLIR.
> 
> Thanks, someone pointed to a downstream sharding analysis with affine maps, which I thought was also an issue.

I think that was a separate thread about why `vector.transpose` shouldn't be canonicalized to `vector.shape_cast`; that's a different discussion.

> 
> So we can just match this in upstream lowering? Do we have a repro for the SPIR-V lowering?

Yes. Let me try to summarize the repro in MLIR. Take any of the outputs generated from the tests added here https://github.com/llvm/llvm-project/pull/72105/files#diff-ec70fcfbae18679d0f257f9112001765e0306ef8dbcf152956b46d50862a4223R798, say:

```mlir
%1 = vector.shape_cast %0 : vector<4x1xf32> to vector<1x4xf32>
```
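
For completeness, the snippet can be wrapped into a self-contained repro file (the function name and surrounding boilerplate here are illustrative, not taken from the tests):

```mlir
// repro.mlir -- run: mlir-opt --convert-vector-to-spirv repro.mlir
func.func @repro(%arg0: vector<4x1xf32>) -> vector<1x4xf32> {
  // No VectorToSPIRV pattern handles this op, so the partial conversion
  // silently leaves it in vector dialect form.
  %0 = vector.shape_cast %arg0 : vector<4x1xf32> to vector<1x4xf32>
  return %0 : vector<1x4xf32>
}
```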

If you run `mlir-opt --convert-vector-to-spirv`, this `vector.shape_cast` does not convert to SPIR-V. The pass doesn't fail, because it uses dialect conversion with `applyPartialConversion`, so it just bails; if it used `applyFullConversion`, it would fail. That's the repro: there are no patterns that lower `vector.shape_cast` to SPIR-V. So either
(a) we add such patterns, and then the vector lowering patterns can introduce `vector.shape_cast` without any control, or
(b) we allow an opt-out so that the SPIR-V backend doesn't introduce these shape_casts (which is what this patch does).

What we have today is basically a vector lowering that introduces `vector.shape_cast` irrespective of the backend being used, even though not all backends support it. That seems like a bad situation to be in.
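
Option (b) roughly matches how the transpose lowering is already parameterized upstream. A sketch, assuming the existing `VectorTransformsOptions` API (the function name and the choice of `EltWise` are illustrative, not prescribed by this patch):

```cpp
// Sketch of option (b): configure the vector.transpose lowering so that a
// SPIR-V pipeline never picks a strategy that materializes vector.shape_cast.
// Assumes the upstream VectorTransformsOptions API; exact enum values may
// differ across revisions.
#include "mlir/Dialect/Vector/Transforms/LoweringPatterns.h"
#include "mlir/Dialect/Vector/Transforms/VectorRewritePatterns.h"

using namespace mlir;

static void addTransposeLoweringForSPIRV(RewritePatternSet &patterns) {
  vector::VectorTransformsOptions options;
  // Unrolled element-wise extract/insert avoids shape_cast-based paths.
  options.setVectorTransposeLowering(vector::VectorTransposeLowering::EltWise);
  vector::populateVectorTransposeLoweringPatterns(patterns, options);
}
```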


https://github.com/llvm/llvm-project/pull/73915
