[Mlir-commits] [mlir] [vector][mlir] Canonicalize to shape_cast where possible (PR #140583)

llvmlistbot at llvm.org llvmlistbot at llvm.org
Sun Nov 9 20:26:35 PST 2025


MaheshRavishankar wrote:

@banach-space I think I might not have communicated the intent of my example properly. It was meant to show why `vector.shape_cast` is not the canonical representation of transposes with unit dims, and that you cannot always recover the transpose from a `shape_cast` because the `shape_cast` loses information. A `vector.transpose` therefore carries more information than a `vector.shape_cast`.
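
To make the information-loss point concrete, here is a minimal sketch (the shapes and SSA names are made up for illustration, not the example from earlier in the thread): two different transposes that only move unit dims and therefore leave the element order unchanged.

```mlir
// Both permutations only move unit dims relative to the size-4 dim, so
// neither changes the underlying element order.
%t0 = vector.transpose %v, [2, 0, 1] : vector<1x1x4xf32> to vector<4x1x1xf32>
%t1 = vector.transpose %v, [2, 1, 0] : vector<1x1x4xf32> to vector<4x1x1xf32>

// Both would be rewritten to the same shape_cast. From the shape_cast alone
// there is no way to tell which permutation the original transpose carried.
%c = vector.shape_cast %v : vector<1x1x4xf32> to vector<4x1x1xf32>
```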

It of course lowers to the same thing if you are just lowering to LLVM or SPIR-V. I have already agreed that while lowering to LLVM you should lower both of these transposes to `vector.shape_cast` and then just cancel them out, because at the LLVM level this makes no difference. So it is perfectly valid to say that, for the sequence of transformations lowering from the vector dialect to LLVM without any further context-specific analysis, all transposes with unit dims should be lowered to `shape_cast`. In other words, the "normal form" for the set of passes that lower vector dialect ops to LLVM prefers `shape_cast`. I am on board with that.

Canonical form is different. As shown above, going from `vector.transpose` to `vector.shape_cast` and back is not always possible: you "lose" information when you lower to `vector.shape_cast`. That makes it non-canonical, since a canonicalizer is supposed to move the program toward a "better" state, and discarding information does not do that.
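
As a sketch of the cancellation being described (again with made-up shapes, and assuming the two transposes compose to an identity reshape), back-to-back `vector.shape_cast` ops fold away during that lowering:

```mlir
// After lowering a transpose/inverse-transpose pair to shape_casts ...
%a = vector.shape_cast %v : vector<1x1x4xf32> to vector<4x1x1xf32>
%b = vector.shape_cast %a : vector<4x1x1xf32> to vector<1x1x4xf32>
// ... the shape_cast folder collapses the pair, so %b folds back to %v.
```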

https://github.com/llvm/llvm-project/pull/140583

