[Mlir-commits] [mlir] [MLIR] Folding unpack and pack sequence in data layout propagation (PR #138332)

llvmlistbot at llvm.org llvmlistbot at llvm.org
Mon May 5 06:41:53 PDT 2025


Max191 wrote:

> However, do we need the patch? We have folding for `pack->unpack` ops in canonicalization patterns. The missing feature seems to be that we can reuse the pack's destination tensor in the propagation.

I think this patch is useful because we want to fold some pack/unpack pairs when there is a padding value (which is not correct to do in all cases). The idea of this patch is to optimize the propagation by never creating the unpack + pack pair in cases where the padding value does not matter to the op that is being propagated through (for example, most non-reduction ops). If we do not do this, the pack + unpack canonicalization patterns will not have enough information to know whether the folding is possible, and we will end up with pack + unpack pairs in the final IR.
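
To make that concrete, here is a minimal sketch of the kind of IR the propagation would otherwise leave behind. The op spellings (`linalg.pack`/`linalg.unpack`, `linalg.exp`), shapes, and tile sizes are illustrative assumptions, not taken from this PR:

```mlir
func.func @propagate(%packed: tensor<4x4x8x8xf32>,
                     %init: tensor<30x30xf32>,
                     %dest: tensor<4x4x8x8xf32>) -> tensor<4x4x8x8xf32> {
  %cst = arith.constant 0.0 : f32
  %empty = tensor.empty() : tensor<30x30xf32>
  // Unpack introduced by an earlier propagation step.
  %unpacked = linalg.unpack %packed inner_dims_pos = [0, 1] inner_tiles = [8, 8]
      into %empty : tensor<4x4x8x8xf32> -> tensor<30x30xf32>
  // Elementwise op: the padded values cannot leak into the valid region,
  // so folding the surrounding unpack + pack is safe here.
  %exp = linalg.exp ins(%unpacked : tensor<30x30xf32>)
                    outs(%init : tensor<30x30xf32>) -> tensor<30x30xf32>
  // Pack with a padding value: a generic pack->unpack canonicalization
  // cannot prove on its own that this fold is legal.
  %repacked = linalg.pack %exp padding_value(%cst : f32)
      inner_dims_pos = [0, 1] inner_tiles = [8, 8]
      into %dest : tensor<30x30xf32> -> tensor<4x4x8x8xf32>
  return %repacked : tensor<4x4x8x8xf32>
}
```

Because the op in the middle is elementwise, the propagation can operate directly on `%packed` (and reuse the pack's destination) instead of materializing the unpack + pack pair and hoping a later canonicalization can prove the fold.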

https://github.com/llvm/llvm-project/pull/138332

