[all-commits] [llvm/llvm-project] cb4a40: [mlir][tensor] Loosen restrictions on folding dyna...
Artem Gindinson via All-commits
all-commits at lists.llvm.org
Tue Jun 3 09:09:23 PDT 2025
Branch: refs/heads/main
Home: https://github.com/llvm/llvm-project
Commit: cb4a407e5c2a8a5972781d2a3be362f437602fae
https://github.com/llvm/llvm-project/commit/cb4a407e5c2a8a5972781d2a3be362f437602fae
Author: Artem Gindinson <gindinson at roofline.ai>
Date: 2025-06-03 (Tue, 03 Jun 2025)
Changed paths:
M mlir/lib/Dialect/Utils/ReshapeOpsUtils.cpp
M mlir/test/Dialect/Linalg/simplify-pack-unpack.mlir
M mlir/test/Dialect/Tensor/canonicalize.mlir
M mlir/unittests/Dialect/Utils/CMakeLists.txt
A mlir/unittests/Dialect/Utils/ReshapeOpsUtilsTest.cpp
Log Message:
-----------
[mlir][tensor] Loosen restrictions on folding dynamic reshapes (#137963)
The main idea behind the change is to allow expand-of-collapse folds for
reshapes like `?x?xk` -> `?` (with k > 1). The rationale is that the
expand op must have a coherent index/affine expression specified in its
`output_shape` argument (see the example below); if it doesn't, the IR
was already invalid at an earlier stage:
```
%c32 = arith.constant 32 : index
%div = arith.divsi %<some_index>, %c32 : index
%collapsed = tensor.collapse_shape %41#1 [[0], [1, 2], [3, 4]]
: tensor<9x?x32x?x32xf32> into tensor<9x?x?xf32>
%affine = affine.apply affine_map<()[s0] -> (s0 * 32)> ()[%div]
%expanded = tensor.expand_shape %collapsed [[0], [1, 2], [3]] output_shape [9, %div, 32, %affine]
: tensor<9x?x?xf32> into tensor<9x?x32x?xf32>
```
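With the loosened routine, that expand-of-collapse pair can be folded into
a single collapse of the original source. A hedged sketch of the expected
result (the reassociation is inferred here from the shapes above, and
`%folded` is an illustrative name):
```
// Sketch: the pair above should reduce to one collapse_shape whose
// reassociation maps the original 9x?x32x?x32 source directly onto the
// expanded 9x?x32x? result (only the trailing ?x32 group collapses).
%folded = tensor.collapse_shape %41#1 [[0], [1], [2], [3, 4]]
    : tensor<9x?x32x?x32xf32> into tensor<9x?x32x?xf32>
```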
On the above assumption, adjust the routine in
`getReassociationIndicesForCollapse()` to allow dynamic reshapes beyond
just `?x..?x1x1x..x1` -> `?`. Dynamic subshapes introduce two kinds of
issues (both are sketched in IR right after this list):
1. n>2 consecutive dynamic dimensions in the source shape cannot be
collapsed together into 1<k<n neighboring dynamic dimensions in the
target shape, since there'd be more than one suitable reassociation
(example: `?x?x10x? into ?x?`)
2. When figuring out static subshape reassociations based on products,
there are cases where a static dimension is collapsed with a dynamic
one, and should therefore be skipped when comparing products of source &
target dimensions (e.g. `?x2x3x4 into ?x12`)
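A minimal sketch of both cases (the SSA names and the f32 element type are
illustrative only):
```
// Case 1: more than one valid grouping collapses ?x?x10x? into ?x?,
// so a shape-only routine cannot pick a unique reassociation here.
%a = tensor.collapse_shape %src [[0], [1, 2, 3]]
    : tensor<?x?x10x?xf32> into tensor<?x?xf32>
%b = tensor.collapse_shape %src [[0, 1, 2], [3]]
    : tensor<?x?x10x?xf32> into tensor<?x?xf32>

// Case 2: the static 2 is grouped with the leading dynamic dimension,
// so the product check for the target 12 must skip it (3 * 4 = 12).
%c = tensor.collapse_shape %src2 [[0, 1], [2, 3]]
    : tensor<?x2x3x4xf32> into tensor<?x12xf32>
```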
To address 1, we should detect such sequences in the target shape before
assigning multiple dynamic dimensions into the same index set. For 2, we
take note that a static target dimension was preceded by a dynamic one
and allow an "offset" subshape of source static dimensions, as long as
there's an exact sequence for the target size later in the source shape.
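Put together, these adjustments make the `?x?xk` -> `?` round trip from the
opening paragraph foldable. A minimal sketch with k = 4 (the source value
and the dimension operands `%d0`/`%d1` are illustrative):
```
// Collapse ?x?x4 into ?, then expand it back with a matching
// output_shape; with the loosened routine this pair is expected to
// fold away, leaving just %arg.
%collapsed = tensor.collapse_shape %arg [[0, 1, 2]]
    : tensor<?x?x4xf32> into tensor<?xf32>
%expanded = tensor.expand_shape %collapsed [[0, 1, 2]]
    output_shape [%d0, %d1, 4]
    : tensor<?xf32> into tensor<?x?x4xf32>
```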
This PR aims to address all reshapes that can be determined based purely
on shapes (and the original reassociation maps, as done in
`ComposeExpandOfCollapseOp::findCollapsingReassociation()`). It doesn't
seem possible to fold all qualifying dynamic shape patterns
deterministically without also examining the corresponding affine
expressions. That would be difficult to maintain in a single general
utility, so a path forward would be to provide dialect-specific
implementations for Linalg/Tensor.
Signed-off-by: Artem Gindinson <gindinson at roofline.ai>
---------
Signed-off-by: Artem Gindinson <gindinson at roofline.ai>
Co-authored-by: Ian Wood <ianwood2024 at u.northwestern.edu>