[Mlir-commits] [mlir] [DRAFT] Generalize expand_shape to take shape as explicit input (PR #69267)
James Newling
llvmlistbot at llvm.org
Fri Nov 10 17:27:31 PST 2023
================
@@ -230,24 +309,17 @@ LogicalResult mlir::reshapeLikeShapesAreCompatible(
ArrayRef<ReassociationIndices> reassociationMaps, bool isExpandingReshape) {
unsigned expandedDimStart = 0;
for (const auto &map : llvm::enumerate(reassociationMaps)) {
- std::optional<int64_t> dynamicShape;
+ bool foundDynamicShape = false;
int64_t linearizedStaticShape = 1;
+
for (const auto &dim : llvm::enumerate(
expandedShape.slice(expandedDimStart, map.value().size()))) {
- if (ShapedType::isDynamic(dim.value())) {
- if (isExpandingReshape && dynamicShape) {
- return emitError("invalid to have a single dimension (" +
- Twine(map.index()) +
----------------
newling wrote:
I'd like to get a better understanding of this PR at a high level. Is it now permitted to have multiple dynamic dimensions in a reassociation group?
Currently this is what happens when you try to expand to multiple dynamic dimensions:
```
foo.mlir:3:8: error: 'tensor.expand_shape' op invalid to have a single dimension (0) expanded into multiple dynamic dims (0,1)
%0 = tensor.expand_shape %arg0 [[0, 1], [2]] : tensor<?x?xf32> into tensor<?x?x?xf32>
```
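For context, the check that produces the error above can be modeled roughly as follows. This is a simplified Python sketch of the pre-PR rule (not the actual C++ in `reshapeLikeShapesAreCompatible`): when expanding, each reassociation group may contain at most one dynamic output dimension.

```python
# Stand-in for ShapedType::kDynamic.
DYNAMIC = -1

def check_expand(expanded_shape, reassociation):
    """Return an error string if any reassociation group expands into
    more than one dynamic dim, else None. Simplified model of the
    pre-PR verifier rule, not the real MLIR implementation."""
    for group_idx, group in enumerate(reassociation):
        dynamic_dims = [d for d in group if expanded_shape[d] == DYNAMIC]
        if len(dynamic_dims) > 1:
            return (f"invalid to have a single dimension ({group_idx}) expanded "
                    f"into multiple dynamic dims ({','.join(map(str, dynamic_dims))})")
    return None

# tensor<?x?xf32> into tensor<?x?x?xf32> with reassociation [[0, 1], [2]]:
# group 0 expands into two dynamic dims (0 and 1), so this is rejected.
print(check_expand([DYNAMIC, DYNAMIC, DYNAMIC], [[0, 1], [2]]))
```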
Is the idea that, with this PR, this becomes allowed, but only as long as the dynamic dimensions have explicit size values associated with them?
One high-level question: does this make expand_shape more general-purpose, and does that generality mean the optimisations that can be applied to it are less powerful?
Any additional high-level info would be welcome :-)
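For reference, a hypothetical form such an op might take once shapes are explicit operands (the `output_shape` operand name and asm format here are assumptions sketched from the draft, not the final syntax):

```mlir
// Hypothetical: group [0, 1] expands into two dynamic dims, which would be
// legal because their runtime sizes %sz0 and %sz1 are passed explicitly.
%0 = tensor.expand_shape %arg0 [[0, 1], [2]] output_shape [%sz0, %sz1, %sz2]
    : tensor<?x?xf32> into tensor<?x?x?xf32>
```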
https://github.com/llvm/llvm-project/pull/69267
More information about the Mlir-commits mailing list