[PATCH] D148159: [mlir][TransformDialect] Simplify the lowering of pack/unpack when these are just pad/unpad

Nicolas Vasilache via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Thu Apr 13 01:11:09 PDT 2023


nicolasvasilache added inline comments.


================
Comment at: mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp:833
+    for (int64_t dstSize : origShape)
+      sizes.push_back(rewriter.getIndexAttr(dstSize));
+
----------------
nicolasvasilache wrote:
> This assumes static shapes; we really want `createOrFold<DimOp>` here. We should have some helpers around that return the `vector<OpFoldResult>` with the right quantities.
> Not sure where to look offhand for those helpers...
Ok, after a quick offline chat, I see this is not strictly needed, given that the shape is asserted to be static.
This is because `tensor.expand_shape` is not yet powerful enough regarding dynamic sizes, but fixing that is a longer endeavor.
I still pulled the relevant functionality out into https://reviews.llvm.org/D148201 for better reuse.

While it will do the same thing for now through an idiomatic API, it is also future-proof.
Could you update this to use `getMixedDimensions`?
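To illustrate the static-vs-dynamic distinction being discussed (function name and shapes here are made up for the example): a static dimension's size can be carried as an index attribute, while a dynamic dimension needs a `tensor.dim` query at runtime, which is what `createOrFold<tensor::DimOp>` would produce.

```mlir
// Hypothetical sketch: %t has one dynamic and one static dimension.
func.func @mixed_sizes(%t: tensor<?x64xf32>) -> (index, index) {
  %c0 = arith.constant 0 : index
  // Dynamic size: must be queried with tensor.dim at runtime.
  %d0 = tensor.dim %t, %c0 : tensor<?x64xf32>
  // Static size: foldable to a constant (an index attribute on the C++ side).
  %c64 = arith.constant 64 : index
  return %d0, %c64 : index, index
}
```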


================
Comment at: mlir/test/Dialect/Linalg/transform-lower-pack.mlir:41
+  //      CHECK: %[[EMPTY:.*]] = tensor.empty() : tensor<1x1x1x1x136x64x16x16xf32>
+  //      CHECK: %[[RES:.*]] = tensor.insert_slice %[[PAD]] into %[[EMPTY]]
+  // offsets.
----------------
mravishankar wrote:
> I'm wondering if we even want an `insert_slice` here; we could just do a `reshape` instead. In that case the only difference is whether there is a transpose or not. If we can teach bufferization to handle this reshape without introducing additional buffers, that would be very useful (we used to have a `memref.reshape` that just changed the indexing; I wonder if we still have it).
In principle we could, but I'd like to see how these things connect in practice.

At the graph level, reshapes tend to simplify reasonably well.
At the codegen level, as soon as we start tiling and taking slices, reshapes do not behave properly at the moment.

I'd say this depends on where we apply the lowering; we will likely want it to be configurable in the future, based on data.
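For concreteness, a hedged sketch of the two alternatives being discussed, using a smaller made-up shape than the one in the test above: the current lowering materializes a destination tensor and inserts a slice, while the suggested alternative would be a metadata-only reshape.

```mlir
// Option 1 (current lowering): tensor.empty + tensor.insert_slice.
func.func @via_insert_slice(%pad: tensor<136x64xf32>) -> tensor<1x1x136x64xf32> {
  %empty = tensor.empty() : tensor<1x1x136x64xf32>
  %res = tensor.insert_slice %pad into %empty[0, 0, 0, 0] [1, 1, 136, 64] [1, 1, 1, 1]
      : tensor<136x64xf32> into tensor<1x1x136x64xf32>
  return %res : tensor<1x1x136x64xf32>
}

// Option 2 (suggested alternative): a pure reshape that only unit-extends
// the leading dimensions, with no data movement at the tensor level.
func.func @via_expand_shape(%pad: tensor<136x64xf32>) -> tensor<1x1x136x64xf32> {
  %res = tensor.expand_shape %pad [[0, 1, 2], [3]]
      : tensor<136x64xf32> into tensor<1x1x136x64xf32>
  return %res : tensor<1x1x136x64xf32>
}
```

The trade-off named above applies here: option 2 is simpler at the graph level, but once tiling starts taking slices of the result, the reshape currently gets in the way.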



Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D148159/new/

https://reviews.llvm.org/D148159


