[Mlir-commits] [mlir] [mlir][tensor] Refine the semantics of `createPadHighOp` (PR #109667)
llvmlistbot at llvm.org
Wed Sep 25 09:52:56 PDT 2024
================
@@ -24,12 +24,16 @@ using namespace mlir::tensor;
PadOp mlir::tensor::createPadHighOp(RankedTensorType type, Value source,
Value pad, bool nofold, Location loc,
OpBuilder &b) {
+
+ assert(!ShapedType::isDynamicShape(type.getShape()) &&
----------------
MaheshRavishankar wrote:
> > Instead, can't we make the semantics here that the dynamic shapes of the output are the same as the dynamic shapes of the input?
>
> How would we verify that these are indeed identical?
I think it's a matter of defining it. You can say that `createPadHighOp` effectively sets both the low and high padding to 0 for dynamic dims. So for dynamic dims, the input dim extent and the output dim extent would be the same.
>
> Also, what should happen when the input is static, and the output is dynamic? This is one specific example that I have in mind right now:
>
> ```mlir
> func.func @simple_pad_and_pack_scalable(%input: tensor<5x1xf32>, %output: tensor<1x1x?x2xf32>, %pad: f32) -> tensor<1x1x?x2xf32> {
> %c8 = arith.constant 8 : index
> %vscale = vector.vscale
> %c8_vscale = arith.muli %vscale, %c8 : index
> %0 = tensor.pack %input padding_value(%pad : f32) inner_dims_pos = [0, 1] inner_tiles = [%c8_vscale, 2] into %output : tensor<5x1xf32> -> tensor<1x1x?x2xf32>
> return %0 : tensor<1x1x?x2xf32>
> }
> ```
That's a pack operation? I am not following your point.
>
> That's based on the example from #109815. Overall, there are quite a few cases to consider (and to test):
>
> * input static, output static (easy)
+1
> * input dynamic, output static (tricky)
You can compute the high padding needed as output dim - input dim, and the high padding can be dynamic.
> * input static, output dynamic (tricky)
> * input dynamic, output dynamic (tricky)
>
> I'd rather restrict the semantics to begin with, and then gradually relax them when there's need (e.g. for #109815). No strong opinion though, whatever works and is correct :)
Ok, that makes sense. Let me re-review.
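For the "input dynamic, output static" case discussed above, the high padding would itself be an SSA value computed at runtime as output dim minus input dim. A hand-written sketch of what that could look like (illustrative only, not output of the helper):

```mlir
// Hypothetical lowering: high pad = static output extent - dynamic input extent.
func.func @pad_dyn_to_static(%input: tensor<?xf32>, %pad: f32) -> tensor<8xf32> {
  %c0 = arith.constant 0 : index
  %c8 = arith.constant 8 : index
  %dim = tensor.dim %input, %c0 : tensor<?xf32>
  %high = arith.subi %c8, %dim : index
  %0 = tensor.pad %input low[0] high[%high] {
  ^bb0(%arg0: index):
    tensor.yield %pad : f32
  } : tensor<?xf32> to tensor<8xf32>
  return %0 : tensor<8xf32>
}
```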
https://github.com/llvm/llvm-project/pull/109667
More information about the Mlir-commits mailing list