[Mlir-commits] [mlir] [mlir][tensor] Refine the semantics of `createPadHighOp` (PR #109667)

Andrzej WarzyƄski llvmlistbot at llvm.org
Tue Sep 24 08:28:26 PDT 2024


================
@@ -24,12 +24,16 @@ using namespace mlir::tensor;
 PadOp mlir::tensor::createPadHighOp(RankedTensorType type, Value source,
                                     Value pad, bool nofold, Location loc,
                                     OpBuilder &b) {
+
+  assert(!ShapedType::isDynamicShape(type.getShape()) &&
----------------
banach-space wrote:

> Instead cant we make the semantics here that the dynamic shapes of the output are same as the dynamic shapes of the input?

How would we verify that these are indeed identical? 

Also, what should happen when the input is static, and the output is dynamic? This is one specific example that I have in mind right now:
```mlir
func.func @simple_pad_and_pack_scalable(%input: tensor<5x1xf32>, %output: tensor<1x1x?x2xf32>, %pad: f32) -> tensor<1x1x?x2xf32> {
  %c8 = arith.constant 8 : index
  %vscale = vector.vscale
  %c8_vscale = arith.muli %vscale, %c8 : index
  %0 = tensor.pack %input padding_value(%pad : f32) inner_dims_pos = [0, 1] inner_tiles = [%c8_vscale, 2] into %output : tensor<5x1xf32> -> tensor<1x1x?x2xf32>
  return %0 : tensor<1x1x?x2xf32>
}
```

That's based on the example from https://github.com/llvm/llvm-project/pull/109815/. Overall, there are quite a few cases to consider (and to test):
* input static, output static (easy)
* input dynamic, output static (tricky)
* input static, output dynamic (tricky)
* input dynamic, output dynamic (tricky)

I'd rather restrict the semantics to begin with, and then gradually relax them when the need arises (e.g. for #109815). No strong opinion though, whatever works and is correct :)

https://github.com/llvm/llvm-project/pull/109667

