[Mlir-commits] [mlir] 75e08a5 - [mlir][linalg] Add tests for PadOp (#110271)
llvmlistbot at llvm.org
Sat Sep 28 02:30:41 PDT 2024
Author: Andrzej Warzyński
Date: 2024-09-28T10:30:37+01:00
New Revision: 75e08a527b716a11b3085a9ea4f5bed80c386323
URL: https://github.com/llvm/llvm-project/commit/75e08a527b716a11b3085a9ea4f5bed80c386323
DIFF: https://github.com/llvm/llvm-project/commit/75e08a527b716a11b3085a9ea4f5bed80c386323.diff
LOG: [mlir][linalg] Add tests for PadOp (#110271)
Adds 3 tests for the logic that pads Linalg ops, specifically for the
transformation under the `transform.structured.pad` TD Op.
For `@zero_pad_static` I simply took an existing test and added
check-lines. According to the comments, it should fail; however, when I
tried it, it actually worked. Indeed, it exercises an important edge
case - padding by 0 when all the shapes are static.
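To make that edge case concrete, here's a minimal sketch (the SSA names
are illustrative, not taken from the test) of what the zero-padding looks
like: the pad amounts are all zero and the result type equals the source
type, yet `nofold` keeps the `tensor.pad` Op around:

  %cst = arith.constant 0.0 : f32
  // Pad %a by 0 on every side; the Op survives folding only because of
  // `nofold` (requested via `pack_paddings`).
  %a_pad = tensor.pad %a nofold low[0, 0] high[0, 0] {
  ^bb0(%i: index, %j: index):
    tensor.yield %cst : f32
  } : tensor<24x12xf32> to tensor<24x12xf32>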
`@zero_pad_dynamic` exercises a similar case, but some dimensions in the
input tensors are made dynamic to improve the test coverage. Note that in
this case we are still padding the static dim.
Finally, `@negative_no_ub_estimate` is similar to `@zero_pad_dynamic`,
but it tries to pad a dynamic dim instead. This fails: without a constant
upper bound for that dim, the padded shape cannot be computed.
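For context, `padding_dimensions` indexes the iteration space of the
matmul, not the tensor dims; a quick sketch of the mapping (my reading of
the op's indexing maps, so treat it as an assumption):

  // linalg.matmul iteration space: (d0, d1, d2) = (m, n, k)
  //   %arg0 : tensor<?x12xf32> -> (d0, d2) = (m, k)
  //   %arg1 : tensor<12x?xf32> -> (d2, d1) = (k, n)
  //   %arg2 : tensor<?x?xf32>  -> (d0, d1) = (m, n)
  // `padding_dimensions=[2]` pads k, the static 12, which works.
  // `padding_dimensions=[1]` pads n, the dynamic `?`, for which no upper
  // bound can be computed, so the transform fails.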
Added:
Modified:
mlir/test/Dialect/Linalg/transform-op-pad.mlir
Removed:
################################################################################
diff --git a/mlir/test/Dialect/Linalg/transform-op-pad.mlir b/mlir/test/Dialect/Linalg/transform-op-pad.mlir
index 47bb5ddf4afc3e..120a525f3bdae9 100644
--- a/mlir/test/Dialect/Linalg/transform-op-pad.mlir
+++ b/mlir/test/Dialect/Linalg/transform-op-pad.mlir
@@ -209,12 +209,26 @@ module attributes {transform.with_named_sequence} {
// -----
-// CHECK-LABEL: @pad(
-func.func @pad(%arg0: tensor<24x12xf32>,
- %arg1: tensor<12x25xf32>,
- %arg2: tensor<24x25xf32>) -> tensor<24x25xf32> {
- // This is attached to an error that is silenceable and is not reported by this transform
- // {{when applied to this op}}
+// With all padded dims being static, there's nothing to pad. However, with the
+// `nofold` attribute set (see `pack_paddings`), the corresponding pad Ops are
+// preserved.
+
+// CHECK-LABEL: @zero_pad_static(
+func.func @zero_pad_static(%arg0: tensor<24x12xf32>,
+ %arg1: tensor<12x25xf32>,
+ %arg2: tensor<24x25xf32>) -> tensor<24x25xf32> {
+
+// CHECK-SAME: %[[ARG_0:.*]]: tensor<24x12xf32>,
+// CHECK-SAME: %[[ARG_1:.*]]: tensor<12x25xf32>,
+// CHECK-SAME: %[[ARG_2:.*]]: tensor<24x25xf32>) -> tensor<24x25xf32> {
+
+// CHECK: %[[PAD_ARG_0:.*]] = tensor.pad %[[ARG_0]] nofold low[0, 0] high[0, 0]
+// CHECK: %[[PAD_ARG_1:.*]] = tensor.pad %[[ARG_1]] nofold low[0, 0] high[0, 0]
+// CHECK-NOT: tensor.pad
+
+// CHECK: %[[MATMUL:.*]] = linalg.matmul
+// CHECK-SAME: ins(%[[PAD_ARG_0]], %[[PAD_ARG_1]] : tensor<24x12xf32>, tensor<12x25xf32>)
+// CHECK-SAME: outs(%[[ARG_2]]
%0 = linalg.matmul ins(%arg0, %arg1 : tensor<24x12xf32>, tensor<12x25xf32>) outs(%arg2 : tensor<24x25xf32>) -> tensor<24x25xf32>
func.return %0 : tensor<24x25xf32>
}
@@ -222,8 +236,6 @@ func.func @pad(%arg0: tensor<24x12xf32>,
module attributes {transform.with_named_sequence} {
transform.named_sequence @__transform_main(%arg1: !transform.any_op {transform.readonly}) {
%0 = transform.structured.match ops{["linalg.matmul"]} in %arg1 : (!transform.any_op) -> !transform.any_op
- // This error is silenceable and is not reported by this transform
- // {{transform.structured.pad failed to apply}}
%padded, %pad, %copy_back = transform.structured.pad %0 {
padding_values=[0.0 : f32, 0.0 : f32, 0.0 : f32],
padding_dimensions=[0, 1, 2],
@@ -235,6 +247,72 @@ module attributes {transform.with_named_sequence} {
// -----
+// With all padded dims being static, there's nothing to pad. However, with the
+// `nofold` attribute set (see `pack_paddings`), the corresponding pad Ops are
+// preserved. Same as above, but some dims are now dynamic.
+
+// CHECK-LABEL: @zero_pad_dynamic(
+func.func @zero_pad_dynamic(%arg0: tensor<?x12xf32>,
+ %arg1: tensor<12x?xf32>,
+ %arg2: tensor<?x?xf32>) -> tensor<?x?xf32> {
+
+// CHECK-SAME: %[[ARG_0:.*]]: tensor<?x12xf32>,
+// CHECK-SAME: %[[ARG_1:.*]]: tensor<12x?xf32>,
+// CHECK-SAME: %[[ARG_2:.*]]: tensor<?x?xf32>) -> tensor<?x?xf32> {
+
+// CHECK: %[[PAD_ARG_0:.*]] = tensor.pad %[[ARG_0]] nofold low[0, 0] high[0, 0]
+// CHECK: %[[PAD_ARG_1:.*]] = tensor.pad %[[ARG_1]] nofold low[0, 0] high[0, 0]
+// CHECK: %[[PAD_ARG_2:.*]] = tensor.pad %[[ARG_2]] nofold low[0, 0] high[0, 0]
+
+// CHECK: %[[MATMUL:.*]] = linalg.matmul
+// CHECK-SAME: ins(%[[PAD_ARG_0]], %[[PAD_ARG_1]] : tensor<?x12xf32>, tensor<12x?xf32>)
+// CHECK-SAME: outs(%[[PAD_ARG_2]]
+ %0 = linalg.matmul ins(%arg0, %arg1 : tensor<?x12xf32>, tensor<12x?xf32>) outs(%arg2 : tensor<?x?xf32>) -> tensor<?x?xf32>
+ func.return %0 : tensor<?x?xf32>
+}
+
+module attributes {transform.with_named_sequence} {
+ transform.named_sequence @__transform_main(%arg1: !transform.any_op {transform.readonly}) {
+ %0 = transform.structured.match ops{["linalg.matmul"]} in %arg1 : (!transform.any_op) -> !transform.any_op
+ %padded, %pad, %copy_back = transform.structured.pad %0 {
+ padding_values=[0.0 : f32, 0.0 : f32, 0.0 : f32],
+ // Note - only the static dim is padded
+ padding_dimensions=[2],
+ pack_paddings=[1, 1, 1]
+ } : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op)
+ transform.yield
+ }
+}
+
+// -----
+
+// Impossible to compute an upper bound for the padded dim - the transform fails.
+
+func.func @negative_no_ub_estimate(%arg0: tensor<?x12xf32>,
+ %arg1: tensor<12x?xf32>,
+ %arg2: tensor<?x?xf32>) -> tensor<?x?xf32> {
+
+ // expected-note @below {{target op}}
+ %0 = linalg.matmul ins(%arg0, %arg1 : tensor<?x12xf32>, tensor<12x?xf32>) outs(%arg2 : tensor<?x?xf32>) -> tensor<?x?xf32>
+ func.return %0 : tensor<?x?xf32>
+}
+
+module attributes {transform.with_named_sequence} {
+ transform.named_sequence @__transform_main(%arg1: !transform.any_op {transform.readonly}) {
+ %0 = transform.structured.match ops{["linalg.matmul"]} in %arg1 : (!transform.any_op) -> !transform.any_op
+  // expected-error @below {{failed to pad op}}
+ %padded, %pad, %copy_back = transform.structured.pad %0 {
+ padding_values=[0.0 : f32, 0.0 : f32, 0.0 : f32],
+  // Note - attempting to pad a non-static dim
+ padding_dimensions=[1],
+ pack_paddings=[1, 1, 1]
+ } : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op)
+ transform.yield
+ }
+}
+
+// -----
+
// Check that the padding can be applied even when the output argument of the
// linalg op is not produced by an empty op or an extract_slice op.