[Mlir-commits] [mlir] [mlir][tosa] Enhance verify checks for PAD Op (PR #137177)

Luke Hutton llvmlistbot at llvm.org
Fri Apr 25 01:59:36 PDT 2025


================
@@ -1534,15 +1534,49 @@ LogicalResult tosa::PadOp::verify() {
   if (!inputType || !outputType)
     return success();
 
-  auto paddingRank = cast<tosa::shapeType>(getPadding().getType()).getRank();
+  auto inputRank = inputType.getRank();
+  auto outputRank = outputType.getRank();
+  if (inputRank != outputRank)
+    return emitOpError() << "expect same input and output tensor rank, but got "
+                         << "inputRank: " << inputRank
+                         << ", outputRank: " << outputRank;
+
+  DenseIntElementsAttr paddingAttr;
+  if (!matchPattern(getPadding(), m_Constant(&paddingAttr)))
+    return failure();
+
+  auto paddingValues = paddingAttr.getValues<APInt>();
+  if (paddingValues.size() != static_cast<size_t>(inputRank * 2))
+    return emitOpError() << "padding tensor must have " << inputRank
+                         << " * 2 = " << inputRank * 2 << " elements, but got "
+                         << paddingValues.size();
+
+  auto inputShape = inputType.getShape();
+  auto outputShape = outputType.getShape();
+
+  for (int64_t i = 0; i < inputRank; ++i) {
+    // Skip shape verification for dynamic dims
+    if (inputShape[i] == ShapedType::kDynamic ||
+        outputShape[i] == ShapedType::kDynamic)
+      continue;
+
+    int64_t padStart = paddingValues[i * 2].getSExtValue();
+    int64_t padEnd = paddingValues[i * 2 + 1].getSExtValue();
 
-  if (inputType.getRank() != outputType.getRank())
-    return emitOpError() << "expect same input and output tensor rank.";
+    if (padStart < 0 || padEnd < 0) {
+      return emitOpError() << "padding values must be non-negative, got ["
----------------
lhutton1 wrote:

Ah, good catch. Yes, we probably need to add the check in the validation pass as well, since no shapes should be dynamic at that point.

Thanks for the changes. I suspect we could still be a bit stricter, e.g. if the padding is dynamic, we currently skip the output shape check:
```
if ((padStart != -1 && padStart < 0) || (padEnd != -1 && padEnd < 0)) {
  return emitOpError() << ...
}
```

https://github.com/llvm/llvm-project/pull/137177


More information about the Mlir-commits mailing list