[Mlir-commits] [mlir] d18523c - [mlir][linalg] Check for tensor of 0 dims during vectorization

Thomas Raoux llvmlistbot at llvm.org
Mon Feb 6 22:34:04 PST 2023


Author: Thomas Raoux
Date: 2023-02-07T06:33:06Z
New Revision: d18523c0430cee513a07b5b2a89ad1adb43c91f1

URL: https://github.com/llvm/llvm-project/commit/d18523c0430cee513a07b5b2a89ad1adb43c91f1
DIFF: https://github.com/llvm/llvm-project/commit/d18523c0430cee513a07b5b2a89ad1adb43c91f1.diff

LOG: [mlir][linalg] Check for tensor of 0 dims during vectorization

Tensors with dims of size 0 cannot be vectorized. Add a precondition to
prevent a crash during vectorization.

Differential Revision: https://reviews.llvm.org/D143462
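
For illustration only, here is a minimal sketch of how a caller might rely on
the updated precondition; the enclosing pattern, the `rewriter`, and the
argument values are hypothetical, and only vectorizeLinalgOpPrecondition and
its signature come from the patch below:

    // Bail out before rewriting if the op cannot be vectorized, e.g. when an
    // operand such as tensor<0xf32> has a zero-sized dimension.
    if (failed(mlir::linalg::vectorizeLinalgOpPrecondition(
            linalgOp, /*inputVectorSizes=*/{}, /*vectorizeNDExtract=*/false)))
      return rewriter.notifyMatchFailure(
          linalgOp, "op does not satisfy the vectorization precondition");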

Added: 
    

Modified: 
    mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
    mlir/test/Dialect/Linalg/vectorization.mlir

Removed: 
    


################################################################################
diff  --git a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
index 3e8af9eacd40f..3cfdc271668c6 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
@@ -977,6 +977,10 @@ LogicalResult
 mlir::linalg::vectorizeLinalgOpPrecondition(LinalgOp linalgOp,
                                             ArrayRef<int64_t> inputVectorSizes,
                                             bool vectorizeNDExtract) {
+  // Tensors with a dimension of size 0 cannot be vectorized.
+  if (llvm::any_of(linalgOp.getStaticShape(),
+                   [](int64_t dim) { return dim == 0; }))
+    return failure();
   // Check API contract for input vector sizes.
   if (!inputVectorSizes.empty()) {
     assert(inputVectorSizes.size() == linalgOp.getNumLoops() &&

diff  --git a/mlir/test/Dialect/Linalg/vectorization.mlir b/mlir/test/Dialect/Linalg/vectorization.mlir
index 171a518447697..d928117ef4b87 100644
--- a/mlir/test/Dialect/Linalg/vectorization.mlir
+++ b/mlir/test/Dialect/Linalg/vectorization.mlir
@@ -2004,3 +2004,29 @@ transform.sequence failures(propagate) {
 // CHECK-LABEL: @wrong_reduction_detection
 // CHECK:         vector.broadcast
 // CHECK:         vector.transfer_write
+
+// -----
+
+// Don't vectorize tensor<0xf32>
+// CHECK-LABEL: @tensor_size0
+// CHECK:         linalg.generic
+func.func @tensor_size0(%arg0: tensor<0xf32>,
+                        %arg1: tensor<f32>) -> tensor<f32> {
+  %0 = linalg.generic
+  {indexing_maps = [affine_map<(d0) -> (d0)>, affine_map<(d0) -> ()>],
+  iterator_types = ["reduction"]}
+  ins(%arg0 : tensor<0xf32>) outs(%arg1 : tensor<f32>) {
+    ^bb0(%in: f32, %out: f32):
+    %12 = arith.addf %out, %in : f32
+    linalg.yield %12 : f32
+  } -> tensor<f32>
+  return %0 : tensor<f32>
+}
+
+transform.sequence failures(propagate) {
+^bb1(%arg1: !pdl.operation):
+  %0 = transform.structured.match ops{["linalg.generic"]} in %arg1 : (!pdl.operation) -> !pdl.operation
+  %1 = get_closest_isolated_parent %0 : (!pdl.operation) -> !pdl.operation
+  %2 = transform.structured.vectorize %1
+}
+


        

