[Mlir-commits] [mlir] b4c8c49 - Fix handling of rank-1 tensors in tosa.reduce_sum

Jacques Pienaar llvmlistbot at llvm.org
Thu Oct 13 09:02:12 PDT 2022


Author: Ramiro Leal-Cavazos
Date: 2022-10-13T09:02:01-07:00
New Revision: b4c8c4952a4da88706f275708430be2a25de7658

URL: https://github.com/llvm/llvm-project/commit/b4c8c4952a4da88706f275708430be2a25de7658
DIFF: https://github.com/llvm/llvm-project/commit/b4c8c4952a4da88706f275708430be2a25de7658.diff

LOG: Fix handling of rank-1 tensors in tosa.reduce_sum

The conversion of `tosa.reduce_sum` to linalg creates a
`linalg.generic` op that produces a tensor of rank `input_rank - 1`.
This tensor is then expanded back into a tensor of rank `input_rank`.
When the tensor being expanded is rank-0 (i.e., the input is rank-1),
the reassociation map used must be empty. However, the current
implementation indexes and modifies the reassociation map
independently of the rank of the tensor being expanded, resulting in
out-of-bounds indexing when that tensor is rank-0. This commit adds a
guard around the reassociation map indexing.
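
For illustration, a rough sketch of the expected lowering for a rank-1
input (value names are illustrative and attributes are abbreviated; the
test added below is the authoritative reference):

    %0 = "tosa.reduce_sum"(%arg0) {axis = 0 : i64}
        : (tensor<?xf32>) -> tensor<1xf32>

becomes, roughly:

    // Reduce the rank-1 input into a rank-0 accumulator...
    %init = tensor.empty() : tensor<f32>
    %zero = arith.constant 0.000000e+00 : f32
    %fill = linalg.fill ins(%zero : f32) outs(%init : tensor<f32>) -> tensor<f32>
    %sum = linalg.generic {indexing_maps = [affine_map<(d0) -> (d0)>,
                                            affine_map<(d0) -> ()>],
                           iterator_types = ["reduction"]}
        ins(%arg0 : tensor<?xf32>) outs(%fill : tensor<f32>) {
    ^bb0(%in: f32, %acc: f32):
      %add = arith.addf %in, %acc : f32
      linalg.yield %add : f32
    } -> tensor<f32>
    // ...then expand back to the rank-1 result with an empty
    // reassociation list.
    %res = tensor.expand_shape %sum [] : tensor<f32> into tensor<1xf32>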

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D135828

Added: 
    

Modified: 
    mlir/lib/Conversion/TosaToLinalg/TosaToLinalg.cpp
    mlir/test/Conversion/TosaToLinalg/tosa-to-linalg.mlir

Removed: 
    


################################################################################
diff --git a/mlir/lib/Conversion/TosaToLinalg/TosaToLinalg.cpp b/mlir/lib/Conversion/TosaToLinalg/TosaToLinalg.cpp
index 63a491664052f..e176e8866790b 100644
--- a/mlir/lib/Conversion/TosaToLinalg/TosaToLinalg.cpp
+++ b/mlir/lib/Conversion/TosaToLinalg/TosaToLinalg.cpp
@@ -825,9 +825,12 @@ static LogicalResult reduceMatchAndRewriteHelper(Operation *op, uint64_t axis,
     int32_t dimToPush = i > axis ? i + 1 : i;
     reassociationMap[i].push_back(rewriter.getAffineDimExpr(dimToPush));
   }
-  int32_t expandedDim = axis < expandInputRank ? axis : expandInputRank - 1;
-  reassociationMap[expandedDim].push_back(
-      rewriter.getAffineDimExpr(expandedDim + 1));
+
+  if (expandInputRank != 0) {
+    int32_t expandedDim = axis < expandInputRank ? axis : expandInputRank - 1;
+    reassociationMap[expandedDim].push_back(
+        rewriter.getAffineDimExpr(expandedDim + 1));
+  }
 
   rewriter.replaceOpWithNewOp<tensor::ExpandShapeOp>(
       op, resultTy, linalgOp.getResults()[0], reassociationMap);

diff --git a/mlir/test/Conversion/TosaToLinalg/tosa-to-linalg.mlir b/mlir/test/Conversion/TosaToLinalg/tosa-to-linalg.mlir
index a9b43d2b9f929..f94104ac2a0d8 100644
--- a/mlir/test/Conversion/TosaToLinalg/tosa-to-linalg.mlir
+++ b/mlir/test/Conversion/TosaToLinalg/tosa-to-linalg.mlir
@@ -777,6 +777,26 @@ func.func @reduce_float_dyn(%arg0: tensor<?x5x4xf32>) -> () {
 
 // -----
 
+// CHECK-DAG: #[[$MAP0:.*]] = affine_map<(d0) -> (d0)>
+// CHECK-DAG: #[[$MAP1:.*]] = affine_map<(d0) -> ()>
+
+// CHECK-LABEL: @reduce_float_dyn_rank_1
+// CHECK-SAME: %[[ARG0:[0-9a-zA-Z_]*]]: tensor<?xf32>
+func.func @reduce_float_dyn_rank_1(%arg0: tensor<?xf32>) -> () {
+  // CHECK-DAG: %[[INIT:.+]] = tensor.empty() : tensor<f32>
+  // CHECK-DAG: %[[CST0:.+]] = arith.constant 0.0
+  // CHECK: %[[FILL:.+]] = linalg.fill ins(%[[CST0]]{{.*}}outs(%[[INIT]]
+  // CHECK: %[[GENERIC:.+]] = linalg.generic {indexing_maps = [#[[$MAP0]], #[[$MAP1]]], iterator_types = ["reduction"]} ins(%[[ARG0]] : tensor<?xf32>) outs(%[[FILL]] : tensor<f32>)
+  // CHECK: ^bb0(%[[ARG1:.*]]: f32, %[[ARG2:.*]]: f32)
+  // CHECK:   %[[RES:.+]] = arith.addf %[[ARG1]], %[[ARG2]] : f32
+  // CHECK:   linalg.yield %[[RES]] : f32
+  // CHECK: tensor.expand_shape %[[GENERIC]] {{\[}}] : tensor<f32> into tensor<1xf32>
+  %0 = "tosa.reduce_sum"(%arg0) {axis = 0 : i64} : (tensor<?xf32>) -> tensor<1xf32>
+  return
+}
+
+// -----
+
 // CHECK-DAG: #[[$MAP0:.*]] = affine_map<(d0, d1, d2) -> (d0, d1, d2)>
 // CHECK-DAG: #[[$MAP1:.*]] = affine_map<(d0, d1, d2) -> (d0, d1)>
 
