[Mlir-commits] [mlir] [MLIR][Linalg] Introduce SpecializeOp (PR #70326)

Aviad Cohen llvmlistbot at llvm.org
Sat Oct 28 09:04:18 PDT 2023


================
@@ -0,0 +1,52 @@
+//===- Specialize.cpp - linalg generic ops to named ops  ------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+//
+// This file implements a method to specialize generic operations to named
+// operations. Conceptually it is the opposite of generalize.cpp.
+//
+//===----------------------------------------------------------------------===//
+
+#include "mlir/Dialect/Linalg/IR/Linalg.h"
+#include "mlir/Dialect/Linalg/Transforms/Transforms.h"
+#include "llvm/Support/Debug.h"
+
+#define DEBUG_TYPE "linalg-specialization"
+
+using namespace mlir;
+using namespace mlir::linalg;
+
+static bool isaCopyOp(GenericOp genericOp) {
+  // Structural.
+  if (genericOp.getNumParallelLoops() != genericOp.getNumLoops())
+    return false;
+
+  // Operands and maps.
+  if (genericOp.getNumDpsInputs() != 1 || genericOp.getNumDpsInits() != 1)
+    return false;
+  auto mapRange = genericOp.getIndexingMapsArray();
+  if (mapRange.size() != 2 || !mapRange.front().isIdentity() ||
----------------
AviadCo wrote:

In the concrete example I faced, I lowered `tosa.add` to linalg (so it is not lowered exactly into `linalg.copy`):

```mlir
func.func private @CustomAddLayer_kernel(%arg0: tensor<1x24x32x512xf32>, %arg1: tensor<1x24x32x512xf32>) -> (tensor<1x24x32x512xf32>) {
  %0 = tosa.add %arg0, %arg1 : (tensor<1x24x32x512xf32>, tensor<1x24x32x512xf32>) -> tensor<1x24x32x512xf32>
  return %0 : tensor<1x24x32x512xf32>
}

// -----// IR Dump After TosaToLinalg (tosa-to-linalg) //----- //
func.func private @CustomAddLayer_kernel(%arg0: tensor<1x24x32x512xf32>, %arg1: tensor<1x24x32x512xf32>) -> (tensor<1x24x32x512xf32>) {
  %0 = tensor.empty() : tensor<1x24x32x512xf32>
  %1 = linalg.generic {indexing_maps = [affine_map<(d0, d1, d2, d3) -> (0, d1, d2, d3)>, affine_map<(d0, d1, d2, d3) -> (0, d1, d2, d3)>, affine_map<(d0, d1, d2, d3) -> (d0, d1, d2, d3)>], iterator_types = ["parallel", "parallel", "parallel", "parallel"]} ins(%arg0, %arg1 : tensor<1x24x32x512xf32>, tensor<1x24x32x512xf32>) outs(%0 : tensor<1x24x32x512xf32>) {
  ^bb0(%in: f32, %in_0: f32, %out: f32):
    %2 = arith.addf %in, %in_0 : f32
    linalg.yield %2 : f32
  } -> tensor<1x24x32x512xf32>
  return %1 : tensor<1x24x32x512xf32>
}
```

In this example, the `linalg.generic` indexing maps are effectively `CanonicalizedIdentityMap`s (identity maps whose unit dimensions have been folded to the constant 0).
This actually happens a lot when lowering from Tosa to Linalg, so I believe it would be useful to have such a function in the `LinalgOp` interface and use it here as well. If you agree, it can also go in a different patch.
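To make the idea concrete, here is a small standalone sketch (plain Python, not the MLIR `AffineMap` API) of the check being proposed: a map counts as a "canonicalized identity" if each result is either the dimension at its own position, or the constant 0 standing in for a unit dimension, as in the `(d0, d1, d2, d3) -> (0, d1, d2, d3)` maps produced by the TOSA broadcast lowering above. The representation and function name are illustrative, not from the patch:

```python
# Each map result is modeled as ("dim", i) or ("const", 0).
def is_canonicalized_identity(results, shape):
    """Return True if `results` is an identity map over `shape`,
    allowing unit dimensions to be folded to the constant 0."""
    if len(results) != len(shape):
        return False
    for pos, (kind, value) in enumerate(results):
        if kind == "dim" and value == pos:
            continue  # d<pos> at position pos: plain identity result.
        if kind == "const" and value == 0 and shape[pos] == 1:
            continue  # Unit dimension folded to the constant 0.
        return False
    return True

# The input map from the example: (d0, d1, d2, d3) -> (0, d1, d2, d3)
example_map = [("const", 0), ("dim", 1), ("dim", 2), ("dim", 3)]
print(is_canonicalized_identity(example_map, [1, 24, 32, 512]))  # True
print(is_canonicalized_identity(example_map, [2, 24, 32, 512]))  # False
```

A strict `isIdentity()` check would reject the first map even though it addresses the tensor identically to an identity map, which is why the relaxed predicate is needed.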

https://github.com/llvm/llvm-project/pull/70326
