[PATCH] D73145: [mlir][Linalg] Add a Linalg DRR test to go from matmul to vectors

Alex Zinenko via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Thu Jan 23 08:01:25 PST 2020


ftynse requested changes to this revision.
ftynse added inline comments.
This revision now requires changes to proceed.


================
Comment at: mlir/test/Dialect/Linalg/matmul-to-vector.mlir:3
+
+// TODO(ntv, rriddle) Need ViewOp canonicalizer to enable vectorization
+// without phase ordering. Unfortunately there is some pattern interaction
----------------
Is this comment relevant to the test?


================
Comment at: mlir/test/Dialect/Linalg/matmul-to-vector.mlir:25
+// CHECK-SAME: iterator_types = ["parallel", "parallel", "reduction"]
+// CHECK-SAME: : vector<8x16xf32>, vector<16x12xf32> into vector<8x12xf32>
----------------
Could you describe where these magic sizes come from?
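(For what it's worth, the shapes in the CHECK lines are at least self-consistent: a `vector<8x16xf32>` contracted with a `vector<16x12xf32>` over the size-16 dimension yields a `vector<8x12xf32>`. Presumably the 8/16/12 themselves come from the tiling choices in the test, which is worth spelling out.)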


================
Comment at: mlir/test/lib/DeclarativeTransforms/TestLinalgMatmulToVectorPatterns.td:3
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
----------------
MLIR Project


================
Comment at: mlir/test/lib/Transforms/TestLinalgMatmulToVector.cpp:27
+
+/// Canonicalize a LinalgOp fed by either:
+///
----------------
A MemRefCastOp; also, you don't change the LinalgOp itself.
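
For contrast only (this is not what the patch does, which rewrites the MemRefCastOp and leaves the LinalgOp alone): the more common shape of this kind of canonicalization folds the cast by updating the consumer's operand. A minimal hedged sketch, where the `linalg::MatmulOp` anchor, the old `PatternMatchResult` API and the in-place update are assumptions chosen to match the snippets visible in this review:

```cpp
// Sketch only: fold a defining memref_cast into a consuming linalg.matmul by
// replacing the operand with the cast's (more static) source.
struct FoldMemRefCastIntoConsumer : public OpRewritePattern<linalg::MatmulOp> {
  using OpRewritePattern<linalg::MatmulOp>::OpRewritePattern;

  PatternMatchResult matchAndRewrite(linalg::MatmulOp op,
                                     PatternRewriter &rewriter) const override {
    bool folded = false;
    for (OpOperand &opOperand : op.getOperation()->getOpOperands()) {
      auto castOp =
          dyn_cast_or_null<MemRefCastOp>(opOperand.get().getDefiningOp());
      if (!castOp)
        continue;
      // Fold only when the cast merely erased static shape information, so
      // that using the source directly refines the operand type.
      auto srcType = castOp.source().getType().dyn_cast<MemRefType>();
      if (!srcType || !srcType.hasStaticShape())
        continue;
      opOperand.set(castOp.source());
      folded = true;
    }
    // NB: a production pattern should notify the rewriter of this in-place
    // modification (root-update hooks); elided to keep the sketch short.
    return folded ? matchSuccess() : matchFailure();
  }
};
```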


================
Comment at: mlir/test/lib/Transforms/TestLinalgMatmulToVector.cpp:39
+/// ```mlir
+///   ... = memref_cast ... : ... to memref<8x16xf32>
+/// ```
----------------
I don't understand this comment.

If you have a `memref_cast %0` fed by `%0 = memref_cast %? : ... to memref<?x?xf32>`, why do we convert it to `memref_cast %0 : ... to memref<8x16xf32>`? There's something wrong with the order in your description.

I understand it even less after reading the code.


================
Comment at: mlir/test/lib/Transforms/TestLinalgMatmulToVector.cpp:48
+                                     PatternRewriter &rewriter) const override {
+    MemRefType sourceType = castOp.source().getType().dyn_cast<MemRefType>();
+    AffineMap sourceMap = (sourceType.getAffineMaps().empty())
----------------
Nit: cast
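
A minimal sketch of the nit, assuming the cast can be unchecked here because the verifier already constrains `memref_cast` to ranked memref operands (if unranked memrefs can reach this pattern, the checked form plus the null test would have to stay); the `resultType` accessor is also an assumption:

```cpp
// Sketch only: with the operand type guaranteed by the verifier, use an
// unchecked cast<> and drop the later "bail out if not a MemRefType" checks.
MemRefType sourceType = castOp.source().getType().cast<MemRefType>();
MemRefType resultType = castOp.getType().cast<MemRefType>();
```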


================
Comment at: mlir/test/lib/Transforms/TestLinalgMatmulToVector.cpp:56
+
+    // If we don't have MemRefType as source and destination, bail out.
+    if (!sourceType || !resultType)
----------------
Can a verified `memref_cast` apply to non-memrefs?


================
Comment at: mlir/test/lib/Transforms/TestLinalgMatmulToVector.cpp:81
+    //   3. if result is partially static, ensure sizes match.
+    // if (!sourceType.hasStaticShape())
+      // return matchFailure();
----------------
Leftover comment


================
Comment at: mlir/test/lib/Transforms/TestLinalgMatmulToVector.cpp:83
+      // return matchFailure();
+    if (!sourceType.hasStaticShape() ||
+        sourceType.getRank() != resultType.getRank())
----------------
Nit: I'd put it before iterating over the use list
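
A hedged sketch of the suggested ordering (the use-list loop shape is an assumption, not taken verbatim from the patch):

```cpp
// Sketch only: reject non-static or rank-mismatched casts up front, before
// doing any work on the cast's users.
if (!sourceType.hasStaticShape() ||
    sourceType.getRank() != resultType.getRank())
  return matchFailure();

// Only then walk the users of the cast result.
for (Operation *user : castOp.getResult().getUsers()) {
  // ...
}
```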


================
Comment at: mlir/test/lib/Transforms/TestLinalgMatmulToVector.cpp:95
+
+    if (sourceMap) {
+      int64_t offset;
----------------
Please add a comment; at first glance this block looks useless, but it isn't: the trick is in the "linear" part.
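
A hedged sketch of the kind of comment being asked for, assuming the block's purpose is to check that the layout map decomposes into a strided ("linear") form:

```cpp
// Sketch only: what matters here is not the computed values but whether the
// decomposition succeeds. A layout map that is not expressible as
// strides + offset (i.e. not "linear") must make the pattern bail out.
if (sourceMap) {
  int64_t offset;
  SmallVector<int64_t, 4> strides;
  if (failed(getStridesAndOffset(sourceType, strides, offset)))
    return matchFailure();
}
```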


================
Comment at: mlir/test/lib/Transforms/TestLinalgMatmulToVector.cpp:112
+  }
+};
+
----------------
I understand this is test code, but having a separate test that exercises this pattern specifically would definitely help. On one hand, it avoids spurious failures in a bigger test; on the other hand, it makes it easier to see what kind of transformation you are doing here.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D73145/new/

https://reviews.llvm.org/D73145




