[PATCH] D143558: [mlir][Tiling] Properly reject "buffer semantic" operations

Quentin Colombet via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Wed Feb 8 00:34:39 PST 2023


qcolombet created this revision.
qcolombet added reviewers: nicolasvasilache, ftynse.
qcolombet added a project: MLIR.
Herald added subscribers: hanchung, Moerafaat, zero9178, bzcheeseman, sdasgup3, wenzhicui, wrengr, jsetoain, cota, mravishankar, teijeong, rdzhabarov, tatianashp, msifontes, jurahul, Kayjukh, grosul1, Joonsoo, liufengdb, aartbik, mgester, arpith-jacob, csigg, antiagainst, shauheen, rriddle, mehdi_amini, thopre.
Herald added a reviewer: ThomasRaoux.
Herald added a project: All.
qcolombet requested review of this revision.
Herald added a reviewer: herhut.
Herald added a subscriber: stephenneuendorffer.

Our tiling implementation assumes "tensor semantic" for the operation to be
tiled.
Prior to this patch, providing a tileable op with "buffer semantic" would
trigger an assertion instead of gracefully rejecting the input.

This patch turns the assert into a proper error.
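
For illustration, here is a minimal sketch (not part of the patch; %x and %y
are hypothetical tensor-typed values) of the "tensor semantic" form that the
tiling implementation expects, i.e. the op consumes and produces tensor
values instead of writing through memrefs:

  // Tensor-semantic counterpart of the generic op in the test case below.
  %res = linalg.generic
           {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>,
                             affine_map<(d0, d1) -> (d0, d1)>],
            iterator_types = ["parallel", "parallel"]}
           ins(%x : tensor<32x32xf32>)
           outs(%y : tensor<32x32xf32>) {
         ^bb0(%in: f32, %out: f32):
           linalg.yield %in : f32
         } -> tensor<32x32xf32>

The memref ("buffer semantic") version of the same op, as written in the test
added below, now gets a proper diagnostic instead of an assertion failure.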


Repository:
  rG LLVM Github Monorepo

https://reviews.llvm.org/D143558

Files:
  mlir/lib/Dialect/Linalg/Transforms/Tiling.cpp
  mlir/test/Dialect/GPU/transform-gpu-failing.mlir


Index: mlir/test/Dialect/GPU/transform-gpu-failing.mlir
===================================================================
--- mlir/test/Dialect/GPU/transform-gpu-failing.mlir
+++ mlir/test/Dialect/GPU/transform-gpu-failing.mlir
@@ -274,4 +274,32 @@
   transform.gpu.map_nested_foreach_to_threads %funcop { blockDim = [32, 32]}
 }
 
+// -----
+
+func.func @tiling_buffer_semantic_op(%x: memref<32x32xf32>, %y: memref<32x32xf32>, %stream : !gpu.async.token) {
+  %one = arith.constant 1 : index
+  %name = gpu.launch async[%stream] blocks(%arg3, %arg4, %arg5) in (%arg9 = %one, %arg10 = %one, %arg11 = %one)
+            threads(%arg6, %arg7, %arg8) in (%arg12 = %one, %arg13 = %one, %arg14 = %one)
+  {
+    // expected-error @below {{'linalg.generic' op must have "tensor semantic" for tiling}}
+    // expected-note @below {{when applied to this op}}
+    linalg.generic
+      {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>,
+                        affine_map<(d0, d1) -> (d0, d1)>],
+       iterator_types = ["parallel", "parallel"]}
+      ins(%x : memref<32x32xf32>)
+      outs(%y : memref<32x32xf32>) {
+        ^bb0(%in: f32, %out: f32):
+          linalg.yield %in : f32
+    }
+    gpu.terminator
+  }
+  return
+}
 
+transform.sequence failures(propagate) {
+^bb1(%arg0: !pdl.operation):
+  %matmul = transform.structured.match ops{["linalg.generic"]} in %arg0 : (!pdl.operation) -> !pdl.operation
+  // expected-error @below {{transform.structured.tile_to_foreach_thread_op failed to apply}}
+  %foreach, %tiled = transform.structured.tile_to_foreach_thread_op %matmul num_threads [10, 20, 30] (mapping = [ #gpu.thread<y>, #gpu.thread<x>, #gpu.thread<z> ] )
+}
Index: mlir/lib/Dialect/Linalg/Transforms/Tiling.cpp
===================================================================
--- mlir/lib/Dialect/Linalg/Transforms/Tiling.cpp
+++ mlir/lib/Dialect/Linalg/Transforms/Tiling.cpp
@@ -381,7 +381,8 @@
     if (destinationStyleOp) {
       for (OpOperand *outOperand : destinationStyleOp.getDpsInitOperands()) {
         auto *it = llvm::find(dest, outOperand->get());
-        assert(it != dest.end() && "dest operand not found in dest");
+        if (it == dest.end())
+          return op->emitOpError("must have \"tensor semantic\" for tiling");
         unsigned destNum = std::distance(dest.begin(), it);
         outOperand->set(destBbArgs[destNum]);
       }

