[Mlir-commits] [mlir] [mlir][vector] Add leading unit dim folding patterns for masked transfers (PR #71466)

Diego Caballero llvmlistbot at llvm.org
Mon Nov 6 16:37:28 PST 2023


================
@@ -410,3 +436,12 @@ func.func @cast_away_insert_leading_one_dims_one_two_dest_scalable(%s: vector<1x
   %0 = vector.insert %s, %v [0, 0, 7] : vector<1x[8]xi1> into vector<1x1x8x1x[8]xi1>
   return %0: vector<1x1x8x1x[8]xi1>
 }
+
+// CHECK-LABEL:   func.func @cast_away_constant_mask() -> vector<1x1x8x2x1xi1> {
+// CHECK:           %[[MASK:.*]] = vector.constant_mask [6, 1, 1] : vector<8x2x1xi1>
+// CHECK:           %[[BCAST:.*]] = vector.broadcast %[[MASK]] : vector<8x2x1xi1> to vector<1x1x8x2x1xi1>
+// CHECK:           return %[[BCAST]] : vector<1x1x8x2x1xi1>
+func.func @cast_away_constant_mask() -> vector<1x1x8x2x1xi1> {
+  %0 = vector.constant_mask [1, 1, 6, 1, 1] : vector<1x1x8x2x1xi1>
+  return %0: vector<1x1x8x2x1xi1>
----------------
dcaballe wrote:

I understand that inserting the broadcast is just for users where the leading unit dims still remain, right? Or is this meant to trigger some other canonicalization pattern that is only implemented on the broadcast op?
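To spell out the first reading, here is a minimal sketch with a hypothetical masked `vector.transfer_read` consumer (the memref shape and the `%src`/`%pad`/`%c0` values are assumptions for illustration, not part of this test):

```mlir
// Assumed defined elsewhere:
//   %src : memref<1x1x8x2x1xf32>, %pad : f32, %c0 : index
// After the constant_mask rewrite, the broadcast restores the original 5-D
// mask type, so this existing user is left untouched and still type-checks:
%m    = vector.constant_mask [6, 1, 1] : vector<8x2x1xi1>
%mask = vector.broadcast %m : vector<8x2x1xi1> to vector<1x1x8x2x1xi1>
%v    = vector.transfer_read %src[%c0, %c0, %c0, %c0, %c0], %pad, %mask
          {in_bounds = [true, true, true, true, true]}
          : memref<1x1x8x2x1xf32>, vector<1x1x8x2x1xf32>
// If the transfer itself is later rewritten to vector<8x2x1xf32> by the
// companion patterns, the expectation would be that the broadcast becomes
// dead or foldable at that point.
```

Is that the intent, or is the broadcast primarily there to feed a broadcast-specific canonicalization?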

https://github.com/llvm/llvm-project/pull/71466

