[Mlir-commits] [mlir] [mlir][vector] Add leading unit dim folding patterns for masked transfers (PR #71466)
Quinn Dawkins
llvmlistbot at llvm.org
Mon Nov 6 16:42:16 PST 2023
================
@@ -410,3 +436,12 @@ func.func @cast_away_insert_leading_one_dims_one_two_dest_scalable(%s: vector<1x
%0 = vector.insert %s, %v [0, 0, 7] : vector<1x[8]xi1> into vector<1x1x8x1x[8]xi1>
return %0: vector<1x1x8x1x[8]xi1>
}
+
+// CHECK-LABEL: func.func @cast_away_constant_mask() -> vector<1x1x8x2x1xi1> {
+// CHECK: %[[MASK:.*]] = vector.constant_mask [6, 1, 1] : vector<8x2x1xi1>
+// CHECK: %[[BCAST:.*]] = vector.broadcast %[[MASK]] : vector<8x2x1xi1> to vector<1x1x8x2x1xi1>
+// CHECK: return %[[BCAST]] : vector<1x1x8x2x1xi1>
+func.func @cast_away_constant_mask() -> vector<1x1x8x2x1xi1> {
+ %0 = vector.constant_mask [1, 1, 6, 1, 1] : vector<1x1x8x2x1xi1>
+ return %0: vector<1x1x8x2x1xi1>
----------------
qedawkins wrote:
Primarily the latter, plus it's consistent with the other folding patterns, e.g. transfer_read. We just need something to cast back to the original shape, and I think broadcast is easier to reason about than shape_cast (i.e. vector.extract(vector.broadcast)) ).
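
As a rough illustration (hand-written here, not taken from the patch; the op syntax may differ slightly from the dialect at the time of this change), extracting the leading unit dims back out of the broadcast folds straight to the original mask:

  %mask  = vector.constant_mask [6, 1, 1] : vector<8x2x1xi1>
  %bcast = vector.broadcast %mask : vector<8x2x1xi1> to vector<1x1x8x2x1xi1>
  // The extract-of-broadcast fold recovers %mask directly, whereas the
  // equivalent vector.shape_cast form would need its own folding logic.
  %inner = vector.extract %bcast[0, 0] : vector<8x2x1xi1> from vector<1x1x8x2x1xi1>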
https://github.com/llvm/llvm-project/pull/71466