[Mlir-commits] [mlir] [mlir] Extend CombineTransferReadOpTranspose pattern to handle extf ops. (PR #74754)

llvmlistbot at llvm.org
Thu Dec 7 15:01:47 PST 2023


================
@@ -460,3 +460,33 @@ func.func @cast_f16_to_f32_write(%arg0: memref<16x16xf16>, %arg1: memref<16x16xf
   vector.transfer_write %cast, %arg3[%c0, %c0] {in_bounds = [true, true]} : vector<16x16xf32>, memref<16x16xf32>
   return
 }
+
+// -----
+
+#map1 = affine_map<(d0, d1, d2) -> (d0, d2)>
+#map2 = affine_map<(d0, d1, d2) -> (d2, d1)>
+#map3 = affine_map<(d0, d1, d2) -> (d0, d1)>
+
+//   CHECK-DAG: #[[$MAP:.+]] = affine_map<(d0, d1) -> (d1, d0)>
+// CHECK-LABEL: func @fold_transpose_into_transfer_read(
+//  CHECK-SAME:      %[[ALLOC:.+]]: memref<64x128xf16>
+//   CHECK-DAG:      %[[C0:.+]] = arith.constant 0 : index
+//   CHECK-DAG:      %[[CST:.+]] = arith.constant 0.000000e+00 : f16
+//       CHECK:      %[[READ:.+]] = vector.transfer_read %[[ALLOC]][%[[C0]], %[[C0]]], %[[CST]] {in_bounds = [true, true], permutation_map = #[[$MAP]]}
+//       CHECK:      %[[EXTF1:.+]] = arith.extf %[[READ]]
+//   CHECK-NOT:      vector.transpose
+//       CHECK:      %[[RESULT:.+]] = vector.contract
+func.func @fold_transpose_into_transfer_read(%alloc: memref<64x128xf16>, %vector: vector<32x128xf16>, %alloc2: memref<32x64xf32>) {
----------------
harsh-nod wrote:

I think that makes it a little more confusing: it would suggest we are folding both the transpose and the extf into the transfer_read, when in fact only the transpose is folded into it.
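
For reference, here is a minimal sketch (not taken from the patch; value names and shapes are illustrative, based on the test above) of what the pattern does. Only the transpose is absorbed into the transfer_read's permutation_map; the extf stays as a separate op, now operating on the transposed read:

```mlir
// Before the rewrite:
%read = vector.transfer_read %alloc[%c0, %c0], %cst {in_bounds = [true, true]}
    : memref<64x128xf16>, vector<64x128xf16>
%ext  = arith.extf %read : vector<64x128xf16> to vector<64x128xf32>
%t    = vector.transpose %ext, [1, 0] : vector<64x128xf32> to vector<128x64xf32>

// After the rewrite: the transpose becomes the read's permutation_map,
// and the extf remains, applied to the transposed vector.
%read = vector.transfer_read %alloc[%c0, %c0], %cst
    {in_bounds = [true, true], permutation_map = affine_map<(d0, d1) -> (d1, d0)>}
    : memref<64x128xf16>, vector<128x64xf16>
%ext  = arith.extf %read : vector<128x64xf16> to vector<128x64xf32>
```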

https://github.com/llvm/llvm-project/pull/74754

