[Mlir-commits] [mlir] [MLIR][memref] Fix normalization issue in memref.load (PR #107771)

Kai Sasaki llvmlistbot at llvm.org
Wed Sep 18 22:53:49 PDT 2024


================
@@ -363,3 +363,33 @@ func.func @memref_with_strided_offset(%arg0: tensor<128x512xf32>, %arg1: index,
   %1 = bufferization.to_tensor %cast : memref<16x512xf32, strided<[?, ?], offset: ?>>
   return %1 : tensor<16x512xf32>
 }
+
+#map0 = affine_map<(i,k) -> (2 * (i mod 2) + (k mod 2) + 4 * (i floordiv 2) + 8 * (k floordiv 2))>
+#map1 = affine_map<(k,j) -> ((k mod 2) + 2 * (j mod 2) + 8 * (k floordiv 2) + 4 * (j floordiv 2))>
+#map2 = affine_map<(i,j) -> (4 * i + j)>
+// CHECK-LABEL: func @memref_load_with_reduction_map
+func.func @memref_load_with_reduction_map(%arg0 :  memref<4x4xf32,#map2>) -> () {
+  %0 = memref.alloc() : memref<4x8xf32,#map0>
+  %1 = memref.alloc() : memref<8x4xf32,#map1>
+  %2 = memref.alloc() : memref<4x4xf32,#map2>
+  // CHECK-NOT:  memref<4x8xf32>
+  // CHECK-NOT:  memref<8x4xf32>
+  // CHECK-NOT:  memref<4x4xf32>
+  %cst = arith.constant 3.0 : f32
+  %cst0 = arith.constant 0 : index
+  affine.for %i = 0 to 4 {
+    affine.for %j = 0 to 8 {
+      affine.for %k = 0 to 8 {
+        // CHECK: affine.apply #map{{.*}}(%{{.*}}, %{{.*}})
+        // CHECK: memref.load %alloc[%{{.*}}] : memref<32xf32>
----------------
Lewuathe wrote:

This test shows that normalization does not fail, but it would be better to also verify that the `affine_map` calculation is preserved precisely, as we already do for `affine.load`, if possible.

How about checking that `affine.apply` computes the same indexing logic as the following, to ensure consistency?

```mlir
          %0 = affine.load %alloc[%arg1 * 2 + %arg3 + (%arg3 floordiv 2) * 6] : memref<32xf32>
          %1 = affine.load %alloc_0[%arg3 + %arg2 * 2 + (%arg3 floordiv 2) * 6] : memref<32xf32>
          %2 = affine.load %alloc_1[%arg1 * 4 + %arg2] : memref<16xf32>
```

https://godbolt.org/z/bPhYGrhKa
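For instance, the test could pin down the normalized index computation with FileCheck directives along these lines. This is only a sketch: the map expression is taken from the first `affine.load` above, while the map numbering and SSA value names (`%alloc`, capture names) are hypothetical and would depend on the actual pass output:

```mlir
// Capture the flattened map produced by normalization (expression taken
// from the first affine.load above; map name/number is hypothetical).
// CHECK-DAG: [[$MAP0:#map[0-9]*]] = affine_map<(d0, d1) -> (d0 * 2 + d1 + (d1 floordiv 2) * 6)>

// Check that memref.load goes through an affine.apply of that same map.
// CHECK: %[[IDX0:.*]] = affine.apply [[$MAP0]](%{{.*}}, %{{.*}})
// CHECK: memref.load %{{.*}}[%[[IDX0]]] : memref<32xf32>
```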



https://github.com/llvm/llvm-project/pull/107771
