<table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Issue</th>
<td>
<a href="https://github.com/llvm/llvm-project/issues/150203">150203</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>
[mlir] Incorrect sizes/offsets after tile + fuse
</td>
</tr>
<tr>
<th>Labels</th>
<td>
mlir
</td>
</tr>
<tr>
<th>Assignees</th>
<td>
</td>
</tr>
<tr>
<th>Reporter</th>
<td>
banach-space
</td>
</tr>
</table>
<pre>
**REPRO**
```
$ mlir-opt --transform-interpreter tile_and_fuse.mlir -cse -test-transform-dialect-erase-schedule --split-input-file -cse
```
```mlir
#map = affine_map<(d0, d1) -> (d0, d1)>
func.func @pack_scalable_prod(%2: tensor<64x32xf32>) -> tensor<?x32x?x1xf32>
{
%c0 = arith.constant 0 : index
%c8 = arith.constant 8 : index
%vscale = vector.vscale
%c8_vscale = arith.muli %vscale, %c8 : index
%0 = affine.apply affine_map<()[s0] -> (64 ceildiv s0)>()[%c8_vscale]
%3 = tensor.empty(%0, %c8_vscale) : tensor<?x32x?x1xf32>
%4 = tensor.empty() : tensor<64x32xf32>
%5 = linalg.generic {indexing_maps = [#map, #map], iterator_types = ["parallel", "parallel"]} ins(%2 : tensor<64x32xf32>) outs(%4 : tensor<64x32xf32>) {
^bb0(%in: f32, %out: f32):
%7 = arith.addf %in, %in : f32
linalg.yield %7 : f32
} -> tensor<64x32xf32>
%pack = linalg.pack %5 inner_dims_pos = [0, 1] inner_tiles = [%c8_vscale, 1] into %3 : tensor<64x32xf32> -> tensor<?x32x?x1xf32>
return %pack: tensor<?x32x?x1xf32>
}
module attributes {transform.with_named_sequence} {
transform.named_sequence @__transform_main(%module : !transform.any_op {transform.readonly}) {
%generic = transform.structured.match ops{["linalg.generic"]} in %module
: (!transform.any_op) -> !transform.any_op
%pack = transform.structured.match ops{["linalg.pack"]} in %module
: (!transform.any_op) -> !transform.any_op
%tiled_unpack, %loops = transform.structured.tile_using_forall %pack tile_sizes [[8], 1]
: (!transform.any_op) -> (!transform.any_op, !transform.any_op)
%fused_op, %new_containing_op =
transform.structured.fuse_into_containing_op %generic into %loops
: (!transform.any_op, !transform.any_op) -> (!transform.any_op, !transform.any_op)
transform.yield
}
}
// -----
// Fixed-width version for comparison
#map = affine_map<(d0, d1) -> (d0, d1)>
func.func @pack_fixed_prod(%2: tensor<64x32xf32>) -> tensor<8x32x8x1xf32>
{
%c0 = arith.constant 0 : index
%c8 = arith.constant 8 : index
%3 = tensor.empty() : tensor<8x32x8x1xf32>
%4 = tensor.empty() : tensor<64x32xf32>
%5 = linalg.generic {indexing_maps = [#map, #map], iterator_types = ["parallel", "parallel"]} ins(%2 : tensor<64x32xf32>) outs(%4 : tensor<64x32xf32>) {
^bb0(%in: f32, %out: f32):
%7 = arith.addf %in, %in : f32
linalg.yield %7 : f32
} -> tensor<64x32xf32>
%pack = linalg.pack %5 inner_dims_pos = [0, 1] inner_tiles = [8, 1] into %3 : tensor<64x32xf32> -> tensor<8x32x8x1xf32>
return %pack: tensor<8x32x8x1xf32>
}
module attributes {transform.with_named_sequence} {
transform.named_sequence @__transform_main(%module : !transform.any_op {transform.readonly}) {
%generic = transform.structured.match ops{["linalg.generic"]} in %module
: (!transform.any_op) -> !transform.any_op
%pack = transform.structured.match ops{["linalg.pack"]} in %module
: (!transform.any_op) -> !transform.any_op
%tiled_unpack, %loops = transform.structured.tile_using_forall %pack tile_sizes [8, 1]
: (!transform.any_op) -> (!transform.any_op, !transform.any_op)
%fused_op, %new_containing_op =
transform.structured.fuse_into_containing_op %generic into %loops
: (!transform.any_op, !transform.any_op) -> (!transform.any_op, !transform.any_op)
transform.yield
}
}
```
**ISSUE**
After the transformation, the following `linalg.pack` op is produced:
```mlir
#map = affine_map<()[s0] -> (64 ceildiv s0)>
#map2 = affine_map<(d0) -> (d0 * 8)>
#map3 = affine_map<(d0)[s0] -> (-d0 + s0, 8)>
%0 = affine.apply #map()[%c8_vscale]
%1 = tensor.empty(%0, %c8_vscale) : tensor<?x32x?x1xf32>
%4 = scf.forall (%arg1, %arg2) in (%3, 32) shared_outs(%arg3 = %1) -> (tensor<?x32x?x1xf32>) {
%5 = affine.apply #map2(%arg1)
%6 = affine.min #map3(%5)[%dim]
%9 = linalg.generic {
} -> tensor<?x1xf32>
%dim_1 = tensor.dim %arg3, %c2 : tensor<?x32x?x1xf32>
%extracted_slice_2 = tensor.extract_slice %arg3[%5, %arg2, 0, 0] [%6, 1, %dim_1, 1] [1, 1, 1, 1] : tensor<?x32x?x1xf32> to tensor<?x1x?x1xf32>
%pack = linalg.pack %9 inner_dims_pos = [0, 1] inner_tiles = [%c8_vscale, 1] into %extracted_slice_2 : tensor<?x1xf32> -> tensor<?x1x?x1xf32>
}
```
Note that the offset and size computations feeding `%extracted_slice_2 = tensor.extract_slice` (via `#map2` and `#map3`) use the constant `8` and do not account for `vscale`, even though the tile size requested is scalable (`[8]`).
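For comparison, I would expect the offset/size computation to be expressed in terms of the scalable tile size. A hand-written sketch of the expected IR (not actual compiler output; `%c8_vscale` and `%dim` as in the snippets above):

```mlir
// Offset: loop index * (8 * vscale), not loop index * 8.
%off = arith.muli %arg1, %c8_vscale : index
// Size: clamp the scalable tile size against the remaining extent.
%sz = affine.min affine_map<(d0)[s0, s1] -> (-d0 + s0, s1)>(%off)[%dim, %c8_vscale]
```

In other words, the `* 8` in `#map2` and the `8` in `#map3` should both be replaced with the runtime value `%c8_vscale`, matching what the fixed-width version computes with the constant `8`.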
</pre>