<table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Issue</th>
<td>
<a href="https://github.com/llvm/llvm-project/issues/113600">113600</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>
[mlir] Bufferization analysis heuristic
</td>
</tr>
<tr>
<th>Labels</th>
<td>
mlir
</td>
</tr>
<tr>
<th>Assignees</th>
<td>
</td>
</tr>
<tr>
<th>Reporter</th>
<td>
n-io
</td>
</tr>
</table>
<pre>
I have encountered a case where the results of `one-shot-bufferize` differ depending on whether the code runs inside a function body or inside the region of an unknown op. To reproduce, I've copied and pasted the same code block once inside a function body and once inside the region of an unknown op:
```
builtin.module {
func.func @test() {
%0, %1, %2 = "test.op"() : () -> (memref<14xf32>, memref<10xf32>, memref<10xf32>)
%3 = bufferization.to_tensor %0 restrict : memref<14xf32>
%4 = bufferization.to_tensor %1 restrict : memref<10xf32>
%5 = bufferization.to_tensor %2 restrict : memref<10xf32>
%6 = "tensor.extract_slice"(%3) <{"static_offsets" = array<i64: 2>, "static_sizes" = array<i64: 10>, "static_strides" = array<i64: 1>, "operandSegmentSizes" = array<i32: 1, 0, 0, 0>}> : (tensor<14xf32>) -> tensor<10xf32>
%7 = linalg.mul ins(%6, %5 : tensor<10xf32>, tensor<10xf32>) outs(%6 : tensor<10xf32>) -> tensor<10xf32>
%8 = linalg.mul ins(%6, %4 : tensor<10xf32>, tensor<10xf32>) outs(%6 : tensor<10xf32>) -> tensor<10xf32>
%9 = bufferization.to_memref %7 : memref<10xf32>
%10 = bufferization.to_memref %8 : memref<10xf32>
"test.op"(%9, %10) : (memref<10xf32>, memref<10xf32>) -> ()
func.return
}
func.func @test2() {
"test.op_with_region"() ({
%0, %1, %2 = "test.op"() : () -> (memref<14xf32>, memref<10xf32>, memref<10xf32>)
%3 = bufferization.to_tensor %0 restrict : memref<14xf32>
%4 = bufferization.to_tensor %1 restrict : memref<10xf32>
%5 = bufferization.to_tensor %2 restrict : memref<10xf32>
%6 = "tensor.extract_slice"(%3) <{"static_offsets" = array<i64: 2>, "static_sizes" = array<i64: 10>, "static_strides" = array<i64: 1>, "operandSegmentSizes" = array<i32: 1, 0, 0, 0>}> : (tensor<14xf32>) -> tensor<10xf32>
%7 = linalg.mul ins(%6, %5 : tensor<10xf32>, tensor<10xf32>) outs(%6 : tensor<10xf32>) -> tensor<10xf32>
%8 = linalg.mul ins(%6, %4 : tensor<10xf32>, tensor<10xf32>) outs(%6 : tensor<10xf32>) -> tensor<10xf32>
%9 = bufferization.to_memref %7 : memref<10xf32>
%10 = bufferization.to_memref %8 : memref<10xf32>
"test.op"(%9, %10) : (memref<10xf32>, memref<10xf32>) -> ()
}) : () -> ()
return
}
}
```
For the first function, bufferization creates two allocs, resulting in `"test.op"(%alloc_0, %alloc)`, while in the second case it creates only one alloc, resulting in `"test.op"(%alloc, %alloc)` (see [godbolt.org](https://godbolt.org/z/bd41hYfK3)).
This seems to be related to the analysis heuristic used: setting `analysis-heuristic=top-down` results in two buffer allocs in both cases. Removing the `tensor.extract_slice` resizing (in favour of using size `10xf32` throughout) also results in two buffer allocs. Could someone confirm whether this is a bug?
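For reference, the invocations I am comparing look roughly like the following (a sketch; exact flag spellings may differ across LLVM versions, and `--allow-unregistered-dialect` is assumed to be needed for the unregistered `test.op`):
```
# Default heuristic (bottom-up): one alloc inside the unknown op's region
mlir-opt repro.mlir --allow-unregistered-dialect --one-shot-bufferize

# With the top-down heuristic: two allocs in both cases
mlir-opt repro.mlir --allow-unregistered-dialect \
  --one-shot-bufferize="analysis-heuristic=top-down"
```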
</pre>