<table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Issue</th>
<td>
<a href="https://github.com/llvm/llvm-project/issues/59016">59016</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>
SparseDialect test seems to have intermittent failures
</td>
</tr>
<tr>
<th>Labels</th>
<td>
mlir:sparse
</td>
</tr>
<tr>
<th>Assignees</th>
<td>
</td>
</tr>
<tr>
<th>Reporter</th>
<td>
MaheshRavishankar
</td>
</tr>
</table>
<pre>
The `test/Dialect/SparseTensor/codegen_buffer_initialization.mlir` test seems to fail intermittently on Windows: over 100 runs, the generated IR differs from run to run.
Expected output from `mlir-opt --sparse-tensor-codegen=enable-buffer-initialization=true codegen_buffer_initialization.mlir`:
```
#map = affine_map<(d0) -> (d0)>
module {
  func.func @sparse_alloc_sparse_vector(%arg0: index) -> (memref<1xindex>, memref<3xindex>, memref<?xindex>, memref<?xindex>, memref<?xf64>) {
    %c16 = arith.constant 16 : index
    %alloc = memref.alloc() : memref<1xindex>
    %alloc_0 = memref.alloc() : memref<3xindex>
    %alloc_1 = memref.alloc(%c16) : memref<?xindex>
    %c0 = arith.constant 0 : index
    linalg.fill ins(%c0 : index) outs(%alloc_1 : memref<?xindex>)
    %alloc_2 = memref.alloc(%c16) : memref<?xindex>
    %c0_3 = arith.constant 0 : index
    linalg.fill ins(%c0_3 : index) outs(%alloc_2 : memref<?xindex>)
    %alloc_4 = memref.alloc(%c16) : memref<?xf64>
    %cst = arith.constant 0.000000e+00 : f64
    linalg.fill ins(%cst : f64) outs(%alloc_4 : memref<?xf64>)
    %c0_5 = arith.constant 0 : index
    linalg.fill ins(%c0_5 : index) outs(%alloc_0 : memref<3xindex>)
    %c0_6 = arith.constant 0 : index
    %c0_7 = arith.constant 0 : index
    memref.store %arg0, %alloc[%c0_7] : memref<1xindex>
    %0 = sparse_tensor.push_back %alloc_0, %alloc_1, %c0_6 {idx = 0 : index} : memref<3xindex>, memref<?xindex>, index
    %c1 = arith.constant 1 : index
    %c0_8 = arith.constant 0 : index
    %1 = sparse_tensor.push_back %alloc_0, %0, %c0_8, %c1 {idx = 0 : index} : memref<3xindex>, memref<?xindex>, index, index
    %2 = builtin.unrealized_conversion_cast %alloc, %alloc_0, %1, %alloc_2, %alloc_4 : memref<1xindex>, memref<3xindex>, memref<?xindex>, memref<?xindex>, memref<?xf64> to tensor<?xf64, #sparse_tensor.encoding<{ dimLevelType = [ "compressed" ] }>>
    %3 = builtin.unrealized_conversion_cast %alloc, %alloc_0, %1, %alloc_2, %alloc_4 : memref<1xindex>, memref<3xindex>, memref<?xindex>, memref<?xindex>, memref<?xf64> to tensor<?xf64, #sparse_tensor.encoding<{ dimLevelType = [ "compressed" ] }>>
    return %alloc, %alloc_0, %1, %alloc_2, %alloc_4 : memref<1xindex>, memref<3xindex>, memref<?xindex>, memref<?xindex>, memref<?xf64>
  }
}
```
but on some runs the output is:
```
#map = affine_map<(d0) -> (d0)>
module {
  func.func @sparse_alloc_sparse_vector(%arg0: index) -> (memref<1xindex>, memref<3xindex>, memref<?xindex>, memref<?xindex>, memref<?xf64>) {
    %c16 = arith.constant 16 : index
    %alloc = memref.alloc() : memref<1xindex>
    %alloc_0 = memref.alloc() : memref<3xindex>
    %alloc_1 = memref.alloc(%c16) : memref<?xindex>
    %alloc_2 = memref.alloc(%c16) : memref<?xindex>
    %alloc_3 = memref.alloc(%c16) : memref<?xf64>
    %c0 = arith.constant 0 : index
    linalg.fill ins(%c0 : index) outs(%alloc_0 : memref<3xindex>)
    %c0_4 = arith.constant 0 : index
    %c0_5 = arith.constant 0 : index
    memref.store %arg0, %alloc[%c0_5] : memref<1xindex>
    %0 = sparse_tensor.push_back %alloc_0, %alloc_1, %c0_4 {idx = 0 : index} : memref<3xindex>, memref<?xindex>, index
    %c1 = arith.constant 1 : index
    %c0_6 = arith.constant 0 : index
    %1 = sparse_tensor.push_back %alloc_0, %0, %c0_6, %c1 {idx = 0 : index} : memref<3xindex>, memref<?xindex>, index, index
    %2 = builtin.unrealized_conversion_cast %alloc, %alloc_0, %1, %alloc_2, %alloc_3 : memref<1xindex>, memref<3xindex>, memref<?xindex>, memref<?xindex>, memref<?xf64> to tensor<?xf64, #sparse_tensor.encoding<{ dimLevelType = [ "compressed" ] }>>
    %3 = builtin.unrealized_conversion_cast %alloc, %alloc_0, %1, %alloc_2, %alloc_3 : memref<1xindex>, memref<3xindex>, memref<?xindex>, memref<?xindex>, memref<?xf64> to tensor<?xf64, #sparse_tensor.encoding<{ dimLevelType = [ "compressed" ] }>>
    return %alloc, %alloc_0, %1, %alloc_2, %alloc_3 : memref<1xindex>, memref<3xindex>, memref<?xindex>, memref<?xindex>, memref<?xf64>
  }
}
```
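For reference, a minimal repro sketch (not part of the original report) that runs the pass repeatedly and flags the first run whose printed IR differs from the baseline. It assumes `mlir-opt` is on `PATH` and that the script is invoked from the directory containing `codegen_buffer_initialization.mlir`; the 100-run count matches the observation above.
```python
#!/usr/bin/env python3
# Repro sketch: run sparse-tensor-codegen repeatedly and compare outputs.
# Assumptions: `mlir-opt` is on PATH, test file is in the current directory.
import subprocess
import sys

CMD = [
    "mlir-opt",
    "--sparse-tensor-codegen=enable-buffer-initialization=true",
    "codegen_buffer_initialization.mlir",
]

baseline = None
for i in range(100):
    out = subprocess.run(CMD, capture_output=True, text=True, check=True).stdout
    if baseline is None:
        baseline = out          # first run is the reference output
    elif out != baseline:
        print(f"run {i}: generated IR differs from run 0")
        sys.exit(1)
print("all 100 runs produced identical IR")
```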
</pre>