<table border="1" cellspacing="0" cellpadding="8">
    <tr>
        <th>Issue</th>
        <td>
            <a href=http://email.email.llvm.org/c/eJztV8Fu2zgQ_Rr5QtiQJTuxDz6kTQIE2F6KBfYY0OLIYk2RAknFdr9-31CynXS3KbZ72B4WMBJpSM68mXnDGW2dOm0evbORrAqiklbsyJKXkQREwfkgYiOjaOQLia_knSBDLdkYRFasOhfiNDSyIyGtNKegQ1asZ-IpZsUt1Lm2dVZENxxvTh35TnrZUqSz4gq2vTOiIh-ltqImGXtPQbhaSNE6RUZsT8m0tjvh-gix0oAQNHQDxFbGqnkO-itl5X0O-8CiYFTJk5DGwAwFEq4LwlPLJmQQumbxSRzIA3oVe2w8CZVMBAd8DZ5mWX6f5XfD34ejbDtD7BNvqr1rYcNXTVZ8FCkOlwiU45HsJh9-rdF-EAkAXs5XNwJQhdE4sZtpq-PzEGyRLT_krBC74OOcH-fL8iZb3uPE3ZiSrPyYH7Py8chLx7ossvLhjfrbpH7YPKtkiFer7-ngRF1Wee09G6vXLiTO6Epktx-0VXREhJ5biYjzHvgk61pbYhFrLlYqOamSf6rglE3ZPq8U55U1W8TzPzj77QprWN7zq47MaeefIzh4gZUVBdPRGDJ4HOL-AwnU3d4LbcHzFeJQhfh8m7-Na4pYuVqcowaEYO14YEjO3ftxxglE8hLu5cN2mw_Hpd_N-ThvHGgCSXGRrPkJ0kf8hHWi86SoosB1fFYnzkk7aTJqVDEXZx1no_AyxfXHhLjQ_HW5PNXQXC7LMi_EAQXXedfJnYxcO6g8ESKeKzHcHdrWzrcQ8GXR6CAOrge0Lf33pfQ_zX-S5j-6bH5Nkn-HDH9L8T9I7K07DG2MOY2u1RtulWheFh3Ho32hveTXdhWG1jQcAM-ts9OgFU2prqlKxeE6rhXVV_yCdkxtF0_iRZqe0FfroXhS1XBZ1T33rbGYeOlM0WqsIIDqjKxIiYOOzVUhfJEekhn6L47byM4h2IdGo8mpk5Uta5GBwqgLXZmrk9pA5gVSbdEuBmW1NiZw94A9pLBxHSVgnOPKkLQw33fcxh0UeFE7o9i9g_P7X6zE2ZUr58_agSHx9UrKd7rozzDqd6QuNq7fNeAPzyQ6XHll6RiZF33goKVEMWw8iBYzDoKephFpMcFoaUCkkWaa710MP57zJZFH73kR_NphCBPTtI2MTsngCWtqyO5AkyGhgZBsBGZP15loQBWaRAncgJgtcGlrjIPukKYxTFY6YpKiYDEDAonxJNUpjYV3dsi_7ABBVs3lnmfuKHSI5C4bT8i-9GEcPFkMd_9KgZG12z5FCnEbA6J0qPowlpx3_bnqnpK-AVn0stqTeg-WZKvjHAUdVCd7j1coZ-E4y8qOZ1eV8L4q-hTC7-FP4DGC4pjkLHVOB4R0rPgHnaAd5Ck1CdFy_UUejnPmSNsDLsmgsYfLEbkkYRzGWqQACdH23G2rPWPE8DveGXTsEGMYveIa7w4I3ky-cLhCz1vknyQm6eazfNG4gOxeejFRm1Kty7WcyB709Zst2Rdp9X7Se7NpYuzSNJyu6h086be4b1q8GPNy_jdF0L-AlnjVIfSEDvLIc0M5aTZzWi1UUW2XckXLNdV5gZteLtRitVLl7U0xMXJLJmyGJmfpIJKKoYNN_j0CvSnyosjn83WBCi4XM1lQWdJ6ubi5KfOlqhEV_qYwM9Yzc3438ZukctvvAhYNCiRcFyUoubNECTAQRh1Nevn029NnvrYeUgJHbgBdSkz6drl-hXHTsdP0LcYfPWE2Sag3CfKfbf8-Bw>53303</a>
        </td>
    </tr>

    <tr>
        <th>Summary</th>
        <td>
            [MLIR] Elide tensor/linalg ops that have known-zero sizes.
        </td>
    </tr>

    <tr>
      <th>Labels</th>
      <td>
            new issue
      </td>
    </tr>

    <tr>
      <th>Assignees</th>
      <td>
      </td>
    </tr>

    <tr>
      <th>Reporter</th>
      <td>
          benvanik
      </td>
    </tr>
</table>

<pre>
    Frontends can generate tensors that have zero elements (post-shape analysis). It's common to have hyperparameters that control certain features of a model by zeroing out a dimension (batch_size=0), and today all these ops remain as if they were actually doing something.

Example coming from torch, post-analysis:
```mlir
  %186 = linalg.init_tensor [0, %dim1, 1536] : tensor<0x?x1536xf32>
  %187 = tensor.cast %186 : tensor<0x?x1536xf32> to tensor<?x?x1536xf32>
  %188 = linalg.generic {indexing_maps = [affine_map<(d0, d1, d2) -> (d2, d1)>, affine_map<(d0, d1, d2) -> (d0, d1, d2)>], iterator_types = ["parallel", "parallel", "parallel"]} ins(%cst_70 : tensor<1536x384xf32>) outs(%187 : tensor<?x?x1536xf32>) {
  ^bb0(%arg1: f32, %arg2: f32):  // no predecessors
    linalg.yield %arg1 : f32
  } -> tensor<?x?x1536xf32>
```

If #53302 was propagating the static shape information this would be:
```mlir
  %186 = linalg.init_tensor [0, %dim1, 1536] : tensor<0x?x1536xf32>
  %188 = linalg.generic {indexing_maps = [affine_map<(d0, d1, d2) -> (d2, d1)>, affine_map<(d0, d1, d2) -> (d0, d1, d2)>], iterator_types = ["parallel", "parallel", "parallel"]} ins(%cst_70 : tensor<1536x384xf32>) outs(%186 : tensor<0x?x1536xf32>) {
  ^bb0(%arg1: f32, %arg2: f32):  // no predecessors
    linalg.yield %arg1 : f32
  } -> tensor<0x?x1536xf32>
```

We know that the result has one or more 0 dimensions and that this non-side-effecting op produces an empty value. If the shape were fully static, the generic could be replaced with an empty `arith.constant`, while dynamic cases could turn themselves into empty fills to be (hopefully) cleaned up by other folding work:
```mlir
  %186 = linalg.init_tensor [0, %dim1, 1536] : tensor<0x?x1536xf32>
  %188 = linalg.fill(%cst_0, %186) : f32, tensor<0x?x1536xf32> -> tensor<0x?x1536xf32>
```
The thought here is that the next op using `%188` may be from any dialect and this preserves a correct program - and eliding zero-length fills seems like something that should exist on its own (if it doesn't already). Another approach would be to drop the fill and just have the `linalg.init_tensor`, but there may be discussions around that approach that I haven't tracked. Yet another approach would be a `tensor.undef`/`linalg.undef` that captured the dimensions like `linalg.init_tensor` but acted as a poison value. Either way, a memset of 0 is much easier to elide lower down in the stack than a fully expanded `linalg.generic`.
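A minimal sketch of the decision logic described above, in plain Python (the function name and `None`-for-dynamic convention are illustrative, not actual MLIR APIs): elide when any static dimension is zero, folding to a constant when the shape is fully static and to a fill otherwise:

```python
# Hypothetical sketch of the proposed folding decision; names are
# illustrative, not MLIR APIs. A dimension of None models a dynamic
# ('?') size; a static 0 makes the tensor provably empty.

def classify_zero_sized(shape):
    """Return how a known-empty op result could be rewritten.

    - "constant": every dim is static and at least one is 0, so the
      whole op can fold to an empty constant value.
    - "fill": at least one dim is statically 0 but others are dynamic,
      so the op can degrade to an empty fill (cheap to elide later).
    - None: no statically-zero dim; leave the op alone.
    """
    if not any(d == 0 for d in shape):
        return None
    fully_static = all(d is not None for d in shape)
    return "constant" if fully_static else "fill"

# Shapes mirroring the example IR above:
print(classify_zero_sized([0, None, 1536]))  # tensor<0x?x1536xf32> -> fill
print(classify_zero_sized([0, 4, 1536]))     # tensor<0x4x1536xf32> -> constant
print(classify_zero_sized([None, 4, 1536]))  # tensor<?x4x1536xf32> -> None
```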

/cc @MaheshRavishankar 
</pre>