<table border="1" cellspacing="0" cellpadding="8">
    <tr>
        <th>Issue</th>
        <td>
            <a href="https://github.com/llvm/llvm-project/issues/133373">133373</a>
        </td>
    </tr>

    <tr>
        <th>Summary</th>
        <td>
            64-bit loop variable on a 32-bit target may be narrowed
        </td>
    </tr>

    <tr>
      <th>Labels</th>
      <td>
            new issue
      </td>
    </tr>

    <tr>
      <th>Assignees</th>
      <td>
      </td>
    </tr>

    <tr>
      <th>Reporter</th>
      <td>
          wzssyqa
      </td>
    </tr>
</table>

<pre>
    Consider code like this:
```
int x[128000];

void in(int k, int n) {
    for (unsigned long long i = 0; i < 128000; i++)
        x[i] = k;
}
```

If we build it for a 32-bit target, e.g.
```
./bin/clang --target=riscv32-linux-gnu -O3 -S n.c -emit-llvm
```
a 64-bit induction variable is used. It could be narrowed to a 32-bit one, since `i` is always less than 128000.

GCC already performs this transformation.
</pre>