<table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Issue</th>
<td>
<a href="https://github.com/llvm/llvm-project/issues/141627">141627</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>
[RISCV] Unsigned atomics use signed load + zero-extension instead of unsigned loads
</td>
</tr>
<tr>
<th>Labels</th>
<td>
new issue
</td>
</tr>
<tr>
<th>Assignees</th>
<td>
</td>
</tr>
<tr>
<th>Reporter</th>
<td>
tom-rein
</td>
</tr>
</table>
<pre>
The following code ([godbolt](https://godbolt.org/z/raqnhrGxa)):
```C++
#include <atomic>
unsigned char load(std::atomic_uchar &x) {
    return x.load(std::memory_order_relaxed);
}
```
Compiles into:
```
load(std::atomic<unsigned char>&):
        lb      a0, 0(a0)
        andi    a0, a0, 255
        ret
```
Instead of:
```
load(std::atomic<unsigned char>&):
        lbu     a0, 0(a0)
        ret
```
The same happens for `unsigned short` and for signed atomics cast to unsigned.
For `unsigned int` a signed load is correct because of the ABI, but when the result is zero-extended, LLVM fails to combine the zero-extension with the load.
Non-atomic loads correctly result in unsigned loads.
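For reference, a minimal sketch of the `unsigned int` and non-atomic cases described above (function names are illustrative, not from the original reproducer):
```C++
#include <atomic>
#include <cstdint>

// `unsigned int`: the signed lw matches the RV64 ABI, but when the result
// is zero-extended (here, widened to uint64_t), the zero-extension is not
// folded into an lwu.
std::uint64_t load_uint_zext(std::atomic_uint &x) {
    return x.load(std::memory_order_relaxed);
}

// Non-atomic comparison: a plain unsigned char load already compiles to lbu.
unsigned char load_plain(const unsigned char &x) {
    return x;
}
```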
</pre>