<table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Issue</th>
<td>
<a href="https://github.com/llvm/llvm-project/issues/56294">56294</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>
Inefficient code generation when bitwise AND is used
</td>
</tr>
<tr>
<th>Labels</th>
<td>
</td>
</tr>
<tr>
<th>Assignees</th>
<td>
</td>
</tr>
<tr>
<th>Reporter</th>
<td>
pmor13
</td>
</tr>
</table>
<pre>
`clang -O3` optimizes this code:
```
_Bool f1(char x)
{
    _Bool b1 = x == 4;
    _Bool b2 = x & 3;
    return b1 & b2;
}
```
to:
```
f1:
        xor     eax, eax
        ret
```
However, `clang -O3` does not optimize this code:
```
_Bool f1(char x)
{
    _Bool b1 = x == 2;
    _Bool b2 = x & 1;
    return b1 & b2;
}
```
It instead generates:
```
f1:
        cmp     dil, 2
        sete    al
        and     al, dil
        ret
```
Why? In both cases the result is always 0 (`x == 4` implies `x & 3 == 0`, and `x == 2` implies `x & 1 == 0`), so the second function should fold to returning 0 just like the first.
Note: the `&` in `b1 & b2` is used intentionally (see the sketch after the listing below). If `&&` is used instead, then `clang -O3` optimizes it to:
```
f1:
        xor     eax, eax
        ret
```
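For reference, a minimal sketch of the `&&` variant mentioned in the note above, assuming the rest of the function is left unchanged:
```
_Bool f1(char x)
{
    _Bool b1 = x == 2;
    _Bool b2 = x & 1;
    return b1 && b2; /* logical AND: clang -O3 folds the whole function to return 0 */
}
```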
How can this be explained?
Extra: if `#define LOGIC_AND` is uncommented [here](https://godbolt.org/z/j8YWeM47b), then the whole program is optimized into a no-op. Courtesy of [Lundin](https://stackoverflow.com/users/584518/lundin).
</pre>