<table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Issue</th>
<td>
<a href="https://github.com/llvm/llvm-project/issues/58313">58313</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>
LLVM 13 regression on xor lowering
</td>
</tr>
<tr>
<th>Labels</th>
<td>
new issue
</td>
</tr>
<tr>
<th>Assignees</th>
<td>
</td>
</tr>
<tr>
<th>Reporter</th>
<td>
ljmf00-wekaio
</td>
</tr>
</table>
<pre>
I noticed that compiling the following D code on LLVM 12 and LLVM 13 yields different results:
```d
bool xor(bool lhs, bool rhs)
{
    return (lhs && !rhs) || (!lhs && rhs);
}
```
https://godbolt.org/z/WTW1qeEMY
Even though I'm not entirely sure whether this is a regression in LLVM itself or a difference in how LDC glues to the LLVM backend, it does not seem to be a correctness issue: running alive2 against the two generated IRs validates the transformation successfully. https://alive2.llvm.org/ce/z/cWdpvA
Perhaps this can be treated as an improvement: adding this logic simplification to a transformation pass.
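For clarity, a minimal sketch of the fold such a pass would perform, assuming the target form is a plain boolean xor (the `xorSimplified` name and the unittest are mine, for illustration only):
```d
// Hypothetical simplified form: (a && !b) || (!a && b) folds to a ^ b.
bool xorSimplified(bool lhs, bool rhs)
{
    return lhs ^ rhs; // for bools this is the same as lhs != rhs
}

// Exhaustive check over all four input combinations against the
// original xor() above.
unittest
{
    foreach (a; [false, true])
        foreach (b; [false, true])
            assert(xor(a, b) == xorSimplified(a, b));
}
```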
</pre>