<table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Issue</th>
<td>
<a href="https://github.com/llvm/llvm-project/issues/58808">58808</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>
Divergence analysis for SimpleLoopUnswitch
</td>
</tr>
<tr>
<th>Labels</th>
<td>
</td>
</tr>
<tr>
<th>Assignees</th>
<td>
</td>
</tr>
<tr>
<th>Reporter</th>
<td>
Peakulorain
</td>
</tr>
</table>
<pre>
Since LoopUnswitch requires Divergence Analysis to ensure correctness (see https://bugs.llvm.org/show_bug.cgi?id=48819), many GPU backends ship with non-trivial unswitching turned off by default. Is anyone in the community working on introducing Divergence Analysis to improve SimpleLoopUnswitchPass? I'm asking because LLVM 15 currently has many loop cases that cannot be optimized by SimpleLoopUnswitchPass.
</pre>
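<p>
For illustration, here is a minimal CUDA sketch (a hypothetical kernel, not taken from the issue or from bug 48819) of why non-trivial unswitching is unsound on GPUs without divergence information. The branch condition is loop-invariant, so unswitching would hoist it and clone the loop; but because it depends on threadIdx.x it is divergent, and lanes of one warp would end up in different loop clones:
</p>
<pre>
// Hypothetical kernel: loop-invariant but thread-divergent condition.
__global__ void kernel(float *out, const float *in, int n) {
    bool flag = (threadIdx.x & 1);  // divergent, yet invariant across iterations
    float acc = 0.0f;
    for (int i = 0; i < n; ++i) {
        float v = in[i * blockDim.x + threadIdx.x];
        if (flag)                   // candidate branch for non-trivial unswitch
            v = -v;
        // Convergent op: every lane named in the mask must execute this same
        // call. Inside the original loop that holds, because the if/else
        // reconverges before the shuffle. After unswitching, odd and even
        // lanes would iterate in *different* loop clones and reach different
        // __shfl_sync call sites, which is undefined behavior.
        acc += __shfl_sync(0xffffffffu, v, threadIdx.x ^ 1);
    }
    out[threadIdx.x] = acc;
}
</pre>
<p>
With Divergence Analysis available, SimpleLoopUnswitch could restrict non-trivial unswitching to conditions proven uniform, rather than GPU backends disabling the transform wholesale.
</p>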