<table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Issue</th>
<td>
<a href="https://github.com/llvm/llvm-project/issues/71355">71355</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>
[clang][offload] Different optimization level for compiling host code and offload target code
</td>
</tr>
<tr>
<th>Labels</th>
<td>
clang
</td>
</tr>
<tr>
<th>Assignees</th>
<td>
</td>
</tr>
<tr>
<th>Reporter</th>
<td>
littlewu2508
</td>
</tr>
</table>
<pre>
Currently, when I try to compile a HIP program, `clang` accepts a single optimization level `-O&lt;N&gt;` and uses it for compiling both the CPU (host) code and the GPU (offload target) code. Is there a reason the same optimization level must be used, or could the two be set differently?
One concrete problem: Linux distributions commonly use `-O2` flags (for CPU code, apparently), while `-O3` is more widely adopted for GPGPU programs. See https://github.com/gentoo/gentoo/pull/33400#discussion_r1377792830
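As a hedged sketch of what a workaround might look like today: clang's offloading driver documents `-Xarch_host` and `-Xarch_device` for passing flags to only the host or only the device compilation. Whether an `-O` level passed this way fully overrides the global one for HIP is an assumption here, not something this issue confirms; the target `gfx90a` and file name `main.hip` are illustrative placeholders.

```shell
# Compile host code at -O2 (the distro default) while requesting -O3 for the
# device compilation via -Xarch_device. The exact override behavior for HIP
# is an assumption; verify with -v or by inspecting the generated cc1 lines.
clang++ -x hip --offload-arch=gfx90a -O2 -Xarch_device -O3 main.hip -o main
```

If `-Xarch_device` does not have the desired effect, the feature requested in this issue (independent host/device `-O` levels) would be the cleaner solution.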
</pre>