<div dir="ltr">Nadav,<div><br></div><div>You are absolutely right, it's ISPC workload. I've checked SSE4 and it's also severely affected.</div><div><br></div><div style>We use intrinsics only for conversion <N x i32> <=> i32, i.e. <a href="http://movmsk.ps">movmsk.ps</a>. For the rest we use general LLVM instructions. And I actually would really like to stick this way. We rely on LLVM's ability to produce efficient code from general LLVM IR. Relying on intrinsics too much would be a crunch and a path to nowhere for many reasons :)</div>

What is the reason for this transformation, if it doesn't lead to efficient code?

Dmitry.

<br><div class="gmail_quote">On Mon, Oct 21, 2013 at 7:01 PM, Nadav Rotem <span dir="ltr"><<a href="mailto:nrotem@apple.com" target="_blank">nrotem@apple.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
> Hi Dmitry.
>
> This looks like an ISPC workload. ISPC works around a limitation in SelectionDAG, which does not know how to legalize mask types when both 128-bit and 256-bit registers are available. ISPC works around this problem by expanding the mask to i32s and using intrinsics. Can you please verify that this regression only happens on AVX? Can you change ISPC to use intrinsics?
>
> Thanks,
> Nadav
>
> Sent from my iPhone
>
> > On Oct 21, 2013, at 4:04, Dmitry Babokin <babokin@gmail.com> wrote:
> >
> > Nadav,
> >
> > Could you please have a look at bug #16941 and let us know what you think about it? It's a performance regression after one of your commits.
> >
> > Thanks.
> >
> > Dmitry.