Hi Justin, <div><br></div><div>37 registers w/o widening, and 40 w/ widening. <div><br></div><div>There is some other weirdness in the register allocation: I didn't specify any upper bound on register usage, but ptxas on the version w/ widening aggressively rematerializes some arithmetic instructions to use fewer registers. Nevertheless, it still uses more registers than the version w/o widening. </div><div><br></div><div>Btw, Justin, do you have time to take a look at this (<a href="http://reviews.llvm.org/D5612">http://reviews.llvm.org/D5612</a>)? Eli and I think it's OK, but would like you to confirm. <br><div><br></div><div>Jingyue</div></div></div><br><div class="gmail_quote">On Fri Oct 24 2014 at 11:29:33 AM Justin Holewinski <<a href="mailto:jholewinski@nvidia.com">jholewinski@nvidia.com</a>> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Fri, 24 Oct 2014, Jingyue Wu wrote:<br>
<br>
> Hi, <br>
> I noticed a significant performance regression (up to 40%) on some internal CUDA benchmarks (a reduced example is presented below). The root cause of this regression seems<br>
> to be that IndVarSimplify widens induction variables, assuming arithmetic on wider integer types is as cheap as arithmetic on narrower ones. However, this assumption is wrong, at<br>
> least for the NVPTX64 target. <br>
><br>
> Although the NVPTX64 target supports 64-bit arithmetic, the actual NVIDIA GPUs typically have only 32-bit integer registers, so one 64-bit arithmetic operation typically ends up<br>
> as two machine instructions taking care of the low 32 bits and the high 32 bits respectively. I haven't looked at other GPU targets such as R600, but I suspect this<br>
> problem is not restricted to the NVPTX64 target. <br>
><br>
> Below is a reduced example:<br>
> __attribute__((global)) void foo(int n, int *output) {<br>
> for (int i = 0; i < n; i += 3) {<br>
> output[i] = i * i;<br>
> }<br>
> }<br>
><br>
> Without widening, the loop body in the PTX (a low-level assembly-like language generated by NVPTX64) is:<br>
> BB0_2: // =>This Inner Loop Header: Depth=1 <br>
> mul.lo.s32 %r5, %r6, %r6; <br>
> st.u32 [%rd4], %r5; <br>
> add.s32 %r6, %r6, 3; <br>
> add.s64 %rd4, %rd4, 12; <br>
> setp.lt.s32 %p2, %r6, %r3;<br>
> @%p2 bra BB0_2;<br>
> in which %r6 is the induction variable i. <br>
><br>
> With widening, the loop body becomes:<br>
> BB0_2: // =>This Inner Loop Header: Depth=1 <br>
> mul.lo.s64 %rd8, %rd10, %rd10; <br>
> st.u32 [%rd9], %rd8; <br>
> add.s64 %rd10, %rd10, 3; <br>
> add.s64 %rd9, %rd9, 12; <br>
> setp.lt.s64 %p2, %rd10, %rd1; <br>
> @%p2 bra BB0_2;<br>
><br>
> Although the number of PTX instructions in both versions is the same, the version with widening uses mul.lo.s64, add.s64, and setp.lt.s64 instructions, which are<br>
> more expensive than their 32-bit counterparts. Indeed, the SASS code (the disassembly of the actual machine code running on GPUs) of the version with widening is<br>
> significantly longer. <br>
><br>
> Without widening (7 instructions): <br>
> .L_1: <br>
> /*0048*/ IMUL R2, R0, R0; <br>
> /*0050*/ IADD R0, R0, 0x1; <br>
> /*0058*/ ST.E [R4], R2; <br>
> /*0060*/ ISETP.NE.AND P0, PT, R0, c[0x0][0x140], PT; <br>
> /*0068*/ IADD R4.CC, R4, 0x4; <br>
> /*0070*/ IADD.X R5, R5, RZ; <br>
> /*0078*/ @P0 BRA `(.L_1);<br>
><br>
> With widening (12 instructions):<br>
> .L_1: <br>
> /*0050*/ IMUL.U32.U32 R6.CC, R4, R4; <br>
> /*0058*/ IADD R0, R0, -0x1; <br>
> /*0060*/ IMAD.U32.U32.HI.X R8.CC, R4, R4, RZ; <br>
> /*0068*/ IMAD.U32.U32.X R8, R5, R4, R8; <br>
> /*0070*/ IMAD.U32.U32 R7, R4, R5, R8; <br>
> /*0078*/ IADD R4.CC, R4, 0x1; <br>
> /*0088*/ ST.E [R2], R6; <br>
> /*0090*/ IADD.X R5, R5, RZ; <br>
> /*0098*/ ISETP.NE.AND P0, PT, R0, RZ, PT; <br>
> /*00a0*/ IADD R2.CC, R2, 0x4; <br>
> /*00a8*/ IADD.X R3, R3, RZ; <br>
> /*00b0*/ @P0 BRA `(.L_1);<br>
><br>
> I hope the issue is clear up to this point. So what's a good way to fix it? I am thinking of having IndVarSimplify consult TargetTransformInfo about the cost<br>
> of integer arithmetic on different types. If operations on wider integer types are more expensive, IndVarSimplify should disable the widening. <br>
<br>
TargetTransformInfo seems like a good place to put a hook for this.<br>
You're right that 64-bit integer math will be slower for NVPTX targets, as<br>
the hardware needs to emulate 64-bit integer ops with 32-bit ops.<br>
<br>
How much is register usage affected by this in your benchmarks?<br>
<br>
><br>
> Another thing I am concerned about: are there other optimizations that make similar assumptions about integer widening? Those might cause performance regressions too, just<br>
> as IndVarSimplify does. <br>
><br>
> Jingyue<br>
><br>
></blockquote></div>