[llvm-dev] A 4x slower initialization loop in LLVM vs GCC and MSVC

Florian Hahn via llvm-dev llvm-dev at lists.llvm.org
Fri Oct 2 02:27:05 PDT 2020


> On Oct 1, 2020, at 23:16, Stefanos Baziotis <stefanos.baziotis at gmail.com> wrote:
> 
> While the code for the initialization is not ideal, it appears the main issue causing the slowdown is the fact that GCC interchanges the main loops, but LLVM does not. After interchanging, the memory access patterns are completely different
> 
> Well, definitely that's one thing, and without interchange cache locality plays a big role, but I think two things should be noted:
> 1) GCC, even though it interchanges the original code, still runs 25 iterations, so it still kind of does the benchmark.

Agreed, the benchmark does exactly the same computation, just in a different order.

I didn’t write the benchmark, so I can only assume what it is intended to measure. From the title of the email and the code, it seems the goal is to benchmark the performance of initializing a large array of matrices, and the outer `run` loop has been added to make sure the benchmark runs a sufficiently large number of times to get more stable timings.

With that assumption in mind, I don’t think the benchmark really measures the same thing after interchanging. Say interchanging gives a 4x speedup on the benchmark; that is unlikely to translate into practice for loops that just initialize matrices with constants, without the artificial ’no-op’ outer loop. Granted, it is easy to come up with code where interchanging is highly beneficial for the interesting parts of the computation.
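
To make sure we are talking about the same shape of code, here is a rough sketch of what I have in mind; the matrix type, the array size and the constant are placeholders I made up, not the actual benchmark:

    #include <cstddef>

    // Placeholder type and sizes, not the actual benchmark code.
    struct Mat3 { float m[3][3]; };

    constexpr int RUNS = 25;            // the 25 iterations mentioned above
    constexpr std::size_t N = 1000000;  // "a large array of matrices"

    Mat3 mats[N];

    void init() {
      for (int run = 0; run < RUNS; ++run)   // outer loop, only there for timing
        for (std::size_t i = 0; i < N; ++i)  // the initialization we care about
          mats[i] = Mat3{{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}};
    }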

> 2) Most importantly, even if we interchange the code ourselves, Clang's codegen is still very bad, both compared to GCC and in general. Also, GCC does the obvious thing of completely removing the inner loop (the one that now has IV `run`), while
> Clang doesn't. Basically, these are problems that I feel loop optimizations should not suffer from at this point :/

Personally I try to look at the bright side: more potential for further optimizations! I also don’t think there are any fundamental transformations missing here. It appears the main issue is the llvm.memcpy calls, which means we have to go through memory in the main loop instead of hoisting the invariant ‘load’ part out of the loop (in the original, un-interchanged version).
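
To make that concrete at the source level, the contrast is roughly the one below. This is a hand-written illustration reusing the placeholder names from the sketch above, not code either compiler actually emits:

    #include <cstring>

    // Stand-in for whatever constant the benchmark writes.
    static const Mat3 kIdentity = {{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}};

    // Roughly what the llvm.memcpy form does: every iteration goes back
    // to memory to fetch the constant.
    void init_memcpy() {
      for (std::size_t i = 0; i < N; ++i)
        std::memcpy(&mats[i], &kIdentity, sizeof(Mat3));
    }

    // What we would like after hoisting the invariant 'load' part: the
    // constant is read once and kept in registers, the loop only stores.
    void init_hoisted() {
      Mat3 tmp = kIdentity;
      for (std::size_t i = 0; i < N; ++i)
        mats[i] = tmp;
    }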

As for LLVM not removing the redundant loop after interchanging: this is indeed surprising, but again it is likely caused by the memcpys, which loop invariant code motion currently does not hoist out in that case. However, adding support for that should be relatively straightforward. If corresponding load/store pairs are used instead of the memcpy, LLVM should eliminate the loop as well.
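
To spell out which loop I mean: after manual interchange, the inner `run` loop just re-stores the same value (placeholder names from above again):

    // The inner loop performs 25 identical stores to mats[i]; only one
    // of them is needed, so the whole inner loop is redundant.
    void init_interchanged() {
      Mat3 tmp = kIdentity;
      for (std::size_t i = 0; i < N; ++i)
        for (int run = 0; run < RUNS; ++run)
          mats[i] = tmp;
    }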

And with load/stores instead of memcpy, LLVM’s loop interchange pass should also be able to interchange the loops (it’s disabled by default though).
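
In case you want to experiment with it, the pass is behind an internal option; if I remember correctly, something like the following enables it (flag name from memory, and bench.cpp just stands in for the benchmark file, so please double-check):

    clang++ -O3 -mllvm -enable-loopinterchange bench.cpp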

> Oh also another thing that's important:
> 
> on top of manually interchanging the loops, which gives a ~3x speedup).
> 
> Indeed on my machine too. Although I understand that this speedup is not the important thing here, it has to be noted what
> GCC does with them interchanged:
> 
> Clang:
> - Original: 2.04
> - Interchanged: 0.60
> 
> G++:
> - Original: 0.43
> - Interchanged: 0.15
> 
> So, GCC gets a 3x speedup too, and the interchanged version of GCC is still 4x faster.

Right, but as you indicated earlier, if you manually interchange the loops first, GCC will completely remove the `run` loop while LLVM currently does not, so we are now even further removed from benchmarking the performance of the matrix array initialization code. The example certainly doesn’t make LLVM look great compared to GCC. (Also, there appears to be a phase-ordering issue in GCC, because the `run` loop is not removed when GCC interchanges the loops automatically.)

But going back to my original point: if the goal is to compare the performance of the generated code that initializes a large array of matrices, the most interesting comparison would be GCC and Clang with interchanging disabled for GCC.
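
As far as I know, GCC enables -floop-interchange at -O3, so something like the following should give the un-interchanged GCC version for that comparison (again, bench.cpp just stands in for the benchmark file):

    g++ -O3 -fno-loop-interchange bench.cpp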

On my system, the runtime difference between the un-interchanged versions generated by Clang with and without SROA is in the noise. That is not too surprising, I think, because the runtime is dominated by the memory writes, which hide the other, smaller codegen differences. On different processors, e.g. smaller in-order ones, the difference will be more visible.

To summarize, I think the benchmark is very interesting and highlights some important issues with how different optimizations deal with llvm.memcpy and how SROA decides to use memcpy, which would be good to address. Will those changes make loops that initialize large arrays with constant matrices 2x or 4x faster in practice? I doubt it.

Cheers,
Florian