[LLVMdev] [Polly] Update of Polly compile-time performance on LLVM test-suite

Tobias Grosser tobias at grosser.es
Wed Jul 31 07:50:57 PDT 2013

On 07/30/2013 10:03 AM, Star Tan wrote:
> Hi Tobias and all Polly developers,
> I have re-evaluated the Polly compile-time performance using the latest
> LLVM/Polly source code.  You can view the results at
> <>.
> In particular, I also evaluated our r187102 patch file that avoids
> expensive failure string operations in normal execution. Specifically,
> I evaluated two cases for it:
> Polly-NoCodeGen: clang -O3 -load LLVMPolly.so -mllvm
> -polly-optimizer=none -mllvm -polly-code-generator=none
> Polly-Opt: clang -O3 -load LLVMPolly.so -mllvm -polly
> The "Polly-NoCodeGen" case is mainly used to compare the compile-time
> performance of the polly-detect pass. As shown in the results, our
> patch significantly reduces the compile-time overhead for some
> benchmarks such as tramp3dv4
> <> (24.2%), simple_types_constant_folding
> <>(12.6%),
> oggenc
> <>(9.1%),
> loop_unroll
> <>(7.8%)
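The percentages quoted above are relative compile-time savings. As a minimal sketch of how such numbers are derived from before/after timings (the sample timings below are hypothetical, chosen only to reproduce two of the quoted percentages, and are not the actual test-suite measurements):

```python
def compile_time_reduction(before_secs, after_secs):
    """Relative compile-time reduction in percent: (before - after) / before."""
    return (before_secs - after_secs) / before_secs * 100.0

# Hypothetical before/after compile times in seconds, for illustration only.
timings = {
    "tramp3dv4": (50.0, 37.9),   # -> 24.2%
    "oggenc": (11.0, 10.0),      # -> 9.1%
}

for name, (before, after) in timings.items():
    print(f"{name}: {compile_time_reduction(before, after):.1f}% compile-time reduction")
```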

Very nice!

Though I am surprised to also see performance regressions. They are all 
in very short-running kernels, so they may very well just be measurement 
noise. Is this really the case?

Also, it may be interesting to compare against the non-polly case to see
how much overhead there still is due to our scop detection.

> The "Polly-Opt" case is used to compare the overall compile-time
> performance of Polly. Since our patch mainly affects the Polly-Detect
> pass, it shows similar improvements to "Polly-NoCodeGen". As shown in
> the results, it reduces the compile-time overhead of some benchmarks
> such as tramp3dv4
> <> (23.7%), simple_types_constant_folding
> <>(12.9%),
> oggenc
> <>(8.3%),
> loop_unroll
> <>(7.5%)
> At last, I also evaluated the performance of the ScopBottomUp patch
> that changes the top-down scop detection into a bottom-up scop
> detection. Results can be viewed at:
> pNoCodeGen-ScopBottomUp: clang -O3 -load LLVMPolly.so (vs.
> LLVMPolly-ScopBottomUp.so)  -mllvm -polly-optimizer=none -mllvm
> -polly-code-generator=none
> pOpt-ScopBottomUp: clang -O3 -load LLVMPolly.so (vs.
> LLVMPolly-ScopBottomUp.so)  -mllvm -polly
> (*Both of these results are based on LLVM r187116, which has included
> the r187102 patch file that we discussed above)
> Please note that this patch causes some failures in the Polly tests,
> so the data shown here cannot be regarded as conclusive. For example,
> the patch significantly reduces the compile-time overhead of
> SingleSource/Benchmarks/Shootout/nestedloop
> <> only
> because it treats the nested loop as an invalid scop and skips all
> subsequent transformations and optimizations. However, I evaluated it
> here to see its potential performance impact.  Based on the results
> shown, we can see that detecting scops bottom-up may further reduce
> Polly's compile time by more than 10%.
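To illustrate why a bottom-up detection order can cut compile time, here is a toy sketch of scop detection over a region tree. The class and function names are hypothetical and this does not mirror Polly's actual RegionInfo-based implementation; it only shows the short-circuiting idea: once a child region has been rejected, every enclosing region can be rejected without running its own checks, which is how an invalid inner loop (as in nestedloop) skips the rest of the analysis.

```python
# Toy model of bottom-up scop detection; names are illustrative, not Polly's.

class Region:
    def __init__(self, name, locally_valid, children=()):
        self.name = name
        self.locally_valid = locally_valid  # does this region's own code pass the checks?
        self.children = list(children)

def detect_bottom_up(region, checked):
    """Validate children first; a rejected child rejects the parent
    without running the parent's own (potentially expensive) checks."""
    children_ok = all(detect_bottom_up(c, checked) for c in region.children)
    if not children_ok:
        return False          # parent rejected without a local check
    checked.append(region.name)
    return region.locally_valid

# An invalid innermost loop invalidates every enclosing region before
# their own checks ever run.
inner = Region("inner_loop", locally_valid=False)
outer = Region("outer_loop", locally_valid=True, children=[inner])
func = Region("function", locally_valid=True, children=[outer])

checked = []
detect_bottom_up(func, checked)
print(checked)  # prints ['inner_loop']: only the inner loop was checked
```

A top-down order would instead have examined the function and the outer loop before discovering the invalid inner loop, which is the redundant work the ScopBottomUp patch aims to avoid.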

Interesting. For some reason it also regresses huffbench quite a bit. 
:-( I think an up-to-date non-polly to polly comparison would come in 
handy here, to see for which benchmarks we still see larger performance 
regressions, and whether the bottom-up scop detection actually helps 
there. As this is a larger patch, we should really have a clear need 
for it before switching to it.

