[polly] reorganize scop detection for faster compiles

Tobias Grosser tobias at grosser.es
Tue Jun 4 08:48:45 PDT 2013


On 06/03/2013 09:13 AM, Sebastian Pop wrote:
> Tobias Grosser wrote:
>> On 06/01/2013 09:31 PM, Sebastian Pop wrote:
>>> Hi Tobi,
>>>
>>> Here are two patches that reorganize the scop detection filters: the idea behind
>>> these patches is to run the lightweight checks that access only the CFG and
>>> LoopInfo before iterating over the instructions in the basic blocks.
>>>
>>> With these and previous patches, we still have room for improvement: here is
>>> what I still see in one of the files:
>>>
>>> Ok to commit?
>>
>>
>> I don't think they will hurt correctness-wise.
>>
>
> So can I take this as an ok to commit? ;-)

You can. ;-)
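
The reordering described above boils down to something like the
following rough sketch; the helper names (checkCFGShape,
checkLoopStructure, checkInstruction) are made up for illustration and
are not Polly's actual ScopDetection interface:

#include "llvm/Analysis/LoopInfo.h"
#include "llvm/Analysis/RegionInfo.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instruction.h"

using namespace llvm;

// Hypothetical stand-ins for the individual detection filters.
bool checkCFGShape(Region &R);                    // CFG-only structural checks
bool checkLoopStructure(Region &R, LoopInfo &LI); // LoopInfo-based checks
bool checkInstruction(Instruction &I);            // per-instruction validity

bool isValidRegion(Region &R, LoopInfo &LI) {
  // Run the cheap checks that only touch the CFG and LoopInfo first, so
  // regions with unsupported control flow are rejected before we pay for
  // the expensive walk over every instruction.
  if (!checkCFGShape(R) || !checkLoopStructure(R, LI))
    return false;

  // Only well-structured regions reach the per-instruction scan.
  for (BasicBlock *BB : R.blocks())
    for (Instruction &I : *BB)
      if (!checkInstruction(I))
        return false;

  return true;
}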

>> However, if you see compile-time improvements with these patches, it
>> would be nice to know which benchmarks improve and by how much. Also,
>> do you see any compile-time regressions?
>
> I was compiling a big chunk of the Android codebase.

I see. However, this answered my question only partially.

I believe that when we include performance fixes, we should collect 
timings before and after the patch and report them. This does not need 
to be anything fancy; two overnight runs of the LLVM test-suite should 
be enough.

I understand that you are running Polly on various pieces of possibly 
non-public software. If you also have timing information for the test 
suite, I would highly appreciate it if you reported it. If there is no 
such information, or the difference only shows up in one of your 
closed-source files, I would still like to know that, together with the 
performance difference you see in the closed-source project.

>> I can see that the patches may reduce compile time for cases where
>> the CFG commonly breaks the scops. On the other hand, if the CFGs
>> are perfectly well structured, this may actually have a negative
>> effect. I tend to agree that the first case may be more common, but
>> it would be nice to back up this intuition with some data.
>
> Overall, with all my changes, I see the compile time of Polly dropping by 8x on
> that benchmark.  Most of the remaining time is spent in scop detection as I
> mentioned below, and I will be working on reducing that compile time overhead.

Interesting. This is exactly the kind of information I would like to hear.
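
To put the tradeoff I mentioned above in rough numbers (a 
back-of-the-envelope model, not a measurement): say the CFG/LoopInfo 
checks cost about c per candidate region and the per-instruction scan 
costs about e, with c much smaller than e. If the cheap checks alone 
reject a fraction p of the regions, running them first costs roughly 
c + (1 - p) * e per region, whereas the old order always pays the full 
e. So the reordering saves nearly the whole instruction scan for every 
early-rejected region and, in the worst case of perfectly structured 
CFGs, merely moves the small cost c to the front. Timings from the 
test-suite would still tell us where typical code sits on that spectrum.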

> IMNSHO, scop detection should really be less expensive than polyhedral opts ;-)

It normally is.

>>> 285.8400 ( 69.2%)   2.1700 ( 44.7%)  288.0100 ( 68.9%)  295.9582 ( 67.8%)  Polly - Detect static control parts (SCoPs)
>>
>> Interesting. Do you happen to have a test case for this .ll file,
>> which you could create a bug report for?
>
> I don't know which parts of my benchmark are proprietary or open source, so I
> wouldn't be able to extract a testcase to share.  As I mentioned in another
> email, you can get the same kind of bad behavior in scop detection by looking
> at other C++ code (dealII) and compiling with -g.  I have an idea why adding
> -g makes things take longer, so let me write you another email to explain my
> intuition.

Maybe you can use that knowledge to point to a test case from the LLVM 
test-suite that shows this behavior.

Is your intuition explained in the email 'scop detection: could we stop 
iterating over uses?'

Cheers,
Tobi


