[Patch] GVN fold conditional-branch on-the-fly

Shuxin Yang shuxin.llvm at gmail.com
Sat Sep 7 11:27:57 PDT 2013


On 9/7/13 9:14 AM, Daniel Berlin wrote:
>>>>> GVN is getting more and more complicated,
>>>> GVN is almost always complicated:-)
>>> It's really not.  LLVM's GVN is not really a GVN.  It's really a very
>>> complicated set of interleaved analysis and optimization that happens
>>> to include some amount of value numbering.
>> LLVM GVN is an old-style GVN + some PRE.  But for the PRE part, I don't
>> think LLVM GVN is complicated at all.
> No, it's not. It is a very simple PRE that tries to catch a few cases.
I disagree.  The PRE in LLVM GVN has two parts: one tackles loads, the
other tackles simple expressions in a diamond-shaped CFG.  I believe the
former catches *all* load PREs (depending on the alias analysis's
results).  If you disable load PRE, you will certainly see a noticeable
difference in performance.
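
To make the two parts concrete, here is a small made-up C++ fragment
(the function and names are mine, purely for illustration, not from the
patch) showing the kind of partial redundancy each part targets:

    // Hypothetical example: '*p' and 'a * 4' are computed on the 'then'
    // path of a diamond but not on the 'else' path, so after the join
    // they are only partially redundant.
    int diamond(int *p, int a, bool c) {
      int v;
      if (c)
        v = *p + a * 4;   // load of '*p' and expression 'a * 4' here ...
      else
        v = 0;
      // ... load PRE handles the reload of '*p' below, and the simple
      // scalar PRE handles 'a * 4', by inserting both on the 'else'
      // edge and reusing the values already available on the 'then' edge.
      return v + *p + a * 4;
    }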

>>
>>> It is actually becoming
>>> more and more like GCC's old dominator optimization every day - a grab
>>> bag of optimizations that all fall under some abstract notion of
>>> value based redundancy elimination, but aren't really the same, and
>>> are not unified in any real way.  This is okay if it was not a compile
>>> time sink, but it's a huge one now, and adding stuff like this only
>>> makes it worse.
>>>
>>>
>>>>> and is already a compile time sink.
>>>> Alias analysis bears the blame.  I remember alias analysis accounts
>>>> for 1/3+ of GVN compile-time.
>>> This is directly because of the algorithm in GVN and how it works.
>> I don't think so.  I believe the culprit is the lack of a sparse way
>> to represent mem-dep -- when GVN tries to figure out the set of memory
>> operations a particular load/store depends on, mem-dep searches all
>> over the place.
> I disagree. As I said, the reason GVN's memory analysis is slow is
> simply because of how it uses memdep, which is unnecessarily slow and
> expensive.
>
> It's true that a sparse representation would fit better with the
> current usage pattern, but as I mentioned in a later message, the
> current usage pattern is probably not actually a good thing.
>
> As for a sparse representation, you can't sparsely and accurately
> represent mem-dep at the same time in a way that satisfies a lot of
> clients.


>   GCC learned this the hard way, and we tried for something
> like 8 years until we settled on what is there now, where clients are
> expected to use the sparse representation to get the nearest
> "possible" memory dependence, and then further disambiguate.
>
> Computing this representation is actually fairly expensive to do well,
> and without a client past GVN, i'm not sure it would be worth it.
>
This topic seems to deviate from what we are focusing on, but I have to
say I disagree.  That LLVM does not like mem-SSA, and that gcc does not
do mem-SSA very well, does not mean other compilers cannot handle it
well.  At least, Open64's mem-SSA is good enough for my needs, albeit it
suffers from poor engineering and has a few defects (like not allowing
overlap of virtual-variable live ranges).

It seems to me that gcc "stole" the idea from Open64, and in the
meantime suffers from its confusing documents & source.  I remember that
in early versions (<4.5?), it tried to *verbosely* annotate the
side-effects of every load/store (I wouldn't be surprised if that was
very expensive), but later on it went to the other extreme.  Open64's
approach is much better -- it only verbosely annotates stores, and
concisely annotates loads.  This never shows up in its doc.
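
To illustrate what I mean by "verbose" stores versus "concise" loads,
here is a rough C++ sketch of the annotations (my own simplification for
this discussion; it is not Open64's or gcc's actual data structure):

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Virtual variables stand for (possibly overlapping) pieces of memory.
    using VirtualVar = uint32_t;

    // A store is annotated verbosely: it lists every virtual variable it
    // may define, together with the version being overwritten and the new
    // version it produces (a chi-like operand).
    struct StoreAnnotation {
      struct MayDef { VirtualVar Var; uint32_t OldVer; uint32_t NewVer; };
      std::vector<MayDef> MayDefs;
    };

    // A load is annotated concisely: a single may-use of one virtual
    // variable version (a mu-like operand); finer disambiguation happens
    // only when a client asks about a particular pair of accesses.
    struct LoadAnnotation {
      VirtualVar Var;
      uint32_t Ver;
    };

    int main() {
      // A store through a pointer that may alias virtual variables 0 and 1.
      StoreAnnotation St{{{0, 1, 2}, {1, 3, 4}}};
      // A later load that may read virtual variable 0, version 2.
      LoadAnnotation Ld{0, 2};
      std::printf("store may-defs: %zu, load reads var %u version %u\n",
                  St.MayDefs.size(), (unsigned)Ld.Var, (unsigned)Ld.Ver);
      return 0;
    }

The point is simply that the cost of a long may-def list is paid once
per store, while every load stays a single operand.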
